Sloppy Optimism Notes (Mark Odayan)
Tenets of Optimism
- Re-architect network economics and “close the loop” → value produced by the network can be recycled to grow and sustain it.
The OVM
Alpha release date: Feb 2020
The Optimistic Virtual Machine (OVM) is an EVM-compatible execution environment built for rollups. Each computational step is called a transition.
Transitions are evaluated in one of the following ways:
- client-side by individual users wishing to compute or verify the latest state
- on-chain in a contract to verify fraud proofs.
In the case of an invalid state transition in the OVM → we spawn an on-chain OVM environment which allows for an efficient stateless fraud proof.
The basis for optimistic rollup schemes is disputes.
The key property here is optimistic execution. A single party is able to submit a claim of what a future state is without requiring others to re-execute transactions to compute it. This claim can be disputed by another party via a challenge which would be posted to the base chain to start the arbitration logic.

A design goal of the rollup is to provide the functionality to run a generic L2 Ethereum transaction inside the L1 fraud proof transaction.
L2s provide an interface for efficiently using L1 VMs: instead of using transactions directly to update L1 state, off-chain data is used to guarantee what will happen to L1 state. This guarantee is called an “optimistic decision”.
Optimistic decisions involve:
- Looking at L1 to see what could possibly happen in the future.
- Looking at off-chain messages and what they guarantee if used on L1.
- Restricting our expectations of future L1 state based on those guarantees.
Formalisation of state transition
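As a minimal formalisation (my gloss, borrowing the yellow paper's notation — this equation is not spelled out in the original notes):

```latex
% World state sigma_t evolves via the state transition function Upsilon
% applied to a transaction T. In the OVM, Upsilon can be evaluated either
% client-side by users or on-chain inside a fraud proof.
\sigma_{t+1} \equiv \Upsilon(\sigma_t, T)
```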
OVM Design
A new contract was created called the “Execution Manager” which acts as a virtual container for OVM contracts.
The Execution Manager virtualises various components that may be executed differently in L2 compared to L1 such as:
- Contract storage
- Transaction context
- Cross-contract message routing
The Execution Manager provides functions that reconcile these execution differences so that L2 behaviour stays consistent with L1.
Lifecycle of an L2 transaction
- User sends a transaction off-chain to a bonded aggregator (an L2 block producer with a bond)
- Aggregator locally applies the transaction and computes a new state root.
- Aggregator submits an L1 transaction which contains the transaction data and the state root of the optimistic rollup block.
- Anyone can download blocks and check whether an invalid transaction was included (with something like `verify_state_transition(prev_state, block, witness)`), which will:
  - Slash the malicious aggregator (the equivocating entity) as well as any aggregator who built on top of the invalid block.
  - Reward the prover with a portion of the aggregator’s bond.
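A minimal Python sketch of this verify-and-slash flow. The `Block` shape, the stand-in `apply_transactions`, and the 50% reward split are all invented for illustration, not taken from Optimism's code:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Block:
    aggregator: str
    prev_state_root: str
    claimed_state_root: str
    transactions: list = field(default_factory=list)

def apply_transactions(state_root: str, txs: list) -> str:
    """Stand-in for real state execution: deterministically hash the inputs."""
    h = hashlib.sha256(state_root.encode())
    for tx in txs:
        h.update(repr(tx).encode())
    return h.hexdigest()

def verify_state_transition(prev_state_root: str, block: Block) -> bool:
    """Recompute the post-state root and compare with the aggregator's claim."""
    return apply_transactions(prev_state_root, block.transactions) == block.claimed_state_root

def challenge(chain: list, index: int, bonds: dict, prover: str,
              reward_fraction: float = 0.5) -> bool:
    """Slash the invalid block's aggregator plus everyone who built on top of
    it, and pay the prover a portion of the slashed bond."""
    bad = chain[index]
    if verify_state_transition(bad.prev_state_root, bad):
        return False                      # block is valid; challenge fails
    reward = reward_fraction * bonds[bad.aggregator]
    for block in chain[index:]:           # invalid block and all descendants
        bonds[block.aggregator] = 0
    bonds[prover] = bonds.get(prover, 0) + reward
    return True
```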
Optimistic Rollup in Depth
Blockchains require 3 properties for permissionless generalised computation to be carried out:
- Available head state - Any relevant party can download the current head state.
- Valid head state - The head state is valid (e.g. no invalid state transitions)
- Live head state - Anyone can submit transactions which can update the head state.
1. State Availability (DA)
L2 block producers pass all blocks which include transactions and state roots through calldata and post it to L1.
The calldata block is then merklized and a single 32 byte state root is stored.
- For reference, calldata cost ~2000 gas per 32 bytes while storage costs 20000 gas per 32-byte slot. Additionally, calldata cost was reduced roughly 4x (68 → 16 gas per non-zero byte, EIP-2028) in the Istanbul hard fork.
Security Assumptions
- Honest majority on Ethereum (or equivalent DA layer)
2. Valid head state
- We use a cryptoeconomic validity game to ensure valid canonical state.
Cryptoeconomic validity game (arbitration)
- Aggregators post bond to produce blocks.
- Each block contains an access list, transaction group, and post-state root.
- Blocks are committed to a rollup contract by a bonded aggregator.
- Anyone is able to send a challenge to prove a block is invalid; if they prove so, they get a portion of the aggregator’s security bond.
A block can be proven invalid if one of the following properties is satisfied:
- Invalid block → committed block is invalid (calculated with `is_valid_transaction(prev_state, block, witness) => boolean`)
- Skipped valid block → committed block skipped a valid block.
- Invalid parent → committed block’s parent is invalid.

Properties of state validity game
- Pluggable validity checkers → we can define different validity checkers for `is_valid_transitions(..)`, allowing us to use different VMs to run smart contracts.
- Only one valid chain → blocks are submitted to Ethereum, which gives us the ordering of transactions and blocks. This enables us to know what the head block is, and therefore require aggregators to prune invalid blocks before submitting a new block.
- Sharded validation → we could perform this validity game over UTXOs. Instead of validating full blocks, we partially invalidate them. This does not require proving all invalid transactions up front for a single block. Partial block invalidation means we can validate only the UTXOs for contracts we care about to secure our state.
With permissionless optimistic rollups, all data is available and therefore anyone running a full node stands to gain the bond of all aggregators who build on an invalid chain. This risk incentivises aggregators to be watchtowers, validating the chain they are building on - mitigating the verifier’s dilemma.
Unlike with plasma smart contracts, rollups get around the data availability problem by posting the minimum information needed to compute state transitions on-chain.
Security Assumptions
- A single honest or rational verifier assumption (rational implies economically incentivised with challenge games).
- Assume Ethereum is live and operating (base layer)
3. Live head state
The rollup must satisfy liveness (also known as censorship-resistance). The key insights which ensure this property are:
- Anyone with a minimum bond size can become an aggregator for the rollup.
- Even though honest aggregators may prune invalid blocks, the chain does not halt in the event of an invalid block.
Liveness vs instant confirmations
We make instant confirmations possible by designating short-lived aggregator monopolies on blocks. The downside is that we trade off censorship-resistance because a single party is able to censor for some period of time.
Security Assumptions
For liveness, we consider 2 security assumptions:
- A non-censoring aggregator exists.
- Ethereum is not censoring.
Universal Dispute Contract (Arbitration)
There exists an arbitration contract for handling user-submitted claims which evaluate to `true` or `false`.
E.g. the preimage to hash X does NOT exist.
Disputes involve counter-claims (challenges) which logically contradict claims that were made. After a dispute timeout, the contract can decide true for an unchallenged claim. If a contradiction is raised, the contract runs decision logic over the true/false statements (an application of predicate calculus).
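A rough Python model of that claim/counter-claim flow (all names here are invented for illustration; the real contract logic is more general):

```python
import time

class Claim:
    def __init__(self, statement: str, timeout_seconds: int = 7 * 24 * 3600):
        self.statement = statement            # e.g. "no preimage of hash X exists"
        self.deadline = time.time() + timeout_seconds
        self.contradicted = False

class DisputeContract:
    """Toy model: claims default to true after the dispute window unless a
    counter-claim logically contradicts them first."""

    def __init__(self):
        self.claims = []

    def submit_claim(self, statement: str) -> Claim:
        claim = Claim(statement)
        self.claims.append(claim)
        return claim

    def submit_counter_claim(self, claim: Claim, contradicts) -> None:
        # `contradicts` is a checker, e.g. verifying a revealed preimage
        # against the hash named in the claim.
        if contradicts(claim.statement):
            claim.contradicted = True

    def decide(self, claim: Claim):
        if claim.contradicted:
            return False                      # challenge succeeded
        if time.time() >= claim.deadline:
            return True                       # unchallenged past the timeout
        return None                           # dispute window still open
```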
The road to EVM equivalence
George Hotz replaced the 5000-line OVM-EVM transpiler with ~300 lines a while ago. More recently, even this optimised code was erased. This was the road towards EVM equivalence.
By strictly adhering to the Ethereum yellow paper specification, anyone who has written code targeting Geth could now deploy without change - even for advanced features like traces and gas.
The upgrade involved removing the custom compiler and over 25000 lines of code in favour of using what already existed: the Geth codebase. By building on top of Geth, they inherited improvements made to Ethereum client code and vice versa (a net positive for the Ethereum ecosystem).
Their goal is to eventually make alternative node implementations (e.g. Erigon) possible in under 1000 lines of code.
EVM Equivalence
EVM equivalence describes the complete alignment with the EVM specification.
A history of optimistic dispute protocols
- state channels → only pre-agreed set of parties can transact among each other
- plasma → open participation (permissionless) but prone to censorship (strong trust assumptions)
- rollups → trustless and permissionless, hence open and censorship-resistant (a user could submit a transaction to L1 to force it to reflect on L2 if L2 operator is censoring)

Evolution of compatibility
In the development stages of Optimism, Unipig (the first ever L2 AMM) was Uniswap recreated using custom code that was compatible with the rollup dispute contract - not the EVM itself.
This was okay for the time, but in the long term an exponentially growing ecosystem could not be expected to rearchitect around a non-EVM interface. Therefore it became L2’s responsibility to ensure that the L1 arbitration system was minimally differentiated from the EVM.
This goal required solving the problem of how to run an EVM within another EVM (a smart contract that could simulate another EVM executing).
To achieve this goal within a reasonable timeframe, a compromise had to be made which was called EVM compatibility.
Optimism launched in 2021 with EVM compatibility. EVM compatibility is not the same as EVM equivalence. Compatibility meant that you were forced to modify, or re-implement, lower-level code that Ethereum’s supporting infrastructure relies on, hence there was still more work to be done to be able to fully leverage Ethereum’s network effects. They needed to become EVM equivalent.
EVM equivalence was how Optimism sought to bridge the gap between Ethereum L1 infrastructure network effects, and Ethereum L2 execution environments.
What is EVM equivalence?
EVM equivalence is complete compliance with the Ethereum yellow paper, the formal definition of the protocol.
This meant that the existing Ethereum stack should integrate with the L2 system (debuggers, toolchains, every node implementation).
From the early days, the OVM v1 introduced a containerisation system which sat on top of Geth’s EVM and helped avoid tediously re-implementing the entire EVM on L1.
However, the EVM does not natively support containerisation, so this came at a cost. There was risk of fragmenting ecosystem resources in pursuit of making OVM work with Ethereum tooling. A better approach was needed to achieve EVM equivalence.
How to achieve EVM equivalence?
1. Separating block generation and execution
We change how blocks are generated.
On L2, transactions are applied by sending batches of them to the L1 chain.
A core pattern of blockchain modularisation is separating consensus from execution.
We define a function which:
- Accepts L1 blocks
- Processes them for rollup transactions
- Outputs L2 blocks (in exact same format as L1 blocks)
From this point, L2 execution can be defined as equivalent to L1.
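A sketch of that derivation function's shape (toy data structures and hypothetical field names; the real logic lives in `op-node`):

```python
def derive_l2_blocks(l1_block: dict) -> list:
    """Consume one L1 block, emit L2 blocks in the same structural format.

    `l1_block` is a toy stand-in: {"deposits": [...], "batches": [[tx, ...], ...]}.
    """
    deposits = list(l1_block["deposits"])    # L1-initiated transactions
    l2_blocks = []
    for batch in l1_block["batches"]:
        txs = deposits + list(batch)         # deposits go in the first block of the epoch
        l2_blocks.append({"transactions": txs})
        deposits = []
    return l2_blocks

# Example: one deposit and two sequencer batches become two L2 blocks.
print(derive_l2_blocks({"deposits": ["dep1"], "batches": [["txA"], ["txB", "txC"]]}))
```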

2. ETH Merge API (Engine API)
Ethereum (post-merge) has the exact same abstraction that EVM equivalent rollups do. The beacon chain serves the same role that L1 does for rollups.

3. Enforcing the standard
So long as a solution is EVM-equivalent, we can use it.
This means that improvements to the fraud proof, and even EVM-equivalent zero-knowledge proofs when they become feasible, can be trivially slotted into the existing off-chain stack.
A short-term goal was to implement a perfect EVM-equivalent implementation in Solidity; however, this has challenges: the EVM is very complex, and furthermore, future EVM updates would have to be re-implemented in Solidity too.
A solution to this challenge (implementing the EVM in Solidity) was to implement a VM with a much smaller, simpler instruction set (see RISC) and run the EVM within this VM during fraud proofs.
In order to do this, we compile an existing EVM interpreter to run within the simpler VM (for example → geth’s interpreter)
We allow Geth itself to run inside a dispute-friendly environment.
Since Geth is EVM-equivalent, so is that environment. This allows us to bypass re-implementing the EVM on-chain, and future-proofs the system against future upgrades to the EVM.
Optimism are working with George Hotz to build the first EVM-equivalent proof system (see Cannon). Currently the system can run all L1 blocks since the London hard fork.
Running L1 blocks through a fraud proof is exactly what it takes to be EVM equivalent.
Future of fraud proofs
The modular, Geth-centric design is a first step towards the adoption of fraud proof infrastructure.
Currently, launching an L2 requires deeper knowledge of L2 dispute games and how they work with node software. This limits how fast we can build novel mechanisms on L2. Abstraction from strong foundations solves this.
Rollups will be able to decide what they will provide security over. This includes variance over:
- Performance, stability, uptime
- Network effects, ecosystem specialisation, and community
- MEV prevention and sequencing tools
Fees on Optimism
13/07/2021
At the time of the article (13/07/2021), the system reduced baseline user transaction costs by around 10x-50x.
Optimism targets a gas limit with an EIP-1559-style pricing mechanism.
Optimism maintains control of the targeting algorithm at launch to test what tuning should be applied before enshrining it into the protocol.
- The Optimism congestion pricing was set to target 50,000 transactions/day (on Uniswap’s launch day), which translates to 9x cheaper for users.
- The plan thereafter was to increase from this 50,000 limit up to a theoretical system limit over time.
Primer on Optimism transaction fees
Fees are made up of:
- Rollup costs (cost of rolling up transactions into batches and submitting to L1).
- L2 Execution costs (cost to run the transactions on L2).
1. Rollup Costs
- Users only pay for the portion of transaction data that is submitted to L1 in a transaction batch.
- Cost 1 = Calldata (calldata for your transaction)
- Cost 2 = Fixed Overhead (the extra processing required to add a transaction to a larger batch)
- Cost 3 = Fee Scalar = dynamic overhead cost. This gives Optimism a buffer in case L1 prices rapidly increase (excess funds are directed towards public goods - see Optimism PBC)
The L1 gas fee represents the above rollup costs:
L1 gas fee = L1 gas price × (calldata gas + fixed overhead) × fee scalar
2. L2 Execution Cost
Transactions on Optimism use the same amount of gas as an equivalent transaction would use on Ethereum; however, the standard cost for gas on Optimism is only 0.001 Gwei.
This gas price may increase during high usage periods, but only makes up 0.4% of the total transaction fee on average.
The L2 gas fee represents the execution cost:
L2 gas fee = L2 gas price × L2 gas used
Therefore the total transaction fee is:
Total TX fee = L1 gas fee + L2 gas fee
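Putting the pieces together, a small Python sketch of the fee computation. The formula shapes follow the costs described above; the L1 gas price is illustrative, and the overhead/scalar defaults are the pre-tuning values quoted later in these notes (2750 gas, 1.5x):

```python
GWEI = 10**9

def total_tx_fee(calldata_gas: int, l2_gas_used: int,
                 l1_gas_price: float = 50 * GWEI,  # wei; illustrative L1 price
                 fixed_overhead: int = 2750,       # gas/tx at the time of the primer
                 fee_scalar: float = 1.5,          # later tuned to 2100 / 1.24
                 l2_gas_price: float = 0.001 * GWEI) -> float:
    """Total fee in wei = L1 rollup cost + L2 execution cost."""
    l1_gas_fee = l1_gas_price * (calldata_gas + fixed_overhead) * fee_scalar
    l2_gas_fee = l2_gas_price * l2_gas_used
    return l1_gas_fee + l2_gas_fee

# An "average" transaction: ~1100 gas of calldata and, say, 100k gas of L2 execution.
fee = total_tx_fee(calldata_gas=1100, l2_gas_used=100_000)
print(fee / 10**18, "ETH")  # the L2 execution part is a tiny fraction of this
```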
Transaction savings
- For ETH transfers → Optimism fees are around 5x cheaper than L1.
- Swaps or options trade → Optimism can be 200x cheaper than L1.
Finding new fee parameters
Prior to lowering fees, the Fixed Overhead parameter was set to 2750 gas/tx and the Fee Scalar was set to 1.5x.
Many changes have been made since then:
- Lower Cost Structure: After the EVM equivalence upgrade, less gas is needed to submit transaction batches to L1
- Fixed Overhead cost decreased almost 25%, from 2750 to 2100 gas per tx.
- Tuning parameters: Optimism reduced the premium buffer (Fee Scalar) from around 35% margin to a 10% margin.
Projections and Optimisations (assessing fee margin influences)
To properly tune the Fixed Overhead and Fee Scalar parameters, we had to understand what influences our fee margin:
1. Calldata
- Calldata gas varies by transaction type:
  - ETH transfer → 0 gas
  - Chainlink Oracle Update → 890 gas
  - Uniswap V3 trade → 3200 gas
- The average transaction uses 1100 gas
2. Overhead
- Cost for adding a transaction to a batch decreases as the total batch size increases.
- Overhead has already dropped from 2750 to 2100 gas and will continue to decrease as Optimism usage increases.
3. L1 Gas Prices
- Transactions are submitted to L1 a few minutes after they are executed on Optimism; during this time, L1 gas prices can change quite a bit.
- If L1 gas price rises → submitter pays more than expected.
- If L1 gas price falls → submitter pays less than expected.
- During volatile periods, this gap can get as wide as a 10% difference.
The optimisation goal is to get as close to the target margin of 10% as possible. This can be achieved by tuning the Fixed Overhead and Fee Scalar.
Result of Optimisation:
Fixed Overhead = 2100 gas
Fee Scalar = 1.24
The above tuning changes were implemented via the following transactions:
- Modifying Fixed Overhead → Transaction to set the fixed L1 base fee by calling `setL1BaseFee(uint256 _baseFee)` (https://optimistic.etherscan.io/tx/0x580314e15d078298f44c874ebd77549c732aeea32309156d5932e106a791519e)
- Modifying Fee Scalar → Transaction to set the fee scalar to 1.24 by calling `setScalar(uint256 _scalar)` (https://optimistic.etherscan.io/tx/0xa333fc5da7e9d0dc81df16243c99fa238c23c24ac8d91e3d21121a06b332adfd)
This results in 30% less transaction fees!
This is the L2 contract where you change these parameters: `0x420000000000000000000000000000000000000F`
Compression Upgrades
In April 2022, Optimism decreased fees by 30-40% by deploying system-wide calldata compression on the network. This upgrade is part of the Bedrock update.
Calldata Overview
Optimism uses Ethereum as a data availability layer.
This means that each transaction executed on Optimism is stored (but not executed) on Ethereum.
We store Optimism transactions in calldata:
- Multiple L2 transactions are batched up into a binary blob.
- The blob is stored in the data field of the transaction.
- To retrieve this data, we look at the bodies of transactions (stored in the blocks themselves - namely the transaction trie).
- With this information we can reconstruct the Optimism chain with the data available on Ethereum.
Storing data in the blocks is MUCH CHEAPER than storing it in contract state.
The consequence, however, is that keeping historical blocks around forever incurs a cost on node operators.
- Calldata costs 4 gas per zero byte and 16 gas per non-zero byte of calldata.
- Zeroes represent around 40% of the bytes in transactions submitted to Optimism.
Posting calldata is where a large part of rollup savings comes from. That being said, it still represents the primary cost passed on to users. We can further reduce these costs with compression.
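A quick way to see why compression pays, using Python's stdlib `zlib` and the 4/16 gas pricing above (the "batch" is toy repetitive data with ~40% zero bytes, not real Optimism calldata):

```python
import zlib

def calldata_gas(data: bytes) -> int:
    """4 gas per zero byte, 16 gas per non-zero byte (EIP-2028 pricing)."""
    zeros = data.count(0)
    return 4 * zeros + 16 * (len(data) - zeros)

# Toy "batch": a repetitive tx-like payload, roughly 40% zero bytes.
batch = (bytes(13) + b"\xf8\x6c\x80\x85\x04\xa8\x17\xc8\x00" * 2) * 500
compressed = zlib.compress(batch, level=9)

print("raw gas:       ", calldata_gas(batch))
print("compressed gas:", calldata_gas(compressed))
print("cost ratio: %.1f%%" % (100 * calldata_gas(compressed) / calldata_gas(batch)))
```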
Compression Overview & Results
The Optimism team used a sample of 22000 batches (~3 million transactions) that were submitted to Ethereum and applied different compression configurations to determine the optimal strategy to enshrine.
This is what they did:
- Looked at a variety of compression algorithms and measured:
  - Compression rate (size of compressed data as a percentage of uncompressed size)
  - Estimated fee savings (assuming that 40% of the bytes in a tx are zero bytes)
- Use of a configuration option which utilises a dictionary:
  - A dictionary is created ahead of time to show the algorithm data fragments that are commonly used in real-world data.
  - The compression algorithm uses the dictionary to better compress data, particularly when compressing a small amount of data at once.
  - By taking a random sample of transactions → they create a dictionary for `zlib` and `zstd` which improves the compression ratio when compressing individual transactions and batches.
TLDR Details (from compression study)
- To get the highest fee savings, an advanced algorithm was needed, applied over as much data as possible.
- Compressing individual transactions on their own will only get you 10-15% savings.
- `zstd` with a dictionary is more performant when common transactions are involved.
- `zstd` performs better when compressing larger amounts of data at once.
- Because users tend to interact with some contracts more than others and favour similar types of transactions, compressing batches of transactions is slightly worse than compressing every transaction at once.
- This shows that when transactions have certain common fields, we can use this to optimise our use of compression algorithms.
- `zlib`, `zstd`, and `brotli` were the algorithms that compressed the most.
- `zstd` tends to perform better than other compression algorithms over a wide range of compression speed/ratio options in common benchmarks.
Final Decisions after the study:
- `zlib` compression (without a dictionary) is being rolled out in the short term because it has good results, good speed, and good availability in different programming languages.
- `zstd` compression will be the long-term goal for achieving the highest possible compression ratio and lowest possible user fees.
Compression Upgrade Scheduling
- `zlib` batch compression coming to Optimism mainnet (24/03/2022)
Retroactive Public Goods Funding
On 20/07/2021, Optimism committed to giving all the profits made from sequencing (prior to decentralising the sequencer) to public goods funding experiments.
The proposal is to form a DAO called the Results Oracle which will fund public goods projects.
Intro to Retroactive Public Goods Funding via Result Oracle
In the longer term, the Results Oracle can be funded by protocol fees (implemented by an L2 project, sequencer auctions, etc.). Unlike other public goods funding DAOs, the Results Oracle funds projects retroactively, rewarding projects that it recognises as having already provided value.
Designing the Oracle
The Results Oracle can send rewards to any address; here are some ideas for what kind of addresses to send rewards to:
- A single individual or organisation that was responsible for making the project happen.
- A smart contract representing a fixed allocation table splitting funds between multiple parties that have contributed towards a project.
- A project token, whose supply is distributed among one or more parties that have contributed time or value to the project, but which can be traded.
Seed Funding
- Grants programs by projects and foundations
- Projects selling their public good tokens on Uniswap
- Quadratic funding
Bedrock
Bedrock is a network upgrade to reduce fees and increase implementation simplicity.
Bedrock achieves this by redesigning the rollup architecture.
Discrete Components
- Rollup Node (`op-node`) + Execution Engine (`op-geth`) - Sequencer code is like miner code in Ethereum
- Batcher (`op-batcher`) - Submits L2 transactions to L1 for data availability
- Proposer (`op-proposer`) - Submits L2 output roots to L1 to enable withdrawals
- Challenge Agent (agent that interacts with fault proof system) + Fault Proof (`cannon`) - Secures withdrawals
System Actors
- Sequencers
- Rollup node that produces blocks
- Gossips out blocks as it creates them
- Verifiers (Replicas, Nodes, etc)
- Rollup nodes that don’t produce blocks
- Batcher
- Submits L2 blocks to L1
- Proposer
- Submits L2 state roots (+ other information) to L1 to enable withdrawals
- Challenge Agents (for Fault Proofs, not yet live)
- Challenge L2 state proposals when faults occur
Actor Interactions
Let us describe some interactions:
- User sends deposit transaction to L1.
- Sequencer reads deposit.
- Sequencer can also get transactions on L2.
- Users can send transactions to verifiers (these eventually get relayed to the sequencer)
- Sequencer creates transaction batches (L2 blocks).
- Batcher submits batches to L1.
- Verifier reads deposits and batches (from L1) and derives a chain that matches the sequencer.
- Output proposer reads the output (state root) from the verifier and submits it to L1 (is this the same actor that is playing the sequencer role?)
- The challenge agent would compare the state root from a verifier against what was submitted to L1 (the transactions and the corresponding state root)

Rollup Node
- Rollup Node reads data from L1.
- Deposits - L2 transactions initiated on L1; they do more than mint L2 ETH.
- Batches - direct to L2 transactions.
- The node talks to the L2 Execution Engine via a slightly modified Engine API.
- Optionally takes part in P2P network.
- Significantly simpler than the current L2 Geth implementation.


Rollup Node - Engine API
- Rollup Node (`op-node`) is like the Ethereum consensus client.
- Engine API connects `op-node` to `op-geth` (the L2 execution client)
- `NewPayloadV1` - insert a new block into the EL (execution layer)
- `ForkchoiceUpdatedV1` - does everything else
  - Specify the preferred head block (including safe + finalised blocks)
  - Start the block building process on the head block.
- `GetPayloadV1` - get the block that was built by the FCU call.
- `ExchangeTransitionConfigurationV1` - not relevant.
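A rough sketch of that call sequence over JSON-RPC, assuming a local authenticated Engine API endpoint (JWT auth omitted for brevity; `op-node`'s actual usage also extends the payload attributes with Optimism-specific fields, which is the "slightly modified" part mentioned above):

```python
import requests

ENGINE_URL = "http://localhost:8551"         # authenticated Engine API port
ZERO_HASH = "0x" + "00" * 32                 # placeholder hashes for illustration

def engine_call(method: str, params: list):
    resp = requests.post(ENGINE_URL, json={
        "jsonrpc": "2.0", "id": 1, "method": method, "params": params,
    })
    return resp.json()["result"]

# 1. FCU: point the EL at the preferred head (+ safe/finalised blocks) and,
#    by passing payload attributes, ask it to start building a block.
result = engine_call("engine_forkchoiceUpdatedV1", [
    {"headBlockHash": ZERO_HASH, "safeBlockHash": ZERO_HASH,
     "finalizedBlockHash": ZERO_HASH},
    {"timestamp": "0x62d8f104", "prevRandao": ZERO_HASH,
     "suggestedFeeRecipient": "0x" + "00" * 20},
])

# 2. Fetch the block that the EL built for us.
payload = engine_call("engine_getPayloadV1", [result["payloadId"]])

# 3. Insert the block into the EL as the new head candidate.
engine_call("engine_newPayloadV1", [payload])
```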
Rollup Node - Deposits & Batches
Deposits serve 2 purposes:
- deposit into L2
- Enable L2 to progress in the absence of the sequencer
The sequencer has only a limited ability to delay deposits and must include them within a certain time range.
Replicas will advance the chain just on deposits alone.
Direct to L2 transactions (meaning via sequencer) are cheaper because we can batch + compress a group of transactions. These are batch submitted.
Rollup Node - P2P
- Sequencer gossips blocks as they are created to the network.
- Replicas can optimistically insert these blocks into their local chain.
- Replicas prefer blocks derived from L1 over what is produced by the sequencer.
- Net result is that replicas stay in sync with the L2 chain with minimal trust assumptions.
- Enables snap sync (not live yet)
Sequencer
- Creates L2 blocks based on deposits + incoming L2 transactions.
- Distributes blocks via P2P as they are created.
- All rollup nodes have sequencing code, it’s like the mining code in Ethereum.
Batchers (component performing compression!)
- Takes L2 blocks produced by the sequencer and transforms them to the data format expected by the rollup node.
- Compresses all transactions inside a group of L2 blocks.
The following 2 actors enable withdrawal from the rollup (Proposer, Challenge Agent)
Proposer
- Submits data about the L2 state to L1.
- After finalisation period has passed, this data enables withdrawals.
Fault Proof + Challenge Agent
- Not yet live - see Cannon talk for more information.
- Secures the bridge by ensuring that all finalised outputs are valid.
- Run against the proposer during the finalisation period after the L2 output has been submitted to L1.
- Cannon:
- Interactive proof game over MIPS execution trace
- Binary search over MIPS execution state to find the diverging instruction.
- Pre-image oracle - load up data and easily prove it on-chain.
- Anything that is deterministic and compiles to MIPS can be proven
Need to read up about MIPS! https://github.com/ethereum-optimism/cannon/blob/master/contracts/MIPS.sol
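The bisection idea in miniature (plain Python over toy traces, nothing Cannon-specific):

```python
def find_divergence(trace_a: list, trace_b: list) -> int:
    """Binary search for the first step where two execution traces disagree.
    On-chain, only that single diverging instruction is then re-executed."""
    lo, hi = 0, min(len(trace_a), len(trace_b)) - 1
    assert trace_a[0] == trace_b[0], "traces must share a starting state"
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid      # traces agree up to mid; divergence is later
        else:
            hi = mid      # already diverged; divergence is at or before mid
    return hi             # first step where the traces differ

# Two traces that diverge at step 6:
a = list(range(10))
b = list(range(6)) + [99, 100, 101, 102]
print(find_divergence(a, b))  # -> 6
```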
Data Flows
Deposits
- Send transaction to the `OptimismPortal` contract.
- The portal contract charges for L2 gas usage and custodies funds that are bridged to L2.
- The portal contract emits an event.
- The Optimism Node reads the deposit event and constructs a special transaction based on it.
- The deposit transaction is executed on L2 with a fixed amount of gas.
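A hedged web3.py sketch of the first step. The `depositTransaction` entrypoint and its argument list are sketched from this flow, so treat the ABI and the placeholder addresses as assumptions to verify against the real contract:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://l1-rpc.example"))   # any L1 provider

# Minimal ABI for the deposit entrypoint (sketched; verify against the
# deployed OptimismPortal before using).
PORTAL_ABI = [{
    "name": "depositTransaction", "type": "function", "stateMutability": "payable",
    "inputs": [
        {"name": "_to", "type": "address"},
        {"name": "_value", "type": "uint256"},
        {"name": "_gasLimit", "type": "uint64"},
        {"name": "_isCreation", "type": "bool"},
        {"name": "_data", "type": "bytes"},
    ],
    "outputs": [],
}]

ME = "0x0000000000000000000000000000000000000000"        # placeholder address
PORTAL = "0x0000000000000000000000000000000000000000"    # placeholder address

portal = w3.eth.contract(address=PORTAL, abi=PORTAL_ABI)

# Deposit 0.1 ETH to yourself on L2. The L2 gas is bought here on L1 with no
# refunds; the portal custodies the ETH and emits the deposit event that the
# rollup node turns into an L2 transaction.
tx = portal.functions.depositTransaction(
    ME, w3.to_wei("0.1", "ether"), 100_000, False, b""
).build_transaction({"from": ME, "value": w3.to_wei("0.1", "ether")})
```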

L2 Transactions
- Submit a transaction to the sequencer.
- The sequencer sees the tx and includes it in a block.
- The batcher sees a new L2 block and submits batches based on that block.
- Replicas create a new L2 block that contains transactions from the batch.
- Replicas execute the L2 block on their L2 EE.

Withdrawals
- Users send an L2 tx to the L2 withdrawal contract.
- The L2 withdrawal contract locks the funds, emits an event, and touches L2 state based on the tx.
- The proposer submits an output proposal that includes the L2 state & withdrawals contract root.
- The user submits a tx on L1 to finalise the withdrawal on L1 after 7 days.

Implementation
1 kloc = 1000 lines of code
Lots of small modular codebases
- `op-node`: currently 15 kloc
- `op-geth`: under 1 kloc diff from geth
- `op-batcher` + `op-proposer`: around 2 kloc; likely to remain under 5 kloc
- Contracts: Order 10 kloc (with tests + supporting code)
- Clean API boundaries between components.
More Information
- Code in the Optimism Monorepo
  - `op-node`
  - `op-batcher`
  - `op-proposer`
- Bedrock contracts in `packages/contracts-bedrock`
- Specs in the specs folder of the Optimism monorepo
Bedrock Workshop (19 July 2022)
Why redesign the rollup?
- Target new fault proof backend - Cannon
Previous Fault Proofs:
- Custom L2 execution engine (custom geth implementation)
- Custom L1 contracts
Cannon:
- Essentially generic optimistic fault proof
- Need a fully deterministic + pre-image based system
- Compile the program to MIPS and run an interactive game over the MIPS execution on-chain.
- New fault proof system enables a design that mirrors post-merge Ethereum
More benefits
- Snap Sync
- Tx pool
- Fast L2 Block Propagation
- Stateless rollup node
- Re-use geth as the execution engine
- Easier to audit
- No replica complexity
- Improved L2 block derivation
System Overview
Rollup Node Overview
- Rollup Node reads data from L1
- Deposits - L2 transactions initiated on L1; they do more than mint L2 ETH.
- Batches - direct to L2 transactions (typical tx)
- It talks to the L2 Execution Engine (EE) via the Eth Engine API.
- Optionally takes part in a P2P network
- Enables unsafe block distribution (live)
- Enables snap sync (on the roadmap)
- Significantly simpler than the current L2 geth implementation.
Rollup Node diagram


System Actors
Sequencer
- Rollup Node that produces blocks.
- Gossips out blocks as it creates them.
Verifiers (Replicas etc)
- Rollup Node that doesn’t produce blocks.
Batcher
- Submits L2 blocks to L1.
Proposer
- Submits L2 state roots (+ other information) to L1 to enable withdrawals.
- This is where the challenge period begins (with the submission of the state root).
Challenge Agents (for Fault Proofs, not live yet)
- Challenge L2 state proposals when faults occur.
System Diagram

- Sequencers and Verifiers are the same piece of software.
- Sequencers and Verifiers talk to each other.
- Sequencers and Verifiers read data from L1.
- Batcher looks at the Sequencer, submits new blocks to L1.
- Verifier detects the data that the Batcher submitted to L1.
- You could submit L2 txs to either the Verifier or the Sequencer; there is going to be a mempool, so the txs are going to be gossiped around.
- When you want to do withdrawals you fetch Merkle proof from a Verifier holding canonical state and submit it to L1.
Sequencer/Verifier are a combination of the rollup node client and L2 execution client.
For the Sequencer/Verifier you need to have the L2 execution engine as well as the consensus client (`op-node`, which comprises the rollup node).
For L1, you could get away with just using an Ethereum provider (like Infura or Alchemy).
The Batcher and Proposer are logically separate from the Sequencer; the binaries are also separate.
The 3 roles are currently trusted together; they share the same trust assumptions.
With Cannon, we will decentralise the Proposer and the Batcher.
Sequencer is the actor capable of extracting MEV.
Data Flows
Deposits
- Send transaction to the OptimismPortal contract on L1.
- OptimismPortal contract charges for L2 gas usage and custodies funds that are bridged to L2.
- OptimismPortal contract emits an event (L2 node will look for this event).
- Optimism Node (Verifier) reads the deposit event and constructs a special tx based on it.
- The deposit tx is executed with a fixed amount of gas on L2.
L2 Transactions
- Submit a tx to the tx pool.
- Sequencer sees the tx and includes it in a block.
- Batcher sees a new L2 block and submits batches based on that block.
- Replicas read batches for the block.
- Replicas create a new L2 block that contains transactions from the batch.
- Replicas execute the L2 block on their local L2 EE.
Withdrawals
- Users send an L2 tx to the L2 withdrawal contract.
- L2 withdrawal contract locks the funds, emits an event, and touches L2 state based on the tx.
- Proposer submits an output proposal that includes the L2 state root and withdrawal contract root.
- User submits a tx on L1 to finalise the withdrawal on L1 after 7 days.
Optimism Portal - Deposit Contract on L1
- Holds all native ETH bridged to L2
- Does not hold ERC-20/ERC-721 (these are held in native bridge contracts)
- Native bridges call into this contract
- Arbitrary messaging passing to L2
- Emits Deposit Event which is turned into the deposit tx on L2
- To, From, Value, Gas Limit, Data
- No signatures (authentication is done by sending L1 tx to this contract)
- No gas refunds b/c gas is bought on L1
- Also have mint parameters (always succeed)
- Deposits are very live + act as the unwind mechanism
- Fallback receive function if you send ETH to the portal
Deposit gas is bought on L1 for L2. Because of this we do not have refunds.
L1 Deposit Gas Market
- Deposits buy non-refundable gas on L1.
- Price of gas is set with EIP-1559-like fee market.
- Target an amount of gas less than the L2 gas limit.
- Deposits are executed first in the L2 block.
- Not connected to the L2 EIP-1559 fee market.
- Fees are currently paid via a gas burn
- Plan to support directly paying fees in the future.
You have to do a conversion from L1 gas to L1 ETH to the gas price for L2 gas, and then calculate how much L2 gas you bought.
- Credit users the gas spent on deposits + gas price updates to the fee burn.
At the end of this process you have emitted an event that can be detected by L2 nodes.
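As back-of-the-envelope arithmetic for that conversion (all numbers invented for illustration, not protocol constants):

```python
# Illustrative numbers only - not protocol constants.
l1_gas_spent = 21_000                 # L1 gas the deposit transaction burns
l1_gas_price = 30 * 10**9             # wei per unit of L1 gas
eth_burned = l1_gas_spent * l1_gas_price          # wei spent on L1

l2_deposit_gas_price = 10**6          # wei per L2 gas, set by the EIP-1559-like market
l2_gas_bought = eth_burned // l2_deposit_gas_price

print(f"{eth_burned} wei burned on L1 buys {l2_gas_bought} L2 gas (non-refundable)")
```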
L2 Block Construction - Sequencer
- Each L2 block references an L1 block (the L1 Origin)
- This is recorded in the first tx of the block as the L1 info or L1 Attributes tx.
- L1 Attributes tx also stores info about L1 into L2 state.
- Multiple L2 blocks per L1 Block (an Epoch)
- Each L2 block contains a sequence number
- Deposits are always included in the first block of the epoch
- Pulled from the L1 Origin Block.
- Sequencers cannot produce blocks with very old L1 Origin
- Sequencer drift parameter (needed to enable confirmation depth + fewer L2 reorgs)
- Read from L2 mempool
- Charged L1 Fee (based on L1 basefee + estimated cost to submit tx)
L1 Origin = reference to L1 block by an L2 block
When you query a block via JSON-RPC (e.g. `eth_getBlockByNumber`), the first transaction of the block is an L1 Attributes TX.
There can be multiple L2 blocks per L1 block.
If there are multiple L2 blocks, the L2 block with deposits must be the first block in the epoch.
L2 block first TX raw response (L1 Attributes TX)
L2 Block Construction - Verifier
- Read deposits just like the sequencer.
- Read batches (all non-deposit transactions) from L1
- Construction payload from deposits + batches
- Sequencing Window (the liveness mechanism)
- Time that the batcher has to submit batches. If no batches are submitted → create empty batches.
- For an L2 block, the window is [L1 Origin, L1 Origin Number + Sequencing Window]
- Windows overlap
- Eagerly read batches from L1
Diagram of L2 Chain

- For L2 blocks #1 and #2 with origin 1, the sequencing window covers L1 blocks 1, 2, and 3.
- This means the batch could be submitted in any one of these 3 L1 blocks.
- If there is a batch in #4 that references #1 or #2, it is just thrown out.
- The same applies for #3 and #4: you only reference those two.
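The acceptance rule from this example, as a tiny sketch (my parameterisation of the window, sized to match the 3-block example above):

```python
SEQUENCING_WINDOW = 3   # L1 blocks, matching the example above

def batch_accepted(l1_origin: int, batch_l1_block: int) -> bool:
    """A batch for an L2 block with this L1 origin must land on L1 within the
    window; anything later (e.g. L1 block #4 for origin 1) is thrown out."""
    return l1_origin <= batch_l1_block < l1_origin + SEQUENCING_WINDOW

print(batch_accepted(1, 3))   # True  - inside the window
print(batch_accepted(1, 4))   # False - thrown out
```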
Unsafe Blocks & P2P
- Safe Block: An L2 block fully derived from L1 data.
- May be subject to change if the last L1 block involved in the L2 block is re-orged.
- Unsafe Block: An L2 block not derived from L1 data.
- All blocks that the sequencer produces are unsafe until they are batch submitted.
- Reconciliation / Consolidation Process
- Compare Unsafe Block against Payload Attributes that are fully derived from L1 data.
- If there is a difference, execute an L2 reorg to the payload attributes.
- If they are the same, mark the Unsafe Block as Safe.
- Sequencer gossips Unsafe Blocks to Verifiers.
- Sequencer + Verifiers use the same reconciliation logic.
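A toy model of that reconciliation loop (block contents reduced to comparable values; the names are invented for illustration):

```python
def consolidate(unsafe_chain: list, derived_chain: list) -> list:
    """Reconciliation: promote unsafe blocks that match the L1-derived
    payload attributes to safe; on any mismatch, reorg to the derived chain."""
    safe_chain = []
    for i, derived in enumerate(derived_chain):
        if i < len(unsafe_chain) and unsafe_chain[i] == derived:
            safe_chain.append(unsafe_chain[i])     # same block: mark it safe
        else:
            return safe_chain + derived_chain[i:]  # L2 reorg to derived data
    return safe_chain

# The sequencer's gossiped blocks match what L1 data later derives:
print(consolidate(["b1", "b2"], ["b1", "b2"]))   # -> ['b1', 'b2'] (all safe)
print(consolidate(["b1", "bX"], ["b1", "b2"]))   # -> ['b1', 'b2'] (reorg at #2)
```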
Differences (non-exhaustive)
- Fixed L2 block times (likely will be 2s)
- No more CTC/SCC contracts
- Much simpler batch submission
- L2 reorgs now exist (but are unlikely)
- Handles L1 reorgs - much closer to L1 tip
- Snap Sync is coming
- Expect to stay closer to Ethereum HFs
- Use typical HF upgrade path in the future
- Use P2P network to distribute unsafe blocks.
- Reduced batch inclusion period
Kelvin Fichter - Optimism’s Technical Roadmap (11 August 2022)
Useful References
Blog Posts
Analytics (Dune, Notebooks)
Specs
Discussions
Podcasts
Twitter
Wikipedia Glossary
Videos/Talks