How Ethereum scales with Arbitrum Nitro and how to use it
A blockchain on a blockchain deep dive
Have you heard of Arbitrum Nitro? Its new WAVM brings the Plasma idea to smart contracts in a very efficient way: you get a separate chain that inherits the security guarantees of the Ethereum mainnet. Arbitrum has already been one of the most successful Layer 2s, and Nitro is a major upgrade to it.
Since the publication of this post, Arbitrum has further solidified its position as a leading L2. Take a look at this analysis from CoinGecko:
But let's start at the beginning...
What are Merkle Trees?
Merkle Trees are at the foundation of how this scaling technology works. At the top of a Merkle tree is the root hash. It's created by hashing all original values as leaf nodes and then repeatedly combining two adjacent hashes into a new parent hash, all the way up until only a single root hash remains. A Merkle proof is a way to prove to someone who only knows the root hash that a given value is in fact part of the tree as one of its leaves.
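As a quick illustration, a minimal leaf-inclusion check could look like the sketch below. This is my own simplified example with made-up names, not Arbitrum code, and it uses sorted-pair hashing purely to keep the proof format simple:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical example: verify that `leaf` is part of the tree with root `root`,
// given the sibling hashes along the path from the leaf to the root.
contract MerkleProofExample {
    function verify(
        bytes32 root,
        bytes32 leaf,
        bytes32[] calldata proof
    ) external pure returns (bool) {
        bytes32 computed = leaf;
        for (uint256 i = 0; i < proof.length; i++) {
            bytes32 sibling = proof[i];
            // Hash each pair in a canonical order so the prover
            // does not need to pass left/right flags.
            computed = computed < sibling
                ? keccak256(abi.encodePacked(computed, sibling))
                : keccak256(abi.encodePacked(sibling, computed));
        }
        return computed == root;
    }
}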
I wrote a long guide on Merkle Trees in case you want to dive deeper into the topic.
State of a smart contract
In Ethereum, one Merkle tree is the state tree, which contains all account state such as user ETH balances, but also the storage of every contract. This allows us to create Merkle proofs about smart contract state!
So it's possible to prove a smart contract has a certain state using the Merkle proof mechanism. Keep that in mind for later.
How does Plasma work?
Plasma uses a combination of smart contracts and Merkle proofs. Together, these enable fast and cheap transactions by offloading them from the main Ethereum blockchain onto a Plasma chain. In contrast to regular sidechains, you cannot run arbitrary smart contracts on it.
In Plasma users send transactions between each other in UTXO style where the results of new balances are continuously updated in the Ethereum smart contract as Merkle tree roots. Once a Merkle root is updated in the smart contract, it gives users the security over their funds even if the plasma chain operator is malicious. The root encapsulates the result from many sent funds transactions. Should a Plasma operator submit an invalid root, users can contest it and safely get their funds back. For more details have a look here.
But as said before, it cannot run smart contracts. So no Uniswap with Plasma is possible.
Arbitrum: How to run a blockchain on a blockchain
But this is where Arbitrum comes in. It's Plasma for smart contracts!

The core idea here is actually quite simple. Just like in Plasma, you have a layer 2 chain that runs all transactions, and you only occasionally update a Merkle root on layer 1. The Merkle root in this case does not cover UTXO transactions as in regular Plasma, but the full state of a smart contract, or rather the full state of all smart contracts in use.
Yes this means we can run arbitrary smart contracts on Arbitrum! In very short, this is how it works:
- Represent smart contract states as a Merkle tree
- Run all transactions only on the Arbitrum chain
- Continuously update the state roots on Ethereum layer 1
- The Arbitrum chain itself provides weaker security, but the state roots on Ethereum enable fraud proofs
- When a validator from layer 2 submits a malicious state root and it's contested, they lose their bond.
- Fraud proofs are gas-expensive, but more efficient than Optimism's thanks to an interactive mechanism (see details below).
- Only a single execution step is run on-chain when contested, with the prover submitting any required state.
Now you might realize where the scaling comes from: you only run a transaction on layer 1 when it is contested with a fraud proof. That's the gain. The scaling advantage comes solely from the fact that you won't run 99.9% of transactions on layer 1.
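To make this assert-and-challenge pattern concrete, here is a heavily simplified, hypothetical sketch. It is not Arbitrum's actual protocol code; the contract, names, and bonding logic are made up purely for illustration:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical toy rollup: validators post state roots with a bond,
// anyone can challenge within a window; a successful fraud proof
// (verified elsewhere) would slash the bond and reject the root.
contract ToyRollup {
    struct Assertion {
        bytes32 stateRoot;
        address validator;
        uint256 deadline;
        bool challenged;
    }

    uint256 public constant BOND = 1 ether;
    uint256 public constant CHALLENGE_PERIOD = 7 days;
    Assertion[] public assertions;

    function assertStateRoot(bytes32 stateRoot) external payable {
        require(msg.value == BOND, "bond required");
        assertions.push(
            Assertion(stateRoot, msg.sender, block.timestamp + CHALLENGE_PERIOD, false)
        );
    }

    function challenge(uint256 index) external {
        Assertion storage a = assertions[index];
        require(block.timestamp < a.deadline, "challenge window closed");
        a.challenged = true;
        // In a real rollup the interactive fraud proof would now narrow the
        // dispute down to a single execution step and verify it on-chain.
    }
}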
Arbitrum Detailed Overview
The big ideas behind Arbitrum Nitro are:
- Sequencing
- Geth at the core
- Wasm for Proving
- Optimistic Rollups via Interactive Fraud Proofs

To actually run the transactions, we need native Geth, Geth compiled to Wasm, and Merkle proofs. The architecture looks like this:
- On the highest level you have the blockchain node functionality.
- ArbOS handles L2 functionality like batch decompression and bridging.
- The core Geth EVM handles contract execution, either natively or via WASM.

How are transactions included?
New transactions can be added in three ways:
- Normal Inclusion by Sequencer
- Message from L1 included by Sequencer
- Message from L1 force-included on L2
1. Normal Inclusion by Sequencer
In the normal case, the (currently still centralized) sequencer adds new messages to the inbox. This is done by calling addSequencerL2Batch. Here we check that the sender is an allowed batch poster (or the rollup contract), as only they may call this function:
function addSequencerL2Batch(
    uint256 sequenceNumber,
    bytes calldata data,
    uint256 afterDelayedMessagesRead,
    IGasRefunder gasRefunder,
    uint256 prevMessageCount,
    uint256 newMessageCount
) external override refundsGas(gasRefunder) {
    if (!isBatchPoster[msg.sender] && msg.sender != address(rollup)) revert NotBatchPoster();
    [...]
    addSequencerL2BatchImpl(
        dataHash_,
        afterDelayedMessagesRead_,
        0,
        prevMessageCount_,
        newMessageCount_
    );
    [...]
}
And then inside addSequencerL2BatchImpl the bridge is called to enqueue the message to the inbox:
bridge.enqueueSequencerMessage(
    dataHash,
    afterDelayedMessagesRead,
    prevMessageCount,
    newMessageCount
);
This then calls enqueueSequencerMessage on the bridge, which simply appends a new accumulator hash to the inbox array:
bytes32[] public sequencerInboxAccs;

function enqueueSequencerMessage(
    bytes32 dataHash,
    uint256 afterDelayedMessagesRead,
    uint256 prevMessageCount,
    uint256 newMessageCount
)
    external
    onlySequencerInbox
    returns (
        uint256 seqMessageIndex,
        bytes32 beforeAcc,
        bytes32 delayedAcc,
        bytes32 acc
    )
{
    [...]
    acc = keccak256(abi.encodePacked(beforeAcc, dataHash, delayedAcc));
    sequencerInboxAccs.push(acc);
}
2. Message from L1 by Sequencer
Messages can also be added by anyone through direct calls on L1. This is useful, for example, when making deposits from L1 to L2.
Eventually this calls enqueueDelayedMessage on the bridge, via deliverToBridge.
bytes32[] public delayedInboxAccs;

function enqueueDelayedMessage(
    uint8 kind,
    address sender,
    bytes32 messageDataHash
) external payable returns (uint256) {
    [...]
    delayedInboxAccs.push(
        Messages.accumulateInboxMessage(prevAcc, messageHash)
    );
    [...]
}

function deliverToBridge(
    uint8 kind,
    address sender,
    bytes32 messageDataHash
) internal returns (uint256) {
    return
        bridge.enqueueDelayedMessage{value: msg.value}(
            kind,
            AddressAliasHelper.applyL1ToL2Alias(sender),
            messageDataHash
        );
}
3. Message from L1 force-included on L2
There is one issue with the second case. The sequencer can take messages from the delayed inbox and process them, but it may also simply ignore them. In that case those messages might never end up on L2. And since the sequencer is still centralized, there is a third backup option called forceInclusion.
Anyone can call this function: should the sequencer stop posting messages for a minimum amount of time, it allows others to continue posting messages.
So why is there a delay at all, and why not always allow users to force-include transactions immediately? If the sequencer has priority, it can give users soft confirmations about their transactions, leading to a better UX. If there were constant force inclusions, the sequencer could not pre-confirm to users what will happen. Why? A force-included transaction may invalidate one that the sequencer was planning to post.
function forceInclusion(
    uint256 _totalDelayedMessagesRead,
    uint8 kind,
    uint64[2] calldata l1BlockAndTime,
    uint256 baseFeeL1,
    address sender,
    bytes32 messageDataHash
) external {
    [...]
    if (l1BlockAndTime[0] + maxTimeVariation.delayBlocks >= block.number)
        revert ForceIncludeBlockTooSoon();
    if (l1BlockAndTime[1] + maxTimeVariation.delaySeconds >= block.timestamp)
        revert ForceIncludeTimeTooSoon();
    [...]
    addSequencerL2BatchImpl(
        dataHash,
        _totalDelayedMessagesRead,
        0,
        prevSeqMsgCount,
        newSeqMsgCount
    );
    [...]
}
How do the Fraud Proofs work?
Let's explore how the fraud proofs of Arbitrum Nitro work in detail.
1. WAVM
New in Arbitrum Nitro is the WAVM. It essentially reuses the Geth Ethereum node code and compiles it to Wasm (or rather a slightly modified version of Wasm). Wasm stands for WebAssembly and is an environment that allows running code regardless of the platform, similar to the EVM, but without gas. It's also a web-wide standard, so it enjoys broader language support and better performance. Compiling the Geth code, written in Go, into Wasm is therefore possible.
How does this Wasm execution help us?

We can run proofs for it! Because it’s a controlled execution environment, we can replicate its execution inside a Solidity smart contract. That is the requirement for running fraud proofs.
So are we just running everything within the WAVM? Well, Wasm execution is still slower than natively compiled code. But here's the beauty of Nitro: the same Geth code is compiled to Wasm for proving, but to native code for execution. This way we get the best of both worlds: run the chain with native performance, yet still be able to execute proofs.
2. Fraud Proofs

Now let's take a look at how these fraud proofs work in detail. What do we need?
- We need a mechanism to get pre- and post-state of an execution.
- We need to be able to run the WAVM execution in a Solidity contract.
- We need an interactive mechanism to determine which execution step to prove.
The last step is technically optional, but it is a major performance gain, because it means only a single execution step ever needs to be proven on-chain. It does, however, require a few additional interactive rounds between the challenger and the challenged node (roughly log(N) bisection rounds to narrow N execution steps down to one). We won't go into the details of that here, but you can read more about it here. And of course in the source code directly.
But we will go into detail about the other two parts now.
3. Get Pre- and Post-state of an Execution
During the interactive challenge, the dispute is eventually narrowed down to a single execution step the parties disagree on. This single step has a known pre- and post-execution state Merkle root hash. The post-execution hash is the challenged one, so at the end we will compare it to the result of executing the step ourselves. The pre-execution hash is not challenged and is thus trusted.
It will be used to initialize the WAVM machine:
struct Machine {
    MachineStatus status;
    ValueStack valueStack;
    ValueStack internalStack;
    StackFrameWindow frameStack;
    bytes32 globalStateHash;
    uint32 moduleIdx;
    uint32 functionIdx;
    uint32 functionPc;
    bytes32 modulesRoot;
}
The challenger will initialize this machine with all the data.
In the contract we then only need to double check that this data represents the stored Merkle root hash:
require(mach.hash() == beforeHash, "MACHINE_BEFORE_HASH");
Now we can also trust the modules root and use it to verify the modules data.
A module is defined as:
struct Module {
    bytes32 globalsMerkleRoot;
    ModuleMemory moduleMemory;
    bytes32 tablesMerkleRoot;
    bytes32 functionsMerkleRoot;
    uint32 internalsOffset;
}
This holds data in the form of further Merkle root hashes for WAVM machine data. And the challenger also initializes this data.
The contract again just verifies it with the previous modulesRoot:
(mod, offset) = Deserialize.module(proof, offset);
(modProof, offset) = Deserialize.merkleProof(proof, offset);
require(
    modProof.computeRootFromModule(mach.moduleIdx, mod) == mach.modulesRoot,
    "MODULES_ROOT"
);
And lastly we do the same again for the instruction data:
struct Instruction {
    uint16 opcode;
    uint256 argumentData;
}
And it will be verified via the functionsMerkleRoot:
MerkleProof memory instProof;
MerkleProof memory funcProof;
(inst, offset) = Deserialize.instruction(proof, offset);
(instProof, offset) = Deserialize.merkleProof(proof, offset);
(funcProof, offset) = Deserialize.merkleProof(proof, offset);
bytes32 codeHash = instProof.computeRootFromInstruction(mach.functionPc, inst);
bytes32 recomputedRoot = funcProof.computeRootFromFunction(
    mach.functionIdx,
    codeHash
);
require(recomputedRoot == mod.functionsMerkleRoot, "BAD_FUNCTIONS_ROOT");
So now we have an initialized WAVM machine and all that is left to do is execute this one single operation. This now depends on the exact instruction we need to run.
Take, for example, a simple addition:
uint32 b = mach.valueStack.pop().assumeI32();
uint32 a = mach.valueStack.pop().assumeI32();
[...]
return (a + b, false);

That's basically it: pop the top two values from the machine's value stack and add them together.
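For intuition on how the result flows back into the machine, a full handler for such an add instruction might look roughly like this. This is my own illustrative sketch, not verbatim Nitro code; in particular the newI32 helper used to wrap the result is an assumption:

// Hypothetical full handler for a 32-bit add instruction (illustrative only).
function executeI32Add(
    Machine memory mach,
    Module memory,
    Instruction calldata,
    bytes calldata
) internal pure {
    uint32 b = mach.valueStack.pop().assumeI32();
    uint32 a = mach.valueStack.pop().assumeI32();
    // Wasm 32-bit arithmetic wraps around, so use unchecked addition.
    uint32 result;
    unchecked {
        result = a + b;
    }
    // Push the result back onto the value stack as an i32 value.
    mach.valueStack.push(ValueLib.newI32(result));
}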
Let’s look at another instruction. A local get instruction:
function executeLocalGet(
    Machine memory mach,
    Module memory,
    Instruction calldata inst,
    bytes calldata proof
) internal pure {
    StackFrame memory frame = mach.frameStack.peek();
    Value memory val = merkleProveGetValue(frame.localsMerkleRoot, inst.argumentData, proof);
    mach.valueStack.push(val);
}
The StackFrame comes from the WAVM initialization where we can find the localsMerkleRoot:
struct StackFrame {
    Value returnPc;
    bytes32 localsMerkleRoot;
    uint32 callerModule;
    uint32 callerModuleInternals;
}
And via Merkle Proof we can retrieve the value and push it to the stack.
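For intuition, merkleProveGetValue could look roughly like the following simplified reconstruction. This is my own sketch, assuming the Deserialize and MerkleProof helpers seen above and a computeRootFromValue function; it is not the verbatim Nitro code:

function merkleProveGetValue(
    bytes32 merkleRoot,
    uint256 index,
    bytes calldata proof
) internal pure returns (Value memory) {
    uint256 offset = 0;
    Value memory value;
    MerkleProof memory merkle;
    // Read the claimed value and its Merkle proof from the proof bytes.
    (value, offset) = Deserialize.value(proof, offset);
    (merkle, offset) = Deserialize.merkleProof(proof, offset);
    // Recompute the root from the claimed value and verify it against the trusted root.
    require(merkle.computeRootFromValue(index, value) == merkleRoot, "WRONG_MERKLE_ROOT");
    return value;
}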
Lastly we compare the resulting hash of this single computational step against the post-state hash that was asserted:
require(
    afterHash != selection.oldSegments[selection.challengePosition + 1],
    "SAME_OSP_END"
);
Only if the computed hash does not match the asserted one is the fraud proof valid and do we continue. The challenger has then won, and a new post-state will be accepted.
How to implement on Arbitrum yourself
Arbitrum fully supports Solidity, so you can take your contracts as they are with just a few caveats:
- blockhash(x): returns a cryptographically insecure, pseudo-random hash (and 0 for blocks outside the allowed range)
- block.coinbase: returns zero
- block.difficulty: returns the constant 2500000000000000
- block.number / block.timestamp: return an "estimate" of the L1 block number / timestamp
- msg.sender: works the same way it does on Ethereum for normal L2-to-L2 transactions; for L1-to-L2 "retryable ticket" transactions, it will return the L2 address alias of the L1 contract that triggered the message. See retryable ticket address aliasing for more.
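If you need the actual Arbitrum (L2) block number rather than the L1 estimate, you can query the ArbSys precompile at address 0x64. The snippet below is my own illustration with a minimal, assumed interface; check the Arbitrum documentation for the full precompile definition:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Minimal interface for the ArbSys precompile (address 0x64 on Arbitrum).
interface IArbSys {
    function arbBlockNumber() external view returns (uint256);
}

contract BlockNumberExample {
    function blockNumbers() external view returns (uint256 l1Estimate, uint256 l2Block) {
        l1Estimate = block.number;                        // estimate of the L1 block number
        l2Block = IArbSys(address(100)).arbBlockNumber(); // actual L2 block number
    }
}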
How to use the Arbitrum networks
These are the important Arbitrum networks. You can use the wallet_addEthereumChain functionality from supported wallets like MetaMask; otherwise users will need to add the network manually.
For now the mainnet is still operating on the older architecture, but the Rinkeby testnet has been fully upgraded to the new Arbitrum Nitro stack.
To get funds use the bridge available at https://bridge.arbitrum.io/.
const params = [{
    "chainId": "0xa4b1", // 42161 in decimal; wallet_addEthereumChain expects hex. Rinkeby testnet: "0x66eeb" (421611)
    "chainName": "Arbitrum",
    "rpcUrls": [
        "https://arb1.arbitrum.io/rpc"
        // rinkeby: "https://rinkeby.arbitrum.io/rpc"
        // goerli: "https://goerli-rollup.arbitrum.io/rpc"
    ],
    "nativeCurrency": {
        "name": "Ether",
        "symbol": "ETH",
        "decimals": 18
    },
    "blockExplorerUrls": [
        "https://explorer.arbitrum.io"
        // rinkeby: "https://rinkeby-explorer.arbitrum.io"
        // goerli: "https://goerli-rollup-explorer.arbitrum.io"
    ]
}]

try {
    await ethereum.request({
        method: 'wallet_addEthereumChain',
        params,
    })
} catch (error) {
    // something failed, e.g., user denied request
}
{
    arbitrum_mainnet: {
        provider: function () {
            return new HDWalletProvider(
                mnemonic,
                "https://arbitrum-mainnet.infura.io/v3/" + infuraKey,
                0,
                1
            );
        },
    },
    arbitrum_rinkeby: {
        provider: function () {
            return new HDWalletProvider(
                mnemonic,
                "https://rinkeby.arbitrum.io/rpc",
                0,
                1
            );
        },
    },
    arbitrum_goerli: {
        provider: function () {
            return new HDWalletProvider(
                mnemonic,
                "https://goerli-rollup.arbitrum.io/rpc",
                0,
                1
            );
        }
    }
}
How to deploy to the Arbitrum networks
Now you can add the Arbitrum networks to Truffle or Hardhat as shown above.
A good practice I would recommend is writing your tests with Hardhat using a regular config, so you can run them fast and with console.log/stack traces, and only occasionally using Truffle to run tests against a testnet.
Lastly you will need to activate Arbitrum in the Infura settings: https://infura.io/payment.
