Solidity Deep Dive: New Opcode 'Prevrandao'
All you need to know about the latest opcode addition
Let’s back up for a second and figure out what has changed since ‘The Merge’. The upgrade finally brought a new consensus mechanism to Ethereum. Instead of the old Proof of Work, blocks are now produced via Proof of Stake.
Proof of Work finds consensus via block hashes and a process called mining. In Ethereum prior to The Merge, miners would search for specific block hashes using GPUs. This process was unpredictable and only solvable by brute force, so finding a fitting hash proved that you had done some work.
Now you prove stake instead of work. A miner is now called a validator, and each one has to put up 32 ETH as stake. New blocks are proposed by a validator chosen from the set of validators that have put up this stake.
Why Randomness in Proof of Stake?
But why does Ethereum even require randomness for its Proof of Stake protocol? Naively you might just design it so that each validator is chosen one after another in a pre-defined, round-robin order.
But such a predictable scheme comes with a range of potential attacks.
- Denial of Service (DoS): If you know in advance who the next block proposers will be, you can more easily launch DoS attacks against the chosen validators one by one. If the order is random, you have less time to plan your attack.
- Selfish Validator Registrations: One could try to register especially advantageous validators that are chosen sooner and game the mechanism to earn more rewards.
- Bribing: You could also try to bribe validators in advance for blocks that interest you, e.g. to censor specific transactions or to not propose a block at all.
- Double-spending: Maybe one of the most critical attacks could be a double-spending attack. If you can predict the order in advance, it's easier to plan such an attack through a combination of bribing and simply owning a sequence of validators yourself. And you'll know exactly in which block you'll be able to attempt your double-spend.
Enter the Randao
So how is the randomness in Ethereum created? In simple terms, you have every validator in a given epoch pre-commit to a locally computed random number. The final random number is all validators' local random numbers combined via XOR (exclusive or). Combining with XOR ensures that even a single honestly computed random number from one of the validators in an epoch leads to a new, unmanipulated random number.
An epoch consists of 32 slots, so 32 potential block proposers and, assuming all of them are online and proposing, a total of 32 blocks.
Technically, validators are not even computing a local random number anymore. Instead, they sign the current epoch number with their private key using a BLS signature. The signature is then hashed and used as the random number. This simplifies the protocol while also allowing for multi-party validators, something that wouldn't be possible with a commitment scheme.
Updating Randao
Once an epoch is finished, the newly computed randomness is used to determine the validators for the next epoch. Technically it's actually the epoch after next, to give validators time to learn about and prepare for their new roles, but that's a detail we can ignore.
And each shuffle is basically (see the sketch below):
- Sign over the current epoch number.
- Compute the hash of the signature.
- Calculate the new randomness as hash XOR previous randomness.
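Here is a minimal sketch of that mixing step, written as Solidity purely for illustration. The real update lives in the beacon chain's consensus clients, not in a smart contract, and hashes the proposer's BLS signature with SHA-256; the function and parameter names here are made up.

// Illustrative only: the actual Randao mix lives in the beacon state,
// not in a contract. Assumes the proposer's BLS signature over the
// epoch number is passed in as raw bytes.
function mixInRandaoReveal(bytes32 previousMix, bytes memory blsSignature)
    internal
    pure
    returns (bytes32 newMix)
{
    // Hash the signature to get an evenly distributed contribution...
    bytes32 contribution = sha256(blsSignature);
    // ...and XOR it into the previous mix so that every bit is affected.
    newMix = previousMix ^ contribution;
}

XOR-ing the hashed reveal into the previous mix is what gives the "one honest contribution is enough" property mentioned above.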
Updating the Randao randomness like this has a few great properties.
- The hash ensures an even distribution over all possible random values => fairness to all possible outcomes.
- XOR makes sure that every single bit is affected, meaning that just one truly random value in one epoch will make the final result truly random.
- Signing over the epoch number (instead of e.g. the slot number) slightly decreases influence on future Randao epochs. Why?
- Imagine I have the last validator in one epoch. I can choose to reveal my signature which updates the Randao or not reveal the signature which would keep the previous Randao value. In total two possible outcomes, let's call them outcome A and B.
- Outcome A would give me a next epoch that ends with my own validator V1 followed by my own validator V0.
- Outcome B would give me a next epoch that ends with my own validator V0 followed by my own validator V1.
- Because both validators sign the same epoch number either way, their reveals are identical in both orderings, so the resulting influence on the randomness is the same for outcome A and B. Regardless of my choice, I will have the same influence.
Biasability
We've touched on biasability a little bit already. Essentially it comes down to the last revealer problem.
- You cannot directly change the randomness.
- But you can choose to not sign the data in your slot.
And if you don't sign the data in your slot? Well, no one else has your private key, so no one else can produce this data. In this case the Randao update for the slot is simply skipped.
What is the cost of such an attack?
The block reward is roughly 0.044 ETH (depending on ETH inflation rates). Not signing simply means losing this reward.
ultrasound.money has a nice visualization of staking inflation compared to transaction fees. Since EIP-1559 the base fee portion of transaction fees is burnt, so if the fees burnt in a block exceed the block reward, ETH is actually deflationary.
Now if this happened to be the last slot in an epoch, the final random value is simply the one from the second-to-last slot. And of course an attacker could control multiple validators in the last slots of an epoch. If an attacker controls the last four proposers of an epoch, in each of the 4 slots he can choose to sign or not, giving him 2^4 = 16 possible final randomness values to choose from.
In other words, for every controlled validator at the end of an epoch, an attacker gets one bit of influence on the final output.
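To make this concrete, here is a purely illustrative sketch of the choices such an attacker has. It assumes the attacker controls the last k proposers of an epoch and already knows their would-be contributions (the hash of each signature over the epoch number, which is deterministic); all names are hypothetical.

// Illustrative only: enumerates the final Randao mixes an attacker who
// controls the last k proposers of an epoch could choose between.
// startMix is the mix before their slots, contributions are the hashed
// signatures they could reveal (known to the attacker in advance).
function candidateMixes(bytes32 startMix, bytes32[] memory contributions)
    internal
    pure
    returns (bytes32[] memory mixes)
{
    uint256 k = contributions.length;
    uint256 numPatterns = uint256(1) << k; // 2^k reveal/withhold patterns
    mixes = new bytes32[](numPatterns);
    for (uint256 pattern = 0; pattern < numPatterns; pattern++) {
        bytes32 mix = startMix;
        for (uint256 i = 0; i < k; i++) {
            // Bit i of the pattern decides whether proposer i reveals.
            if (((pattern >> i) & 1) == 1) {
                mix ^= contributions[i];
            }
        }
        mixes[pattern] = mix;
    }
}

For k = 4 this yields the 16 candidate values mentioned above; the attacker simply reveals the subset of signatures that produces the mix he likes best.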
Reducing Biasability through VDFs
So we can see that the Randao randomness is biasable to some degree. The good news is that this doesn't break Ethereum's protocol security, so it's good enough for the protocol itself.
But can we improve on it anyway? Especially since Dapp developers like us might want a more secure, less biasable random number.
An improvement for this last-revealer problem that Ethereum has been planning is based on Verifiable Delay Functions (VDFs). A VDF is a function that takes a prescribed amount of time to compute and that no one can compute orders of magnitude faster than everyone else. One approach is to enforce sequential computation, e.g. by repeated squarings of a number. You can see the talk from Justin Drake about this here.
The output of the VDF would then be used for updating the Randao value. So even the last revealer would not know the VDF output in time before having to decide whether or not to sign.
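As a toy illustration of the "sequential squarings" idea, here is a hedged sketch. It is not a real VDF (real constructions work in groups of unknown order and come with a succinct proof of correct evaluation), and all names and parameters are made up.

// Toy illustration of a sequential computation: repeated modular squaring.
// Each step depends on the previous result, so the work cannot be
// parallelized; a real VDF additionally provides a proof of correctness.
function sequentialSquarings(uint256 seed, uint256 iterations, uint256 modulus)
    internal
    pure
    returns (uint256 result)
{
    result = seed % modulus;
    for (uint256 i = 0; i < iterations; i++) {
        result = mulmod(result, result, modulus);
    }
}

The delay property comes precisely from this forced sequentiality: no amount of parallel hardware lets you skip ahead to the final value.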
Some research has been done into this already under https://www.vdfalliance.org/. But it will take quite some time for this to actually be implemented, and there are no concrete plans for it any time soon.
EIP-4399: A new Prevrandao opcode
Now with the understanding of how Randao works, say hello to EIP-4399 and a new opcode called 'prevrandao'. It was added in the Paris network upgrade.
To make it backwards compatible, the old block.difficulty opcode, which used to give you the current block's Proof of Work difficulty, now returns the prevrandao value instead. The difficulty opcode doesn't make sense anymore with Proof of Stake, and re-using it like this is a nice way to keep old contracts working.
So what exactly is returned by prevrandao? The name makes it quite clear: you get the Randao value from the latest Randao update, i.e. from the last slot in which someone actually updated the Randao and produced a new block.
Why not the new Randao from the current block? Because during block execution the new Randao value is not yet known. So keep that in mind for security: it's the previous round's value, which is already known at the time of execution.
How to use current Prevrandao
So with all that said, how can you actually use prevrandao in a secure way? There are many things to consider, and in many cases you're better off just not using it for now until better methods exist.
Update: Solidity 0.8.18 now supports block.prevrandao. For any earlier version, you need to use block.difficulty.
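For a minimal example, assuming Solidity 0.8.18 or later (the contract name is just for illustration):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.18;

contract RandaoReader {
    // Returns the Randao mix from the previous slot (EIP-4399).
    // On compiler versions before 0.8.18, use block.difficulty instead.
    function previousRandao() external view returns (uint256) {
        return block.prevrandao;
    }
}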
First the obvious, randomness from the past is known and predictable. So what shall we do? We pick a future prevrandao value.
How far in the future? Well, this is where it gets tricky, and also where further support in the EVM would be needed. EIP-4399 recommends:
- At least four epochs in the future. A new epoch ensures a new set of validators. And in particular, four epochs ensures the network will miss at least one proposal, further reducing Randao predictability.
- A slot that is not near the beginning of an epoch. Imagine you used the proposer of slot 4 of an epoch. Roughly 6 minutes before the epoch starts, it is already known who the validators in the new epoch will be. An attacker could use this time to try to bribe or attack the 4 proposers scheduled before that slot. He could then learn the randomness early while also having an influence on it, choosing between 2^4 = 16 different outcomes.
Unfortunately the EVM currently doesn't even allow access to the current epoch number. So all we can do is use the block number to approximate it. Remember that a slot can be empty and produce no block. That means if we wait 128 blocks, we at least have the guarantee of having waited four full epochs, so let's do that.
There are now two ways you could implement it.
Idea 1: require future block >= n
We can enforce that the prevrandao value used comes from a block at or after a block number committed when starting the game.
- In a first transaction you would determine the block number to be used.
- In a second transaction you would wait until the block number has passed and play the game.
This method of course allows validators to censor the second transaction and withhold it until a moment where the prevrandao (block.difficulty) value is favorable to them.
mapping (address => uint256) public gameWeiValues;
mapping (address => uint256) public blockNumbersToBeUsed;

function playGame() external payable {
    uint256 blockNumberToBeUsed = blockNumbersToBeUsed[msg.sender];

    if (blockNumberToBeUsed == 0) {
        // first run, determine block number to be used
        blockNumbersToBeUsed[msg.sender] = block.number + 128;
        gameWeiValues[msg.sender] = msg.value;
        return;
    }

    require(block.number >= blockNumberToBeUsed, "Too early");

    uint256 randomNumber = block.prevrandao; // block.difficulty before Solidity 0.8.18

    // reset state before sending ETH (checks-effects-interactions)
    uint256 playedWeiValue = gameWeiValues[msg.sender];
    blockNumbersToBeUsed[msg.sender] = 0;
    gameWeiValues[msg.sender] = 0;

    // win on an even, non-zero random value
    if (randomNumber != 0 && randomNumber % 2 == 0) {
        uint256 winningAmount = playedWeiValue * 2;
        (bool success, ) = msg.sender.call{value: winningAmount}("");
        require(success, "Transfer failed.");
    }
}
Idea 2: require future block == n
Alternatively we can enforce a specific block. Then we don't have the issue of validators withholding the transaction.
- In a first transaction you would determine the block number to be used.
- In a second transaction you would wait until exactly that block number and play the game.
This method of course has the issue that you may miss the block and would then need to handle the case of not having any randomness.
mapping (address => uint256) public gameWeiValues;
mapping (address => uint256) public blockNumbersToBeUsed;

function playGame() external payable {
    uint256 blockNumberToBeUsed = blockNumbersToBeUsed[msg.sender];

    if (blockNumberToBeUsed == 0) {
        // first run, determine block number to be used
        blockNumbersToBeUsed[msg.sender] = block.number + 128;
        gameWeiValues[msg.sender] = msg.value;
        return;
    }

    // together these enforce playing in exactly the committed block
    require(block.number >= blockNumberToBeUsed, "Too early");
    require(block.number <= blockNumberToBeUsed, "Too late");

    uint256 randomNumber = block.prevrandao; // block.difficulty before Solidity 0.8.18

    // reset state before sending ETH (checks-effects-interactions)
    uint256 playedWeiValue = gameWeiValues[msg.sender];
    blockNumbersToBeUsed[msg.sender] = 0;
    gameWeiValues[msg.sender] = 0;

    // win on an even, non-zero random value
    if (randomNumber != 0 && randomNumber % 2 == 0) {
        uint256 winningAmount = playedWeiValue * 2;
        (bool success, ) = msg.sender.call{value: winningAmount}("");
        require(success, "Transfer failed.");
    }
}
To compare the two approaches:
Idea 1: require future block >= n
Pros:
- It allows playing the game at any time after block n.
Cons:
- It gives validators even more influence on the randomness: by censoring the game-playing transaction they can delay it, so that it's included in a later, more favorable block.
Idea 2: require future block == n
Pros:
- It doesn't give validators even more influence.
Cons:
- It allows playing the game only in block n. If you miss it, there's no way to get the value later.
How to use future Prevrandao(n)
Now if the prevrandao opcode could be used to obtain values from past blocks, that would be much better. This is a likely future feature.
So let's assume we had this feature. How could we change the game?
- We still determine the block whose prevrandao value we want to use.
- Then we simply wait for the block to arrive and retrieve it.
This time it doesn't matter if we miss it, because we can access the old prevrandao value.
You can follow the Ethereum Magicians Thread to be updated on any potential future EIPs.
// NOTE: below code is speculation on a possible future design
mapping (address => uint256) public gameWeiValues;
mapping (address => uint256) public blockNumbersToBeUsed;

function playGame() external payable {
    uint256 blockNumberToBeUsed = blockNumbersToBeUsed[msg.sender];

    if (blockNumberToBeUsed == 0) {
        // first run, determine block number to be used
        blockNumbersToBeUsed[msg.sender] = block.number + 128;
        gameWeiValues[msg.sender] = msg.value;
        return;
    }

    require(block.number >= blockNumberToBeUsed, "Too early");

    // hypothetical syntax: read the prevrandao value of a past block
    uint256 randomNumber = block.prevrandao(blockNumberToBeUsed);

    // reset state before sending ETH (checks-effects-interactions)
    uint256 playedWeiValue = gameWeiValues[msg.sender];
    blockNumbersToBeUsed[msg.sender] = 0;
    gameWeiValues[msg.sender] = 0;

    // win on an even, non-zero random value
    if (randomNumber != 0 && randomNumber % 2 == 0) {
        uint256 winningAmount = playedWeiValue * 2;
        (bool success, ) = msg.sender.call{value: winningAmount}("");
        require(success, "Transfer failed.");
    }
}
But keep in mind that everything mentioned in the beginning about biasability still applies. A validator can still cause a re-shuffle.
The real long-term solution would be the addition of VDFs.
Solidity Developer