Without the memory keyword, Solidity tries to declare variables in storage.
Only in methods. I quote the docs here: the Ethereum Virtual Machine has three areas where it can store items — storage, memory and the stack. There are defaults for the storage location depending on which type of variable it concerns: state variables are always in storage; function arguments are always in memory; local variables of struct, array or mapping type reference storage by default; local variables of value type (i.e. uint, bool, address) are stored on the stack.
Do you have any links to the docs that explain this? I would like to read a bit more on how storage works. The FAQ link doesn't work, but if you want to read a similar page I suggest the docs. I read it but still need a beginner explanation of this; so basically, to avoid an expensive save to storage, we should use the memory keyword before a function param?
If Memory is ephemeral then what's the reason for using it? And how can a contract still call those functions and therefore modify memory once it's already deployed? As someone who hasn't used Solidity it seems bizarre that variables wouldn't be by default in memory and persisting them would be the thing that needs to be explicit — Dominic.
Could you add what the difference to calldata is? Let's say we want to modify the top-level state variable inside a function. I showed the difference on a simple function so it can be easily tested in Remix. Since int[] storage myArray is only a pointer to the numbers variable, no space in storage is reserved for myArray itself. What's the gas cost for myArray being assigned to numbers?
Also, myArray is a storage reference, so is this pointer stored in memory or in storage itself? Hi Yilmaz, can you please help me out here. So in simple words, please correct me if I'm wrong: the memory keyword means (1) copy by value. StavAlfi, with the memory keyword you make the storage variable local. Updated the answer — Yilmaz. The two uses are: where a Solidity contract stores data, and how Solidity variables store values.
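The copy-by-value versus reference distinction behind the memory and storage keywords is analogous to copying versus aliasing in other languages. A minimal Python sketch of the two behaviours (illustrative names, not Solidity semantics in detail):

```python
# Aliasing (like a `storage` reference): changes through the local
# name are visible in the original state variable.
numbers = [1, 2, 3]
my_array = numbers          # no copy; both names point at one list
my_array[0] = 99            # numbers[0] is now 99 too

# Copy-by-value (like a `memory` copy): the local copy is independent
# and is discarded when the function ends.
numbers = [1, 2, 3]
my_copy = list(numbers)     # fresh, independent copy
my_copy[0] = 99             # numbers is untouched
```

The Solidity difference is the same in spirit: a storage reference writes through to persistent state, while a memory variable is a throwaway copy.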
One example of an invariant is the totalSupply of a fixed issuance ERC20 token.
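A minimal sketch of checking such an invariant, modeled in Python rather than Solidity (the Token class and names are illustrative):

```python
# Toy ERC20-style model: transfer() moves balances between accounts but
# must never change totalSupply; the final assert encodes the invariant.

class Token:
    def __init__(self, supply, owner):
        self.total_supply = supply
        self.balances = {owner: supply}

    def transfer(self, sender, to, amount):
        assert self.balances.get(sender, 0) >= amount, "insufficient funds"
        supply_before = self.total_supply
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
        # invariant: a fixed-issuance token's supply never changes
        assert self.total_supply == supply_before

token = Token(1000, "alice")
token.transfer("alice", "bob", 400)
print(token.balances["bob"])   # 400
```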
As no functions should modify this invariant, one could add a check to the transfer function that ensures the totalSupply remains unmodified, to verify the function is working as expected. In particular, there is one apparent invariant that may be tempting to use but that can in fact be manipulated by external users, regardless of the rules put in place in the smart contract.
This is the current ether stored in the contract. Often when developers first learn Solidity they have the misconception that a contract can only accept or obtain ether via payable functions. This misconception can lead to contracts that have false assumptions about the ether balance within them which can lead to a range of vulnerabilities.
The smoking gun for this vulnerability is the incorrect use of this.balance. As we will see, incorrect uses of this.balance can lead to serious vulnerabilities of this type. There are two ways in which ether can forcibly be sent to a contract without using a payable function or executing any code on the contract. These are listed below. Any contract is able to implement the selfdestruct(address) function, which removes all bytecode from the contract address and sends all ether stored there to the parameter-specified address. If this specified address is also a contract, no functions (including the fallback) get called.
Therefore, the selfdestruct function can be used to forcibly send ether to any contract regardless of any code that may exist in the contract. This is inclusive of contracts without any payable functions. This means any attacker can create a contract with a selfdestruct function, send ether to it, call selfdestruct(target) and force ether to be sent to a target contract.
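A toy accounting model of why this works: the ether credit happens at the protocol level, so the target's code (payable checks included) never runs. The names here are illustrative:

```python
# Toy model: an address's ether balance is protocol-level state.
# selfdestruct credits the target directly; the target contract's code
# is never executed, so no check can reject the transfer.

balances = {"victim": 0, "attacker_contract": 5}

def selfdestruct(contract, target):
    # protocol-level transfer: no function on `target` is called
    balances[target] += balances[contract]
    balances[contract] = 0

selfdestruct("attacker_contract", "victim")
print(balances["victim"])   # 5 -> the victim could not refuse the ether
```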
Martin Swende has an excellent blog post describing some quirks of the selfdestruct opcode (Quirk 2), along with a description of how client nodes were checking incorrect invariants, which could have led to a rather catastrophic nuking of clients. The second way a contract can obtain ether without using a selfdestruct function or calling any payable functions is to pre-load the contract address with ether.
Contract addresses are deterministic: the address is calculated from the keccak256 (sometimes used synonymously with SHA-3) hash of the address creating the contract and the transaction nonce which creates the contract. This means anyone can calculate what a contract's address will be before it is created, and thus send ether to that address.
When the contract does get created it will have a non-zero ether balance. This contract represents a simple game which would naturally invoke race conditions, whereby players send 0.5-ether quantities. Milestones are denominated in ether. The first to reach a milestone may claim a portion of the ether when the game has ended.
The game ends when the final milestone (10 ether) is reached, and users can then claim their rewards. The issues with the EtherGame contract come from the poor use of this.balance. A mischievous attacker could forcibly send a small amount of ether, let's say 0.1 ether. As all legitimate players can only send 0.5-ether increments, the contract's balance would no longer land exactly on the milestone values. This prevents all the milestone if conditions in the game logic from ever being true.
Even worse, a vengeful attacker who missed a milestone could forcibly send 10 ether (or an equivalent amount of ether that pushes the contract's balance above the finalMileStone), which would lock all rewards in the contract forever. This is because the claimReward function will always revert, due to its require statement checking the balance against the final milestone. This vulnerability typically arises from the misuse of this.balance. Contract logic, when possible, should avoid being dependent on exact values of the balance of the contract, because that balance can be artificially manipulated.
If applying logic based on this.balance, be prepared for the balance to be larger than the total of legitimate deposits. If exact values of deposited ether are required, a self-defined variable should be used that gets incremented in payable functions, to safely track the deposited ether. This variable will not be influenced by the forced ether sent via a selfdestruct call. Here, we have just created a new variable, depositedEther, which keeps track of the known deposited ether, and it is this variable against which we perform our requirements and tests.
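A toy Python model of this accounting; the EtherGame name is from the example above, and the amounts (wei-like integers, 0.5-ether deposits) are illustrative:

```python
# Toy model contrasting the raw address balance with a self-tracked
# depositedEther variable. Force-sent ether (e.g. via selfdestruct)
# raises the raw balance but never touches the tracked deposits.
HALF_ETHER = 5 * 10**17   # 0.5 ether in wei

class EtherGame:
    def __init__(self):
        self.balance = 0           # raw balance: anyone can inflate it
        self.deposited_ether = 0   # only updated in the payable path

    def play(self, value):
        assert value == HALF_ETHER, "players send 0.5 ether"
        self.balance += value
        self.deposited_ether += value

    def force_send(self, value):
        # selfdestruct-style credit: bypasses all contract code
        self.balance += value

game = EtherGame()
game.force_send(10**17)            # attacker force-sends 0.1 ether
for _ in range(20):
    game.play(HALF_ETHER)

# milestone logic keyed on deposited_ether still lands exactly on
# 10 ether, even though the raw balance is now 10.1 ether
print(game.deposited_ether == 10 * 10**18)  # True
print(game.balance == 10 * 10**18)          # False
```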
Notice that we no longer have any reference to this.balance. I'm yet to find an example of this that has been exploited in the wild; however, a few examples of exploitable contracts were given in the Underhanded Solidity Contest. The DELEGATECALL opcode executes external code in the context of the calling contract, preserving the caller's storage, msg.sender and msg.value. This feature enables the implementation of libraries, whereby developers can create reusable code for future contracts. The code in libraries themselves can be secure and vulnerability-free; however, when run in the context of another application, new vulnerabilities can arise.
Let's see a fairly complex example of this, using Fibonacci numbers. Consider the following library, which can generate the Fibonacci sequence and sequences of similar form. This library provides a function which can generate the n-th Fibonacci number in the sequence. It allows users to change the starting number of the sequence (start) and calculate the n-th Fibonacci-like number in this new sequence. This contract allows a participant to withdraw ether from the contract, with the amount of ether being equal to the Fibonacci number corresponding to the participant's withdrawal order; i.e. the first participant to withdraw gets 1 ether, the second also gets 1, the third gets 2, and so on.
There are a number of elements in this contract that may require some explanation. Firstly, there is an interesting-looking variable, fibSig. This is known as the function selector and is put into calldata to specify which function of a smart contract will be called. It is used in the delegatecall inside withdraw to specify that we wish to run the setFibonacci(uint) function.
The second argument in delegatecall is the parameter we are passing to the function. Secondly, we assume that the address for the FibonacciLib library is correctly referenced in the constructor (the section on External Contract Referencing discusses some potential vulnerabilities relating to this kind of contract reference initialisation). Can you spot any error(s) in this contract? If you put this into Remix, fill it with ether and call withdraw, it will likely revert.
You may have noticed that the state variable start is used in both the library and the main calling contract. In the library contract, start is used to specify the beginning of the Fibonacci sequence and is set to 0 , whereas it is set to 3 in the FibonacciBalance contract. You may also have noticed that the fallback function in the FibonacciBalance contract allows all calls to be passed to the library contract, which allows for the setStart function of the library contract to be called also.
Recalling that delegatecall preserves the state of the calling contract, it may seem that this function would allow one to change the state of the start variable in the local FibonacciBalance contract. If so, this would allow one to withdraw more ether, as the resulting calculatedFibNumber is dependent on the start variable (as seen in the library contract).
In actual fact, the setStart function does not and cannot modify the start variable in the FibonacciBalance contract. The underlying vulnerability in this contract is significantly worse than just modifying the start variable. Before discussing the actual issue, let's take a quick detour to understand how state variables (storage variables) actually get stored in contracts. State or storage variables (variables that persist over individual transactions) are placed into slots sequentially as they are introduced in the contract.
There are some complexities here, and I encourage the reader to read Layout of State Variables in Storage for a more thorough understanding. As an example, let's look at the library contract. It has two state variables, start and calculatedFibNumber. The first variable is start; as such, it gets stored in the contract's storage at slot[0] (i.e. the first slot). The second variable, calculatedFibNumber, gets placed in the next available storage slot, slot[1].
If we look at the function setStart, it takes an input and sets start to whatever the input was. This function is therefore setting slot[0] to whatever input we provide in the setStart function. Similarly, the setFibonacci function sets calculatedFibNumber to the result of fibonacci(n).
Again, this is simply setting storage slot[1] to the value of fibonacci(n). Now let's look at the FibonacciBalance contract. Storage slot[0] now corresponds to the fibonacciLibrary address, and slot[1] corresponds to calculatedFibNumber. It is in this incorrect mapping that the vulnerability occurs. delegatecall preserves contract context; this means that code executed via delegatecall will act on the state (i.e. storage) of the calling contract. Now notice that in withdraw we execute a delegatecall on fibonacciLibrary using the fibSig selector. This calls the setFibonacci function which, as we discussed, modifies storage slot[1], which in our current context is calculatedFibNumber.
This is as expected (i.e. after execution, calculatedFibNumber gets adjusted). However, recall that the start variable in the FibonacciLib contract is located in storage slot[0], which is the fibonacciLibrary address in the current contract. This means that the function fibonacci will give an unexpected result. This is because it references start (slot[0]), which in the current calling context is the fibonacciLibrary address (which will often be quite large when interpreted as a uint).
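The slot collision can be simulated in a few lines of Python. This is a toy model of delegatecall semantics (storage as a list of slots, library code run against the caller's slots), not real EVM code; the addresses are illustrative:

```python
# Storage is modelled as a list of slots; delegatecall means the
# library's code runs against the CALLER's slot list.

def lib_set_start(slots, value):       # FibonacciLib.setStart
    slots[0] = value                   # library layout: slot 0 = start

def lib_set_fibonacci(slots, n):       # FibonacciLib.setFibonacci
    slots[1] = fib(slots[0], n)        # library layout: slot 1 = result

def fib(start, n):
    a, b = start, start + 1
    for _ in range(n):
        a, b = b, a + b
    return a

LIB_ADDRESS = 0xDEADBEEF
# FibonacciBalance layout: slot 0 = fibonacciLibrary,
#                          slot 1 = calculatedFibNumber
caller_slots = [LIB_ADDRESS, 0]

# delegatecall(setFibonacci, 1): the library reads "start" from slot 0,
# which in THIS context is the library's address
lib_set_fibonacci(caller_slots, 1)
print(caller_slots[1] == LIB_ADDRESS + 1)  # True: huge, wrong "Fibonacci" number

# and delegatecall(setStart, x) overwrites the library address itself
lib_set_start(caller_slots, 0xBAD)
print(hex(caller_slots[0]))                # 0xbad: library reference hijacked
```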
Thus it is likely that the withdraw function will revert, as the contract will not contain uint(fibonacciLibrary) amount of ether, which is what calculatedFibNumber will return. Even worse, the FibonacciBalance contract allows users to call all of the fibonacciLibrary functions via its fallback function.
As we discussed earlier, this includes the setStart function. We discussed that this function allows anyone to modify or set storage slot[0]. In this case, storage slot[0] is the fibonacciLibrary address. Therefore, an attacker could create a malicious contract, convert its address to a uint, and call setStart with that value as its parameter. This will change fibonacciLibrary to the address of the attack contract. Then, whenever a user calls withdraw or the fallback function, the malicious contract will run, and it can steal the entire balance of the contract because we've modified the actual address for fibonacciLibrary.
An example of such an attack contract simply declares two storage variables (mirroring slots 0 and 1) and, in its fallback function, overwrites the second of these. Notice that such an attack contract modifies the calculatedFibNumber by changing storage slot[1]. In principle, an attacker could modify any other storage slots they choose, to perform all kinds of attacks on this contract.
I encourage all readers to put these contracts into Remix and experiment with different attack contracts and state changes through these delegatecall functions. It is also important to notice that when we say that delegatecall is state-preserving, we are not talking about the variable names of the contract, but rather the actual storage slots to which those names point. As you can see from this example, a simple mistake can lead to an attacker hijacking the entire contract and its ether.
Solidity provides the library keyword for implementing library contracts see the Solidity Docs for further details. This ensures the library contract is stateless and non-self-destructable. Forcing libraries to be stateless mitigates the complexities of storage context demonstrated in this section. Stateless libraries also prevent attacks whereby attackers modify the state of the library directly in order to affect the contracts that depend on the library's code.
The Second Parity Multisig Wallet hack is an example of how the context of well-written library code can be exploited if run in its non-intended context. There are a number of good explanations of this hack, such as this overview: Parity MultiSig Hacked. To add to these references, let's explore the contracts that were exploited. The library and wallet contract can be found on the parity github here. Let's look at the relevant aspects of this contract.
There are two contracts of interest contained here, the library contract and the wallet contract. Notice that the Wallet contract essentially passes all calls to the WalletLibrary contract via a delegate call. The intended operation of these contracts was to have a simple low-cost deployable Wallet contract whose code base and main functionality was in the WalletLibrary contract.
Unfortunately, the WalletLibrary contract is itself a contract and maintains its own state. Can you see why this might be an issue? It is possible to send calls to the WalletLibrary contract itself. Specifically, the WalletLibrary contract could be initialised and become owned. In fact, a user did this, calling the initWallet function on the deployed WalletLibrary contract and becoming an owner of the library contract.
The same user subsequently called the kill function. Because the user was an owner of the library contract, the modifier passed and the library contract self-destructed. As all Wallet contracts in existence refer to this library contract and contain no method to change this reference, all of their functionality, including the ability to withdraw ether, was lost along with the WalletLibrary contract.
More directly, all ether in all Parity multi-sig wallets of this type instantly became lost or permanently unrecoverable. Functions in Solidity have visibility specifiers which dictate how functions are allowed to be called. The visibility determines whether a function can be called externally by users, by other derived contracts, only internally, or only externally. There are four visibility specifiers, which are described in detail in the Solidity Docs. Functions default to public, allowing users to call them externally.
Incorrect use of visibility specifiers can lead to some devastating vulnerabilities in smart contracts, as will be discussed in this section. The default visibility for functions is public; therefore, functions that do not specify any visibility will be callable by external users.
The issue comes when developers mistakenly ignore visibility specifiers on functions which should be private or only callable within the contract itself. This simple contract is designed to act as an address guessing bounty game. To win the balance of the contract, a user must generate an Ethereum address whose last 8 hex characters are 0.
Once obtained, they can call the WithdrawWinnings function to obtain their bounty. Unfortunately, the visibility of the functions has not been specified. It is good practice to always specify the visibility of all functions in a contract, even if they are intentionally public. Recent versions of Solidity show warnings during compilation for functions that have no explicit visibility set, to help encourage this practice.
A good recap of exactly how this was done is given by Haseeb Qureshi in this post. Essentially, the multi-sig wallet which can be found here is constructed from a base Wallet contract which calls a library contract containing the core functionality as was described in Real-World Example: Parity Multisig Second Hack.
The library contract contains the code to initialise the wallet, as can be seen from the following snippet. Notice that neither of the functions has explicitly specified a visibility. Both functions default to public. The initWallet function is called in the wallet's constructor and sets the owners for the multi-sig wallet, as can be seen in the initMultiowned function. Because these functions were accidentally left public, an attacker was able to call them on deployed contracts, resetting the ownership to the attacker's address.
All transactions on the Ethereum blockchain are deterministic state transition operations, meaning that every transaction modifies the global state of the Ethereum ecosystem in a calculable way with no uncertainty. This ultimately means that inside the blockchain ecosystem there is no source of entropy or randomness: there is no rand() function in Solidity. Achieving decentralised entropy (randomness) is a well-established problem, and many ideas have been proposed to address it (see for example RandDAO, or using a chain of hashes, as described by Vitalik in this post).
Some of the first contracts built on the Ethereum platform were based around gambling. Fundamentally, gambling requires uncertainty (something to bet on), which makes building a gambling system on the blockchain (a deterministic system) rather difficult. It is clear that the uncertainty must come from a source external to the blockchain.
This is possible for bets amongst peers (see for example the commit-reveal technique); however, it is significantly more difficult if you want to implement a contract to act as the house, like in blackjack or roulette. A common pitfall is to use future block variables, such as hashes, timestamps, block number or gas limit. The issue with these is that they are controlled by the miner who mines the block, and as such are not truly random. Consider, for example, a roulette smart contract with logic that returns a black number if the next block hash ends in an even number.
Using past or present variables can be even more devastating, as Martin Swende demonstrates in his excellent blog post. Furthermore, using solely block variables means that the pseudo-random number will be the same for all transactions in a block, so an attacker can multiply their wins by doing many transactions within a block (should there be a maximum bet).
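The same-outcome-per-block problem can be illustrated with a toy sketch (SHA-256 stands in for the contract's hash of block variables; the block values are made up):

```python
# Toy illustration: a "random" number derived only from block data is
# identical for every transaction in that block, so once one bet in a
# block is known to win, an attacker can batch many more bets.

import hashlib

def block_prng(block_number, block_timestamp):
    # stand-in for hashing block.number and block.timestamp on-chain
    seed = f"{block_number}:{block_timestamp}".encode()
    return int(hashlib.sha256(seed).hexdigest(), 16) % 2  # 0 red, 1 black

block = (1234567, 1609459200)
outcomes = [block_prng(*block) for _ in range(10)]  # ten bets, one block
print(len(set(outcomes)))  # 1 -> every bet in the block sees the same outcome
```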
The source of entropy (randomness) must be external to the blockchain. This can be done amongst peers with systems such as commit-reveal, or via changing the trust model to a group of participants, such as in RandDAO. This can also be done via a centralised entity which acts as a randomness oracle. Block variables (in general; there are some exceptions) should not be used to source entropy, as they can be manipulated by miners.
Arseny Reutov wrote a blog post after he analysed live smart contracts which were using some sort of pseudo-random number generator (PRNG); he found 43 contracts which could be exploited. One of the benefits of the Ethereum global computer is the ability to re-use code and interact with contracts already deployed on the network.
As a result, a large number of contracts reference external contracts and, in general operation, use external message calls to interact with them. These external message calls can mask malicious actors' intentions in some non-obvious ways, which we will discuss. In Solidity, any address can be cast as a contract, regardless of whether the code at the address represents the contract type being cast.
This can be deceiving, especially when the author of the contract is trying to hide malicious code. Let us illustrate this with an example. Consider a piece of code which rudimentarily implements the Rot13 cipher. This code simply takes a string (letters a-z, without validation) and encrypts it by shifting each character 13 places to the right, wrapping around 'z'; i.e. 'a' shifts to 'n' and 'x' shifts to 'k'. The assembly in here is not important, so don't worry if it doesn't make any sense at this stage.
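For reference, the transformation the library implements is plain Rot13; a minimal Python equivalent (lowercase letters only, matching the description above):

```python
def rot13(text):
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            # shift 13 places to the right, wrapping around 'z'
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        else:
            out.append(ch)  # anything else passes through unchanged
    return "".join(out)

print(rot13("hello"))         # uryyb
print(rot13(rot13("hello")))  # hello -- Rot13 is its own inverse
```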
The issue with this contract is that the encryptionLibrary address is not public or constant. Thus the deployer of the contract could have given, in the constructor, an address that points to a malicious contract instead. Again, there is no need to understand the assembly in such a contract. The deployer could equally have linked a second malicious contract.
If the address of either of these contracts were given in the constructor, the encryptPrivateData function would simply produce an event which prints the unencrypted private data. Although in this example a library-like contract was set in the constructor, it is often the case that a privileged user such as an owner can change library contract addresses.
If a linked contract doesn't contain the function being called, the fallback function will execute. For example, with the line encryptionLibrary.rot13Encrypt(...), if the contract at encryptionLibrary did not implement rot13Encrypt, its fallback function would run instead. Thus, if users can alter contract libraries, they can in principle get other users to unknowingly run arbitrary code. Note: don't use encryption contracts such as these, as the input parameters to smart contracts are visible on the blockchain. Also, the Rot13 cipher is not a recommended encryption technique :p
As demonstrated above, vulnerability-free contracts can in some cases be deployed in such a way that they behave maliciously. An auditor could publicly verify a contract and have its owner deploy it in a malicious way, resulting in a publicly audited contract which has vulnerabilities or malicious intent. One technique to prevent this is to use the new keyword to create contracts. In the example above, the constructor could be written as: encryptionLibrary = new Rot13Encryption();
This way an instance of the referenced contract is created at deployment time, and the deployer cannot replace the Rot13Encryption contract with anything else without modifying the smart contract. In general, code that calls external contracts should always be looked at carefully. As a developer, when defining external contracts, it can be a good idea to make the contract addresses public (which is not the case in the honey-pot example given below) to allow users to easily examine which code is being referenced by the contract.
Conversely, if a contract has a private variable contract address it can be a sign of someone behaving maliciously as shown in the real-world example. A number of recent honey pots have been released on the mainnet.
These contracts try to outsmart Ethereum hackers who try to exploit the contracts, but who in turn end up losing ether to the contract they expected to exploit. One example employs the above attack by replacing an expected contract with a malicious one in the constructor. The code can be found here. This post by one Reddit user explains how they lost 1 ether to this contract by trying to exploit the re-entrancy bug they expected to be present in it.
This attack is not specifically performed on Solidity contracts themselves, but on third-party applications that may interact with them. I add this attack for completeness and to be aware of how parameters can be manipulated in contracts. When passing parameters to a smart contract, the parameters are encoded according to the ABI specification. It is possible to send encoded parameters that are shorter than the expected parameter length (for example, sending an address that is only 38 hex chars (19 bytes) instead of the standard 40 hex chars (20 bytes)).
In such a scenario, the EVM will pad 0's to the end of the encoded parameters to make up the expected length. This becomes an issue when third-party applications do not validate inputs. The clearest example is an exchange which doesn't verify the address of an ERC20 token when a user requests a withdrawal. Consider the standard ERC20 transfer function interface, transfer(address _to, uint256 _value), noting the order of the parameters. Now consider an exchange holding a large amount of a token (let's say REP), and a user who wishes to withdraw their share of tokens.
The user would submit their address, 0xdeaddeaddeaddeaddeaddeaddeaddeaddeaddead, and the number of tokens, 100. The exchange would encode these parameters in the order specified by the transfer function, i.e. address then tokens. The encoded result would be a9059cbb000000000000000000000000deaddeaddeaddeaddeaddeaddeaddeaddeaddead0000000000000000000000000000000000000000000000056bc75e2d63100000. Notice that the hex 56bc75e2d63100000 at the end corresponds to 100 tokens with 18 decimal places, as specified by the REP token contract. Ok, so now let's look at what happens if we were to send an address that was missing 1 byte (2 hex digits). Specifically, let's say an attacker sends 0xdeaddeaddeaddeaddeaddeaddeaddeaddeadde as an address (missing the last two digits) and the same 100 tokens to withdraw. If the exchange doesn't validate this input, it would get encoded as a9059cbb000000000000000000000000deaddeaddeaddeaddeaddeaddeaddeaddeadde0000000000000000000000000000000000000000000000056bc75e2d6310000000. The difference is subtle: 00 has been padded to the end of the encoding, to make up for the short address that was sent. When this gets sent to the smart contract, the address parameter will read as 0xdeaddeaddeaddeaddeaddeaddeaddeaddeadde00 and the value will be read as 56bc75e2d6310000000 (notice the two extra 0's). This value is now 25,600 tokens (the value has been multiplied by 256). In this example, if the exchange held this many tokens, the user would withdraw 25,600 tokens (whilst the exchange thinks the user is only withdrawing 100) to the modified address.
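This padding behaviour can be reproduced with a short sketch. It is a toy model of a naive encoder plus the EVM's zero-padding, not a real ABI library; 0xa9059cbb is the well-known selector for transfer(address,uint256), and the addresses are the illustrative ones above:

```python
def naive_encode(addr_hex, amount):
    # a careless exchange splices the user-supplied address straight
    # into the 32-byte slot, assuming it is exactly 40 hex chars
    return "a9059cbb" + "0" * 24 + addr_hex + format(amount, "064x")

def evm_decode(data):
    # the EVM right-pads short calldata with zeros before decoding
    data = data.ljust(8 + 64 + 64, "0")
    addr = data[8:72][-40:]          # last 20 bytes of the first slot
    amount = int(data[72:136], 16)   # second slot as an integer
    return addr, amount

tokens = 100 * 10**18
good = naive_encode("deaddeaddeaddeaddeaddeaddeaddeaddeaddead", tokens)
evil = naive_encode("deaddeaddeaddeaddeaddeaddeaddeaddeadde", tokens)  # 19 bytes

good_addr, good_amount = evm_decode(good)
evil_addr, evil_amount = evm_decode(evil)
print(evil_addr)                   # deaddeaddeaddeaddeaddeaddeaddeaddeadde00
print(evil_amount // good_amount)  # 256
```

The missing byte shifts the amount one byte to the left, multiplying it by 256, exactly as described above.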
Obviously the attacker won't possess the private key for the modified address in this example, but if the attacker were to generate any address ending in 0's (which can be easily brute-forced) and used this generated address, they could steal tokens from the unsuspecting exchange. I suppose it is obvious to say that validating all inputs before sending them to the blockchain will prevent these kinds of attacks. It should also be noted that parameter ordering plays an important role here.
As padding only occurs at the end, careful ordering of parameters in the smart contract can potentially mitigate some forms of this attack. There are a number of ways of performing external calls in Solidity. Sending ether to external accounts is commonly performed via the transfer method. However, the send function can also be used, and, for more versatile external calls, the CALL opcode can be directly employed in Solidity. The call and send functions return a boolean indicating whether the call succeeded or failed.
Thus these functions have a simple caveat, in that the transaction that executes them will not revert if the external call (initialised by call or send) fails; rather, call or send will simply return false. A common pitfall arises when the return value is not checked and the developer expects a revert to occur. This contract represents a Lotto-like contract, where a winner receives winAmount of ether, which typically leaves a little left over for anyone to withdraw.
The bug exists where a send is used without checking its return value. In this trivial example, a winner whose transaction fails (either by running out of gas or by being a contract that intentionally throws in its fallback function) allows payedOut to be set to true regardless of whether ether was sent or not.
In this case, the public can withdraw the winner's winnings via the withdrawLeftOver function. Whenever possible, use the transfer function rather than send, as transfer will revert if the external transaction reverts. If send is required, always ensure to check the return value. An even more robust recommendation is to adopt a withdrawal pattern.
In this solution, each user must call an isolated withdraw function which handles the sending of ether out of the contract, and which therefore deals with the consequences of failed send transactions. The idea is to logically isolate the external send functionality from the rest of the code base, and place the burden of a potentially failed transaction on the end user who is calling the withdraw function.
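A toy Python model of the withdrawal (pull-payment) pattern; the class and the injected send function are illustrative stand-ins for the contract and the external call:

```python
class Lottery:
    """Pull-payment model: game logic only updates internal accounting;
    each winner pulls their own funds via withdraw()."""

    def __init__(self):
        self.pending = {}                  # address -> withdrawable amount

    def declare_winner(self, addr, amount):
        # no external call here, so a hostile recipient cannot block
        # or corrupt the game logic itself
        self.pending[addr] = self.pending.get(addr, 0) + amount

    def withdraw(self, addr, send):
        amount = self.pending.get(addr, 0)
        if amount == 0:
            return False
        self.pending[addr] = 0             # zero BEFORE the external call
        if not send(addr, amount):         # if the transfer fails...
            self.pending[addr] = amount    # ...restore; the user retries
            return False
        return True

lottery = Lottery()
lottery.declare_winner("winner", 10)

failing_send = lambda addr, amount: False  # e.g. a fallback that throws
print(lottery.withdraw("winner", failing_send))  # False
print(lottery.pending["winner"])                 # 10 -> nothing is lost
```

A failed send only affects the caller's own withdrawal, which they can simply retry; it can no longer leave the contract believing a payout happened.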
Etherpot was a smart contract lottery, not too dissimilar to the example contract mentioned above. The Solidity code for Etherpot can be found here: lotto.sol. The primary downfall of this contract was an incorrect use of block hashes (only the last 256 block hashes are usable; see Aakil Fernandes's post about how Etherpot failed to implement this correctly).
However, this contract also suffered from an unchecked call value. Notice the cash function in lotto.sol: the send function's return value is not checked, and the following line then sets a boolean indicating the winner has been paid. This bug can allow a state where the winner does not receive their ether, but the state of the contract indicates that the winner has already been paid.
A more serious version of this bug occurred in the King of the Ether. An excellent post-mortem of this contract has been written which details how an unchecked failed send could be used to attack the contract. The combination of external calls to other contracts and the multi-user nature of the underlying blockchain gives rise to a variety of potential Solidity pitfalls whereby users race code execution to obtain unexpected states.
Re-entrancy is one example of such a race condition. In this section we will talk more generally about different kinds of race conditions that can occur on the Ethereum blockchain. As with most blockchains, Ethereum nodes pool transactions and form them into blocks. The miner who solves the block also chooses which transactions from the pool will be included in the block; this choice is typically ordered by the gasPrice of a transaction.
Herein lies a potential attack vector. An attacker can watch the transaction pool for transactions which may contain solutions to problems, modify or revoke the attacker's permissions, or change a state in a contract in a way that is undesirable for the attacker. The attacker can then take the data from such a transaction and create a transaction of their own with a higher gasPrice, getting their transaction included in a block before the original.
Let's see how this could work with a simple example. Consider the contract FindThisHash. Imagine this contract contains ether. The user who can find the pre-image of the sha3 hash 0xb5b5b97fafdeec9b41f74dfb6c38ff9a3ecd7f44dbee0a can submit the solution and retrieve the ether.
Let's say one user figures out the solution is Ethereum!. They call solve with Ethereum! as the argument. Unfortunately, an attacker has been clever enough to watch the transaction pool for anyone submitting a solution. They see this solution, check its validity, and then submit an equivalent transaction with a much higher gasPrice than the original transaction. The miner who solves the block will likely give the attacker preference due to the higher gasPrice, and accept their transaction before the original solver's.
The attacker will take the ether, and the user who solved the problem will get nothing, as there is no ether left in the contract. A more realistic problem arises in the design of the future Casper implementation. The Casper proof-of-stake contracts invoke slashing conditions, where users who notice validators double-voting or misbehaving are incentivised to submit proof that they have done so. The validator is then punished and the user rewarded. In such a scenario, it is expected that miners and users will front-run all such submissions of proof, and this issue must be addressed before the final release.
There are two classes of actors who can perform these kinds of front-running attacks: users who modify the gasPrice of their transactions, and miners themselves, who can re-order the transactions in a block however they see fit. A contract that is vulnerable to the first class (users) is significantly worse off than one vulnerable to the second (miners), as miners can only perform the attack when they solve a block, which is unlikely for any individual miner targeting a specific block.
Here I'll list a few mitigation measures, with relation to which class of attackers they may prevent. One method is to create logic in the contract that places an upper bound on the gasPrice. This prevents users from increasing their gasPrice beyond the upper bound to get preferential transaction ordering. This preventative measure only mitigates the first class of attackers (arbitrary users).
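As a sketch of this idea (the contract name, the specific bound, and the modifier are all hypothetical):

```solidity
pragma solidity ^0.4.24;

// Sketch: cap tx.gasprice so no user can outbid another for ordering.
// Note that miners can still reorder transactions within their own block.
contract GasPriceCapped {
    uint public constant MAX_GAS_PRICE = 50 * 10**9; // 50 gwei, arbitrary

    modifier boundedGasPrice() {
        require(tx.gasprice <= MAX_GAS_PRICE);
        _;
    }

    function solve(string solution) public boundedGasPrice {
        // ... verify the solution and pay out, as before ...
    }
}
```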
Miners in this scenario can still attack the contract, as they can order the transactions in their block however they like, regardless of gas price. A more robust method is to use a commit-reveal scheme whenever possible. Such a scheme dictates that users send transactions with hidden information (typically a hash). After the transaction has been included in a block, the user sends a second transaction revealing the data (the reveal phase). This method prevents both miners and users from front-running transactions, as they cannot determine the contents of the transaction.
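A minimal commit-reveal sketch (the names and the salted, sender-bound commitment format are assumptions, not the scheme of any particular contract):

```solidity
pragma solidity ^0.4.24;

// Two-phase commit-reveal. The commitment binds msg.sender, so a
// front-runner who copies a reveal transaction cannot claim it.
contract CommitReveal {
    mapping(address => bytes32) public commitments;

    // Phase 1: submit keccak256(solution, committer, salt).
    function commit(bytes32 commitment) public {
        commitments[msg.sender] = commitment;
    }

    // Phase 2: once the commit is mined, reveal the underlying data.
    function reveal(string solution, bytes32 salt) public {
        require(commitments[msg.sender] ==
                keccak256(abi.encodePacked(solution, msg.sender, salt)));
        // ... verify the solution itself and pay msg.sender ...
    }
}
```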
This method, however, cannot conceal the transaction value, which in some cases is the valuable information that needs to be hidden. The ENS smart contract allowed users to send transactions whose committed data included the amount of ether they were willing to spend. Users could then send transactions of arbitrary value, and during the reveal phase they were refunded the difference between the amount sent in the transaction and the amount they were willing to spend. An efficient implementation of this idea requires the CREATE2 opcode, which currently hasn't been adopted but seems likely in upcoming hard forks.
The ERC20 standard is quite well-known for building tokens on Ethereum. This standard has a potential front-running vulnerability which comes about due to the approve function. A good explanation of this vulnerability can be found here. The approve function allows a user to permit other users to transfer tokens on their behalf. The vulnerability arises in the scenario where a user, Alice, approves her friend, Bob, to spend some of her tokens.
Alice later decides that she wants to revoke Bob's approval to spend tokens, so she creates a transaction that sets Bob's allocation to 50 tokens. Bob, who has been carefully watching the chain, sees this transaction and builds a transaction of his own spending the tokens. He puts a higher gasPrice on his transaction than Alice's and gets his transaction prioritised over hers. Some implementations of approve would allow Bob to transfer his original allowance, and then, when Alice's transaction gets committed, reset Bob's approval to 50 tokens, in effect giving Bob access to more tokens than Alice ever intended.
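One commonly suggested mitigation, sketched below, is to only allow an allowance to change to or from zero, so the old allowance cannot be spent and then topped up by the new one in a single sequence. This sketch is illustrative and is not part of the ERC20 standard itself.

```solidity
pragma solidity ^0.4.24;

// Sketch of a front-running-resistant approve. Alice must first reset
// Bob's allowance to zero (and can check whether a spend snuck in)
// before setting the new value.
contract Token {
    mapping(address => mapping(address => uint)) public allowance;

    function approve(address spender, uint value) public returns (bool) {
        require(value == 0 || allowance[msg.sender][spender] == 0);
        allowance[msg.sender][spender] = value;
        return true;
    }
}
```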
Mitigation strategies for this attack are given in the document linked above. Another prominent, real-world example is Bancor. Ivan Bogatty and his team documented a profitable attack on the initial Bancor implementation; his blog post and DevCon3 talk discuss in detail how this was done. Essentially, prices of tokens are determined based on transaction value, so users can watch the transaction pool for Bancor transactions and front-run them to profit from the price differences.
This attack has been addressed by the Bancor team.

The next category of attacks, denial of service (DOS), is very broad, but fundamentally consists of attacks where users can leave a contract inoperable for a small period of time, or in some cases, permanently. This can trap ether in these contracts forever, as was the case with the Second Parity MultiSig hack. There are various ways a contract can become inoperable.
Here I will only highlight some potentially less-obvious, blockchain-nuanced Solidity coding patterns that can allow attackers to perform DOS attacks.

External calls without gas stipends - It may be the case that you wish to make an external call to an unknown contract and continue processing the transaction regardless of whether that call fails. Let us consider a simple example, where we have a contract wallet that slowly trickles out ether when the withdraw function is called.
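A hedged reconstruction of such a wallet follows; the payout amounts, time interval, and setter are assumptions made for illustration.

```solidity
pragma solidity ^0.4.24;

// Reconstruction of the trickle-wallet idea described above: pay the
// partner via a low-level CALL whose result is deliberately ignored.
contract TrickleWallet {
    address public partner; // external payout address
    address public owner = msg.sender;
    uint public timeLastWithdrawn;

    function() public payable {} // accept deposits

    function setPartner(address _partner) public {
        partner = _partner;
    }

    function withdraw() public {
        require(now > timeLastWithdrawn + 1 days); // trickle: once a day
        timeLastWithdrawn = now;
        // Unchecked CALL forwarding (nearly) all remaining gas -- a
        // malicious partner contract can burn it all.
        partner.call.value(address(this).balance / 100)();
        owner.transfer(address(this).balance / 100);
    }
}
```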
The reason the CALL opcode is used is to ensure that the owner still gets paid even if the external call reverts. The issue is that the transaction will send all of its gas (in reality, only most of the transaction's gas is sent; some is left to finish processing the call) to the external call.
If the partner were malicious, they could create a contract that consumes all the gas, forcing every transaction to withdraw to fail due to running out of gas. If a withdrawal partner decided they didn't like the owner of the contract, they could set the partner address to such a contract and lock all the funds in the TrickleWallet contract forever. To prevent such DOS attack vectors, ensure a gas stipend is specified in the external call, to limit the amount of gas that the call can use.
In our example, we could remedy this attack by adding a gas stipend to the external call. This modification allows only 50,000 gas to be spent on the external call. The owner may set a transaction gas limit larger than this, in order to have their transaction complete regardless of how much gas the external call uses.
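Concretely, the stipended call might look something like this (the 50,000 figure and the one-percent payout are illustrative):

```solidity
// Forward a fixed gas stipend instead of all remaining gas, so a
// malicious callee cannot make withdraw() run out of gas.
partner.call.gas(50000).value(address(this).balance / 100)();
```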
Looping through externally manipulated mappings or arrays - In my adventures I've seen various forms of this kind of pattern. Typically it appears in scenarios where an owner wishes to distribute tokens amongst their investors with a distribute-like function, as can be seen in the example contract. Notice that the loop in this contract runs over an array which can be artificially inflated.
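A sketch of a contract exhibiting this pattern (the names and the tokens-per-wei rate are illustrative):

```solidity
pragma solidity ^0.4.24;

// The owner distributes tokens by looping over every investor.
contract DistributeTokens {
    address public owner = msg.sender;
    address[] public investors;
    uint[] public investorTokens;

    function invest() public payable {
        investors.push(msg.sender);         // anyone can grow this array
        investorTokens.push(msg.value * 5); // 5 tokens per wei, arbitrary
    }

    function distribute() public {
        require(msg.sender == owner);
        // Once this loop's gas cost exceeds the block gas limit,
        // distribute() can never succeed again.
        for (uint i = 0; i < investors.length; i++) {
            transferToken(investors[i], investorTokens[i]);
        }
    }

    function transferToken(address to, uint amount) internal {
        // ... token accounting elided ...
    }
}
```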
An attacker can create many user accounts, making the investor array large. In principle this can be done such that the gas required to execute the for loop exceeds the block gas limit, essentially making the distribute function inoperable.

Owner operations - Another common pattern is where owners have specific privileges in contracts and must perform some task in order for the contract to proceed to the next state.
One example would be an ICO contract that requires the owner to finalize the contract, which then allows tokens to be transferable. If the privileged user loses their private keys or becomes inactive, the entire token contract becomes inoperable: if the owner cannot call finalize, no tokens can ever be transferred, and the operation of the entire token contract hinges on a single address.

Progressing state based on external calls - Contracts are sometimes written such that progressing to a new state requires sending ether to an address, or waiting for some input from an external source.
These patterns can lead to DOS attacks when the external call fails or is prevented for external reasons. In the example of sending ether, a user can create a contract which does not accept ether. If a contract requires ether to be withdrawn in order to progress to a new state (consider a time-locking contract that requires all ether to be withdrawn before being usable again), the contract will never reach the new state, as ether can never be sent to the user's contract that does not accept it.
In the first example, contracts should not loop through data structures that can be artificially manipulated by external users. A withdrawal pattern is recommended, whereby each investor calls a withdraw function to claim tokens independently. In the second example, a privileged user was required to change the state of the contract.
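The recommended pull pattern might be sketched as follows. This is a fragment: tokensOwed and transferToken stand in for the bookkeeping of the surrounding contract.

```solidity
// Each investor claims independently, so no single transaction has to
// iterate over an unbounded, externally growable array.
mapping(address => uint) public tokensOwed;

function withdrawTokens() public {
    uint amount = tokensOwed[msg.sender];
    tokensOwed[msg.sender] = 0; // zero before transferring (re-entrancy safe)
    transferToken(msg.sender, amount);
}
```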
In such examples, wherever possible, a fail-safe can be used in the event that the owner becomes incapacitated. One solution is to set up the owner as a multisig contract. Another solution is to use a timelock: the require statement could include a time-based mechanism, such as require(msg.sender == owner || now > unlockTime), which allows anyone to finalise after some period specified by unlockTime.
This kind of mitigation technique can also be used in the third example. If external calls are required to progress to a new state, account for their possible failure, and potentially add a time-based state progression in the event that the desired call never comes. Note: of course there are centralised alternatives to these suggestions, where one can add a maintenanceUser who can come along and fix problems with DOS-based attack vectors if need be.
Typically these kinds of contracts contain trust issues over the power of such an entity, but that is not a conversation for this section.

GovernMental was an old Ponzi scheme that accumulated quite a large amount of ether. Unfortunately, it was susceptible to the DOS vulnerabilities mentioned in this section. This Reddit post describes how the contract required the deletion of a large mapping in order to withdraw the ether.
The deletion of this mapping had a gas cost that exceeded the block gas limit at the time, and thus it was not possible to withdraw the ether.
- unsigned integers
- fixed-size byte arrays (bytes1 to bytes32)
- fixed-point numbers (not yet fully supported)