Smart contract security audit: tips & tricks

Smart contracts occupy a separate niche in software development. They are small, immutable, visible to everyone, run on decentralised nodes and, on top of that, transfer user funds.

The smart contract ecosystem is evolving rapidly, gaining new development tools, practices, and vulnerabilities. The latter often cost a lot, as security weaknesses in smart contracts result in immediate financial losses. That’s why the field of smart contract security also evolves rapidly.

In many cases, smart contracts cannot be easily updated after deployment. So, they should be analysed and checked in every way before they land on the blockchain—to mitigate possible exploits and provide quick response mechanisms for potential threats.

Things to pay attention to when performing a smart contract security audit.

We focus on the Tezos network in this post: LIGO language, liquid proof of stake (LPoS), gas issues, and, in general, the “Tezos way of doing things”. Some blockchains share similar approaches, while others (like ETH or SOL) face unique threats not covered here.

The security audit of smart contracts differs from auditing “traditional” software, and in this post we cover the relevant tactics.


  1. What are smart contracts
  2. Smart contract audit process
  3. Smart contract specific attacks
  4. Summary

What are smart contracts #

In simple words, a smart contract is code stored on a blockchain. Let’s have a deeper look.

We can think of smart contracts as state machines. A smart contract has storage, or state, which is a collection of some data fields. A user can invoke the contract by providing specific parameters. The contract executes the code and either fails or returns a new state (storage with updated data fields). What exactly is stored and accepted by the contract is determined by its source code.
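
To make this concrete, here is a minimal TypeScript sketch (an illustration only, not tied to any particular chain; the storage and parameter shapes are invented) that models a contract as a pure function from a parameter and the current storage to a list of operations and a new storage:

// A contract modelled as a state machine (illustration only).
type Storage = { counter: number };
type Parameter = { kind: "increment"; by: number } | { kind: "reset" };
type Operation = { target: string; amount: number }; // e.g. a transfer to another address

// The contract: (parameter, storage) -> (emitted operations, new storage).
function main(param: Parameter, storage: Storage): [Operation[], Storage] {
  switch (param.kind) {
    case "increment":
      if (param.by <= 0) throw new Error("invalid argument"); // a failed call fails the whole operation
      return [[], { counter: storage.counter + param.by }];
    case "reset":
      return [[], { counter: 0 }];
  }
}

// Every validating node runs the same function on the same inputs
// and must obtain the same new storage.
const [operations, newStorage] = main({ kind: "increment", by: 2 }, { counter: 40 });
console.log(operations, newStorage); // [] { counter: 42 }

This is essentially the shape a Tezos entrypoint has: it receives the parameter and the current storage, and returns the emitted operations together with the updated storage.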


Smart contract as a state machine on a blockchain.

In Tezos, invocations and parameter passing are performed with transactions or, more generally, operations. To call the contract, a user creates a regular transaction (but with arguments) to the contract’s address. Then the transaction goes into the transaction pool.

Bakers (called “miners” in many other blockchains) choose transactions from the pool to create the next block. If a transaction is a contract invocation, the baker executes the code, obtains the new storage, and embeds it into the block. When the block is baked, other nodes execute the same contract with the same parameters and compare the resulting storage with the one embedded in the block to validate the operation.


Execution and verification of smart contracts on a blockchain network.

Interaction with other contracts #

Besides the storage, the contract can generate a list of operations that may contain calls to other contracts, which, in turn, can create new operations. In Tezos, these operations are collected into a queue. This drastically differs from Ethereum’s stack-based approach. The queue-based design makes reentrancy attacks hard to conduct, as we will discuss later.


The order of Tezos smart contract calls. Note that C, not D, is executed after B.

If one of the contracts fails, the whole operation fails. In this way, contract executions are atomic.
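
To illustrate the difference, here is a toy TypeScript sketch (contract names are hypothetical and mirror the figure above: A calls B and C, and B calls D) comparing queue-based and stack-based execution orders:

// Toy model: each contract call may emit further calls.
const emits: Record<string, string[]> = { A: ["B", "C"], B: ["D"], C: [], D: [] };

// Tezos-style: emitted operations go to the back of a queue (breadth-first).
function runQueue(entry: string): string[] {
  const order: string[] = [];
  const queue = [entry];
  while (queue.length > 0) {
    const call = queue.shift()!;
    order.push(call);
    queue.push(...emits[call]); // children wait until the current level is processed
  }
  return order;
}

// Ethereum-style: each nested call runs immediately, before its siblings (depth-first).
function runStack(entry: string): string[] {
  const order = [entry];
  for (const child of emits[entry]) order.push(...runStack(child));
  return order;
}

console.log(runQueue("A")); // [ 'A', 'B', 'C', 'D' ]  (C runs right after B, as in the figure)
console.log(runStack("A")); // [ 'A', 'B', 'D', 'C' ]  (D runs right after B)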

Accounts #

On Tezos, you can have implicit or originated accounts—both with their own address and balance.

Implicit accounts are created from key pairs and used to transfer and store user assets. To spend assets, an implicit account creates a transaction, signed by its private key.

Originated accounts containing some code are called smart contracts. They can receive Tez (XTZ, a native Tezos cryptocurrency) via transactions from other accounts.

Smart contracts cannot hold private keys, as they don’t have a place to store them securely. Instead, when a smart contract creates a transaction for spending Tez, this transaction appears on all validating nodes. If some node tries to forge a transaction from a smart contract account, all other nodes will detect and reject the transaction. In other words, smart contracts’ assets are protected by the consensus mechanism.

Fees #

In Tezos, users pay two kinds of fees for their operations:

  • storage fees—for bytes used on the blockchain,
  • gas fees—to bakers for their work.

Gas is a unit of contract execution. Each operation of the virtual machine consumes some gas, for example, instruction execution, data serialisation, type checking, etc. The total fee is calculated based on the minimal value, consumed gas, and storage. Every transaction and block has gas limits to protect them from infinite loops and guarantee fast block creation.
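
As a rough illustration (the rates below are placeholders, not protocol constants, and the exact formula depends on the protocol version and baker configuration), the minimal fee a baker expects typically grows with the operation’s serialised size and its gas limit:

// Rough shape of a baker's minimal-fee check (placeholder rates, not real constants).
function minimalExpectedFeeMutez(opSizeBytes: number, gasLimit: number): number {
  const baseFee = 100;       // flat minimum per operation (placeholder)
  const perByte = 0.25;      // rate per byte of the serialised operation (placeholder)
  const perGasUnit = 0.0001; // rate per unit of gas (placeholder)
  return baseFee + perByte * opSizeBytes + perGasUnit * gasLimit;
}

// A bigger or more gas-hungry operation needs a higher fee to be attractive to bakers.
console.log(minimalExpectedFeeMutez(200, 10_000)); // 151 with the placeholder rates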

As prices change unpredictably, users choose the fees they are willing to pay for their transactions. Bakers then pick transactions based on fees and gas, fitting as many transactions as possible into the block.

An example of a real Tezos transaction invoking a contract.

Other tokens and FA standards #

Tezos has a single native cryptocurrency named Tez (XTZ). Users send it in transactions, and bakers receive it for their work. However, plenty of other tokens exist on the Tezos blockchain too. These tokens are implemented as smart contracts that keep track of user accounts.

Let’s create an imaginary crypto token: the Cossack Coin. In the simplest case, we can describe it as in the image below.


An example contract for an imaginary cryptocurrency called Cossack Coin.

The storage contains a map that tracks the number of tokens on specific accounts. It can have several other fields, like administrators, metadata, operators, etc.

The Cossack Coin has three entrypoints (functions that can be called):

  • transfer—a main entrypoint for users to send their tokens,
  • mint and burn—add or remove tokens to / from a specific account.

Note that, in our example, the mint and burn entrypoints can only be called by the administrator. This is how authorization is often implemented in smart contracts: the contract simply verifies that the sender is allowed to execute the function. The security guarantee relies on the fact that an adversary has to impersonate a user (steal private keys) or break consensus to forge the transaction.
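
A minimal TypeScript sketch of such a sender check for the Cossack Coin’s mint entrypoint (types and names are illustrative; a real Tezos contract would read the sender via the SENDER instruction):

// Illustrative model of the Cossack Coin storage and its mint entrypoint.
type Address = string;
type TokenStorage = {
  administrator: Address;
  accounts: Map<Address, number>; // address -> token balance
};

// mint may only be called by the administrator: the contract simply checks the sender.
function mint(sender: Address, dst: Address, amount: number, storage: TokenStorage): TokenStorage {
  if (sender !== storage.administrator) throw new Error("not authorized");
  const accounts = new Map(storage.accounts);
  accounts.set(dst, (accounts.get(dst) ?? 0) + amount);
  return { ...storage, accounts };
}

// The guarantee relies on the chain itself: forging a transaction "from" the administrator
// would require stealing their private key or breaking consensus.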

Real-world contracts are more complex. They can have multiple tokens: fungible or non-fungible, support integration with other contracts, carry additional information, or be part of complicated workflows.

To standardise a variety of tokens, Tezos released FA1.2 and later FA2 standards. They define the behaviour of entrypoints implemented by token contracts. Following the standards makes integrating third-party tools, wallets, and bridges easy, reducing chaos in the ecosystem.


Smart contract security audit process #

Security auditing of smart contracts is a rapidly evolving field. Compared to the world of ‘traditional’ software, it is still a ‘wild west’ where security standards and best practices are only taking shape.

Good examples of security verification guidelines are Smart Contract Security Verification Standard (SCSVS) by SecuRing, Tezos security assessment checklist and Tezos security baseline checking framework by Inference.

Smart contracts have a lot in common with distributed applications but differ in details. They are generally small and easier to review. They have unique threat vectors, like malicious bakers or gas exhaustion. They don’t store any private data, but they still operate with sensitive information: signatures, administrator addresses, user balances, etc.

Smart contract security audit by Cossack Labs #

At Cossack Labs, we use the standards and best practices as a baseline for our audits, but we rarely stop there. When auditing, we look at smart contracts (just like any other piece of code) as a component of a larger system, as many incidents result from several minor security weaknesses rather than one fatal flaw.

Cossack Labs' security review of smart contracts consists of several steps:

  1. General risk model clarification: formulating the risks and threat vectors that affect the contract’s consistency.

  2. Research of fundamental domain issues: studying recent real-world vulnerabilities, mitigations and tools. As the field changes rapidly, we want to stay up-to-date.

  3. Design and use case review: we analyse how the contract behaves, its entrypoints, and the interactions between contracts. We proactively look for design flaws that lead to the manipulation of transaction flow.

  4. Cryptographic design and implementation review: verifying whether the chosen combination of cryptographic primitives and their implementation embodies desired security properties.

  5. Smart contracts security review to ensure security controls are implemented well. We test for reentrancy, replay attacks, gas exhaustion, denial of service, unhandled edge cases and blind spots.

  6. Security review of the surrounding infrastructure: tests, CI/CD pipelines, dependency management, and supply chain issues.

  7. Operational security: deployment procedures, logging, centralisation issues (e.g. contract security could depend on the opsec of a single admin who holds all the keys needed to deploy contracts).

We have experience building, auditing and improving security/cryptography within cryptocurrency fundamental protocols, nodes, wallets, and bridges. We believe it’s essential to pay attention not only to the smart contracts’ code but also to their infrastructure and data flow:

  • Test coverage: do tests cover all major use cases as well as edge cases (like sending 0 Tez)?
  • Deploy: how are smart contracts deployed? Do developers or other users have a chance to inject vulnerabilities / abusive behaviour into smart contracts before they are deployed?
  • Keys: what does the key infrastructure look like? Where are administrator keys stored? Who has access to them? Are the typical key management procedures (generation, rotation, expiration) supported and logged?
  • Interactions: how do contracts interact with each other? Can we force a contract to perform a certain action by sending certain signals from another contract?
  • Life cycle: are migration procedures in place? What is the strategy for updating the contract?
  • Emergency situations: are there rapid response mechanisms that can stop the system and reduce the consequences of a bug, exploit, key leakage, etc.? Are such scenarios tested and verified?

We often work closely with developers and give them a long list of issues after the audit, together with correction advice. Some items are directly related to mistakes, while others suggest fixing security weaknesses to improve general code quality, maintenance, user experience or compliance with specifications.

We don’t think that it is sufficient (or reasonable) to stop at highlighting deficiencies, so it’s worth investing in making sure that developers have enough inputs to fix problems well.


We improve Web3 security by bringing in our experience of protecting finance and critical infrastructure.


Smart contract specific attacks #

Reentrancy attacks #

Reentrancy is a type of attack in which a contract invokes other contracts, which, in turn, invoke the original contract again. If the state of the original contract wasn’t updated properly, this can be used to drain funds multiple times, mount replay attacks, etc.

This attack is more common in Ethereum because a contract invocation is stack-based, not queue-based, as in Tezos. In Ethereum, it’s easy to forget to update the state before calling other contracts. In Tezos, reentrancy is still possible when contracts require multistep round-trip communication between each other.

Let’s imagine a smart contract that stores users’ ETH. It can be a bank that lets users deposit their assets. The bank records each user’s balance in a map (address -> balance). To withdraw their money, users call the withdraw entrypoint.

The following example is a simplified version of the real 2016 DAO hack.

The contract’s flow is shown in the picture below:


Simplified code flow of the 2016 DAO hack.

Let’s imagine that the user wants to withdraw all their assets. The user invokes the withdraw entrypoint and the contract starts executing. First, the contract gets the user’s balance. Then, it issues a transaction to the user, sending the required amount of ETH. It ends by updating the user’s balance to 0, indicating that the contract no longer holds the user’s assets.

However, if the user is a malicious contract, after receiving the ETH it can call withdraw again, repeating the get-then-transfer sequence. The malicious contract can do this multiple times before eventually stopping.

When we unroll the sequence, it can look like this:


The sequence of execution steps in the DAO hack. Note that the transfer is executed several times.

As a result, the user withdraws assets multiple times.

To prevent such attacks in Ethereum, commit your state before making any external call. If that’s not possible, implement a guard mechanism similar to a mutex in software.
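
The sketch below contrasts the two orderings in plain TypeScript (the bank and the attacker are simulated objects, not real Ethereum contracts): with the vulnerable ordering, the attacker drains funds by re-entering withdraw before the balance is zeroed; committing the state first limits them to their own balance.

// Simulated bank and attacker; "external calls" are plain method calls here.
interface Receiver { receive(bank: Bank, amount: number): void; }

class Bank {
  balances = new Map<Receiver, number>();
  constructor(public vulnerable: boolean) {}

  deposit(user: Receiver, amount: number) {
    this.balances.set(user, (this.balances.get(user) ?? 0) + amount);
  }

  withdraw(user: Receiver) {
    const balance = this.balances.get(user) ?? 0;
    if (balance === 0) return;
    if (this.vulnerable) {
      user.receive(this, balance); // 1. external call first...
      this.balances.set(user, 0);  // 2. ...state updated too late
    } else {
      this.balances.set(user, 0);  // 1. commit the state before the call
      user.receive(this, balance); // 2. only then interact with the outside world
    }
  }
}

class Attacker implements Receiver {
  stolen = 0;
  receive(bank: Bank, amount: number) {
    this.stolen += amount;
    if (this.stolen < 500) bank.withdraw(this); // re-enter while the balance is still non-zero
  }
}

const bank = new Bank(true); // flip to false for the safe ordering
const attacker = new Attacker();
bank.deposit(attacker, 100);
bank.withdraw(attacker);
console.log(attacker.stolen); // 500 with the vulnerable ordering, 100 with the fixed one

The fixed branch follows the widely recommended “checks-effects-interactions” ordering: update your own state before talking to anyone else.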

In both cases (Tezos and Ethereum), pay close attention to the places where contracts make calls, especially to untrusted addresses.


Front-running attacks #

Front-running attacks exploit the transparent nature of the blockchain network: transactions are visible to the nodes before being collected into a block. This means malicious bakers or other users can take advantage of transaction ordering by issuing transactions that are mined immediately before or after a chosen transaction.

Front-running attacks can impact decentralised finance, where token prices depend on supply and demand. Another example is an NFT marketplace where a user issues a request to buy an item, but a malicious baker buys it first and immediately resells it at a higher price.


Front-running attack example. The user wants to buy an item for 10 Cossack Coins. Observing this transaction, the malicious baker issues two new transactions, first purchasing the item and then selling it at a higher price.


Gas exhaustion #

Gas is used to limit the execution of contracts, preventing infinite loops and abuse of bakers’ computing power. Whenever an operation or block reaches its gas limit, the contract execution stops. Gas is consumed during execution, data deserialisation, type checking, etc.

If a contract’s data structure becomes huge, its deserialisation / type checking / serialisation can consume all the available gas. In this case, the contract is blocked, as every invocation fails immediately. To avoid such situations, contracts restrict the size of their data structures or, in the case of Tezos, use lazily deserialised big maps that can store millions of records.

Gas exhaustion can happen if a contract has unbounded loops or is simply too big. It can also occur if a contract calls other, malicious code that consumes the gas. Since a contract can send tokens to many different contracts, if one of them consumes all the gas, the whole operation fails and no one receives the tokens.


Gas exhaustion due to calling a malicious contract. The bank contract pays payouts to users, but one user is a malicious contract which consumes all the gas. In this case, nobody receives the payout.
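
A toy TypeScript simulation of this scenario (all gas numbers are made up): the payout runs as one atomic, gas-metered operation, so a single greedy recipient makes the whole batch fail and nobody gets paid.

// Toy gas-metered, atomic batch payout.
type Recipient = { name: string; onReceive: (gas: GasMeter) => void };

class GasMeter {
  constructor(public remaining: number) {}
  consume(units: number) {
    this.remaining -= units;
    if (this.remaining < 0) throw new Error("gas exhausted");
  }
}

function payout(recipients: Recipient[], gasLimit: number): string[] {
  const gas = new GasMeter(gasLimit);
  const paid: string[] = [];
  for (const r of recipients) {
    gas.consume(10);  // cost of the transfer itself
    r.onReceive(gas); // the recipient's own code also burns gas
    paid.push(r.name);
  }
  return paid; // only reached if the whole batch fits into the limit
}

const honest = (name: string): Recipient => ({ name, onReceive: () => {} });
const malicious: Recipient = { name: "evil", onReceive: (gas) => gas.consume(1_000_000) };

try {
  payout([honest("alice"), malicious, honest("bob")], 1_000);
} catch (e) {
  console.log("operation failed, nobody gets paid:", (e as Error).message);
}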

When designing or reviewing a contract, always pay attention to the items a user can control: data structures, the number of operations, calls to other contracts. Model the actions a malicious user could perform to DoS the contract and the consequences they could have.


Sensitive data leakage #

All data on a blockchain is public, so smart contracts should not store any secrets like keys or PII. Still, some contracts require random numbers, and producing “good enough” randomness is complicated. One way to achieve it is to use “commit and reveal” schemes.
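
A minimal commit-and-reveal sketch in TypeScript using Node’s built-in crypto module (both phases are simulated off-chain here): a participant first publishes only the hash of their value and a random salt, and later reveals both, so the value cannot be changed after other participants have committed.

import { createHash, randomBytes } from "node:crypto";

// Commit phase: publish only the hash of (salt, value).
function commit(value: string, salt: Buffer): string {
  return createHash("sha256").update(salt).update(value).digest("hex");
}

// Reveal phase: publish the value and the salt; anyone can check them against the commitment.
function verifyReveal(commitment: string, value: string, salt: Buffer): boolean {
  return commit(value, salt) === commitment;
}

const salt = randomBytes(32);         // keeps the committed value unguessable
const commitment = commit("7", salt); // published first
// ... after everyone has committed, the values are revealed ...
console.log(verifyReveal(commitment, "7", salt)); // true
console.log(verifyReveal(commitment, "8", salt)); // false: a changed value is rejected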

Some developers use the current time and block data as a source of randomness. This is insecure, as these values are predictable, and bakers can tweak them to gain an advantage.


Software vulnerabilities #

Smart contracts are code first and foremost, so they can suffer from typical application security issues: logic bugs, overflows and underflows, unhandled errors, improper initialisation, unused code, inappropriate types and data structures, etc.

// Example of transfer entrypoint for some imaginary contract.
// Can you spot a bug?
//
// ROT13: Jung unccraf jura fraqre vf qfg?
function transfer(dst, amount) {
   const src_balance = storage.accounts.get_or(sender, 0);
   const dst_balance = storage.accounts.get_or(dst, 0);

   require(src_balance >= amount);

   src_balance -= amount;
   dst_balance += amount;

   storage.accounts[sender] = src_balance;
   storage.accounts[dst] = dst_balance;
}

To fight them, use linters and verifiers, write tests (smart contracts are generally small, self-sufficient programs, so they are easy to test), keep the compiler version up to date, manage dependencies, and follow coding best practices.

It’s worth saying that the design of some languages makes it harder for bugs to slip through. For example, Michelson is a stack-based, strictly typed language designed to make safe coding and formal verification easy. So we definitely suggest that developers use the most modern and secure tooling.


Signature replay #

Smart contracts operate with different types of signatures: transaction signatures (the user creates and signs the transaction with their private key) and signatures used by smart contracts in their custom authentication mechanisms.

For example, a smart contract may require an administrator’s signature for minting NFTs, or three out of five signatures for releasing an account’s assets. However, signatures are not trivial to handle, as they can be vulnerable to replay attacks.

Signature replay is not something specific to smart contracts. Many signature algorithms are malleable: signatures can be altered in some way and still remain valid. As a result, they can be reused in places that naively keep track of seen signatures or their hashes for deduplication.

Instead, the messages themselves or their hashes should be stored. To reject a reused signature, check whether the corresponding message has already been seen. A good idea is to include a unique ID (random, 128 bits should be enough) in each message. Mixing additional data into the signed message helps prevent replay attacks across different contexts: including the contract’s address in the message ensures that the signature cannot be replayed on other contracts or after an upgrade.
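
A sketch of this bookkeeping in TypeScript (the signature check itself is abstracted behind a hypothetical verify callback): the contract hashes the whole message, including its own address and a unique ID, and rejects any hash it has already seen.

import { createHash } from "node:crypto";

type Message = {
  contractAddress: string; // binds the message to this particular contract
  uniqueId: string;        // random 128-bit ID chosen by the signer
  payload: string;         // the actual command, e.g. a mint request
};

class ReplayGuard {
  private seen = new Set<string>(); // hashes of accepted messages, not of signatures

  accept(
    msg: Message,
    signature: Uint8Array,
    ownAddress: string,
    verify: (msg: Message, sig: Uint8Array) => boolean, // assumed signature check
  ): boolean {
    if (msg.contractAddress !== ownAddress) return false; // meant for another contract
    if (!verify(msg, signature)) return false;            // invalid signature
    const hash = createHash("sha256").update(JSON.stringify(msg)).digest("hex");
    if (this.seen.has(hash)) return false;                // replayed message
    this.seen.add(hash);                                  // remember the message, not the signature
    return true;
  }
}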

Languages like Michelson and LIGO make it impossible to hash and compare signatures, preventing such attacks.


Summary #

Smart contracts are often used in decentralised finance (DeFi) to transfer assets between accounts under chosen (programmatically evaluated) circumstances. Thus, smart contract security should be comparable to that of banking processing software. But smart contracts are very different: small, immutable, written in unusual languages, and quite complicated to update and re-deploy.

Smart contracts operate in a unique threat landscape: the immutability of bugs, reentrancy attacks, transaction replay, gas issues, transaction edge cases, deadlocks, dealing with constantly updated compiler versions, and much more.

But the attack surface of smart contracts is not limited to the contract’s code. It includes interactions between contracts (invocations), deployment and migration procedures, key management issues, and the project developers’ operational security.

Security standards and best practices are still evolving, and there is no single central source documenting known vulnerabilities, attacks, and problematic constructs yet.

Therefore, a smart contract security audit requires several steps: design review and analysis, threat modelling, testing focused on mitigating security weaknesses, and extensive attention to the surrounding processes.

We described how we deal with smart contracts security audits. Let us know what you think!
