Tellor Security 301

November 14, 2024 | Tellor Info

The Tellor oracle is a decentralized network of reporters dedicated to creating transparent and censorship-resistant data. Having been live for several years now, through several upgrades, Tellor has become quite a robust system, consistently driving toward decentralization and security. Now Tellor is in the process of becoming its own L1. Although much of the terminology and the goal of the system remain the same, the security of an L1 is very different from the security of a smart contract. This article should give you a solid understanding of the crypto-economic guarantees underlying the new system and its potential attack vectors.

‘Security’ isn’t defined by a single feature or one line of code, but by an aggregate of several attributes designed to prevent misuse of the network and tangential attack vectors. Before we jump into Tellor’s system, we’ll clarify a few points about blockchain security in general, notably on L1s and cross-chain communication, so we can clearly differentiate between attacks on Tellor and attacks on the user’s protocol or chain, both of which would shut down Tellor for the user.

The Basics

For a full walkthrough of how the new and improved Tellor system works, please reference the whitepaper. But for a quick synopsis of how it works:

Tellor is a standalone L1 built using the Cosmos SDK. Chain validators work to process transactions from reporters, who push oracle data when requested by users. The chain can be validated trustlessly and the data bridged to any other smart-contract-enabled chain.

There are some caveats here you need to be aware of, namely that a user’s protocol has other dependencies, specifically the blockchain it runs on and the other protocol(s) it uses. Take a stablecoin, for instance: how that stablecoin is designed with regard to the chain it’s on (e.g. is there MEV or censorship at the chain level) and how it interacts with the oracle (is it used properly, how fast are updates expected, can things be corrected if a bad value gets through) all determine the security of the system.

The basic flow for any application regarding security is this:

End user’s risk = application risk + chain risk + oracle risk + other dependency risks  (e.g. tokens, bridges, etc.)

So the total security, or cost to break, depends on numerous factors, which makes the analysis that much more complicated: a deployment on one chain or with one design may have a completely different risk profile even if it technically uses the same end product of “tellor”.

That said, let’s talk blockchain and L1 security.  

In case you didn’t know, all blockchains ultimately get security from the social layer. There is “crypto-economic” security in the amount of tokens locked up or subject to slashing, but between bribe attacks, bugs, and the potentially unlimited (i.e. unmeasurable) profits that can be gained from attacking some systems, the idea that we can calculate a “cost to break” is naive at best. Blockchain architects and attackers know this, and this is why the “social layer” exists.

Let’s take an example where someone buys up ⅔ of ETH, stakes it, and then double-spends or censors everyone’s transactions. We won’t just go along with it. We’ll fork them out: we’ll pretend they don’t exist, they will lose all that ETH, and we’ll continue, probably a few stressful days later. For this reason, how users detect whether the social layer has kicked in is of utmost importance. The crypto-economic piece here is more “the cost to break Tellor in the short term or force a fork”. It’s still a big deal, but it should change how you think about funds locked, or even about using Tellor (or any cross-chain application).

Tellor Security

There are countless ways to break a blockchain system, so the goal here is to formalize and describe them, and to discuss which ones might be most likely.

The full list here:

  • Control Tellor Layer L1
  • Submit a bad value for given query – consensus data
  • Submit a bad value for given query – optimistic data
  • Censor Tellor Layer L1
  • Censor all reports for a given query
  • Censor given query from reaching consensus
  • Break voting mechanism for disputes
  • Break the tellor user bridge
  • Censor the bridge
  • Edge Cases and Caveats
    • Liquidity Issues
    • Delegation
    • Bribes 
    • Social layer attacks

For the formulas, these are the terms:

  • Total amount of validator stake (V)
  • Total amount of reporter stake for a given query (R)
  • Total amount of tokens willing to dispute on a given query (D)
  • Total amount of TRB tokens on Layer (H)
  • Total amount of user tips on Layer (U)
  • Cost to bribe the team (T)

 

Cost to control Tellor Layer L1 = ⅔ V

Simply put, ⅔ of validator power is needed (either you control the tokens directly or the tokens are delegated to you) to sign a malicious block on a Tendermint chain. This is not unique to us: Tellor uses Tendermint Core, a third-party-maintained BFT engine, as part of the Cosmos SDK. Full documentation can be found here: https://tendermint.com/core/

Cost to control the Tellor reporter set (consensus data) = ⅓ R

The reporting cost-to-break analysis is not as straightforward on Layer. To reach “consensus” on a piece of data (e.g. the BTC/USD price), reporters representing ⅔ of total reporter stake have to report it, and the stake-weighted median is taken as the official value. Hence, to control the median, half of that ⅔ has to be compromised, i.e. ⅓ of total stake. For a user who consumes consensus data instantly, this is the cost to break. If there is any delay, however, the analysis gets more complicated, as the reporter set is controlled by validators and subject to disputes.
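
The ⅓ figure can be sketched numerically. Below is a simplified, hypothetical stake-weighted median (Layer’s actual aggregation may differ in details such as tie-breaking): an attacker holding ⅓ of total stake supplies half of the ⅔ reporting quorum, which is enough to drag the median to an arbitrary value.

```python
def weighted_median(reports):
    """reports: list of (value, stake) pairs; returns the stake-weighted median."""
    reports = sorted(reports)
    total = sum(stake for _, stake in reports)
    acc = 0
    for value, stake in reports:
        acc += stake
        if acc * 2 >= total:
            return value

# Total stake is 900: honest reporters (300 stake) report $60k, while the
# attacker reports $1 with 300 stake (1/3 of total = half the 2/3 quorum).
honest = [(60_000, 100)] * 3
attacker = [(1, 150), (1, 150)]
print(weighted_median(honest + attacker))  # 1 -- the attacker's value
```

With anything less than half of the reporting weight, the honest value survives, which is why the cost to break consensus data is ⅓ of total stake rather than ⅔.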

Cost to control the tellor reporter set (optimistic data) =  D

If any piece of data does not reach ⅔ consensus of all stake (most small queries and likely big queries in times of uncertainty), data is considered “optimistic” and should be used with a delay.  The cost to break here is much greater. If parties are monitoring the data (which they are financially incentivized to do), anyone can prevent bad data from remaining on-chain via a dispute.  For this reason, the data is secure up to the amount parties are willing to spend on disputes.  

As with consensus data, any attack on reporting is fundamentally limited by the number of reporters/disputers that actively support a given query.  If for instance you have a query such as BTC/USD, it is likely that anyone with TRB tokens should be able to determine whether the system is under attack.  If however the query is the result of some obscure event (e.g. who won the armwrestling match that happened last night in my garage), very few parties will be able to support the query or have confidence in disputing it, so the security will be much less.  

Cost to censor Tellor Layer L1 = ⅓ V

This is based on Tendermint security. If you can gain over ⅓ of the validator power, you can prevent the chain from creating new blocks. The only option for the chain to recover is to socially fork out the attacking validators so the chain can move forward.

Cost to censor all reports for a given query = R

There are multiple ways to censor reports on tellor layer.  The first, as with any chain, is to just fill up a block.  The cost to censor using this method is: 

censorship cost = (max gas price those pushing transactions will pay) × (transactions per block) × (number of blocks to censor)

If I’m willing to spend $1 on a transaction and there are 200 transactions per block, you would need to pay over $1 for each of the 200 slots, i.e. over $200 per block, to keep my transaction out.
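
The formula above can be written as a one-line back-of-envelope calculation (the figures below are the illustrative ones from the text, not real chain parameters):

```python
def fill_block_censorship_cost(victim_gas_usd, txns_per_block, blocks):
    """The attacker must outbid the victim on every slot of every block."""
    return victim_gas_usd * txns_per_block * blocks

# A $1-per-txn victim, 200-txn blocks, censored for 100 blocks:
print(fill_block_censorship_cost(1.00, 200, 100))  # 20000.0
```

Note the cost scales linearly with the censorship duration, which is why fill-the-block censorship is only viable for short windows.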

The next, and more likely, way is to dispute all the valid reports. This would be relatively easy on Layer: you would open a dispute against every reporter who supports a given query ID (costing R, the total stake owned by them). Of course, more parties could come online and start reporting that data (or the disputed parties could borrow/buy more stake), but the feed would be censored for a short period until then.

Censor given query from reaching consensus = ⅓ V

Tellor users rely on a value reaching “consensus”, i.e. support from ⅔ of all stake, in order to consume the data instantly (this is a Tendermint assumption). One potential way to censor or slow down a feed is to keep support for the query below that ⅔ threshold so the consensus flag is never set.

In order to do this, you would need to own ⅓ of the reporting stake. Note that it is possible to control ⅓ of the reporting power via delegation without owning ⅓ of the validator power (which would let you halt the whole chain). Another way to temporarily push a query out of consensus would be to dispute the stake above the ⅔ threshold. This might be an option if R (reporter support) is only slightly above that threshold, but even then, it’s likely others could start reporting quickly.

This censorship attack is relatively dangerous because the attacker could accumulate TRB and pull the chain out of consensus on certain values without halting the chain, so he wouldn’t be slashed or face inactivity penalties. The fix, however, is simply for others to stake more TRB and start reporting, so the attacker cannot keep the data from consensus and it becomes more expensive to dispute above the ⅔ threshold.

That said, using a value optimistically is not the end of the world; most protocols should support doing so as a best practice, since any blockchain can experience slight delays simply by virtue of being a blockchain.

Break voting mechanism for disputes

Layer uses the same voting distribution Tellor always has: 25% token holders (H), 25% users (U) (as measured by tips), 25% reporters (R), and 25% the team (T). The cost to break (CTB) the network is therefore the cheapest way to get 51% of the vote:

Cost to Break Dispute Voting System = cheapest combination of CTB(H), CTB(R), CTB(U), CTB(T) that yields >50% of the total vote

In practice, tips (U) are likely the cheapest target, and token holders may be less likely to vote than reporters, so an attacker could gain outsized voting power within those groups. There is also no slashing risk for voting incorrectly: reporters can gain voting power through delegation and don’t have to spend money to get it, nor are they penalized for voting the wrong way. Token holders can likewise be persuaded to vote in the wrong direction, since they too face no penalty.

Whatever your tokens are worth, though, they will likely be worth less than before, since the oracle will be broken.

Another wrinkle comes into the mix for voting results: not everyone can vote. If a disputed query sits behind a paid API wall, or only a subset of people know the answer (e.g. did Bob show up to work? who knows?), then it is likely that not many people will be able to vote on the dispute. For this reason, all of the minimum cost-to-break numbers need to be scaled down by voter participation.
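
To make the participation scaling concrete, here is a hypothetical greedy sketch of the cheapest-majority calculation. All dollar figures and participation rates are invented; in practice each group’s CTB and turnout would have to be estimated, and real voting rules may differ.

```python
def cheapest_majority(groups):
    """groups: name -> (usd_cost_to_control_group, participation_rate).
    Each group carries 25% of the vote; participation scales how much of
    that weight is actually cast. Greedily buy the cheapest effective
    voting power until holding half of all cast votes."""
    total_cast = sum(0.25 * p for _, p in groups.values())
    priced = sorted(
        (cost / (0.25 * p), 0.25 * p, name)   # $ per unit of cast vote
        for name, (cost, p) in groups.items()
    )
    needed, spent, bought = total_cast / 2, 0.0, []
    for unit_cost, weight, name in priced:
        take = min(weight, needed)
        spent += take * unit_cost
        bought.append(name)
        needed -= take
        if needed <= 1e-12:
            break
    return spent, bought

groups = {
    "holders":   (40e6, 0.30),   # $40M to control the group, 30% turnout
    "reporters": (30e6, 0.90),
    "users":     ( 2e6, 0.50),
    "team":      (50e6, 1.00),
}
cost, bought = cheapest_majority(groups)
print(round(cost / 1e6, 1), bought)  # 30.3 ['users', 'reporters']
```

Notice how low turnout makes a group cheap per unit of cast vote: the small tips pool gets bought out entirely before any reporter stake is touched.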

However, these risks are mitigated by having multiple dispute rounds on Layer (as in previous Tellor versions). So even if you break one vote because no one’s paying attention, more rounds can be opened, and you will eventually hit the point where anyone who does support the query will weigh in (or even look up a way to form an opinion), as they are financially incentivized to do so.

Break the tellor user bridge/ token bridge = ⅔ V (sort of)

All tokens on layer will be deposited into the bridge contract, making it a giant honeypot.  

There are multiple safeguards to prevent an attack on the system. First off, to perform an attack, you would need to break our validator set. We use the Blobstream bridge (created by Celestia), which uses a smart contract to check the signatures of our entire validator set to see whether a block was finalized with information to release tokens (as a side note, this bridge is awesome, and it’s what our users use for data, or a zk version of it). So to break the bridge, you need ⅔ V. But even then it’s not that simple. Hardcoded in our bridge contract are a 5% withdrawal limit every 12 hours and pause functionality. The pause can be initiated only by the team address, and only if the team burns 10k TRB (a lot of money). It pauses withdrawals in the contract for only 21 days (the data is not affected), allowing the system to recover or be forked.

The reason for this is that any attacker going after the token bridge is looking to sell the tokens. By burning tokens to pause, we are admitting that the chain was compromised and that we are going to fork the chain and the token (e.g. launch a new contract and do a migration). The pause would give us time to contact exchanges and users of the old Tellor (which would not be broken) so they can decide to exit (stop using old Tellor) or stay.
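
The 5%-per-12-hours withdrawal cap described above can be modeled with a few lines of code. This is only an illustrative sketch of the rate-limit logic; the real bridge contract is Solidity, and its exact accounting (e.g. how the cap is snapshotted per window) may differ.

```python
WINDOW = 12 * 60 * 60   # 12 hours, in seconds
CAP_FRACTION = 0.05     # 5% of the bridge balance per window

class WithdrawLimiter:
    """Toy model of a per-window withdrawal cap on a bridge balance."""
    def __init__(self, balance, now=0):
        self.balance = balance
        self.window_start = now
        self.window_withdrawn = 0.0
        self.window_cap = CAP_FRACTION * balance

    def withdraw(self, amount, now):
        # Roll over to a new window, re-snapshotting the 5% cap.
        if now - self.window_start >= WINDOW:
            self.window_start = now
            self.window_withdrawn = 0.0
            self.window_cap = CAP_FRACTION * self.balance
        if self.window_withdrawn + amount > self.window_cap:
            return False            # over the 12h cap: rejected
        self.window_withdrawn += amount
        self.balance -= amount
        return True

bridge = WithdrawLimiter(1_000_000)
print(bridge.withdraw(40_000, now=0))       # True
print(bridge.withdraw(20_000, now=3600))    # False: 60k > 50k cap
print(bridge.withdraw(40_000, now=WINDOW))  # True: new 12h window
```

Even an attacker with full signing control can only drain 5% per window, giving the team time to burn TRB and trigger the 21-day pause.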

Censor the Bridge = ⅓ V

Similar to censoring the chain, you can censor the bridge if you have ⅓ of the validator set. In this case the chain could actually keep running, but the attackers would stop attesting to bridge updates. This would be a serious attack that we would most likely stop at the social layer (slashing, so they could expect to lose that ⅓ V).

For the bridge functionality, however, this halts token bridging in both directions and can even halt user data. Users should have a fallback in this scenario, but the system would recover. In the main contract, if the validator set is not updated (or at least checkpointed) within 21 days, the system goes into recovery mode and a guardian address (e.g. the team) is able to reset the validator set manually. This means that tokens could get stuck for three weeks if someone is willing to sacrifice ⅓ of the staked tokens on the Tellor chain.

What’s the security I can have on my feed?

⅔ of the validator set is needed to explicitly break the whole chain.  To break a specific feed, you need to have greater than R (the amount of reporter stake that supports the query (e.g. is willing to dispute or vote on your behalf)).  To censor, the cost is either ⅓ of the validators or again greater than R for a given query.  

These are nice numbers, but even then it’s not that simple.  

Tellor uses a layered security model, not because of one big number, but because the data is subjective. To illustrate, take a price feed (say TRB/USD).

First an attacker would have to break the reporters (get the bad value to be the median), next the voting mechanism for determining what a good value is (they need to convince everyone that maybe it’s a “good enough” value), and lastly they need to convince the validators not to step in and fork them out AND convince the social layer not to pressure the validators to do this (the validators risk being dropped).  

Reporters => dispute mechanism => validators => social layer

The idea here is that the reporters are just shells.  They are crypto-economically secure up to how much stake is actively reporting and willing to dispute.  The crypto-economics here prevents CENSORSHIP.  

If they put on a bad value, they will be disputed. The reporter can therefore be thought of as a relayer for whatever the dispute mechanism deems valid. The dispute mechanism is the core of “what is true”: it steps in to determine whether a disputed value was good or bad, and should be thought of as a court system with appeals (as multiple dispute rounds are allowed).

If this too somehow becomes compromised, the validators can step in. They should be thought of as neutral: they don’t necessarily follow the disputes or what a good/bad value is, BUT they can step in if something is blatantly wrong or the system is threatened (e.g. BTC/USD = $10M or something absurd). They mainly act at the behest of the social layer, which stands ready to fork even the validators out if they are compromised.

All in all there are layers of censorship resistance, expertise, crypto-economic security, and broad social coordination. It’s not meant to sit behind one big number, but more so have a series of incentives that line up properly to give users correct data in an open and permissionless manner.

Edge cases

  • Liquidity Issues

For the cost side of any of these analyses, we assume there is a liquid market for that amount of TRB, or that the attacker could eventually sell the tokens at that price. In reality, the cost to acquire the tokens is a minimum: it assumes that buying up ⅓ or ⅔ of the supply would even be possible, and it is far less likely that doing so would have no price impact.

That said, you can have the reverse problem as well when liquidity is too low. If a validator on your chain holds 10% of the supply and knows he can’t sell it, he is likely open to bribes, or willing to attack, at values even less than the (price × quantity of tokens) amount, since the liquidity doesn’t exist for his size.
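
The price-impact point can be sketched with a toy constant-product (x·y=k) pool. The pool sizes below are invented, and real TRB liquidity is spread across venues, but the shape of the result holds: acquiring a large fraction of supply costs far more than spot price × quantity.

```python
def cost_to_buy(trb_out, pool_trb, pool_usd):
    """USD required to pull trb_out tokens from an x*y=k pool (no fees)."""
    k = pool_trb * pool_usd
    return k / (pool_trb - trb_out) - pool_usd

pool_trb, pool_usd = 1_000_000, 50_000_000       # spot price = $50
naive  = 50 * 300_000                            # 30% of the pool at spot
actual = cost_to_buy(300_000, pool_trb, pool_usd)
print(f"naive ${naive/1e6:.1f}M vs actual ${actual/1e6:.1f}M")
```

This is why the cost-to-acquire figure is a floor on the attacker side, while thin liquidity simultaneously lowers the bribe price an illiquid whale will accept.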

One benefit of being an L1 is that flash loans are not available. On Ethereum, and many other L1s with applications, you can obtain tokens by borrowing them for an intra-block attack. This is particularly dangerous, as it allows zero-cost attacks if a system is not set up properly. Luckily for Tellor, this issue is mitigated by the necessity of bridging to Layer.

  • Bribes

Bribing is a real threat for any voting system (which essentially all DAOs, and even to some extent blockchains, are), and it is the likely form of attack when liquidity issues prevent acquiring the necessary control outright. Obviously, if a validator has a stake of $10M, he would accept a bribe to break Tellor at some value >$10M.

But the interesting bribes come when the attacker doesn’t have the money to break the system, only part of it. For Tellor, imagine someone makes a smart contract on Ethereum that coordinates breaking the Tellor chain. The contract says that if you, as a reporter, submit a bad value, you get to split the profit from the attack at certain percentages. Say the total stake is $100M, but the total value locked in all of the contracts Tellor secures is $150M. The attacker could say: if you participate, we all split $149M, and $1M goes to the coordinator. Now you are incentivized to participate, but the attacker didn’t put up any money! He just made a smart contract on Ethereum with credible guarantees. On private chains, where validators could signal support without fear of proactive social slashing stopping the attack, this becomes even more dangerous.
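
The group-rationality condition behind this bribe contract can be reduced to a toy check (numbers taken from the example above). This deliberately ignores coordination risk, price impact, and the social-layer fork response, all of which cut against the attacker in practice.

```python
def collusion_profitable(locked_value, total_stake, coordinator_cut):
    """Colluders split (locked_value - coordinator_cut) and lose
    total_stake to slashing; group-rational if the payout exceeds it."""
    return locked_value - coordinator_cut > total_stake

# $150M secured by the oracle, $100M staked, $1M coordinator cut:
print(collusion_profitable(150e6, 100e6, 1e6))  # True
```

The takeaway: the relevant safety condition is that total stake stays comfortably above total value secured, not that any single attacker is rich.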

  • Delegation

Delegation is a dirty word in crypto.  We all want to have crypto economic security, but then users actually don’t want to run validators or reporters.  They want return on capital, but they’d much rather someone else run the actual system.  In practice this leads to centralized services custodying keys or tokens for parties and passing them back the returns.  To prevent this, protocols launched “liquid staking tokens” or just delegation.  This means that you can give your tokens’ voting/reporting rights to someone else without having to give up custody.  It’s a benefit, but the downside is security.  

If a validator traditionally has a $10M stake, he will lose $10M if he cheats the system. If the validator is instead delegated $10M by you and 100 other parties, what does he lose if he cheats? The answer is nothing: you and the other parties lose out. You can force the validator to have his own stake, but even this is subject to borrowing or the centralized-custodian problem. Some people say delegation is still a net good because the $10M would not have been staked otherwise, but we actually don’t have good data on that.

Most academics see delegation as unavoidable, but they differ on the end game. Some think we all just need to accept actual governance rather than traditional crypto-economic security. I think we at Tellor just need to monitor it. There have been some promising proposals to fight delegation, but it’s a problem we share with the rest of the space. It’s a real threat, but the blockchain world still doesn’t know which way it swings. If validators are doxxed, subject to being sued, and have revenues/reputations across dozens of chains, the security is probably greater than pure crypto-economics. If all validators are anonymous and borrowing capital, delegation might look like a major attack vector (we’ll be able to tell from looking at the pools developing in our ecosystem).

  • Social layer attacks

Social layer attacks are probably the biggest attack vector out there.  For any of these systems, you never actually get the funds necessary to attack them (the feds are an additional security layer we don’t like to admit to).  What you do is you break what people think is “truth”.  You don’t tell them you’re stealing the funds, you convince them to give it to you.  You tell the DAO that restaking, leveraged ponzi schemes, and/or ossification has always been the goal.  For an oracle, you start to get them to question “what is a good value?”   Can we actually say that the price submission that was off by 1% was actually wrong? Maybe it did trade at that price on some exchange.  Maybe you convince people that the “truth” that one government gives has been co-opted.  That the “official” news is now an arm of the state and propaganda of an authoritarian regime.  Our “network” is actually right. 

It’s happened before and it will continue to happen. The social layer is both the savior to these systems and the primary source of their downfall.  You have to constantly talk with your reporters, voters, and users about what a just and fair decision means and how you come to these decisions.  You have to instill faith that those in the social layer are acting in the best interest of the protocol or it all fails.  No pressure tellor community.  

Conclusion

It costs a lot to break Tellor. If you’re actually talking about getting bad values on-chain, it’s going to cost tens to hundreds of millions of dollars (likely the total amount of validator stake). The Tellor network is secure for participants who properly account for the system’s lack of instant finality, allow for disputes, and have safeguard mechanisms in case the system comes under attack. The best way to protect your protocol is to acknowledge its vulnerabilities and add processes to mitigate the risks. A good design has transparent, measurable security of its own; third-party components are only a portion of the key items to consider.

If you need data in your smart contracts and you’re thinking about using Tellor, reach out to us and we’d be happy to discuss the pros and cons as well as the necessary steps to building a robust data feed for your system.