The alt L1s are not going anywhere, thanks to a little known concept called weak subjectivity

Oh god, not another one…

Dana J. Wright
8 min read · Sep 3, 2022
Abstract system design. Image created by the author.

Is what I’m thinking as I scan the announcement of Sui.

Sui is yet another L1 blockchain, this one created by Mysten Labs, a team that previously led Facebook’s now-defunct blockchain projects, Novi and Diem, and that raised $36 million from a16z.

I think it’s safe to say, alternative Layer 1s aren’t going anywhere. But why? And how many do we need?

From their websites and roadmaps, the L1s look very similar

👍
👌
🤘
🤙

Buzz. Word. Salad.

To find the actual differences between them, you have to look to the margins. You have to look at how they handle extraordinary events.

For example, what happens if the network is knocked offline? What if it gets partitioned? Or what if the private keys for five out of nine validator nodes (which all happen to be stored on the same server) get swiped, and $625 million gets drained from the protocol?

What happens then?

Worst case scenarios

Image created by the author.

The early Bitcoin community thought a lot about worst case scenarios.

They witnessed the fate of DigiCash, E-gold and Liberty Reserve, and they anticipated that the network would one day be the target of nation-state attacks. For this reason, at pretty much every turn they opted for maximum resilience and decentralization.

Perhaps the most famous example of this was the decision not to increase the block size.

Bitcoin has a block size of 1MB and a block time of 10 minutes, which allows for a transaction throughput of about seven transactions per second (TPS).

That’s pretty low, compared to the Visa Network (1700 TPS) or Mastercard (5000 TPS).
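The seven-TPS figure falls out of simple arithmetic. Here’s a back-of-the-envelope sketch (the ~250-byte average transaction size is an assumption; real transactions vary):

```python
# Back-of-the-envelope TPS estimate for Bitcoin.
BLOCK_SIZE_BYTES = 1_000_000   # 1MB block size limit
AVG_TX_SIZE_BYTES = 250        # assumption; varies with transaction type
BLOCK_TIME_SECONDS = 600       # 10-minute target block time

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES
tps = txs_per_block / BLOCK_TIME_SECONDS
print(f"{txs_per_block} txs/block ≈ {tps:.1f} TPS")
```

Swap in a smaller average transaction size and you can squeeze out a bit more, but you stay in the single digits either way.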

But, touka koukan (等価交換).

That’s a Japanese phrase, literally “equivalent exchange,” which roughly translates to “nothing comes for free,” or “there will always be trade-offs.”

In this case, the tradeoff for a bigger block size would have been to make the blockchain more cumbersome, increase the data storage requirements for hardware, and generally make it more difficult to run a full node.

This, the early bitcoiners decided, was too high a price. They believed it would eventually reduce the overall number of full nodes, have a centralizing effect on mining, and increase the odds of attack or regulatory capture.

That laser focus on security and decentralization made Bitcoin what it is: an unstoppable, censorship-resistant digital store of value that runs without the need for trusted intermediaries.

Let’s build a new chain

The blockchain trilemma.

Fast forward five years and Ethereum was born, a Turing-complete blockchain that facilitates programmable smart contracts.

Ethereum, Namecoin, XRP and a few others changed the use case for blockchains and retooled the entire security model from the ground up. This opened the Pandora’s box of blockchain configuration and set the stage for the hundreds of different L1 chains we have today.

Let’s say, for example, you want to build a sandbox for DeFi applications. What you will find pretty quickly is that there are a ton of questions for which there aren’t necessarily any right or wrong answers, only tradeoffs.

For example:

  1. How many nodes/validators do you really need in order to provide adequate defense for the network?
  2. How will this defense system scale as the network grows and becomes a juicier target for attacks?
  3. What happens exactly when the network gets attacked?

Dealing with Bob

Blockchain success.

If enough developers agree with your answers to the first two questions and decide to start building applications on your chain, congratulations.

Developers are the cornerstone of your community, and community is key. If properly incentivized, your community will want to take part in staking and validating.

That’s the culture of crypto. As your chain succeeds, more participants will pile in to earn rewards and help increase redundancy and security.

Awesome.

Until one day, a new validator comes online, we’ll call him Bob.

Bob doesn’t know any hashes or consensus rules. He walks up to some other nodes, inquires about the rules of the network and asks for some hashes to validate.

Bob could be a chill new node or an attack node.

Bob could even pretend to be chill for a while until there’s a billion dollars locked in the protocol, and then start validating malicious blocks that drain the whole thing.

Bob could also have poor security hygiene and one day get his private keys swiped.

This is where things start to get interesting.

Let’s look at how some different chains handle Bob.

Binance Smart Chain

BSC effectively doesn’t.

No Bobs allowed.

In a sort of nod to decentralization, BSC will let you run a witness node which serves up duplicate data on the current state of the chain.

But these nodes don’t confirm transactions or participate in the consensus process.

Requirements for running a validator node on BSC include having a firewalled 64-bit VPS server, a 1GB fiber network connection and holding at least 10k BNB (~$4m), among other things.

There are only 21 validators for BSC.

Here’s a thread where CZ was asked about BSC node operators and how they are incentivized. The thread was deleted from Twitter but saved by the Internet Archive.

Yikes.

Solana

Solana also requires powerful machines, high bandwidth, and protocol expertise to be a validator.

There are currently 1736 Solana nodes.

While that’s significantly more decentralized than BSC on paper, it’s worth noting that just 22 nodes hold 33% of the stake, which means that if they colluded, they could halt the network outright.

Back in the early days of the protocol, my friend John Light asked one of the Solana founders why their nodes don’t need to store the whole chain to validate blocks.

Read the full thread.

Answer: Weak subjectivity.

Rather than independently verify all transactions back to genesis, blocks go through “side channel checks.”

Meaning, they sort of get a pat down from a trusted third party (in this case client developers).

Seems a bit fragile? Perhaps. But allowing nodes to reach consensus without forcing them to store or check transaction history any further back than the current epoch is one of the key innovations that makes Solana so fast.

Avalanche

Anyone can run an Avalanche node easily on a laptop. However, you must have 2,000 AVAX (~$50k at the time of writing) staked to be a validator.

Avalanche’s way of validating transactions is interesting. The Avalanche virtual machine determines whether a transaction is spendable by first asking a random subset of validators.

When there are no conflicts (the vast majority of the time), transactions are finalized in just a few seconds. When there is a conflict, validators cluster around the conflicting transactions, entering into a positive feedback loop until a supermajority (80%) of nodes prefer one transaction over the other.

They call this process snowball consensus.

Here’s a cool demo of how it works.
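The feedback loop can be simulated in a few lines. This is a toy model of the Snowball loop under assumed parameters, not Avalanche’s production settings (which use an 80% sample quorum and different sample sizes):

```python
import random

def snowball_sim(n=50, k=7, alpha=5, beta=10, max_rounds=2000, seed=7):
    """Toy Snowball consensus over two conflicting transactions.
    Each undecided node polls k random peers; if at least alpha of the
    sample prefer one transaction, the node adopts that preference and
    bumps a confidence counter, deciding after beta consecutive wins."""
    rng = random.Random(seed)
    prefs = [rng.choice(["tx_a", "tx_b"]) for _ in range(n)]
    confidence = [0] * n
    decided = [False] * n
    for _ in range(max_rounds):
        if all(decided):
            break
        for i in range(n):
            if decided[i]:
                continue
            sample = rng.sample(range(n), k)
            votes = {"tx_a": 0, "tx_b": 0}
            for j in sample:
                votes[prefs[j]] += 1
            winner = max(votes, key=votes.get)
            if votes[winner] >= alpha:
                if winner != prefs[i]:
                    prefs[i], confidence[i] = winner, 0  # flip preference
                confidence[i] += 1
                if confidence[i] >= beta:
                    decided[i] = True  # finalized
            else:
                confidence[i] = 0  # inconclusive poll resets confidence
    return prefs, all(decided)

prefs, converged = snowball_sim()
print(converged, set(prefs))
```

Even starting from a roughly 50/50 split, the positive feedback loop tips the network toward one transaction, and every node finalizes the same choice without anyone polling the full validator set.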

If I understand it correctly, the mechanism enables two really powerful features:

  1. High transaction throughput, by achieving consensus through a representative sample of validators rather than polling all of them.
  2. Increased security, by removing the ability of a simple majority of nodes to take over and start approving malicious blocks.

Basically, Avalanche prioritizes transaction speed and low cost by introducing a degree of subjectivity. And in order to reduce the security tradeoff of having fewer validators, they created a new kind of consensus model that theoretically blocks a 51 percent attack.

We’ll see if that holds up in practice over time.

Conclusion

The whole modern world is weakly subjective.

We don’t examine the blueprints or the foundation of a high rise building before we hop on the elevator.

We don’t (typically) order lab tests or examine the genetic material of our significant others under a microscope before having kids.

Practically everything we do is necessarily based on some degree of trust. If we had to independently verify all the things, society would cease to function.

That being said, we live in an era of trust erosion. Centralized institutions like banks, media and government have lost our trust. And rightly so.

Bitcoin was invented as an antidote to institutional trust. And many believe weak subjectivity represents an unacceptable compromise on that fundamental social value.

Returning to the question John posed to the cofounder of Solana, I don’t believe there is a technically correct answer.

The question is philosophical.

Should booting up a new node require one to examine every transaction going back to the genesis block? Or can we somehow ascertain the “truth” through a degree of trust, a random sampling, or some other clever means?

Where most people come down on this will depend on their own personal beliefs about human nature and where they think we’re headed as a society.

Do you expect to see a world where everyone is scared, defensive and hiding their money under the mattress with the biggest guns possible pointed out?

If so, then the narrative of maximum security with minimal concessions to speed, usability or interoperability may appeal to you.

On the other hand, if you think we’re all going to make it; that crypto democratizes access to financial innovation and that the promises of web3 are real, then you may be more open to a degree of subjectivity in exchange for scale and/or reducing cost.

I personally find myself more in the latter camp these days.

I see virtue in every blockchain innovation from Bitcoin all the way to rebasing DAOs and I’m excited to see where it all takes us.

Thanks for reading until the end. I work in crypto and think about it non-stop. You can find me on Twitter @danajwright_
