
Vellir #23 → Programmable trust

As confidence in public institutions falters, blockchains offer an alternative via programmable trust. But we will still need to build social and institutional trust into code-based systems through verifications, identity and digital credentials.

This week we’ve teamed up with Jessi Baker, CEO of Provenance, to write about programmable trust. For transparency purposes, while we talk about Provenance in the article, we don’t have a commercial relationship with the firm and aren’t being paid to write about them. Thanks to Jessi for her input! With that out of the way…


Trust is an essential component of modern societies. We trust that the trains will take us to work, the banks will send our money where we want it to go and our governments will keep us safe.

“Trust is a confident relationship with the unknown” - Rachel Botsman

In the past, trust was built largely through individual social connections. People would congregate to trade physical goods and services, with social signals and historical factors important components in creating and maintaining trust.

At some point, we started trusting institutions too – the banks, governments, companies, legal systems, infrastructure providers and telecom networks that we rely on for our economies and social structures to function. But trust in public institutions is diminishing – in both governments and corporations. As we wrote in Vellir #12:

“There is mistrust of institutions, election results, politicians and the mainstream media. You can see this in narratives of fake news, debates over free speech on platforms like Twitter and Facebook, anti-establishment movements including Brexit, and the Capitol Hill riots after the US Presidential elections. In the UK, only 23% of people say that they trust governments to put the national interest first and 44% say they almost never trust politicians to tell the truth.”

This isn’t helped by TV shows like The Capture, the second season of which recently aired on the BBC and which revolves around state use of deepfake technology. Well worth a watch…

I was also reminded recently of Michael Lewis’ (author of Liar's Poker, The Big Short and many others) excellent podcast series ‘Against The Rules’. In it, he looks at what’s happened to fairness in American life through the lens of people who depend on public trust. S1E1 is a fascinating insight into what sports referees have to do to maintain confidence in their ability to adjudicate fairness.

One of the tools that these referees use is technology – in this case rapid video footage to help improve officials’ decision making. Technology can be an important tool in fostering trust.

This extends to software too. By using particular kinds of software, we may not need to trust another person or institution because they aren’t involved in the transaction or contract; it is handled by code or a machine. This is what’s known as programmable trust.

Source: a16z

There were trusted open source protocols like TCP, IP, SMTP and HTTP in the web1 era, and we all still trust and rely on these open standards today for sending emails and browsing the internet. Web2 business models meant creating closed systems on top of these open protocols. These web2 companies are the gatekeepers of the internet, facilitating what we see and do, curating content and monetising user data in the process (see Vellir #22). They have total control and users have no choice but to trust them. But increasingly, internet users aren’t quite so sure about the algorithms that are being used to feed us photos or news; aren’t so sure about whether to trust critical data sitting on one company's servers; and aren’t so sure about whether they want their data to be monetised without having a stake.

Enter web3.

Crypto protocols are self-policing decentralised systems based in code. There is no single human or institution that controls the functioning of these systems and databases – they are decentralised networks of computers with programmes running on top. And the incentive structures in proof of work or proof of stake protocols are designed to prevent tampering. (You can find out more about the technical details in our 100-slide explainer on web3.)
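The tamper-resistance of proof of work can be shown in a few lines: finding a valid nonce for a block is expensive, but anyone can check the result instantly, so no trusted referee is needed. Here is a toy sketch in Python (real networks use far higher difficulty and a proper block format – this is just the idea):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Checking the work takes one hash -- cheap for anyone to do."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Mining takes many attempts; verification takes one.
nonce = mine("block #1: alice pays bob 5")
print(verify("block #1: alice pays bob 5", nonce))  # True
```

Changing even one character of the block data invalidates the proof, which is why tampering with history means redoing the work.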

What this means is that these systems run without human or institutional interference. They are trustless systems where the programmes running on blockchains execute automatically and independently. And this means they can facilitate ‘smart contracts’ or agreements between users without central intermediaries. In other words, they don’t require trust; or rather, they provide programmable trust.
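To make “trust the code, not the intermediary” concrete, here is a toy escrow sketch in Python. It simulates the concept of a smart contract – the release rule is enforced by the programme itself, not a bank – and is not real on-chain code; all names and amounts are illustrative:

```python
class Escrow:
    """Toy 'contract': funds release to the seller only when the coded rule is met."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.balances = {buyer: 0, seller: 0}

    def confirm_delivery(self, caller: str) -> None:
        # Only the buyer can trigger release -- enforced by code, not by an intermediary.
        if caller != self.buyer:
            raise PermissionError("only the buyer can confirm delivery")
        self.balances[self.seller] += self.amount

deal = Escrow("alice", "bob", amount=100)
deal.confirm_delivery("alice")
print(deal.balances["bob"])  # 100
```

On a real blockchain the same logic would live in a deployed contract that no single party can alter, which is what makes the trust “programmable”.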

An early motto inside Google was ‘don’t be evil’, which seemed to imply that evil is a possible outcome. The latest meme for web3 is that ‘can’t be evil > don’t be evil’.

To take a couple of examples…

  • Compound is a software programme that runs on top of the Ethereum network. It facilitates lending and borrowing, and everything down to the algorithmically determined interest rate is run by code. To lend or borrow money, you don’t trust a bank; you trust the code.
  • Arweave is a decentralised file storage solution. It’s like a collectively owned hard drive that permanently stores data. In other words, you don’t need to trust AWS or another cloud provider to store your data; instead, you trust the code that is managed by Arweave’s decentralised protocol.
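As an illustration of an “algorithmically determined interest rate”: lending protocols in the style of Compound typically derive the borrow rate from pool utilisation – the more of the pool that is borrowed, the higher the rate. A toy sketch in Python with illustrative parameters (not Compound’s actual model or values):

```python
def borrow_rate(cash: float, borrows: float,
                base_rate: float = 0.02, multiplier: float = 0.20) -> float:
    """Linear utilisation-based borrow rate.

    Utilisation = borrows / (cash + borrows); the rate rises with it,
    nudging lenders in when capital is scarce. Parameters are illustrative.
    """
    if cash + borrows == 0:
        return base_rate
    utilisation = borrows / (cash + borrows)
    return base_rate + utilisation * multiplier

# Half the pool borrowed -> 2% + 0.5 * 20% = 12% borrow rate
print(borrow_rate(cash=500_000, borrows=500_000))  # 0.12
```

No loan officer sets this number; every participant can read the formula and predict exactly what rate the code will charge.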

Despite the potential of programmable trust in web3, it hasn’t prevented widespread hacks, scams, rug pulls and other forms of illicit activity. Consumers don’t always know which protocols are robust, resilient or regulated. Code is still written by humans, and humans are fallible.

There may be issues with social and institutional trust, but it is possible that code also isn’t enough on its own to provide fully resilient systems. We may therefore need to build social and institutional trust into code too – for instance, via verifications, identity systems and digital credentials/reputation.

The phrase ‘trust, but verify’ is originally a Russian proverb that became widely known after President Reagan used it in the context of nuclear disarmament in the 1980s. While it is sadly still relevant in that context, the inverted idea – ‘don’t trust, verify’ – has caught on in web3.

Companies like KYCDAO and Disco are building solutions to verify identity and reputation respectively for web3 platforms, to further increase trust and reliability.

Provenance is a software solution enabling shoppers to trust sustainability claims made by consumer goods brands in e-commerce. Their Proof Points – digital badges each representing an impact claim, e.g. vegan, recycled packaging or female-owned business – enable a shopper to see the standard, evidence and independent verification behind the claim via a smart contract on the Ethereum blockchain.

Diving into the Provenance case: traditionally, information on an e-commerce website was just something you had to take the brand’s word for. Trust was a large barrier to the adoption of online payments, which was essential to the success of e-commerce. It’s still an issue with consumer reviews, leading to campaigns and crackdowns on fakes and the rise of branded systems, e.g. TrustPilot. There are similar issues around sustainability claims, but with less of a spotlight.

Products marketed as “sustainable” are winning in the market relative to those that aren’t. This presents a huge opportunity for brands, but has also fuelled a growth in greenwashing – misleading or untrue sustainability claims. The UK’s Competition and Markets Authority (CMA) has suggested that 40% of green claims could be misleading.

Blockchains present a new approach to trusting information on the internet. Rather than just trusting a brand’s sustainability claims, a shopper can verify that a third party with greater knowledge (e.g. an auditor, certifier or even a satellite monitoring a supply chain) has vouched for the claim, which is recorded on chain. Provenance is bringing consistency to claims through an open source Framework, so all claims mean the same thing, as well as independent verification in situ with a connection to the third party.

“We didn’t want the shopper to have to trust Provenance (as an organisation) that the claims brands are making were correct as this could lead to corruption, but instead know that Provenance (the system) can help you independently inspect the blockchain to see the verification of the claim directly for yourself” - Jessi Baker
A Provenance Proof Point shows impact in a format where the shopper doesn’t have to trust Provenance or the brand but can independently inspect the verification via a public blockchain.
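The inspection idea can be sketched in a few lines of Python. A dictionary stands in for a smart contract’s on-chain storage; the function names and verifier address are hypothetical, not Provenance’s actual API:

```python
import hashlib

# Stand-in for on-chain contract storage: claim hash -> verifier identity.
on_chain_registry: dict[str, str] = {}

def attest(evidence: bytes, verifier: str) -> str:
    """A third party (e.g. an auditor) records a hash of the evidence on chain."""
    claim_hash = hashlib.sha256(evidence).hexdigest()
    on_chain_registry[claim_hash] = verifier
    return claim_hash

def inspect(evidence: bytes):
    """A shopper re-hashes the evidence and sees who, if anyone, vouched for it."""
    claim_hash = hashlib.sha256(evidence).hexdigest()
    return on_chain_registry.get(claim_hash)

cert = b"recycled packaging certificate, batch 42"
attest(cert, verifier="auditor-0x1234")
print(inspect(cert))                     # auditor-0x1234
print(inspect(b"tampered certificate"))  # None
```

The point of the sketch: the shopper never has to trust the platform’s say-so – re-hashing the evidence and checking the public record is something anyone can do.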

The notion of programmable trust is a powerful concept – trust is no longer monopolised by institutions, governments and corporations, and is being built from the bottom up by individuals and software in decentralised networks. But we will need social and institutional wrappers too. We need code-based verifiable trust.

This is going to unlock all sorts of new human coordination.


Thanks for reading. Please share it with your friends and colleagues, and if you haven't already, sign up to the Vellir community!