Analysis of the Web 3.0 Stack

With this paper, we aim to analyze the different views on Web 3.0 and how these perspectives constitute the technological layers that will complete the Web 3.0 stack. To that end, we have reviewed and taken into consideration some of the most prominent ideas published in the industry. This article introduces the Web 3.0 stack derived from our analysis and discusses the layers where value will accrue in the long term. It is important to note that the Web 3.0 stack is currently at a rather nascent stage of its development; therefore, substantial changes to its structure are likely to occur in the years ahead.

Distributed Ledger Technology and the Evolution of the Internet

The advent of Bitcoin and distributed ledger technology (DLT) has created a plethora of debates on how this new wave of technology will disrupt the world as we know it today. While the early discourse around DLT centered on cryptocurrencies and their potential to replace today’s financial institutions, the introduction of Ethereum expanded that horizon. Ethereum was a substantial innovation in DLT, as it opened up an entirely new concept of decentralized trustless computing. Furthermore, Ethereum’s support for Turing-complete smart contracts enables business processes and contracts to be coded into a decentralized network and executed trustlessly by the participating nodes. Corporations and startups were quick to realize the potential of smart contracts, as many of the third-party services and inefficient processes in their current value chains could be replaced by efficient smart contracts. More importantly, smart contracts facilitate the creation of decentralized applications (dApps), which are used in a peer-to-peer (P2P) manner and have the potential to deliver more value and to be less exploitative than existing centralized applications. In essence, we believe the stage is set for disruption, aided by advancements in the technology, similar to the wave of products and services initiated by advancements in telecommunications and the Internet. This will be achieved by transforming the Internet as we know it today (Web 2.0) into a new decentralized and democratic Internet (Web 3.0), which has been the ultimate vision for the Internet since its infancy.

In the early 1990s, when the World Wide Web (WWW) revolutionized access to information, the Internet (Web 1.0) had very limited capabilities compared to its current avatar (Web 2.0). Web 1.0 was unidirectional: user interaction was read-only, and its websites were static, as only the content curators – but not the users – could write and edit content. As a result, there were only chat applications with limited interaction features compared to current social media platforms, and hardly any multi-sided platform business models existed.

Web 2.0 allowed users to write as well as read data and introduced new services such as video streaming and online gaming, among others. The dynamic websites of Web 2.0 created an ecosystem where many human interactions – personal or economic – take place online. The introduction of web applications and cloud services opened a new avenue of products, services, and user experiences, while the Internet Protocol Suite (TCP/IP) remained the standard technology stack powering it all. Nonetheless, Web 2.0 has many disadvantages. For instance, multi-sided platform business models require an intermediary (e.g., Facebook, Amazon, Uber, etc.) to match the parties on either side of the platform. Even though the intermediary enables efficient transactions and acts as the trusted anchor for customers on both sides of the platform, such a practice concentrates significant power and control in the hands of these intermediaries. This not only creates significant overheads for sellers and limits the value they capture, but also limits the value delivered to customers. The inefficiency of Web 2.0 is more worrisome because the intermediaries depend on technologies that are not proprietary and for which they do not incur any costs; Web 2.0 allows intermediaries to accrue significant economic rent on the marginal investments they have made on top of universally accessible factors of production (in the case of web applications, the Internet Protocol Suite). Moreover, users often suffer sub-optimal experiences due to the advertising revenue model prevalent in Web 2.0, the aggregation of personal data by the platforms, and users having no, or at best limited, control over their personal data. Additionally, there are high opportunity costs involved in shifting from one platform to another due to network effects.

The general vision for Web 3.0 in the crypto ecosystem is a decentralized, democratic web that empowers users via P2P solutions. Apart from reading and writing data, applications in Web 3.0 can also execute on it (i.e., the execution of transactions, arbitrary logic (smart contracts), etc.). Web 3.0 is often referred to as the semantic web. According to the World Wide Web Consortium (W3C), an international community that develops web standards, the semantic web is a web of data; it provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. The data generated today is often published without any coordination among publishers and without any modeling or common vocabularies; standard data exchange formats, models, tools, and guidance will facilitate web-scale data integration and processing. According to Tim Berners-Lee, inventor of the World Wide Web, computers in the semantic web will scan and interpret information on web pages using software agents that crawl through the web searching for relevant information. This will be facilitated through collections of information called ontologies, which are files that define the relationships among a group of terms/keywords. Thus, Web 3.0 will create a web structure geared towards computers that can not only search for keywords but also understand their use in the context of the web page. This will create more value for users through better search results by gathering, analyzing, and presenting the data that is most relevant and contextual to the query. Such an ambitious vision for the future web is only possible with some fundamental changes to the current web setup. Decentralization will be a major characteristic of Web 3.0, as the computers need uninterrupted access to data with no gatekeepers. A substantial amount of the data created today comes from individual users, who neither control their data nor are compensated for it. In order to realize the dream of Web 3.0, there needs to be a system that can store data in a decentralized and distributed manner, an accounting system to keep track of the exchange of value, digital currencies and an integrated payment system, mechanisms that facilitate the trustless interaction of different software agents, a decentralized, distributed, and immutable ledger to save the state of all interactions, common identity standards and platforms to confirm the legitimacy of users, and much more. DLT, as it happens, has a solution for each of these requirements.

From an application developer’s perspective, most DLT solutions sit in the backend. That means, when a user uses a dApp in Web 3.0 to buy a product, exchange their data for a service, or participate in trading, they are not exposed to the underlying DLT protocols that power the application, yet they experience all the benefits of these protocols. The tech stack of Web 3.0 will be focused on P2P technology that removes the middlemen. There are no central points of control, as the new web will not depend on gigantic data servers controlled by a private company. The decentralization of data storage will decrease the threat to user privacy, as no single party controls the data – instead, the data is distributed over the whole network and users are the gatekeepers of their own private data. The decentralization and distribution of data will reduce hacks and data breaches dramatically, as attackers would need to compromise the entire network rather than a single server. The applications of Web 3.0 will be easy to customize and device-agnostic; they will be capable of running on smartphones, TVs, automobiles, microwaves, and smart sensors. At present, applications are operating system (Windows, iOS) or platform (Facebook) specific; this causes frustration for users who use multiple devices and adds expenses for developers who need to issue multiple iterations of and updates to their applications. Further, DLTs, especially the public-facing blockchains, are permission-less: anyone can create an address and interact with the network, and users are not restricted from accessing the benefits of the network on account of geography, income, gender, or any other sociological or demographic factor. This allows a system that is impartial to the users who want access to the products and services and to the developers who want to utilize the infrastructure; more importantly, it allows anyone to become a vested party and enjoy the fruits of a project’s success by owning the tokens that support its network. Lastly, the DLT infrastructure can provide uninterrupted services, as there is no single point of failure. The immutable ledger and the network of many blockchains are resistant to attacks by participants in the network, external actors, or even governments that want to shut down the protocol.

Similar to the web or mobile applications of Web 2.0, there are applications in Web 3.0 called dApps (decentralized applications). Creating normal applications and dApps requires a few things in common: communication, computation, file storage, external data, monetization, and payments. The Web 3.0 ecosystem is growing in such a way that new solutions are competing to establish themselves in one or more of these areas. Such progress has made it possible to create basic dApps that require minimal computation and file storage, whereas it was almost impossible to develop them three or four years ago. Thus, the Internet is gradually transitioning from a centralized client-server web to a partially decentralized web, and it will ultimately become fully decentralized as the technology matures. Even though decentralized architectures are more fault-tolerant and attack-resistant, they are often slower and not suitable for many of the heavyweight applications running on Web 2.0. Therefore, one can expect centralized applications to exist for specific use cases alongside dApps in Web 3.0 – at least in the near to medium term.

In such a scenario, it is important to map the development of the Web 3.0 stack. This allows us to identify the crucial technologies that are filling gaps in the stack, as well as the gaps that are more difficult to fill. Such a high-level understanding not only helps us assess the value of a protocol or technology in the distributed ledger technology ecosystem but also helps us compare it to its peers. Furthermore, such a framework helps identify and classify the different layers according to their level of importance within the Web 3.0 ecosystem. Blockwall has developed a Web 3.0 stack according to its understanding of the ecosystem, incorporating other expert interpretations of the stack. The model is illustrated below:

The Web 3.0 Stack: Blockwall’s View

The Web 3.0 Stack

According to the current level of understanding, the Web 3.0 ecosystem can be separated into six major levels, each with its own sub-levels/components. The sub-levels, built on top of one another, complete Web 3.0’s tech stack. Each level and the sub-levels within it are explained in the following sections.

1. Hardware Level

The hardware level encompasses the hardware infrastructure required to support the Web 3.0 ecosystem. Some of the components of this infrastructure are Trusted Execution Environments (TEE), mining-as-a-service providers, mining equipment manufacturers, and crypto hardware manufacturers.

1.1. Trusted Execution Environments (TEE)

A Trusted Execution Environment (TEE) is an isolated execution environment inside the main processor that runs in parallel to the operating system. In a TEE, the confidentiality and integrity of the loaded data are guaranteed using a hybrid approach that utilizes both hardware and software, making it much more secure than a classic Rich Execution Environment (REE).

For example, Ekiden, built by Oasis Labs, is a neutral platform that will allow many chains to support private, off-chain, TEE-based computation. Ekiden decouples smart contract execution from the underlying consensus protocol to achieve scalability and privacy using Trusted Execution Environments. Microsoft is also considering TEEs for its blockchain-as-a-service (BaaS) offerings.

1.2. Mining as a Service (MaaS)

The decentralized web will require mining as a service as more and more public, private, and consortium blockchains are formed. In fact, many companies already offer MaaS (e.g., DMG Blockchain Solutions, Hashnest, Hashflare, etc.). These services are usually provided to investors or individuals who want to engage in mining on an industrial scale. Cloud-based online mining services (e.g., Argo, Genesis Mining, etc.) are rented out to customers on a contract basis, where the service company earns a steady flow of revenue from the mining activity and the customers, as remote miners, receive a fair share of the profit. Some MaaS companies also provide tokenized mining solutions (e.g., The Mine), where the customer holds a token issued by the company and receives mining proceeds on a monthly basis.

1.3. Mining Hardware and Equipment Manufacturers

These are the companies that manufacture the hardware and equipment required for mining. The hardware consists largely of ASIC (Application-Specific Integrated Circuit) miners, which are designed specifically to mine Bitcoin as well as certain altcoins. Some of the mining hardware companies are Halong Mining, Bitmain, and BitFury. The mining equipment generally comprises the special power supply units needed to funnel electricity efficiently to the mining rigs and the cooling fans needed to keep a rig at an optimal temperature.

1.4. Crypto Hardware Manufacturers

These manufacturers create products directed at the end user, such as hardware wallets and blockchain smartphones, that connect easily to the blockchain ecosystem and provide a seamless customer experience. This industry is set to grow as Web 3.0 matures. Hardware wallet manufacturers include Ledger, Trezor, and KeepKey; blockchain smartphones are produced by Sirin Labs and HTC.

2. Internet and Network Level

The Internet and Network Level encompasses the protocols used by the current Web 2.0 and the P2P protocols used to form a network of nodes/computers in DLT systems. The components in this layer are the Internet Protocol Suite (TCP/IP) and the P2P Internet Overlay Protocols, which mainly facilitate the communication between the nodes in the network.

2.1. Internet Protocol Suite (TCP/IP)

The Internet Protocol Suite defines the communication protocols used in the current Web 2.0 that are necessary for computers to communicate with each other through the Internet and for centralized applications to function. It is commonly known as TCP/IP because the foundational protocols of the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP). These sets of rules for exchanging information over Web 2.0 are classified into four protocol layers, forming a four-layer protocol stack in which each protocol layer leverages the services of the layers below it. The four layers, from bottom to top, are listed below; a short sketch of how the layers interact follows the list.

  • Network Access/Link Layer: This is the lowest level of the TCP/IP model and encompasses the communication protocols that operate only on the link. The link is the physical component that connects all nodes or hosts to the network, and the protocols define the details of how data is physically sent through the network. In other words, this layer sets the protocols that help the hardware signal the bits electrically or optically onto network media such as coaxial cable, optical fiber, or twisted-pair copper wire – for instance, Ethernet, Token Ring, FDDI (Fiber Distributed Data Interface), ARP (Address Resolution Protocol), etc.

  • Internet Layer: The datagrams (packets) from the originating hosts are transported across network boundaries using the group of internetworking methods, protocols, and specifications in the Internet layer. The destination of a datagram is specified by its IP address. The Internet layer packs data into IP datagrams, which contain the source and destination address information needed to route the packets – for example, IP (Internet Protocol), ICMP (Internet Control Message Protocol), RARP (Reverse Address Resolution Protocol), IGMP (Internet Group Management Protocol), etc.

  • Transport Layer: The protocols in this layer provide host-to-host communication services for applications; the two primary transport layer protocols are the Transmission Control Protocol (TCP), a reliable connection-oriented transport service that provides end-to-end reliability, resequencing, and flow control, and the User Datagram Protocol (UDP), a connectionless transport service. Messages sent from an application to another computer on the Internet are routed to the correct application on the destination computer by TCP. Port numbers are used for this routing, where each port is a separate channel on the computer (e.g., the web browser and the email client use different port numbers).

  • Application Layer: Encompasses the general communication protocols and interface methods used in process-to-process communication among the hosts in the network. Communication is standardized by the higher-level protocols in the application layer before the transport layer establishes host-to-host data transfer channels – for example, DNS (Domain Name System), HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), etc.
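
To make the layering tangible, the sketch below (TypeScript on Node.js, using only the built-in net module) writes an application-layer HTTP request directly onto a transport-layer TCP connection; the host name, port, and request line are illustrative assumptions.

```ts
// A hand-rolled HTTP request over a raw TCP socket: the application-layer
// message (HTTP) rides on a transport-layer connection (TCP), which the lower
// layers deliver as IP packets over the physical link.
import { createConnection } from "node:net";

const socket = createConnection({ host: "example.com", port: 80 }, () => {
  // Application layer: an HTTP/1.1 request written as plain text onto the TCP stream.
  socket.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
});

socket.on("data", (chunk) => process.stdout.write(chunk)); // response bytes arrive in order thanks to TCP
socket.on("end", () => console.log("\n-- connection closed --"));
```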

It is unrealistic to expect significant changes to the Internet Protocol Suite used in Web 2.0. The properties needed for a decentralized web will probably be built on top of the existing TCP/IP in such a way that Web 2.0 and Web 3.0 can co-exist. Over time, most of the communication protocols needed for a P2P network will manifest as additional P2P Internet Overlay Protocols. Nevertheless, some of these overlay protocols could become an essential component of the Internet Protocol Suite itself. Some of the identified protocols are:

  • Decentralized DNS:

Online information is accessed by humans through domain names such as medium.com or wikipedia.org; web browsers, however, interact through IP addresses. The Domain Name System (DNS) translates domain names into IP addresses so that browsers can load Internet resources. In the Internet Protocol Suite, DNS belongs to the application layer. Currently, DNS is centralized in the sense that the DNS records for a domain are saved on a server. There is no reason why the DNS of Web 3.0 cannot be decentralized, as demonstrated by Handshake, a decentralized, permission-less naming protocol compatible with DNS. It is a UTXO (Unspent Transaction Output)-based blockchain protocol (like Bitcoin) that manages the registration, renewal, and transfer of DNS top-level domains (TLDs).
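
For contrast, the snippet below shows today's centralized resolution path using Node.js's built-in dns module (the domain queried is arbitrary); a decentralized DNS such as Handshake would keep this lookup interface but anchor the authoritative records on a blockchain instead of a central registry.

```ts
// Resolving a human-readable name to IP addresses via the existing (centralized) DNS.
import { resolve4 } from "node:dns/promises";

async function main(): Promise<void> {
  const addresses = await resolve4("wikipedia.org"); // name -> IPv4 addresses
  console.log(addresses);
}

main().catch(console.error);
```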

  • Mix Network Packet Routing

These protocols use chains of proxy servers known as mixes to create hard-to-trace communications. The mixes receive messages from multiple senders, shuffle them, and send them on to the next destination, possibly another mix node, in random order. Eavesdroppers cannot trace end-to-end communications because the link between the source of a request and its destination is broken by the mix network. For example, Kovri, developed by Monero and based on I2P’s open specification, uses garlic encryption and routing to create a private, protected overlay network that helps users hide their geographical location and IP address.
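
The sketch below is a deliberately simplified, hypothetical mix network in TypeScript (Node's built-in crypto, AES-256-GCM): the sender wraps the message in one encryption layer per mix, and each mix peels exactly one layer, learning only the next hop. Real mixes add batching, padding, and asymmetric key exchange, all omitted here; Kovri/I2P use garlic routing over asymmetric keys rather than the shared symmetric keys assumed in this sketch.

```ts
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

type Layer = { next: string; payload: string };

// Encrypt a layer: output is base64(iv | auth tag | ciphertext).
function seal(key: Buffer, plaintext: Buffer): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64");
}

// Decrypt one layer with the mix node's key.
function open(key: Buffer, sealed: string): Buffer {
  const buf = Buffer.from(sealed, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ct = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]);
}

// Three mix nodes, each with its own symmetric key (key exchange omitted).
const mixes = ["mixA", "mixB", "mixC"].map((id) => ({ id, key: randomBytes(32) }));

// The sender builds the onion from the innermost layer outwards.
let onion = seal(mixes[2].key, Buffer.from(JSON.stringify({ next: "destination", payload: "hello" })));
for (let i = 1; i >= 0; i--) {
  onion = seal(mixes[i].key, Buffer.from(JSON.stringify({ next: mixes[i + 1].id, payload: onion })));
}

// Each mix peels exactly one layer and only learns the next hop.
let hop = onion;
for (const mix of mixes) {
  const layer = JSON.parse(open(mix.key, hop).toString()) as Layer;
  console.log(`${mix.id} forwards to ${layer.next}`);
  hop = layer.payload;
}
```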

  • Block Delivery Network

Block Delivery Networks or Block Distribution Networks can be seen as novel, blockchain-specific Content Delivery Networks (CDNs). “A Content Delivery Network (CDN) is a system of distributed servers (network) that deliver pages and other Web content to a user, based on the geographic locations of the user, the origin of the webpage, and the content delivery server”. The main purpose of a CDN is to reduce latency in the network, which is affected by a number of factors; in all cases, however, the delay is impacted by the physical distance between the user and the website’s hosting server. A CDN virtually shortens that physical distance and improves site rendering speed and performance. In the blockchain context, the contents delivered are the blocks exchanged between the peers in a network. Block Delivery Networks are neutral transport layers that run underneath blockchain systems and ideally should be protocol-, coin-, and blockchain-agnostic. They facilitate the propagation of transactions and blocks across a trustless P2P network.

For example, bloXroute is a Blockchain Distribution Network (BDN) that increases on-chain throughput through an efficient broadcast primitive without affecting a blockchain’s functionality or the balance of power among its current participants. The legacy P2P network is used to audit the BDN and its neutrality.

Similarly, Gladius is a software solution that aggregates unused bandwidth to provide a distributed and decentralized CDN, WAF (Web Application Firewall), and DDoS (Distributed Denial of Service) protection solution via blockchain.

2.2. Peer-to-Peer (P2P) Internet Overlay Protocols

In a P2P network, equally privileged, equipotent participants form a distributed application architecture that partitions tasks and resources among the peers. Each node in the P2P network contributes a portion of its processing power, disk storage, or network bandwidth to form a pool of resources that can be accessed by other network participants without the need for central coordination (servers or hosts) or any single point of failure. The peers are thus both the suppliers and the consumers of the resources.

The nodes in a P2P network form a subnetwork among all the nodes in the physical network by implementing some form of virtual overlay network on top of the physical network topology. The P2P network relies on TCP/IP to exchange data, but at the application level the peers communicate with each other directly via logical overlay links. The overlay links consist of paths composed of underlying physical links and are used for indexing, peer discovery, and efficient routing of data (a small sketch of overlay routing follows the list below). Many blockchain systems use different P2P protocols to form a P2P network, which is a prerequisite for the participants in a DLT system to communicate with each other. Some of the P2P protocols used by blockchains today are:

  • DevP2P: DevP2P, developed by the Ethereum Foundation, is an application-layer networking protocol for communication among nodes in a P2P network. The nodes communicate by sending messages using RLPx (a TCP-based transport protocol used for communication among Ethereum nodes). DevP2P powers Ethereum, Whisper (the communication protocol developed by Ethereum), and Swarm (Ethereum's decentralized file system).

  • LibP2P: LibP2P is a networking stack and library modularized out of the IPFS project and bundled separately for other tools to use. LibP2P encompasses a variety of transport and P2P protocols and is interoperable with most existing network protocols. IPFS and Polkadot are powered by this modular networking suite.
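
As a rough illustration of how peers route queries across such an overlay, the hypothetical sketch below uses the XOR-distance metric popularized by Kademlia-style DHTs (the kind of routing libp2p's DHT builds on); the peer names and the content key are made up for the example.

```ts
// Overlay routing by XOR distance: each peer derives an ID from a hash, and a
// lookup is forwarded to the known peer whose ID is closest to the key's hash.
import { createHash } from "node:crypto";

const idOf = (name: string): bigint =>
  BigInt("0x" + createHash("sha256").update(name).digest("hex"));

const xorDistance = (a: bigint, b: bigint): bigint => a ^ b;

// A node's routing table: a handful of peers it already knows in the overlay.
const knownPeers = ["peer-1", "peer-2", "peer-3", "peer-4"].map((name) => ({
  name,
  id: idOf(name),
}));

// To locate content addressed by a key, forward the query to the closest known peer.
const key = idOf("some-content-key");
const nextHop = knownPeers.reduce((best, peer) =>
  xorDistance(peer.id, key) < xorDistance(best.id, key) ? peer : best
);
console.log(`route lookup via ${nextHop.name}`);
```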

3. DLT System Level

The DLT system level encompasses the most crucial components needed for a decentralized, trustless computing infrastructure on which the decentralized applications (dApps) of Web 3.0 can be developed. Any computing system – whether centralized or decentralized – requires three basic elements: storage, processing, and communication. The protocols in the Internet and Network Level combined with the components in the DLT System Level are sufficient to provide the basic elements of storage, processing, and communication needed for any DLT system to act as an infrastructure for dApp development. The DLT system level provides processing for dApps through the state transition function and the consensus mechanism, and it facilitates the storage and communication requirements of dApps by providing protocols for file systems, databases, and transient messaging. The components of the DLT system are usually provided by a layer 1 protocol such as Ethereum, EOS, or NEO. Thus, the sub-levels/components at this level are the state transition machine, the consensus mechanism, data distribution protocols, and transient data messaging. These components are discussed in more detail in the following sections.

3.1. State Transition Machine

State transition machines are required to execute the smart contracts that power the dApps. The state transition function of a DLT system is defined using a platform-neutral computation description language. State machines are abstract systems with a set of rules governing the transitions that occur within the system. A state machine stores the status of something at a given time; the status changes based on inputs, producing the resulting output according to the implemented rules. There are two types of state machines: finite and infinite. Finite state machines have a finite number of states, transitions, and actions, whereas an infinite state machine can have an endless number of states and transitions. A blockchain is an infinite state machine (with regular transition times) that uses cryptography to immutably link new states to the chain of old states; the states are counted and arranged in a directed graph structure according to the consensus method agreed upon by the participants of the P2P network. Blockchains are therefore rules-based state transition systems whose rules are stated through a platform-neutral computation description language (a minimal sketch of such a machine follows the list below). The state transition machines used by current blockchains are:

  • Ethereum Virtual Machine (EVM) is the virtual machine that executes the logic of the Ethereum network and mainly supports the execution of the Solidity programming language (plus a couple of other, rarely used smart contract languages). There are several implementations of the EVM in different languages such as Rust, Python, and Java. The EVM is still the most widely used virtual machine in the industry and is used by Ethereum (1.0), Ethermint, Hashgraph, WANchain, and many others. Ethereum itself, the most prominent blockchain using the EVM and the project that built it, has stated that it intends to adapt Solidity so it can be compiled into WebAssembly, in which case it will be executable on a greater range of computer platforms (better support for various nodes) and support stronger security and better functionality. It is expected that most blockchain projects using the EVM will follow suit and implement a WebAssembly interpreter rather than Ethereum's homegrown interpreter as it is being discontinued.

  • WebAssembly (WASM) is an important, up-and-coming binary instruction format for a stack-based virtual machine. It is considered by some to be the biggest advancement in web technology in a decade. WASM is a compiler target, meaning that developers do not need to write WebAssembly directly; rather, they can write in the language of their choice, which is then compiled into WebAssembly bytecode. The bytecode is then run on the client – typically a web browser – where it is translated into native machine code and executed at high speed. WASM is quickly gaining traction among DLTs and has been or will be implemented by a growing number of prominent projects such as Dfinity, Polkadot, EOS, and Ethereum 2.0. WASM can currently be compiled from the programming languages C, C++, and Rust, but the list of compatible languages is growing quickly, with support for several other languages already at an experimental or developmental stage.

  • Unspent Transaction Output (UTXO) is an output of a blockchain transaction that has not yet been spent – that is, an output that can be used as an input to a new transaction. Bitcoin uses the UTXO model, in which all currently existing unspent transaction outputs are stored in the UTXO set; this set is updated as new transactions occur on the blockchain.

  • LLVM: The LLVM Foundation orchestrates the open-source development of a collection of compilers, interpreters, and debuggers. LLVM allows different programming languages to be compiled into an intermediate representation that can be optimized before it is translated into the target machine code, which can then be executed natively on a given system (or node). It is used in Cardano, Solana, and others.

 

  • Custom state transition machines: A blockchain can be referred to as a state machine, as its state is stored within its immutable ledger. A transfer of token ownership, or the recording of the result of a smart contract execution, may be seen as a state transition once consensus is reached to add the new information to the network's ledger. Some blockchain projects, such as Kadena, Tezos, Rchain, and Coda, have their own custom sets of rules that dictate how the state of their ledgers may change; these blockchains can be referred to as custom state transition machines.
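
The minimal sketch below (TypeScript, illustrative only; the account names and transaction format are invented for the example) shows the essence of a rules-based state transition machine: a deterministic function applies transactions to the previous state, and each new state is hash-linked to its parent.

```ts
// A toy blockchain-as-state-machine: every node applying the same rules to the
// same transactions reaches the same state, and hashes link the history together.
import { createHash } from "node:crypto";

type State = Record<string, number>;              // account -> balance
type Tx = { from: string; to: string; amount: number };

const hash = (data: unknown): string =>
  createHash("sha256").update(JSON.stringify(data)).digest("hex");

// The state transition function: the "rules" that every node applies identically.
function applyTx(state: State, tx: Tx): State {
  if ((state[tx.from] ?? 0) < tx.amount) throw new Error("insufficient balance");
  return {
    ...state,
    [tx.from]: (state[tx.from] ?? 0) - tx.amount,
    [tx.to]: (state[tx.to] ?? 0) + tx.amount,
  };
}

// Link each new state to the chain of old states.
let state: State = { alice: 100, bob: 0 };
let prevHash = hash({ genesis: true });
for (const tx of [{ from: "alice", to: "bob", amount: 30 }]) {
  state = applyTx(state, tx);
  prevHash = hash({ prevHash, tx, stateRoot: hash(state) });
  console.log(prevHash, state);
}
```
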
3.2. Consensus Mechanism 

A consensus mechanism is a fault-tolerant mechanism used in DLT systems to achieve the necessary agreement on a single data value or a single state of the network among distributed processes or multi-agent systems. Consensus algorithms are designed to achieve reliability in a network involving multiple unreliable nodes. Far more blockchain projects work on the consensus layer than on the other layers; this is because the consensus layer is the single biggest bottleneck in blockchains, and consensus schemes are bound by fundamental trade-offs. The consensus mechanism is provided by blockchains or Directed Acyclic Graphs (DAGs) that allow logic to be processed in a trustless way. These protocols employ different consensus mechanisms that provide different levels of security, scalability, and decentralization (a toy Proof-of-Work sketch follows the list below). Some of the DLT systems and the consensus mechanisms used in them are:

  • Bitcoin: Bitcoin is the longest-existing crypto asset and focuses on decentralization, immutability, censorship resistance, and scarcity, which together constitute the use case of a digital currency. Bitcoin uses the Proof-of-Work (PoW) consensus.

  • Ethereum: Ethereum is a smart contract platform with a Turing-complete virtual machine that allows for programmable contracts, enabling the development of dApps on top of the protocol. Ethereum uses the Proof-of-Work consensus but will eventually adopt the Proof-of-Stake (PoS) consensus.

  • EOS: A general-purpose smart contract platform for large-scale dApps that focuses on providing a high-throughput, zero-fee layer 1 infrastructure and developer-friendly dApp development toolkits. A main competitor of Ethereum, EOS uses the Delegated Proof-of-Stake (DPoS) consensus.

  • Hedera Hashgraph: Hashgraph is a DLT system based on a Directed Acyclic Graph (DAG) that provides a dApp development platform with very high scalability and instant finality, which are prerequisites for IoT solutions. Hashgraph uses an Asynchronous Byzantine Fault Tolerant consensus mechanism.
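
As a feel for why Proof-of-Work makes history expensive to rewrite, here is a toy mining loop (TypeScript; the header string and difficulty are arbitrary choices for illustration, and real miners hash binary block headers rather than strings).

```ts
// Brute-force a nonce until the block header's hash falls below a difficulty
// target; a smaller target means more expected attempts, i.e. more "work".
import { createHash } from "node:crypto";

function mine(header: string, difficultyBits: number): { nonce: number; hash: string } {
  const target = 2n ** BigInt(256 - difficultyBits);
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256").update(`${header}:${nonce}`).digest("hex");
    if (BigInt("0x" + digest) < target) return { nonce, hash: digest };
  }
}

console.log(mine("prev-hash|tx-root|timestamp", 20)); // ~2^20 hashing attempts on average
```
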
3.3. Data Distribution Protocols and Transient Data Messaging

Apart from the transaction data that is saved immutably in the ledger of a DLT system, other storage components are needed in a DLT system. Such storage is crucial for developing applications on top of these DLT infrastructures. Any application needs a file system to store large files (movies, mp3s, large datasets) organized in a hierarchy of directories and files; in the Web 3.0 ecosystem this file system needs to be decentralized and distributed so that there is no single point of failure and no gatekeeper who can control access to the data. Some of the decentralized file systems are IPFS, Tahoe-LAFS, and Ethereum Swarm. Applications also make use of databases that specialize in storing structured metadata, for example as tables (relational databases), document stores (e.g., JSON), key-value stores, time series, or graphs, and in rapidly retrieving that data via queries (e.g., SQL). BigchainDB is one such decentralized database, used specifically as a document store, and IPDB is a public network instance of BigchainDB with governance. In addition to these protocols that provide storage capability, a DLT system should also have protocols that allow for the transfer of transient data, which essentially comprises the messages sent between the applications built on top of the DLT infrastructure. Therefore, the components of this layer are the data distribution protocols and the transient data messaging protocols.

Data Distribution Protocols are protocols that allow the transfer of static content, such as the images or text of the application using the protocol; more complex data structures can be built on top of them (a short content-addressing sketch follows the examples below). Examples of data distribution protocols are:

  • IPFS: The InterPlanetary File System is a P2P file system to connect all computing devices with the same file system. “IPFS provides a high throughput content-addressed block storage model, with content addressed hyperlinks. This forms a generalized Merkle DAG, a data structure upon which one can build versioned file systems or blockchains”. 

  • BigchainDB: BigchainDB is similar to a database but with blockchain characteristics. It provides high throughput, low latency, powerful query functionality, decentralized control, immutable data storage and built-in asset support. BigchainDB can be used to deploy blockchain proof-of-concepts, platforms, and applications with a blockchain database. 

  • Bluzelle: Bluzelle is a decentralized database that manages and stores data through sharding to achieve security and scale. It also provides a framework that supports private data control, data syndication and decentralized web infrastructure. 

  • Swarm: “Swarm is a distributed storage platform and content distribution service, a native base layer service of the Ethereum Web3 stack. The primary objective of Swarm is to provide a sufficiently decentralized and redundant store of Ethereum’s public record, and in particular, to store and distribute dApp code and data as well as blockchain data”.
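
A key idea behind file systems such as IPFS and Swarm is content addressing: data is requested by the hash of its bytes rather than by a server location. The sketch below (TypeScript; the in-memory Map stands in for a distributed block store and is purely illustrative) shows why this makes data self-verifying.

```ts
// Content addressing in a nutshell: the address is the hash of the content,
// so any peer can serve the bytes and any tampering is immediately detectable.
import { createHash } from "node:crypto";

const store = new Map<string, Buffer>(); // stand-in for a distributed block store

function put(content: Buffer): string {
  const cid = createHash("sha256").update(content).digest("hex"); // content identifier
  store.set(cid, content);
  return cid;
}

function get(cid: string): Buffer {
  const content = store.get(cid);
  if (!content) throw new Error("block not found");
  if (createHash("sha256").update(content).digest("hex") !== cid) throw new Error("corrupt block");
  return content;
}

const cid = put(Buffer.from("hello web 3.0"));
console.log(cid, get(cid).toString());
```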

Transient Data Messaging protocols are communication protocols that facilitate the sending of transient data. In computing, 'transient' describes any element of a system that is temporary (the term is also used for transient applications, i.e., end-user software displayed with a transient application posture); transient data is therefore discarded once it is no longer needed for the computation. At the application level, this will be the messages transferred between the users of an application (e.g., the customer and the driver in a ride-sharing application). Once the message from the customer has been broadcast to the service providers (e.g., the drivers), it no longer needs to be stored. Examples of transient data messaging protocols are:

  • Whisper: Whisper is a communication protocol that allows dApps to communicate with each other. These communications could be, for example, a sell order on a currency exchange dApp, messages between dApps collaborating on a transaction, or general communication between dApps. It has a low-level API that is only exposed to dApps, never to users. Whisper is not designed for large data transfers, has no reliable methods for tracing packets, and is not designed for real-time communication (RTC).

  • Matrix: Matrix is a new ecosystem for open, federated Instant Messaging and VoIP (Voice over Internet Protocol). Matrix specifies a set of pragmatic RESTful HTTP JSON APIs as an open standard. The Matrix ecosystem is open source, and the APIs can be implemented in a wide variety of servers, services, and clients in order to build a new generation of fully open and interoperable messaging and VoIP apps.

4. Optional Component Level

The optional component level encompasses two sub-levels that are mandatory neither for the functioning of a DLT system nor for dApp development on top of the infrastructure protocols. The first sub-level, layer 2 protocols, provides extra functions or features to a layer 1 protocol or enhances its existing features. They are known as layer 2 protocols because they are complementary to a layer 1 protocol and are often built on top of one. Additionally, there are protocols that facilitate the exchange of value and state between different DLT systems, essentially allowing interoperability between dApps that are built on different layer 1 protocols. They are known as interoperability protocols and form the final sub-level of the optional component level.

4.1. Layer 2 Protocols

Layer 2 protocols amend or extend the capabilities of the layer 1 protocols. These meta-protocols provide enhanced features such as scaling, computation, and encrypted messaging. Some protocols have been identified and classified below according to their characteristics, but the list is not exhaustive, as more layer 2 protocols will emerge as DLT technology matures. The different layer 2 protocols identified are:

4.1.1. State Channels

State channels are a technique for performing transactions and other state transitions 'off-chain', on a second layer built on top of a blockchain. Payment channels are the most familiar example (e.g., the Lightning Network); state channels are the more general form, since they can be used both for payments and for arbitrary state updates (changes inside a smart contract) on a blockchain. State channels allow the formation of private channels among the transacting parties after they lock up their respective state on the blockchain; transactions can then be sent among the parties through the channel with instant finality, and the final state of the accounts is settled on-chain once the channel is closed (a stripped-down sketch follows the examples below). Examples of state channels are:

  • Counterfactual: A generalized framework for native state channels integration in Ethereum-based decentralized applications.

  • Raiden Network: An off-chain scaling solution, enabling near-instant, low-fee, and scalable payments. It’s complementary to the Ethereum blockchain and works with any ERC20 compatible token.

  • Lightning Network: Lightning is a decentralized network using smart contract functionality in the blockchain to enable instant payments on the Bitcoin blockchain.

  • FunFair: A state channel opened for the duration of a gaming session, supporting custom gaming messages between the FunFair client and server. The only transactions on the blockchain occur at the beginning and end of the gaming session.
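
The following sketch (TypeScript, heavily simplified and purely illustrative: the on-chain deposit, digital signatures, and dispute period that real channels rely on are all omitted) shows the basic life cycle of a payment channel: open, update off-chain, settle the latest state on-chain.

```ts
// Off-chain balance updates: each payment bumps a nonce; only the highest-nonce
// state ever touches the chain, which is why channels increase throughput.
type ChannelState = { nonce: number; balances: { alice: number; bob: number } };

// 1. Open: both parties lock funds on-chain (represented here by the opening state).
let latest: ChannelState = { nonce: 0, balances: { alice: 50, bob: 50 } };

// 2. Update off-chain: a payment shifts balances and increments the nonce.
function pay(state: ChannelState, from: "alice" | "bob", to: "alice" | "bob", amount: number): ChannelState {
  if (state.balances[from] < amount) throw new Error("insufficient channel balance");
  const balances = { ...state.balances };
  balances[from] -= amount;
  balances[to] += amount;
  return { nonce: state.nonce + 1, balances };
}

latest = pay(latest, "alice", "bob", 10);
latest = pay(latest, "bob", "alice", 5);

// 3. Close: only the final, highest-nonce state is submitted for on-chain settlement.
console.log(`settle on-chain at nonce ${latest.nonce}:`, latest.balances);
```
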
4.1.2. Plasma Protocols and Side Chains

A side chain is a separate blockchain that is attached to its parent blockchain (mainchain) using a two-way peg, which enables the interchangeability of assets between the parent blockchain and the side chain. Plasma chains are similar to side chains in that they also have their own parent chains. The parent blockchains launch child blockchains, and the child blockchains can in turn launch their own child blockchains, forming a hierarchy of interlinked blockchains in which the main parent chain is the ultimate authority. Side chains and plasma chains are incentivized to execute smart contracts on their own chains (off the mainchain), thereby increasing the scalability of their parent chain (a toy two-way peg is sketched after the examples below). Examples are:

  • Loom network: Loom network is a layer 2 scaling solution for Ethereum. It is a network of DPoS sidechains, which allow for highly scalable games and user-facing dApps while still being backed by the security of Ethereum.

  • OmiseGo: The OMG network is a scaling solution for finance on Ethereum, enabling transparent, P2P transactions in real-time. The decentralized network facilitates self-sovereign financial services across geographies, asset classes, and applications.

  • Rootstock: Rootstock is an open-source smart contract platform with a two-way peg to Bitcoin that also rewards Bitcoin miners via merge mining. Rootstock aims to add value and functionality to the Bitcoin ecosystem by enabling smart contracts, near-instant payments, and higher scalability.
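
Conceptually, a two-way peg locks assets on the main chain, mints an equivalent amount on the side chain, and reverses the process on the way back. The sketch below is a toy model of that flow (TypeScript; real pegs rely on SPV proofs, federations, or merge mining rather than a shared in-memory ledger, so treat every detail here as an assumption).

```ts
// Peg-in: lock on the main chain, mint on the side chain.
// Peg-out: burn on the side chain, unlock on the main chain.
type Ledger = Map<string, number>;

const mainChainLocked: Ledger = new Map(); // funds held by the peg on the main chain
const sideChainSupply: Ledger = new Map(); // pegged tokens minted on the side chain

function pegIn(user: string, amount: number): void {
  mainChainLocked.set(user, (mainChainLocked.get(user) ?? 0) + amount);
  sideChainSupply.set(user, (sideChainSupply.get(user) ?? 0) + amount);
}

function pegOut(user: string, amount: number): void {
  if ((sideChainSupply.get(user) ?? 0) < amount) throw new Error("insufficient pegged balance");
  sideChainSupply.set(user, (sideChainSupply.get(user) ?? 0) - amount); // burn
  mainChainLocked.set(user, (mainChainLocked.get(user) ?? 0) - amount); // unlock
}

pegIn("alice", 10);
pegOut("alice", 4);
console.log(mainChainLocked.get("alice"), sideChainSupply.get("alice")); // 6 6
```
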
4.1.3. Encrypted Storage

Encrypted storage encrypts and decrypts backed-up and archived data, whether in transit or on storage media. Encrypted storage provides storage security, which is an important requirement for enterprise blockchains and storage networks.

 

  • Enigma Protocol: Enigma is a permission-less P2P network that allows the execution of 'secret contracts' with strong correctness and privacy guarantees. The Enigma protocol is currently limited to Ethereum.

  • Keep: Keep is an off-chain container for private data. Keeps allow contracts to access stored private data that can be bought, sold, transferred, and revealed on the public blockchain. Currently it is a privacy layer for Ethereum.  
4.1.4. Storage Incentivization

Storage incentivization mechanisms are built on top of data distribution protocols. They incentivize individuals and organizations to store data on their machines. Some of these mechanisms are dApps that reward storage providers with a native token.

  • Filecoin: The Filecoin protocol is a decentralized storage network built on a blockchain with a native token. Filecoin allows clients to spend tokens to store and retrieve data, while miners earn tokens by storing and serving data. Users can commit their unused storage to the network and become Filecoin miners.

  • Sia: Sia is a decentralized cloud storage platform secured using blockchain technology. Sia’s decentralized storage marketplace offers a cheaper option than existing cloud solutions.

  • Storj: Storj is a decentralized cloud object storage network that is affordable, easy to use, private, and secure. The Storj network uses client-side encryption so that only the data owners can access their files. 
4.1.5. Heavy Computation or Distributed Computing 

Heavy computation or distributed computing protocols and applications provide users with reliable and cheap computation, secured by a distributed network.

  • Golem: Golem is a global, open source, decentralized supercomputer that anyone can access. It is made up of the combined power of users' machines, from PCs to entire data centers.

  • TrueBit: TrueBit gives Ethereum smart contracts a computational boost that allows them to conduct heavy or complex computation off-chain. It differs from other off-chain solutions such as state channels and Plasma, which are more useful for increasing total transaction throughput.
4.1.6. Distributed Secret Management

Distributed secret management encrypts information while distributing the keys to selected authorities who can access it.

  • Parity Secret Store: The Parity Secret Store is core technology built around distributed key generation (DKG) cryptographic protocols. It also offers distributed key storage and threshold retrieval according to blockchain permissions.
4.1.7. Oracles

An oracle, in the context of blockchains and smart contracts, is an agent that finds and verifies real-world occurrences and submits this information to a blockchain so that it can be used by smart contracts (a bare-bones oracle pattern is sketched after the examples below).

  • Oraclize.it: Oraclize solves the 'walled garden' problem of smart contracts, which cannot fetch external data on their own. Oraclize acts as a data carrier and a reliable connection between web APIs and dApps. Good behavior in the network is enforced by cryptographic proofs.

  • ChainLink: ChainLink is blockchain middleware that provides tamper-proof inputs and outputs for complex smart contracts on any blockchain. 
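
The pattern shared by these services can be reduced to an off-chain agent pushing data to an on-chain consumer that only trusts whitelisted reporters. The sketch below models that in TypeScript (the class, reporter name, and price value are all invented for illustration; real oracle networks add cryptographic authenticity proofs and aggregation across many reporters).

```ts
// A bare-bones oracle pattern: smart contracts cannot call external APIs, so an
// off-chain reporter submits the data, and the contract-side code gatekeeps it.
type Report = { value: number; timestamp: number; reporter: string };

class PriceConsumerContract {
  private latest?: Report;
  constructor(private trustedReporter: string) {}

  submit(report: Report): void {
    if (report.reporter !== this.trustedReporter) throw new Error("unauthorized reporter");
    this.latest = report; // on a real chain this would be a state-changing transaction
  }

  latestPrice(): number {
    if (!this.latest) throw new Error("no data reported yet");
    return this.latest.value;
  }
}

// The off-chain agent fetches an external value (omitted here) and pushes it on-chain.
const contract = new PriceConsumerContract("oracle-node-1");
contract.submit({ value: 42_000, timestamp: Date.now(), reporter: "oracle-node-1" });
console.log(contract.latestPrice());
```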

The list of layer 2 protocols is not exhaustive; more solutions will be created as the technology matures.

4.2. Interoperability Protocols

Interoperability protocols facilitate communication between different DLT systems by allowing the transfer of value (tokens) or arbitrary cross-chain communication (smart contracts). Different projects in the blockchain ecosystem take different approaches to achieve this capability; nevertheless, no project has yet come forward with a working solution. Therefore, it is quite tricky to identify the interaction point of this component within the Web 3.0 stack. As a consequence, we currently consider this sub-level a separate entity from the layer 2 protocols; the exact positioning of this component in the Web 3.0 stack will manifest itself only when the interoperability projects mature. Some of the interoperability projects are:

  • Polkadot: Polkadot is a protocol that allows independent blockchains to exchange information (both tokens and smart contract messages). It is an inter-chain blockchain protocol which, unlike Internet messaging protocols (e.g., TCP/IP), also enforces the order and validity of the messages exchanged between chains. By creating a general environment for multiple state machines, the interoperability protocol also benefits from scalability.

  • Aion: Aion aims to become the common interoperability protocol that connects blockchains so that they can communicate arbitrary data (smart contracts) as well as transfer value (tokens) using a two-way bridging protocol and connecting networks.

  • Cosmos: Cosmos introduces the inter-blockchain communication (IBC) protocol, a TCP/IP-like protocol for inter-blockchain communication. Cosmos focuses on token transfers among different blockchains without the need for any exchange liquidity between them.

  • ICON: ICON is building a public blockchain that connects the private blockchains of various institutions, predominantly in South Korea, such as financial institutions, insurance companies, hospitals, and universities. The private, public, and consortium blockchains connected to the ICON network will be able to carry out token transfers among one another.

5. Developer Level

The developer level encompasses the end products (dApps) and the tools required to develop them. Without these development tools, developers are not able to interact with the protocols below, be it at a particular layer or across multiple layers. Therefore, it is crucial for each layer 1, layer 2, or interoperability protocol to provide easy-to-use developer tools in order to become an important part of the Web 3.0 stack infrastructure.

5.1. Developer Tools

This is the layer of human-readable languages and code libraries that make development easier. They are essentially the protocol-extensible developer APIs and languages. Some of the APIs and languages used to develop on current blockchains are listed below, followed by a short usage sketch:

  • web3.js: The Ethereum-compatible JavaScript API which implements the generic JSON-RPC specification. It is available on npm as a Node module, for Bower and Component as embeddable scripts, and as a Meteor.js package.

  • ethers.js: A complete Ethereum wallet implementation and utilities in JavaScript.

  • Solidity: Solidity is a statically typed, contract-oriented, high-level language for implementing smart contracts that run on the Ethereum Virtual Machine (EVM). The language was influenced by C++, Python, and JavaScript.  

  • Rust: Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety. Rust can be used on Polkadot, Solana, etc.

  • C++: C++ is a sophisticated, efficient, general-purpose programming language based on C. Many of today’s operating systems, system drivers, browsers, and games use C++ as their core language. Smart contracts can be written in C++ on NEO, EOS, and many other layer 1 protocols.
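
As a flavor of what working with these libraries looks like, the snippet below reads basic chain data through web3.js from TypeScript (assumptions: web3 v1.x installed from npm and a reachable JSON-RPC endpoint at the given URL; the zero address is used purely as an example).

```ts
// Query a node over JSON-RPC via web3.js: read the chain head and an account balance.
import Web3 from "web3";

const web3 = new Web3("http://127.0.0.1:8545"); // local node or hosted RPC endpoint

async function main(): Promise<void> {
  const blockNumber = await web3.eth.getBlockNumber();
  const balanceWei = await web3.eth.getBalance("0x0000000000000000000000000000000000000000");
  console.log(`block ${blockNumber}, balance ${web3.utils.fromWei(balanceWei, "ether")} ETH`);
}

main().catch(console.error);
```
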
5.2. Decentralized Applications

Similar to current web applications, dApps cater to the end user and provide them with a product or service. Yet the inherent difference between dApps and centralized applications will create significant value for the customer through P2P solutions that nullify the role of intermediaries. It is difficult to imagine what the nature of these applications will be, but one thing that is clear is that they will empower individuals to engage in commercial activity directly with one another. Moreover, the sheer technological advancement will allow many online and offline value chain processes to be automated in a trustless way, and many real-world assets will be tokenized and transacted online. This could create entirely new markets, services, and products, both B2C and B2B.

6. User Level

At the user level, the user should be able to find and download dApps and have a platform or browser to interact with them. Therefore, the components at this level are application hosting and protocol-extensible user-interface cradles.

6.1. Application Hosting

Currently, most of the hosting of today’s dApps is centralized, and little importance is given to decentralizing application hosting. The apps are hosted either on a centralized web server or via a single download link to the application; in both cases, a central body is able to censor the applications, which goes against the ethos of decentralization. An example of a decentralized hosting solution is:

  • Codius: Codius is an open hosting protocol that provides distributed P2P hosting and third-party verified smart contracts.
6.2. Protocol-extensible User-interface Cradle

This is the final layer of the Web 3.0 stack where the user interacts with the dApps created on top of the stack. Some of the user interfaces are:

  • Status: Status is a mobile application that allows users to chat, browse, and access the dApps developed on the Ethereum blockchain. Status is essentially an Ethereum light client node.

  • MetaMask: MetaMask is a bridge that allows you to visit the distributed web of tomorrow in your browser today. It allows users to run Ethereum dApps right in the browser without running a full Ethereum node. MetaMask includes a secure identity vault, providing a user interface to manage your identities on different sites and sign blockchain transactions.

  • MyEtherWallet: MyEtherWallet is a free, open-source, client-side interface. It allows users to interact directly with the Ethereum blockchain while remaining in full control of their private keys and funds.

  • Brave Browser: Brave is an open-source browser built with a focus on privacy and the monetization of content. It supports content creators through token incentives.

Summary of the Web 3.0 stack

Based on the current discussions on the constitution of the Web 3.0 stack, we came up with a revised illustration of the stack that groups the different components into six broader levels. The six levels are:

  1. The Hardware Level, 
  2. The Internet and Network Level, 
  3. The DLT System Level, 
  4. The Optional Component Level, 
  5. The Developer Level, and 
  6. The User Level.

The Hardware Level includes all the hardware infrastructure needed to support the different DLT systems. Although it is intuitive to think that hardware (computers, mobile phones, processors, etc.) is developed independently of the blockchain ecosystem, we have identified several new products and solutions that cater solely to it. Hardware wallets, crypto mobile phones, and ASIC mining equipment are some examples; as Web 3.0 matures, we can expect high growth in the hardware infrastructure supporting this ecosystem.

The Internet and Network Level facilitates communication in the Web 3.0 stack. It includes the Internet Protocol Suite and the P2P overlay protocols. Due to the inherent advantages of Web 2.0 and the limitations of Web 3.0, we expect the new web and the old web to co-exist, at least in the near to medium term. Therefore, the Internet Protocol Suite will be conserved in its current form, with the changes needed for Web 3.0 implemented on top of it. This could include new avatars of existing TCP/IP protocols (especially in the application and transport layers), such as Decentralized DNS, Mix Network Packet Routing, and Block Delivery Networks; at the same time, the communication protocols specific to a DLT system and its participants will be implemented as P2P overlay protocols.

The third and most important level of the Web 3.0 stack is the DLT System Level. This is the level of layer 1 protocols, the main infrastructure for the dApps of Web 3.0. In addition to communication, storage and processing are the computing elements that are mandatory for developing dApps using smart contracts. The DLT system provides decentralized storage and processing through the state transition machine, consensus mechanism, data distribution protocols, and transient data messaging protocols. Some of the main bottlenecks of current DLT systems are encompassed in this layer – mainly the consensus mechanism and the state transition function. As a result, many DLT projects have traditionally focused on this layer; nevertheless, we are seeing a shift in focus as more and more projects work on other layers of the Web 3.0 stack. Many such projects, known as layer 2 protocols, aim to increase the desirability of a layer 1 protocol by enhancing its functionality or adding new features. Additionally, there are some DLT systems (e.g., Shyft, Fetch, etc.) that provide extra features (KYC, data markets) on top of the basic functionality (smart contracts) of a layer 1 protocol.

The fourth level, the Optional Component Level, represents the layer 2 protocols and the interoperability protocols that function on top of the layer 1 infrastructure protocols. These are not essential components for dApps to function, but they add features or enhance the features of the infrastructure protocols. The layer 2 protocols started emerging as solutions to the limitations and bottlenecks of infrastructure protocols; they can also add new features to an infrastructure protocol and widen its scope. Interoperability protocols are a must-have for the Web 3.0 ecosystem to mature. Even though they are not required for dApp development, they become mandatory if blockchains and the dApps built upon them are to talk to each other. Thus, the interoperability protocols envision an Internet of blockchains.

The fifth level is the Developer Level, which essentially encompasses the tools that help developers to interact with the underlying protocols and create applications on top of them. The tools here need to be easy to use and need to have minimal learning curves for maximum adoption. 

This is followed by the final level: the User Level. It facilitates the interaction with the dApps by providing hosting and interfaces. Currently, this level is under-developed and does not provide a seamless experience to users. As the Web 3.0 matures, we will see more development activity on this layer.

Conclusion

In order to enable the innovation promised by crypto assets, a new stack of protocols is required, as the necessary capabilities are not offered by the current stack. Only a new stack of protocols will enable the promises of Web 3.0, chiefly decentralization and, with it, increased sovereignty for users. This is achieved by Web 3.0 protocols enabling users to hold and transfer value directly, without intermediaries. The development of the Web 3.0 stack is fundamental to the realization of this technological revolution and is therefore a requirement for the innovation of crypto assets.

With this deep dive, we have explained our attempt to visualize the current state of the Web 3.0 stack. It has to be kept in mind, however, that the Web 3.0 stack is still evolving and is currently at a rather nascent stage of its development; substantial changes to its structure are therefore likely in the years ahead. These future developments are difficult to predict precisely, as they are non-linear, and the complexity and depth of Web 3.0 suggest that it will take time to mature. It is thus important to track the evolution of this stack, as doing so will enable further innovation and thereby surface exciting potential investment opportunities.


References

(We are grateful for the prior works done on the Web 3.0 stack; especially the quality representations done by Web3 foundation, Kyle Samani, Alexander Lange, Trent McConaghy and Stephan Tual. All sources that helped to write this article are listed below.) 


Disclaimer

To avoid any misinterpretation, nothing in this blog should be considered as an offer to sell or a solicitation of interest to purchase any securities advised by Blockwall, its affiliates or its representatives. Under no circumstances should anything herein be interpreted as fund marketing materials for prospective investors considering an investment in any Blockwall fund. None of the data and information constitutes general or personalized investment advice and only represents the personal opinion of the author. The author and/or Blockwall may directly or indirectly be exposed to the mentioned assets/investments. For further information please view the full Disclaimer by clicking the button below.
