WEB 3.0 - a second attempt

First, a bit of history.

Web 1.0 is a network for accessing content hosted by site owners: static HTML pages, read-only access to information, and, as the main delight, hyperlinks leading to pages on the same and other sites. The typical site is an information resource. It was the era of moving offline content onto the network: digitizing books, scanning pictures (digital cameras were still a rarity).

Web 2.0 is the social web that brings people together. Users immersed in the Internet space create content directly on web pages. Interactive dynamic sites, content tagging, web syndication, mash-ups, AJAX, web services. Information resources give way to social networks, blog hosting, and wikis. The era of content generated online.

Naturally, the term “web 1.0” appeared only after “web 2.0” did, as a name for the older Internet. And almost immediately people began talking about a future version 3.0. There were several visions of that future, and all of them, of course, were about overcoming the shortcomings and limitations of web 2.0.

Netscape.com CEO Jason Calacanis was primarily concerned about the poor quality of user-generated content, and he suggested that the future of the Internet belonged to “gifted people” who would start “creating high-quality content” (Web 3.0, the “official” definition, 2007). The idea is reasonable enough, but he did not explain how and where they would do it, on what sites. Certainly not on Facebook.

Tim O'Reilly, the author of the term “web 2.0”, reasonably observed that such an unreliable intermediary as a human is not required to put information on the web. Technical devices can also feed data to the Internet, and the same technical devices can read data directly from web storage. In effect, Tim O'Reilly proposed linking web 3.0 with what we now know as the Internet of Things.

Tim Berners-Lee, one of the founders of the World Wide Web, saw in the future version of the Internet the realization of his long-standing (1998) dream of the semantic web. And his interpretation of the term won out: until recently, most people who said “web 3.0” had in mind precisely the semantic web, that is, a network in which the content of site pages is meaningful to a computer, machine-readable. Around 2010-2012 there was a great deal of talk about ontologization, and semantic projects sprang up in batches, but the outcome is known to everyone: we still use Internet version 2.0. In practice, only the Schema.org markup vocabulary and the knowledge graphs of the Internet giants Google, Microsoft, Facebook and LinkedIn fully survived.

Powerful new waves of digital innovation helped cover up the failure of the semantic web. The attention of the press and the public shifted to big data, the Internet of Things, deep learning, drones, augmented reality and, of course, blockchain. While the first items on that list are mostly offline technologies, blockchain is a network project at its core. At the peak of its popularity in 2017-2018 it even claimed the role of the new Internet (an idea repeatedly voiced by Joseph Lubin, one of the founders of Ethereum).

But time passed, and the word “blockchain” came to be associated not with a breakthrough into the future but with unjustified hopes. So the idea of a rebranding naturally arose: let's stop talking about blockchain as a self-sufficient project and instead include it in a stack of technologies embodying everything new and bright. A name for this “new” was found immediately, though the name itself was not new: “web 3.0”. And to somehow justify reusing the old name, the semantic web had to be included in the bright new stack.

So the trend now is not blockchain but the infrastructure of the decentralized web 3.0 Internet, built from several core technologies: blockchain, machine learning, the semantic web and the Internet of Things. In the many texts on this new reincarnation of web 3.0 that have appeared over the past year, you can read about each component in detail, but, unfortunately, there is no answer to the natural questions: how do these technologies combine into a whole, why do neural networks need the Internet of Things, and why does the semantic web need a blockchain? Most teams simply continue to work on blockchain (probably hoping to create a cryptocurrency that will beat bitcoin, or simply working off their investments), only now under the banner of “web 3.0”. At least that banner is about the future, and not about unjustified hopes.

But things are not so bleak. Let me now try to answer the questions above, briefly.

Why does the semantic web need a blockchain? Of course, we should not be talking about blockchain as such (a chain of cryptographically linked blocks) but about a technology that provides user identification, consensus validation and content protection with cryptographic methods on a peer-to-peer network. On such a network, the semantic graph gets reliable decentralized storage with cryptographic identification of records and users. That is quite different from semantic markup of pages on free hosting.
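
A minimal sketch of what “cryptographic identification of records” could look like, assuming a content-addressed key-value store stands in for the decentralized storage (the triple and prefixes are invented; real deployments would also sign records with the author's key, e.g. Ed25519, which is omitted here):

```python
import hashlib
import json

def record_id(triple: dict) -> str:
    """Content-address a triple: its identity is the hash of a canonical
    serialization, so any peer can verify that an id matches its record."""
    canonical = json.dumps(triple, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# A toy statement from the semantic graph (subject, predicate, object).
triple = {"s": "ex:Sensor42", "p": "ex:locatedIn", "o": "ex:Room7"}

store = {}  # stand-in for a decentralized key-value store
store[record_id(triple)] = triple

# Any peer that receives the record can re-hash it and check the key:
# the "cryptographic identification of records" mentioned above.
assert record_id(store[record_id(triple)]) == record_id(triple)
```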

Why does such a loosely defined blockchain need semantics? Ontology is, at bottom, about distributing content across subject areas and levels. This means that a semantic web laid over the peer-to-peer network - or, to put it simply, organizing the network's data into a single semantic graph - provides natural clustering of the network, that is, horizontal scaling. The layered organization of the graph lets you parallelize the processing of semantically independent data. That is already a data architecture, not dumping everything indiscriminately into blocks and storing it on every node.
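
As a rough illustration of that clustering, here is a minimal sketch (the namespace prefixes and routing rule are invented for the example) in which triples are sharded by the subject area of their namespace, so semantically unrelated data can be stored and validated on different nodes:

```python
from collections import defaultdict

def cluster_key(triple: tuple) -> str:
    """Route a triple by the namespace prefix of its subject, treating the
    prefix (e.g. 'iot', 'geo') as a stand-in for the subject area."""
    subject, _, _ = triple
    return subject.split(":", 1)[0]

triples = [
    ("iot:Sensor42", "iot:reports", "21.5"),
    ("geo:Room7",    "geo:partOf",  "geo:Building1"),
    ("iot:Sensor43", "iot:reports", "19.8"),
]

shards = defaultdict(list)
for t in triples:
    shards[cluster_key(t)].append(t)

# Each shard can now be stored, validated and queried independently,
# instead of every node replaying every record as in a flat blockchain.
for area, data in shards.items():
    print(area, len(data))
```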

Why does the Internet of Things need semantics and blockchain? With blockchain everything seems straightforward: it is needed as reliable storage with a built-in mechanism for identifying actors (including IoT sensors) via cryptographic keys. Semantics, on the one hand, lets the data flow be segregated by subject clusters, which offloads the nodes; on the other hand, it makes the data sent by IoT devices meaningful in itself, and therefore independent of any particular application. You could forget about hunting down documentation for application APIs.
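
For instance, a sensor reading published as JSON-LD carries its own meaning: the "@context" maps the field names to shared ontology terms, so any consumer can interpret the payload without the producer's API documentation. The vocabulary URLs and identifiers below are purely illustrative:

```python
import json

# A hypothetical temperature reading published by an IoT sensor as JSON-LD.
reading = {
    "@context": {
        "temperature": "https://example.org/ontology#temperature",
        "unit":        "https://example.org/ontology#unit",
        "observedBy":  "https://example.org/ontology#observedBy",
    },
    "@id": "urn:reading:sensor42:2024-05-01T12:00:00Z",
    "temperature": 21.5,
    "unit": "Cel",
    "observedBy": "urn:device:sensor42",
}

print(json.dumps(reading, indent=2))
```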

It remains to ask what the mutual benefit is of crossing machine learning with the semantic web. Here everything is quite simple. Where, if not in a semantic graph, can you find such a colossal array of validated, structured, semantically defined data in a single format - exactly what training neural networks requires? And conversely, what is better suited than a neural network to analyze the graph for useful or harmful anomalies, say, to identify new concepts, synonyms or spam?
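
As a toy illustration of that last task, the sketch below flags candidate synonyms by comparing the neighborhoods of two graph nodes. The Jaccard overlap used here is a crude, non-neural stand-in for the learned node embeddings a real pipeline would use, and the graph data is invented:

```python
def neighbor_profile(graph, node):
    """Describe a node by the (predicate, object) pairs attached to it."""
    return {(p, o) for s, p, o in graph if s == node}

def similarity(graph, a, b):
    """Jaccard overlap of neighbor profiles: high overlap suggests the two
    nodes may denote the same concept."""
    pa, pb = neighbor_profile(graph, a), neighbor_profile(graph, b)
    return len(pa & pb) / len(pa | pb) if pa | pb else 0.0

graph = [
    ("ex:car",  "ex:hasPart", "ex:wheel"),
    ("ex:auto", "ex:hasPart", "ex:wheel"),
    ("ex:car",  "ex:usedFor", "ex:transport"),
    ("ex:auto", "ex:usedFor", "ex:transport"),
]

# Prints 1.0: 'ex:car' and 'ex:auto' are likely synonymous concepts.
print(similarity(graph, "ex:car", "ex:auto"))
```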

And that is the web 3.0 we need. Jason Calacanis will say: I told you it would be a tool for gifted people to create high-quality content. Tim Berners-Lee will be pleased: semantics rules. And Tim O'Reilly will also turn out to be right: web 3.0 is about “the interaction of the Internet with the physical world”, about blurring the line between online and offline, when we forget the phrase “go online”.

My previous approaches to the topic

  1. Philosophy of Evolution and the Evolution of the Internet (2012)
  2. Internet evolution. The future of the Internet. Web 3.0 (video, 2013)
  3. WEB 3.0. From site-centrism to user-centrism, from anarchy to pluralism (2015)
  4. WEB 3.0 or life without websites (2019)

Source: habr.com
