How We Lost The Internet

Have technical limitations of the Internet architecture contributed to the rise of doom scrolling on social media?

Micah Beck and Terry Moore

Expectations of and (dis)satisfaction with the Internet have changed a lot over the 40-odd years since the first application of TCP/IP to the implementation of shared public networks, as has the very meaning of the term “the Internet,” originally the name of a “network of networks” that created a global, interoperable overlay on top of local area infrastructure. The Internet was uniquely capable of passing datagrams from any participating endpoint to any other in the wide area. When this model achieved explosive growth and acceptance in the nascent computer networking community, it became the foundation of a utopian ideal of democratically collaborating end users of distributed applications, termed the Open Data Network (ODN).

As the technology matured, however, what has emerged is a complex Information and Communication Technology (ICT) environment that uses TCP/IP as its primary endpoint-facing component. For that reason, it is still called “the Internet.” This ICT environment includes elements implemented in machine rooms connected by private networks based on many technologies, some of them quite exotic. It includes cloud data and computation centers and content delivery network points of presence. It is, in some cases, connected by trucks that carry 100 PB of data stored on SSDs. It may process data using massive clusters of GPU-enabled processors. To differentiate this new, richer, and more general environment from the original Internet architecture, it might more appropriately be called “Internet++.”

Another way in which the current ICT environment differs from the ODN vision is that many of the largest and most powerful application providers rely on business practices that are widely considered overly aggressive. They employ strategies that encourage engagement and tools that monitor (or “surveil”) end users, and they monetize all the data they gather, either by using it themselves or by selling it to others. The ills attributed to these business practices vary widely. While some concerns may be overblown, in other cases the dangers are very real. There is a general sense that individuals are being stripped of control over their own lives and identities by shadowy, unregulated corporate actors.

This change from an ODN that would serve the common good to an Internet++ whose largest service providers seem predatory is often attributed to the greed or malicious intent of those who develop and implement the latter. Our recent paper argues that another important factor driving this change was in fact a mismatch between the capabilities of TCP/IP as a universal communication technology and the requirements of the most economically important category of services that reach a mass audience.

The issue is that the only universally deployed service of the classical Internet architecture is loosely synchronous unicast datagram delivery, meaning that both sender and receiver must participate actively throughout an interval of time. In contrast, all the early mass media applications, starting with FTP, the Web, and streaming of stored media, were purely asynchronous point-to-multipoint in nature, with a single source file being delivered to many receivers at a time of their choosing. Complex modern media and service distribution applications still have a significant asynchronous point-to-multipoint component, although it may be combined with synchronous point-to-point elements such as remote telepresence.
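To make this contrast concrete, here is a minimal Python sketch of the two delivery models. It is an illustration of ours, not drawn from the paper; the localhost address, port, and function names are hypothetical. The datagram send reaches a receiver only if one is listening at that moment, while the publish/fetch pair works asynchronously only because something stores the data, a capability the IP spanning layer itself does not provide.

    import socket

    # Loosely synchronous unicast: an IP datagram reaches its destination only
    # if the receiving endpoint is active when it arrives. If no process is
    # listening on this (hypothetical) address, the payload is simply lost.
    def send_datagram(payload: bytes, host: str = "127.0.0.1", port: int = 9999) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(payload, (host, port))

    # Asynchronous point-to-multipoint: a stored copy lets many receivers fetch
    # the same object at a time of their choosing. The dictionary below stands
    # in for the storage that CDNs and clouds add alongside the Internet's thin waist.
    _store: dict = {}

    def publish(name: str, payload: bytes) -> None:
        _store[name] = payload          # the producer can now go offline

    def fetch(name: str) -> bytes:
        return _store[name]             # each consumer reads whenever it is ready

    if __name__ == "__main__":
        send_datagram(b"hello")         # delivered only if a receiver is live right now
        publish("video.mp4", b"...")    # retained for later, repeated retrieval
        print(fetch("video.mp4"))

Nothing in the datagram path retains the payload; any asynchronous, point-to-multipoint service has to supply that storage itself, which in practice means CDN and cloud infrastructure built alongside the thin waist.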


Figure 1: The Internet protocol stack models only communication. Modern applications require distributed storage and processing. This results in the necessary growth of resources which are not constrained by the “thin waist” design of the Internet’s spanning layer. Adapted from

The reasoning in the paper draws on the idea, as expressed by Messerschmitt and Szyperski, that the common services layer of the Internet, known as the Internet Protocol Suite, represents a thin waist (or “spanning layer”) in the communication protocol stack (see Figure 1). This communication “stovepipe” is only one of the three silos required to implement distributed ICT applications; the other two are storage and computation. Thus, applications cannot rely solely on the stovepiped communication spanning layer provided by the Internet. Instead, they must augment it with other resources. Ultimately, the solutions that have prevailed, Content Delivery Networks and the cloud, work by building private infrastructure to augment the Internet’s thin waist.

Services that rely on such costly and complex infrastructure must pay the fees required by their operators. Many applications do so by charging hefty end user fees for services that might otherwise have been provided at little or no cost as part of a broader business strategy. An infrastructure with greater deployment scalability might not have imposed such high communication costs.

More problematic was the early idea that Internet services such as Web search could be viable without charging any end user fees, which led to massive investment in companies providing those services. The notion was that capturing market share would eventually, somehow, translate into profits. As it turns out, the most effective way to achieve profitability was through surveillance and monetization of end user behavior. Exploitation of end user surveillance data, combined with targeted marketing, proved not only viable but hugely profitable. The rise of social media added a new twist: rather than relying on organic search queries, end users could be encouraged to scroll compulsively by aggressive engagement-maximizing algorithms. End users naively opened the door and invited such vampiric services into the unregulated environment of their online lives.

This analysis suggests two questions: Could it have been otherwise? Are there technical responses that could help alleviate the current situation? The answer to the first question is unknowable, but our paper discusses unsuccessful efforts made over multiple decades to extend the Internet’s thin waist with additional resources and services. These sought to achieve deployment scalability by making use of distributed storage and processing in restrained ways. The second question is more salient because of the widespread belief that the thin waist (or spanning layer) of the Internet can no longer be extended, modified, or replaced. This leaves only two possibilities: either 1) implement additional services as overlays added to the Internet stack above its spanning layer; or 2) define a spanning layer that is broader than the Internet communication stovepipe, including the local resources (storage, processing, and local area communication) that are offered by the layer below the Internet and used to implement it.

Our paper argues that overlay solutions, including the current efforts to define an Extensible Internet, are unlikely to exhibit the degree of deployment scalability required to achieve universal service. It also describes another approach, known as Exposed Buffer Architecture, which would define a standard for interoperable “underlay” services, creating a spanning layer capable of supporting a variety of ICT utilities and services using a highly generic model of storage, processing, and local communication.

Our analysis lays some of the responsibility for the emergence of disturbing Internet++ business practices at the feet of the very limitations of the Internet architecture that have been key to its widespread deployment and universal adoption. This suggestion has been viewed as heretical. Exposed Buffer Architecture suggests that greater generality in a lower ICT spanning layer could be achieved while preserving deployment scalability. Seeking to create a spanning layer below the Internet Protocol Suite that includes storage and processing has also been viewed as heretical.

Having thus barred the door from the inside, our ICT community voluntarily offers the public as nourishment to the parasitic service providers of Internet++. Resistance, apparently, is futile.

Gabriele Scheler

Computational Neuroscience and Theoretical Biology

5 months ago

The internet was distributed and decentralized. From the time of Google, people have wanted to own it. Now that LLMs have arrived, several companies own a copy of it. Because we let them expropriate it.

