To increase a network’s speed reduce latency!

In a previous short article of mine, I promised to come back and discuss how to achieve low latency in telecommunications networks.

To start with, I wish to summarize how we arrived where we are. We are tempted to take for granted that in networks “bandwidth = data-rate.” In previous short articles of this series, we saw that this is not true in general unless we do the right things. Unfortunately, under Murphy’s law, “Anything that can go wrong will go wrong,” so we must be cautious!

Traditionally, European policymakers and their consulting firms have assumed that the right way to obtain the economic and societal improvements we seek is to increase the data-rate in the access network, laying as much optical fiber as possible right now. This belief explains the campaigns in favor of access networks capable of delivering up to 1 Gbps to the home. Meanwhile, in most of Europe, ultra-broadband take-up is still low, which means that huge investments are still looking for clients.

According to the Digital Economy and Society Index for the year 2020 (DESI2020):

“As new service offers emerge, take-up is growing sharply. 26% of European households currently subscribe to ultrafast broadband (at least 100 Mbps), a marked improvement from 2% 7 years ago.”

This is undoubtedly good news. However, the growth was a considerable 11 percentage points over DESI2018 (i.e., about 5.5 points per year). At this pace, we can extrapolate a figure of 32% for the end of 2020, i.e., 18 percentage points below the original Digital Agenda for Europe (DAE) objective “of at least 50% of households subscribing to ultrafast broadband” set for the end of this decade. At the same pace, we can extrapolate universal adoption for the year 2040! Of course, the EC and the Member States (MSs) can envisage acceleration measures based on the allocation of public funds. However, it is plausible that the target will only be reached sometime in the 2030s.

In rural areas, VHCN coverage is growing slowly, from 14% in 2018 to 20% in 2019. By simple linear extrapolation, full European coverage of rural regions would only be reached around 2032.
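
The extrapolation above is nothing more than a linear projection of the last observed yearly gain. A minimal sketch, using only the rural coverage figures quoted above:

```python
# Naive linear extrapolation of rural VHCN coverage (figures from the text:
# 14% in 2018, 20% in 2019, i.e. about 6 percentage points per year).
def year_of_full_coverage(last_year, last_pct, yearly_gain_pct):
    return last_year + (100 - last_pct) / yearly_gain_pct

print(round(year_of_full_coverage(2019, 20, 20 - 14)))   # -> 2032
```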

What is worse, by doing so the throughput (i.e., the “actual” application bandwidth experienced by the customer, as we clarified in previous short articles) does not increase much, because:

  • The bottleneck is often in the part of the network upstream of the access network (in Italy, about 2 Mbps on average). This part, connecting the access network to the big internet at a point of presence (POP), mostly works as a fixed “pipe” shared by a large number of clients (better, signals), with a relatively low sharing factor at the peak hour. Said differently, the “pipe” is a multiple-access line, and many simultaneous signals share the fixed bandwidth available (see the sketch after this list).
  • Even removing the indoor bottleneck by considering only fixed-line access points, it is the end-to-end link between the remote server and the customer’s device that limits the bandwidth.
  • Besides, even if no other cause were present, in practice the home Wi-Fi worsens packet loss and latency, as explained in a previous short article of this series (“Why Very High Capacity Networks (VHCNs) may disappoint European customers’ expectations in spite of high cost and risk for the telecoms ecosystem?”). It is doubtful, at the present date, that this will improve much by adopting more powerful Wi-Fi solutions in the future, or even new femtocell standards, although these would undoubtedly be beneficial.
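
A toy numerical illustration of the shared-pipe effect in the first bullet; the figures are purely hypothetical, chosen only to reproduce the order of magnitude quoted above, not actual Italian network data:

```python
# Toy sketch (hypothetical figures): a fast access line does not help if the
# shared "pipe" towards the POP is the bottleneck at the peak hour.
def per_user_throughput_mbps(access_mbps, shared_pipe_mbps, simultaneous_signals):
    """Throughput is capped by the slower of the access line and the
    per-signal share of the fixed upstream pipe."""
    return min(access_mbps, shared_pipe_mbps / simultaneous_signals)

# e.g. a 1 Gbps FTTH line behind a 10 Gbps pipe shared by 5,000 active signals:
print(per_user_throughput_mbps(1000, 10_000, 5_000))   # -> 2.0 Mbps
```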

Data-rate heavily depends on latency and, to make the situation worse, this manifests what my last published short article called the Red Queen’s race effect: “It takes all the running you can do, to keep in the same place.”

Therefore, the risk is that European citizens will have to wait a long time before enjoying real and widespread speed and quality.

What should we do? Make every effort to escape from the “UBB trap” as we defined it (https://arxiv.org/ftp/arxiv/papers/2006/2006.01674.pdf) and recover the “old” narrowband internet condition: “bandwidth = data-rate.” We can do this even though the “new” UBB internet signals are mostly video signals (more than 80%) with large bandwidth.

As we already discussed, acting on the “distance metric” d = RTT × √PLR, we have to reduce the RTT, meaning that we must bring servers close to the end-user. The well-known enabling technology, Edge cloud computing (ECC) or, in ETSI’s wording, MEC (Multi-access Edge Computing), has many merits and no downside.
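
To see concretely how the distance metric caps the achievable data-rate, here is a minimal sketch assuming the well-known Mathis et al. approximation for a single TCP flow, throughput ≲ MSS·C/(RTT·√PLR) with C ≈ 1.22; the MSS, the loss ratio, and the RTT values are illustrative, not measurements:

```python
# Minimal sketch (illustrative values only): the Mathis et al. approximation
# bounds a single TCP flow's throughput by MSS * C / (RTT * sqrt(PLR)),
# so halving d = RTT * sqrt(PLR) roughly doubles the achievable data-rate.
from math import sqrt

def tcp_ceiling_mbps(rtt_s, plr, mss_bytes=1460, c=1.22):
    """Approximate upper bound on one TCP flow's throughput, in Mbps."""
    return (mss_bytes * 8 * c) / (rtt_s * sqrt(plr)) / 1e6

for rtt_ms in (100, 20, 5):            # distant server vs. regional vs. edge cache
    plr = 1e-3                         # assumed packet loss ratio
    d = (rtt_ms / 1000) * sqrt(plr)    # the "distance metric" d = RTT * sqrt(PLR)
    print(f"RTT={rtt_ms:>3} ms  d={d:.5f}  TCP ceiling ≈ {tcp_ceiling_mbps(rtt_ms / 1000, plr):6.1f} Mbps")
```

With these assumptions, a single flow towards a distant server tops out at a few Mbps even behind a 1 Gbps access line, while moving the server to the edge raises the ceiling by more than an order of magnitude.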

The fundamental component inside the ECC is the transparent cache: a content repository located close to the end-user (how close is a design problem outside the scope of these notes). Video content respects (a variant of) Zipf’s Law. According to Zipf’s law, the popularity statistics follow an f(n) = 1/n rule (see Figure 1 below): if the most popular element weighs 100, the second weighs 50, the third about 33, and so on (the first five elements have a 97% probability of being “hit,” i.e., encountered).

For video content, the applicable generalized popularity law (a so-called Zipf-like distribution) is slightly modified as follows: f(n) = (1/n)^a with a < 1 (typically between 0.6 and 0.8). By storing a limited fraction of the customers’ content, the hit ratio (HR), i.e., the probability that the local ECC serves the customer, is relatively high. For example, with a = 0.8, storing only 10% of the content provides HR ≈ 50%, which means that long-distance video traffic is cut by half (see the sketch after the following list). This fact brings about several advantages:

  • The repository is close to the customer, so the “distance metric” d is sufficiently low and is no longer the bottleneck: we live in the well-behaved, data-rate-limited world where “bandwidth = data-rate.”
  • Since about 80% of internet traffic is video, if we serve half (or possibly more) of it locally, the upstream part of the network becomes lightly loaded. Therefore, it will be easier to equalize the bandwidth of the access network and of the upper part of the network (no longer a bottleneck).
  • Since the transparent cache platform acts at protocol Layer 4 (i.e., the transport layer, TCP), and upstream of it the IP packets flow at Layer 3, this approach avoids putting signals into a multiple-access “pipe” (Layer 2). In several cases, access traffic can be routed along lightly loaded paths (this also reduces the probability of bottlenecks).

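A minimal sketch of the hit-ratio estimate under a Zipf-like popularity law; the catalog size and the values of the exponent a are illustrative assumptions, not measured figures:

```python
# Sketch (illustrative assumptions): hit ratio of a cache that stores the most
# popular fraction of a catalog whose popularity follows f(n) ~ 1/n**a.
def hit_ratio(catalog_size, cached_fraction, a):
    weights = [1.0 / n**a for n in range(1, catalog_size + 1)]
    cached_items = int(catalog_size * cached_fraction)
    return sum(weights[:cached_items]) / sum(weights)

for a in (0.6, 0.7, 0.8):
    hr = hit_ratio(catalog_size=100_000, cached_fraction=0.10, a=a)
    print(f"a = {a}: caching 10% of the catalog gives HR ≈ {hr:.0%}")
# For a towards the upper end of the 0.6-0.8 range, roughly half (or more) of
# the requests are served locally, consistent with the estimate in the text.
```
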
The network cost (total cost of ownership, TCO) intensity is thus easily reduced compared with the “brute force” approach of investing heavily in the access network from the beginning. Investments in access networks are necessary; however, they can be planned over a longer time frame, cautiously monitoring the growth of demand, traffic, and new services.

The Telco operator is not forced by policymakers to spend vast amounts of money prematurely, and the customer enjoys better throughput much earlier. The European Authorities should redesign their “speed” parameters, e.g., by measuring the actual speed (throughput) enjoyed by customers. This redesign could be done already today with crowdsourced speed campaigns able to take sample measurements in a given MS.

A more rational time distribution of CAPEX investments, not overly concentrated in a few years (say, five), can free up some money to be judiciously invested. Some of it can be spent in semi-rural and rural areas, attempting to reduce the digital divide between cities and the countryside.

Figure 1: Zipf’s Law. The American linguist George Zipf found that the relative frequency of a word is inversely proportional to its rank, r. Said differently, if you open any book written in any language, count the number of times each word appears, and order the words by rank, you’ll find that their frequency obeys the striking law f(r) ≈ 0.5/r. Even more strikingly, Zipf’s generalized law, f(r) = 0.5/r^a with a ~ 1, applies to many different fields and data collections.

(4 - end)

Aldo Milan

Officer at Communications Regulatory Authority (Agcom)


Hello Prof. It “seems to me” that in some specific recent BEREC guidelines, precisely for the reasons you indicate, the adopted definitions refer the speed measurement to a different “OSI” layer than the one used, with good reason, in net neutrality regulation. The solution is probably imperfect, but the problem seems to have been identified and is at least mentioned in the various recitals. Still at the European level, the discussion on certain tools such as the “technology groups” (solutions capable of offering similar performance in terms of latency) appears to be the subject of intense debate, especially on the part of those who see the technology used in their networks excluded. On this point, it is also necessary to ensure the principles of technological neutrality. In any case, an excellent explanation, because the topic is little known and, not infrequently, the impact of RTT and packet loss in the TCP/IP protocol (I simplify; one should say the transport layer...) is underestimated and, in any case, receives little consideration in the debate. It may be worth recalling that the BEREC consultation on VHCNs is still open (deadline 4 September).

José Delgado-Penín

Catedrático Emérito Universidad (Emeritus Prof.)


I have read your posts and I feel I have learned something more about today’s digital networks. Thank you for your considerations.
