What is the QUIC protocol? Everything you need to know
Author: Raffaele Sabatino
What is the QUIC protocol and why does it matter?
QUIC, the enabler of HTTP/3, is the new emerging Internet protocol which, thanks to several new features, should help make Internet services better. Especially services where every millisecond matters (edge computing, for example) should benefit from QUIC, and big tech companies have set high expectations for it. More generally, the Internet experience should get better for everyone; however, the new technology poses some challenges. Companies and organizations using gateways to prevent unsecured, potentially malicious, Internet traffic from entering their internal network might be facing new issues.
Over the last 30 years, we have witnessed an enormous effort to constantly improve the efficiency of transferring hypertext documents on the World Wide Web. We have seen several versions of HTTP, the Hypertext Transfer Protocol, which quickly became the standard for exchanging messages between browsers and servers. Thus, terms like HTTP/1.0, HTTP/1.1 and HTTP/2 have become part of our technological landscape: things we know are important for our daily Internet experience, because they “make things work”.
How does HTTP work?
HTTP/1.0, HTTP/1.1 and HTTP/2 are all protocols based on TCP, the connection-oriented transport protocol of the TCP/IP suite, which is the foundation of the modern Internet. Or is it?
HTTP is a request-response protocol. If the browser needs a picture from a Web server, it must issue an HTTP request for it. If the browser needs an audio file from a Web server, it must issue an HTTP request for it. All these requests, up to HTTP/2, are transmitted over TCP/IP “connections”, and the improvements achieved in the evolution of HTTP have been driven, to a large extent, by the need for a better, that is faster, user experience. Therefore, the big driver for each “higher” version of HTTP was the “head-of-line blocking” problem, along with the security improvement of Internet transactions.
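To make the request-response shape concrete, here is a minimal sketch of the raw bytes a browser could send to fetch a picture over HTTP/1.1 (the host and path are made-up placeholders, not from any real site):

```python
# The raw text of an HTTP/1.1 GET request: a request line, headers,
# and an empty line marking the end of the header block.
request = (
    "GET /images/logo.png HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Accept: image/png\r\n"
    "\r\n"
).encode("ascii")
print(request.decode("ascii"))
```

The server answers over the same TCP connection with a status line (for example HTTP/1.1 200 OK), its own headers, and the picture’s bytes.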
The evolution of the HTTP protocol
The early days
HTTP/1.0, in the early 90’s, was based on a rigid usage of TCP: for every TCP connection there is only one HTTP “dialogue”, a request and a response. With that, if a browser needs a picture from a Web server, a TCP connection must be established, and once the picture is transferred, the TCP connection must be closed. It became obvious that the next step would be the reuse of TCP connections: over one TCP connection it should be possible to handle multiple request-response pairs, and the client should be able to request more than one resource from the server at once. The latter, known as “pipelining”, came in the late 90’s with HTTP/1.1, together with persistent connections. In the meantime, HTTPS (HTTP Secure) was created by Netscape, and SSL increasingly became the standard for browsing the Internet.
With HTTP/1.1, multiple HTTP requests used dedicated TCP connections for each “logical” stream. However, performance was not optimal, basically due to “head-of-line blocking” at the application (i.e. HTTP) level: in many situations, clients had to wait for the first HTTP request in line to finish before the next one could go out. This slowed things down and hurt the user experience.
Figure 1: Head-of-line blocking
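The blocking effect can be illustrated with a toy timing model in Python (all service times below are made-up numbers, not measurements): responses on one HTTP/1.1 connection must come back in order, so a slow first response delays everything queued behind it.

```python
# Toy model of HTTP/1.1 head-of-line blocking: three pipelined requests
# on one connection are answered strictly in order.
service_ms = {"a.html": 500, "b.css": 20, "c.js": 20}  # server time per response

finish = {}
clock = 0
for name, cost in service_ms.items():  # responses are serialized, in order
    clock += cost
    finish[name] = clock
print(finish)
# b.css needs only 20 ms of work, yet it completes after 520 ms,
# because it is stuck behind the slow a.html response.
```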
Problem solving through HTTP/2
In 2015, the attempt to overcome this issue led to HTTP/2, an initiative pushed by Google and based on SPDY. HTTP/2 introduced two main features: multiplexing and server push. Multiplexing allows multiple logical, prioritized HTTP data streams to be sent and received concurrently over a single TCP connection, instead of opening parallel TCP connections. Server push enables servers to anticipate which resources the client will need and push them before they are requested; the client still retains the authority to deny the server push. These features add a lot of efficiency to the process in most cases.
From HTTP/2 to the QUIC-based HTTP/3 protocol
After HTTP/2 was adopted, and the “head-of-line blocking” issue was solved at the application level through request multiplexing, the same issue at the transport level (TCP) became the predominant problem. In TCP, if a single packet is dropped or lost, the entire TCP connection, with all HTTP streams running over it, is halted until the missing packet is re-transmitted and reaches the destination. Even if one HTTP data stream could flow freely, the whole transmission is stopped because another HTTP stream is affected by packet loss. This is deeply rooted in the fundamental features of TCP, which aim to guarantee reliable, loss-free data transmission: connection orientation, packet loss recovery, re-transmission, window scaling, congestion control.
Back in 2012, Google designed a new protocol, called QUIC (Quick UDP Internet Connections). They implemented and deployed it in their browser, Chrome, as well as in their server-side services (Google Search, Gmail, YouTube, etc.). UDP was chosen as its transport protocol. By re-using a world-wide established transport protocol like UDP, instead of designing a brand-new one just for QUIC, it was possible to avoid a complicated, if not impossible, rollout. In fact, the vast majority of firewalls, NATs, routers and other middleboxes between users and servers only support TCP and UDP (the de-facto transport protocols of the Internet).
The good, well-proven features of TCP have been implemented in the QUIC protocol itself, which sits between UDP and HTTP, while at the application level HTTP was adapted and called HTTP/3. In other words, the advantages of TCP (reliability, robustness) are kept and extended by implementing them in QUIC. At the same time, by adopting the simpler UDP as the “lower” transport protocol, the new approach gets rid of the main limitations of TCP (first of all, the slow handshakes).
In 2016, the IETF started a working group for the standardization of QUIC. The capabilities of the new protocol were extended to transfer protocols other than “just” HTTP/2 frames (unlike the Google version), and TLS 1.3 was preferred as the encryption and security standard for QUIC over the approach used by Google-QUIC. The IETF QUIC protocol architecture was then split into two separate layers: transport QUIC and HTTP over QUIC. The latter was finally renamed HTTP/3 (2018). QUIC and its HTTP/3 started taking off.
Figure 2: QUIC on the OSI stack
HTTP/3 builds upon HTTP/2, yet some of the specifics are pushed away from the HTTP layer and fulfilled by QUIC. Most new implementations under development have been focusing on the IETF version and are not compatible with the Google version. IETF RFCs 9000, 9001 and 9002 are the current proposed standard, and the foundation of upcoming improvements.
The four basic features of the QUIC Protocol
1. Independent streams
QUIC separates logical streams within physical connections. A QUIC connection is tied to a UDP port/IP address pair and is negotiated between two endpoints, as for TCP connections. Once established, the connection is identified by a “connection ID”. QUIC offers flow control on both connections and streams.
Once a QUIC connection is established, either side can create streams over it and send data to the other end. Stream data is delivered in order and reliably, yet different streams may be delivered out of order. In other words, each stream sends data and maintains data order, but data may reach the destination in a different order than the application sent it. Supposing streams A and B are transferred from server to client, with stream A started first, then stream B: if stream A loses a packet and stream B does not, stream B continues the transfer while the lost packet is re-transmitted. Lost packets only affect the stream to which they belong. This was not possible with HTTP/2.
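The per-stream independence can be sketched in a few lines of Python (this is a conceptual model with made-up data, not QUIC’s actual wire format): each stream keeps its own reassembly buffer, so a loss on one stream stalls only that stream.

```python
# Sketch: per-stream in-order delivery despite cross-stream loss.
# Each packet carries (stream_id, offset, data); a hole in stream 1
# stalls only stream 1's delivery, never stream 2's.
class StreamBuffer:
    def __init__(self):
        self.expected = 0    # next byte offset the application may read
        self.pending = {}    # offset -> data, held until contiguous
        self.delivered = b""

    def receive(self, offset, data):
        self.pending[offset] = data
        # deliver as much contiguous data as possible
        while self.expected in self.pending:
            chunk = self.pending.pop(self.expected)
            self.delivered += chunk
            self.expected += len(chunk)

streams = {1: StreamBuffer(), 2: StreamBuffer()}
streams[1].receive(5, b"world")   # stream 1's first packet was lost
streams[2].receive(0, b"hi")
streams[2].receive(2, b"!")
print(streams[1].delivered)       # b'' -- stream 1 waits for retransmission
print(streams[2].delivered)       # b'hi!' -- stream 2 is unaffected
streams[1].receive(0, b"hello")   # the retransmitted packet arrives
print(streams[1].delivered)       # b'helloworld'
```

With HTTP/2 over TCP, the single shared TCP buffer plays the role of one `StreamBuffer` for everything, which is exactly the head-of-line blocking QUIC removes.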
With the stream concept, another logical connection to the same host can also be created at once, without having to wait for the existing one to end.
For those willing to “touch” it, the well-known free, open-source protocol analyzer Wireshark offers captures and filters for identification and analysis of QUIC traffic.
Figure 3: QUIC streams
The figure above shows an example of HTTP/3-QUIC transactions, as seen by Wireshark: protocol, QUIC connection IDs, streams with their IDs, and details (direction, initiator).
2. Always secure
There is no clear-text version of QUIC. Once a few initial handshake packets are sent “in the clear” to negotiate the encryption, no unencrypted QUIC traffic is to be seen. QUIC connections are established with cryptography and security according to TLS 1.3 (RFC 8446).
3. Reduced latency
One problem of the current Internet is latency, that is, the delay (RTT, Round-Trip Time) of messages travelling back and forth prior to the actual data exchange. For instance, latency can be as high as 300-400 ms over the average distance between Europe and the US. I checked this with Wireshark, as I, from Switzerland, was trying to reach a website located in the US (California). The next figure shows the result: around 356 ms to set up the secure TCP connection, then 204 more ms of TTFB (Time to First Byte, i.e. the time needed to receive the first data after the GET was sent). Overall, more than half a second passed between clicking the URL in my browser and seeing any data.
Figure 4: Time to First Byte
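A back-of-the-envelope calculation shows why such measurements land in the hundreds of milliseconds. The distance and fibre-speed figures below are rough assumptions, not measured values:

```python
# Back-of-the-envelope propagation delay, Switzerland -> California.
distance_km = 9500            # approximate great-circle distance (assumption)
speed_km_per_s = 200_000      # light in optical fibre, roughly 2/3 of c
one_way_ms = distance_km / speed_km_per_s * 1000
rtt_ms = 2 * one_way_ms       # one round trip
handshake_ms = 3 * rtt_ms     # TCP handshake (1 RTT) + TLS 1.2 (2 RTTs)
print(round(rtt_ms), round(handshake_ms))   # 95 285
```

The ~285 ms estimate is in the same ballpark as the ~356 ms measured above; real paths are longer than the great-circle distance and add routing and queueing delays on top of pure propagation.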
Now, QUIC connections are single conversations between two QUIC endpoints. Connection establishment combines version negotiation with the cryptographic and transport handshakes to reduce RTT.
QUIC offers 1-RTT and 0-RTT “fast handshakes”, reducing the time it takes to negotiate and set up a new connection. At user level, this promises a better experience, because the well-known TCP 3-way handshake is compressed and TTFB is reduced.
The idea behind 0-RTT is “zero round-trip time connection resumption”. If client and server have established a TLS connection, a subsequent connection can re-use information that was cached from the first one. Clients compute keys beforehand, without contacting the server, based on the previous TLS connection, skipping the transactions needed to build a new one from zero.
0-RTT was already introduced in TLS 1.3 for TCP-based connections. There, the idea only applies to the TLS handshake, because a TCP connection is still needed before TLS data can be exchanged, and the client must in any case wait for the TCP handshake to complete before “speeding up” the TLS exchange with the server.
Figure 5: 0-RTT with TLS and QUIC
With QUIC, the principle is pushed further: clients can send application data already during the first connection round trip, without waiting for any previous handshake at all. HTTP requests are sent to the peer literally as soon as possible, and servers can answer and send data back sooner.
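As a rough comparison, the table below counts the round trips needed before the first byte of application data can go out. The RTT value and the simple RTT counts are assumptions for illustration; real deployments vary (TCP Fast Open, TLS session resumption, and so on):

```python
# Rough handshake cost before the first HTTP request can be sent.
RTT_MS = 100  # assumed client-server round-trip time

setup_ms = {
    "TCP + TLS 1.2": 3 * RTT_MS,  # 1 RTT TCP handshake + 2 RTT TLS
    "TCP + TLS 1.3": 2 * RTT_MS,  # 1 RTT TCP handshake + 1 RTT TLS
    "QUIC 1-RTT":    1 * RTT_MS,  # combined transport + crypto handshake
    "QUIC 0-RTT":    0 * RTT_MS,  # resumed connection: data in first flight
}
for name, cost in setup_ms.items():
    print(f"{name:>14}: {cost} ms before the first request goes out")
```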
At the same time, all this may raise security concerns (0-RTT data, for example, is vulnerable to replay unless the application takes precautions). These issues will most probably be refined in future specification work.
4. Reliability
Since UDP is not a reliable transport, QUIC adds a layer on top of it that includes the classical TCP capabilities. While the HTTP level still sticks to the same paradigms and concepts as before (headers and body, request and response, verbs, cookies, caching), the main changes have been made to make HTTP/3 work with QUIC as the transport mechanism.
HTTP/3 sets up QUIC streams and sends sets of frames over them. The most important frame types are HEADERS (to send compressed HTTP headers), DATA (to send binary data contents) and GOAWAY (to shut down connections).
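The frame layout itself is simple: a frame is a variable-length integer for the type, a variable-length integer for the payload length, then the payload. The sketch below encodes QUIC’s varints as specified in RFC 9000 (a 2-bit prefix selects a 1, 2, 4 or 8-byte encoding) and uses the frame-type values from RFC 9114; the payload is a made-up example.

```python
# QUIC variable-length integers (RFC 9000) and the HTTP/3 frame
# layout (RFC 9114): varint type + varint length + payload.
def encode_varint(v: int) -> bytes:
    """Encode v with QUIC's 2-bit length-prefix scheme."""
    if v < 1 << 6:
        return v.to_bytes(1, "big")                  # prefix 00, 1 byte
    if v < 1 << 14:
        return ((0x1 << 14) | v).to_bytes(2, "big")  # prefix 01, 2 bytes
    if v < 1 << 30:
        return ((0x2 << 30) | v).to_bytes(4, "big")  # prefix 10, 4 bytes
    if v < 1 << 62:
        return ((0x3 << 62) | v).to_bytes(8, "big")  # prefix 11, 8 bytes
    raise ValueError("value too large for a QUIC varint")

def h3_frame(frame_type: int, payload: bytes) -> bytes:
    """Serialize one HTTP/3 frame."""
    return encode_varint(frame_type) + encode_varint(len(payload)) + payload

DATA, HEADERS, GOAWAY = 0x00, 0x01, 0x07   # frame types from RFC 9114
frame = h3_frame(DATA, b"hello")
print(frame.hex())  # 000568656c6c6f -> type 0x00, length 5, "hello"
```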
With HTTP/3, streams are provided by the QUIC transport, while HTTP/2 streams were managed within the HTTP layer itself. Furthermore, streams are independent of each other, so the header compression protocol used for HTTP/2, HPACK (RFC 7541), could not be used without causing a head-of-line blocking situation; a new compression protocol, QPACK, was therefore developed. QPACK is like HPACK, but modified to work with streams delivered out of order.
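Why HPACK breaks with out-of-order streams can be shown with a toy model (this is not the real HPACK or QPACK wire format, and the header below is invented): HPACK’s single dynamic table assumes the decoder sees all insertions strictly in order, so a header referencing an entry inserted by a delayed stream cannot be decoded yet.

```python
# Toy illustration of HPACK's problem: a shared, order-dependent
# dynamic table re-creates head-of-line blocking across streams.
dynamic_table = []  # shared decoder state, filled strictly in order

def insert_entry(name, value):
    dynamic_table.append((name, value))

def lookup(index):
    if index >= len(dynamic_table):
        return None  # entry not inserted yet: decoding must block
    return dynamic_table[index]

# Stream A (delayed by packet loss) was supposed to insert entry 0;
# stream B (arriving first) references entry 0 in its headers.
blocked = lookup(0)                # None: B is stuck behind A's loss
insert_entry("x-custom", "hello")  # A's retransmission finally arrives
resolved = lookup(0)               # now B can decode its headers
print(blocked, resolved)
```

QPACK avoids this by carrying table insertions on dedicated streams and letting headers declare which table state they require, so a decoder knows when it can proceed without blocking unrelated streams.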
For practical reasons, no new URL scheme has been created for the new protocol. HTTP/3, like HTTP/2, still uses the secure URLs https://. The legacy, clear-text URLs http:// are left as-is and will probably disappear over time, as non-secure transfers gradually vanish. Requests to such URLs will simply not be upgraded to use HTTP/3 (on the other hand, they are not upgraded to HTTP/2 either).
QUIC protocol in practice: is my browser using HTTP/3?
As Internet users, we can ask ourselves whether our browser is using HTTP/3. We can easily check this. On MS Win10, with Chrome (v99) or Edge (v99), we just need to look at the flags (chrome://flags, edge://flags) and search for “QUIC”. QUIC can be enabled/disabled through the toggle Experimental QUIC protocol. With Firefox (v98), we can type about:config, search for “HTTP3”, and set network.http.http3.enabled to true or false.
Figure 6: Browser settings for QUIC
Again, for those familiar with Wireshark, it is possible to see the fallback to HTTP/2-TCP if one disables QUIC in the middle of a Web session. Similarly, we can see how our “normal” HTTP/2-TCP Web session switches to HTTP/3-QUIC if we enable it “on the fly” in the browser.
Web servers supporting HTTP/3-QUIC usually advertise this capability by using the alternative service header, alt-svc, in a status code 200 OK, or another (for example 301). Should the browser have QUIC enabled, a switch from the current session (HTTP/1.1 or HTTP/2) to HTTP/3-QUIC takes place (with an interruption, if one was streaming music, for instance).
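A small sketch of how a client could read such an advertisement from the Alt-Svc header (RFC 7838). The parsing below is simplified, and the header value is an assumed example modelled on what HTTP/3-capable servers typically send:

```python
# Simplified Alt-Svc parsing: which alternative protocols does the
# server offer, and on which authority (host:port)?
def parse_alt_svc(value):
    """Return {protocol-id: authority} for each advertised alternative."""
    services = {}
    for entry in value.split(","):
        first = entry.strip().split(";")[0]   # drop parameters like ma=...
        if "=" in first:
            proto, authority = first.split("=", 1)
            services[proto.strip()] = authority.strip().strip('"')
    return services

header = 'h3=":443"; ma=2592000, h3-29=":443"; ma=2592000'
offers = parse_alt_svc(header)
print(offers)           # {'h3': ':443', 'h3-29': ':443'}
print("h3" in offers)   # True -> HTTP/3 is offered on UDP port 443
```

Here “h3” is the final HTTP/3 protocol ID, while IDs like “h3-29” refer to IETF draft versions that servers advertised during standardization.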
Wireshark is capable of filtering QUIC and HTTP/3. Useful filters are quic, http3, and other more specific ones. The next figure presents a Wireshark trace of a Web session to YouTube, where the server proposes HTTP/3 in a 200 OK and the client (Chrome) switches to it from HTTP/2-TCP.
Figure 7: QUIC offer
With Wireshark it is also possible to identify HTTP/2 offers from the server side by filtering on http2.altsvc.field_value and http2.header.name == “alt-svc”.
Performance: is the QUIC protocol as quick as its name suggests?
QUIC should bring better performance, mainly based on the capability to fetch multiple objects simultaneously. Thanks to 0-RTT, as said, the client can start requesting data much faster than with a full TLS negotiation. The website starts loading earlier, and the browser receives data more quickly. In fact, several studies suggest that, on average, TTFB is better with HTTP/3 than with HTTP/2, even if not dramatically so.
Yet, the performance comparison between HTTP/3 and HTTP/2 in terms of page load time seems to depend on page size, probably due to other factors, such as the different congestion algorithms of the two protocols.
All in all, in wired networks QUIC seems to outperform TCP clearly.
The increasing usage of mobile devices has highlighted problems in current, pre-QUIC Internet mechanisms. Passing from one network to another (from cellular to Wi-Fi, for instance) is an issue with TCP, because a device needs to establish a new TCP connection each time the network changes. This contributed greatly to the rethinking of web techniques and drove efforts in the direction of QUIC.
Thanks to the introduction of connection IDs, connections can be “moved” between different network interfaces in ways TCP could not afford. So, for instance, a download in progress that must move from a cellular connection to a faster Wi-Fi connection, as the user enters a Wi-Fi location, should be able to “survive” and continue on the target network.
Nevertheless, HTTP/3 performance in Wi-Fi networks does not seem to be clearly better than HTTP/2’s. Further improvements can be expected as other aspects of the protocol mature.
What’s next? A peek into the future of the QUIC protocol
Apart from many Google sites and YouTube, more and more websites are enabling QUIC. According to w3techs, as of March 2022, QUIC is used by 7.7% of all websites, a market position that clearly needs to ramp up, both in quantitative terms (the number of websites supporting it) and qualitative terms (QUIC is still mostly used by websites bearing rather low traffic).
Because QUIC is based on UDP and not on TCP, upgrading from HTTP/2 to HTTP/3-QUIC cannot be as straightforward as passing from HTTP/1.1 to HTTP/2. Therefore, HTTP/3 will probably be made available to most users and customers through external service providers, rather than implemented by customers themselves on their own servers.
QUIC adopts UDP and, at the same time, is an encrypted protocol. Therefore, if a firewall allows UDP/443, not much can be inspected in QUIC sessions; firewalls might not even recognize QUIC as a protocol. Blocking UDP/443 would not impact the user, though, because a QUIC-compliant browser (for example Chrome) would automatically, silently, fall back to TCP. However, this is no good strategy, because companies’ Internet infrastructures will be more and more exposed to encrypted protocols in the years to come.
Implementing architectures able to cope with these protocols should be the way to go, as specifications get consolidated and best practices emerge. A learning curve awaits us in the coming years, because the protocol is complex, and it takes some time to get familiar with it and fully profit from its potential.
About the Author
Raffaele Sabatino is an experienced consultant with a focus on voice services and networking, in particular product and requirements management/engineering, architecture, pre-sales and pre-sales support, and troubleshooting for several services, signalling scenarios and networking techniques (GSM, GSM-R, 3G, 4G, IoT) at international telecoms vendors and operators. Raffaele is a PMP and a passionate certified trainer with a federal diploma (CH).