Building a Faster and More Secure Web with TCP Fast Open, TLS False Start, and TLS 1.3
Priyanka Shyam
Performance and security matter to everyone. We trust the internet with our most important information, including financial data, so ensuring the integrity and security of these transactions is critical to the entire community. More than half of web connections already use TLS to secure their traffic, and that share grows every day. This is great for security and privacy, but we would like to deploy encryption without slowing down the web.

Better page load performance improves the user’s experience and influences which sites they choose to use. At the same time, users simply expect their browsing experience to be secure and private. Modern encryption itself is very fast, but it requires negotiating keys to establish a connection before any page resources can be fetched. Each extra exchange through the network delays the connection by one round-trip time (RTT). With TCP Fast Open, TLS False Start, and TLS 1.3, we can improve both performance and security.
With current standards, a TLS connection over TCP requires three round trips to the server (3-RTT) to negotiate, one for TCP and two for TLS, before the client can send anything useful, such as the first HTTP GET. This gets even more problematic when sites split content across multiple domains. In practice, adding encryption adds delays on the order of hundreds of milliseconds to the page load time; with a 100 ms round-trip time, for example, three round trips cost roughly 300 ms before the first byte of the response arrives. Research shows that even a 250 ms delay is enough for a user to consider trying another website.
The good news is that a new standard, TLS 1.3, will allow developers to eliminate that delay in most cases while still encrypting content. That means delivering better performance and security in Microsoft Edge, using modern encryption on top of the continually improved TCP stack. I have already discussed TLS 1.3 in depth in my previous article; have a look there for more information:
https://www.dhirubhai.net/pulse/overview-tls-13-faster-more-secure-priyanka-kumari/
How do TCP and SSL/TLS interact?
To TCP, or to the transport layer in general, everything in the TLS handshake is just application data. Once the TCP handshake is completed, the TLS layer initiates its own handshake. The Client Hello is the first message of the TLS handshake, sent from the client to the server.
TLS does not require TCP specifically; it only requires a reliable transport. There is even a standard for TLS over SCTP, another reliable transport protocol. On today's internet, however, you usually have only UDP and TCP as transport protocols on top of IP, and of those two, TCP is the only reliable one.
Full handshake with TCP and TLS
The current TCP and TLS standards require three round trips to the server (3-RTT). The first round trip negotiates the TCP connection parameters. In the second round trip, the client and server exchange TLS messages, starting with Client Hello and Server Hello, to agree on the parameters and keys of the connection. The last round trip verifies the integrity of the TLS handshake through the Client and Server Finished messages.
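As a rough illustration, the following Python sketch separates the two phases: create_connection() performs the TCP three-way handshake, and wrap_socket() then runs the TLS handshake on top of that connection as ordinary application data. The hostname is just a placeholder; any HTTPS server will do.

```python
import socket
import ssl

HOST = "example.com"  # placeholder; any HTTPS server works
PORT = 443

context = ssl.create_default_context()

# Round trip 1: the TCP three-way handshake (SYN, SYN-ACK, ACK).
tcp_sock = socket.create_connection((HOST, PORT))

# The TLS handshake rides inside the established TCP connection as
# ordinary application data (two round trips for TLS 1.2 and earlier,
# one for TLS 1.3).
tls_sock = context.wrap_socket(tcp_sock, server_hostname=HOST)
print("TLS version:", tls_sock.version())
print("Cipher suite:", tls_sock.cipher())

# Only now can something useful be sent, such as the first HTTP GET.
tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
print(tls_sock.recv(200))
tls_sock.close()
```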
Exchanging Data More Efficiently Using TCP Fast Open
Achieving 1-RTT with TLS False Start and TCP Fast Open
The first improvement comes from the TLS False Start option, which allows the client to start sending encrypted data immediately after the first TLS roundtrip. With that, we are down to 1-RTT for TLS, or 2-RTT if we count the TCP connection. We have already enabled TLS False Start in Microsoft Edge, with a set of strong cipher suites.
The next improvement comes from the TCP Fast Open procedure, defined in RFC 7413. The RFC defines a new TCP option, containing a “Fast Open Cookie.”
TCP Fast Open (TFO) is an update to TCP that saves up to one full round-trip time (RTT) compared to the standard three-way connection handshake during a TCP session.
The standard three-way connection handshake involves three sets of send and receive messages between two hosts, exchanging SYN (synchronize) and ACK (acknowledgement) packets:
- Host A sends a TCP SYN packet to Host B. Host B receives it.
- Host B sends a SYN-ACK packet to Host A. Host A receives it.
- Host A sends an ACK packet to Host B. Host B receives it.
In standard TCP, although data can be carried in SYN packets, this data cannot be delivered until the three-way handshake is completed. TFO removes this constraint and allows data in SYN packets to be delivered to the application, yielding significant latency improvement.
The key component of TFO is the Fast Open Cookie (cookie), which is a Message Authentication Code (MAC) tag generated by the server. The client requests a cookie in one regular TCP connection, then uses it for future TCP connections to exchange data during the handshake.
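RFC 7413 leaves the exact cookie construction up to the server; it only needs to be a tag the server can later recompute and verify, typically bound to the client's IP address under a secret key. The following Python sketch is purely illustrative (the secret value and helper names are invented for this example), not the construction any particular implementation uses.

```python
import hmac
import hashlib
import ipaddress

# Hypothetical server-side secret; a real server would rotate this.
SERVER_SECRET = b"rotate-me-regularly"

def make_tfo_cookie(client_ip: str) -> bytes:
    """Derive a cookie tag bound to the client's IP address."""
    packed = ipaddress.ip_address(client_ip).packed
    return hmac.new(SERVER_SECRET, packed, hashlib.sha256).digest()[:8]

def validate_tfo_cookie(client_ip: str, cookie: bytes) -> bool:
    """Constant-time check that the presented cookie matches this IP."""
    return hmac.compare_digest(make_tfo_cookie(client_ip), cookie)

print(make_tfo_cookie("203.0.113.7").hex())
```

The tag is truncated to 8 bytes here because RFC 7413 allows the cookie to be between 4 and 16 bytes long.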
The TFO option is used to request or to send a TFO cookie. When a cookie is not present or is empty, the option is used by the client to request a cookie from the server. When the cookie is present, the option is used to pass the cookie from the server to the client or from the client back to the server.
The following list outlines how the client requests a TFO cookie:
- The client sends a SYN with a TFO option that has the cookie field empty.
- The server generates a cookie and sends it through the TFO option of a SYN-ACK packet.
- The client caches the cookie for future TFO connections.
Thereafter, the two devices perform a TFO exchange:
- The client sends a SYN with data and the cookie in the TFO option.
- The server validates the cookie:
  - If the cookie is valid, the server sends a SYN-ACK acknowledging both the SYN and the data, and then delivers the data to the application.
  - Otherwise, the server drops the data and sends a SYN-ACK acknowledging only the SYN sequence number.
The rest of the connection proceeds like a normal TCP connection. The client can repeat many TFO operations once it acquires a cookie (until the cookie is expired by the server). Thus, TFO is useful for applications in which the same client reconnects to the same server multiple times and exchanges data.
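This sequence can be exercised directly on Linux, where the kernel implements TFO. The sketch below assumes a Linux kernel with TFO enabled for both client and server (the net.ipv4.tcp_fastopen sysctl) and a Python build that exposes the TCP_FASTOPEN and MSG_FASTOPEN constants; the port and payload are arbitrary.

```python
import socket

# Server side: advertise TFO. The option value is the maximum number
# of pending TFO connection requests (the TFO queue length).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)
srv.bind(("127.0.0.1", 8443))
srv.listen()

# Client side: MSG_FASTOPEN combines connect() and send(). On the
# first connection the kernel has no cookie yet, so it falls back to
# a normal handshake and requests one; on later connections the
# cached cookie lets this payload travel in the SYN itself.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.sendto(b"hello in the SYN", socket.MSG_FASTOPEN, ("127.0.0.1", 8443))

conn, addr = srv.accept()
print("server got:", conn.recv(1024))
conn.close()
cli.close()
srv.close()
```

Running the client a second time against the same server, with a packet capture alongside, shows the payload being carried in the SYN once the cookie has been cached.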
What’s next: 0-RTT with TLS 1.3
The next stage in our journey is to move from 1-RTT to 0-RTT using TLS 1.3. One of the biggest advantages of TLS 1.3 over earlier versions is that it requires only one round trip to set up the connection, resumed or not. This provides a significant speed-up for new connections, but not for resumed connections, which already complete in a single round trip. Our measurements show that around 40% of HTTPS connections are resumptions (either via session IDs or session tickets). With 0-RTT, a round trip can be eliminated for most of that 40%.
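Python's ssl module (with OpenSSL 1.1.1 or later) does not expose 0-RTT early data, but it can illustrate the TLS 1.3 handshake and the session-ticket resumption that 0-RTT builds on. This is a rough sketch under those assumptions, and the hostname is again a placeholder.

```python
import socket
import ssl

HOST = "example.com"  # placeholder; any TLS 1.3 server works
PORT = 443

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3  # insist on TLS 1.3

def fetch(session=None):
    with socket.create_connection((HOST, PORT)) as tcp:
        with context.wrap_socket(tcp, server_hostname=HOST, session=session) as tls:
            # In TLS 1.3 the session ticket arrives after the handshake,
            # so exchange a little data before caching tls.session.
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            tls.recv(1024)
            print("version:", tls.version(), "resumed:", tls.session_reused)
            return tls.session

ticket = fetch()       # full handshake; the server issues a ticket
fetch(session=ticket)  # abbreviated handshake using the cached ticket
```

With 0-RTT (not available in this sketch), the second, resumed connection could also carry application data in its very first flight.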
But it turns out that doing 0-RTT safely is quite tricky—all 0-RTT solutions require sending key material and encrypted data from the client without waiting for any feedback from the server. At a minimum, that means that adversaries can capture and replay the messages, which implies that the feature has to be used with great care. In addition to that, there are many potential pitfalls, such as compromising privacy by carrying identifiers in clear text in the Hello message, or risking future compromise if the initial encryption depends on a server public key.
But even without TLS 1.3, we can combine TCP Fast Open and the TLS False Start option, and reduce the delay from 3-RTT to 1-RTT. Even reducing your page load time by an average of 50 milliseconds will contribute to a better browsing experience.