Over the last year or so, I have heard quite a few discussions and read many articles about why QUIC (Quick UDP Internet Connections) is so good and why it will replace TCP. One such article on the benefits of QUIC says:
QUIC was initially developed by Google as an alternative transport protocol to shorten the time it takes to set up a connection. Google wanted to carry the benefits of its work on SPDY, another Google protocol that became the basis for the HTTP/2 standard, into a transport protocol with faster connection setup and built-in security. HTTP/2 over TCP multiplexes and pipelines requests over one connection, but a single lost packet and its retransmission cause Head-of-Line Blocking (HOLB) for all of the resources being downloaded in parallel. QUIC overcomes this shortcoming of multiplexed streams by removing HOLB. QUIC was created with HTTP/2 as the primary application protocol and optimizes HTTP/2 semantics.
What makes QUIC interesting is that it is built on top of UDP rather than TCP. The time to get a secure connection running is shorter with QUIC, and packet loss in one stream does not affect the other streams on the connection, so multiple objects can still be retrieved in parallel even when some packets are lost on a different stream. Since QUIC is implemented in userspace, unlike TCP which is implemented in the kernel, developers have the flexibility to improve congestion control over time: the algorithm can be optimized or replaced far more easily than through kernel upgrades (apps and browsers, for example, update more often than operating systems).
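To make the head-of-line blocking point in the quote above concrete, here is a small, self-contained Python sketch (a toy model put together for this post, not code from any real QUIC or TCP stack). It replays the same four arriving packets, where the second packet is lost and retransmitted last, through a single connection-wide reassembly buffer (TCP-like) and through per-stream buffers (QUIC-like):

```python
# Toy model: a single lost packet stalls delivery on a shared ordered
# byte stream (TCP-like) but only stalls its own stream (QUIC-like).

# Each arrival: (connection seq, stream id, per-stream seq, payload).
# Connection-level packet 2 (carrying stream A data) is "lost" and arrives last.
arrivals = [
    (1, "A", 1, "A-part1"),
    (3, "B", 1, "B-part1"),
    (4, "B", 2, "B-part2"),
    (2, "A", 2, "A-part2"),  # retransmission fills the gap
]

def ordered_delivery(packets, key):
    """Release data in order within each reassembly context (whole connection, or one stream)."""
    delivered = []
    buffers = {}   # context -> {seq: payload}
    expected = {}  # context -> next seq to release
    for pkt in packets:
        ctx, seq, data = key(pkt)
        buffers.setdefault(ctx, {})[seq] = data
        expected.setdefault(ctx, 1)
        while expected[ctx] in buffers[ctx]:
            delivered.append((ctx, buffers[ctx].pop(expected[ctx])))
            expected[ctx] += 1
    return delivered

# TCP-like: one reassembly context for the whole connection, so the missing
# packet 2 holds back stream B's data even though B lost nothing.
tcp = ordered_delivery(arrivals, key=lambda p: ("conn", p[0], p[3]))

# QUIC-like: one reassembly context per stream, so B's data is released
# immediately and only stream A waits for its retransmission.
quic = ordered_delivery(arrivals, key=lambda p: (p[1], p[2], p[3]))

print("TCP-like delivery order :", tcp)
print("QUIC-like delivery order:", quic)
```

Running it shows stream B's data held back behind the missing packet in the TCP-like case, while the QUIC-like case releases B's data straight away and only stream A waits for the retransmission.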
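The other point above, that a userspace transport lets congestion control evolve with the application rather than the kernel, can be sketched as a pluggable interface. This is purely illustrative: the class names, constants and update rules below are made up for the example and are not taken from any real QUIC implementation.

```python
# Hypothetical sketch: in a userspace transport, the congestion controller is
# just an object the application ships, so swapping algorithms is an app
# update, not a kernel upgrade. All numbers here are illustrative.
from abc import ABC, abstractmethod

class CongestionController(ABC):
    @abstractmethod
    def on_ack(self, acked_bytes: int) -> None: ...
    @abstractmethod
    def on_loss(self) -> None: ...
    @abstractmethod
    def congestion_window(self) -> int: ...

class NewRenoLike(CongestionController):
    """Classic additive-increase / multiplicative-decrease behaviour."""
    def __init__(self, mss: int = 1200):
        self.mss = mss
        self.cwnd = 10 * mss
    def on_ack(self, acked_bytes: int) -> None:
        self.cwnd += self.mss * acked_bytes // self.cwnd   # rough congestion avoidance
    def on_loss(self) -> None:
        self.cwnd = max(2 * self.mss, self.cwnd // 2)      # halve the window on loss
    def congestion_window(self) -> int:
        return self.cwnd

class RateBasedLike(CongestionController):
    """Stand-in for a newer, rate/model-based algorithm (BBR-style)."""
    def __init__(self, pacing_rate_bps: int = 10_000_000, rtt_s: float = 0.05):
        self.pacing_rate_bps = pacing_rate_bps
        self.rtt_s = rtt_s
    def on_ack(self, acked_bytes: int) -> None:
        self.pacing_rate_bps = int(self.pacing_rate_bps * 1.01)  # probe for more bandwidth
    def on_loss(self) -> None:
        pass  # model-based algorithms need not back off on every loss
    def congestion_window(self) -> int:
        return int(self.pacing_rate_bps / 8 * self.rtt_s)        # roughly one BDP

# The sender only depends on the interface, so a new algorithm can be
# shipped with the next app or browser release.
def send_loop(cc: CongestionController) -> None:
    for _ in range(3):
        cc.on_ack(1200)
    cc.on_loss()
    print(type(cc).__name__, "window now", cc.congestion_window(), "bytes")

send_loop(NewRenoLike())
send_loop(RateBasedLike())
```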
Georg Mayer talked about QUIC in a recent discussion with Telecom TV. His interview is embedded below; jump to 5:25 for the QUIC part.
Georg Mayer, 3GPP CT work on 5G from 3GPPlive on Vimeo.
Below are some good references about QUIC in case you want to study further.
Very interesting. I read up on QUIC and went over some of the material on how it improves latency, as well as a threat/security analysis presentation. Zero RTT is only possible if the server and client have previously communicated/authenticated and derived the session keys, so the numbers are more in line with 0-RTT about 75% of the time, which is still pretty good. There are also savings in the time to derive the session key, since the initial/integrity key can be used in the interim, allowing communication to begin in parallel. Compare this with TLS, where the total time is the TCP handshake, then the TLS 1.x exchange, and only after the session keys are derived is actual data encrypted and sent. There are surely some good ideas in this protocol from Google to reduce transport latency, which will be needed to achieve some of the low latency goals of 5G. Thx for posting this, something new to read up on and learn.
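To put rough numbers on the handshake comparison above, here is a back-of-the-envelope Python sketch counting the round trips needed before the first encrypted request can be sent. The counts are the textbook handshake costs and the 50 ms round-trip time is just an assumed figure, not a measurement.

```python
# Back-of-the-envelope: round trips before the first encrypted request,
# under an assumed 50 ms round-trip time.
RTT_MS = 50

handshake_rtts = {
    "TCP + TLS 1.2 (fresh)": 1 + 2,  # TCP handshake, then two TLS round trips
    "TCP + TLS 1.3 (fresh)": 1 + 1,  # TCP handshake, then one TLS round trip
    "QUIC (fresh, 1-RTT)":   1,      # combined transport + crypto handshake
    "QUIC (resumed, 0-RTT)": 0,      # data rides along with the first flight
}

for name, rtts in handshake_rtts.items():
    print(f"{name:24s}: {rtts} RTT = {rtts * RTT_MS} ms before the request goes out")
```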