There are many blog posts and discussions about WebSocket versus HTTP, and many developers and sites strongly advocate WebSockets, but I still cannot understand why.
For example (arguments from WebSocket advocates):
HTML5 Web Sockets represents the next evolution of web communications—a full-duplex, bidirectional communications channel that operates through a single socket over the Web. – websocket.org
HTTP supports streaming: request body streaming (you use it when uploading large files) and response body streaming.
Once a WebSocket connection is established, the client and server exchange data in frames that add only 2 bytes of overhead each, compared with up to 8 kilobytes of HTTP headers when you do continuous polling.
Why doesn't that 2-byte figure include the overhead of TCP and of the protocols below TCP?
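To make the 2-byte claim concrete: per RFC 6455, a server-to-client frame carrying fewer than 126 bytes of payload needs only a 2-byte header (client-to-server frames must additionally carry a 4-byte masking key, so their overhead is 6 bytes). A minimal sketch in Python, building a small unmasked text frame by hand:

```python
import struct

def server_frame(payload: bytes) -> bytes:
    # Minimal RFC 6455 server-to-client text frame for payloads < 126 bytes:
    #   byte 1: FIN=1, RSV=000, opcode=0x1 (text)  -> 0x81
    #   byte 2: MASK=0, 7-bit payload length
    assert len(payload) < 126, "longer payloads need an extended length field"
    header = struct.pack("!BB", 0x81, len(payload))
    return header + payload

frame = server_frame(b"hello")
print(len(frame) - len(b"hello"))  # 2 bytes of framing overhead
```

As the question notes, this counts only the WebSocket layer; the TCP/IP headers underneath (typically 40 bytes per segment, before options) apply to WebSocket and HTTP traffic alike.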
GET /about.html HTTP/1.1
Host: example.org
That is an HTTP header of ~48 bytes.
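Counting the request above byte for byte (including the CRLF that ends each line and the blank line that terminates the header block) confirms the figure:

```python
# The minimal request from above, with explicit CRLF line endings and the
# empty line that ends the header section.
request = "GET /about.html HTTP/1.1\r\nHost: example.org\r\n\r\n"
print(len(request.encode("ascii")))  # 47 bytes, i.e. ~48
```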
HTTP chunked encoding (chunked transfer encoding):
23
This is the data in the first chunk
1A
and this is the second one
3
con
8
sequence
0
- So the overhead added per chunk is small.
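The chunk framing above can be decoded with a short sketch (a hand-rolled parser for illustration, not a full HTTP implementation): each chunk is a hexadecimal size line, CRLF, that many bytes of data, CRLF, with a zero-size chunk marking the end.

```python
def decode_chunked(raw: bytes) -> bytes:
    """Reassemble the body of an HTTP/1.1 chunked-encoded message."""
    body = b""
    while True:
        # A chunk starts with its size in hex, terminated by CRLF.
        size_line, _, raw = raw.partition(b"\r\n")
        size = int(size_line, 16)
        if size == 0:          # zero-size chunk terminates the body
            return body
        body += raw[:size]
        raw = raw[size + 2:]   # skip the chunk data and its trailing CRLF

# The example chunks from above, wire format.
raw = (b"23\r\nThis is the data in the first chunk\r\n"
       b"1A\r\nand this is the second one\r\n"
       b"3\r\ncon\r\n"
       b"8\r\nsequence\r\n"
       b"0\r\n\r\n")
print(decode_chunked(raw).decode())
```

Note the per-chunk overhead here is just the hex size line plus two CRLFs, a handful of bytes, which supports the point that chunked HTTP streaming is not dramatically more expensive than WebSocket framing.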
Also, both protocols run over TCP, so all TCP issues with long-lived connections are still there.
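For example, a long-lived connection over either protocol still has to cope with half-open connections and NAT timeouts, and the usual mitigation, TCP keepalive, is a socket option rather than anything HTTP- or WebSocket-specific. A minimal sketch (the tuning constant `TCP_KEEPIDLE` is platform-dependent, hence the guard):

```python
import socket

# Enable TCP keepalive on a socket; this applies equally to an HTTP or a
# WebSocket connection, since both run over TCP.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Platform-dependent tuning (available on Linux): start sending keepalive
# probes after 60 seconds of idleness.
if hasattr(socket, "TCP_KEEPIDLE"):
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)

print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)
```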
Questions:
- Why is the WebSockets protocol better?
- Why was it implemented instead of updating the HTTP protocol?