I am building a web server application that sends files of varying size from a server to a client upon request, using the HTTP/1.1 protocol with chunked Transfer-Encoding. I am using Java Sockets/ServerSockets to handle the connections, and I send data to the client via the output stream obtained from socket.getOutputStream(). Note: I am not wrapping the socket's output stream in a BufferedOutputStream.
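For context, here is a minimal sketch of how such a chunked response body might be written over the raw socket stream. (The ChunkedWriter class and its method names are my own illustration, not code from the question; per RFC 7230, each chunk is the chunk size in hex, CRLF, the data, CRLF, terminated by a zero-length chunk.)

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class ChunkedWriter {
    private static final byte[] CRLF = "\r\n".getBytes(StandardCharsets.US_ASCII);

    // Writes one chunk in HTTP/1.1 chunked Transfer-Encoding:
    // hex length, CRLF, chunk data, CRLF.
    static void writeChunk(OutputStream out, byte[] data) throws IOException {
        out.write(Integer.toHexString(data.length).getBytes(StandardCharsets.US_ASCII));
        out.write(CRLF);
        out.write(data);
        out.write(CRLF);
        out.flush(); // flush() hands the bytes to the OS; it does not bypass Nagle
    }

    // Terminates the chunked body with a zero-length chunk.
    static void writeLastChunk(OutputStream out) throws IOException {
        out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
        out.flush();
    }
}
```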
(I am using the Chrome debugger to analyze resource timings.)
The problem I am seeing is that a file of 1285 bytes (plus headers and chunk encoding) takes well over 50 ms before Chrome receives its first byte for the request (Chrome reports a TTFB of ~50 ms), followed by a quick remainder of the transfer (1-2 ms), for a total transfer time of ~52 ms.
But if I increase the file to 1286 bytes, the TTFB drops dramatically to ~1 ms (total transfer time ~3 ms).
I have tried force-flushing the OutputStream at various points along the way, including after the request headers and after each chunk, and even making multiple flush() calls at each point just for fun.
My question: Why is the transfer time so much longer for a small file than for any file of 1286 bytes or more? And what can I do to fix this performance issue?
My theory: Something in the underlying socket implementation is ignoring the request to flush the socket's buffer, in spite of the Java calls to flush().
Disabling Nagle's algorithm should resolve this issue. Nagle's algorithm holds back a small final segment until the previously sent data has been acknowledged, and when it interacts with the receiver's delayed-ACK timer the last small write can stall for tens of milliseconds, which matches the ~50 ms TTFB you are seeing. Java's flush() pushes bytes to the OS but cannot override this TCP-level behavior.
http://www.boundary.com/blog/2012/05/know-a-delay-nagles-algorithm-and-you/
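In Java, Nagle's algorithm is disabled per-socket with Socket.setTcpNoDelay(true), typically right after accepting the connection. A minimal sketch (the helper name is mine):

```java
import java.io.IOException;
import java.net.Socket;

public class NoDelay {
    // Disables Nagle's algorithm (sets TCP_NODELAY) on a connection so
    // small writes are sent immediately instead of waiting for an ACK
    // of earlier data.
    static void disableNagle(Socket socket) throws IOException {
        socket.setTcpNoDelay(true);
    }
}
```

In your server's accept loop, call this on each client socket before writing the response, e.g. `Socket client = serverSocket.accept(); NoDelay.disableNagle(client);`.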