Tcl Source Code

Ticket UUID: 46b6edad51e645c70e88dd4f944a2d4afc63a96a
Title: HTTP Keep-Alive with Pipelined requests fails
Type: Bug
Version: 8.6.7
Submitter: kjnash
Created on: 2017-11-30 16:01:25
Subsystem: 29. http Package
Assigned To: nobody
Priority: 5 Medium
Severity: Important
Status: Open
Last Modified: 2018-04-20 16:56:55
Resolution: None
Closed By: nobody
Closed on:
When http::geturl is called with -keepalive 1, and then called again (for a URL on the same server) before the first HTTP response is complete, the two HTTP requests are not correctly handled:
(a) if the time delay between the two calls is short, then the first HTTP request is never sent;
(b) if the delay is longer, but not long enough for the first HTTP response to be complete, then the second HTTP response is thrown away, and the first HTTP response is assigned to the second HTTP request.

In both cases, Tcl reports the first HTTP response as a timeout.

The attached test script has a full explanation.
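For illustration only (the attached script is the authoritative reproduction), case (a) amounts to a sketch like the following; the URLs and timeout are placeholders:

```tcl
package require http

# Two keep-alive requests to the same (placeholder) server, issued
# back-to-back so the second call begins before the first response
# is complete.
set ::remaining 2
proc Done {tok} {
    # With the bug, the first token reports "timeout" rather than "ok".
    puts "[::http::status $tok] [::http::code $tok]"
    incr ::remaining -1
}
set tok1 [::http::geturl http://example.com/page1 -keepalive 1 \
        -timeout 10000 -command Done]
set tok2 [::http::geturl http://example.com/page2 -keepalive 1 \
        -timeout 10000 -command Done]
while {$::remaining > 0} { vwait ::remaining }
```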
User Comments: kjnash added on 2018-04-20 16:56:55:
Attached file http-files-05ac4d0cdd.tar.gz provides all the files that differ from Tcl 8.6.8.  The tarball is up to date with branch bug-46b6edad51-concurrent-http commit 05ac4d0cdd.

Earlier tarballs have been removed.

kjnash added on 2018-04-13 15:17:38:
I mean ticket 3599789, which provides a patch as artefact a3dcbc5e6dd81251.

KA = Keep-Alive

kjnash added on 2018-04-12 14:16:02:
Yes, I believe that if the changes in branch bug-46b6edad51-concurrent-http are accepted, they will fix bug a3dcbc5e6dd81251.

The patch supplied with ticket a3dcbc5e6dd81251 makes three changes:

1. http::CloseSocket - when closing a socket, use sockettokenmap to reset tokens for any KA requests that use the socket.
2. http::geturl - create sockettokenmap as a list of tokens for KA-requests that use a particular socket.
3. http::wait - return immediately (without waiting) if the socket for a KA request is found to be closed.

These changes handle some of the issues that arise when a persistent socket is half-closed by the server, but this event is not detected by the client.
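The sockettokenmap idea can be sketched roughly as follows (this is a reconstruction from the description above, not the actual patch in artefact a3dcbc5e6dd81251):

```tcl
# Sketch: map each keep-alive socket to the tokens that use it, and
# reset those tokens when the socket is closed.
namespace eval ::http {
    variable sockettokenmap
    array set sockettokenmap {}
}

# In http::geturl, after opening or reusing socket $sock for $token:
#     lappend ::http::sockettokenmap($sock) $token

# In http::CloseSocket, before closing $sock:
proc ::http::ResetTokensFor {sock} {
    variable sockettokenmap
    if {[info exists sockettokenmap($sock)]} {
        foreach token $sockettokenmap($sock) {
            ::http::reset $token eof   ;# mark the KA request as failed
        }
        unset sockettokenmap($sock)
    }
}
```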

Branch bug-46b6edad51-concurrent-http handles this problem slightly differently:

(a) when a persistent socket is idle, "fileevent readable" events are bound to http::CheckEof which closes the socket if it is half-closed by the server;
(b) http::Event, http::Connect and http::Connected include code to handle an asynchronous close event when a persistent socket is used after a period of idleness;
(c) if a request has not been completed when its persistent socket closes, its token is not reset (as in ticket a3dcbc5e6dd81251), but sent over a new connection;
(d) several tests in tests/httpPipeline.test cover cases where the server half-closes a persistent socket either after timeout, or by unexpectedly sending a "Connection: close" header. The tests pass.
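The idle-socket monitoring in (a) can be sketched like this (a simplification of the branch's http::CheckEof, written from the description above):

```tcl
# While a persistent socket is idle, the only readable event expected
# is a server half-close (or stray data); detect it and close the
# socket so that later requests open a fresh connection.
proc ::http::CheckEof {sock} {
    if {[catch {read $sock}] || [eof $sock]} {
        # Server has half-closed (or errored) the idle connection.
        fileevent $sock readable {}
        catch {close $sock}
        # ... also clean up the socket*($state(socketinfo)) maps ...
    }
}

# When a keep-alive socket becomes idle:
#     fileevent $sock readable [list ::http::CheckEof $sock]
```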

dgp added on 2018-04-06 16:03:03:
Is there any overlap between this work and the older ticket...


erikleunissen added on 2018-04-01 20:09:10:

This bug shows many similarities with the bug reported here:

See especially the report.pdf attached to this ticket.

The bug was cross-filed here:

Erik Leunissen

kjnash added on 2018-04-01 15:12:20:
Attached file http-files-386c1a2e43.tar.gz provides all the files that differ from Tcl 8.6.8.

kjnash added on 2018-04-01 14:27:46:
Tcl branch bug-46b6edad51-concurrent-http fixes this problem.


The existing http library does not handle concurrent use of a persistent
connection correctly: it makes it the responsibility of the caller of
http::geturl to check that one HTTP transaction is complete before submitting
the next.  Also, it does not monitor idle connections for closure at the
server end.

The revisions in branch bug-46b6edad51-concurrent-http arrange for such HTTP
requests to be queued until the connection is available, and pipelined if
appropriate.  If the connection is lost, RFC 7230 permits retry in some cases.
Connections are monitored for server timeout.

New options for http::config control these facilities:

-pipeline boolean
    Specifies whether HTTP/1.1 transactions on a persistent socket will be
    pipelined. The default is 1.
-postfresh boolean
    Specifies whether requests that use the POST method will always use a
    fresh socket, overriding the -keepalive option of command http::geturl.
    The default is 0.
-repost boolean
    Specifies what to do if a POST request over a persistent connection fails
    because the server has half-closed the connection.  If boolean true, the
    request will be automatically retried; if boolean false, it will not, and
    the application that uses http::geturl is expected to seek user confirmation
    before retrying the POST. The default is 0.

Further discussion in section "PERSISTENT CONNECTIONS" of http(n).
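For example, a caller that wants concurrent keep-alive requests queued but not pipelined, with automatic retry of half-closed POSTs, might configure:

```tcl
package require http

# Queue concurrent requests on the persistent connection without
# pipelining them, and retry half-closed POSTs automatically.
::http::config -pipeline 0 -repost 1

# Alternatively, force every POST onto a fresh socket regardless of
# the -keepalive option given to http::geturl.
::http::config -postfresh 1
```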


Tcltls bug

An https request for an entity that is larger than the http::geturl option
-blocksize hangs if the server response uses a persistent connection and
does not use chunked encoding (but does supply an accurate Content-Length
header).  The client does not finish reading until the server times out.
I am not sure whether this bug lies in Tcltls, Tcl stacked channels, the
Tcl [read] command, fileevent generation or buffering, but I have submitted
a ticket on Tcltls.

The bug is worked around by specifying the number of characters for each
[read] as ($state(totalsize) - $state(currentsize)).

It appears to be necessary to do this for every read, not only for the last
read(s) where this quantity is smaller than the -blocksize that would normally
be requested.


This behavior is mandatory in RFC 7230, but NOT in the superseded RFC 2616.
Much of the web is delivered by servers that comply with the older standard.
These may close the connection if the client does not send a
"Connection: keep-alive" header.  The http package sends the header only for
HTTP/1.0.  The new branch fixes this bug and sends the header for HTTP/1.1 also.
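Before the fix, a caller could try to compensate by supplying the header explicitly through the standard -headers option (whether the package forwards it unmodified depends on the version):

```tcl
# Explicitly send the header an RFC 2616 server may require, instead
# of relying on http to add it (it does so only for HTTP/1.0).
set tok [::http::geturl http://example.com/ -keepalive 1 \
        -headers {Connection keep-alive}]
```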


By default http asks the server to compress the HTTP response.  A new option for
http::config allows this request to be switched off.

-zip boolean
    If the value is boolean true, then by default requests will send a header
    "Accept-Encoding: gzip,deflate,compress".  If the value is boolean false,
    then by default this header will not be sent.  In either case the default
    can be overridden for an individual request by supplying a custom
    Accept-Encoding header in the -headers option of http::geturl. The default
    is 1.
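For example, with -zip left at its default of 1, a single request can still opt out by supplying its own header (the URL is a placeholder):

```tcl
# Global default: ask servers for compressed responses.
::http::config -zip 1

# This particular request asks for an uncompressed response by
# overriding the header for this call only.
set tok [::http::geturl http://example.com/big.iso \
        -headers {Accept-Encoding identity}]
```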

V. MODIFIED FILES (cf. 8.6.8)

EDITED     doc/http.n
EDITED     library/http/http.tcl
EDITED     tests/http11.test
ADDED      tests/httpPipeline.test
ADDED      tests/httpTest.tcl
ADDED      tests/httpTestScript.tcl


The tests in the new test file tests/httpPipeline.test all have a constraint
"serverNeeded" and so by default are not executed.

There are over 1000 tests, which are time-consuming because they use a delay
to ensure that persistent connections to the server have timed out, and they
also suffer from network latency.  Some tests may appear superfluous, but they
do reveal bugs, including some where test failure depends on the details of
client/server timing and is therefore intermittent.


Refactor variable names:
    conn_id -> connId (in ::http::CloseSocket)
    socketmap -> socketMapping (namespace variable)
    In http::Event, local variable "n" is used for multiple purposes.
    Reserve it for an increment to the length of the response body, and apply
    n -> nsl, nhl, ntl for its other meanings. Set n 0 in one place.
    In http::Event, local variable "chunk" is used for two purposes.
    chunk -> hexLenChunk when it holds the length of the chunk in
    hexadecimal (the chunk-size line of a chunked-encoding response).

Refactor command name:
    Eof -> Eot (the command is sometimes called when not at eof.  Comments in
    the command header updated.)


      a. Carefully distinguish expected and premature eof.
         In http::Eot --
         - Remove optional argument "force" which is never used.
         - Do not use ($state(state) eq "header") to test whether the
           transaction is complete.  Rely on a new argument "reason" instead.
         - Pass modified value $reason to http::Finish.
         In http::Finish --
         - Call CloseSocket if state(status) is "eof".
         In http::Event --
         - Set state(state) to new value "complete" to signal that the response
           is complete, or will be complete when the server closes the
           connection.  The value is read within http::Event (and nowhere else)
           as a test of completion.
      b. Bugfixes in recording the size of the response body.
         In http::Event --
         - Do not count $n towards the size of the body if it is not
           the length of a part of the body.  Set it to 0 instead.
           This includes HTTP trailers.
      c. Terminate the response and the connection if there is an error with
         chunked encoding.
      d. When using the -handler option to http::geturl, terminate the response
         if the handler command does not return an integer result as required.
         - This change can be reversed if considered inappropriate, but at the
           cost of (a) not knowing the size of the response body, (b) having to
           use server eof to define the end of the resource, and
           (c) incompatibility with keep-alive.
      e. Add some commented-out "##Log" calls for debugging.
      f. Define new variable state(log_size) purely for logging the cumulative
         size of the response body.

And cf. commit 60eee5809f:
      a. Amend the command "handler" in tests/http11.test, to return an integer
         as required by http(n), and also not to check for eof, which is the
         responsibility of http.  This form of the test is appropriate for
         commits from 78b23edb6b onwards.
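A conforming -handler command, per http(n), reads from the socket itself, returns the number of bytes consumed, and leaves eof detection to http; a sketch (the URL and global ::body are illustrative):

```tcl
package require http

# http calls a -handler command with two arguments, the socket and
# the token; the return value must be the byte count consumed.
proc handler {sock token} {
    set chunk [read $sock 4096]
    append ::body $chunk
    # Return an integer, as http(n) requires; do NOT test eof here --
    # that is http's responsibility.
    return [string length $chunk]
}

set tok [::http::geturl http://example.com/ -handler handler]
```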


Bugfixes for occasional test failures, and improvements for (possibly hypothetical) cases.

1. Modify http::init to use CloseSocket and clear old persistent-connection data gracefully.
2. Unset state(after) when its event is cancelled.
3. When the server sends "Connection: close", do not clear the queues in http::Event because they are needed if http::Finish is called with error status.  Instead clear them in http::CloseQueuedQueries; also (in several places) use socketPlayCmd(*), socketClosing(*) instead of empty queues to signal that the socket will close.
4. Split http::Unset from http::CloseQueuedQueries and http::geturl.
5. In http::geturl and http::ReplayCore, initialize all socket*($state(socketinfo)) array elements.  Test existence of the correct variables before setting traces.  Reorder code so the new tests occur before setting the variables.
6. Re-sync http::geturl and http::ReplayCore.  If socket cannot be opened, pass error message to http::Finish and set state(sock) NONE.  Initialise socketPlayCmd(*) to same value.
7. Add Log calls to report code that appears to be rarely executed.
8. Improve Log messages in http::CancelReadPipeline http::CancelWritePipeline
9. Do not use "unset socketRdState(*)" as a "hard cancel" for queued tokens in http::ReplayIfDead; instead a "soft cancel" is provided in http::ReplayCore and http::ReInit, that does not set the state to indicate failure. The state(after) event is now cancelled in http::ReInit not http::ReplayIfDead.
10. Split http::ReInit from http::ReplayCore, and also apply it to queued tokens so they are re-initialized correctly.
11. Do not use argument skipCB when calling Finish from ReplayCore - this was copied in error from http::geturl.

Rename tests to distinguish them from tests in other files.