Use Case: Multiple Asynchronous Requests
I want to send several independent requests asynchronously with a single thread.
The application should then process the responses in their order of arrival.
- all requests are targeting a single server, for example SOAP calls
- many requests to the same server or via the same proxy are sent with pipelining
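A minimal sketch of arrival-order processing (hypothetical class and method names; each server round trip is simulated by a sleeping task, where a real client would use non-blocking I/O):

```java
import java.util.*;
import java.util.concurrent.*;

// Several independent "requests" are submitted at once and their responses
// are consumed strictly in the order in which they arrive, not the order in
// which they were submitted.
public class ArrivalOrderDemo {

    static List<String> collectInArrivalOrder(int[] delaysMillis) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(delaysMillis.length);
        CompletionService<String> completed = new ExecutorCompletionService<>(pool);
        for (int i = 0; i < delaysMillis.length; i++) {
            final int id = i;
            final int delay = delaysMillis[i];
            completed.submit(() -> {
                Thread.sleep(delay);            // stand-in for the network round trip
                return "response-" + id;
            });
        }
        // take() hands back futures in completion order, not submission order.
        List<String> arrival = new ArrayList<>();
        for (int i = 0; i < delaysMillis.length; i++) {
            arrival.add(completed.take().get());
        }
        pool.shutdown();
        return arrival;
    }

    public static void main(String[] args) throws Exception {
        // The slowest request is submitted first, so it is processed last.
        System.out.println(collectInArrivalOrder(new int[] {300, 100, 200}));
    }
}
```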
Related / Out Of Scope
This use case requires queueing of requests, since it cannot be guaranteed and may not be intended that there is a connection available for each request. Queueing requires a scheduling algorithm for picking up the queued requests. Different algorithms may be appropriate depending on the scenario, for example if the requests are all targeted at a single server, as opposed to all or most requests being targeted at different servers. A mechanism is required so that the application can associate the sent requests with the received responses, as suggested for the single asynchronous request with notification.
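One possible shape for such a queueing mechanism, sketched with hypothetical names: a dispatcher holds requests back while no connection is free, and a correlation id stored at submission time routes each response to its handler regardless of arrival order.

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch only: a bounded dispatcher that queues requests when no connection
// is free, and correlates each response with its request via the request id.
public class RequestQueueSketch {
    record Request(int id, String uri) {}

    private final Deque<Request> queue = new ArrayDeque<>();
    private final Map<Integer, Consumer<String>> handlers = new HashMap<>();
    private int freeConnections;

    public RequestQueueSketch(int connections) { this.freeConnections = connections; }

    // Enqueue the request; the handler is remembered under the request id so
    // the response can be routed back in whatever order it arrives.
    public void submit(Request req, Consumer<String> onResponse) {
        handlers.put(req.id(), onResponse);
        queue.addLast(req);
        dispatch();
    }

    private void dispatch() {
        while (freeConnections > 0 && !queue.isEmpty()) {
            Request req = queue.removeFirst();  // FIFO scheduling; other policies possible
            freeConnections--;
            // A real implementation would send req over the wire here.
        }
    }

    // Called when a response arrives: free the connection, pick up the next
    // queued request, then notify the handler registered for this request.
    public void onResponse(int requestId, String body) {
        freeConnections++;
        dispatch();
        handlers.remove(requestId).accept(body);
    }
}
```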
The order of arrival of the responses can differ from the order in which the requests are sent by the application. Reasons for this are different response times of different servers, different response times of the same server on multiple connections, or a scheduling mechanism that sends the requests in an order different from that in which they were generated by the application.
Due to connection availability, connection re-use and pipelining, there may be dependencies between the responses for independent requests. The application MUST NOT block to wait for availability of a particular response, since that response might never become available until other responses have been processed and their connections re-used or released. There is one exception to this rule: if the scheduling algorithm guarantees in-order sending of requests, then the application can block on responses in exactly the same order in which the requests were generated and sent.
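The one permitted blocking pattern can be illustrated as follows (a sketch, valid only under the stated assumption that the scheduler guarantees in-order sending):

```java
import java.util.*;
import java.util.concurrent.*;

// Responses are joined in exactly the submission order. Blocking like this
// is safe only when the scheduling algorithm sends requests in that same
// order; otherwise it risks the deadlock described above.
public class InOrderJoin {
    public static List<String> joinInOrder(List<CompletableFuture<String>> futures) {
        List<String> results = new ArrayList<>();
        for (CompletableFuture<String> f : futures) {
            results.add(f.join());   // blocks, in submission order
        }
        return results;
    }
}
```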
The focus of this use case is the design of a queueing mechanism for requests.
Interfaces for connection management and request scheduling are desirable.
Discussion of Pipelining
With HTTP pipelining, requests are sent over a connection before the responses of previous requests on that connection have been processed. Since each response could require closing the connection and re-sending the requests that have not been handled by that time, requests have to be repeatable. Detection and handling of non-repeatable requests, for example by disabling pipelining for them, needs to be considered. Asking applications to flag non-repeatable requests is an option.
Special tracking is needed to associate responses with requests on a pipelined connection, and to detect which requests have to be repeated if the connection is closed prematurely. Feeding requests back into the queue of requests should give them a higher priority, since they have already gone through scheduling before.
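The higher priority for fed-back requests could be realized with a priority queue that orders retries ahead of fresh requests, FIFO within each priority class. A sketch with assumed names, not an existing API:

```java
import java.util.*;

// Retries (requests re-queued after a premature connection close) get
// priority 0, fresh requests priority 1; a sequence number keeps FIFO
// order within each priority class.
public class RetryPriorityQueue {
    record Entry(int priority, long seq, String request) {}

    private final PriorityQueue<Entry> queue = new PriorityQueue<>(
        Comparator.comparingInt(Entry::priority)      // lower number = picked up first
                  .thenComparingLong(Entry::seq));    // FIFO within a priority
    private long counter;

    public void offerNew(String request)   { queue.add(new Entry(1, counter++, request)); }
    public void offerRetry(String request) { queue.add(new Entry(0, counter++, request)); }

    public String poll() {
        Entry e = queue.poll();
        return e == null ? null : e.request();
    }
}
```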
Requests that are sent using the expect/continue handshake effectively flush the pipeline, since the request body should not be sent until the 100-continue response has been received. Timeouts for servers that do not support the expect/continue handshake cannot be based on the time at which the request is sent, since the delay for the 100-continue response depends on the preceding requests and responses. A timeout can instead be started when all preceding responses have been handled.