- Fix an issue in the HTML flow that prevented overriding the value
of the 'Server' response header.
- Add tests that ensure we emit a single, correct 'Server' header
in all flows when not overriding it.
- Add tests that ensure overriding the 'Server' response header
works (see the sketch below). The resource and IPRO flows are added to
the expected failures, as those do not work yet (they will be addressed
in a follow-up).
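A minimal sketch of such an override, assuming the third-party
headers-more-nginx-module supplies the more_set_headers directive; the
actual tests may override the header differently:

    server {
      listen 8050;
      pagespeed on;
      pagespeed FileCachePath /tmp/ngx_pagespeed_cache;
      # Override the 'Server' response header for this vhost; with the
      # fix, the HTML flow respects this instead of emitting its own
      # value.
      more_set_headers "Server: MyCustomServer";
    }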
Fixes https://github.com/pagespeed/ngx_pagespeed/issues/864 (HTML flow)
We should not rename the ETag header (a workaround for the gzip module
clearing it) when the gzip module isn't actually built: in that case
we fail to inject the header filter that renames the header back
to 'ETag' before it is sent out.
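For context, a sketch of the configuration in which the workaround
applies; the directives are standard nginx, and
--without-http_gzip_module is nginx's own configure flag:

    # The ETag rename-and-restore is only needed when the gzip module
    # is compiled in (the default build; absent when nginx is
    # configured with --without-http_gzip_module) and gzip may strip
    # the ETag:
    gzip on;
    gzip_types text/css application/javascript;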
Fixes https://github.com/pagespeed/ngx_pagespeed/issues/770
Based on @dinic's work, add keep-alive support for the native fetcher.
This adds a new option, usable at the http{} level of the configuration:
pagespeed NativeFetcherMaxKeepaliveRequests 50;
The default value is 100 (aligned with nginx). Setting the value to 1
turns off keep-alive requests altogether.
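A usage sketch; the surrounding directives and paths are illustrative,
and UseNativeFetcher is the existing (experimental) toggle for the
native fetcher:

    http {
      pagespeed on;
      pagespeed FileCachePath /tmp/ngx_pagespeed_cache;
      pagespeed UseNativeFetcher on;
      # Allow up to 50 requests per fetcher connection; a value of 1
      # disables keep-alive entirely.
      pagespeed NativeFetcherMaxKeepaliveRequests 50;
    }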
Most notable changes:
- Request keep-alive by adding the appropriate request header
- Fix connections being reused while they are still servicing other requests:
  - Remove a connection from the pool of available keep-alive connections
    when applicable
  - Disable keep-alive in more situations where that is appropriate
- Fix response parsing
- Remove connections that time out from the keep-alive pool
- Add a few sanity (D)CHECKs
- Emit debug messages for traceability
- Fix IPv6 addresses returned from DNS queries being ignored when IPv6
  is enabled.
- Bump the fetch timeout in the test configuration to deflake tests that
  require DNS lookups (which the native fetcher currently performs via
  8.8.8.8)
Conflicts:
src/ngx_fetch.cc
- Prevent logging to stdout/stderr; make sure early messages during
  initialization go to error.log. Note that nginx is still working to
  set up its logging configuration at that point, so these early
  messages go through its defaults, which means that only warnings or
  worse will pass for early logging messages.
- Make sure we initialize ProxyFetchFactory's NgxMessageHandler with the
  correct server{}-specific log, so it writes to the error_log configured
  in the server{} block instead of the global error_log (see the example
  below).
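An illustrative error_log layout; paths and levels are examples, not
defaults:

    # Before this configuration is parsed, early ngx_pagespeed messages
    # go through nginx's compiled-in defaults (warnings or worse only).
    error_log logs/error.log warn;
    http {
      server {
        listen 8050;
        # With the fix, messages from this server{}'s message handler
        # land here instead of in the global error_log above.
        error_log logs/vhost.error.log info;
        pagespeed on;
        pagespeed FileCachePath /tmp/ngx_pagespeed_cache;
      }
    }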
Fixes https://github.com/pagespeed/ngx_pagespeed/issues/808
Helps https://github.com/pagespeed/ngx_pagespeed/issues/817
In ps_connection_read_handler(), make sure we act accordingly when
r->connection->error is set (indicating that the current request has
been finalized).
Reproduction of what happens when we don't: enable IPRO + SPDY and
rapidly refresh a page with Chrome. These rapid aborts will eventually
trigger a segfault, a hang, or other bad behaviour.
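A sketch of the repro setup; the port and certificate paths are
placeholders, and SPDY requires an nginx of this era built with
--with-http_spdy_module:

    server {
      listen 443 ssl spdy;
      ssl_certificate     /etc/nginx/cert.pem;
      ssl_certificate_key /etc/nginx/cert.key;
      pagespeed on;
      pagespeed FileCachePath /tmp/ngx_pagespeed_cache;
      # Enable IPRO explicitly for the repro.
      pagespeed InPlaceResourceOptimization on;
      # Rapidly refreshing a page served by this vhost in Chrome
      # triggers the aborts described above.
    }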
For FetchInPlaceResource, NgxBaseFetch would send two bytes down its
pipe: one upon HeadersComplete() and one upon HandleDone(). We need
only one to resume processing on the nginx side.
There is a race between ps_connection_read_handler() and processing
of the byte sent by NgxBaseFetch::HandleDone():
ps_connection_read_handler() clears the pipe when the request is
finalized, and also drains it on each event, so two writes could be
processed as one when lucky, masking the problem.
One concrete problem this solved for me was that SPDY + IPRO +
proxy_pass would segfault, hang, and/or pass 5xx/404 responses from
IPRO lookup fetches on to the browser, alongside alerts about
r->count being zero in nginx's error.log.
Might fix https://github.com/pagespeed/ngx_pagespeed/issues/788
Fixes https://github.com/pagespeed/ngx_pagespeed/issues/792