Compare commits


42 Commits

Author SHA1 Message Date
Anupama Dutta ac1c845b11 Trunk tracking update from r3736 to r3758.
Updated global_only options to include the correct APACHE_CONFIG_OPTIONX directives.
Removed repeated tests for prioritize_critical_css basic functionality.
Added new tests, mostly downstream caching tests and related pagespeed.conf updates. Also added missing pagespeed.conf updates for downstream caching.
2014-02-07 09:39:51 -05:00
Jeff Kaufman f5252b569a trunk-tracking: update from r3715 to r3736
Squash-merge of Jan's #608 and Otto's #611.

* r3726:
  * Updated closure compiler flags for static JS files.
* r3729:
  * Centralize parsing of FetchHttps in SystemRewriteOptions so ngx_pagespeed
    can get it too.
  * To keep the helpful error_message from SerfUrlAsyncFetcher, wire it through
    RewriteOptions as a new-fangled error_detail.
* r3735:
  * Follow-up changes for downstream caching integration with beaconing
    dependent filters: If a downstream cache rebeaconing key is configured, we
    should instrument the page only if the key present in the PS-ShouldBeacon
    header matches the one in the configuration. This allows us to send no-cache
    headers for anything that carries the right beaconing key, and continue to
    send out the original cache control headers in other cases where downstream
    caching is enabled.
* Native fetcher: fortify handling of content length (and absence).
* Native fetcher: fail when the stream terminates before having
  completely parsed the headers.
* Tests: Rename `test_filter` -> `start_test` in ngx_system_test.sh for
  a test.
* Tests: Move blockingrewrite key to the http {} block.
* Tests: Update localhost -> 127.0.0.1. The native fetcher uses
  DNS to resolve, and won't be able to retrieve an IP for
  localhost.
* Tests: Allow outstanding proxy fetches some time to finish
  when running under valgrind, before terminating nginx.
* Valgrind: Add suppressions to make testing a release build pass.

This pull update was extra work because the valgrind and native fetcher flows
had rotted a bit.  We need to make sure to test them with every update.
2014-01-31 10:07:36 -05:00
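The r3735 rebeaconing flow above is configuration-driven; a hedged sketch of the relevant `pagespeed.conf` directives follows (the purge location and key below are placeholder values, not the project's defaults):

```nginx
# Sketch only: with a rebeaconing key configured, pages are instrumented
# (and served no-cache) only when the PS-ShouldBeacon request header
# carries the matching key.
pagespeed DownstreamCachePurgeLocationPrefix http://localhost:8020/purge;
pagespeed DownstreamCacheRebeaconingKey "my_rebeaconing_key";
```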
Jeff Kaufman 4783144e7d Merge branch 'master' into trunk-tracking 2014-01-24 16:56:26 -05:00
Jeff Kaufman 83205c9c31 Merge pull request #606 from pagespeed/oschaaf-multiple-experiment-cookies
Experiments: fix sending out multiple experiment cookies
2014-01-24 13:12:27 -08:00
Otto van der Schaaf 625e762961 Experiments: fix sending out multiple experiment cookies
Only classify people into an experiment when we are rewriting html.
Fixes https://github.com/pagespeed/ngx_pagespeed/issues/586
2014-01-24 22:09:31 +01:00
Jeff Kaufman c20affe323 Merge pull request #605 from pagespeed/oschaaf-date-header
Date header: use current date when we don't get one handed over
2014-01-24 08:47:00 -08:00
Otto van der Schaaf 7a9e6de802 Date header: use current date when we don't get one handed over
When the content generator does not supply us with a date header,
we need to create one ourselves and set it to the current date.

Fixes:
https://github.com/pagespeed/ngx_pagespeed/issues/604 (duplicate)
https://github.com/pagespeed/ngx_pagespeed/issues/577
2014-01-24 16:49:37 +01:00
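The fallback in #605 can be sketched in shell terms (illustrative only; the real fix lives in the module's response-header handling, and the variable names here are hypothetical):

```shell
# If the upstream response carries no Date header, synthesize one from
# the current time. Sample headers below are placeholders.
response_headers='Content-Type: text/html
Cache-Control: max-age=300'

date_header=$(printf '%s\n' "$response_headers" | grep -i '^Date:' | head -n 1)
if [ -z "$date_header" ]; then
  # No Date header handed over: create one set to the current date.
  date_header="Date: $(date -u '+%a, %d %b %Y %H:%M:%S GMT')"
fi
echo "$date_header"
```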
Huibao Lin 96cf9a22f7 Update to 1.7.30.3 release 2014-01-16 18:37:36 -05:00
Jeff Kaufman ab83a70a35 Merge pull request #599 from eezis/docfix
Added a missing 'cd ~' command to the README
2014-01-16 13:30:44 -08:00
Ernest Ezis 658b2cf7a9 Added the missing 'cd ~' command to the '3. Download and build nginx:' section 2014-01-16 12:45:14 -07:00
Jeff Kaufman 092bbf2862 Merge pull request #597 from pagespeed/jefftk-update-trunk-tracking
trunk-tracking: update from r3696 to r3714
2014-01-16 07:18:31 -08:00
Jeff Kaufman f04c533df0 trunk-tracking: update from r3696 to r3714
https://code.google.com/p/modpagespeed/source/detail?r=3697
 * system test improvements

https://code.google.com/p/modpagespeed/source/detail?r=3699
 * moved some config from location block to server block
 * system test improvements

https://code.google.com/p/modpagespeed/source/detail?r=3707
 * tests for OptimizeForBandwidth
   * had to switch tests from directory blocks to server+location blocks

https://code.google.com/p/modpagespeed/source/detail?r=3708
 * update to a test that had never been ported

* ngx_pagespeed.cc:
  * change in signature of FindIgnoreCase

https://code.google.com/p/modpagespeed/source/detail?r=3689
 * Was apparently skipped with #591.
2014-01-15 13:18:47 -05:00
Jeff Kaufman 8468e4849a Merge branch 'master' into trunk-tracking 2014-01-08 10:51:42 -05:00
Jeff Kaufman df5736609d native-fetcher: add support for FetchProxy
The native fetcher previously ignored FetchProxy settings; now it doesn't.

Squash-merge of tcpper's #590.
2014-01-08 10:51:24 -05:00
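With this squash-merge the native fetcher honors the same directive the serf fetcher already did; a hedged example (the proxy address is a placeholder):

```nginx
# Placeholder proxy host:port; FetchProxy is the existing directive that
# the native fetcher now respects as well.
pagespeed FetchProxy 192.168.0.1:8080;
```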
Jeff Kaufman 7fbb2c61ee readme: release 1.7.30.2 2014-01-06 16:51:41 -05:00
Jeff Kaufman af772c2fe8 Merge pull request #592 from pagespeed/jefftk-better-configure-error
config: point people to obj/autoconf.err when psol isn't detected
2014-01-03 03:33:47 -08:00
Jeff Kaufman a4bd9b9c13 config: point people to obj/autoconf.err when psol isn't detected by ./configure 2014-01-02 23:09:42 -05:00
Jeff Kaufman f87d0f7ae2 Merge pull request #591 from pagespeed/jud-trunk-tracking
Updates to trunk tracking branch.
2014-01-02 13:43:11 -08:00
Jud Porter 501742cb56 trunk-tracking: update from r3677 to r3696.
Add inline unauthorized resources test and fix rendered image dimensions test.
2014-01-02 16:42:10 -05:00
Jeff Kaufman a669be99b1 Merge branch 'master' into trunk-tracking 2014-01-02 16:25:23 -05:00
Jeff Kaufman 328d3afc9b Merge pull request #583 from pagespeed/jefftk-support-purge
native-fetcher: support non-GET request methods like PURGE
2013-12-20 07:11:55 -08:00
Jeff Kaufman 2681c24ee0 native-fetcher: fix to work with nginx 1.5.8+
nginx 1.5.8 changed the resolver api, which the native fetcher uses.

Fixes #578.

Squash-merge of @dinic's #581.
2013-12-19 12:46:53 -05:00
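The compatibility approach (visible later in the diff as an `nginx_version` preprocessor guard) can be sketched numerically; nginx encodes version 1.5.8 as 1005008 (major*1000000 + minor*1000 + patch):

```shell
# Sketch: choose a resolver code path by comparing against the 1.5.8
# version constant. The value below is a stand-in for nginx's own macro.
nginx_version=1005008
if [ "$nginx_version" -lt 1005008 ]; then
  echo "pre-1.5.8 resolver API"
else
  echo "1.5.8+ resolver API"
fi
```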
Jeff Kaufman f86f47fda4 native-fetcher: support non-GET request methods like PURGE 2013-12-19 11:38:46 -05:00
Jeff Kaufman 179c81afa3 test: don't run downstream caching test with native fetcher 2013-12-19 11:03:53 -05:00
Jeff Kaufman 8d7eb20c89 Merge pull request #575 from pagespeed/dec9-trunk-tracking-updates
Trunk tracking updates to sync to PSOL r3677.
2013-12-13 07:14:30 -08:00
Anupama Dutta 1fe6c54b94 Trunk tracking updates to sync to PSOL r3677.
Includes minor additions to tests only.
2013-12-12 22:27:27 -05:00
Jeff Kaufman 1667879202 Merge pull request #571 from morlovich/morlovich-trunk-tracking-20131125
Trunk tracking update for up to r3646
2013-12-02 07:45:08 -08:00
Maks Orlovich 0076e45677 Port over ModPagespeed r3635: RespectXForwardedProto is vhost-scope 2013-11-27 12:59:47 -05:00
Jeff Kaufman 1f3560ea21 backport header-only fix
Was: trunk-tracking: update to r3632 from 1.7.30.1
2013-11-26 16:39:52 -05:00
Jeff Kaufman 54352bff72 trunk-tracking: update to r3632 from 1.7.30.1 2013-11-26 16:37:30 -05:00
Maks Orlovich 329985659c Fix flaky test
It doesn't make sense to fetch_until the uncombined URL and then
grep for the combined one: you might just get entirely unoptimized
output and fail the test. Instead, wait for the combining to happen,
and make sure it combined the right number of things.
2013-11-25 16:32:31 -05:00
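The fix replaces a one-shot grep with a wait-for-convergence pattern; a generic sketch of that polling idea (the real helper is `fetch_until` in the system-test framework, with different arguments):

```shell
# Generic poll-until-expected loop, modeled on fetch_until. Arguments:
#   $1: command to run, $2: expected output, $3: max attempts.
poll_until() {
  attempt=0
  while [ "$attempt" -lt "$3" ]; do
    # Re-run the probe each iteration instead of asserting on a single
    # snapshot that may not be optimized yet.
    [ "$(eval "$1")" = "$2" ] && return 0
    attempt=$((attempt + 1))
    sleep 0.1
  done
  return 1
}
```

In the test this would poll the combined URL until the expected number of combined elements appears, then assert on that same output.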
Maks Orlovich db870f7023 Update to pagespeed_automatic.a: more symbols + symbol renaming
This makes it possible to build without disabling SSL and means we need
fewer extra .a + .cc files, but it does mean that APR can't be used
directly (since it got renamed), so just use PosixTimer where it was
used.
2013-11-25 15:36:30 -05:00
Jeff Kaufman ed14455412 valgrind: unflake cache purging test
Fixes #569.
2013-11-25 14:38:24 -05:00
Jeff Kaufman be4d263d10 valgrind: suppressions file might not be in current directory 2013-11-25 10:23:25 -05:00
Jeff Kaufman 0bafd6b7e0 Merge pull request #565 from pagespeed/oschaaf-valgrind
Valgrind: Add an automated test
2013-11-25 07:03:33 -08:00
Otto van der Schaaf 9bbe912bd7 Valgrind: Add an automated test
This makes nginx run in the background under valgrind,
with both a master and a child process.
Valgrind errors will be redirected to `valgrind.log`.
When `USE_VALGRIND` is set, all system tests will be run under valgrind,
and at the end a new test is appended which ensures no valgrind errors
were encountered.

It is also worth noting that:
- There is a new file, `valgrind.sup`, which contains a few suppressions.
- Some tests behave flakily under valgrind. For now these are appended
  to the expected failures (when under valgrind only).
- 'Possibly lost' errors are all suppressed to keep the number of false
  positives manageable.
2013-11-21 21:26:15 +01:00
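An entry in `valgrind.sup` follows valgrind's standard suppression-block format; a hedged example of the shape (the name and frames below are placeholders, not entries from the actual file):

```
{
   possibly_lost_in_nginx_startup
   Memcheck:Leak
   fun:malloc
   obj:*/sbin/nginx
}
```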
Jeff Kaufman b78eb8a939 Merge pull request #567 from pagespeed/oschaaf-304-timeout
system-tests: Test keepalive behaviour after a 304 response
2013-11-21 11:05:18 -08:00
Otto van der Schaaf e082a01912 system-tests: Test keepalive behaviour after a 304 response 2013-11-20 23:15:14 +01:00
Jeff Kaufman fa5815e1e8 Merge pull request #560 from pagespeed/jefftk-fix-messages
messages: unbreak /ngx_pagespeed_messages
2013-11-12 11:51:57 -08:00
Jeff Kaufman f12af2f03b messages: unbreak /ngx_pagespeed_messages
The shared circular buffer wasn't hooked up fully, which meant loading
/ngx_pagespeed_messages didn't work.  This fixes that and adds a test.

I also noticed while adding this that the 'Handling of large files' test
wasn't set up properly, so I converted that to use start_test.

Fixing that exposed another bug where the 'Handling of large files' test
was actually failing but being marked as an expected failure by being
grouped in with the test above.  Adding `pagespeed MaxHtmlParseBytes 5000`
to the appropriate location made it test what it was supposed to be testing
again, and the underlying feature wasn't broken.
2013-11-12 13:11:12 -05:00
Jeff Kaufman e22fae46bc readme: use release 2013-11-08 11:41:53 -05:00
Jeff Kaufman 53a599fbd4 readme: recommend tmpfs for file cache 2013-11-08 11:36:16 -05:00
12 changed files with 932 additions and 249 deletions
+2 -1
@@ -47,6 +47,7 @@ recompiling Tengine](https://github.com/pagespeed/ngx_pagespeed/wiki/Using-ngx_p
3. Download and build nginx:
```bash
$ cd ~
$ # check http://nginx.org/en/download.html for the latest version
$ wget http://nginx.org/download/nginx-1.4.4.tar.gz
$ tar -xvzf nginx-1.4.4.tar.gz
@@ -72,7 +73,7 @@ In your `nginx.conf`, add to the main or server block:
```nginx
pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;
pagespeed FileCachePath /var/ngx_pagespeed_cache; # Use tmpfs for best results.
```
In every server block where pagespeed is enabled add:
+9 -22
@@ -27,8 +27,8 @@ if [ "$mod_pagespeed_dir" = "unset" ] ; then
echo " You need to separately download the pagespeed library:"
echo ""
echo " $ cd /path/to/ngx_pagespeed"
echo " $ wget https://dl.google.com/dl/page-speed/psol/1.7.30.3.tar.gz"
echo " $ tar -xzvf 1.7.30.3.tar.gz # expands to psol/"
echo " $ wget https://dl.google.com/dl/page-speed/psol/1.7.30.2.tar.gz"
echo " $ tar -xzvf 1.7.30.2.tar.gz # expands to psol/"
echo ""
echo " Or see the installation instructions:"
echo " https://github.com/pagespeed/ngx_pagespeed#how-to-build"
@@ -89,9 +89,7 @@ if [ "$uname_arch" = "i686" ]; then
FLAG_MARCH='-march=i686'
fi
# Building with HTTPS fetching enabled pulls in a version of OpenSSL that causes
# linker errors, so disable it here.
CFLAGS="$CFLAGS -DSERF_HTTPS_FETCHING=0 $FLAG_MARCH"
CFLAGS="$CFLAGS $FLAG_MARCH"
case "$NGX_GCC_VER" in
4.8*)
@@ -128,17 +126,10 @@ ngx_feature_path="$pagespeed_include"
if $build_from_source ; then
psol_library_binaries="\
$mod_pagespeed_dir/net/instaweb/automatic/pagespeed_automatic.a \
$mod_pagespeed_dir/out/$buildtype/obj.target/third_party/serf/libserf.a \
$mod_pagespeed_dir/out/$buildtype/obj.target/third_party/aprutil/libaprutil.a \
$mod_pagespeed_dir/out/$buildtype/obj.target/third_party/apr/libapr.a"
$mod_pagespeed_dir/net/instaweb/automatic/pagespeed_automatic.a"
else
psol_library_dir="$ngx_addon_dir/psol/lib/$buildtype/$os_name/$arch_name"
psol_library_binaries="\
$psol_library_dir/pagespeed_automatic.a \
$psol_library_dir/libserf.a \
$psol_library_dir/libaprutil.a \
$psol_library_dir/libapr.a"
psol_library_binaries="$psol_library_dir/pagespeed_automatic.a"
fi
pagespeed_libs="-lstdc++ $psol_library_binaries -lrt -pthread -lm"
@@ -190,13 +181,7 @@ if [ $ngx_found = yes ]; then
$ps_src/ngx_rewrite_driver_factory.cc \
$ps_src/ngx_rewrite_options.cc \
$ps_src/ngx_server_context.cc \
$ps_src/ngx_url_async_fetcher.cc \
$mod_pagespeed_dir/out/$buildtype/obj/gen/data2c_out/instaweb/net/instaweb/apache/install/mod_pagespeed_example/mod_pagespeed_console_out.cc \
$mod_pagespeed_dir/out/$buildtype/obj/gen/data2c_out/instaweb/net/instaweb/apache/install/mod_pagespeed_example/mod_pagespeed_console_css_out.cc \
$mod_pagespeed_dir/out/$buildtype/obj/gen/data2c_out/instaweb/net/instaweb/apache/install/mod_pagespeed_example/mod_pagespeed_console_html_out.cc \
$mod_pagespeed_dir/net/instaweb/system/add_headers_fetcher.cc \
$mod_pagespeed_dir/net/instaweb/system/loopback_route_fetcher.cc \
$mod_pagespeed_dir/net/instaweb/system/serf_url_async_fetcher.cc"
$ps_src/ngx_url_async_fetcher.cc"
# Make pagespeed run immediately before gzip.
HTTP_FILTER_MODULES=$(echo $HTTP_FILTER_MODULES |\
@@ -206,9 +191,11 @@ if [ $ngx_found = yes ]; then
sed "s/$HTTP_GZIP_FILTER_MODULE/ngx_pagespeed_etag_filter $HTTP_GZIP_FILTER_MODULE/")
CORE_LIBS="$CORE_LIBS $pagespeed_libs"
CORE_INCS="$CORE_INCS $pagespeed_include"
echo "List of modules (in reverse order of applicability): "$HTTP_FILTER_MODULES
else
cat << END
$0: error: module ngx_pagespeed requires the pagespeed optimization library
$0: error: module ngx_pagespeed requires the pagespeed optimization library.
Look in obj/autoconf.err for more details.
END
exit 1
fi
+60 -47
@@ -70,7 +70,8 @@ namespace net_instaweb {
fetch_start_ms_(0),
fetch_end_ms_(0),
done_(false),
content_length_(0) {
content_length_(-1),
content_length_known_(false) {
ngx_memzero(&url_, sizeof(url_));
log_ = log;
pool_ = NULL;
@@ -142,19 +143,26 @@ namespace net_instaweb {
// The host is either a domain name or an IP address. First check
// if it's a valid IP address and only if that fails fall back to
// using the DNS resolver.
GoogleString s_ipaddress(reinterpret_cast<char*>(url_.host.data),
url_.host.len);
// Maybe we have a Proxy.
ngx_url_t* tmp_url = &url_;
if (0 != fetcher_->proxy_.url.len) {
tmp_url = &fetcher_->proxy_;
}
GoogleString s_ipaddress(reinterpret_cast<char*>(tmp_url->host.data),
tmp_url->host.len);
ngx_memzero(&sin_, sizeof(sin_));
sin_.sin_family = AF_INET;
sin_.sin_port = htons(url_.port);
sin_.sin_port = htons(tmp_url->port);
sin_.sin_addr.s_addr = inet_addr(s_ipaddress.c_str());
if (sin_.sin_addr.s_addr == INADDR_NONE) {
// inet_addr returned INADDR_NONE, which means the hostname
// isn't a valid IP address. Check DNS.
ngx_resolver_ctx_t temp;
temp.name.data = url_.host.data;
temp.name.len = url_.host.len;
temp.name.data = tmp_url->host.data;
temp.name.len = tmp_url->host.len;
resolver_ctx_ = ngx_resolve_start(fetcher_->resolver_, &temp);
if (resolver_ctx_ == NULL || resolver_ctx_ == NGX_NO_RESOLVER) {
// TODO(oschaaf): this spams the log, but is useful in the fetcher's
@@ -166,8 +174,8 @@ namespace net_instaweb {
}
resolver_ctx_->data = this;
resolver_ctx_->name.data = url_.host.data;
resolver_ctx_->name.len = url_.host.len;
resolver_ctx_->name.data = tmp_url->host.data;
resolver_ctx_->name.len = tmp_url->host.len;
#if (nginx_version < 1005008)
resolver_ctx_->type = NGX_RESOLVE_A;
@@ -263,37 +271,14 @@ namespace net_instaweb {
return false;
}
str_url_.copy(reinterpret_cast<char*>(url_.url.data), str_url_.length(), 0);
size_t scheme_offset;
u_short port;
if (ngx_strncasecmp(url_.url.data, reinterpret_cast<u_char*>(
const_cast<char*>("http://")), 7) == 0) {
scheme_offset = 7;
port = 80;
} else if (ngx_strncasecmp(url_.url.data, reinterpret_cast<u_char*>(
const_cast<char*>("https://")), 8) == 0) {
scheme_offset = 8;
port = 443;
} else {
scheme_offset = 0;
port = 80;
}
url_.url.data += scheme_offset;
url_.url.len -= scheme_offset;
url_.default_port = port;
// See: http://lxr.evanmiller.org/http/source/core/ngx_inet.c#L875
url_.no_resolve = 0;
url_.uri_part = 1;
if (ngx_parse_url(pool_, &url_) == NGX_OK) {
return true;
}
return false;
return NgxUrlAsyncFetcher::ParseUrl(&url_, pool_);
}
// Issue a request after the resolver is done
void NgxFetch::NgxFetchResolveDone(ngx_resolver_ctx_t* resolver_ctx) {
NgxFetch* fetch = static_cast<NgxFetch*>(resolver_ctx->data);
NgxUrlAsyncFetcher* fetcher = fetch->fetcher_;
if (resolver_ctx->state != NGX_OK) {
if (fetch->timeout_event() != NULL && fetch->timeout_event()->timer_set) {
ngx_del_timer(fetch->timeout_event());
@@ -322,6 +307,11 @@ namespace net_instaweb {
fetch->sin_.sin_family = AF_INET;
fetch->sin_.sin_port = htons(fetch->url_.port);
// Maybe we have Proxy
if (0 != fetcher->proxy_.url.len) {
fetch->sin_.sin_port = htons(fetcher->proxy_.port);
}
char* ip_address = inet_ntoa(fetch->sin_.sin_addr);
fetch->message_handler()->Message(
@@ -352,7 +342,13 @@ namespace net_instaweb {
bool have_host = false;
GoogleString port;
size = sizeof("GET ") - 1 + url_.uri.len + sizeof(" HTTP/1.0\r\n") - 1;
const char* method = request_headers->method_string();
size_t method_len = strlen(method);
size = (method_len +
1 /* for the space */ +
url_.uri.len +
sizeof(" HTTP/1.0\r\n") - 1);
for (int i = 0; i < request_headers->NumAttributes(); i++) {
// if no explicit host header is given in the request headers,
@@ -380,7 +376,8 @@ namespace net_instaweb {
return NGX_ERROR;
}
out_->last = ngx_cpymem(out_->last, "GET ", 4);
out_->last = ngx_cpymem(out_->last, method, method_len);
out_->last = ngx_cpymem(out_->last, " ", 1);
out_->last = ngx_cpymem(out_->last, url_.uri.data, url_.uri.len);
out_->last = ngx_cpymem(out_->last, " HTTP/1.0\r\n", 11);
@@ -488,9 +485,16 @@ namespace net_instaweb {
}
if (n == 0) {
// connection is closed prematurely by remote server,
// or the content-length was 0
fetch->CallbackDone(fetch->content_length_ == 0);
// If the content length was not known, we assume that we have read
// all if we at least parsed the headers.
// If we do know the content length, having a mismatch on the bytes read
// will be interpreted as an error.
if (fetch->content_length_known_) {
fetch->CallbackDone(fetch->content_length_ == fetch->bytes_received_);
} else {
fetch->CallbackDone(fetch->parser_.headers_complete());
}
return;
} else if (n > 0) {
fetch->in_->pos = fetch->in_->start;
@@ -557,13 +561,21 @@ namespace net_instaweb {
if (n > size) {
return false;
} else if (fetch->parser_.headers_complete()) {
int64 content_length = -1;
fetch->async_fetch_->response_headers()->FindContentLength(
&content_length);
fetch->content_length_ = content_length;
if (fetch->fetcher_->track_original_content_length()) {
if (fetch->async_fetch_->response_headers()->FindContentLength(
&fetch->content_length_)) {
if (fetch->content_length_ < 0) {
fetch->message_handler_->Message(
kError, "Negative content-length in response header");
return false;
} else {
fetch->content_length_known_ = true;
}
}
if (fetch->fetcher_->track_original_content_length()
&& fetch->content_length_known_) {
fetch->async_fetch_->response_headers()->SetOriginalContentLength(
content_length);
fetch->content_length_);
}
fetch->in_->pos += n;
@@ -582,11 +594,12 @@ namespace net_instaweb {
return true;
}
fetch->bytes_received_add(static_cast<int64>(size));
fetch->bytes_received_add(size);
if (fetch->async_fetch_->Write(StringPiece(data, size),
fetch->message_handler())) {
fetch->content_length_ -= size;
if (fetch->content_length_ <= 0) {
if (fetch->content_length_known_ &&
fetch->bytes_received_ == fetch->content_length_) {
fetch->done_ = true;
}
return true;
+2 -1
@@ -121,12 +121,13 @@ class NgxFetch : public PoolElement<NgxFetch> {
AsyncFetch* async_fetch_;
ResponseHeadersParser parser_;
MessageHandler* message_handler_;
size_t bytes_received_;
int64 bytes_received_;
int64 fetch_start_ms_;
int64 fetch_end_ms_;
int64 timeout_ms_;
bool done_;
int64 content_length_;
bool content_length_known_;
struct sockaddr_in sin_;
ngx_log_t* log_;
+5 -6
@@ -18,13 +18,13 @@
#include <signal.h>
#include "apr_time.h"
#include "net/instaweb/util/public/abstract_mutex.h"
#include "net/instaweb/util/public/debug.h"
#include "net/instaweb/util/public/shared_circular_buffer.h"
#include "net/instaweb/util/public/string_util.h"
#include "net/instaweb/public/version.h"
#include "pagespeed/kernel/base/posix_timer.h"
#include "pagespeed/kernel/base/time_util.h"
namespace {
@@ -118,10 +118,9 @@ void NgxMessageHandler::MessageVImpl(MessageType type, const char* msg,
// Prepend time and severity to message.
// Format is [time] [severity] [pid] message.
GoogleString message;
char time_buffer[APR_CTIME_LEN + 1];
const char* time = time_buffer;
apr_status_t status = apr_ctime(time_buffer, apr_time_now());
if (status != APR_SUCCESS) {
GoogleString time;
PosixTimer timer;
if (!ConvertTimeToString(timer.NowMs(), &time)) {
time = "?";
}
StrAppend(&message, "[", time, "] ",
+41 -46
@@ -69,6 +69,7 @@
#include "net/instaweb/util/public/string_writer.h"
#include "net/instaweb/util/public/time_util.h"
#include "net/instaweb/util/stack_buffer.h"
#include "pagespeed/kernel/base/posix_timer.h"
#include "pagespeed/kernel/thread/pthread_shared_mem.h"
#include "pagespeed/kernel/html/html_keywords.h"
@@ -244,6 +245,13 @@ void copy_response_headers_from_ngx(const ngx_http_request_t* r,
headers->Add(HttpAttributes::kContentType,
str_to_string_piece(r->headers_out.content_type));
// When we don't have a date header, invent one.
const char* date = headers->Lookup1(HttpAttributes::kDate);
if (date == NULL) {
headers->SetDate(ngx_current_msec);
}
// TODO(oschaaf): ComputeCaching should be called in setupforhtml()?
headers->ComputeCaching();
}
@@ -476,53 +484,25 @@ enum OptionLevel {
// we end up needing to compare.
// TODO(oschaaf): this duplication is a short term solution.
const char* const global_only_options[] = {
"BlockingRewriteKey",
"CacheFlushFilename",
"CacheFlushPollIntervalSec",
"DangerPermitFetchFromUnknownHosts",
"CriticalImagesBeaconEnabled",
"ExperimentalFetchFromModSpdy",
"FetcherTimeoutMs",
"FetchHttps",
"FetchWithGzip",
"FileCacheCleanIntervalMs",
"FileCacheInodeLimit",
"FileCachePath",
"FileCacheSizeKb",
"FetchProxy",
"ForceCaching",
"ImageMaxRewritesAtOnce",
"GeneratedFilePrefix",
"ImgMaxRewritesAtOnce",
"InheritVHostConfig",
"InstallCrashHandler",
"LRUCacheByteLimit",
"LRUCacheKbPerProcess",
"MaxCacheableContentLength",
"MemcachedServers",
"MemcachedThreads",
"MemcachedTimeoutUs",
"MessageBufferSize",
"NumRewriteThreads",
"NumExpensiveRewriteThreads",
"RateLimitBackgroundFetches",
"ReportUnloadTime",
"RespectXForwardedProto",
"SharedMemoryLocks",
"SlurpDirectory",
"SlurpFlushLimit",
"SlurpReadOnly",
"SupportNoScriptEnabled",
"StatisticsLoggingChartsCSS",
"StatisticsLoggingChartsJS",
"TestProxy",
"TestProxySlurp",
"TrackOriginalContentLength",
"UsePerVHostStatistics",
"XHeaderValue",
"UsePerVHostStatistics", // TODO(anupama): What to do about "No longer used"
"BlockingRewriteRefererUrls",
"CreateSharedMemoryMetadataCache",
"LoadFromFile",
"LoadFromFileMatch",
"LoadFromFileRule",
"LoadFromFileRuleMatch",
"UseNativeFetcher"
"UseNativeFetcher" // TODO(anupama): Ask Jeff about this one.
};
bool ps_is_global_only_option(const StringPiece& option_name) {
@@ -1296,8 +1276,9 @@ bool ps_set_experiment_state_and_cookie(ngx_http_request_t* r,
bool need_cookie = cfg_s->server_context->experiment_matcher()->
ClassifyIntoExperiment(*request_headers, options);
if (need_cookie && host.length() > 0) {
int64 time_now_us = apr_time_now();
int64 expiration_time_ms = (time_now_us/1000 +
PosixTimer timer;
int64 time_now_ms = timer.NowMs();
int64 expiration_time_ms = (time_now_ms +
options->experiment_cookie_duration_ms());
// TODO(jefftk): refactor SetExperimentCookie to expose the value we want to
@@ -1341,7 +1322,8 @@ bool ps_determine_options(ngx_http_request_t* r,
RequestHeaders* request_headers,
ResponseHeaders* response_headers,
RewriteOptions** options,
GoogleUrl* url) {
GoogleUrl* url,
bool html_rewrite) {
ps_srv_conf_t* cfg_s = ps_get_srv_config(r);
ps_loc_conf_t* cfg_l = ps_get_loc_config(r);
@@ -1379,7 +1361,7 @@ bool ps_determine_options(ngx_http_request_t* r,
if (request_options != NULL) {
(*options)->Merge(*request_options);
delete request_options;
} else if ((*options)->running_experiment()) {
} else if ((*options)->running_experiment() && html_rewrite) {
bool ok = ps_set_experiment_state_and_cookie(
r, request_headers, *options, url->Host());
if (!ok) {
@@ -1634,7 +1616,7 @@ ngx_int_t ps_resource_handler(ngx_http_request_t* r, bool html_rewrite) {
RewriteOptions* options = NULL;
if (!ps_determine_options(r, request_headers.get(), response_headers.get(),
&options, &url)) {
&options, &url, html_rewrite)) {
return NGX_ERROR;
}
@@ -1653,7 +1635,7 @@ ngx_int_t ps_resource_handler(ngx_http_request_t* r, bool html_rewrite) {
// parameters. Keep url_string in sync with url.
url.Spec().CopyToString(&url_string);
if (options->respect_x_forwarded_proto()) {
if (cfg_s->server_context->global_options()->respect_x_forwarded_proto()) {
bool modified_url = ps_apply_x_forwarded_proto(r, &url_string);
if (modified_url) {
url.Reset(url_string);
@@ -1677,12 +1659,25 @@ ngx_int_t ps_resource_handler(ngx_http_request_t* r, bool html_rewrite) {
ctx->in_place = false;
ctx->pagespeed_connection = NULL;
// See build_context_for_request() in mod_instaweb.cc
// TODO(jefftk): Is this the right place to be modifying caching headers for
// html fetches? Or should that be done later, in the headers flow for
// filter mode, rather than here in resource fetch mode?
if (!options->modify_caching_headers()) {
ctx->preserve_caching_headers = kPreserveAllCachingHeaders;
} else if (!options->downstream_cache_purge_location_prefix().empty()) {
ctx->preserve_caching_headers = kPreserveOnlyCacheControl;
} else {
} else if (!options->IsDownstreamCacheIntegrationEnabled()) {
// Downstream cache integration is not enabled. Disable original
// Cache-Control headers.
ctx->preserve_caching_headers = kDontPreserveHeaders;
} else {
ctx->preserve_caching_headers = kPreserveOnlyCacheControl;
// Downstream cache integration is enabled. If a rebeaconing key has been
// configured and there is a ShouldBeacon header with the correct key,
// disable original Cache-Control headers so that the instrumented page is
// served out with no-cache.
StringPiece should_beacon(request_headers->Lookup1(kPsaShouldBeacon));
if (options->MatchesDownstreamCacheRebeaconingKey(should_beacon)) {
ctx->preserve_caching_headers = kDontPreserveHeaders;
}
}
ctx->recorder = NULL;
@@ -2159,7 +2154,8 @@ ngx_int_t ps_in_place_check_header_filter(ngx_http_request_t* r) {
return ngx_http_next_header_filter(r);
}
if (status_code == CacheUrlAsyncFetcher::kNotInCacheStatus) {
if (status_code == CacheUrlAsyncFetcher::kNotInCacheStatus &&
!r->header_only) {
server_context->rewrite_stats()->ipro_not_in_cache()->Add(1);
server_context->message_handler()->Message(
kInfo,
@@ -2392,8 +2388,7 @@ ngx_int_t ps_simple_handler(ngx_http_request_t* r,
char* cache_control_s = string_piece_to_pool_string(r->pool, cache_control);
if (cache_control_s != NULL) {
if (FindIgnoreCase(cache_control, "private") ==
static_cast<int>(StringPiece::npos)) {
if (FindIgnoreCase(cache_control, "private") == StringPiece::npos) {
response_headers.Add(HttpAttributes::kEtag, "W/\"0\"");
}
}
+2
@@ -218,6 +218,8 @@ void NgxRewriteDriverFactory::LoggingInit(ngx_log_t* log) {
void NgxRewriteDriverFactory::SetCircularBuffer(
SharedCircularBuffer* buffer) {
ngx_shared_circular_buffer_ = buffer;
ngx_message_handler_->set_buffer(buffer);
ngx_html_parse_message_handler_->set_buffer(buffer);
}
void NgxRewriteDriverFactory::SetServerContextMessageHandler(
+36 -6
View File
@@ -66,10 +66,10 @@ namespace net_instaweb {
mutex_(NULL) {
resolver_timeout_ = resolver_timeout;
fetch_timeout_ = fetch_timeout;
ngx_memzero(&url_, sizeof(url_));
ngx_memzero(&proxy_, sizeof(proxy_));
if (proxy != NULL && *proxy != '\0') {
url_.url.data = reinterpret_cast<u_char*>(const_cast<char*>(proxy));
url_.url.len = ngx_strlen(proxy);
proxy_.url.data = reinterpret_cast<u_char*>(const_cast<char*>(proxy));
proxy_.url.len = ngx_strlen(proxy);
}
mutex_ = thread_system_->NewMutex();
log_ = log;
@@ -106,6 +106,36 @@ namespace net_instaweb {
}
}
bool NgxUrlAsyncFetcher::ParseUrl(ngx_url_t* url, ngx_pool_t* pool) {
size_t scheme_offset;
u_short port;
if (ngx_strncasecmp(url->url.data, reinterpret_cast<u_char*>(
const_cast<char*>("http://")), 7) == 0) {
scheme_offset = 7;
port = 80;
} else if (ngx_strncasecmp(url->url.data, reinterpret_cast<u_char*>(
const_cast<char*>("https://")), 8) == 0) {
scheme_offset = 8;
port = 443;
} else {
scheme_offset = 0;
port = 80;
}
url->url.data += scheme_offset;
url->url.len -= scheme_offset;
url->default_port = port;
// See: http://lxr.evanmiller.org/http/source/core/ngx_inet.c#L875
url->no_resolve = 0;
url->uri_part = 1;
if (ngx_parse_url(pool, url) == NGX_OK) {
return true;
}
return false;
}
// If there are still active requests, cancel them.
void NgxUrlAsyncFetcher::CancelActiveFetches() {
// TODO(oschaaf): this seems tricky, this may end up calling
@@ -167,15 +197,15 @@ namespace net_instaweb {
command_connection_->read->handler = CommandHandler;
ngx_add_event(command_connection_->read, NGX_READ_EVENT, 0);
if (url_.url.len == 0) {
if (proxy_.url.len == 0) {
return true;
}
// TODO(oschaaf): shouldn't we do this earlier? Do we need to clean
// up when parsing the url failed?
if (ngx_parse_url(pool_, &url_) != NGX_OK) {
if (!ParseUrl(&proxy_, pool_)) {
ngx_log_error(NGX_LOG_ERR, log_, 0,
"NgxUrlAsyncFetcher::Init parse proxy[%V] failed", &url_.url);
"NgxUrlAsyncFetcher::Init parse proxy[%V] failed", &proxy_.url);
return false;
}
return true;
+2 -1
View File
@@ -115,13 +115,14 @@ class NgxUrlAsyncFetcher : public UrlAsyncFetcher {
private:
static void TimeoutHandler(ngx_event_t* tev);
static bool ParseUrl(ngx_url_t* url, ngx_pool_t* pool);
friend class NgxFetch;
NgxFetchPool active_fetches_;
// Add the pending task to this list
NgxFetchPool pending_fetches_;
NgxFetchPool completed_fetches_;
ngx_url_t url_;
ngx_url_t proxy_;
int fetchers_count_;
bool shutdown_;
+397 -73
@@ -133,10 +133,8 @@ VALGRIND_OPTIONS=""
if $USE_VALGRIND; then
DAEMON=off
MASTER_PROCESS=off
else
DAEMON=on
MASTER_PROCESS=on
fi
if [ "$NATIVE_FETCHER" = "on" ]; then
@@ -157,7 +155,6 @@ by nginx_system_test.sh; don't edit here."
EOF
cat $PAGESPEED_CONF_TEMPLATE \
| sed 's#@@DAEMON@@#'"$DAEMON"'#' \
| sed 's#@@MASTER_PROCESS@@#'"$MASTER_PROCESS"'#' \
| sed 's#@@TEST_TMP@@#'"$TEST_TMP/"'#' \
| sed 's#@@PROXY_CACHE@@#'"$PROXY_CACHE/"'#' \
| sed 's#@@TMP_PROXY_CACHE@@#'"$TMP_PROXY_CACHE/"'#' \
@@ -177,9 +174,16 @@ check_not_simple grep @@ $PAGESPEED_CONF
# start nginx with new config
if $USE_VALGRIND; then
echo "Run this command in another terminal and then press enter:"
echo " valgrind --leak-check=full $NGINX_EXECUTABLE -c $PAGESPEED_CONF"
read
(valgrind -q --leak-check=full --gen-suppressions=all \
--show-possibly-lost=no --log-file=$TEST_TMP/valgrind.log \
--suppressions="$this_dir/valgrind.sup" \
$NGINX_EXECUTABLE -c $PAGESPEED_CONF) & VALGRIND_PID=$!
trap "echo 'terminating valgrind!' && kill -s sigterm $VALGRIND_PID" EXIT
echo "Wait until nginx is ready to accept connections"
while ! curl -I "http://$PRIMARY_HOSTNAME/mod_pagespeed_example/" 2>/dev/null; do
sleep 0.1;
done
echo "Valgrind (pid:$VALGRIND_PID) is logging to $TEST_TMP/valgrind.log"
else
TRACE_FILE="$TEST_TMP/conf_loading_trace"
$NGINX_EXECUTABLE -c $PAGESPEED_CONF >& "$TRACE_FILE"
@@ -196,9 +200,45 @@ else
fi
fi
# Helper methods used by downstream caching tests.
# Helper method that does a wget and verifies that the rewriting status matches
# the $1 argument that is passed to this method.
check_rewriting_status() {
$WGET $WGET_ARGS $CACHABLE_HTML_LOC > $OUT_CONTENTS_FILE
if $1; then
check zgrep -q "pagespeed.ic" $OUT_CONTENTS_FILE
else
check_not zgrep -q "pagespeed.ic" $OUT_CONTENTS_FILE
fi
# Reset WGET_ARGS.
WGET_ARGS=""
}
# Helper method that obtains a gzipped response and verifies that rewriting
# has happened. Also takes an extra parameter that identifies extra headers
# to be added during wget.
check_for_rewriting() {
WGET_ARGS="$GZIP_WGET_ARGS $1"
check_rewriting_status true
}
# Helper method that obtains a gzipped response and verifies that no rewriting
# has happened. Also takes an extra parameter that identifies extra headers
# to be added during wget.
check_for_no_rewriting() {
WGET_ARGS="$GZIP_WGET_ARGS $1"
check_rewriting_status false
}
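The boolean-dispatch pattern used by check_rewriting_status above can be sketched in isolation. This is a minimal, self-contained sketch with stub "check"/"check_not" helpers standing in for the real system-test framework; all names and the sample file content here are hypothetical.

```shell
# Stub helpers: the real framework records failures instead of echoing.
check()     { "$@" && echo PASS || echo FAIL; }
check_not() { "$@" && echo FAIL || echo PASS; }
verify_rewriting() {  # $1: expected rewriting status (true/false), $2: fetched file
  if $1; then
    check grep -q "pagespeed.ic" "$2"
  else
    check_not grep -q "pagespeed.ic" "$2"
  fi
}
OUT=$(mktemp)
echo 'src="x.css.pagespeed.ic.H.css"' > "$OUT"
verify_rewriting true "$OUT"    # PASS: rewritten URL present
verify_rewriting false "$OUT"   # FAIL: we expected no rewriting
```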
if $RUN_TESTS; then
echo "Starting tests"
else
if $USE_VALGRIND; then
# Clear valgrind trap
trap - EXIT
echo "To end valgrind, run 'kill -s quit $VALGRIND_PID'"
fi
echo "Not running tests; commence manual testing"
exit 4
fi
@@ -219,6 +259,18 @@ PAGESPEED_EXPECTED_FAILURES="
~IPRO-optimized resources should have fixed size, not chunked.~
"
# Some tests are flaky under valgrind. For now, add them to the expected failures
# when running under valgrind.
if $USE_VALGRIND; then
PAGESPEED_EXPECTED_FAILURES+="
~combine_css Maximum size of combined CSS.~
~prioritize_critical_css~
~IPRO flow uses cache as expected.~
~IPRO flow doesn't copy uncacheable resources multiple times.~
~inline_unauthorized_resources allows unauthorized css selectors~
"
fi
# The existing system test takes its arguments as positional parameters, and
# wants different ones than we want, so we need to reset our positional args.
set -- "$PRIMARY_HOSTNAME"
@@ -252,48 +304,79 @@ function run_post_cache_flush() {
# nginx-specific system tests
# Tests related to rewritten response (downstream) caching.
CACHABLE_HTML_LOC="${SECONDARY_HOSTNAME}/mod_pagespeed_test/cachable_rewritten_html"
TMP_LOG_LINE="proxy_cache.example.com GET /purge/mod_pagespeed_test/cachable_rewritten_"
PURGE_REQUEST_IN_ACCESS_LOG=$TMP_LOG_LINE"html/downstream_caching.html.*(200)"
# Number of downstream cache purges should be 0 here.
CURRENT_STATS=$($WGET_DUMP $STATISTICS_URL)
check_from "$CURRENT_STATS" egrep -q \
"downstream_cache_purge_attempts:[[:space:]]*0"
if [ "$NATIVE_FETCHER" = "on" ]; then
echo "Native fetcher doesn't support PURGE requests and so we can't use or"
echo "test downstream caching."
else
CACHABLE_HTML_LOC="http://${SECONDARY_HOSTNAME}/mod_pagespeed_test/cachable_rewritten_html"
CACHABLE_HTML_LOC+="/downstream_caching.html"
TMP_LOG_LINE="proxy_cache.example.com GET /purge/mod_pagespeed_test/cachable_rewritten_"
PURGE_REQUEST_IN_ACCESS_LOG=$TMP_LOG_LINE"html/downstream_caching.html.*(200)"
# The 1st request results in a cache miss, non-rewritten response
# produced by pagespeed code and a subsequent purge request.
start_test Check for case where rewritten cache should get purged.
WGET_ARGS="--header=Host:proxy_cache.example.com"
OUT=$($WGET_DUMP $WGET_ARGS $CACHABLE_HTML_LOC/downstream_caching.html)
check_not_from "$OUT" egrep -q "pagespeed.ic"
check_from "$OUT" egrep -q "X-Cache: MISS"
fetch_until $STATISTICS_URL \
'grep -c downstream_cache_purge_attempts:[[:space:]]*1' 1
check [ $(grep -ce "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG) = 1 ];
OUT_CONTENTS_FILE="$OUTDIR/gzipped.html"
OUT_HEADERS_FILE="$OUTDIR/headers.html"
GZIP_WGET_ARGS="-q -S --header=Accept-Encoding:gzip -o $OUT_HEADERS_FILE -O - "
# The 2nd request results in a cache miss (because of the previous purge),
# rewritten response produced by pagespeed code and no new purge requests.
start_test Check for case where rewritten cache should not get purged.
BLOCKING_WGET_ARGS=$WGET_ARGS" --header=X-PSA-Blocking-Rewrite:psatest"
OUT=$($WGET_DUMP $BLOCKING_WGET_ARGS $CACHABLE_HTML_LOC/downstream_caching.html)
check_from "$OUT" egrep -q "pagespeed.ic"
check_from "$OUT" egrep -q "X-Cache: MISS"
CURRENT_STATS=$($WGET_DUMP $STATISTICS_URL)
check_from "$CURRENT_STATS" egrep -q \
"downstream_cache_purge_attempts:[[:space:]]*1"
check [ $(grep -ce "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG) = 1 ];
# Number of downstream cache purges should be 0 here.
CURRENT_STATS=$($WGET_DUMP $STATISTICS_URL)
check_from "$CURRENT_STATS" egrep -q \
"downstream_cache_purge_attempts:[[:space:]]*0"
# The 3rd request results in a cache hit (because the previous response is
# now present in cache), rewritten response served out from cache and not
# by pagespeed code and no new purge requests.
start_test Check for case where there is a rewritten cache hit.
OUT=$($WGET_DUMP $WGET_ARGS $CACHABLE_HTML_LOC/downstream_caching.html)
check_from "$OUT" egrep -q "pagespeed.ic"
check_from "$OUT" egrep -q "X-Cache: HIT"
fetch_until $STATISTICS_URL \
'grep -c downstream_cache_purge_attempts:[[:space:]]*1' 1
check [ $(grep -ce "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG) = 1 ];
# The 1st request results in a cache miss, non-rewritten response
# produced by pagespeed code and a subsequent purge request.
# Because of the random bypassing of the cache (required for beaconing
# integration), this request could result in a BYPASS as well.
start_test Check for case where rewritten cache should get purged.
check_for_no_rewriting "--header=Host:proxy_cache.example.com"
check egrep -q "X-Cache: MISS|BYPASS" $OUT_HEADERS_FILE
fetch_until $STATISTICS_URL \
'grep -c downstream_cache_purge_attempts:[[:space:]]*1' 1
while [ x"$(grep "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG)" == x"" ] ; do
echo "waiting for purge request to show up in access log"
sleep .2
done
check [ $(grep -ce "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG) = 1 ];
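The wait-for-log-line loop above can be sketched on its own: poll a file until an expected entry appears. In this self-contained sketch a background writer simulates the access log; file names and the log line are hypothetical.

```shell
ACCESS_LOG=$(mktemp)
# Simulate the purge request landing in the access log after a short delay.
( sleep 0.3; echo "GET /purge/downstream_caching.html (200)" >> "$ACCESS_LOG" ) &
while ! grep -q "/purge/" "$ACCESS_LOG"; do
  sleep 0.1   # the real test sleeps .2s between polls
done
echo "purge request logged"
```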
# The 2nd request results in a cache miss (because of the previous purge),
# rewritten response produced by pagespeed code and no new purge requests.
# Because of the random bypassing of the cache (required for beaconing
# integration), this request could result in a BYPASS as well.
start_test Check for case where rewritten cache should not get purged.
check_for_rewriting "--header=Host:proxy_cache.example.com \
--header=X-PSA-Blocking-Rewrite:psatest"
check egrep -q "X-Cache: MISS|BYPASS" $OUT_HEADERS_FILE
CURRENT_STATS=$($WGET_DUMP $STATISTICS_URL)
check_from "$CURRENT_STATS" egrep -q \
"downstream_cache_purge_attempts:[[:space:]]*1"
check [ $(grep -ce "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG) = 1 ];
# The 3rd request results in a cache hit (because the previous response is
# now present in cache), rewritten response served out from cache and not
# by pagespeed code and no new purge requests.
start_test Check for case where there is a rewritten cache hit.
check_for_rewriting "--header=Host:proxy_cache.example.com"
check egrep -q "X-Cache: HIT" $OUT_HEADERS_FILE
fetch_until $STATISTICS_URL \
'grep -c downstream_cache_purge_attempts:[[:space:]]*1' 1
check [ $(grep -ce "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG) = 1 ];
# Enable one of the beaconing dependent filters and verify interaction
# between beaconing and downstream caching logic, by verifying that
# whenever beaconing code is present in the rewritten page, the
# output is also marked as a cache-miss, indicating that the instrumentation
# was done by the backend.
start_test Check whether beaconing is accompanied by a BYPASS always.
WGET_ARGS="-S --header=Host:proxy_cache.example.com"
CACHABLE_HTML_LOC+="?PageSpeedFilters=lazyload_images"
fetch_until -gzip $CACHABLE_HTML_LOC \
"zgrep -c \"pagespeed\.CriticalImages\.Run\"" 1
check egrep -q 'X-Cache: BYPASS' $WGET_OUTPUT
check fgrep -q 'Cache-Control: no-cache, max-age=0' $WGET_OUTPUT
fi
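The rebeaconing-key gate exercised by the tests above can be sketched as a simple comparison: the page is instrumented (and served no-cache) only when the incoming PS-ShouldBeacon header matches the configured key. The function name is hypothetical; the key and header values are the ones used in the tests.

```shell
CONFIGURED_KEY="random_rebeaconing_key"
response_cache_control() {  # $1: PS-ShouldBeacon header value
  if [ "$1" = "$CONFIGURED_KEY" ]; then
    echo "max-age=0, no-cache"     # instrumented page: force downstream miss
  else
    echo "private, max-age=3000"   # original cache headers preserved
  fi
}
response_cache_control "random_rebeaconing_key"
response_cache_control "wrong_rebeaconing_key"
```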
start_test Check for correct default X-Page-Speed header format.
OUT=$($WGET_DUMP $EXAMPLE_ROOT/combine_css.html)
@@ -365,7 +448,6 @@ check_from "$OUT" grep "$EXPECTED_EXAMPLES_TEXT"
# And also with bad request headers.
OUT=$(wget -O - --header=PageSpeedFilters:bogus $EXAMPLE_ROOT)
echo $OUT
check_from "$OUT" grep "$EXPECTED_EXAMPLES_TEXT"
# Test that loopback route fetcher works with vhosts not listening on
@@ -499,6 +581,18 @@ sleep .1
OUT=$($FETCH_CMD)
check_not_from "$OUT" fgrep "<style>"
# Tests that we get instant ipro rewrites with LoadFromFile and that
# InPlaceWaitForOptimized gets us first-pass rewrites.
start_test instant ipro with InPlaceWaitForOptimized and LoadFromFile
echo $WGET_DUMP $TEST_ROOT/ipro/instant/wait/purple.css
OUT=$($WGET_DUMP $TEST_ROOT/ipro/instant/wait/purple.css)
check_from "$OUT" fgrep -q 'body{background:#9370db}'
start_test instant ipro with ModPagespeedInPlaceRewriteDeadline and LoadFromFile
echo $WGET_DUMP $TEST_ROOT/ipro/instant/deadline/purple.css
OUT=$($WGET_DUMP $TEST_ROOT/ipro/instant/deadline/purple.css)
check_from "$OUT" fgrep -q 'body{background:#9370db}'
# If DisableRewriteOnNoTransform is turned off, verify that the rewriting
# applies even if Cache-control: no-transform is set.
start_test rewrite on Cache-control: no-transform
@@ -588,6 +682,59 @@ URL+="PageSpeedFilters=combine_javascript"
fetch_until $URL 'grep -c src=' 1
test_filter inline_javascript inlines a small JS file
start_test no inlining of unauthorized resources
URL="$TEST_ROOT/unauthorized/inline_unauthorized_javascript.html?\
PageSpeedFilters=inline_javascript,debug"
OUTFILE=$OUTDIR/blocking_rewrite.out.html
$WGET_DUMP --header 'X-PSA-Blocking-Rewrite: psatest' $URL > $OUTFILE
check egrep -q 'script[[:space:]]src=' $OUTFILE
EXPECTED_COMMENT_LINE="<!--InlineJs: Cannot create resource: either its \
domain is unauthorized and InlineUnauthorizedResources is not enabled, \
or it cannot be fetched (check the server logs)-->"
check grep -q "$EXPECTED_COMMENT_LINE" $OUTFILE
start_test inline_unauthorized_resources allows inlining
HOST_NAME="http://unauthorizedresources.example.com"
URL="$HOST_NAME/mod_pagespeed_test/unauthorized/"
URL+="inline_unauthorized_javascript.html?PageSpeedFilters=inline_javascript"
http_proxy=$SECONDARY_HOSTNAME \
fetch_until $URL 'grep -c script[[:space:]]src=' 0
start_test inline_unauthorized_resources does not allow rewriting
URL="$HOST_NAME/mod_pagespeed_test/unauthorized/"
URL+="inline_unauthorized_javascript.html?PageSpeedFilters=rewrite_javascript"
OUTFILE=$OUTDIR/blocking_rewrite.out.html
http_proxy=$SECONDARY_HOSTNAME \
$WGET_DUMP --header 'X-PSA-Blocking-Rewrite: psatest' $URL > $OUTFILE
check egrep -q 'script[[:space:]]src=' $OUTFILE
test_filter inline_css inlines a small CSS file
start_test no inlining of unauthorized resources.
URL="$TEST_ROOT/unauthorized/inline_css.html?\
PageSpeedFilters=inline_css,debug"
OUTFILE=$OUTDIR/blocking_rewrite.out.html
$WGET_DUMP --header 'X-PSA-Blocking-Rewrite: psatest' $URL > $OUTFILE
check egrep -q 'link[[:space:]]rel=' $OUTFILE
EXPECTED_COMMENT_LINE="<!--InlineCss: Cannot create resource: either its \
domain is unauthorized and InlineUnauthorizedResources is not enabled, \
or it cannot be fetched (check the server logs)-->"
check grep -q "$EXPECTED_COMMENT_LINE" $OUTFILE
start_test inline_unauthorized_resources allows inlining
HOST_NAME="http://unauthorizedresources.example.com"
URL="$HOST_NAME/mod_pagespeed_test/unauthorized/"
URL+="inline_css.html?PageSpeedFilters=inline_css"
http_proxy=$SECONDARY_HOSTNAME \
fetch_until $URL 'grep -c link[[:space:]]rel=' 0
start_test inline_unauthorized_resources does not allow rewriting
URL="$HOST_NAME/mod_pagespeed_test/unauthorized/"
URL+="inline_css.html?PageSpeedFilters=rewrite_css"
OUTFILE=$OUTDIR/blocking_rewrite.out.html
http_proxy=$SECONDARY_HOSTNAME \
$WGET_DUMP --header 'X-PSA-Blocking-Rewrite: psatest' $URL > $OUTFILE
check egrep -q 'link[[:space:]]rel=' $OUTFILE
start_test aris disables js inlining for introspective js and only i-js
URL="$TEST_ROOT/avoid_renaming_introspective_javascript__on/"
URL+="?PageSpeedFilters=inline_javascript"
@@ -735,6 +882,35 @@ echo Rewrite HTML with reference to a proxyable image.
fetch_until -save -recursive $URL?PageSpeedFilters=-inline_images \
'grep -c 1.gif.pagespeed' 1
start_test OptimizeForBandwidth
# We use blocking-rewrite tests because we want to make sure we don't
# get rewritten URLs when we don't want them.
function test_optimize_for_bandwidth() {
SECONDARY_HOST="optimizeforbandwidth.example.com"
OUT=$(http_proxy=$SECONDARY_HOSTNAME \
$WGET -q -O - --header=X-PSA-Blocking-Rewrite:psatest \
$SECONDARY_HOST/mod_pagespeed_test/optimize_for_bandwidth/$1)
check_from "$OUT" grep -q "$2"
if [ "$#" -ge 3 ]; then
check_from "$OUT" grep -q "$3"
fi
}
test_optimize_for_bandwidth rewrite_css.html \
'.blue{foreground-color:blue}body{background:url(arrow.png)}' \
'<link rel="stylesheet" type="text/css" href="yellow.css">'
test_optimize_for_bandwidth inline_css/rewrite_css.html \
'.blue{foreground-color:blue}body{background:url(arrow.png)}' \
'<style>.yellow{background-color:#ff0}</style>'
test_optimize_for_bandwidth css_urls/rewrite_css.html \
'.blue{foreground-color:blue}body{background:url(arrow.png)}' \
'<link rel="stylesheet" type="text/css" href="A.yellow.css.pagespeed'
test_optimize_for_bandwidth image_urls/rewrite_image.html \
'<img src=\"xarrow.png.pagespeed.'
test_optimize_for_bandwidth core_filters/rewrite_css.html \
'.blue{foreground-color:blue}body{background:url(xarrow.png.pagespeed.' \
'<style>.yellow{background-color:#ff0}</style>'
# To make sure that we can reconstruct the proxied content by going back
# to the origin, we must avoid hitting the output cache.
# Note that cache-flushing does not affect the cache of rewritten resources;
@@ -1535,6 +1711,8 @@ EXP_EXAMPLE="http://experiment.example.com/mod_pagespeed_example"
EXP_EXTEND_CACHE="$EXP_EXAMPLE/extend_cache.html"
OUT=$(http_proxy=$SECONDARY_HOSTNAME $WGET_DUMP $EXP_EXTEND_CACHE)
check_from "$OUT" fgrep "PageSpeedExperiment="
MATCHES=$(echo "$OUT" | grep -c "PageSpeedExperiment=")
check [ $MATCHES -eq 1 ]
start_test PageSpeedFilters query param should disable experiments.
URL="$EXP_EXTEND_CACHE?PageSpeed=on&PageSpeedFilters=rewrite_css"
@@ -1657,8 +1835,8 @@ check_from "$RESOURCE_HEADERS" egrep -q 'Cache-Control: max-age=31536000'
# Test critical CSS beacon injection, beacon return, and computation. This
# requires UseBeaconResultsInFilters() to be true in rewrite_driver_factory.
# NOTE: must occur after cache flush on a repeat run. All repeat runs now
# run the cache flush test.
# NOTE: must occur after cache flush, which is why it's in this embedded
# block. The flush removes pre-existing beacon results from the pcache.
test_filter prioritize_critical_css
fetch_until -save $URL 'fgrep -c pagespeed.criticalCssBeaconInit' 1
check [ $(fgrep -o ".very_large_class_name_" $FETCH_FILE | wc -l) -eq 36 ]
@@ -1666,7 +1844,8 @@ CALL_PAT=".*criticalCssBeaconInit("
SKIP_ARG="[^,]*,"
CAPTURE_ARG="'\([^']*\)'.*"
BEACON_PATH=$(sed -n "s/${CALL_PAT}${CAPTURE_ARG}/\1/p" $FETCH_FILE)
ESCAPED_URL=$(sed -n "s/${CALL_PAT}${SKIP_ARG}${CAPTURE_ARG}/\1/p" $FETCH_FILE)
ESCAPED_URL=$( \
sed -n "s/${CALL_PAT}${SKIP_ARG}${CAPTURE_ARG}/\1/p" $FETCH_FILE)
OPTIONS_HASH=$( \
sed -n "s/${CALL_PAT}${SKIP_ARG}${SKIP_ARG}${CAPTURE_ARG}/\1/p" $FETCH_FILE)
NONCE=$( \
@@ -1689,7 +1868,6 @@ fetch_until $URL \
'grep -c <style>[.]blue{[^}]*}[.]bold{[^}]*}</style>' 1
fetch_until -save $URL \
'grep -c <style>[.]foo{[^}]*}</style>' 1
# The last one should also have the other 3, too.
check [ `grep -c '<style>[.]blue{[^}]*}</style>' $FETCH_UNTIL_OUTFILE` = 1 ]
check [ `grep -c '<style>[.]big{[^}]*}</style>' $FETCH_UNTIL_OUTFILE` = 1 ]
@@ -1710,13 +1888,13 @@ start_test resize_rendered_image_dimensions with critical images beacon
HOST_NAME="http://renderedimagebeacon.example.com"
URL="$HOST_NAME/mod_pagespeed_test/image_rewriting/image_resize_using_rendered_dimensions.html"
http_proxy=$SECONDARY_HOSTNAME \
fetch_until -save -recursive $URL 'fgrep -c "pagespeed_url_hash"' 1 \
'--header=X-PSA-Blocking-Rewrite:psatest'
check [ $(grep -c "^pagespeed\.criticalImagesBeaconInit" \
fetch_until -save -recursive $URL 'fgrep -c "pagespeed_url_hash"' 2 \
'--header=X-PSA-Blocking-Rewrite:psatest'
check [ $(grep -c "^pagespeed\.CriticalImages\.Run" \
$WGET_DIR/image_resize_using_rendered_dimensions.html) = 1 ];
OPTIONS_HASH=$(awk -F\' '/^pagespeed\.criticalImagesBeaconInit/ {print $(NF-3)}' \
OPTIONS_HASH=$(awk -F\' '/^pagespeed\.CriticalImages\.Run/ {print $(NF-3)}' \
$WGET_DIR/image_resize_using_rendered_dimensions.html)
NONCE=$(awk -F\' '/^pagespeed\.criticalImagesBeaconInit/ {print $(NF-1)}' \
NONCE=$(awk -F\' '/^pagespeed\.CriticalImages\.Run/ {print $(NF-1)}' \
$WGET_DIR/image_resize_using_rendered_dimensions.html)
# Send a beacon response using POST indicating that OptPuzzle.jpg is
@@ -1724,14 +1902,66 @@ NONCE=$(awk -F\' '/^pagespeed\.criticalImagesBeaconInit/ {print $(NF-1)}' \
BEACON_URL="$HOST_NAME/ngx_pagespeed_beacon"
BEACON_URL+="?url=http%3A%2F%2Frenderedimagebeacon.example.com%2Fmod_pagespeed_test%2F"
BEACON_URL+="image_rewriting%2Fimage_resize_using_rendered_dimensions.html"
BEACON_DATA="oh=$OPTIONS_HASH&n=$NONCE&ci=1344500982&rd=%7B%221344500982%22%3A%7B%22renderedWidth%22%3A150%2C%22renderedHeight%22%3A100%2C%22originalWidth%22%3A256%2C%22originalHeight%22%3A192%7D%7D"
BEACON_DATA="oh=$OPTIONS_HASH&n=$NONCE&ci=1344500982&rd=%7B%221344500982%22%3A%7B%22rw%22%3A150%2C%22rh%22%3A100%2C%22ow%22%3A256%2C%22oh%22%3A192%7D%7D"
OUT=$(env http_proxy=$SECONDARY_HOSTNAME \
$WGET_DUMP --post-data "$BEACON_DATA" "$BEACON_URL")
$WGET_DUMP --no-http-keep-alive --post-data "$BEACON_DATA" "$BEACON_URL")
check_from "$OUT" egrep -q "HTTP/1[.]. 204"
http_proxy=$SECONDARY_HOSTNAME \
fetch_until -save -recursive $URL \
'fgrep -c 150x100xOptPuzzle.jpg.pagespeed.ic.' 1
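The percent-encoded rd= payload above can be built from the plain JSON rendered-dimensions object. This sketch assumes python3 is available for the encoding; the urlencode helper and the HASH/NONCE placeholders are hypothetical.

```shell
urlencode() {
  python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"
}
RD='{"1344500982":{"rw":150,"rh":100,"ow":256,"oh":192}}'
BEACON_DATA="oh=HASH&n=NONCE&ci=1344500982&rd=$(urlencode "$RD")"
echo "$BEACON_DATA"
```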
# Verify that downstream caches and rebeaconing interact correctly for images.
start_test lazyload_images,rewrite_images with downstream cache rebeaconing
HOST_NAME="http://downstreamcacherebeacon.example.com"
URL="$HOST_NAME/mod_pagespeed_test/downstream_caching.html"
URL+="?PageSpeedFilters=lazyload_images"
# 1. Even with blocking rewrite, we don't get an instrumented page when the
# PS-ShouldBeacon header is missing.
OUT1=$(http_proxy=$SECONDARY_HOSTNAME \
$WGET_DUMP --header 'X-PSA-Blocking-Rewrite: psatest' $URL)
check_not_from "$OUT1" egrep -q 'pagespeed\.CriticalImages\.Run'
check_from "$OUT1" grep -q "Cache-Control: private, max-age=3000"
# 2. We get an instrumented page if the correct key is present.
OUT2=$(http_proxy=$SECONDARY_HOSTNAME \
$WGET_DUMP $WGET_ARGS \
--header="X-PSA-Blocking-Rewrite: psatest" \
--header="PS-ShouldBeacon: random_rebeaconing_key" $URL)
check_from "$OUT2" egrep -q "pagespeed\.CriticalImages\.Run"
check_from "$OUT2" grep -q "Cache-Control: max-age=0, no-cache"
# 3. We do not get an instrumented page if the wrong key is present.
OUT3=$(http_proxy=$SECONDARY_HOSTNAME \
$WGET_DUMP $WGET_ARGS \
--header="X-PSA-Blocking-Rewrite: psatest" \
--header="PS-ShouldBeacon: wrong_rebeaconing_key" $URL)
check_not_from "$OUT3" egrep -q "pagespeed\.CriticalImages\.Run"
check_from "$OUT3" grep -q "Cache-Control: private, max-age=3000"
# Verify that downstream caches and rebeaconing interact correctly for css.
test_filter prioritize_critical_css with rebeaconing
HOST_NAME="http://downstreamcacherebeacon.example.com"
URL="$HOST_NAME/mod_pagespeed_test/downstream_caching.html"
URL+="?PageSpeedFilters=prioritize_critical_css"
# 1. Even with blocking rewrite, we don't get an instrumented page when the
# PS-ShouldBeacon header is missing.
OUT1=$(http_proxy=$SECONDARY_HOSTNAME \
$WGET_DUMP --header 'X-PSA-Blocking-Rewrite: psatest' $URL)
check_not_from "$OUT1" egrep -q 'pagespeed\.criticalCssBeaconInit'
check_from "$OUT1" grep -q "Cache-Control: private, max-age=3000"
# 2. We get an instrumented page if the correct key is present.
OUT2=$(http_proxy=$SECONDARY_HOSTNAME \
$WGET_DUMP $WGET_ARGS \
--header 'X-PSA-Blocking-Rewrite: psatest'\
--header="PS-ShouldBeacon: random_rebeaconing_key" $URL)
check_from "$OUT2" grep -q "Cache-Control: max-age=0, no-cache"
check_from "$OUT2" egrep -q "pagespeed\.criticalCssBeaconInit"
# 3. We do not get an instrumented page if the wrong key is present.
WGET_ARGS="--header=\"PS-ShouldBeacon: wrong_rebeaconing_key\""
OUT3=$(http_proxy=$SECONDARY_HOSTNAME \
$WGET_DUMP $WGET_ARGS $URL)
check_not_from "$OUT3" egrep -q "pagespeed\.criticalCssBeaconInit"
check_from "$OUT3" grep -q "Cache-Control: private, max-age=3000"
# Verify that we can send a critical image beacon and that lazyload_images
# does not try to lazyload the critical images.
WGET_ARGS=""
@@ -1742,15 +1972,15 @@ URL="$HOST_NAME/mod_pagespeed_test/image_rewriting/rewrite_images.html"
# lazyloaded by default.
http_proxy=$SECONDARY_HOSTNAME \
fetch_until -save -recursive $URL 'fgrep -c pagespeed_lazy_src=' 3
check [ $(grep -c "^pagespeed\.criticalImagesBeaconInit" \
check [ $(grep -c "^pagespeed\.CriticalImages\.Run" \
$WGET_DIR/rewrite_images.html) = 1 ];
# We need the options hash and nonce to send a critical image beacon, so extract
# it from injected beacon JS.
OPTIONS_HASH=$(awk -F\' '/^pagespeed\.criticalImagesBeaconInit/ {print $(NF-3)}' \
OPTIONS_HASH=$(awk -F\' '/^pagespeed\.CriticalImages\.Run/ {print $(NF-3)}' \
$WGET_DIR/rewrite_images.html)
NONCE=$(awk -F\' '/^pagespeed\.criticalImagesBeaconInit/ {print $(NF-1)}' \
NONCE=$(awk -F\' '/^pagespeed\.CriticalImages\.Run/ {print $(NF-1)}' \
$WGET_DIR/rewrite_images.html)
OPTIONS_HASH=$(grep "^pagespeed\.criticalImagesBeaconInit" \
OPTIONS_HASH=$(grep "^pagespeed\.CriticalImages\.Run" \
$WGET_DIR/rewrite_images.html | awk -F\' '{print $(NF-3)}')
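The awk extraction above works by splitting the injected beacon call on single quotes and taking the 4th- and 2nd-to-last fields. A self-contained sketch (the sample line below is hypothetical):

```shell
LINE="pagespeed.CriticalImages.Run('/beacon','http://test/x.html','OPTHASH',true,'NONCE');"
# Fields split on ': ...,'OPTHASH',true,'NONCE');  -> $(NF-3)=hash, $(NF-1)=nonce
OPTIONS_HASH=$(echo "$LINE" | awk -F\' '{print $(NF-3)}')
NONCE=$(echo "$LINE" | awk -F\' '{print $(NF-1)}')
echo "$OPTIONS_HASH $NONCE"
```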
# Send a beacon response using POST indicating that Puzzle.jpg is a critical
# image.
@@ -1762,7 +1992,6 @@ BEACON_DATA="oh=$OPTIONS_HASH&n=$NONCE&ci=2932493096"
OUT=$(env http_proxy=$SECONDARY_HOSTNAME \
wget -q --save-headers -O - --no-http-keep-alive \
--post-data "$BEACON_DATA" "$BEACON_URL")
echo $OUT
check_from "$OUT" egrep -q "HTTP/1[.]. 204"
# Now only 2 of the images should be lazyloaded, Cuppa.png should not be.
http_proxy=$SECONDARY_HOSTNAME \
@@ -1776,11 +2005,11 @@ http_proxy=$SECONDARY_HOSTNAME \
URL="$URL?id=4"
http_proxy=$SECONDARY_HOSTNAME \
fetch_until -save -recursive $URL 'fgrep -c pagespeed_lazy_src=' 3
check [ $(grep -c "^pagespeed\.criticalImagesBeaconInit" \
check [ $(grep -c "^pagespeed\.CriticalImages\.Run" \
"$WGET_DIR/rewrite_images.html?id=4") = 1 ];
OPTIONS_HASH=$(awk -F\' '/^pagespeed\.criticalImagesBeaconInit/ {print $(NF-3)}' \
OPTIONS_HASH=$(awk -F\' '/^pagespeed\.CriticalImages\.Run/ {print $(NF-3)}' \
"$WGET_DIR/rewrite_images.html?id=4")
NONCE=$(awk -F\' '/^pagespeed\.criticalImagesBeaconInit/ {print $(NF-1)}' \
NONCE=$(awk -F\' '/^pagespeed\.CriticalImages\.Run/ {print $(NF-1)}' \
"$WGET_DIR/rewrite_images.html?id=4")
BEACON_URL="$HOST_NAME/ngx_pagespeed_beacon"
BEACON_URL+="?url=http%3A%2F%2Fimagebeacon.example.com%2Fmod_pagespeed_test%2F"
@@ -1796,6 +2025,58 @@ check_from "$OUT" egrep -q "HTTP/1[.]. 204"
http_proxy=$SECONDARY_HOSTNAME \
fetch_until -save -recursive $URL 'fgrep -c pagespeed_lazy_src=' 1
test_filter prioritize_critical_css with unauthorized resources
start_test no critical selectors chosen from unauthorized resources
URL="$TEST_ROOT/unauthorized/prioritize_critical_css.html"
URL+="?PageSpeedFilters=prioritize_critical_css,debug"
fetch_until -save $URL 'fgrep -c pagespeed.criticalCssBeaconInit' 3
# Except for the occurrence in html, the gsc-completion-selected string
# should not occur anywhere else, i.e. in the selector list.
check [ $(fgrep -c "gsc-completion-selected" $FETCH_FILE) -eq 1 ]
# From the css file containing an unauthorized @import line,
# a) no selectors from the unauthorized @import (e.g. .maia-display) should
# appear in the selector list.
check_not fgrep -q "maia-display" $FETCH_FILE
# b) no selectors from the authorized @import (e.g. .interesting_color) should
# appear in the selector list because it won't be flattened.
check_not fgrep -q "interesting_color" $FETCH_FILE
# c) selectors that don't depend on flattening should appear in the selector
# list.
check [ $(fgrep -c "non_flattened_selector" $FETCH_FILE) -eq 1 ]
EXPECTED_IMPORT_FAILURE_LINE="<!--Flattening failed: Cannot import "
EXPECTED_IMPORT_FAILURE_LINE+="http://www.google.com/css/maia.css: is it on "
EXPECTED_IMPORT_FAILURE_LINE+="an unauthorized domain?-->"
check grep -q "$EXPECTED_IMPORT_FAILURE_LINE" $FETCH_FILE
EXPECTED_COMMENT_LINE="<!--CriticalCssBeacon: Cannot create resource: either "
EXPECTED_COMMENT_LINE+="its domain is unauthorized and "
EXPECTED_COMMENT_LINE+="InlineUnauthorizedResources is not enabled, or it "
EXPECTED_COMMENT_LINE+="cannot be fetched (check the server logs)-->"
check grep -q "$EXPECTED_COMMENT_LINE" $FETCH_FILE
start_test inline_unauthorized_resources allows unauthorized css selectors
HOST_NAME="http://unauthorizedresources.example.com"
URL="$HOST_NAME/mod_pagespeed_test/unauthorized/prioritize_critical_css.html"
URL+="?PageSpeedFilters=prioritize_critical_css,debug"
# gsc-completion-selected string should occur once in the html and once in the
# selector list.
http_proxy=$SECONDARY_HOSTNAME \
fetch_until -save $URL 'fgrep -c gsc-completion-selected' 2
# Verify that this page had beaconing javascript on it.
check [ $(fgrep -c "pagespeed.criticalCssBeaconInit" $FETCH_FILE) -eq 3 ]
# From the css file containing an unauthorized @import line,
# a) no selectors from the unauthorized @import (e.g. .maia-display) should
# appear in the selector list.
check_not fgrep -q "maia-display" $FETCH_FILE
# b) no selectors from the authorized @import (e.g. .red) should
# appear in the selector list because it won't be flattened.
check_not fgrep -q "interesting_color" $FETCH_FILE
# c) selectors that don't depend on flattening should appear in the selector
# list.
check [ $(fgrep -c "non_flattened_selector" $FETCH_FILE) -eq 1 ]
check grep -q "$EXPECTED_IMPORT_FAILURE_LINE" $FETCH_FILE
start_test keepalive with html rewriting
keepalive_test "keepalive-html.example.com" \
"/mod_pagespeed_example/rewrite_images.html" ""
@@ -1832,13 +2113,11 @@ keepalive_test "keepalive-static.example.com"\
# are combined.
test_filter combine_css Maximum size of combined CSS.
QUERY_PARAM="PageSpeedMaxCombinedCssBytes=57"
URL="$URL?$QUERY_PARAM"
# Make sure that we have got the last CSS file and it is not combined.
fetch_until -save $URL 'grep -c styles/bold.css\"' 1
# Now check that the 1st and 2nd CSS files are combined, but the 3rd
# one is not.
check [ $(grep -c 'styles/yellow.css+blue.css.pagespeed.' \
$FETCH_UNTIL_OUTFILE) = 1 ]
URL="$URL&$QUERY_PARAM"
# We should get the first two files to be combined...
fetch_until -save $URL 'grep -c styles/yellow.css+blue.css.pagespeed.' 1
# ... but 3rd and 4th should be standalone
check [ $(grep -c 'styles/bold.css\"' $FETCH_UNTIL_OUTFILE) = 1 ]
check [ $(grep -c 'styles/big.css\"' $FETCH_UNTIL_OUTFILE) = 1 ]
# Test to make sure we have a sane Connection Header. See
@@ -1857,7 +2136,7 @@ CONNECTION=$(extract_headers $FETCH_UNTIL_OUTFILE | fgrep "Connection:")
check_not_from "$CONNECTION" fgrep -qi "Keep-Alive, Keep-Alive"
check_from "$CONNECTION" fgrep -qi "Keep-Alive"
test_filter ngx_pagespeed_static defer js served with correct headers.
start_test ngx_pagespeed_static defer js served with correct headers.
# First, determine which hash js_defer is served with. We need a correct hash
# to get it served up with an Etag, which is one of the things we want to test.
URL="$HOSTNAME/mod_pagespeed_example/defer_javascript.html?PageSpeed=on&PageSpeedFilters=defer_javascript"
@@ -1865,6 +2144,18 @@ OUT=$($WGET_DUMP $URL)
HASH=$(echo $OUT \
| grep --only-matching "/js_defer\\.*\([^.]\)*.js" | cut -d '.' -f 2)
# Test a scenario where a multi-domain installation is using a
# single CDN for all hosts, and uses a subdirectory in the CDN to
# distinguish hosts. Some of the resources may already be mapped to
# the CDN in the origin HTML, but we want to fetch them directly
# from localhost. If we do this successfully (see the MapOriginDomain
# command in customhostheader.example.com in pagespeed conf), we will
# inline a small image.
start_test shared CDN short-circuit back to origin via host-header override
URL="http://customhostheader.example.com/map_origin_host_header.html"
http_proxy=$SECONDARY_HOSTNAME fetch_until -save "$URL" \
"grep -c data:image/png;base64" 1
JS_URL="$HOSTNAME/ngx_pagespeed_static/js_defer.$HASH.js"
JS_HEADERS=$($WGET -O /dev/null -q -S --header='Accept-Encoding: gzip' \
$JS_URL 2>&1)
@@ -2010,7 +2301,7 @@ check_not_from "$(extract_headers $FETCH_UNTIL_OUTFILE)" \
# that we bail out of parsing and insert a script redirecting to
# ?PageSpeed=off. This should also insert an entry into the property cache so
# that the next time we fetch the file it will not be parsed at all.
echo TEST: Handling of large files.
start_test Handling of large files.
# Add a timestamp to the URL to ensure it's not in the property cache.
FILE="max_html_parse_size/large_file.html?value=$(date +%s)"
URL=$TEST_ROOT/$FILE
@@ -2029,5 +2320,38 @@ check_from "$LARGE_OUT" grep -q window.location=".*&ModPagespeed=off"
fetch_until -save $URL 'grep -c window.location=".*&ModPagespeed=off"' 0
check_not fgrep -q pagespeed.ic $FETCH_FILE
start_test messages load
OUT=$($WGET_DUMP "$HOSTNAME/ngx_pagespeed_message")
check_not_from "$OUT" grep "Writing to ngx_pagespeed_message failed."
check_from "$OUT" grep -q "/mod_pagespeed_example"
start_test Check keepalive after a 304 response.
# '-m 2' specifies that the whole operation is allowed to take 2 seconds max.
curl -vv -m 2 http://$PRIMARY_HOSTNAME/foo.css.pagespeed.ce.0.css \
-H 'If-Modified-Since: Z' http://$PRIMARY_HOSTNAME/foo
check [ $? = "0" ]
start_test Date response header set
OUT=$($WGET_DUMP $EXAMPLE_ROOT/combine_css.html)
check_not_from "$OUT" egrep -q '^Date: Thu, 01 Jan 1970 00:00:00 GMT'
OUT=$($WGET_DUMP --header=Host:date.example.com \
http://$SECONDARY_HOSTNAME/mod_pagespeed_example/combine_css.html)
check_from "$OUT" egrep -q '^Date: Fri, 16 Oct 2009 23:05:07 GMT'
if $USE_VALGRIND; then
# It is possible that there are still ProxyFetches outstanding
# at this point in time. Give them a few extra seconds to allow
# them to finish, so they will not generate valgrind complaints.
sleep 30
kill -s quit $VALGRIND_PID
wait
# Clear the previously set trap, we don't need it anymore.
trap - EXIT
start_test No Valgrind complaints.
check_not [ -s "$TEST_TMP/valgrind.log" ]
fi
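The shutdown-then-check sequence above (signal the valgrind process, wait for it, then assert its log is empty) can be sketched with a stand-in background process. The sleep is shortened and SIGTERM is used here; the real script waits 30s for outstanding fetches and sends SIGQUIT to valgrind.

```shell
VALGRIND_LOG=$(mktemp)   # hypothetical stand-in for $TEST_TMP/valgrind.log
sleep 60 & PID=$!
kill -s TERM $PID
wait $PID 2>/dev/null || true   # wait reports the signal's exit status
if [ -s "$VALGRIND_LOG" ]; then echo "complaints found"; else echo "log empty"; fi
```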
check_failures_and_exit
+211 -46
@@ -5,7 +5,7 @@
worker_processes 1;
daemon @@DAEMON@@;
master_process @@MASTER_PROCESS@@;
master_process on;
error_log "@@ERROR_LOG@@" debug;
pid "@@TEST_TMP@@/nginx.pid";
@@ -28,10 +28,22 @@ http {
proxy_temp_path "@@TMP_PROXY_CACHE@@";
root "@@SERVER_ROOT@@";
# Block 5a: Decide on Cache-Control header value to use for outgoing
# response.
# Map new_cache_control_header_val to "no-cache, max-age=0" if the
# content is html and use the original Cache-Control header value
# in all other cases.
map $upstream_http_content_type $new_cache_control_header_val {
default $upstream_http_cache_control;
"~*text/html" "no-cache, max-age=0";
}
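The effect of the map above can be sketched in shell: html responses get "no-cache, max-age=0", while every other content type keeps the upstream Cache-Control value. The function name and sample values are hypothetical.

```shell
cache_control_for() {  # $1: upstream Content-Type, $2: upstream Cache-Control
  case "$1" in
    *[Tt]ext/[Hh]tml*) echo "no-cache, max-age=0" ;;   # html: force revalidation
    *)                 echo "$2" ;;                    # everything else: passthrough
  esac
}
cache_control_for "text/html; charset=utf-8" "public, max-age=600"
cache_control_for "text/css"                 "public, max-age=600"
```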
pagespeed UsePerVHostStatistics on;
pagespeed InPlaceResourceOptimization on;
pagespeed CreateSharedMemoryMetadataCache "@@SHM_CACHE@@" 8192;
pagespeed PreserveUrlRelativity on;
pagespeed BlockingRewriteKey psatest;
# CriticalImagesBeaconEnabled is now on by default, but we disable in testing.
# With this option enabled, the inline image system test will currently fail.
@@ -53,7 +65,6 @@ http {
server_name max-cacheable-content-length.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@";
pagespeed BlockingRewriteKey psatest;
pagespeed RewriteLevel PassThrough;
pagespeed EnableFilters rewrite_javascript;
@@ -64,6 +75,7 @@ http {
@@RESOLVER@@
server {
# Block 1: Basic port, server_name definitions.
# This server represents the external caching layer server which
# receives user requests and proxies them to the upstream server
# running on the PRIMARY_PORT when the response is not available in
@@ -72,58 +84,110 @@ http {
server_name proxy_cache.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@";
# Disable PageSpeed on this server.
pagespeed off;
set $ua_dependent_ps_capability_list "";
set $bypass_cache 1;
# Block 2: Define prefix for proxy_cache_key based on the UserAgent.
# Define placeholder PS-CapabilityList header values for large and small
# screens with no UA dependent optimizations. Note that these placeholder
# values should not contain any of ll, ii, dj, jw or ws, since these
# codes will end up representing optimizations to be supported for the
# request.
set $default_ps_capability_list_for_large_screens "LargeScreen.SkipUADependentOptimizations";
set $default_ps_capability_list_for_small_screens "TinyScreen.SkipUADependentOptimizations";
# As a fallback, the PS-CapabilityList header that is sent to the upstream
# PageSpeed server should be for a large screen device with no browser
# specific optimizations.
set $ps_capability_list $default_ps_capability_list_for_large_screens;
# Cache-fragment 1: Desktop User-Agents that support lazyload_images (ll),
# inline_images (ii) and defer_javascript (dj).
# Note: Wget is added for testing purposes only.
if ($http_user_agent ~* "Chrome/|Firefox/|MSIE |Safari|Wget") {
# User Agents that support lazyload-images (ll), inline-images (ii) and
# defer-javascript (dj).
set $ua_dependent_ps_capability_list "ll,ii,dj:";
set $bypass_cache 0;
set $ps_capability_list "ll,ii,dj:";
}
# Cache-fragment 2: Desktop User-Agents that support lazyload_images (ll),
# inline_images (ii), defer_javascript (dj), webp (jw) and lossless_webp
# (ws).
if ($http_user_agent ~*
"Chrome/[2][3-9]+\.|Chrome/[3-9][0-9]+\.|Chrome/[0-9]{3,}\.") {
# User Agents that support lazyload-images (ll), inline-images (ii),
# defer-javascript (dj), webp (jw) and webp-lossless (ws).
set $ua_dependent_ps_capability_list "ll,ii,dj,jw,ws:";
set $bypass_cache 0;
set $ps_capability_list "ll,ii,dj,jw,ws:";
}
# Cache-fragment 3: This fragment contains (a) Desktop User-Agents that
# match fragments 1 or 2 but should not because they represent older
# versions of certain browsers or bots and (b) Tablet User-Agents that
# correspond to large screens. These will only get optimizations that work
# on all browsers and use image compression qualities applicable to large
# screens. Note that even Tablets that are capable of supporting inline or
# webp images, e.g. Android 4.1.2, will not get these advanced
# optimizations.
if ($http_user_agent ~* "Firefox/[1-2]\.|MSIE [5-8]\.|bot|Yahoo!|Ruby|RPT-HTTPClient|(Google \(\+https\:\/\/developers\.google\.com\/\+\/web\/snippet\/\))|Android|iPad|TouchPad|Silk-Accelerated|Kindle Fire") {
set $ps_capability_list $default_ps_capability_list_for_large_screens;
}
# Cache-fragment 4: Mobiles and small screen Tablets will use image compression
# qualities applicable to small screens, but all other optimizations will be
# those that work on all browsers.
if ($http_user_agent ~* "Mozilla.*Android.*Mobile*|iPhone|BlackBerry|Opera Mobi|Opera Mini|SymbianOS|UP.Browser|J-PHONE|Profile/MIDP|portalmmm|DoCoMo|Obigo|Galaxy Nexus|GT-I9300|GT-N7100|HTC One|Nexus [4|7|S]|Xoom|XT907") {
set $ps_capability_list $default_ps_capability_list_for_small_screens;
}
# All User Agents that represent
# 1) mobiles
# 2) tablets
# 3) desktop browsers that do not have defer-javascript capability at a minimum
# are made to go to the pagespeed server directly bypassing the proxy_cache.
if ($http_user_agent ~* "Firefox/[1-2]\.|MSIE [5-8]\.") {
set $ua_dependent_ps_capability_list "";
set $bypass_cache 1;
# Block 3a: Bypass the cache for .pagespeed. resource. PageSpeed has its own
# cache for these, and these could bloat up the caching layer.
if ($uri ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+") {
set $bypass_cache "1";
}
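The `.pagespeed.` URI pattern can be exercised outside nginx with `grep -E` (nginx uses PCRE, but this particular pattern is ERE-compatible); the sample URLs below are made-up examples of the `name.pagespeed.[experiment.]filter.hash.ext` shape:

```shell
pagespeed_re='\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+'

# Returns success (0) when the URI looks like a .pagespeed. resource.
matches() { printf '%s\n' "$1" | grep -Eq "$pagespeed_re"; }

matches "/styles/A.big.css.pagespeed.cf.0123456789.css" && echo "rewritten resource"
matches "/styles/A.big.css.pagespeed.a.cf.0123456789.css" && echo "with experiment id"
matches "/styles/big.css" || echo "plain resource"
```

The two-letter filter id, the 10-character hash, and the optional single-letter experiment prefix are all visible in the pattern, which is why such URLs can be recognized and sent past the caching layer.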
if ($http_user_agent ~* "Mozilla.*Android.*Mobile*|iPhone|BlackBerry|Opera Mobi|Opera Mini|SymbianOS|UP.Browser|J-PHONE|Profile/MIDP|portalmmm|DoCoMo|Obigo") {
# These are Mobile User Agents. We don't cache responses for these.
set $ua_dependent_ps_capability_list "";
set $bypass_cache 1;
}
if ($http_user_agent ~* "Android|iPad|TouchPad|Silk-Accelerated|Kindle Fire") {
# These are Tablet User Agents. We don't cache responses for these.
set $ua_dependent_ps_capability_list "";
set_random $rand 0 100;
set $should_beacon_header_val "";
if ($rand ~* "^[0-4]$") {
set $should_beacon_header_val "random_rebeaconing_key";
set $bypass_cache 1;
}
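`set_random` (from the set-misc module) yields an integer in the requested range, so matching `^[0-4]$` selects 5 of the 101 values in 0..100, i.e. roughly 5% of requests. A shell sketch of the same selection (`should_beacon_header` is a hypothetical helper, not nginx code):

```shell
# Roughly 5% sampling: only single-digit values 0-4 out of 0-100 get
# the rebeaconing key; everything else gets an empty header value.
should_beacon_header() {
  rand="$1"
  if printf '%s\n' "$rand" | grep -Eq '^[0-4]$'; then
    echo "random_rebeaconing_key"
  else
    echo ""
  fi
}
```

Note that the anchors matter: `40` does not match `^[0-4]$` because the pattern requires exactly one character.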
# Block 3b: Only cache responses to clients that support gzip. Most clients
# do, and the cache holds much more if it stores gzipped responses.
if ($http_accept_encoding !~* gzip) {
set $bypass_cache "1";
}
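Taken together, the if-chain above is a function from User-Agent to a cache-key prefix. The following is a simplified, case-sensitive shell rendition covering only a few representative UA substrings (the real blocks use nginx's case-insensitive `~*` matching and longer UA lists; `capability_list` is a hypothetical helper):

```shell
capability_list() {
  ua="$1"
  # Fallback: large-screen device, no UA-dependent optimizations.
  list="LargeScreen.SkipUADependentOptimizations"
  # Fragment 1: desktop browsers supporting ll, ii and dj.
  case "$ua" in
    *Chrome/*|*Firefox/*|*"MSIE "*|*Safari*|*Wget*) list="ll,ii,dj:" ;;
  esac
  # Fragment 2: modern Chrome additionally supports jw and ws.
  if printf '%s\n' "$ua" | grep -Eq 'Chrome/2[3-9]\.|Chrome/[3-9][0-9]+\.|Chrome/[0-9]{3,}\.'; then
    list="ll,ii,dj,jw,ws:"
  fi
  # Fragment 3: old browsers, bots and large-screen tablets fall back.
  if printf '%s\n' "$ua" | grep -Eq 'Firefox/[1-2]\.|MSIE [5-8]\.|bot|Android|iPad'; then
    list="LargeScreen.SkipUADependentOptimizations"
  fi
  # Fragment 4: phones and small-screen tablets get small-screen qualities.
  if printf '%s\n' "$ua" | grep -Eq 'iPhone|BlackBerry|Opera Mini'; then
    list="TinyScreen.SkipUADependentOptimizations"
  fi
  echo "$list"
}
```

As in the nginx config, order is what makes this work: a later fragment overrides an earlier one, so an iPad UA first qualifies for fragment 1 via `Safari` and is then demoted to the large-screen default by fragment 3.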
# Block 4: Location block for purge requests.
location ~ /purge(/.*) {
allow all;
proxy_cache_purge htmlcache $ua_dependent_ps_capability_list$1$is_args$args;
allow 127.0.0.1;
deny all;
proxy_cache_purge htmlcache $ps_capability_list$1$is_args$args;
}
# Block 6: Location block with proxy_cache directives.
location /mod_pagespeed_test/cachable_rewritten_html/ {
# 1: Upstream PageSpeed server is running at localhost:8050.
proxy_pass http://localhost:@@PRIMARY_PORT@@;
proxy_set_header Host $host;
proxy_cache_valid 200 30s;
# 2: Use htmlcache as the zone for caching.
proxy_cache htmlcache;
proxy_ignore_headers Cache-Control;
add_header X-Cache $upstream_cache_status;
proxy_cache_key $ua_dependent_ps_capability_list$uri$is_args$args;
# 3: Bypass requests that correspond to .pagespeed. resources
# or clients that do not support gzip etc.
proxy_cache_bypass $bypass_cache;
# 4: Use the redefined proxy_cache_key and make sure the /purge/
# block uses the same key.
proxy_cache_key $ps_capability_list$uri$is_args$args;
# 5: Forward Host header to upstream server.
proxy_set_header Host $host;
# 6: Set the PS-CapabilityList header for PageSpeed server to respect.
proxy_set_header PS-CapabilityList $ps_capability_list;
add_header PS-CapabilityList $ps_capability_list;
# 7: Add a header for identifying cache hits/misses/expires. This is
# for debugging purposes only and can be commented out in production.
add_header X-Cache $upstream_cache_status;
# Block 5b: Override Cache-Control headers as needed.
# Hide the upstream cache control header.
proxy_hide_header Cache-Control;
# Add the inferred Cache-Control header.
add_header Cache-Control $new_cache_control_header_val;
proxy_set_header PS-ShouldBeacon $should_beacon_header_val;
proxy_hide_header PS-ShouldBeacon;
}
}
@@ -141,6 +205,7 @@ http {
listen @@SECONDARY_PORT@@;
server_name experiment.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@";
pagespeed InPlaceResourceOptimization off;
pagespeed RunExperiment on;
pagespeed AnalyticsID "123-45-6734";
@@ -200,6 +265,23 @@ http {
pagespeed CriticalImagesBeaconEnabled true;
}
server {
# Setup a vhost with the critical image beacon enabled to make sure that
# downstream caches and rebeaconing interact correctly.
listen @@SECONDARY_PORT@@;
server_name downstreamcacherebeacon.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@";
pagespeed RewriteLevel PassThrough;
pagespeed CriticalImagesBeaconEnabled true;
# Enable the downstream caching feature and specify a rebeaconing key.
pagespeed DownstreamCachePurgeLocationPrefix "http://localhost:@@SECONDARY_PORT@@/purge";
pagespeed DownstreamCacheRebeaconingKey random_rebeaconing_key;
location ~ .*[.]html {
add_header Cache-Control "private, max-age=3000";
}
}
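The rebeaconing handshake above is a simple key match: the upstream server instruments the page only when the incoming `PS-ShouldBeacon` header equals the configured `DownstreamCacheRebeaconingKey`. A sketch of that decision (not ngx_pagespeed code; `should_instrument` is a hypothetical helper):

```shell
# Key configured via "pagespeed DownstreamCacheRebeaconingKey ..." above.
configured_key="random_rebeaconing_key"

# Instrument for beaconing only when the PS-ShouldBeacon header value
# matches the configured key; otherwise serve the normal cached response.
should_instrument() {
  if [ "$1" = "$configured_key" ]; then echo yes; else echo no; fi
}
```

This is why instrumented responses can safely carry no-cache headers: only requests the caching layer deliberately tagged with the right key see them, while everyone else keeps the original cache-control behavior.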
server {
listen @@SECONDARY_PORT@@;
server_name renderedimagebeacon.example.com;
@@ -210,6 +292,60 @@ http {
pagespeed CriticalImagesBeaconEnabled true;
}
# Build a configuration hierarchy where at the root we have turned on
# OptimizeForBandwidth, and in various subdirectories we override settings
# to make them more aggressive.
#
# In Apache we can do this all with Directory blocks, but to get the same
# inheritance in Nginx we need to have location blocks inside a server block.
server {
listen @@SECONDARY_PORT@@;
server_name optimizeforbandwidth.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@";
pagespeed RewriteLevel OptimizeForBandwidth;
pagespeed DisableFilters add_instrumentation;
location /mod_pagespeed_test/optimize_for_bandwidth/inline_css {
pagespeed EnableFilters inline_css;
}
location /mod_pagespeed_test/optimize_for_bandwidth/css_urls {
pagespeed CssPreserveURLs off;
}
location /mod_pagespeed_test/optimize_for_bandwidth/image_urls {
pagespeed ImagePreserveURLs off;
}
location /mod_pagespeed_test/optimize_for_bandwidth/core_filters {
pagespeed RewriteLevel CoreFilters;
}
}
server {
# For testing with a custom origin header. In this VirtualHost,
# /mod_pagespeed_test is included in our DocumentRoot and thus does
# not need to be in any resource URL paths. This helps us verify that
# we are looping back to the correct VirtualHost -- if we hit the wrong
# one it will not work. Also we don't have a VirtualHost for
# sharedcdn.example.com, so the default Host header used for
# origin-mapping won't work either. Instead, we want origin-fetches
# to go back to this VirtualHost so we rely on the new third optional
# argument to MapOriginDomain.
listen @@SECONDARY_PORT@@;
server_name customhostheader.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@_test";
root "@@SERVER_ROOT@@/mod_pagespeed_test";
pagespeed on;
pagespeed RewriteLevel PassThrough;
pagespeed EnableFilters rewrite_images;
# Don't use localhost, as ngx_pagespeed's native fetcher cannot resolve it
pagespeed MapOriginDomain 127.0.0.1:@@SECONDARY_PORT@@/customhostheader
sharedcdn.example.com/test customhostheader.example.com;
pagespeed JpegRecompressionQuality 50;
pagespeed CriticalImagesBeaconEnabled false;
}
server {
# Sets up a virtual host where we can specify forbidden filters without
# affecting any other hosts.
@@ -217,7 +353,6 @@ http {
server_name forbidden.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@";
pagespeed BlockingRewriteKey psatest;
# Start with all core filters enabled ...
pagespeed RewriteLevel CoreFilters;
@@ -228,6 +363,16 @@ http {
pagespeed DisableFilters inline_css;
}
server {
listen @@SECONDARY_PORT@@;
server_name unauthorizedresources.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@";
pagespeed RewriteLevel PassThrough;
pagespeed InlineUnauthorizedResourcesExperimental true;
pagespeed CssInlineMaxBytes 1000000;
}
server {
listen @@SECONDARY_PORT@@;
server_name client-domain-rewrite.example.com;
@@ -491,7 +636,6 @@ http {
server_name blocking.example.com;
pagespeed FileCachePath "@@SECONDARY_CACHE@@";
pagespeed BlockingRewriteKey psatest;
pagespeed RewriteLevel PassThrough;
pagespeed EnableFilters rewrite_images;
}
@@ -615,6 +759,13 @@ http {
pagespeed CriticalImagesBeaconEnabled false;
}
server {
listen @@SECONDARY_PORT@@;
server_name date.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@";
add_header "Date" "Date: Fri, 16 Oct 2009 23:05:07 GMT";
}
server {
listen @@PRIMARY_PORT@@;
server_name localhost;
@@ -629,11 +780,14 @@ http {
# in the proxy_cache layer.
pagespeed DownstreamCachePurgeMethod "GET";
pagespeed DownstreamCachePurgeLocationPrefix "http://localhost:@@SECONDARY_PORT@@/purge";
pagespeed DownstreamCacheRebeaconingKey "random_rebeaconing_key";
# We use a very small deadline here to force the rewriting to not complete
# in the very first attempt.
pagespeed RewriteDeadlinePerFlushMs 1;
pagespeed RewriteLevel PassThrough;
pagespeed EnableFilters collapse_whitespace,extend_cache,recompress_images,convert_jpeg_to_webp,defer_javascript;
pagespeed EnableFilters collapse_whitespace,extend_cache,recompress_images;
pagespeed CriticalImagesBeaconEnabled true;
add_header Cache-Control "public, max-age=100";
}
location /mod_pagespeed_test/disable_no_transform/index.html {
@@ -649,6 +803,7 @@ http {
#pagespeed MemcachedThreads 1;
pagespeed on;
pagespeed MessageBufferSize 200000;
#pagespeed CacheFlushPollIntervalSec 1;
@@ -660,14 +815,6 @@ http {
pagespeed Library 43 1o978_K0_LNE5_ystNklf
http://www.modpagespeed.com/rewrite_javascript.js;
# If X-PSA-Blocking-Rewrite request header is present and its value matches
# the value of BlockingRewriteKey below, the response will be fully
# rewritten before being flushed to the client.
pagespeed BlockingRewriteKey psatest;
# Disable parsing if the size of the HTML exceeds 50kB.
pagespeed MaxHtmlParseBytes 50000;
add_header X-Extra-Header 1;
# Establish a proxy mapping where the current server proxies an image
@@ -706,6 +853,10 @@ http {
pagespeed DisableFilters collapse_whitespace;
}
location /mod_pagespeed_test/max_html_parse_size {
pagespeed MaxHtmlParseBytes 5000;
}
location ~ \.php$ {
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_param QUERY_STRING $query_string;
@@ -781,6 +932,20 @@ http {
expires 5m;
}
location /mod_pagespeed_test/ipro/instant/wait/ {
pagespeed InPlaceWaitForOptimized on;
# TODO: Valgrind runs pass only if the below line is uncommented.
#pagespeed InPlaceRewriteDeadlineMs 1000;
}
location /mod_pagespeed_test/ipro/instant/deadline/ {
pagespeed InPlaceRewriteDeadlineMs -1;
}
pagespeed LoadFromFile
"http://localhost:@@PRIMARY_PORT@@/mod_pagespeed_test/ipro/instant/"
"@@SERVER_ROOT@@/mod_pagespeed_test/ipro/instant/";
pagespeed EnableFilters remove_comments;
# Test LoadFromFile mapping by mapping one dir to another.
@@ -0,0 +1,165 @@
# Copyright 2013 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: oschaaf@we-amp.com (Otto van der Schaaf)
# The first few suppressions also appear in other nginx modules (they are
# easy to find by searching) and seem to be false positives.
{
<nginx false positive>
Memcheck:Param
socketcall.sendmsg(msg.msg_iov[i])
fun:__sendmsg_nocancel
fun:ngx_write_channel
fun:ngx_signal_worker_processes
fun:ngx_master_process_cycle
fun:main
}
{
<nginx false positive>
Memcheck:Param
socketcall.sendmsg(msg.msg_iov[i])
fun:__sendmsg_nocancel
fun:ngx_write_channel
fun:ngx_master_process_cycle
fun:main
}
{
<nginx false positive>
Memcheck:Param
socketcall.sendmsg(msg.msg_iov[i])
fun:__sendmsg_nocancel
fun:ngx_write_channel
fun:ngx_pass_open_channel
fun:ngx_start_cache_manager_processes
fun:ngx_master_process_cycle
fun:main
}
{
<nginx false positive>
Memcheck:Param
socketcall.sendmsg(msg.msg_iov[i])
fun:__sendmsg_nocancel
fun:ngx_write_channel
fun:ngx_pass_open_channel
fun:ngx_start_cache_manager_processes
fun:ngx_master_process_cycle
fun:main
}
{
<nginx false positive>
Memcheck:Leak
fun:malloc
fun:ngx_alloc
fun:ngx_event_process_init
fun:ngx_worker_process_init
fun:ngx_worker_process_cycle
fun:ngx_spawn_process
fun:ngx_start_worker_processes
fun:ngx_master_process_cycle
fun:main
}
{
<nginx false positive>
Memcheck:Param
socketcall.sendmsg(msg.msg_iov[i])
fun:__sendmsg_nocancel
fun:ngx_write_channel
fun:ngx_pass_open_channel
fun:ngx_start_worker_processes
fun:ngx_master_process_cycle
fun:main
}
# similar to http://trac.nginx.org/nginx/ticket/369
{
<nginx false positive>
Memcheck:Param
pwrite64(buf)
obj:/lib/x86_64-linux-gnu/libpthread-2.15.so
fun:ngx_write_file
fun:ngx_write_chain_to_file
fun:ngx_write_chain_to_temp_file
fun:ngx_event_pipe_write_chain_to_temp_file
fun:ngx_event_pipe
fun:ngx_http_upstream_process_upstream
fun:ngx_http_upstream_process_header
fun:ngx_http_upstream_handler
fun:ngx_epoll_process_events
fun:ngx_process_events_and_timers
fun:ngx_worker_process_cycle
}
# Mentioned in https://github.com/pagespeed/ngx_pagespeed/issues/103
# Assuming a false positive, since the issue is closed.
{
<nginx false positive>
Memcheck:Param
write(buf)
obj:/lib/x86_64-linux-gnu/libpthread-2.15.so
fun:ngx_log_error_core
fun:ngx_http_parse_complex_uri
fun:ngx_http_process_request_uri
fun:ngx_http_process_request_line
fun:ngx_http_wait_request_handler
fun:ngx_epoll_process_events
fun:ngx_process_events_and_timers
fun:ngx_worker_process_cycle
fun:ngx_spawn_process
fun:ngx_start_worker_processes
fun:ngx_master_process_cycle
}
# Extra suppressions for testing in release mode:
{
<re2 uninitialised value in optimized code>
Memcheck:Cond
fun:_ZN3re24Prog8OptimizeEv
...
}
{
<re2 uninitialised value in optimized code>
Memcheck:Value8
fun:_ZN3re24Prog8OptimizeEv
...
}
{
<re2 uninitialised value in optimized code>
Memcheck:Cond
fun:_ZN3re2L4AddQEPNS_9SparseSetEi
...
}
{
<re2 uninitialised value in optimized code>
Memcheck:Value8
fun:_ZN3re2L4AddQEPNS_9SparseSetEi
...
}
{
<re2 uninitialized value in optimized code>
Memcheck:Value8
fun:_ZN3re23DFA10AddToQueueEPNS0_5WorkqEij
...
}
{
<re2 uninitialized value in optimized code>
Memcheck:Cond
fun:_ZN3re23DFA10AddToQueueEPNS0_5WorkqEij
...
}