Compare commits


46 Commits

Author SHA1 Message Date
dinic 01d5e79f8a 1. bugfix: connection_pool_mutex may not get unlocked
2. disable keep-alive if we don't find a Keep-Alive header
2014-05-04 11:36:51 +08:00
dinic 2e6767e6d2 set connection keep-alive when the response headers include "Connection: keep-alive" 2014-04-30 18:23:16 +08:00
dinic b2bb619524 native fetcher: support "keepalive" 2014-04-30 17:24:19 +08:00
Jeff Kaufman b4af0738a5 readme: use nginx 1.6.0 2014-04-25 08:27:26 -04:00
Otto van der Schaaf 323e820fde Merge pull request #658 from pagespeed/oschaaf-configure-wnoerror
nginx-gridfs: Add configure option to build with wno-error
2014-04-09 20:38:57 +02:00
Otto van der Schaaf 7d72a7c89a nginx-gridfs: Add configure option to build with wno-error
Some modules add things to CFLAGS that will make ngx_pagespeed emit
warnings at compile time. For example, nginx-gridfs will add
`--std=c99` - which is no good for ngx_pagespeed.

@peterbowey mentioned that `-Wno-error` fixes the build -- so
as a workaround, make configure add `-Wno-error` when invoked like
this: `WNO_ERROR=YES ./configure`
On my system, that results in a successful build when nginx-gridfs
is added to the module mix.

Fixes https://github.com/pagespeed/ngx_pagespeed/issues/626
2014-04-08 13:10:58 +02:00
Jeff Kaufman 6ced8c0f65 Merge pull request #641 from pagespeed/jefftk-if-block
if: support pagespeed directives in location if blocks
2014-03-25 13:16:27 -04:00
Jeff Kaufman 78cf39f9b3 if: support pagespeed directives in location if blocks 2014-03-19 16:46:48 -04:00
Jeff Kaufman f25569690a merge commit: merge 1.7.30.4 into master 2014-03-14 10:15:42 -04:00
huibaolin 707d671826 Change version 1.7.30.3 to 1.7.30.4 2014-03-14 08:16:38 -04:00
Jeff Kaufman 336352df38 Merge pull request #633 from pnommensen/patch-2
readme: use nginx 1.4.6
2014-03-07 13:17:42 -05:00
Patrick Nommensen 0edd405eb8 Update README.md
Don't know how I missed that.
2014-03-07 10:16:43 -08:00
Jeff Kaufman d004c4d916 Merge pull request #632 from pnommensen/patch-2
1.4.5 to 1.4.6 update
2014-03-07 11:32:01 -05:00
Patrick Nommensen 091ef6399b version update
1.4.5 to 1.4.6 http://nginx.org/en/CHANGES-1.4
2014-03-06 23:25:13 -08:00
Jeff Kaufman 9699caeab5 Merge pull request #623 from pnommensen/patch-1
Update README.md
2014-02-20 09:12:04 -05:00
Patrick Nommensen c371d516a8 Update README.md
nginx version update
2014-02-20 00:04:21 -08:00
Jeff Kaufman bf6c6c0e9b Merge pull request #621 from jart/dont-chown
Security Fix: Don't call chown() unless necessary.
2014-02-18 14:32:32 -05:00
Justine Tunney e8dd9fd3c3 Don't call chown() when initializing config dirs unless owner != worker user. 2014-02-15 22:32:55 -05:00
Jeff Kaufman 64eaa2a659 Merge pull request #602 from tcpper/fix_ngx_fetch_content_length
fix bug in NgxFetch#content_length_
2014-01-30 11:25:52 -08:00
Jeff Kaufman 83205c9c31 Merge pull request #606 from pagespeed/oschaaf-multiple-experiment-cookies
Experiments: fix sending out multiple experiment cookies
2014-01-24 13:12:27 -08:00
Otto van der Schaaf 625e762961 Experiments: fix sending out multiple experiment cookies
Only classify people into an experiment when we are rewriting html.
Fixes https://github.com/pagespeed/ngx_pagespeed/issues/586
2014-01-24 22:09:31 +01:00
Jeff Kaufman c20affe323 Merge pull request #605 from pagespeed/oschaaf-date-header
Date header: use current date when we don't get one handed over
2014-01-24 08:47:00 -08:00
Otto van der Schaaf 7a9e6de802 Date header: use current date when we don't get one handed over
When the content generator does not supply us with a date header,
we need to create one ourselves and set it to the current date.

Fixes:
https://github.com/pagespeed/ngx_pagespeed/issues/604 (duplicate)
https://github.com/pagespeed/ngx_pagespeed/issues/577
2014-01-24 16:49:37 +01:00
Huibao Lin 96cf9a22f7 Update to 1.7.30.3 release 2014-01-16 18:37:36 -05:00
Jeff Kaufman ab83a70a35 Merge pull request #599 from eezis/docfix
Added a missing 'cd ~' command to the README
2014-01-16 13:30:44 -08:00
Ernest Ezis 658b2cf7a9 Added the missing 'cd ~' command to the '3. Download and build nginx:' section 2014-01-16 12:45:14 -07:00
tcpper 6ccb815df3 fix bug in NgxFetch#content_length_ 2014-01-16 21:06:12 +08:00
Jeff Kaufman df5736609d native-fetcher: add support for FetchProxy
The native fetcher previously ignored FetchProxy settings; now it doesn't.

Squash-merge of tcpper's #590.
2014-01-08 10:51:24 -05:00
Jeff Kaufman 7fbb2c61ee readme: release 1.7.30.2 2014-01-06 16:51:41 -05:00
Jeff Kaufman af772c2fe8 Merge pull request #592 from pagespeed/jefftk-better-configure-error
config: point people to obj/autoconf.err when psol isn't detected
2014-01-03 03:33:47 -08:00
Jeff Kaufman a4bd9b9c13 config: point people to obj/autoconf.err when psol isn't detected by ./configure 2014-01-02 23:09:42 -05:00
Jeff Kaufman 328d3afc9b Merge pull request #583 from pagespeed/jefftk-support-purge
native-fetcher: support non-GET request methods like PURGE
2013-12-20 07:11:55 -08:00
Jeff Kaufman 2681c24ee0 native-fetcher: fix to work with nginx 1.5.8+
nginx 1.5.8 changed the resolver api, which the native fetcher uses.

Fixes #578.

Squash-merge of @dinic's #581.
2013-12-19 12:46:53 -05:00
Jeff Kaufman f86f47fda4 native-fetcher: support non-GET request methods like PURGE 2013-12-19 11:38:46 -05:00
Jeff Kaufman 179c81afa3 test: don't run downstream caching test with native fetcher 2013-12-19 11:03:53 -05:00
Jeff Kaufman 1f3560ea21 backport header-only fix
Was: trunk-tracking: update to r3632 from 1.7.30.1
2013-11-26 16:39:52 -05:00
Jeff Kaufman ed14455412 valgrind: unflake cache purging test
Fixes #569.
2013-11-25 14:38:24 -05:00
Jeff Kaufman be4d263d10 valgrind: suppressions file might not be in current directory 2013-11-25 10:23:25 -05:00
Jeff Kaufman 0bafd6b7e0 Merge pull request #565 from pagespeed/oschaaf-valgrind
Valgrind: Add an automated test
2013-11-25 07:03:33 -08:00
Otto van der Schaaf 9bbe912bd7 Valgrind: Add an automated test
This makes nginx run in the background under valgrind,
with both a master and a child process.
Valgrind errors will be redirected to `valgrind.log`.
When `USE_VALGRIND` is set, all system tests will be run under valgrind,
and at the end a new test is appended which ensures no valgrind errors
were encountered.

It is also worth noting that:
- There is a new file, `valgrind.sup`, which contains a few suppressions.
- Some tests are flaky under valgrind. For now these are appended
  to the expected failures (when under valgrind only).
- 'Possibly lost' errors are all suppressed to keep the number of false
  positives manageable.
2013-11-21 21:26:15 +01:00
Jeff Kaufman b78eb8a939 Merge pull request #567 from pagespeed/oschaaf-304-timeout
system-tests: Test keepalive behaviour after a 304 response
2013-11-21 11:05:18 -08:00
Otto van der Schaaf e082a01912 system-tests: Test keepalive behaviour after a 304 response 2013-11-20 23:15:14 +01:00
Jeff Kaufman fa5815e1e8 Merge pull request #560 from pagespeed/jefftk-fix-messages
messages: unbreak /ngx_pagespeed_messages
2013-11-12 11:51:57 -08:00
Jeff Kaufman f12af2f03b messages: unbreak /ngx_pagespeed_messages
The shared circular buffer wasn't hooked up fully, which meant loading
/ngx_pagespeed_messages didn't work.  This fixes that and adds a test.

I also noticed while adding this that the 'Handling of large files' test
wasn't set up properly, so I converted that to use start_test.

Fixing that exposed another bug where the 'Handling of large files' test
was actually failing but being marked as an expected failure by being
grouped in with the test above.  Adding `pagespeed MaxHtmlParseBytes 5000`
to the appropriate location made it test what it was supposed to be testing
again, and the underlying feature wasn't broken.
2013-11-12 13:11:12 -05:00
Jeff Kaufman e22fae46bc readme: use release 2013-11-08 11:41:53 -05:00
Jeff Kaufman 53a599fbd4 readme: recommend tmpfs for file cache 2013-11-08 11:36:16 -05:00
11 changed files with 615 additions and 123 deletions
+5 -4
@@ -47,10 +47,11 @@ recompiling Tengine](https://github.com/pagespeed/ngx_pagespeed/wiki/Using-ngx_p
3. Download and build nginx:
```bash
$ cd ~
$ # check http://nginx.org/en/download.html for the latest version
$ wget http://nginx.org/download/nginx-1.4.4.tar.gz
$ tar -xvzf nginx-1.4.4.tar.gz
$ cd nginx-1.4.4/
$ wget http://nginx.org/download/nginx-1.6.0.tar.gz
$ tar -xvzf nginx-1.6.0.tar.gz
$ cd nginx-1.6.0/
$ ./configure --add-module=$HOME/ngx_pagespeed-1.7.30.4-beta
$ make
$ sudo make install
@@ -72,7 +73,7 @@ In your `nginx.conf`, add to the main or server block:
```nginx
pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;
pagespeed FileCachePath /var/ngx_pagespeed_cache; # Use tmpfs for best results.
```
In every server block where pagespeed is enabled add:
+6 -1
@@ -111,6 +111,10 @@ case "$NGX_GCC_VER" in
;;
esac
if [ "$WNO_ERROR" = "YES" ]; then
CFLAGS="$CFLAGS -Wno-error"
fi
pagespeed_include="\
$mod_pagespeed_dir \
$mod_pagespeed_dir/third_party/chromium/src \
@@ -208,7 +212,8 @@ if [ $ngx_found = yes ]; then
CORE_INCS="$CORE_INCS $pagespeed_include"
else
cat << END
$0: error: module ngx_pagespeed requires the pagespeed optimization library
$0: error: module ngx_pagespeed requires the pagespeed optimization library.
Look in obj/autoconf.err for more details.
END
exit 1
fi
+214 -46
@@ -54,8 +54,156 @@ extern "C" {
#include "net/instaweb/util/public/thread_system.h"
#include "net/instaweb/util/public/timer.h"
#include "net/instaweb/util/public/writer.h"
#include "net/instaweb/util/public/pthread_mutex.h"
namespace net_instaweb {
class NgxConnection : public PoolElement<NgxConnection> {
public:
NgxConnection();
~NgxConnection();
void SetKeepAlive(bool k = true) { keepalive_ = k; }
bool KeepAlive() { return keepalive_; }
void SetSock(u_char *sockaddr, socklen_t socklen) {
socklen_ = socklen;
ngx_memcpy(&sockaddr_, sockaddr, socklen);
}
static NgxConnection* Connect(ngx_peer_connection_t *pc);
void Close();
static void NgxConnectionDumyHandler(ngx_event_t *ev) {};
static void NgxConnectionCloseHandler(ngx_event_t *ev);
typedef Pool<NgxConnection> NgxConnectionPool;
static NgxConnectionPool connection_pool;
static PthreadMutex connection_pool_mutex;
ngx_connection_t *c_;
private:
int64 timeout_;
int max_requests_;
bool keepalive_;
socklen_t socklen_;
u_char sockaddr_[NGX_SOCKADDRLEN];
};
NgxConnection::NgxConnectionPool NgxConnection::connection_pool;
PthreadMutex NgxConnection::connection_pool_mutex;
NgxConnection::NgxConnection() {
c_ = NULL;
keepalive_ = false;
// defaults: 60s keep-alive timeout, at most 100 requests per connection
timeout_ = 60000;
max_requests_ = 100;
}
NgxConnection::~NgxConnection() {
//
}
NgxConnection* NgxConnection::Connect(ngx_peer_connection_t *pc) {
NgxConnection *nc;
NgxConnection::connection_pool_mutex.Lock();
for (Pool<NgxConnection>::iterator p = connection_pool.begin();
p != connection_pool.end(); p++) {
nc = *p;
if (ngx_memn2cmp(static_cast<u_char*>(nc->sockaddr_),
reinterpret_cast<u_char*>(pc->sockaddr),
nc->socklen_, pc->socklen) == 0) {
nc->c_->idle = 0;
nc->c_->log = pc->log;
nc->c_->read->log = pc->log;
nc->c_->write->log = pc->log;
nc->c_->pool->log = pc->log;
if (nc->c_->read->timer_set) {
ngx_del_timer(nc->c_->read);
}
NgxConnection::connection_pool_mutex.Unlock();
return nc;
}
}
connection_pool_mutex.Unlock();
int rc = ngx_event_connect_peer(pc);
if (rc == NGX_ERROR || rc == NGX_DECLINED || rc == NGX_BUSY) {
return NULL;
}
nc = new NgxConnection();
nc->SetSock(reinterpret_cast<u_char*>(pc->sockaddr), pc->socklen);
nc->c_ = pc->connection;
return nc;
}
void NgxConnection::Close() {
max_requests_--;
if (!keepalive_ || max_requests_ <= 0) {
ngx_close_connection(c_);
delete this;
return;
}
if (c_->read->timer_set) {
ngx_del_timer(c_->read);
}
if (c_->write->timer_set) {
ngx_del_timer(c_->write);
}
ngx_add_timer(c_->read, static_cast<ngx_msec_t>(timeout_));
c_->data = this;
c_->read->handler = NgxConnectionCloseHandler;
c_->write->handler = NgxConnectionDumyHandler;
c_->idle = 1;
// this connection should not be associated with current fetch
c_->log = ngx_cycle->log;
c_->read->log = ngx_cycle->log;
c_->write->log = ngx_cycle->log;
c_->pool->log = ngx_cycle->log;
connection_pool_mutex.Lock();
connection_pool.Add(this);
connection_pool_mutex.Unlock();
}
void NgxConnection::NgxConnectionCloseHandler(ngx_event_t *ev) {
ngx_connection_t *c = static_cast<ngx_connection_t*>(ev->data);
NgxConnection *nc = static_cast<NgxConnection*>(c->data);
if (c->read->timedout) {
nc->SetKeepAlive(false);
nc->Close();
return;
}
char buf[1];
int n;
// not a timeout event; peek at the socket to see if the peer closed it
n = recv(c->fd, buf, 1, MSG_PEEK);
if (n == -1 && ngx_socket_errno == NGX_EAGAIN) {
if (ngx_handle_read_event(c->read, 0) != NGX_OK) {
nc->SetKeepAlive(false);
nc->Close();
return;
}
return;
}
nc->SetKeepAlive(false);
nc->Close();
}
NgxFetch::NgxFetch(const GoogleString& url,
AsyncFetch* async_fetch,
MessageHandler* message_handler,
@@ -70,7 +218,7 @@ namespace net_instaweb {
fetch_start_ms_(0),
fetch_end_ms_(0),
done_(false),
content_length_(0) {
content_length_(-1) {
ngx_memzero(&url_, sizeof(url_));
log_ = log;
pool_ = NULL;
@@ -83,7 +231,8 @@ namespace net_instaweb {
ngx_del_timer(timeout_event_);
}
if (connection_ != NULL) {
ngx_close_connection(connection_);
connection_->Close();
connection_ = NULL;
}
if (pool_ != NULL) {
ngx_destroy_pool(pool_);
@@ -142,19 +291,26 @@ namespace net_instaweb {
// The host is either a domain name or an IP address. First check
// if it's a valid IP address and only if that fails fall back to
// using the DNS resolver.
GoogleString s_ipaddress(reinterpret_cast<char*>(url_.host.data),
url_.host.len);
// We may be configured with a proxy.
ngx_url_t* tmp_url = &url_;
if (0 != fetcher_->proxy_.url.len) {
tmp_url = &fetcher_->proxy_;
}
GoogleString s_ipaddress(reinterpret_cast<char*>(tmp_url->host.data),
tmp_url->host.len);
ngx_memzero(&sin_, sizeof(sin_));
sin_.sin_family = AF_INET;
sin_.sin_port = htons(url_.port);
sin_.sin_port = htons(tmp_url->port);
sin_.sin_addr.s_addr = inet_addr(s_ipaddress.c_str());
if (sin_.sin_addr.s_addr == INADDR_NONE) {
// inet_addr returned INADDR_NONE, which means the hostname
// isn't a valid IP address. Check DNS.
ngx_resolver_ctx_t temp;
temp.name.data = url_.host.data;
temp.name.len = url_.host.len;
temp.name.data = tmp_url->host.data;
temp.name.len = tmp_url->host.len;
resolver_ctx_ = ngx_resolve_start(fetcher_->resolver_, &temp);
if (resolver_ctx_ == NULL || resolver_ctx_ == NGX_NO_RESOLVER) {
// TODO(oschaaf): this spams the log, but is useful in the fetcher's
@@ -166,8 +322,8 @@ namespace net_instaweb {
}
resolver_ctx_->data = this;
resolver_ctx_->name.data = url_.host.data;
resolver_ctx_->name.len = url_.host.len;
resolver_ctx_->name.data = tmp_url->host.data;
resolver_ctx_->name.len = tmp_url->host.len;
#if (nginx_version < 1005008)
resolver_ctx_->type = NGX_RESOLVE_A;
@@ -209,8 +365,31 @@ namespace net_instaweb {
ngx_del_timer(timeout_event_);
timeout_event_ = NULL;
}
if (success) {
ConstStringStarVector v;
if (async_fetch_->response_headers()->Lookup(
StringPiece(HttpAttributes::kConnection), &v)) {
bool keepalive = false;
for (int i = 0; i < v.size(); i++) {
if (*v[i] == "keep-alive") {
keepalive = true;
break;
} else if (*v[i] == "close") {
break;
}
}
// - enable keep-alive if we find a "keep-alive" value
// - disable keep-alive on "Connection: close"
// - disable keep-alive if no "keep-alive" value is present
connection_->SetKeepAlive(keepalive);
}
}
if (connection_) {
ngx_close_connection(connection_);
connection_->Close();
connection_ = NULL;
}
@@ -263,37 +442,14 @@ namespace net_instaweb {
return false;
}
str_url_.copy(reinterpret_cast<char*>(url_.url.data), str_url_.length(), 0);
size_t scheme_offset;
u_short port;
if (ngx_strncasecmp(url_.url.data, reinterpret_cast<u_char*>(
const_cast<char*>("http://")), 7) == 0) {
scheme_offset = 7;
port = 80;
} else if (ngx_strncasecmp(url_.url.data, reinterpret_cast<u_char*>(
const_cast<char*>("https://")), 8) == 0) {
scheme_offset = 8;
port = 443;
} else {
scheme_offset = 0;
port = 80;
}
url_.url.data += scheme_offset;
url_.url.len -= scheme_offset;
url_.default_port = port;
// See: http://lxr.evanmiller.org/http/source/core/ngx_inet.c#L875
url_.no_resolve = 0;
url_.uri_part = 1;
if (ngx_parse_url(pool_, &url_) == NGX_OK) {
return true;
}
return false;
return NgxUrlAsyncFetcher::ParseUrl(&url_, pool_);
}
// Issue a request after the resolver is done
void NgxFetch::NgxFetchResolveDone(ngx_resolver_ctx_t* resolver_ctx) {
NgxFetch* fetch = static_cast<NgxFetch*>(resolver_ctx->data);
NgxUrlAsyncFetcher* fetcher = fetch->fetcher_;
if (resolver_ctx->state != NGX_OK) {
if (fetch->timeout_event() != NULL && fetch->timeout_event()->timer_set) {
ngx_del_timer(fetch->timeout_event());
@@ -322,6 +478,11 @@ namespace net_instaweb {
fetch->sin_.sin_family = AF_INET;
fetch->sin_.sin_port = htons(fetch->url_.port);
// We may be configured with a proxy.
if (0 != fetcher->proxy_.url.len) {
fetch->sin_.sin_port = htons(fetcher->proxy_.port);
}
char* ip_address = inet_ntoa(fetch->sin_.sin_addr);
fetch->message_handler()->Message(
@@ -352,7 +513,13 @@ namespace net_instaweb {
bool have_host = false;
GoogleString port;
size = sizeof("GET ") - 1 + url_.uri.len + sizeof(" HTTP/1.0\r\n") - 1;
const char* method = request_headers->method_string();
size_t method_len = strlen(method);
size = (method_len +
1 /* for the space */ +
url_.uri.len +
sizeof(" HTTP/1.0\r\n") - 1);
for (int i = 0; i < request_headers->NumAttributes(); i++) {
// if no explicit host header is given in the request headers,
@@ -380,7 +547,8 @@ namespace net_instaweb {
return NGX_ERROR;
}
out_->last = ngx_cpymem(out_->last, "GET ", 4);
out_->last = ngx_cpymem(out_->last, method, method_len);
out_->last = ngx_cpymem(out_->last, " ", 1);
out_->last = ngx_cpymem(out_->last, url_.uri.data, url_.uri.len);
out_->last = ngx_cpymem(out_->last, " HTTP/1.0\r\n", 11);
@@ -412,7 +580,7 @@ namespace net_instaweb {
return rc;
}
NgxFetchWrite(connection_->write);
NgxFetchWrite(connection_->c_->write);
return NGX_OK;
}
@@ -429,18 +597,18 @@ namespace net_instaweb {
pc.log = fetcher_->log_;
pc.rcvbuf = -1;
int rc = ngx_event_connect_peer(&pc);
if (rc == NGX_ERROR || rc == NGX_DECLINED || rc == NGX_BUSY) {
return rc;
connection_ = NgxConnection::Connect(&pc);
if (connection_ == NULL) {
return NGX_ERROR;
}
connection_ = pc.connection;
connection_->write->handler = NgxFetchWrite;
connection_->read->handler = NgxFetchRead;
connection_->data = this;
connection_->c_->write->handler = NgxFetchWrite;
connection_->c_->read->handler = NgxFetchRead;
connection_->c_->data = this;
// Timer set in Init() is still in effect.
return rc;
return NGX_OK;
}
// When the fetch sends the request completely, it will hook the read event,
+2 -1
@@ -42,6 +42,7 @@ namespace net_instaweb {
typedef bool (*response_handler_pt)(ngx_connection_t* c);
class NgxUrlAsyncFetcher;
class NgxConnection;
class NgxFetch : public PoolElement<NgxFetch> {
public:
NgxFetch(const GoogleString& url,
@@ -136,7 +137,7 @@ class NgxFetch : public PoolElement<NgxFetch> {
ngx_http_request_t* r_;
ngx_http_status_t* status_;
ngx_event_t* timeout_event_;
ngx_connection_t* connection_;
NgxConnection* connection_;
ngx_resolver_ctx_t* resolver_ctx_;
DISALLOW_COPY_AND_ASSIGN(NgxFetch);
+42 -15
@@ -244,6 +244,13 @@ void copy_response_headers_from_ngx(const ngx_http_request_t* r,
headers->Add(HttpAttributes::kContentType,
str_to_string_piece(r->headers_out.content_type));
// When we don't have a date header, invent one.
const char* date = headers->Lookup1(HttpAttributes::kDate);
if (date == NULL) {
headers->SetDate(ngx_current_msec);
}
// TODO(oschaaf): ComputeCaching should be called in setupforhtml()?
headers->ComputeCaching();
}
@@ -443,7 +450,7 @@ ngx_command_t ps_commands[] = {
NULL },
{ ngx_string("pagespeed"),
NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1|
NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF|NGX_CONF_TAKE1|
NGX_CONF_TAKE2|NGX_CONF_TAKE3|NGX_CONF_TAKE4|NGX_CONF_TAKE5,
ps_loc_configure,
NGX_HTTP_SRV_CONF_OFFSET,
@@ -563,15 +570,24 @@ char* ps_init_dir(const StringPiece& directive,
return NULL; // We're not root, so we're staying whoever we are.
}
// chown if owner differs from nginx worker user.
ngx_core_conf_t* ccf =
(ngx_core_conf_t*)(ngx_get_conf(cf->cycle->conf_ctx, ngx_core_module));
CHECK(ccf != NULL);
if (chown(gs_path.c_str(), ccf->user, ccf->group) != 0) {
struct stat gs_stat;
if (stat(gs_path.c_str(), &gs_stat) != 0) {
return string_piece_to_pool_string(
cf->pool, net_instaweb::StrCat(
directive, " ", path, " unable to set permissions"));
directive, " ", path, " stat() failed"));
}
if (gs_stat.st_uid != ccf->user) {
if (chown(gs_path.c_str(), ccf->user, ccf->group) != 0) {
return string_piece_to_pool_string(
cf->pool, net_instaweb::StrCat(
directive, " ", path, " unable to set permissions"));
}
}
return NULL;
}
@@ -848,19 +864,28 @@ char* ps_merge_srv_conf(ngx_conf_t* cf, void* parent, void* child) {
}
char* ps_merge_loc_conf(ngx_conf_t* cf, void* parent, void* child) {
ps_loc_conf_t* parent_cfg_l = static_cast<ps_loc_conf_t*>(parent);
// The variant of the pagespeed directive that is acceptable in location
// blocks is only acceptable in location blocks, so we should never be merging
// in options from a server or main block.
CHECK(parent_cfg_l->options == NULL);
ps_loc_conf_t* cfg_l = static_cast<ps_loc_conf_t*>(child);
if (cfg_l->options == NULL) {
// No directory specific options.
return NGX_CONF_OK;
}
// While you can't put a "location" block inside a "location" block you can
// put an "if" block inside a "location" block, which is implemented by making
// a pretend "location" block. In this case we may have pagespeed options
// from the parent "location" block as well as from the current locationish
// "if" block.
ps_loc_conf_t* parent_cfg_l = static_cast<ps_loc_conf_t*>(parent);
if (parent_cfg_l->options != NULL) {
// Rebase our options off of the ones defined in the parent location block.
ps_merge_options(parent_cfg_l->options, &cfg_l->options);
return NGX_CONF_OK;
}
// Pagespeed options are defined in this location block, and it either has no
// parent (typical case) or is an if block whose parent location block defines
// no pagespeed options. Base our options off of those in the server block.
ps_srv_conf_t* cfg_s = static_cast<ps_srv_conf_t*>(
ngx_http_conf_get_module_srv_conf(cf, ngx_pagespeed));
@@ -1341,7 +1366,8 @@ bool ps_determine_options(ngx_http_request_t* r,
RequestHeaders* request_headers,
ResponseHeaders* response_headers,
RewriteOptions** options,
GoogleUrl* url) {
GoogleUrl* url,
bool html_rewrite) {
ps_srv_conf_t* cfg_s = ps_get_srv_config(r);
ps_loc_conf_t* cfg_l = ps_get_loc_config(r);
@@ -1379,7 +1405,7 @@ bool ps_determine_options(ngx_http_request_t* r,
if (request_options != NULL) {
(*options)->Merge(*request_options);
delete request_options;
} else if ((*options)->running_experiment()) {
} else if ((*options)->running_experiment() && html_rewrite) {
bool ok = ps_set_experiment_state_and_cookie(
r, request_headers, *options, url->Host());
if (!ok) {
@@ -1634,7 +1660,7 @@ ngx_int_t ps_resource_handler(ngx_http_request_t* r, bool html_rewrite) {
RewriteOptions* options = NULL;
if (!ps_determine_options(r, request_headers.get(), response_headers.get(),
&options, &url)) {
&options, &url, html_rewrite)) {
return NGX_ERROR;
}
@@ -2159,7 +2185,8 @@ ngx_int_t ps_in_place_check_header_filter(ngx_http_request_t* r) {
return ngx_http_next_header_filter(r);
}
if (status_code == CacheUrlAsyncFetcher::kNotInCacheStatus) {
if (status_code == CacheUrlAsyncFetcher::kNotInCacheStatus &&
!r->header_only) {
server_context->rewrite_stats()->ipro_not_in_cache()->Add(1);
server_context->message_handler()->Message(
kInfo,
+2
@@ -218,6 +218,8 @@ void NgxRewriteDriverFactory::LoggingInit(ngx_log_t* log) {
void NgxRewriteDriverFactory::SetCircularBuffer(
SharedCircularBuffer* buffer) {
ngx_shared_circular_buffer_ = buffer;
ngx_message_handler_->set_buffer(buffer);
ngx_html_parse_message_handler_->set_buffer(buffer);
}
void NgxRewriteDriverFactory::SetServerContextMessageHandler(
+36 -6
@@ -66,10 +66,10 @@ namespace net_instaweb {
mutex_(NULL) {
resolver_timeout_ = resolver_timeout;
fetch_timeout_ = fetch_timeout;
ngx_memzero(&url_, sizeof(url_));
ngx_memzero(&proxy_, sizeof(proxy_));
if (proxy != NULL && *proxy != '\0') {
url_.url.data = reinterpret_cast<u_char*>(const_cast<char*>(proxy));
url_.url.len = ngx_strlen(proxy);
proxy_.url.data = reinterpret_cast<u_char*>(const_cast<char*>(proxy));
proxy_.url.len = ngx_strlen(proxy);
}
mutex_ = thread_system_->NewMutex();
log_ = log;
@@ -106,6 +106,36 @@ namespace net_instaweb {
}
}
bool NgxUrlAsyncFetcher::ParseUrl(ngx_url_t* url, ngx_pool_t* pool) {
size_t scheme_offset;
u_short port;
if (ngx_strncasecmp(url->url.data, reinterpret_cast<u_char*>(
const_cast<char*>("http://")), 7) == 0) {
scheme_offset = 7;
port = 80;
} else if (ngx_strncasecmp(url->url.data, reinterpret_cast<u_char*>(
const_cast<char*>("https://")), 8) == 0) {
scheme_offset = 8;
port = 443;
} else {
scheme_offset = 0;
port = 80;
}
url->url.data += scheme_offset;
url->url.len -= scheme_offset;
url->default_port = port;
// See: http://lxr.evanmiller.org/http/source/core/ngx_inet.c#L875
url->no_resolve = 0;
url->uri_part = 1;
if (ngx_parse_url(pool, url) == NGX_OK) {
return true;
}
return false;
}
// If there are still active requests, cancel them.
void NgxUrlAsyncFetcher::CancelActiveFetches() {
// TODO(oschaaf): this seems tricky, this may end up calling
@@ -167,15 +197,15 @@ namespace net_instaweb {
command_connection_->read->handler = CommandHandler;
ngx_add_event(command_connection_->read, NGX_READ_EVENT, 0);
if (url_.url.len == 0) {
if (proxy_.url.len == 0) {
return true;
}
// TODO(oschaaf): shouldn't we do this earlier? Do we need to clean
// up when parsing the url failed?
if (ngx_parse_url(pool_, &url_) != NGX_OK) {
if (!ParseUrl(&proxy_, pool_)) {
ngx_log_error(NGX_LOG_ERR, log_, 0,
"NgxUrlAsyncFetcher::Init parse proxy[%V] failed", &url_.url);
"NgxUrlAsyncFetcher::Init parse proxy[%V] failed", &proxy_.url);
return false;
}
return true;
+2 -1
@@ -115,13 +115,14 @@ class NgxUrlAsyncFetcher : public UrlAsyncFetcher {
private:
static void TimeoutHandler(ngx_event_t* tev);
static bool ParseUrl(ngx_url_t* url, ngx_pool_t* pool);
friend class NgxFetch;
NgxFetchPool active_fetches_;
// Add the pending task to this list
NgxFetchPool pending_fetches_;
NgxFetchPool completed_fetches_;
ngx_url_t url_;
ngx_url_t proxy_;
int fetchers_count_;
bool shutdown_;
+131 -45
@@ -133,10 +133,8 @@ VALGRIND_OPTIONS=""
if $USE_VALGRIND; then
DAEMON=off
MASTER_PROCESS=off
else
DAEMON=on
MASTER_PROCESS=on
fi
if [ "$NATIVE_FETCHER" = "on" ]; then
@@ -157,7 +155,6 @@ by nginx_system_test.sh; don't edit here."
EOF
cat $PAGESPEED_CONF_TEMPLATE \
| sed 's#@@DAEMON@@#'"$DAEMON"'#' \
| sed 's#@@MASTER_PROCESS@@#'"$MASTER_PROCESS"'#' \
| sed 's#@@TEST_TMP@@#'"$TEST_TMP/"'#' \
| sed 's#@@PROXY_CACHE@@#'"$PROXY_CACHE/"'#' \
| sed 's#@@TMP_PROXY_CACHE@@#'"$TMP_PROXY_CACHE/"'#' \
@@ -177,9 +174,16 @@ check_not_simple grep @@ $PAGESPEED_CONF
# start nginx with new config
if $USE_VALGRIND; then
echo "Run this command in another terminal and then press enter:"
echo " valgrind --leak-check=full $NGINX_EXECUTABLE -c $PAGESPEED_CONF"
read
(valgrind -q --leak-check=full --gen-suppressions=all \
--show-possibly-lost=no --log-file=$TEST_TMP/valgrind.log \
--suppressions="$this_dir/valgrind.sup" \
$NGINX_EXECUTABLE -c $PAGESPEED_CONF) & VALGRIND_PID=$!
trap "echo 'terminating valgrind!' && kill -s sigterm $VALGRIND_PID" EXIT
echo "Wait until nginx is ready to accept connections"
while ! curl -I "http://$PRIMARY_HOSTNAME/mod_pagespeed_example/" 2>/dev/null; do
sleep 0.1;
done
echo "Valgrind (pid:$VALGRIND_PID) is logging to $TEST_TMP/valgrind.log"
else
TRACE_FILE="$TEST_TMP/conf_loading_trace"
$NGINX_EXECUTABLE -c $PAGESPEED_CONF >& "$TRACE_FILE"
@@ -199,6 +203,11 @@ fi
if $RUN_TESTS; then
echo "Starting tests"
else
if $USE_VALGRIND; then
# Clear valgrind trap
trap - EXIT
echo "To end valgrind, run 'kill -s quit $VALGRIND_PID'"
fi
echo "Not running tests; commence manual testing"
exit 4
fi
@@ -219,6 +228,17 @@ PAGESPEED_EXPECTED_FAILURES="
~IPRO-optimized resources should have fixed size, not chunked.~
"
# Some tests are flaky under valgrind. For now, add them to the expected
# failures when running under valgrind.
if $USE_VALGRIND; then
PAGESPEED_EXPECTED_FAILURES+="
~combine_css Maximum size of combined CSS.~
~prioritize_critical_css~
~IPRO flow uses cache as expected.~
~IPRO flow doesn't copy uncacheable resources multiple times.~
"
fi
# The existing system test takes its arguments as positional parameters, and
# wants different ones than we want, so we need to reset our positional args.
set -- "$PRIMARY_HOSTNAME"
@@ -251,49 +271,84 @@ function run_post_cache_flush() {
# nginx-specific system tests
start_test Test pagespeed directive inside if block inside location block.
URL="http://if-in-location.example.com/"
URL+="mod_pagespeed_example/inline_javascript.html"
# When we specify the X-Custom-Header-Inline-Js that triggers an if block in the
# config which turns on inline_javascript.
WGET_ARGS="--header=X-Custom-Header-Inline-Js:Yes"
http_proxy=$SECONDARY_HOSTNAME \
fetch_until $URL 'grep -c document.write' 1
OUT=$(http_proxy=$SECONDARY_HOSTNAME $WGET_DUMP $WGET_ARGS $URL)
check_from "$OUT" fgrep "X-Inline-Javascript: Yes"
check_not_from "$OUT" fgrep "inline_javascript.js"
# Without that custom header we don't trigger the if block, and shouldn't get
# any inline javascript.
WGET_ARGS=""
OUT=$(http_proxy=$SECONDARY_HOSTNAME $WGET_DUMP $WGET_ARGS $URL)
check_from "$OUT" fgrep "X-Inline-Javascript: No"
check_from "$OUT" fgrep "inline_javascript.js"
check_not_from "$OUT" fgrep "document.write"
# Tests related to rewritten response (downstream) caching.
CACHABLE_HTML_LOC="${SECONDARY_HOSTNAME}/mod_pagespeed_test/cachable_rewritten_html"
TMP_LOG_LINE="proxy_cache.example.com GET /purge/mod_pagespeed_test/cachable_rewritten_"
PURGE_REQUEST_IN_ACCESS_LOG=$TMP_LOG_LINE"html/downstream_caching.html.*(200)"
# Number of downstream cache purges should be 0 here.
CURRENT_STATS=$($WGET_DUMP $STATISTICS_URL)
check_from "$CURRENT_STATS" egrep -q \
"downstream_cache_purge_attempts:[[:space:]]*0"
if [ "$NATIVE_FETCHER" = "on" ]; then
echo "Native fetcher doesn't support PURGE requests and so we can't use or"
echo "test downstream caching."
else
CACHABLE_HTML_LOC="${SECONDARY_HOSTNAME}/mod_pagespeed_test/cachable_rewritten_html"
TMP_LOG_LINE="proxy_cache.example.com GET /purge/mod_pagespeed_test/cachable_rewritten_"
PURGE_REQUEST_IN_ACCESS_LOG=$TMP_LOG_LINE"html/downstream_caching.html.*(200)"
# Number of downstream cache purges should be 0 here.
CURRENT_STATS=$($WGET_DUMP $STATISTICS_URL)
check_from "$CURRENT_STATS" egrep -q \
"downstream_cache_purge_attempts:[[:space:]]*0"
# The 1st request results in a cache miss, non-rewritten response
# produced by pagespeed code and a subsequent purge request.
start_test Check for case where rewritten cache should get purged.
WGET_ARGS="--header=Host:proxy_cache.example.com"
OUT=$($WGET_DUMP $WGET_ARGS $CACHABLE_HTML_LOC/downstream_caching.html)
check_not_from "$OUT" egrep -q "pagespeed.ic"
check_from "$OUT" egrep -q "X-Cache: MISS"
fetch_until $STATISTICS_URL \
'grep -c downstream_cache_purge_attempts:[[:space:]]*1' 1
while [ x"$(grep "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG)" == x"" ] ; do
echo "waiting for purge request to show up in access log"
sleep .2
done
check [ $(grep -ce "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG) = 1 ];
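The wait loop above has no upper bound, so a purge request that never reaches the access log would hang the test run. A minimal standalone sketch of the same poll-and-sleep pattern with a retry cap; the `wait_for_log_line` helper and the demo file are hypothetical, not part of the suite:

```shell
# Hypothetical helper, not part of the test suite: same poll-and-sleep idea as
# the while loop above, but it gives up after max_tries instead of spinning
# forever if the expected line never shows up.
wait_for_log_line() {
  pattern="$1"; logfile="$2"; max_tries="${3:-25}"
  tries=0
  until grep -q "$pattern" "$logfile" 2>/dev/null; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max_tries" ]; then
      return 1  # timed out
    fi
    sleep .2
  done
  return 0
}

# Self-contained demo against a temporary log file.
demo_log=$(mktemp)
( sleep .5; echo "GET /purge/example.html (200)" >> "$demo_log" ) &
wait_for_log_line "/purge/" "$demo_log" && echo "line appeared"
wait
rm -f "$demo_log"
```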
# The 2nd request results in a cache miss (because of the previous purge),
# rewritten response produced by pagespeed code and no new purge requests.
start_test Check for case where rewritten cache should not get purged.
BLOCKING_WGET_ARGS=$WGET_ARGS" --header=X-PSA-Blocking-Rewrite:psatest"
OUT=$($WGET_DUMP $BLOCKING_WGET_ARGS \
$CACHABLE_HTML_LOC/downstream_caching.html)
check_from "$OUT" egrep -q "pagespeed.ic"
check_from "$OUT" egrep -q "X-Cache: MISS"
CURRENT_STATS=$($WGET_DUMP $STATISTICS_URL)
check_from "$CURRENT_STATS" egrep -q \
"downstream_cache_purge_attempts:[[:space:]]*1"
check [ $(grep -ce "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG) = 1 ];
# The 3rd request results in a cache hit (because the previous response is
# now present in cache), rewritten response served out from cache and not
# by pagespeed code and no new purge requests.
start_test Check for case where there is a rewritten cache hit.
OUT=$($WGET_DUMP $WGET_ARGS $CACHABLE_HTML_LOC/downstream_caching.html)
check_from "$OUT" egrep -q "pagespeed.ic"
check_from "$OUT" egrep -q "X-Cache: HIT"
fetch_until $STATISTICS_URL \
'grep -c downstream_cache_purge_attempts:[[:space:]]*1' 1
check [ $(grep -ce "$PURGE_REQUEST_IN_ACCESS_LOG" $ACCESS_LOG) = 1 ];
fi
start_test Check for correct default X-Page-Speed header format.
OUT=$($WGET_DUMP $EXAMPLE_ROOT/combine_css.html)
@@ -1535,6 +1590,8 @@ EXP_EXAMPLE="http://experiment.example.com/mod_pagespeed_example"
EXP_EXTEND_CACHE="$EXP_EXAMPLE/extend_cache.html"
OUT=$(http_proxy=$SECONDARY_HOSTNAME $WGET_DUMP $EXP_EXTEND_CACHE)
check_from "$OUT" fgrep "PageSpeedExperiment="
MATCHES=$(echo "$OUT" | grep -c "PageSpeedExperiment=")
check [ $MATCHES -eq 1 ]
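One subtlety in the `MATCHES` check: `grep -c` counts matching lines, not occurrences. A small standalone illustration (the sample header string is made up):

```shell
# grep -c counts matching lines; grep -o | wc -l counts every occurrence.
SAMPLE="Set-Cookie: A=1; Set-Cookie: B=2"
lines=$(echo "$SAMPLE" | grep -c "Set-Cookie:")
occurrences=$(echo "$SAMPLE" | grep -o "Set-Cookie:" | wc -l | tr -d ' ')
echo "lines=$lines occurrences=$occurrences"  # lines=1 occurrences=2
```

So the check above passes as long as the experiment cookie appears on exactly one header line, which appears to be what the test intends.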
start_test PageSpeedFilters query param should disable experiments.
URL="$EXP_EXTEND_CACHE?PageSpeed=on&PageSpeedFilters=rewrite_css"
@@ -2010,7 +2067,7 @@ check_not_from "$(extract_headers $FETCH_UNTIL_OUTFILE)" \
# that we bail out of parsing and insert a script redirecting to
# ?PageSpeed=off. This should also insert an entry into the property cache so
# that the next time we fetch the file it will not be parsed at all.
start_test Handling of large files.
# Add a timestamp to the URL to ensure it's not in the property cache.
FILE="max_html_parse_size/large_file.html?value=$(date +%s)"
URL=$TEST_ROOT/$FILE
@@ -2029,5 +2086,34 @@ check_from "$LARGE_OUT" grep -q window.location=".*&ModPagespeed=off"
fetch_until -save $URL 'grep -c window.location=".*&ModPagespeed=off"' 0
check_not fgrep -q pagespeed.ic $FETCH_FILE
start_test messages load
OUT=$($WGET_DUMP "$HOSTNAME/ngx_pagespeed_message")
check_not_from "$OUT" grep "Writing to ngx_pagespeed_message failed."
check_from "$OUT" grep -q "/mod_pagespeed_example"
start_test Check keepalive after a 304 response.
# '-m 2' specifies that the whole operation is allowed to take 2 seconds max.
curl -vv -m 2 http://$PRIMARY_HOSTNAME/foo.css.pagespeed.ce.0.css \
-H 'If-Modified-Since: Z' http://$PRIMARY_HOSTNAME/foo
check [ $? = "0" ]
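Passing both URLs to a single curl invocation lets curl reuse one connection, so the command only finishes within the `-m 2` budget if nginx keeps the connection alive after the 304. One fragile detail in this pattern is reading `$?`: it must be captured immediately, since any intervening command overwrites it. A standalone sketch, using `false` as a stand-in for the curl call:

```shell
# `false` stands in for the curl invocation above; capture its status
# right away, before running anything else, or the value is lost.
false
status=$?
[ "$status" -ne 0 ] && echo "request failed with status $status"
```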
start_test Date response header set
OUT=$($WGET_DUMP $EXAMPLE_ROOT/combine_css.html)
check_not_from "$OUT" egrep -q '^Date: Thu, 01 Jan 1970 00:00:00 GMT'
OUT=$($WGET_DUMP --header=Host:date.example.com \
http://$SECONDARY_HOSTNAME/mod_pagespeed_example/combine_css.html)
check_from "$OUT" egrep -q '^Date: Fri, 16 Oct 2009 23:05:07 GMT'
if $USE_VALGRIND; then
kill -s quit $VALGRIND_PID
wait
# Clear the previously set trap, we don't need it anymore.
trap - EXIT
start_test No Valgrind complaints.
check_not [ -s "$TEST_TMP/valgrind.log" ]
fi
check_failures_and_exit
@@ -5,7 +5,7 @@
worker_processes 1;
daemon @@DAEMON@@;
master_process @@MASTER_PROCESS@@;
error_log "@@ERROR_LOG@@" debug;
pid "@@TEST_TMP@@/nginx.pid";
@@ -127,6 +127,45 @@ http {
}
}
server {
listen @@SECONDARY_PORT@@;
server_name if-in-server.example.com;
pagespeed FileCachePath "@@SECONDARY_CACHE@@";
pagespeed RewriteLevel PassThrough;
set $inline_javascript "No";
if ($http_x_custom_header_inline_js) {
# TODO(jefftk): Turn on NGX_HTTP_SIF_CONF and figure out how to get
# pagespeed directives inside of a server-level if block to be respected,
# then uncomment the following line and duplicate the if-in-location test
# for if-in-server.
#pagespeed EnableFilters inline_javascript;
set $inline_javascript "Yes";
}
add_header "X-Inline-Javascript" $inline_javascript;
}
server {
listen @@SECONDARY_PORT@@;
server_name if-in-location.example.com;
pagespeed FileCachePath "@@SECONDARY_CACHE@@";
location / {
set $inline_javascript "No";
pagespeed RewriteLevel PassThrough;
if ($http_x_custom_header_inline_js) {
pagespeed EnableFilters inline_javascript;
set $inline_javascript "Yes";
}
add_header "X-Inline-Javascript" $inline_javascript;
}
}
server {
listen @@SECONDARY_PORT@@;
server_name mpd.example.com;
@@ -141,6 +180,7 @@ http {
listen @@SECONDARY_PORT@@;
server_name experiment.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@";
pagespeed InPlaceResourceOptimization off;
pagespeed RunExperiment on;
pagespeed AnalyticsID "123-45-6734";
@@ -615,6 +655,13 @@ http {
pagespeed CriticalImagesBeaconEnabled false;
}
server {
listen @@SECONDARY_PORT@@;
server_name date.example.com;
pagespeed FileCachePath "@@FILE_CACHE@@";
add_header "Date" "Fri, 16 Oct 2009 23:05:07 GMT";
}
server {
listen @@PRIMARY_PORT@@;
server_name localhost;
@@ -649,6 +696,7 @@ http {
#pagespeed MemcachedThreads 1;
pagespeed on;
pagespeed MessageBufferSize 200000;
#pagespeed CacheFlushPollIntervalSec 1;
@@ -665,9 +713,6 @@ http {
# rewritten before being flushed to the client.
pagespeed BlockingRewriteKey psatest;
add_header X-Extra-Header 1;
# Establish a proxy mapping where the current server proxies an image
@@ -706,6 +751,10 @@ http {
pagespeed DisableFilters collapse_whitespace;
}
location /mod_pagespeed_test/max_html_parse_size {
pagespeed MaxHtmlParseBytes 5000;
}
location ~ \.php$ {
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_param QUERY_STRING $query_string;
@@ -0,0 +1,122 @@
# Copyright 2013 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: oschaaf@we-amp.com (Otto van der Schaaf)
# The first few suppressions below also appear in other modules, are easily
# found when searched for, and seem to be false positives.
{
<nginx false positive>
Memcheck:Param
socketcall.sendmsg(msg.msg_iov[i])
fun:__sendmsg_nocancel
fun:ngx_write_channel
fun:ngx_signal_worker_processes
fun:ngx_master_process_cycle
fun:main
}
{
<nginx false positive>
Memcheck:Param
socketcall.sendmsg(msg.msg_iov[i])
fun:__sendmsg_nocancel
fun:ngx_write_channel
fun:ngx_master_process_cycle
fun:main
}
{
<nginx false positive>
Memcheck:Param
socketcall.sendmsg(msg.msg_iov[i])
fun:__sendmsg_nocancel
fun:ngx_write_channel
fun:ngx_pass_open_channel
fun:ngx_start_cache_manager_processes
fun:ngx_master_process_cycle
fun:main
}
{
<nginx false positive>
Memcheck:Leak
fun:malloc
fun:ngx_alloc
fun:ngx_event_process_init
fun:ngx_worker_process_init
fun:ngx_worker_process_cycle
fun:ngx_spawn_process
fun:ngx_start_worker_processes
fun:ngx_master_process_cycle
fun:main
}
{
<nginx false positive>
Memcheck:Param
socketcall.sendmsg(msg.msg_iov[i])
fun:__sendmsg_nocancel
fun:ngx_write_channel
fun:ngx_pass_open_channel
fun:ngx_start_worker_processes
fun:ngx_master_process_cycle
fun:main
}
# similar to http://trac.nginx.org/nginx/ticket/369
{
<nginx false positive>
Memcheck:Param
pwrite64(buf)
obj:/lib/x86_64-linux-gnu/libpthread-2.15.so
fun:ngx_write_file
fun:ngx_write_chain_to_file
fun:ngx_write_chain_to_temp_file
fun:ngx_event_pipe_write_chain_to_temp_file
fun:ngx_event_pipe
fun:ngx_http_upstream_process_upstream
fun:ngx_http_upstream_process_header
fun:ngx_http_upstream_handler
fun:ngx_epoll_process_events
fun:ngx_process_events_and_timers
fun:ngx_worker_process_cycle
}
# Mentioned in https://github.com/pagespeed/ngx_pagespeed/issues/103
# Assuming a false positive as the issue is closed.
{
<nginx false positive>
Memcheck:Param
write(buf)
obj:/lib/x86_64-linux-gnu/libpthread-2.15.so
fun:ngx_log_error_core
fun:ngx_http_parse_complex_uri
fun:ngx_http_process_request_uri
fun:ngx_http_process_request_line
fun:ngx_http_wait_request_handler
fun:ngx_epoll_process_events
fun:ngx_process_events_and_timers
fun:ngx_worker_process_cycle
fun:ngx_spawn_process
fun:ngx_start_worker_processes
fun:ngx_master_process_cycle
}
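For context, a sketch of how a suppressions file like this is typically wired into a valgrind run; the file and binary paths here are placeholders, not the harness's actual ones:

```shell
# Placeholder paths; the real test harness substitutes its own.
SUPPRESSIONS="valgrind.sup"
NGINX_BIN="/usr/local/nginx/sbin/nginx"
# --gen-suppressions=all makes valgrind print a ready-to-paste suppression
# template for any error not yet covered, which is how entries like the
# ones above are produced.
VALGRIND_CMD="valgrind --leak-check=full --suppressions=$SUPPRESSIONS --gen-suppressions=all --log-file=/tmp/valgrind.log"
echo "$VALGRIND_CMD $NGINX_BIN -g 'daemon off;'"
```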