Apache2 php error logging
Ryan Schmidt
ryandesign at macports.org
Sat Feb 6 19:08:01 PST 2010
On Feb 6, 2010, at 15:17, Scott Haneda wrote:
> Version 1.6.2 was released in 2002. That's either really well-executed code, or I'm missing out on whatever has taken its place as the mainstream log roller of today.
Judging by the age of things on its web site, I guess the author has probably become too busy with other projects to devote time to cronolog. :(
> There is a beta Apache module for cronolog, which is intriguing, though I'm betting that beta timeframe exceeds Gmail's :). The last mailing list post is from around 2004.
>
> Between your potential crash and my possibly related Apache initgroups issue, I simply do not feel comfortable deploying cronolog in production. There are also some very Apache-specific "Known Bugs" which are of concern.
>
> Cronolog is extremely well regarded though.
>
> Perhaps I should just stick with the tried-and-true hand-rolling method most seem to go with. I've never really liked sleeping Apache for an arbitrary 30 seconds to let it finish requests. I would bet I miss a few log lines here and there.
So sleep 5 minutes then, or 15 minutes. Surely no HTTP request will last that long. Then you can process the old logs.
I've never heard of anybody doing log rotation by hand as a permanent solution. Then again I haven't surveyed many webmasters. But it seems like something a script should be able to do properly. I used cronolog in production years ago but haven't been serving any web sites recently so haven't looked into the current state of log rotation.
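For what it's worth, the hand-rolled approach people describe usually amounts to something like the sketch below. The paths, the date-stamp naming, and the 5-minute drain window are all assumptions, not anything from a real deployment; for illustration it writes its own sample log line rather than touching a live Apache log, and the live-server steps are left commented.

```shell
#!/bin/sh
# Hand-rolled daily rotation sketch. LOG is a scratch path for illustration;
# on a real server it would be Apache's access_log.
LOG=./access_log
STAMP=$(date +%Y%m%d)

# Sample log line so the sketch is self-contained:
echo '127.0.0.1 - - [06/Feb/2010:19:08:01 -0800] "GET / HTTP/1.1" 200 1234' > "$LOG"

mv "$LOG" "$LOG.$STAMP"      # Apache keeps writing to the moved file (same inode)
# apachectl graceful         # live server: reopen logs; children finish their requests
# sleep 300                  # wait out any long-running requests still writing
nice gzip -f "$LOG.$STAMP"   # compress the rotated log at low CPU priority
```

The key point is the order: move first, then signal Apache to reopen its logs, then wait before compressing, so in-flight requests finish writing to the old (moved) file rather than being lost.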
apache2 of course comes with the rotatelogs program, which theoretically must be good for something, but I don't remember anything about it, having used cronolog for so long.
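From what I recall, wiring rotatelogs in is a one-line change to httpd.conf, piping the log through it much as one would pipe through cronolog. The binary path and log directory below are guesses for a typical install:

```apache
# Pipe the access log through rotatelogs, starting a new file
# every 86400 seconds (daily); paths vary by install.
CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/access_log.%Y%m%d 86400" combined
```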
> A pre-compressed stream of log data sent to SQLite may be a project worth entertaining. The SQLite file should be only insignificantly larger than the log file itself, and once it's in a database, your flexibility goes way up with regard to what you can do with the data.
I would imagine the disk and CPU overhead of using a database for logging would not be negligible, otherwise more people would be doing it and apache might offer a built-in feature for that. (Or does it?) It might not be an unmanageable overhead, and for small sites it might be ok, but I would take some careful measurements before and after to see what the penalty really is.
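As a rough illustration of the database idea (the table layout and field choices here are invented, and this assumes the sqlite3 command-line tool is available), pre-parsed log fields can be bulk-loaded and then queried instead of grepped:

```shell
# Load a few pre-parsed access-log fields (host, path, status) into SQLite.
printf '%s\n' \
    '127.0.0.1 /index.html 200' \
    '127.0.0.1 /missing 404' > access_sample

sqlite3 hits.db 'CREATE TABLE IF NOT EXISTS hits(host TEXT, path TEXT, status INTEGER);'
awk '{printf "INSERT INTO hits VALUES(\047%s\047,\047%s\047,%s);\n", $1, $2, $3}' \
    access_sample | sqlite3 hits.db

# Once the data is in a database, ad hoc queries replace grep pipelines:
sqlite3 hits.db 'SELECT status, COUNT(*) FROM hits GROUP BY status;'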
> There must be a way to have Apache do on-the-fly log compression? My log stats app can parse most compressed formats; it's the CPU load/time and disk I/O while gzipping a GB that is the killer.
I'm not sure if there's a way to compress on-the-fly. There's an entry in the cronolog FAQ about how this is not currently available. Even if it were, I have a feeling it would adversely affect the compression ratio; I think the compression ratio is achieved by taking large blocks of data and compressing them, not by compressing individual lines.
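A quick way to see the effect: gzip the same synthetic log once as a whole file and once line-by-line. Each per-line stream carries its own gzip header and loses all redundancy between lines, so the per-line total comes out much larger. (The file names here are made up for the demonstration.)

```shell
# Generate a few hundred similar log lines, then compare the two approaches.
awk 'BEGIN { for (i = 1; i <= 200; i++)
    print "127.0.0.1 - - [06/Feb/2010] \"GET /page" i " HTTP/1.1\" 200 512" }' \
    > sample_log

gzip -c sample_log | wc -c > block_bytes    # one stream for the whole file
while read -r line; do
    printf '%s\n' "$line" | gzip -c         # a separate stream per line
done < sample_log | wc -c > line_bytes

echo "whole file: $(cat block_bytes) bytes; line-by-line: $(cat line_bytes) bytes"
```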
Any CPU load impact that gzipping your daily log file has should be mitigated by using "nice". Not sure what to do about disk I/O though. We had our server do log processing overnight when the server was least busy.
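For the overnight approach, a cron entry can handle both the scheduling and the priority drop. The path, schedule, and literal date suffix below are placeholders; the actual suffix depends on whatever naming scheme your rotation script uses.

```shell
# Illustrative crontab entry: compress the previous day's rotated log at
# 3 AM at minimum CPU priority. Computing "yesterday" portably is left to
# the rotation script that names the file.
0 3 * * * nice -n 19 gzip /var/log/apache2/access_log.20100205
```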