Apache2 php error logging

Scott Haneda talklists at newgeo.com
Sat Feb 6 13:17:05 PST 2010

On Feb 6, 2010, at 11:54 AM, Ryan Schmidt <ryandesign at macports.org> wrote:

>> And I do all that by hand because logroller has not been updated
>> in years and the built-in Apache log piping mechanism causes Apache
>> to crash. I filed a bug report but Apache seems not to take notice.
> I used to like cronolog for this, but I had some trouble with it  
> last time I tried. It was months ago and I don't remember the  
> specifics, but there may have been crashing involved.

Amidst all that terrible iPhone autocorrect mess I banged out above:
when I wrote logroller I meant cronolog.

Version 1.6.2 was released in 2002. That's either really well-executed
code, or I'm missing out on whatever has taken its place as the
mainstream log roller of today.

There is a beta Apache module for cronolog, which is intriguing,
though I'm betting that beta timeframe exceeds Gmail's :). The last
mailing list post is from around 2004.
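For reference, the usual way to hook cronolog in is Apache's piped-log syntax; a minimal sketch, assuming a MacPorts-style install path and cronolog's strftime-style filename template:

```apache
# Sketch (paths are assumptions): pipe the combined log through
# cronolog, which opens a new file per day based on the template.
CustomLog "|/opt/local/sbin/cronolog /opt/local/var/log/apache2/%Y/%m/access.%Y-%m-%d.log" combined
```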

With your potential crash and my possibly related Apache initgroups
issue, I simply do not feel comfortable deploying cronolog in
production. There are also some very Apache-specific "Known Bugs"
which are of concern.

Cronolog is extremely well regarded though.

Perhaps I should just stick with the tried-and-true hand-rolling
method most seem to go with. I've never really liked sleeping Apache
for an arbitrary 30 seconds to let it finish requests; I would bet I
miss a few log lines here and there.
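The hand-rolled approach sketches out roughly like this — a sketch, not a drop-in script; the apachectl invocation, log path, and 30-second grace period are all assumptions:

```python
import gzip
import shutil
import subprocess
import time
from datetime import date

def rotated_name(log_path, day=None):
    """Name for a rotated copy, e.g. access_log.20100206."""
    return "%s.%s" % (log_path, (day or date.today()).strftime("%Y%m%d"))

def rotate_and_compress(log_path, apachectl="apachectl", grace=30):
    """Move the log aside, ask Apache to reopen its logs gracefully,
    wait out in-flight requests, then compress the rotated file."""
    rotated = rotated_name(log_path)
    shutil.move(log_path, rotated)                  # running children keep their open fd
    subprocess.check_call([apachectl, "graceful"])  # new children open a fresh log at the old path
    time.sleep(grace)                               # arbitrary grace; stragglers can still lose lines
    with open(rotated, "rb") as src, gzip.open(rotated + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)
```

The sleep is exactly the weak point complained about above: any child still finishing a request after the grace period writes lines that never make the compressed copy.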

Kind of a strange scenario: HTTP logging is by nature huge, but for
myself I need to keep at least a year's worth to provide more
meaningful and accurate stats than something like Google Analytics.
Analytics augments stats well, but it's not accurate by nature, and
not all my users even know what an include statement is, let alone a
footer file/function/class statement. And there are of course the
static HTML sites with thousands of files.

A pre-compressed stream of log data sent to SQLite may be a project
worth entertaining. The SQLite file should be only insignificantly
larger than the log file itself, and once the data is in a database,
your flexibility goes way up with regard to what you can do with it.
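A minimal sketch of that idea — the regex assumes Apache's Common Log Format, and the table layout is made up for illustration:

```python
import re
import sqlite3

# Common Log Format: host ident user [time] "request" status bytes
CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def ingest(lines, db_path=":memory:"):
    """Parse common-log lines and load them into an SQLite table."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS access "
               "(host TEXT, time TEXT, request TEXT, status INTEGER, bytes INTEGER)")
    for line in lines:
        m = CLF.match(line)
        if m:  # skip unparseable lines rather than abort the load
            db.execute("INSERT INTO access VALUES (?, ?, ?, ?, ?)",
                       (m.group("host"), m.group("time"), m.group("request"),
                        int(m.group("status")),
                        0 if m.group("bytes") == "-" else int(m.group("bytes"))))
    db.commit()
    return db
```

Once it's in a table, a per-vhost or per-status rollup is a single SQL query instead of a grep pipeline over a GB of text.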

Then again, the risk of data corruption also goes way up with
databases, whereas there is little risk of text file corruption, and
even a little log file corruption yields relatively workable data.

If anyone has any suggestions on other Apache log rolling methods, I'm
very interested. I'm doing about a GB a day uncompressed per machine.
Gzipped, the size is marginal, megabytes at most.

There must be a way to have Apache do on-the-fly log compression. My
log stats app can parse most compressed formats; it's the CPU load/
time and disk I/O while gzipping a GB that is the killer.
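One piped-log sketch for that — with the caveats that the gzip path is an assumption, and that the `>>` redirection only works if the command runs under a shell (recent Apache uses the `"|$"` prefix for that; on older builds you'd wrap the same pipeline in a small shell script):

```apache
# Sketch: stream the log straight into gzip so it never sits on
# disk uncompressed. "|$" asks Apache to run the command under a
# shell, which the >> redirection needs. Paths are assumptions.
CustomLog "|$/usr/bin/gzip -c >> /var/log/apache2/access_log.gz" combined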

If I could do on-the-fly compression, I would only need log rolling
per virtual host of about 1 MB so users can debug by watching their
personal logs.

Thanks for the input, I appreciate it.

(Sent from a mobile device)

More information about the macports-users mailing list