Local logging

Basic options

Now that we have the syslog-ng package installed we can begin the configuration. The syslog-ng daemon stores its configuration file at /etc/syslog-ng/syslog-ng.conf. Before we begin to edit this file we should probably make a backup in case we should ever want to return to the default configuration.

lisa # mv /etc/syslog-ng/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf.backup

Now that we have a backup copy of the original configuration file we can create a new, empty file in which to write our new configuration.

lisa # nano -w /etc/syslog-ng/syslog-ng.conf

The first step in the configuration process is to set the default options which will apply to all the sources, filters, and destinations we shall define. Add the following lines at the top of the empty file we just created.
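A sketch of such an options block, reconstructed from the option descriptions which follow, is shown below. The values given for log_fifo_size, log_msg_size, and stats are illustrative assumptions only; adjust them to suit your system.

```
options {
        chain_hostnames(no);
        create_dirs(yes);
        dir_group(514);
        dir_owner(514);
        dir_perm(0750);
        group(514);
        owner(514);
        perm(0640);
        log_fifo_size(1000);  # illustrative value
        log_msg_size(2048);   # illustrative value
        stats(43200);         # illustrative value
        sync(0);
};
```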


Now that we have added the above options to the configuration file let's see what they do. The following section gives a brief description of each of the options we have used.

chain_hostnames(no)
By default the syslog-ng daemon will record the source identifier along with the host name of the originating system in the format source@hostname. Setting this to no will cause just the host name to be recorded.
create_dirs(yes)
When logging to files with syslog-ng we can specify a directory to contain the file instead of placing it directly in /var/log/. By default syslog-ng will not create these directories, so this option is required if the directory tree is to be dynamic or if you do not wish to create the directories in advance by hand.
dir_group(514), dir_owner(514), dir_perm(0750)
The previous option configured syslog-ng to create any directories needed to contain the log files specified in our destination statements. These entries control the group id, the owning user id, and the permissions of any directories created in this way.
group(514), owner(514), perm(0640)
These entries perform the same task as those above, controlling ownership and permissions, except in this case they apply to the log files created by the logger rather than the directories.
log_fifo_size
Lines of log output are placed into a queue before they are written to their destination. This option controls the number of lines which may be buffered before being written. If this value is too small then log lines may be lost.
log_msg_size
This option controls the maximum size of any log entry. It is specified in bytes and can be used to ensure that no log entry will exceed the maximum space available when logging to a database. It can also be used to limit the damage done by an application logging excessively large messages.
stats
The syslog-ng daemon sends periodic status entries to the log summarising the state of the logging system since the last such entry. This option controls the interval, in seconds, between these messages.
sync(0)
Before log entries are written to their destination files they are held in a buffer in memory. This option controls how many lines may accumulate in this buffer before it is flushed to disk. A setting of zero, as given here, ensures that log entries are written to the file immediately.


The second step in configuring syslog-ng is to define one or more log sources to capture system events as they occur. There are a number of different sources of interesting events on a typical Linux system each of which will require an entry in the configuration file if it is to be monitored by syslog-ng.

The example code below contains one entry to capture events from each type of source as well as an entry to capture all events from all the sources listed in the other three entries.

source s_kernel   { file("/proc/kmsg");      };
source s_user { unix-stream("/dev/log"); };
source s_internal { internal(); };
source s_all { file("/proc/kmsg"); unix-stream("/dev/log"); internal(); };

The first of the lines in the above example code creates a file source called s_kernel which receives messages sent by the kernel to the virtual file /proc/kmsg. This type of source is only designed to read special files such as this so it cannot be used to read log entries from a normal file.

The second line creates a source called s_user which will be used to read all the log messages generated by user-space programs and daemons and sent to the /dev/log socket. On Linux it is traditional to use a unix-stream socket, which receives these messages over a reliable connection so that none are lost, while on BSD it is more common to use a unix-datagram socket, which reduces vulnerability to denial of service attacks. You can use whichever you prefer.
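If you would rather use the datagram style on Linux, syslog-ng provides the unix-dgram driver for this purpose, and the source definition would then read as follows:

```
source s_user { unix-dgram("/dev/log"); };
```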

The third line, as you have probably guessed, is responsible for capturing messages generated internally by the syslog-ng process. These messages are associated with a source called s_internal.

The final line is merely a convenience entry which combines all the previous sources into a single source so that they may be referenced more easily. As this example shows, multiple physical sources can be included in a single virtual source definition.


The third step toward creating our configuration is to define some filters which allow us to categorise messages based on their priority and source. This is one of the many areas where syslog-ng is considerably more advanced than the original UNIX syslog daemon, giving us a great deal more flexibility when logging events.

Firstly, let's define some filters for the standard UNIX log levels. We can use these filters later on in the configuration to log messages with different priorities to different destinations, or even ignore them completely in the case of debug messages.

filter f_emergency { level(emerg);  };
filter f_alert { level(alert); };
filter f_crit { level(crit); };
filter f_err { level(err); };
filter f_warn { level(warn); };
filter f_notice { level(notice); };
filter f_info { level(info); };
filter f_debug { level(debug); };

You can combine multiple levels into a single filter as shown below. These filters can be used to easily separate messages into three categories: serious, check, and trivial. As an example we could use these filters, in combination with other software, to arrange for serious messages to be emailed to an administrator immediately, check level messages to be sent on a weekly or daily basis, and trivial messages to be simply ignored.

filter f_serious { level(err..emerg); };
filter f_check { level(notice..warn); };
filter f_trivial { level(debug..info); };

You can also use boolean operators to combine and negate filters as shown in the examples below. These filters represent the opposite of the filters we created above and can be used whenever we want to specify that messages should be excluded by a filter rather than included by a filter.

filter f_not_serious    { not level(err..emerg); };
filter f_not_check { not level(warn) and not level(notice); };
filter f_not_trivial { not level(info) and not level(debug); };

filter f_not_emergency { not level(emerg); };
filter f_not_alert { not level(alert); };
filter f_not_crit { not level(crit); };
filter f_not_err { not level(err); };
filter f_not_warn { not level(warn); };
filter f_not_notice { not level(notice); };
filter f_not_info { not level(info); };
filter f_not_debug { not level(debug); };

Filters can also be created to select log messages based on their source. This is usually expressed using a facility code. The example listing below provides appropriately named filters for all the standard UNIX facility codes. As you can see, facilities can be combined to create a filter which will match either of the specified facility codes.

filter f_kernel   { facility(kern);           };
filter f_auth { facility(auth); };
filter f_authpriv { facility(auth, authpriv); };
filter f_user { facility(user); };
filter f_daemon { facility(daemon); };
filter f_cron { facility(cron); };
filter f_mail { facility(mail); };
filter f_news { facility(news); };
filter f_uucp { facility(uucp); };
filter f_ftp { facility(ftp); };
filter f_lpr { facility(lpr); };

UNIX-based systems also define a number of local facility codes which are used by the kernel and some daemon processes to log events. These include the local7 facility, which is used to log boot messages, and the local5 facility, which is used for messages from the routing subsystem.

filter f_local1   { facility(local1);   };
filter f_local2 { facility(local2); };
filter f_local3 { facility(local3); };
filter f_local4 { facility(local4); };
filter f_local5 { facility(local5); };
filter f_local6 { facility(local6); };
filter f_local7 { facility(local7); };

Sometimes it can be useful to filter log messages based on criteria other than their priority and facility code. For this reason syslog-ng comes with some additional matchers which can be used when defining filters. The example below shows how the match command may be used to compare the text of each log message with a regular expression.

filter f_avc      { match(".*avc: .*"); };
filter f_audit { match("^audit.*"); };
filter f_pax { match("^PAX:.*"); };
filter f_grsec { match("^grsec:.*"); };
filter f_firewall { match("^FW:.*"); };

In addition to the match command described earlier there is another command which can be used in a similar way. The example below creates a filter which matches the program name field of log messages against the specified regular expression. In this case it will match any program whose name contains ppp, such as pppd.

filter f_ppp      { program(ppp);       };


The fourth step in composing our basic configuration is to define some destinations for the log messages. As we shall see the syslog-ng logger can record events to a variety of devices. By far the most common output method is the file. In the example below we create several file destinations for system events such as the boot process, messages generated by the kernel, and the cron daemon. All of these files are grouped together in the system subdirectory of the standard /var/log directory.

destination d_boot         { file("/var/log/system/boot.log");    };
destination d_kernel { file("/var/log/system/kernel.log"); };
destination d_user { file("/var/log/system/user.log"); };
destination d_daemon { file("/var/log/system/daemon.log"); };
destination d_cron { file("/var/log/system/cron.log"); };
destination d_syslog { file("/var/log/system/syslog.log"); };

Next we define some more file destinations which we shall use to record security related events. If you are not using a hardened kernel you will probably want to omit the d_pax and d_grsec destinations, and their associated log entries in the next section, as they are not relevant to your configuration.

destination d_authlog      { file("/var/log/security/auth.log");     };
destination d_avc { file("/var/log/security/avc.log"); };
destination d_audit { file("/var/log/security/audit.log"); };
destination d_firewall { file("/var/log/security/firewall.log"); };
destination d_pax { file("/var/log/security/pax.log"); };
destination d_grsec { file("/var/log/security/grsec.log"); };

The destinations below will be used for log messages related to the mail system. While this is especially important on a system which is being used as an email server, it should probably be present on all systems with any mail related services installed.

destination d_maildebug    { file("/var/log/mail/debug.log");     };
destination d_mailinfo { file("/var/log/mail/info.log"); };
destination d_mailwarn { file("/var/log/mail/warn.log"); };
destination d_mailerr { file("/var/log/mail/error.log"); };

The following destinations will probably only be of interest to those of you who are running a news server. If you are not then feel free to ignore these entries, and the corresponding log entries which we will create in the next section.

destination d_newswarn     { file("/var/log/news/warn.log");      };
destination d_newsnotice { file("/var/log/news/notice.log"); };
destination d_newserr { file("/var/log/news/error.log"); };
destination d_newscrit { file("/var/log/news/critical.log"); };

Below we have a destination for events relating to a line printer. If your system has such a device connected then this entry, and its corresponding log entry should probably be included.

destination d_lpr          { file("/var/log/misc/lpr.log");       };

Next we have a section for network related services. Here we have defined two destinations for receiving log messages relating to ppp connections and uucp connections respectively.

destination d_ppp          { file("/var/log/network/ppp.log");    };
destination d_uucp { file("/var/log/network/uucp.log"); };

Before we introduce other types of output method we should probably also create two additional file destinations. The first shall receive any debug level messages while the second shall receive messages of any other priority level which have not already been logged to any other destination.

destination d_debug        { file("/var/log/debug");              };
destination d_messages { file("/var/log/messages"); };

When running a production system debug level messages should either be ignored, prevented from being generated at source, or regularly purged to stop them filling the available log space. It is probably a good idea to remove, or comment out, the debug line above unless it is being used.

Now that we have all the normal file destinations created we can move on to some of the more interesting output methods which syslog-ng offers us. The first two entries of the example below use the usertty method to output to the tty of the root user and, using a wild-card, to the ttys of all logged in users. The third entry uses the file output method, which we used extensively earlier, to send log messages directly to the device for tty12, enabling them to be seen on the console of the system, without needing to log in, simply by pressing Alt-F12. The last entry uses the pipe method to send messages to /dev/xconsole, where they can be read by programs such as xconsole under the X Window System.

destination d_console_root { usertty("root");                     };
destination d_console_all { usertty("*"); };
destination d_console_tty { file("/dev/tty12"); };
destination d_console_x { pipe("/dev/xconsole"); };

Sending log events to the console, either of logged-in users or one of the spare consoles like tty12, can present a security risk if untrusted users can gain access to the system in that way. Additional measures should be taken to ensure that the security of sensitive information is preserved when using this type of log destination.


The fifth and final step in configuring syslog-ng is to define the mappings between the log sources, filters, and destinations, which we created in the previous three sections. This will connect them together in such a way that log events will "flow" from sources via filters to destinations.

Below we have an example of the simplest possible mapping. It consists of a single source and a single destination with no filters or other complications in between. It can be imagined as a pipe running from the source, in this case the s_internal source which receives internal events from the syslog-ng process, to the d_syslog destination, which in this case is a file located at /var/log/system/syslog.log. All the messages which are inserted at one end by the source will be written to the device at the other end by the destination.

log { source(s_internal); destination(d_syslog); };

Given the simplicity of the configuration file syntax it should come as no surprise to discover that filters can be placed, both figuratively and literally, between the source and the destination. An example of a mapping between a source and a destination with a single filter is given below. In this case it records any log messages sent with the local7 facility code, which represents the boot process on most Linux distributions, while ignoring any messages which do not pass through the filter.

log { source(s_all); filter(f_local7); destination(d_boot); };

Another way in which log mappings can be modified is with the inclusion of flags. These flags affect the way in which messages are processed, either by the mapping in which they are included or by subsequent mappings. Below is an example which uses the fallback flag to specify that messages should only be processed by this mapping if they will not be processed by any other non-fallback mappings.

log { source(s_kernel); destination(d_kernel); flags(fallback); };
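The final flag, also provided by syslog-ng, works the other way around: messages matched by a mapping carrying this flag will not be processed by any subsequent mappings. As an illustrative sketch, not part of the configuration we are building here, the following mapping would consume all debug level messages so that no later mapping sees them:

```
log { source(s_user); filter(f_debug); destination(d_debug); flags(final); };
```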

Filters and flags can, of course, be combined to create complex mappings like those shown here. The example mappings below complete our configuration entries for the files in the system subdirectory.

log { source(s_user);   filter(f_user);     destination(d_user);     flags(fallback); };
log { source(s_user); filter(f_daemon); destination(d_daemon); flags(fallback); };
log { source(s_user); filter(f_cron); destination(d_cron); };

More than one filter can be specified in the same mapping. In the example below we match log entries against the f_authpriv filter, which ensures that they were generated with either the auth or authpriv facility codes, and also the f_not_debug filter, which ensures that they are not of the debug priority level.

log { source(s_user);   filter(f_authpriv); filter(f_not_debug);   destination(d_authlog); };

We can complete our log mappings for the security subdirectory with the following entries. If you are not using a hardened kernel then remember to omit the pax and grsec entries if you omitted their corresponding filter entries earlier.

log { source(s_kernel);   filter(f_audit);      destination(d_audit);    };
log { source(s_kernel); filter(f_avc); destination(d_avc); };
log { source(s_kernel); filter(f_firewall); destination(d_firewall); };
log { source(s_kernel); filter(f_pax); destination(d_pax); };
log { source(s_kernel); filter(f_grsec); destination(d_grsec); };

Next we have log mappings for the mail system. Each mapping first uses the f_mail filter, to match only mail related messages, and then applies a second filter to determine the appropriate log destination. As you can see, the d_mailinfo destination appears twice so that it receives messages of both the info and notice priority levels.

log { source(s_user);   filter(f_mail); filter(f_debug);   destination(d_maildebug);  };
log { source(s_user); filter(f_mail); filter(f_info); destination(d_mailinfo); };
log { source(s_user); filter(f_mail); filter(f_notice); destination(d_mailinfo); };
log { source(s_user); filter(f_mail); filter(f_warn); destination(d_mailwarn); };
log { source(s_user); filter(f_mail); filter(f_err); destination(d_mailerr); };

The news system uses log mappings similar to those of the mail system above; however, as news servers traditionally use slightly different priority levels when reporting errors and warnings, the mappings have been adjusted accordingly. If you want logs for the info or debug priority levels then the corresponding lines can, of course, be added, assuming that appropriate destinations are defined.

log { source(s_user);   filter(f_news); filter(f_warn);    destination(d_newswarn);   };
log { source(s_user); filter(f_news); filter(f_notice); destination(d_newsnotice); };
log { source(s_user); filter(f_news); filter(f_err); destination(d_newserr); };
log { source(s_user); filter(f_news); filter(f_crit); destination(d_newscrit); };

As the lpr, uucp and ppp systems are all fairly simple, and in normal operation should produce no non-trivial error messages, the following log mappings will log all such messages to a single destination per service.

log { source(s_user);   filter(f_lpr);  filter(f_not_trivial);  destination(d_lpr);   };
log { source(s_user); filter(f_uucp); filter(f_not_trivial); destination(d_uucp); };
log { source(s_user); filter(f_ppp); filter(f_not_trivial); destination(d_ppp); };

Now that all the messages we are expecting are being categorised, and logged to files as appropriate, we can add mappings to ensure that all messages which have not been logged so far are recorded. This can be done using the fallback flag as we discovered earlier.

log { source(s_user);   filter(f_trivial);       destination(d_debug);      flags(fallback); };
log { source(s_user); filter(f_not_trivial); destination(d_messages); flags(fallback); };

All that remains now is to ensure that all serious log messages, excluding those with the error priority level, are echoed to the console of any logged in root users. We should also ensure that all alert level messages are echoed to the console of all logged in users.

log { source(s_all);    filter(f_serious); filter(f_not_err);   destination(d_console_root); };
log { source(s_all); filter(f_alert); destination(d_console_all); };


Now that we have a complete configuration file we are almost ready to start the logger. Before we do, however, there is one more useful feature of syslog-ng worth introducing at this juncture. If you run the init script with the checkconfig parameter, as shown below, syslog-ng will attempt to load and parse the configuration file, reporting on any errors it encounters.

lisa # /etc/init.d/syslog-ng checkconfig

The most common cause of errors in the configuration file is a missing, or indeed a spurious, semicolon or curly brace. This can, on occasion, cause syslog-ng to report a seemingly incorrect error location.

Once you are satisfied that your configuration file is free from errors you can start the syslog-ng service by passing the start parameter to the init script as shown below.

lisa # /etc/init.d/syslog-ng start

Another tool which can be used to test our syslog-ng configuration is the logger application. When run it will write the specified log message to the system log using the syslog function. An example of such a test, which writes a warn level message from the mail facility to the log, is shown below.

lisa # logger -p mail.warn "A test log message"

The syslog function writes log entries to the /dev/log socket, so this method cannot be used to test kernel logging, which is read from /proc/kmsg instead. Loading or unloading a kernel module is a fairly simple way of generating a kernel level event.