How to manage Linux logs: Ultimate guide

Logs provide a wealth of information about system activity, application performance, and security events. Effective management of these logs is a crucial aspect of running any Linux system. Without a well-defined log management policy, important issues can go unnoticed and log files can take up excessive amounts of space, leading to performance problems, downtime, or even security breaches.

This end-to-end guide covers everything you need to know about Linux log management: handy frameworks and utilities, how to set up logging, how to analyze logs, and some best practices.

Understanding Linux logs

Linux systems generate several types of logs:

  • System logs: These logs track system events like startup, shutdown, and errors. They are used to monitor the overall health of the system.
  • Application logs: These are generated by applications and provide details about the app’s activity, including errors and performance.
  • Security logs: These logs record login attempts, authentication errors, and other security-related events. They are crucial for detecting unauthorized access or breaches.
  • Boot logs: Boot logs contain information about the system startup process. You can use them to troubleshoot any issues that occur during booting.

Linux logs are usually stored in the /var/log/ directory. Some important files you’ll find there include:

  • /var/log/syslog: Contains general system messages.
  • /var/log/auth.log: Stores authentication logs, including login attempts.
  • /var/log/kern.log: Stores logs related to the kernel.
  • /var/log/dpkg.log: Tracks package installations and updates on systems using Debian-based package managers.

Why bother with a systematic approach to log management?

Here are some reasons to take a systematic approach to log management:

  • Improved troubleshooting: Well-organized logs make it easier to identify and resolve system or application issues quickly. For example, suppose a web server is experiencing intermittent crashes. By analyzing its access and error logs, you can identify patterns in the errors, such as specific requests or timeframes that lead to the crash.
  • Enhanced security: Regularly reviewing security logs can help detect unauthorized access or suspicious activity. For example, say a security incident occurs, and you need to investigate how the unauthorized access happened. By reviewing the security logs, you can track the intruder's actions, identify the compromised systems, and take appropriate measures to prevent future attacks.
  • Better performance monitoring: Application and system logs help monitor resource usage and performance. This information can be used to make timely optimizations. For example, if a database server is running slow, you can analyze the server's logs to track resource usage, identify queries that are consuming excessive resources, and optimize the database configuration for better performance.
  • Simplified compliance: Keeping logs organized and accessible ensures easier audits and compliance with industry standards. For example, suppose your organization is subject to industry regulations that require log retention and analysis. A well-organized log management system will enable you to easily retrieve and analyze logs for audits and compliance purposes.
  • Efficient storage management: A good log rotation and archiving strategy prevents logs from consuming too much disk space. For example, if your log files are consuming a significant amount of disk space, you can implement a log rotation and archiving strategy to automatically delete old logs.

Logging frameworks and tools

With so many log files to take care of, managing logs manually on a Linux system can be overwhelming. Fortunately, there are many native tools and frameworks in Linux that help administrators efficiently track, manage, and store logs. They are discussed below.

syslog

syslog is one of the oldest and most widely used logging protocols in Unix-like systems. It collects logs from several sources, such as applications, devices, and system components, and stores them in a centralized location.

Most Linux distributions already have a syslog service running, like rsyslog or syslog-ng.

As touched upon earlier, logs are typically written to files located in /var/log/. You can customize the behavior of syslog by modifying the /etc/rsyslog.conf file.

For example, you can set separate log files for specific services or applications. In /etc/rsyslog.conf, add rules like:

authpriv.* /var/log/auth.log
mail.* /var/log/mail.log

After making changes, restart the syslog service:

sudo systemctl restart rsyslog
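
To confirm that a new rule is working, you can send a test message with the logger utility and check the target file. For example, assuming the mail.* rule above is in place:

logger -p mail.info "rsyslog test message"
tail -n 5 /var/log/mail.log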

journalctl

journalctl is a part of the systemd framework and is used for querying and displaying logs collected by the systemd journal service. Unlike traditional syslog-based logs, systemd logs are structured and stored in a binary format.

You can view the logs stored by systemd by using the journalctl command. For example:

  • journalctl -b shows logs from the current boot session.
  • journalctl --since "2 hours ago" filters logs from a specific time range.
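  • journalctl -u nginx.service shows logs for a specific systemd unit (nginx.service is just an example unit name).
  • journalctl -p err limits the output to messages at priority "err" or higher.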

Logs are automatically collected by systemd without the need for additional configuration, but the /etc/systemd/journald.conf file can be modified to adjust retention policies, storage options, and other aspects.

For example, to compress logs, you can set the following in the configuration file:

Compress=yes

Limit the maximum length of an individual log line by adding this parameter:

LineMax=48K

After editing the configuration, reload the systemd journal service:

sudo systemctl restart systemd-journald

Logrotate

As logs grow, they can quickly consume disk space. Logrotate is a tool that automates the process of log rotation, compression, and deletion. It’s particularly useful for managing log files that get too large over time.

Logrotate is typically pre-installed in most distributions. You can find the configuration file at /etc/logrotate.conf.

You can set up custom rotation rules for specific files, conventionally placed in a separate file under /etc/logrotate.d/. For example, here’s how to rotate an application’s logs daily, keep seven old copies, and compress the rotated files:

/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
}

On most distributions, logrotate already runs automatically once a day via a cron job or systemd timer, so rotations happen without manual intervention.
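
To test a rotation rule without waiting for the next scheduled run, you can invoke logrotate manually:

sudo logrotate -d /etc/logrotate.conf   # dry run: show what would be rotated without changing anything
sudo logrotate -f /etc/logrotate.conf   # force an immediate rotation to verify the rule works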

Setting up logging

Below are detailed steps for setting up system logging, application logging, remote logging, and journald logging on systemd-based systems. Once all this is done, you should have a complete picture of your system's activity.

Setting up system logging

  1. If syslog isn’t installed on your system, begin by installing it using the relevant package manager:
sudo apt install rsyslog  # On Debian/Ubuntu-based systems
sudo yum install rsyslog  # On CentOS/RHEL
  2. Start the rsyslog service and enable it to run on boot:
sudo systemctl start rsyslog
sudo systemctl enable rsyslog
  3. Modify the configuration file located at /etc/rsyslog.conf. Add or edit the logging rules. For example, to log kernel messages separately, you can add:
kern.* /var/log/kernel.log 

You can make any other necessary changes to the file. Once done, save the file and restart the syslog service:

sudo systemctl restart rsyslog
  4. To ensure that logs don’t consume too much disk space, configure log rotation using logrotate. For example, add a rule to rotate /var/log/syslog daily:
/var/log/syslog {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
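
As a quick sanity check, generate a test message and confirm it lands in the expected file (the exact file name depends on your distribution, e.g., /var/log/messages on RHEL-based systems):

logger "rsyslog test message"
tail -n 5 /var/log/syslog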

Setting up application logging

Many applications have their own logging mechanisms. However, you can centralize and standardize logging for better management. Here's how to go about it:

  1. Configure your application to generate logs. For example, for an NGINX web server, you can configure logging in the /etc/nginx/nginx.conf file under the http block:
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
  2. You can direct application logs to syslog to centralize them. In your application’s configuration file, specify syslog as the logging destination. For example, in NGINX, add the following line in the server block:
access_log syslog:server=unix:/dev/log;
  3. Add application log files to logrotate to manage their size. For example, to rotate NGINX logs weekly:
/var/log/nginx/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
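
Assuming the NGINX example above, you can verify the setup by validating the configuration, reloading the service, and watching the access log as requests come in:

sudo nginx -t                        # check the configuration for syntax errors
sudo systemctl reload nginx          # apply the changes
tail -f /var/log/nginx/access.log    # watch new entries arrive in real time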

Setting up remote logging

In larger environments, it’s common to send logs to a central server for easier management and analysis. Here’s how to set up remote logging:

  1. On the log server, edit the /etc/rsyslog.conf file to allow it to receive logs from other machines. Uncomment or add these lines:
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")
  2. Restart rsyslog on the server:
sudo systemctl restart rsyslog
  3. On the machines that will send logs to the central server, edit the /etc/rsyslog.conf file. Add the following line to forward all logs to the remote server over UDP (use @@ instead of a single @ to forward over TCP):
*.* @<log-server-IP>:514
  4. Restart the rsyslog service on each of the clients:
sudo systemctl restart rsyslog
  5. To secure remote logging, use encryption and authentication. Syslog over TLS can be enabled by configuring rsyslog on both the client and server sides to use certificates. Modify the configuration files to include:
$DefaultNetstreamDriverCAFile /etc/ssl/certs/ca-certificates.crt
$DefaultNetstreamDriverCertFile /etc/ssl/certs/logserver-cert.pem
$DefaultNetstreamDriverKeyFile /etc/ssl/private/logserver-key.pem
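
Make sure port 514 is reachable through the server's firewall, then confirm that forwarding works by sending a test message from a client and looking for it on the log server (the destination file depends on the server's rsyslog rules):

logger "remote logging test"                   # run on a client
grep "remote logging test" /var/log/syslog     # run on the log server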

Setting up journalctl for systemd-based systems

If your system uses systemd, logs are collected automatically by the systemd-journald service and queried with journalctl. Here's how to configure it:

  1. By default (Storage=auto), journald stores logs persistently only if the /var/log/journal directory exists; otherwise, logs are kept in volatile storage under /run/log/journal and are lost after a reboot. To always enable persistent logging, edit the /etc/systemd/journald.conf file and set:
Storage=persistent
  2. Then, reload the journal service:
sudo systemctl restart systemd-journald
  3. To prevent logs from consuming too much space, set size limits in /etc/systemd/journald.conf:
SystemMaxUse=1G
SystemMaxFileSize=100M
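
You can check how much space the journal is currently using and trim it immediately if needed:

journalctl --disk-usage               # show the journal's current disk usage
sudo journalctl --vacuum-size=500M    # optionally shrink archived journals to roughly 500 MB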

Analyzing and monitoring logs

Logs can grow large quickly, but Linux provides a variety of tools and commands that make it easy to sift through the data and extract useful information. This section covers common commands for analyzing logs and introduces open-source tools that simplify log monitoring.

Using basic Linux commands for log analysis

There are many command-line utilities that can help you filter, search, and process logs to find relevant information quickly:

grep

grep is used to search for specific patterns in log files. This is useful when looking for particular error messages or events.

grep "ERROR" /var/log/syslog

Use -i to make the search case-insensitive:

grep -i "failed" /var/log/auth.log

sed

The stream editor (sed) can be used to search, replace, and manipulate text in logs. It’s particularly useful for transforming log data.

sed 's/old-text/new-text/g' /var/log/syslog

The above command prints the contents of the syslog file with every occurrence of old-text replaced by new-text; the file itself is left unchanged unless you pass the -i flag to edit it in place.

awk

awk is a powerful text-processing tool used to extract and manipulate log data based on patterns and conditions.

awk '{print $1, $3, $5}' /var/log/syslog

This command prints the first, third, and fifth whitespace-separated fields of each log line; in the traditional syslog format, these correspond to the month, time, and process name.

tail

The tail command prints the last lines of a file; with the -f flag, it follows the file and displays new entries as they are written, making it ideal for monitoring logs in real time.

tail -f /var/log/syslog

This command continuously updates the output as new log entries are added.

less

less can be used to scroll through large log files without loading the entire file into memory. This is useful for reading large logs.

less /var/log/syslog

cut

cut is used to extract specific fields from log entries.

cut -d' ' -f1,5 /var/log/syslog

This example extracts the first and fifth space-delimited fields from each line of the log.
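
These utilities become even more useful when combined with pipes. For example, the following pipeline counts failed SSH login attempts per source IP address by extracting anything that looks like an IPv4 address from matching lines (the exact message text can vary across distributions and OpenSSH versions):

grep "Failed password" /var/log/auth.log | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort | uniq -c | sort -nr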

Using dedicated tools for log monitoring and analysis

While command-line tools are great for ad-hoc analysis, there are several tools designed for continuous monitoring and deeper analysis of logs.

Site24x7 AppLogs

AppLogs is a comprehensive log management solution that allows you to aggregate and analyze all your logs from a central place. It works well with all major Linux distributions and supports more than 100 different log types by default. Follow these steps to get started:

  1. Install the server monitoring agent on all your Linux machines.
  2. Create a new log type or choose one from the 100+ available.
  3. Associate the chosen log type with a log profile. A log profile is a way to bind log types to groups of servers.

Now you can analyze your logs using the AppLogs Query Language. You can also create a log-based alert.

Log management best practices

Finally, here are some best practices to help you get the most out of your logging system:

Centralize log storage

Instead of managing logs on individual servers, centralize log collection into a single location via a tool like AppLogs. This simplifies management, ensures better security, and allows for comprehensive analysis across multiple systems.

Rotate and archive logs

Use logrotate or a similar tool to manage disk space efficiently by compressing, archiving, and removing old logs. This prevents logs from consuming all available disk space, which could otherwise lead to system failures.

Set up alerts for critical events

Set up automated alerts for key events such as login failures, system crashes, or unauthorized access attempts. This allows for real-time responses to potential security threats or system issues.

Tools like AppLogs offer the ability to set up triggers and alerts.

Ensure log integrity and security

Logs often contain sensitive information, so it’s crucial to secure access to them. Implement proper file permissions and encryption to prevent unauthorized access or tampering. You can also use file integrity monitoring tools to detect any changes to critical log files.
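
As a minimal sketch, you can restrict a hypothetical application's log files so that only root and members of a log-reading group can access them (the adm group is a common convention on Debian-based systems):

sudo chown root:adm /var/log/myapp/*.log   # owner root, group adm
sudo chmod 640 /var/log/myapp/*.log        # owner read/write, group read-only, no access for others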

Regularly analyze logs

Even if you don’t use a dedicated log analysis tool, make it a habit to regularly review logs, regardless of whether there are any immediate issues. Regular analysis helps you spot trends, patterns, or potential threats before they become serious problems.

Enable and monitor application-specific logging

Many applications and services (e.g., Apache, NGINX, MySQL) have their own log systems. Ensure that these logs are enabled and configured properly for each application you use.

Archive logs for compliance

For organizations that need to comply with regulations (such as PCI DSS, HIPAA, or GDPR), it’s important to archive logs securely for extended periods. Some compliance standards require logs to be retained for a specific time.

Monitor log volume

Large volumes of logs can slow down log analysis tools and fill up storage quickly. Therefore, you should continuously monitor log volume so that your system isn’t overwhelmed by too much data. To do so, set limits on log size, rotate logs frequently, and compress older logs to manage volume.
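
A quick way to see which logs are consuming the most space:

sudo du -sh /var/log/* | sort -hr | head -n 10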

Standardize log formats

Using consistent log formats across different applications and services makes parsing and analyzing logs much easier. Consider adopting a standard log format like JSON or RFC 5424 (used by syslog) for uniformity.
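
As an illustration, recent NGINX versions can emit access logs as JSON by defining a custom log_format with escape=json in the http block of nginx.conf (the format name and log file path below are just examples):

log_format json_combined escape=json
  '{"time":"$time_iso8601","client":"$remote_addr",'
  '"request":"$request","status":"$status","bytes":"$body_bytes_sent"}';
access_log /var/log/nginx/access_json.log json_combined;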

Use dashboards for real-time monitoring

Implement visual dashboards to monitor logs in real time. With AppLogs, you get a built-in, centralized dashboard that offers an overview of system health and log trends.

Test and update your log management setup

Regularly test your log management system to validate that all logs are being collected, rotated, and archived correctly. Also test your alerting setup to verify that you’re notified of critical events promptly.

Consider using a SIEM solution

A Security Information and Event Management (SIEM) solution can provide a comprehensive view of your security posture by correlating logs from multiple sources. Consider incorporating one into your ecosystem.

Conclusion

Log analysis is a great way to track performance, troubleshoot issues, and investigate security incidents. However, for fruitful log analysis, you need a well-structured logging system, proper tools for monitoring and filtering data, and a clear log management policy that governs how logs are stored, rotated, and archived.

If you are looking for a quick and easy way to implement an effective log management system, check out AppLogs by Site24x7.
