Auditing & Logging – Essential measures for protecting mission-critical systems

17 Jan 2022, by Slade Baylis

Within the world of IT infrastructure, when a problem occurs the first port of call is usually to discover what went wrong. Tracking down the cause can be a surprising amount of work, even if the ultimate solution ends up being quite simple.

One of the best ways to diagnose an issue is to have detailed system logs from the time the issue occurred. Having those at hand can drastically reduce the turnaround time for resolving any issues that arise. If you don’t have them, the problem can be much more difficult to diagnose, requiring significant effort just to replicate the initial issue. In these cases, simply finding and replicating the issue can take up the majority of the time required to fix it!

From a security perspective, one of the best things you can rely on is your logs – if you don’t have them (or worse yet, they’ve been compromised) you’re really up the creek! Unfortunately, that’s also not something you can go back in time and fix. As they say, “hindsight is 20/20”, so it’s best to learn from the mistakes of others and avoid falling into that trap in the first place. The best practice is to make sure that you are proactively keeping logs ahead of time, just in case you need them. It’s better to have them and not need them, than need them and not have them!

This is why it’s really important to consider building rigorous and reliable logging into any mission-critical infrastructure that you operate.

Syslog – The importance of centralising logs generated by your systems

With any application, the default setting when it comes to logging errors or debug information is usually to store it on the server it’s running on. For low-priority projects or systems that aren’t mission critical, that’s usually fine, as it allows quick and easy access for developers or system administrators. However, for systems that your business relies on, keeping the logs you need for diagnosis on the same server could be a disaster if that server becomes inaccessible due to a fault. For example, what if one of your systems goes down and the critical information that you need is on the very server that you’re trying to fix? As you can see, storing your logs on the servers themselves can really hamper your efforts to restore your systems in an emergency.

To solve this, a standard called “Syslog” emerged for sending logging data to separate, dedicated logging servers. By handing log information off to a dedicated logging system, even in the event that a server has gone down, the logs needed to diagnose the issue are still available to administrators. To make that even easier, additional information is included with those logs, such as a timestamp (the date and time the log was created), the message severity, the host’s IP address, and more.
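
As a rough illustration, here is a minimal Python sketch of how an application might forward its logs to a central syslog server using the standard library’s SysLogHandler; the hostname central-logs.example.com is a placeholder for your own logging server, and UDP port 514 is the standard syslog port.

    import logging
    import logging.handlers

    # Forward application logs to a central syslog server over UDP.
    # Replace the address below with your own logging host.
    syslog_handler = logging.handlers.SysLogHandler(
        address=("central-logs.example.com", 514),
        facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
    )
    syslog_handler.setFormatter(
        logging.Formatter("%(asctime)s myapp: %(levelname)s %(message)s")
    )

    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    logger.addHandler(syslog_handler)

    logger.info("Application started")
    logger.error("Could not connect to the database")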

Not only does this mitigate the risk of losing log data or being unable to access it, but the logging server can also be configured to send notifications when logs of certain types are generated. This means it can act like a pseudo monitoring service, notifying administrators when critical errors occur, when warnings are generated about things that need to be fixed, or even when connected networking devices report suspicious activity.
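
To give a flavour of what this looks like in practice, here is a small hypothetical Python sketch in which only logs at ERROR severity or above trigger an email to an administrator; the mail server and addresses are placeholders.

    import logging
    import logging.handlers

    # Email administrators whenever an ERROR (or worse) log message is generated.
    # The mail server and addresses below are placeholders.
    email_handler = logging.handlers.SMTPHandler(
        mailhost=("mail.example.com", 25),
        fromaddr="alerts@example.com",
        toaddrs=["admin@example.com"],
        subject="Critical error on web-server-01",
    )
    email_handler.setLevel(logging.ERROR)  # ignore INFO/WARNING, alert on ERROR and above

    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    logger.addHandler(email_handler)

    logger.warning("Disk usage at 85%")          # no email sent
    logger.error("Payment service unreachable")  # triggers an email alert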

By using this type of system, businesses can make sure they have access to the logs they need to quickly and efficiently resolve issues when they arise. Not only that, but in combination with pre-defined notifications, they can also take preventative action before issues occur - which lessens the likelihood of downtime or business interruption in the first place.

Immutability – One-way logging and increased data integrity

Another key benefit of separating the logging functionality from your individual systems and delegating that responsibility to a centralised solution is that logs can be sent “one way”. Though it may seem like a hindrance, only being able to send logs from a source to the centralised logging service is actually a great security measure.

For example, when systems are hacked or compromised by malicious third parties, one of the first things the attackers do after gaining access is clear any logs or records of how they broke into the system. They do this for obvious reasons: to hide their activity from detection and to make it harder for the administrators who manage the system to secure it again.

With a centralised logging system, not only are the logs not stored on the server, but even if a server is compromised it has no way to modify or delete any logs that have already been sent to the logging system. This means that a full log history of what occurred is preserved. In cases like the hypothetical one above, technicians can then use that history to diagnose how the systems were compromised and have them patched and secured.

Visualising Data – How to easily view and manage generated logs

In a lot of ways, the data that you are storing is only as good as your ability to use it – and that’s just as true for logging data. Having logs from all of your disparate systems stored centrally is great, but to make that usable you also need a way to effectively search through that data – by time, by machine, and by the type of log information you want to view. To do that, various solutions exist to help visualise and organise that information. In this article we’re going to look at two solutions supported by Micron21: Grafana and Graylog.

Grafana – An open-source visualisation and analytics tool

Grafana is an “open-source visualisation and analytics” tool, which allows its users to query, visualise, and explore their data. It’s not only used for server or application log data, as it’s much more flexible than that - it’s even been used by the likes of SpaceX! Specifically regarding IT infrastructure though, it can be used to intelligently organise the data from your systems and allow you to easily access the log information that you need. Not only does it allow for quick searches through that information, but it can even be set up with custom graphs or charts. With that functionality you are able to spot discrepancies in the data, such as an increase in the volume of logs generated by a particular application or server.
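
As a small, hypothetical example of interacting with Grafana programmatically, the Python sketch below uses its HTTP annotations API to mark an event (such as a deployment) so it can be lined up against spikes in log volume on your graphs; the URL and API token are placeholders, and details can vary between Grafana versions.

    import json
    import urllib.request

    # Mark a deployment event on Grafana dashboards via the HTTP annotations API,
    # so it can be correlated with changes in log volume or error rates.
    # The URL and API token are placeholders for your own environment.
    annotation = {
        "time": 1642377600000,            # epoch time in milliseconds
        "tags": ["deployment", "myapp"],
        "text": "Deployed myapp v2.4.1 to production",
    }

    request = urllib.request.Request(
        "https://grafana.example.com/api/annotations",
        data=json.dumps(annotation).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <your-api-token>",
        },
        method="POST",
    )
    urllib.request.urlopen(request)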

Just like Syslog, it can also be configured with alerts and notifications based on its data, though it’s even more comprehensive in its approach and implementation. Where Syslog can be configured to send notifications if a certain type of log is generated, Grafana can be set up with more advanced rules, such as only sending an alert if an issue persists for over a certain period of time. The fact that Grafana’s interface for configuring these alerts is much easier to use and navigate is just a bonus.

Another benefit is that, because it’s not limited to logging data, Grafana can also be used to ingest and display other business information. That allows you to use ping data, for example, to set up Grafana as a monitoring system for infrastructure and application availability!
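
As a rough sketch of that idea, the example below measures how long a TCP connection to a host takes and writes the result into InfluxDB (a common Grafana data source) using its 1.x line-protocol endpoint; the host names and database name are assumptions for illustration only.

    import socket
    import time
    import urllib.request

    # Measure how long a TCP connection to a host takes, as a simple
    # availability/latency probe, and write the result into InfluxDB so it
    # can be graphed and alerted on in Grafana.  Host names are placeholders.
    def check_latency(host: str, port: int = 443) -> float:
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            pass
        return (time.monotonic() - start) * 1000  # milliseconds

    rtt_ms = check_latency("www.example.com")

    # InfluxDB line protocol: measurement,tag=value field=value
    line = f"availability,host=www.example.com rtt_ms={rtt_ms:.2f}"
    request = urllib.request.Request(
        "http://influxdb.example.com:8086/write?db=monitoring",
        data=line.encode("utf-8"),
        method="POST",
    )
    urllib.request.urlopen(request)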

Graylog – An open-source log management tool

Unlike Grafana, Graylog (as the name suggests) has been purpose-built for centralising and aggregating log data into a single location. It also allows users to quickly query and explore their data, though with more limited options for visualising it in unique ways. On the other hand, thanks to that specialisation it’s regarded as one of the more powerful solutions when it comes to quickly searching through large amounts of log information. It provides that functionality via its implementation of “Elasticsearch”.
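
For a sense of how logs reach Graylog, here is a minimal Python sketch that sends a message in the GELF format over UDP, assuming a GELF UDP input is listening on Graylog’s default port 12201; the hostname is a placeholder.

    import json
    import socket
    import time

    # Send a single log message to a Graylog server using the GELF format
    # over UDP.  The Graylog hostname below is a placeholder.
    message = {
        "version": "1.1",
        "host": "web-server-01",
        "short_message": "User login failed",
        "timestamp": time.time(),
        "level": 4,                    # syslog severity: 4 = warning
        "_application": "myapp",       # custom fields are prefixed with "_"
    }

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(message).encode("utf-8"),
                    ("graylog.example.com", 12201))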

Without getting too technical, Elasticsearch is an open-source search and analytics engine that can be used to search through all different types of data. Within a Graylog server, it’s used to provide quick searches through stored log data, keeping the application fast and responsive when looking that data up. Just like Grafana, Graylog also supports alerts and notifications, allowing you to set thresholds and determine who should be notified depending on the type of log and the messages within it.

Even though it doesn’t have some of the niceties that come with Grafana, its logging specialisation and rock-solid search functionality make it a great choice for anyone who doesn’t require those other features.

Overview – Protecting your business in the future by setting up centralised logging today

Whether it’s servers, applications, or even the computers that your staff use for their day-to-day work, eventually issues will occur that need to be fixed. To be able to resolve those issues effectively and quickly, you will need to look through log information to track down the root cause. As they say, “knowledge is power”!

Commonly used systems such as Graylog and Grafana are compatible with both Linux and Windows systems, allowing you to centralise your log information regardless of how your infrastructure is organised. Not only that, but by utilising a separate solution you can work around the log size and retention limits imposed on Windows machines. This allows you to keep your log history for as long as needed, so you can go back and find information when investigating historical security incidents or errors.

In short, by implementing a separate, centralised logging system, you make accessing and searching that data much easier – and you also increase the integrity of the stored data through compartmentalised security, by only allowing one-way transmission of logs from your systems. For that reason, it’s something we recommend every one of our clients consider.

Interested in setting up a centralised logging solution in your infrastructure?

If you would like our help in separating out the logging function of your servers or applications, let us know. We can help develop a plan for integrating a separate logging solution into your infrastructure!

You can reach us on 1300 769 972 (Option #1) or via email at sales@micron21.com.
