31 Jan 2025, by Slade Baylis
With the ever-increasing risk of data breaches, regulatory compliance obligations, and data loss more generally, it’s never been more important to protect your data from loss, damage, misuse, and theft. Losing access to data can create immediate disruptions to internal operations - at its most extreme, it can even pose an existential threat to your organisation as a whole. Data theft is just as dangerous as data loss, if not more so, because not only can it result in you losing access to your data, it can also raise legal and regulatory challenges.
It’s for these reasons that it is critical for organisations to consider Data Loss Prevention (DLP) strategies and how they can be improved. The techniques and technologies used to prevent data loss can be entirely separate from the techniques and technologies used to prevent misuse and theft - which is why it’s important to consider both the passive and the malicious types of threats independently.
That’s why this month we’ll be releasing the first of two articles focused on Data Loss Prevention. In this article we’ll touch on the different strategies for reducing the risk of data loss from more passive and unforeseen threats, such as hardware failure. Then next month, we'll do a follow-up companion piece that will focus more on the ways of shielding your data from more active and malicious threats.
At its most basic level, Data Loss Prevention (DLP) is simply the act of using tools, techniques and technologies to shield data from being lost, damaged, misused, or stolen. It encapsulates many different aspects of your organisation, including your approach to backups, asset management, hardware acquisition, cybersecurity, physical security, access control, training, and so forth.
As touched on earlier, when it comes to protecting your organisation from data loss, there are two different forms of threat - malicious threats and passive threats. Passive threats include all non-intentional threats to the integrity of your organisation’s data. This can include data corruption, hardware failure, and even local environmental disasters that can take your infrastructure offline or destroy it entirely!
Most organisations will be somewhat familiar with some of the methods used to protect against passive threats - such as keeping backups of your data. However, there are aspects to consider that can make data loss more or less likely, as well as affect the time it takes for you to recover should it occur. There are also other strategies to consider, which we’ll cover below.
By this point, all organisations will know that keeping backups of their data is imperative for ensuring they always have a copy of it. One lesser-known aspect, though, is that where you actually store your backups has a huge impact on how secure they are, how fast you can recover, and whether they’re protected from laterally moving compromises that aim to spread throughout your infrastructure.
For restore times, in most cases a locally stored backup is going to enable you to recover your data and systems the fastest. Having the data stored locally means you avoid needing to transfer it from separate backup infrastructure, a process that can be limited by network and file transfer speeds. There is one obvious downside to local backups, however: any threat that affects the server or system will also affect your local backup.
That’s why this form of backup is mainly used to prevent data loss caused by issues such as accidental deletion, rather than anything infrastructure-impacting. For protecting against threats that can affect your entire infrastructure, such as hardware issues or cyberthreats, externally-stored backups are the way to go.
By storing backups of your data and systems externally, you’re able to protect yourself both from issues that could cause data loss and from those that could take your systems offline. Consideration should also be given to where the backup infrastructure itself is located, as storing your backups off-site gives you even further protection against location-based threats.
When storing backups externally, it should be noted that restoration times can be impacted due to the time it takes to transfer data. As such, this should be included in any planning and discussions regarding the Recovery Time Objectives (RTOs) for your business.
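To make that RTO discussion concrete, the transfer time for an external restore can be estimated from the backup size and your link speed. The sketch below is illustrative only - the function name, the example figures (a 2 TB backup over a 500 Mbps link), and the 30% overhead padding are all assumptions, not measurements from any particular backup platform:

```python
def estimated_restore_hours(backup_size_gb: float, link_speed_mbps: float,
                            overhead_factor: float = 1.3) -> float:
    """Rough restore-time estimate for an externally stored backup.

    overhead_factor pads the raw transfer time for protocol overhead,
    decompression, and verification (an assumed 30% here).
    """
    size_megabits = backup_size_gb * 1000 * 8      # GB -> megabits (decimal units)
    transfer_seconds = size_megabits / link_speed_mbps
    return transfer_seconds / 3600 * overhead_factor

# e.g. restoring a 2 TB backup over a 500 Mbps link
print(f"{estimated_restore_hours(2000, 500):.1f} hours")  # → 11.6 hours
```

Even a rough figure like this is useful in RTO planning: if the estimate exceeds the downtime your business can tolerate, you may need a faster link, a local backup tier, or replicated standby infrastructure.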
Another important aspect to consider is the way in which your backup infrastructure accesses and takes copies of your data. It has become more common for cyberthreats – such as ransomware – to move laterally throughout a company’s infrastructure as soon as they’re able to break into a single endpoint. What this means is that if your backup infrastructure is accessible from your systems, malicious actors may be able to spread to your backups as well, removing them as a recovery option.
To protect against this, we recommend looking for backup options that are able to access and back up your data “one-way” – that is, in a way that doesn’t allow the systems themselves to access the backup infrastructure. For example, hypervisor-level backups allow entire virtual servers or containers to be backed up from the hypervisor, without requiring those systems to have any access to or knowledge of the backup platform. These solutions provide peace of mind because they keep the backup data protected from modern laterally-moving compromises.
For more information on things that you should consider when choosing what backup platforms to use or where to store your backup data, check out our Backing up your data – What should you consider when protecting your business? article.
Another potentially large threat to the integrity of your organisation’s data is the hardware on which it’s stored. When choosing hardware to host your infrastructure, a balancing act usually occurs between performance, age, and capacity. Whilst older hardware can allow you to save on infrastructure costs, in some cases the added risk of hardware failure is too high for certain use-cases or applications. For this reason, we recommend using newer hardware wherever possible.
The supplier of the hardware is also something that should be considered, as certain brands are known to be of higher quality than others and so are less likely to fail. When it comes to data loss specifically, drives and storage are particularly important to select carefully. Whilst older drive technologies used to fail slowly over time, giving warning signs as they began to deteriorate, newer technologies such as SSDs are known to fail suddenly and without warning.
Even whether the drives you're purchasing are all from the same manufacturing batch can be important. Manufacturing defects can – and have been known to – affect entire batches, meaning several drives could fail in quick succession if you're unfortunate enough to encounter this issue.
One strategy that we recommend is leasing newer hardware to host your infrastructure. Not only does this let you avoid the cost of buying and upgrading your own hardware, it also gives you the added reliability and performance of newer, dedicated hardware. And as that hardware ages, you’re able to move your systems onto newer hardware without large changes to your operating expenses or capital outlay.
Speaking of hardware failure, the failure of storage is one of the primary ways that organisations can find themselves losing data if they’re not prepared. Whilst reputable suppliers and newer hardware can mitigate some of this risk, including some form of redundancy in your infrastructure is key to minimising the chance you ever experience data loss.
One standard approach that’s stood the test of time is to replicate data across a series of drives within a server, in a RAID (Redundant Array of Independent Disks). This protects you from data loss should a single drive fail within a host, as copies of that data are spread across the other drives in the array. Different RAID configurations protect against more or fewer drives failing simultaneously, at the cost of reduced overall storage capacity.
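The capacity trade-off between common RAID levels is simple arithmetic, and a quick sketch makes it tangible. The function below is illustrative only - the name and the six-drive example are assumptions for this article, not a sizing tool - and it covers just three widely used levels:

```python
def usable_capacity_tb(level: str, drives: int, drive_tb: float) -> float:
    """Usable capacity for a few common RAID levels (illustrative only).

    RAID 1 mirrors everything, so you keep one drive's worth of space;
    RAID 5 gives up one drive to parity and survives one drive failure;
    RAID 6 gives up two drives to parity and survives two failures.
    """
    if level == "raid1":
        return drive_tb                  # n-way mirror: one drive's capacity
    if level == "raid5":
        return (drives - 1) * drive_tb   # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * drive_tb   # two drives' worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

# Six 4 TB drives: RAID 5 keeps 20 TB usable, RAID 6 keeps 16 TB
print(usable_capacity_tb("raid5", 6, 4.0), usable_capacity_tb("raid6", 6, 4.0))
# → 20.0 16.0
```

The pattern is clear: the more simultaneous drive failures a configuration tolerates, the less usable capacity remains from the same set of drives.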
However, this form of protection only defends against drives failing and doesn’t protect against issues that may affect an entire host or server. It’s for this reason that many organisations look instead for greater forms of redundancy, through techniques such as High Availability (HA) – which is the term given to any service that has built in redundancy to protect against downtime and data loss due to failure of the primary infrastructure.
For example, within our cloud infrastructure, all of our clients’ services are protected from downtime through real-time storage replication and automatic failover. The storage that our clients’ cloud services use is replicated between many physical hosts, either via our mSAN Ceph storage platform for mCloud services, or through several servers in a hyperconverged infrastructure (HCI) cluster for VMware services.
This replication of data between multiple physical servers - or our entire mSAN storage cluster when it comes to our mCloud services – allows our users to be protected from data loss should a single server fail. Not only that, but it also allows virtual workloads to shift to alternative hardware seamlessly should issues occur with the primary hardware. This allows those workloads and the services hosted on them to continue functioning as normal without any interruption.
Overall, the main thing to know about protecting yourself from passive threats to your data is that redundancy is key. Having copies of your data - whether on systems replicated to in real-time for High Availability (HA), like our mCloud platform or VMware’s hyperconverged infrastructure (HCI), or via backups to external systems - helps ensure that any single failure is less likely to cause data loss.
Backing up locally allows you to protect yourself from accidental deletion. Having external backups allows you to protect yourself from hardware-caused data loss. In addition, by either backing up or replicating to an offsite location, you're able to further protect yourself from geographically focused threats, such as severe weather events, fire, and even terrorism.
To know which type of protection is appropriate for your environment and worth investing in, we recommend considering how critical each of your systems is to your operations - that is, what it would cost your organisation if a system went down, or if you were to lose access to the data within it entirely.
If you have any questions about protecting yourself from data loss, let us know! We can provide advice on the best ways to protect your organisation from benign passive threats such as hardware failure (as we’ve talked about in this article), right through to more malicious cyberattacks that are hell-bent on taking you down.
You can reach us by phone on 1300 769 972 (Option #1) or via email at sales@micron21.com.