31 Jan 2025, by Slade Baylis
As has become tradition here at Micron21, every six months or so we like to do a review of the state of cybersecurity across the world. The intent is to update our readers on the latest threats and how new technologies are affecting the security landscape, as well as to talk about the different ways they can better defend themselves in the future.
In most of these reviews, the threats have been broadly similar each time, with the biggest dangers usually remaining the same. However, with recent developments in AI, this is no longer the case. With the development and proliferation of generative AI, the danger from even long-standing and persistent threats is increasing, as are their frequency and sophistication.
We’ll be touching on a range of issues within this article, but the most important thing to understand about cybersecurity in 2025 is that all aspects of security are now being impacted by this radical technological shift. For this reason, for each of the threats we cover, one of our focuses will be on the different ways that AI is impacting it.
With generative AI (genAI) impacting almost every aspect of cybersecurity, it’s only fitting that it’s the first topic we touch on this year. With the rapid development of different forms of generative AI, and its equally rapid adoption by cybersecurity researchers and cybercriminals alike, the rates at which new vulnerabilities are found, phishing attacks are launched, and malware is developed are all accelerating.
Generative AI is a subset of artificial intelligence (AI) that uses generative models to learn the underlying patterns and structures of their training data, which in turn enables them to produce new text, images, videos, or other forms of data. As reported by the World Economic Forum1, “47% of organizations cite adversarial advances powered by generative AI (GenAI) as their primary concern, enabling more sophisticated and scalable attacks”.
Not only that, but in 2024 there was a “sharp increase in phishing and social engineering attacks, with 42% of organizations reporting such incidents”. This increase in phishing attacks is also expected to continue, with Google2 predicting in their Cybersecurity Forecast that “threat actors will increasingly use AI for sophisticated phishing, vishing, and social engineering attacks” in 2025.
Another key finding in the Cybersecurity Forecast was that the “democratization of cyber capabilities” – the increased access to tools and services that lower the barriers to entry for less-skilled actors – will likely lead to increased cyberattacks globally. We touched on this threat back in January of 2023 in our The latest AI craze - why does ChatGPT have cyber-security professionals concerned? article. In that article we talked about how automated malware generation and off-the-shelf RaaS (Ransomware-as-a-Service) would likely enable many to commit cybercrime that they would otherwise have been unable to, due to their lack of knowledge and required skills.
On a more positive note, AI and machine learning aren’t just empowering cybercriminals – they’re also being used by cybersecurity researchers to improve the tools used to fight them. As reported in our earlier AI is making it harder to detect malware and phishing – but it can also be used to fight them article, older methods of detecting and preventing malicious code from running – where application code was compared against a library of known malware signatures – are no longer enough. Through the use of AI, new security software is now being implemented that is more reliable than previous methods, because it instead looks at the specific behaviour of running applications in order to determine whether they are nefarious or not.
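To make the distinction between the two approaches concrete, below is a minimal Python sketch. The malware hash, behaviour list, weights, and threshold are all hypothetical values invented for illustration – real endpoint-protection products rely on vast signature databases and trained machine-learning models rather than hand-picked scores.

```python
import hashlib

# Signature-based detection: hash the file and look it up in a
# database of known-bad hashes. Fails for any malware it hasn't seen.
KNOWN_MALWARE_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder example hash
}

def signature_scan(file_bytes: bytes) -> bool:
    """Return True if the file matches a known malware signature."""
    return hashlib.md5(file_bytes).hexdigest() in KNOWN_MALWARE_HASHES

# Behaviour-based detection: score what a running process *does*.
# These features and weights are illustrative stand-ins for the
# trained models that real endpoint-protection tools use.
SUSPICIOUS_BEHAVIOURS = {
    "encrypts_many_files_quickly": 0.5,  # classic ransomware tell
    "deletes_shadow_copies": 0.3,        # blocks easy recovery
    "contacts_unknown_host": 0.1,
    "modifies_startup_entries": 0.1,
}

def behaviour_scan(observed: set[str], threshold: float = 0.6) -> bool:
    """Return True if the combined behaviour score crosses the threshold."""
    score = sum(SUSPICIOUS_BEHAVIOURS.get(b, 0.0) for b in observed)
    return score >= threshold

# A brand-new ransomware strain has no known signature,
# but its behaviour still gives it away.
sample = b"previously-unseen ransomware payload"
print(signature_scan(sample))  # False - no signature match
print(behaviour_scan({"encrypts_many_files_quickly",
                      "deletes_shadow_copies"}))  # True - score 0.8
```

The contrast is the point: the signature check returns nothing for a never-before-seen sample, while the behavioural score can still flag a brand-new strain by what it does.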
The dangers of social media and the risk to the privacy of your data are something that most people are aware of. Due to the amount of information that many choose to share and post online, in a lot of cases it’s possible for malicious actors to use this information to commit identity fraud. From using this personal information to open new accounts with financial institutions, to taking over existing bank accounts, to obtaining medical services or insurance benefits in the victim’s name – these are just some examples of how identity fraud can happen through the oversharing of private information online.
Not only is the use of your personal information by malicious actors a concern, but your data can also be put at risk by social media companies utilising and monetising that same data for advertising purposes. This is because your data can be shared with many third parties in that process, and if those parties aren’t properly vetted or secured, it opens up many more avenues for your data to be stolen should they be breached – or even willingly sold to criminals.
However, the latest concern is not just about social media and data security more generally – rather, it’s about how this data can be used by genAI to create a whole new level of threat. As reported by SentinelOne3, with data from social media and the power of generative AI, cybercriminals are able to train their models on that data to “mimic the behaviour, appearance, and voice of individuals, making it harder to distinguish between real interactions and artificial ones”.
AI-generated videos, real-time deepfakes, voice cloning, and bot-driven chat are already being used to commit fraud online - impersonating high-profile individuals such as celebrities, business owners, and even heads of state. However, these specifically tailored and customised phishing attacks won’t just be a risk for those targets that cybercriminals deem high-value and high-profile, as these attacks are now possible at scale.
With the increasing speed at which these tools are developing and becoming available to all, SentinelOne has stated that soon “it won’t be too far-fetched to find yourself in a Zoom call, thinking you’re speaking with a colleague or superior, only to realize later that it was an AI-generated forgery”. Imagine having an interaction with a colleague, friend, or family member online, only to discover that it was a cybercriminal using AI trained on their photos, videos, and voice, as well as the detailed history of their lives – all sourced from their social media presence!
For some more information about these sorts of AI-enabled identity theft attacks, we’ve previously talked about how AI is being used to convincingly (and trivially) impersonate people’s voices to devastating effect. You can find more information on that in our previous How AI voice cloning has opened the door to a horrific new type of phishing scam article.
Overall, it’s critical that you know what’s possible with new developments in AI, so that you know what to be on the lookout for and on guard against. If you aren’t aware that cybercriminals are able to convincingly clone someone’s voice or create real-time deepfakes, then you will have no chance of defending yourself should you fall victim to one of these styles of attack.
On the privacy side, as always, it’s important to check the privacy settings on your social media accounts to ensure that your information is private and only available to those you are friends with. Not only that, but it’s prudent to also be careful about who you choose to become friends with online, as any person you add is able to access your data. If that person is actually an AI persona, or even just maliciously inclined, they could use all this information as training data for malicious AI models, teaching them to impersonate you.
It’s basically a fixture in these cybersecurity reviews, but ransomware is still one of the most common types of malicious attack that organisations can find themselves on the receiving end of. The reason for this shouldn’t come as a surprise, as the financial gains for an attacker who succeeds in breaching a system are immense. This creates a cycle where each successful breach incentivises and funds the development of more sophisticated ransomware tools, further increasing ransomware’s success rate and use.
It’s due to this self-reinforcing cycle and the increasing prevalence of attacks that, as reported by Check Point4, “businesses are expected to turn more to cyber insurance to mitigate the financial impact of such attacks, while governments will enforce stricter regulatory standards”.
As we’ve reported recently, this move by governments towards tougher regulation and compulsory reporting is something that’s already underway in Australia. In our Ransomware payment reporting to be made mandatory under “Cyber Security Bill 2024” article from a few months ago, we reported on new laws which, if passed, would require victims of ransomware attacks who make payments to report those payments to authorities. Other measures aimed at breaking that funding cycle are also being contemplated by governments globally, such as making it illegal to pay the ransoms demanded in these attacks.
AI is also making ransomware attacks more frequent and dangerous, with cybercriminals using generative AI and automation to increase the speed and precision of their attacks. These enhanced techniques will, as reported by Check Point, “allow ransomware to spread rapidly across networks, making early detection more critical than ever”. Developments in AI not only allow cybercriminals to develop ransomware more quickly and cheaply, but also enhance the other forms of attack that often serve as the initial beachheads for ransomware attacks.
Those other forms of attack include “phishing emails generated with flawless grammar and personal details”, which can be launched against thousands of people simultaneously while being specifically targeted and customised for each intended recipient. If successful, phishing attacks can be the initial entry point into an organisation’s infrastructure, allowing a ransomware attack to begin.
Due to the increased sophistication of threats like AI-powered phishing attacks, it’s more important than ever to make sure your organisation is prepared. Our recommendation is to implement thorough, in-depth, and regular training, as well as simulated phishing campaigns. Regular training helps your staff stay on guard against the threats they will encounter, while simulated phishing campaigns let you verify that the training is effective – and any staff who fall victim to one of these simulated “attacks” will be better able to identify the real thing in the future. A minimal sketch of how the tracking side of such a campaign might work is shown below.
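The Python sketch below is purely illustrative – every address and URL in it is hypothetical, and real programs typically use dedicated phishing-simulation platforms that also handle email delivery, landing pages, and automatic follow-up training. It shows the core idea: each recipient gets a unique tracking link, and clicks reveal who needs more help.

```python
import secrets

# Hypothetical staff list and landing-page URL, for illustration only.
STAFF = ["alice@example.com", "bob@example.com", "carol@example.com"]
BASE_URL = "https://training.example.com/landing"

def build_campaign(staff: list[str]) -> dict[str, str]:
    """Give each recipient a unique, unguessable tracking token."""
    return {secrets.token_urlsafe(16): email for email in staff}

def record_click(token: str, campaign: dict[str, str],
                 clicked: set[str]) -> None:
    """Called when a tracked link is opened; note who fell for it."""
    if token in campaign:
        clicked.add(campaign[token])

# --- usage ---
campaign = build_campaign(STAFF)
clicked: set[str] = set()

# Each recipient's personalised link would look like this:
for token, email in campaign.items():
    print(f"{email} -> {BASE_URL}?t={token}")

# Simulate one recipient clicking their personalised link.
record_click(next(iter(campaign)), campaign, clicked)

for email in STAFF:
    status = "clicked - needs follow-up training" if email in clicked else "ok"
    print(f"{email}: {status}")
```

In practice, record_click would sit behind the landing-page URL, so that opening the link is what registers the click.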
If you have any questions about any of the topics we’ve covered in this article, or would just like to have a conversation about the different ways we can help keep your organisation cybersecure, let us know!
You can reach out to us via email at sales@micron21.com or via phone on 1300 769 972 (Option #1).
1, World Economic Forum, “Global Cybersecurity Outlook 2025”, <https://www.weforum.org/publications/global-cybersecurity-outlook-2025/digest/>
2, Google, “Cybersecurity Forecast 2025”, <https://cloud.google.com/blog/topics/threat-intelligence/cybersecurity-forecast-2025/>
3, SentinelOne, “10 Cyber Security Trends For 2025”, <https://www.sentinelone.com/cybersecurity-101/cybersecurity/cyber-security-trends/>
4, Check Point, “2025 Cyber Security Predictions”, <https://blog.checkpoint.com/security/2025-cyber-security-predictions-the-rise-of-ai-driven-attacks-quantum-threats-and-social-media-exploitation/>