COMING SOON: ADAPTIVE AI ATTACKS
AI will increase the number and impact of cyber attacks, intel officers say
Ransomware is likely to be the biggest beneficiary in the next 2 years, UK's GCHQ says.
Dan Goodin – Jan 25, 2024 1:44 pm UTC
Threats from malicious cyber activity are likely to increase as nation-states, financially motivated criminals, and novices increasingly incorporate artificial intelligence into their routines, the UK's top intelligence agency said.
The assessment, from the UK's Government Communications Headquarters (GCHQ), predicted ransomware will be the biggest threat to get a boost from AI over the next two years. AI will lower barriers to entry, a change that will bring a surge of new entrants into the criminal enterprise. More experienced threat actors, such as nation-states, the commercial firms that serve them, and financially motivated crime groups, will likely also benefit, as AI allows them to identify vulnerabilities and bypass security defenses more efficiently.
"The emergent use of AI in cyber attacks is evolutionary, not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term," said Lindy Cameron, CEO of GCHQ's National Cyber Security Centre. Cameron and other UK intelligence officials said that their country must ramp up defenses to counter the growing threat.
The assessment, which was published Wednesday, focused on the effect AI is likely to have over the next two years. The chances of AI increasing the volume and impact of cyber attacks in that timeframe were described as "almost certain," the GCHQ's highest confidence rating. Other, more specific predictions listed as almost certain were:

- AI improving capabilities in reconnaissance and social engineering, making them more effective and harder to detect
- More impactful attacks against the UK as threat actors use AI to analyze exfiltrated data faster and more effectively, and use it to train AI models
- Beyond the two-year threshold, the commoditization of AI improving the capabilities of financially motivated and state actors

The trend of ransomware criminals and other types of threat actors already using AI will continue in 2025 and beyond.
The area of biggest impact from AI, Wednesday's assessment said, would be in social engineering, particularly for less-skilled actors.
"Generative AI (GenAI) can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing," intelligence officials wrote. "This will highly likely increase over the next two years as models evolve and uptake increases."
The assessment added: "To 2025, GenAI and large language models (LLMs) will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts."