AI is transforming the cybersecurity landscape

AI in security

Artificial intelligence has a significant impact on cybersecurity. With cybercriminals intensifying and automating their attacks, organizations are increasingly integrating AI into their defensive ecosystems.


AI is increasing the risk of cybersecurity threats. No longer limited by human constraints, cybercriminals are intensifying and automating their attacks, exploiting vulnerabilities more effectively and targeting phishing more precisely. This also means that organizations must integrate AI into their entire cybersecurity ecosystem in order to deploy appropriate countermeasures. It is fundamentally changing the role and responsibilities of CISOs: there is a clear shift from reactive to proactive, particularly thanks to threat intelligence and automated log analysis.

“Phishing attacks, for example, are carried out in several languages, using sophisticated language,” Fabrice Clément, CISO at the Proximus Group, explains. “By relying not only on rules, but also on behavior, AI enables companies to detect changes in patterns and automate actions, including managing alerts and redirecting potential victims to warning pages.”
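The behavior-based detection Fabrice describes can be reduced to a very simple idea: learn what "normal" looks like for an account or system, then alert when activity deviates from that pattern. Below is a minimal, purely illustrative sketch of that principle, a z-score check on hourly event counts; real systems use far richer behavioral models, and the function name, threshold, and sample data are all assumptions for the example.

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it deviates from the historical mean
    by more than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Hourly login-attempt counts for one account (illustrative data).
baseline = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4]
print(is_anomalous(baseline, 5))    # a normal hour: False
print(is_anomalous(baseline, 60))   # sudden spike: True, raise an alert
```

In a production pipeline, a detection like this would feed the automated actions the article mentions: managing the alert and, for phishing, redirecting the potential victim to a warning page.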

“The effectiveness of AI is a major issue, and the challenge for cybersecurity is to prove ourselves more effective than our adversaries.”

Fabrice Clément, Chief Information Security Officer (CISO) at the Proximus Group

A question of effectiveness

Fabrice is convinced that the effectiveness of artificial intelligence is the primary challenge facing CISOs in terms of threats. “Thanks to AI, the human adversaries we face are becoming increasingly effective. They are able to exploit large amounts of data, particularly through social engineering techniques or organizational reconnaissance data, and that makes them better at targeting and guiding their attacks. The challenge of cybersecurity is to be more effective than our adversaries.”

“The threat is no longer solely from cybercriminal groups, but also from hostile nations capable of investing enormous resources in technology.”

Benoît Hespel, Head of AI at Proximus ADA

The profile of cybercriminals is changing

In the past, hackers needed specialized skills, but that has changed. “A few years ago, all you had to do was ask ChatGPT how to create a keylogger and it would give you the code,” Benoît Hespel, Head of AI at Proximus ADA, recalls. “Since then, protection systems have been put in place in consumer LLMs to prevent this type of misuse. However, the threat continues to grow on the dark web, where criminals create, use, and share other large language models to create malware.”

Today, anyone with malicious intent can find a tool to carry out an attack. The result? More attackers, more automation, and more entities targeted at once by specialized AI agents.

Offensive tactics enhanced by AI

In addition to boosting social engineering, AI also personalizes phishing and refines deepfakes. “Whether it's text, video, sound, images, or QR codes, it's becoming increasingly difficult to distinguish between what's real and what's fake. Bots can even log into Teams to impersonate someone and collect information,” Benoît points out. “Some malware can also become adaptive and learn from the environment in which it is deployed, causing more damage and making it more difficult to detect.”

Adversarial attacks

In companies, AI projects are often managed by people other than the IT team. For Benoît, this constitutes a new channel for threats: “AI models are subject to far fewer controls and guidelines than conventional IT projects, and someone with access to the model can use it against its creator by injecting prompts into large language models, or by poisoning the data so that the model no longer detects a certain type of fraud, activity, or correlation.” This is known as an adversarial AI attack.
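To make the data-poisoning idea concrete, here is a deliberately tiny sketch, not any real fraud engine: a 1-nearest-neighbour classifier over transaction amounts, where an attacker with write access to the training data slips in mislabelled points near the amounts they intend to use. All names and figures are invented for illustration.

```python
def classify(amount, training):
    """1-nearest-neighbour classifier: label of the closest training amount."""
    nearest = min(training, key=lambda row: abs(row[0] - amount))
    return nearest[1]

# Clean training data: (amount, label) pairs.
training = [(10, "legit"), (25, "legit"), (40, "legit"),
            (900, "fraud"), (950, "fraud"), (1000, "fraud")]

print(classify(940, training))   # detected as "fraud"

# Poisoning: mislabelled points are inserted near the attacker's
# target amount, so the model "no longer detects" that fraud pattern.
poisoned = training + [(930, "legit"), (945, "legit")]
print(classify(940, poisoned))   # the same transaction now passes as "legit"
```

A handful of poisoned records is enough to flip the outcome here, which is why controls on who can write to training data matter as much as controls on the model itself.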

Sometimes developers ask ChatGPT which open source library to use to develop a program. ChatGPT gives them three names, one of which is completely made up. This is the phenomenon of hallucination inherent in LLMs. But a hacker can create this fictitious library by inserting a backdoor or malware into it. All it takes is for a developer to install the library for the wolf to enter the sheepfold.
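One pragmatic defense against hallucinated dependencies is to vet every suggested package name against a curated internal allowlist before installing anything. The sketch below assumes such a list exists (the `APPROVED` set and the suspect package name are invented for the example); in practice the list could be a private mirror or a reviewed lockfile.

```python
# Hypothetical internal allowlist of vetted dependencies.
APPROVED = {"requests", "numpy", "cryptography"}

def vet(packages):
    """Split requested dependency names into approved and suspect lists."""
    suggested = set(packages)
    return sorted(suggested & APPROVED), sorted(suggested - APPROVED)

ok, suspect = vet(["requests", "numpy", "totally-real-crypto-lib"])
print("install:", ok)           # install: ['numpy', 'requests']
print("review first:", suspect) # review first: ['totally-real-crypto-lib']
```

Anything that lands in the suspect list gets a human review before `pip install` ever runs, which closes the door the hallucinated library opens.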

