Updater
November 11, 2024, in technology

Can AI enhance cybersecurity?

While AI has empowered hackers, it also provides tools for defending systems. Generative AI is expanding the toolbox but introduces risks of its own.

Nearly everything related to AI seems to be a double-edged sword, and that is as true for cybersecurity as it is for AI's impact on data center growth. Yes, AI has empowered hackers. However, it also provides tools for defending systems against attacks.

Generative AI (GenAI) has further expanded the toolbox for cybersecurity experts, and introduced new risks of its own. Here we explore both sides of the divide: how to use AI to defend against hackers and their AI weapons.

AI on the dark side of cybersecurity

Forget about the days of slipping malware into a computer using a scammy ad or even a basic phishing email. Today, AI has helped hackers level up their game. One of the most notable ways is through the use of deepfakes: a report from identity-verification platform Sumsub found that deepfake incidents in the financial technology industry alone increased by 700% in 2023. Through the power of GenAI, scammers now use images, videos, and even sound clips to impersonate real customers and executives and fool cybersecurity systems. Of course, AI can also be used to detect deepfakes.

Today, hackers do not even need much technical skill. As the Financial Times (FT) reports, “In one experiment, cyber security researchers used an AI large language model (LLM) to develop a benign form of malware that can collect personal information, such as usernames, passwords and credit card numbers. By constantly changing its code, the malware was able to evade IT security systems, the researchers found.”

Of course, as with so many other problems posed by AI, part of the answer is AI itself.

Combatting AI-driven fraud with AI

Using AI to combat cybersecurity threats is not a new idea. As the FT reports, financial firms have used this tactic for at least a decade to detect potential fraud by spotting patterns and anomalies. Meanwhile, GeeksforGeeks identifies five aspects of cyber-defense:

  • Detection
  • Prediction
  • Adaptation
  • Automation
  • Response

and indicates that AI has the potential to help at each step of the way.
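As a concrete illustration of the detection step, the sketch below trains an unsupervised anomaly detector on network-connection features and flags outliers. It is a minimal example assuming scikit-learn; the feature set and traffic values are invented for illustration and are not drawn from any tool mentioned in this article.

```python
# Minimal anomaly-detection sketch for the "detection" step above.
# Assumes scikit-learn; features and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one connection: [bytes_sent, bytes_received, duration_s, failed_logins]
rng = np.random.default_rng(42)
normal_traffic = rng.normal(
    loc=[5_000, 20_000, 30, 0], scale=[1_000, 4_000, 10, 0.5], size=(500, 4)
)

# Train on traffic assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new connections: predict() returns -1 for anomalies, 1 for normal.
new_connections = np.array([
    [5_200, 19_500, 28, 0],   # looks like an ordinary session
    [900_000, 1_200, 2, 35],  # exfiltration-like burst with many failed logins
])
for row, label in zip(new_connections, detector.predict(new_connections)):
    print(row, "-> anomaly" if label == -1 else "-> normal")
```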

AI acceleration

AI algorithms can process data at a speed unequaled by humans. They can also help automate incident response and streamline efforts by detecting threats sooner and even triaging incidents, identifying the most pressing threats so that humans can respond quickly. From detecting email scams to constantly scouring a network for threats, there are many ways AI can enhance cybersecurity efforts. And while AI may be a long-standing tool in the cybersecurity toolbox, generative AI is opening new avenues to augment the efforts of the all-important security professionals.
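To make the triage idea concrete, here is a minimal sketch that scores alerts and surfaces the most pressing ones first. The fields and the weighting formula are hypothetical assumptions; a real system would derive them from threat intelligence and asset inventories.

```python
# Minimal alert-triage sketch: rank alerts so analysts see the worst first.
# The fields and the scoring formula are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str             # which sensor raised the alert
    severity: int           # 1 (low) .. 5 (critical), as reported by the sensor
    asset_criticality: int  # 1 .. 5, importance of the affected system
    confidence: float       # 0..1, detector confidence that the alert is real

def priority(alert: Alert) -> float:
    # A high-confidence hit on a critical asset rises to the top of the queue.
    return alert.severity * alert.asset_criticality * alert.confidence

alerts = [
    Alert("ids", severity=3, asset_criticality=2, confidence=0.6),
    Alert("edr", severity=5, asset_criticality=5, confidence=0.9),
    Alert("email-gateway", severity=2, asset_criticality=1, confidence=0.4),
]

for a in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(a):5.1f}  {a.source}")
```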

GenAI expands the toolbox

For instance, another FT article reports, “Already, generative AI is being used to create specific models, chatbots, or AI assistants that can help human analysts detect and respond to hacks — similar to ChatGPT, but for cyber security. Microsoft has launched one such effort, which it calls Security Copilot, while Google has a model called SEC Pub.”
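The basic pattern behind such assistants can be sketched in a few lines: hand a suspicious artifact to a general-purpose LLM along with a security-analyst prompt. The example below assumes the openai Python package, an API key in the environment, and an invented log excerpt; it is not how Security Copilot or SEC Pub are actually implemented.

```python
# Sketch of LLM-assisted log review, in the spirit of the assistants above.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name, prompt and log excerpt are illustrative, not vendor guidance.
from openai import OpenAI

client = OpenAI()

log_excerpt = """\
Jan 12 03:14:07 host sshd[2211]: Failed password for root from 203.0.113.9 port 52414
Jan 12 03:14:09 host sshd[2211]: Failed password for root from 203.0.113.9 port 52415
Jan 12 03:14:12 host sshd[2214]: Accepted password for root from 203.0.113.9 port 52420
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a SOC analyst. Flag suspicious activity and suggest next steps."},
        {"role": "user", "content": f"Review this auth log:\n{log_excerpt}"},
    ],
)
print(response.choices[0].message.content)
```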

Still, keeping up with bad actors who have seemingly unlimited access to the latest tools is a problem for companies constrained by budgets and potentially slow purchasing processes.

For organizations with fewer resources, however, access to GenAI cybersecurity tools can level the playing field. Forbes reports, “[GenAI] empowers individuals who are not security professionals to understand their security posture….” Going forward, AI will continue to augment what’s possible and escalate the cybersecurity arms race.

Using AI defenses proactively

The state of AI is changing rapidly, and new opportunities for using AI in cybersecurity are constantly emerging. Like so many other technologies that can automate tasks, AI stands to free up people to focus on the bigger picture.

AI defenses can also be used proactively to create deceptive fronts that mislead and derail hackers' attempts at penetration. Secureframe writes, “Deception technology platforms are increasingly implementing AI to deceive attackers with realistic vulnerability projections and effective baits and lures.”
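One very simple form of deception is a low-interaction honeypot: a decoy service that accepts connections only to record who is probing it. The sketch below listens for connections, presents a fake banner, and logs each attempt; the port, banner and log format are invented for illustration, and commercial deception platforms present far more convincing decoys.

```python
# Minimal low-interaction honeypot sketch: a decoy "service" that logs probes.
# The port, banner and log format are illustrative assumptions.
import datetime
import socket

HOST, PORT = "0.0.0.0", 2222           # decoy port for a fake SSH service
BANNER = b"SSH-2.0-OpenSSH_8.2p1\r\n"  # plausible-looking banner as bait

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"decoy listening on port {PORT}")
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(BANNER)    # pretend to be a real server
            data = conn.recv(1024)  # capture whatever the scanner sends
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            print(f"{stamp} probe from {addr[0]}:{addr[1]} sent {data[:60]!r}")
```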

'Scam baiting' with Daisy

A recent example of scam-baiting is the chatbot developed by British telephone operator O2 to impersonate an elderly lady ('dAIsy') and waste the time of scammers who call her seeking bank details or other sensitive information.

“While they’re busy talking to me they can’t be scamming you, and let’s face it, dear, I’ve got all the time in the world,” Daisy says in the introductory video from O2, according to Forbes.

Daisy, whose number has been published on scammers' dark sites worldwide, was reached by over a thousand callers in her first few days of activity, drawing them into meandering conversations that lasted up to forty minutes.
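At its core, such a bot is a persona prompt kept in character across a long conversation. The sketch below assumes the same OpenAI-style chat API as the earlier example; the prompt, model name and loop are illustrative and are not O2's actual implementation.

```python
# Sketch of a scam-baiting persona in the spirit of dAIsy: an LLM kept in
# character to waste a scammer's time. Prompt and model are illustrative;
# this is not O2's implementation.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Daisy, a chatty elderly lady. Never reveal real personal or "
    "financial information. Ramble warmly about your cats, knitting and the "
    "weather, and politely ask the caller to repeat themselves."
)

history = [{"role": "system", "content": PERSONA}]
while True:
    history.append({"role": "user", "content": input("caller> ")})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    print("daisy>", text)
```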

AI in cybersecurity - opportunities and risks

Many of the arguments for using AI in cybersecurity are basic business considerations: cost-efficiency, eliminating human error, and enabling better decision-making, to name a few. Still, there are some best practices and potential downsides to keep in mind before investing more resources in AI.

Some of the concerns about using AI in cybersecurity are the same as those surrounding AI use in other arenas. Problems of bias, a lack of staff trained in the specific technology, and massive energy consumption are among the considerations. "Companies are being damaged by AI systems that are formally trained on bad data," Thomas P. Vartanian, executive director at the nonprofit Financial Technology and Cybersecurity Center, told TechTarget. "That can lead to bias in the output, and bias in the output often leads to lawsuits and reputational harm. But most fundamentally, it can just lead to the wrong answer."

Expert replacement?

Given the shortage of staff in the cybersecurity field, the question arises - will AI technology eliminate the need for trained security professionals?

The answer seems to be - no. As in other fields, AI can act as a 'force multiplier', increasing the range of human intervention by automating routine and time-consuming tasks. But to be effective, organizations' AI-driven security operations will still need expert guidance from suitably trained professionals.

As sector portal Security Today points out: "Like all kinds of technology, organizations will need professionals skilled in deploying, managing, and maintaining AI-based cybersecurity. AI will also need humans to review, audit and monitor AI training data ... because AI can miss things, amplify biases or make wrong calculations or inferences, which can lead to negative outcomes."

Interested?

Find out more about Eidosmedia products and technology.

GET IN TOUCH