Cybersecurity threat or aid?

The effects of ChatGPT and other AI tools

By Jason Scanlon, Virtual Chief Technology Officer, Numata

Artificial Intelligence (AI) is a hot topic at the moment, and why not? It’s shown the world that we can have our innovation cake and eat it, too. However, AI doesn’t end with shouting commands at Alexa or asking Siri to remember your shopping list – it has evolved in a big way.

There’s a new kid on the block that not only mimics humans but also acts as a service provider across different fields.

Are Chatbots and ChatGPT the new humans?

Not yet, but they’re getting close.

If you’ve ever chatted with a Chatbot, you’d know that the interaction isn’t all that special. That’s because it’s not supposed to replace humans but instead offer quick and easy answers or services.

However, the human simulation we know and love has changed with the introduction of ChatGPT, the human conversation simulator revolutionising the AI world.

What is ChatGPT?

Released in November 2022 by OpenAI – the research company co-founded in 2015 by Sam Altman and Elon Musk – ChatGPT allows users to have natural-language conversations and can even reply to complex requests. Using natural language generation and natural language processing, ChatGPT generates content based on the significant amounts of data it was trained on.

What can it do?

While it can’t bring you coffee, it does use machine learning to answer follow-up questions, admit to mistakes, challenge its users, and reject inappropriate commands. What’s more, it can write essays, compose music, answer test questions, and even write computer code.

“Wow! Where do I sign up?”

Sure, it sounds like a lot of fun, but nothing AI-related is without risks. Its mere potential is sending shock waves throughout industries worldwide – from the education sector to Google itself.

Although there are threats related to AI Chatbots and their effects on education and businesses, there is also talk of their potential for hackers and cybersecurity providers alike.

Keeping it ethical

For AI to deliver any positive value, organisations and individuals must develop and execute systems that respect the values and rights of people and societies. This means always considering transparency, accountability, privacy, fairness, non-discrimination, and human control over AI decision-making.

In other words, keep it clean, follow ethical and legal frameworks, and continuously reflect on the integrity of your systems.

Pros of ChatGPT for cybersecurity

1. Automated incident analyses

Currently, analysts do a lot of manual work or use a Security Orchestration, Automation, and Response (SOAR) tool to create a cyberattack narrative and determine its severity. However, research suggests that analysts can take data outputs from a Security Information and Event Management (SIEM) tool, run them through ChatGPT, and generate an automated incident narrative.

This promises to relieve the information overload burdening analysts while easing the pressure of finding enough skilled cybersecurity professionals to handle the data.
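As a minimal sketch of what this could look like in practice, the snippet below formats raw SIEM event records (with hypothetical field names) into a prompt asking an LLM for an incident narrative and severity rating. The actual call to ChatGPT via the OpenAI API is left as a comment, since keys and model choice will vary by deployment.

```python
import json

def build_incident_prompt(siem_events):
    """Format raw SIEM event records into a prompt asking an LLM
    to produce a plain-language incident narrative."""
    event_lines = "\n".join(json.dumps(e) for e in siem_events)
    return (
        "You are a security analyst. Summarise the following SIEM events "
        "into a short incident narrative and rate the severity "
        "(low/medium/high):\n\n" + event_lines
    )

# Hypothetical SIEM output for a credential-stuffing pattern
events = [
    {"time": "2023-03-01T09:14:02Z", "rule": "Multiple failed logins",
     "src_ip": "203.0.113.7", "user": "jsmith", "count": 42},
    {"time": "2023-03-01T09:15:10Z", "rule": "Successful login after failures",
     "src_ip": "203.0.113.7", "user": "jsmith", "count": 1},
]

prompt = build_incident_prompt(events)
# The prompt would then be sent to ChatGPT through the OpenAI API and the
# response stored alongside the alert in the case-management system.
```

The value here is not the prompt itself but the automation around it: every alert batch gets the same structured summarisation, freeing analysts to review narratives rather than raw event streams.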

2. Reduced knowledge barrier for executives

ChatGPT can simplify cybersecurity jargon for those who don’t necessarily deal with the subject daily. Its ability to summarise complex cybersecurity topics makes learning and understanding highly relevant information easier and faster for those around the decision-making table.

3. Automated cyber defence testing

Penetration tests are important to analyse the efficacy of cyber defence systems and identify faults. ChatGPT can help “ethical hackers” test their systems by automating elements of attack simulations, lessening the need to research and create their own malware.

Cons of ChatGPT for cybersecurity

1. Increased malware development

Recent research revealed that cybercriminals could use ChatGPT to generate ransomware and highly evasive malware code. In fact, Recorded Future researchers discovered the tool could produce effective results with minimal need for cybersecurity or computer science knowledge.

2. Human imitation enables phishing and social engineering

Due to its natural language ability, the Chatbot can help threat actors who aren’t fluent in English to write phishing emails and distribute info stealer malware, botnet staging tools, remote access trojans, loaders and droppers, or one-time ransomware executables.

What’s more, the absence of spelling mistakes, grammatical errors, and misused English vocabulary may make the message more believable to the general public.

To ChatGPT or not to ChatGPT?

The more advanced technology gets, the more aware we need to be of its positive and negative effects. Although you can’t always prepare for unpredictability, remember to remain vigilant, upskill yourself on risks, and implement cybersecurity measures to prevent cyberattacks.

Find out how Numata can protect your business against new-age cyber threats.
