By Jason Scanlon, Virtual Chief Technology Officer, Numata
Artificial Intelligence (AI) is a hot topic at the moment, and why not? It’s shown the world that we can have our innovation cake and eat it, too. However, AI doesn’t end with shouting commands at Alexa or asking Siri to remember your shopping list – it has evolved in a big way.
There’s a new kid on the block that not only mimics humans but also acts as a service provider across different fields.
Developed by OpenAI, the company co-founded in 2015 by Sam Altman and Elon Musk, and released in November 2022, ChatGPT allows users to have natural language conversations and can even reply to complex requests. Using natural language generation and natural language processing, ChatGPT generates content from vast amounts of training data.
What can it do?
While it can’t bring you coffee, it does use machine learning to answer follow-up questions, admit to mistakes, challenge its users, and reject inappropriate commands. What’s more, it can write essays, compose music, answer test questions, and even write computer code.
“Wow! Where do I sign up?”
Sure, it sounds like a lot of fun, but nothing AI-related is without risks. Its mere potential is sending shock waves through industries worldwide, from the education sector to search giants like Google.
Alongside the debate about AI chatbots' effects on education and business, there is growing discussion of their potential for hackers and cybersecurity providers alike.
Keeping it ethical
1. Automated incident narratives
Currently, analysts do a lot of manual work or use a Security Orchestration, Automation, and Response (SOAR) tool to create a cyberattack narrative and determine its severity. However, research suggests that analysts can take data outputs from a Security Information and Event Management (SIEM) tool, run it through ChatGPT, and generate an automated incident narrative.
This promises to relieve the information overload burdening analysts while easing the pressure of finding enough skilled cybersecurity professionals to handle the data.
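As a rough sketch of that workflow, the snippet below assumes SIEM alerts exported as JSON and builds a prompt that could be handed to ChatGPT via OpenAI's chat completions API. The alert fields and the `build_incident_prompt` helper are illustrative examples, not any specific SIEM's schema or a production integration.

```python
import json

# Illustrative SIEM alert export; real field names vary by product.
ALERTS = [
    {"time": "2023-03-01T09:14:02Z", "rule": "Brute-force login",
     "src_ip": "203.0.113.45", "user": "j.smith", "count": 412},
    {"time": "2023-03-01T09:21:37Z", "rule": "Privilege escalation",
     "src_ip": "203.0.113.45", "user": "j.smith", "count": 1},
]

def build_incident_prompt(alerts):
    """Turn raw SIEM alerts into a prompt asking ChatGPT for an
    incident narrative and a severity rating."""
    return (
        "You are a SOC analyst. Write a short incident narrative and "
        "rate the severity (low/medium/high) for these SIEM alerts:\n"
        + json.dumps(alerts, indent=2)
    )

prompt = build_incident_prompt(ALERTS)
print(prompt)

# Sending the prompt would look roughly like this (requires the
# `openai` package and an API key, so it is commented out here):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(resp.choices[0].message.content)
```

In practice, the value is that the model turns a wall of correlated alerts into a readable story an analyst can triage in seconds, rather than reconstructing the timeline by hand.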
2. Reduced knowledge barrier for executives
ChatGPT can simplify cybersecurity jargon for those who don’t necessarily deal with the subject daily. Its ability to summarise complex cybersecurity topics makes learning and understanding highly relevant information easier and faster for those around the decision-making table.
3. Automated cyber defence testing
The dark side

1. Low-barrier malware generation

Recent research revealed that cybercriminals could use ChatGPT to generate ransomware and highly evasive malware code. In fact, Recorded Future researchers discovered the tool could produce effective results with minimal need for cybersecurity or computer science knowledge.
2. Human imitation enables phishing and social engineering