
ChatGPT: A Dream or a Nightmare? (A Comment from a Cybersecurity POV)

Everyone has been talking about ChatGPT these days. It seems like a dream come true for many who have long wished for a super-efficient AI-driven assistant. But there is something “phishy” about it, at least from a cybersecurity perspective. In this blog, we will look into some of the plausible cybersecurity threats that ChatGPT poses, chiefly phishing.

What is ChatGPT? Why has it become so popular?

ChatGPT (Chat Generative Pre-trained Transformer), developed by OpenAI, provides AI-driven, chat-based assistance grounded in context. It uses natural language processing, machine learning, and artificial intelligence to render text-based assistance, and it has gained popularity for its effectiveness at producing human-like text from the context supplied in a chat. Students, researchers, coders, and hackers worldwide are using ChatGPT to generate seemingly meaningful text.

Why are cybercriminals treating ChatGPT as their new friend?

You heard it right: hackers are using ChatGPT to generate meaningful text. In an experiment by TechCrunch, ChatGPT initially declined a request to write a seemingly legitimate phishing email, replying that it was not programmed to create harmful or malicious content. After a few more attempts, however, the researchers were able to get it to produce a convincing phishing email.

This opens up a whole range of possibilities for malicious actors in the cybercrime world. Even if ChatGPT cannot be used to write malicious code or tools directly, it can certainly be used to design them and develop parts of them.

“ChatGPT can undoubtedly be used to generate malicious codes without being flagged as malicious,” said renowned security researcher Dr. Ozarslan, who has worked in cybersecurity with NATO and has won awards including SANS Institute, RSA NetWars, and Global Interactive Cyber Range awards. He put ChatGPT to the test by instructing it to write Swift code to retrieve MS Office files from his MacBook and generate a private key for decryption. He added, “sophisticated phishing campaigns and evasion codes to bypass threat detection were also created using the program.”

This is concerning because, until now, a lack of technical skills has kept many potentially motivated threat actors out of criminal activity. The program is now available to everyone on the clear web, removing the barrier of having to use the dark web: ChatGPT makes it easy for newbies, wannabes, and script kiddies to learn the ropes without ever leaving the safety of the “clear web.”

It goes to show how dangerous ChatGPT can be: it enables unsophisticated actors to deploy sophisticated phishing and cyberattack techniques. Simply put, any amateur cyber attacker can now launch sophisticated attacks using ChatGPT.

This means increased cybersecurity risk for Small and Medium Enterprises (SMEs). According to research published in MDPI, AI-based chat assistants like ChatGPT can be used to plan malicious chat-based social engineering (CSE) attacks against SMEs and their customers by mimicking human-like conversations with victims.

Even attackers who are not well versed in the language can engage in social engineering attacks using the text generated by AI-powered chat assistants.

Another plausible concern is that AI-assisted chat assistants such as ChatGPT can be used to spread misleading information or misinformation in critical fields such as medical research, defense, and cybersecurity. To catch AI-generated misinformation, experts use AI-driven transformers that quickly fact-check content against a large range of sources. However, chat assistants like ChatGPT also use transformers and can generate reports that slip past cybersecurity experts, as a 2021 study by researchers at the University of Maryland found.

That research also found that AI-powered chat assistants can reduce the effectiveness of cybersecurity by feeding misleading information into the threat intelligence used for automated cybersecurity response. This could keep experts from attending to the actual vulnerabilities that need to be addressed.

The real nightmare: ChatGPT-based cyber threats!

According to many cybersecurity experts, ChatGPT primarily poses the following cyber threats:

  1. Business Email Compromise (BEC) and phishing
  2. Generation of malicious code (such as ransomware code)
  3. Automation of cyberattack tasks using ChatGPT
  4. Simulation of cyber defense/attack to evolve attack techniques

1. Business Email Compromise

It is an advanced phishing attack in which an attacker sends a fraudulent email to a target and asks the victim to carry out some form of monetary transaction. The attacker may also ask the victim to divulge company secrets or sensitive information.

2. Generation of malicious codes

Instances have been discovered on the dark web of hackers using ChatGPT to generate malicious code for ransomware attacks. Hackers may also enlist ChatGPT's assistance to write polymorphic malware, a type of malware that constantly changes its code to evade detection.

This endangers the cybersecurity of small and medium-sized businesses whose security solutions have not been configured by experts.

According to SonicWall, over 270,228 new malware variants were discovered in the first half of 2022.

3. Attack automation

ChatGPT can be used to write code that automates certain tasks of a cyberattack, making it much easier for attackers to carry out sophisticated attacks. They could design Advanced Persistent Threats aimed at disrupting supply chains or manufacturing operations at large, and create tailored code for deploying a range of automated cyberattacks.

Around 71% of businesses fell victim to a cyberattack last year.

4. Attack/Defense simulation

Attackers are evolving their techniques by using ChatGPT to simulate their targets' cyber defenses and uncover vulnerabilities. Similarly, they evaluate the effectiveness of their attacks by simulating them, pulling research on different malware techniques to create the strain of malware that is most effective against their targets.

$170,404 was the average ransom paid by mid-sized organizations in 2022.

Now the question that remains: is there a way out of ChatGPT-based attacks?

Since most of these attacks target small and medium businesses, it is important for them to take steps at an organizational level to mitigate the risk of falling victim to business email compromise attacks.

If you are a small or medium business owner, you must spread awareness of such attacks and restrict user access on a least-privilege basis. You must also train your personnel to recognize advanced phishing attacks built on seemingly genuine ChatGPT-generated text. There are a few tell-tale signs of text that was not written by a human: inconsistent grammar, repeated use of certain words, wordy sentences, a lack of idioms, and phrasing that sounds meaningless when taken together. A simple check for some of these signs is sketched below. To combat these threats, it is also important to deploy AI-based cybersecurity, driven by experts, that can identify the subtle inconsistencies and vulnerabilities characteristic of ChatGPT-based attacks.
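To make the tell-tale signs above a little more concrete, here is a minimal, illustrative Python sketch that scores an email body on a few of them (word repetition, low vocabulary variety, wordy sentences). The function name, thresholds, and weights are placeholders for illustration only, not tuned values or part of any product; a real deployment would rely on properly trained AI-based detection.

```python
import re
from collections import Counter

def suspicion_score(text: str) -> float:
    """Rough heuristic score (0..1) for how "machine-like" an email body reads.

    Illustrative only: checks word repetition, lexical diversity, and
    average sentence length. Thresholds are arbitrary placeholders.
    """
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return 0.0

    # 1. Repeated use of certain words: share taken up by the top 5 words.
    counts = Counter(words)
    top5_share = sum(c for _, c in counts.most_common(5)) / len(words)

    # 2. Low lexical diversity: unique words vs. total words.
    diversity = len(counts) / len(words)

    # 3. Wordy sentences: average sentence length in words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_sentence_len = len(words) / max(len(sentences), 1)

    score = 0.0
    if top5_share > 0.35:        # heavy repetition of a few words
        score += 0.4
    if diversity < 0.45:         # small vocabulary for the amount of text
        score += 0.3
    if avg_sentence_len > 28:    # consistently long, wordy sentences
        score += 0.3
    return min(score, 1.0)


if __name__ == "__main__":
    sample = (
        "We kindly request that you kindly process the payment request "
        "at your earliest convenience and kindly confirm the payment."
    )
    print(f"Suspicion score: {suspicion_score(sample):.2f}")
```

A score like this is only one weak signal and should never be the sole basis for flagging a message; it simply shows how the human-readable cues listed above can be turned into something a mail filter can measure.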

How can AI-powered chat assistants be used to create BEC and phishing attacks?

For BEC and phishing attacks to work, the attacker must appear trustworthy. ChatGPT can generate contextual text that seems genuine even to experts, and cyber attackers can use it to craft advanced business email compromise campaigns, a gateway to far more serious cyberattacks.

What makes ChatGPT-based BEC attacks dangerous is that they can easily bypass email protection scanners, since they don't contain any malicious attachments. One complementary defense is to check the sender's domain against the domains you actually trust, as sketched below.
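Because BEC mail carries no malicious attachment for a content scanner to catch, a simple extra signal is to flag sender domains that closely imitate trusted ones (for example, "examp1e.com" versus "example.com"). The sketch below is a minimal Python illustration of that idea; the TRUSTED_DOMAINS list, threshold, and function name are hypothetical placeholders, not part of any specific email security product.

```python
from difflib import SequenceMatcher
from typing import Optional

# Hypothetical list of domains your organization actually does business with.
TRUSTED_DOMAINS = ["example.com", "sharkstriker.com", "yourbank.com"]

def lookalike_of(sender_domain: str, threshold: float = 0.8) -> Optional[str]:
    """Return the trusted domain that `sender_domain` closely imitates, if any.

    An exact match is fine; a near-miss (high string similarity but not
    identical) is a classic sign of a spoofed BEC sender.
    """
    sender_domain = sender_domain.lower().strip()
    for trusted in TRUSTED_DOMAINS:
        if sender_domain == trusted:
            return None  # exact match: not a lookalike
        similarity = SequenceMatcher(None, sender_domain, trusted).ratio()
        if similarity >= threshold:
            return trusted
    return None


if __name__ == "__main__":
    suspect = "examp1e.com"
    hit = lookalike_of(suspect)
    if hit:
        print(f"'{suspect}' resembles trusted domain '{hit}' - flag for review")
```

In practice this kind of check sits alongside sender authentication (SPF, DKIM, DMARC) and user awareness training rather than replacing them.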

As per the FBI's 2020 report, the average loss per BEC victim increased by 29% year on year.

To Conclude

ChatGPT has brought the world to the dawn of the digital future, opening up possibilities for unlocked productivity and efficiency. Even though it has made many jobs much easier, it has also helped attackers devise evolved methods that challenge even the most sophisticated cybersecurity in place. It has become one of the most immediate threats to small and medium enterprises, since many of them are unaware of the cybersecurity risks ChatGPT brings and of the attacks orchestrated with it, and they do not possess the expertise to deal with those attacks.

Modern-day attackers have started using ChatGPT to write evolved malware code and carry out ransomware attacks. It has empowered unsophisticated actors to engage in highly sophisticated attacks by helping them write extensive code for complex tools that can break through the defenses of most enterprises. It has therefore become essential for enterprises to deploy AI-powered cybersecurity backed by human expertise.

SharkStriker helps enterprises improve their cybersecurity posture through a range of holistic cybersecurity services tailored to each organization's cybersecurity and compliance goals. We take a unique approach of blending AI with cybersecurity expertise to render seamless protection against the most modern threats. Talk with our experts today to get bespoke cybersecurity services that best suit your requirements and budget.
