
When ChatGPT and other chatbots first became widely accessible, the cybersecurity community worried about how AI technology could be abused for cyberattacks. Indeed, it didn’t take long for threat actors to find ways to bypass safety checks and use ChatGPT to create malicious code.

The situation now seems to have reversed: rather than serving as a tool for attacks, ChatGPT has itself become the source of a cybersecurity incident. SecurityWeek reported that OpenAI, the chatbot’s creator, acknowledged a flaw in an open-source library used in its programming, which resulted in a data breach. The service was shut down until the issue was fixed.

An Instant Success

Upon its release in late 2022, ChatGPT quickly gained widespread popularity, attracting interest from authors, software developers, and the general public alike. Despite its flaws (some of its output was awkward or obviously derivative), the chatbot became the fastest-growing consumer app in history, surpassing 100 million monthly users by January 2023. Within just one month of launch, roughly 13 million people were using the AI technology daily. By contrast, TikTok took nine months to reach a similar user base.

According to one cybersecurity expert, ChatGPT is like a Swiss Army knife: its many practical uses are a significant factor in the chatbot’s early and rapid rise in popularity.


The Security Breach

As with any widely used technology or app, it doesn’t take long for threat actors to identify vulnerabilities and launch attacks. In ChatGPT’s case, the exposure stemmed from a flaw involving the Redis open-source library, which allowed some users to see the conversation titles of other users who were online at the same time.

As Heavy.AI defines them, open-source libraries provide readily available, frequently used routines and resources, such as classes, configuration data, documentation, help data, message templates, pre-written code and subroutines, and type specifications and values, that developers use to create dynamic interfaces. Redis, which OpenAI uses (via the redis-py client library) to store user data for quick recall and access, is one such case. But because open-source code is developed by, and accessible to, hundreds of contributors, vulnerabilities can arise and go undetected. Attacks on open-source libraries have reportedly increased by 742% since 2019, with threat actors actively probing them for exploitable flaws.
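For readers unfamiliar with this kind of caching, here is a minimal sketch of storing and retrieving per-user data with the redis-py client. It is illustrative only: the key scheme, field names, and expiry are assumptions, and it does not reflect OpenAI’s actual implementation.

```python
# Hypothetical sketch: caching per-user data in Redis for fast recall.
# Illustrative only -- not OpenAI's actual code. Requires the redis-py
# client (pip install redis) and a Redis server on localhost.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cache_conversation_titles(user_id: str, titles: list[str]) -> None:
    """Store a user's conversation titles under a per-user key with a TTL."""
    key = f"user:{user_id}:titles"           # key scheme is an assumption
    r.set(key, json.dumps(titles), ex=3600)  # expire after one hour

def get_conversation_titles(user_id: str) -> list[str]:
    """Fetch the cached titles; an empty list means a cache miss."""
    raw = r.get(f"user:{user_id}:titles")
    return json.loads(raw) if raw else []

# The ChatGPT incident shows why the lookup key matters: if a bug ever
# serves data cached under one user's key to another user's request,
# that user sees someone else's conversation titles.
cache_conversation_titles("alice", ["Trip planning", "Redis notes"])
print(get_conversation_titles("alice"))
```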

While the ChatGPT attack was relatively small-scale and OpenAI took prompt action to correct the vulnerability, it’s worth noting that even minor cyber incidents can result in significant harm.

Upon further investigation, OpenAI researchers discovered that the same flaw was likely responsible for making some payment information visible for a few hours before ChatGPT was taken offline.

According to OpenAI’s statement on the incident, certain users were able to see another active user’s first and last name, email address, billing address, the last four digits of a credit card number, and the card’s expiration date. No full credit card numbers were ever exposed.


Security, AI, and Chatbots

Although fewer than 1% of ChatGPT’s users were affected, including paying subscribers, and the exposure was swiftly remedied, the incident hints at the dangers chatbots could pose to users in the future.

Privacy concerns surrounding chatbots have existed for some time. Mark McCreary, co-chair of the privacy and data security practice at the law firm Fox Rothschild LLP, has compared ChatGPT and other chatbots to an airplane’s black box: these AI systems store massive amounts of data, which they then draw on to respond to user queries and prompts. Whatever sits in a chatbot’s memory is also potentially accessible to other users, raising concerns about data privacy and security.

Chatbots may also capture a single user’s notes on a given subject, summarize them, and ask for further information. If those notes contain sensitive information, such as confidential customer data or a company’s intellectual property, they become part of the chatbot’s data store and are no longer solely under the user’s control. This underscores the importance of data privacy and security measures when using chatbots; one such measure is sketched below.
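As a precaution, sensitive fields can be stripped from text before it ever leaves your environment. The sketch below shows one minimal approach; the regular expressions are simplistic assumptions and send_to_chatbot is a hypothetical stub, so a real deployment would rely on a proper data-loss-prevention tool rather than this.

```python
# Illustrative sketch: redacting sensitive data before sending text to a
# chatbot API. The patterns are simplistic assumptions, not a complete
# DLP solution, and send_to_chatbot is a stub for a real API call.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_chatbot(prompt: str) -> None:
    # Hypothetical stub standing in for a real chatbot API call.
    print("Sending:", prompt)

notes = "Customer jane@example.com paid with card 4111 1111 1111 1111."
send_to_chatbot(redact(notes))
# -> Sending: Customer [EMAIL REDACTED] paid with card [CARD REDACTED].
```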

Increasing AI Use Restrictions

Due to privacy concerns, some companies and even entire nations are imposing restrictions. JPMorgan Chase, for instance, has limited its employees’ use of ChatGPT under the company’s existing controls on third-party software and apps, amid concerns about the security of financial information that could be entered into the chatbot. Italy went further, temporarily banning the program over worries that it failed to protect citizens’ personal data and did not comply with the GDPR.

As ChatGPT’s language capabilities advance, experts predict that threat actors will use it to create more sophisticated and convincing phishing scams. Phishing emails were once easy to spot by their grammatical errors and awkward phrasing, but with ChatGPT generating natural-sounding language, they could become much harder to detect. The chatbot’s fluent translation abilities could also help international threat actors craft convincing scams in multiple languages, making it all the more challenging for organizations and individuals to stay vigilant against potential cyber threats.

Indeed, AI-generated misinformation and conspiracy theories are a growing concern. As AI grows more sophisticated at generating text and manipulating media, convincing fake news and propaganda become easier to produce, with serious consequences not only for cybersecurity but for democracy and society as a whole. The op-ed produced by ChatGPT highlights the potential dangers of this technology if misused, and it is essential to be aware of these risks and take steps to mitigate them.


OpenAI Responds to Some Threats

Each new iteration of chatbots will bring new cyber dangers, whether through their growing popularity or their increasingly sophisticated language abilities, and that makes the technology a prime target for attack. To head off further data breaches in the application, OpenAI is taking action: it now offers a bug bounty of up to $20,000 to anyone who discovers previously unknown vulnerabilities.

Nevertheless, according to The Hacker News, “the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs.” So while OpenAI is working to protect the technology from external threats, the program does little to stop the chatbot itself from being turned into a source of attacks.

ChatGPT and other chatbots are expected to loom large in cybersecurity for years to come. Whether they prove to be the target of attacks or the source of them, only time will tell.


About Rhyno Cybersecurity Services

Rhyno Cybersecurity is a Canadian-based company focusing on 24/7 Managed Detection and Response, Penetration Testing, Enterprise Cloud, and Cybersecurity Solutions for small and midsize businesses.

Our products and services are robust, innovative, and cost-effective. Underpinned by our 24x7x365 Security Operations Centre (SOC), our experts ensure you have access to cybersecurity expertise when you need it the most.
