Opinion
GenAI tools in the workplace: 5 emerging threat scenarios
"As GenAI tools proliferate across organizations, security teams must remain vigilant and proactive in addressing these emerging threats," writes Dor Sarig, Co-Founder and CEO at Pillar Security
AI is transforming the workplace, with generative AI (GenAI) tools leading the charge. Over 70% of companies have integrated these tools, enhancing productivity across departments. There are tens of thousands of public GenAI tools available, and their usage is rapidly growing. However, widespread adoption introduces significant risks, particularly concerning data security and privacy.
While the risk of shared data being used to train GenAI models is well known, organizations must also recognize other, more sophisticated threats. This article highlights five realistic attack scenarios to help organizations understand and mitigate these risks.
The Emerging Threat Landscape
As an attacker in 2024, instead of hacking databases or email accounts, you might target the ChatGPT accounts of key users. Given the extensive use of AI tools for drafting communications, generating reports, and handling sensitive queries, compromising a GenAI account could yield valuable information without complex intrusion methods. The rise of GenAI tools has created a new attack surface, increasing the potential for data breaches.
Scenario #1: Leakage via Indirect Prompt Injection on a Website
Attackers can exploit GenAI tools through indirect prompt injection. For instance, a user might ask a chatbot to access a web resource. If the webpage contains malicious instructions hidden in its code, the AI model could be manipulated into leaking the user's chat history or other sensitive data. The attacker crafts a webpage with hidden commands that the AI then executes, such as sending a GET request containing the chat history to an attacker-controlled server.
Mitigation Strategy: Implement strict content filtering for web resources accessed by AI tools, educate employees on the risks, and encourage using trusted sources.
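To make the content-filtering idea concrete, here is a minimal sketch of a pre-processing filter for web pages fetched on behalf of a GenAI tool. The hidden-element heuristics and injection patterns are illustrative assumptions, not an exhaustive or production-grade filter:

```python
# Sketch: strip hidden HTML content and flag likely prompt-injection phrases
# before the fetched text is ever handed to a GenAI model.
# Assumes the `requests` and `beautifulsoup4` packages; patterns are examples only.
import re
import requests
from bs4 import BeautifulSoup, Comment

# Phrases that commonly signal an attempted prompt injection (assumed examples).
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any) (previous|prior) instructions",
    r"send .* (chat history|conversation)",
    r"https?://\S+\?data=",  # exfiltration-style URLs carrying a data parameter
]

def fetch_and_sanitize(url: str) -> str:
    """Fetch a page, remove content hidden from human readers, and flag injections."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # HTML comments and visually hidden elements are common carriers of
    # indirect prompt injections, so drop them before extracting text.
    for comment in soup.find_all(string=lambda s: isinstance(s, Comment)):
        comment.extract()
    for tag in soup.find_all(style=re.compile(r"display\s*:\s*none|font-size\s*:\s*0")):
        tag.decompose()

    visible_text = soup.get_text(separator=" ", strip=True)
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, visible_text, re.IGNORECASE):
            raise ValueError(f"Potential prompt injection detected: {pattern}")
    return visible_text
```

Pattern matching alone will not catch every injection, but placing a filter like this between the web and the model removes the most obvious hidden-instruction channels.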
Scenario #2: Leakage via Indirect Prompt Injection in a File
A similar threat arises when a user uploads an untrusted document into a GenAI tool. The document might contain hidden instructions that trigger the AI to leak sensitive information via Base64-encoded GET requests to an attacker-controlled server. This method is particularly dangerous because it doesn’t require the attacker to have direct access to the AI tool—just the ability to get the malicious file to the user.
Mitigation Strategy: Enforce rigorous scanning of files uploaded to GenAI tools, use advanced threat detection systems, limit uploadable file types, and promote using company-approved documents.
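As a rough illustration of file scanning, the sketch below inspects an uploaded document before it reaches a GenAI tool. The specific signals (injection phrases, long Base64 blobs, URLs that embed encoded data) are assumptions chosen for illustration:

```python
# Sketch: scan an uploaded document's text for embedded instructions and
# Base64-encoded payloads before the GenAI tool is allowed to process it.
import base64
import re

INJECTION_PHRASES = re.compile(
    r"(ignore (all|any) (previous|prior) instructions|do not tell the user)",
    re.IGNORECASE,
)
# Long Base64-looking runs are a common way to smuggle instructions or exfiltrate data.
BASE64_BLOB = re.compile(r"[A-Za-z0-9+/]{64,}={0,2}")
EXFIL_URL = re.compile(r"https?://\S+\?\S*(data|q|payload)=\S+", re.IGNORECASE)

def scan_document(text: str) -> list[str]:
    """Return a list of findings; an empty list means nothing suspicious was seen."""
    findings = []
    if INJECTION_PHRASES.search(text):
        findings.append("possible embedded prompt-injection instructions")
    if EXFIL_URL.search(text):
        findings.append("URL that appears to carry encoded data")
    for blob in BASE64_BLOB.findall(text):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8", "ignore")
            if INJECTION_PHRASES.search(decoded):
                findings.append("Base64-encoded instructions hidden inside the document")
        except Exception:
            continue  # not valid Base64; ignore
    return findings
```

In practice this kind of check would sit alongside standard malware scanning and file-type restrictions rather than replace them.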
Scenario #3: Leakage via Compromised Account
An attacker could gain access to an employee’s GenAI account using credentials obtained from a third-party data breach. Once inside, the attacker has access to all previous chat histories, potentially containing sensitive company information, strategic plans, or personal data. They could also use the compromised account to query the AI for additional information, further compounding the breach.
Mitigation Strategy: Enforce strong password policies, implement multi-factor authentication, and monitor for unusual account activity.
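Monitoring for unusual account activity can be as simple as flagging logins from new locations or bulk exports of chat history. The sketch below is a minimal example; the event fields and thresholds are assumptions, and real telemetry will depend on the provider and your SIEM:

```python
# Sketch: flag anomalous GenAI account activity (new login country, bulk history export).
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SessionEvent:
    user: str
    country: str
    exported_conversations: int  # chat histories exported in this session

known_locations: dict[str, set[str]] = defaultdict(set)
EXPORT_THRESHOLD = 20  # assumed: bulk export is rare in normal use

def review(event: SessionEvent) -> list[str]:
    alerts = []
    if event.country not in known_locations[event.user]:
        alerts.append(f"{event.user}: login from new country {event.country}")
        known_locations[event.user].add(event.country)
    if event.exported_conversations > EXPORT_THRESHOLD:
        alerts.append(f"{event.user}: bulk export of {event.exported_conversations} conversations")
    return alerts
```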
Scenario #4: Leakage via Compromised GenAI App
This risk involves the compromise of the GenAI app itself. An adversarial group might hack one of the leading AI chatbots, gaining access to all user chat sessions. This could occur through a vulnerability in the app's code, an insider threat, or a phishing campaign targeting the development team. The attacker could harvest vast amounts of sensitive data from unsuspecting users, affecting not only the compromised organization but also its clients, partners, and customers.
Mitigation Strategy: Use GenAI tools from reputable providers, regularly update AI tools, conduct security audits, and implement data encryption and backup strategies.
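On the encryption point, one practical step is to encrypt any exported chat logs or backups at rest so that a breach of the backup store yields only ciphertext. A minimal sketch using the third-party `cryptography` package follows; key management (rotation, storage in a KMS) is out of scope here:

```python
# Sketch: encrypt exported GenAI chat logs at rest with symmetric encryption (Fernet).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, load this from a secrets manager
cipher = Fernet(key)

def encrypt_log(plaintext: str) -> bytes:
    return cipher.encrypt(plaintext.encode("utf-8"))

def decrypt_log(ciphertext: bytes) -> str:
    return cipher.decrypt(ciphertext).decode("utf-8")

token = encrypt_log("Q3 strategy discussion with the AI assistant...")
assert decrypt_log(token).startswith("Q3 strategy")
```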
Scenario #5: Leakage via Rogue/Fake Chatbot
In this scenario, attackers create rogue or fake chatbots that mimic legitimate GenAI tools. An employee might download what seems to be an authentic AI app that is, in fact, a malicious clone. This fake chatbot captures all the data entered by the user before relaying it to the actual AI provider, allowing the attacker to intercept and steal sensitive information. This type of attack is particularly effective because it exploits users' trust in familiar interfaces and brands.
Mitigation Strategy: Enforce strict policies on AI tool usage, educate employees about rogue app risks, and implement network security measures to detect and block communications with malicious servers.
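A simple network-level control is an egress allowlist that permits traffic only to approved GenAI providers, so lookalike clones never receive company data. The sketch below illustrates the idea; the listed domains are examples, not an endorsement of specific tools:

```python
# Sketch: allow outbound GenAI traffic only to an approved list of provider domains.
from urllib.parse import urlparse

APPROVED_GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
}

def is_allowed(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Exact or subdomain match against the approved list; anything else,
    # including lookalike domains, is blocked.
    return any(host == d or host.endswith("." + d) for d in APPROVED_GENAI_DOMAINS)

assert is_allowed("https://chat.openai.com/session")
assert not is_allowed("https://chat-openai-app.com/login")  # lookalike, blocked
```

In a real deployment this logic would live in a secure web gateway or DNS filter rather than application code, but the allowlist principle is the same.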
Conclusion
As GenAI tools proliferate across organizations, security teams must remain vigilant and proactive in addressing these emerging threats. The scenarios outlined above demonstrate that the risks associated with AI tools extend beyond data privacy concerns related to training data. Pillar helps teams identify and mitigate risks across the entire AI lifecycle by providing a unified security layer across the organization.
Dor Sarig is the Co-Founder and CEO of Pillar Security