Generative AI marks a fundamental shift in the security landscape. While security teams have long battled “shadow IT,” GenAI introduces a new and more insidious “shadow AI” threat, in which sensitive data is actively fed to third-party models. Obsidian’s webinar on this topic highlights that the shift poses a direct risk of data exposure, from intellectual property to confidential merger information.
This article will delve into how GenAI has become the new insider threat and provide a detailed security checklist to help organizations gain visibility and control over their GenAI footprint.
AI Risks Won’t Wait: Get the full replay now and discover how to control shadow AI and protect sensitive data across your SaaS stack.
The New Insider Threat
The launch of ChatGPT in late 2022 marked a new beginning for AI in the workplace, and employees quickly normalized experimentation with readily available GenAI tools. While these applications boost productivity, they also change how sensitive data leaves an organization. This creates a new insider threat, with employees submitting data through “shadow AI” applications that have no IT oversight. The security risks are severe: for instance, 12,000 API keys and passwords have been found in LLM training data.
Common Data Leaks
As GenAI startups rush to deliver new products, application security can be a secondary consideration. This, combined with the fact that employees have become accustomed to inputting a wide array of private corporate information, makes these unsecured apps a primary target for threat actors looking to steal and ransom data.
Common types of sensitive data exposed to LLMs include:
- Source code, API keys, and other proprietary engineering data.
- Information related to undisclosed mergers and acquisitions.
- Personally identifiable information (PII) and customer payment card data (PCI).
- Legal documents or corporate intellectual property (IP) for innovations.
The risk is particularly high for companies using GenAI apps developed in China, such as DeepSeek and Baidu Chat, which offer minimal transparency around data retention or training processes.
Limitations of Legacy Security Systems
The webinar makes it clear that existing security tools are not equipped to handle the unique challenges of GenAI. Enterprise browsers can only apply data loss prevention (DLP) policies in managed environments, leaving data exposed when employees use consumer browsers.
Similarly, Secure Access Service Edge (SASE) solutions either block access to GenAI apps entirely, a counterproductive measure that drives employees to unmonitored devices, or lack the granular, prompt-level visibility needed to detect sensitive data loss. These tools fail to provide end-to-end security in a world where GenAI is used in countless unmonitored ways.
A New Approach: Browser-Level Enforcement
To address these limitations, a new approach is needed that focuses on securing the point of user interaction: the browser. This method provides real-time visibility and control without stifling productivity. By applying in-line controls, security teams can monitor every shadow application and enforce policies directly at the prompt level.
This ensures that even if an employee is using a non-sanctioned tool, the organization’s sensitive data is protected from being inadvertently leaked. This approach is proactive and enables safe, responsible AI use across the business, rather than relying on reactive measures.
The GenAI Security Checklist: A Benchmark for AI Readiness
The webinar makes it clear that security teams need real-time visibility and browser-level guardrails to control GenAI activity. Simply blocking access to these tools is not a viable long-term solution, as it stifles productivity and drives employees to use personal accounts where security teams have zero oversight. The GenAI Security Checklist provides a three-step framework for managing this new reality.
Step 1: Maintain a 100% Inventory of GenAI Usage
This step is about gaining complete visibility into all GenAI applications across your organization. It involves capturing in-browser logins, tracking GenAI inventory in a centralized hub, and establishing clear policies on approved vs. unapproved tools. It’s also crucial to inventory AI-powered browser extensions for a complete understanding of your AI footprint.
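To make the inventory step concrete, here is a minimal sketch of how a browser extension might flag visits to known GenAI applications and record them for a centralized inventory. The domain list, the user field, and the reporting endpoint are illustrative assumptions, not the webinar’s or Obsidian’s actual implementation.

```typescript
// Minimal sketch: classify browser navigation events against a list of known
// GenAI domains to build a centralized usage inventory. The domain list, user
// identity, and reporting endpoint are illustrative assumptions.

const KNOWN_GENAI_DOMAINS = new Set([
  "chatgpt.com",
  "chat.openai.com",
  "gemini.google.com",
  "claude.ai",
  "chat.deepseek.com",
]);

interface GenAIVisit {
  domain: string;
  user: string;
  timestamp: string;
}

// Returns an inventory record when the visited URL belongs to a known GenAI app.
function classifyVisit(url: string, user: string): GenAIVisit | null {
  const host = new URL(url).hostname.toLowerCase();
  if (!KNOWN_GENAI_DOMAINS.has(host)) return null;
  return { domain: host, user, timestamp: new Date().toISOString() };
}

// Example: feed navigation events captured by a browser extension into an
// inventory hub (the endpoint below is hypothetical and left commented out).
const visit = classifyVisit("https://chatgpt.com/c/123", "alice@example.com");
if (visit) {
  console.log("GenAI usage detected:", visit);
  // fetch("https://inventory.example.com/genai-visits", { method: "POST", body: JSON.stringify(visit) });
}
```

Unknown domains would simply not match the list, which is why the inventory also needs a discovery mechanism (such as SSO and in-browser login capture) to keep the list current.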
Step 2: Track and Manage GenAI Adoption
This step focuses on managing GenAI usage and preventing shadow adoption. This includes regularly reviewing usage trends across departments, monitoring AI integrations with core SaaS tools, and blocking access to unapproved applications.
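As one way to picture the approved-versus-unapproved distinction, the hedged sketch below maps GenAI domains to a verdict of allow, warn, or block. The policy table and the default-to-warn behavior are assumptions for illustration, not a specific vendor feature.

```typescript
// Minimal sketch: a policy lookup that decides whether a GenAI domain is
// approved, tolerated with a warning, or blocked. The policy table and default
// behavior are illustrative assumptions, not a specific vendor feature.

type Verdict = "allow" | "warn" | "block";

const GENAI_POLICY: Record<string, Verdict> = {
  "chatgpt.com": "allow",       // sanctioned enterprise tenant
  "gemini.google.com": "warn",  // under evaluation
  "chat.deepseek.com": "block", // unapproved; minimal data-retention transparency
};

// Unknown GenAI domains default to "warn" so newly discovered shadow apps
// surface for review instead of silently passing through.
function evaluateDomain(domain: string): Verdict {
  return GENAI_POLICY[domain] ?? "warn";
}

console.log(evaluateDomain("chat.deepseek.com"));      // "block"
console.log(evaluateDomain("new-genai-tool.example")); // "warn"
```

Defaulting unknown domains to a warning rather than a hard block keeps productivity intact while still feeding new shadow apps back into the review process described in Step 1.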
Step 3: Stop GenAI from Accessing Sensitive Data
This is the final, crucial step in preventing data loss. It involves applying real-time DLP at the browser level to redact or block sensitive data. In-browser alerts can warn users or block unsafe prompts, helping employees use GenAI tools responsibly and securely.
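A minimal sketch of what prompt-level DLP could look like in the browser is shown below. The regex patterns and the redact-versus-block thresholds are simplified assumptions; production DLP engines rely on far richer classifiers.

```typescript
// Minimal sketch: inspect a prompt before it leaves the browser, redacting or
// blocking sensitive content. Patterns and thresholds are illustrative
// assumptions, not a production DLP engine.

const DLP_PATTERNS: { name: string; pattern: RegExp }[] = [
  { name: "AWS access key", pattern: /\bAKIA[0-9A-Z]{16}\b/g },
  { name: "Email address", pattern: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
  { name: "Possible card number", pattern: /\b(?:\d[ -]?){13,16}\b/g },
];

interface DlpResult {
  action: "allow" | "redact" | "block";
  redactedPrompt: string;
  findings: string[];
}

function inspectPrompt(prompt: string): DlpResult {
  const findings: string[] = [];
  let redacted = prompt;
  for (const { name, pattern } of DLP_PATTERNS) {
    const next = redacted.replace(pattern, "[REDACTED]");
    if (next !== redacted) {
      findings.push(name);
      redacted = next;
    }
  }
  // Assumed policy: redact a single finding, block the prompt outright when
  // multiple categories of sensitive data appear together.
  const action = findings.length === 0 ? "allow" : findings.length === 1 ? "redact" : "block";
  return { action, redactedPrompt: redacted, findings };
}

// Example: an engineer pastes a snippet containing a credential and an email.
console.log(inspectPrompt("Debug this: key=AKIAABCDEFGHIJKLMNOP, owner bob@corp.com"));
// -> action "block", with both values replaced by [REDACTED]
```

The point of running this check in the browser, rather than at the network edge, is that it works the same way whether the destination is a sanctioned GenAI app or a shadow one.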
Empowering Security Teams with Actionable Insights
Ultimately, the goal is not to block GenAI but to enable its safe and responsible use. As Brad Jones, CISO of Snowflake, notes in the checklist, “With the Obsidian browser extension, we’ve got a lot of insight into how users are interacting with things like generative AI SaaS solutions out there, potentially going after what documents may be being uploaded.”
This insight allows security teams to move beyond a reactive stance and build a proactive security posture, seeing how and where users interact with GenAI and stopping sensitive data before it reaches unauthorized third-party models.
Ensuring Safe AI Usage Through Browser-Level Enforcement
The Obsidian webinar underscores that a new approach to security is needed for the age of GenAI. With existing security tools failing to secure GenAI end-to-end, a centralized platform that can view every shadow application and enforce in-line controls is essential.
By monitoring where users interact with GenAI, security teams can confidently block users from passing sensitive data to unauthorized third-party GenAI models and prevent inadvertent data leaks at the prompt level. This proactive, browser-level enforcement enables the safe and responsible use of AI across the business.
Watch the full on-demand webinar here: Take Control of Your AI Usage
Download the companion guide: AI Agent Security Best Practice
FAQs
1. What is “shadow AI” and why is it a significant security risk for US-based companies?
Shadow AI refers to employees using AI applications without IT oversight. It’s a risk because it creates a major visibility gap, exposing sensitive corporate data to unmonitored third-party models and posing a new form of insider threat.
2. How is GenAI usage creating new insider threats?
GenAI creates new insider threats by normalizing the submission of sensitive data through personal accounts and unapproved tools. This bypasses traditional security controls, leading to unintentional data leaks of intellectual property and proprietary information.
3. What types of sensitive data are most at risk from unsecured GenAI applications?
The most at-risk data includes source code, API keys, intellectual property, proprietary engineering data, and confidential information related to mergers and acquisitions. Personally identifiable information (PII) is also highly susceptible to exposure.
4. Why are traditional security tools like SASE and enterprise browsers insufficient for securing GenAI?
Traditional tools fail because they cannot provide the granular, prompt-level visibility needed to detect sensitive data loss. They do not monitor how users handle data in unmanaged environments or third-party AI applications.
5. How can a GenAI Security Checklist serve as a benchmark for an organization’s AI readiness?
The checklist lays out a foundational three-step framework to assess readiness: take a complete inventory of GenAI usage, proactively manage AI adoption, and implement browser-level enforcement to stop sensitive data from leaking.
Discover the future of AI, one insight at a time – stay informed, stay ahead with AI Tech Insights.
To share your insights, please write to us at sudipto@intentamplify.com.