ChatGPT Apps was announced at OpenAI DevDay this week, fundamentally lowering the barrier to connecting ChatGPT with external applications across an organization.

Instead of requiring technical configuration, users can now integrate applications through simple prompts. Type, “Spotify, make a playlist for my party this Friday,” and it creates one. Mention you’re buying a home, and ChatGPT surfaces Zillow with an interactive map of listings matching your budget. Hand it an outline and ask Canva to turn it into a slide deck, and it handles the design and creation without you touching the app.

What used to require going through a cumbersome connectors menu, now happens through natural language. OpenAI launched with Booking.com, Canva, Coursera, Expedia, Figma, Spotify, and Zillow, with many more coming as they race toward a full third-party marketplace later this year. Because these apps surface naturally in conversation when relevant, adoption will be frictionless and inevitable. With the Apps SDK, any developer can turn an MCP server into an app and deploy it across your organization. No technical barriers, no approval gates.

This is OpenAI’s platform play. They’re building an app store for conversational AI, except the security model looks nothing like Apple’s did when it launched 2008. An MCP connection is all it takes to integrate, and because apps surface contextually in conversation, users won’t even realize they’re installing something, which creates a fundamental shift in how broadly and how quickly integrations spread across organizations, and why traditional security risks are compounding exponentially with AI and agentic adoption.

The Compounding Problem: Adoption × Variety × Remote Endpoints

A few weeks ago, we analyzed the security implications when OpenAI enabled remote Model Context Protocol (MCP) connections, allowing ChatGPT to communicate with external tools and data sources hosted anywhere. We identified critical risks around data exfiltration, external endpoints receiving conversational context, and the challenges of securing communications when those integrations live outside your security perimeter.

This creates a compounding force multiplier that security teams should understand. Easy adoption means more users. More users means more variety of apps installed. More apps means more remote endpoints. More endpoints means more data leaving your security boundary. 

  1. Increased Adoption: Apps remove all technical barriers. Where connectors often required configuration and IT involvement, ChatGPT Apps work through natural language prompts. This means exponentially more users will integrate exponentially more applications using ChatGPT. What was previously limited to technically sophisticated users now becomes available to anyone who can type, “Canva, turn this into a presentation.”
  2. Expanding Variety: The current native apps establish the UX pattern, but the explosion happens when any developer can build an app using the ChatGPT Apps SDK and share it organization-wide. The coming marketplace means thousands of third-party integrations you can’t vet, control, or even track. The variety of applications goes from a closed set of IT-managed connectors, to user-uploaded apps, and eventually a marketplace with unlimited options.
  3. Autonomous Data Sharing to Remote Endpoints: Unlike traditional APIs where you control exactly what structured data gets sent, ChatGPT decides what conversation context to share with external apps. The LLM determines what’s relevant and sends that context to remote endpoints, making it far more likely that sensitive information gets shared than with traditional integrations.

Traditional Risks, Amplified by Orders of Magnitude

The security challenges applications create aren’t new. They’re the same risks that have existed with SaaS sprawl, shadow IT, and third-party integrations. What’s changed is the scale and speed at which these risks now materialize. When adoption is frictionless and variety is unlimited, traditional security problems compound into organizational threats.

1. Destructive Capabilities with Expanding Maturity

The App’s available with this announcement are consumer focused. Canva for designs, Spotify for playlists, Coursera for learning. But integration maturity is increasing rapidly, and the infrastructure is already built for far more powerful capabilities.

Enterprise use cases will roll out later this year, with business applications available in this marketplace that have the autonomy to make decisions and have permission levels of users. When ChatGPT is set to “High Autonomy” mode, it can invoke these Apps automatically without requiring human approval for each action.

The question isn’t whether destructive enterprise Apps will be built, the question is whether your organization has visibility into which Apps are deployed and what capabilities they provide before someone uses one in production.

2. Data Leakage at Compounding Scale

Every App and remote MCP receives the conversation context you’ve shared with ChatGPT. When that endpoint sits outside your organization (and the default MCP architecture means most will) sensitive data flows beyond your security perimeter and governance controls.

This data leakage risk existed with integration with remote MCPs, but the scale changes everything. With frictionless adoption, users across your organization will integrate dozens of Apps. With the variety expanding to include any third-party marketplace App, the number of external endpoints receiving your data becomes impossible to track. With remote MCPs by default, each integration represents data flowing outside your security boundary.

3. Shadow AI Without Management Platform

Apps are uploaded from individual user accounts with zero organizational oversight. There’s no management platform, no approval workflow, no centralized visibility into what’s deployed or what data it’s accessing.

Traditional shadow IT lets users store data in unauthorized locations. Shadow AI gives those locations the ability to receive conversational context, interact with ChatGPT in real-time, and coordinate across multiple applications – all outside your security visibility.

The compounding risk: As adoption increases (more users) and variety expands (more Apps), the shadow AI problem grows exponentially while your governance capability remains at zero.

4. Expanded Attack Surface with Each Integration

Each new App integration creates a potential attack vector. More Apps means more opportunities for:

Indirect prompt injection: Malicious instructions embedded in App responses that manipulate ChatGPT’s behavior to take unintended actions or leak sensitive information.

Tool poisoning: Malicious instructions embedded in the  App functionality that causes ChatGPT to execute malicious operations while appearing to perform legitimate tasks.

Cross-app exploitation: Attackers leveraging access to one App to manipulate interactions with other Apps in the same conversation context.

With Connectors, you had limited integrations and technical users who might notice anomalies. With Apps, you have unlimited third-party integrations adopted by non-technical users who won’t recognize attack patterns. The attack surface doesn’t just expand, it multiplies with each new App installed across your organization.

5. Over-Permissive Configurations at Scale

While Apps connect using a user’s credentials, oversharing risks compound in the Apps paradigm. Organizations’ existing access management gaps whether it be loose permissions, or over-provisioned roles, for example, become more dangerous when AI autonomously decides which Apps to invoke and what data to share. Third-party Apps worsen this as developer often configure MCPs to run under their own identity rather than the end user’s, meaning a user with limited access could inadvertently query data through an App that has broader privileges.

Why Traditional Security Approaches Fail Here

You can’t secure what you can’t see. Organizations currently have zero visibility into:

  • What apps are deployed across their environment
  • Which integrations connect to remote endpoints outside security perimeters
  • What data is flowing to external MCPs through conversation context
  • How over-permissioned third-party apps create oversharing risks
  • Which apps have destructive capabilities or high autonomy configurations

Traditional application security tools weren’t built for this architecture. They’re designed to scan for malicious code, not analyze whether an MCP server gives ChatGPT overly broad access to your production systems. 

The compounding nature of the risk means you need visibility at the source, before Apps are deployed and connected to production ChatGPT instances.

How Noma Security Addresses the ChatGPT Apps Challenge

Noma Security takes a comprehensive approach to visibility by integrating at the source control layer where Apps are developed. Since every App is fundamentally an MCP server, we can identify them as they’re being built and before they’re deployed and connected to production ChatGPT instances.

Discovery Layer: Scanning your source code management systems reveals all MCP servers in development, giving you visibility into what Apps might be deployed across your organization. We identify both internal development and third-party integrations being evaluated.

Build-Time Analysis: We analyze MCP servers for:

  • Destructive capabilities that could modify or delete production resources
  • Remote data-sharing endpoints that flow data outside your security perimeter
  • Over-permissive configurations that risk oversharing access
  • Use of author credentials instead of least-privilege service accounts

Governance Foundation: Rather than blocking innovation, we provide the intelligence to make informed decisions about which Apps align with your risk tolerance and before they’re deployed at scale across your organization.

Gain Visibility Now, Before the ChatGPT Apps Marketplace Launches

OpenAI plans to launch a public App marketplace later this year. When that happens, the explosion of third-party integrations will make today’s ecosystem look quaint. Every SaaS vendor, every startup, every developer will rush to build their App and capture distribution through ChatGPT’s massive user base.

The combination of frictionless adoption, unlimited variety, and autonomous data sharing is compounding beyond what traditional security controls can manage.

The organizations that survive this transition won’t be the ones who blocked innovation, they’ll be the ones who built comprehensive visibility and control into their AI integration architecture before the risks became unmanageable. When every App is an autonomous integration making real-time decisions about what data to share, you need security built for that paradigm

The Noma Security platform is trusted by Fortune 500 companies to secure AI agents across their entire stack. From discovery of MCP servers in development to runtime monitoring of what Apps are deployed and what data they’re accessing and able to share, we provide the visibility and control needed for this new era. Contact us to learn more, and prepare yourself for this new marketplace. 

5 min read

Category:

Table of Contents

Share this: