Fortanix has introduced a new Confidential AI solution powered by NVIDIA Confidential Computing, addressing one of the biggest barriers to enterprise AI adoption: protecting both proprietary AI models and sensitive business data during inference. As enterprises expand their AI use cases, they face growing concerns around model theft, data leakage, and misuse, and the new solution arrives at a time when security, privacy, and control matter more than ever.
According to Fortanix, the new Confidential AI offering allows model developers to distribute proprietary AI models for deployment in on-premises AI factories without exposing their intellectual property. At the same time, enterprises can run third-party AI models inside their own infrastructure while keeping their data local and fully under their control. In other words, Fortanix aims to remove the long-standing conflict between model protection and data privacy.
The company explained that enterprises can now run AI inference at scale without exposing either the model or the data. Fortanix Confidential AI creates a trusted execution environment in which model weights stay encrypted and hidden, even from the infrastructure hosting them, and sensitive enterprise data remains protected throughout runtime. As a result, both model providers and enterprise users gain security assurances backed by cryptographic proof rather than by contracts or policies alone.
This approach is especially important for highly regulated industries. For example, organizations in sectors such as government, healthcare, and finance often need advanced AI capabilities, yet they must also meet strict privacy and compliance requirements. With this solution, they can use best-in-class third-party AI models on sensitive data without giving vendors access to that data. Meanwhile, model owners can expand their reach and monetize their innovations without worrying that their IP will be extracted or copied.
Fortanix also highlighted several core protections built into the solution. These include verified runtime environments, secure key release only to trusted systems, encrypted prompts and outputs in memory, and tamper detection if deployment environments are changed. In addition, the platform combines Confidential Computing, centralized cryptographic policy control, and secure key release to protect AI workloads during runtime.
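The "secure key release only to trusted systems" and "tamper detection" protections describe an attestation-gated key-release pattern: a key manager hands out the model decryption key only to a workload whose measured identity matches an approved policy. The following is a minimal conceptual sketch of that pattern, not Fortanix's actual API; the measurement values, function name, and policy check are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

# Hypothetical policy: the measurement (hash) of the one approved
# runtime image. In a real system this comes from hardware attestation.
APPROVED_MEASUREMENT = hashlib.sha256(b"approved-enclave-image-v1").hexdigest()

# Stand-in for the model decryption key held by the key manager.
MODEL_KEY = secrets.token_bytes(32)

def release_key(reported_measurement: str) -> bytes:
    """Release the key only if the attested measurement matches policy.

    compare_digest gives a constant-time comparison, avoiding timing
    side channels on the measurement check.
    """
    if not hmac.compare_digest(reported_measurement, APPROVED_MEASUREMENT):
        raise PermissionError("attestation failed: environment was modified")
    return MODEL_KEY

# A verified environment reports the approved measurement and gets the key.
key = release_key(hashlib.sha256(b"approved-enclave-image-v1").hexdigest())
assert key == MODEL_KEY

# A tampered environment reports a different measurement and is refused,
# which is the tamper-detection behavior described above.
try:
    release_key(hashlib.sha256(b"tampered-image").hexdigest())
except PermissionError as exc:
    print(exc)  # attestation failed: environment was modified
```

Because the key never leaves the key manager unless the check passes, model weights stay encrypted everywhere except inside a verified runtime, which is the property the announcement attributes to the combination of Confidential Computing and secure key release.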
“AI security can break during inference if you don’t protect data and models in use,” said Anuj Jaiswal, Chief Product and Strategy Officer, Fortanix. “This deployment on NVIDIA Confidential Computing-backed GPUs, verified by Fortanix Confidential Computing Manager and backed by secure key release from Fortanix Data Security Manager, demonstrates that you no longer have to choose between performance and protection. Confidential AI enables both.”
Furthermore, Fortanix said this solution supports a more practical and scalable model for enterprise AI deployment. Organizations can select third-party AI tools that best match their business needs, while model vendors can confidently share their innovations for on-premises use. Consequently, both sides benefit from stronger trust, improved security, and greater operational flexibility.
Fortanix described Confidential AI as a leading solution for protecting proprietary models, enterprise data, and inference processes across the AI lifecycle. The company said it enforces hardware-based runtime validation, secure key release linked to verified workloads, and centralized policy management. Because of that, enterprises can offer AI-powered services without directly handling plaintext model assets, while users’ prompts and outputs remain secure.
“The next phase of enterprise AI adoption requires a foundation of verifiable trust to ensure both data privacy and proprietary model integrity,” said Anne Hecht, Senior Director, AI Platforms at NVIDIA. “The integration of NVIDIA Confidential Computing and Fortanix Confidential AI enables customers to deploy AI with security and privacy.”
The announcement also reflects growing industry demand for secure AI deployment models that balance performance, privacy, and IP protection. As more organizations move AI workloads into production, solutions like this could become increasingly important in enterprise AI factories where security cannot be optional.
“Our models represent years of proprietary research and engineering – protecting that IP while expanding access is a core tension in enterprise deployment,” said Kuba Abramczyk, Forward Deployed Engineer at ElevenLabs. “Working with Fortanix on NVIDIA Confidential Computing-backed infrastructure lets us resolve that directly, giving organizations in government, healthcare, and finance the ability to run our Text to Speech models on their own servers, on their own data.”
