Confident Security has released OpenPCC, an open-source standard that lets companies use large language models (LLMs) without exposing sensitive data.
Built by engineers with experience at Databricks and Apple, OpenPCC keeps AI prompts, outputs, and logs fully private, whether models run in the cloud or on private servers. The release addresses privacy concerns that have grown alongside the rapid adoption of AI: many LLMs may store or learn from user input, and some have even made AI chats publicly searchable, making enterprise data privacy a pressing issue.
Enterprises face increasing risks, with 98% of companies relying on vendors that have experienced data breaches. Additionally, 78% of employees have reportedly pasted internal information into AI tools, with one in five of these instances involving personal or regulated data such as PII, PHI, or PCI.
OpenPCC functions as a security layer positioned between enterprise systems and AI models. It is designed to protect data while models are in use, preventing leakage of confidential information and keeping all user data encrypted and inaccessible to unauthorized parties. The standard integrates with minimal code changes, enabling secure communication with OpenPCC-compliant AI models and establishing a new open standard for AI privacy.
The release includes the OpenPCC specification and SDKs, providing a standardized protocol for secure AI usage across various models and providers under the Apache 2.0 license. It also features an OpenPCC-compliant inference server, demonstrating how CONFSEC deploys and verifies private AI interactions in production environments, released under the FSL license. Core privacy libraries are also part of the offering, including Two-Way for encrypted client–AI streaming, go-nvtrust for GPU attestation, and Go implementations of Binary HTTP (BHTTP) and Oblivious HTTP (OHTTP) for private communication between users and AI systems. These components aim to provide a practical foundation for secure AI deployment at scale.
Jonathan Mortensen, founder and CEO of Confident Security, stated, “Companies are being pushed to adopt AI faster than they can secure it. Most tools ask you to trust that data is safe. OpenPCC proves that every prompt, output, and log stays private. As AI transforms, privacy will define which companies earn trust and lead the market.”
Aditya Agarwal, General Partner at South Park Commons, commented, “Enterprises have been stuck choosing between innovation and security. What makes OpenPCC different is that it was built by engineers who understand both. By open-sourcing the framework and committing to independent governance, Confident Security is giving enterprises a standard they can finally trust to run AI safely.”
OpenPCC follows Confident Security’s $5 million seed funding round from Decibel, Ex/Ante, South Park Commons, Halcyon, and SAIF. The launch aligns with the company’s mission to establish privacy infrastructure as a universal and foundational component, similar to SSL. To maintain OpenPCC’s neutrality and community-driven nature, Confident Security is forming an independent foundation to steward the standard long-term.
Confident Security specializes in building provably private infrastructure for AI, and is the creator of CONFSEC, an enterprise-grade privacy platform. Both CONFSEC and OpenPCC are designed to be externally audited, secure, production-ready, and deployable on any cloud or on bare metal. The company utilizes technologies such as OHTTP, blind signatures, remote attestation, TEEs, TPMs, and transparency logs to ensure user prompts remain private. Confident Security is led by Jonathan Mortensen, a two-time founder, and its team comprises experts in secure systems, AI, infrastructure, and trusted computing, with backgrounds from organizations including Google, Apple, Databricks, Red Hat, and HashiCorp.