A California law firm’s IT department initially blocked the use of generative AI for drafting briefs and summarizing case law due to concerns about exposing privileged client information and violating bar rules by feeding data into public AI models. This situation highlighted a conflict between the efficiency promised by AI and the profession’s strict confidentiality requirements.
Following an evaluation, the firm piloted a confidential-compute platform from Phala, an infrastructure provider specializing in “trust layers” for AI. This system allows AI models to process encrypted data within secure enclaves, ensuring information remains sealed even from the infrastructure operator. A partner involved in the pilot stated that the system provided verifiable proof that no information left the firm’s control, shifting the discussion from whether AI could be used to how safely it could be deployed.
Both the American Bar Association’s Formal Opinion 512 (2024) and the California Bar’s Practical Guidance on Generative AI (2023) emphasize attorneys’ ongoing responsibility for confidentiality, competence, and supervision when using third-party AI systems. These requirements have kept many firms from adopting modern AI tools that depend on external data processing. Confidential computing offers a compliance pathway: AI models run inside secure enclaves, and each session generates cryptographic attestation, verifiable proof of where and how data was processed, allowing firms to demonstrate control over sensitive information throughout the AI workflow.
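To make the attestation idea concrete, here is a minimal, hypothetical sketch of what verifying an attestation report involves: checking that the enclave is running an approved code measurement, that the report is fresh (bound to a nonce the verifier chose), and that the report is authentically signed. Real platforms use hardware-rooted asymmetric signatures and certificate chains; this sketch substitutes a shared-key HMAC purely so it is self-contained, and all names are illustrative, not Phala's actual API.

```python
import hashlib
import hmac
import json

# Illustrative stand-in for the hash of an approved enclave/model image.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-model-image-v1").hexdigest()


def verify_attestation(report: dict, shared_key: bytes, nonce: str) -> bool:
    """Check that a report covers the approved enclave build and our nonce."""
    if report["measurement"] != EXPECTED_MEASUREMENT:
        return False  # enclave is not running the approved code
    if report["nonce"] != nonce:
        return False  # stale or replayed report
    payload = json.dumps(
        {"measurement": report["measurement"], "nonce": report["nonce"]},
        sort_keys=True,
    ).encode()
    expected_sig = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(expected_sig, report["signature"])


# Example: the enclave side would produce a report shaped like this.
key = b"demo-key"
nonce = "session-123"
payload = json.dumps(
    {"measurement": EXPECTED_MEASUREMENT, "nonce": nonce}, sort_keys=True
).encode()
report = {
    "measurement": EXPECTED_MEASUREMENT,
    "nonce": nonce,
    "signature": hmac.new(key, payload, hashlib.sha256).hexdigest(),
}
print(verify_attestation(report, key, nonce))  # True
```

The measurement check is what lets a firm argue, to an auditor or a bar regulator, that only the approved model build ever touched client data; the nonce check prevents an operator from replaying an old, valid report.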
During the California pilot, Phala’s confidential AI was deployed for document review and case analysis. Attorneys could spin up private model instances billed hourly, process discovery materials securely, and generate summaries quickly. All data remained encrypted end to end, and the firm received automated attestation reports for auditing purposes. The pilot achieved 100% compliance with state bar confidentiality rules, cut document-review time by 40%, and offered predictable costs through hourly, isolated model use.
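The workflow above can be sketched as follows. This is a hypothetical illustration of the client-side pattern, an isolated session that returns a summary per document and accumulates an attestation log for auditors; the class and method names are invented for this sketch and are not Phala's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the pilot workflow: hourly, isolated sessions that
# record one attestation entry per processed document for later audit.


@dataclass
class ConfidentialSession:
    """Stand-in for an hourly, isolated model instance in a secure enclave."""

    attestation_log: list = field(default_factory=list)

    def submit(self, encrypted_doc: bytes) -> str:
        # In a real deployment the document stays encrypted in transit and is
        # decrypted only inside the enclave; here we just record an audit
        # entry of the kind an automated attestation report would contain.
        self.attestation_log.append(
            {"doc_bytes": len(encrypted_doc), "status": "processed-in-enclave"}
        )
        return "summary placeholder"


session = ConfidentialSession()
for doc in [b"deposition-ciphertext", b"exhibit-a-ciphertext"]:
    summary = session.submit(doc)

print(len(session.attestation_log))  # 2
```

The design point is that the audit trail is produced automatically as a side effect of each job, rather than reconstructed after the fact, which is what makes the compliance claim demonstrable.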
Marvin Tong, CEO at Phala, noted that the legal profession is coming to see privacy-by-design infrastructure as both a cybersecurity measure and a business enabler. The trend extends beyond law: healthcare providers, banks, and insurers are adopting confidential computing to meet data-protection standards while using generative AI for research, analytics, and claims processing. Analysts project the global confidential computing market to surpass $70 billion by 2030, driven by increased regulatory scrutiny and demand for verifiable privacy. Recent guidance from the European Data Protection Board (EDPB) points in the same direction, advocating "computational locality" and traceable data governance in AI systems; these are precisely the guarantees confidential computing is designed to deliver, by keeping sensitive workloads sealed, verified, and geographically contained.
Phala is a privacy infrastructure company focused on confidential AI and data processing. The company utilizes hardware-secured enclaves and blockchain-based attestation to enable organizations to process sensitive workloads in a verifiable, privacy-preserving manner. Its enterprise platform integrates with standard cloud and software ecosystems to deliver confidential computing to regulated industries globally.