LatticeFlow AI Launches AI GO! Platform for Evidence-Based AI Governance and Compliance

LatticeFlow AI, a Swiss deep-tech firm specializing in trustworthy and compliant artificial intelligence, has launched the Early Access Program for AI GO!. The company describes it as the first platform to enable AI Governance Operations through deep technical assessments, positioning it as a new benchmark for AI governance and compliance that helps organizations accelerate AI adoption.

Traditional Governance, Risk, and Compliance (GRC) systems, which typically rely on checklists, often prove inadequate for managing the complexity and rapid evolution of modern AI systems. AI GO! addresses this gap with rigorous, evidence-based assessments designed to uncover AI-related risks, generate audit-ready insights, and support confident decision-making. The platform aims to give AI risk and compliance teams the visibility and assurance needed to leverage the full potential of Generative AI applications, chatbots, and the latest large language models (LLMs).

Petar Tsankov, CEO and Co-Founder of LatticeFlow AI, emphasized the necessity of a technical foundation for effective AI governance. “The only way AI governance can work and scale is when it’s grounded in technical evidence,” Tsankov stated. “With AI GO!, enterprises gain the deep insights they need to manage AI risks, prove compliance, and accelerate innovation with confidence.”

Building on its role as co-creator of COMPL-AI, the first framework for assessing Generative AI models against the principles of the EU AI Act, LatticeFlow AI aims to set a high standard for AI governance. The company focuses on transforming complex AI systems into transparent, auditable, and trustworthy insights that enterprises can utilize for various frameworks, standards, and regulations, including NIST, ISO, and the EU AI Act.

AI GO! enables organizations to implement AI governance effectively by combining rigorous technical assessments with the automation required for consistent oversight across diverse departments and use cases. Its AI-first governance approach allows teams to utilize predefined AI risk frameworks, such as those derived from the EU AI Act, or integrate their own custom frameworks to align with specific business needs. The platform is also extensible across various AI modalities and applications, encompassing Generative AI, LLMs, chatbots, Computer Vision, and Tabular models, providing a unified method for evaluating performance, safety, and compliance.

Key use cases supported by AI GO! include validating Generative AI applications and chatbots, screening LLMs, assessing third-party AI risks, and ensuring adherence to regulations such as the EU AI Act. LatticeFlow AI’s role in developing COMPL-AI, which translates EU AI Act principles into technical controls, positions it at the forefront of evidence-based compliance. Notably, the company conducted the first EU AI Act compliance-centered analysis of the Swiss LLM Apertus, which the company reports as the first large AI model confirmed compliant with the EU AI Act.

Dr. Tsankov commented on the regulatory landscape, stating, “The EU AI Act is a turning point for the industry. It demands concrete technical evidence that AI systems are safe and trustworthy. With AI GO!, we help enterprises meet compliance head-on while continuing to drive innovation and value.”

LatticeFlow AI’s industry standing was recently highlighted at the ‘Trustworthy AI in Practice’ event, co-hosted with ETH Zurich. The event convened leaders from academia, industry, and technology to discuss advancing trustworthy AI, with significant emphasis on the need for effective AI governance operations.

Adrian Ott, Chief AI Officer at EY Switzerland, remarked, “For successful AI transformation in businesses, it’s crucial that AI innovation and risk management work in tandem rather than operating in opposition.” Dr. Matthias Bossardt, Partner and Head of Cyber & Digital Risk at KPMG Switzerland, added, “To benefit from AI at scale, organizations must establish robust AI governance processes that enable efficient risk and performance monitoring.” Dr. Imanol Schlag, Technical Lead of the Swiss LLM Apertus, noted, “The Swiss AI Initiative proves that large-scale efforts don’t have to choose between capability and transparency.” Dr. Naya Giannakopoulou, Senior Technical Architect at Microsoft Innovation Hub, concluded, “The future of enterprise-scale AI lies in guardrails that turn governance into an accelerator, not a blocker.”

These perspectives collectively underscore a shared commitment to advancing AI governance through detailed technical assessments that foster trust, compliance, and scalable innovation.