Trusted Agentic AI

Addressing the governance, safety, and sovereignty challenges of increasingly autonomous systems, from evaluation frameworks to regulatory trends.

Key Topics

Dark Factory governance: who guarantees trust when 100% of code is AI-generated?

Three pillars: Spec governance, Agent trust & identity, Provenance

EU AI Act compliance and regulatory frameworks

Sovereign AI models and evaluation frameworks

Accountability in multi-agent systems

Transparency and explainability requirements

Risk assessment methodologies for agentic AI

International cooperation on AI governance

Three Pillars of Trust

Trusted Agentic AI rests on three pillars: Spec governance (ensuring that human intent is faithfully translated into system behavior), Agent trust & identity (authenticating agents and certifying their capabilities), and Provenance (tracking the origin and lineage of AI-generated artifacts). This track brings together policymakers, technologists, and civil society to address these challenges.
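To make the provenance pillar concrete, the sketch below shows one minimal way to link an AI-generated artifact back to the agent that produced it and the spec it was meant to satisfy. The record structure, field names, and helper function are illustrative assumptions, not a standard or a framework endorsed by this track.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib


@dataclass
class ProvenanceRecord:
    """A minimal provenance entry for one AI-generated artifact (illustrative only)."""
    artifact_sha256: str  # content hash: identifies the exact artifact produced
    generator_id: str     # identity of the agent or model that generated it
    spec_ref: str         # reference to the governing spec the artifact implements
    created_at: str       # UTC timestamp of generation, ISO 8601


def record_provenance(artifact: bytes, generator_id: str, spec_ref: str) -> ProvenanceRecord:
    """Hash the artifact and bind it to its generator and spec at creation time."""
    return ProvenanceRecord(
        artifact_sha256=hashlib.sha256(artifact).hexdigest(),
        generator_id=generator_id,
        spec_ref=spec_ref,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
```

A chain of such records, one per generation or transformation step, is what lets an auditor trace a deployed artifact back through every agent that touched it. Real systems would also sign each record so the generator identity itself is verifiable, which is where the agent trust & identity pillar connects to provenance.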

Practical Outcomes

  • Frameworks for spec governance and agent trust & identity
  • EU AI Act compliance strategies for agentic systems
  • Sovereign AI model evaluation and certification approaches
  • Connections with regulators and compliance experts

Interested in this track?

Request an invitation to join the conversation