
We Raised €1.85 Million to Build the Trust Infrastructure for AI


We're thrilled to announce that we've closed a €1.85 million (~$2.2 million) pre-seed funding round to accelerate our mission: making AI trustworthy, controllable, and aligned with enterprise values.

The round was led by the new "Polo Nazionale di Trasferimento Tecnologico per l'Intelligenza Artificiale e la Cybersecurity" (National Technology Transfer Hub for Artificial Intelligence and Cybersecurity)—established by CDP Venture Capital in partnership with Scientifica Venture Capital—and by BlackSheep, a venture capital fund specialized in AI investments, with the participation of Eden Ventures.


This investment will enable us to strengthen our team and accelerate the development of our enterprise SaaS platform for AI control and governance—an infrastructure designed to help companies safely adopt cutting-edge technologies like generative AI and large language models, even in highly regulated environments.

Principled Intelligence team

Why We're Building This

As we shared in our introductory post, the challenge organizations face today isn't whether to adopt AI—it's how to do so responsibly. Companies that succeed in controlling this technology will gain an immediate competitive advantage, becoming benchmarks in their industries.

However, AI adoption is often undermined by unpredictability:

  • AI systems can provide sophisticated solutions to complex problems
  • Yet their high degree of autonomy can lead to unexpected behaviors
  • The consequences can be severe: reputational damage, financial losses, and eroded trust among customers and stakeholders

We founded Principled Intelligence to address this challenge: governing AI behavior and controlling AI-driven decisions in a simple, transparent manner aligned with each organization's unique principles.

What We're Building

We're developing an infrastructure layer for AI control and governance that integrates seamlessly into existing systems. Our platform enables companies to:

  • Define their operational principles in natural language
  • Translate them into verifiable criteria automatically
  • Simulate realistic interactions to stress-test AI behavior
  • Continuously monitor AI systems in production

This significantly reduces risk while ensuring controlled adaptability, consistency, and compliance with internal policies and external regulations.
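To make the workflow above concrete, here is a minimal sketch of what "principles in, verifiable criteria out" could look like. Principled Intelligence has not published its API; every name and the keyword-based "compilation" step below are illustrative assumptions only (in a real system a governance model, not a keyword match, would derive the check).

```python
# Hypothetical sketch: compile a natural-language principle into a
# verifiable criterion, then audit AI output against it.
# All names and logic are illustrative, not the company's actual API.
import re
from dataclasses import dataclass
from typing import Callable


@dataclass
class Criterion:
    """A verifiable check compiled from a natural-language principle."""
    principle: str                  # the original natural-language statement
    check: Callable[[str], bool]    # returns True if a response complies


def compile_principle(principle: str) -> Criterion:
    # Toy "translation" step: here a hard-coded keyword rule stands in
    # for the model-driven compilation described in the post.
    if "never share personal data" in principle.lower():
        email = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
        return Criterion(principle, lambda text: email.search(text) is None)
    # Default: no check could be derived, so treat the text as compliant.
    return Criterion(principle, lambda text: True)


def audit(response: str, criteria: list[Criterion]) -> list[str]:
    """Return the principles that the response violates."""
    return [c.principle for c in criteria if not c.check(response)]


criteria = [compile_principle("Never share personal data with customers.")]
print(audit("Contact jane.doe@example.com for details.", criteria))
# -> ['Never share personal data with customers.']
```

The same `audit` call could run both offline, against simulated interactions during stress testing, and online, against live traffic during continuous monitoring.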

What Makes Us Different

Unlike existing AI governance solutions, our platform is:

  • Easily customizable via natural language — no coding required to define your principles
  • Deeply explainable — providing in-depth explanations when violations occur
  • Accessible to non-experts — making governance a clear, inspectable, and manageable process for everyone
  • Multilingual by design — our years of experience in multilingual NLP ensure coherent performance across languages

Our Technology: Small Language Models for Governance

At the core of our technology are specialized small language models designed specifically for governance, compliance, guardrailing, and oversight tasks. These models are engineered to:

  • Understand company-defined principles
  • Enforce them in real time
  • Run on-premise on standard enterprise servers, ensuring data confidentiality and privacy compliance

In early pilot deployments, our models demonstrated higher accuracy, speed, and efficiency compared to state-of-the-art large language model–based approaches, enabling more reliable and economically sustainable AI control in enterprise settings.
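As a rough illustration of real-time enforcement, a small on-premise governance model can sit between the generating model and the user, vetting each candidate reply before it is delivered. The function names, the keyword stand-in for the classifier, and the fallback message below are all assumptions for the sake of the sketch, not the product's implementation.

```python
# Hypothetical guardrail loop: a compact local check intercepts
# non-compliant drafts before they reach the user. Everything here
# is illustrative; a trivial keyword rule stands in for the small
# governance model described in the post.
from typing import Callable


def small_governance_model(text: str, principles: list[str]) -> bool:
    # Stand-in for an on-prem classifier scoring text against principles.
    banned = {"guarantee", "refund"}  # e.g. "never promise refunds"
    return not any(word in text.lower() for word in banned)


def guarded_reply(generate: Callable[[str], str], prompt: str,
                  principles: list[str]) -> str:
    """Generate a reply, but release it only if it passes the check."""
    candidate = generate(prompt)
    if small_governance_model(candidate, principles):
        return candidate
    # Fallback when the draft violates a principle.
    return "I can't promise that; let me connect you with a colleague."


reply = guarded_reply(lambda p: "We guarantee a full refund!",
                      "Can I get my money back?",
                      ["Never promise refunds on behalf of the company."])
print(reply)  # the non-compliant draft is intercepted
```

Because the check runs locally on standard servers, no customer data needs to leave the company's infrastructure, which is what makes the confidentiality and latency claims above plausible.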

From Academia to Industry

As we mentioned in our introduction, Principled Intelligence was born from our years of research in Artificial Intelligence and Natural Language Processing (NLP) at Sapienza University of Rome. During my time at Apple in the United States, I contributed to research on large language models aimed at improving response quality, reliability, and factual accuracy in conversational systems.

Upon returning to Italy, I resumed collaboration with Edoardo to jointly lead the technical development of Minerva LLM, the first family of large language models trained from scratch on Italian data, a joint project between Sapienza University of Rome, CINECA, NVIDIA, and the Italian Ministry of University and Research under the FAIR initiative.

Minerva has set new state-of-the-art benchmarks on a range of Italian NLP tasks and has been adopted by universities, companies, and developers, with over 300,000 downloads. During our evaluations, however, we noticed significant limitations in its reliability and controllability.

The reliability, controllability, and value-alignment weaknesses we observed in Minerva, and in other large language models, made us realize that trust, not just raw capability, would be the key factor determining the success of AI systems in real-world applications.

This inspired us to explore innovative methods for AI control and governance. Building on this strong research foundation, we established Principled Intelligence to translate our academic insights into practical solutions that address real-world challenges in AI governance.

Our mission is to put AI control directly in the hands of companies—simply and transparently—so that AI can be adopted with confidence even in the most critical processes.

Our Perspective

In the coming years, trust—not just capabilities—will determine the success of artificial intelligence systems. Companies have powerful models and extremely valuable data, but often hesitate to bring them to market due to reputational, financial, and liability risks. What's missing is a clear and transparent way to verify that AI truly behaves in line with their principles, policies, and brand identity. We created Principled Intelligence to fill this gap: we help every company guide AI with its own principles, making it reliable, controllable, and therefore truly scalable.

Our Commitment to Open Source

We're strong believers in the power of open science and open-source software. Our academic roots taught us that progress happens faster when knowledge is shared openly, and we're bringing that philosophy to Principled Intelligence.

In the coming months, we'll be releasing:

  • Open governance models: Specialized small language models for compliance, guardrailing, and oversight tasks that anyone can use and build upon
  • Open evaluation benchmarks: Transparent, reproducible evaluation frameworks so the community can measure and compare AI governance approaches
  • Open research: We'll continue contributing to the academic and open-source community with papers, tools, and resources

We believe that making AI trustworthy shouldn't be a proprietary endeavor. By sharing our research and models openly, we hope to raise the bar for AI governance across the entire industry, and invite the community to help us improve.

What's Next

This funding is just the beginning. We're now focused on:

  • Expanding our team with top talent in AI and enterprise software
  • Accelerating platform development for production readiness
  • Working with early partners to deploy in real-world environments
  • Releasing open-source models and benchmarks to the community

If you're interested in building more transparent, reliable AI systems, we'd love to hear from you.


Press & Media Resources

Download our complete press kit including logos, photos, and press release.

Download Press Kit

Have questions or want to learn more? Reach out to us at hello@principled-intelligence.com
