In the Zone | Our Investment in Mindgard
January 30, 2025
By Greg Dracon, Partner and Austin Kwoun, Analyst, .406 Ventures
“All software has security risks, and AI is no exception. The challenge is that the way these risks manifest within AI is fundamentally different from other software. Drawing on our 10 years of experience in AI security research, Mindgard was created to tackle this challenge. We’re proud to lead the charge toward creating a safer, more secure future for AI.” -- Peter Garraghan, Co-founder and CEO of Mindgard
Well before we met Mindgard, we had been on the lookout for a dynamic AI security platform that could keep up with the pace and scale at which the technology was evolving. .406 Ventures has been investing in AI companies since 2013 when we backed Indico, Alec Radford’s first company. Alec went on to be the lead author of the 2018 OpenAI paper that presented the generative pre-trained transformer (GPT) framework for the first time, launching the dialogue around this incredible technology that catapulted large language models (LLMs) and Generative AI (GenAI) into the limelight. But, as excited as we were to see GenAI finally hit the mainstream in 2022, we anticipated that a lack of trust in how LLMs perform and how they secure data would give enterprise buyers pause about putting these models into commercial use.
Sure enough, enterprises have been slow to fully deploy GenAI. Fortune 500 companies are universally enticed by its potential but have generally limited their commitments to low-risk commercial applications and proofs of concept (POCs). When we talked with our Cyber Executive Council (comprising members from many Fortune 500 companies as well as other cybersecurity industry leaders) to gather additional insight, they expressed apprehension about hallucinations and model drift, but security was, without a doubt, the most problematic concern for them.
Having been investing in cybersecurity for nearly two decades, we have seen time and again that new technologies (e.g. cloud infrastructure, SaaS, containers, etc.) inevitably expand attack surfaces as they’re adopted. These surfaces need to be understood and protected before the technology can be fully utilized. AI is no different. So, we were extremely careful about where we invested. We vigorously looked at the first wave of AI security, i.e. guardrails such as firewalls, filtering, etc. We decided to pass - our rationale being that these defensive controls, which require predictive policies and continual updating, would be challenged by the unpredictability of how GenAI operates. They would also struggle to keep up with the pace at which the underlying LLMs architectures are evolving. In our view, the right solution had to use offensive techniques to identify both known and unknown vulnerabilities, and it had to be nimble enough to keep up with the pace of innovation by adversaries who see opportunity in new and frequently changing attack surfaces.
Add all of this to our belief that, in order to hurdle the trust barrier, the ideal solution would secure AI systems both during the development phase as well as at runtime. We’d now defined the AI security solution of our dreams. We just needed to find it.
Enter Mindgard
We identified Mindgard through a trusted advisor in early 2024 and immediately knew we’d found something in that sweet spot.
For starters, the Mindgard founding team has a rare combination of cybersecurity and AI expertise, which they honed by identifying, dissecting, and cataloging every attack ever launched against an AI system. With the world’s largest library of AI model attacks in hand, Mindgard has built a platform that continuously and automatically runs attacks against AI models (generative and traditional) to assess vulnerabilities. As quickly as Mindgard identifies a model’s weaknesses, it pipes both data science and security mitigations to the relevant teams for remediation, thereby offering a full-circle AI security platform. It’s this platform that we believe will be critical for enterprises to establish the necessary trust to move AI models from POC to production.
Mindgard’s Edge
Professor (and CEO) Peter Garraghan has been laser-focused on the security of AI models for nearly a decade. In 2016, he founded the AI Security Lab at Lancaster University (now a veritable center of gravity for security research in the UK!), to build this library and to scale it for production-grade red teaming (a cybersecurity practice where security professionals develop and simulate real-world attacks against a system as a malicious actor would in order to identify security vulnerabilities). In 2022, he teamed up with seasoned cybersecurity executive and repeat founder Steve Street to found Mindgard.
Peter’s expertise and his lab’s robust attack library is just one of many aspects that differentiates Mindgard from other AI security start-ups that are just now recognizing the same need.
Additionally, we believe that Mindgard has many other clear advantages:
.406 and Mindgard
Mindgard has all the elements of a category-creating security platform. It reminds us of other great .406 portfolio companies like Veracode, which was one of the first companies to secure applications; Carbon Black, which pioneered endpoint detection and response (EDR); and Randori, which was the first automated red teaming solution.
We’re thrilled to lead Mindgard’s seed round, which closed in December 2024. The company is already working with world-class organizations to secure their AI deployments, and we’re excited to support their growth in the coming years.
As Peter puts it:
“Mindgard’s mission is clear: to secure the world’s AI and enable organizations to innovate with confidence.”
Here’s to creating a safer, smarter future for AI.
________________________
In the Founder’s Words, from Peter Garraghan, Co-founder and CEO:
WHAT’S MINDGARD?
Mindgard is the leader in AI security testing and red teaming, born from cutting-edge research at Lancaster University. We tackle the unique cybersecurity risks of AI systems that traditional tools can’t address. From securing generative AI models to protecting sensitive data, Mindgard’s mission is clear: to secure the world’s AI and enable organizations to innovate with confidence.
FAVORITE QUOTE:
"Time is the most valuable thing one can spend." Variants of this quote have been attributed to many individuals. For me, it underscores the vital importance of prioritization in building and scaling a startup where effective time management can mean the difference between success and stagnation.
BEST ADVICE RECEIVED:
Focus on solving meaningful problems. Technology alone isn’t enough; it must address real business needs and create tangible value for customers.
PAIN POINT ADDRESSED:
The deployment and use of AI introduces new risks, creating a complex security landscape that traditional tools cannot address. As a result, many AI products are being launched without adequate security assurances, leaving organizations vulnerable—an issue underscored by a Gartner finding that 29% of enterprises deploying AI systems have reported security breaches, and only 10% of internal auditors have visibility into AI risk. Many of these new risks such as LLM prompt injection and jailbreaks exploit the probabilistic and opaque nature of AI systems, which only manifest at runtime. Securing these risks, unique to AI models and their toolchains, requires a fundamentally new approach.
PROBLEM SOLVED:
Organizations are unable to use AI anywhere close to its full potential because of security risk. Mindgard’s Dynamic Application Security Testing for AI (DAST-AI) solution uncovers runtime vulnerabilities in AI systems that traditional tools miss. This ensures AI applications are secure by design and stay secure.
MILESTONE MOMENT:
Two important milestones are spinning out from Lancaster University, transforming years of cutting-edge research into a commercial solution, and closing our seed round with .406!
TECHNOLOGICAL INNOVATION:
Mindgard’s innovation lies in its ability to identify vulnerabilities in AI systems through cutting-edge methods like red teaming and runtime security testing. Our DAST-AI platform uncovers risks like prompt injection and model theft, pushing the boundaries of what’s possible in AI security testing.