Why we invested in Verax, the startup enabling trustworthy and responsible AI

InMotion Ventures extends its support of Verax AI, participating in a $7.6M Seed round less than six months after joining the startup’s first raise. This latest round was led by TQ Ventures with participation from Concept Ventures, Cardumen Capital, Seedcamp, InMotion Ventures, and XTX Ventures.

Sam Nasrolahi, Principal, and Wil Morgan, North American Investment & Scouting Lead, explore why we invested.

The Market

Generative AI has truly captured our attention. In a remarkably short period of time it has woven itself into the fabric of society. A striking 73% of UK consumers have used generative AI in their personal lives. Even those not deliberately seeking it out are interacting with large language models (LLMs) on a regular basis, through the apps they use or content they consume.

Enterprises have long understood the opportunities that generative AI offers. Yet the rapid ascendancy of LLM applications over the past two years took many firms by surprise. Concerns around reputational and financial risk drove many corporations to impose widely publicised restrictions on generative AI. Identifying technological safeguards became a critical precursor to any strategy for harnessing the benefits.

Similar to the initial hesitancy around cloud computing, concerns regarding data security and customer privacy are delaying the widespread commercial embrace of generative AI. Companies are exercising caution when entering this domain: many have opted to experiment on internal processes before introducing the technology to customer-facing applications. Organisations must trust generative AI’s reliability and ethical safeguards to fully realise its transformative impact.

The probabilistic foundation of LLMs enables them to generate responses that seem highly plausible based on the prompts provided. Yet these models cannot distinguish factual accuracy from generated prediction. This gap can lead to unpredictable and uncontrollable outcomes when organisations deploy LLMs. Chief among these are hallucinations, a phenomenon often seen in chatbots, where a model confidently generates nonsensical or incorrect responses by misinterpreting patterns in its data, and one that is particularly difficult to identify and rectify. This inherent unpredictability and lack of control underscores the critical need for ongoing monitoring and refinement.

Providers of LLMs are working hard to mitigate problematic behaviour. However, as developers build more software layers on top of these models, the potential for inaccuracies and security breaches only escalates. This vulnerability is especially apparent in enterprises whose primary expertise is not software development. Left unchecked, these challenges can lead to misinformation, poor customer experience, and ultimately harm to a brand’s reputation. All signs point towards a need for rigorous quality and security standards in the development and deployment of LLM-enhanced applications.

What Verax does

Mitigating the negative impacts of LLMs hinges on the ability to obtain real-time insight into their behaviour. One solution is to deploy supervisory software that operates in tandem with the LLM, reviewing interactions and identifying, blocking, and correcting errors such as hallucinations before they reach the end user.
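As a rough, hypothetical sketch of this supervisory pattern (not Verax’s implementation), the snippet below wraps an LLM call, runs the answer through a naive grounding check against source passages, and blocks anything it cannot verify. The function names and the check itself are illustrative assumptions.

```python
# Illustrative sketch of a supervisory layer that sits between the user and an LLM.
# The model client and the grounding check are placeholder assumptions, not Verax's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    approved: bool
    reason: str

def grounding_check(answer: str, source_passages: list[str]) -> Verdict:
    """Naive check: approve the answer only if most of its longer terms appear in the sources."""
    terms = {t.lower().strip(".,") for t in answer.split() if len(t) > 6}
    if not terms:
        return Verdict(True, "nothing substantive to verify")
    supported = sum(any(t in p.lower() for p in source_passages) for t in terms)
    if supported / len(terms) < 0.5:
        return Verdict(False, "answer contains claims not found in the source data")
    return Verdict(True, "answer is consistent with the source data")

def supervised_reply(call_llm: Callable[[str], str], prompt: str, source_passages: list[str]) -> str:
    """Review the model's answer and block or replace it before it reaches the end user."""
    answer = call_llm(prompt)
    verdict = grounding_check(answer, source_passages)
    if not verdict.approved:
        # In production this could trigger a retry, a correction, or escalation to a human.
        return "I can't verify that answer against our records, so I won't show it."
    return answer
```

A real system would use far more sophisticated verification, but the shape of the flow, review, verdict, then block or pass, is the relevant part.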

Verax is acutely aware of the impact that trust issues and misunderstandings around generative AI can have on enterprise-level innovation. Founders Leo and Oren have a clear mission: to empower organisations to confidently use responsible AI. They aim to achieve this through an enterprise-grade control centre that offers comprehensive visibility and command over LLMs in active deployment, ensuring their safe and effective use.

The current focus of the Verax Control Centre is to act as an intermediary layer. Deployed in the customer’s environment, it connects users with the enterprise system and verifies the quality of AI output. From this vantage point it can trace the lineage between an AI response and the data used to generate it. Moreover, when deviations from expected behaviour do occur, Verax’s software gives customers the data they need to address the issue and confirm its resolution, so that the same deviation is prevented in future production deployments.
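For illustration only, a response-to-source lineage record of this kind might look like the sketch below; the field names, JSONL storage, and lookup helper are assumptions made for the example, not Verax’s actual schema.

```python
# Minimal sketch of response-to-source lineage: each answer is logged alongside the
# identifiers of the data it drew on, so a flagged deviation can be traced and investigated.
# Field names and JSONL storage are illustrative assumptions, not Verax's schema.
import json
import time
import uuid

def record_lineage(prompt: str, response: str, source_ids: list[str],
                   log_path: str = "lineage.jsonl") -> str:
    """Append an audit record linking a response to the data used to generate it."""
    record = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
        "source_ids": source_ids,  # e.g. document or row identifiers retrieved for this answer
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["trace_id"]

def find_trace(trace_id: str, log_path: str = "lineage.jsonl") -> dict | None:
    """Retrieve the record for a flagged response during an investigation."""
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["trace_id"] == trace_id:
                return record
    return None
```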

Offering end-to-end visibility and a direct route to problem resolution, the Verax Control Centre introduces an unprecedented level of transparency and traceability to applications. This approach significantly reduces end-user risk, giving business leaders the confidence to rely on their applications. With Verax, they have the assurance that their LLMs function precisely as intended, fostering a secure and trustworthy environment for leveraging AI technology. Verax will maintain close relationships with its customers and continue building new offerings into its LLM-agnostic trust platform as enterprise behaviour evolves, laying the groundwork for significant scale. Because the technology requires no pre-training, it can be applied to different industries and use cases without extensive customisation. Crucially, it also removes the need for enterprise engineers, who typically lack ML skills, to pre-train the models themselves.

Why we invested

InMotion Ventures backs startups with the potential to accelerate the transformation of JLR. Enterprise technology is a core focus for our team, and the potential impact of artificial intelligence on the automotive industry is no secret.

In line with our approach of funding and scaling start-ups solving seismic, perennial problems, our team has prioritised the ‘picks and shovels’ of the generative AI infrastructure layer. We’re not alone in this approach. Yet our analysis has unearthed a significant flow of capital into uncertain value propositions that may lack genuine longevity.

With their focus on solving short-term infrastructure issues, many of the early startups have been absorbed into the platform layer, as evidenced by the frantic pivoting that accompanies each new release on OpenAI’s developer platform. We’ve also noticed many founders doubling down on relatively minor testing and deployment issues, in the anticipation that these will become more significant challenges in the future.

In a space as dynamic as generative AI, it’s difficult to predict how today’s issues will evolve with scale. As a result, we’re focusing our attention on the safeguarding and trust layer associated with enterprise deployment of AI applications. We believe this is an enduring business model: simultaneously a short-term blocker and a pain point, and one that will only grow in prevalence as software layers are increasingly built across different LLM platforms.

As enterprises move from testing to deploying applications, addressing hallucinations and improving logging capabilities have become universal priorities for data teams. Owing to the technical complexity involved, there are very few credible solutions that provide a precise view of AI quality and output validation. Despite a rich landscape of research, the market for practical, commercial solutions remains sparse.

At InMotion Ventures we see a huge opportunity in this space. Yet we felt it could only be realised by an agile, customer-obsessed and experienced founding team.

The exceptional backgrounds and proven track record of Leo and Oren were key factors in our decision to invest. The two previously collaborated at CloudEndure and AWS, and have deep experience selling to and working with large enterprises. Leo, a third-time founder, brings a history of impressive exits and an incredible track record of capital efficiency: his previous ventures were acquired by Limelight ($25M) and Amazon Web Services ($250M).

Before beginning any work, Leo and Oren leveraged their unrivalled network of Chief Information Security Officers to validate their approach. A diverse range of companies across industries and regions confirmed demand for the Verax offering. This comprehensive exercise underlined the universal appeal and necessity of the Verax Control Centre.

Having established clear market demand, the team are applying a proven playbook for growth. The leadership is rooted in the UK and the US, supported by an R&D centre in Israel. The founders are now activating their impressive reputations within the AI and developer community to attract a top-tier engineering team.

With an in-demand offering in a rapidly growing sector, an intelligent approach to company building, and a scalable strategy for addressing the pain points of data teams, Verax are on a trajectory towards significant success and perfectly poised to capture meaningful market share.

Congratulations to Leo and Oren for raising an impressive seed round from a reputable consortium of investors. Verax will use this fresh capital to accelerate go-to-market efforts and scale brand awareness, and we look forward to supporting their international expansion.

The InMotion Ventures team are always interested in speaking with exceptional founders setting new benchmarks in quality, technology, and sustainability. If you are a founder or know a company in the space, get in touch with our team. Contact us via LinkedIn or through our investment form.