
Secret AI startup Irregular surfaces with $80M to test artificial intelligence limits
Sequoia leads the rounds, joined by Wiz’s Assaf Rappaport, Eon’s Ofir Ehrlich, and other investors. The Israeli company, founded by Dan Lahav and Omer Nevo, works with OpenAI, Anthropic, and government clients.
In a market where AI adoption is accelerating at a dizzying pace, Irregular (formerly Pattern Labs) has emerged from stealth with $80 million in funding and contracts with some of the world’s leading AI labs, including OpenAI and Anthropic. Founded in 2023, the Israeli company specializes in stress-testing advanced models under real-world scenarios, from bypassing antivirus software to autonomous offensive operations, offering a blueprint for safe deployment of next-generation AI systems.
The first round, in which Sequoia invested $30 million, was followed a few weeks later by another round of about $50 million, in which Sequoia invested again alongside Redpoint, Omri Caspi’s Swish Ventures, and several local angel investors led by Assaf Rappaport, who is believed to have invested a significant sum, as well as Ofir Ehrlich from Eon. The company, founded in 2023, already generates significant revenue and employs 25 people, most of whom are based in Israel.
The rapid fundraising is not the only record Irregular is breaking. Working with leading AI labs, including OpenAI and Anthropic, the company has generated millions of dollars in revenue. Its work includes evaluating how next-generation AI models behave under real-world threats, from bypassing antivirus software to conducting autonomous offensive operations, and developing defenses to ensure safe deployment at scale. Its clients also include government bodies, such as the UK government. The company’s founders told Calcalist: “We have revenues coming from the largest labs in the world like OpenAI and Anthropic. We have research work with them, and we are at the heart of the activity with them.”
Dan Lahav, the company’s CEO, said: “There is a new market opening up, an emerging AI frontier where a proactive approach is essential. We aim to understand these systems from the inside, work directly with them, and anticipate the harm they could cause.”
Irregular was founded by Lahav and Omer Nevo (CTO), both with extensive experience in artificial intelligence and cybersecurity. Lahav previously worked at LabPixies, a startup acquired by Google, and later served as an AI researcher at IBM, where he received the Outstanding Technical Achievement Award. He has published numerous articles in leading journals, including a cover feature in Nature. Nevo is a serial entrepreneur who served as a development manager at Google Research, leading AI projects such as wildfire detection models and research-based products at scale. The two met through competitive debate: Nevo is a world debate champion, and Lahav holds the highest personal ranking in world championship history.
As AI adoption accelerates, the risks and challenges grow with it. As models become deeply embedded in critical decision-making processes, any malfunction or vulnerability can escalate from a local crisis into a systemic collapse. AI systems will only grow more capable and more deeply integrated into organizational operations, raising the potential risks. Without dedicated tools to test resilience and map the scope of threats, organizations may be managing critical systems almost blindly.
Irregular runs controlled simulations exposing advanced AI models to realistic scenarios, testing their potential misuse in cyberattacks and resilience under attack. The company provides AI model creators and operators with a safe method to discover vulnerabilities early and develop defenses. Tests include bypassing antivirus systems, mapping and analyzing system environments, and assessing misuse potential. Using confidential inference and hardware-based verification, Irregular enables leading AI labs to evaluate cyber risks and ensure safe deployment, even before models are publicly launched or widely implemented.