AI21 unveils upgraded long-context Jamba models for enterprises
The new models aim to revolutionize AI capabilities for organizations with long-context needs and include Hebrew and Arabic support.
AI21 Labs has unveiled a new family of language models designed for organizations, which for the first time offers full support for Hebrew and Arabic. The new family, called Jamba, introduces the large language model 'Jamba 1.5 Large' alongside 'Jamba 1.5 Mini,' an updated version of its smaller model. Both models are released as open models to the developer community. In a conversation with Calcalist, Ori Goshen, one of the company's founders and its CEO, said: "We launched two language models that support Hebrew and Arabic as core languages. Other models have Hebrew support, but not as a core language. We included Hebrew support for Zionist reasons, as it is not a significant market for us, but we are a local player aiming to foster innovation and advance AI in Hebrew."
The company, founded in 2017 by Prof. Amnon Shashua (the founder of Mobileye), Yoav Shoham, and Ori Goshen, has raised about $336 million from leading investors, including NVIDIA, Intel, and Google. Shoham, a world-renowned AI professor who led Stanford University's AI laboratory, returned to Israel in recent years.
"The input to the language model is crucial as it determines the relevance of the AI's responses. Most language models on the market are based on transformer architecture, which slows down as input size increases, resulting in delays. Our innovation is the Mamba architecture, which is significantly faster with long inputs, though it compromises quality. We combined this with another architecture to create Jamba, benefiting from both quality and speed, allowing us to handle very large outputs. For example, we can process the entire output of 'The Lord of the Rings,'" Goshen explained.
"The Jamba models will be available on major cloud platforms, including NVIDIA and Snowflake, each offering different business models. We've noticed that companies with sensitive information prefer AI models hosted within their own cloud or server farms rather than public clouds. Offering private models for organizations is expected to be a significant revenue source for us. We have made the model available for anyone to work with commercially, but companies with revenues over $50 million will need to purchase a license for use within their own servers," Goshen added.
"We focus on organizations, unlike Anthropic and OpenAI, which target end consumers. Our emphasis is less on generating creative content and more on addressing questions requiring high reliability in fields like health, economics, and finance," said Goshen. "There is significant investment in our field, with large sums of money involved. Our focus is on meeting organizational needs, and our substantial funding helps us remain competitive."