In Brief
- Multiverse Computing has raised €189 million ($215 million) to scale its quantum-inspired AI model compression tool, CompactifAI, which reduces the size of large language models by up to 95% without sacrificing performance.
- CompactifAI delivers 4x to 12x faster processing and 50% to 80% lower inference costs, making it possible to run advanced AI models on devices ranging from phones to Raspberry Pi boards.
- The Series B round is backed by investors including Bullhound Capital, HP Tech Ventures, Forgepoint Capital and Toshiba, supporting Multiverse's push to transform the $106 billion AI inference market.
Press release – Multiverse Computing, the global leader in quantum-inspired AI model compression, has developed CompactifAI, a compression technology capable of reducing the size of large language models (LLMs) by up to 95% while preserving model performance. After spending 2024 developing the technology and deploying it with early customers, the company now announces a €189 million ($215 million) investment round.
The Series B round will be led by Bullhound Capital with the support of world-class investors such as HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Toshiba and Capital Riesgo de Euskadi – Grupo SPRI. This range of international and strategic investors lends broad support to the company's push. The investment will accelerate widespread adoption, addressing the massive costs that make LLM deployment prohibitive and revolutionizing the $106 billion AI inference market.
LLMs typically run on specialized, cloud-based infrastructure that drives up data center costs. Traditional compression techniques – quantization and pruning – aim to address these challenges, but the resulting models significantly underperform the original LLMs. With the development of CompactifAI, Multiverse has discovered a new approach. CompactifAI models are highly compressed versions of leading open-source LLMs that retain the original accuracy, run 4x to 12x faster, and deliver a 50% to 80% reduction in inference costs. These compressed, affordable, energy-efficient models can run in the cloud, in private data centers or – in the case of ultra-compressed LLMs – directly on devices such as PCs, phones, cars, drones and even Raspberry Pi boards.
“The prevailing wisdom is that shrinking LLMs comes at a cost. Multiverse is changing that,” said Enrique Lizaso Olmos, founder and CEO of Multiverse Computing. “What started as a breakthrough in model compression quickly proved transformative, unlocking new efficiencies in AI deployment and gaining rapid adoption for its ability to radically reduce the hardware requirements for running AI models. With a unique syndicate of expert and strategic global investors on board and Bullhound Capital as lead investor, we can now advance our laser-focused rollout of compressed AI models.”
CompactifAI was created using tensor networks, a quantum-inspired approach to simplifying neural networks. Tensor networks are a specialized field of study pioneered by Román Orús, co-founder and Chief Scientific Officer at Multiverse. “For the first time in history, we are able to profile the inner workings of a neural network to eliminate billions of spurious correlations and truly optimize all kinds of AI models,” said Orús. Compressed versions of top Llama, DeepSeek and Mistral models are available now, with additional models coming soon.
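The general idea behind tensor-network compression – replacing a dense weight matrix with a factored form that keeps only the strongest correlations – can be illustrated with a truncated SVD, the simplest such factorization. This is a hedged toy sketch of the concept, not CompactifAI's actual proprietary algorithm:

```python
import numpy as np

# Toy illustration of tensor-network-style compression: approximate a
# dense weight matrix W by a low-rank factorization A @ B, discarding
# weak ("spurious") correlations. NOT Multiverse's actual method.

rng = np.random.default_rng(0)
d = 512
# Synthetic weight matrix: strong rank-16 structure plus weak noise.
W = (rng.standard_normal((d, 16)) @ rng.standard_normal((16, d))
     + 0.01 * rng.standard_normal((d, d)))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 16                          # keep only the dominant correlations
A = U[:, :rank] * s[:rank]         # shape (d, rank)
B = Vt[:rank, :]                   # shape (rank, d)

compression = (A.size + B.size) / W.size   # fraction of parameters kept
error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"parameters kept: {compression:.1%}, relative error: {error:.4f}")
```

Here the factored form stores about 6% of the original parameters while reproducing the matrix almost exactly, because most of the discarded directions carry only noise. Real tensor-network methods generalize this idea to higher-order decompositions applied across a network's layers.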
Per Roman, co-founder and Managing Partner of Bullhound Capital, said: “Multiverse's CompactifAI introduces material changes to AI processing that address the global need for greater efficiency in AI, and its ingenuity will accelerate European sovereignty. Enrique Lizaso is the right CEO to lead the company's rapid expansion in the global race for AI dominance.”
Tuan Tran, President of Technology and Innovation, HP Inc., said: “At HP, we are committed to leading the future of work by providing solutions that drive business growth and enhance professional fulfillment. Our investment in Multiverse backs its innovative approach to shrinking the size of AI models.”
Damien Henault, Managing Director, Forgepoint Capital International, said: “The Multiverse team has solved a deeply complex problem with sweeping implications. The company is well positioned to become a foundational layer of the AI infrastructure stack. CompactifAI represents a quantum leap for the global deployment and application of AI models, enabling smaller, cheaper and greener AI.”
Multiverse Computing extends its sincere gratitude to its current investors for their trust and continued support, as well as to the European institutions whose backing helped the company reach this milestone. For more information about Multiverse Computing and CompactifAI, visit multiversecomputing.com.