Working with the brightest minds Switzerland has to offer, a new language model is being prepared. Competing with LLMs from the US, the EU, and, most recently, China, Switzerland will have its own open-source model. It is claimed to support over 1,000 languages and to come in a version with 70 billion parameters.
Looking at these numbers, this is Meta’s Llama 3 territory, and if you’re a DIY aficionado, this is your call to go cheese and chocolate when choosing your next AI stack.
The release is announced for late summer 2025, and according to the media release, the model supports over 1,000 languages. It was trained on the «Alps» supercomputer at the Swiss National Supercomputing Centre (CSCS).
In contrast to commercial LLMs such as ChatGPT from OpenAI or Claude from Anthropic, the Swiss model is built on more ethical grounds and positioned around transparency: the source code, model weights, and training data are all made available. This is rare in what has become a multi-billion-dollar industry. Although LLMs from Meta and DeepSeek are also categorised as open source, they are not entirely so in reality. If you look closely at their terms, the algorithms and training data remain private.
The ETH model will be released in two versions: a small one with eight billion parameters and a larger one with 70 billion parameters. The latter is comparable to Meta’s Llama 3, while OpenAI’s GPT-4 is estimated to have around 2 trillion parameters and Anthropic’s Claude 4 Opus around 300 billion parameters. What should you choose for your project? The number of parameters is not the only metric for the performance of an LLM, but it is an indication of how the model might behave on large datasets, or when you plug in RAG and lots of APIs behind it. Proprietary models currently achieve the highest marks, at least on benchmarks (sponsored or not). If you are sensitive to control over how your data is handled or processed, open-source models offer advantages in terms of traceability, customisability, and data sovereignty.
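Parameter count also translates directly into hardware requirements, which is often the deciding factor between an 8B and a 70B model. A rough back-of-envelope sketch (my own illustration, not from the project): weights alone need roughly parameters × bytes-per-parameter of memory, before activations and KV cache.

```python
def model_memory_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Rough memory needed just to hold the weights (ignores activations and KV cache)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# 8B model in fp16/bf16 (2 bytes per parameter): fits on a single 24 GB consumer GPU
print(model_memory_gb(8))        # → 16.0 GB
# 70B model in fp16/bf16: multi-GPU or server territory
print(model_memory_gb(70))       # → 140.0 GB
# 70B model with 4-bit quantisation (0.5 bytes per parameter)
print(model_memory_gb(70, 0.5))  # → 35.0 GB
```

So the small version is plausible on a hobbyist machine, while the 70B sibling asks for serious hardware unless you quantise aggressively.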
Swiss data protection law, Swiss copyright law, and the transparency obligations of the EU AI Act were already well documented when the development teams behind the LLM started integrating their work. In a recent study, the project leads themselves report that no loss of performance on everyday tasks is observed when opt-outs for web crawling are respected during data collection, meaning the training simply ignores certain web content (ahem, OpenAI).
The model is published under the Apache 2.0 license, making it accessible for both scientific and industrial applications. It is a result of the Swiss AI Initiative, launched by EPFL and ETH Zurich in December 2023. With over 800 researchers involved and access to more than 20 million GPU hours per year, it is the world’s largest open science and open-source project for AI.
Thank you, Switzerland, for showcasing that leapfrogging is still a thing.