Apertus: Switzerland’s open-source national LLM
Switzerland has released Apertus, a fully open, publicly developed large language model built by EPFL, ETH Zurich and the Swiss National Supercomputing Centre (CSCS). Designed as public AI infrastructure, Apertus aims to offer a transparent alternative to closed models.
Key facts
- Model sizes: 8 billion and 70 billion parameters
- Training scale: ~15 trillion tokens
- Multilingual: trained on data from 1,000+ languages (≈40% non-English, including Swiss German and Romansh)
- Open release: code, model weights, training recipes and datasets publicly available
- Access: distributed via Swisscom and hosted on Hugging Face
- Compliance: built to respect Swiss data protection and copyright law
Why it matters
Apertus is positioned as public-good AI: fully auditable, reproducible, and intended for research, education, and commercial adaptation. By releasing the datasets and training details, the Swiss institutions aim to increase transparency in model development and to provide an option that aligns with European privacy and regulatory standards.
Who can use it?
Researchers, hobbyists and companies can download and adapt Apertus for chatbots, translation, training tools, and more. The open approach makes it suitable for organizations that need to verify training data provenance and compliance.
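For example, the weights can be pulled straight from Hugging Face with the standard transformers loading pattern. The sketch below is illustrative only; the repository ID shown is an assumption and should be checked against the actual Apertus listing on Hugging Face.

```python
# Minimal sketch: loading an Apertus checkpoint via Hugging Face transformers.
# The repository ID is an assumption for illustration -- verify the real
# model names on the Apertus listing on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-8B-Instruct"  # hypothetical ID; check Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; it places layers on
# whatever GPUs/CPU are available.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the goals of the Apertus project in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the training recipes and datasets are published alongside the weights, the same checkpoint can be audited or fine-tuned rather than treated as a black box.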
Where to learn more
Official details and coverage are available from ETH Zurich and Swiss media:
- ETH Zurich: A language model built for the public good
- SwissInfo coverage
- Search for Apertus on Hugging Face
Notes & context
The Swiss release emphasizes that Apertus was trained on publicly available data and that its data-collection crawlers respected machine-readable opt-out signals where website operators provided them. As with other LLMs, legal and ethical questions around dataset composition and downstream use remain important considerations for adopters.
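The release materials summarized here do not include the crawler code itself, but one common machine-readable opt-out is the robots.txt protocol. As a generic illustration (not the Apertus team's actual implementation), a crawler can honor it with Python's standard library:

```python
# Illustration only: honoring robots.txt opt-outs with Python's standard
# library. This is a generic sketch, not the Apertus crawler's actual code.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Skip any URL the site has opted out of for this user agent.
url = "https://example.com/articles/page1.html"
if rp.can_fetch("ExampleCrawler", url):
    print("allowed to fetch:", url)
else:
    print("opt-out respected, skipping:", url)
```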