One year after the “DeepSeek Moment,” open source has become the default. Models, research, infrastructure, and deployment are increasingly shared to support large-scale, system-level integration.
This final post examines how leading Chinese AI organizations are evolving, and what this implies for the future of open source.
✨ Sparse MoE: 196B total / 11B active (see the sketch below)
✨ Supports up to 256K context
✨ Multi-token prediction for fast decoding (100–300 tok/s)
✨ Runs locally on consumer hardware
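To make the "196B total / 11B active" figure concrete: in a sparse MoE, a router picks a few experts per token, so only a small slice of the parameters runs on any forward pass. Here is a minimal PyTorch sketch of top-k routing, with toy sizes, not this model's actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Toy sparse-MoE layer: each token runs only top_k of num_experts."""
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                           # x: (tokens, dim)
        scores = self.router(x)                     # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, -1)  # each token picks top_k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = idx[:, slot] == e            # tokens whose slot-th pick is expert e
                if mask.any():
                    w = weights[mask, slot].unsqueeze(-1)
                    out[mask] += w * expert(x[mask])
        return out

moe = TinyMoE()
y = moe(torch.randn(10, 64))  # each token activates only 2 of the 8 expert MLPs
```

Scaled up, the same routing logic is why a ~196B-parameter model only computes with ~11B parameters per token, which is what makes local inference on consumer hardware plausible.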
They just dropped their first VLA and a depth-perception foundation model on Hugging Face.
✨ LingBot-VLA:
- Trained on 20k hours of real-world robot data
- 9 robot embodiments
- Clear no-saturation scaling laws
- Apache 2.0
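Since the weights are openly hosted on the Hub, pulling them locally is one call with `huggingface_hub`. The repo id below is a hypothetical guess at the naming, so verify the actual organization and model name on Hugging Face first:

```python
from huggingface_hub import snapshot_download

# Hypothetical repo id -- check the real org/name on the Hub before running.
local_dir = snapshot_download(repo_id="inclusionAI/LingBot-VLA")
print(local_dir)  # local path containing the downloaded weights and config
```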
✨ Native multimodality: image + video + language + agents 💥
✨ 1T MoE / 32B active
✨ 256K context
✨ Modified MIT license
✨ Agent Swarm execution (see the sketch below)
✨ Open weights + open infra mindset
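"Agent Swarm execution" refers to fanning a job out across many model-backed agents running in parallel. The post doesn't show the vendor's actual runtime; what follows is only a generic asyncio sketch of the fan-out/fan-in pattern, with `agent()` as a hypothetical stand-in for a real model call:

```python
import asyncio

async def agent(task: str) -> str:
    """Hypothetical stand-in for one model-backed agent (a real one would call an LLM)."""
    await asyncio.sleep(0.1)  # simulate model latency
    return f"result for {task!r}"

async def swarm(tasks: list[str]) -> list[str]:
    # Fan out: each sub-task becomes its own agent, all running concurrently,
    # then gather() fans the results back in.
    return await asyncio.gather(*(agent(t) for t in tasks))

results = asyncio.run(swarm(["search docs", "write tests", "summarize diff"]))
print(results)
```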