Unlock In-Context Learning at Scale
AI that Learns from Experience
Mission
We aim to build scalable AI systems founded on linear transformers, enabling infinite-context learning that redefines how machines learn, adapt, and apply knowledge. By moving beyond standard softmax attention, whose cost grows quadratically with context length, we design models that retain and refine knowledge over time. This unlocks real-time learning and optimization, advancing the critical path toward artificial general intelligence.
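To make the connection between linear transformers and unbounded context concrete, here is a minimal sketch of causal linear attention in its recurrent form. It is an illustration only, not our production code, and the feature map and function names are hypothetical choices; the point is that the per-token state has fixed size, so memory does not grow with sequence length.

```python
import numpy as np

def feature_map(x):
    # A simple positive feature map (ELU + 1), one common choice for linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention_recurrent(queries, keys, values):
    """Causal linear attention computed as a recurrence.

    queries, keys: (seq_len, d_k); values: (seq_len, d_v).
    The running state S (d_k x d_v) and normalizer z (d_k,) are fixed-size,
    so the memory cost per step is constant regardless of context length.
    """
    d_k, d_v = keys.shape[1], values.shape[1]
    S = np.zeros((d_k, d_v))   # accumulated key-value outer products
    z = np.zeros(d_k)          # accumulated keys for normalization
    outputs = []
    for q_t, k_t, v_t in zip(queries, keys, values):
        phi_k = feature_map(k_t)
        phi_q = feature_map(q_t)
        S += np.outer(phi_k, v_t)              # constant-size state update
        z += phi_k
        out = (phi_q @ S) / (phi_q @ z + 1e-6)  # attend via the running state
        outputs.append(out)
    return np.stack(outputs)

# Example: process a long sequence with constant memory per step.
rng = np.random.default_rng(0)
q = rng.standard_normal((1024, 16))
k = rng.standard_normal((1024, 16))
v = rng.standard_normal((1024, 32))
print(linear_attention_recurrent(q, k, v).shape)  # (1024, 32)
```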
Stay Updated
Follow our journey as we tackle the engineering and research challenges that make infinite-context learning possible. From innovations in memory systems to techniques for parallelizing recurrent models, we’re sharing our progress and insights on our technical blog. Join us and explore the next era of AI.