An Interview with Kai Kreuzer on Eclipse LMOS and GenAI in the Automotive Industry
At the SDV Community Days at Lunatech in Rotterdam, we sat down with Kai Kreuzer, co-lead of Eclipse LMOS, to learn more about the AI project and how it could benefit open source automotive software development. LMOS enables existing development teams to build and scale AI-driven cloud-native applications. With standout features like a Kotlin-based DSL for rapid agent creation and robust lifecycle management for AI agents in Kubernetes, LMOS offers a flexible, production-ready foundation. Its open, vendor-neutral protocol fosters scalable, collaborative multi-agent systems – even beyond Kubernetes – supporting digital sovereignty and innovation. Whether you're developing agents or deploying a local demo system, LMOS makes it easy to dive in and start building.
Hi Kai, please introduce yourself!
I'm a software architect at Deutsche Telekom, working in the AI Competence Center. Since the beginning of this year, I have been one of the project co-leads of Eclipse LMOS, a platform that we built at Deutsche Telekom and open-sourced at the Eclipse Foundation.
Why should people and organisations interested in developing SDVs take a closer look at Eclipse LMOS?
Organisations that deal with software-defined vehicles are typically medium to large enterprises with their own IT departments. They have been doing software development for a while, and they have existing teams building professional software. Such companies are currently wondering: what does the new age of AI – and generative AI specifically – mean for their software development world? Do they have to invest in new teams and new skills? How do they deal with that? That's where LMOS comes into play, because we are saying that existing teams should be enabled to embrace generative AI – to enhance the applications they are already building with the tools and skills they already have, and to bring all of that to the new software development paradigms that come with generative AI. Eclipse LMOS mainly targets cloud-native application development, such as scalable microservices architectures, and applies the very same concepts to AI agents: small modules that can be developed independently by different teams and assembled into large applications.
In your opinion, what are LMOS's strongest features?
LMOS as a platform has many cool features. Let me point out two of my favourites. The first is a Kotlin-based DSL (domain-specific language), which enables developers to create AI agents very easily by essentially just describing them in natural language. Because the DSL is based on Kotlin, this simple starting point can evolve into a complex, multi-tenant-capable, production-ready deployment artifact: you have the whole breadth of the Java ecosystem available at your fingertips, and you can make use of all the Kotlin language features. So it's very easy to go from a small proof of concept or initial demo to a scalable deployment of your system.
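The DSL Kai describes relies on Kotlin's type-safe builders. As a rough, self-contained illustration of the idea – the names `agent`, `prompt`, and `AgentSpec` are invented for this sketch and are not the actual ARC API – such a builder can look like this:

```kotlin
// Minimal sketch of a Kotlin type-safe builder for describing an agent.
// All names here are illustrative, not the real Eclipse LMOS ARC API.
class AgentSpec {
    var name: String = ""
    var description: String = ""
    var promptText: String = ""
        private set

    // The "natural language" part: the agent is described via a prompt.
    fun prompt(block: () -> String) {
        promptText = block()
    }

    fun summary(): String = "Agent '$name': $description"
}

// Entry point of the DSL: configure an AgentSpec with a builder lambda.
fun agent(init: AgentSpec.() -> Unit): AgentSpec = AgentSpec().apply(init)

fun main() {
    // Describing an agent reads almost like plain text.
    val weather = agent {
        name = "weather-agent"
        description = "Answers simple questions about the weather."
        prompt {
            """
            You are a friendly weather assistant.
            Answer the user's weather questions concisely.
            """.trimIndent()
        }
    }
    println(weather.summary())
}
```

Because the builder is ordinary Kotlin, such a definition can later grow into a full deployment artifact with configuration, dependency injection, and any Java library the team already uses.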
The second feature I would like to highlight is everything around the lifecycle of agents. Similar to microservices in classical applications, agents become first-class citizens within a Kubernetes cluster: you can deploy them independently and version them independently. And by leveraging a service mesh like Istio, we automatically encrypt the traffic within the cluster, and we enable developers to do canary deployments – shifting, for example, only 5% of the traffic to a new version of an agent for early testing.
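As an illustration of the canary pattern Kai mentions, a weight-based traffic split in Istio is expressed through a `VirtualService`; this sketch assumes a hypothetical agent service named `weather-agent` with two deployed versions behind the subsets `v1` and `v2`:

```yaml
# Hypothetical Istio routing rule: send 5% of traffic to the new agent version.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: weather-agent
spec:
  hosts:
    - weather-agent
  http:
    - route:
        - destination:
            host: weather-agent
            subset: v1        # current stable version
          weight: 95
        - destination:
            host: weather-agent
            subset: v2        # new version under early testing
          weight: 5
```

Adjusting the weights then gradually shifts traffic to the new version without redeploying either agent.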
Could you tell us about the LMOS protocol and the vision of LMOS?
The LMOS protocol is another artifact that materialised out of the implementation of the LMOS platform. As I said, the agents have independent lifecycles, and because of that independence they need a mechanism to discover each other and to negotiate how to communicate with each other. This is exactly what the LMOS protocol provides on the platform. The cool thing about this protocol is that it's not constrained to use inside Kubernetes clusters – you can apply it in any kind of network, up to the scale of the internet itself. And we're looking into enabling multi-agent collaboration on the internet, on any kind of network. The whole idea and vision behind that is to enable developers to build vendor-neutral systems. We don't believe that AI agents should live only on the platforms of a few big companies; sovereignty is a really big deal – not just for individual developers and companies, but for whole countries and economies like the European Union. The idea is to let people decide where to deploy things and how to build them, and to keep all the communication between agents independent and under their own control.
What's the best way to get started with LMOS?
To get started, it's best to visit the project website at https://eclipse.dev/lmos/. Besides reading up on some of the core concepts, the best way is clearly to get your hands dirty as quickly as possible. There are two paths you can take. One is to look into agent development with the Kotlin DSL that we call ARC, where you can do some easy prototyping and see how simple agent development is with Eclipse LMOS. The other is to try out the LMOS demo project, which sets up a small Kubernetes cluster on your local machine and deploys some example agents, so you can see the dynamics of a multi-agent system running locally on your PC.