Michael Kissner: Developing the World’s First All-Optical Processor for High-Performance Computing and AI at Akhetonics*
Benjamin Wolba
eurodefense.tech | Fostering Defense Innovation for European Sovereignty | Blogging at future-of-computing.com
Making transistors ever smaller so that more of them fit on a silicon chip triggered the microprocessor revolution of the 1970s and formed the basis of our modern digital society.
Fifty years later, we can watch history being made by Akhetonics, which is assembling optical transistors into logic circuits to process information all-optically, aiming for orders of magnitude greater efficiency, bandwidth, and speed than electronic processors.
Since our last interview in the summer of 2022, Akhetonics has gone through the Intel Ignite program and made significant progress in making optical, general-purpose processors a reality. Having shown as a proof of concept that a fully general-purpose optical processor is feasible, they have also started developing their first product: an all-optical AI accelerator that will provide ultra-fast, efficient AI inference at light speed on the edge.
We had the pleasure of talking again with Michael Kissner, co-founder and CEO of Akhetonics, about how they’re making optical computing a reality, what challenges remain, and how the Intel Ignite program accelerated their journey:
What’s On Your Roadmap Today?
Since our last interview, we have demonstrated the world’s first all-optical CPU, a fully general-purpose optical processor. Importantly, it doesn’t suffer from the von Neumann bottleneck, as we can perform memory operations very fast and thus achieve a one-to-one ratio of compute to memory operations. It’s a proof of concept, so it’s not meant for production, but it shows the potential of our technology. While the architecture is simple, implementing only a single instruction, it could, in principle, run Microsoft Windows.
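The interview doesn’t name which single instruction the proof of concept implements, but SUBLEQ (“subtract and branch if less than or equal to zero”) is the textbook example of how one instruction suffices for a Turing-complete, general-purpose machine. A minimal Python sketch, purely for illustration:

```python
# Minimal SUBLEQ ("subtract and branch if <= 0") interpreter.
# SUBLEQ is the classic one-instruction computer that is still
# Turing-complete; this is an illustration of the principle,
# not Akhetonics' actual instruction set.

def run_subleq(mem, pc=0, max_steps=10_000):
    """Each instruction is a triple (a, b, c):
    mem[b] -= mem[a]; jump to c if the result is <= 0, else fall through.
    A negative address a halts the machine (a common convention)."""
    for _ in range(max_steps):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        if a < 0:
            return mem
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    raise RuntimeError("step limit exceeded")

# Compute y = y + x with nothing but SUBLEQ, using a scratch cell z:
#   z -= x   (z becomes -x)
#   y -= z   (y becomes y + x)
x, y, z = 9, 10, 11           # data addresses in memory
prog = [x, z, 3,              # pc 0: z -= x
        z, y, 6,              # pc 3: y -= z
        -1, -1, -1,           # pc 6: halt
        7, 35, 0]             # data: x = 7, y = 35, z = 0
assert run_subleq(prog)[y] == 42
```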
Our first product will be an optical XPU, where the X indicates that it will address applications across domains. It will be a reduced instruction set (RISC) processor, and a particularly relevant use case will be machine learning, allowing for lightning-fast inference. With great companies such as Linque and Lumai focusing on analog optical AI accelerators, we can focus on the all-optical control logic and memory, so the entire chip will operate at light speed and we avoid the losses from repeated electronic-to-photonic conversions.
When you run a large language model like GPT-4 or Llama on Nvidia GPUs, you’re typically not limited by the GPUs’ compute but by memory: how quickly data can be stored and retrieved. GPUs perform only about one memory operation for every 1,000 compute operations. Our XPU will approach one memory operation for every compute operation, a more than 100x better balance between compute and memory.
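A quick roofline-style sanity check illustrates why LLM inference is memory-bound on GPUs; all hardware and model numbers below are rough, assumed figures, not vendor specifications:

```python
# Back-of-the-envelope roofline check: is a workload compute- or
# memory-bound? Peak FLOP/s and bandwidth are illustrative assumptions.

def bound(flops, bytes_moved, peak_flops, peak_bw):
    """Return which resource limits the kernel and the attainable time."""
    t_compute = flops / peak_flops
    t_memory = bytes_moved / peak_bw
    limiter = "memory" if t_memory > t_compute else "compute"
    return limiter, max(t_compute, t_memory)

# Toy LLM decode step: a matrix-vector product reads every weight once
# but does only ~2 FLOPs per weight read -- far below the hundreds of
# FLOPs per byte this hypothetical GPU needs to stay compute-bound.
weights = 7e9                     # parameters touched per token
flops = 2 * weights               # one multiply + one add per weight
bytes_moved = 2 * weights         # fp16 weights: 2 bytes each

limiter, t = bound(flops, bytes_moved, peak_flops=1e15, peak_bw=3e12)
print(limiter, f"{t*1e3:.1f} ms per token")   # -> memory, ~4.7 ms
```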
It will excel at edge AI applications in AR/VR, autonomous driving, or defense, where single prompts must be executed with ultra-high performance, low power consumption, and low latency. Our unique selling point is a combination of speed and efficiency that no electronic processor can match. In addition, our entire supply chain can sit within Europe or the United States, without relying on geopolitically sensitive regions.
We’re planning for the optical XPU to come out in 2027/2028 as our first full product offering, a fully general-purpose AI and HPC processor. But even sooner than that, we’ll be launching an optical processor to offer ultra-low latency processing for high-frequency trading applications, which require only simple trading algorithms to be executed rapidly under a nanosecond.?
How Do You Store Data Optically?
Besides processing data optically, which we have already demonstrated using optical transistors and logic gates, another important challenge is optical data storage: you can’t simply stop light from propagating in order to store its data. The solutions to this tough engineering challenge depend on the level of the memory hierarchy. Registers, for instance, can simply be implemented as delay lines: we send light pulses on a round trip through optical waveguides, and they arrive back exactly when they’re needed at the next clock cycle.
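As a back-of-the-envelope illustration of how such a delay-line register might be sized (the clock rate and group index are assumed, illustrative values, not Akhetonics’ design parameters):

```python
# How long must a waveguide delay line be to "store" a light pulse for
# exactly one clock cycle? All numbers here are illustrative assumptions.

C = 299_792_458.0   # speed of light in vacuum, m/s

def delay_line_length(clock_hz, group_index):
    """Waveguide length whose traversal time equals one clock period."""
    v_group = C / group_index        # pulse speed inside the waveguide
    return v_group / clock_hz        # distance covered in one period

# Example: a 10 GHz clock and a waveguide with group index ~4
length_m = delay_line_length(10e9, 4.0)
print(f"{length_m * 1e3:.1f} mm of waveguide per stored bit")  # ~7.5 mm
```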
We can also implement flip-flops optically. In electronics, flip-flops are circuits composed of four to six transistors that can store one bit of information, zero or one. They’re the building blocks for static random access memory (SRAM), which gives fast data access but is rather large physically. We can implement them optically using ring resonators or logic elements, which can hold on to information for several clock cycles but also have a high area footprint.
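For intuition, here is the purely electronic version of that feedback trick: a toy simulation of an SR latch built from two cross-coupled NOR gates, the same bit-holding principle that the optical flip-flops realize with ring resonators or optical logic elements instead of transistors.

```python
# Toy SR latch: two cross-coupled NOR gates hold one bit through
# feedback alone. An electronic analogy only, not an optical model.

def nor(a, b):
    return int(not (a or b))

def settle(s, r, q, q_bar):
    """Iterate the cross-coupled feedback loop until the outputs settle."""
    for _ in range(4):              # a few passes are enough to converge
        q = nor(r, q_bar)
        q_bar = nor(s, q)
    return q, q_bar

q, q_bar = 0, 1                                 # latch starts holding a 0
q, q_bar = settle(s=1, r=0, q=q, q_bar=q_bar)   # set   -> q = 1
q, q_bar = settle(s=0, r=0, q=q, q_bar=q_bar)   # hold  -> q stays 1
q, q_bar = settle(s=0, r=1, q=q, q_bar=q_bar)   # reset -> q = 0
print(q, q_bar)                                 # 0 1
```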
The greatest challenge is developing large optical memory, analogous to the dynamic random access memory (DRAM) in electronic computers, as it’s hard to confine light and store it for longer. Fortunately, for many applications it’s fine if write operations take longer, as long as read operations are quick and happen at light speed. In machine learning, for example, once you’ve stored the weights and biases of a neural network, you rarely need to update them compared to how often you retrieve them. And read-only memory is straightforward and cheap to implement; it could even be a deliberate defect in a waveguide that encodes an optical zero.
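A small amortized-latency calculation makes that trade-off concrete; the latencies and the read-to-write ratio are made-up, illustrative numbers:

```python
# Why slow writes are tolerable when reads dominate: average access
# time of a read-mostly memory. All figures are illustrative.

def amortized_latency(t_read, t_write, reads_per_write):
    total = reads_per_write * t_read + t_write
    return total / (reads_per_write + 1)

# Neural-network weights: written once per (rare) update, read on every
# inference pass. Even a 1000x slower write barely moves the average.
t = amortized_latency(t_read=0.1e-9, t_write=100e-9,
                      reads_per_write=1_000_000)
print(f"{t*1e9:.4f} ns average access")   # ~0.1001 ns, reads dominate
```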
We’re also working with phase change materials, which can switch between amorphous and crystalline states that have different optical properties and thus can be used to store and read data. Applying a short, high-intensity laser pulse to the material will rapidly melt and then quench the material, leaving it in an amorphous state. Applying a longer, moderate-intensity pulse to the material will heat it to a temperature where it crystallizes. As the material’s transmissivity will differ in either state, one can use low-intensity laser pulses to read the information instantly.?
How Did Intel Ignite Accelerate Your Journey?
Participating in Intel Ignite was a no-brainer for us. What we’re doing with optical transistors and digital optical processors is reminiscent of the microprocessor revolution Intel started in the 1970s; in a sense, we’re doing the same thing fifty years later.
My co-founder, Leonardo, and I attended the Intel Ignite Europe program in the spring of 2023. It was great to meet the entire Intel Ignite team and so many other deep tech startups. We also learned from successful deep tech founders, such as one of Lilium’s, and it was awesome to see that even in a hard domain like deep tech, you can reach your goals if you stick to them and keep pushing forward.
Leonardo and I are both very technical; while he did an MBA, I had zero commercial experience. So Intel Ignite has been helping us a lot to learn about the commercial aspects of building a startup, such as bringing a technology successfully to market, talking to investors, thinking about IP, or structuring our cap table. It’s like a mini-MBA for deep tech founders.
What Is One of Your Key Learnings from Intel Ignite?
We had a workshop with Florian Mück and John Zimmer on pitching a deep tech startup, which helped me a lot to articulate the vision for Akhetonics and pitch it to different audiences. It’s super important to think about what your audience is actually interested in. Many technical founders like to talk about the tech all the time, but that’s what interests them; investors care much more about the technology’s commercial potential.
*Sponsored post—we greatly appreciate the support from Intel Ignite
Subscribe to our email newsletter and learn about the trends and players shaping the future of computing in five minutes per week: