Ten questions with Axelera AI’s Scientific Advisor Luca Benini

Fabrizio Del Maffeo | CEO at AXELERA AI


Professor Luca Benini is one of the foremost authorities on computer architecture, embedded systems, digital integrated circuits, and machine learning hardware. We’re honored to count him as one of our scientific advisors. Prof. Benini kindly agreed to answer a few questions for our followers on his research and the future of artificial intelligence.

For our readers who are unfamiliar with your work, can you give us a brief summary of your career?

I am the chair of Digital Circuits and Systems at ETHZ and a full professor at the Università di Bologna. I received my PhD from Stanford University and have been a visiting professor at Stanford University, IMEC, and EPFL. I also served as chief architect at STMicroelectronics France.

My research interests are in energy-efficient parallel computing systems, smart sensing microsystems, and machine learning hardware. I’ve published more than 1,000 peer-reviewed papers and five books.

I am a Fellow of the IEEE and the ACM, and a member of the Academia Europaea. I am the recipient of the 2016 IEEE CAS Mac Van Valkenburg Award, the 2019 IEEE TCAD Donald O. Pederson Best Paper Award, and the 2020 ACM/IEEE A. Richard Newton Award.

Which research subjects are you exploring?

I am extremely interested in energy-efficient hardware for machine learning and data-intensive computing. More specifically, I am passionate about exploring the trade-off between efficiency and flexibility. Everybody knows that super-specialization can enormously boost efficiency, but a super-specialized architecture will be narrow and short-lived, so we need flexibility.

Artificial Intelligence requires a new computing paradigm and new data-driven architectures with high parallelisation. Can you share with us what you think the most promising directions are and what kind of new applications they can unleash?

I believe that the most impactful innovations are those that improve efficiency without over-specialization. For instance, using low bit-width representations reduces energy, but you need “transprecision,” i.e., the capability to dynamically adjust numerical precision. Otherwise, you won’t be accurate enough on many inference/training tasks, and your scope of application may narrow too much.
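To make the idea concrete, here is a minimal software sketch of transprecision, assuming a simple symmetric quantizer and a relative-error budget. The function names are hypothetical, and using the full-precision result as the reference is a stand-in for the accuracy proxy a real system would use; this is an illustration of the concept, not anyone’s actual implementation:

```python
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization to `bits` bits, de-quantized
    back to float so the result can be compared with the reference."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

def transprecision_matmul(a, b, tol=2e-2, widths=(4, 8, 16)):
    """Run the matmul at the narrowest bit-width that keeps the
    relative error under `tol`, widening the format as needed."""
    reference = a @ b                     # stand-in for an accuracy proxy
    for bits in widths:
        approx = quantize(a, bits) @ quantize(b, bits)
        rel_err = np.linalg.norm(approx - reference) / np.linalg.norm(reference)
        if rel_err < tol:
            return approx, bits
    return reference, 32                  # fall back to full precision

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))
result, chosen = transprecision_matmul(a, b)
print(f"narrowest acceptable bit-width: {chosen}")
```

The point of the sketch is the control loop: precision is a runtime knob, widened only when the accuracy budget demands it, rather than a property baked into the hardware.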

Another high-impact direction is related to minimising switching activity across the board. For instance, systolic arrays are very scalable (local communication patterns) but have huge switching activity related to local register storage. In-memory computing cores can do better than systolic arrays, but they are not a panacea. In general, we need to design architectures where we reduce the cost related to moving data in time and space.
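As a back-of-the-envelope illustration of why data movement dominates, the sketch below counts off-chip operand fetches for a matrix multiply with and without on-chip tiling. The cost model (every MAC re-fetches both operands unless an output tile is buffered locally) is a deliberate simplification, not a description of any particular architecture:

```python
def matmul_fetches(m, n, k, tile=None):
    """Count off-chip operand fetches for C = A @ B (A: m x k, B: k x n).

    tile=None models a datapath with no local reuse: every MAC re-fetches
    both operands. With a tile x tile output block held on-chip, each A
    element is fetched once per column-tile and each B element once per
    row-tile of the output."""
    if tile is None:
        return 2 * m * n * k                 # two operand fetches per MAC
    col_tiles = (n + tile - 1) // tile       # output tiles along columns
    row_tiles = (m + tile - 1) // tile       # output tiles along rows
    return m * k * col_tiles + k * n * row_tiles

M = N = K = 1024
for t in (None, 32, 128):
    label = "no reuse" if t is None else f"{t}x{t} tiles"
    print(f"{label:>12}: {matmul_fetches(M, N, K, t):,} operand fetches")
```

For 1024×1024 matrices, 128×128 tiling cuts operand fetches from roughly 2.1 billion to roughly 17 million, two orders of magnitude, which is exactly the kind of temporal and spatial reuse these architectures are chasing.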

Can you share more with us about the tradeoffs and benefits of analog computing versus digital computing and where they can work together?

Analog computing is a niche, but a very important one. Ultimately, we can implement multiply-accumulate arrays very efficiently with analog computation, possibly beating digital logic, but it’s a tough fight. You need to do everything right, from the interface and core computation circuits to precision selection and sizing.

The critical point is to design the analog computing arrays in a way that can be easily ported to different technology targets without complete manual redesign. I view an analog computing core as a large-scale “special function unit” that needs to be efficiently interfaced with a digital architecture. So, it’s a “digital on top” design, with some key analog cores, that can win.
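One way to picture this “digital on top” view is a behavioral model in which the analog array appears to the rest of the system as a function unit with quantized digital interfaces. The class below is a hypothetical sketch: the DAC/ADC bit-widths and the noise model are placeholder assumptions, not measured silicon behavior:

```python
import numpy as np

class AnalogMacTile:
    """Behavioral model of an analog MAC array as seen from the digital
    side: inputs pass through a DAC, the dot products happen 'in analog'
    with additive noise, and results come back through an ADC. All
    parameters are illustrative placeholders, not measured silicon."""

    def __init__(self, weights, in_bits=8, out_bits=8,
                 noise_sigma=0.01, seed=0):
        self.weights = weights
        self.in_bits = in_bits
        self.out_bits = out_bits
        self.noise_sigma = noise_sigma
        self.rng = np.random.default_rng(seed)

    @staticmethod
    def _quantize(x, bits):
        scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
        return np.round(x / scale) * scale

    def __call__(self, x):
        x_q = self._quantize(x, self.in_bits)               # DAC
        analog = x_q @ self.weights                         # analog MACs
        noise = self.rng.normal(0.0,
                                self.noise_sigma * np.abs(analog).max(),
                                analog.shape)               # circuit noise
        return self._quantize(analog + noise, self.out_bits)  # ADC

# From the digital architecture's point of view, the tile is invoked
# like any other special function unit.
rng = np.random.default_rng(1)
tile = AnalogMacTile(rng.standard_normal((64, 16)))
y = tile(rng.standard_normal(64))
```

Keeping the analog behavior behind a well-defined quantized interface like this is what makes the core portable: retargeting the technology changes the internals of the tile, not the digital architecture around it.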

Our sector has a prevailing opinion that Moore’s Law is dead. Do you agree, and how can we increase computing density?

The “traditional” Moore’s Law is dead, but scaling is fully alive and kicking through a number of different technologies — 2.5D, 3D die stacking, monolithic 3D, heterogeneous 3D, new electron devices, optical devices, quantum devices and more. This used to be called “More-than-Moore,” but I think it’s now really the cornerstone of scaling compute density – the ultimate goal.

You are a very important contributor to the RISC-V community with your PULP platform, widely used in research and commercial applications. Why and when did you start the project, and how do you see it evolving in the next ten years?

I started PULP because I was convinced that the traditional closed-source computing IP market, and even more proprietary ISAs, were stifling innovation in many ways. I wanted to create a new innovation ecosystem where research could be more impactful and startups could more easily be created and succeed. I think I was right. Now the avalanche is in motion. I am sure that the open hardware and open ISA revolution will continue in the next ten years and change the business ecosystem, starting from more fragmented markets (e.g., IoT, Industrial) and then percolating to more consolidated markets (mobile, cloud).

Can Europe play a leading role in the worldwide RISC-V community?

The EU can play a leading role. All the leading EU companies in the semiconductor business are actively exploring RISC-V, not just startups and academia. Of course, adoption will come in waves, but I think that some of the markets where the EU has strong leadership (automotive, IoT) are ripe for RISC-V solutions — as opposed to markets where the USA and Asia lead, such as mobile phones and servers, which are much more consolidated. There is huge potential for the European industry in leveraging RISC-V.

What is the position of European universities and research centres versus American and Chinese in computing technologies – is there a gap, and how can the public sector help?

There is a gap, but it’s not in quality; it’s in quantity. The number of researchers in computer architecture, VLSI, and analog and digital circuits and systems in the EU is small relative to the USA and Asia. Unfortunately, these “demographic factors” take time to change. So really, the challenge for academics is to increase throughput. Industry can play a role, too – for instance, leading companies can help found “innovation hubs” across Europe to increase our research footprint.

Companies can also help make Europe more attractive for jobs. Now that remote working is mainstream, people are not forced to move elsewhere. Good students at, for example, Italian or Spanish universities who are interested in semiconductors can find great jobs without moving. I am not saying that moving is bad, but if there are options that do not require moving away, more people will be attracted to these semiconductor companies and roles.

Is the European Chips Act powerful enough to change the trajectory of Europe within the global semiconductor ecosystem?

It helps, but it’s not enough. There is no way to pump in enough public money to create an EU behemoth at the scale of TSMC. But if this money is well spent, it can “change the derivative” and create the conditions for much faster growth.

Over the last decade, European semiconductor companies haven’t brought any cutting-edge computing technology to market. Is this changing, and do you think European startups can play a role in this change?

I think that some large EU companies are, by nature, “competitive followers,” so disruptive innovation is not their preferred approach, though of course there are exceptions. The movement will come from startups, if they can attract the funding to grow into larger companies. The emergence of a few European unicorns, as opposed to many small startups that just survive, will help Europe strengthen its position in the semiconductor market.