Interview with Jonathan Ballon,
Chairman of Axelera AI

Merlijn Linschooten | Marketing & Communications Manager at AXELERA AI

Jonathan Ballon has recently joined Axelera AI as Chairman of the Board. Coming from leadership roles at some of the world’s most recognizable companies, including Cisco Systems, General Electric and Intel, Jonathan brings deep entrepreneurial and operational expertise to the company. To mark Jonathan joining our team, we hosted an in-depth interview to learn more about his background.

Read the interview or listen to the podcast.

Jonathan, thank you for joining us today. Before moving into why you joined Axelera AI, could you please tell us something about yourself?

I have lived and worked in Silicon Valley for all of my career, almost 25 years. There’s a theme to my interests that aligns with where I’ve spent my time over that period, which is really about understanding this world that we live in. First, with startups that were looking at better dissemination of data and helping people gain access to information, which then traversed into a career at Cisco for a decade looking at how we move data and information around the world. It’s about helping to build and deploy advanced applications of Internet technology, first in the enterprise and subsequently in the cloud, which has proven to be a great leveling agent for the world.

For the past 10 years I have focused on what happens after that fabric is in place: how do we start gaining access to the data in the world that we live in? The last several years have really been focused on distributed computing environments – what we now call the edge, which is all of this infrastructure out in the physical world we all inhabit. It’s in our hospitals, our factories, our cities, and it’s about making the most of the data generated at the edge.

If we can better understand the dynamic systems we inhabit and participate in, we can better understand how things work so we can improve the human condition.

You are a seasoned executive who worked in several Fortune 500 companies. There are plenty of start-ups developing AI solutions, so why did you decide to join the board of Axelera AI?

When I think about my career, I’ve worked in both start-ups and large companies, and each is equally valuable in helping to drive pervasive adoption of technology.

And so typically what you see is lots of innovation happening in startups because they move faster and have less bureaucracy. They’re typically smaller and as a result you can get very close to the application and the use case.

However, it’s typically large companies that have the ability to drive adoption of technology at scale, so you really need both experiences to bring a new technology to pervasive adoption.

And when I think about my journey, it’s really been focused on three primary areas. The first is how do we get adoption of technology – the life cycle of innovation, and what it takes to get something from being novel or interesting across that chasm into mass-market adoption. That has been a key focus of mine for years.

Secondly, it’s about access. We want a world in which we don’t have the “haves and have-nots”. We want an equal distribution of the benefits of technology. So how do you drive almost a democratization of access to technology? That’s done through economics, global reach and scale, as well as providing tools to accumulate the value.

Lastly, it’s really about the application of this technology. Not just the application of technology to novel use cases, but really the understanding of how technology can be applied in a way that’s good business and has an economic value proposition for the end user.

Looking at Axelera AI, there are a couple of things that attracted me to the company. It starts with people. I’ve known the CEO Fabrizio for years. He’s an incredibly charismatic and passionate leader, but importantly, he not only understands the technology, he understands how to drive an economic value proposition, how to drive scale in the market commercially.

That’s important, because it is not just about great technology. It’s about how do we get technology scaled through the ecosystem and how does this become available on a global basis in a fair and equitable way. I think Fabrizio and the rest of the team really encompass all of those things.

Secondly, it’s about the market. I’m deeply passionate about what’s happening at the edge. We’re in the very early days of the movement of computing and computing architectures from being focused on cloud computing, now traversing out to the edge, as most inference happens there. A distributed computing architecture is emerging where the definition of the network is expanding through every device, and the value created as a result is immense.

This shift will require purpose-built technology that factors this in, which Axelera AI has.

The third reason is about the people underwriting the company. Deep tech can be exceptionally hard. Having investors that really understand those dynamics and have the ability to represent the customer, the use cases and the core fundamental technology development cycles really makes a difference.

Axelera AI was incubated by Bitfury, the largest Bitcoin miner outside of China and a groundbreaking crypto technology provider. IMEC provides not only investment but access to some of the core fundamental intellectual property that fuels the technology roadmap. And Innovation Industries, a European deep tech VC fund, brings extensive industry experience and deep technical and operational knowledge to the table.

How do you see computing evolving in this data-driven era? And what will be the impact of in-memory computing and RISC-V?

It starts with the customer and the use case. In the past 10–12 years the technology has really been focused on the economies of scale gained through cloud computing environments, which are relatively unconstrained. You have unconstrained compute and unconstrained energy, all operating in a supervised, safe environment.

What we’re now seeing is this movement towards deep learning and accessing information out at the edge. However, you can’t take these unconstrained models that have been built for the cloud and deploy them at the edge. The volume and velocity of data can’t be supported physically or economically by existing networks. In many instances the use cases require real-time processing. Take autonomous driving: it obviously doesn’t make sense, and would be very dangerous, to send data back and forth to the cloud in real time.
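To make that real-time constraint concrete, here is a rough back-of-the-envelope sketch. The latency figures are illustrative assumptions chosen for the sake of the arithmetic, not measurements of any particular network or chip:

```python
# How far does a car travel "blind" while waiting for an inference result?
# Both latency figures below are illustrative assumptions, not measurements.

CLOUD_ROUND_TRIP_MS = 100.0  # assumed network round trip to a cloud endpoint
EDGE_INFERENCE_MS = 5.0      # assumed on-device (edge) inference latency

def distance_travelled_m(speed_kmh: float, latency_ms: float) -> float:
    """Metres covered while waiting latency_ms at speed_kmh."""
    speed_m_per_s = speed_kmh * 1000.0 / 3600.0  # km/h -> m/s
    return speed_m_per_s * (latency_ms / 1000.0)

speed = 100.0  # km/h, roughly highway speed
print(f"Cloud round trip: {distance_travelled_m(speed, CLOUD_ROUND_TRIP_MS):.1f} m travelled blind")
print(f"Local inference:  {distance_travelled_m(speed, EDGE_INFERENCE_MS):.2f} m travelled blind")
```

Under these assumed numbers, a highway-speed vehicle moves roughly 2.8 m before a cloud answer could even arrive, versus about 14 cm with local inference – which is exactly the point about real-time use cases.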

So we need architectures that can support local inference with much-reduced and optimized neural network algorithms. But we also need computing architectures that can support not only the data movement, but also a much lower energy requirement and computing footprint.

The bottleneck in data velocity for today’s computing architectures is the data bus, which shuttles data back and forth between memory and the CPU. The ability to put memory in the CPU itself dramatically reduces the amount of energy required to perform those calculations, and also increases speed. From an architectural point of view this is very important, both in the cloud and at the edge, which is why we’re seeing such a movement towards in-memory computing.
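As a toy illustration of that bottleneck, the sketch below compares a matrix-vector workload where every weight is fetched over the bus from DRAM against one where the weights stay resident in the compute array. The per-operation energy numbers are order-of-magnitude assumptions for illustration only; they are not figures from any datasheet or from Axelera AI:

```python
# Toy energy model: compute cost vs. data-movement cost for a matrix-vector
# multiply. The picojoule figures are illustrative assumptions only.

PJ_PER_MAC = 1.0           # assumed energy per multiply-accumulate
PJ_PER_DRAM_FETCH = 200.0  # assumed energy per weight fetched over the bus

def energy_nj(rows: int, cols: int, weights_from_dram: bool) -> float:
    """Total energy in nanojoules for a rows x cols matrix-vector multiply."""
    macs = rows * cols
    compute_pj = macs * PJ_PER_MAC
    movement_pj = macs * PJ_PER_DRAM_FETCH if weights_from_dram else 0.0
    return (compute_pj + movement_pj) / 1000.0  # pJ -> nJ

conventional = energy_nj(1024, 1024, weights_from_dram=True)
in_memory = energy_nj(1024, 1024, weights_from_dram=False)
print(f"bus-bound: {conventional:.0f} nJ, weights-stationary: {in_memory:.0f} nJ")
print(f"ratio: {conventional / in_memory:.0f}x")
```

Under these assumptions the movement term dominates by two orders of magnitude, which is the intuition behind in-memory and near-memory compute designs: keep the weights where the arithmetic happens.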

Historically we’ve had two architectures, x86 and ARM, which the industry has been focused on for decades. These are closed, proprietary instruction sets – in the case of ARM you can license it, but that value accrues to one company. Now we have a third platform emerging with RISC-V. What’s great about RISC-V is that you have an open architecture and an open instruction set that allows innovation to happen in a frictionless way. I think this is going to be a really pervasive and widely adopted architecture.

How do you see the AI semiconductor industry consolidating in the coming years? And if you see consolidation, how many startups will stay independent 5 years from now?

History is always a good indicator of the future. If you think back to what happened in the early 90s we had a relatively small number of semiconductor companies that were vertically integrated. They had everything from R&D, all the way through production and manufacturing operations.

That model was broken, and so you started seeing the creation of foundries focused solely on scale manufacturing, allowing innovation to take place with a much lower barrier to entry – and that’s continued to this day.

In fact, we’re seeing a resurgence of that right now, with more foundries being stood up to support the growth in demand and application-specific designs. This is creating more innovation, because we have now given companies of any size the ability to design and build a novel computing architecture.

What will likely happen is you’ll see some of these computing architectures gain purchase in the marketplace through adoption at scale and become part of a broader set of capabilities in the ecosystem.

Other chip startups will simply go away through some combination of technical inferiority, inability to manufacture, failure to commercialize, or simply running out of cash. In Silicon Valley alone, around 4,000 new start-ups (of all kinds) are created yearly, and only about 10% of them are ever successful. I have to imagine the ratio for silicon startups aligns with that metric in a best-case scenario.

Some investors say that the AI semiconductor market has peaked and is now getting ready for a severe adjustment. Others say this slowdown is temporary and that we haven’t really reached the peak, because AI is still in its infancy and there is a need for new technologies and solutions. How do you see that playing out?

We’re in the early days, so I don’t think anything’s peaked. In fact, if you look at historic R&D spend in semiconductors, it has increased every year for the past 50 years. What you’re seeing now with AI is the need for purpose-built architectures, not dissimilar to the trend a few decades ago around demand for graphics architectures.

Over the past decade we’ve witnessed a huge amount of growth and innovation happening in training, particularly for data centers and cloud. What we haven’t seen yet is scale and adoption of applications at the edge. That is the next frontier.

And if you look at data as the indicator, more data will be created in the next year than over the last 10 years combined, with 75% of that data generated out in the physical world – in factories, hospitals and cities. Currently that data is moving back to the cloud, but that won’t continue.

You’re going to see not only new computing architectures, but also system architectures where inference, training and storage happen as close to the sensor as possible.

This goes back to the use cases around real-time compute – the ability for systems to not just be smart, but be intelligent. The difference between the two: smart can think; intelligent can learn.

So we’ve currently got devices and systems that can be smart about whatever function they’re performing, but we’re moving towards a set of intelligent systems. And in that inference-training flywheel we’re not going to be moving the world’s data back into the cloud for training – a lot is going to happen close to the source of the data.

This will change computing, communication networks and the algorithms that we write, because they’re going to need to be more constrained for a lower footprint in terms of energy and cost out at the edge.

Many incumbent companies and well-funded startups are battling to win the AI cloud computing market, while relatively few companies are developing solutions for AI at the edge. Some market experts and investors think that the market opportunity at the edge is still pretty small (compared to cloud) and that the edge market is way too fragmented to be served efficiently. What do you think about that – do you see an opportunity?

Well, I think the cloud is not going away and will continue to grow and be innovative. However, because the data is being generated at the edge and needs to be analysed, processed, moved and stored at the edge, there is an untapped market for all of these layers.

So you’re going to see a tremendous amount of growth as a result of that.

So from an AI point of view we’ll go from narrow towards broader use cases, eventually moving towards general AI – looking across domains and applying the insights that become available when you can harness data from multiple sources. That creates a step function in value as we move through this journey over the next 10 to 20 years.

What is the impact of data if you look at the driving factors for an AI company to succeed at the edge?

I think it’s a factor. It’s about having the ability to access data, and to do that in a cost effective, and energy efficient way.

You should also factor in all kinds of other characteristics of the physical world that don’t exist in a controlled cloud environment – things like physical security and temperature control, which you don’t always have. Systems are often operational 24/7, so there is not always an opportunity for reboots, software updates and redundancy. The physical world also creates hot, harsh and extreme environments, with dust, temperature changes, vibrations and such.

So the edge is very different – and depending on the use case, you may need to be operating in real time, measured in milliseconds. If you look at robotics, for example, near-zero latency in the control system is critical for precision and safety.

It introduces a whole new set of challenges, which you need to be prepared for.

Over the last few years we have seen a big change in the global market: the US government trying to onshore the semiconductor supply chain with large government support (the CHIPS Act), the EU trying to relaunch the local semiconductor ecosystem with the EU Chips Act, China making impressive investments in AI semiconductors and fuelling internal demand, and finally Taiwan and Korea pushing to strengthen their positions in the market. How do you see this evolving in the coming 10 years?

I think a lot of people see this as a retreat from globalization – countries starting to insource and localize a lot of capability in order to protect national interests and security – but that’s not the reality of the market that we’re in.

When you get down to raw materials as well as sophisticated equipment necessary for production at scale, there really is no country today that can completely vertically integrate and be successful in semiconductors.

It requires a global community. Take TSMC, the largest semiconductor foundry in the world: it receives raw material supply and equipment from all over the world. It’s not as simple as an advanced factory and a trained workforce operating at scale; it really requires a global supply chain of materials and technical innovation.

I think what we’re seeing now is a political acknowledgement and recognition of how fundamental silicon is to the success of any nation state in terms of national security interests, but also the health and well-being of its citizens across every industry. The supply shortage that we’ve seen in semiconductors over the last several years has made that painfully obvious.

I’m very enthusiastic that we’re now seeing national investment programs, subsidies and other incentives to support the necessary growth. This really needs to be a public/private collaboration in order to supply the fundamental building blocks of innovation for the world.

I don’t see it as a retreat from globalization at all. I see it as a shoring-up of capabilities and the creation of capacity to support an ever-increasing demand for computing.

What are the top three edge markets that you expect to be disrupted by artificial intelligence? And what kinds of applications will be most impactful on our daily lives?

Over the past seven years there has been a focus on natural language processing – the ability to control the human-machine interface using voice. We see that in our homes with any voice assistant, and also in healthcare environments or industrial settings, where a worker who needs both hands free can now control a system using just their voice.

Over that same period we’ve seen better-than-human accuracy in image analytics. It started with being able to identify a cat or a dog in a photograph, and is now moving towards being able to analyse very dense and complex medical images.

Being able to translate the terabytes of data in one of those images to find anomalies by applying deep learning algorithms – faster and with more accuracy than one of the world’s most sophisticated radiologists – is providing a huge benefit. Not only to overworked radiologists, but also towards better health outcomes. Because not only can we now derive insight from the image, we can apply that to other datasets: fusing not just a single diagnosis of what’s happening in that particular image, but applying it to population health records to look for insights into what caused those anomalies in the first place.

We’re seeing those same image analytics applied to video in real time – object detection, object tracking and facial recognition. This is now at a point where we can understand not only the image, but who is in the image, how they are feeling and what they are doing – the beginnings of behavioral analysis right in video images. It’s the ultimate sensor, because you can see what’s happening in the world.

We’ll start to take in other sensor types for sound, smell and vibration, applying all of these together and moving towards a more generalized AI, where we start looking across domains and data sets and getting a robust understanding of the world we live in. I see this moving towards “what do we do about it” – being able to predict better and start allowing some degree of autonomy.

I see this journey going from understanding what the data is telling us, to having it make a recommendation of what we should do in that scenario (but still requiring a human to take action), towards full autonomy. And that autonomy can be a car making a decision to swerve or brake. It could be in robotics, where you have an unsupervised robotic system performing tasks and learning from a dynamic environment.

All of these things will start to pervade our lives, in the process allowing humans to move to a higher order of value creation and skill set. A lot of things that are historically mundane can be automated; things that are dangerous can be automated; things that are dirty can be automated.

All of this allows the human experience, once again, to improve.
