Scientific Advisors

Luca Benini

Scientific Advisor

Luca Benini holds the Chair of Digital Circuits and Systems at ETH Zurich and is a Full Professor at the University of Bologna. He received a PhD from Stanford University.

He has been a visiting professor at Stanford University, IMEC and EPFL, and served as chief architect at STMicroelectronics France.

Dr. Benini’s research interests are in energy-efficient parallel computing systems, smart sensing micro-systems and machine learning hardware.

He has published more than 1000 peer-reviewed papers and five books.

He is an ERC Advanced Grant winner, a Fellow of the IEEE and of the ACM, and a member of the Academia Europaea.

He is the recipient of the 2016 IEEE CAS Mac Van Valkenburg Award, as well as the ACM/IEEE A. Richard Newton Award and the EDAA Achievement Award in 2020.

Executive Team

Evangelos Eleftheriou

Chief Technology Officer

Co-Founder ​- Axelera AI

Evangelos Eleftheriou, an IEEE and IBM Fellow, is the Chief Technology Officer and co-founder of Axelera AI, a company developing a game-changing, best-in-class hardware and software platform for AI.

As CTO, Evangelos oversees the development and dissemination of technology for external customers, vendors and other clients to help improve and grow Axelera AI’s business.

Before his current role, Evangelos worked for IBM Research – Zurich, where he held various management positions for over 35 years. His outstanding achievements led him to become an IBM Fellow, which is IBM’s highest technical honour.

In 2002, Evangelos became a Fellow of the IEEE, and in 2003 he was co-recipient of the IEEE Communications Society Leonard G. Abraham Prize Paper Award. He was also co-recipient of the 2005 Technology Award of the Eduard Rhein Foundation. In 2005, he was appointed an IBM Fellow and inducted into the IBM Academy of Technology. In 2009, he was co-recipient of the IEEE Control Systems Technology Award and the IEEE Transactions on Control Systems Technology Outstanding Paper Award. In 2016, Evangelos received an honoris causa professorship from the University of Patras, Greece. In 2018, he was inducted into the US National Academy of Engineering as a Foreign Member. Evangelos has authored or co-authored over 250 publications and holds over 160 patents (granted and pending applications).

His primary interests lie in AI and machine learning, including emerging computing paradigms such as neuromorphic and in-memory computing.

Evangelos holds PhD and Master of Engineering degrees in Electrical Engineering from Carleton University, Canada, and a BSc in Electrical & Computer Engineering from the University of Patras, Greece.

Scientific Advisors

Marian Verhelst

Scientific Advisor

Marian Verhelst is a full professor at the MICAS laboratories of the EE Department of KU Leuven. Her research focuses on embedded machine learning, hardware accelerators, HW-algorithm co-design and low-power edge processing. She received a PhD from KU Leuven in 2008, was a visiting scholar at the BWRC of UC Berkeley in the summer of 2005, and worked as a research scientist at Intel Labs, Hillsboro OR from 2008 until 2011.

Marian is a topic chair of the DATE and ISSCC executive committees, a TPC member of VLSI and ESSCIRC, and was the chair of tinyML2021 and TPC co-chair of AICAS2020. She is an SSCS Distinguished Lecturer, was a member of the Young Academy of Belgium, an associate editor for TVLSI, TCAS-II and JSSC, and a member of the STEM advisory committee to the Flemish Government. Marian currently holds a prestigious ERC Starting Grant from the European Union, was the laureate of the Royal Academy of Belgium in 2016, and received the André Mischke YAE Prize for Science and Policy in 2021.

Announcements

Axelera AI Announces New CTO and Scientific Advisors

Experts Hail from IBM, Google, ETH Zurich, and Other Leading Organisations

EINDHOVEN, October 13, 2021 – Today, semiconductor start-up Axelera AI announced that Prof. Dr. Evangelos Eleftheriou has joined its executive team as Chief Technology Officer. Axelera AI also announced two new appointments to its cohort of scientific advisors, Prof. Dr. Marian Verhelst and Prof. Dr. Luca Benini. The CTO and new advisors will be supporting Axelera AI’s development of the next generation of AI hardware, following the company’s launch last month with a successful $12 million seed round.

Before joining Axelera AI, Eleftheriou, an IEEE and IBM Fellow, held various leadership positions at IBM Research. Among them, he served as Department Head of Cloud and Computing Infrastructure as well as Leader of the Neuromorphic and In-memory Computing activities at IBM Research – Zurich. He brings to Axelera AI more than 35 years of research, leadership and industry experience. At Axelera AI, he will spearhead the development and dissemination of the company’s scalable AI hardware for applications at the edge. He joins a growing team in Zurich, with engineers and researchers hailing from Google, IBM Research and the Swiss Federal Institute of Technology (ETH Zurich), among others.

Prof. Dr. Benini is the current chair of Digital Circuits and Systems at D-ITET, ETH Zurich, and a full professor at the University of Bologna. His numerous accolades include an ERC Advanced Grant, the 2016 IEEE CAS Mac Van Valkenburg Award, the 2019 IEEE TCAD Donald O. Pederson Best Paper Award and the 2020 ACM/IEEE A. Richard Newton Award. Axelera AI advisor Prof. Dr. Marian Verhelst is the Senior Scientific Director of IMEC and a professor at KU Leuven, where she specialises in digital and analogue accelerators. Dr. Verhelst founded KU Leuven’s research consortium on “context-aware ubiquitous computing (CubiqLab)” and has published numerous books, patents and journal contributions.

“Axelera AI’s game-changing technology will require input from the world’s most cutting-edge AI researchers and technologists,” said Fabrizio Del Maffeo, CEO and founder of Axelera AI. “We are excited to have Evangelos joining as our CTO and a diverse team of researchers and leaders serving as our scientific advisors.”

“I am honoured to be joining the executive team at Axelera AI,” remarked Evangelos Eleftheriou. “Axelera AI will be changing the AI industry with its new technological designs. I look forward to working closely with their leadership, board and advisors to achieve the company’s goals and introduce new AI solutions around the globe.”

Scientific advisor Prof. Dr. Luca Benini added, “Axelera AI is creating industry-defining technology to support innovation and expansion in AI capabilities. I am thrilled to join as a scientific advisor to make this possible.”

“Axelera AI’s scientific advisors will make recommendations based on our extensive experiences in edge computing, efficient processing platforms, neural networks and more,” agreed Prof. Dr. Marian Verhelst. “It is an honour to advise the company alongside other leaders building the future of AI.”

About Axelera AI

Axelera AI is designing the world’s most efficient and advanced solutions for AI at the edge. Its game-changing hardware and software product will concentrate the AI computational power of an entire server into a single chip at a fraction of the power consumption and price of AI hardware today. The company’s groundbreaking solution will empower thousands of applications of AI at the edge, making the use of AI more efficient and accessible than ever before. Axelera AI’s product will be fully integrated with the leading open-source AI frameworks when it launches to select customers and partners in 2022.

About Artificial Intelligence at The Edge 

More than 125 billion “things” are expected to be connected to the internet by 2030 – including smartphones, cameras, smart city sensors, and more. Each of these “internet of things (IoT)” devices, even the most miniature sensor, creates an exponential amount of data for analysis. Currently, only 2% of that data has been analysed due to a lack of expertise and AI tools. Artificial intelligence allows us to analyse and understand this data thoroughly. However, the sheer amount of information and increasingly complex algorithms require a new generation of more powerful, efficient and accessible computation hardware. Due to growing privacy, security and bandwidth concerns, data is increasingly processed close to its origin, or “the edge.”

The AI hardware available today has been designed primarily for cloud computing operations, a sector with relatively limited cost, power, and scalability constraints. Computational hardware for edge applications needs an entirely new design that considers specific power needs, computational needs and economics. Axelera AI will capture this opportunity by developing and delivering a game-changing and scalable technology with superior performance and efficiency to accelerate AI applications at the edge.

Press Contact: 

Rachel Pipan, press@axelera.ai

Blog

Artificial Intelligence At The Edge: Data-Driven Decision-Making Is Here To Change The World

More than 125 billion “things” are expected to be connected to the internet of things (IoT) by 2030. From the nearly 4 billion smartphones in the world to the tiniest camera sensors in local traffic lights, each of these devices will generate exponential amounts of data for analysis.

Data is the new oil and is the most valuable asset for tech giants like Facebook, Google and Amazon. The amount of data-heavy video and images shared on the internet is rapidly increasing, estimated to make up more than 80% of internet traffic by the end of 2021.  According to Cisco, 50% of the data produced to date was generated in the last two years. However, only 2% of this staggering amount of data has been analyzed due to a lack of available and accessible tools and hardware, leaving companies to wonder what they can do to address this data gap.

Artificial intelligence, or AI, offers a compelling solution to this problem. Still, it requires increasingly complex and powerful algorithms to analyze these massive amounts of data efficiently. Powerful AI is not enough on its own – due to growing privacy, security and bandwidth concerns, stakeholders increasingly need to process data close to its origin, often on the sensors/devices themselves, in what is called the “edge” of  IoT. 

The AI technology available today has been designed primarily for cloud computing operations, a sector with considerably fewer constraints in terms of cost, power and scalability. For years, incumbent computing companies have delivered inefficient and expensive computing technologies, opening the door for startups to propose new technologies. These innovative solutions aim to match the specific power needs, computational requirements and economics of this new data-driven computing era.

The market opportunity is significant – the AI semiconductor market (for application-specific processors) is expected to reach more than $30 billion in 2023 and more than $50 billion in 2025, with the AI computing board and systems market estimated to be three to four times larger.

Figure 1 – Artificial Intelligence market opportunity. Source: Axelera AI.

80% of the current market is represented by chips that train the artificial neural networks typically used in cloud computing and large data centres owned by companies like Microsoft, Amazon, Google and Facebook. However, experts expect most of the market to shift to inference at the edge in the coming months.

This new generation of hardware for AI at the edge needs to address several challenges currently faced by developers.

Challenge 1: Standard computing performance is facing an end to its exponential growth.

Driven by Moore’s law and Dennard scaling, computer performance grew exponentially for 30 years. Looking carefully at data from the past 15 years, however, it is apparent that this growth has slowed and almost flattened, especially over the last five years, as Dennard scaling has come to an end and Amdahl’s law limits the gains from adding more cores.

Challenge 2: Neural network size is increasing exponentially.  

While standard computing performance is slowing down, neural network size and computational requirements are increasing exponentially at a swift pace. In five years, the most advanced neural networks have increased in size by over 1,000 times. Similarly, the computational requirements to train the most advanced networks are doubling every three months, which amounts to more than a 1,000-fold increase every two and a half years.
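As a quick back-of-the-envelope check of that figure, doubling every three months compounds to ten doublings over two and a half years, i.e. a factor of roughly 1,024 (the numbers below are illustrative only):

```python
# Illustrative sanity check of the growth rates quoted above.
doubling_period_months = 3      # training compute doubles roughly every 3 months
horizon_months = 30             # two and a half years

doublings = horizon_months / doubling_period_months
growth_factor = 2 ** doublings  # 2^10 = 1024

print(f"{doublings:.0f} doublings -> roughly {growth_factor:.0f}x more compute")
# Output: 10 doublings -> roughly 1024x more compute, i.e. "over 1,000 times"
```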

Challenge 3: Computer technology is not optimized for AI workloads.   

The standard CPU (Central Processing Unit) design is not well suited to today’s data processing needs. Matrix-vector multiplications dominate AI workloads: roughly 70% of the workload consists of multiplying large tables of numbers and accumulating the results of those multiplications.
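To make the multiply-accumulate pattern concrete, here is a minimal NumPy sketch of a single fully connected layer; the layer dimensions (512 × 1024) are arbitrary and chosen purely for illustration:

```python
import numpy as np

# A fully connected layer is just a matrix-vector multiply-accumulate (MAC).
weights = np.random.randn(512, 1024)   # "large table of numbers": the layer weights
activations = np.random.randn(1024)    # input vector

# Each output element multiplies one row of weights by the input vector
# and accumulates the products: 512 * 1024 = 524,288 MAC operations in total.
output = weights @ activations

# The same result written as explicit multiply-accumulates, row by row.
reference = np.array([np.sum(weights[i] * activations) for i in range(512)])
assert np.allclose(output, reference)
```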

Challenge 4: Technology is inefficient, leading to a data bottleneck. 

Data movement is the key factor driving both performance and power consumption in artificial intelligence, particularly in deep learning. AI processes constantly move data from the computer’s memory to the CPU, where operations such as multiplications or additions are performed, and then back to the memory, where the partial or final result is stored. AI requires a new technology that reduces data movement and optimizes data flow within the system.
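A rough back-of-the-envelope sketch shows why this traffic matters. Assuming the same illustrative 512 × 1024 layer as above and 32-bit values (both assumptions, not measurements), the weights fetched from memory dwarf the useful arithmetic performed per byte moved:

```python
# Illustrative estimate of memory traffic for one fully connected layer
# executed conventionally, with weights fetched from memory for every inference.
rows, cols = 512, 1024          # assumed layer dimensions
bytes_per_value = 4             # assumed 32-bit weights and activations

macs = rows * cols                                    # multiply-accumulate operations
weight_traffic = rows * cols * bytes_per_value        # weights streamed from memory
activation_traffic = (cols + rows) * bytes_per_value  # inputs read, outputs written
total_bytes = weight_traffic + activation_traffic

print(f"{macs} MACs move about {total_bytes / 1e6:.1f} MB between memory and CPU")
print(f"arithmetic intensity ~ {macs / total_bytes:.2f} MAC/byte")
# Weight movement dominates: keeping weights close to (or inside) the compute
# is the main lever for cutting power consumption.
```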

Figure 2 – Challenges of Artificial Intelligence at the Edge. 

Properly addressing the above challenges and delivering new products based on modern computing architecture will unleash cutting-edge new applications and scenarios, including retail, security, smart cities and more. Here are a few examples of the areas AI at the edge has the potential to unlock.  

Mobility: The mobility market is one of the larger current markets for AI. This includes autonomous driving, driver assistance systems, driver attention control, fleet management, passenger counting, commercial payload and perimeter control.

Retail: Retail automation is another one of the fastest-growing markets for AI at the edge. It impacts all shopping centres from supermarkets to local stores and vending machines. Typical applications in this area include interactive digital signage, customer analytics, product analytics, autonomous checkout systems and autonomous logistics.

Security: There are more than 500 million public and private cameras in the world. Most of these systems do not transfer video to the cloud. Instead, detection and crowd tracking are done by a computer in the camera’s proximity (in the case of shops and indoor areas) or within a private network.

Figure 3 – Edge AI Market opportunity. 

Smart City: According to the UN, 68% of the world’s population will live in urban areas by 2050. This unprecedented migration is forcing city and metropolitan area planners to rethink the way people live and how cities develop. AI is helping to collect insight and analyze data from cameras and sensors for applications such as intelligent traffic systems, intelligent lighting systems, intelligent parking systems and crowd analytics. 

Personal Safety: Artificial intelligence gives us the tools needed to improve safety in working areas and private life. Camera systems can limit access to restricted areas to authorized personnel, limit access to specific devices or machines to authorized people (using biometrics for identification), promptly identify employees in danger and more. Augmented reality will also allow people to learn how to more efficiently and safely operate new tools.

Robotics & Drones: Artificial intelligence at the edge is powering drones and robots used across logistics, manufacturing and many other sectors. Drones can survey and help businesses operate efficiently and safely over large areas with challenging environmental conditions. These inventions will radically change several vital areas, including agriculture, environmental control and logistics.  

Manufacturing: Manufacturers have used computer vision to optimize their processes for decades. These systems operate in an isolated manner to help limit the risk of a complete manufacturing-line failure. Deep learning is continuously introducing new possibilities and helping achieve higher manufacturing standards and output.

Healthcare: Today, AI can help accurately identify early-stage skin cancers and other diseases with a success rate similar to that of an experienced radiologist. Features like these rely on powerful cloud computers that will soon be available to edge computers outside the cloud. 

The examples above illustrate only some of the numerous areas enhanced by AI and data-driven decision-making. The semiconductor market seems to have entered its Compute Cambrian Explosion era, with hundreds of newborn fabless semiconductor startups proposing new solutions every day. It is challenging to determine which technology and which company will “win” this race, and whether only one winner will emerge.

We believe that heterogeneous architectures which merge different technologies will ultimately prevail. Dataflow computing and in-memory computing can deliver an optimal solution to fulfil market needs and provide cost-effective, robust and efficient hardware.

While traditional computing systems move data from memory to the computing unit and store the result back in memory, dataflow in-memory computing technology makes it possible to process data directly inside the memory cell, drastically reducing data movement and, consequently, power consumption, and to perform millions of operations in just one computing cycle.

Furthermore, combining computing and memory reduces the footprint of the chip and, consequently, its cost. Combining multiple in-memory computing cores with dataflow makes it possible to develop a versatile technology that supports the most widely used applications (networks) in computer vision and natural language processing, delivering high throughput and efficiency at a fraction of the cost of current solutions.
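The idea can be sketched in a few lines of toy Python. This is purely conceptual and not a description of Axelera AI’s actual design: the weight matrix is tiled across a handful of simulated in-memory cores, each core keeps its tile resident and produces its partial matrix-vector product in a single step, and a dataflow stage accumulates the partial results:

```python
import numpy as np

class InMemoryCore:
    """Toy model of an in-memory computing core: its weights stay resident."""
    def __init__(self, weight_tile):
        self.weight_tile = weight_tile

    def compute(self, activation_slice):
        # The whole tile is multiplied and accumulated "inside the memory",
        # with no per-element traffic back and forth to a central CPU.
        return self.weight_tile @ activation_slice

rows, cols, n_cores = 512, 1024, 4       # arbitrary sizes for illustration
weights = np.random.randn(rows, cols)
activations = np.random.randn(cols)

# Tile the weight matrix column-wise across the cores.
cores = [InMemoryCore(tile) for tile in np.array_split(weights, n_cores, axis=1)]
act_slices = np.array_split(activations, n_cores)

# Dataflow stage: accumulate the partial products from all cores.
output = sum(core.compute(a) for core, a in zip(cores, act_slices))

assert np.allclose(output, weights @ activations)
```

In this sketch, only the small activation slices and partial results move between stages; the large weight matrix never leaves the memory in which it is stored.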

Interested in learning more about this topic? In our next article, our CTO will explore the nuanced world of in-memory computing. Subscribe here to follow our blog and receive email notifications when we post next.