Blog

Artificial Intelligence At The Edge: Data-Driven Decision-Making Is Here To Change The World

More than 125 billion “things” are expected to be connected to the Internet of Things (IoT) by 2030. From the nearly 4 billion smartphones in the world to the tiniest camera sensors in local traffic lights, each of these devices will generate enormous amounts of data for analysis.

Data is the new oil, and it is the most valuable asset of tech giants like Facebook, Google and Amazon. The volume of data-heavy video and images shared on the internet is rising rapidly and was estimated to make up more than 80% of internet traffic by the end of 2021. According to Cisco, 50% of the data produced to date was generated in the last two years. However, only 2% of this staggering amount of data has been analyzed, due to a lack of available and accessible tools and hardware, leaving companies wondering how to address this data gap.

Artificial intelligence, or AI, offers a compelling solution to this problem, but it requires increasingly complex and powerful algorithms to analyze these massive amounts of data efficiently. Powerful AI is not enough on its own: due to growing privacy, security and bandwidth concerns, stakeholders increasingly need to process data close to its origin, often on the sensors and devices themselves, at what is called the “edge” of the IoT.

The AI technology available today has been designed primarily for cloud computing, a sector with considerably fewer constraints in terms of cost, power and scalability. For years, incumbent computing companies have delivered inefficient and expensive computing technologies, opening the door for startups to propose new ones. These innovative solutions aim to meet the specific power, computational and economic requirements of this new data-driven computing era.

The market opportunity is significant – the AI semiconductor market (for application-specific processors) is expected to reach more than $30 billion in 2023 and more than $50 billion in 2025, with the AI computing board and systems market estimated to be three to four times larger.

Figure 1 – Artificial Intelligence market opportunity. Source: Axelera AI.

Chips that train artificial neural networks, typically used in the cloud and in the large data centres owned by companies like Microsoft, Amazon, Google and Facebook, represent 80% of the current market. However, experts expect most of the market to shift to inference at the edge in the coming months.

This new generation of hardware for AI at the edge needs to address several challenges currently faced by developers.

Challenge 1: Standard computing performance is facing an end to its exponential growth.

Driven by Moore’s law and Dennard scaling, and bounded by Amdahl’s law, computer performance grew exponentially for 30 years. A careful look at data from the past 15 years, however, shows that this growth has slowed to the point of almost flattening, especially over the last five years.

Challenge 2: Neural network size is increasing exponentially.  

While standard computing performance is slowing down, neural network size and computational requirements are growing exponentially at a swift pace. In five years, the most advanced neural networks have increased in size by over 1,000 times. Similarly, the computational requirements to train the most advanced networks are doubling every three months, which amounts to more than a 1,000-fold increase every two and a half years.
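A quick back-of-the-envelope check of that doubling rate (assuming a steady doubling every three months over 30 months):

$$
2^{30/3} = 2^{10} = 1024 \approx 1{,}000\times
$$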

Challenge 3: Computer technology is not optimized for AI workloads.   

The design of the standard CPU (Central Processing Unit) is not well suited to today’s data processing needs. Matrix-vector multiplications dominate AI workloads: roughly 70% of the work consists of multiplying large tables of numbers and accumulating the results of those multiplications.
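As a rough illustration of that multiply-accumulate pattern, here is a minimal NumPy sketch of a single neural-network layer; the sizes are arbitrary and chosen only for the example.

```python
import numpy as np

# Illustrative only: a single neural-network layer reduces to a
# matrix-vector multiply-accumulate (MAC). Sizes below are arbitrary.
W = np.random.rand(512, 1024)   # weight matrix (learned parameters)
x = np.random.rand(1024)        # input activations
b = np.random.rand(512)         # bias

# The explicit loop shows the MAC pattern a CPU must execute:
# each output element is a sum of element-wise products.
y = np.zeros(512)
for i in range(512):
    acc = 0.0
    for j in range(1024):
        acc += W[i, j] * x[j]   # multiply ...
    y[i] = acc + b[i]           # ... and accumulate

# In practice this collapses into a single vectorised call.
assert np.allclose(y, W @ x + b)
```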

Challenge 4: Technology is inefficient, leading to a data bottleneck. 

Data movement is the key factor driving both computer performance and power consumption in artificial intelligence, particularly in deep learning. AI workloads constantly move data from the computer’s memory to the CPU, where operations such as multiplications or sums are performed, and then back to the memory, where the partial or final result is stored. AI requires a new technology that reduces data movement and optimizes data flow within the system.
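A conceptual software analogy (not a hardware model) shows why the organisation of a computation matters: the same result can be produced with very different amounts of memory traffic.

```python
import numpy as np

# Two ways to compute the same dot product of two large vectors.
x = np.random.rand(1_000_000)
w = np.random.rand(1_000_000)

# Step-by-step version: the intermediate array `t` is written out to
# memory in full and then read back, doubling the data that moves.
t = x * w           # read x and w, write t
y_staged = t.sum()  # read t again

# Fused version: a single pass over x and w, with the running sum kept
# in registers, so no intermediate array ever touches memory.
y_fused = np.dot(x, w)

assert np.isclose(y_staged, y_fused)
```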

Figure 2 – Challenges of Artificial Intelligence at the Edge. 

Properly addressing the above challenges and delivering new products based on modern computing architectures will unleash cutting-edge applications and scenarios in retail, security, smart cities and more. Here are a few examples of the areas AI at the edge has the potential to unlock.

Mobility: The mobility market is one of the largest current markets for AI. It includes autonomous driving, driver assistance systems, driver attention control, fleet management, passenger counting, commercial payload and perimeter control.

Retail: Retail automation is another of the fastest-growing markets for AI at the edge, impacting everything from supermarkets to local stores and vending machines. Typical applications in this area include interactive digital signage, customer analytics, product analytics, autonomous checkout systems and autonomous logistics.

Security: There are more than 500 million public and private cameras in the world. Most of these systems do not transfer video to the cloud. Instead, detection and crowd tracking are done by a computer in the camera’s proximity (in the case of shops and indoor areas) or within a private network.

Figure 3 – Edge AI Market opportunity. 

Smart City: According to the UN, 68% of the world’s population will live in urban areas by 2050. This unprecedented migration is forcing city and metropolitan planners to rethink how people live and how cities develop. AI is helping to collect and analyze data from cameras and sensors, turning it into insight for applications such as intelligent traffic systems, intelligent lighting systems, intelligent parking systems and crowd analytics.

Personal Safety: Artificial intelligence gives us the tools to improve safety at work and in private life. Camera systems can restrict access to sensitive areas, devices or machines to authorized personnel (using biometrics for identification), promptly identify employees in danger, and more. Augmented reality will also allow people to learn how to operate new tools more efficiently and safely.

Robotics & Drones: Artificial intelligence at the edge is powering drones and robots used across logistics, manufacturing and many other sectors. Drones can survey large areas with challenging environmental conditions and help businesses operate in them efficiently and safely. These inventions will radically change several vital areas, including agriculture, environmental control and logistics.

Manufacturing: Manufacturers have used computer vision to optimize their processes for decades. These systems operate in an isolated manner to limit the risk of a complete manufacturing line failure. Deep learning is continuously introducing new possibilities and helping achieve higher manufacturing standards and output.

Healthcare: Today, AI can help accurately identify early-stage skin cancers and other diseases with a success rate similar to that of an experienced radiologist. Features like these currently rely on powerful cloud computers, but they will soon be available on edge devices outside the cloud.

The examples above illustrate only some of the many areas enhanced by AI and data-driven decision-making. The semiconductor market seems to have entered its Compute Cambrian Explosion era, with hundreds of newborn fabless semiconductor startups proposing new solutions every day. It is difficult to predict which technology and which company will “win” this race, or whether only one winner will emerge.

We believe that heterogeneous architectures that merge different technologies will ultimately prevail. Dataflow computing and in-memory computing can deliver an optimal solution that fulfils market needs and provides cost-effective, robust and efficient hardware.

While traditional computing systems move data from memory to the computing unit and store the result back in memory, dataflow in-memory computing processes data directly inside the memory cells. This drastically reduces data movement, and consequently power consumption, while making it possible to perform millions of operations in a single computing cycle.

Furthermore, combining computing and memory reduces the chip’s footprint and, consequently, its cost. Combining multiple in-memory computing cores with dataflow makes it possible to develop a versatile technology that supports the most widely used networks in computer vision and natural language processing, delivering high throughput and efficiency at a fraction of the cost of current solutions.
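As a purely conceptual software analogy (not a description of any specific product), the sketch below models an in-memory-computing crossbar as an object that keeps its weights resident and returns a full matrix-vector product per call, with several such cores chained in a dataflow pipeline; the class name and sizes are invented for illustration.

```python
import numpy as np

# Conceptual analogy only: in an in-memory-computing crossbar, the
# weight matrix is stored in the memory array itself. Applying the
# input vector to the array's rows produces all the multiply-accumulate
# results on its columns at once, i.e. a whole matrix-vector product
# per "cycle" of the array.
class CrossbarArray:
    def __init__(self, weights: np.ndarray):
        # Weights stay resident in the array; they are never shuttled
        # back and forth between a separate memory and a CPU.
        self.weights = weights

    def mvm(self, x: np.ndarray) -> np.ndarray:
        # One conceptual "cycle": every output element is computed
        # in parallel inside the array.
        return self.weights @ x

# A dataflow pipeline chains several such cores, streaming activations
# from one directly into the next.
layer1 = CrossbarArray(np.random.rand(256, 784))
layer2 = CrossbarArray(np.random.rand(10, 256))
x = np.random.rand(784)
out = layer2.mvm(np.maximum(layer1.mvm(x), 0.0))  # ReLU between layers
```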

Interested in learning more about this topic? In our next article, our CTO will explore the nuanced world of in-memory computing. Subscribe here to follow our blog and receive email notifications when we post next.