Multilayer perceptrons (MLP) in Computer Vision
In this blog post, we review the V-MLP literature, compare V-MLPs to CNNs and ViTs, and attempt to extract the ingredients that really matter for efficient and accurate deep learning-based computer vision.
Interview with Torsten Hoefler, Axelera AI’s Scientific Advisor
Our CTO sat down with Torsten Hoefler to scratch the surface and get to know our scientific advisor better.
Transformers in Computer Vision
Convolutional Neural Networks (CNNs) have dominated Computer Vision applications for over a decade. Today, they are being outperformed and replaced by Vision Transformers (ViTs), which offer higher learning capacity. The fastest ViTs are essentially CNN/Transformer hybrids, combining the best of both worlds.
An Interview with Marian Verhelst, Axelera AI’s Scientific Advisor
I met Marian Verhelst in the summer of 2019, and she immediately struck me with her passion for and competence in computing architecture design. We started a collaboration right away, and today she is here with us, sharing her insights on the future of computing.
What’s Next for Data Processing? A Closer Look at In-Memory Computing
Technology is progressing at an incredible pace, and no field is moving faster than Artificial Intelligence (AI). Indeed, we are on the cusp of an AI revolution that is already reshaping our lives.