by Evangelos Eleftheriou, CTO at Axelera AI

Our CTO sat down with Torsten Hoefler to scratch the surface and get to know our new scientific advisor a little better.

Evangelos: Could you please introduce yourself and your field of expertise?

Torsten: My background is in high-performance computing. I have worked on large-scale supercomputers, networks, and the Message Passing Interface specification. More recently, my main research interests are learning systems and their applications, especially in the area of climate simulation.

E: What is the current focus of your research interests?

T: I try to understand how to improve the efficiency of deep learning systems (both inference and training), ranging from the smallest portable devices to the largest supercomputers. I especially like applying such techniques to predicting the weather or future climate scenarios.

E: What do you see as the greatest challenges for data-centric computing in the current hardware and software landscape?

T: We need a fundamental shift in thinking, starting from algorithms, where we currently teach and reason about operational complexity. We need to start thinking seriously about data movement. From this algorithmic base, the data-centric view needs to percolate into programming systems and architectures. On the architecture side, we need to understand the fundamental limitations in order to create models that guide algorithm engineering. Then, we need to unify all of this in a convenient programming system.

E: Could you please explain the general concept of DaCe as a generic data-centric programming framework?

T: DaCe is our attempt to capture data-centric thinking in a programming system that takes Python (and other) code and represents it as a data-centric graph representation. Performance engineers can then work conveniently on this representation to improve the mapping to specific devices. This ensures the highest performance.

E: DaCe also has extensions for machine learning (DaCeML). Where do those help? Could in-memory computing accelerators benefit from such a framework in general, and how?

T: DaCeML supports the Open Neural Network Exchange (ONNX) format and PyTorch through the ONNX exporter. It offers both inference and training support at the highest performance using data-centric optimizations. In-memory computing accelerators can be a target for DaCe – depending on the semantics they offer, a performance engineer could identify pieces of the dataflow graph to be mapped to such accelerators.

E: In which new application domains do you see data-centric computing playing a major role in the future?

T: I would assume all computations where performance or energy consumption is important – ranging from scientific simulations to machine learning and from small handheld devices to large-scale supercomputers.

E: What is your advice to young researchers in the field of data-centric optimization?

T: Learn about I/O complexity!
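To make that advice concrete, here is a toy back-of-the-envelope model (not a real cache simulator; all names are illustrative) of the classic I/O-complexity observation for matrix multiplication: tiling the computation so that b×b blocks fit in fast memory cuts slow-memory traffic by roughly a factor of b.

```python
def naive_traffic(n):
    """Slow-memory reads for C = A @ B with no reuse: every multiply-add
    fetches one element of A and one of B, so traffic is 2*n**3."""
    traffic = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                traffic += 2  # read A[i][k] and B[k][j] from slow memory
    return traffic

def tiled_traffic(n, b):
    """Same computation with b-by-b tiles held in fast memory: each tile
    of A and B is loaded once per block multiply and then reused."""
    assert n % b == 0
    traffic = 0
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                traffic += 2 * b * b  # load one b*b tile of A and of B
    return traffic

print(naive_traffic(8), tiled_traffic(8, 4))  # tiling wins by a factor of b
```

The same arithmetic is performed in both cases; only the order of data movement changes, which is exactly the distinction between operational complexity and I/O complexity.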

As Scientific Advisor, Torsten Hoefler advises the Axelera AI Team on the scientific aspects of its research and development. To learn more about Torsten’s work, please visit his biography page.

Stay tuned to learn more about our progress in upcoming blog posts, and be sure to subscribe to our newsletter using the form on our homepage!
