Alessandro Palla, Intel — Thursday 30 April, 1:30 PM to 4:30 PM, Room F3, Polo Etruria
Abstract
The demand for running complex AI models directly on personal edge devices has made low-power AI accelerators more critical than ever. This talk explores the dual frontier of modern semiconductor engineering: building silicon for AI, and building silicon with AI. We will begin by briefly demystifying how AI models actually work under the hood. Understanding these core computational mechanics sets the stage for a deep dive into the architecture of AI accelerators, highlighting the essential hardware-software trade-offs and compiler optimizations required for highly efficient, low-power edge deployment. Finally, we will explore how the very technology we are accelerating is stepping in to manage the growing complexity of digital design, and how recent advances in autonomous coding agents and LLMs are beginning to streamline the engineering workflow by automating RTL generation, accelerating verification loops, and aiding architectural design-space exploration.
Bio
Alessandro Palla is a senior staff deep learning engineer at Intel. He received his degree in electronics engineering in 2014 and his PhD in 2018, both from the University of Pisa. Since 2017 he has worked at Intel Corporation, designing next-generation Neural Processing Unit (NPU) AI accelerators for Intel client CPUs. His areas of expertise are hardware/software co-design and compiler optimization techniques for AI accelerators.
Event Venue
Scuola di Ingegneria, Università di Pisa, L.go Lucio Lazzarino 2, Pisa, Italy






