
About this Event
Nebius AI Cloud Unveiled: San Francisco Edition – Plus, Free GPU Credits
At Nebius, we're building an AI Cloud designed to streamline AI development at scale. With NVIDIA-accelerated GPUs, a robust ML-native infrastructure, and a scalable inference platform, our goal is to empower AI practitioners, researchers, and innovators.
Want to experience it firsthand? Join us in San Francisco for an exclusive deep dive into Nebius AI Neocloud, crafted for builders and pioneers like you.
Bonus: Every attendee will receive free GPU credits to test-drive Nebius AI Neocloud, explore NVIDIA-powered AI, and experiment with cutting-edge text-to-image generation in AI Studio.
📅 Thursday, March 13
📍 Convene, San Francisco
Whether you're fine-tuning AI models, orchestrating ML workloads, or deploying AI applications, this event is your opportunity to connect with Nebius developers, product experts, and industry partners, while gaining behind-the-scenes insights into our platform.
What's on the agenda?
🔹 An insider's look at Nebius AI Cloud – Architecture and key development principles
🔹 NVIDIA Blackwell Architecture – Insights and performance comparisons
🔹 Kubernetes & Slurm for ML/HPC – Optimizing large-scale cluster management
🔹 Scaling AI workloads – Lessons learned, mistakes, and best practices
🔹 Test-time computation & agentic systems – Unlocking the next generation of AI applications
🔹 Inference at scale – Choosing providers and optimizing performance
After a day of technical deep dives, stick around for networking, drinks, and bites with fellow AI practitioners.
👉 Spots are limited – register now and claim your free credits!
Agenda
🕒: 01:00 PM - 02:00 PM
Registration & welcome coffee
Info: Start the day with networking and refreshments.
🕒: 02:00 PM - 02:15 PM
Introduction
Host: Vasily Pantyukhin
Info: Setting the stage for an afternoon of innovation and discovery.
🕒: 02:15 PM - 02:45 PM
How we build a cloud for AI workloads
Host: Gleb Kholodov
Info: An in-depth look at the hardware and software architecture powering AI applications.
🕒: 02:45 PM - 03:15 PM
Marrying Slurm and Kubernetes for workload management
Host: Dmitry Starov
Info: Discover Soperator, a Slurm-based workload manager for ML and HPC clusters.
🕒: 03:15 PM - 03:30 PM
Break
🕒: 03:30 PM - 04:00 PM
What it takes to win the large-scale training game
Host: Levon Sarkisyan
Info: Avoid the top mistakes and master best practices for scaling AI models.
🕒: 04:00 PM - 04:45 PM
TractoAI – a serverless platform for AI engineers (case study with SynthLabs)
Host: Maxim Akhmedov
Info: Customer spotlight.
🕒: 04:45 PM - 05:15 PM
Improving agentic systems with test-time computation
Host: Karina Zainullina
Info: A talk on our recent research into combining guided search with agent inference, and how these techniques enable us to build better software engineering agents.
🕒: 05:15 PM - 05:45 PM
Inference: all you need to know about it
Host: Alex Patrushev
Info: Our experience building inference, with tips on choosing inference providers and GenAI models.
🕒: 05:45 PM - 06:00 PM
Closing remarks
Host: Vasily Pantyukhin
🕒: 06:00 PM
Networking, drinks and bites
Event Venue & Nearby Stays
Convene 100 Stockton, 100 Stockton Street, San Francisco, United States
USD 0.00