
About this Event
Ready to move beyond the theory and get hands-on with custom models? In the second session of our three-part series, Sngular engineer Luis Orellana will break down how to create your own LLMs through fine-tuning. You’ll learn:
- The differences between Full Fine-Tuning, LoRA, and QLoRA
- Best practices for dataset preparation and hyperparameter optimization
- How to apply fine-tuning techniques to a real-world use case
We’ll also walk through deploying a distilled model on Google Cloud Vertex AI and integrating it with a FastAPI service—so you can see how everything fits together in practice.
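To give a flavor of that integration step, here is a minimal sketch of a FastAPI wrapper around a fine-tuned causal LM, with the decoder settings (temperature, top-k, top-p, repetition penalty) exposed as defaults. The model name, endpoint path, and parameter values are illustrative assumptions, not the ones used in the session.

```python
# Sketch of a FastAPI service wrapping a fine-tuned model.
# MODEL_ID, the /generate path, and the decoding defaults are
# placeholder assumptions, not values from the talk.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "my-org/my-distilled-model"  # hypothetical fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128

@app.post("/generate")
def generate(req: GenerateRequest):
    # Tokenize the prompt and sample a completion from the model.
    inputs = tokenizer(req.prompt, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=req.max_new_tokens,
        do_sample=True,
        temperature=0.7,        # decoder strategies covered in the talk
        top_k=50,
        top_p=0.9,
        repetition_penalty=1.1,
    )
    text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return {"completion": text}
```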
Light refreshments, great conversation, and practical deep learning insights await.
Topics include:
- What is fine-tuning (training-testing)?
- Types of fine-tuning:
  - Full Fine-Tuning
  - LoRA
  - QLoRA
- Dataset preparation:
  - Vectorization
  - K-Means
- Hyperparameters:
  - Learning Rate
  - Batch Size
  - Gradient Accumulation
  - Grid Search
- Fine-tuning use case
- Google Cloud Vertex AI
- Decoder Strategies for LLMs:
  - Temperature
  - Top K
  - Top P
  - Repetition Penalty
- FastAPI service integration use case
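As a taste of the LoRA material above, here is a minimal sketch of attaching LoRA adapters to a causal LM with the Hugging Face peft library. The base checkpoint, target modules, and hyperparameter values are placeholder assumptions, not the ones from the session.

```python
# Minimal LoRA fine-tuning setup sketch using Hugging Face peft.
# The base model, target modules, and hyperparameters below are
# placeholder assumptions, not values from the session.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

BASE_MODEL = "facebook/opt-350m"  # hypothetical small base checkpoint
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA: freeze the base weights and train small low-rank adapter matrices.
lora_config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train

# Hyperparameters mirror the topics above: learning rate, batch size,
# and gradient accumulation.
training_args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
)
# A Trainer would then be built with these arguments and a prepared dataset:
# trainer = Trainer(model=model, args=training_args, train_dataset=...)
# trainer.train()
```

QLoRA follows the same adapter pattern but loads the base model in 4-bit precision (for example via bitsandbytes) before attaching the LoRA layers, which is what makes fine-tuning large models feasible on a single GPU.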
Agenda
🕑: 05:30 PM - 06:00 PM
Networking
Info: Find Sngular in the Expansive building on Liberty. We'll have signage outside to help you find us, and sodas and snacks inside to help you get comfortable. Network with other AI enthusiasts.
🕑: 06:00 PM - 07:00 PM
Tech Talk + Q&A
Info: Sngular's Luis Máximo Orellana Altamirano will break down fine-tuning custom LLMs, from foundational concepts to a deployed real-world use case, followed by audience Q&A.
🕑: 07:00 PM - 07:30 PM
Mingle
Info: Chat and network with other attendees and presenters. We'll start packing up at 7:30.
Event Venue
Sngular Pittsburgh, 606 Liberty Avenue, Pittsburgh, United States
Tickets: USD 0.00 (free)