OWASP AppSec Italy 2026 - Trainings

Wed Jun 17 2026 at 09:00 am to 06:00 pm UTC+02:00

Hotel Regina Margherita | Cagliari

OWASP Italy
Publisher/HostOWASP Italy
OWASP AppSec Italy 2026 - Trainings
Advertisement
OWASP Italy Day 2026 will take place on June 17th-18th, 2026, in Cagliari, Sardinia (Italy).
About this Event

OWASP Italy Day 2026 will take place on June 18th, 2026, in Cagliari, Sardinia (Italy) โ€” returning to one of the most inspiring locations for cybersecurity innovation and collaboration.


This will be a free, one-day, in-person event focused on application security, AI security, DevSecOps, and secure software development, bringing together researchers, professionals, and students to exchange ideas, share experiences, and strengthen the AppSec community.


The main conference will start on June 18th at 3:30 PM, following a day of training sessions and workshops on June 17th (and optionally the morning of June 18th).


Tickets for the training sessions and workshops starting on June 17th can be purchased directly through this Eventbrite page. These tickets grant access to the hands-on training activities taking place on June 17th (and, where applicable, on the morning of June 18th).


Registration for the public OWASP Italy Day event on June 18th (afternoon) is handled separately. Tickets for the main conference are available on the official OWASP Italy website at: https://owasp.org/www-chapter-italy/events/OWASPItalyDay2026-06-18


Day 1- Agenda

๐Ÿ•‘: 09:00 AM - 06:00 PM
Introductory - AI Security Training
Host: Vandana Verma Sehgal

Info: This hands-on workshop is designed for security engineers, AppSec teams, DevSecOps practitioners, senior developers, and software architects building or defending AI-powered applications.

By the end, youโ€™ll walk away with:
A clear mental model of LLM and agent-based threat surfaces
A hardened mini-agent youโ€™ll build during the session
A reusable MCP server security checklist
Practical playbooks and patterns you can apply immediately in real environments
This isnโ€™t theory โ€” itโ€™s attack โ†’ understand โ†’ defend.
Attendee Requirements (Prepare Beforehand)
Laptop with: Docker, Python 3.11+, Node 18+
One LLM: Ollama (llama3.1/phi3) or a cloud API key (OpenAI/Azure/OpenRouter)
Git installed, plus pipx or venv
Browser with DevTools (VS Code recommended but optional)
Sample lab repos (provided ahead of time):
- basic-llm-injection-demo
- mini-agent-tools-demo
- mcp-server-minimal (FastAPI/Express versions)


๐Ÿ•‘: 09:00 AM - 06:00 PM
Introductory - Secure Coding for LLM Applications
Host: Fabio Cerullo

Info: Description
AI-driven applications are rapidly transforming products, developer workflows, and customer experiences.
But these systems introduce unique security risks that traditional AppSec practices donโ€™t address.
This 1.5 days hands-on course teaches developers, AppSec engineers, and architects how to design and build secure AI/LLM applications. Participants learn to defend against prompt injection, insecure output handling, model poisoning, data leakage, and other risks from the updated OWASP Top 10 for LLM Applications 2025.
Through labs and real-world case studies, attendees gain practical skills for deploying safe, trustworthy, and compliant AI capabilities at scale.

Course Outline
Part I: Foundations of AI and LLM Security
Part II: Threat Modeling and Architecture Threat Modeling for LLM Systems
RAG Security: Retrieval, Embeddings, and Index Integrity Agent and Tool Security
Part III: The OWASP Top 10 for LLM Applications 2025
Part IV: Secure AI/LLM Design and Governance


๐Ÿ•‘: 09:00 AM - 06:00 PM
Intermediate - Secure AI Agent Swarm
Host: Krishnendu Dasgupta

Info: Modern AppSec teams are using agentic workflows to triage vulnerability reports and incident tickets that contain logs, stack traces, and chat transcripts often with PII, secrets, and sensitive internal context. Once these workflows add RAG and autonomous tool use (function calling), the attack surface expands: prompt injection can trigger unsafe actions, sensitive data can leak through memory/RAG/tool outputs, agents can be spoofed, and controls can be bypassed.

You will understand the need of building, securing and deploying AI Agent Swarms in a decenetralized and trustless ecosystem.

In this training you will build a Secure AppSec Triage & Remediation Swarm: a policy-governed, privacy-preserving multi-agent system powered by open-source foundation models in the 4Bโ€“20B range (Mistral/Qwen-class), with an explicit focus on EU policy-driven controls.


๐Ÿ•‘: 09:00 AM - 06:00 PM
Intermediate - AI-Powered Threat Modeling
Host: Marco Morana

Info: Traditional threat modeling frameworks such as STRIDE and PASTA remain essential to secure design, but modern cloud-native architectures, distributed systems, and fast development cycles require more scalable and consistent analysis than manual approaches can offer. This session introduces an AI-augmented approach to threat modeling that leverages Large Language Models (LLMs) such as ChatGPT and domain-specific tools like StrideGPT to accelerate the threat modeling lifecycle while preserving expert judgment.

Drawing from the AI-Powered Threat Modeling eBook and an accompanying 8-module online course, the presentation demonstrates how generative AI can support system decomposition, automatically generate threat scenarios from architectural descriptions and DFDs, assist risk-based prioritization using CVEs and telemetry, and streamline documentation into standardized, repeatable outputs, with a focus on proper data preparation and preprocessing.


๐Ÿ•‘: 09:00 AM - 06:00 PM
Advanced - The Mobile Playbook - Android AppSec
Host: Sven Schleier

Info: This hands-on course is designed to teach penetration testers, developers, and engineers how to analyse Android applications for security vulnerabilities. The course covers the different phases of testing, including dynamic testing, static analysis and reverse engineering. We will also explore how you can use the Model Context Protocol (MCP) to automate some of these workflows and leverage its strengths.

The course is based on the OWASP Mobile Application Security Testing Guide (MASTG) and taught by one of the project co-leaders. This comprehensive, open-source mobile security testing book covers both Android, providing a methodology and detailed technical test cases to ensure completeness and utilizes the latest attack techniques against mobile applications. This course provides hands-on experience with open-source tools and advanced methodologies, guiding you through real-world scenarios.


Day 2 - Agenda

๐Ÿ•‘: 09:00 AM - 01:00 PM
Introductory - Secure Coding for LLM Applications
Host: Fabio Cerullo

Info: AI-driven applications are rapidly transforming products, developer workflows, and customer experiences.
But these systems introduce unique security risks that traditional AppSec practices donโ€™t address.
This 1.5 days hands-on course teaches developers, AppSec engineers, and architects how to design and build secure AI/LLM applications. Participants learn to defend against prompt injection, insecure output handling, model poisoning, data leakage, and other risks from the updated OWASP Top 10 for LLM Applications 2025.
Through labs and real-world case studies, attendees gain practical skills for deploying safe, trustworthy, and compliant AI capabilities at scale.

Course Outline
Part I: Foundations of AI and LLM Security
Part II: Threat Modeling and Architecture Threat Modeling for LLM Systems
RAG Security: Retrieval, Embeddings, and Index Integrity Agent and Tool Security
Part III: The OWASP Top 10 for LLM Applications 2025
Part IV: Secure AI/LLM Design and Governance


๐Ÿ•‘: 09:00 AM - 01:00 PM
Intermediate - AI-Powered Threat Modeling
Host: Marco Morana

Info: Traditional threat modeling frameworks such as STRIDE and PASTA remain essential to secure design, but modern cloud-native architectures, distributed systems, and fast development cycles require more scalable and consistent analysis than manual approaches can offer. This session introduces an AI-augmented approach to threat modeling that leverages Large Language Models (LLMs) such as ChatGPT and domain-specific tools like StrideGPT to accelerate the threat modeling lifecycle while preserving expert judgment.

Drawing from the AI-Powered Threat Modeling eBook and an accompanying 8-module online course, the presentation demonstrates how generative AI can support system decomposition, automatically generate threat scenarios from architectural descriptions and DFDs, assist risk-based prioritization using CVEs and telemetry, and streamline documentation into standardized, repeatable outputs, with a focus on proper data preparation and preprocessing.


Advertisement

Event Venue & Nearby Stays

Hotel Regina Margherita, 44 Viale Regina Margherita, Cagliari, Italy

Tickets

EUR 850.00 to EUR 1200.00

Icon
Concerts, fests, parties, meetups - all the happenings, one place.

Ask AI if this event suits you:

More Events in Cagliari

Eric Johnson in Casteddu\/Cagliari
Sat, 11 Jul at 07:00 pm Eric Johnson in Casteddu/Cagliari

Cagliari is Happening!

Never miss your favorite happenings again!

Explore Cagliari Events