
About this Event
Generative AI Under the Lens: Rigorous Checks for Safety and Reliability
Generative AI has accelerated development across domains including software, robotics, autonomous systems, and industrial applications, enabling rapid innovation and productivity gains. However, deploying generative AI-driven systems can introduce safety, security, and reliability vulnerabilities, making rigorous evaluation of their outputs before deployment urgent. Among the many evaluation techniques available, this workshop will focus on formal and statistical verification approaches.
Generative AI requires careful verification. In software development, for example, surveys report that 45% of organizations prioritize speed over quality and 63% deploy code without full testing. While 80% view generative AI as enhancing both speed and quality, studies reveal significant flaws in generated code that could compromise safety and security, underscoring the need for verification methods that are both efficient and thorough across all AI-driven systems.
Generative AI can also support verification in multiple ways, including:
- Automating specification generation from system requirements to accelerate verification workflows.
- Enhancing formal verification tools by guiding proofs, generating counterexamples, and analyzing high-risk areas.
- Optimizing verification productivity and coverage in complex systems, including hardware and software co-design.
- Integrating with statistical methods to improve reliability, uncertainty quantification, and diagnostics.
These AI-assisted techniques reduce human effort, scale verification to complex systems, and enable previously infeasible tasks, all while maintaining rigorous guarantees.
The workshop will convene researchers, practitioners, and industry experts to explore the following research question: How can generative AI-assisted verification techniques, including formal and statistical approaches, ensure that AI-driven systems are rigorously checked for safety, reliability, and security across diverse domains?
Participants will explore cutting-edge methods and collaborative opportunities to advance trustworthy and reliable generative AI deployment.
Dec 10 (Manchester)
🕑: 09:00 AM - 09:30 AM
Opening & Welcome
Info: Objectives of the workshop and technical scope. Introductory remarks from organizers.
🕑: 09:30 AM - 10:30 AM
Keynote
Info: AI-assisted formal verification – bridging speed, quality, and trust.
🕑: 10:30 AM - 12:00 PM
Session I – SE4GenAI & GenAI4SE Foundations
Info: AI-assisted formal verification frameworks. Perspectives on SE4GenAI and GenAI4SE. Safety-critical systems and US/DoD experience.
🕑: 12:00 PM - 01:30 PM
Lunch and Networking Pause
🕑: 01:30 PM - 03:00 PM
Session II – Technical Demonstrations & Case Studies
Info: Demos of LLMs and AI agents in verification workflows.
🕑: 03:00 PM - 04:00 PM
Panel Discussion
Info: Theme: Balancing speed, quality, and trust in verification pipelines. Panelists: the keynote speaker plus academic and industry experts.
🕑: 04:00 PM - 04:30 PM
Closing
Info: Technical outcomes and research questions for Day 2.
Dec 11 (Liverpool)
🕑: 09:00 AM - 09:30 AM
Opening & Framing
Info: Scope: interdisciplinary perspectives on AI in software engineering.
🕑: 09:30 AM - 10:30 AM
Keynote
Info: Broader talk on generative AI and multidisciplinary applications.
🕑: 10:30 AM - 12:00 PM
Session III – Cross-Disciplinary Perspectives
Info: Contributions from software engineering, AI ethics, and policy. Invite external speakers for industry/government perspectives.
🕑: 12:00 PM - 01:30 PM
Lunch and Networking Pause
🕑: 01:30 PM - 03:00 PM
Session IV – Roadmap & Collaboration Priorities
Info: Breakout groups draft interdisciplinary research/practice priorities. Group reporting with actionable recommendations.
🕑: 03:00 PM - 03:30 PM
Closing & Next Steps
Info: Summary of both days. Agreement on outputs: joint report, proposal, or position paper.
Event Venue
The University of Manchester, Oxford Road, Manchester, United Kingdom
Admission: free (GBP 0.00)
