About this Event
The lawsuit that proves what we've been warning about
This week, a class action lawsuit against Eightfold AI made headlines. The complaint alleges that the company scraped social media profiles, location data, and internet activity to score candidates from 0 to 5 on "likelihood of success," all without their knowledge or consent, using data they never provided and cannot see or dispute.
This is not a one-off. It's the predictable result of an industry-wide problem.
The AI Recruiting Paradox
Candidates know they're being screened out by AI-driven hiring practices. In desperation, they lean on AI to beat the AI screening, writing CVs stuffed with keywords that might make the grade with these recruitment systems. Organisations then use AI to screen these AI-generated CVs.
The result: nobody knows what the candidate is actually capable of.
This is the wrong use of AI: searching for meaning in unstructured data that carries none. Without structured frameworks for capturing human capabilities, organisations resort to scraping whatever digital traces are available and hoping AI can infer the rest. AI ends up filtering candidates on keywords, not capability: unique talents are invisible to automated screening, and qualified people get lost in the noise.
The Eightfold lawsuit shows where this leads: opaque scores, no recourse, and qualified people potentially screened out of roles they could have excelled in.
We've been building the alternative
Lumenai LABS launched Data Maturity Matters (DMM) in October 2025 precisely because we saw this coming. DMM is the product of 12 months of intensive academic research into why People Analytics remains stuck, and how to fix it.
On 2nd February, we're putting our solution under academic scrutiny.
What we're launching
The DRL 7 Use Case Initiative is the first live use case built against DMM standards. We will be researching and testing our consent-based DRL 7 candidate data collection and portable digital credentials, demonstrating what the right use of AI in recruitment actually looks like:
- Candidates create structured capability data they own and control
- AI models genuine talent-organisation compatibility, not scraped noise
- Data that can't be gamed by AI-generated CVs
- Portable digital credentials that flow forward into your entire career lifecycle
The goal: data designed for intelligent consumption, so AI models value rather than searching for meaning in noise.
Why this matters for you
Whether you're working in recruitment, leading organisational transformation, championing fair hiring practices, or entering the job market and searching for your next role, the question is the same: is AI working for you or against you?
For recruitment and HR professionals:
The current paradigm is failing everyone. AI screening AI-generated CVs produces noise, not signal. The Eightfold lawsuit signals where opaque, non-consensual data practices lead, and the EU AI Act is making transparency and human oversight non-negotiable.
Participating in the DRL 7 Use Case gives you:
- Insight into structured data collection that produces candidates you can actually assess
- A framework for consent-based, transparent AI use that meets emerging regulatory requirements
- Evidence of what the right use of AI in recruitment looks like in practice
- The opportunity to shape industry standards before they're imposed
- Connection with a community of like-minded professionals committed to fair, effective talent practices
For candidates and early career professionals:
You already know the system is broken. You're being forced to tailor CVs to beat algorithms, not to showcase what you can actually do. Your real capabilities are invisible to automated screening.
Participating in the DRL 7 Use Case gives you:
- A fair chance to showcase real potential, not keyword optimisation
- Control over your talent data with consent-based protocols
- Matching with roles and organisations where you can genuinely thrive
- A capability profile that lasts from application through your career
- The opportunity to shape the future of fair, effective talent practices
Why this matters now
The regulatory environment is catching up. The EU AI Act classifies recruitment AI as high-risk, requiring transparency, human oversight, and candidate notification. The World Economic Forum is calling for "standardised frameworks, scalable assessment tools, and clear pathways for recognition."
The Eightfold lawsuit is a signal. The direction of travel is clear. Covert profiling and data collection without meaningful consent are increasingly legally untenable.
The question is no longer whether AI will transform recruitment. It's whether we build that transformation on solid data foundations, or on sand.
Join us to see what solid foundations look like and be part of this change.
Agenda
1. The AI recruiting paradox and its impact on early career talent (James Bryce, LSEG)
2. Introducing DMM and its open-source mission: From AI interpreting noise to modelling intelligence (Antonia Manoochehri, DMM + Lumenai)
3. The DRL 7 use case: Building structured capability data for the right use of AI
4. How participating benefits you as an early career professional and organisational thought leader
5. Q&A and next steps to get involved, including roles at DMM
Event Venue
Oxford Edge, 37 St Giles', Oxford, United Kingdom
Price: GBP 0.00