Why professional bodies need a new evidential foundation for their credentials, and what a credible response looks like.
Our Thesis: experiential assessment is the infrastructure layer for the future of professional credentialing.
Every major shift in how professional work gets done eventually forces a reckoning with how competence gets measured.
The Industrial Revolution led to professional licensing. Corporate bureaucracy led to standardised testing. The knowledge economy shifted the focus to university degrees and professional qualifications. In the 1990s, software engineering made the work itself demonstrable without the need for a credential, but only for developers.
AI is the next shift, and it breaks the previous measurement systems in two ways:
It makes traditional credentials easier to fake
It removes the developmental experiences that used to build the capability those credentials claimed to represent
This new challenge means that we need to shift both the way we measure competence and how professionals learn.
In a world where outputs are increasingly synthetic, the ability to observe how a professional actually performs (how they think, decide, communicate, and handle pressure) is more valuable than ever. Experiential assessment is not a feature of the future of professional credentialing. It is the infrastructure. And it generates the data that gives a credential its meaning: verified, performance-based capability signals mapped to professional standards.
Traverse is building that infrastructure. The thinking below sets out why the moment is now, what a credible response looks like, and what we have learned from the work so far.
Why Now
Three trends are converging in 2026.
1. Employers want to hire for skills but can’t.
85% of employers say they use skills-based hiring (TestGorilla, 2025), but research from Harvard and Burning Glass found that fewer than 1 in 700 hires actually changed as a result (Harvard Business School / Burning Glass Institute, 2024). GPA screening has dropped from 73% to 42% since 2019 (NACE Job Outlook 2026), but nothing credible has replaced it.
2. AI is hollowing out the bottom of the pyramid.
Two-thirds of managers and executives say that most recent hires were not fully prepared, and lack of experience was the most common failing (Deloitte 2025 Global Human Capital Trends). AI is automating the entry-level tasks that used to build professional judgment: the research, the drafts, the models, the grunt work. The bottom rung is disappearing. 54% of engineering leaders expect AI coding tools to reduce junior developer hiring long-term (LeadDev AI Impact Report 2025). A Stanford Digital Economy Lab study found that employment for early-career US software developers has declined nearly 20% since late 2022 (Stack Overflow, 2025). The impact will not be limited to software jobs. Employers are responding by raising experience requirements, not lowering them, creating a cycle where junior professionals need experience they can no longer get.
3. Professional bodies are being pushed harder to justify the value of their credentials.
AI adoption is happening faster than any previous general-purpose technology (Microsoft AI Diffusion Report, 2025). The tools professionals use to work (frontier AI models like Google’s Gemini, OpenAI’s ChatGPT, and Anthropic’s Claude) are updated every few weeks. Professional competence is a moving target in a way it has never been before. Bodies have recognised this and are already responding, but what they are building is difficult to maintain and scale.
Employers want to hire for skills but lack a reliable signal. Graduates want to prove what they can do but need experience they can no longer get. The professional bodies trying to bridge this gap with credentials are working harder than ever to keep pace. And AI is making all three problems worse by making it harder to tell whether a human or a machine did the work.
The Problem With How Professional Competence Has Been Measured
Modern assessments remain poor proxies for actual human performance.
Knowledge recall tests, written case studies, and CPD hour logs have been the backbone of professional certification for decades. They are defensible, scalable, and familiar, but they are also relatively poor proxies for actual professional capability.
Written exams test what someone can remember under pressure. Case studies test the ability to structure a response to a pre-set question. CPD logs confirm attendance, not learning. None of these reliably shows whether someone can exercise sound professional judgment in the conditions where it is actually needed: under time pressure, with incomplete information, in situations where competing priorities must be weighed and where the consequences of poor decisions are real.
The gap between what the credential implies and what it actually evidences has always existed. Professional bodies have managed it through reputation, employer trust, and the assumption that workplace experience would close what the qualification left open. That assumption is now breaking down.
We’re missing two things:
The ability to observe performance itself: to place a member in a realistic, professionally demanding scenario that mirrors actual practice, evaluate how they navigate it, and produce a structured, defensible record of what that performance demonstrated. Not an inference. Not a proxy. An observation.
The ability to use that same experience to actively develop capability, not simply to score it. This is the distinction between assessment of learning and assessment for learning. Most existing assessment formats do one. Traverse does both from the same data infrastructure.
The goal, then, is to make the assessment and the development the same event. There is no separate remediation step. Candidates navigate a high-fidelity work scenario, receive dynamic feedback calibrated to their specific performance and your competency framework, and leave with something more useful than a grade: a practised capability and an evidence base to support it.
Until recently, doing either well at scale was practically impossible. AI changed that. Not by replacing the assessor, but by making it feasible to build the environment, evaluate the nuance, deliver meaningful feedback, and repeat the experience across thousands of candidates simultaneously without sacrificing the depth that makes assessment useful.
Why This Matters More Than People Realise
There is a second-order consequence of AI’s rise in knowledge work that will define the next decade of professional development.
As AI takes over the execution layer of knowledge work, be it drafting documents, running analyses, processing information, or managing routine decisions, the grunt work that young professionals used to do to build professional judgment is disappearing. Junior accountants developed professional scepticism by doing the detailed work. Junior lawyers developed situational judgment by handling the research and the drafts. Junior finance professionals developed critical thinking by building the models themselves. Those experiences build the neural pathways for how the job gets done. Remove the experiences and the pathways don’t form.
This is not a future problem. Deloitte’s 2025 Global Human Capital Trends report identified the “experience gap” as one of the defining workforce challenges of the AI era, finding that most so-called “entry-level jobs” now require two to five years of experience, and that 61% of employers have increased experience requirements in the past three years (Deloitte, 2025). The pattern holds outside professional services: in software engineering, early-career developer employment dropped nearly 20% from its 2022 peak, even as demand for experienced developers rose 9% (Stanford Digital Economy Lab, 2025). The entry point to knowledge work is compressing.
This creates a profound challenge for professional bodies specifically. The professionals of the next decade will need more capacity for judgment, critical thinking, and effective oversight of AI-driven workflows precisely because those are the skills AI cannot replicate. But they will have systematically fewer natural opportunities to develop them. If the body’s qualification is meant to certify that its members possess these capabilities, it needs an assessment model that can actually observe and develop them.
We still teach children long division even though calculators exist, not because they will ever need to do it manually, but because the process of doing it builds the numerical reasoning that makes them effective with the tools that do it for them. The same logic applies to professional skills in the AI era. You do not need to have literally experienced a high-stakes audit dispute to know how to handle one. But you need enough simulated experience of that moment, its pressure, its ambiguity, its competing priorities, that your judgment is calibrated when the real version arrives. And when it does arrive, increasingly, you will be the one supervising the AI that is doing the work, not doing it yourself. That supervision requires real capability. Simulations can build it.
Flight simulators exist because some skills can only be developed through experience, but the cost of gaining that experience in the real thing is too high. The same logic applies to professional judgment. You cannot learn to navigate a high-stakes audit dispute, a difficult client conversation, or an ambiguous strategic decision from a textbook. But you also shouldn’t have to wait for the real moment to arrive, with real consequences, before you’ve had any practice. Imagine the professional equivalent: high-fidelity simulations of real work, accessible, repeatable, and available to every member rather than just the people who happen to get the right opportunities.
Accessibility is central to this mission. The experiences that traditionally built professional confidence and judgment were unevenly distributed, whether by firm size, geography, network, or luck. Making those experiences as open as possible is a key step in making the profession more accessible to everyone.
The Opportunity for Accreditation Bodies
Accreditation bodies, chartered institutes, membership organisations, and professional certification bodies are facing a genuine question about the evidential foundation of their credentials.
If the skills they certify are being displaced or transformed by AI, what does the qualification prove? The bodies that answer this question credibly will strengthen their position. Those that wait risk others answering it for them.
The pattern is consistent across conversations we've had with senior figures inside professional bodies. Entry-level, transactional roles are falling away while strategic, analytical, and advisory work is growing. As one examinations director put it recently, "jobs start in the middle." The bottom rung of the professional ladder, the work that used to build judgment through repetition, is compressing or disappearing entirely.
The more progressive bodies have already begun to respond. Some have restructured their qualifications to embed critical thinking from year one rather than deferring it to the final stage. Others have redesigned their case study examinations as business simulations, where candidates produce work that mirrors what they would produce in practice. The intent is to develop judgment through practice, not simply test recall under examination conditions. The explicit focus is shifting towards skills that cannot (or are less likely to) be automated.
This is encouraging. But even the most progressive bodies face a structural constraint: they are building on assessment infrastructure designed for a different era. Many others are further behind, piecing together AI tools, connecting APIs, and running isolated experiments with simulation, without the underlying data architecture to make any of it defensible or scalable.
The Potential of Curated Talent Pools
A better assessment solution is the first step towards a future where people and organisations can find each other based on verified capability data.
Between 2010 and 2020, online courses became a phenomenon. What started as the democratisation of learning from elite universities evolved into a sub-segment of the creator economy focused on people teaching their peers. The infrastructure of formal online learning gave way to a dizzying variety of educational content. The problem shifted from limited access to complete overwhelm.
The same thing is happening to talent and recruitment. The job board model is structurally broken. It consists of pools of candidates described by demographic data, arbitrary tags, and self-reported credentials, filtered by keyword matching that produces low-definition signals at high volume. Employer surveys confirm this year after year: most recently, 60% of employers reported that receiving too many unqualified candidates is their top recruiting challenge when using job boards (iHire State of Online Recruiting, 2025). Employers do not need more candidates. They need better information about the candidates they already have access to.
We envision a data infrastructure that makes genuinely high-definition talent pools possible. Not demographic data and tags, but verified performance data, mapped to professional standards, generated through actual simulation of the work itself. When that data exists at scale, the curation possibilities are significant: bodies curating pools of their certified members with demonstrated competency profiles; employers accessing pre-verified talent matched to specific role requirements; individuals building portable, defensible records of their capability that travel with them across employers and career transitions.
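To make "verified performance data, mapped to professional standards" concrete, here is a minimal, hypothetical sketch in Python. The names (CapabilitySignal, competency_code, the example framework codes) are our illustrative assumptions, not a published Traverse schema or any body's actual framework.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CapabilitySignal:
    """One performance-based observation generated by a simulated scenario.

    All field names are illustrative assumptions, not a real schema.
    """
    candidate_id: str
    scenario_id: str      # the simulation that produced the evidence
    competency_code: str  # maps to a professional body's framework, e.g. "JUD-1"
    level: int            # performance level observed, on the body's own scale
    evidence: str         # structured note on what the performance demonstrated
    verified: bool        # True once the assessment pipeline has validated it


def competency_profile(signals):
    """Aggregate verified signals into a per-competency profile:
    the highest level demonstrated for each framework code."""
    profile = {}
    for s in signals:
        if not s.verified:
            continue  # unverified evidence never enters the profile
        profile[s.competency_code] = max(profile.get(s.competency_code, 0), s.level)
    return profile
```

The key design point the sketch illustrates: each record carries its own provenance (which scenario, what evidence, whether it was verified), so a profile built from these signals is demonstrated and auditable rather than self-reported.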
Professional bodies are the natural conveners of these pools. The designation, the network, the CPD structure: all of it creates a relationship with members that no employer-built competency framework can replicate. With verified performance data at scale, that community becomes something more commercially and strategically powerful: a curated pool of professionals whose competencies are not self-reported or inferred from examination history, but demonstrated and structured. That is the credential becoming a genuine hiring signal rather than just a compliance requirement.
What Comes Next
AI is simultaneously raising the bar for what judgment, critical thinking, and professional expertise look like and removing the organic experiences that used to build them.
The measurement systems that professional bodies inherited from the previous era were designed for a world where credentials could be trusted and workplace experience could be assumed to close any remaining gap. That world is ending.
What comes next is experiential assessment: the ability to observe, evaluate, and develop how professionals actually perform, at scale, with the rigour that regulated professions and the bodies that serve them require. Verified capability signals mapped to professional standards are the data that assessment generates, and they are the foundation for a new infrastructure layer that makes the credential more useful to employers, more valuable to members, and more defensible to regulators.
That infrastructure does not exist yet. That’s what we’re building.
Learn more.
Reach out to Owen Ashby at owen@thetraverse.co.
Sources Cited
Deloitte (2025). 2025 Global Human Capital Trends: Closing the Experience Gap Through Talent Development. https://www.deloitte.com/us/en/insights/topics/talent/human-capital-trends/2025/closing-the-experience-gap-through-talent-development.html
Harvard Business School / Burning Glass Institute (2024). Skills-Based Hiring: The Long Road from Announcements to Outcomes. https://www.hbs.edu/managing-the-future-of-work/Documents/research/Skills-Based%20Hiring.pdf
iHire (2025). The State of Online Recruiting 2025. https://www.ihire.com/resourcecenter/employer/pages/the-state-of-online-recruiting-2025
LeadDev (2025). AI Impact Report 2025: Junior Devs Still Have a Path to Senior Roles. https://leaddev.com/hiring/junior-devs-still-have-path-senior-roles
Microsoft (2025). AI Diffusion Report. https://www.microsoft.com/en-us/research/publication/ai-diffusion-report/
NACE (2026). Job Outlook 2026: Employer Use of Skills-Based Hiring Practices Grows. https://www.naceweb.org/job-market/trends-and-predictions/employer-use-of-skills-based-hiring-practices-grows
Stack Overflow (2025). AI vs Gen Z: Stanford Digital Economy Lab Findings on Early-Career Developer Employment. https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/
TestGorilla (2025). The State of Skills-Based Hiring 2025. https://www.testgorilla.com/skills-based-hiring/state-of-skills-based-hiring-2025/