REFLECTIONS
The Wavelength AI Experiment
Testing AI against human enquiry - and what it revealed about us
The brief we composed was precise: replicate a completed Wavelength study using a group of synthetic participants, capable of building on each other's views. Run the same enquiry, and compare what comes back.
Clearly, we weren't testing speed or cost. We wanted to discover how far AI could match the insight of our real participants - and, in evaluating its capability, understand where it could genuinely help and where it might quietly struggle. What we found surprised us.
The AI was good. Uncomfortably good. It surfaced the same themes and articulated structural pressures with fluency. It organised complexity efficiently. In a landscape mapping exercise, it would have done the job with laser precision. But something was missing - and it took us a while to name it.
The synthetic participants spoke in complete sentences and described challenges without contradiction. They presented teaching as a coherent, manageable profession. Real teachers don't do that.
One teacher described "magpie-ing" resources - not borrowing, not sharing, but a collaborative survival strategy built through staff room exchanges and informal WhatsApp groups. Another mentioned spending "a bit of my wages" on classroom materials.
These weren't answers to our questions, but glimpses of professional reality that arrived sideways, in the spaces between structured responses. Where AI described systematic planning, teachers spoke of "winging it" - not as failure, but as the adaptive skill that keeps classrooms running.
The difference wasn't accuracy - both approaches identified the right pressures. The difference was texture, and it is within the texture that real meaning lives. For an organisation designing resources for teachers, this texture isn't optional - it's the difference between something that looks right and something that actually gets used.
The experiment made us more precise about the role AI can play in research. It can set the scene - but more than that, it can show us where and how to look.
AI works as reconnaissance. It maps terrain before human enquiry begins - surfacing themes, sketching hypotheses, refining the questions worth asking. Used that way, AI doesn't diminish research - it sharpens it.
But interpretation - the ability to notice what arrives sideways, to hear what wasn't quite said - remains entirely human territory. That distinction now shapes how we design every project.
This project made the stakes clear. Understanding how teachers actually find and use resources - the workarounds, the compromises, the informal networks - required more than accurate answers. It required the texture that only real conversation can provide.
AI provided the map. Human research revealed the terrain.