IN CONVERSATION

With Dr Manoela Oliveira da Silva

Legitimacy, seagulls, and what AI gets wrong about being human

Over a zucchini salad at Bosco in Bath, Wavelength speaks to Dr Manoela Oliveira da Silva about legitimacy, trust, and the human costs of technology.

Bosco’s courtyard is ruled by Gerald. The staff talk about him like a made man: an alpha seagull who keeps rivals away but exacts his price in stolen pizzas - four a day, on average. Gerald is nowhere to be seen when I arrive for lunch, though his reputation hangs in the air like the smell of baked dough.

Across the table, Manoela sits serenely in the shade, dressed in a patterned shirt alive with colour, amused by my predicament. I, in thick denim, have taken the seat in full sun - a misguided act of gallantry. Brazilian and used to real heat, she looks cool and comfortable. Within minutes I’m roasting, clutching a Diet Coke like a lifeline. It will not be the last time she shows better judgement.

Menus arrive, and Manoela navigates the gluten- and dairy-free options with the waiter, eventually settling on a zucchini salad. I opt for a steak with rocket. “Too hot for pasta,” she says, wisely.

Behind the warmth is a formidable CV: a PhD in Computer Science, earlier degrees in applied linguistics, even a stint at Harvard. She spent a decade at Voxar Labs in Recife before moving to Bath, where she now works at the Centre for People-Led Digitalisation. By the time the food lands, our conversation is already deep into the questions that preoccupy her work: why we adopt technology, how we come to trust it, and what we risk losing along the way.

The metaverse comes up first - that much-hyped digital frontier that promised to revolutionise everything and delivered mostly empty virtual real estate. Manoela sees it as a perfect case study in misplaced enthusiasm. “We’re at the trough of disillusionment now,” she says, referencing the Gartner hype cycle. The same pattern that swept organisations into virtual worlds they didn’t need is now driving them toward AI they don’t understand. Which brings us to the heart of her work: not what technology does, but why we adopt it, and whether we should.

“We use technology because we feel we should,” Manoela says, hands animated, eyes alive. “Everyone else is doing it, so we copy them. But do we really understand why?”

This is where she introduces the idea that has defined her recent research: legitimacy. At Bath, she and colleagues developed legitimacy cards - a deceptively simple deck of prompts designed to help organisations interrogate the social and cultural dimensions of technology adoption.

“It’s not just: does it work? It’s: is it legitimate? Is it appropriate in this context? Will people accept it?”

She tells me about trialling the cards at a major aerospace engineering company. Instead of obsessing over which shiny system to buy, engineers found themselves reflecting on fit. Would this tool be accepted by shop-floor workers? Did it align with the company’s values? Would customers or regulators see it as credible? One participant admitted the cards helped him justify a project he’d been struggling to push through the bureaucracy. Another suggested they could be used to audit existing decisions, catching missteps before they became expensive mistakes.

I tell her my camel-and-horse analogy, inspired by AI evangelists keen to convince the world their systems are only a step away from human conversation. We keep hammering away at AI to give us nuance and perspective - things it cannot do - just as if we were shaving humps off a camel to make it look like a horse. “Exactly,” she laughs. “Don’t paint the camel. Use it for what it’s good at.” Her point lands: businesses often strain to make technology do what it isn’t suited for - not because it’s needed, but because they’re scared of being left behind. Why not just talk to humans?

If adoption is one problem, trust is another.

Manoela recounts her irritation with a fitness watch that presumed to judge her running. “Every time I ran, it told me I wasn’t doing well enough, that I’d be beaten by someone else. But what does a machine know about my effort? It doesn’t sweat. It doesn’t struggle.” She disabled the function entirely.

That personal frustration opens up a bigger question: how do we build trust in technologies that don’t share our experience of the world? The danger is not spectacular failure, but quiet, creeping dependence on systems we don’t understand. “Most people using ChatGPT have no idea how it works,” she observes, “but they trust it completely.”

Her scepticism isn’t Luddism. She insists she is pro-technology - but pro-human judgement first.

Her concern is sharpest when she talks about education. She describes scenes from Recife that made national news: entire classrooms of students suffering collective panic attacks during exams. Ambulances were called. “They had been conditioned by technology - constant noise, constant distraction - and suddenly silence, focus, pressure… and they collapsed.”

The image is shocking, and she recounts it with genuine compassion. But it crystallises her deepest fear: that by outsourcing too much to machines, students are losing the resilience and skills education is meant to build. “If you always let AI write for you,” she asks, “do you still know how to write?”

The danger is not the dramatic collapse but the quiet erosion of skills over time, so gradual you don’t even notice what’s missing until it’s gone.

As our plates are cleared, the conversation turns to a story that encapsulates everything we’ve been discussing. I mention reading that dating apps are being declared “a thing of the past” as Gen Z reports widespread burnout. It connects to research I’d conducted for Hinge, following twelve people’s dating lives over six months. The most striking finding? Not one mentioned love.

Manoela nods knowingly and matches it with her own discovery: a historian she follows who analyses Victorian personal ads. The symmetry is absurd and revealing. Whether 1850 or 2024, people reduce themselves to specifications - height, wealth, status - while avoiding the messier business of actual connection. “You’re trying to mechanise intimacy,” she observes. “But intimacy isn’t efficient.”

The pattern is familiar: technology promises to solve human problems by eliminating the human elements that make them complex. Dating apps reduce people to swipeable profiles. AI reduces thinking to prompt engineering. In both cases, we lose precisely what makes the activity worthwhile.

This brings us to Manoela’s broader concern about digital isolation. “People feel connected on a screen, but they aren’t really connected,” she says. The paradox has only deepened since Covid: more tools for communication than ever, yet higher rates of loneliness.

She contrasts it with life in Bath, which she’ll soon leave to return to Brazil. “Here, people have time to talk. You stop on the street, chat, share a drink. That rhythm - I’ll miss it.” In bigger cities, she notes - London, Recife - it’s “rush, rush, rush.”

Her critique lands because it comes from someone naturally sociable. Manoela radiates warmth; she’s the kind of person for whom connection comes effortlessly. When she worries about digital isolation, you take notice.

The irony isn’t lost on either of us: Silicon Valley’s latest promise is AI companions for lonely people. “That’s just going to make it worse,” she says, shaking her head. “You’re trying to solve a human problem with the thing that created it.”

For Manoela, legitimacy isn’t just a card game or an academic concept. It’s a way of stopping ourselves being swept away by hype. “Organisations don’t adopt technology in a vacuum. These are social choices. Cultural choices. Human choices. If you don’t ask: is this accepted? Does this fit? … then you risk losing sight of why you’re doing it at all.”

She’s seen it repeatedly: companies dazzled by buzzwords, schools pressured into tools they can’t integrate, students crushed by expectations they don’t understand. The legitimacy lens forces a pause. It reframes adoption not as a race, but as a negotiation between efficiency and appropriateness.

There’s cultural wisdom here too. She marvels that British universities allow beer at events - “In Brazil, that’s unthinkable by law.” The contrast illustrates her point: legitimacy isn’t absolute. What feels normal in one context can be prohibited in another. Smart adoption requires understanding not just what technology can do, but what society will accept.

If all this sounds heavy, it is lightened by Manoela herself. She is vibrant, animated, smiling throughout. Optimistic without being naïve.

In a few days she will leave Bath for Recife, to begin her new post at Universidade de Pernambuco. She talks about it with excitement: teaching again, continuing her research into immersive education and multimodal analytics, working with Brazilian companies, building projects that bring technology and society into better alignment.

It is this combination - rigorous thinking and genuine warmth - that makes her perspective linger. She critiques, yes, but she also shows a way forward.

The bill arrives, and Manoela reaches into her bag to slide a small box across the table: gluten- and dairy-free chocolates for my wife - thoughtful to the end.

As we stand, Gerald circles above Bosco’s courtyard. We both glance up, sharing the same wary look the staff must give him a dozen times a day.

Walking back through Bath’s summer streets, I think of her warnings: the classrooms in Recife, the watch that presumed to judge her, the students quietly losing their skills, the organisations painting camels to look like horses. All of them reminders that adopting technology isn’t just about efficiency - it’s about legitimacy, trust and culture.

And then her parting thought: that real innovation never comes from tools alone. It comes from conversations.