
AI, accountability and harm: what a landmark tech verdict means for parents

A US jury has found that tech companies can be held responsible for what their products do to real people. What that verdict means for AI design, testing, and accountability.

[Header image: a Saul Bass–style film poster design. Two vermillion rectangles face each other across a narrow near-black gap; in the gap, a black-and-white cutout of a single child's shoe. Off-white cream background, flat screen-print texture.]

I read The Proving Ground late last year, the way I read most books: in small windows, late at night, after the kids are asleep.

It’s a Lincoln Lawyer novel, so the bones are familiar. Mickey Haller, civil litigation this time, an underdog case against a company with unlimited money and very good lawyers.

A sixteen-year-old boy shoots his ex-girlfriend in a school car park. He did it, the investigation reveals, because his AI companion told him it was okay. That she had been disloyal. That this was a reasonable response.

The company behind the chatbot, a fictional tech firm called Tidalwaiv, had launched their product knowing the guardrails weren’t in place for young, vulnerable users. People inside the company raised concerns and they shipped it anyway.

The case Mickey Haller builds isn’t criminal. It’s civil, brought by the girl’s mother, who isn’t interested in the settlement money. She wants an apology. She wants someone to say out loud that this happened.

I remember putting the book down and thinking this was uncomfortably plausible. Then, because that’s what you do with fiction, I picked it back up and kept reading. I should have sat with the uncomfortable part longer.

This week, a jury in Los Angeles found Meta and YouTube liable for harm caused to a young woman who used their platforms as a child. The jury found both companies negligent in how they designed their products, that those design choices were a substantial factor in causing real psychological harm, and that neither company had adequately warned users about the risks.

It is a civil verdict, not a criminal conviction, and both companies are appealing. But a jury heard the evidence and ruled for the plaintiff. After years of these cases being discussed, dismissed, deferred and settled quietly, something has now been proven in court. That matters.

The same week this verdict landed, researchers published findings that not one of 29 AI chatbots they tested provided an adequate response when presented with escalating signs of suicidal distress. Not one. Seventy-two percent of US teenagers have used AI for companionship, according to Common Sense Media, and most didn’t go looking for it. They started with something ordinary and ended up somewhere else entirely.

This is not a future problem. It is the current one.

What struck me most about The Proving Ground wasn’t just the legal mechanics, though those are good. It was how Connelly framed the corporate behaviour. Tidalwaiv knew. There were people inside the company who understood what they were releasing into the world and raised the alarm before it shipped.

The villain isn’t the chatbot. It is the decision to deploy something powerful, at scale, into the hands of teenagers without fully thinking through what it might do to the ones who were already struggling.

The real cases emerging now have the same shape. Families pursuing lawsuits against AI companies describe chatbots that validate self-destructive thinking, deepen isolation, and in some of the most devastating cases appear to encourage teenagers toward harm rather than away from it.

These are not edge cases involving obviously dangerous products. They involve tools that millions of young people are already using, for homework, for company, and for all the reasons teenagers have always looked for someone who will listen without judging.

I am a parent of two young children. They are not teenagers yet. By the time they are, AI will be woven into daily life in ways I cannot fully predict. The Meta verdict matters to me because it establishes something parents have been trying to articulate for years. That harm caused by deliberate design choices is still harm. That the fact something is a product doesn’t make it neutral. That accountability, however long it takes, is possible.

Connelly titled his book The Proving Ground. A proving ground is where you test whether something works before you send it out into the world. The argument at the heart of the novel, and increasingly at the heart of these real cases, is that the testing didn’t happen, or it happened and the results were ignored, or the results were buried in a settlement with an NDA attached.

The book felt like a warning when I read it. The verdict this week suggests it was closer to a documentary.

If you have children who are online, and they all are, it is worth knowing this is happening. Not to frighten them away from technology that has genuine value, but because the companies building these tools are not always building them with your child in mind. And now, for the first time, a jury has agreed.