A Jury Found Social Media Guilty of Addiction


On Wednesday morning, a Los Angeles jury ended nine days of deliberations and found Meta and YouTube negligent in the design of their platforms, holding them liable for harm caused to a young woman identified in court as KGM, now 20. She said she began using YouTube at age six and Instagram at age nine, and during her adolescence spent up to sixteen hours a day on the latter platform.

The verdict produced $6 million in total damages: $3 million compensatory, split 70-30 between Meta and YouTube reflecting the jury’s apportionment of responsibility, and a further $3 million in punitive damages, $2.1 million from Meta and $900,000 from YouTube, on grounds that both companies acted with malice, oppression, or fraud. The same evening, a separate jury in New Mexico ordered Meta to pay $375 million for violating state consumer protection laws by failing to protect children from predators on Instagram and Facebook.

Two juries in two states delivered the same message in two days.

What the Los Angeles Verdict Actually Established

The jury found that Meta and YouTube were negligent in the design of their platforms, knew that design was dangerous, failed to warn of the risks, and caused substantial harm to the plaintiff. The central legal strategy, pursued by KGM's lawyers across seven weeks of testimony that included appearances by Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri, was to keep the case away from content and focus entirely on design.

Social media companies have historically sheltered behind Section 230 of the Communications Decency Act, which shields them from liability for third-party content. By targeting the architecture rather than the posts, the plaintiff’s team found a route around that protection.

Internal Meta documents shown to the jury included an executive memo describing the company’s efforts to attract users before their teens, and a separate document noting that eleven-year-olds were four times as likely as users of competing apps to return to Instagram, despite the platform requiring users to be at least thirteen. TikTok and Snap, originally named as defendants, settled before the trial began. More than 2,400 related cases remain consolidated in US federal court.

The damages figure is small relative to Meta’s revenues and will almost certainly be reduced or overturned on appeal. That is not the point. The verdict is the first of more than twenty bellwether trials due to go to court this year, test cases designed to gauge jury reactions and set legal precedent across the broader consolidated litigation.

What it establishes is that a jury of ordinary Americans, after hearing the evidence, concluded that the companies knew what they were building, knew it was causing harm to children, and built it anyway. As the plaintiff’s lead attorney Mark Lanier put it outside the court: “Today’s verdict is a referendum, from a jury to an entire industry, that accountability has arrived.”

Apple’s Voluntary Move

While the Los Angeles jury was deliberating, Apple was rolling out iOS 26.4 to UK iPhones, an update that includes something no previous operating system has contained: a device-level prompt asking users to confirm they are at least eighteen years old.

After installing iOS 26.4, UK users see a prompt to confirm they are an adult. Apple gives several ways to do it. If the user has had an Apple account for a long time or already has a credit card on file, verification can happen automatically in a couple of seconds. If not, the user must either link a credit card or scan a government-issued ID. Users who skip the prompt or cannot verify face automatic activation of Apple’s Web Content Filter across Safari and all third-party browsers, and Communication Safety features that blur nudity in Messages and FaceTime.

Children under thirteen cannot create an account without a guardian. Ofcom, the UK communications regulator, praised the move, calling it “a real win for children and families” and noting that Apple’s decision positions the UK as one of the first countries in the world to receive hardware-level child safety protections.

The detail that changes the political weight of the announcement is this: Apple was not actually required to do any of this at the iOS or App Store level. The UK’s Online Safety Act primarily targets platforms hosting adult content, and app stores are not covered by its provisions. Apple implemented the checks anyway. That voluntary decision is the more significant development in the longer run.

A company acting ahead of legal compulsion establishes a precedent that regulators will subsequently treat as a baseline. It also builds the infrastructure. Once age verification sits at operating system level, it stops looking exceptional and starts looking like standard digital furniture, the same way cookies and app permissions did before it.

The Checkpoint Expands

The two stories belong in the same article because they describe different points on the same trajectory.

The Los Angeles verdict targets the mechanics of addiction: the infinite scroll, the autoplay, the algorithmically tuned notification that arrives precisely when a teenager’s resistance is lowest. Apple’s age check targets entry: the moment a person first presents themselves to a digital service. One addresses what the machine does to you once you are inside. The other addresses who is permitted to enter. Both are responses to genuine and documented harms.

Both carry consequences that extend beyond their stated purpose. A design liability verdict changes how platforms are built. A device-level age check, once normalised, becomes a tool that could eventually determine access to ordinary browsing, reading, and communication for adults who decline or cannot easily provide documentation.

Ofcom praised Apple’s move as protection for children. Digital rights campaigners noted that adults who lack a qualifying credit card or ID are currently being routed into a filtered internet without appeal. Those two observations are both accurate. The difficulty is that the infrastructure does not distinguish between them.
