The federal government has levied a record-setting fine against a giant internet platform for abusing its users’ privacy. Critics say the government hasn’t done nearly enough.
Yes, that was also the takeaway in July, when the US laid out a $5 billion penalty against Facebook. It’s also today’s news: Google’s YouTube has agreed to pay $170 million in fines to settle Federal Trade Commission charges that it illegally harvested children’s personal data, which it used to serve them personalized ads.
First things first: $170 million ($136 million will be paid to the FTC and another $34 million to New York state, which joined the Feds’ case) is a record for companies accused of violating the 1998 Children’s Online Privacy Protection Act. It’s also basically a rounding error in terms of profits for Google and YouTube.
Google’s parent company Alphabet may generate $161 billion in revenue this year; RBC analyst Mark Mahaney thinks YouTube will generate $20 billion of that.
That alone is enough to make the settlement unsatisfactory to the FTC’s Rohit Chopra, who voted against the Facebook deal and also dissents today. “Financial penalties need to be meaningful or they will not deter misconduct,” Chopra writes in a statement, which is partially redacted but indicates that he wanted Google to pay something in the billions for its sins.
The bigger issue is whether YouTube is fundamentally going to change the way it does business. This will also sound familiar from this summer: YouTube says it is going to overhaul the way it interacts with kids who watch videos on its massive platform, but critics doubt the platform’s commitment to that pledge.
Today’s settlement requires YouTube to ask people who upload videos to the service to indicate whether those videos are aimed at kids. If the video uploaders say their videos are for kids, then YouTube is supposed to make sure it doesn’t collect data about kids who watch the videos (without getting an okay from their parents); it also promises not to show children ads that use “behavioral” targeting, which requires all kinds of internet surveillance.
YouTube notes that it will do more than the terms of the settlement require; it’s also going to use software to backstop the self-reporting, using “machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.”
But that doesn’t satisfy FTC commissioner Rebecca Kelly Slaughter, who also voted against the FTC’s deal with Facebook this summer. She wanted today’s deal to require YouTube to scour its platform for kids content — instead of simply making it ask video makers to comply with its rules.
Slaughter argues that making YouTube’s policing effort voluntary, with no penalty if it falls short, lets the company off the hook, or even worse. “A cynical observer might wonder whether in the wake of this order YouTube will be even more inclined to turn a blind eye to inaccurate designations of child-directed content in order to maximize its profit,” Slaughter writes in her dissent.
None of the back and forth, by the way, has anything to do with other kid-related complaints that have surfaced about YouTube over the past few years, like showing kids wildly inappropriate videos. YouTube does promise to make other changes in the way it works with kids, including a pledge of $100 million to be spent over three years to help video makers create appropriate stuff for kids.
But just like the Facebook settlement from earlier this summer, the gap between the YouTube settlement’s terms and the fundamental restructuring that YouTube’s critics want raises an even bigger question: Is the US government (or any government, really) up to the task of regulating giant internet platforms?
In both cases, the government is relying on the five-person FTC to rein in the most powerful forces on the internet by asking it to interpret and enforce laws that are, in internet terms, prehistoric.
The Facebook settlement hinged on a 1996 telecom law; today’s deal is focused on alleged violations of a 1998 privacy law. Think about the tech you used in 1996 or 1998, which certainly didn’t involve a smartphone or Google or Facebook or instant global connectivity, and think about what’s going to be required for a thoughtful, useful set of rules that lets us deal with tech today and in the future. Then think about whether today’s political system is remotely ready to grapple with that challenge.