In the summer of 2017, three Wisconsin teenagers were killed in a high-speed car crash. At the time of the collision, the boys were recording their speed, 200 km/h (about 124 mph), using Snapchat’s speed filter. It was not the first incident of its kind: the same filter had been linked to several other crashes between 2015 and 2017.
The Wisconsin teens’ parents sued Snapchat, arguing that its product, which awarded “trophies, streaks, and social recognition” to users who topped 100 miles per hour, was negligently designed to encourage dangerous high-speed driving. A lower court initially found that Section 230 of the Communications Decency Act immunized Snapchat from liability, reasoning that the app was not responsible for third-party content created by people using its speed filter. But in 2021, the Ninth Circuit overturned the lower court’s decision.
Platforms are largely shielded from liability for this type of content because of Section 230. But in this important case, Lemmon v. Snap, the Ninth Circuit drew a critical distinction between a platform’s own harmful product design and its hosting of harmful third-party content. The plaintiffs’ argument was not that Snapchat had created or hosted harmful content, but that it had negligently engineered a feature, the speed filter, that incentivized dangerous behavior. The Ninth Circuit concluded that the lower court had erred in accepting Section 230 as a defense; it was the wrong legal instrument. Instead, the court focused on Snapchat’s negligent design of the speed filter, a classic product liability tort.
Frustratingly, in the years since, and most recently during oral arguments before the United States Supreme Court last month in Gonzalez v. Google, courts have failed to understand or distinguish between harmful content and harmful design choices. Judges hearing these cases, and lawmakers working to curb online abuse and harmful activity, must keep this distinction in mind and focus on platforms’ negligent product design rather than being distracted by sweeping Section 230 immunity claims over harmful content.
At the heart of Gonzalez is whether Section 230 protects YouTube not only when it hosts third-party content but also when it makes targeted recommendations about what users should watch. Gonzalez’s lawyer argued that YouTube should not have Section 230 immunity for recommending videos, contending that the act of curating and recommending the third-party material it displays is content creation in its own right. Google’s lawyer countered that its recommendation algorithm is neutral, treating all the content it recommends to users the same. But both arguments miss the mark. There is no need to invoke Section 230 at all to address the harms at issue in this case. The problem is not that YouTube’s recommendation feature creates new content, but that its “neutral” recommendation algorithms are negligently designed: by failing to differentiate between, say, ISIS videos and cat videos, the recommendations actively promote harmful and dangerous content.
Recommendation features such as YouTube’s Watch Next and Recommended for You, which are at the heart of Gonzalez, contribute materially to harm by prioritizing outrageous and sensational content and by encouraging and financially rewarding users who create such content. YouTube designed its recommendation features to increase user engagement and ad revenue. The creators of this system should have known that it would encourage and promote harmful behavior.
Although most courts have accepted a sweeping interpretation of Section 230 that goes beyond simply immunizing platforms from liability for dangerous third-party content, some judges have gone further and begun to subject platforms to stricter scrutiny through negligent-design product liability claims. In 2014, for example, Omegle, a video chat service that pairs random users, matched an 11-year-old girl with a 30-year-old man who went on to groom and sexually abuse her for years. In 2022, the judge hearing that case, A.M. v. Omegle, found that Section 230 largely protected the platform from liability for the actual material sent by both parties. But the platform could still be held liable for its negligent design choice of connecting sexual predators with underage victims. Last week, a similar case was filed against Grindr. A 19-year-old Canadian is suing the app because it connected him with adult men who raped him over the course of four days when he was underage. Again, the lawsuit claims that Grindr was negligent in its age verification process and actively sought to bring underage users onto the app by targeting its TikTok advertising at minors. These cases, like Lemmon v. Snap, affirm the importance of focusing on harmful product design features rather than harmful content.
These cases have set a promising precedent on how to make platforms safer. When attempts to curb online abuse focus on third-party content and Section 230, they get bogged down in thorny free speech issues that make it difficult to implement meaningful change. But if litigants, judges, and regulators avoid these content issues and instead focus on product liability, they’ll get to the root of the problem. Holding platforms accountable for negligent design choices that encourage and monetize the creation and proliferation of harmful content is key to addressing many of the dangers that persist online.
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here, and see our submission guidelines here. Submit an op-ed at opinion@wired.com.