First-in-the-nation law requires tech companies to take steps to improve kids’ well-being

Research shows that more young Americans are facing mental health struggles, and technology is partly to blame. A new California law requires tech companies to do more to protect the privacy and data of children online. The measure could pave the way for similar laws elsewhere. (Photo illustration by Alexia Faith/Cronkite News)

PHOENIX – The word “crisis” dominates the headlines about the mental health of children these days, with experts and advocates pointing a finger at one factor in particular: social media.

Following recent reports about the impact of platforms like Instagram on teen well-being, several groups have sued tech companies, and in September, California enacted a first-in-the-nation law requiring firms to do more to protect the privacy and data of children online.

Dr. Jenny Radesky, a developmental behavioral pediatrician who studies the intersection between technology and child development, has seen firsthand what this youth crisis looks like.

“Media now is so good at interacting with our psychology that you can imagine that sometimes it’s going to play upon our strengths, but other times it’s going to play upon our weaknesses,” said Radesky, an assistant professor at the University of Michigan Medical School.

“We’re seeing it with tons of referrals to the medical center for everything from eating disorders to suicide attempts. There’s no doubt that there’s a mental health crisis that existed before the pandemic – and is now just worsened.”

A U.S. surgeon general’s advisory, issued in December, warns that more young Americans are facing mental health struggles. The COVID-19 pandemic is to blame, it said, but so, too, is technology as youth get “bombarded with messages through the media and popular culture that erode their sense of self-worth – telling them they are not good looking enough, popular enough, smart enough, or rich enough.”

Tech companies tend to prioritize engagement and profit over safeguarding users’ health, the report found, using methods that may increase the time kids spend online and, in turn, contribute to anxiety, depression, eating disorders and other problems.

During the pandemic, the time youngsters spent in front of screens for activities not related to schoolwork rose from an average of 3.8 hours a day to 7.7 hours a day.

Radesky specifically worries about social media apps that use algorithms to constantly feed content to the user. Her concern is that a child’s viewing habits or online behaviors can reveal something about them to the platform. A user who constantly engages with violent videos, for example, might signal to the app that they are a little impulsive.

She noted that TikTok and Instagram use such algorithms, while on the platforms Twitch and Discord, users have to seek out content.

“Automated systems aren’t always picking up when they’re serving something that could potentially go – for a child or a teen or even an adult – into territory that’s not in their best interest,” Radesky said.

“We need the digital ecosystem to respect the fact that kids need space and time away from tech and they need to engage with content that’s positive and hopeful.”

As the nation looks for remedies, California has passed a law that could serve as a model for other states.

The bipartisan legislation was sponsored by Assemblymembers Buffy Wicks, D-Oakland, and Jordan Cunningham, R-San Luis Obispo. It prohibits companies with online services from accessing children’s personal information, collecting or storing location data from younger users, profiling a child, or encouraging children to provide personal information.

A working group will be required to determine how best to implement the policies by January 2024.

The measure, heralded as a first in the U.S., was modeled after a similar law passed last year in the United Kingdom, where the government mandated 15 standards that tech companies, specifically those that collect data from children, must follow.

Common Sense Media, a San Francisco nonprofit that advocates for safe and responsible use of children’s media, backed the California measure. Irene Ly, policy counsel for the organization, called it a first step toward forcing tech companies to enact changes to make the internet safer for kids.

Ly said companies have made “intentional design choices” to drive up engagement, such as automatically playing videos as users scroll and using algorithms to feed them targeted content, and argued that companies are more than capable of making changes that protect young users.

“It’s overdue that businesses make some of these easy and necessary changes, like offering young users the option to have the most privacy-protective settings by default and not tracking their precise location automatically,” Ly said.

Ly said privacy protection goes hand-in-hand with protecting mental health, given that adolescents are uniquely vulnerable to the influences of online content.

“They’re not going to develop the critical thinking skills or the ability to distinguish between what is an ad and what is content until they’re older. This makes them really ill-equipped to assess what they’re seeing and what impact that can have on them.”

Ly cited an April report from advertising watchdog group Fairplay for Kids that found Instagram’s algorithm was promoting eating disorder accounts that had garnered 1.6 million unique followers.

“Algorithms are profiling children and teens to serve them images, memes and videos encouraging restrictive diets and extreme weight loss,” the report stated. “And in turn, Instagram is promoting and recommending children and teens’ eating disorder content to half a million people globally.”

The report drew scrutiny from members of Congress, who demanded answers from Meta, Instagram’s parent company, and its CEO, Mark Zuckerberg.

The Social Media Victims Law Center subsequently filed a lawsuit against Meta on behalf of Alexis Spence, a California teen who developed an eating disorder, along with anxiety and depression, when she was just 11.

The lawsuit alleges Alexis was directed to Instagram pages promoting anorexia, negative body image and self-harm, and contends Instagram’s algorithm is designed to be addictive and targets preteens specifically.

It’s one of several similar lawsuits against tech companies filed after Frances Haugen, a former product manager at Meta, leaked internal documents in 2021 that suggested the company knew about the harmful content its algorithms were pushing.

In a September 2021 statement, Meta said it had taken steps to reduce harm to youth, including introducing new resources for those struggling with body image issues; updating policies to remove graphic content related to suicide; and launching an Instagram feature that allows users to protect themselves from unwanted interactions to reduce bullying.

“We have a long track record of using our research … to inform changes to our apps and provide resources for the people who use them,” the company said.

And in a Facebook post last year, Zuckerberg said, “The reality is that young people use technology. … Technology companies should build experiences that meet their needs while also keeping them safe. We’re deeply committed to doing industry-leading work in this area.”

Dylan Hoffman is an executive director at TechNet, a network of tech executives representing about 100 companies. Although the organization supports protections for children online, it did have some concerns about the new California measure, he said.

One provision requires companies to estimate the age of child users “with a reasonable level of certainty,” and Hoffman worries those verification steps could affect adults seeking lawful content.

“The bill defines kids to mean anyone under the age of 18, which could create some issues,” he said, noting that TechNet tried to push for changing the definition of “children” in the bill to users younger than 16. “What does that mean for companies to identify the age of their users? Are they required to more strictly and more stringently verify the age and identity of their users?”

That, he said, “could have a number of consequences” – not only around access for kids but access for adults, as well.

Radesky hopes that as conversations continue about the pros and cons of social media use, media outlets will frame kids’ mental health as an issue everyone should address, not just parents.

“I hope in the future as the press continues to cover this … they’ll really start shifting most of the focus on the change in the tech environment and what tech companies can do better,” she said.

A federal measure calling on tech companies to implement new safeguards for children was introduced in Congress earlier this year. But with the Kids Online Safety Act still pending, Radesky noted the California measure will serve as a test case for companies and for youth.

“You’re going to have this group of California kids see: How well is tech doing this? It all depends on enforcement and the tech companies really listening to their child design teams,” she said.

In the end, Radesky added, companies must also begin to view such laws not as regulation but “more like cleaning up this area of the neighborhood that’s filled with junk.”

Rithwik Kalale (he/him)
News Reporter, Phoenix

Rithwik Kalale expects to graduate in August 2023 with a master’s degree in mass communication. Kalale has reported in audio and digital formats for Real Vision, KJZZ, TARO Magazine and Arizona PBS.

Alexia Faith (she/her/hers)
News Visual Journalist, Phoenix

Alexia Faith expects to graduate in May 2023 with a bachelor’s degree in journalism and mass communications. Faith, who has interned as a videographer with Arizona Highways Magazine, has her own videography and photography company.