As Congress continued deliberating a proposed law that would protect young internet users from potential harm, Meta, the parent company of Facebook and Instagram, announced that it is taking steps to provide such protection on its own.
The company on September 17 introduced Instagram Teen Accounts, which will limit who can contact users under 16 and what content they can see. Teens under 16 will need a parent’s permission to change any of the default settings to be less strict, Meta said.
But some child-welfare advocates and “anti-big tech” voices found the timing of the announcement — during a week in which the US House of Representatives began consideration of the Kids Online Safety and Privacy Act — to be suspicious.
“Big tech has a long history of announcing on the eve of a hearing or bill passage that they are finally going to do something about the myriad problems they have caused and been unresponsive to for years,” said Mary Graw Leary, professor of law at the Columbus School of Law of The Catholic University of America in Washington.
In July, the Senate overwhelmingly approved the Kids Online Safety and Privacy Act (KOSPA), which aims to protect adolescents from the harms of social media, gaming sites, and other platforms. On September 18, the bill was debated in the House Energy and Commerce Committee. Advocates are hoping it will now advance to the floor of the House for a full vote.
President Biden has expressed his support for the legislation. It would be the first major regulation intended to protect children on the internet in more than a quarter century, The Associated Press pointed out.
“They’re aware of the harm”
The bill is “a response to numerous congressional hearings, whistleblowers, testimony, [the Wall Street Journal investigation] ‘The Facebook Files,’ and other leaks, which have demonstrated that these platforms have designed these products to knowingly exploit the not-yet-fully-formed child’s mind,” Graw Leary said.
“And by that I mean some design components, like late night notifications, prizes for the longer you are online — things like that, which they design and which we now know to engage people longer. And we also know they’re aware of the harm that this causes children, but they don’t do anything about it. And so I look at [KOSPA] as very much like a child product protection bill.”
She said that for years, people in the child healthcare profession and parent groups have been asking for “simple things, like if you’re under a certain age, having the product come to you with the default settings as the most protective, and then families can opt out of them if they want.”
That sounds much like what Instagram Teen Accounts will do, but for advocates like Graw Leary, it’s too little, too late.
In recent years, there has been increasing scrutiny of how websites, social media, and apps are designed, in light of the negative effects digital platforms have on users, particularly young people.
The 2021 Wall Street Journal investigative series “The Facebook Files” showed that Facebook’s platforms are “riddled with flaws that cause harm, often in ways only the company fully understands.”
Instagram’s own internal research found that teens blame the platform for rising rates of anxiety and depression, the series revealed.
Some states also have passed their own social media restrictions.
In June, Dr. Vivek Murthy, the U.S. Surgeon General, called for cigarette-style warning labels on social media to alert users to potential mental health risks.
Graw Leary charged that Meta’s announcement last week of Instagram Teen Accounts falls into a pattern of announcing a change to “avoid accountability,” a move that is “usually followed by a discovery that they are not following through.”
“They have taken the least impactful — but still important — burden on them in KOSPA and said they will do that: set privacy by default,” she told Aleteia. “Will they do any of the other important components of the bill? It seems not. That is another part of their strategy – agree to voluntarily do the least in order to avoid the main regulations.”
“It can be changed tomorrow”
In addition, social media sites may offer parental controls, but a frequent complaint is that they are hard to find and use, she said.
Graw Leary added that through Congressional hearings, leaked internal documents, company whistleblowers, and investigative reporting, “we have learned frankly that these companies are not truthful, and such representations cannot be relied upon.”
The law professor said the fact that Instagram is making the changes voluntarily does not inspire confidence.
“When something is voluntary, it can be changed tomorrow,” she said. “What Big Tech has shown us in the 30 years of their development: They cannot be trusted. That is why over 30 states’ attorneys general from both parties are engaged in a class action lawsuit against them in California for their knowing facilitation of harm to children. In the words of one congressional witness, ‘The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they put their astronomical profits before people.’”
As of this article’s publication, Meta had not responded to a request for comment.