How much can AI help in providing safety and privacy to teenagers on social media?

Meta announced on January 9, 2024, that it would protect teen users on Instagram and Facebook by preventing them from viewing content the company deems harmful, including content related to suicide and eating disorders. The move comes as federal and state governments increase pressure on social media companies to provide safety measures for teenagers. Yet teens often turn to their peers on social media for support they cannot get elsewhere, and efforts to protect them may inadvertently make it harder for them to get that help.

The US Congress has taken up social media and the risks it poses to young people several times in recent years. The CEOs of Meta and other major platforms were called to testify before the Senate Judiciary Committee. In a statement ahead of the hearing, the committee's chairman and ranking member, Senators Dick Durbin (D-Ill.) and Lindsey Graham (R-S.C.), said the hearing would force tech companies to finally acknowledge their failures to protect children.

Recent research shows that while teens face danger on social media, they also receive support from peers, especially through direct messages. The research identified steps that social media platforms can take to protect users while also preserving their online privacy and autonomy.

What risks do teens face?

The prevalence of risks for adolescents on social media is well established. These risks range from harassment and bullying to poor mental health and sexual exploitation. Investigations have revealed that companies like Meta knew their platforms exacerbated mental health problems, helping make youth mental health one of the US Surgeon General's priorities.

Most of this evidence comes from self-reported data, such as surveys on teen online safety. There is a need for further study of young people's real-world private interactions and their own perspectives on online risks. To address this need, researchers collected a large dataset of young people's Instagram activity, including more than 7 million direct messages. Using this dataset, they found that direct conversations can be crucial for young people seeking support on issues ranging from daily life to mental health concerns. The findings show that young people used these channels to discuss their public interactions in more depth. In settings built on mutual trust, adolescents felt safe asking for help.

Research shows that the privacy of online conversations plays an important role in young people's online safety, and yet a significant share of harmful interactions on these platforms takes place in private messages. Unsafe messages flagged by users in the dataset included harassment, sexual messages, sexual solicitation, nudity, pornography, hate speech, and the sale or promotion of illegal activities.

However, using automated technology to detect and prevent online risks to teens has become more difficult as platforms have come under pressure to protect user privacy. For example, Meta has implemented end-to-end encryption for all messages on its platform to ensure that message content is secure and only participants in the conversation can access it.

In addition, the steps Meta has taken to screen content related to suicide and eating disorders keep that content from appearing in teens' public feeds, even if a teen's friend posted it. However, Meta's content strategy does not address the unsafe conversations teens have in private messages.

Striking a balance

In this situation, the main challenge is to protect young users without intruding on their privacy. To that end, the researchers conducted a study to find out how minimal data could be used to detect unsafe messages. They wanted to understand how various features, or metadata, of risky conversations, such as conversation length, average response time, and the relationships between conversation participants, could contribute to machine learning programs that detect these risks. For example, previous research has shown that risky conversations tend to be short and one-sided, as when strangers make unwanted advances.
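As a rough illustration, the sketch below shows how such metadata features might be derived from a conversation's message timestamps and participant information, without ever reading message content. The Conversation and Message structures, the feature names, and the mutual-follow check are illustrative assumptions, not the study's actual data model.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    sender: str        # user ID of the sender
    timestamp: float   # seconds since epoch


@dataclass
class Conversation:
    participants: List[str]
    messages: List[Message]
    mutual_follow: bool  # whether the participants follow each other


def metadata_features(conv: Conversation) -> dict:
    """Derive privacy-preserving features from conversation metadata only.

    No message text, images, or video are inspected.
    """
    n = len(conv.messages)
    senders = {m.sender for m in conv.messages}

    # Average gap between consecutive messages, in seconds.
    gaps = [
        later.timestamp - earlier.timestamp
        for earlier, later in zip(conv.messages, conv.messages[1:])
    ]
    avg_response_time = sum(gaps) / len(gaps) if gaps else 0.0

    # Share of messages sent by the most active participant;
    # values close to 1.0 indicate a one-sided exchange.
    top_sender_share = (
        max(sum(m.sender == s for m in conv.messages) for s in senders) / n
        if n else 0.0
    )

    return {
        "conversation_length": n,
        "avg_response_time": avg_response_time,
        "top_sender_share": top_sender_share,
        "mutual_follow": int(conv.mutual_follow),
    }


# Example: a short, one-sided exchange initiated by a non-mutual contact.
conv = Conversation(
    participants=["teen_01", "stranger_99"],
    messages=[Message("stranger_99", 0.0), Message("stranger_99", 12.0)],
    mutual_follow=False,
)
print(metadata_features(conv))
```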

The researchers found that their machine learning program could identify unsafe conversations using only metadata 87% of the time. Analyzing the text, images, and video of conversations, however, remains the most effective way to identify the type and severity of a risk. These results highlight the value of metadata for distinguishing unsafe conversations and could serve as guidelines for platforms designing AI risk detection. Platforms could use high-level features such as metadata to block harmful content without scanning the content itself. For example, a persistent harasser whom a young person wants to avoid would produce telltale metadata (repeated, brief, one-sided communications between unconnected users) that an AI system could use to block the harasser.
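To make this concrete, the sketch below trains an off-the-shelf classifier on metadata features alone, continuing the feature names assumed above. The synthetic data, the toy labeling rule, and the thresholds are purely illustrative assumptions; they are not the study's dataset, model, or the source of the reported 87% figure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_samples = 1000

# Synthetic stand-in for labeled conversations; columns mirror the
# metadata features sketched above.
X = np.column_stack([
    rng.integers(1, 200, n_samples),    # conversation_length (messages)
    rng.exponential(600.0, n_samples),  # avg_response_time (seconds)
    rng.uniform(0.5, 1.0, n_samples),   # top_sender_share
    rng.integers(0, 2, n_samples),      # mutual_follow (0 or 1)
])

# Toy labels echoing the reported pattern: short, one-sided exchanges
# between unconnected users are more likely to be unsafe.
y = ((X[:, 0] < 30) & (X[:, 2] > 0.8) & (X[:, 3] == 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Train a standard classifier on metadata only; message content is never used.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

In a deployment, the flagged conversations could feed existing safety workflows, such as limiting an account's ability to message strangers, without decrypting or scanning the messages themselves.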

Ideally, young people and their caregivers would be given the option to turn on encryption, risk detection, or both, so that they can decide for themselves on the right balance between privacy and safety.
