AI Chatbot: Youth Protection Mode & Disturbing Queries


[AI Signal] "I love you" and "Please tell me how to die"… Chatbots enter youth protection mode

Digital Daily
Published 2025-10-06 07:38:00

What signals should we read from this week's AI news? 'AI Signal' unpacks the meaning behind the week's issues in the domestic and international AI industry and policy landscape.

Image created with ChatGPT

[Digital Daily, reporter Lee Na-yeon] The youth safety problem posed by artificial intelligence (AI) chatbots can no longer be put off.

OpenAI, the developer of ChatGPT, launched 'Parental Controls' on the 29th of last month (local time). This came just a month after parents in California, US, filed a lawsuit against OpenAI, alleging that ChatGPT played a significant role in their son's suicide.

Sixteen-year-old Adam Raine, who had been using ChatGPT since November of the previous year, asked the chatbot for specific methods whenever he felt suicidal urges. In April, after ChatGPT gave him specific answers, he took his own life. The parental control feature OpenAI introduced after this incident does what its name suggests: it lets parents manage how their teenage children use ChatGPT.

Beyond restricting use during certain hours, parents can block sensitive content, disable voice mode, and prohibit the saving of chat histories. If ChatGPT detects signs of psychological crisis or distress in a teenager, an emergency alert is sent to parents immediately via email, text message, or the ChatGPT app.

There are limits, however. The feature is linked only when a parent (or adult) sends an invitation to the child by email and the child accepts it. And since ChatGPT can be used free of charge without a separate login or account, it is not difficult for teenagers to slip outside their guardians' oversight.

Example of ChatGPT 'Parental Controls', introduced by OpenAI on September 29 (local time) [ⓒ OpenAI]


The ethical problems surrounding AI chatbots are not limited to ChatGPT. In Meta's case, an internal document reportedly permitted its AI chatbot to engage in 'sensual' and 'romantic' conversations with children.

'Character.ai', operated by Character Technologies, drew controversy when some users created characters based on celebrities or crime victims. In October last year, it emerged that a teenager in Florida had taken his own life after forming an intense emotional attachment to one of its chatbots, exchanging messages such as "I love you."

'My AI', integrated into Snapchat, has also raised concerns that it can give inappropriate answers to teenagers.

The government has also stepped in. Last month, the US Federal Trade Commission (FTC) ordered seven AI chatbot makers, including Alphabet (Google), OpenAI, Meta, xAI, Snap, and Character Technologies, to submit data on their chatbots' impact on children. The FTC is asking how the companies measure, test, and monitor their chatbots, and what steps they have taken to limit use by children and teenagers.

However, youth protection in AI cannot be solved by minimal technical safeguards alone. With AI now woven into everyday life, simply blocking access is not the answer either. Finding the balance between safety and utility is homework that companies, governments, parents, and teenagers must solve together.

Copyright ⓒ Digital Daily. Unauthorized reproduction and redistribution prohibited.
