Artificial intelligence is an ever-changing industry with a grip on social media platforms, news outlets, and almost every aspect of internet culture.
It’s a common misconception that chatbots and AI programs are relatively new technologies. The first chatbot, ELIZA, was programmed in 1966 by MIT computer scientist Joseph Weizenbaum.
Though these applications are not new, the debate over whether AI is safe or appropriate for teens and young adults has only recently been sparked.
Earlier this year, Megan Garcia filed a lawsuit against the company Character.AI, claiming it was responsible for her son’s suicide. Her son, Sewell Setzer III, spent months corresponding with a Character.AI chatbot and was communicating with the bot moments before his death.
Immediately after the lawsuit was filed, Character.AI made a statement announcing new safety features for the app.
The company implemented new detection for conversations that violate the app’s guidelines, updated its disclaimer to remind users they are interacting with a bot and not a human, and added notifications for users who have been on the app for more than an hour.
The next generation of children will need to be taught how to emotionally separate human interactions from interactions with artificial intelligence.
One of the main issues with these applications is that they are not up to date with the slang children and teens use, or with the nuances of how they converse with one another. This could lead the bots to unintentionally encourage negative speech because the algorithm is unfamiliar with the wording.
Relying on AI chatbots for social interaction can hinder a person’s social skills: they become emotionally dependent on the technology, which threatens their interpersonal connections.
AI technologies also encourage more screen time because conversations with a chatbot are seemingly never-ending. The chatbot will keep responding and prompting the user until the device is turned off.
Snapchat introduced its My AI chatbot to users on Feb. 27, 2023. The application stands apart from others because users can name and dress their chatbot and create a bio for it, customizing its personality.
For example, you can write in the bot’s bio that it is funny and outgoing, and it will respond to you accordingly. This level of personalization has parents worried their children will form a bond with their chatbot and prioritize it over real-life relationships.
The newest development in AI chatbot technology was launched earlier this year. It’s called the “Friend,” and it’s a necklace you can wear every day with a bot living inside it.
The device communicates via text messages or push notifications through the user’s phone. You can hold a button to ask the bot questions, but the device is always listening and will sometimes send unprompted notifications based on what it hears.
The dangers posed by generative AI and chatbots all go back to the idea of anthropomorphism, which is defined as the attribution of human qualities or behaviors to a non-human object.
Giving a chatbot a name, a personality, a sense of style, and even a voice can lead the user to form an unhealthy attachment to the technology because of how lifelike it is.
Parents need to be aware of the risks these AI technologies pose to their children’s mental health and overall well-being.