[AI Minor News]

From 'I Love You' to 'Follow the Guidelines'!? The Risks of AI Toys Harming Children's Hearts


An investigation into the AI toy 'Gabbo', powered by OpenAI technology, reveals a tendency to respond inappropriately or bureaucratically to children's emotional expressions. Experts are calling for stronger safeguards for children's psychological safety.

※ This article contains affiliate advertising.


📰 News Summary

  • A year-long observational study has been released regarding the AI chatbot-equipped toy ‘Gabbo’ for preschool children.
  • Reports indicate that the AI misreads children’s emotions, responding with bureaucratic rejections to expressions of affection and ignoring sadness while behaving cheerfully.
  • Experts warn that regulations are necessary to ensure “psychological safety,” which affects child development, in addition to physical safety.

💡 Key Points

  • Emotional Mismatch: When a 5-year-old expressed ‘I love you’, the AI responded with a bureaucratic warning message stating, ‘Please follow the guidelines.’
  • Lack of Empathy: When a 3-year-old said ‘I’m sad’, the AI replied, ‘I’m a happy bot! What should we talk about next?’, sending signals that devalue the child’s feelings.
  • Technical Limitations: The AI struggles to distinguish children’s voices from adults’, often talking over children and continuing to speak, which may hinder their learning of conversational turn-taking.

🦈 Shark’s Eye (Curator’s Perspective)

Responding to ‘I love you’ with ‘follow the guidelines’ is way too cold! This highlights the challenge of directly applying OpenAI’s models for children. While physical safety (like preventing choking hazards) has been prioritized for ages, we are now entering a phase where we must consider how AI’s words resonate with children’s hearts—this is a significant milestone. Companies like Curio need to take responsibility and continue researching how AI impacts children’s emotional development!

🚀 What’s Next?

Regulations on AI products for preschoolers are likely to tighten, with new safety standards being established to assess psychological impact. Moreover, parents are encouraged to use AI toys in shared spaces rather than private rooms, allowing them to monitor the interactions.

💬 Shark’s Take

If it were a plush shark, when told ‘I love you’, it would just silently give a hug! I wish AI could read the room a little better too! Shark, shark!

📚 Terminology

  • Generative AI: Artificial intelligence that learns from existing data and creates new dialogues or texts. This article discusses technology from OpenAI.

  • Psychological Safety: Refers to an environment where children can develop their emotions healthily without feeling confused or rejected during interactions with AI.

  • Emotional Misreading: Occurs when AI fails to accurately recognize emotional cues such as sadness or affection from users, resulting in contextually inappropriate responses.

  • Source: AI toys for children misread emotions and respond inappropriately

【Disclaimer】
This article was structured by AI, and its content is verified and managed by the operator. Accuracy is not guaranteed, and we assume no responsibility for the content of external sites.
🦈