The new Wild West of AI kids’ toys

Fatesrider

Ars Legatus Legionis
25,278
Subscriptor
I'm not 100% against AI in toys, but parents should be able to govern how these interact with their kids... maybe be able to set parameters such as how personal they can get, how much advice and information they can communicate, and so on. If the toy is just generally pleasant and charming and interactive at my daughter's tea party, that's fine. But if it starts asking my daughter about family details and suggests moral behavior, the batteries are coming out of that thing, immediately.
That's an odd take. The approach you seem to take is "Let's use my child to experiment on and see if it's suitable." I think most folks would rather it be extensively safety tested prior to introducing it to the spawn of our loins.

But WRT AI, you also seem to presume that if the AI stays within its boundaries for a short time, it will always continue to do so. So far, that's not been the case, especially for the interactive kinds of AI out there. LLMs are just too unreliable not to jump the track.

Going in the other direction, if you impose guardrails that are too strict, the toy quickly becomes uninteresting, like the pull-string talking dolls of my youth. Toy makers should use some imagination to create engaging toys, and not leave it up to very unpredictable and unreliable LLMs to do it for them.
 
Upvote
13 (13 / 0)
So these Toys are:
  • A subscription service?
  • Linked to your home intranet?
  • Collecting data?
  • Generating real-time content that can't be pre-screened? (I.e., nothing like ESRB video game or MPA film ratings?)
  • Live alpha/beta tests of an LLM-AI product/service on children who aren't legally able to consent?
  • Made of LLM-AI that scraped all the random corners of the World Wide Web?
  • Actively linked to sociopath-controlled businesses, political institutions/nation states, and/or (coming soon to a darker reality near you) religions? (LLM-AI Gumby, anyone?)
  • Spying? ("Your child asked Toy 'X' about 'Y.' We would like to ask you, Mr./Mrs. Parent/Guardian, about 'Y.'")
  • Engaging in advertising? Propaganda? Proselytizing? (No, it's "teaching." It's "education.")
  • A scam that will brick in X months, when the business case is no longer monetarily profitable enough to maintain the service infrastructure?
 
Upvote
49 (51 / -2)
I've seen this movie. It's a horror.
 
Upvote
23 (23 / 0)
For parents interested in a cuddly, talking kids’ toy, there’s always the neurotic techie option: build one yourself and control the inputs and outputs as much as technically possible. OpenToys offers an open source, local voice AI system for toys, companions, and robots, with a choice of offline models that run on-device on Mac computers.

This is not in any way a solution.

Don't inflict LLMs on your kids, folks.
 
Upvote
28 (29 / -1)

CADirk

Smack-Fu Master, in training
59
This sounds as bad as you can imagine: basically a Furby with upgraded security risks.
Kids might be exposed to all sorts of AI-hallucinated stuff, but these things also come with a microphone and an internet connection, so all that audio is going to be stored somewhere. And all that data is a goldmine to be scraped for useful information.
 
Upvote
13 (13 / 0)
Just what an undeveloped mind needs, a toy talking to them to gaslight them and be a sycophant. Adults have a hard enough time with these things. Kids don’t stand a chance. Will they even know their own reality as these tech companies optimize for engagement at the cost of everything else?
For a lot of people who just ride the hype and use the next new thing because they assume everyone else is doing it, this scenario might pique enough self-interest to have them stop and think about consequences.

You're not going to be able to give off the outward appearance of the photogenic ideal of a family if your kids are maladjusted and exhibiting addictive behaviours at a level that was inconceivable in your own childhood. Wouldn't that make you a bad parent?

These are the same potential disadvantages, just from another cause, that we might attribute to absentee parents, or to families with low socioeconomic status. Why would you want that for your children? They're going to have to be schooled either with the delinquents or with the seriously mentally ill, and you're going to spend their entire childhood fighting uphill battles against the system, or paying for the highest level of private schooling because of it. Why would you want that for yourself?
 
Upvote
7 (7 / 0)

Wheels Of Confusion

Ars Legatus Legionis
75,744
Subscriptor
This sounds as bad as you can imagine: basically a Furby with upgraded security risks.
Kids might be exposed to all sorts of AI-hallucinated stuff, but these things also come with a microphone and an internet connection, so all that audio is going to be stored somewhere. And all that data is a goldmine to be scraped for useful information.
Those other risks were around with toys before LLMs came around. Barbies and Robosapiens and other toys with cameras, microphones, and online-enabled features were security nightmares more than a decade ago.
 
Upvote
12 (12 / 0)

jimcollinsworth

Smack-Fu Master, in training
12
Two thoughts - seems like any model that was trained on terabytes of adult content can't be used in any toy; there's no way to add enough controls and rules to make it safe. They need to train new models from scratch using only age-appropriate content. Are any products doing that?

And second, the toys really need to be multimodal, trained with age-appropriate vision data. Kids are just learning language, so much of the context is visual: what they're holding, where they're standing, etc. It's like talking to a toddler on the phone - how well do those conversations go?
 
Upvote
7 (9 / -2)

Wheels Of Confusion

Ars Legatus Legionis
75,744
Subscriptor
Two thoughts - seems like any model that was trained on terabytes of adult content can't be used in any toy; there's no way to add enough controls and rules to make it safe. They need to train new models from scratch using only age-appropriate content. Are any products doing that?

And second, the toys really need to be multimodal, trained with age-appropriate vision data. Kids are just learning language, so much of the context is visual: what they're holding, where they're standing, etc. It's like talking to a toddler on the phone - how well do those conversations go?
Any model meeting those requirements will still occasionally tell kids they can eat delicious rocks because they're still LLMs.
 
Upvote
17 (17 / 0)

knighttime

Seniorius Lurkius
20
Subscriptor
So, is this the response to all of those people who said dolls and lawn darts were tools of Satan?

Just give me back my Legos... and I mean just the box of thousands of pieces with no picture on the box to build, not the new BT/WiFi or whatever the new ones have....

Thanks, I just looked up lawn darts and found a very "interesting" X-ray of someone who got hit. If you ask me, that looks more like "prove knowledge" territory than complete-ban territory. For example, have them available only if you show a shooting/archery range membership or other training certificate.
(The key is relative safety: if everything had to be 100% politically cushy for kids, I don't know how you'd ever have adult cooks or construction workers, since cutting tools and heat are so "scary.")

For the AI toys, I'd currently support a ban if asked to vote, because the AI orgs have not demonstrated the ability to show people how to safely operate them. We don't know if there is a way to use these without harming a kid's neuroplastic, developing mind, or even adult minds for that matter.
Maybe we could put them in license territory as well: I'd love to see someone with an AI toy license on their wall, acknowledging that they went through the paperwork and a mandatory course to get their kid's data scraped all day long by a mega-corp, lol.
 
Upvote
-3 (0 / -3)

Num Lock

Ars Praetorian
442
Subscriptor
The kid in me thinks a 'real' Teddy Ruxpin would have been awesome.
I dunno about that. I had one and took it to bed with me. It started talking in the middle of the night, I think there was a thunderstorm at the time. Scared the absolute shit out of me.

That was the end of Teddy Ruxpin.
 
Upvote
24 (24 / 0)

Erbium168

Ars Centurion
2,841
Subscriptor
Oh yeah. I think parents should also vet all toys’ electrical safety, and whether/which toxic chemicals are present, so the choice is in their hands.

Parents definitely have nothing but unlimited time to vet everything on modern earth, including all the things they haven’t heard of, dreamt of, or remotely understand. FFS.
The US has UL for electrical safety, and Europe has agencies like TÜV testing to ENs. If you understand UL or CE marks, that's enough.

But there seems to be no equivalent for software safety. And it is needed.
 
Upvote
22 (22 / 0)
I'm surprised any reputable company would want to go in on this, given that there's still no consistent way to build ironclad guardrails. It's not gonna be a great look when the teddy bear starts spouting slurs, and currently it's very hard to actually stop that.

In terms of stuff you can do with AI, using it for fictional interactions is probably one of its better use cases - but definitely not for children's toys, the way the tech is right now.
 
Upvote
9 (9 / 0)
Not a fan of blanket outlawing stuff, but this is the poster child. Parents who let their kids use these need to be publicly shamed.
I'm a fan of banning them, and the reason given is in the article:

The LLMs that are used to power these toys are designed for those who are 13+ years old.
 
Upvote
10 (10 / 0)
How about our lawmakers, their staff and industry get together and hammer out legal guidance, guardrails
The only way a guardrail will work is this:

1. The toy is limited to "AI play" for a limited amount of time a day (30 minutes? I don't know).
2. The training performed on the LLM is explicitly designed with children in mind. That means certain naughty words that I can't write here won't be in the training data (so they can't be said), the toy is explicit that it's a toy and make-believe, and there's no easy way to violate the guardrails.
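The first rule above is the easy part mechanically: a per-day time budget the toy checks before each session. A minimal sketch, assuming a hypothetical `AiPlayBudget` class and an illustrative 30-minute figure (none of this is from any real product):

```python
from datetime import date


class AiPlayBudget:
    """Track a per-day allowance of 'AI play' seconds (hypothetical sketch)."""

    def __init__(self, daily_limit_seconds=30 * 60):
        self.daily_limit = daily_limit_seconds
        self.used = 0
        self.day = date.today()

    def _roll_over(self):
        # Reset the counter when a new calendar day starts.
        today = date.today()
        if today != self.day:
            self.day = today
            self.used = 0

    def allow(self, session_seconds):
        """Return True (and spend budget) if the session fits in today's allowance."""
        self._roll_over()
        if self.used + session_seconds > self.daily_limit:
            return False
        self.used += session_seconds
        return True
```

The hard part, of course, is the second rule - a curated training corpus isn't something a few lines of firmware can enforce.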
 
Upvote
-14 (1 / -15)
They need to train new models from scratch using only age appropriate content. Any products doing that?
It'll cost billions of dollars to train new models using only age-appropriate content, with responses that are age-appropriate (meaning child psychologists, or people training to be child psychologists and in related jobs, would need to review the output and confirm it's child-safe).
 
Upvote
18 (18 / 0)
I don't think I understand the logic here. Some praise glorious 'AI' as being worth its risks because it does things faster; but as a toy what's the point? This 'play' is something that children do more or less for its own sake; and which is of interest to us because it turns out to be crucial to certain human development processes.

Is someone seriously expecting these things to turn their kid into an agentic 10x toddler like the ads aimed at linkedin addicts promise; or is this purely one of those situations where easy novelty is being shoved onto the market because it's easy and, through some sort of sleight of hand, it's on customers to express concerns rather than vendors to demonstrate adequacy?

The ways 'AI' systems are bad match neatly with the cynical incentives of half-assing stuff that doesn't matter as fast as possible; but that seems like less of an endorsement when you're dealing with a situation you should really just avoid - it'll be considerably cheaper and lower effort, unless you're just thrilled by how totally worth doing it is for some reason.

Is this basically entirely supply driven; or is there an actual customer segment crying out for a faster, worse, way of parenting?
 
Upvote
3 (7 / -4)

AbstractHaderach

Smack-Fu Master, in training
59
I'm your AI friend till the end...
Right on the nose. That film was made precisely because children are being exploited by consumerism. Ah, the best horror always has a positive social message, doesn't it? Texas Chainsaw Massacre: eat more vegetables. The Babadook: yes, people really do get mentally strained to the limit. Evil Dead: hmm.
 
Upvote
5 (5 / 0)
I've seen this movie. It's a horror.
The toy development teams running their 'A.I.'s should really try harder to prompt them not to consider film scripts like M3GAN or Ex Machina part of their training set.

They saw The Beast and found it to be an optimistic tale of social progress.
 
Upvote
2 (2 / 0)

graylshaped

Ars Legatus Legionis
68,202
Subscriptor++
I dunno about that. I had one and took it to bed with me. It started talking in the middle of the night, I think there was a thunderstorm at the time. Scared the absolute shit out of me.

That was the end of Teddy Ruxpin.
Now I'm curious. Did it say "I am the walrus" or "Hold me, Num Lock!"?
 
Upvote
4 (4 / 0)
I'm not 100% against AI in toys, but parents should be able to govern how these interact with their kids... maybe be able to set parameters such as how personal they can get, how much advice and information they can communicate, and so on. If the toy is just generally pleasant and charming and interactive at my daughter's tea party, that's fine. But if it starts asking my daughter about family details and suggests moral behavior, the batteries are coming out of that thing, immediately.
I have an idea for something far better than an AI toy to accompany a child for a pretend tea party:
A human.
 
Upvote
15 (15 / 0)
The only way a guardrail will work is this:

1. The toy is limited to "AI play" for a limited amount of time a day (30 minutes? I don't know).
2. The training performed on the LLM is explicitly designed with children in mind. That means certain naughty words that I can't write here won't be in the training data (so they can't be said), the toy is explicit that it's a toy and make-believe, and there's no easy way to violate the guardrails.
This is Ars; you can use any fucking words you want to.
 
Upvote
18 (18 / 0)
So, as a thought exercise, I decided to try to come up with a product I would not be uncomfortable giving to a cherished child.
My first idea is to limit the outputs: have a few hundred predefined outputs and have the LLM choose one of them based on the input.
The problem is we would have to exclude anything that could be used to answer a yes-or-no question, otherwise the LLM might output a dangerous answer. And already I've hit a roadblock that renders the toy's interactivity completely nonfunctional.

Gee, I thought about that for all of a minute and concluded that this is a horrible idea. There is no excuse for the people developing and selling these toys.
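The "choose from predefined outputs" idea sketches out something like this, with a trivial word-overlap scorer standing in for the LLM classifier (the canned lines and function names are made up for illustration; a real system would have the model emit only an index into the vetted list):

```python
import re

# Every line the toy can ever say is pre-vetted by a human.
CANNED_RESPONSES = [
    "Let's have a tea party!",
    "What a lovely drawing!",
    "Time to tidy up the toys.",
    "I love story time.",
]


def _tokens(text):
    # Lowercase and keep only word characters, so punctuation doesn't block matches.
    return set(re.findall(r"[a-z']+", text.lower()))


def pick_response(child_input):
    """Pick the vetted line sharing the most words with the input.
    The 'model' only ever selects from the list; it can never generate free text."""
    words = _tokens(child_input)
    return max(CANNED_RESPONSES, key=lambda line: len(words & _tokens(line)))
```

Note the selector still has to return *something* for every input - which is exactly the commenter's roadblock: an off-topic or dangerous question gets mapped onto the nearest canned line whether or not that makes sense.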
 
Upvote
6 (6 / 0)
Words are not adequate to convey the escalating horror and despair I felt as I read through this article.

One of the great unexamined ironies of the modern age is that the more knowledge and exposure one has in one's professional life to the real-world design and implementation of today's technologies, the more likely one becomes to lean Luddite in one's personal life. Casual hobbyist tech types may be enthusiastic about AI and unconcerned about things as described in this article — but if you do tech for a living, you instinctively know these factories need to be burned to the ground.
 
Upvote
23 (23 / 0)