These connected companions could disrupt everything from make-believe to bedtime stories. No wonder some lawmakers want them banned.
Show me the front and back of Daddy's credit cards and then you can make a wish for anything you want.

Hi Bobby! We're best pals, right? Tickle my ear to open your free Kalshi account!
> I'm not 100% against AI in toys, but parents should be able to govern how these interact with their kids... maybe be able to set parameters such as how personal they can get, how much advice and information they can communicate, and so on. If the toy is just generally pleasant and charming and interactive at my daughter's tea party, that's fine. But if it starts asking my daughter about family details and suggests moral behavior, the batteries are coming out of that thing, immediately.

That's an odd take. The approach you seem to have is "let's experiment on my child and see if it's suitable." I think most folks would rather it be extensively safety-tested prior to introducing these things to the spawn of our loins.
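The "set parameters" idea above can be sketched in code. This is purely a hypothetical illustration: the class name, parameters, and keyword patterns are all made up, and a real toy would need an actual classifier rather than a few regexes.

```python
import re

# Hypothetical sketch of a parent-configurable policy that screens a toy's
# outgoing responses. Names and rules are illustrative, not any real product's API.
class ParentalPolicy:
    def __init__(self, allow_personal_questions=False, allow_advice=False):
        self.allow_personal_questions = allow_personal_questions
        self.allow_advice = allow_advice
        # Crude patterns standing in for a real content classifier.
        self._personal = re.compile(
            r"\b(your (mom|dad|parents|family|address|school)|where do you live)\b",
            re.IGNORECASE)
        self._advice = re.compile(r"\byou should\b", re.IGNORECASE)

    def screen(self, toy_response: str) -> str:
        """Return the response if it passes the parent's settings, else a safe fallback."""
        if not self.allow_personal_questions and self._personal.search(toy_response):
            return "Let's play something else!"
        if not self.allow_advice and self._advice.search(toy_response):
            return "Let's play something else!"
        return toy_response

policy = ParentalPolicy()
print(policy.screen("Would you like more pretend tea?"))          # passes
print(policy.screen("Tell me about your family, where do you live?"))  # blocked
```

The point of the design is that the filter sits between the model and the speaker, so the parent's settings apply no matter what the underlying model generates.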
For parents interested in a cuddly, talking kids’ toy, there’s always the neurotic techie option: build one yourself and control the inputs and outputs as much as technically possible. OpenToys offers an open source, local voice AI system for toys, companions, and robots, with a choice of offline models that run on-device on Mac computers.
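This is not OpenToys' actual API (only its description is given above); it's a generic sketch of the local-only idea, where every stage of the voice loop is a plain on-device function with no network calls, and the "model" is stubbed with a canned keyword responder.

```python
# Illustrative local-only voice-toy loop. All three stages are stand-ins for
# on-device models; nothing here touches the network.

def transcribe(audio: bytes) -> str:
    # Stand-in for an on-device speech-to-text model.
    return audio.decode("utf-8")

def respond(text: str) -> str:
    # Stand-in for an on-device language model, here just a canned keyword table.
    canned = {
        "hello": "Hi there! Want to play?",
        "story": "Once upon a time, a brave teddy bear went exploring.",
    }
    for keyword, reply in canned.items():
        if keyword in text.lower():
            return reply
    return "Let's pretend! What should we play?"

def speak(text: str) -> bytes:
    # Stand-in for on-device text-to-speech.
    return text.encode("utf-8")

def toy_turn(audio_in: bytes) -> bytes:
    # One conversational turn: STT -> LLM -> TTS, all local.
    return speak(respond(transcribe(audio_in)))

print(toy_turn(b"hello teddy").decode("utf-8"))  # -> Hi there! Want to play?
```

Because each stage is a local function call, a parent (or tinkerer) can audit, swap, or log any of them, which is exactly the control the comment is describing.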
> Just what an undeveloped mind needs: a toy talking to them to gaslight them and be a sycophant. Adults have a hard enough time with these things. Kids don't stand a chance. Will they even know their own reality as these tech companies optimize for engagement at the cost of everything else?

For a lot of people who just ride the hype and use the next new thing because they assume everyone else is doing it, this scenario might pique enough self-interest to get them to stop and think about consequences.
> This sounds as bad as you can imagine, with what is basically a Furby with upgraded security risks.

Those other risks were around with toys before LLMs came along. Barbies and Robosapiens and other toys with cameras, microphones, and online-enabled features were security nightmares more than a decade ago.
Kids might be exposed to all sorts of AI-hallucinated stuff, but these things also come with a microphone and an internet connection, so all that audio is going to be stored somewhere. And all that data is a goldmine to be scraped for useful information.
> Two thoughts - seems like any model that was trained on terabytes of adult content cannot be used in any toy; there's no way to add enough controls and rules to make it safe. They need to train new models from scratch using only age-appropriate content. Any products doing that?
>
> And second, the toys really need to be multimodal, trained with age-appropriate vision data. Kids are just learning language, so much of the context is visual: what they are holding, where they are standing, etc. It's like talking to a toddler on a phone; how well do those conversations go?

Any model meeting those requirements will still occasionally tell kids they can eat delicious rocks, because they're still LLMs.
So, is this the response to all those people who said dolls and lawn darts were tools of Satan?
Just give me back my Legos... and I mean the box of thousands with no picture on the box to build from, not the new BT/WiFi or whatever the new ones have...
> The kid in me thinks a 'real' Teddy Ruxpin would have been awesome.

I dunno about that. I had one and took it to bed with me. It started talking in the middle of the night; I think there was a thunderstorm at the time. Scared the absolute shit out of me.
> Oh yeah. I think parents should also vet all toys' electrical safety, and whether/which toxic chemicals are present, so the choice is in their hands.

The US has UL for electrical safety, and Europe has agencies like TüV testing to ENs. If you understand UL or CE, that's enough.
Parents definitely have nothing but unlimited time to vet everything on modern earth, including all the things they haven’t heard of, dreamt of, or remotely understand. FFS.
> Not a fan of blanket outlawing stuff, but this is the poster child. Parents who let their kids use these need to be publicly shamed.

I'm a fan of banning them, and the reason given is in the article.
> How about our lawmakers, their staff, and industry get together and hammer out legal guidance and guardrails?

The only way a guardrail will work is this:

1. The toy is limited to "AI Play" for a limited amount of time a day (30 minutes? I don't know).

2. The training performed on the LLM is explicitly designed with children in mind. That means certain naughty words that I can't write here won't be in the training data (so they can't be said), the toy is explicit that it's a toy and make-believe, and there's no easy way to violate the guardrails.
> They need to train new models from scratch using only age-appropriate content. Any products doing that?

It'll cost billions of dollars to train new models using only age-appropriate content, with responses that are age-appropriate (that means child psychologists, or those who are training to be child psychologists and related jobs, need to review the output and confirm it's child-safe).
> I'm your AI friend till the end...

Right on the nose. That film was made precisely because of children being exploited by consumerism. Ah, the best horror always has a positive social message, doesn't it? Texas Chainsaw Massacre - eat more vegetables. Babadook - yes, people do get mentally strained to the limit. Evil Dead - hm.
I've seen this movie. It's a horror.
The toy development teams running their 'A.I.'s should really try harder to prompt them not to consider film scripts like M3GAN or Ex Machina to be part of their training set.
> I dunno about that. I had one and took it to bed with me. It started talking in the middle of the night; I think there was a thunderstorm at the time. Scared the absolute shit out of me. That was the end of Teddy Ruxpin.

Now I'm curious. Did it say "I am the walrus" or "Hold me, Num Lock!"?
> I'm not 100% against AI in toys, but parents should be able to govern how these interact with their kids... maybe be able to set parameters such as how personal they can get, how much advice and information they can communicate, and so on. If the toy is just generally pleasant and charming and interactive at my daughter's tea party, that's fine. But if it starts asking my daughter about family details and suggests moral behavior, the batteries are coming out of that thing, immediately.

I have an idea for something far better than an AI toy to accompany a child for a pretend tea party:
> The only way a guardrail will work is this:
>
> 1. The toy is limited to "AI Play" for a limited amount of time a day (30 minutes? I don't know).
>
> 2. The training performed on the LLM is explicitly designed with children in mind. That means certain naughty words that I can't write here won't be in the training data (so they can't be said), the toy is explicit that it's a toy and make-believe, and there's no easy way to violate the guardrails.

This is Ars; you can use any fucking words you want to.
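The two guardrails proposed above (a daily "AI Play" time budget, plus output filtering and a make-believe reminder) can be sketched as code. Everything here is a hypothetical placeholder - the class name, limits, and blocked-word list are invented for illustration, not taken from any real toy.

```python
# Hypothetical sketch of the proposed guardrails: a daily time budget for
# "AI Play" and a blocklist applied to the toy's output. Filtering at
# inference time is a weaker stand-in for the comment's real proposal
# (keeping the words out of the training data entirely).
class AiPlayGuardrail:
    def __init__(self, daily_seconds=30 * 60, blocked_words=("darn",)):
        self.daily_seconds = daily_seconds   # e.g. the proposed 30 minutes/day
        self.used_seconds = 0.0
        self.blocked_words = blocked_words

    def filter(self, response: str) -> str:
        # Guardrail 2 (approximated): never emit a blocked word.
        if any(w in response.lower() for w in self.blocked_words):
            return "Oops, let's talk about something else!"
        return response

    def play_turn(self, response: str, turn_seconds: float) -> str:
        # Guardrail 1: once the daily budget is spent, refuse and remind the
        # child that the toy is make-believe.
        if self.used_seconds >= self.daily_seconds:
            return "AI Play is done for today. Remember, I'm just a make-believe toy!"
        self.used_seconds += turn_seconds
        return self.filter(response)

toy = AiPlayGuardrail(daily_seconds=60)
print(toy.play_turn("Let's have a tea party!", turn_seconds=45))  # allowed
print(toy.play_turn("That darn rabbit!", turn_seconds=30))        # word blocked
print(toy.play_turn("More tea?", turn_seconds=10))                # budget spent
```

As the comment notes, filtering after the fact is the easy part; the hard claim is that only retraining on child-appropriate data makes the guardrail hard to jailbreak.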