This wording gives them an out to abandon any generative-AI based products, actually. They know hallucinations aren't fixable, so at some point they can simply say "Well, we tried, but this is never going to be production-ready."

Rohit Prasad, who leads the artificial general intelligence (AGI) team at Amazon, told the Financial Times the voice assistant still needed to surmount several technical hurdles before the rollout.
This includes solving the problem of “hallucinations” or fabricated answers, its response speed or “latency,” and reliability. “Hallucinations have to be close to zero,” said Prasad. “It’s still an open problem in the industry, but we are working extremely hard on it.”
Personally I wouldn't uphold any of y'all as examples of slow, incremental, responsible AI service roll-outs.

One current employee said more steps were still needed, such as overlaying child safety filters and testing custom integrations with Alexa such as smart lights and the Ring doorbell.
“The reliability is the issue—getting it to be working close to 100 percent of the time,” the employee added. “That’s why you see us... or Apple or Google shipping slowly and incrementally.”
Because of the more personalised, chatty nature of LLMs, the company also plans to hire experts to shape the AI’s personality, voice and diction so it remains familiar to Alexa users, according to one person familiar with the matter.
This! I'm currently shopping for a new TV, but many manufacturers have ruled themselves out with this insane BS.

At this point, whenever I see a new product release and the main feature is "now includes AI," I'm 100% not interested in that product. It's a step backwards IMO. Sure hope this trend of cramming AI into everything ends sooner rather than later.
I think the negative sentiment toward AI is almost entirely toward generative AI, because companies are trying to use it for tasks it's not well-suited for, where it's very unreliable. Using AI for language processing is essentially what large language models were designed for (if I recall, LLMs were built for translation, which requires fast and accurate language processing), so I would expect it to be more reliable and thus accepted in that space.

I know there is negative sentiment toward AI in the Ars community, but I for one hope Alexa gets AI natural-language-processing capability. At the moment Alexa is dumb to the point of being unusable.
I cannot ask it to add an event to my calendar.
I cannot ask it to turn off my air conditioner unless I say a very specific and unnatural phrase such as "turn the downstairs thermostat to off." It doesn't even understand words like HVAC.
I can't tell it something like "automatically turn off my Echo Show screen at 10 pm." There is simply no understanding of my instructions.
'Your plastic pal who's fun to be with' was sarcasm, NOT a road map.

The problem both Google and Amazon have with these devices is that they want them to do things that the consumer is not interested in. I don't want a "proactive" assistant. I want a device that will wake up when I call it and do the basic shit that I want: turn the lights on or off, turn the TV on or off, play music. Otherwise, just stay out of the way.
Of course, that one-time purchase doesn't make enough money, so they have to keep finding ways to be annoying.
You want to compare them to the worst "AI" assistant on the market, why??

It'll be interesting to see how Amazon's Echo / Alexa solution compares with Apple's HomePod / Siri. It looks like Apple is preparing new HomePods and Apple TVs. Apple seems to be trying to push a more distributed (on-device) computing solution. Amazon is apparently looking at central servers. Both sides seem to still have steep challenges ahead in deployment.
Personally, I don't see what Alexa will provide that I would be interested in. My own use case, like that of 95% of its users, is simply to play music, set timers, and run home automation, with occasional Wikipedia-type queries. It does these adequately already.
“The reliability is the issue—getting it to be working close to 100 percent of the time,” the employee added. “That’s why you see us... or Apple or Google shipping slowly and incrementally.”
Well there's your problem! You trained an opaque 700-billion-parameter model on most of the internet, without first considering whether its output would be "predictable." The reason most people only use voice assistants to set timers and turn lights on and off is because that's all that they can predictably do. That's the same reason we program computers using languages defined by formal grammars and semantics instead of telling them to "add up the receipts" and expecting something reasonable.

“[T]he most challenging thing about AI agents is making sure they’re safe, reliable, and predictable,” Anthropic’s chief executive, Dario Amodei, told the FT last year.
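The formal-grammar point can be sketched in a few lines: a deterministic parser either matches a command exactly or refuses, so its behavior is fully predictable — the opposite of sampling from a language model. The two-command grammar below is invented purely for illustration:

```python
import re

# A hypothetical assistant grammar: these are the only commands that exist.
# Anything outside the grammar is rejected -- never guessed at.
GRAMMAR = [
    (re.compile(r"^turn (on|off) the lights$"), "lights"),
    (re.compile(r"^set a timer for (\d+) minutes$"), "timer"),
]

def parse(utterance):
    for pattern, action in GRAMMAR:
        m = pattern.match(utterance.lower())
        if m:
            return (action, m.groups())
    return None  # out of grammar: fail loudly instead of hallucinating

print(parse("turn off the lights"))   # ('lights', ('off',))
print(parse("add up the receipts"))   # None
```

The trade-off is exactly the one the comment describes: the parser can only do a handful of things, but you know in advance what every input will produce.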
If regular Alexa couldn't generate any meaningful subscription revenue or sales commissions, does the team seriously expect GenAI Alexa (which is presumably far more expensive per query) to be different? It seems like Amazon is doubling down on a failed monetization strategy.

An enduring challenge for Amazon’s Alexa team—which was hit by major lay-offs in 2023—is how to make money. Figuring out how to make the assistants “cheap enough to run at scale” will be a major task, said Jared Roesch, co-founder of generative AI group OctoAI.
Options being discussed include creating a new Alexa subscription service or taking a cut of sales of goods and services, said a former Alexa employee.
And just like Google's "put glue on pizza" or "eat rocks to get your vitamins" or "use a hitachi magic wand on your children", your most-ridiculous outputs will be widely publicized.

I still find it funny they say "hallucinate" instead of "spews random bullshit".
Also, claims like "hallucinations have to be close to zero" are meaningless. If there is a 0.3% chance that it spews bullshit but it does 10 million tasks a day, that means it's going to be making a lot of mistakes every day.
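The arithmetic behind that point is worth spelling out. Both numbers here are the commenter's hypotheticals, not Amazon figures:

```python
# Back-of-the-envelope: even a "near zero" hallucination rate adds up
# at scale. The 0.3% rate and 10 million tasks/day are hypothetical.
hallucination_rate = 0.003      # 0.3% of responses are wrong
tasks_per_day = 10_000_000      # assumed daily query volume

bad_responses_per_day = hallucination_rate * tasks_per_day
print(round(bad_responses_per_day))  # 30000 bad answers per day
```

So "close to zero" in percentage terms can still mean tens of thousands of wrong answers daily once the assistant is deployed at Alexa's scale.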
I would have agreed with you, but we've crossed the threshold where 'AI' means absolutely anything the marketing department decides, which may or may not be a step in any direction whatsoever.

At this point, whenever I see a new product release and the main feature is "now includes AI" I'm 100% not interested in that product. It's a step backwards IMO. Sure hope this trend of cramming AI into everything ends sooner rather than later.
Yep, the stochastic text generator will just output the next token based on the statistical model built up from the training data (which set the model weights) and the previous tokens (hidden prompt + user-supplied prompt + tokens it has already output).

On one hand, I’m glad that at least one massive company seems to care about its products producing nonsense.
On the other hand… isn’t avoiding hallucinations basically impossible, given how LLMs are mainly very clever autocomplete? Even with some effort to provide sources, as Google tries to, they’re terrible at picking up tone and will cite sarcasm as fact.
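The "clever autocomplete" description can be made concrete with a toy bigram model: pick each next token purely from counts of what followed the previous token in training text. A real LLM conditions on far more context with billions of parameters, but the generation loop has the same shape:

```python
import random
from collections import defaultdict

# Toy "stochastic text generator": a bigram model over a tiny corpus.
# This is a drastic simplification of an LLM, but generation is still
# this loop: condition on the previous token, sample the next one.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)   # the "model weights": observed continuations

def generate(start, n=5, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:                     # dead end: nothing ever followed
            break
        out.append(random.choice(options))  # sample the next token
    return " ".join(out)

print(generate("the"))
```

Nothing in this loop checks whether the output is *true*; it only knows what tended to follow what. That is the structural reason fluent-but-wrong output is so hard to eliminate.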
Yup.

And just like Google's "put glue on pizza" or "eat rocks to get your vitamins" or "use a hitachi magic wand on your children", your most-ridiculous outputs will be widely publicized.
Funny how Apple’s caution seems to be borne out by Amazon’s experience.

You want to compare them to the worst "AI" assistant on the market, why??
Or maybe BOB, because this trash is unleashing evil on the world.

call it 'Bob' as a nod to history
It can be worse than that. Don't assume everyone's English is educated native quality. My German wife speaks very good English and has a large vocabulary, but still pronounces many words with a foreign accent. She is also prone to translating German literally, resulting in stuff like the old joke, "Throw Mama down the stairs her hat."

You know what, Alexa is so bad at just understanding something simple, how bad could the hallucinations be comparatively?
Alexa can't understand that "turn off the downstairs lights" is functionally the same as "turn off the lights downstairs". Yes, I know one is grammatically better than the other, but functionally this is something simple.
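One cheap way to make those two phrasings equivalent is to match on the *set* of content words rather than the exact word order. This bag-of-words sketch is purely illustrative — it is not how Alexa's real pipeline works:

```python
# Bag-of-words intent matching: ignore word order so both phrasings
# from the comment above resolve to the same action. Intent names and
# stopword list are invented for this sketch.
INTENTS = {
    frozenset({"turn", "off", "downstairs", "lights"}): "lights_downstairs_off",
    frozenset({"turn", "on", "downstairs", "lights"}): "lights_downstairs_on",
}

STOPWORDS = {"the", "my", "please"}

def match(utterance):
    words = frozenset(utterance.lower().split()) - STOPWORDS
    return INTENTS.get(words)  # None if no intent has exactly these words

print(match("turn off the downstairs lights"))  # lights_downstairs_off
print(match("turn off the lights downstairs"))  # lights_downstairs_off
```

The obvious limitation is that it throws away word order entirely, so it only works for short commands where the content words alone pin down the intent — but that covers exactly the "turn the lights off" class of requests the thread is complaining about.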