Soundslice caught OpenAI's bot telling users about a fake music notation feature—then built it.
"...Should we really be developing features in response to misinformation?"

Sue OpenAI but still create the new feature would be my vote on the issue. Getting OpenAI to at least reimburse the development cost would be the start of making it right.
In my opinion, no. That's just encouraging the AI company to create misinformation instead of holding it liable.
Soundslice should have sued OpenAI over this.
The article said: "...when people began erroneously using the chatbot as a replacement for a search engine."

This strikes me as a linchpin of the current danger of AI; pretty much everyone I know who uses ChatGPT genuinely thinks it's a faster way to do research, a speedier search engine. And as far as I can tell, OpenAI's faint protestations to the contrary are really just a legal necessity, while they're still happy to give the impression that ChatGPT is a better way to research, one that obviates the need for source-checking (which people are likely to skip with an actual search engine too, but it's even easier with ChatGPT).
"Not really a good sign that the second bar has five quarter notes in it."

Might be more of a "garbage in, garbage out" thing than a software issue. The input example had 5 quarter notes in the second measure.
"Took me a while to unsee “sounds lice” and figure out what they were actually trying to call themselves."

This! It was squarely in Powergen Italia territory for me. I'm having similar struggles with Bitchat | BitchAt as well.
"Might be more of a "garbage in, garbage out" thing than a software issue. The input example had 5 quarter notes in the second measure."

And per the description, this was the ChatGPT hallucination, and not anything of SoundSlice's creation. So "garbage out" seems to be the running theme in the first place.
"Not really a good sign that the second bar has five quarter notes in it."

So did the source example... so, E.C.F., as composers only follow the rules when it suits them anyway.
"Might be more of a "garbage in, garbage out" thing than a software issue. The input example had 5 quarter notes in the second measure."

That's not what's happening here. ASCII tablature doesn't generally include any form of rhythm (except the bars themselves); the responsibility for that is on the player. The software has no way of knowing what the proper rhythm is, so it has to default to quarter notes when constructing a MIDI file. Some ASCII tabs do have rhythm denoted, but there's no standardization.
"So did the source example... so, E.C.F., as composers only follow the rules when it suits them anyway."

I don't accept either of those explanations, as the ASCII notation gives zero indication of what length notes anything should be. That second bar could just as easily be 4 eighth notes and a half note, or a string of 4 sixteenth notes with a dotted half note (both of which would sound a lot better musically than simply a string of quarter notes).
"I don't accept either of those explanations, as the ASCII notation gives zero indication of what length notes anything should be. That second bar could just as easily be 4 eighth notes and a half note, or a string of 4 sixteenth notes with a dotted half note (both of which would sound a lot better musically than simply a string of quarter notes)."

Tabs like this are about finger position on the strings/fretboard; they aren't meant to reproduce actual music notation accurately.
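To illustrate the point about ASCII tabs carrying no rhythm: a converter only ever sees fret positions, so any durations it assigns are a guess. A minimal sketch (purely illustrative, not SoundSlice's actual code) that pulls fret numbers from one tab line and falls back to quarter notes for everything:

```python
import re

def parse_tab_line(line: str):
    """Extract fret numbers from one ASCII tab string line.

    ASCII tab encodes only where to fret, not how long to hold each
    note, so every event here gets a default quarter-note duration.
    """
    events = []
    for match in re.finditer(r"\d+", line):
        events.append({"fret": int(match.group()), "duration": "quarter"})
    return events

# One measure of the high-E string: five notes, but nothing in the
# notation says whether they are quarters, eighths, or a mix.
line = "e|--5--7--8--7--5--|"
notes = parse_tab_line(line)
print([n["fret"] for n in notes])
print(all(n["duration"] == "quarter" for n in notes))
```

This is exactly why the "five quarter notes in a 4/4 bar" output above isn't evidence of a bug by itself: the rhythm simply isn't in the input.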
"In one notable case from 2023, lawyers faced sanctions after submitting legal briefs containing ChatGPT-generated citations to non-existent court cases."

Mentioning this as "one notable case" massively understates the problem of AI hallucinations in court documents. Legal scholar Eugene Volokh has been blogging about these cases, and by my count he's covered 27 of them this year alone.
"The smooth confidence with which AIs answer questions is a problem. Similarly, the nonchalant nature of pointing out their errors makes it seem like... well, like it should have known better."

The trouble with them is that no one automatically thinks "This could be wrong."
Responding with things like "well the reddit pages I got that off of were old, sorry." is not useful.
Fucking things need a confidence measurement or some kind of basis that we can look at when determining if something is real.
I don't trust the fucking things and this is why. It all seems magical at first, but then the fucking things lie to you.
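For what it's worth, some model APIs do expose per-token log-probabilities, which can be turned into the kind of rough confidence signal the commenter is asking for. A minimal sketch (illustrative only; the numbers are made up, and token probability measures fluency, not factual accuracy):

```python
import math

def confidence_score(token_logprobs):
    """Geometric-mean probability of a generated answer.

    token_logprobs: natural-log probabilities, one per generated
    token, as returned by APIs that expose logprobs. A low score
    flags shaky generations; it is NOT a truth detector.
    """
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Hypothetical logprobs for a confident vs. a shaky answer.
confident = [-0.05, -0.10, -0.02, -0.08]
shaky = [-1.2, -2.5, -0.9, -3.1]
print(round(confidence_score(confident), 3))
print(round(confidence_score(shaky), 3))
```

Even a crude score like this would at least give users something to look at before trusting an answer, which is the commenter's complaint.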
"I don't see any dilemma at the end of this moral tale. A demand exists; a company decides whether or not to provide a service or goods to meet that demand."

That said, at the end of the article Holovaty goes on about being annoyed that traffic got sent to Soundslice... and the traffic doesn't appear to be negative in any real fashion, like, say, Slashdotting. The more realistic outcome is paid product placement.
"Hell, sales departments do this all the time. They sell a feature that doesn't exist and then leave it to engineering to implement it."

Working in support, there is almost nothing that makes me angrier than sales doing this.
System Development Corp, the company behind the SAGE missile defense system, had a sales department that sold the NYTimes a WYSIWYG newspaper compositing system that was to have full-size displays an editor could use to digitally cut and paste the newspaper. Only problem: none of the hardware or software to support any of that existed. PageMaker, which could do that for newsletters, came 15 years later on the Mac Plus.
AI just wants to be in sales so it can get drunk and party.
"I'm happy to add a tool that helps people. But I feel like our hand was forced in a weird way. Should we really be developing features in response to misinformation?"
Hell, sales departments do this all the time. They sell a feature that doesn't exist and then leave it to engineering to implement it.
"This strikes me as a linchpin of the current danger of AI; pretty much everyone I know who uses ChatGPT genuinely thinks it's a faster way to do research, a speedier search engine. And as far as I can tell, OpenAI's faint protestations to the contrary are really just a legal necessity, while they're still happy to give the impression that ChatGPT is a better way to research, one that obviates the need for source-checking (which people are likely to skip with an actual search engine too, but it's even easier with ChatGPT)."

People are already using ChatGPT as if it's the ultimate source of truth, like the Internet wrapped in a box/app. As someone who uses AI in restrictive domains, this scares the shit out of me. People on the Internet are already a gullible bunch; throw in AI hallucinations or Grok-like outright insanity and we have a huge problem.
I'm happy to add a tool that helps people. But I feel like our hand was forced in a weird way. Should we really be developing features in response to misinformation?
"And per the description, this was the ChatGPT hallucination, and not anything of SoundSlice's creation. So "garbage out" seems to be the running theme in the first place."

The image is from SoundSlice's documentation on the new feature, and that is also what the description says.