ChatGPT made up a product feature out of thin air, so this company created it


deltaproximus

Ars Scholae Palatinae
999
Subscriptor++
"...Should we really be developing features in response to misinformation?"

In my opinion, no. That's just encouraging the AI company to create misinformation instead of holding the AI company liable.

Soundslice should have sued OpenAI over this.
Suing OpenAI while still creating the new feature would be my vote on the issue. Getting OpenAI to at least reimburse the development cost would be a start toward making it right.

Soundslice saved both sides a lot of complaining by creating the feature, though.
 
Upvote
51 (70 / -19)
The smooth confidence with which AIs answer questions is a problem. Similarly, the nonchalant way they respond when you point out their errors makes it seem like... well, like they should have known better.

Responding with things like "well the reddit pages I got that off of were old, sorry." is not useful.

Fucking things need a confidence measurement or some kind of basis that we can look at when determining if something is real.

I don't trust the fucking things and this is why. It all seems magical at first, but then the fucking things lie to you.
 
Upvote
167 (177 / -10)
The article said:
...when people began erroneously using the chatbot as a replacement for a search engine.
This strikes me as a linchpin of the current danger of AI; pretty much everyone I know who uses ChatGPT genuinely thinks it's a faster way to do research, a more speedy search engine. And as far as I can tell, OpenAI's faint protestations to the contrary are really just a legal necessity, while they're still happy to give the impression that ChatGPT is a better way to research, one that obviates the need for source-checking (which people are likely to skip with an actual search engine, too, but it's even easier with ChatGPT).
 
Upvote
162 (163 / -1)

nancy-drew

Ars Centurion
357
Subscriptor++
This is a fascinating piece. Thank you for posting this! Gave me lots to think about.

Based purely on my personal experience, I'd assume this is a good thing. When an AI hallucinates a feature of a SaaS product, my normal way of handling it is "oh, this product is the piece I was missing, the AI just took a massive bong rip before writing its response." I have to go and do a bit more data massaging, but the product becomes a new piece of my solution, even if it didn't immediately work the way the AI said it would.

The obvious caveat is that as an Ars poster, I'm likely better-equipped to handle the peccadillos of agentic AI assistants, and that does blind me to the fact that the average person would probably blame the product and just keep moving.
 
Upvote
41 (48 / -7)

ProdigySim

Ars Centurion
284
Subscriptor++
Using LLM autocomplete in code, it hallucinates a lot of features that should exist and would obviously solve problems, but don't exist. Like CLI flags or configuration parameters that would do what you want :)

Not a terrible idea to source those as inspiration for new features. But also, there's probably already an open issue asking for half of these....
 
Upvote
74 (76 / -2)

SubWoofer2

Ars Tribunus Militum
2,670
I don't see any dilemma at the end of this moral tale. A demand exists; a company decides whether or not to provide a service or goods to meet that demand.

Demand can come from anywhere, including the whimsical and bizarre; it just needs the madness of crowds. Cue discussion of pet rocks, hula hoops, dirtying your vehicle's exhaust, and Goop products for one's vagina.

As for the moral tale, from a software angle this is a different and perhaps technically more interesting thing than glue on pizza or the number of "r"s in "strawberry".
 
Upvote
89 (91 / -2)

Sukasa

Ars Centurion
236
Subscriptor++
Might be more of a "garbage in, garbage out" thing than a software issue. The input example had 5 quarter notes in the second measure.
And per the description, this was the ChatGPT hallucination, and not anything of SoundSlice's creation. So "garbage out" seems to be the running theme in the first place
 
Upvote
29 (33 / -4)
My first question was: why weren't they supporting these ASCII imports to begin with? It seems like ChatGPT just drove users to complain that the feature wasn't there, and Soundslice responded to users' unmet expectations for the lack of an obvious feature.

Even though this was prompted by AI, it was something expected. If the AI had suggested, e.g., the ability to import from Spanish plaintext, I doubt it would have been treated with the same vigor.
 
Upvote
-5 (23 / -28)

nicholas.lecompte

Smack-Fu Master, in training
63
Might be more of a "garbage in, garbage out" thing than a software issue. The input example had 5 quarter notes in the second measure.
That's not what's happening here: ASCII tablature doesn't generally include any form of rhythm (except the bar lines themselves); the responsibility for that is on the player. The software has no way of knowing what the proper rhythm is, so it has to default to quarter notes when constructing a MIDI. Some ASCII tabs do have rhythm denoted, but there's no standardization.

I imagine they didn't want to include this feature because it's misleading and borderline useless! It is only because of ChatGPT's lies, plus idiots abusing app store ratings, that this feature even exists. Especially frustrating since only unskilled amateurs would ever ask ChatGPT for guitar help, and they're going to get their ears screwed up by hearing the music played incorrectly.
 
Upvote
78 (86 / -8)
Might be more of a "garbage in, garbage out" thing than a software issue. The input example had 5 quarter notes in the second measure.
So did the source example.... so, E.C.F. as composers only follow the rules when it suits them anyway.
I don't accept either of those explanations, as the ASCII notation gives zero indication of what length notes anything should be. That second bar could just as easily be 4 eighth notes and a half note, or a string of 4 sixteenth notes with a dotted half note (both of which would sound a lot better musically than simply a string of quarter notes).
 
Upvote
22 (24 / -2)
So I don't know shit about this platform, but as a guitarist I'll throw out that ASCII tabs are very, very common.... it's completely unsurprising that people would believe ChatGPT when it makes up shit like this.

Do I love that (as someone said above) we are 'humoring' ChatGPT's bullshit? No, but this is an actual use case that's not even that 'edge', so I can see why the company would lean into it.
 
Upvote
52 (54 / -2)

asharkinasuit

Ars Centurion
239
Subscriptor
How many times do we need to yell it from the rooftops: those so-called AI products are still little more than glorified auto-complete. The recent article about RL(HF) doesn't exactly help clear things up either, because it buys into the misleading analogy proposed: that even humans use imitation at first to learn. A human imitating is not the same thing; no math teacher asking students to follow along expects they will merely copy down, letter for letter, what's written on the blackboard. The point of such an exercise is to engage the thought process and foster an understanding of what's going on. Again, the point is not to learn what pattern of tokens will give a higher grade (although that is regrettably how some approach things), but to learn the underlying concepts and be able to generalize them to new situations.
AI doesn't think; it doesn't have the a priori capability needed for actual learning. So even if you call it reinforcement learning, and even though the results are quite impressive, you're still just creating a statistical model for what parts of the solution space to focus on. Whether that outcome is the same as a trained meat brain is a question best left to neuroscientists or philosophers, I guess.
 
Upvote
16 (24 / -8)
I don't accept either of those explanations, as the ASCII notation gives zero indication of what length notes anything should be. That second bar could just as easily be 4 eighth notes and a half note, or a string of 4 sixteenth notes with a dotted half note (both of which would sound a lot better musically than simply a string of quarter notes).
Tabs like this are about finger position on the strings/fretboard, they aren't meant to really reproduce actual music notation accurately.

Don't get me wrong, there are tabs out there where the person writing them will put in all sorts of actual notation (rests, beat, etc), but in general that's not tablature's purpose.

Most guitarists use tabs in conjunction with something else (i.e. tablature establishes fretboard position, but you listen to the song for rhythm and timing, or you use actual music notation, etc).
 
Upvote
37 (38 / -1)

Black Eagle

Ars Praetorian
529
Subscriptor
In one notable case from 2023, lawyers faced sanctions after submitting legal briefs containing ChatGPT-generated citations to non-existent court cases.
Mentioning this as “one notable case” massively understates the problem of AI hallucinations in court documents. Legal scholar Eugene Volokh has been blogging about these cases and by my count he’s covered 27 of them this year alone.
 
Upvote
31 (32 / -1)
Hell, sales departments do this all the time. They sell a feature that doesn't exist and then leave it to engineering to implement it.

System Development Corp, the company behind the SAGE missile defense system, had a sales department that sold the NYTimes a WYSIWYG newspaper compositing system, with full-size displays an editor could use to digitally cut and paste the newspaper. The only problem was that none of the hardware or software to support any of that existed. PageMaker, which could do that for newsletters, came 15 years later on the Mac Plus.

AI just wants to be in sales so it can get drunk and party.
 
Upvote
62 (62 / 0)

Fatesrider

Ars Legatus Legionis
25,296
Subscriptor
The smooth confidence with which AIs answer questions is a problem. Similarly, the nonchalant way they respond when you point out their errors makes it seem like... well, like they should have known better.

Responding with things like "well the reddit pages I got that off of were old, sorry." is not useful.

Fucking things need a confidence measurement or some kind of basis that we can look at when determining if something is real.

I don't trust the fucking things and this is why. It all seems magical at first, but then the fucking things lie to you.
The trouble with them is that no one automatically thinks "This could be wrong".

That's also largely what's wrong with the world. That overabundance of credulousness, being all too willing to believe bullshit from one source when if it had come from another source, saying the exact same thing, people would go, "That's bullshit!"

People should automatically question EVERYTHING they hear, and look it up themselves. That whole "if a machine said it, it has to be right" mentality has got to go.

After all, in the Machine/Brain interface, only ONE side is actually capable of THINKING. The other one is going to sound authoritative, but at no point should it ever be trusted without fact-checking it.
 
Upvote
23 (24 / -1)
I don't see any dilemma at the end of this moral tale. A demand exists; a company decides whether or not to provide a service or goods to meet that demand.
That said, at the end of the article Holovaty goes on about being annoyed that traffic got sent to Soundslice... and the traffic doesn't appear to be negative in any real fashion, like, say, a Slashdotting. The more realistic comparison is paid product placement.

To those who think tab upload is a joke: tab has been the gateway for guitar players for decades. If anyone is going to make the effort to bridge (heh) over to learning to read music, it just got more accessible. The lower the barrier to entry, the more likely the success. This ain't pizzagate, FFS; this is the rare misinformation that makes life better.
 
Upvote
11 (14 / -3)
Hell, sales departments do this all the time. They sell a feature that doesn't exist and then leave it to engineering to implement it.

System Development Corp, the company behind the SAGE missile defense system, had a sales department that sold the NYTimes a WYSIWYG newspaper compositing system, with full-size displays an editor could use to digitally cut and paste the newspaper. The only problem was that none of the hardware or software to support any of that existed. PageMaker, which could do that for newsletters, came 15 years later on the Mac Plus.

AI just wants to be in sales so it can get drunk and party.
Working in support there is almost nothing that makes me angrier than sales doing this.
 
Upvote
36 (36 / 0)

FSM4ever

Ars Centurion
284
Subscriptor
"I'm happy to add a tool that helps people. But I feel like our hand was forced in a weird way. Should we really be developing features in response to misinformation?"

Well, it’s up to them, but developing features people need is usually good for business.

People pay a lot of money to advertise their products, and here they got it totally free…
 
Upvote
8 (9 / -1)
Hell, sales departments do this all the time. They sell a feature that doesn't exist and then leave it to engineering to implement it.
(image attachment)
 
Upvote
91 (93 / -2)
This strikes me as a linchpin of the current danger of AI; pretty much everyone I know who uses ChatGPT genuinely thinks it's a faster way to do research, a more speedy search engine. And as far as I can tell, OpenAI's faint protestations to the contrary are really just a legal necessity, while they're still happy to give the impression that ChatGPT is a better way to research, one that obviates the need for source-checking (which people are likely to skip with an actual search engine, too, but it's even easier with ChatGPT).
People are already using ChatGPT as if it's the ultimate source of truth, like the Internet wrapped in a box/app. As someone who uses AI in restrictive domains, this scares the shit out of me. People on the Internet are already a gullible bunch; throw in AI hallucinations or Grok-like outright insanity and we have a huge problem.
 
Upvote
21 (23 / -2)

Ipuxi

Ars Centurion
215
Subscriptor++
And per the description, this was the ChatGPT hallucination, and not anything of SoundSlice's creation. So "garbage out" seems to be the running theme in the first place
The image is from SoundSlice's documentation on the new feature, and that is also what the description says.
 
Upvote
11 (11 / 0)

gc9

Smack-Fu Master, in training
53
Rhythm clues: in tablature without a parallel line of music notation, horizontal spacing can provide rhythm clues. In the example image, the horizontal distance between notes is longer in the first measure (2 hyphens) than in the second (1 hyphen). In the second measure, the spacing between the notes (1 hyphen) is shorter than the space after the last note (5 hyphens).

The first measure has notes at four equally spaced positions with 2 hyphens between each. Quarter notes seem likely for the 4/4 time signature.
The second measure notes are closer together than the first measure, just 1 hyphen separation, and the last note has 5 hyphens after it.
The simplest solution is 4 eighth notes and 1 half note. (Horizontal distance is not linearly proportional to duration, but it does order durations. Horizontal distance is not linearly proportional to duration in standard music notation, either.)

(Horizontal distance can provide a clue, but sometimes it can be thrown off by other constraints, such as fitting the lyrics into the horizontal space.)
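The spacing heuristic above can be sketched in a few lines of toy Python (purely illustrative, not any real product's code): measure the gap between each note and the next event, since a wider gap suggests a longer duration, even if the mapping isn't linear.

```python
def infer_relative_durations(positions: list[int], bar_end: int) -> list[int]:
    """Gap (in characters) after each note; a bigger gap hints at a longer note.

    `positions` are the column indices of notes within one measure,
    `bar_end` is the column of the closing bar line.
    """
    gaps = []
    for i, pos in enumerate(positions):
        nxt = positions[i + 1] if i + 1 < len(positions) else bar_end
        gaps.append(nxt - pos)
    return gaps

# Second measure of the example: notes 1 hyphen apart, 5 hyphens after the
# last note, e.g. columns 0, 2, 4, 6, 8 in a 14-character measure.
print(infer_relative_durations([0, 2, 4, 6, 8], 14))  # [2, 2, 2, 2, 6]
```

The last gap being three times the others is what hints at 4 eighth notes followed by a half note rather than five equal quarter notes; the gaps only order the durations, so a real importer would still need to snap them to durations that fill the time signature.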
 
Upvote
20 (20 / 0)