George Carlin’s heirs sue comedy podcast over “AI-generated” impression

This is the thing that profoundly offends me about the AI discussion. The lack of honesty.

The argument that it gives people who can't dedicate the time and discipline the ability to fully realize their ideas, that it's "democratization": that's valid. But I haven't seen that in motion. So far it's just a money-saving tool for the bosses to devalue the skill and labor of people with a certain skillset.

Only one AI flaunter I've engaged with was honest enough to admit it's simple tribal revenge for him: "owning artists who laughed at NFTs". I think that motive drives this push far more than any altruistic purpose, "Effective" or otherwise.

Is it that hard to be honest? Their idols, the men of vision (and it's always men), make no secret of plans that will ruin people's lives, so why do they think they need to lie to anyone skeptical? Is it just a veiled insult ("you're dumb, here's a stupid placating argument HAHA you engaged with it #owned")?
That. Art is the most democratic thing possible; practically anyone can create art in some form. Finances might keep some people from creating some kinds of art, but there's something available to literally everyone. Are you going to be good at it right away? No, it takes passion and practice. But you could pick up a pencil and draw today. You could pick up a paintbrush and paint tomorrow. You could sing, or rap. You could gather your friends and put on a play. The sky is the limit.

AI does absolutely nothing to democratize the creation of art, because it's already as democratic as it gets. Quite the opposite, as so many others have pointed out; it steals art from the people, by taking all the things people have created and mashing them into a stochastic slurry that kinda sucks and has nothing new to say, but costs rich assholes less money than paying artists what they're worth.
 
Upvote
5 (7 / -2)

DeschutesCore

Ars Scholae Palatinae
1,079
That. Art is the most democratic thing possible; practically anyone can create art in some form. Finances might keep some people from creating some kinds of art, but there's something available to literally everyone. Are you going to be good at it right away? No, it takes passion and practice. But you could pick up a pencil and draw today. You could pick up a paintbrush and paint tomorrow. You could sing, or rap. You could gather your friends and put on a play. The sky is the limit.

AI does absolutely nothing to democratize the creation of art, because it's already as democratic as it gets. Quite the opposite, as so many others have pointed out; it steals art from the people, by taking all the things people have created and mashing them into a stochastic slurry that kinda sucks and has nothing new to say, but costs rich assholes less money than paying artists what they're worth.
It's worse than that.

When all the artists starve and die, or just give up their dreams completely because they can't compete with fractional pennies on the dollar per image, AI will have nothing to train on but its own aging output.
 
Upvote
5 (7 / -2)

hillspuck

Ars Scholae Palatinae
2,179
🔄🎨 AI remixes inspire new art. 📈💡 AI's evolution opens innovation. 🛠️🎭 AI's a tool, not a replacement. 🚀🧠 AI surpasses limitations, like in chess & Go. 🔮🤖 Future AI will invent, not just follow. 🌌🧑‍🎨 AI expands creative horizons for all.
Well, that certainly makes it clear that this post didn't have anything new to add, and your points have already been addressed. Thanks.
 
Upvote
7 (7 / 0)

s73v3r

Ars Legatus Legionis
25,618
Let's be clear: AI in art isn't about 'forcing artists into manual labor'
If you can't make money being an artist, there isn't much left.


. It's about enhancing and broadening the scope of what's possible creatively.
No, it's not. It's about not having to pay actual artists, or spend time actually learning how to create art.


There really is no way to get doomers/luddites/decels
Ahh yes, the people who want to be able to support their families are the ones in the wrong. How dare they want to be able to feed themselves.
 
Upvote
8 (9 / -1)

s73v3r

Ars Legatus Legionis
25,618
Because it's inherently transformative
It really isn't.


Because masterpieces like It Takes A Nation of Millions or Paul's Boutique are basically impossible now.
Bullshit. You're just unwilling to put in the work.


Because some of the most original and unique artists I've ever heard sample heavily.
They're not original or unique if they rely on other people's work that heavily.

Because I do not accept that ephemera can be owned.
Nobody gives a shit how you feel. You just want to be able to take from others without compensating them for it.

Because I don't give a rat's ass about anyone's ability to commercially exploit creative endeavors.
Then why should anyone give a rat's ass about your ability to use other people's work because you can't come up with your own stuff?

Because I find copyright morally and ethically despicable.
Again, nobody cares how you believe that you're entitled to the work of others for free, and that artists don't deserve compensation.

I can keep going like this for pages.
You don't have to, we all know your only reason for this is that you don't believe artists deserve compensation for their hard work.
 
Upvote
4 (8 / -4)

s73v3r

Ars Legatus Legionis
25,618
Good to see you're already up to the bargaining stage. But remember, AI's seismic shift is inevitable
Just like the seismic shift crypto was supposed to have?

, not about defeating talent but expanding creativity.
By defeating talent and removing the ability of artists to actually support themselves, because you're too damn cheap to pay artists, and too damn lazy to learn how to do it yourself.


🌊🎨 It's time to adapt and ride the wave, not try to stop it. 🏄‍♂️🤖
The same refrain: It's always other people that have to accept that their livelihood is going to be taken away. You never have to worry about it.
 
Upvote
9 (10 / -1)
Well, that certainly makes it clear that this post didn't have anything new to add, and your points have already been addressed. Thanks.

I went into great detail addressing your points. But you wanted brevity, and you got it, but it does mean that you'll miss a lot of nuances. By choosing to ignore what I'm already trying to condense in the hope of getting people to understand what I'm trying to say, you're making the biggest problem I have communicating my ideas worse:

[Attached image: 20240129_182803.jpg]




Moving on: I'm thrilled to share my latest creation: "The Carlin AI Chronicles," a comedy skit series that takes a deep dive into the world of AI, humor, and the legacy of the legendary George Carlin.

About the Series:
Inspired by a vibrant discussion in an online forum, this series is a playful yet poignant exploration of the intersections between artificial intelligence, comedy, and societal views. Through the lens of humor, we explore the complexities, fears, and potential of AI in our lives.

What to Expect:

  • Seven Unique Skits: Each episode is based on a page from the forum thread, featuring a comedic robot with the persona of George Carlin. Imagine a robot with long white hair and a beard, dressed in Carlin's iconic black attire, tackling the absurdities of our modern world.
  • Diverse Themes: From the ethics of AI-generated comedy to the surreal future of AI content creation, no topic is off-limits. We delve into legal debates, the role of AI in art, and even the bizarre world of AI talk shows.
  • A Visual Treat: Accompanied by stunning, wide-screen images, each skit is brought to life with vivid visuals, enhancing the comedic experience.
  • A Tribute to Carlin: While exploring futuristic themes, the series pays homage to George Carlin's unique style of humor – irreverent, insightful, and unapologetically honest.
Why You Should Watch:
"The Carlin AI Chronicles" is more than just a comedy series; it's a reflection on our evolving relationship with technology. It's a celebration of humor's power to connect us, whether through human wit or AI creativity. So, sit back, enjoy, and let's laugh our way through the intriguing world of AI!

Enjoy!


View: https://youtu.be/xWfX5Z5ikfI
 
Upvote
-10 (0 / -10)

hillspuck

Ars Scholae Palatinae
2,179
I went into great detail addressing your points. But you wanted brevity, and you got it, but it does mean that you'll miss a lot of nuances.
I've read a lot of your posts, and for the bulk of them you just basically disagree and keep circling around the same arguments.

If you have no setting between "one line of topics that are identical to the same thing posted dozens of times before" and "an in-depth firehose of words and pictures, often repeating the same arguments but maybe burying some nuance somewhere", then yes, I'm going to miss them.
 
Upvote
9 (9 / 0)
I've read a lot of your posts, and for the bulk of them you just basically disagree and keep circling around the same arguments.

If you have no setting between "one line of topics that are identical to the same thing posted dozens of times before" and "an in-depth firehose of words and pictures, often repeating the same arguments but maybe burying some nuance somewhere", then yes, I'm going to miss them.

I addressed your points by highlighting:

1. Creativity as Remixing: Much of what we call 'original' art is actually creative remixing. AI, like electroswing blending old and new, is another tool for creating novel styles from existing elements.

2. AI's Role in Art: AI isn't just fabricating; it's enhancing the creative process. It helps transform ideas into reality, making users 'directors' of creativity, not just passive creators.

3. Reverse God of the Gaps: (I'm pretty sure I hadn't talked about this before, so this is what I mean by missing some of the nuances) Critics often highlight what AI can't do now as proof of its limitations. History, however, shows a pattern where AI overcomes these perceived barriers, much like in chess, Jeopardy, and Go. This skepticism mirrors the 'god of the gaps' argument, failing to see the evolving nature of AI's capabilities. As AI progresses, it's likely to breach domains once thought exclusively human. The future of AI, especially as it approaches AGI, will challenge our current understanding of intelligence and creativity, moving beyond just following instructions to inventing and discovering autonomously.

Moving on, @s73v3r you express concerns about AI impacting artists' livelihoods and the value of their work. But let's face reality: AI is advancing regardless of these concerns. What's the point of just lamenting these changes?

Why not instead ask, how can we adapt to and leverage AI, rather than fear it? History shows that with every major technological shift, new opportunities emerge. It's less about AI defeating talent and more about how talent can adapt to and thrive with AI.

AI's transformation of industries is inevitable. The challenge is to find ways to integrate it constructively. What are your ideas on adapting to this change, rather than just opposing it?
 
Upvote
-9 (1 / -10)

hillspuck

Ars Scholae Palatinae
2,179
I addressed your points by highlighting:

1. Creativity as Remixing: Much of what we call 'original' art is actually creative remixing. AI, like electroswing blending old and new, is another tool for creating novel styles from existing elements.
"Much" is not the same thing as "all", though. And I would even dispute the "much" part. Taking your example of electroswing, where did the swing, jazz, house, and hip hop that it remixes come from? Oh, right, human creativity that added new things. Current AI doesn't add new things. Most humans do. Current generative AI is a dead end.

And speaking of not adding anything new, multiple people have already addressed this point (which you've already made before). You ignore it because it doesn't fit your narrative. As such, I will be ignoring any of your posts asserting it again.

2. AI's Role in Art: AI isn't just fabricating; it's enhancing the creative process. It helps transform ideas into reality, making users 'directors' of creativity, not just passive creators.
It only "enhances" the creative process of those unable to create at all. It does not allow them to create anything new but purely remix (see #1 above).

Again, you keep bringing this up without considering the issue of zero creative elements being introduced. As such, I will be ignoring parts of your posts that show zero creativity like bringing this up again.

3. Reverse God of the Gaps: (I'm pretty sure I hadn't talked about this before, so this is what I mean by missing some of the nuances) Critics often highlight what AI can't do now as proof of its limitations. History, however, shows a pattern where AI overcomes these perceived barriers, much like in chess, Jeopardy, and Go.
No, actually, it does not. We've been pretty stalled on AI for decades.
[Within 10 years]
A computer would be world champion in chess.
A computer would discover and prove an important new mathematical theorem.
Most theories in psychology will take the form of computer programs.
That's you up there, with your "just around the corner" thinking. Wonder when that was actually said? Oh, wait, 1957. How is that pattern going 67 years later? They weren't even able to beat a really good chess player until 1987. They weren't able to beat world champion Kasparov until 1997, and even that has an asterisk beside it. And all these wins came from computers that were far out of reach of normal incomes. It wasn't until the mid- to late-2000s that "normal" commercial equipment could beat the masters.

There is a pattern here. That pattern is that it always takes way longer than people saying "just around the corner" predict.

And as you keep following a pattern and posting similar things (you've posted point #3 before in other forms), I will no longer respond to this rehashed point that does not stand up against actual history.

Personally, I think we may eventually get there with AI. But I think it's as likely to be 100+ years from now as it is 10. And one of the problems you have is that you keep saying "AI will _____" as if AI were one single thing. It's not. LLMs (or all generative AI) are not the entirety of AI. It's possible someone will create an AI that actually does have the power to create something new. I seriously doubt it will come out of this current crop, just like this current crop didn't come out of all the lines of research they were doing in the 50s, 60s, and 70s, the lines they were sure were going to produce human-level intelligence just around the corner.

In much the same way, your posts never seem to come up with something new. They just keep regurgitating the same lines and pounding the podium. Perhaps you've got something new just around the corner.
 
Upvote
7 (8 / -1)
And speaking of not adding anything new, multiple people have already addressed this point (which you've already made before). You ignore it because it doesn't fit your narrative. As such, I will be ignoring any of your posts asserting it again.
It helps to remember that Kamus writes most long form posts via LLM. The LLM doesn’t take rebuttals into account. It’ll never “learn” from discussions here.

Though the LLM he uses may not be the only one with that problem.
 
Upvote
8 (8 / 0)
"Much" is not the same thing as "all", though. And I would even dispute the "much" part. Taking your example of electroswing, where did the swing, jazz, house, and hip hop that it remixes come from? Oh, right, human creativity that added new things. Current AI doesn't add new things. Most humans do. Current generative AI is a dead end.

And speaking of not adding anything new, multiple people have already addressed this point (which you've already made before). You ignore it because it doesn't fit your narrative. As such, I will be ignoring any of your posts asserting it again.

I'll correct myself then, because in this context "much" = "all", actually; everything is influenced by other works or things that came before it. Electroswing was used as an example because it was an obvious one (again: it's in the name!) to illustrate how new art styles emerge from blending existing ones.

Now, consider the development of musical genres. Rock 'n' roll emerged from blues and country music. Heavy metal evolved from rock, adding distorted electric guitar, an instrument that itself was a progression from the acoustic guitar. The electric guitar's invention revolutionized music, leading to new genres like rock and metal.
Then there's pop music, which evolved from rock, incorporating elements from various genres to create something appealing to a broader audience. Each of these styles didn't materialize in isolation; they were all influenced by what came before.

Film soundtracks offer another example. John Williams' iconic 'Star Wars' score was heavily influenced by earlier classical compositions, with some critics noting similarities to works by composers like Holst and Wagner. This isn't plagiarism (and damn... is it close in this instance) but an example of how artists draw from existing works to create something new and distinct.

But of course... this argument will become moot very soon anyway; as AI models are being increasingly trained by synthetic data, most of the data that 'inspires' newer models will come from... other AI models that came before them. There is no dead end here, far from it.

It only "enhances" the creative process of those unable to create at all. It does not allow them to create anything new but purely remix (see #1 above).

Again, you keep bringing this up without considering the issue of zero creative elements being introduced. As such, I will be ignoring parts of your posts that show zero creativity like bringing this up again.

You question the ability of AI-enhanced processes to create anything new, reducing it to mere remixing. But what exactly do you mean by that? The very nature of creativity involves taking inspiration from existing works and transforming them into something novel. This process, which you label as 'just remixing,' can indeed result in entirely new, original works, as I explained, and provided numerous examples of above.

No, actually, it does not. We've been pretty stalled on AI for decades.

That's you up there, with your "just around the corner" thinking. Wonder when that was actually said? Oh, wait, 1957. How is that pattern going 67 years later? They weren't even able to beat a really good chess player until 1987. They weren't able to beat world champion Kasparov until 1997, and even that has an asterisk beside it. And all these wins came from computers that were far out of reach of normal incomes. It wasn't until the mid- to late-2000s that "normal" commercial equipment could beat the masters.

There is a pattern here. That pattern is that it always takes way longer than people saying "just around the corner" predict.

And as you keep following a pattern and posting similar things (you've posted point #3 before in other forms), I will no longer respond to this rehashed point that does not stand up against actual history.

Personally, I think we may eventually get there with AI. But I think it's as likely to be 100+ years from now as it is 10. And one of the problems you have is that you keep saying "AI will _____" as if AI were one single thing. It's not. LLMs (or all generative AI) are not the entirety of AI. It's possible someone will create an AI that actually does have the power to create something new. I seriously doubt it will come out of this current crop, just like this current crop didn't come out of all the lines of research they were doing in the 50s, 60s, and 70s, the lines they were sure were going to produce human-level intelligence just around the corner.

In much the same way, your posts never seem to come up with something new. They just keep regurgitating the same lines and pounding the podium. Perhaps you've got something new just around the corner.

I'm so confident in this exponential trend that I'm willing to bet $50 we'll see AGI before the end of this decade (2029). Your view of AI taking up to 100 years seems to underestimate the pace of current advancements. LLMs today are making people feel understood, a feat that seemed impossible just 4 years ago. This isn't "just around the corner" optimism; the trend and the data back up my optimism. (If you want, we can make that bet in a fixed Bitcoin amount today, because by 2029, $50 isn't going to be as valuable as it is today.)
 
Last edited:
Upvote
-10 (0 / -10)

IncorrigibleTroll

Ars Tribunus Angusticlavius
9,228
It really isn't.

It really is. You're clearly out of your depth here. I mean, if we're just going to make assertions, I can back mine up with demonstrations and you can just pout.

Bullshit. You're just unwilling to put in the work.

Again, this is your ignorance talking. Soundclash doesn't (cannot, even) happen through a single desk. And sampling involves shitloads of work. You know who just slaps a bunch of samples together and calls it a day? Teenagers who just grabbed their first cracked copy of FL Studio. Anybody worth listening to is going to be chopping, processing, recoloring, timestretching, repitching, etc the hell out of their samples, because that's where the artistry and the pleasure both live.

Unless by "put in the work", you mean the work of sample clearance. In that case, yeah, sure, I'm not willing to wade through that nightmare. Maybe look a little more deeply into what's involved with that. It's a disastrous shitshow of a process with more veto points than Congress, and it's kind of adorably naive that you think any more than a nominal fraction of any fees would make it to the actual artist. Sampling is literally the worst fucking model to cite. Cover songs (which are not the least bit transformative and much less creative than even the laziest, most adolescent sampling) have an infinitely better licensing model than samples (compulsory, fixed rate, percentage of revenues) and I wouldn't be so incensed had you pointed at that.

They're not original or unique if they rely on other people's work that heavily.

Really now, this is nothing more than a childish circular wank, not an argument. I'd rather not engage it at all, but if you're trying to say that Death Grips sound like Jane's Addiction because of a few samples, you should probably just close your mouth on this topic forever. Dadaist poetry also "samples" heavily, and that's clearly distinct from other styles of poetry or prose. You mostly remind me of those asshat guitarists who used to troll the synthesizer forums on gearslutz to dictate what is and is not "real music", so I'll rate your opinion about as highly as theirs: somewhere well below the sub-basement. Good for you, you know 3 chords. We're all so very proud of you. Now go away.

Nobody gives a shit how you feel. You just want to be able to take from others without compensating them for it.

This argument cuts both ways, bub. I don't give a shit about how you feel about it. I've never cleared a sample and I never will, and I feel not one iota of guilt. Nor would I be the least bit offended or bothered if somebody samples one of my tunes (though I would love a heads-up so that I can check out how it was used), and I most certainly would not be sticking my hand out about it. Get your fucking toxic moneygrubbing out of my art. IP maximalists are just landlord apologists in a different context, and I have equal sympathy for both.

Then why should anyone give a rat's ass about your ability to use other people's work because you can't come up with your own stuff?

And again, the chump who knows fuckall about that of which he speaks makes hard and fast declarations. Fuck off with the condescending nonsense, you ass. Sampling a rottweiler bark from somebody's youtube video to use as one layer of several in a snare drum is coming up with my own stuff. If you're going to insist otherwise, well I don't know what to tell you other than that you're speaking out of your asshole (and if the timbre is interesting, I might sample it and use it in the mids layer of a bass). And I don't owe that person any more than a kidnapper owes the magazines they cut the ransom note letters out of.

Again, nobody cares how you believe that you're entitled to the work of others for free, and that artists don't deserve compensation.
You don't have to, we all know your only reason for this is that you don't believe artists deserve compensation for their hard work.

You speak for everyone, do you? And don't make ignorant statements about what I do or don't pay for. I can just about guarantee I put more money in the hands of artists than you do. But by all means, continue to feel incredibly smug because you give spotify $15/month.
 
Upvote
-3 (3 / -6)

IncorrigibleTroll

Ars Tribunus Angusticlavius
9,228
It helps to remember that Kamus writes most long form posts via LLM. The LLM doesn’t take rebuttals into account. It’ll never “learn” from discussions here.

Though the LLM he uses may not be the only one with that problem.

The tech is impressive, but it isn't quite good enough for that usage yet. That does go a long way to explaining why even on the occasions when Kamus makes a point I'm inclined to agree with, he makes it in a really lousy way.

You're a lawyer or at least in a law-adjacent field, right? Those dolts submitting briefs written with ChatGPT must be a great source of amusement and professional facepalming. It's good enough for fluffy business copy (the stuff that was a lot of words to say little to begin with), but not much else yet.
 
Upvote
-4 (2 / -6)
The tech is impressive, but it isn't quite good enough for that usage yet. That does go a long way to explaining why even on the occasions when Kamus makes a point I'm inclined to agree with, he makes it in a really lousy way.

You're a lawyer or at least in a law-adjacent field, right? Those dolts submitting briefs written with ChatGPT must be a great source of amusement and professional facepalming. It's good enough for fluffy business copy (the stuff that was a lot of words to say little to begin with), but not much else yet.

The reason you 'sometimes agree with me' stems from the fact that cynicism often struggles to stand up against this thing called 'reality'. Consider both of your stances on Bitcoin: despite numerous pronouncements of its demise by people like you, it remains resilient and relevant. Trends backed up by evidence always have the final say.

Vindication may take time, but when the tide turns, even the staunchest skeptics, like yourself, might find themselves adjusting their stance. Hell, it wouldn't be surprising if, by the end of this year, you finally capitulate and buy some Bitcoin, possibly through an ETF. (Which is just an IOU, and kind of defeats the purpose of getting into Bitcoin in the first place, but whatever 🤷)

Regarding the functioning of LLMs and their ability to 'remember' rebuttals: While LLMs don't have memory in the human sense, they operate within context windows. This means if a rebuttal or any piece of information is within the current discussion's context window, the LLM can access and use it for generating responses. It's not about recalling past conversations but about processing the available information within the current interaction's scope. This method allows for coherent and contextually relevant responses, as long as the discussion details remain within the LLM's accessible context. So, while it can't 'remember' past sessions, it can maintain continuity and address points effectively within an ongoing conversation.
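The sliding-budget behavior described above can be sketched in a few lines. This is a purely hypothetical illustration, not how any particular model or vendor implements it; real systems count tokens with a proper tokenizer rather than splitting on whitespace, and `fit_context` and its parameters are invented for the example:

```python
def fit_context(messages, max_tokens=50):
    """Return the most recent messages whose combined size fits the token budget."""
    window = []
    used = 0
    for msg in reversed(messages):   # walk backwards from the newest message
        cost = len(msg.split())      # crude token estimate: whitespace word count
        if used + cost > max_tokens:
            break                    # older messages fall outside the window
        window.append(msg)
        used += cost
    return list(reversed(window))    # restore chronological order

conversation = [
    "original argument " * 30,  # old post, ~60 "tokens": pushed out of the window
    "a rebuttal",               # recent: kept
    "a counter-rebuttal",       # recent: kept
]
print(fit_context(conversation, max_tokens=50))
```

The point is simply that this kind of "memory" is a sliding budget: anything pushed past the window is invisible to the model on the next response, no matter how important it was to the discussion.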

Moreover, the actual reason I often find myself repeating points, especially to individuals like hillspuck, isn't a shortcoming of the LLM's memory capabilities. Rather, it's a necessity to continuously address and counter persistent skepticism. When someone like him admits to not fully engaging with detailed responses, it becomes inevitable that my points need reiterating.
 
Upvote
-9 (0 / -9)

IncorrigibleTroll

Ars Tribunus Angusticlavius
9,228
The reason you 'sometimes agree with me' stems from the fact that cynicism often struggles to stand up against this thing called 'reality'. Consider both of your stances on Bitcoin: despite numerous pronouncements of its demise by people like you, it remains resilient and relevant. Trends backed up by evidence always have the final say.

Vindication may take time, but when the tide turns, even the staunchest skeptics, like yourself, might find themselves adjusting their stance. Hell, it wouldn't be surprising if, by the end of this year, you finally capitulate and buy some Bitcoin, possibly through an ETF. (Which is just an IOU, and kind of defeats the purpose of getting into Bitcoin in the first place, but whatever 🤷)

Regarding the functioning of LLMs and their ability to 'remember' rebuttals: While LLMs don't have memory in the human sense, they operate within context windows. This means if a rebuttal or any piece of information is within the current discussion's context window, the LLM can access and use it for generating responses. It's not about recalling past conversations but about processing the available information within the current interaction's scope. This method allows for coherent and contextually relevant responses, as long as the discussion details remain within the LLM's accessible context. So, while it can't 'remember' past sessions, it can maintain continuity and address points effectively within an ongoing conversation.

Moreover, the actual reason I often find myself repeating points, especially to individuals like hillspuck, isn't a shortcoming of the LLM's memory capabilities. Rather, it's a necessity to continuously address and counter persistent skepticism. When someone like him admits to not fully engaging with detailed responses, it becomes inevitable that my points need reiterating.

TL;DR.
 
Upvote
1 (4 / -3)
You're a lawyer or at least in a law-adjacent field, right? Those dolts submitting briefs written with ChatGPT must be a great source of amusement and professional facepalming. It's good enough for fluffy business copy (the stuff that was a lot of words to say little to begin with), but not much else yet.
I am, in fact, a lawyer.

Any attorney using it right now without massive checking is fucking insane.

I actually, perhaps surprisingly to some, find that there will be a very good use for it in law…eventually. The big pull will be synthesizing both publicly available stuff AND (for larger firms) pulling from internal copies of arguments. A few small examples of potential use:
  • add all oral arguments before a given judge that they ruled favorably to your position and use that knowledge to better phrase your arguments
  • keep a record of wins and losses and analyze different judges and courts preferences from a variety of angles to help decide strategy
  • eliminate form documents, but use AI to help keep consistent tone and general arguments on specific topics (of course, have ones that argue both sides, if you’re a large firm)
Basically, AI has potential uses. Letting it actually write a document, right now? Fuck no. Create citations and never bother to check them? That should be a one-step disbarment.
 
Upvote
7 (7 / 0)

hillspuck

Ars Scholae Palatinae
2,179
Basically, AI has potential uses. Letting it actually write a document, right now? Fuck no. Create citations and never bother to check them? That should be a one-step disbarment.
Current generative AI doesn't care about being truthful; it only aims for truthiness. Sometimes, with enough good data, that can be enough. Many, many other times it produces complete bullshit that looks plausible until someone who knows the subject reads it.

It has its uses, but the technology isn't heading in that direction. That's why it's a bit of a dead end for those kinds of uses. It's baked in. Maybe a different AI will make a lot more progress, but this ain't it.
 
Upvote
6 (6 / 0)
Current generative AI doesn't care about being truthful; it only aims for truthiness. Sometimes, with enough good data, that can be enough. Many, many other times it produces complete bullshit that looks plausible until someone who knows the subject reads it.

It has its uses, but the technology isn't heading in that direction. That's why it's a bit of a dead end for those kinds of uses. It's baked in. Maybe a different AI will make a lot more progress, but this ain't it.

It's true that generative AI, like ChatGPT, prioritizes 'truthiness' over verifiable truth due to its design. It's based on patterns in data rather than factual accuracy. However, this isn't the end of the road for AI's usefulness or potential development.

Consider a human without access to fact-checking tools. Their opinions and statements are only as good as their personal knowledge and biases. Give them access to robust fact-checking tools, and their ability to provide accurate information dramatically improves. Similarly, as AI technology evolves and integrates more sophisticated fact-checking and data verification mechanisms, its utility and accuracy in tasks like legal document preparation will significantly increase.

The current trajectory of AI technology is not a dead end but a stepping stone. As we develop better ways to integrate verifiable data sources and improve AI's understanding of accuracy, we'll see its applications in fields like law becoming more reliable and widespread. We should acknowledge AI's current limitations while working towards enhancing its capabilities, much like how we continuously improve our own methods of information verification and analysis.

EDIT: I forgot to add this: In a recent discussion on another thread, I elaborated on the idea of AI agents as personal 'champions,' similar to the champions Tyrion had in 'Game of Thrones.' These AI agents, like independent Bitcoin nodes, could work exclusively for an individual, validating information and ensuring integrity in a decentralized manner.

Just as Bitcoin nodes independently validate transactions, ensuring the integrity of financial data, these AI 'champions' could independently verify facts, cross-check information, and even counter misinformation. They would operate based on set parameters, creating a network of checks and balances.

The concept is not just theoretical. It's rooted in the same principles that have made Bitcoin's decentralized verification model successful. These AI agents could serve various roles - from fact-checking news to safeguarding financial transactions, much like a 'smart wallet' I envisioned back in 2017. They could also provide personalized education and content curation, each agent specialized in its domain yet working collaboratively to maintain honesty and accuracy.

This idea of AI agents as personal validators and helpers is part of a larger vision where AI becomes an extension of our intellect and agency, similar to how the internet transformed access to information.

TL;DR: Think of AI 'champions' as personal fact-checkers, similar to Bitcoin nodes. In Bitcoin, truth spreads faster than lies, a contrast to the usual wildfire spread of misinformation online. These AI agents could validate information for you and even share verified data with other trusted agents, following predefined rules. It's like having your own Bitcoin node, but for information integrity, ensuring accuracy and countering falsehoods effectively.
 
Last edited:
Upvote
-9 (0 / -9)
"I still think a database doing less than 10 transactions per second could run a significant part of the world financial system, so don't expect me to concede the limitations of other over-hyped technologies."

(P.S. The Lightning Network doesn't fix anything with Bitcoin, because as a secondary network the results of Lightning transactions have to be written to the Bitcoin blockchain, and a database doing less than 10 transactions per second isn't going to be able to keep up with that either.)
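For anyone wondering where the sub-10-transactions-per-second figure comes from, it falls out of simple arithmetic. A rough sketch using commonly cited round numbers (all three constants here are approximations, not protocol guarantees):

```python
# Back-of-the-envelope Bitcoin base-layer throughput.
# Assumptions (round numbers, not exact protocol limits):
#   ~1 MB of effective transaction space per block,
#   ~250 bytes for a typical simple transaction,
#   one block roughly every 600 seconds.
block_space_bytes = 1_000_000  # effective space per block
avg_tx_bytes = 250             # typical simple transaction
block_interval_s = 600         # target block time

tx_per_block = block_space_bytes // avg_tx_bytes
tx_per_second = tx_per_block / block_interval_s

print(tx_per_block)   # 4000 transactions per block
print(tx_per_second)  # well under 10 per second
```

Tweak the transaction size up or down and the result still lands in the single digits, which is the point being made above.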
 
Upvote
5 (5 / 0)
"I still think a database doing less than 10 transactions per second could run a significant part of the world financial system, so don't expect me to concede the limitations of other over-hyped technologies."

(P.S. The Lightning Network doesn't fix anything with Bitcoin, because as a secondary network the results of Lightning transactions have to be written to the Bitcoin blockchain, and a database doing less than 10 transactions per second isn't going to be able to keep up with that either.)


Your take on the Lightning Network is like criticizing Visa for not fixing the entire financial system's speed issues. LN channels can stay open indefinitely. And when LN does settle, it’s just one transaction on-chain, no matter how many occurred off-chain.

I repeat: channels can stay open indefinitely. Years, decades, centuries could go by without a channel closing. But if for whatever reason they close, potentially millions of transactions would be settled in just one on-chain transaction... how is that inefficient?
 
Upvote
-7 (0 / -7)
Your take on the Lightning Network is like criticizing Visa for not fixing the entire financial system's speed issues. LN channels can stay open indefinitely. And when LN does settle, it’s just one transaction on-chain, no matter how many occurred off-chain.

I repeat: channels can stay open indefinitely. Years, decades, centuries could go by without a channel closing. But if for whatever reason they close, potentially millions of transactions would be settled in just one on-chain transaction... how is that inefficient?

The only way to transfer bitcoin between different Lightning Network channels is to settle those transactions on-chain, so your coins are committed to a channel until it decides to reconcile them.

If you're only using a single Lightning channel and don't need on-ramps or off-ramps to Bitcoin, this isn't an issue - but if all the people you want to exchange coins with are on the same channel, and that channel isn't reconciled with Bitcoin for years, it's a distinct currency at that point.

Proof-of-work distributed consensus systems achieve the consensus because astronomical time and effort are spent verifying transactions. If you want fast transactions, you can't verify the system with proof-of-work. It's physically impossible. A Lightning Network channel creator is pinky-swearing that at some point in the future they'll reconcile the transactions to the Blockchain.
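For what it's worth, both sides of this exchange are describing the same underlying mechanic. A toy sketch of it (illustrative only; real Lightning channels use commitment transactions, HTLCs, and penalty mechanisms, none of which are modeled here):

```python
# Toy two-party payment channel: off-chain updates are just a shared
# balance sheet; only open and close touch the chain.
class ToyChannel:
    def __init__(self, alice_funds, bob_funds):
        # Opening the channel is one on-chain transaction locking funds.
        self.balances = {"alice": alice_funds, "bob": bob_funds}
        self.offchain_updates = 0

    def pay(self, sender, receiver, amount):
        # Off-chain: both parties sign a new balance sheet; nothing
        # is written to the blockchain here.
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.offchain_updates += 1

    def close(self):
        # Closing settles the *latest* balance sheet in one on-chain
        # transaction, no matter how many updates occurred.
        return dict(self.balances)

ch = ToyChannel(alice_funds=100_000, bob_funds=50_000)
for _ in range(1_000):
    ch.pay("alice", "bob", 10)
final = ch.close()
print(ch.offchain_updates)  # 1000 updates, zero on-chain writes
print(final)                # settled in a single on-chain transaction
```

The sketch captures the disagreement neatly: updates are cheap precisely because nothing touches the chain, which also means the funds stay locked and the final balances are only as good as the eventual settlement.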
 
Upvote
5 (5 / 0)
The only way to transfer bitcoin between different Lightning Network channels is to settle those transactions on-chain, so your coins are committed to a channel until it decides to reconcile them.

If you're only using a single Lightning channel and don't need on-ramps or off-ramps to Bitcoin, this isn't an issue - but if all the people you want to exchange coins with are on the same channel, and that channel isn't reconciled with Bitcoin for years, it's a distinct currency at that point.

Proof-of-work distributed consensus systems achieve the consensus because astronomical time and effort are spent verifying transactions. If you want fast transactions, you can't verify the system with proof-of-work. It's physically impossible. A Lightning Network channel creator is pinky-swearing that at some point in the future they'll reconcile the transactions to the Blockchain.


let’s clarify a few points about the LN before we get back on track:

1. LN Transaction Settlement: Your view oversimplifies LN's mechanics. Yes, coins in a channel are committed until the channel closes, but that doesn’t make it a 'distinct currency.' LN is a layer on top of Bitcoin, not a separate entity. Transactions within the channel are real Bitcoin transactions (these are not just IOUs like in traditional financial systems), just not recorded on the blockchain until the channel closes. This is a feature, not a flaw, allowing for faster and more efficient transactions.

2. Proof-of-Work (PoW) and Speed: PoW is indeed resource-intensive, but it’s crucial for Bitcoin’s decentralized security. LN doesn’t replace PoW; it complements it by handling transactions that don’t need the same level of security but benefit from speed. The 'pinky swear' analogy underestimates the robustness of LN’s smart contracts, which ensure that when a channel closes, it settles accurately on the blockchain.

3. Off-topic Tangent: While your points on LN are worth discussing, they’re a tangent from the original topic. My analogy was about Bitcoin nodes, not LN. The comparison with AI agents was to illustrate independent verification, similar to how nodes verify transactions. These agents, like nodes, could serve as personal validators of information, bringing us closer to an era where truth propagates as efficiently as transactions on the Bitcoin network.

So, while your LN "concerns" are noted, they veer off from the core discussion about AI agents and Bitcoin nodes. Let's stay focused on how these technologies, in their respective domains, contribute to a future where verification and truth are decentralized and reliable.

The reason I got into this in the first place, is because hillspuck is under the impression that LLMs will never be able to leverage tools that boost their trustworthiness.
 
Upvote
-6 (0 / -6)
real Bitcoin transactions (these are not just IOUs like in traditional financial systems), just not recorded on the blockchain
lol

Hillspuck's got a good point. Bolting on trustworthiness to a system designed without any concept of trust is a terrible strategy unlikely to work, just like attempting to bolt efficiency on to an inefficient-by-design system didn't work for the Lightning Network.
 
Upvote
7 (7 / 0)
lol

Hillspuck's got a good point. Bolting on trustworthiness to a system designed without any concept of trust is a terrible strategy unlikely to work, just like attempting to bolt efficiency on to an inefficient-by-design system didn't work for the Lightning Network.


🙄 Alright, let's unpack your response:

1. Trustworthiness in LLMs: The idea isn't about 'bolting on' trustworthiness to LLMs. It's about enhancing their capabilities with tools that enable better fact-checking and information validation. Just as providing a human with reliable sources and verification tools enhances their ability to discern truth, equipping an LLM with similar resources can significantly boost its trustworthiness and utility. This isn't about retrofitting an ill-suited system; it's about evolving the system to meet new standards of reliability.

2. Efficiency in the Lightning Network: Your analogy between LLMs and the Lightning Network (LN) misses a key point. LN was designed to address specific issues of efficiency in Bitcoin transactions. It's not a haphazard bolt-on but a strategic layer that complements Bitcoin's foundational technology. The LN's role is to improve transaction speed and scalability, not to overhaul the underlying principles of Bitcoin.

3. Hillspuck's Point: You mention that hillspuck has a 'good point,' but let's clarify that he hasn't yet responded to the argument about enhancing LLMs. The conversation is about the potential for LLMs to evolve and become more trustworthy, not about their current limitations.

So, the crux of the matter is the evolution of technology, be it LLMs or the LN. It's about developing these systems in a way that addresses their initial limitations and expands their capabilities. In the case of LLMs, providing them with accurate, verifiable data sources can transform them into powerful tools for fact-checking and information analysis, much like giving a human access to a well-stocked library or a reliable internet connection enhances their knowledge and understanding.
 
Upvote
-8 (0 / -8)

raffivegas

Smack-Fu Master, in training
2
wow, just read the entire lawsuit and found the smoking gun. What a perfect example of a misleading headline. Everything in the lawsuit confirms that the routine was written by AI, and not by any humans at all. Here's the part which makes the headline clickbaity and totally misleading: PAGE 13 LINE 7 of the lawsuit: "Assuming Defendants’ representation that the Dudesy Special was created by artificial intelligence is accurate, the result was not created by “listening.” AI models do not “listen”; they apply algorithms to data inputs in order to generate an output. Here, the data input was George Carlin’s entire corpus of copyrighted works." So the headline saying "George Carlin was human-written" is only technically true because it was originally written by George Carlin himself. Neither the YouTube channel owners nor the Dudesy AI company they outsourced to wrote the material; they used George's original material and fed it to an AI in written form. So the stand-up routine is in fact AI generated. Too bad the damage is done by the misleading title. Watch this get soft shadow-banned.
 
Upvote
-6 (0 / -6)
wow, just read the entire lawsuit and found the smoking gun. What a perfect example of a misleading headline. Everything in the lawsuit confirms that the routine was written by AI, and not by any humans at all. Here's the part which makes the headline clickbaity and totally misleading: PAGE 13 LINE 7 of the lawsuit: "Assuming Defendants’ representation that the Dudesy Special was created by artificial intelligence is accurate, the result was not created by “listening.” AI models do not “listen”; they apply algorithms to data inputs in order to generate an output. Here, the data input was George Carlin’s entire corpus of copyrighted works." So the headline saying "George Carlin was human-written" is only technically true because it was originally written by George Carlin himself. Neither the YouTube channel owners nor the Dudesy AI company they outsourced to wrote the material; they used George's original material and fed it to an AI in written form. So the stand-up routine is in fact AI generated. Too bad the damage is done by the misleading title. Watch this get soft shadow-banned.

You should probably read articles before you comment on them.

Despite the presentation as an AI creation, there was a good deal of evidence that the Dudesy podcast and the special itself were not actually written by an AI, as Ars laid out in detail this week. And in the wake of this lawsuit, a representative for Dudesy host Will Sasso admitted as much to The New York Times.


“It’s a fictional podcast character created by two human beings, Will Sasso and Chad Kultgen,” spokeswoman Danielle Del told the newspaper. “The YouTube video ‘I’m Glad I’m Dead’ was completely written by Chad Kultgen."
 
Upvote
5 (5 / 0)

hillspuck

Ars Scholae Palatinae
2,179
it's fake. Look up the "representative", do some more digging. It's literally not in the lawsuit.
You know the original lawsuit was written and filed before the hosts admitted it was human-written, right? And that the article has been updated with links showing that? Nah, probably not.

If you have some actual citable facts, please do link to them. No, a random guy's reddit comment is not a citable fact. You know what's a citable fact? A spokesperson for the show talking to the New York Times. Now that's quite citable.
 
Upvote
3 (3 / 0)

raffiscousinbob

Smack-Fu Master, in training
1
You know the original lawsuit was written and filed before the hosts admitted it was human-written, right? And that the article has been updated with links showing that? Nah, probably not.

If you have some actual citable facts, please do link to them. No, a random guy's reddit comment is not a citable fact. You know what's a citable fact? A spokesperson for the show talking to the New York Times. Now that's quite citable.
The irony and hypocrisy of what you wrote are palpable. Where are the author's citable facts other than "he said so"? Is there an official document or an actual video or recording of him saying so, or did the made up spokesperson say it was true? If you dig a little deeper, you'll see the same copy/paste article written by a number of journos. However, there is some discrepancy when they get to the part we're talking about. Some articles say the "rep" sent them an email disclosing Kultgen's confession, while other articles just say "the rep told us", and now you're citing an article that says Kultgen himself said it (no actual evidence provided btw other than "trust me bro"). It would be in Kultgen's best interest to say he wrote it all himself, from a legal standpoint, so there's also a conflict of interest to top it off. Anyway, I'm not mad at you, but you haven't changed my mind; I still think the headline stinks and the article as a whole is low effort. ChatGPT could've probably done better.

-raffiscousinbob

p.s. the easiest way to win an argument is to silence your critics.
 
Upvote
-5 (0 / -5)
I am, in fact, a lawyer.

Any attorney using it right now without massive checking is fucking insane.

I actually, perhaps surprisingly to some, find that there will be a very good use for it in law…eventually. The big pull will be synthesizing both publicly available material AND (for larger firms) internal copies of arguments. A few small examples of potential use:
  • ingest all oral arguments before a given judge where they ruled favorably on your position, and use that knowledge to better phrase your arguments
  • keep a record of wins and losses and analyze different judges' and courts' preferences from a variety of angles to help decide strategy
  • eliminate form documents, but use AI to help keep a consistent tone and general arguments on specific topics (of course, have ones that argue both sides, if you’re a large firm)
Basically, AI has potential uses. Letting it actually write a document, right now? Fuck no. Create citations and never bother to check them? That should be a one-step disbarment.

This highlights something I've run headlong into for a couple of years now. There is a cast-iron belief in the corporate world that any and every human input can and should be 'optimized' away in favor of template solutions covering just about every aspect of commerce. I've so far seen no fewer than three consolidated attempts to replace the entire contracting process in my work with automation. It always fails to deliver and is then eventually replaced with a new attempt. The end result is invariably that, rather than accept that a computerized template set can't replace legal and contract people, the business model is forcibly changed to the point where raw templates will do the job.

Meaning the templates, having to cover more potential ground, become more complex and harder to understand while the business itself becomes much less flexible. I can't imagine it saves very much money.

But AI has become the holy grail, intended to replace mere fleshbags, sayeth all the dreamers in upper management everywhere, looking at the lure of personnel costs reduced to zero. So no matter how bad the fit gets, the push to keep trying is persistently massive.

I'm not surprised to see bad lawyers trying to find ways to copy corporations. Why spend a working week checking legal paperwork and formulating a response when you can tell a computer to read it and respond, and cursorily check the result?

This worries me. Looking at the 'I'm Glad I'm Dead' video...it's off. Carlin's voice, some of his mannerisms, much of his brand of comedy. Squint a little...it's about 80% there.
Which might be good enough for a laugh but sure as hell won't be enough for literally anything in business, let alone law. Yet that's where we keep seeing the attempts to fit LLMs.
 
Upvote
0 (0 / 0)
it's fake. Look up the "representative", do some more digging. It's literally not in the lawsuit.
I did look up the representative. She's Will Sasso's publicist.

And of course it's not in the lawsuit. The statement came after the lawsuit was filed, which you would know if you read the article you were commenting on.
 
Last edited:
Upvote
3 (3 / 0)

hillspuck

Ars Scholae Palatinae
2,179
Where are the author's citable facts other than "he said so"?
See the NYT link. Where is your citable fact other than "I said so"?

Is there an official document or an actual video or recording of him saying so, or did the made up spokesperson say it was true?
You mean the "made up spokesperson" that he listed as his manager before all this? Or who executive produced his movie from early last year and is listed in this article from the time as his "longtime manager"?

Some articles say the "rep" sent them an email disclosing Kultgen's confession, while other articles just say "the rep told us"
Are you really so ignorant you do not understand that written statements are often referred to as "______ said" or "____ told"?

and now you're citing an article that says Kultgen himself said it (no actual evidence provided btw other than "trust me bro").
He said it through a spokesperson. You don't understand what a spokesperson is, do you?

Plus, it's been long enough that if someone were impersonating their spokesperson, they would have released a statement contradicting it by now.

It would be in Kultgen's best interest to say he wrote it all himself, from a legal standpoint
It's not typically in anyone's best interest to commit perjury. Anything he says outside of court will be meaningless to his interests in court, so there's really no point in lying.

Anyway, I'm not mad at you, but you haven't changed my mind;
Of course I haven't. You didn't use logic or proof to arrive at your position, so why should logic and proof change your mind now?

I still think the headline stinks and the article as a whole is low effort. ChatGPT could've probably done better.
This from a person who couldn't take the minute it took me to google the spokesperson's name and confirm that there's plenty of documentation she exists and is who she says she is.

-raffiscousinbob

p.s. the easiest way to win an argument is to silence your critics.
And the easiest way to get silenced is to create the most obvious sockpuppet ever.

This is almost making me miss Kamus.
 
Upvote
3 (3 / 0)

definitelynotraffi

Smack-Fu Master, in training
1
And the easiest way to get silenced is to create the most obvious sockpuppet ever.
the handle was supposed to be obvious and humorous. It's obviously me. Also:
ChatGPT
As of my last knowledge update in January 2022, here are a couple of examples of retractions and corrections made by The New York Times:

  1. WMD Reporting (2003): The New York Times, along with several other news outlets, faced criticism for its reporting on weapons of mass destruction (WMDs) in Iraq leading up to the U.S. invasion in 2003. The newspaper published articles that relied on faulty intelligence, and these reports were later discredited when it became clear that Iraq did not possess the WMDs that were claimed.
  2. Caliphate Podcast (2018): The New York Times launched a podcast titled "Caliphate" in 2018, which focused on the experiences of a Canadian man who claimed to have been an ISIS executioner. However, in 2020, The New York Times retracted the central premise of the podcast, stating that the main subject had fabricated his story. The retraction raised questions about the editorial processes involved in fact-checking and vetting the podcast content.
  3. Nikole Hannah-Jones' 1619 Project (2019): The New York Times published the 1619 Project, a multimedia initiative led by journalist Nikole Hannah-Jones, which aimed to reframe the history of the United States by centering it around the consequences of slavery. However, the project faced criticism and fact-checking from some historians and scholars. In response, The New York Times issued corrections and clarifications to certain aspects of the project, addressing historical inaccuracies and disputes.
  4. Climate Change Article (2020): In January 2020, The New York Times published an article with the headline "Australia’s Fires Reflect Its Arid Conditions, but Worsened by Climate Change." The article faced criticism for suggesting a direct link between climate change and the severity of the Australian bushfires without sufficient evidence. The newspaper later issued a correction, acknowledging that the article did not meet its standards for accuracy.
  5. Russian Bounty Story (2020): In June 2020, The New York Times published an article that reported on intelligence assessments claiming that Russia had offered bounties to Taliban-linked militants for killing U.S. and coalition troops in Afghanistan. The story faced criticism for relying on anonymous sources, and subsequent investigations cast doubt on the veracity of the intelligence. The New York Times later issued a correction, stating that the initial article should have included more skepticism about the intelligence reports.
  6. Misleading Kavanaugh Article (2019): In September 2019, The New York Times published an opinion article about Supreme Court Justice Brett Kavanaugh, which originally included a misleading excerpt. The excerpt described an incident during Kavanaugh's college years, but the information provided was later corrected as it lacked proper context. The Times updated the article and issued a correction, acknowledging the oversight.

It's easy to write stuff as fact and then later admit it's fiction and offer an apology. Anyway, I didn't vet any of the stuff ChatGPT gave me above. I figured I'd spend as much time on this response as was spent on the article we're discussing.
 
Upvote
-5 (0 / -5)
the handle was supposed to be obvious and humorous. It's obviously me. Also:
ChatGPT
As of my last knowledge update in January 2022, here are a couple of examples of retractions and corrections made by The New York Times:

  1. WMD Reporting (2003): The New York Times, along with several other news outlets, faced criticism for its reporting on weapons of mass destruction (WMDs) in Iraq leading up to the U.S. invasion in 2003. The newspaper published articles that relied on faulty intelligence, and these reports were later discredited when it became clear that Iraq did not possess the WMDs that were claimed.
  2. Caliphate Podcast (2018): The New York Times launched a podcast titled "Caliphate" in 2018, which focused on the experiences of a Canadian man who claimed to have been an ISIS executioner. However, in 2020, The New York Times retracted the central premise of the podcast, stating that the main subject had fabricated his story. The retraction raised questions about the editorial processes involved in fact-checking and vetting the podcast content.
  3. Nikole Hannah-Jones' 1619 Project (2019): The New York Times published the 1619 Project, a multimedia initiative led by journalist Nikole Hannah-Jones, which aimed to reframe the history of the United States by centering it around the consequences of slavery. However, the project faced criticism and fact-checking from some historians and scholars. In response, The New York Times issued corrections and clarifications to certain aspects of the project, addressing historical inaccuracies and disputes.
  4. Climate Change Article (2020): In January 2020, The New York Times published an article with the headline "Australia’s Fires Reflect Its Arid Conditions, but Worsened by Climate Change." The article faced criticism for suggesting a direct link between climate change and the severity of the Australian bushfires without sufficient evidence. The newspaper later issued a correction, acknowledging that the article did not meet its standards for accuracy.
  5. Russian Bounty Story (2020): In June 2020, The New York Times published an article that reported on intelligence assessments claiming that Russia had offered bounties to Taliban-linked militants for killing U.S. and coalition troops in Afghanistan. The story faced criticism for relying on anonymous sources, and subsequent investigations cast doubt on the veracity of the intelligence. The New York Times later issued a correction, stating that the initial article should have included more skepticism about the intelligence reports.
  6. Misleading Kavanaugh Article (2019): In September 2019, The New York Times published an opinion article about Supreme Court Justice Brett Kavanaugh, which originally included a misleading excerpt. The excerpt described an incident during Kavanaugh's college years, but the information provided was later corrected as it lacked proper context. The Times updated the article and issued a correction, acknowledging the oversight.

It's easy to write stuff as fact and then later admit it's fiction and offer an apology. Anyway, I didn't vet any of the stuff ChatGPT gave me above. I figured I'd spend as much time on this response as was spent on the article we're discussing.
Kind of like how it's easy to fabricate a story of a podcast written by an AI, and then a "comedy special" written and performed by said AI, and then later admit the AI is a fiction?

Seriously, this refutes your assertion at least as much as it supports it. Hell, one of your examples is literally about a podcast based around a premise that was later revealed to be a hoax, much like Dudesy.
 
Last edited:
Upvote
3 (3 / 0)