Suit alleges copyright infringement and illegal use of Carlin's name and likeness.
See full article...
I get where you're coming from on the Black Mirror episode. Personally, I thought it was one of the best in the series. It explores the idea of 'resurrecting' someone using the data they've left behind, which, while it sounds like sci-fi, isn't entirely out of the realm of possibility with advancing technology.
Ray Kurzweil has even talked about a similar goal: 'resurrecting' his father using collected data. This idea sounded far-fetched to many back in 2010 (it only took me a short time to 'get it', though), but as technology evolves, the concept becomes more conceivable. It's like how .par files work in data recovery: they reconstruct missing parts of a file from the remaining data. Similarly, with enough data about a person, you could theoretically recreate a semblance of them.
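To make the analogy concrete, here's a toy sketch of parity-based recovery. Real .par/.par2 files use Reed-Solomon codes over many recovery blocks, but simple XOR parity shows the same idea: redundant data lets you rebuild a missing piece from what survives. (The function names and the example blocks are made up for illustration.)

```python
def make_parity(blocks):
    """XOR equal-length data blocks together into one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(blocks_with_gap, parity):
    """Rebuild the single missing block (marked None) from the rest."""
    missing = bytearray(parity)
    for block in blocks_with_gap:
        if block is not None:
            for i, b in enumerate(block):
                missing[i] ^= b
    return bytes(missing)

blocks = [b"GEOR", b"GE C", b"ARLI"]
parity = make_parity(blocks)

# Lose the middle block, then reconstruct it from the survivors + parity.
damaged = [blocks[0], None, blocks[2]]
print(recover(damaged, parity))  # b'GE C'
```

XOR parity can only repair one missing block per parity block; PAR2's Reed-Solomon scheme generalizes this to recover several, which is what makes it practical for damaged downloads.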
While it's not exactly the same (especially recreating their experiences), this concept is fascinating. It shows the potential of technology to bridge gaps we once thought unbridgeable. Sure, it's a complex and sensitive topic, but it's undeniably intriguing to think about the possibilities.
IMO? Cause it's better and may someday be indistinguishable.
Ars posters have drawn a line about which jobs AI can take, and it seems them pictures and arts stuffs are where the line has been drawn. Everyone else can be outta a job, but them arteests can't be touched.
Sadly, that's the truth. The OP is about an hour-long presentation by an AI trained to sound like George Carlin, which does so while being slightly, but noticeably, off.
Tomorrow's OP will be about Russians and Republicans crafting similar disinformation on social media in order to make their political and military adversaries look like monsters, clowns, or both.
And you just know that the third of the US voting population which went all-in on Covid parties and Ivermectin to prove their loyalty to Dear Leader are going to eat that shit up.
Yeah, the iconic (alas) dumb surprised face on a YouTube thumbnail is a good indicator that I can block the channel and never watch any of the shit they do.
It's mostly Stephanie Sterling, Yong Yea, and some local creators who are not dumb.
Of course. Now consider a future where you want to find a given YouTube clip of 'politician A' saying something worth hearing, but, since that spiel was about gun control, the NRA has flooded YouTube with fake reels where he advocates the use of baby blood for rejuvenation, or confesses to being aroused by young children, et cetera.
Not to mention all the flicks we can expect where some person, conspicuously identifiable as part of a given minority group, 'accidentally' spills the story of their organized part in, oh, grooming, the Great Replacement, kidnapping white women... you get it.
What AI can do is flood the market with bullshit you can't quickly separate from the real thing. Maybe that'll just screw YouTube over, but for a time at least I predict we'll be seeing a whole lot of shit like this.
I get the temptation not to let a troll get away with trolling, but every quote surfaces the message. I recommend ignoring and/or downvoting.
It's the "other than" part - "other than distribute copies or derived works."
First sale doctrine says that if you buy a CD of Metallica music you can sell that CD to someone else. It doesn't let you sell copies of the CD to someone else, nor does it let you rip individual tracks off and sell them to someone else.
What's more, it also doesn't allow you to take those tracks and transform them into a derived work, like incorporating them as a sample into something you're creating (which is why samples are licensed works).
Generative AI is all about sampling and creating derived works. The open question is whether it's being done in a way that will be considered "fair use", or in a way that model makers will ultimately need to pay for, the same way musicians have to license samples to create new works.
Why do you feel you're entitled to use the work of others without their permission?
It really isn't.
Bullshit. You're just unwilling to put in the work.
They're not original or unique if they rely on other people's work that heavily.
Nobody gives a shit how you feel. You just want to be able to take from others without compensating them for it.
Then why should anyone give a rat's ass about your ability to use other people's work because you can't come up with your own stuff?
Again, nobody cares how you believe that you're entitled to the work of others for free, and that artists don't deserve compensation.
You don't have to, we all know your only reason for this is that you don't believe artists deserve compensation for their hard work.
It helps to remember that Kamus writes most long-form posts via an LLM. The LLM doesn't take rebuttals into account; it'll never "learn" from discussions here.
Though the LLM he uses may not be the only one with that problem.
The reason you 'sometimes agree with me' stems from the fact that cynicism often struggles to stand up against this thing called 'reality'. Consider both of your stances on Bitcoin: despite numerous pronouncements of its demise by people like you, it remains resilient and relevant. Trends backed up by evidence always have the final say.
Vindication may take time, but when the tide turns, even the staunchest skeptics, like yourself, might find themselves adjusting their stance. Hell, it wouldn't be surprising if, by the end of this year, you finally capitulate and buy some Bitcoin, possibly through an ETF (which is just an IOU, and kind of defeats the purpose of getting into Bitcoin in the first place, but whatever).
Regarding the functioning of LLMs and their ability to 'remember' rebuttals: while LLMs don't have memory in the human sense, they operate within context windows. This means that if a rebuttal, or any piece of information, is within the current discussion's context window, the LLM can access and use it when generating responses. It's not about recalling past conversations but about processing the available information within the current interaction's scope. This allows for coherent and contextually relevant responses, as long as the discussion details remain within the LLM's accessible context. So, while it can't 'remember' past sessions, it can maintain continuity and address points effectively within an ongoing conversation.
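A toy sketch of what 'operating within a context window' means in practice: the model only sees the most recent history that fits a fixed token budget, so an earlier rebuttal stays 'visible' only while newer text hasn't pushed it out. (Everything here is illustrative: real LLMs count subword tokens with a tokenizer, not whitespace-separated words, and the message contents are made up.)

```python
def build_context(messages, max_tokens=50):
    """Keep the newest messages whose combined length fits the budget.

    Token counting is faked as whitespace word count; real LLMs use
    subword tokenizers, but the windowing logic is the same idea.
    """
    window, used = [], 0
    for msg in reversed(messages):          # walk newest-first
        cost = len(msg.split())
        if used + cost > max_tokens:
            break                           # everything older is dropped
        window.append(msg)
        used += cost
    return list(reversed(window))           # restore chronological order

history = [
    "user: original claim about Bitcoin",
    "critic: detailed rebuttal with evidence",
    "user: " + "long reply " * 23,          # 47 words, nearly fills the budget
]
ctx = build_context(history, max_tokens=50)
# Only the last message fits; the rebuttal has fallen out of the window.
print(len(ctx))  # 1
```

Once the rebuttal falls outside the window, the model generates as if it never happened, which is why 'it can access it' only holds while the discussion stays short enough.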
Moreover, the actual reason I often find myself repeating points, especially to individuals like hillspuck, isn't a shortcoming of the LLM's memory capabilities. Rather, it's a necessity to continuously address and counter persistent skepticism. When someone like him admits to not fully engaging with detailed responses, it becomes inevitable that my points need reiterating.