Who are you trying to exempt? Musk? Thiel, who literally said "I no longer believe that freedom and democracy are compatible" (and who destroyed Gawker Media when it was actively investigating Epstein's social circle, I might add)? They are active participants and supporters.

If that's all they were I would feel a hell of a lot better, but don't you fucking dare pin what happened in Minnesota on the tech bros.
Seriously, pull your head out of your ass and recognize evil where it actually exists.
I guess more to the point directly: Actual good uses of AI and ML are being tested and utilized, it's just that LLMs don't actually have good uses without massive downsides. The problem is that most applications where this tech would be useful aren't tolerant of the error rates that it comes with, or are sensitive to the concept of introducing it as a proxy. A lot of businesses are essentially just throwing shit at the wall to see what sticks, and unfortunately businesses are run by starry-eyed "visionaries" whose first order of business is to gaslight themselves into thinking they have a good idea. The people who are pushing out these crappy apps and uses, well, they are actually trying to get people dependent on the tech, you're completely right on that point. However, they don't actually know that their ideas are trash because when you go through the financial incentives of everyone involved, it turns out that all of the people who build the project are being paid to follow through with stupid ideas and all of the investors who suffer from the idea's stupid failure are chasing the possibility of an "industry disrupting" jackpot.

A better example wouldn't change the basic point. What appear to be easier, faster, good for humanity, and near term ROI applications are not gathering up the enormous investment dollars that mass market LLMs are getting. AI can be profitable in the here and now, but that is not the focus of the 'I need 10 GW of processing' crowd. They are chasing the scaling game, like social media has. Like the legions of addicted social media doom scrollers, those who become dependent on AI do not have the kind of future I look forward to. In short, too much magical thinking (solves all problems!), not enough practical payoff. I'm not anti-AI, there are lots of good uses; it's getting subverted for crappy uses that breed dependence.
Pretty insightful to realize that 6 years ago. It's not surprising in hindsight, since devs have so much relevant data to feed it. Most devs probably thought that since they're the ones that know how to build the systems, they'd always be needed to do that even as they got more sophisticated, never thinking their role could be automated as early or easily, which isn't necessarily hubris, but a lack of foresight. I figured it would happen eventually, but I must admit things seem to be progressing faster now than I would have guessed years ago.

It was maybe six years ago that an SVP at Amazon called me ridiculous for saying developer jobs would be some of the easiest to replace with AI. He, and an overwhelming percentage of the developers I knew, insisted devs had the only jobs that would be safe. The unjustifiable self-importance, hubris, and belief in personal specialness that came to pervade the industry in recent years is staggering.
When you are put out to pasture and there are no employment options for you, I'll fight for you whether you want me to or not because I believe our species has only made it as far as we have by taking care of each other. Social Darwinism is complete and utter bullshit.
I'll point out there's a high probability that assuming you will be able to adapt is also either hubris or lack of foresight.

My options are retirement or adaptation.
Well bully for you. Not everyone is so resilient. Nor apparently so cavalier about being made redundant.

I'm in software development. AI is coming for my job, so I'd better adapt. I don't expect to be spared, or for anyone to fight for me, or take pity on me either.
You seem to think people aren't very adaptable. Maybe I won't manage to do it well, or maybe I'll become much more productive - we'll see. But what good would getting angry and complaining about it do? BTW, I may not be as old as you think, but I'm certainly not young.

I'll point out there's a high probability that assuming you will be able to adapt is also either hubris or lack of foresight.
Additionally, if you're of an age where retirement is an option, and you don't see the people with all the money treating this as the greatest "rent seeking" opportunity of all time, you really haven't been paying attention to the trajectory of economics in this country since Reagan.
No, just the people who are quite sure they're the "special" ones that won't get rat-fucked; even if they offer a token, half-hearted admission that maybe they're over their skis. As far as these companies not "rent-seeking", it's a completely nascent industry that's already shutting down state level regulations under the guise of not stifling innovation, though it's very obviously about self enrichment. They aren't at "consolidated monopoly" levels yet, but they're certainly looking like they're trying to speed run the achievement.

You seem to think people aren't very adaptable.
Now you're throwing lame snarky insults. I don't think I'm so special, nor am I certain how well I'll be able to adapt, especially as things keep changing - but I wouldn't say I'm "over my skis" either. You're putting words in my mouth. What I am saying is that I'm not on the outside looking in, I have a stake in this too.

No, just the people who are quite sure they're the "special" ones that won't get rat-fucked; even if they offer a token, half-hearted admission that maybe they're over their skis. As far as these companies not "rent-seeking", it's a completely nascent industry that's already shutting down state level regulations under the guise of not stifling innovation, though it's very obviously about self enrichment. They aren't at "consolidated monopoly" levels yet, but they're certainly looking like they're trying to speed run the achievement.
To be fair to the SVP I mentioned, he thought it was coming but would be 20 years out and the last jobs to go, while I thought it was 5 and among the first. I had the advantage of an adjacent perspective instead of an insider's one, I guess. Plus I'd done a project where I interviewed several top people in natural language processing, and it occurred to me that most of what they described as the toughest challenges are significantly reduced in computer languages. Limited vocabulary, strict definitions, clear syntax...

Pretty insightful to realize that 6 years ago. It's not surprising in hindsight, since devs have so much relevant data to feed it. Most devs probably thought that since they're the ones that know how to build the systems, they'd always be needed to do that even as they got more sophisticated, never thinking their role could be automated as early or easily, which isn't necessarily hubris, but a lack of foresight. I figured it would happen eventually, but I must admit things seem to be progressing faster now than I would have guessed years ago.
My options are retirement or adaptation. I'm lucky that retirement is an option, but I would be bored, so adaptation it is, to some degree at least. The thing to fight for is not to keep jobs as they are, but to make sure that large chunks of people aren't completely left behind. You're making the assumption that I have no desire for people to take care of each other. I just don't think the way to do it is by trying to fight inevitable technological change. For now, at an individual level, it's more productive to try to adapt than to complain or fight. Ultimately we'll need to address the larger issues of disruption as a society. If we do it right we'll end up in a much better place. If we just rage against it we'll be steamrolled.
Curious how it's the incumbents lobbying to stop these regulations, especially the ones trying to ensure any kind of safety built into their products. We all learned a valuable lesson when it came to social media, and turning a blind eye to what they were doing. Saying it's "unreasonable" for states to have an interest in the safety of something that is likely 10 to 100 times as disruptive is disingenuous at best and tech-bro boot licking at its worst. I'll go ahead and give you the benefit of assuming the former, since you appear sensitive to "lame insults".

They would also be most difficult for startups to comply with, and would actually give the large incumbents a regulatory advantage since they have more resources to deal with it, painful as it would be.
Not sensitive, just calling them out.

Curious how it's the incumbents lobbying to stop these regulations, especially the ones trying to ensure any kind of safety built into their products. We all learned a valuable lesson when it came to social media, and turning a blind eye to what they were doing. Saying it's "unreasonable" for states to have an interest in the safety of something that is likely 10 to 100 times as disruptive is disingenuous at best and tech-bro boot licking at its worst. I'll go ahead and give you the benefit of assuming the former, since you appear sensitive to "lame insults".
Why can't machines be compared to people? Why is it ok for people to learn from others, but not for machines?
Interesting!

To be fair to the SVP I mentioned, he thought it was coming but would be 20 years out and the last jobs to go, while I thought it was 5 and among the first. I had the advantage of an adjacent perspective instead of an insider's one, I guess. Plus I'd done a project where I interviewed several top people in natural language processing, and it occurred to me that most of what they described as the toughest challenges are significantly reduced in computer languages. Limited vocabulary, strict definitions, clear syntax...
If the small number of people you referred to chose not to pursue such promising technology, money would have followed others that did. People like Demis Hassabis would have had to choose not to pursue it at all, or at least not enough to make it clear how promising it was. He's an example of someone who genuinely seems to have good intentions for the technology, and is in a position to guide it responsibly. He does have pressure to create products, of course, and has to balance that with safety and job displacement. In the scheme of things, the timing may not have been inevitable, but on a planet with billions of people free to pursue their interests with advancing computer technology, the eventual creation of powerful AI capable of replacing a lot of human labor probably is inevitable.

Anyway, you describe "inevitable technological change" as if it really were inevitable, but it's not. It's a path chosen by a very small number of people made extremely powerful by the amount of wealth they control. They could easily choose other paths, but frankly the SV-centered tech world hasn't had a truly new idea in quite a long time, so when this presented itself they all rushed to it, motivated not by solving real pressing needs but by the need to keep the quarterly earnings reports up so their stock doesn't tank. A good percentage of the layoffs aren't because those people aren't doing useful work anymore but because cutting salaries offsets the expense of building out data centers that won't be finished before the mania loses steam.
I'm the same as Dave, software engineer. I feel exactly the same way as him, I have to adapt or I'll be out. I've done it multiple times in the 30 years I've been working. 4GL, Thin Client, Fat Client, Thin Client again, Fat Client again, 16 bit, 32 bit, 64 bit, SQL, no-SQL, that internet thing, text, web, HTML, CSS, JavaScript, C, C++, C#, bespoke languages and so on.

Well bully for you. Not everyone is so resilient. Nor apparently so cavalier about being made redundant.
That argument is making less sense with every new model release. It's not the same as humans, but as it closes the gap in so many ways, that sort of argument becomes one that could just as well be used on humans.

Because they're not learning in any meaningful sense?
LLMs don't understand. It's not learning, it's just MASSIVE statistical copying via tokenization and statistical mapping.
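For what it's worth, the "statistical mapping" framing can be made concrete with a toy sketch. The Python snippet below (the corpus is made up purely for illustration) builds a bigram frequency table and samples the next token from observed counts; real LLMs learn a neural network over subword tokens rather than a lookup table, so this only illustrates the framing being argued about, not how the models actually work.

```python
# Toy sketch: next-token prediction from raw co-occurrence statistics.
# This is NOT how modern LLMs are implemented; it only illustrates the
# "tokenize, then predict statistically" framing from the comment above.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug ."
tokens = corpus.split()  # crude whitespace "tokenization"

# Count how often each token follows each other token.
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token in proportion to how often it followed `prev`."""
    counts = following[prev]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation starting from "the".
out = ["the"]
for _ in range(6):
    out.append(next_token(out[-1]))
print(" ".join(out))
```

Whether scaling that idea up to billions of learned parameters counts as "understanding" is exactly the disagreement in this thread.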
I personally dislike this "techbros" thing; I think it's a vague, lazy slur. And your comment shows the problem: who is a "techbro" for you? Because I would think there are a lot more people than Musk and Thiel (who are Nazis, there is no discussion here). Does that include only CEOs, or does that include the people actually working on the models? Does that include non-US ones? There is a lot of it in China, you might have heard. Does that include various levels of hobbyists? It's much easier to target your hate at some vague bad entity than to recognize that reality is a bit more complicated and nuanced.

Who are you trying to exempt? Musk? Thiel, who literally said "I no longer believe that freedom and democracy are compatible" (and who destroyed Gawker Media when it was actively investigating Epstein's social circle, I might add)? They are active participants and supporters.
When I explain what I want and I get it, then vaguely say to add that thing, and it does it. I'm pretty sure that is the definition of understanding.

Because they're not learning in any meaningful sense?
LLMs don't understand. It's not learning, it's just MASSIVE statistical copying via tokenization and statistical mapping.
This argument is used a lot. For some reason, nobody tries to prove that the human brain works differently. Are you sure this is the case?

Because they're not learning in any meaningful sense?
LLMs don't understand. It's not learning, it's just MASSIVE statistical copying via tokenization and statistical mapping.
"You don't have to like it, but poisoned food is not poison. It's food. The only way to define food to exclude poisoned food is that it has to be edible by humans, which is completely contrived."You don't have to like it, but AI generated music is not "so-called music." It's music. The only way to define music to exclude it is to claim that music has to be generated by humans, which is completely contrived.
Be honest - what you don't like about it is that it wasn't generated by humans, that it was trained on human generated music, or fear that it's so easy to generate it could drown out human music. Nothing to do with the music itself. Even if you can honestly say now that nothing you've heard from AI has appealed to you, there will come a day when you hear some AI music without knowing and think it's good, only to find out and then decide you hate it. And that's probably not far off.
Whine all you want about AI's increasing capabilities. The ones fighting it and complaining will be left behind while people who learn to make use of it will thrive.
Perhaps the incumbents care more about not having a painful patchwork of scattered regulations stifling innovation than about using that mess to keep startups out. I doubt startups want state level AI regs either, but they're not the ones people are paying the most attention to. Many of the tech leaders have called for sensible federal level regulations, balancing innovation and safety. China would love it if we hobbled ourselves with burdensome regulations though.
Emphasis mine. Brainstorming / exploring ideas with an LLM has always seemed very silly to me. Almost dumb. Don't do that. Because that's not brainstorming; it's just letting yourself be dragged along in random directions by a random number generator that happens to echo back the words you used in your prompt. You might think you're finding out new things, but in that case, you were supposed to use a search engine.

It's getting downvoted but there is something here. I've been told by some creatives I know that they find AI can be helpful for brainstorming. Not a real income draw but it does have its uses. Yes the music it produces is derivative, as are all the images, video and text. But it does create a well to draw upon I suppose.
Now, about that energy use...
I used the older version to make silly one-off battle/background music for a silly ttRPG I DM'ed a while ago. The results were mediocre but it got some ironic lulz, and the v2 was only $0.06 per 30-second clip (obviously sold at a huge loss).

Why does AI "Music" exist? There are use cases for LLMs and gen AI, albeit far fewer than the AI companies shoehorning it into everything want. Even AI "Art" has use cases; it's not in any way actual art, but there is a use case, albeit with messy ethics around it, for creating images even when they lack any artistic merit, and there are associated technologies like upscaling blurry photos using it that could be helpful. I could also see the point in arguments that artists could use it in intermediary steps to edit or expand their work etc., as long as it is supplementing instead of replacing artists and the ethical situation improves.
But I don't see any value or purpose in AI music whatsoever. Is there any actual reason or justification for this to exist? Music's sole reason to exist is as art; what is the point of music created entirely by a machine? With images they can at least be used for illustrative or informative purposes, like depicting how to perform a task, or things like creating a background a real human artist paints over and fills in, even if there would still be serious ethical concerns around training, the environment and destroying jobs for artists. Nothing like that is possible for music: there's no informative value, and real musicians can't use it partially in any meaningful way as a supplement or aid to their actual art. It's just replacing real music with soulless slop.
I find the key to brainstorming is not answers but questions. The right question can completely refocus how you are thinking. LLMs are notoriously proficient with confident answers, right or wrong. Do these claims for using LLMs for brainstorming somehow get the LLM to ask a good question?

Emphasis mine. Brainstorming / exploring ideas with an LLM has always seemed very silly to me. Almost dumb. Don't do that. Because that's not brainstorming; it's just letting yourself be dragged along in random directions by a random number generator that happens to echo back the words you used in your prompt. You might think you're finding out new things, but in that case, you were supposed to use a search engine.
Brainstorming and exploring ideas is exactly what you're writing all your essays for, remember? To help you think it through, to explore, to ask yourself questions which you, yourself, answer, and you provide feedback to yourself in written form. It's frightening how good your own brain is at exploration like this. Try. Just try.
And this idea is old — it's from the 16th century.
Excellent question. I'm convinced the answer is "no", but I'm open to a change of mind.

Do these claims for using LLMs for brainstorming somehow get the LLM to ask a good question?
That's a meaningless argument, because HUMANS are using GenAI, so it's just going to amplify the very problems you're stating, just increasing their blast radius into creative arts.
It's like you're saying everyone having nukes is fine, because HUMANS are the ones who are going to use nukes poorly, and we shouldn't blame the nukes. GIVING PEOPLE NUKES IS BAD, M'KAY?!
If we can agree that humans abuse technology to do shitty things, why are you arguing to give them even more powerful tech that can fuck with even more things?
Counterfeit? It's hard to take you seriously, but here goes...

Which is it? Is it inevitable, and all we can do is lie back and try to enjoy it? Or is counterfeit intelligence so fragile and uncertain that the "innovation" could be "stifled" if we apply any regulation that wasn't written by the industry itself?
I see your /s but I think you've actually nailed it.
The basic misapprehension on display is the belief that "creating" means "getting what I want." It's a mindset that doesn't see value in anything other than acquisition.
This will be received rationally and warmly, I just know it! Historically, though:
Tools that, when released, lowered the barrier to creating music and drew distaste and hatred. Now they're just tools. We adapt.
- Drum machines, soulless
- Synths, fake instruments
- Sampling, theft
- Auto-tune, cheating
- DAWs, not real music
- Backpackers, not musicians
- Turntables, not instruments
It's ultimately an arms race. Anything that's possible and powerful will be done by someone who is determined and capable, for better or worse. Thank goodness nukes are extremely difficult to make, or humanity would be over already. With billions of people with a wide range of motivations, and the realities of human nature, it's not really possible to prevent this situation. One person or company or country deciding not to do a given thing would only delay it, and possibly give the upper hand to someone with worse motivations. We're actually lucky the AI frontrunners have motivations as good as they do - it could be a whole lot worse. The idea that the people building these technologies could just not do it and everything would be fine is just wishful thinking.
Then, regulation should be avoided on the following?

[...] Burdensome regulation by states would slow it down, and limit people's access to the best models - but only in the US, allowing China to pull ahead. Whatever regulation we do implement should be done federally so it's not a patchwork, and be designed to balance safety and other concerns with progress and maintaining an advantage in the US.
The European AI Act, so reviled, requires in its transparency provisions that products of generative AI be tagged for their artificial nature in a way anyone can check.

All AI music should be legally required to have AI speak this before each AI song begins. Sort of like a health warning on a pack of cigarettes.
Look at it this way: whenever an underground sound starts going mainstream, the RIAA does whatever it can to produce as many sound-alike bands, with the most generic, flattened, vanilla radio-friendly versions as possible. This AI will continue to do that. It really is no different than the current standards in the music industry.

Brian Eno was a great inspiration to me when I was learning to incorporate digital tools into creative workflows. And this quote from him gets trotted out a lot by AI-boosters:
I love the idea that with proper tools, judgement can be elevated to co-exist with skill on the same creative plane. For example, I've never really learned to play keyboards, but I've composed music with MIDI sequencing that I am not ashamed to have people hear. My drawing skills are limited but I've been paid for graphics I made with vector design programs. I am confident that my judgement using tools like these is no less a form of creativity than someone playing a guitar or painting a picture is.
That's what I get from that Eno quote. But rather than try to use it to justify their approach to AI what those boosters really should do is read and think about this more recent quote from Eno:
Pivoting to work at a web startup in 1996 was my first "corporate" job, and I had to think it through to know I wasn't just doing it for the steady paycheck. I had to actually believe that participating in the early stages of what I thought was going to be a communication revolution at least as disruptive as the spread of the printing press, and probably more, was a good thing. Still basking in the afterglow of the fall of the Soviet Union, it was pretty easy for me to think we'd learned enough that this time we could avoid unpleasant side effects like something between 30-40% of the population of Central Europe dying in the 30 Years' War. No, the printing press didn't cause those deaths, but by giving impetus to the Reformation it played a major role in creating the conditions for the conflict.

A better analogy is the Internet. The 1995 version of Me would probably have looked at generative AI in the same positive way. 30 years of experience later, it feels more like an exploding disaster in the making.
If you're going to frame this as an arms race between nation states, wouldn't the more rational solution be forging stronger diplomatic ties and binding international oversight of arms instead of trying to ensure "the good country" is the one that wins the inevitable war?

Counterfeit? It's hard to take you seriously, but here goes...
That's a false dichotomy. Burdensome regulation by states would slow it down, and limit people's access to the best models - but only in the US, allowing China to pull ahead. Whatever regulation we do implement should be done federally so it's not a patchwork, and be designed to balance safety and other concerns with progress and maintaining an advantage in the US.
Misrepresentation and jumping to conclusions.

Then, regulation should be avoided on the following?
Let these be regulated, for starters. Just these three, and then, if the LLM companies survive, we'll deal with it in due time.
- Developing LLMs artificially biased towards "engagement"? When unregulated, this aspect of LLMs will result in mental health issues, radicalization, suicide attempts and more in otherwise sane people. See recent articles on Ars Technica on these subjects; they're serious.
- Scraping the Web while ignoring robots.txt (a minimal sketch of that check follows below)? Pirating books and other copyrighted works? To create "better" models for "the public"? Unconditional scraping and using pirated works is problematic in and of itself (and already regulated as intellectual theft), but the public doesn't really benefit, because models are crappy no matter what, due to their nature; no amount of training can add value (unless slop is value now?). Even if the models become "better" (HOW???) by more training, it's still illegal to steal copyrighted works.
- Weaponization of slop? LLMs can create convincing propaganda in industrial quantities. Personalized just for you. Would you want to have some laws to fall back on, when you, personally, will be the target / victim of an LLM-driven campaign of misinformation?
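For reference, the robots.txt check that scrapers are accused of skipping is simple to implement. Here is a minimal Python sketch using the standard library's urllib.robotparser; the URL and the bot's user-agent string are made-up placeholders, not anything a particular company actually uses.

```python
# Minimal sketch of honoring robots.txt before fetching a page.
# The user-agent name and URL below are hypothetical examples.
from urllib.robotparser import RobotFileParser
from urllib.parse import urlparse

def allowed_to_fetch(url: str, user_agent: str = "ExampleTrainingBot") -> bool:
    """Return True only if the site's robots.txt permits this user agent to fetch the URL."""
    parts = urlparse(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    rp = RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # download and parse the site's robots.txt
    return rp.can_fetch(user_agent, url)

if allowed_to_fetch("https://example.com/some-article"):
    print("robots.txt allows fetching; proceed")
else:
    print("disallowed by robots.txt; skip this page")
```

The point of the sketch is only that compliance is a few lines of code; whether crawlers choose to run it is a policy question, not a technical one.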
That's only one aspect of it, but it's an important one. Yes, diplomatic ties and international oversight would be good if and where trust can be established, and sensible guidelines agreed upon. Establishing that trust and verifying compliance would be quite tricky though. And can world leaders agree on sensible guidelines for AI that neither hobble it (providing strong incentives to circumvent) nor quickly become irrelevant? I'm not saying we couldn't theoretically do better than we are, but I don't realistically expect we will.

If you're going to frame this as an arms race between nation states, wouldn't the more rational solution be forging stronger diplomatic ties and binding international oversight of arms instead of trying to ensure "the good country" is the one that wins the inevitable war?