Some semi-unhinged musings on where LLMs fit into my life—and how I'll keep using them.
See full article...
Not a programmer, but this feels like when a professor who is 20 years out of date on modern lab practices brags about the work done in his lab.

As a professional programmer I don't have an issue with people using LLMs to solve programming problems, but it does rub me the wrong way when people say "I programmed this with AI". No, you didn't program anything - the AI model pieced it together for you, mostly from code written by others.
It's the same as if I asked someone else to make a painting for me based on my description. It would give me no right to claim that I made the painting, even if I had paid the artist to do it just as I want.
Not just that - many people have to write novel code that has no analogues in the open-source world.

These articles have been interesting and have shown off the use case, albeit with caveats, for those whose main job isn't coding.
In my case, I have to have ownership of any code I write. Not just that it works right now, but that it will keep working and can be extended years down the road.

I can see places where they could be useful, but I also don't see myself trusting them to just write code for me anytime soon.
If you think LLMs and their assorted Sturm und Drang are going to go away or even become less visible for a nanosecond because you have some fundamental beef with the entire concept, then I'd suggest looking up the word 'hubris'.

I know Ars isn't a democracy, and it isn't audience-captured, otherwise articles like this wouldn't get written, but it seems odd to still be seeing this sort of thing on a site where the readership is broadly capable of seeing the naked emperor and generally willing to say so.
I guess I wonder if there's genuine interest in the slop generators, or if it's coming from on high, as I've seen from various companies for which I've done work.
Arguably, indeed. You might very well call it plagiarism or forgery if a person did it, depending on what they produce. Calibrate your conclusions about LLM output accordingly.The "stolen" stuff is a murky one and enormously charged with emotion. If you go to a bunch of museums to observe a heap of paintings, then read a stack of books, then create your own work inspired by these learnings is this stealing? You wouldn't really call it this. With AI, it's similar in the sense that the technology has trained on material. It hasn't stolen it from anyone. Arguably.
People don't buy Teslas, create X accounts, or vote for the far right "because they're here to stay"; they do those things because they want to. If Lee wanted to make a readable, enticing account of his actions for "the hubris-filled", he could have avoided that "and I feel good about it". Cool. You like it. Have at it. But if your only take on the horrible downsides of the technology is "doesn't matter, look at my log colorizer, how cool, I liked it!" then maybe it's not just my hubris.

If you think LLMs and their assorted Sturm und Drang are going to go away or even become less visible for a nanosecond because you have some fundamental beef with the entire concept, then I'd suggest looking up the word 'hubris'.
We get it. There are horrible downsides to the technology. And the current political environment. And the current environment. And Elon Musk. But they are here to stay and we are going to have to deal with them in some way other than saying 'they suck'.
To name just one, a bunch of authors are suing because they feel their copyright... rights have been trampled by training models on pirated content, so yeah, "arguably". I could be liable for fines or maybe jail if I did the same thing these companies did, but somehow, "arguably".

Look, I'm a photographer and video producer of almost 20 years. I too feel apprehension and uncertainty, in particular for my young kids. The environmental argument is one that genuinely concerns me, and I hope that the lofty promises of AI-orchestrated energy breakthroughs will materialise.
The "stolen" stuff is a murky one and enormously charged with emotion. If you go to a bunch of museums to observe a heap of paintings, then read a stack of books, then create your own work inspired by these learnings is this stealing? You wouldn't really call it this. With AI, it's similar in the sense that the technology has trained on material. It hasn't stolen it from anyone. Arguably.
Reminds me of this small (/s) incident called log4shell.

To disable wrapping, would piping your output to
less -S
have worked?
Maybe, I think it is:
column -t | less -S
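For anyone following along, a quick sketch of what that pipeline does (`access.log` is a hypothetical file name, and the field layout here is made up): `column -t` pads whitespace-separated fields so the columns line up, and `less -S` truncates long lines instead of wrapping them, so you can scroll sideways with the arrow keys.

```shell
# column -t aligns whitespace-separated fields into padded columns.
# Here we feed it two fake log lines instead of a real file:
printf 'GET /index.html 200\nPOST /api/login 401\n' | column -t

# Interactively, against a real (hypothetical) log file:
#   column -t access.log | less -S
# less -S chops long lines rather than wrapping them.
```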
Overall, I think it is a fun project. The only slight worry I have: are logs considered adversary-controlled inputs? I'm not sure what all the fields in your logs mean, but is there any chance the attacker has enough control over them to stick a buffer overflow or an escape character in there? Then you have a program you don't really understand parsing hostile stuff from the internet… (I don't do web stuff, so maybe this is an overly paranoid take.)
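That concern is concrete for a colorizer: a log line that already contains ANSI escape sequences can manipulate the terminal that displays it. One common mitigation is to strip escape sequences before adding your own colors; a minimal sketch with `sed` (this pattern only covers CSI sequences like color codes, so it is not a complete sanitizer):

```shell
# Strip CSI escape sequences (e.g. color codes an attacker could embed
# in a log line) before doing any colorizing of our own.
# $(printf '\033') is the ESC byte; the pattern matches e.g. ESC[31m.
esc=$(printf '\033')
sed "s/${esc}\[[0-9;]*[A-Za-z]//g"
```

You would place this at the front of the pipeline, e.g. `sanitize < app.log | colorize` (both names hypothetical).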
I just want to say that I resemble this remark, and it makes me question a lot of the things I'm currently working on.

But here's the thing with the joy of problem-solving: Like all joy, its source is finite. The joy comes from the solving itself, and even when all my problems are solved and the systems are all working great, I still crave more joy. It is in my nature to therefore invent new problems to solve.
to further hit the nail on the head: human learning and inspiration is usually novel and transformative. when it isn't, well, there are several degrees of punishment. crucially, human learning isn't in any way comparable to a company using massive amounts of unlicensed works to train their product. this reductive and ignorant persistence on comparing the functionality of the human brain to prompt-driven pattern matching is silly. just because it feels like it is (and i personally don't even believe it does), doesn't mean it actually is.

Arguably, indeed. You might very well call it plagiarism or forgery if a person did it, depending on what they produce. Calibrate your conclusions about LLM output accordingly.
This is a prime example of a "you" problem.

Fair enough, I knee-jerked a response before reading the article.
I have to ask though...
Did you or someone else at Ars intentionally go with an article title that you knew would trigger a lot of people like me and get us to comment before even reading the article? Because at this point you have to know that the phrase "vibe coding" has strong negative connotations for a lot of people, so why use it unless you want to drive engagement?
It kinda sucks that we live in a world where I have to look at decisions about article titles in the context of questions like "Are these guys trying to ragebait me?"
Indeed. People who say these LLMs are just learning like people do always seem to hand-wave away the fact that people don't learn for free. Museums and art galleries which exhibit copyrighted works charge admission to compensate the artists. People pay for books, or libraries do, and authors are compensated. And this is quite apart from the fact that most people don't learn in order to act as slaves for others. Usually, they learn to enrich their lives in one way or another, and that has its own intrinsic value which training an LLM will never have.

additionally, i can't excuse my copyright violations as mere inspiration. so, if people insist on putting LLMs and humans in the same playing field, then... pay up, chatbot makers.
Lee - glad you had fun vibe-coding ... but you could have just opened the log files directly in VS Code and changed the Language Mode to something like CoffeeScript or JavaScript or JSON or any number of other language modes, and you would have gotten similar results.

It was time to fire up VSCode and pretend to be a developer. I set up a new project, performed the demonic invocation to summon Claude Code, flipped the thing into "plan mode," and began.
Painting: Well, no. The analogy would be more like you commissioned someone to slap clipart together for you, where still yet another layer of artists generated said clipart and your direct contact appropriated those individual parts to piecemeal something reconstituted together.

As a professional programmer I don't have an issue with people using LLMs to solve programming problems, but it does rub me the wrong way when people say "I programmed this with AI". No, you didn't program anything - the AI model pieced it together for you, mostly from code written by others.
It's the same as if I asked someone else to make a painting for me based on my description. It would give me no right to claim that I made the painting, even if I had paid the artist to do it just as I want.
I can attest to having heard many similar reports. Most notably, I have several friends who are not developers but are quite technically capable, and they're getting a LOT of value out of AI-assisted coding and/or pure AI coding.

No, no one told me to write this. I've been a reader since 1998 and I generally find that my own personal interests align with the audience's, because I am them. Further, I've been employed here since 2012. I'm a senior editor with direct reports and I sit on the Ars editorial board. I am the "on high" at this point.
I had a solid experience over the Christmas break and I wanted to write it up. If I have further solid experiences, I'll write those up, too!
I'm aware Fiverr exists, but I don't feel like it'd be a good use of anyone's time to have to be subjected to my dumb nitpicky "make the logo bigger" levels of unending change requests for silly casual passion projects. Nor do I want (or have the means) to pay a competent coder what they're worth to monopolize that time for at minimum a few days per stupid project—there are other not-mine projects out there that are way better uses of skilled human time. Pulling in an actual skilled coder means now suddenly I've got outside constraints like "money" and "contracts" and "formalized feedback rounds" being placed on what should be my fun screw-around-with-computers free time—now it's work.

Lee: you can't learn to code? Hire a coder.
This reminds me of every time I've fixed a problem with PowerShell. PowerShell is obviously not an LLM (yet...) but I almost always find within about 10 minutes that it has a single command that does almost exactly what I want. I then spend two days trying to fix the format, working out why it fails to pipe into another command, working out why some of the data it produces is in a different type than the rest, etc...

And about those two days: Getting a basic colorizer coded and working took maybe 10 minutes and perhaps two rounds of prompts. It was super-easy. Where I burned the majority of the time and compute power was in tweaking the initial result to be exactly what I wanted.
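For a sense of scale, the "basic colorizer" half of that really is small. A minimal sketch in awk, assuming generic ERROR/WARN log levels rather than the article's actual field layout:

```shell
# Minimal log colorizer sketch: red for lines containing ERROR,
# yellow for WARN, everything else passed through unchanged.
# \033[31m / \033[33m start red / yellow; \033[0m resets the color.
awk '/ERROR/ { printf "\033[31m%s\033[0m\n", $0; next }
     /WARN/  { printf "\033[33m%s\033[0m\n", $0; next }
     { print }'
```

The "two days of tweaking" part is everything this sketch ignores: per-field colors, alignment, timestamps, and so on.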
This rings alarm bells. I wouldn't want to use an LLM for anything exposed to the internet unless I thoroughly understood the code it produced.

I've had the thing make small WordPress PHP plugins
Which model are you using?

Periodically I run my internal "is generative AI finally good at summarising yet" benchmark and fire up one of the newer local open-weight models.
It's still garbage. 48 GB of RAM devoted purely to the model, and it still can't fit a couple of megabytes of PDFs into context without RAG, and the answers are still simultaneously good enough to waste my time and bad enough to be ultimately useless.
When does this stuff actually become good?
And I had fun doing these things, even as entire vast swaths of rainforest were lit on fire to power my agentic adventures.
Heard, yeah—and fortunately, both of the mu-plugins I'm using are simple enough that I can in fact understand what they're doing. I included links in the piece, but one is here and another is here if you'd like to glance.

This rings alarm bells. I wouldn't want to use an LLM for anything exposed to the internet unless I thoroughly understood the code it produced.
Periodically I run my internal “is generative AI finally good at summarising yet” benchmark and fire up one of the newer local open-weight models.
It's still garbage. 48 GB of RAM devoted purely to the model, and it still can't fit a couple of megabytes of PDFs into context without RAG, and the answers are still simultaneously good enough to waste my time and bad enough to be ultimately useless.
When does this stuff actually become good?
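For rough context on why the PDFs don't fit: a common rule of thumb (an assumption that varies by tokenizer) is around four bytes of plain text per token, so a couple of megabytes of extracted text lands in the hundreds of thousands of tokens, well beyond most models' context windows:

```shell
# Back-of-envelope estimate, assuming ~4 bytes of plain text per token.
# 2 MB of extracted text works out to roughly half a million tokens,
# versus context windows typically in the tens of thousands of tokens,
# which is why tooling falls back to RAG.
bytes=$((2 * 1024 * 1024))
echo $((bytes / 4))
```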
you know, in the story, the genie does in fact go back into the bottle? it's actually kinda the point of the story? I find that a pretty good metaphor for the whole thing: people using an example they don't understand that actually directly contradicts the point they think they are making.

Look, I'm a photographer and video producer of almost 20 years. I too feel apprehension and uncertainty, in particular for my young kids. The environmental argument is one that genuinely concerns me, and I hope that the lofty promises of AI-orchestrated energy breakthroughs will materialise.
The "stolen" stuff is a murky one and enormously charged with emotion. If you go to a bunch of museums to observe a heap of paintings, then read a stack of books, then create your own work inspired by these learnings is this stealing? You wouldn't really call it this. With AI, it's similar in the sense that the technology has trained on material. It hasn't stolen it from anyone. Arguably.
What I do sympathise with is the bitter taste some people are experiencing feeling like their work has been used in ways they didn't expect. Or that it might be used to create something that competes against them. Or that it might be part of something dangerous to our existence and humanity. This is actually a very different matter and one I acknowledge and sometimes feel too.
Then there's the matter of whether all this stuff is actually useful. Some say it's revolutionary. Some say it's hot garbage. But as is often the case, it's more nuanced than this and the reality probably sits somewhere in the middle. Maybe this stuff might lead to a net win in the long run. It's possible. We just don't know. Personally I'm trying to remain optimistic. I don't really agree with the absolutist doomer takes even though there's merit to some of the arguments from this corner. I really don't like Sam Altman at all either and I think he's been reckless and irresponsible on a number of levels. There are better stewards of AI than this guy.
I agree with Lee. The genie is out of the bottle. But that doesn't mean we just sit back and let the wave crash over us. What I hope to see from here on is concerted effort from government, private industry, and the AI industry itself to mitigate harm and maximise collective benefit. Unfortunately, I have more faith in some governments than others.
So if we think companies pirating stuff by the ton to train LLMs is, you know, not cool, we're "emotional". Nice.

The "stolen" stuff is a murky one and enormously charged with emotion.
So, let's ridicule the underlying concept; that will go well with the people who think that.

And I had fun doing these things, even as entire vast swaths of rainforest were lit on fire to power my agentic adventures.
Yeah, we are just full of hubris.

If you think LLMs and their assorted Sturm und Drang are going to go away or even become less visible for a nanosecond because you have some fundamental beef with the entire concept, then I'd suggest looking up the word 'hubris'.
We are also calcified. Haughty, and set in our ways.

Is your neuroplasticity up for it?
Oceans boil and forests burn. You probably used the power of a small town to slop copy/paste your way to something a real engineer could write in 10 minutes. I think humanity has lost the plot.