So yeah, I vibe-coded a log colorizer—and I feel good about it

crmarvin42

Ars Praefectus
3,113
Subscriptor
As a professional programmer I don't have an issue with people using LLMs to solve programming problems, but it does rub me the wrong way when people say "I programmed this with AI". No, you didn't program anything - the AI model pieced it together for you, mostly from code written by others.

It's the same as if I asked someone else to make a painting for me based on my description. It would give me no right to claim that I made the painting, even if I had paid the artist to do it just as I want.
Not a programmer, but this feels like when a professor who is 20 years out of date on modern lab practices brags about the work done in his lab.

The prof probably doesn't know how to do any of the work anymore, and so should ideally credit his students, without whom the work could not have been done, because that is where the knowledge/effort came from. His role as professor is knowing enough to put their skills to use answering useful questions. He needs a certain level of understanding, but not a deep technical one as it pertains to the specific lab skills involved.

Of course, those students will potentially develop into professors in their own right, and keep pushing the envelope of scientific research. LLMs, OTOH, have no initiative and will always need that human to direct their tasks. So what does research look like when the professor has never done the lab work, and relies on students who can never progress to be a colleague (programmers and LLMs, respectively)? Looks like potential for stagnation to me.

But I'm not a programmer, so maybe my analogy doesn't work.
 
Upvote
30 (36 / -6)
To disable wrapping, would piping your output to

less -S

have worked?

Maybe, I think it is….

column -t | less -S

Overall, I think it is a fun project. The only slight worry I have: are logs considered adversary-controlled inputs? I'm not sure what all the fields in your logs mean, but is there any chance an attacker has enough control over them to stick a buffer overflow or an escape character in there? Then you have a program you don't really understand parsing hostile stuff from the internet… (I don't do web stuff, so maybe this is an overly paranoid take.)
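For what it's worth, the escape-character half of that worry has a cheap mitigation: strip terminal control sequences from every log line before printing it. A minimal sketch (the regex and the `sanitize` name are my own illustration, not anything from Lee's tool):

```python
import re

# Matches ANSI CSI sequences (e.g. "\x1b[31m") plus any stray escape byte.
ANSI_RE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]|\x1b")

def sanitize(line: str) -> str:
    """Remove terminal escape sequences an attacker may have injected."""
    return ANSI_RE.sub("", line)
```

Running untrusted log data through something like this before adding your own colors keeps a hostile client from smuggling cursor-movement or title-setting sequences into your terminal.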
 
Last edited:
Upvote
21 (23 / -2)

flamingjello

Ars Centurion
256
Subscriptor++
Thanks for the write-up, Lee. While Claude is the focus of this article, it's realistically the same as any of us who have picked up a Python programming project as nonprogrammers and fallen down the rabbit hole of "this code from GitHub is ALMOST what I want, I just need to tweak a few things." Weeks later you wake up from a caffeine-and-sugar-filled dream in which you have thousands of failed attempts to get what you want. You go to one of the dozen or so coding forums where you have been posting questions, and a helpful poster gives you a one-line fix for what you've been trying to do.
 
Upvote
26 (28 / -2)

chip_1

Wise, Aged Ars Veteran
104
This is the sort of menial shit code that AI is actually useful for.

I'm old enough to have worked through the last big tech mass-hysteria event in the late 90s. The dotcom bubble was not quite as high-stakes as the AI bubble, but it was equally dumb and predictable. It also saw techbros conning financebros into throwing all their money at projects and companies that overpromised to the point of clinical delusion. Do you remember the pet-portal wars? I do. And just like that nonsense, the AI nonsense will eventually crumble under the weight of factual reality.

But also like that nonsense, there is more than a little actual utility buried under all the hype and stonk-seeking behavior. The dotcom bubble burst and a lot of idiots went broke, but "the internet" didn't go away. It really was an important new technology - it just wasn't the magical panacea that the startups were selling. Similarly, AI is an important new technology. It does have real-world utility and valid use-cases. It will stick around in various forms indefinitely. You just have to realize that for every task AI can meaningfully assist with, there are 50 hucksters trying to scam investors into believing it can do something it can't.

AI isn't going to replace anybody. It isn't going to change every industry, and it absolutely doesn't need to be baked into every platform and application. But it can vibe up some useful slop like this, and that has value. Nowhere near the one-floppity-bajillion dollars of value that the deranged finance dickheads are pretending it has, but not nothing either.
 
Upvote
46 (54 / -8)

rcduke

Ars Tribunus Militum
2,171
Subscriptor++
This is a well-written article, and I cannot argue with that.

However, personally, I consider the use of any AI (LLM or generative) to be a waste of brain power (on the user's part) and natural resources (on the server's). As the article stated near the end, LLMs only work if you already know the content, can analyze the content, and can check the output. It may save you time to simplify a task you've already done or are closely familiar with. You end up adding money to a billionaire's coffers and wasting brain power checking something when you could have just researched and developed it in the first place.

However, that's not how most LLMs and gen AI are being marketed. They're being pushed on the masses as an end-all, be-all, all-knowing oracle that will answer any question, with no double-checking of the results by the system. Everyone wants a quick fix of knowledge or a solution, and AI helps the corporations push the narrative that they can provide that if you pay for the AI (with either money or data collection).

In my opinion, the only way to fight against this barrage of AI is to not use it at all. Any use of it shows the companies that their billion-dollar investment is justified, so we need to ignore it completely. As for the comparison someone made earlier between AI use and going to a library to do research: the difference is that with a library, someone other than the AI company gets paid for the privilege of researching the data. Authors get paid by publishers, libraries buy books from publishers, and users (usually) pay for libraries through their local taxes. AI companies just scrape the internet for information, generally without paying the sources for it. I consider that theft, both literally and in the copyright sense, and it does not justify LLMs or gen AI.

I'm glad Lee ended up with a mostly positive experience and felt like an article was worth the work. I just don't agree with using LLM or Gen AI software and Lee's article didn't change that for me.
 
Upvote
3 (41 / -38)

GlitchReport

Smack-Fu Master, in training
1
Subscriptor++
It's interesting that this article completely missed an essential point about LLMs and the specific problem of parsing log files. Lee wrote about using an LLM to vibe-code a tool to help him do that. A more natural use case for LLMs is to parse logs; that's what LLMs are good at! With my limited experience with LLMs, the only practical tool I've built using an LLM is one that reads my daily journal and summarizes my past week, month, or any timeline I name. Great for those Monday morning meetings when I've got brain fog and can't quite recall last week's activities. It's quite possible to build an LLM-based tool that can parse log files and pinpoint the problem that Lee found. I wouldn't be surprised if future sys-admin tools rely heavily on LLMs to read logs and fix issues by parsing them.
 
Upvote
-17 (6 / -23)
These articles have been interesting and have shown off the use case, albeit with caveats, for those whose main job isn't coding.

In my case any code I write I have to have ownership of. Not just that it works right now, but that it will work and can be extended years down the road.

I can see places where they could be useful but I also don't see myself trusting them to just write code for me anytime soon.
Not just that - many people have to write novel code, code that has no analogue in the open-source world.

I'm still waiting to see even one example of something novel an agentic setup has pieced together with any amount of governance.
 
Upvote
13 (21 / -8)

mateo9

Smack-Fu Master, in training
63
I don't see any future where the current technology of LLMs can efficiently replace competent thinkers (LLMs can get to valid, "creative" conclusions via brute force), but I do think 90% of 90% of programming jobs are nothing but boring, repeatable code. So the complaint "that is all LLMs can do" is kind of a compliment. I use Opus 4.5 daily, and while I won't make any claims that it makes me faster, it does save me from boredom. So if you guys love writing that code or are blessed with an army of interns to do it for you, then yes, LLMs have almost no value.
 
Upvote
9 (16 / -7)

ColdWetDog

Ars Legatus Legionis
14,402
I know Ars isn't a democracy, and it isn't audience captured, otherwise articles like this wouldn't get written, but it seems odd to still be seeing this sort of thing on a site where the readership is widely capable of seeing the naked emperor and generally willing to say so.

I guess I wonder if there's genuine interest in the slop generators, or if it's coming from on high, as I've seen from various companies for which I've done work.
If you think LLMs and their assorted Sturm und Drang are going to go away or even become less visible for a nanosecond because you have some fundamental beef with the entire concept, then I'd suggest looking up the word 'hubris'.

We get it. There are horrible downsides to the technology. And the current political environment. And the current environment. And Elon Musk. But they are here to stay and we are going to have to deal with them in some way other than saying 'they suck'.
 
Upvote
11 (42 / -31)

arsrudi08

Seniorius Lurkius
27
Oh, I got something similar! One of my favorite games, X4: Foundations, is poorly documented, but its game files are extractable and mod-friendly…

So, a few Python scripts (mostly LLM-made) and a few agents later (one creates artifacts describing game flow, the other converts artifacts to wiki pages), I'll occasionally ask Claude to generate a wiki page for an area I don't know enough about. I know just enough about programming and the game that I can spot when LLMs are off target, thankfully.
 
Upvote
15 (16 / -1)

J.King

Ars Praefectus
4,390
Subscriptor
The "stolen" stuff is a murky one and enormously charged with emotion. If you go to a bunch of museums to observe a heap of paintings, then read a stack of books, then create your own work inspired by these learnings, is this stealing? You wouldn't really call it that. With AI, it's similar in the sense that the technology has trained on material. It hasn't stolen it from anyone. Arguably.
Arguably, indeed. You might very well call it plagiarism or forgery if a person did it, depending on what they produce. Calibrate your conclusions about LLM output accordingly.
 
Upvote
9 (23 / -14)

adrianovaroli

Ars Tribunus Militum
1,590
If you think LLMs and their assorted Sturm und Drang are going to go away or even become less visible for a nanosecond because you have some fundamental beef with the entire concept, then I'd suggest looking up the word 'hubris'.

We get it. There are horrible downsides to the technology. And the current political environment. And the current environment. And Elon Musk. But they are here to stay and we are going to have to deal with them in some way other than saying 'they suck'.
People don't buy Teslas or create X accounts or use Grok or vote for the far right "because they're here to stay"; they do those things because they want to. If Lee wanted to make a readable, enticing account of his actions for "the hubris-filled," he could have avoided that "and I feel good about it." Cool. You like it. Have at it. But if your only take on the horrible downsides of the technology is "doesn't matter, look at my log colorizer, how cool, I liked it!" then maybe it's not just my hubris.

By the way, there could be a series of articles on legitimate, cool uses of the tech that avoid the horrible downsides. Like, how to set up local models that do stuff that local models are actually good for.
 
Upvote
-9 (22 / -31)

adrianovaroli

Ars Tribunus Militum
1,590
Look, I'm a photographer and video producer of almost 20 years. I too feel apprehension and uncertainty, in particular for my young kids. The environmental argument is one that genuinely concerns me and I hope that the lofty promises of AI orchestrated energy breakthroughs will materialise.

The "stolen" stuff is a murky one and enormously charged with emotion. If you go to a bunch of museums to observe a heap of paintings, then read a stack of books, then create your own work inspired by these learnings, is this stealing? You wouldn't really call it that. With AI, it's similar in the sense that the technology has trained on material. It hasn't stolen it from anyone. Arguably.
To name just one, a bunch of authors are suing because they feel their copyright... rights have been trampled by training models on pirated content, so yeah, "arguably". I could be liable for fines or maybe jail if I did the same thing these companies did, but somehow, "arguably".

Oh. I just remembered Aaron Swartz. "Arguably". "enormously charged with emotion". "murky one."
 
Last edited:
Upvote
9 (21 / -12)

c128l

Seniorius Lurkius
3
Subscriptor
To disable wrapping, would piping your output to

less -S

have worked?

Maybe, I think it is….

column -t | less -S

Overall, I think it is a fun project. The only slight worry I have: are logs considered adversary-controlled inputs? I'm not sure what all the fields in your logs mean, but is there any chance an attacker has enough control over them to stick a buffer overflow or an escape character in there? Then you have a program you don't really understand parsing hostile stuff from the internet… (I don't do web stuff, so maybe this is an overly paranoid take.)
Reminds me of this small (/s) incident called Log4Shell.
 
Upvote
10 (10 / 0)

nuggolips

Seniorius Lurkius
47
Subscriptor
But here’s the thing with the joy of problem-solving: Like all joy, its source is finite. The joy comes from the solving itself, and even when all my problems are solved and the systems are all working great, I still crave more joy. It is in my nature to therefore invent new problems to solve.
I just want to say that I resemble this remark and it makes me question a lot of the things I'm currently working on.
 
Upvote
23 (23 / 0)

yumegaze

Wise, Aged Ars Veteran
110
Arguably, indeed. You might very well call it plagiarism or forgery if a person did it, depending on what they produce. Calibrate your conclusions about LLM output accordingly.
to further hit the nail on the head, human learning and inspiration is usually novel and transformative. when it isn't, well, there are several degrees of punishment. crucially, human learning is in no way comparable to a company using massive amounts of unlicensed works to train its product. this reductive and ignorant insistence on comparing the functionality of the human brain to prompt-driven pattern matching is silly. just because it feels like it is (and i personally don't even believe it does) doesn't mean it actually is.

additionally, i can't excuse my copyright violations as mere inspiration, so if people insist on putting LLMs and humans on the same playing field, then... pay up, chatbot makers.
 
Upvote
13 (23 / -10)

zman54

Ars Scholae Palatinae
847
Fair enough, I knee-jerked a response before reading the article.

I have to ask though...

Did you or someone else at Ars intentionally go with an article title that you knew would trigger a lot of people like me and get us to comment before even reading the article? Because at this point you have to know that the phrase "vibe coding" has strong negative connotations for a lot of people, so why use it unless you want to drive engagement?

It kinda sucks that we live in a world where I have to look at decisions about article titles in the context of questions like "Are these guys trying to ragebait me?"
This is a prime example of a “you” problem.

It’s also a “teaching moment”.

Is your neuroplasticity up for it?
 
Upvote
3 (20 / -17)

J.King

Ars Praefectus
4,390
Subscriptor
additionally, i can't excuse my copyright violations as mere inspiration, so if people insist on putting LLMs and humans on the same playing field, then... pay up, chatbot makers.
Indeed. People who say these LLMs are just learning like people do always seem to hand-wave away the fact that people don't learn for free. Museums and art galleries that exhibit copyrighted works charge admission to compensate the artists. People pay for books, or libraries do, and authors are compensated. And this is quite beside the fact that most people don't learn in order to act as slaves for others. Usually, they learn to enrich their lives in one way or another, and that has its own intrinsic value, which training an LLM will never have.
 
Upvote
28 (39 / -11)
It was time to fire up VSCode and pretend to be a developer. I set up a new project, performed the demonic invocation to summon Claude Code, flipped the thing into “plan mode,” and began.
Lee - glad you had fun vibe-coding ... but you could have just opened the log files directly in VS Code and changed the Language Mode to something like CoffeeScript or JavaScript or JSON or any number of other language modes, and you would have gotten similar results.

On one hand overkill+reinventing the wheel; on the other hand you learned something that isn't as far outside of your wheelhouse as you thought.

There is enough overlap of knowledge that most dedicated sysadmins can pick up the coding-side basics better than some random person who only uses a computing device to surf social media.

[edit] Even one of the screenshots you provided shows the solution I'm talking about. I've done this for years looking at server logs - and never once thought "I wish there were a program to color-code server logs":
https://cdn.arstechnica.net/wp-content/uploads/2026/01/lee_vscode-1536x846.png

The middle code section - the color coding there works on more than the specific language selected. Pick a language and cycle until the colors shift to something you can look at/read better.
 
Last edited:
Upvote
2 (12 / -10)

loge999

Seniorius Lurkius
13
A lot of ignorant takes in the comments. You don't understand how LLMs work if you think they should "credit the original programmers". Check out the IBM series on YouTube.

Another thing I generally see is a misunderstanding that AI coding is "all or nothing". Like you tell the AI to make an entire program for you and it spits out a crap buggy mess. Sure, that is one way (the wrong way) to use it. A majority of AI programming is just tab auto-complete helpers to avoid typing as much. Or helping with research, planning and design, etc.
 
Upvote
-7 (22 / -29)
As a professional programmer I don't have issue with people using LLMs to solve programming problems, but it does rub me the wrong way when people say "I programmed this with AI". No, you didn't program anything - the AI model parsed it together for you, mostly from code written by others.

Its the same as if I would ask someone else to make a painting for me based on my description. It would give me no right to claim that I made the painting even if I had paid the artist to do it just as I want.
Painting: Well, no. The analogy would be more like commissioning someone to slap clipart together for you, where yet another layer of artists generated said clipart and your direct contact appropriated those individual pieces to cobble together something reconstituted.
 
Upvote
-1 (12 / -13)
This is a thoughtful article, and it presents what seems like a reasonable use case for LLMs. That is the problem. If AI firms can tout even one plausible use case for their product, they have used and will use that as their pretext to continue to siphon money, resources, and above all else human work and IP. If you participate in AI usage, you are, at least in part, endorsing an industry whose aim is to further enrich the tech oligarchy at the direct expense of the rest of us.

Lee: you can't learn to code? Hire a coder. It may already be too late, but we must bend our collective efforts to deflating this AI bubble quickly, before it invades every aspect of our lives.
 
Upvote
-13 (12 / -25)

chillbert

Wise, Aged Ars Veteran
166
Subscriptor
No, no one told me to write this. I've been a reader since 1998 and I generally find that my own personal interests align with the audience's, because I am them. Further, I've been employed here since 2012. I'm a senior editor with direct reports and I sit on the Ars editorial board. I am the "on high" at this point.

I had a solid experience over the Christmas break and I wanted to write it up. If I have further solid experiences, I'll write those up, too!
I can attest to having heard many similar reports. Most notably, I have several friends who are not developers but are quite technically capable, and they're getting a LOT of value out of AI-assisted coding and/or pure AI coding.

For example, one of my friends used Claude Code to make a specialized iOS app to perform some simple manipulations on video that his wife was working on. There is no doubt in my mind that AI coding is democratizing some portion of coding work. In the same way that YouTube democratized video distribution. Are there limits to what it can do? Of course. Is it opening up meaningful new possibilities for many people? Yes.

Let's not pretend that earlier generations of technology were fundamentally more purposeful than this. When the web browser came along, initially it was developed for sharing academic documents. But immediately people (myself included) started exploring how it might be used for all kinds of other things, many of which have become a standard part of how we use them today.
 
Upvote
21 (24 / -3)

pokrface

Senior Technology Editor
21,512
Ars Staff
Lee: you can't learn to code? Hire a coder.
I'm aware fiverr exists, but I don't feel like it'd be a good use of anyone's time to have to be subjected to my dumb nitpicky "make the logo bigger" levels of unending change requests for silly casual passion projects. Nor do I want (or have the means) to pay a competent coder what they're worth to monopolize that time for at minimum a few days per stupid project—there are other not-mine projects out there that are way better uses of skilled human time. Pulling in an actual skilled coder means now suddenly I've got outside constraints like "money" and "contracts" and "formalized feedback rounds" being placed on what should be my fun screw-around-with-computers free time—now it's work.

To thoroughly bastardize a quote from Groucho for humorous effect, I don't think I'd want to work with any dev who'd actually have me as a client. I don't feel like any of the things I'd want to code rise above dumb trivialities, and they'd be a waste of others' time. (Like this particular project!)
 
Upvote
49 (54 / -5)

vershner

Ars Scholae Palatinae
706
Subscriptor++
And about those two days: Getting a basic colorizer coded and working took maybe 10 minutes and perhaps two rounds of prompts. It was super-easy. Where I burned the majority of the time and compute power was in tweaking the initial result to be exactly what I wanted.
This reminds me of every time I've fixed a problem with PowerShell. PowerShell is obviously not an LLM (yet...), but I almost always find within about 10 minutes that it has a single command that does almost exactly what I want. I then spend two days trying to fix the format, working out why it fails to pipe into another command, working out why some of the data it produces is of a different type than the rest, etc...

I’ve had the thing make small WordPress PHP plugins
This rings alarm bells. I wouldn't want to use an LLM for anything exposed to the internet unless I thoroughly understood the code it produced.
 
Upvote
29 (29 / 0)
D

Deleted member 221201

Guest
Periodically I run my internal “is generative AI finally good at summarising yet” benchmark and fire up one of the newer local open-weight models.

It’s still garbage. 48 GB of RAM devoted purely to the model and it still can’t fit a couple of megabytes of PDFs into context without RAG and the answers are still simultaneously good enough to waste my time and bad enough to be ultimately useless.

When does this stuff actually become good?
Which model are you using?
 
Upvote
0 (6 / -6)
And I had fun doing these things, even as entire vast swaths of rainforest were lit on fire to power my agentic adventures.
 
Upvote
-9 (15 / -24)

pokrface

Senior Technology Editor
21,512
Ars Staff
This rings alarm bells. I wouldn't want to use an LLM for anything exposed to the internet unless I thoroughly understood the code it produced.
Heard, yeah—and fortunately, both of the mu-plugins I'm using are simple enough that I can in fact understand what they're doing. I included links in the piece, but one is here and another is here if you'd like to glance.
 
Upvote
10 (12 / -2)
Periodically I run my internal “is generative AI finally good at summarising yet” benchmark and fire up one of the newer local open-weight models.

It’s still garbage. 48 GB of RAM devoted purely to the model and it still can’t fit a couple of megabytes of PDFs into context without RAG and the answers are still simultaneously good enough to waste my time and bad enough to be ultimately useless.

When does this stuff actually become good?

A 2 MB (text) PDF is 250,000-500,000 tokens. Most local models are only designed for a 64k or 128k context size. You can have the model summarize 32k or 64k chunks, then summarize the summaries.
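That summarize-the-summaries approach is a simple map-reduce recursion; a rough sketch with a pluggable `summarize` callable (the names and chunk sizes here are illustrative, not from any particular library):

```python
def chunk(text: str, max_chars: int) -> list[str]:
    """Split text into pieces small enough for the model's context window."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_long(text, summarize, max_chars=200_000):
    """Summarize each chunk, then recursively summarize the joined summaries."""
    if len(text) <= max_chars:
        return summarize(text)
    partials = [summarize(c) for c in chunk(text, max_chars)]
    return summarize_long("\n".join(partials), summarize, max_chars)
```

In practice `summarize` would call the local model; the recursion terminates as long as each summary comes back meaningfully shorter than its input.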
 
Upvote
-2 (6 / -8)

studenteternal

Wise, Aged Ars Veteran
106
Look, I'm a photographer and video producer of almost 20 years. I too feel apprehension and uncertainty, in particular for my young kids. The environmental argument is one that genuinely concerns me and I hope that the lofty promises of AI orchestrated energy breakthroughs will materialise.

The "stolen" stuff is a murky one and enormously charged with emotion. If you go to a bunch of museums to observe a heap of paintings, then read a stack of books, then create your own work inspired by these learnings, is this stealing? You wouldn't really call it that. With AI, it's similar in the sense that the technology has trained on material. It hasn't stolen it from anyone. Arguably.

What I do sympathise with is the bitter taste some people are experiencing feeling like their work has been used in ways they didn't expect. Or that it might be used to create something that competes against them. Or that it might be part of something dangerous to our existence and humanity. This is actually a very different matter and one I acknowledge and sometimes feel too.

Then there's the matter of whether all this stuff is actually useful. Some say it's revolutionary. Some say it's hot garbage. But as is often the case, it's more nuanced than this and the reality probably sits somewhere in the middle. Maybe this stuff might lead to a net win in the long run. It's possible. We just don't know. Personally I'm trying to remain optimistic. I don't really agree with the absolutist doomer takes even though there's merit to some of the arguments from this corner. I really don't like Sam Altman at all either and I think he's been reckless and irresponsible on a number of levels. There are better stewards of AI than this guy.

I agree with Lee. The genie is out of the bottle. But that doesn't mean we just sit back and let the wave crash over us. What I hope to see from here on is concerted effort from government, private industry, and the AI industry itself to mitigate harm and maximise collective benefit. Unfortunately I have more faith in some governments than others.
You know, in the story, the genie does in fact go back into the bottle? It's actually kind of the point of the story? I find that a pretty good metaphor for the whole thing: people using an example they don't understand that actually directly contradicts the point they think they're making.
 
Upvote
5 (17 / -12)

adrianovaroli

Ars Tribunus Militum
1,590
The "stolen" stuff is a murky one and enormously charged with emotion.
So if we think companies pirating stuff by the ton to train LLMs is, you know, not cool, we're "emotional". Nice.
And I had fun doing these things, even as entire vast swaths of rainforest were lit on fire to power my agentic adventures.
So, let's ridicule the underlying concept, that will go well with the people who think that.
If you think LLMs and their assorted Sturm und Drang are going to go away or even become less visible for a nanosecond because you have some fundamental beef with the entire concept, then I'd suggest looking up the word 'hubris'.
Yeah, we are just full of hubris.
Is your neuroplasticity up for it?
We are also calcified. Haughty, and set in our ways.

I mean, I don't know about anybody else, but I'm sold. I just can't wait to use this tech after this rhetoric.
 
Upvote
1 (25 / -24)

akenthet

Wise, Aged Ars Veteran
122
Subscriptor
Thanks, I enjoyed the article. I can relate to a lot of the same things as I've had an LLM write up some code for some tools to help me learn a new language. I wanted some esoteric features I couldn't find in existing apps. In the end, would it have been better to spend the time studying the language? Almost definitely. But sometimes I too am motivated to do work when I have a new toy to play with.
 
Upvote
7 (11 / -4)
Oceans boiled and forests burned. You probably used the power of a small town to slop copy/paste your way to something a real engineer could write in 10 minutes. I think humanity has lost the plot.

Claude Pro accounts get 44k tokens per 5 hours, so 220k tokens per day, which puts an upper bound of 440k tokens on his two days of use. Inference runs roughly 0.4 J/token, so he used about 0.049 kWh. His local computer probably used about 0.5 kWh per day of usage (so 1 kWh for the two days). For comparison, playing an hour of video games on a desktop computer uses about 0.5 kWh.

So no, I don't think he 'boiled oceans and burned forests'. Also, his local computer usage in any cold location in the country (i.e., almost anywhere in the US) simply offset heat that would otherwise have been generated by his boiler.
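The back-of-the-envelope numbers above check out; here they are as runnable arithmetic (the 0.4 J/token figure is the commenter's assumption, not a measured value):

```python
tokens = 220_000 * 2            # Claude Pro daily quota over two days (upper bound)
joules = tokens * 0.4           # assumed ~0.4 J per generated token
kwh = joules / 3_600_000        # 1 kWh = 3.6 MJ
print(f"{kwh:.3f} kWh")         # prints "0.049 kWh"
```

That is roughly a twentieth of the estimated 1 kWh the local desktop burned over the same two days, so the model inference is not the dominant term here.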
 
Upvote
21 (36 / -15)