So yeah, I vibe-coded a log colorizer—and I feel good about it

Perrin42

Wise, Aged Ars Veteran
102
Subscriptor++
If I have further solid experiences, I'll write those up, too!
If you're not having one of those daily you might need to increase your fiber intake. 😆

But seriously, LLMs are problematic. I've used them for minor coding tasks, and I've recently found another use for them in visualization. I have aphantasia, which makes it difficult for me to visualize things, so I've used it to feed in descriptions of characters, creatures, and scenes from novels to have a better point of reference. I know it's built on (stealing) other people's work, but I could never afford to have someone create artwork for every scene I run into, let alone refine it as I try to get it closer to what I imagined. This is the only way for me to "see" some of these things.

I'm generally down on AI/LLMs, but I can't deny they have some minimal utility. I just wish there were a better, less parasitic way to achieve it.
 
Upvote
7 (8 / -1)

Aurich

Director of Many Things
40,904
Ars Staff
Hey!
thinker-rotate.gif
doesn't appear in my dropdown list. Are you keeping the good stuff for yourself?
It's like the secret menu at In N Out, you just need to know how to order it.

:pacman: :pacghostpink:
 
Upvote
10 (10 / 0)

mysciencefriend

Smack-Fu Master, in training
94
Subscriptor
Aww man Lee, thank you for writing this article! This is going to sound made up, but I swear to you it's not... I was genuinely thinking to myself on my drive into work this morning that it bums me out sometimes that the Ars community that I ordinarily relate to so much takes such an automatically cynical view of these tools.

I've been playing with Claude code and OpenAI's Codex over the last little bit, and it is so much fun! I am not a computer programmer... When I was younger I dabbled - starting with mIRC script, then trying to learn python to scratch a specific itch, and then I took an intro to object-oriented programming at the local community college for fun - but that was all many years ago and I don't really know what I'm doing.

While my career path didn't take me down a technology route, I've always loved technology, and spending hours or days as a younger person trying to code up something pointless but interesting to me was a big part of that love... but now I'm old, I have a busy and stressful job, and I have two small kids... I don't have hours or days to hunch over my laptop reading Python docs and scouring Stack Overflow to figure out how to do something that would be trivial for a 'real' programmer. The last few weeks, though, I've been able to scratch that itch again within the constraints of my busy life, and it made me realize how much I missed having fun with computers!

Are these things perfect? No. Do I worry about what they mean for my kids and their careers? God, I really do. But these things exist, they're not going away, and for me, they've allowed me to recapture a love of technology that's brought genuine moments of fun to my life that I didn't really even appreciate I was missing. So much so that the Ars community's feelings about LLM preoccupied my morning commute.
 
Upvote
8 (18 / -10)

GPBurdell93

Seniorius Lurkius
32
Subscriptor++
maybe an O’Reilly book
Ha! Dude, I'm old enough to say something like this, but the kids these days? "What's O'Reilly?" Thanks for the chuckle.

On receiving my first paycheck from my first real-paying job, I went out to the bookstore and bought some O'Reilly's. Yes, the paycheck was paper!
 
Upvote
7 (7 / 0)

graylshaped

Ars Legatus Legionis
67,691
Subscriptor++
It's like the secret menu at In N Out, you just need to know how to order it.

:pacman: :pacghostpink:
Double double animal style with chopped chilis, please. A grilled cheese, no spread or lettuce, tomato on the side, for the kid. Fries, well done. Strawberry shake. Extra ketchup, please.
 
Upvote
4 (4 / 0)
As a professional programmer, I don't have an issue with people using LLMs to solve programming problems, but it does rub me the wrong way when people say "I programmed this with AI". No, you didn't program anything; the AI model pieced it together for you, mostly from code written by others.

It's the same as if I asked someone else to make a painting for me based on my description. It would give me no right to claim that I made the painting, even if I had paid the artist to do it just as I want.
I will borrow an observation from Brandon Sanderson: if you use AI to do something, you didn't do it. If you make an image with AI, you are not an artist. You are an art director.

Applied here, if you used vibe coding, you aren't a coder. You are a supervisor. You are setting the goal, not doing the work.
 
Upvote
3 (9 / -6)

haxial

Smack-Fu Master, in training
11
Asking an LLM to build you something is not problem solving. It's akin to cheating. Go on Reddit and you will find dozens of vibe coders advertising projects that they "built", but it's just over-engineered slop put together by LLMs. None of these projects are of any value, and most of the time the tools already exist, but the person typing in the prompt, or even the LLM itself, can't figure this out.

I don't have a problem with people using LLMs to create personal projects for their own enjoyment, but don't pretend you are learning or solving problems. I guarantee if you put a vibe coder in front of a blank canvas and tell them to build a basic programming project, they will crumble.
 
Upvote
-9 (10 / -19)

J.King

Ars Praefectus
4,390
Subscriptor
Ah! I see. I guess I'm just so pissed off at the situation, so bothered by it, and so worn down by the "meh, doesn't matter" I'm getting from everywhere that I'm utterly aghast at the lack of ANYONE being concerned about it. At least with Passkeys, there's strong industry and cryptographic safety built in, and it still delivers the "passwordless" experience.
In a way I know how you feel. I ended up singling you out, but your post was just the latest in a long line of posts I felt made unreasonable assumptions about the author and what he did or did not try, consider, or know. Of course, Lee is a grown person and can defend himself, but it was making me anxious that multiple people seemed to be piling on, so I had to vent a bit.
 
Upvote
9 (11 / -2)

seasonedtelephone

Seniorius Lurkius
12
Subscriptor++
Part of the role of working at Ars—in my mind, not some kind of official stance—is staying curious about technology.

Not being positive about it, or negative about it, just being interested in it. And then reflecting those interests and experiences in the writing and work.

That's gonna mean calling out BS when you see it. But it also means trying things, and then being honest about them.

If someone has a positive experience, they shouldn't hide it. Having an agenda where you bury good times to try and paint the world a certain way isn't the kind of place I would want to work, honestly.
Sure, be honest about the experience, but part of your job is to wrestle more with the downsides of the tech you are writing about, right? Maybe it's just me, but the piece reads as a way to persuade more people to use AI.
 
Upvote
-8 (9 / -17)

david newall

Ars Scholae Palatinae
1,168
If you go to a bunch of museums to observe a heap of paintings, then read a stack of books, then create your own work inspired by these learnings, is this stealing?
If the work included van Gogh's Sunflowers and the figure from Munch's The Scream, yes, that would be stealing. There's a line you can cross where inspiration becomes infringement, and an LLM neither cares nor understands.
 
Upvote
3 (8 / -5)
Would you pride yourself on writing an article this long about creating a log colorizer with stolen code? "I tricked a programmer into working 20 hours for this app and didn't pay a thing for it." Does this sound good? All "code generation" is based on stolen, uncompensated work.
It's not fun, it's not silly. It doesn't matter that you know how awesome Muse and Placebo are.

Work is work. Pay for the work.
Would you be ok with someone using a model that uses open-source code? Because if you are, it seems like that would kind of negate your point about it being stolen. And if you aren’t ok with that, then 🤷‍♂️

https://allenai.org/tulu
 
Upvote
1 (10 / -9)

graylshaped

Ars Legatus Legionis
67,691
Subscriptor++
Asking an LLM to build you something is not problem solving. It's akin to cheating. Go on Reddit and you will find dozens of vibe coders advertising projects that they "built", but it's just over-engineered slop put together by LLMs. None of these projects are of any value, and most of the time the tools already exist, but the person typing in the prompt, or even the LLM itself, can't figure this out.

I don't have a problem with people using LLMs to create personal projects for their own enjoyment, but don't pretend you are learning or solving problems. I guarantee if you put a vibe coder in front of a blank canvas and tell them to build a basic programming project, they will crumble.
I'm an "AI" skeptic, but a tool is a tool, and Lee described how he used this tool to successfully solve a problem AND correctly called out the risk of assuming that, because it helped with this task, he'd picked up some amount of expertise in doing it.

It's like saying I'm not doing carpentry if I don't use one of these to replace a section of damaged floor moulding.


At the same time, just because one of those exists doesn't mean I have to use it to replace a piece of damaged floor moulding.
 
Upvote
3 (10 / -7)

pokrface

Senior Technology Editor
21,512
Ars Staff
@pokrface How much did the whole experiment cost?
It fit within the $20/month I'm paying for Claude Pro, so the cost to me was $20. For other definitions/applications of "cost," I don't think data are available to do more than wild-ass-guess the numbers, which @LetterRip took a stab at doing right here.
 
Upvote
13 (14 / -1)

daduke

Smack-Fu Master, in training
81
I manage a distributed service across hundreds of servers, and last year I moved all our logs to single-line JSON objects.
No more parsing, coloring, or any of this... Structured log lines in a well-defined format that any reasonable log tool can deal with changed my life and enabled a next-level observability framework.
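For anyone curious what "single-line JSON objects" looks like in practice, here's a minimal sketch in Python; the field names (`ts`, `level`, `logger`, `msg`) are illustrative choices, not a standard, and this is not daduke's actual setup.

```python
# Minimal sketch of structured, one-JSON-object-per-line logging.
# Field names are illustrative; pick whatever schema your log tooling expects.
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object on a single line."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("myservice")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("request served")  # emits one JSON object on one line
```

Because every line is valid JSON with known fields, downstream tools can filter and color on fields directly instead of regex-parsing free-form text.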
 
Upvote
4 (5 / -1)

Drizzt321

Ars Legatus Legionis
33,061
Subscriptor++
In a way I know how you feel. I ended up singling you out, but your post was just the latest in a long line of posts I felt made unreasonable assumptions about the author and what he did or did not try, consider, or know. Of course, Lee is a grown person and can defend himself, but it was making me anxious that multiple people seemed to be piling on, so I had to vent a bit.
And thank you for that, actually. I DO need to be aware, in my own language and communication, when I'm doing things like that. Now that it's in my brain, it's less likely to occur in the future.

Anyways, the situation still pisses me off, and I'm genuinely curious what others, including Lee, feel about that "limitation" in their individual user security.
 
Upvote
8 (8 / 0)

pokrface

Senior Technology Editor
21,512
Ars Staff
Python:
def is_ipv6(ip_addr):
    """Check if an IP address is IPv6 (contains colons)."""
    return ':' in ip_addr

def is_ipv4(ip_addr):
    """Check if an IP address is IPv4 (contains dots)."""
    return '.' in ip_addr

OH BOY
Yeah, I'm getting the impression that maybe I need to go back in and decide on a better way to differentiate IPv6 vs IPv4. But in the LLM's defense, I'm pretty sure the colon-vs-dot thing was my idea in the first place. Also, the IP address showing up in the nginx logs is specifically pulled from a visitor's X-FORWARDED-FOR header, and any traffic not coming in through Cloudflare gets an automatic 403 without landing in the logs, so all logged traffic should already have something clean stuffed into that header by Cloudflare. Given that, I'm not super-duper worried about misidentifying the two.

I really thought you guys would get way more upset about using greater-than/less-than to figure out HTTP error codes than about this :D
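For what it's worth, a stricter alternative to the colon/dot heuristic is Python's stdlib `ipaddress` module, which validates as it classifies. This is a sketch of that option, not the code from the article:

```python
# Stricter IP-version check using the stdlib, as an alternative to
# the colon/dot heuristic. Not the article's code; just a sketch.
import ipaddress

def ip_version(ip_addr):
    """Return 4 or 6 for a valid IP address string, or None if it isn't one."""
    try:
        return ipaddress.ip_address(ip_addr).version
    except ValueError:
        return None
```

Unlike the dot check, this rejects junk like `not.an.ip` instead of misclassifying it as IPv4, at the cost of doing full parsing on every line.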
 
Upvote
16 (17 / -1)

graylshaped

Ars Legatus Legionis
67,691
Subscriptor++
It fit within the $20/month I'm paying for Claude Pro, so the cost to me was $20. For other definitions/applications of "cost," I don't think data are available to do more than wild-ass-guess the numbers, which @LetterRip took a stab at doing right here.
Two full days of premium pokrface TIME, son. That's the long pole in the tent on cost. Having said that, the value equation is :D/:cry:

On the numerator side:
1) Identified and resolved the business issue;
2) Fed the DIY urge;
3) Got a fun story to share with us.

On the denominator side:
1) Some portion of $20;
2) The undistributed costs from the back of letterip's napkin;
3) Your time and what else you might have done with that time.

I'd say we're all >1 here.
 
Upvote
6 (6 / 0)

tydavis

Smack-Fu Master, in training
64
Subscriptor++
@pokrface The part that most concerned me about your article is that you were worried you'd get "smacked down" by uber-geeks on Stack Overflow. You've got a forum (Ars) full of relative experts, and it's not terribly hard to poke people on Bluesky/Mastodon for answers.

I get your Groucho joke about "not wanting the coder who wants me for a client" but did you give it a go?
 
Upvote
0 (7 / -7)

cleek

Ars Scholae Palatinae
1,025
Look, I'm a photographer and video producer of almost 20 years. I too feel apprehension and uncertainty, in particular for my young kids. The environmental argument is one that genuinely concerns me and I hope that the lofty promises of AI orchestrated energy breakthroughs will materialise.

The "stolen" stuff is a murky one and enormously charged with emotion. If you go to a bunch of museums to observe a heap of paintings, then read a stack of books, then create your own work inspired by these learnings, is this stealing? You wouldn't really call it that. With AI, it's similar in the sense that the technology has trained on material. It hasn't stolen it from anyone. Arguably.
learning how other people have achieved things and then applying what you've learned takes time and effort and will always include some amount of your own personality (more if you're creating art of your own, less if you're deliberately trying to copy). that's how all art and learning in general works for humans. everyone knows this and understands it and that's just how it is.

an LLM detects and reproduces patterns on demand. it is not creating art in any sense. it learns humans' patterns mathematically and spits them back at us. it's not art. it's mechanical.

when you publish art, you implicitly consent to humans studying it. the line of art and inspiration continues.

but did you consent to feed the machine that will generate profit from people who ultimately want their machine to eliminate the need to have (expensive) artists in the first place?
 
Upvote
1 (10 / -9)

Atterus

Ars Tribunus Militum
2,326
LLM "expert" delusion is real...

More people think they are experts in something because of unverified chatbot outputs...

A word of advice... a vague training set like the kind GPT and Gemini use will always be a PoS compared to dedicated models designed by actual experts. Even then, those vastly superior and venerable tools warn the user that they are just that: tools.

Want writing help? Go use Prowrite or Autocrit. Art? Practice a bit, sheesh... Science? I hope you can get into the U of your choice! Then and only then you may have a clue of how stupid it is to use a chat bot as a foundation...

LLMs will never be on par with advanced degree holders worth their salt. A real AI? Maybe. A chat bot? Comical...

But please! Encourage more fools to torpedo decades of complex code and pretend to be expert coders! Fortunately, some real experts know rule 1: backups (there is a new horror I know a lot of the vibe hate comes from; these idiots don't back up and just straight overwrite code). It is a beautiful thing seeing arrogant vibe coders get their pride suplexed when their shitty code and practices are called out... better when their "hard task" can be done during the meeting by the "luddite"...

This shit is dangerous, and the fawning over it all just further justifies long-held beliefs that these chat bots should never have been allowed into the public sphere.
 
Upvote
-12 (9 / -21)

ExhaustedTechConsumer

Smack-Fu Master, in training
60
If an LLM can help you bypass the assortment of blog posts, vague wikis, and dead links that pass for documentation for some libraries these days, and actually tell you how to make a function call on some seemingly fundamental but example-shy widget, then that's a huge net positive. An AI doing work that no one else wanted to do.

If it's effectively stealing someone's actual work - which is what seems to be the case in the creative art fields - then it's a problem. For coding, is that moral problem such an issue (assuming it's not going to give you big chunks of someone's IP)?
 
Upvote
8 (8 / 0)

CarrerCrytharis

Wise, Aged Ars Veteran
130
It's good to hear about your generally positive experience. My issue with all this is, what happens when Anthropic and OpenAI run out of money and they can no longer provide these chatbots at an extremely subsidized cost? Will this technology still be around in its current form a year or two from now?
 
Upvote
4 (8 / -4)

Aurich

Director of Many Things
40,904
Ars Staff
Sure, be honest about the experience, but part of your job is to wrestle more with the downsides of the tech you are writing about, right?

We write that kind of content all the time, though. Just to pull a recent example out of my hat:

https://meincmagazine.com/ai/2026/02/...-prompts-may-be-the-next-big-security-threat/

Maybe it's just me, but the piece reads as a way to persuade more people to use AI.

I do think it's just you. Lee wrote about his personal experience, warts and all. Interpreting that as trying to persuade people to do anything strikes me as you injecting your own viewpoint.
 
Upvote
31 (33 / -2)

timothystack169

Smack-Fu Master, in training
1
It came down to requirements—I wanted what I wanted. lnav is great and gets close, but does not do precisely what I want.
Did you ask the lnav author (me) to add the features you wanted? If you want them, it's likely other folks will as well. You can file issues on GitHub.

For the highlights that you're doing, yes, lnav is a bit limited. It currently only matches highlight regexes against the whole line or message body. There is currently no way to limit the matched part to a particular field in the message. So, it might be technically possible, but it's kind of a hassle. I would say this is a gap, and I created an issue to track getting it added.

For filtering, to match the IPv6 filtering you're doing, you can run :filter-expr :c_ip LIKE '%:%' (that creates a filter that checks log messages using a SQLite expression). You can do a similar thing for IPv4.

To hide the referrer, you can run :hide-fields cs_referer. For the user agent, it would be :hide-fields cs_user_agent.

Of course, you don't have to use lnav. I fully understand sometimes folks just want their own thing.

Thanks for the mention and hope you have a good day!
 
Upvote
65 (65 / 0)
For every story of a successful and reasonable use case, I have to wonder if the LLM would be capable of doing that task now.

Ars covered Gemini, so I tried it. Seemed whip-sharp, as these things go. About three days ago it became incoherent. It was like they cranked the knob up during media coverage, but instead of a modest dial-back once interested parties were engaged, they dialed it so far back that their product became worthless. Fucking bizarre, and not the first time it's happened.

Double-check which model you were using; Gemini often defaults to 'Fast', which is a much worse-performing model for any coding task.
 
Upvote
5 (5 / 0)

pokrface

Senior Technology Editor
21,512
Ars Staff
Did you ask the lnav author (me) to add the features you wanted? If you want them, it's likely other folks will as well. You can file issues on GitHub.

For the highlights that you're doing, yes, lnav is a bit limited. It currently only matches highlight regexes against the whole line or message body. There is currently no way to limit the matched part to a particular field in the message. So, it might be technically possible, but it's kind of a hassle. I would say this is a gap, and I created an issue to track getting it added.

For filtering, to match the IPv6 filtering you're doing, you can run :filter-expr :c_ip LIKE '%:%' (that creates a filter that checks log messages using a SQLite expression). You can do a similar thing for IPv4.

To hide the referrer, you can run :hide-fields cs_referer. For the user agent, it would be :hide-fields cs_user_agent.

Of course, you don't have to use lnav. I fully understand sometimes folks just want their own thing.

Thanks for the mention and hope you have a good day!
Thanks for responding, and thanks for your work on lnav. It's a hell of a tool that I am using near-daily for log parsing.

I definitely did not think to bother you or anyone else—primarily because bothering others with a dumb personal use case makes me feel like I'm taking advantage of being the guy from Ars Technica. And there's also the Tom Sawyer effect, mentioned in the piece, of trying to motivate myself to solve a problem.

@pokrface

So you found the cause, but I didn't see where you mentioned how you fixed it. Did you just enable your plugin again, knowing it was actually a reasonable plugin, or did you do something else to stop the bots from getting the story before Discourse could?

Thanks for pointing that out, d'oh. Yes, I kept the mu-plugin that adds no-cache headers active. I looked at ways to tweak the Apple News plugin to alter its behavior, and also at poking deeper into WordPress' guts to see if screwing around with how the post publication event works would be the right call, and in the end I decided to stick with what I know—and I know how HTTP headers work. It seemed the safest, sanest way forward.
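For readers wondering what "adds no-cache headers" looks like concretely: the actual fix is a WordPress mu-plugin in PHP, but the idea can be sketched in a few lines of Python. The header values here are common cache-busting choices, not necessarily the ones the plugin uses:

```python
# Illustrative sketch only: the real fix is a WordPress mu-plugin (PHP).
# This shows the kind of cache-busting response headers involved,
# using Python's stdlib WSGI machinery.
from wsgiref.simple_server import make_server

NO_CACHE_HEADERS = [
    ("Cache-Control", "no-cache, no-store, must-revalidate"),
    ("Pragma", "no-cache"),  # HTTP/1.0 fallback
    ("Expires", "0"),
]

def app(environ, start_response):
    """Tiny WSGI app that serves every response with no-cache headers."""
    start_response("200 OK", [("Content-Type", "text/plain")] + NO_CACHE_HEADERS)
    return [b"fresh content, never cached\n"]

# To serve it locally (hypothetical port):
# make_server("", 8000, app).serve_forever()
```

With headers like these, intermediaries and clients are told not to serve a stale copy, which is the same lever being used to keep bots from grabbing a cached pre-publication page.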
 
Upvote
13 (14 / -1)