So yeah, I vibe-coded a log colorizer—and I feel good about it

HamHands_

Ars Centurion
200
Subscriptor
It fit within the $20/month I'm paying for Claude Pro, so the cost to me was $20. For other definitions/applications of "cost," I don't think data are available to do more than wild-ass-guess the numbers, which @LetterRip took a stab at doing right here.
Yup your definition was what I was looking for. Friends of mine who have vibe coded apps for just themselves reported that they can spend ~$100 for basic desktop apps, web scrapers, APIs, etc. I was curious to see how your costs compared to theirs.
 
Upvote
2 (3 / -1)
Yup your definition was what I was looking for. Friends of mine who have vibe coded apps for just themselves reported that they can spend ~$100 for basic desktop apps, web scrapers, APIs, etc. I was curious to see how your costs compared to theirs.
Over the weekend, I used Claude Code to build a personal finance web app to replace my old system. It's a React app with a PostgreSQL DB, deployed with Docker. It took around 9 hours to build, split over Saturday and part of Sunday, using Opus and the Design and Superpowers plugins. Cost was my Pro sub and $60 in Extra usage charges.

On the other hand, over the course of about 2 weeks, I built a Sonos dashboard web app that tracks the SiriusXM songs that play, keeps statistics and favorites, and lets me update alarms. The web app is only about 30% as complex as the finance app, but I've also built an MCP server for the Sonos app and use it as a PWA. This one was built up over time, and I never used more than the 5-hour quota limit, so cost was limited to the $20 sub.
 
Upvote
-1 (6 / -7)
LLMs can be fantastic if you’re using them to do something that you mostly understand. If you’re familiar enough with a problem space to understand the common approaches used to solve it, and you know the subject area well enough to spot the inevitable LLM hallucinations and confabulations, and you understand the task at hand well enough to steer the LLM away from dead-ends and to stop it from re-inventing the wheel, and you have the means to confirm the LLM’s output, then these tools are, frankly, kind of amazing.

All true, but I'd add: And the LLM has enough examples in its training set.

If you want Minesweeper or a text editor or a log colorizer, the LLM probably has dozens, hundreds, even thousands of examples in its training set and has had its weights adjusted properly to do its predictive generation thing.

I've posted this in other LLM coding threads but I think it's relevant enough to repeat:

When I tried Gemini and ChatGPT on InstallShield 2022 MSI + PowerShell development, my guess is that there's a severe lack of examples, because most InstallShield development is proprietary to the company releasing an app and they have no reason to share it. The results I got were more hallucinations than not. It didn't help to specify the version; I still had to point out that the suggested UI did not exist, that the PowerShell commands would not work in an MSI install, etc.
 
Upvote
5 (7 / -2)

Maxer

Ars Legatus Legionis
19,019
Subscriptor
Thanks for pointing that out, d'oh. Yes, I kept the mu-plugin that adds no-cache headers active. I looked at ways to tweak the Apple News plugin to alter its behavior, and also at poking deeper into Wordpress' guts to see if screwing around with how the post publication event works would be the right call, and in the end I decided to stick with what I know—and I know how http headers work. It seemed the safest, sanest way forward.

No, no one told me to write this. I've been a reader since 1998 and I generally find that my own personal interests align with the audience's, because I am them. Further, I've been employed here since 2012. I'm a senior editor with direct reports and I sit on the Ars editorial board. I am the "on high" at this point.

I had a solid experience over the Christmas break and I wanted to write it up. If I have further solid experiences, I'll write those up, too!
I appreciate the write up and found it very interesting!

Question: I'd love to know more about the costs to run these code helper LLMs.

You mentioned additional costs being very common, and that you ran out of credits.

What would the cost per prompt or hour work out to be?

Would it cost $100 a day if you used this 6 or 8 hours a day? $800 a day?

Does it start to approach the cost of hiring an actual person?
 
Upvote
4 (5 / -1)

Spiderman10

Ars Scholae Palatinae
963
Subscriptor++
It's amazing how you've managed to learn about AI while avoiding all of the advertising around AI tossed out by Anthropic, OpenAI, Microsoft, etc. Maybe go check what those companies are saying AI can do. You might learn something new about the people "missing the point". I would love to go back to the time when LLMs were just decent tools for text analysis and the like. But we're not back then. We are here now.
Cool your jets Vin Diesel. I'm referring to coding specific tools/models, not wider AI tools that are doing and integrating into more complex workflows. This article is about coding tools and I'm talking about coding tools.
 
Upvote
-3 (5 / -8)
To ask those questions…
I can't speak to your efforts personally but I continue to see the same tit for tats in almost every AI discussion thread, this one included. The interactions barely ever move to "this could be a practical policy" or "a law like this might prevent mass redundancy while still allowing AI to assist in X sector" or "AI can be useful in this instance but we need tax funded income safety nets for such and such." It's almost always "AI is an abomination" vs "AI is the technology of our time and you just need to suck it up and get used to it."

It's fighting. It's not constructive. It feels like a lot of wasted time and energy.
 
Last edited:
Upvote
2 (8 / -6)

philhanson

Ars Scholae Palatinae
1,298
Subscriptor
I'm currently working on a completely vibe-coded iCloud/Google Docs replacement that runs on my own VPS. It's ... surprisingly polished and works fantastically. It has email (soon), Files (Drive/iCloud) with full file manager features, Photos with editing, a full notes app with Markdown, a to-do list with alerts, Calendar, Contacts (which work with the other aspects like calendar, mail, and to-dos), a vault for passwords, CCs, and logins, and an MS Office replacement (not fully compatible, but I never use, get, or send actual Office files).

Screenshot 2026-02-05 at 1.24.25 AM.png
 
Upvote
-4 (5 / -9)

philhanson

Ars Scholae Palatinae
1,298
Subscriptor
I appreciate the write up and found it very interesting!

Question: I'd love to know more about the costs to run these code helper LLMs.

You mentioned additional costs being very common, and that you ran out of credits.

What would the cost per prompt or hour work out to be?

Would it cost $100 a day if you used this 6 or 8 hours a day? $800 a day?

Does it start to approach the cost of hiring an actual person?
I use the $100 a month Claude Max and almost never hit the limits. This is with just hours and hours of typing into terminal a day sometimes.
 
Upvote
-5 (1 / -6)

nedscott

Ars Praetorian
553
Subscriptor++
But the comments are always filled with people who have the nuance of a Knight Templar. How is any productive conversation going to happen when there are so many extremists torpedo-ing any and all discussions about LLMs… genuinely flummoxed.
Yeah, like all those "extremists" who keep saying we have a massive climate crisis, and that between cryptocurrencies and LLMs, we've wiped out what little energy savings we've made over the last 30 years. Bringing that up is such a bummer and has no "nuance".
 
Upvote
4 (12 / -8)

nedscott

Ars Praetorian
553
Subscriptor++
The dotcom bubble burst and a lot of idiots went broke, but "the internet" didn't go away. It really was an important new technology - it just wasn't the magical panacea that the startups were selling.
That was never a thing. Why do people keep acting as if the dotcom bubble was about the internet/web itself? It was simply called the dot com bubble. It was never an internet bubble. Very specific services/websites got overvalued, and a LOT of money was involved, but it was never about the whole internet, or even involving a majority of it.
 
Upvote
7 (9 / -2)

blarfiejandro

Ars Scholae Palatinae
867
I use the $100 a month Claude Max and almost never hit the limits. This is with just hours and hours of typing into terminal a day sometimes.
I remember what a revolution GNU software was and how it managed to democratize computers with open source. Watching people throw that all away to chase the next shiny thing while absolutely demolishing what remains of the decentralized, open internet was one of the saddest parts of 2025. I got a notification today on a different forum that someone had returned after a multi-year absence and had just noticed one of my open source projects. Of course, I was greeted by that Cloudflare time waster because the forum is buckling under the strain of AI bots. And for what? So you don't have to think? So you can buy a subscription to avoid learning? Thanks.

I'm not surprised that the Ars faithful would be so dismissive of the glaring ethical problems with AI, but I am disappointed.
 
Upvote
-2 (9 / -11)

blarfiejandro

Ars Scholae Palatinae
867
Over the weekend, I used Claude Code to build a personal finance web app to replace my old system. It's a React app with a PostgreSQL DB, deployed with Docker. It took around 9 hours to build, split over Saturday and part of Sunday, using Opus and the Design and Superpowers plugins. Cost was my Pro sub and $60 in Extra usage charges.

On the other hand, over the course of about 2 weeks, I built a Sonos dashboard web app that tracks the SiriusXM songs that play, keeps statistics and favorites, and lets me update alarms. The web app is only about 30% as complex as the finance app, but I've also built an MCP server for the Sonos app and use it as a PWA. This one was built up over time, and I never used more than the 5-hour quota limit, so cost was limited to the $20 sub.

Meanwhile, I've spent the past month doing a deep dive into some embedded hardware projects. And you know what? AI meant that I repeatedly had to prove to DigiKey that I'm a human, even while I was logged in. AI meant that every other vendor had similar issues. AI meant that I can't access the manufacturer's support forums because they think I'm an AI bot. AI meant that, in addition to punitive tariffs, things with RAM got a lot more expensive. AI meant that when Cloudflare went down, I was dead in the water, because pretty much every site is running for cover from the hyper-aggressive AI scrapers. AI meant that when I put time and effort into comments and docs for humans in a pull request, I got a gibberish AI summary and "code review" in response.

Fuck your AI.

Edit:

Oh yeah I forgot github. Microsoft is so troubled by aggressive AI scrapers that they've locked down unauthenticated use of their API and moved their site to client side rendering. So unless you're logged in you can look at maybe one pull request a week. So much for privacy and decentralization. Thanks AI. 🖕
 
Last edited:
Upvote
-3 (8 / -11)
See, I spent way too long trying to manage the work of a subcontractor team as a "lead developer". There was no joy to be found in that work, there was no fun. Luckily the tide turned there, eventually, and the subcontracting is largely done with.

So when management now gets the bright idea that developers should be force-fed LLMs and pushed to do pretty much the same thing (keep an eye on code quality, act as the one always reviewing the work and responsible for any bugs that get through, without any of the joy of actual construction), just with AI agents that can create code vomit so much more quickly - and with so many more subtle and hard-to-notice errors... no. Just no. The only pleasant thing about that job was the human interaction, and even that is gone with AI.

But this seems somehow impossible to get through to the managers - if I actually enjoyed things like that, I would have switched to a management career track at any of the points when there was opportunity to do so. Or maybe this is payback for actually daring to enjoy myself at work, occasionally, over the past decades.
 
Upvote
4 (5 / -1)
I appreciate the write up and found it very interesting!

Question: I'd love to know more about the costs to run these code helper LLMs.

You mentioned additional costs being very common, and that you ran out of credits.

What would the cost per prompt or hour work out to be?

Would it cost $100 a day if you used this 6 or 8 hours a day? $800 a day?

Does it start to approach the cost of hiring an actual person?
The cost differs from tool to tool, provider to provider. MS GitHub Copilot is considered to be the cheapest option. In enterprise version, it gives a user 300 premium requests and unlimited number of regular requests (smaller and older LLMs) per month for $18. For the best LLM (Claude Opus 4.5) they charge 3x. It's impossible to have a simple comparison of the costs between using a human and LLM. It depends on the human, LLM, project type etc. But there is no doubt that LLM gives the human a huge boost. And, the more proficient (in coding and domain expertise) the human is, the greater leverage LLM provides. Keep in mind that with proper use, the best LLMs can produce more code in 6 hours than human would in a month (or more).
 
Upvote
0 (3 / -3)

pokrface

Senior Technology Editor
21,512
Ars Staff
You should have something like fail2ban on any hosting server so any rogue bot doesn't crash your system. This is amateur stuff.
I'm curious how you think fail2ban would have helped in this specific situation—are you suggesting i have f2b watching https traffic?

IME, after extensive on-and-off usage over years, fail2ban is a terrible tool for most applications. It's gross and heavy and bloated, and the amount of CPU time it steals from the host doing live matching against your logs grows unacceptably large as the traffic scales. For SCW, fail2ban just watching ssh traffic racks up more cpu time than my redis and mariadb processes combined. If you think fail2ban is a good idea versus better tools and methods, you should re-examine your requirements and what you think it's accomplishing for you. Perhaps you fall into the extremely narrow band where the tool gives you some real utility? If not, repeating context-free advice like "lol just use fail2ban" is cargo-cult system administration, and that's amateur stuff.

I'd use plain ol' nftables rate limiting for any given port or service before I reached for fail2ban.
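Concretely, that looks something like the ruleset below (a sketch only; the table/chain layout and the 20/second threshold are illustrative, not my actual config):

```
# /etc/nftables.conf sketch -- names and thresholds are illustrative
table inet filter {
    chain input {
        type filter hook input priority 0; policy accept;
        # Accept up to 20 new connections/second to 443; drop the overflow
        tcp dport 443 ct state new limit rate 20/second accept
        tcp dport 443 ct state new drop
    }
}
```

No log tailing, no extra daemon; the kernel does the counting.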

I'm happy to discuss my sites' security postures & strategies in detail if you'd like, and answer any questions you have. That way, you won't have to make silly assumptions and maybe we'll all learn something! :)
 
Upvote
4 (7 / -3)

pokrface

Senior Technology Editor
21,512
Ars Staff
I appreciate the write up and found it very interesting!

Question: I'd love to know more about the costs to run these code helper LLMs.

You mentioned additional costs being very common, and that you ran out of credits.

What would the cost per prompt or hour work out to be?

Would it cost $100 a day if you used this 6 or 8 hours a day? $800 a day?

Does it start to approach the cost of hiring an actual person?
Noted above, but while there are options to pay for extra usage (or to do straight API-based billing), everything I've ever done with Claude Code (including this project) was done with a regular $20/month pro subscription. I've never paid for overages or instant credit refills; sometimes, for projects burning a lot of tokens, this means I have to wait a few hours for the token count to reset.

There's no universe in which I'd pay $100/day for agentic coding on dumb personal side projects.

In terms of actual human cost, as a firm believer in the "fuck you, pay me" philosophy when doing work for others, I would want to pay the developer a fair, living wage that they're happy with. That feels like something in the $40-to-$50-an-hour range at minimum, which makes sense for a professional who does professional work. (Yeah, Fiverr is cheaper, but I also believe you get what you pay for when it comes to work that matters.)

If I had a real project to do with real stakes and a real deadline, I'd absolutely hire someone (probably from the Ars openforum!). But for silly personal projects, if the opportunity cost isn't effectively nothing, I'd just skip the project.
 
Upvote
9 (13 / -4)
I'm curious how you think fail2ban would have helped in this specific situation—are you suggesting i have f2b watching https traffic?

IME, after extensive on-and-off usage over years, fail2ban is a terrible tool for most applications. It's gross and heavy and bloated, and the amount of CPU time it steals from the host doing live matching against your logs grows unacceptably large as the traffic scales. For SCW, fail2ban just watching ssh traffic racks up more cpu time than my redis and mariadb processes combined. If you think fail2ban is a good idea versus better tools and methods, you should re-examine your requirements and what you think it's accomplishing for you. Perhaps you fall into the extremely narrow band where the tool gives you some real utility? If not, repeating context-free advice like "lol just use fail2ban" is cargo-cult system administration, and that's amateur stuff.

I'd use plain ol' nftables rate limiting for any given port or service before I reached for fail2ban.

I'm happy to discuss my sites' security postures & strategies in detail if you'd like, and answer any questions you have. That way, you won't have to make silly assumptions and maybe we'll all learn something! :)
You just make a rule for 403. After three fails it bans that source for 5 minutes.

Edit: I just want to explain that you can't look at the application-layer 403 with nftables. You also don't need a database file unless you want to keep long-term bans after a reboot; just don't use one at all. And you should be using pyinotify, so it doesn't have to read your entire logs again, even if you are lazy and don't rotate your logs. I hope this helps.
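For reference, a minimal version of that setup looks something like this (a sketch only; the filter name, log path, and thresholds here are examples, not a drop-in config):

```ini
# jail.local -- the jail name and nginx log path are examples; adjust to your setup
[nginx-403]
enabled  = true
port     = http,https
filter   = nginx-403
logpath  = /var/log/nginx/access.log
maxretry = 3
findtime = 60
bantime  = 300

# filter.d/nginx-403.conf -- match access-log lines that ended in a 403
[Definition]
failregex = ^<HOST> .* "(GET|POST|HEAD)[^"]*" 403
```

Three 403s from one source inside 60 seconds bans that IP for 5 minutes, per the maxretry/findtime/bantime settings above.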
 
Last edited:
Upvote
0 (2 / -2)

pokrface

Senior Technology Editor
21,512
Ars Staff
You just make a rule for 403. After three fails it bans that source for 5 minutes.

Edit: I just want to explain that you can't look at the application layer 403 with nfttables. You also don't need a database file, unless you want to keep long-term bans after a reboot, just don't use one at all. And you should be using pyinotify, so it doesn't have read your entire logs again, even if you are lazy and don't rotate your logs. I hope this helps.
Again, why would I want a whole-ass extra application to screw with my layer 7 traffic like that, when nginx is perfectly fine generating 403s without all the extra logging and tracking? (Edited to add: the "but what if a rogue host takes down my web server by generating 403s?" concern is, frankly, silly.)

If the goal is blocking large amounts of malicious/suspicious traffic, the place to do it is at the WAF. If we're just rate limiting layer 7? WAF again. The WAF can eat that traffic, not me.

There are better tools than fail2ban for basically every single situation.

edited to add - apologies if i'm coming off here as overly combative. that's not my intent. your advice is useful for folks to see and i'm glad you're posting it.
 
Last edited:
Upvote
5 (8 / -3)

uhuznaa

Ars Tribunus Angusticlavius
8,585
These tools can be really useful when you know what you're doing. The ironic thing is that you learn what you're doing while you cut your teeth on learning to program, and this is actually not really about learning a programming language but about learning to correctly analyze the problems you want to solve. This is often vastly underrated, because you (ideally) learn it along the way while you think you're learning coding.

If you don't even have to learn this you'll never really learn it because you're caught between your own cluelessness and the shortcomings of the LLM.
 
Upvote
2 (4 / -2)

camlost

Smack-Fu Master, in training
1
The "stolen" stuff is a murky one and enormously charged with emotion. If you go to a bunch of museums to observe a heap of paintings, then read a stack of books, then create your own work inspired by these learnings is this stealing? You wouldn't really call it this. With AI, it's similar in the sense that the technology has trained on material. It hasn't stolen it from anyone. Arguably.
This is a bad argument. If I create art inspired by other works of art, it's a new thing. It's filtered through my human experience. I create things that are the sum of my experience and skills.

Generative AI has no experience. It doesn't have skills in the same way that people do. It has no point of view. It can't pick things that are important to it to be inspired by because it doesn't value aesthetics or prose or get inspired. It doesn't know or think, despite the anthropomorphism practiced by its proponents. It can only regurgitate based on probabilities. Maybe it's remixed based on the prompt, but the machine that's generating images isn't creating. It steals because it doesn't bring anything new to what it generates.

Likewise, the prompter is not creating art. They may be using a tool to ask, "What would it look like if I combined these two things?" While they may be bringing their own values to the process, they are not creating. They're equivalent to the ideas guy who's constantly trying to get people to work for them for free. That's why GenAI has such appeal to the executive class. They don't have the skills to actually do anything, only ideas.
 
Upvote
-1 (6 / -7)

uhuznaa

Ars Tribunus Angusticlavius
8,585
I remember what a revolution GNU software was and how it managed to democratize computers with open source. Watching people throw that all away to chase the next shiny thing while absolutely demolishing what remains of the decentralized, open internet was one of the saddest parts of 2025.

You must have been asleep for the past 10 years or so... The "decentralized, open Internet" was already dying long before LLMs appeared on the landscape.

BTW, same with people who complain about AI ruining Google search: it had been getting harder and harder to find anything but SEO fodder and webshops with Google for a long time. Real, decentralized content on good old home pages, blogs, and forums had been evaporating for ages. What we got instead was social media filled with engagement bait. And this did NOT only happen in 2025.
 
Upvote
-2 (2 / -4)

cool_ish

Smack-Fu Master, in training
67
It's nice to see a wet blanket thrown on the doom-saying once in a while. I've used agents to help with projects, pushed as far as I could until the logic broke down. They're still so limited.

I'm more hopeful for the next generation of agents with REAL complex reasoning. I don't think we're far off.

Surely they just need to strip a few more RAM sticks from consumer shelves. ... Surely!

Alright, I'm feeling less optimistic.
 
Upvote
1 (5 / -4)
I appreciate the write up and found it very interesting!

Question: I'd love to know more about the costs to run these code helper LLMs.

You mentioned additional costs being very common, and that you ran out of credits.

What would the cost per prompt or hour work out to be?

Would it cost $100 a day if you used this 6 or 8 hours a day? $800 a day?

Does it start to approach the cost of hiring an actual person?

He has a $20-a-month plan. It has a token limit for every 5 hours. You can get way higher limits (5x to 20x) for $100 a month.

Or you can pay per million tokens. (Heavy users run about a million tokens a day.) These users are often spending $200 a month.

A freelance Python developer is $50 an hour as a low basis. So your maximum monthly spend would get you 4 hours of developer time or less.

Note that the cost of tokens was assuming Claude. Kimi 2.5 is vastly cheaper with pretty competitive output, and DeepSeek 4 should be out in a week (rumours are it will be out before Chinese New Year - Feb 17) and is also drastically cheaper with competitive output.
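To put those figures together (a toy calculation, using only the rough numbers above):

```python
# Back-of-envelope comparison using the rough figures from this post:
# a $200/month heavy-usage LLM spend vs. a freelance developer at a
# low-basis $50/hour rate. Both numbers are assumptions, not quotes.
monthly_llm_spend_usd = 200
dev_rate_usd_per_hour = 50

# How many hours of human developer time the same money buys
equivalent_hours = monthly_llm_spend_usd / dev_rate_usd_per_hour
print(f"${monthly_llm_spend_usd}/month ~= {equivalent_hours:.0f} hours of developer time")
# → $200/month ~= 4 hours of developer time
```

So "approaching the cost of hiring an actual person" it is not, at least at hobbyist usage levels.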
 
Last edited:
Upvote
4 (7 / -3)
Modern python has a built in library for ipaddresses - ipaddress

If I were to write your app, I would have probably used existing libraries - apache-log-parser for the parsing, rich for colorization and display, and argparse for argument parsing.

Yeah it does, and using ipaddress to determine if something is, y'know, an IP address, is actually one of the most useful functions of that library. You can just do ipaddress.ip_address(val).version and it will return either 4 or 6 if it's a valid address and raise a ValueError if it isn't.

I've used this sort of thing a bunch in my own code.
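A minimal sketch of that pattern (the helper function name is mine, not from the article's code):

```python
import ipaddress

def ip_version(val):
    """Return 4 or 6 for a valid IP address string, or None if it isn't one."""
    try:
        # ip_address() parses both IPv4 and IPv6; .version is 4 or 6
        return ipaddress.ip_address(val).version
    except ValueError:
        # Raised for anything that isn't a valid address
        return None

print(ip_version("192.168.1.10"))  # → 4
print(ip_version("2001:db8::1"))   # → 6
print(ip_version("not an ip"))     # → None
```

No regexes required, and it correctly rejects near-misses like "256.1.1.1" that naive patterns accept.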
 
Upvote
6 (6 / 0)

TheNewShiny

Ars Scholae Palatinae
1,197
Subscriptor++
Thanks for the writeup. It's really a fantastic enabler for the code-curious. For many things in my daily life I have at one point thought "I'm sure this can be automated", but the effort to put that together was prohibitive.

But suddenly I find myself having "vibe-coded" a rather substantial piece of JavaScript. That is: Copilot suggests code, I implement it, modify it, it breaks, and it will help me spot the missing } or find logical errors. Occasionally I suggest a way to make things more efficient, which then occasionally even works.

I love it! It's so fun to think about the logical challenges and to see something start working exactly how it should. It's kind of addictive; I think I've easily put twenty hours into this in the past week. Downsides: tokens used, CO2 produced, cooling water wasted. Other than those major downsides, it's an amazing way of coding/learning to code.
 
Upvote
4 (9 / -5)

TheNewShiny

Ars Scholae Palatinae
1,197
Subscriptor++
Come on, I'm a power-user using Windows. That's never, never, never ever happened to me.

And please don't ask me why I was repeatedly re-booting my system at 7 am this morning.
I'll do you one better, I restarted the Copilot app several times today in the hopes of resolving UI issues that started creeping up (failure to render code blocks) and getting back to vibe coding.
 
Last edited:
Upvote
-7 (0 / -7)

crmarvin42

Ars Praefectus
3,113
Subscriptor
You must have been asleep for the past 10 years or so... The "decentralized, open Internet" was already dying long before LLMs appeared on the landscape.

BTW same with people who complain about AI ruining Google search: It was getting harder and harder to find anything but SEO fodder and webshops with Google since a long time. Real, decentralized content in good old home pages, blogs and forums was vaporizing since ages. What we got instead was social media filled with engagement bait. And this did NOT only happen in 2025.
Google search had indeed been declining for a while before AI came along, but that resulted from a conscious choice by Google to worsen their product for the purpose of increased revenue/profit through enshittification.

OTHER search engines do a much better job, but the muscle memory to go to Google is too strong. DDG is better in my experience (as a free, ad-supported option), as is Kagi (as a paid option). I've even had good results the few times I kicked the tires on SearXNG (as a more privacy-minded option).

AI certainly has not helped the matter, further breaking the revenue flow for those independent websites by increasing traffic (and thus costs) while decreasing actual eyeballs on their pages to view the ads that paid the bills, because AI search does not generate much click-through at all.
 
Upvote
4 (4 / 0)
Sounds like you might be doing something wrong there. Are you setting enough context when you load the model? You certainly shouldn't need to use a RAG solution for what you're describing, unless you're really pushing some very odd edge cases.


Many of us would say around mid-last year, with the releases of GPT-OSS, GLM-4.5, Qwen 3 (for open weights) and Gemini 2.5 for closed. But YMMV.

Look a bit more carefully at his use case - a 2 MB PDF could be in the 500k-token range, and most open-source LLMs simply can't handle that large a context.
 
Upvote
2 (2 / 0)

graylshaped

Ars Legatus Legionis
67,692
Subscriptor++
I can't speak to your efforts personally but I continue to see the same tit for tats in almost every AI discussion thread, this one included. The interactions barely ever move to "this could be a practical policy" or "a law like this might prevent mass redundancy while still allowing AI to assist in X sector" or "AI can be useful in this instance but we need tax funded income safety nets for such and such." It's almost always "AI is an abomination" vs "AI is the technology of our time and you just need to suck it up and get used to it."

It's fighting. It's not constructive. It feels like a lot of wasted time and energy.
When the developers of "AI" products include relevant disclaimers about the limitations and true costs of the tools they sell and for which they avidly, rapaiously, and relentlessly seek highly speculative investments, it might then be appropriate to think discussion of the pros and cons of their efforts should be more balanced.

Until that happens, "fair and balanced" rather requires giving prominence to the skepticism the shills want to have wave away, n'est ce pas?

Are you asking skepticism be set aside? We should stop resisting the deluge of fuzzy investor pitches? Now's your chance to show us who you are.
 
Upvote
4 (8 / -4)

Spiderman10

Ars Scholae Palatinae
963
Subscriptor++
This isn’t about IF LLMs can be useful, it’s about control.

We already gave up control of our social graphs to Meta. We gave up control of our PCs to Microsoft. We gave up control of video ownership to Netflix. Now you want to give up control of programming, of thinking and problem solving?

I'm OK with giving up control over typing words in a text editor. We don't necessarily need to be limited by syntax anymore.

What do you mean when you say "thinking and problem solving"? What specific mental processes are you referring to? Do you mean creativity? In my experience, using these tools has caused me to shift from spending most of my time implementing a core idea (i.e., typing) to iterating on the core idea to make it even better and improving the software architecture/security/usability. Then implementation just takes about 30 minutes. It's redirected where my time is spent.

I'm doing less grindy work, and more thinking work.
 
Upvote
1 (5 / -4)
Are you asking skepticism be set aside? We should stop resisting the deluge of fuzzy investor pitches? Now's your chance to show us who you are.
No, I'm not asking for skepticism to be set aside. I'm suggesting more efforts be directed towards how we might steer AI towards a future we want given it's here to stay in some form. Be that specific regulation, taxation, policy, incentives, etc. It's in our interests to assume AI becomes as powerful as some are suggesting so that we can do the thinking needed while there's still time. And I think part of this role falls on the tech community like us.
 
Upvote
-3 (3 / -6)

crmarvin42

Ars Praefectus
3,113
Subscriptor
No, I'm not asking for skepticism to be set aside. I'm suggesting more efforts be directed towards how we might steer AI towards a future we want given it's here to stay in some form. Be that specific regulation, taxation, policy, incentives, etc. It's in our interests to assume AI becomes as powerful as some are suggesting so that we can do the thinking needed while there's still time. And I think part of this role falls on the tech community like us.
The problem with making that assumption is that it serves the interests of the very people who are pushing the fantastical narratives about what it can/will be able to do.

I'm paraphrasing Cory Doctorow here, but an AI may not be able to do all jobs, but someone selling AI may very well be able to convince key decision makers (politicians, CEOs, etc.) that it can do YOUR (or anyone else's) job due to the hype around AI right now.

Conversations about what AI can or cannot be used for at a society level, imo, needs to take a back seat to popping the hype (and financial investing) bubble around it. We can't have a reasonable conversation about its capabilities until after the ketamine huffing tech CEOs have been knocked down a dozen pegs or so, and the politicians who are profiting off of their stock portfolios have taken a drubbing in the process. Until then, only the fabulists will be listened to, and the rest of us will suffer.
 
Upvote
0 (4 / -4)

graylshaped

Ars Legatus Legionis
67,692
Subscriptor++
No, I'm not asking for skepticism to be set aside. I'm suggesting more efforts be directed towards how we might steer AI towards a future we want given it's here to stay in some form. Be that specific regulation, taxation, policy, incentives, etc. It's in our interests to assume AI becomes as powerful as some are suggesting so that we can do the thinking needed while there's still time. And I think part of this role falls on the tech community like us.
While you argue about the best route, I'm going to continue to suggest rather than drive at full speed in a random direction, we slow down, figure out where this bus is headed, and look at getting drivers who aren't either fucking lying liars who lie or starry-eyed evangelists.
 
Upvote
3 (8 / -5)
While you argue about the best route, I'm going to continue to suggest rather than drive at full speed in a random direction, we slow down, figure out where this bus is headed, and look at getting drivers who aren't either fucking lying liars who lie or starry-eyed evangelists.
Sure but that is the part that arguably can't be controlled. The speed of development is being driven by global competitive forces. It's effectively the genie out of the bottle stuff that's been touched on.

What can be controlled is how government and industry implement these tools via various policy and legislative levers.
 
Upvote
-2 (1 / -3)

graylshaped

Ars Legatus Legionis
67,692
Subscriptor++
Sure but that is the part that arguably can't be controlled. The speed of development is being driven by global competitive forces. It's effectively the genie out of the bottle stuff that's been touched on.

What can be controlled is how government and industry implement these tools via various policy and legislative levers.
Okay. I'll accept your assurances and continue in Full Mock the Lemmings mode.
 
Upvote
-2 (4 / -6)