Google is bringing vibe coding to your terminal with Gemini CLI

MagicDot

Ars Scholae Palatinae
1,074
Subscriptor
I don't care if it's open-source, it's still a tool that people will take advantage of to be lazy, which will result in more mistakes getting pushed into production, and it's still a tool that businesses will use to take advantage of the workforce.
...and once they have half the population using it and unable to do the work on their own, the subscription plan kicks in - Ka-Ching!
 
Upvote
76 (80 / -4)
It would probably be more accurate to say that it is an "open source client" rather than an "open source agent" since it sounds like it just calls home to the black box in the mothership for everything.

Certainly getting a better license for the client is nicer than getting a worse one; but describing the resulting system as 'open source' is like saying that ChatGPT is open source if you visit it in Firefox.
 
Upvote
143 (144 / -1)

Rirere

Ars Centurion
311
Subscriptor++
Question: it seems readily apparent that the training set may include "all rights reserved", GPL, or other somehow-restricted licensed code scraped from various sources.

Is there any word on whether or not the coding output of LLMs could be held subject to the terms of the upstream licenses? Obviously, proving provenance would be nearly impossible, but it's a somewhat fascinating thought experiment nonetheless given historical precedents around direct copies and derivations of code.
 
Upvote
16 (21 / -5)

Shiunbird

Ars Scholae Palatinae
728
I don't like this trend of CLI applications that look like CLI-GUI stuff, with text fields and animated progress bars, etc. (unless we are talking about HP's SAM or Microsoft's edit or nano)

It makes much more sense if they just output clean stdout that I can pipe into another application, otherwise what's the point?

(unless there's a flag or argument that will just give me clean stdout, then I take my point back)
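The behavior I'd want is the standard isatty check: decorated output when interactive, clean stdout when piped. A rough sketch of the pattern in Python (a hypothetical tool, not what Gemini CLI actually does):

```python
import sys

def report(items: list[str]) -> None:
    """Decorated output on a terminal, clean lines when piped."""
    if sys.stdout.isatty():
        # Interactive terminal: progress-style decoration is fine here.
        for i, item in enumerate(items, 1):
            print(f"[{i}/{len(items)}] {item}")
    else:
        # Piped or redirected: one clean line per item for the next tool.
        for item in items:
            print(item)

report(["configure", "build", "test"])
```

Run interactively you get the bracketed progress lines; pipe it into grep or awk and you get plain lines.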
 
Upvote
0 (16 / -16)

ultimatebubs

Smack-Fu Master, in training
43
...and once they have half the population using it and unable to do the work on their own, the subscription plan kicks in - Ka-Ching!
Yeah... since it's a Google product, this will end one of two ways:

1) The product will not achieve mass adoption, and Google will send it to the graveyard and screw over any clients who rely on it, OR

2) The product achieves mass adoption, and Google adds a paywall to it for commercial use. The free version will probably be limited to something like 5 queries a day.
 
Upvote
34 (37 / -3)
Been using Gemini 2.5 a lot lately. Pros: it can handle up to 150 or so files (the upload allows '1000', but it can only see 150 of them with its file tool).

Cons: about 10-20% of the time it completely hallucinates the content of your files, making patches against files in your codebase that have an actual filename but contain code nothing like what it says is there, and it sometimes 'guesses' what API calls will be rather than reading the actual content of your files. It often has a preference for an API name and will use it regardless of what the actual name is.
Its internal tool use (file fetcher) seems to fail frequently, which is probably a big part of the hallucination issue (if it can't get the contents, it guesses).
It is 'inconsistently lazy': it will look like it is planning to update the entire file contents but instead quits partway through.
It hallucinates line numbers when doing patches.

It also defaults to fairly crappy code style, so it often skips type checking when doing Python, or if it does use type hints, it uses 'Any' for anything that isn't a basic type. Or when writing scripts it hard-codes paths instead of using argparse.

Claude and GPT have similar failures and different strengths and weaknesses. (Both Claude and GPT tend to limit you to 30 files, which is usually fine but far less convenient than uploading big folders of code; you have to pick and choose instead...)
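To illustrate the style complaint: what I'd want from a generated script is concrete type hints plus argparse, rather than 'Any' and hard-coded paths. A small hand-written sketch (a hypothetical example, not actual model output):

```python
import argparse
from pathlib import Path

def count_lines(path: Path) -> int:
    """Concrete types (Path -> int) instead of 'Any'."""
    with path.open() as f:
        return sum(1 for _ in f)

def build_parser() -> argparse.ArgumentParser:
    # The input path comes from the command line, not a hard-coded string.
    parser = argparse.ArgumentParser(description="Count lines in a file.")
    parser.add_argument("path", type=Path, help="file to count lines in")
    return parser

# Demo: parse an explicit argv list; a real script would call parse_args().
args = build_parser().parse_args(["notes.txt"])
```

With real annotations a type checker can actually verify the call sites, instead of everything collapsing to Any.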
 
Upvote
45 (45 / 0)

nancy-drew

Ars Centurion
356
Subscriptor++
This reads like an advertisement, down to the headline. I thought at first glance it was an ad banner.

Pretty odd and out of the norm for in-house Ars articles.
I have been noticing, with increasing frequency, that more Condé Nast is leaking in; and with only a finite volume to fill, I fear Ars is being pushed out.
 
Upvote
31 (41 / -10)
Question: it seems readily apparent that the training set may include "all rights reserved", GPL, or other somehow-restricted licensed code scraped from various sources.

Is there any word on whether or not the coding output of LLMs could be held subject to the terms of the upstream licenses? Obviously, proving provenance would be nearly impossible, but it's a somewhat fascinating thought experiment nonetheless given historical precedents around direct copies and derivations of code.
The USCO currently takes the position that bot output is not copyrightable, but that human-edited/curated/otherwise-munged material remains copyrightable regardless of being partially bot output.

I'm not a legalmancer, but I presume this situation changes considerably if a particular bit of bot output ends up being a literal regurgitation of a copyrighted element of the training set, since that case would look a lot more like someone just using a really esoteric compression format in the course of copyright violation. In the general case, human authorship is required to assign copyright (see also the notorious 'monkey selfie' dispute); but the USCO is inclined to be accepting rather than adversarial about works where a human is mostly taking chunks of bot output and dropping them in, sometimes with modest changes.

The guys who outright lose have been the ones who actively insist that The Robot Did It in order to prove some sort of point; they specifically wanted to usher in the glorious age of AI-as-author for whatever reason. You basically have to give the USCO a really good confession, and insist on it when they ask if you are sure, to get them to strike a 'mixed' document.
 
Upvote
8 (8 / 0)
I don't like this trend of CLI applications that look like CLI-GUI stuff, with text fields and animated progress bars, etc. (unless we are talking about HP's SAM or Microsoft's edit or nano)

It makes much more sense if they just output clean stdout that I can pipe into another application, otherwise what's the point?

(unless there's a flag or argument that will just give me clean stdout, then I take my point back)

It seems like there's partly a retrogame-style trend of shell-as-aesthetic, and partly the desire (most familiar to me via PowerShell) to do things 'better' than clean stdout in ways that can be powerful but can also be a really unintuitive pain in the ass under certain circumstances.

The former seems kind of twee and without obvious redeeming features (unless you absolutely need to interact with a remote host over SSH and without X forwarding); but the latter I can at least see the logic behind. Being able to just pipe an object to something without coming up with an ad-hoc comma-delimiting arrangement or worrying about escaping special characters is nice; until something that looks like it should work (because the shell does a lot of silent autoconverting and pretty-printing) gives me a CSV or other output that is just a bunch of names of custom objects rather than values, or an object has 5-6 sensible properties you can easily access and then an array of arrays of properties that you just need to iterate through and figure out somewhere.
 
Upvote
0 (3 / -3)

Bash

Ars Scholae Palatinae
1,467
Subscriptor++
Google: you get *free* CLI access to our coding tool for the low, low price of allowing us to keep copies of all the files and code snippets you upload!

As with many LLMs, I wonder if we're quickly approaching the ceiling of coding performance, since they've already trained on all available written material. Between Google's own "G3" internal repo (which internally hosts tons of open source content along with Google's own software stack) and their almost certain scraping of sites like GitHub, I doubt there is much code left for them to use as new training material.
 
Upvote
22 (24 / -2)

Uncivil Servant

Ars Scholae Palatinae
4,667
Subscriptor
I am not an expert coder, but the terminal, in Linux... this is the thing where I type commands directly into my OS, yes? This is where, say, sudo rm -rf / (or whatever, the Linux version of "format c:") would have immediate consequences, yes?

Put an LLM in that and I will find some Bond villain to plant a superconducting 10T magnet atop your hard drives.
 
Upvote
-13 (1 / -14)

aliksy

Ars Scholae Palatinae
1,081
I don't care if it's open-source, it's still a tool that people will take advantage of to be lazy, which will result in more mistakes getting pushed into production, and it's still a tool that businesses will use to take advantage of the workforce.
Management: "With this tool you can be twice as productive!"

Worker: "So I'll get paid twice as much, or work half the hours?"

Management: "lol no"

Capitalism is bullshit.
 
Upvote
24 (34 / -10)

peterford

Ars Praefectus
4,233
Subscriptor++
I don't care if it's open-source, it's still a tool that people will take advantage of to be lazy, which will result in more mistakes getting pushed into production, and it's still a tool that businesses will use to take advantage of the workforce.
Whilst I 100% agree on the latter, is the former the fault of the tool, or of the person and the processes?
 
Upvote
0 (2 / -2)
Google: you get *free* CLI access to our coding tool for the low, low price of allowing us to keep copies of all the files and code snippets you upload!

As with many LLMs, I wonder if we're quickly approaching the ceiling of coding performance, since they've already trained on all available written material. Between Google's own "G3" internal repo (which internally hosts tons of open source content along with Google's own software stack) and their almost certain scraping of sites like GitHub, I doubt there is much code left for them to use as new training material.

People should really read the privacy policy on these AI tools, they'd be shocked.
 
Upvote
20 (21 / -1)

Lorentz of Suburbia

Ars Praetorian
588
Subscriptor
This reads like an advertisement, down to the headline. I thought at first glance it was an ad banner.

Pretty odd and out of the norm for in-house Ars articles.
Dilemma: cover the maybe-zeitgeist despite misgivings and possible appearance of promotion

or

lose the handful of Kool-Aid tech bros here?

🤔
 
Upvote
9 (11 / -2)

Red Knight

Ars Praetorian
409
Subscriptor
Management: "With this tool you can be twice as productive!"

Worker: "So I'll get paid twice as much, or work half the hours?"

Management: "lol no"

Capitalism is bullshit.
Capitalism is not perfect and does not generate fair outcomes in a vacuum, but it is far from bullshit.

By working just as hard, earning just as much, and delivering twice the output, you would literally be doubling your productivity. In essence you'd be twice as valuable to the economy without any increase in the cost of that labor to the economy. This is a huge benefit to society as a whole: whatever services your code provides, there's now twice as much of them at no increase in cost to consumers.

On the margins you still get to negotiate for higher wages, but what you are competing against at any slice of time T is NOT your improvement in productivity from some time in the past T-n; it's your relative value compared to other workers at that time T. In other words, if you can deliver twice as much code as the guy next to you right now, you can negotiate for higher wages, move to another job, or start your own firm where you can reap higher rewards for your competitive advantage. But if every talented engineer is picking up the same tools at the same time, you are not competing against your past self who was less productive; you're competing against another engineer who is now just as productive as you are TODAY.

The price signals sent by your wages and the wages of your competitors, together with the increased productivity of your labor, all flow into the market, and the end result is that new companies and business models become possible that weren't before. Just one example people are discussing is the possibility of bespoke code. Imagine a future where you don't have Windows and Office and so on, but rather a set of code written by LLMs specifically for your needs as an individual. It's too expensive to have armies of engineers write software for just one person or a small group of people, but as the cost of development decreases toward the cost of sand and electricity, new businesses and applications will appear and generate both new opportunities for human flourishing and new opportunities for employment!

I have no idea where this new equilibrium will come to rest as LLMs are moving too fast and there are too many variables at play but we're already starting to see the predictable effects of LLMs in the tech industry. On the right tail of engineer wage distribution you have AI gurus earning seven figure salaries and on the left tail of the distribution, the fat tail, you're seeing unemployment tick up for new CS grads (guess which CS grads are getting jobs just fine and which are not!). In the middle of the distribution you are seeing a mixed bag of engineer salaries as some are earning more, some are earning less but the overall trend line is still positive (source).

In other words, the most successful engineers will be those that can use these tools to deliver code in circles around those that do not. Competition will drive out the less competitive as it always has. I'm 41 years old and when I studied CS in college we were deep in the most recent AI winter, so the way that I was trained as an engineer looks nothing like how engineers today are learning and will learn. I could put my head in the sand and keep writing code the way I have been for 20 years but I know exactly where that level of productivity will lead - pure management or the unemployment line. So I'm retraining myself on these tools daily so that I can still be an effective engineer and technology leader and any engineers in this thread need to be doing the same or they will be left behind.
 
Last edited:
Upvote
-15 (13 / -28)

Ozy

Ars Tribunus Angusticlavius
7,448
In other words, the most successful engineers will be those that can use these tools to deliver code in circles around those that do not. Competition will drive out the less competitive as it always has. I'm 41 years old and when I studied CS in college we were deep in the most recent AI winter, so the way that I was trained as an engineer looks nothing like how engineers today are learning and will learn. I could put my head in the sand and keep writing code the way I have been for 20 years but I know exactly where that level of productivity will lead - pure management or the unemployment line. So I'm retraining myself on these tools daily so that I can still be an effective engineer and technology leader and any engineers in this thread need to be doing the same or they will be left behind.
Agreed. The general Ars readership is doing the community a large disservice by putting their heads in the sand on this issue. Ignoring model improvements, pretending that AI could 'never' do something that it ends up doing in the next model update, claiming that AI is completely useless because it can't multiply numbers effectively; the list goes on.

Maybe, like me, most of the readership is old enough that the overall impact on their lives and careers will be minor, but make no mistake, this is a position of privilege compared to those who will have to navigate the usage of AI tools to remain competitive.
 
Upvote
-4 (16 / -20)

Dmytry

Ars Legatus Legionis
11,380
The USCO currently takes the position that bot output is not copyrightable, but that human-edited/curated/otherwise-munged material remains copyrightable regardless of being partially bot output.

I'm not a legalmancer, but I presume this situation changes considerably if a particular bit of bot output ends up being a literal regurgitation of a copyrighted element of the training set, since that case would look a lot more like someone just using a really esoteric compression format in the course of copyright violation. In the general case, human authorship is required to assign copyright (see also the notorious 'monkey selfie' dispute); but the USCO is inclined to be accepting rather than adversarial about works where a human is mostly taking chunks of bot output and dropping them in, sometimes with modest changes.

The guys who outright lose have been the ones who actively insist that The Robot Did It in order to prove some sort of point; they specifically wanted to usher in the glorious age of AI-as-author for whatever reason. You basically have to give the USCO a really good confession, and insist on it when they ask if you are sure, to get them to strike a 'mixed' document.
I think the really interesting question is what will happen when the code is not a literal copy of the original, but somehow contains a number of extremely specific bugs and security exploits present in some open source implementation of software that is performing the same function.

To be clear, this has yet to happen. Things like "GitHub Copilot agent" simply haven't been around for long enough, and haven't written enough important code that has been put into actual production.

The YOLO car is still driving towards the cliff (or according to some of the passengers, towards a bridge).
 
Upvote
4 (4 / 0)

10Nov1775

Ars Scholae Palatinae
889
This reads like an advertisement, down to the headline. I thought at first glance it was an ad banner.

Pretty odd and out of the norm for in-house Ars articles.
Is it?

I feel like quite a lot of articles in recent years have covered software service features.

Atomistic, less in-depth but more timely articles on small parts of software have also been a trend in recent years—these can indeed seem like press releases, but I think that's inevitable when most of the information available is limited enough to fit in the space of a press release.

It's true that Ars used to trend towards longer, less timely, and more in-depth coverage—and it still has quite a lot of that today.

Which means the appearance of these articles is less depressing than you might think: they haven't replaced classic Ars-style coverage, they're simply an addition to it.
 
Upvote
1 (6 / -5)