> Hmmm I wonder if cli AIs will evolve into a means to simplify the use of the cli in general

What? Don't worry, everything will be fine. Just fine. /s
> I don't care if it's open-source, it's still a tool that people will take advantage of to be lazy which will result in more mistakes getting pushed into production, and it's still a tool that businesses will use to take advantage of the workforce.

...and once they have half the population using it and unable to do the work on their own, the subscription plan kicks in - Ka-Ching!
> ...and once they have half the population using it and unable to do the work on their own, the subscription plan kicks in - Ka-Ching!

Yeah... since it's a Google product, this will end one of two ways:
> This reads like an advertisement, down to the headline. I thought at first glance it was an ad banner.
> Pretty odd and out of the normal for in-house Ars articles.

I have been noticing, with increasing frequency, that more Condé Nast is leaking in; and with only a finite volume to fill, I fear Ars is being pushed out.
> Question: it seems readily apparent that the training set may include "all rights reserved", GPL, or other somehow-restricted licensed code scraped from various sources.
> Is there any word on whether or not the coding output of LLMs could be held subject to the terms of the upstream licenses? Obviously, proving provenance would be nearly impossible, but it's a somewhat fascinating thought experiment nonetheless given historical precedents around direct copies and derivations of code.

USCO currently takes the position that bot output is not copyrightable, but that human-edited/curated/otherwise-munged material remains copyrightable regardless of being partially bot output.
I don't like this trend of CLI applications that look like CLI-GUI hybrids, with text fields and animated progress bars, etc. (unless we are talking about HP's SAM, or Microsoft's edit, or nano).
It makes much more sense if they just output clean stdout that I can pipe into another application; otherwise, what's the point?
(Unless there's a flag or argument that will just give me clean stdout, in which case I take my point back.)
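A minimal shell sketch of the pipe-friendly-stdout point; the patch names are made up, and `printf` stands in for a hypothetical AI CLI that emits plain, line-oriented output instead of a TUI:

```shell
# Plain, line-oriented stdout composes with ordinary Unix tools;
# animated progress bars and interactive text fields do not survive a pipe.
# 'printf' stands in for a hypothetical well-behaved AI CLI here.
printf 'fix_bug.patch\nadd_tests.patch\nupdate_docs.patch\n' \
  | grep 'tests'
# -> add_tests.patch
```

The same stream could just as easily feed `xargs`, `sort`, or another program, which is exactly what a TUI-style interface gives up.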
> ...and once they have half the population using it and unable to do the work on their own, the subscription plan kicks in - Ka-Ching!

Oh jeez, wait till they start the ads.
> I don't care if it's open-source, it's still a tool that people will take advantage of to be lazy which will result in more mistakes getting pushed into production, and it's still a tool that businesses will use to take advantage of the workforce.

Management: "With this tool you can be twice as productive!"
> I don't care if it's open-source, it's still a tool that people will take advantage of to be lazy which will result in more mistakes getting pushed into production, and it's still a tool that businesses will use to take advantage of the workforce.

Whilst I 100% agree on the latter, is the former the fault of the tool, or of the person and the processes?
Google: you get *free* CLI access to our coding tool for the low, low price of allowing us to keep copies of all the files and code snippets you upload!
As with many LLMs, I wonder if we're quickly approaching the ceiling of coding performance, since they've already trained on all available written material. Between Google's own "G3" internal repo (which internally hosts tons of open source content along with Google's own software stack) and their almost certain scraping of sites like GitHub, I doubt there is much code left for them to use as new training material.
> At least, being on the command line, you can pipe the slop it generates straight to /dev/null.

Right on.
> People should really read the privacy policy on these AI tools, they'd be shocked.

I'll get the AI to summarize it for me.
> This reads like an advertisement, down to the headline. I thought at first glance it was an ad banner.
> Pretty odd and out of the normal for in-house Ars articles.

Dilemma: cover the maybe-zeitgeist despite misgivings and possible appearance of promotion
> Management: "With this tool you can be twice as productive!"
> Worker: "So I'll get paid twice as much, or work half the hours?"
> Management: "lol no"
> Capitalism is bullshit.

Capitalism is not perfect and does not generate fair outcomes in a vacuum, but it is far from bullshit.
> In other words, the most successful engineers will be those that can use these tools to deliver code in circles around those that do not. Competition will drive out the less competitive as it always has. I'm 41 years old and when I studied CS in college we were deep in the most recent AI winter, so the way that I was trained as an engineer looks nothing like how engineers today are learning and will learn. I could put my head in the sand and keep writing code the way I have been for 20 years but I know exactly where that level of productivity will lead - pure management or the unemployment line. So I'm retraining myself on these tools daily so that I can still be an effective engineer and technology leader, and any engineers in this thread need to be doing the same or they will be left behind.

Agreed. The general Ars readership is doing the community a large disservice by putting their heads in the sand on this issue. Ignoring model improvements, pretending that AI could 'never' do something that they end up doing in the next model update, claiming that AI is completely useless because it can't multiply numbers effectively - the list goes on.
> Is this an AD?

There is no context, no comparison, no analysis. It's 100% an ad. Though, I'm sure Ars has not been paid to run it. It's mind-boggling, to be honest.
> USCO currently takes the position that bot output is not copyrightable; but that human-edited/curated/otherwise-munged material remains copyrightable regardless of being partially bot output.

I think the really interesting question is what will happen when the code is not a literal copy of the original, but somehow contains a number of extremely specific bugs and security exploits present in some open source implementation of software that is performing the same function.
I'm not a legalmancer, but I presume this situation changes considerably if a particular bit of bot output ends up being a literal regurgitation of a copyrighted element of the training set, since that case would look a lot more like someone just using a really esoteric compression format in the course of copyright violation. In the general case, human authorship is required to assign copyright (see also the notorious 'monkey selfie' dispute); but the USCO is inclined to be accepting rather than adversarial about works where a human is mostly taking chunks of bot output and dropping them in, sometimes with modest changes.
The guys that outright lose have been the ones who actively insist that The Robot Did It in order to prove some sort of point; they specifically wanted to usher in the glorious age of "AI is author now" for whatever reason. But you basically have to give the USCO a really good confession, and insist on it when they ask if you are sure, to get them to strike a 'mixed' document.
> This reads like an advertisement, down to the headline. I thought at first glance it was an ad banner.
> Pretty odd and out of the normal for in-house Ars articles.

Is it?