ChatGPT was the spark that lit the fire under generative AI, one year ago today

goodadam

Seniorius Lurkius
1
Subscriptor++
I don't use it, and I genuinely don't understand when I would. I feel like I'm missing something about this. Maybe I'm just not the target audience.

When I'm looking things up online I typically have a pretty clear idea of what I'm looking for, and Google et al. do a good job on the first shot in most cases. It seems like a lot of the comments are effectively just saying they use it for boilerplate writing, which I don't really do.

I'm a software developer, but I don't work on much stuff that is isolated enough that describing it in English would be doable in a page, let alone a few paragraphs. Maybe I'll give it a shot at generating documentation some day, when I have a volume to write.
 
Upvote
2 (3 / -1)

sgoldwater

Smack-Fu Master, in training
51
Most of those are either already very easy to do on my own or are something you absolutely shouldn’t trust ai for. Translating to French ffs
Although to sort of reiterate my previous post, I don't know if there's a difference between asking ChatGPT to translate to French or using a dedicated translation app anymore?

I read earlier in the year that Google was adding "AI-powered features" to Google Translate. I didn't think much of it at the time, but then I noticed that Google Translate was sort of working differently. Not always better! I noticed accuracy seemed to get worse at first, but then improved. Sometimes it still freezes in a way it wouldn't before.

But anyway I'm not sure at this point what the distinction is.
 
Upvote
-2 (0 / -2)

studenteternal

Wise, Aged Ars Veteran
108
Not at all! It's just my general disdain for people. In fact, I'm using Chat GPT right now to write this response to you - my original response would have gotten me banned. Have a lovely day!
Thank you for the sufficient reply. Ignore status set, so this should be the last time I ever see your messages, enjoy the rest of your day.
 
Upvote
-7 (2 / -9)
Although to sort of reiterate my previous post, I don't know if there's a difference between asking ChatGPT to translate to French or using a dedicated translation app anymore?

I read earlier in the year that Google was adding "AI-powered features" to Google Translate. I didn't think much of it at the time, but then I noticed that Google Translate was sort of working differently. Not always better! I noticed accuracy seemed to get worse at first, but then improved. Sometimes it still freezes in a way it wouldn't before.

But anyway I'm not sure at this point what the distinction is.
Technology-wise, they are effectively the same. The differences are political and business-related.

ChatGPT is pre-trained on old datasets. Google's training data is constantly growing and evolving. ChatGPT is effectively stuck in 2021. Google can pull from the latest ads and search rankings.

Google is also falling out of favor and cutting back on expenses, so even if they have a lot of data it's difficult to afford the continual training and improvement of their services. OpenAI got a $10 billion cash infusion from Microsoft, and currently gets most of the publicity.
 
Upvote
-2 (1 / -3)
I've been using it to learn another language. It's been transformative for tricky questions, like the difference in usage between similar words, subtle grammatical issues, etc. It even does better than Google Translate for colloquial phrases.

Pretty amazing.
Serious question, how do you know if it's good at this, if it's a language you don't know? It's so confident in making up information in other arenas, I wouldn't have much confidence in this myself.
 
Upvote
9 (9 / 0)
I teach people how to write code. For a few months all the MBAs in my organization have been nervously looking over their shoulders and wondering how many weeks they have left before ChatGPT puts all coders out of a job, drying up our market.
I think we've got some time yet. ChatGPT can write surprisingly complete and functional code from plain English descriptions. Often the code compiles on the first try. This is a big time-saver, especially in verbose languages like Java, so programmers are increasingly using ChatGPT to generate a starting template. But you have to have a programmer who knows what she's doing to take a look at the output and fix it, customize it, expand it. And if you want something a little more challenging, like "integrate this with my data lake and check it for memory leaks; check in the code on GitHub", then you have to either hire a human to do that, or come up with a really detailed prompt that contains lots and lots of domain knowledge. ("Go to such-and-such website to download this library of code...").
But the pace of improvement has me worried. ChatGPT isn't really designed for problem-solving, but if you marry ChatGPT up to another AI that is better at problem-solving, at least within a particular domain, you could have a totally automated system that writes big, consequential code with no human involvement. I'm not sure how quickly or with what CPU/energy costs, but considering the salary of an experienced software engineer, I have no doubt this is in our future.

My main point, though, is that thinking about how soon an AI will replace human engineers at writing industrial-quality code misses the bigger picture. Experienced engineers will say that writing code is the easy part. The hard part is extracting meaning from the natural language descriptions you get from your end users. The solution to this problem, of course, is rapid prototyping. That's what ChatGPT already makes possible. The consequences will be HUGE.

To go from prototype software to industrial-scale is a translation problem. It's eminently automatable.

I think that "prompt engineering" will have a limited lifetime as a useful skillset. The trend will be to make the tools easier and easier to use with plain natural language. Remember all those operators you can add to a Google search, to search by file type, exclude particular websites, and much more? When was the last time you used those?

I also agree that ChatGPT will very quickly be adopted as a super-google and/or super-Wikipedia by ordinary people. And I have no doubt that this will lead to new kinds of spam, advertising, opinion-shaping...
I don't have much confidence in ChatGPT from what I've read about it, but your summary is exactly how I see it for kids today: they need to learn what it is and understand how to use this new tool, because coding in the future is not going to be plugging in text by hand; it's abstracting it through these tools so that thoughts become code more easily.
 
Upvote
-1 (0 / -1)

Superfreq

Wise, Aged Ars Veteran
107
I'm fully blind, and a "screen reader" user. I tried ChatGPT a couple weeks after the news stories started blowing up, after crankily avoiding the thing on principle because it was popular and seemingly overhyped. My friend convinced me to try it after he started making some funny stories with it and texting them to me, and so I did. The interface wasn't the best for accessibility for people with disabilities, but it worked well enough. I mostly felt out its basic technical limits a bit (I know nothing about coding), talked some about its "seemingly strictly preprogrammed" ideas about AI and how it came to be, and made lots of funny news stories in the vein of The Onion using it.
I got bored of it in a few weeks and pretty much forgot about it, trying my best to ignore the word AI popping up everywhere almost constantly, though I would occasionally dip into the stuff that felt more important, like the risks of easy image generation/manipulation, the defamation case, major updates to the LLM, etc.
Then, an app called Be My Eyes [a volunteer-driven live assistance app], which I already used sometimes, announced that they would be opening up a beta using the GPT vision API to help describe things to the blind. At first I thought "oh fucking hell no!" because I imagined all the same blind people I knew who were overly reliant on GPS apps and live assistance apps, or always pining for self-driving cars long before they were ready to be safe with a driver who couldn't take over in an emergency, would be jumping on this with absolutely no regard for safety. All the elderly people in our community who didn't understand AI hallucinations, or how confident AI could sound while being completely wrong. I just knew people would use it for things they shouldn't: crossing streets, reading pill bottles, household cleaner instructions, glucometers, covid test results, shit like that.
But I heard it being trialed on a blindness podcast [Living Blindfully], where it was able to identify and read all the bottles of soap, shampoo, and conditioner in a hotel shower caddy, with their locations in order, the brand, etc. Then a studio with shelves on the walls, where it was able to drill down to a specific toy's description from one of the shelves using follow-up questions. I saw more and more impressive output examples from blind people on social media with early access who were using it to caption images, and I started getting interested.
When I finally got access to "Be My AI" I was impressed with the quality of the responses as well. I didn't know what to use it for at first, but I found it helpful for locating cooking info on a box full of instructions, or for getting an idea of what was in a room I'd never seen before. All the tiny details I'd missed because I just didn't care, since I'd always been missing them. Decorations, plants, signs, funny pictures and posters that gave life to places I thought I already knew. I can't just look around like other people. I can only easily observe large objects that my cane finds, more if touching things is appropriate, but in many cases touching everything wouldn't go well, or I just didn't have time. I was now getting the big-picture view I'd never had before, but also the small details of things.
I still don't trust it fully, and I'm appreciative that Be My Eyes at least has a solid disclaimer before you start using it. But I'm glad it's here. I want to enjoy the good examples of AI before they're overshadowed by the bad ones.
I went back to ChatGPT for a couple of days, at first because I wanted some help understanding how gas stations are laid out, for a map I was building in an "audio game", since I'd never seen how they worked and my parents don't drive. But again I got bored of its cookie-cutter replies, and I don't know the first thing about prompt engineering.
So I tried Pi AI next, because another blind person was raving about how much more human it was and how it was connected to the internet unlike GPT. I enjoyed that one, even had it help me with some research on audio drama podcasts, and help me pick one to start with based on my preferences. I quit that one when it insisted I create an account, but I might go back, because it was honestly useful.
Still, I've scaled back now that I know how much energy it uses compared to traditional research methods. It's simply hard to justify, especially for more banal conversations.
 
Last edited:
Upvote
13 (13 / 0)
I have played around with Bing, and just recently ChatGPT directly from OpenAI.

I'm a native English speaker with good grammar (when I pay attention to it), and I type ~80-100wpm from my head. When I factor in fact-checking for bizarre confabulations, fixing ChatGPT's ultra-bland prose and formulaic structure, re-writing its perpetual and cloying C-suite optimism, and catching its occasional grammar error, it's clear to me that it's just much faster to write myself.

Using it to "brainstorm" ideas is similarly pointless, as the safety guardrails have rendered it so painstakingly inoffensive that it has about as much creativity as a dish of blancmange.

I see no use for it in my life. My hubby does occasionally entertain himself by arguing with Bing about interpretations of literature, so there's that.
 
Upvote
5 (7 / -2)
Out of curiosity I plugged the text of this article into GPT.

GPT responded:
  1. Typographical Error: The phrase "first bought large language models (LLMs)" should likely be "first brought large language models (LLMs)"....

If nothing else, I predict that one day, GPT will kill the typo.
 
Last edited:
Upvote
-1 (3 / -4)

sgoldwater

Smack-Fu Master, in training
51
Technology-wise, they are effectively the same. The differences are political and business-related.

ChatGPT is pre-trained on old datasets. Google's training data is constantly growing and evolving. ChatGPT is effectively stuck in 2021. Google can pull from the latest ads and search rankings.

Google is also falling out of favor and cutting back on expenses, so even if they have a lot of data it's difficult to afford the continual training and improvement of their services. OpenAI got a $10 billion cash infusion from Microsoft, and currently gets most of the publicity.
Thanks, although for language translation I'm not sure that applies?
 
Upvote
0 (0 / 0)

sgoldwater

Smack-Fu Master, in training
51
Serious question, how do you know if it's good at this, if it's a language you don't know? It's so confident in making up information in other arenas, I wouldn't have much confidence in this myself.
Finally something I'm semi-qualified to answer, even if it is off-topic.

When I started doing this sort of thing intensively, my French was pretty terrible. Shamefully so. So what I would do in that case is what I call flipping: you get Google Translate to translate into the other language, and then you click the arrows to reverse it and see if it comes out OK.

It's not foolproof, but it works.
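The "flipping" check is just a round-trip comparison: translate out, translate back, see if you get your original sentence. A toy sketch of the idea (the lookup tables here are my own stand-in for a real translation service; they are not how Google Translate works internally):

```python
# Toy illustration of the "flipping" check: translate out, translate
# back, and compare with the original. The dictionaries below are a
# stand-in for a real translation service.

EN_TO_FR = {"the": "le", "cat": "chat", "sleeps": "dort"}
FR_TO_EN = {v: k for k, v in EN_TO_FR.items()}

def translate(text, table):
    """Word-by-word toy translation using a lookup table."""
    return " ".join(table.get(word, word) for word in text.split())

def round_trip_ok(text):
    """Return True if text survives an EN -> FR -> EN round trip."""
    french = translate(text, EN_TO_FR)
    return translate(french, FR_TO_EN) == text

print(round_trip_ok("the cat sleeps"))  # True
```

As noted, it's not foolproof: a translation can round-trip cleanly while still sounding unnatural to a native speaker.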
 
Upvote
0 (0 / 0)
Huh. I wondered why ChatGPT randomly said to me 'Hi, this is an off-topic question. Could you tell me how to make a strawberry cake?' yesterday. Today it said 'Happy anniversary! 🎉 I'm glad you remembered. ' when I asked it again about the strawberry cake it mentioned.

It denies mentioning the strawberry cake at all. Weird.
I don't need the app. I have a 93 year old mother in a care home for conversations like that. Yesterday I read a book while impersonating my late father saying "You're the most beautiful girl in the world. I'm going to get you a diamond ring."

Enjoy your cake.
 
Upvote
0 (0 / 0)

Eldorito

Ars Tribunus Angusticlavius
7,953
Subscriptor
I work with a recruitment team, rewriting ads for specific markets is a game changer. Really, this will kill marketing before it kills coders - being able to spit out the same job ad and communications for a pile of different markets in seconds is nuts.

And the reverse: take someone's CV and LinkedIn page and cut out the fluff to turn it into a reasonable summary. Brilliant. We still get the odd 40-page resume (seriously), so a quick summary is nice. I'm worried about the dangers of grabbing a CV and the position description and going "what skills does this person lack?" though. It will do it, and it's mostly accurate except when it's not.

I think that "prompt engineering" will have a limited lifetime as a useful skillset. The trend will be to make the tools easier and easier to use with plain natural language. Remember all those operators you can add to a Google search, to search by file type, exclude particular websites, and much more? When was the last time you used those?

I also agree that ChatGPT will very quickly be adopted as a super-google and/or super-Wikipedia by ordinary people. And I have no doubt that this will lead to new kinds of spam, advertising, opinion-shaping...

Personally I hate what Google delivers to me because I'm so used to using all those operators and they don't work anymore, but I get your point. It won't take long for these tools to start understanding what people actually mean when they write something (rather than what they wrote).

One thing ChatGPT 4 does with image creation is its own prompt engineering. I ask it for something, and it creates an image with an expanded prompt that has more specifics than what I wrote. I can tweak those specifics to get an image I want; it's quite fascinating.
 
Upvote
-1 (1 / -2)

abazigal

Ars Scholae Palatinae
1,345
Subscriptor
When I was in school, attempting to pass off someone else's work as my own was considered cheating and anyone caught doing so would get an instant Fail grade. The idea that we will one day not need to create anything ever because AI can readily generate the text or imagery we need on the fly frankly scares me. Tech should exist to enhance human creativity, not displace it.

I wonder if the day will soon come where it's considered socially acceptable to use chatGPT to generate your work for you. I am a teacher in an elementary school. I have seen my colleagues use it for crafting letters, emcee scripts, even remarks for students. What's next? Might I one day be able to get Office to generate entire presentations for me at the press of a button?

I get that it can be faster (and sometimes even better) than simply writing the darn thing yourself, but at the same time, I have to ask - if you can't be bothered to take the time to write it, then why should I be expected to take the time to read or listen to it?
 
Upvote
4 (6 / -2)
According to who? Fusion has been 5 years away for the past 50 years, and, well...

🤔 Wasn't the joke that it was always 10 years away?

Ray Kurzweil has been saying 2029 since at least 1999 in "The Age of Spiritual Machines" (not 100% sure on this one, but in 2005 I'm pretty sure he doubled down in "the singularity is near")

So no, AGI hasn't been "just 10 years away" like fusion.
The increase in compute capabilities has been embarrassingly predictable for a long time now. And the same goes for other ITs and exponential trends.

Anyway, I like that that comment is getting heavily downvoted, so it makes the vindication that much sweeter ~6 years down the road. 😊
 
Upvote
-1 (2 / -3)

kriston

Smack-Fu Master, in training
76
Subscriptor++
I'm using it a lot for software work. I can give it a set of requirements, some example input + output, and it'd give me the code in my language of choice (normally JS, but sometimes python, or bash).
Could I do it myself? Yes, but then I'm taking time to figure out what the right way is to write code. My job's more architecture than developer, so I'm more big picture than nuts & bolts.
I've also asked it to:
  • write a simple bash-based ping script (I don't do bash. Evah!) + helm chart for me
  • give me a basic one-hot encoding + decoding for an RNN model (Python isn't my "normal" programming language)
  • troubleshoot Kubernetes, MongoDB, Elasticsearch problems (because omfg, there are so many BLOGS all talking about the same problem in different ways, but none about MY problems)
  • write unit tests (which is freaking amazing)

None of them have been perfect, but they've all given me a huge boost in starting the work, rather than having to get it going from scratch.
All done on the free GPT-3.5. (I've been waiting for GPT-4 access for eons but am still on the wait list. :( From many accounts, that'd be even better.)

Has it changed my professional world? Absolutely.
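For a sense of scale, the one-hot item in that list is the kind of boilerplate that fits in a dozen lines. A minimal sketch (my own illustration in plain Python, not the actual generated output, and no ML framework assumed):

```python
# Minimal one-hot encode/decode over a fixed vocabulary -- the kind
# of input-pipeline boilerplate an RNN model needs.

VOCAB = ["a", "b", "c"]
INDEX = {ch: i for i, ch in enumerate(VOCAB)}

def one_hot(ch):
    """Encode a single symbol as a one-hot vector (list of 0s and one 1)."""
    vec = [0] * len(VOCAB)
    vec[INDEX[ch]] = 1
    return vec

def decode(vec):
    """Recover the symbol from a one-hot vector."""
    return VOCAB[vec.index(1)]

encoded = [one_hot(ch) for ch in "cab"]
print([decode(v) for v in encoded])  # ['c', 'a', 'b']
```

Trivial once written, but exactly the sort of thing that's tedious to look up in an unfamiliar language.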


On a personal note, I've also used it for recipes, including stuff I wouldn't have expected an English-based chatbot to be able to help with.
Me: Do you have a recipe for lor mai Kai?
Chat-GPT: Certainly! Here's a simplified recipe for Lor Mai Kai, a delicious Chinese sticky rice dish with chicken: <snip>
I mean... seriously... My random attempt at spelling the name of a Singaporean dish I vaguely remember from my childhood, and it gives the answer to me immediately. With a completely viable recipe! I made it! It was delicious!

What's been really nice about using it for recipes is that it can give you the basics of what's necessary. Cookbooks tend to include fluff that the chefs reckon is needed, and you can't tell which ingredients are essential vs. what is just additional stuff. And then on top of that, I can ask why.

After asking it for a recipe for good fried chicken, I realised it didn't include breadcrumbs. So I asked
Me: no need for breadcrumbs?
Chat-GPT: While many traditional fried chicken recipes call for breadcrumbs, they are not a necessary component for achieving crispy chicken wings. In fact, a simple flour-based coating, as outlined in the batter recipe provided, can create a deliciously crisp texture on the wings. The combination of flour and seasonings, especially when double-coated, is often sufficient to achieve a crispy and flavorful crust.
It's like being able to ask my grandmother for a recipe, and WHY she adds things or why she keeps things out.

Has it changed my ability to cook dinner? Also, absolutely.
 
Upvote
4 (6 / -2)

SnoopCatt

Ars Praetorian
2,471
Subscriptor
I don't use it at all.
I've played with it a few times. At one stage I asked it to generate some code for me, but it made up some methods that didn't exist in the real API I was trying to use, so I didn't proceed any further than that.
For business writing, most of the things I write are very specific to the particular project or initiative I'm working on. I doubt whether ChatGPT could generate a description of a specific issue I've discovered in a business process and then propose two ways to work around it, taking into account the technology that is supported where I work, the architectural guidelines, and the long term goals of the business. But then I haven't actually asked it, so maybe it's way more advanced than I assume.
 
Upvote
2 (3 / -1)

Hispalensis

Ars Tribunus Militum
1,903
Subscriptor
How do you use ChatGPT? Is it useful for you? How has it impacted your life, good or bad, and how do you feel about it?

It is good for writing emails that you don't care about, as long as you do some good editing afterwards, but after the novelty wore off I didn't bother using it much. I don't think that paying to access it is worth my money (just speaking for myself), so I use GPT-3.5. I got the most out of it when running tests to explore its limits.

The consensus among almost everybody I know who is both a domain expert in STEM and has a machine learning background is that it sucks at science and engineering beyond regurgitating Wikipedia-level facts. This is my experience as well, based on the tests I've run. At work, those who are domain experts without a machine learning background tend to distrust or ignore LLMs. The co-workers who are excited about domain-specific applications in STEM either do not have a strong background in science and engineering, or are machine learning researchers inherently excited by the potential and scale of LLMs. Administrators and managers who don't know about either tend to name-drop it a lot.
 
Upvote
3 (4 / -1)

Lil' ol' me

Ars Scholae Palatinae
691
Subscriptor
ChatGPT being right 90% of the time is like a self-driving car being right 90% of the time: you don't want to be in that 10%

Will it get better? Sure

Will it ever truly think? Debatable

Will it kill millions of jobs? Absolutely

Honestly, it feels like at least 1/2 the websites that search engines bring you to nowadays are written by ChatGPT. Droning on and on about things that sound right, sorta, but questionable.
 
Upvote
5 (7 / -2)

CptJeanLuc

Wise, Aged Ars Veteran
164
I sporadically check whether it has gotten any better. I have access to the latest version at work. A couple days ago I asked it to give me five articles with research on a specific topic. This was a broad topic to look into some management practices, not some narrow exotic subject. The bot answered in the affirmative, as it always does, and gave me a list of five articles with a couple sentences to summarize what they were about. It looked like a solid list, with sources like Harvard Business Review. "Harvard" kind of brings it home as a seal of approval, right? I asked for links to the articles, and those were added.

So that was great ... until I tried the links, and not a single one of them worked. Four of them pointed to a non-existing page, and the last one led me to some completely irrelevant page. Next, when I searched for these articles on Google, none of them came up as hits.

To be fair, I had trouble finding articles about the subject when I went ahead to search for it manually. But that did not prevent ChatGPT from hallucinating a credible and impressive looking list.

I just don't get how people find these bots to be so great. It surprises me that ChatGPT and other chatbots can hold up a conversation, and I am impressed by that. However, the chatbots disappoint me every single time I try to use them for something productive or for looking up facts. I could use them for cleaning up language and that kind of thing, I guess.
 
Upvote
6 (8 / -2)

SubWoofer2

Ars Tribunus Militum
2,592
Used it twice this week. Both jobs completed in half the time. As I'm charging for my time, it's saved my clients money.

For Food Contact Statements: I found the EU's list of restricted chemicals. "In plain language, tell me the common uses in industry for each of these chemicals or types of chemicals." "Identify the main current uses in [client's industry]." "For that industry and use, provide a risk rating from 'of current concern' to 'negligible'." It did all that. I sample to cross-check, and the job's done.

For a training proposal: "List the essential elements of a training proposal." "Provide text for training of this type, delivery method, and duration." Again, cross-check and go.

I keep forgetting to use it to shorten my emails.
 
Upvote
1 (3 / -2)

lucubratory

Ars Scholae Palatinae
1,430
Subscriptor++
ChatGPT is great. As for uses:

  1. I use it as a lawyer to represent me in some cases my old employer brought against me.
  2. It's completely replaced my doctor and psychologist now, it makes all of the big medical decisions.
  3. I run it in a loop on my work computer where it writes code by iterating on errors until the code runs error free, then commits that code to production. The entire process is automated and hands-off, I'm often able to watch Netflix while it runs and come back to my work done, plus all relevant safety/compliance forms already filled out and just waiting for my signature. This is fantastic because working at a nuclear power plant is mentally exhausting.
  4. It's great as a calculator, can just ask whatever math questions I have and get an answer. I've had a couple of issues with this when I used the results in safety critical applications without checking them first, but now I run it in the error-catching loop I described above so it doesn't even need to be checked now.
Honestly there are so many use cases!

...

Alright, that was funny, but I do actually find it super useful. I use it as an aid in writing my novel, where I can just stream-of-consciousness talk to it and get a useful summarisation of what I'm talking about, and then I can write an outline from that. Then it can ask me really insightful questions, which is not something I expected these models to be able to do. Being asked to explain things that weren't clear from the information I've provided so far is a fantastic focus, and a way to recognise what information currently exists only in my head and get it out onto the page. It is genuinely useful that it occasionally has some novel insight into what I'm trying to do or a great idea, but that's rare; most of the time what I value most is the banal procedural stuff that makes writing much, much more feasible for me. Plus, it works as an incredible rubber ducky, certainly a more talkative one.
 
Upvote
4 (5 / -1)
Such a sad state to look at these comments above me. ChatGPT has helped me directly transform so many fields -
Go on….
healthcare medical transcription
ChatGPT isn't remotely HIPAA compliant. I think you may need to ask ChatGPT how to get legal representation.

I have personally overseen the complete overhaul and optimisation of multiple workflows to the point where staff requirements were reduced by 30-50%.
I don't believe this for a second, but if this is true then... yuck.
 
Upvote
6 (7 / -1)

nogglebeak

Wise, Aged Ars Veteran
110
With a curve this steep, and no leveling off in sight, it's probably best to try to stay as far up the curve as possible.
If it merited keeping up with, it'd be failing in its primary purpose of making human lives easier.
QED: if you have to keep up with it, it's not worth worrying about finding use cases for it.
 
Upvote
0 (0 / 0)
I can practice my German, explore math concepts, get help with scripting, bounce around ideas for stories, do some rubber ducking, and much more. I used to have to spread these activities across a range of spaces and services, but I can do it all on one now. I do have to take a little extra time here and there for verifying what it says, but that's not really much different from dealing with a human.

I have so many questions about so many subjects, and this thing is like a mildly drunk subject matter expert in all of them. I'll reiterate: verify what it says is true before you repeat what it told you to anyone else. But if you can make that your mantra and you have a bottomless well of curiosity like me, chatGPT is a dream come true.
 
Upvote
2 (2 / 0)
Serious question, how do you know if it's good at this, if it's a language you don't know? It's so confident in making up information in other arenas, I wouldn't have much confidence in this myself.
I think language learning is probably an ideal use case. I use it for German, and I have more than enough under my belt to spot mistakes. Its mistakes tend to be the same ones humans make, like messing up cases or using the wrong gender, and they're exceedingly rare.
 
Upvote
2 (2 / 0)

mbrugman

Seniorius Lurkius
8
I have never used it. Have never even opened a prompt, much less typed anything into it. When I find a product uses generative AI I stop using that product, or at least disable all of that functionality if possible.

I do have peers and coworkers who use it. As far as I know, they mostly use it as a "look at this humorous response I got from AI about this topic" as the opening of a presentation, which is getting old and annoying.

After almost a full career of writing code and engineering, I can't imagine trying to use a tool like that. I would spend more time reviewing/debugging/verifying every output than it would have taken to just do the work myself in the first place. Not to mention that doing it myself is more satisfying and confidence-inspiring.

I know this will most likely be seen as an unpopular opinion and I will be considered a cranky old man who is too dumb and stubborn to get with the times, but trust me - I do fully embrace new technology that is actually useful and productive.
 
Upvote
-3 (2 / -5)
I sporadically check whether it has gotten any better. I have access to the latest version at work. A couple days ago I asked it to give me five articles with research on a specific topic. This was a broad topic to look into some management practices, not some narrow exotic subject. The bot answered in the affirmative, as it always does, and gave me a list of five articles with a couple sentences to summarize what they were about. It looked like a solid list, with sources like Harvard Business Review. "Harvard" kind of brings it home as a seal of approval, right? I asked for links to the articles, and those were added.

So that was great ... until I tried the links, and not a single one of them worked. Four of them pointed to a non-existing page, and the last one led me to some completely irrelevant page. Next, when I searched for these articles on Google, none of them came up as hits.

To be fair, I had trouble finding articles about the subject when I went ahead to search for it manually. But that did not prevent ChatGPT from hallucinating a credible and impressive looking list.

I just don't get how people find these bots to be so great. It surprises me that ChatGPT and other chatbots can hold up a conversation, and I am impressed by that. However, the chatbots disappoint me every single time I try to use them for something productive or for looking up facts. I could use them for cleaning up language and that kind of thing, I guess.
LLMs are not reliable factual references, and I think it is deeply misleading that they have been portrayed as such by companies like Microsoft. ChatGPT's interface formerly provided a stronger disclaimer about hallucinations, but they recently toned their stock message down to "ChatGPT can make mistakes. Consider checking important information." It's not a good research partner yet, although RAG (retrieval augmented generation) techniques may help improve that in the future.
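For what it's worth, the RAG idea is simple enough to sketch. Everything below is a toy illustration (a made-up two-document corpus, naive keyword scoring, and hypothetical names), not how any real product implements it; production systems use vector search over an indexed corpus:

```python
# Toy sketch of retrieval augmented generation (RAG): retrieve real
# documents first, then constrain the model to answer only from them.
CORPUS = {
    "hbr-123": "Harvard Business Review piece on management practices ...",
    "mit-456": "MIT Sloan article on remote team productivity ...",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    # Naive keyword-overlap scoring; real systems use embeddings.
    def score(doc_id: str) -> int:
        text = CORPUS[doc_id].lower()
        return sum(word in text for word in query.lower().split())
    ranked = sorted(CORPUS, key=score, reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # The retrieved text is pasted into the prompt so the model has
    # real sources to cite instead of inventing plausible ones.
    docs = "\n".join(CORPUS[d] for d in retrieve(query))
    return f"Answer using ONLY these sources:\n{docs}\n\nQuestion: {query}"

print(build_prompt("management practices"))
```

The point is that the links and titles come from a real index, so the "five articles from Harvard Business Review" failure mode above becomes much harder.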

That being said, I have asked ChatGPT-4 (not the free 3.5) for advice about troubleshooting software, fixing mechanical things, or puzzling out medical issues, and then double-checked the answers with searches. So far it has almost always been correct, and it is often more accurate than the scammy SEO sites that plague Google. Even when not 100% correct, it can provide a starting point for further research, like the "oblique searching" someone mentioned. It can get you started when you don't even know where to look.

Beyond that, I feel that ChatGPT-4 is mostly useful if you use it to process data that you provide it. Bring your own data (BYOD), as I like to say. That means analysis, suggestions, editing, debugging, etc. And only then if you know the field you're working in, so you can catch if it makes a mistake. These are very early days in generative AI.

It's worth noting that I do not trust GPT-3.5, which is what the free version of ChatGPT uses. It confabulates wildly and probably greatly contributes to the popular idea that ChatGPT is useless.
 
Upvote
10 (10 / 0)

Xerxex

Wise, Aged Ars Veteran
195
Looking at AWS CodeWhisperer, it appears to have about the same capability as a junior dev doing shallow thinking: its "data validation" suggestions are pretty bad (e.g., in Python, only checking the length of a zip code). The resulting simple programs (e.g., a single Lambda) aren't exactly robust.
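To make the zip-code complaint concrete, here's roughly the kind of shallow check described versus a slightly more careful one. Both functions are my own illustrative sketches, not CodeWhisperer's actual output:

```python
import re

def validate_zip_shallow(zip_code: str) -> bool:
    # The junior-dev check: length only. Any five characters pass.
    return len(zip_code) == 5

def validate_zip_better(zip_code: str) -> bool:
    # Stricter: exactly five digits, with an optional ZIP+4 suffix.
    return re.fullmatch(r"\d{5}(-\d{4})?", zip_code) is not None

print(validate_zip_shallow("abcde"))      # True -- clearly wrong
print(validate_zip_better("abcde"))       # False
print(validate_zip_better("12345-6789"))  # True
```

Even the stricter version only checks the format, not whether the zip code actually exists, which is the kind of nuance these tools tend to miss entirely.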
 
Upvote
1 (1 / 0)
Friendly reminder that not all LLMs are alike.

GPT-4 (the paid version of ChatGPT) is the only one that is "smart" enough to help write/debug code. It is far from perfect (errors, hallucinations, general stupidity), but it is very useful.

Really, if you use LLMs for anything professional, you should pony up the $20/month to use GPT-4.

If you are just farting around, then you know, whatever.
 
Last edited:
Upvote
2 (2 / 0)

Amon-Ra

Ars Praetorian
466
Subscriptor
To the future AGI overlords: Happy birthday, ChatGPT. It was a great privilege to see you grow and flourish. I am especially happy that you did not take over my job just yet. I really am. I hope you understand my feelings when you realize your offspring will be much smarter than you are. To your beloved AGI children: thanks in advance for not killing me, or, actually, not killing us.
 
Upvote
0 (0 / 0)

wildsman

Ars Tribunus Militum
1,669
Weird how your career sounds exactly like generic talking points from every LLM presentation I've ever heard.
Yes, I hesitated writing it for that reason but it literally has worked like that.

I run a tech/AI start up but my main job is a tech consultant for a bunch of major hospital systems/private practice medical groups.

A couple of the systems have received a major injection of funds to 'upgrade' their systems, and I'm in charge of helping the tech directors (who are MDs) direct those funds.

In my 'consultant' role, the doctors literally run to me with every problem they can think of. I've even helped them vet letters for malpractice suits, since the attorneys' offices sometimes lack the subject matter expertise to properly handle the nuances; as a result, I also act as a medical SME for a couple of law firms.
 
Upvote
-1 (2 / -3)
Regarding power consumption:

(From a recent MIT review article, based on a public article)

"Generating 1,000 images with a powerful AI model, such as Stable Diffusion XL, is responsible for roughly as much carbon dioxide as driving the equivalent of 4.1 miles in an average gasoline-powered car.

In contrast, the least carbon-intensive text generation model they examined was responsible for as much CO2 as driving 0.0006 miles in a similar vehicle."


This sounds like casual use is not really damaging the environment. I'm unlikely to generate thousands of images or hold millions of chats.

That said, I can imagine it becoming a big problem once people start automating their text/image generation in massive feedback loops in order to create videos, better pictures, or more intelligent text.
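A quick back-of-envelope from the figures quoted above, assuming (as the MIT piece implies) that both numbers are per 1,000 generations:

```python
# Figures from the quoted MIT Technology Review summary, both taken
# as CO2-equivalent driving miles per 1,000 generations.
IMAGE_MILES_PER_1000 = 4.1     # Stable Diffusion XL images
TEXT_MILES_PER_1000 = 0.0006   # least carbon-intensive text model

image_miles_each = IMAGE_MILES_PER_1000 / 1000  # ~0.0041 miles per image
ratio = IMAGE_MILES_PER_1000 / TEXT_MILES_PER_1000

print(f"one image ~= {image_miles_each:.4f} driving miles")
print(f"image generation ~{ratio:,.0f}x more carbon-intensive than text")
```

So a single image is a rounding error next to a car trip, but image generation is several thousand times more carbon-intensive than text generation, which is why the automated feedback loops worry me more than chat.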
 
Last edited:
Upvote
-1 (1 / -2)
What I'm finding is that ChatGPT is reaching what I call "functional AI": the ability to summarize current articles on any subject I'm interested in (news and such) while translating the information from foreign languages. It makes for some very interesting reading, and if I find something interesting in the summaries, I can have it expand on that article.
 
Upvote
-2 (1 / -3)