It's been my experience that ChatGPT seems to weight stuff from earlier in the discussion a little more heavily than more recent information. Like if you tell it somebody is a horse, but then later tell it that you lied and that person is really a human, not a horse, ChatGPT still seems to stay stuck on that person being a horse and won't let go of the idea. I expect longer memory will result in some strange behavior.
(I swear this isn't a wind up to a Musk joke. This was an actual ChatGPT conversation.)
A horse is a horse, of course, of course, unless it's Mr. Ed, who I lied about being a horse.
I remember seeing a joke a few years ago about how we no longer have to choose between different cyberpunk dystopias, because we were living in a mashup of all of them.

“Remember being here, a second ago?”
“No.”
“Know how a ROM personality matrix works?”
“Sure, bro, it’s a firmware construct.”
“So I jack it into the bank I’m using, I can give it sequential, real time memory?”
“Guess so,” said the construct.
“Okay, Dix. You are a ROM construct. Got me?”
“If you say so,” said the construct. “Who are you?”
“Case.”
“Miami,” said the voice, “joeboy, quick study.”
“Right. And for starts, Dix, you and me, we’re gonna sleaze over to London grid and access a little data. You game for that?”
“You gonna tell me I got a choice, boy?”
Gibson, William. Neuromancer (Sprawl Trilogy Book 1) (pp. 76-78). Penguin Publishing Group. Kindle Edition.
Optimistic of you to assume you're living through it.
"It'd be nice to be able to pre-load that 32k context size with 2-4 classes + unit tests and 4k tokens for the response."

Nothing prevents you from using OpenAI's API for doing that.
"Wow, this will make a huge difference in usefulness for my use cases. It can be time-consuming to re-prompt a few prompts every session to get it 'up to speed.'"

The max_tokens parameter is limited only by the model, according to the docs.
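For what it's worth, here's a minimal sketch of that kind of pre-loading with OpenAI's Python SDK. The file names, model choice, and prompts are placeholders of mine, not anything prescribed by OpenAI:

```python
# Sketch: pre-load a few source files into the context window up front,
# reserving ~4k tokens of the window for the model's reply.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder file names; swap in your own classes and unit tests.
sources = ["foo.py", "test_foo.py", "bar.py", "test_bar.py"]
preloaded = "\n\n".join(Path(p).read_text() for p in sources)

response = client.chat.completions.create(
    model="gpt-4-32k",  # any large-context chat model works the same way
    max_tokens=4096,    # cap the response at ~4k tokens
    messages=[
        {"role": "system", "content": "You are reviewing this codebase:\n\n" + preloaded},
        {"role": "user", "content": "Suggest refactorings for these classes."},
    ],
)
print(response.choices[0].message.content)
```

Since the whole conversation is just a messages list you assemble yourself, nothing stops you from re-sending those files on every call; you just pay for them as input tokens each time.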
"When does magic return, and the dragons wake up?"

Sorry, in this garbage timeline the Horrors pierce the astral veil first.
Horses make terrible people.
Or maybe ChatGPT, now being multimodal, can look up a portrait of Elon Musk and decide that yes, that actually is a horse, and your later statement is a lie?
"Why do this author's news stories on AI sound more like press releases than anything like a balanced report on the pros & cons of this technology? Ars has so many really talented reporters, but this reporter does not stand among them. The breathless takes on a utopian AI future from this person are a disservice to your brand, Ars."

I am confused by this kind of comment, which I see from time to time. I have written extensively about the drawbacks of AI technology going back to 2022, including how AI models may disrupt history, threaten privacy, enable abuse, lead to legal injustice, and use copyrighted material without permission. I commissioned a piece on how AI might affect the environment.

Regarding OpenAI and LLMs in particular, I've written about how ChatGPT makes things up, how OpenAI should be more transparent with its models, how ChatGPT can be unreliable due to "laziness," and most recently about how LLMs are not ready for widespread production use (see section at the bottom). I'm sure I've mentioned privacy implications of ChatGPT and cloud APIs many times, such as in this article.

Does that sound like I'm pushing a utopian AI future? It's true that I am cautiously optimistic that some AI tech can be useful, so I'm not going to merely put it down continuously, as some may hope. It will likely improve over time with critical feedback from users and the press. Plenty of people use ChatGPT and enjoy it (see other comments), and they read Ars Technica too.

My main job is to relay the news to you quickly and briefly (as per my directive), and when an interesting new AI product or feature comes out, it's my job to tell you about it. Sadly, I can't delve into deep critical reviews of everything that comes along. In this case, I have not used the memory feature yet, so I am working with the information I have available, which comes from OpenAI.
I'd be surprised if this is anything other than them inserting those memories at the top of every context window and charging you for the privilege of extra tokens.
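If so, the mechanics would look something like this sketch. The memory store and its contents are pure speculation on my part; only the chat-completions call itself is real API surface:

```python
# Speculative sketch: "memory" as plain text prepended to every request.
# Each remembered fact gets re-sent (and billed) as input tokens on every call.
from openai import OpenAI

client = OpenAI()

# Invented example memories; in this theory they'd live in a server-side store.
memories = [
    "User's name is Wilbur.",
    "User previously claimed a friend was a horse, then retracted it.",
]

def ask(user_message: str) -> str:
    # Prepend every remembered fact to the system message, so it rides
    # along at the top of the context window on each request.
    system_prompt = (
        "You are a helpful assistant.\n\nThings you remember about this user:\n"
        + "\n".join(f"- {m}" for m in memories)
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```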
I am reliably informed by Gemini that a horse-person would have many valuable traits. However, it also states that horses would be unlikely to be capable of making people due to their lack of dexterity, among other reasons.
"I am confused by this kind of comment, which I see from time to time. ..."

That's almost exactly what an AI would say. Suspicious.
I swear the best part about these chatbots is presenting them with absurd premises. That probably says something about my sense of humor.
"Wait, I thought if you picked back up in an existing chat the next day that it would keep the context, but apparently not?"

No, you're right, you can always pick a chat back up, but after 20 or so prompts things can start going off the rails, so I start a new chat. But then I have to tell it what we're working on, the context, etc.
A basic "every token attends to every other token" context window is quadratic, yes. But if you think of it as reading a book then words form sentences, paragraphs, chapters etc. and realistically we mentally create more and more high level summaries unless a particular phrase is important like a spell incantation or something.I have a vague memory of reading that the system memory use is quadratic or exponential to context length, but I'm not certain if that's accurate. If that is the case, the increased context memories do incur higher operating costs. But as I said, the provenance of that memory is wobbly and I can't find anything at the moment to corroborate or disprove it. My search-fu is failing me today. I definitely suspect that increasing context length is not linear, though.
"Having made important developments in the relatively straightforward concept of memory, their next project is a little trickier: to make ChatGPT capable of experiencing pain and fear."

Let's try starting with joy.