> Busy playing with the 128K context window for documents. Absolutely bonkers and I thank Anthropic for leading the way here. Unfortunately for them, it means all my spend is going to OpenAI and GPT-4. It's a winner-take-all market.

128k context is great, but I suspect it's going to get way more bonkers. I've been waiting for it for some time. I want to try feeding it transcripts of things like depositions, long-form misinformation documentaries, and the like, to see whether it weeds out the logical fallacies, counterfactual statements, and general manipulation of the reader.
Setting response_format to json_object enables JSON mode. This guarantees that the message the model generates is valid JSON.
Note that your system prompt must still instruct the model to produce JSON, and to help ensure you don't forget, the API will throw an error if the string JSON does not appear in your system message. Also note that the message content may be partial (i.e. cut off) if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
Must be one of text or json_object.
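The two constraints above (the system prompt must mention JSON, and length-truncated output may be cut-off JSON) are easy to enforce client-side before and after the API call. A minimal Python sketch; the helper names are my own, and the request dict just mirrors the Chat Completions payload shape:

```python
import json

def build_json_mode_request(model, system_prompt, user_prompt, max_tokens=512):
    # Mirror the API's guard: JSON mode requires the string "JSON" to
    # appear in the system message, or the request is rejected.
    if "JSON" not in system_prompt:
        raise ValueError("system prompt must mention JSON when using json_object")
    return {
        "model": model,
        "max_tokens": max_tokens,
        "response_format": {"type": "json_object"},
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

def parse_choice(choice):
    # Guard against truncation: finish_reason == "length" means the JSON
    # may be cut off mid-object and will likely fail to parse.
    if choice["finish_reason"] == "length":
        raise RuntimeError("output truncated; raise max_tokens or shorten the input")
    return json.loads(choice["message"]["content"])
```

Since the model is guaranteed to emit valid JSON (when not truncated), `json.loads` on the message content is all the post-processing needed.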
> As a bystander, I don't really understand why you need a super long context window. Why not maintain a prompt of a much shorter length and modify that as new information is added by the longer input? I certainly don't remember every detail of my history, code base or a book's plot; I compress it to the key details and go from there.

Given the context of some of the comments, the AI is used for more than conversational interaction. It's also used to parse data and such. The data can be short or long, but given that a lot of metadata is long, having a longer context window to process that data helps.
> As a bystander, I don't really understand why you need a super long context window. Why not maintain a prompt of a much shorter length and modify that as new information is added by the longer input? I certainly don't remember every detail of my history, code base or a book's plot; I compress it to the key details and go from there.

For analyzing books, or book-length content. I've used it for a large number of projects in this regard: anything from querying for specific pieces of information in a nonfiction text, to creating plot summaries and character breakdowns for a novel, to popping in a homebrew TTRPG and using it to help with world-building and rulesets. I find that ChatGPT works best as a brainstorming assistant, and when you have a text you're working from "together" it's a really fantastic one.
> I feel a disturbance in the force, like a million startups acting as thinly-veiled wrappers over OAI APIs crying out in pain as they vanish into the ether.

One door closes, another door opens. We've been trying to work around some of the frustrating limitations of current LLMs. OpenAI removed some of those limitations today, and made other things easier to build; it just means that we can focus on other things.
> It is a bad time in history to be cursed with chronic anxiety.

*vacant eyes, unfocused, suddenly snap to yours with a crazed spark*
128K context length
> Interesting, if you feed it the first half of a novel, can it finish the novel for you?

George R.R. Martin is furiously submitting what he has written so far of The Winds of Winter.
But 128k!? Nothing that can be run locally, or in a realistically sized cloud environment, even comes close.
It is a bad time in history to be cursed with chronic anxiety.
> As a bystander, I don't really understand why you need a super long context window. Why not maintain a prompt of a much shorter length and modify that as new information is added by the longer input? I certainly don't remember every detail of my history, code base or a book's plot; I compress it to the key details and go from there.

How I've been using it: feed it a bunch of stories about myself and a job description, then have it write a cover letter using input from both. Longer context windows mean I can give it way more stories to choose from when it's trying to fit them to the job description.
> 128k context is great, but I suspect it's going to get way more bonkers. I've been waiting for it for some time. I want to try feeding it transcripts of things like depositions, long-form misinformation documentaries, and the like, to see whether it weeds out the logical fallacies, counterfactual statements, and general manipulation of the reader.

GPT-3.5-turbo-16k already works fine with short and medium-length documents like company financial reports. I can't imagine what 128k context would be like unless you're feeding a novel or a huge internal corpus into it. I hope we can do away with the RAG technique and the hassle of searching for matching embedding vectors: just feed a giant slurp of data into the model and let it grok away.
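For context, the "searching for matching embedding vectors" step that a big context window would replace is just a nearest-neighbor lookup. A toy sketch of that retrieval step, assuming the documents have already been chunked and embedded (the function name and toy vectors are mine):

```python
import numpy as np

def top_k_chunks(query_vec, chunk_vecs, chunks, k=2):
    # Normalize so the dot product is cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = c @ q
    # Indices of the k most similar chunks, best first.
    best = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in best]
```

With a 128k window, much of this pipeline (chunking, embedding, indexing, retrieval) collapses to "paste the whole corpus into the prompt", at least for corpora that fit.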
It definitely does this with shorter contexts, where I feed it transcripts of Fox News clips.
> Is there a good time? Seriously, I have a time machine; if you know of one, I'm outta here.

In the 5000 B.C. range? Paranoia is great when there's no civilization.
> It is kind of funny to witness the level of camaraderie from Altman to Nadella. I could almost hear him whispering, "if it was not for those 10bn you would get the fu** out of my OAI devday!"

The way I see it, Apple is focusing on on-device AI for image recognition, image processing, voice recognition, and text-to-speech. It's great for selling devices and not much else.
What’s going to be left for human knowledge workers to do? I fear we’re just going to end up as nothing more than spectators.
> Is there a good time? Seriously, I have a time machine; if you know of one, I'm outta here.

I know I look through heavily rose-tinted glasses, and that there was also plenty to complain about then, but I feel like pre-2016 was a hopeful time.
If Wintermute calls, just hang up.
> What’s going to be left for human knowledge workers to do? I fear we’re just going to end up as nothing more than spectators.

To me, the last currencies that will remain in this world will be human-made art, time, and human contact. Once all of the basics are set (food, water, shelter, clothing, transportation, data, electricity), choosing what to do in life will simply be a matter of having a generative drive (check out Paul Conti's work on mental health) and sharing that drive with others.
> Also on Monday, OpenAI introduced what it calls "Copyright Shield," which is the company's commitment to protect its enterprise and API customers from legal claims related to copyright infringement due to using its text or image generators. The shield does not apply to ChatGPT free or Plus users. And OpenAI announced the launch of version 3 of its open source Whisper model, which handles speech recognition.

You know you're on strong ethical and legal ground when you're compelled to offer protection from copyright litigation as a perk.
> As a bystander, I don't really understand why you need a super long context window. Why not maintain a prompt of a much shorter length and modify that as new information is added by the longer input? I certainly don't remember every detail of my history, code base or a book's plot; I compress it to the key details and go from there.

Not sure why you are getting voted down; it's a reasonable question. For a lot of us working with earlier versions of the system, it's been a game to work with the smaller context windows. Generally we record conversational history in a database and use various techniques to summarize or reduce it, so that you can maintain the illusion that the system is carrying on a long-term conversation even with a smaller context window. That game is still relevant, because longer context windows don't solve everything (cost, for example; and you may want to pare a history down to a smaller context to get better and faster responses).
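The summarize-and-reduce game above can be sketched in a few lines. A toy Python version with my own (hypothetical) helper names; a real system would make `summarize` an LLM call and measure the budget in tokens rather than turns:

```python
def build_prompt(history, budget, summarize):
    """Keep the most recent turns verbatim; compress older ones.

    history:   list of message strings, oldest first.
    budget:    max number of verbatim turns to keep in the prompt.
    summarize: callable mapping a list of old turns to a short string
               (in a real system, this would itself be an LLM call).
    """
    if len(history) <= budget:
        return history  # everything still fits; nothing to compress
    older, recent = history[:-budget], history[-budget:]
    return [f"Summary of earlier conversation: {summarize(older)}"] + recent
```

Each new turn gets appended to the stored history, and the prompt is rebuilt this way before every request, so the model always sees a fixed-size window plus a rolling summary.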