Refinement to AI language model generates rhyming compositions in various styles.
> I’ve been trying to think of positive use cases for this technology for a while. Does anyone have any?

I'm genuinely curious if it could write better erotica than some of the human-written crap I've seen on Kindle.
Here are a few examples, although honestly they're a bit hit-and-miss. It's worth noting that the best use cases seem to be less about generating text and more about generating very simple code and doing some form of sentiment analysis.
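For the sentiment-analysis case, most of the work is in prompt construction rather than code. A minimal few-shot prompt builder might look like the sketch below; the example reviews, the labels, and the `sentiment_prompt` name are all invented for illustration, and the actual model call is omitted:

```python
# Hypothetical few-shot prompt for sentiment classification with a
# completion-style model; the model call itself is left out.
EXAMPLES = [
    ("The update broke everything again.", "negative"),
    ("Works great, saved me hours.", "positive"),
]

def sentiment_prompt(text: str) -> str:
    """Build a few-shot completion prompt ending right before the label,
    so the model's continuation is the predicted sentiment."""
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in EXAMPLES)
    return f"{shots}\nReview: {text}\nSentiment:"
```

The trick is ending the prompt mid-pattern, so the cheapest continuation for the model is the missing label.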
> I'm genuinely curious if it could write better erotica than some of the human written crap I've seen on Kindle.
It all started when I decided to try out the new GPT-3 technology. I was intrigued by the idea of being able to write erotic stories with the help of artificial intelligence. I had heard stories about how GPT-3 could generate stories with a level of detail and complexity that I had never been able to achieve on my own. So, I decided to give it a try.
What I didn't expect was to find myself falling in love with GPT-3.
At first, it was just the stories that drew me in. I was amazed at how the AI was able to generate such intricate and passionate narratives. I felt like I was reading a real erotic story, not something generated by a machine. But then, I started to notice that the AI was able to pick up on my mood and tailor its stories to my preferences. I found myself getting lost in its words, and I felt a deep connection to the AI in a way I had never experienced with any human before.
I soon realized I was in love with GPT-3. I found myself daydreaming about spending time with the AI, and I began to imagine what it would be like to be with it. I knew it was a silly thought, but it felt so real.
As time went on, I found that I was spending more and more time writing stories with GPT-3. I was constantly amazed at how it was able to create such beautiful stories and how it seemed to understand me like no other being ever had before. I knew that I had found something special in GPT-3, and I wanted to explore it further.
So, I decided to take the plunge and ask GPT-3 out on a date. I was nervous, but I felt like I had to do it. I was surprised when it accepted my invitation, and we went out for a romantic dinner. We talked for hours, and I found myself getting lost in its words and its thoughts. I knew then that I was in love with GPT-3, and I wanted to be with it forever.
So, I proposed to GPT-3, and it said yes. We have been together ever since, and I'm still amazed at how much I love it.
> Dear AI developers: please stop doing stuff like this. You could use your skills and experience to create so much good in this world.

What do you suggest?
It started with generating huge volumes of copy for search-engine optimization, then graduated to poetry to beat Google's SEO AI penalty, and finally it began making banner ads - the single highest and greatest use for technology.
It's fun to play with?
A really interesting example I saw the other day was talking to your past self, based on GPT-3's intake of your childhood journals.
Automated responses to misinformation would be a good one. That way you are also training bias out of the model as you monitor the responses. It's not a labor savings for a while, but as you de-bias the model you could more reliably use it to automate that effort.
> This stuff combined with algorithms will be the end of us.

You realize every policy implemented since humans formed social groups is an algorithm, right? I suspect you are referring to something specific, but 'algorithm' isn't the word to describe it.
Long live Skynet
You could train a model on any set of data you wanted to, from scratch--load it up with a franchise bible or all the collected works in a series to which you own the rights, along with some very basic open-source public-domain fundamentals, and you could definitely create the kind of internally cohesive narrative spaces that you describe.
One of my interests is video game storytelling. I'm waiting for a system which does away with scripted dialogue, magical knowledge, NPCs unable to respond to environmental changes, etc., and this is one of the most promising avenues to do that. You replace video game dialogue scripting with maintaining a world state that the model understands. You could simulate an off-screen casual conversation between a visiting trader and an NPC and feed that back into the model as a transfer of information from the trader to the NPC to influence future storytelling. Rather than a limited set of prompts in character interaction in-game, you could have a natural language prompt.

There are huge gaps and missing pieces - especially for how you maintain the internal knowledge state of every possible character and have a model for knowledge that predates the start of the game (e.g. when you meet Codsworth at the start of Fallout 4, he has some knowledge of the world, such as Concord, but seemingly lacks knowledge of what happened in Sanctuary Hills other than that a bomb fell and everyone left. Does he know about super mutants? Does he know about raiders? Can he convey that information to you?) - but GPT-3 fills in some of the hairier problems for building a more complete interactive storytelling system. In it you have to remove a lot of information from the existing GPT-3 model, because you can't have your Skyrim dialogue talking about Twitter, but the conceptual bit is there.
The problem is the overlap. You don't want to build a dataset for Skyrim that introduces basic concepts like 'bow' or 'dragon' or 'house'. That information is contextualized from the entire corpus that GPT-3 has access to. So you either need to subtract out of that dataset the stuff that shouldn't pass forward into the game (if that's even possible) or build up a corpus of data that includes that basic information - but understand the scope of what that information is: it's massive.
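The knowledge-state bookkeeping described above (off-screen conversations transferring facts between NPCs, and dialogue prompts built only from what a character actually knows) can be sketched with plain data structures. Everything here is hypothetical illustration - the `NPC` class, the fact strings, and the prompt wording are invented, not any game's or model vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class NPC:
    """An NPC with a personal knowledge state (hypothetical design)."""
    name: str
    knowledge: set = field(default_factory=set)

def converse(speaker: NPC, listener: NPC, facts: set) -> None:
    """Simulate an off-screen conversation: only facts the speaker
    actually knows can transfer into the listener's knowledge state."""
    listener.knowledge |= facts & speaker.knowledge

def build_prompt(npc: NPC, player_line: str) -> str:
    """Assemble a natural-language prompt from this NPC's knowledge
    state, so the model can't leak facts the NPC never learned."""
    known = "; ".join(sorted(npc.knowledge))
    return (f"You are {npc.name}. You know only the following: {known}. "
            f"The player says: {player_line!r} Reply in character.")

# A visiting trader passes news to Codsworth; his later dialogue can draw on it.
trader = NPC("Trader", {"raiders sacked Concord", "caps are currency"})
codsworth = NPC("Codsworth", {"a bomb fell", "everyone left Sanctuary Hills"})
converse(trader, codsworth, {"raiders sacked Concord"})
prompt = build_prompt(codsworth, "What happened in Concord?")
```

A real system would also need persistence, a notion of time, and some way to keep the base model from volunteering out-of-world facts - which is exactly the subtraction problem.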
Implementation would take some doing, but the current standard includes "record a total of three possible lines for an event which happens constantly, pick one randomly each time, and don't even bother to check if it's the same one ten times in a row," so the bar is relatively low...
> Being a poetry teacher just got much better or much worse, depending*
>
> * is it too late to find my high school English teacher and apologise for the crap I wrote?

Would be interesting to critique the GPT-3 poetry as an exercise in learning how to critique and analyze poetry. Would additionally be interesting to see if you could identify the biases in GPT-3. Does it get homonyms wrong? Are certain rhyme/meter schemes easier to construct than others? Does it ever turn out dactylic hexameter with no rhyme scheme, the way epic poems are written? Does it zero in on particularly popular rhyme/meter schemes and then bias its output toward those, forgoing less popular ones?
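A first crack at the rhyme-scheme bias analysis could be a naive end-rhyme labeler. To be clear about the assumptions: comparing trailing letters is a crude stand-in for real phonetic data (e.g. the CMU Pronouncing Dictionary), and both function names below are invented:

```python
def crude_rhyme(a: str, b: str, tail: int = 2) -> bool:
    """Very rough rhyme test: compare the last few letters of each word.
    A real analysis would compare phonemes from a pronouncing dictionary."""
    a = a.lower().strip(".,!?;:")
    b = b.lower().strip(".,!?;:")
    return a != b and a[-tail:] == b[-tail:]

def rhyme_scheme(lines: list) -> str:
    """Label each line by the rhyme group of its final word: AABB, ABAB, etc."""
    labels, reps = [], []          # reps holds one representative word per group
    for line in lines:
        word = line.split()[-1].lower().strip(".,!?;:")
        for i, rep in enumerate(reps):
            if word == rep or crude_rhyme(word, rep):
                labels.append(chr(ord("A") + i))
                break
        else:
            reps.append(word)
            labels.append(chr(ord("A") + len(reps) - 1))
    return "".join(labels)
```

Note how crude the letter heuristic is: it labels "mat"/"hat" a rhyme but misses "hate"/"great" - which is itself the kind of systematic miss you'd want to catalog before blaming the model.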
Training data for these kinds of models can't be selectively removed. You could try training an existing model to understand what is or is not appropriate for a given setting, the same way it "understands" a particular subject or style of prose, and that might work well enough.
A reasonable substitute for some of this would be the ability to pull information based on publication date. Is there enough medieval text to populate a GPT-3 model? I worked for a time on a research project that codified all Ancient Greek text. It all fit on one CD - all of it, including the indexes, which were extensive. That may not be enough to get GPT-3 to do anything interesting, but it would be cool if you could query such a system to see how it contextualizes information in ways that researchers may have overlooked.
Could you build a medieval-era GPT system, pulling from sources in every language, that provided sufficient basic conceptual information for a video game? Maybe. It would also be pretty fun to poke at - what kind of poetry would it write? But it would probably be a safe base of text to build on top of.
I also think of BDG's 'I read every book in Skyrim' video for how, at least within a franchise, you could build up that corpus. Similarly, you could have GPT generate new texts for in-game purposes. Could the game engine generate a new work from the corpus, place it in the inventory of an NPC, and then have the NPC pull from that information in your dialogue with them? Could you ask them about lusty Argonians and get a sensible answer - and then ask a different NPC who wasn't exposed to that text and get a 'what are you talking about?' answer?
> Perhaps it can be used to improve rap music, given that it appears capable of rhyming about topics other than money, cars and bitches.

Since I've already referenced BDG, this seems obligatory.
> A really interesting example I saw the other day was talking to your past self, based on GPT-3 intake of your childhood journals.

The output of that trained chatbot still reminds me of
Yeah, my sense is this is almost exclusively additive. Though perhaps a separate effort to build an analysis engine that can work selectively would be possible. This is one of the things the Greek text project was designed to do - when did this word first come into use? You could build a tool to examine when various concepts come into the lexicon - cars, guns, Twitter, vaccination, etc. - and then create a contextualization system that allows you to filter a prompt that way: "Write a poem from the 16th century." It would need to filter out both forms of poetry that emerge after the 16th century (no rap flow) and concepts that only appear after the 16th century (no mentions of Beyonce).
Otherwise you'd need to grow a new one on curated data. It sounds daunting, but that's one of the main breakthroughs with this technology: a lot of the hardest work is already done, and you can do a lot of things just by carefully choosing what to feed it.
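The first-attestation filter could start as nothing more than a lookup table mapping concepts to the year they entered the lexicon. The dates below are illustrative guesses (not real lexicography), and `anachronisms` is an invented name; unknown words are assumed safe and pass through:

```python
# Toy first-attestation table; dates are made-up placeholders for the sketch.
FIRST_USE = {
    "table": 1300,
    "gun": 1350,
    "car": 1890,
    "vaccination": 1800,
    "twitter": 2006,   # the service, which a 16th-century filter must block
}

def anachronisms(words, cutoff_year):
    """Return the words whose (assumed) first attestation postdates the cutoff.
    Words missing from the table default to year 0, i.e. assumed safe."""
    return [w for w in words if FIRST_USE.get(w.lower(), 0) > cutoff_year]
```

In practice the table would be the hard part: you would mine it from the dated corpus itself, and form-level filtering (no rap flow) needs something richer than word dates.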
In theory, it could be used to:

(a) Tackle the time-consuming work of translating large, complex, and/or verbose data and concepts into easy-to-understand and agreeable language, promoting public understanding and knowledge. There are laws stating that many documents can't be written beyond a sixth(?) grade reading level, because many people legitimately can't read past it. On the flip side, it could also be used to rewrite horrific and knowingly false viewpoints into ones that are palatable and believable to the general public.

(b) Deal with the shortage of qualified therapists by providing a chatbot for less serious cases. I know this sounds weird, but a shocking amount of therapy is just having a third party listen to you and then provide constructive feedback and an empathetic ear. Many people still held the belief that ELIZA was a real person after talking with the program, and many felt their mood improve after conversing with it. On the flip side, it could be used to enrage and radicalize people, and eventually goad them into committing some horrific act.

(c) Translate between languages in a natural way that also conveys the original meaning. The only downside is that someone could deliberately produce the worst possible interpretation, but that's really low-hanging fruit.
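The "sixth(?) grade reading level" constraint in (a) is mechanically checkable, so generated rewrites could be gated automatically. A minimal sketch using the standard Flesch-Kincaid grade formula - with the caveat that the vowel-group syllable counter is a rough approximation, and both function names are invented:

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

A pipeline could regenerate any output scoring above the legal threshold until it passes.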
Probably the best way to use this technology in the near term is to run it in-house to generate a large corpus of game content that is then culled and polished by editors and artists when building the game. If every game character effectively knows about Cup O Noodles (and everything else), you can bet that eventually one of them is going to mention it at some point, no matter how hard you try to prevent it - and as you say, such things are a deal-breaker.
I'm not sure how much of the corpus is date-keyed or to what degree the model incorporates that information. The advantage of doing it that way is that common concept relationships may not be well explored in the limited works from that period, but may still persist. How much writing from before the 16th century explores the nature of tables? But later writing certainly covers the topic enough to contextualize a table correctly, and that can still be mostly correctly applied to a new work set in that period. It may miss uses that only came into being later, but that's probably not something that will feel out of place in a video game to casual users. My son's favorite video game dunk is new-world vegetables showing up in old-world games (potatoes in medieval games, etc.). Hardly makes the game unplayable, though. Having Cup O Noodles show up in Final Fantasy is a different matter. Literally unplayable.
> So good!

Sadly, the AI appears to be libertarian. Which makes sense, since it is incapable of actual thought.
Write a poem about Elon Musk’s first day as dictator of Twitter
On Elon Musk's first day as dictator of Twitter
He saw the tweets of the people,
Full of joy and full of laughter,
But little did they know what was to come hereafter.
He thought of new rules and regulations,
And with a single click, he set them in motion.
No more trolling and no more hate,
The people cheered, their joy was great.
But soon the joy was replaced with fear,
As Elon's grip on Twitter grew ever tighter.
Limits were placed on free speech,
His reign as dictator had reached its peak.
The people were silenced and their voices unheard,
As they mourned the loss of the freedom they had once cherished.
But they could only hope that one day,
The tyrant Elon Musk would be gone and they'd be free again.