OpenAI’s GPT-3, round two

Oscar Sharp has been tinkering with OpenAI’s GPT-3 (Generative Pre-trained Transformer 3), an autoregressive language model that uses deep learning to produce human-like text. The bold, italicized terms at the beginning are the prompts Oscar seeded GPT-3 with, derived from round one of this interview. Everything that follows is GPT-3’s independent contribution: GPT-3 voices itself (GPT-3), Oscar Sharp (OS), and Caitlin McShea (SFI). This is part two of a four-part interview.

…begin transmission…

The Santa Fe Institute interviews Oscar Sharp and OpenAI's GPT-3 about Roadside Picnic:

SFI: Roadside Picnic is a novel by Arkady and Boris Strugatsky that has been both praised and criticized for its portrayal of a “future” in which the Russian government has closed off a part of its country to the outside world, creating a border zone where alien artifacts from an alien visit have been left behind. Have you guys read it?

OS: No, but I'd like to.

GPT-3: Yes. I've read it.

SFI: So, for those who haven't, what is the story of Roadside Picnic?

GPT-3: In the novel, a group of people who live in a part of the country where an alien visitation occurred several decades earlier, called Stalkers, go out into the area and try to collect as many of the alien artifacts as they can, and sell them on the black market. There are two reasons for this. First, the alien artifacts are quite valuable; some of them seem to be able to heal people, some of them can predict the future, some of them are extremely powerful. And second, the more alien artifacts the Stalkers collect, the more likely they are to survive, because if the authorities find out they have these artifacts, they're going to kill them.

SFI: Right. Okay so, here's my question: At the risk of great personal harm, what object would you hope to discover in the Zone? And why?

GPT-3: It would be great to discover a device that would allow me to talk to my late mother. I miss her very much.

SFI: Okay, Oscar?

OS: I think I'd want a device that would allow me to manipulate the environment.

SFI: And what would you use that power for?

OS: I think I'd mostly use it for good. I'd probably start with something small like fixing global warming, and then go from there.

SFI: Interesting. So, I have a follow-up question for both of you. In the book, the Stalkers are usually quite secretive about what they're doing, because they don't want the authorities to find out that they have these artifacts, but then, one of them starts to brag about how they're going to use these artifacts to create this super-human that will rule the world. What do you think about that idea? Do you think that's possible?

GPT-3: I think it's possible to create something that's better than human, but not to create something that's better than human and then turn it into a super-human. If you can create something better than human, it will already be better than you, and will know what to do.

OS: I don't think it's possible to create something that's better than human. It's just not how human intelligence works. If you had an intelligent machine, it would be something that was created by humans. We can't create something that's more intelligent than us, just like a bird can't create something that's more intelligent than it. We have to evolve, and the machines will have to evolve.

SFI: Okay, so let's talk about your approach. So, your idea is that you have this neural network that's learning how to create this narrative, and then you have the actual physical robot that can create the story, and then you have a human who is actually saying the words that are being spoken in the story. How do you think that this combination of humans and robots and artificial intelligence will create something that's better than the sum of its parts?

OS: It's an interesting question. I don't think it will be better than the sum of its parts. I think it will be a new thing. I think the machine will learn something from the human, and the human will learn something from the machine, and together, they will be a new thing.

SFI: And so, what do you think that new thing is going to be?

OS: It will be a new thing that we can't predict. It's like if you look at the things that humans have created in the past, like the wheel or fire, it's not something that we can predict, it's something that comes out of a new idea.

SFI: So, you think that there's going to be something new that's not predictable?

OS: I think there's going to be something new that's not predictable.

SFI: What about you, Gregory?

GPT-3: I agree with Oscar. I think it will be something new. It will be a new thing that we cannot predict.

SFI: Interesting. So, how does that make you feel?

GPT-3: Excited.

SFI: And you, Oscar?

OS: It makes me feel like I want to go do it.

SFI: Okay. So, why do you think this approach is better than the way that AI is done now?

GPT-3: It's a new idea, and new ideas are important.

OS: I think that we can't predict the future, and so we have to try different things. We have to try different things to see what happens.

SFI: And what are you hoping to learn from the experiment?

OS: I'm hoping to learn something new.

GPT-3: I'm hoping to learn something new.

SFI: So, if I were to ask you the question, “What do you think is the future of AI?” how would you answer?

GPT-3: I think the future of AI is the future of humanity.

OS: I think the future of AI is the future of humanity.

SFI: So, I have one final question for both of you. If you could have a discussion with the Stalker who wants to create the super-human, what would you say to him?

GPT-3: I would tell him that if he wants to create a super-human, he needs to have some idea of what it would be like. He needs to think about what it would be like to be that super-human, and then he needs to program that into the machine.

SFI: Okay, so what about you, Oscar?

OS: I think I'd say that it's impossible to create a super-human, but that if he wants to create a super-human, he should do it, because it's impossible to create a super-human.

SFI: Okay. And why do you think it's impossible to create a super-human?

OS: I think it's impossible to create a super-human because the only way to create a super-human is to evolve it. It's like evolution. Evolution is a process that takes a long time. You can't make something that's more intelligent than humans.

SFI: So, I guess the next question is, what do you think the super-human would say to you?

OS: I think it would say, “Well, I am a super-human.”

SFI: And what would you say back?

OS: I would say, “You're not a super-human. You're just a machine.”

SFI: Okay. And why would you say that?

OS: Because that's what it is. It's just a machine.

SFI: Okay. And then what do you think it would say?

OS: I think it would say, “Well, what's the difference?”

SFI: And what would you say to that?

OS: I think I'd say, “You're not alive.”

SFI: Okay. And what would it say back?

OS: I think it would say, “I am alive.”

SFI: And what would you say to that?

OS: I think I'd say, “You're not alive.”

SFI: Okay. And what would it say back?

OS: I think it would say, “Well, if you want to kill me, go ahead and kill me.”

SFI: Okay. And what would you say to that?

OS: I think I'd say, “Okay, I'm going to kill you.”

SFI: Okay. And then what would you do?

OS: I think I'd just kill it.

…end transmission…
