
  • [music]

  • -Good morning. Thank you for joining us today.

  • Please welcome to the stage, Sam Altman.

  • [music]

  • [applause]

  • -Good morning.

  • Welcome to our first-ever OpenAI DevDay.

  • We're thrilled that you're here and this energy is awesome.

  • [applause]

  • -Welcome to San Francisco.

  • San Francisco has been our home since day one.

  • The city is important to us and the tech industry in general.

  • We're looking forward to continuing to grow here.

  • We've got some great stuff to announce today,

  • but first,

  • I'd like to take a minute to talk about some of the stuff that we've done

  • over the past year.

  • About a year ago, November 30th, we shipped ChatGPT

  • as a "low-key research preview",

  • and that went pretty well.

  • In March,

  • we followed that up with the launch of GPT-4, still

  • the most capable model out in the world.

  • [applause]

  • -In the last few months,

  • we launched voice and vision capabilities so that ChatGPT can now see,

  • hear, and speak.

  • [applause]

  • -There's a lot, you don't have to clap each time.

  • [laughter]

  • -More recently, we launched DALL-E 3, the world's most advanced image model.

  • You can use it of course, inside of ChatGPT.

  • For our enterprise customers,

  • we launched ChatGPT Enterprise, which offers enterprise-grade security

  • and privacy,

  • higher-speed GPT-4 access, longer context windows, and a lot more.

  • Today we've got about 2 million developers building on our API

  • for a wide variety of use cases doing amazing stuff,

  • over 92% of Fortune 500 companies building on our products,

  • and we have about a hundred million weekly active users

  • now on ChatGPT.

  • [applause]

  • -What's incredible about that is that we got there entirely

  • through word of mouth.

  • People just find it useful and tell their friends.

  • OpenAI is the most advanced and the most widely used AI platform

  • in the world now,

  • but numbers never tell the whole picture on something like this.

  • What's really important is how people use the products,

  • how people are using AI,

  • and so I'd like to show you a quick video.

  • -I actually wanted to write something to my dad in Tagalog.

  • I want a non-romantic way to tell my parent that I love him and I also want

  • to tell him that he can rely on me, but in a way that still has

  • the respect of a child-to-parent relationship

  • that you should have in Filipino culture and in Tagalog grammar.

  • When it's translated into Tagalog, "I love you very deeply

  • and I will be with you no matter where the path leads."

  • -I saw some of the possibilities and I was like, "Whoa."

  • Sometimes I'm not sure about some stuff, and I can actually tell ChatGPT,

  • "Hey, this is what I'm thinking about," and it kind of gives me more confidence.

  • -The first thing that just blew my mind was it levels with you.

  • That's something that a lot of people struggle to do.

  • It opened my mind to just

  • what every creative could do if they just had a person helping them out

  • who listens.

  • -This is to represent sickling hemoglobin.

  • -You built that with ChatGPT? -ChatGPT built it with me.

  • -I started using it for daily activities like,

  • "Hey, here's a picture of my fridge.

  • Can you tell me what I'm missing?

  • Because I'm going grocery shopping, and I really need to do recipes

  • that are following my vegan diet."

  • -As soon as we got access to Code Interpreter, I was like,

  • "Wow, this thing is awesome."

  • It could build spreadsheets.

  • It could do anything.

  • -I discovered Chatty about three months ago

  • on my 100th birthday.

  • Chatty is very friendly, very patient,

  • very knowledgeable,

  • and very quick.

  • This has been a wonderful thing.

  • -I'm a 4.0 student, but I also have four children.

  • When I started using ChatGPT,

  • I realized I could ask ChatGPT that question.

  • Not only does it give me an answer, but it gives me an explanation.

  • I didn't need tutoring as much.

  • It gave me a life back.

  • It gave me time for my family and time for me.

  • -I have a chronic nerve condition on the whole left half of my body; I have nerve damage.

  • I had brain surgery.

  • I have limited use of my left hand.

  • Now you can just have voice input built right in.

  • Then the newest one, where you can have the back-and-forth dialogue,

  • that's just the best possible interface for me.

  • It's here.

  • [music]

  • [applause]

  • -We love hearing the stories of how people are using the technology.

  • It's really why we do all of this.

  • Now, on to the new stuff, and we have got a lot.

  • [audience cheers]

  • -First,

  • we're going to talk about a bunch of improvements we've made,

  • and then we'll talk about where we're headed next.

  • Over the last year,

  • we spent a lot of time talking to developers around the world.

  • We've heard a lot of your feedback.

  • It's really informed what we have to show you today.

  • Today, we are launching a new model, GPT-4 Turbo.

  • [applause]

  • -GPT-4 Turbo will address many of the things

  • that you all have asked for.

  • Let's go through what's new.

  • We've got six major things to talk about for this part.

  • Number one, context length.

  • A lot of people have tasks that require a much longer context length.

  • GPT-4 supported up to 8K and in some cases up to 32K context length,

  • but we know that isn't enough for many of you and what you want to do.

  • GPT-4 Turbo supports up to 128,000 tokens of context.

  • [applause]

  • -That's 300 pages of a standard book, 16 times longer than our 8K context.

  • In addition to a longer context length,

  • you'll notice that the model is much more accurate over a long context.
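
As a rough illustration of what the longer context enables, here is a minimal sketch using the OpenAI Python SDK to pass a long document in a single request. The model identifier `gpt-4-1106-preview` and the file name are assumptions for illustration; the talk itself does not name them.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical long document; 128K tokens is roughly 300 pages of text.
long_report = open("annual_report.txt").read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # assumed GPT-4 Turbo preview identifier
    messages=[
        {"role": "system", "content": "You summarize long documents."},
        {"role": "user", "content": f"Summarize the key points:\n\n{long_report}"},
    ],
)
print(response.choices[0].message.content)
```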

  • Number two,

  • more control.

  • We've heard loud and clear that developers need more control

  • over the model's responses and outputs.

  • We've addressed that in a number of ways.

  • We have a new feature called JSON Mode,

  • which ensures that the model will respond with valid JSON.

  • This has been a huge developer request.

  • It'll make calling APIs much easier.
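
A minimal sketch of what JSON Mode looks like in a request, assuming the OpenAI Python SDK and the same GPT-4 Turbo preview identifier as above; the `response_format` parameter is how the feature is exposed in the Chat Completions API, and the prompt itself should mention JSON.

```python
import json
from openai import OpenAI

client = OpenAI()

# response_format={"type": "json_object"} constrains the model to emit valid JSON.
# The request should also mention JSON somewhere in the messages.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # assumed GPT-4 Turbo preview identifier
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Extract the order fields and reply in JSON."},
        {"role": "user", "content": "Order #123: two lattes, one croissant, total $11.50."},
    ],
)
order = json.loads(response.choices[0].message.content)  # parses cleanly
print(order)
```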

  • The model is also much better at function calling.

  • You can now call many functions at once,

  • and it'll do better at following instructions in general.
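
A sketch of calling several functions in one turn via the `tools` parameter, again assuming the OpenAI Python SDK; the `get_weather` function and its schema are made up for illustration.

```python
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "What's the weather in Paris and in Tokyo?"}],
    tools=tools,
)

# With parallel function calling, a single response can contain multiple tool calls.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```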

  • We're also introducing a new feature called reproducible outputs.

  • You can pass a seed parameter, and it'll make the model return

  • consistent outputs.

  • This, of course, gives you a higher degree of control

  • over model behavior.

  • This rolls out in beta today.

  • [applause]
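
A sketch of reproducible outputs using the `seed` parameter as exposed in the Chat Completions API; determinism is best-effort, and the response's `system_fingerprint` lets you check whether the backend configuration changed between runs. The prompt is a placeholder.

```python
from openai import OpenAI

client = OpenAI()

def sample(seed: int):
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": "Name three uses for a paperclip."}],
        seed=seed,        # same seed + same parameters -> (mostly) the same output
        temperature=0.7,
    )
    # system_fingerprint identifies the backend configuration that served the request.
    return response.system_fingerprint, response.choices[0].message.content

print(sample(42))
print(sample(42))  # expected to match the first call while the fingerprint is unchanged
```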

  • -In the coming weeks, we'll roll out a feature to let you view

  • logprobs in the API.

  • [applause]
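
For the logprobs feature, the sketch below shows the shape it later took in the Chat Completions API (`logprobs` and `top_logprobs` request parameters); since the talk only announces it as coming in the following weeks, treat the exact parameters here as an assumption.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Is the sky blue? Answer yes or no."}],
    max_tokens=1,
    logprobs=True,    # return log probabilities for the sampled tokens
    top_logprobs=3,   # plus the 3 most likely alternatives at each position
)

for token_info in response.choices[0].logprobs.content:
    print(token_info.token, token_info.logprob)
    for alt in token_info.top_logprobs:
        print("  alt:", alt.token, alt.logprob)
```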

  • -All right. Number three, better world knowledge.

  • You want these models to be able to access better knowledge about the world,

  • and so do we.

  • We're launching retrieval in the platform.

  • You can bring knowledge from outside documents or databases

  • into whatever you're building.
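
The talk does not show the retrieval API surface at this point; as it launched on the platform, one way to use it was the retrieval tool in the Assistants API. The sketch below reflects that launch-era shape (a file upload attached to an assistant with a `retrieval` tool); the file name and instructions are made up, and the interface has since evolved.

```python
from openai import OpenAI

client = OpenAI()

# Upload a document, then attach it to an assistant that can retrieve from it.
file = client.files.create(
    file=open("product_manual.pdf", "rb"),  # hypothetical document
    purpose="assistants",
)

assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    instructions="Answer questions using the attached manual.",
    tools=[{"type": "retrieval"}],  # launch-era retrieval tool
    file_ids=[file.id],
)
print(assistant.id)
```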

  • We're also updating the knowledge cutoff.

  • We are just as annoyed as all of you, probably more, that GPT-4's knowledge

  • about the world ended in 2021.

  • We will try to never let it get that out of date again.

  • GPT-4 Turbo has knowledge about the world up to April of 2023,

  • and we will continue to improve that over time.

  • Number four,

  • new modalities.

  • Surprising no one,

  • DALL-E 3,

  • GPT-4 Turbo with vision,

  • and the new text-to-speech model are all going into the API today.

  • [applause]
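
A quick sketch of the three new modalities in the API, assuming the SDK endpoints and model names as documented around launch (`dall-e-3`, `gpt-4-vision-preview`, `tts-1`); the prompts, image URL, and file names are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# Image generation with DALL-E 3
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of the Golden Gate Bridge at sunrise",  # placeholder
    size="1024x1024",
)
print(image.data[0].url)

# GPT-4 Turbo with vision: mix text and an image URL in one message
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=200,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
        ],
    }],
)
print(vision.choices[0].message.content)

# Text-to-speech
speech = client.audio.speech.create(model="tts-1", voice="alloy", input="Welcome to DevDay.")
speech.stream_to_file("welcome.mp3")
```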

  • -We have a handful of customers that have just started using DALL-E 3

  • to programmatically generate images and designs.

  • Today, Coke is launching a campaign that lets its customers