  • The first example I have is very simple.

  • It's just counting the letter R's in the word strawberry.

  • So let's start with the traditional, existing model, GPT-4o.

  • So as you can see, the model fails on this.

  • There are three R's, but the model says there are only two R's.

  • So why does an advanced model like GPT-4o make such a simple mistake?

  • That's because models like this are built to process text in units that are neither individual characters nor whole words.

  • It's something in between, sometimes called a sub-word.

  • So if we ask the model a question that involves understanding the notion of characters and words, it can easily make mistakes, because it's not really built for that (the tokenization sketch after this transcript illustrates the point).

  • So now let's go on to our new model and type in the same problem.

  • This is o1-preview, which is a reasoning model.

  • So unlike GPT-4o, it starts thinking about the problem before outputting the answer (a minimal API sketch of this comparison follows the transcript).

  • And now it outputs the answer.

  • There are three R's in the word strawberry.

  • So that's the correct answer.

  • And this example shows that even for a seemingly unrelated counting problem, having reasoning built in can help avoid mistakes, because the model can look at its own output, review it, and be more careful (a simple generate-then-review loop is sketched after the transcript).

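To see both halves of this example concretely, the sketch below counts the R's directly and then shows how a tokenizer splits the word. It is a minimal illustration, assuming OpenAI's open-source tiktoken library is installed (`pip install tiktoken`); the exact token boundaries depend on the encoding, so the split shown in the comment is only indicative.

```python
# A minimal sketch of why character counting is hard for LLMs.
# Assumes the open-source `tiktoken` tokenizer library is installed
# (`pip install tiktoken`); exact splits depend on the encoding used.
import tiktoken

word = "strawberry"

# Ground truth: plain character counting finds three R's.
print(word.count("r"))  # -> 3

# What the model actually sees: sub-word tokens, not characters.
enc = tiktoken.get_encoding("o200k_base")  # encoding family used by GPT-4o
tokens = enc.encode(word)
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in tokens]
print(pieces)  # e.g. ['str', 'awberry'] -- individual R's are never exposed
```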
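
The model comparison in the demo amounts to sending the same question to both models. The following is a rough sketch using the OpenAI Python SDK, not the exact demo setup; it assumes the model names `gpt-4o` and `o1-preview` are available to your API key.

```python
# Rough sketch of the demo: the same question to a traditional model
# and to a reasoning model. Assumes the OpenAI Python SDK
# (`pip install openai`) and that the model names are available.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
question = "How many R's are in the word strawberry?"

for model in ("gpt-4o", "o1-preview"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    # o1-preview "thinks" before answering, so it tends to get 3 right.
    print(model, "->", resp.choices[0].message.content)
```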
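
The self-review idea in the last remark can also be approximated by hand with a non-reasoning model: get a draft answer, then ask the model to check it letter by letter. The two-pass loop below is a hypothetical sketch under the same SDK and model-availability assumptions as above.

```python
# Hypothetical two-pass "answer, then review" loop approximating the
# self-checking that a reasoning model does internally. Same SDK and
# model-availability assumptions as the sketch above.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed available; any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "How many R's are in the word strawberry?"
draft = ask(question)

# Second pass: show the model its own draft and ask it to verify
# letter by letter before committing to a final answer.
review = ask(
    f"Question: {question}\nDraft answer: {draft}\n"
    "Check the draft by spelling the word out letter by letter, "
    "then give the final count."
)
print(review)
```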