  • Crazy's like a boogie out in low ninja.

  • Big fun bro, yeah?

  • Alright, so hi everybody, it's me, Cary C- Now, I've always thought of myself as a musical person.

  • Isn't it amazing?

  • No, no Cary, that isn't amazing.

  • Anyway, given that I've used AI to compose Baroque music, and I've used AI to compose jazz music, I think it just makes sense for me to fast forward the musical clock another 60 years to compose some rap music.

  • But before I do that, I gotta give credit to Siraj Raval, who actually did this first.

  • Homie grows punani, likely I'm totin' inspired enough.

  • But you know what they say, no rap battle's complete without two contenders.

  • So, what did I do to build my own digital rap god?

  • Well, I used Andrej Karpathy's recurrent neural network code again.

  • An RNN is just an ordinary neural network, but we give it a way to communicate with its future self with this hidden state, meaning it can store memory.
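
A rough illustration of that hidden-state idea, written as a single NumPy step (a toy sketch with made-up sizes, not Karpathy's actual char-rnn code):

```python
import numpy as np

# Toy character-level RNN step (an illustration of the idea, not Karpathy's code).
# The hidden state h is the "memory": it gets fed back in at the next time step.
vocab_size, hidden_size = 65, 128                        # sizes made up for the sketch
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01    # input -> hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01   # hidden -> hidden (the memory loop)
Why = np.random.randn(vocab_size, hidden_size) * 0.01    # hidden -> output
bh, by = np.zeros(hidden_size), np.zeros(vocab_size)

def rnn_step(x_onehot, h_prev):
    """One step: read a character, update the memory, predict the next character."""
    h = np.tanh(Wxh @ x_onehot + Whh @ h_prev + bh)
    logits = Why @ h + by
    probs = np.exp(logits - logits.max())
    return probs / probs.sum(), h
```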

  • Now, I've done this countless times before, so I won't dive too deep into what an RNN is.

  • Instead, I want to focus more on a twist I implemented that makes this quote-unquote algorithm more musical.

  • Before I do that though, I need to introduce you to Dave from Boyinaband.

  • He's, um, a tad bit good at rapping, I guess.

  • So when I first trained Karpathy's RNN to generate rap lyrics in 2017, I invited him over to read the lyrics my algorithm had written.

  • But then, I lost the footage, and then he lost the footage, and well, long story short, there's no footage of it ever happening.

  • That made me bummed for a bit, but then I realized this could be interpreted as a sign from above.

  • Perhaps the AI prevented us humans from rapping its song because it wanted to do the rap itself.

  • Well, Computery, if you insist.

  • To give Computery a voice, I downloaded this Python module that lets us use Google's text-to-speech software directly.
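
The module isn't named in the video; one commonly used package that wraps Google's text-to-speech is gTTS, so a minimal sketch, assuming that's the kind of thing meant, would be:

```python
from gtts import gTTS

# Speak one generated line with Google's TTS voice and save it to a file.
# (gTTS is an assumption here -- the video never names the exact module.)
line = "Crazy's like a boogie out in low ninja."
gTTS(text=line, lang="en").save("computery_line.mp3")
```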

  • I'm pretty sure you've heard this text-to-speech voice before.

  • Now, as we hear Computery's awesome rap, I'm gonna show the lyrics on screen.

  • If you're up for it, you viewers out there can sing along too.

  • Alright, let's drop this track.

  • Wait, why aren't you singing along?

  • Why aren't you- The reason it performed so badly is because it hasn't had any training data to learn from.

  • So let's go find some training data.

  • With my brother's help, I used a large portion of the Original Hip-Hop Lyrics Archive as my dataset to train my algorithm on.

  • This includes works by rap giants like Kendrick Lamar and Eminem.

  • We stitched around 6,000 songs into one giant text file, separated by line breaks, to create our final dataset of 17 million text characters.
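
A stitching step like that could look roughly like the sketch below; the folder and file names are placeholders, not the actual ones used:

```python
from pathlib import Path

# Stitch every lyric file into one big training text, with songs separated by line breaks.
# "lyrics/" and "rap_dataset.txt" are placeholder paths for this sketch.
songs = sorted(Path("lyrics").glob("*.txt"))
with open("rap_dataset.txt", "w", encoding="utf-8") as out:
    for song in songs:
        out.write(song.read_text(encoding="utf-8").strip() + "\n\n")

print(f"Stitched {len(songs)} songs into {Path('rap_dataset.txt').stat().st_size} bytes")
```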

  • Wait, that's only 17 megabytes.

  • A single 4-minute video typically takes up more space than that.

  • Yeah, it turns out that text, as a data type, is incredibly dense.

  • You can store a lot of letters in the same amount of space as a short video.

  • Let's see the algorithm learn.

  • Okay, ready?

  • Go.

  • Stop.

  • As you can see, after just 200 milliseconds, less than a blink of an eye, it learned to stop putting spaces everywhere.

  • In the dataset, you'll rarely see more than two spaces in a row, so it makes sense that the AI would learn to avoid doing that too.

  • However, I can see it still putting in uncommon patterns like double I's and capital letters in the middle of words, so let's keep training to see if it learns to fix that.

  • We're half a second into training now, and the pesky double I's seem to have vanished.

  • The AI has also drastically shortened the length of its lines, but behind the scenes, that's actually caused by an increase in the frequency of the line break character.

  • For the AI, the line break is just like any other text character.

  • However, to match the dataset, we need a good combination of both line breaks and spaces, which we actually get in the next iteration.

  • And here, we see the AI's first well-formatted word, "it."

  • Wait, does "eco" count as a word?

  • Not sure about that.

  • Oh my gosh, you guys, Future Cary here.

  • I realize that's not an uppercase I, it's a lowercase L.

  • Major 2011 vibes.

  • Now at one full second into training, we see the AI has learned that commas are often not followed by letters directly.

  • There should be a space or a line break afterwards.

  • By the way, the average human reads at 250 words per minute, so a human learning how to rap alongside the AI has currently read four words.

  • I'm gonna let it run in the background as I talk about other stuff, so one thing I keep getting asked is, what is loss?

  • Basically, when a neural network makes a guess about what the next letter is gonna be, it assigns a probability to each letter type.

  • And loss just measures how far away those probabilities were from the true answer given by the dataset, on average.

  • So, lower loss usually means the model can predict true rap lyrics better.
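
In other words, the loss being described is average cross-entropy on next-character predictions. A tiny worked example with made-up probabilities:

```python
import math

# Toy example of the loss being described (cross-entropy on the next character).
# Made-up probabilities: the model thinks the next letter is probably "a".
probs = {"a": 0.50, "b": 0.25, "e": 0.25}
true_next = "a"

loss = -math.log(probs[true_next])   # about 0.69; a perfect guess (p = 1.0) would give 0
print(f"loss = {loss:.2f}")
# Averaging this over every character in the dataset gives the loss curve
# shown during training: lower loss means better next-character predictions.
```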

  • Now I'm playing the training time-lapse ten times faster.

  • The loss function actually held pretty constant for the first 18 seconds, then it started to drop.

  • That big drop corresponds to the text looking much more English, with the lines finally beginning to start with capital letters (took long enough), and common words like "you," "I," and "the" making their first appearance.

  • By 54 seconds, I'd say about half of the words are real, so rudimentary grammar rules can start forming.

  • "Of the" is one of the most common bigrams in the English language, and here it is.

  • Also, apostrophes are starting to be used for contractions, and we're seeing the origins of one-word interjections.

  • Over a minute in, we see the square bracket format start showing up.

  • In the dataset, square brackets were used to denote which rapper was speaking at any given time.

  • So that means our baby AI's choice of rappers are Goohikomi, Moth, and Burstdogrelacy.

  • I also want to quickly point out how much doing this relies on the memory I described earlier.

  • As Andrej's article shows, certain neurons of the network have to be designated to fire only when you're inside the brackets, to remember that you have to close them at some point to avoid bracket imbalance.

  • Okay, this is the point in the video where I have to discuss swear words.

  • I know a good chunk of my audience is children, so typically I'd censor this out.

  • However, given the nature of a rap dataset, I don't think it's possible to accurately judge the neural network's performance if we were to do that.

  • Besides, I've included swears in my videos before, people just didn't notice.

  • But that means, if you're a kid under legal swearing age, I'm kindly asking you to leave to preserve your precious ears.

  • But if you won't leave, I'll have to scare you away.

  • Ready?

  • With that being said, there is one word that's prevalent in raps that I don't think I'm in the position to say, and dang it, why is this glue melting?

  • Okay, well I'm pretty sure we all know what word I'm talking about, so in the future I'm just going to replace all occurrences of that word with "ninja."

  • After two minutes, it's learned to consistently put two line breaks in between stanzas, and the common label "Chorus" is starting to show up.

  • Correctly.

  • Also, did you notice the mysterious line? That doesn't sound like a rap lyric.

  • Well, it's not.

  • It appeared 1,172 times in the dataset as part of the header of every song that the webmaster transcribed.

  • Now over the next 10 minutes, the lyrics gradually got better.

  • It learned more intricate grammar rules, like that "motherfucking" should be followed by a noun, but the improvements became less and less significant.

  • So what you see around 10 minutes is about as good as it's gonna get.

  • After all, I set the number of synapses to a constant 5 million, and there's only so much information you can fit in 5 million synapses.

  • Anyway, I ran the training overnight and got it to produce this 600-line file.

  • If you don't look at it too long, you could be convinced they're real lyrics.

  • Patterns shorter than a sentence are replicated pretty well, but anything longer is a bit iffy.

  • There are a few one-liners that came out right, like... The lines that are a little wonky, like... Oh, I also like it when it switches into shrieking mode, but anyway, we can finally feed this into Google's text-to-speech to hear it rap once and for all.

  • Hold on, that was actually pretty bad.

  • The issue here is we gave our program no way to implement rhythm, which, in my opinion, is the most important element to making a rap flow.

  • So how do we implement this rhythm?

  • Well, this is the twist I mentioned earlier in the video.

  • There's two methods.

  • Method one would be to manually time-stretch and time-squish syllables to match a pre-picked rhythm using some audio-editing software.

  • For this, I picked my brother's song, 3,000 Subbies, and I also used Melodyne to auto-tune each syllable to the right pitch so it's more of a song. oooooooooooo Although, that's not required for rap.

  • So, how does the final result actually sound?

  • I'll let you be the judge!

  • Looking like that break-in, them bitches bitches riding alone outside, why don't you get up now and guess what you think?

  • This is a breakout!

  • Now haters who have costs, must like what that's the pity!

  • Just ask a body!

  • Take a lot of shit!

  • Eat all the ninja!

  • Wow, I think this sounded pretty fun, and I'm impressed with Google's vocal range.

  • However, it took me two hours to time-align everything, and the whole reason we used AI was to have a program to automatically generate our rap songs.

  • So we've missed the whole point.

  • That means we should focus on method two, automatic, algorithmic time alignment.

  • How do we do that?

  • Well firstly, notice that most rap background tracks are in 4/4 time or some multiple of it.

  • Subdivisions of beats, as well as full stanzas, also come in powers of two.

  • So all rhythms seem to depend closely on this exponential series.

  • My first approach was to detect the beginning of each spoken syllable and quantize, or snap, that syllable to the nearest half beat.
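
A beat-snapping helper in the spirit of that description might look like this (the function name and the BPM are made up for the sketch):

```python
def snap_to_half_beat(t_seconds: float, bpm: float = 100.0) -> float:
    """Quantize a timestamp to the nearest half beat (helper name and BPM made up)."""
    half_beat = 60.0 / bpm / 2.0               # half a beat, in seconds
    return round(t_seconds / half_beat) * half_beat

# At 100 BPM a half beat is 0.3 s, so a syllable detected at 1.37 s snaps to 1.5 s,
# landing it either on the beat or on a syncopated off-beat.
print(snap_to_half_beat(1.37))   # -> 1.5
```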

  • That means syllables will sometimes fall on the beat, just like this.

  • But even if it fell off the beat, we'd get cool syncopation, just like this, which is more groovy.

  • Does this work?

  • Actually, no.

  • Because it turns out, detecting the beginning of syllables from waveforms is not so easy.

  • Some sentences, like "come at me, bro," are super clear.

  • But others, like "hallelujah, our auroras are real," are not so clear.

  • And I definitely don't want to have to use phoneme extraction.

  • It's too cumbersome.

  • So here's what I actually did.

  • I cut corners.

  • Listening to lots of real rap, I realized the most important syllables to focus on were the first and last syllables of each line, since they anchor everything in place.

  • The middle syllables can fall haphazardly, and the listener's brain will hopefully find some pattern in there to cling to.

  • Fortunately, human brains are pretty good at finding patterns where there aren't any.

  • So, to find where the first syllable started, I analyzed where the audio amplitude first surpassed 0.2.

  • And for the last syllable, I found when the audio amplitude last surpassed 0.2, and literally subtracted a fifth of a second from it.
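
A rough version of that landmark detection, assuming the audio is already loaded as a mono array of samples in the range -1 to 1 (the function name and the loading step are assumptions):

```python
import numpy as np

def find_landmarks(samples: np.ndarray, sample_rate: int,
                   threshold: float = 0.2, tail_offset: float = 0.2):
    """Crude first/last-syllable landmarks from amplitude alone (sketch only).

    Returns (start, end) in seconds: the first time the amplitude exceeds
    `threshold`, and the last such time minus `tail_offset` seconds
    (the "subtract a fifth of a second" hack described above).
    """
    loud = np.flatnonzero(np.abs(samples) > threshold)
    if loud.size == 0:
        return None
    start = loud[0] / sample_rate
    end = max(start, loud[-1] / sample_rate - tail_offset)
    return start, end
```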

  • That's super janky, and it doesn't account for these factors, but it worked in general.

  • From here, I snapped those two landmarks to the nearest beat, time dilating or contracting as necessary.

  • Now, if you squish audio the rudimentary way, you also affect its pitch, which I don't want.

  • So, I instead used the phase vocoder of the Python library AudioTSM to edit timing without affecting pitch.
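
For reference, time-stretching with AudioTSM's phase vocoder looks roughly like this; the file names and the speed ratio here are placeholders, with the real ratio coming from snapping the landmarks to the beat:

```python
from audiotsm import phasevocoder
from audiotsm.io.wav import WavReader, WavWriter

# Stretch or squish one spoken line without changing its pitch.
# speed > 1 plays it faster (shorter), speed < 1 slower (longer);
# the file names and the speed value are placeholders for this sketch.
speed = 1.25

with WavReader("line_raw.wav") as reader:
    with WavWriter("line_snapped.wav", reader.channels, reader.samplerate) as writer:
        phasevocoder(reader.channels, speed=speed).run(reader, writer)
```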

  • Now, instead of this, Just tell me I'm fuckin' right Weak, stathered, I please Mobs help All in line in them stars Holla We get this.

  • Just tell me I'm fuckin' right Weak, stathered, I please Mobs help All in line in them stars Holla That's pretty promising.

  • We're almost at my final algorithm, but there's one final fix.

  • Big downbeats, which occur every 16 normal beats, are especially important.

  • Using our current method, Google's TTS will just run through them like this.

  • Not only is that clunky, it's just plain rude.

  • So, I added a rule that checks if the next line in the queue will otherwise run through the big downbeat, and if so, it will instead wait for that big downbeat to start before speaking.

  • This is better, but we've also created awkward silences.

  • So, to fix that, I introduced a second speaker.

  • It's me.

  • Google text-to-speech, pitched down 30%.

  • When speaker 1 encounters an awkward silence, speaker 2 will fill in by echoing the last thing speaker 1 said, and vice versa.
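
Pieced together from that description, the scheduling rule might look something like this sketch; everything here (data layout, names, units) is an assumption rather than the actual implementation:

```python
# Sketch of the downbeat rule plus the echo fill, pieced together from the
# description above; the data layout and names are assumptions, not the real code.
BIG_DOWNBEAT = 16        # a "big downbeat" lands every 16 beats

def schedule(lines):
    """lines: list of (text, duration_in_beats) -> list of (voice, text, start_beat)."""
    events, t, prev_text = [], 0.0, None
    for text, dur in lines:
        next_big = (int(t // BIG_DOWNBEAT) + 1) * BIG_DOWNBEAT
        if t + dur > next_big:
            # This line would run through a big downbeat: wait for the downbeat,
            # and let the second (pitched-down) voice echo the previous line
            # so the gap isn't silent.
            if prev_text is not None:
                events.append(("echo voice", prev_text, t))
            t = float(next_big)
        events.append(("main voice", text, t))
        prev_text, t = text, t + dur
    return events
```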

  • What we get from this is much more natural.

  • Alright, so that's pretty much all I did for rhythm alignment, and it vastly improves the flow of our raps.

  • I think it's time for you to hear a full-blown song that this algorithm generated.

  • Are you ready to experience Computery's first single?

  • I know I sure am.

  • I'm in the later I can.

  • I want to play the battle, so I don't know she won it this, and I don't fuck with X Rez I mog OS.

  • It's been all out on this booty beggle bove.

  • Chorus, Eminem.

  • Clean, Busta Rhymes.

  • Gangsta, bitch, come cock wild.

  • Stop the movie, F5.

  • Dookie to that.

  • Four asterisks.

  • That's four asterisks.

  • Kept the naked party dead right.

  • Remember why I need them in the eyes.

  • Spreadin' with the same other ninja. 137 wave is on the glinty.

  • Shoot out to charge help up your crowd.

  • Out to charge help up your crowd.

  • That ain't foul.

  • What the fuck?

  • You're getting cheap.

  • Chorus, Busta Rhymes.

  • They say stick around, and today's a season.

  • Busta Rhymes.

  • Hip-hop traded large digidel.

  • Traded large digidel.

  • Brought my site down with a record.

  • I can't be back to the motherfuckin' beggle.

  • Bitch, and when I help you, shit in this at school.

  • So beside that, with the universe in the baseball.

  • Universe in the baseball.

  • Cuz I don't go to the rag.

  • At all when I russet.

  • It ain't no rowdy body touch like I supposed to work it.

  • Pimpy, but I study your tech just to make no slow.

  • Snoop Dogg.

  • I'm a light, don't post rolls, but a ton of meat.

  • So when you sell the motherfuckin' body.

  • Chorus, Bizwerky.

  • You tell me what you feelin', but I'm tryin' to fight cuz I'm the city.

  • When I grip head at you is my fate.

  • I got slick in the cocks.

  • You're girls all up and down.

  • Stay body to be a cock, beat the mawfuckin' you.

  • Mawfuckin' you.

  • Weed the ball time with my rhyme faster kicks.

  • It's all a da-sa-da.

  • Give one rinit, just stay right.

  • Armorn up, peep boy.

  • Remember the famine, carry the pain.

  • I'm scared of B-W-W-W when you get well quick to be.

  • It's my brother until it's and I gone.

  • I'll leave the handle back about a ninja home.

  • A ninja home.

  • So we went the two at Nuva Place.

  • Question mark.

  • Question mark.

  • And I'm the doe you know it really knows.

  • G-A-L-Y-S-H-O-G-I-G.

  • We put with the profack they quit.

  • Spit the bang vocals.

  • Tie the e-skate and MC let the money have heat.

  • Up in the court, yup, with the motherfuckin' lips.

  • Quote.
