Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today you are going to see virtual worlds like you've never seen before!

This is done with something that we call NeRFs, short for Neural Radiance Fields. So, what are NeRFs? Essentially, we grab a smartphone, a set of images goes in, and reality comes out. When there is a gap between the images, it can synthesize all this information, and it almost looks like reality. Absolute magic.
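To make the idea a little more concrete, here is a minimal, illustrative sketch of the volume rendering that NeRF-style methods build on. It is not the code of any of the papers shown here: the `toy_field` function below is a hand-made stand-in for the trained neural network, and the function names and parameters are my own choices for the example.

```python
# A minimal sketch of the core NeRF idea, not any paper's actual code:
# a learned field maps a 3D position to a density and a color, and an image
# is formed by compositing those values along camera rays.
import numpy as np

def toy_field(points):
    """Stand-in for the trained network: returns (density, rgb) per point.
    Here it is just a hand-made fuzzy sphere so the sketch runs on its own."""
    dist = np.linalg.norm(points - np.array([0.0, 0.0, 2.0]), axis=-1)
    density = np.where(dist < 0.5, 8.0, 0.0)                    # opaque-ish inside the sphere
    rgb = np.tile(np.array([0.8, 0.3, 0.2]), (len(points), 1))  # constant reddish color
    return density, rgb

def render_ray(origin, direction, field, near=0.0, far=4.0, n_samples=128):
    """Volume rendering along one ray: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    t = np.linspace(near, far, n_samples)
    delta = np.append(np.diff(t), 1e10)                   # spacing between samples
    points = origin + t[:, None] * direction              # sample positions along the ray
    density, rgb = field(points)
    alpha = 1.0 - np.exp(-density * delta)                # opacity of each segment
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # light surviving up to each sample
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)           # final pixel color

# One ray through the scene center hits the sphere and returns its color.
color = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), toy_field)
print(color)
```

In a real NeRF, the stand-in field is a neural network that has to be evaluated at many sample points along many rays for every single image, and that is where most of the computation time goes.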
Now, many of these are neural network-based techniques and take quite a while to compute. However, later, NVIDIA published a technique by the name of Instant NeRF. As the name says, this trains incredibly quickly and is one of the best techniques in its class. And today, these can be produced from a smartphone camera and can be generated in a matter of seconds.

But, there is a problem. Do you see the problem? Well, let me ask you another question - how do you spot a Fellow Scholar who watches Two Minute Papers? Well, of course, they are always looking at thin structures. Why? Well, look! So, Instant NeRFs are super fast, but they have trouble with thin structures. And now, hold on to your papers, Fellow Scholars, and let's look at how this new paper deals with the same problem. Oh my, that is outstanding!

But wait, is it only better on thin structures, or is it generally better than this previous technique? Well, let's see. Yes, thin structures are still present, but the new one is significantly better in terms of quality. Wow. And this is just one year and one more paper down the line. Very impressive.

Also, here is a comparison against Mip-NeRF. This one is really good, so I bet the new one isn't better, especially since this previous one ran for about 20 times longer. Let's see. Is that really possible? The new one is still better. 20 times faster, and we don't even have to pay for it by giving up quality. This is still better.

With this, we can now just grab a smartphone, scan a real place, and make a virtual world out of it. This can be used for video games, videoconferencing, animation movies, you name it. Now, not even this work is perfect: you see, specular reflections like this display are not that faithful to reality, and if we look at severely undersampled regions, this can happen.

However, just think about the fact that all this came just one more paper down the line. Now, combine this with this amazing light transport paper that can take photos of real-world objects and create a super advanced digital material model with just the right physical parameters to replicate it. So, NeRFs can help with the geometry, and light transport can help with the materials. And a potential combination of the two could create even more incredible virtual worlds and games for us in the near future. What a time to be alive!

Thanks for watching and for your generous support, and I'll see you next time!
Video title: Google's New AI: Next-Level Virtual Worlds!