David Hoffman | Jan 30, 2025 22:28
Okay, 3 out of the 4 @BanklessHQ podcasts this week talked about DeepSeek's R1 model.

After all the interviews, here are my summarized thoughts:

1) At worst, the introduction of efficient inference models like R1 is a small L for Nvidia, because inference costs will approach 0 faster than previously expected.

2) At best, it's actually a solid W for Nvidia, because cheap inference will induce demand for more inference ("Jevons Paradox").

3) Open-source AI takes the biggest W, since it shows that proprietary models will quickly be copied and ultimately become open-source commodities.

4) Consumers and the economy also take a very large W, because:
- The AI arms race increases in speed and funding
- All AI products become cheaper, quicker, and more applicable

5) Apple takes a W, because R1 makes locally-run AI inference more feasible.

6) The accelerated commoditization of models shows that proprietary data is the most valuable edge in the AI market. Data is where the line will be drawn between the U.S. and China. Compute is a fungible commodity, but having data that your competitor doesn't is priceless.