.
Haha. Zuck said equivalent compute; he didn't say it's the same price. That's your assumption.
And a possibility is a myth?
Interesting that you didn’t ask your question to Siri. Didn’t you say Apple is super efficient in AI research?
Zuck in his own words.
Hopefully Meta Glasses will one day topple evil Apple’s iPhone.
https://www.axios.com/2024/01/17/alex-karp-davos-ai-us-advantage
Many of the comments are negative on PLTR.
.
I see you have a reading comprehension problem. Your statement…
…implied that Apple spends too little and is ineffective. So I offered you two extra possibilities in my post: a. It spends less because it is super efficient. b. It spends less because it has fewer data centers (DCs).
…
Instead of being imprisoned by Apple, you prefer to be imprisoned by META? Why not MSFT? How about TSLA? Samsung, then? Whichever you choose, you're a prisoner.
…
Huh? I wasn't using voice. Given that I have mentioned many times here that I hate using voice and prefer typing… that question is trolling.
You like to offer lots of possibilities without reasons or data to back them up. Just idle speculation.
Number 2 is a joke. If Apple is serious about AI, why would it choose to have fewer data centers? Where would it train its crap? By renting space from Amazon?
Number 1 is just as bad. I hear the same defense from the Elon cult: why does Tesla spend so little on R&D? Oh, it's because it's super efficient. I guess all fanboys think the same.
It’s a cult. Same style as Tesla. They even attract the same following.
.
Yours isn't? I used the same data as you to come up with those possibilities.
Your statement is a typical false dichotomy.
Firstly, I have no idea about the TSLA/EM cult's views. As I said, it is…
based on whatever facts are presented here. No point going in circles unless you have more facts/evidence to present. Btw, I merely pointed out to you that, based on the content you presented, your deduction is not the only possibility. Take note…
I used the same data as you to come up with those possibilities.
Companies absolutely do NOT have the underlying systems in place to replace customer support with AI. They’d have to do massive data projects first to prepare for it.
How would they even know that? I don’t think nVidia sells servers. They are bought from Dell, Super Micro, etc.
Any shipments less than 15k are not shown.
Can buy DC services.
The chart shows shipments, not orders.
Almost every company listed is a software company.
Lead time is also much less than 52 weeks, at least for quantities in the hundreds. The long-lead item is now the nVidia chip for the 400Gb NIC, not the H100. Each H100 server takes 8 NICs if a company wants to network them together for multi-node training, which is required for larger models. The lead time for the networking gear is even longer. Model serving can be done over a 10Gb NIC, but I think most people are building with 400Gb anyway. That way the machines can be shared: most applications don't need the same model-serving capacity 24/7, so they can use off-peak capacity for training.
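To put rough numbers on that comment, here is a minimal back-of-the-envelope sketch in Python. The figures are my own illustrative assumptions (8 GPUs and 8 x 400Gb NICs per H100 server, a common rail-optimized layout), not anything stated in the thread or the chart being discussed:

# Rough count of networking parts needed to link H100 servers for multi-node training.
# Assumption (illustrative only): 8 GPUs per server, one 400Gb NIC per GPU on the fabric.
GPUS_PER_SERVER = 8
NICS_PER_SERVER = 8

def cluster_networking(num_servers: int) -> dict:
    """Return GPU count, 400Gb NIC count, and aggregate fabric bandwidth."""
    gpus = num_servers * GPUS_PER_SERVER
    nics = num_servers * NICS_PER_SERVER
    fabric_tbps = nics * 400 / 1000  # each NIC is a 400Gb/s port
    return {"gpus": gpus, "nics_400Gb": nics, "fabric_Tbps": fabric_tbps}

# An order "in the hundreds" of GPUs: 32 servers = 256 GPUs and 256 NICs,
# which is why the NIC, not the H100 itself, becomes the long-lead item.
print(cluster_networking(32))  # {'gpus': 256, 'nics_400Gb': 256, 'fabric_Tbps': 102.4}

The same machines can then serve models during peak hours and run training jobs off-peak, as the comment suggests.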
I've learned way more about GPUs, NICs, model training, model serving, networking gear, and capacity planning than I ever cared to know.
.
So NVDA is a hold for 10 years? Will AMD yield a better return?
Take note: as investors, we care about returns, not which one is the better company.
Disclosure: I am a great believer in concentrated bets, so only one chip stock: NVDA.
I’d wait for AMD to start shipping products. Then people can do testing to verify.
That is a great 10-minute TED talk by Jim Fan showing how NVDA is so far ahead of everyone.
It is about using ChatGPT to create a “foundation agent”.
NVDA is not a hardware company by any measure…
I have held my NVDA for the very long term for a simple fact: it is the only company with an unlimited supply of GPUs…
Right. NVDA has more people working on software than hardware.