AMD’s flagship AI GPUs, the Instinct series, are being sold to Microsoft at a significant discount, offering a competitive alternative to Nvidia’s pricier H100. According to Tom’s Hardware, Nvidia’s H100 AI GPUs cost up to four times more than AMD’s competing MI300X, with prices peaking beyond $40,000.
Despite the price advantage, AMD’s market strategy is not expected to significantly impact Nvidia’s stronghold in the AI GPU market. This is due to Nvidia’s established CUDA software stack, which has been optimized for a wide range of AI applications and workloads, resulting in overwhelming demand for its GPUs.
I wonder how we can effectively exclude AI-generated data from the training process if most humans stop producing original works. If a neural network starts to train on AI-generated data, the whole system would resemble iterated lossy compression, where increasing artifacts are inevitable.
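The lossy-compression analogy can be made concrete with a toy simulation (my own illustration, not from the comment): each “generation” fits a simple model to samples drawn from the previous generation’s model, so finite-sample estimation error compounds over iterations, much like artifacts accumulating under repeated re-encoding.

```python
import random
import statistics

# Toy sketch of recursive training on model-generated data: start from a
# "human" data distribution, then repeatedly (1) sample from the current
# model and (2) refit the model to those samples. The fitted parameters
# drift away from the true ones as estimation error feeds back on itself.
random.seed(0)

mu, sigma = 0.0, 1.0  # the original "human" data distribution
n = 200               # samples available per generation

for gen in range(10):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.mean(samples)    # refit on model-generated data
    sigma = statistics.stdev(samples)
    print(f"gen {gen}: mu={mu:+.3f}, sigma={sigma:.3f}")
```

With only 200 samples per generation, the parameters wander further from (0, 1) each iteration; larger sample sizes slow the drift but do not eliminate it.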
Sora’s Content-Generation Capabilities Could Have Important Applications In Robotics
Frank Downing & Tasha Keeney
Director of Research, Next Gen Internet & Director of Investment Analysis
OpenAI’s new generative AI video creation model, Sora, generates content with quality and detail that users need to see to believe. According to its technical report, OpenAI combined diffusion model technology—DALL-E style models that use text prompts to generate images and video—with the transformer architecture that powers ChatGPT. Notably, Sora trained on videos of varying durations, resolutions, and sizes, unlike prior text-to-video models that trained on a standardized resolution and aspect ratio. OpenAI’s success suggests that the diversity of Sora’s training data has enabled it to frame and compose scenes more effectively and to accommodate a more diverse array of input and output modalities than other models. By leveraging its expertise in both diffusion and transformer models, and training on vast amounts of raw video and image data, OpenAI appears to have raised the state of the art to a new level.
Given a video, Sora can extend the scene ahead or backward in time, potentially predicting what did or will happen before or after any scene—a capability that could help predict the movements of pedestrians and vehicles in autonomous driving applications. In short, Sora appears to demonstrate simulation capabilities that could have broader use cases, specifically in robotics.
Although never explicitly trained on physics, Sora generates videos that accurately visualize the movement of people and objects, even when they are occluded or out of frame, which could prove useful in simulation-based training for robots. While its understanding of the physical world has yet to be perfected, Sora seems to be a leap forward for multimodal models, which have already proven useful in autonomous driving.
That’s generally the problem of non-specialists. They don’t really have deep knowledge in the domains they are yapping about. Anyway, just a data point to be aware of.
What kind of AI company doesn’t spend any money on Cap-Ex? $5M for the entire quarter? Really?
Palantir Technologies’s Capital Expenditure for the three months ended in Dec. 2023 was $-4.86 Mil. Its Revenue for the three months ended in Dec. 2023 was $608.35 Mil.
Hence, Palantir Technologies’s Capex-to-Revenue for the three months ended in Dec. 2023 was 0.01.
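The quoted ratio checks out, though the underlying number rounds up to 0.01; a quick sanity check using the figures above:

```python
# Numbers from the quoted filing summary (in $ millions); capex is
# reported as -4.86, i.e. a cash outflow, so we use its magnitude.
capex = 4.86
revenue = 608.35

ratio = capex / revenue
print(f"{ratio:.4f}")  # ≈ 0.0080, which rounds to the quoted 0.01
```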
Table comparing the PLTR capex-to-revenue ratio among peers: Cloudflare spends 10x more. Not shown in the table, but Meta spends 20%. These seem like more legitimate AI companies to me.