Is ARM ready for server dominance?

Very interesting article covering the very recent (just two days ago) announcement of Amazon’s brand new Graviton-2 server chips. Performance is on par with x86 chips, but Amazon is offering these EC2 instances at a 20% discount compared with the x86 ones.

This could have very deep implications for many companies. First off, with the new cheaper compute instances, will AWS pull ahead of its competitors MSFT and GOOG? Developing custom silicon takes a huge amount of time and expertise, and both MSFT and GOOG are far behind on server chip development.

Second, what does it mean for INTC and AMD, especially INTC?

The article mentions a very young startup, Nuvia, which came out of stealth mode only last month:

Silicon Valley is now seeing a new wave of startups doing silicon dev. Coming full circle.

2 Likes

I thought this was a settled question. But it does not hurt to see it reopened.

Reading that statement alongside the founders’ mobile chip backgrounds at Google and Apple, it seems evident that the extreme energy-to-performance constraints of mobile might find some use in the data center, particularly given data center owners’ heightened concerns about power consumption and climate change.

Sure they didn’t infringe on Apple patents?

Please enlighten.

Note that the startup is based in Santa Clara. Not Austin. :smiling_imp:

No worries, it takes time. Skate to where the puck is going to be. Two reasons why SF is preferred over Austin for startups: talent and $$$. Austin has sufficient talent; what it lacks is $$$. Soon…

Top three financial centers in the USA: NYC, SF and Chicago. But…
Welcome To Y’all Street: The Cities Challenging New York For Financial Supremacy

Finance, increasingly conducted electronically, is no longer tethered to its traditional centers. Large global financial companies like UBS, Deutsche Bank, Morgan Stanley and Goldman Sachs are all committed to relocating operations to less expensive locations.

So it’s not just tech companies that are relocating; financial services companies are too.

In the U.S., this has benefited the South the most. This year’s list of the metro areas that are increasing employment in financial services at the fastest rate is led by first-place Nashville-Davidson-Murfreesboro-Franklin, Tenn., No. 2 Dallas-Plano-Irving, Texas, No. 4 Austin-Round Rock, Texas, and No. 5 Charlotte-Concord-Gastonia N.C.-S.C.

Austin :star_struck:

Austin and the South get the low-margin, low-tech back-office jobs. In short, they get the cost centers. The people who generate revenue are still based in NYC and surrounding areas like Connecticut.

Inflection points are hard to call. Nine out of ten times the call comes out wrong. You are literally saying this time is different. We will see. I will bet on the side with 90% odds.

Connecticut? AFAIK, house prices there have hardly moved since 2009. I think they actually dropped! So your hypothesis that startups would eventually lead to galloping house prices is flawed.

You have been retired for too long and are not keeping up with developments. Call up your ex-Intel colleagues who are still working: R&D has started to trickle there. It takes time, so you need to go there before the floodgates open.

So you are admitting talent & $$$ are moving to Austin. A good start.

You are fond of putting words in my mouth. Recall what I have told you many times. My approach is:

Figure out the possibilities
Design a strategy that can win (at least not lose) in all possibilities

So I freeze investment in BARE and diversify to Austin, while still holding on to my SFH rentals. Get it?

Status of my strategy: pretty good so far. From 2013 to now, NW Austin/suburbs shot up 60% vs BARE’s 47%, using Zillow’s estimates. YTD return: Austin +7%, BARE -10%. Why 2013? That’s when the diversification started.

Btw, when I talk about Austin, I always refer to NW Austin and the northern suburbs, whereas articles refer to Austin “proper”, i.e. downtown and the immediately surrounding suburbs, where I don’t have any investment. Just in case you didn’t notice.

Below is a map of semiconductor companies in Austin:

Seeking Alpha article on Amazon’s internal silicon.

Intel’s And Nvidia’s Margin Is Amazon’s Opportunity - Amazon.com, Inc. (NASDAQ_AMZN) _ Seeking Alpha.pdf (43.0 KB)

ARM is based on RISC (reduced instruction set computer) and x86 is based on CISC (complex instruction set computer). What this means in simple terms is that one CISC instruction can be seen as the sum of several RISC instructions, like one step of a giraffe being equivalent to several steps of a deer. So what x86 machines can do efficiently will take longer to be done by an ARM machine. Several attempts have been made in the past to develop ARM-based data-center chips, by Broadcom and Qualcomm, but they went nowhere. ARM is less power hungry for the reason described above and has a niche market. Qualcomm is trying to build an ARM-based processor to run the Windows OS for less demanding situations. The industry has spent years on this concept and now Amazon is trying. That is what I said: maybe Amazon will be able to find a niche where this can be profitable. It is like discussing which one is better, a car or a van. It all depends on how you want to use it.
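To make the giraffe-vs-deer analogy concrete, here is a toy sketch; the instruction sequences in the comments are simplified assumptions for illustration, not actual compiler output:

```python
# Toy illustration of the "one CISC instruction = several RISC instructions" idea.
# The instruction sequences below are simplified assumptions, not real compiler output.

# CISC-style: a single instruction can read memory, add, and write back.
cisc_sequence = [
    "add [counter], 1",
]

# RISC-style: the same work split into simple load / add / store steps.
risc_sequence = [
    "ldr r0, [counter]",   # load value from memory into a register
    "add r0, r0, 1",       # add one in the register
    "str r0, [counter]",   # store the result back to memory
]

print("CISC-style instruction count:", len(cisc_sequence))
print("RISC-style instruction count:", len(risc_sequence))
```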

1 Like

@manch ARM started in a barn outside Cambridge in England. This one is not a purely Silicon Valley company.

Long ago, servers ran on RISC chips such as Sun SPARC and SGI’s MIPS. At that time, CISC Intel chips were too hot and inefficient, only suitable for low-end servers. Now why can’t RISC do it?

CISC vs RISC was settled long ago. RISC won. With Intel’s x86 chips, the first thing the chip does is translate the CISC x86 instructions into internal RISC ops. The internal organization is entirely a RISC machine.

What Amazon and others are doing has nothing to do with RISC vs CISC. It’s the big secular trend of Moore’s law slowing down. Twenty years ago, if your code was slow, you could just wait a year or two and Intel would release a new chip that was 50% faster. You got the performance boost for free. Nowadays you can’t do that anymore. If you want an edge over your competitors, you have to design custom silicon to speed up your code and your specific workload.

The slowdown of Moore’s Law has huge implications for years to come.

BTW, now that everyone is designing silicon, we see EDA software companies like Synopsys and Cadence doing brisk business. That’s why the CDNS stock price soared. I wonder if double majoring in EE and CS will come back in vogue, and whether hardware people’s salaries will finally go up a bit faster. In the last 20 years SW people have been making much more than HW people.

1 Like

That is a good point. I do not know. The result is before you: neither Sun nor Silicon Graphics is with us anymore. There is always a possibility of someone else being able to succeed with ARM.

x86 is not Intel. It is from AMD.

Fact-check your history. Maybe you were talking about the 64-bit extension.

Correct. Intel wasted tons of money on IA-64 (Itanium) only to grudgingly accept AMD64.

What Amazon seems to be trying is to leverage the ARM design to come up with its own chips. That’s all. It is trying to keep costs low for less demanding applications.

All instructions (whether CISC or RISC) get translated to machine language because that is the only language silicon understands. RISC has not shown any advantage; that is why it is not getting any traction.

Added later: For the sake of honesty, I should say that I am not a computer architect, so take my lines above with a pinch of salt.

Translated? There is no translation once you’re at the RISC vs CISC level.

The ultimate low level… machine language is different on a RISC CPU. I’ve written thousands of lines of machine language code, but most of it was for 6502 (Apple ][) and Z80/8086 (DOS) processors. On those processors you had a maximum of 256 possible instructions. With the 16- and later 32-bit CPUs, the instruction sets grew bigger and more complex. For example, they included multiplication and division in machine language.

I’ve never written machine-level code for a RISC CPU, but my understanding from 35 years ago was that their instruction set was reduced. Probably no multiplication.

I vaguely remember that my instruction list had, for each instruction, the opcode (1 byte) and the number of machine cycles or tact units the instruction would take. A machine cycle, for example, could be 2, 3 or 4 tact units. Then you’d look at different ways to move the data around and see which one uses fewer microseconds. (The 6502 “ran” at 1 MHz.)

I did that in the early 80s, and I may have the terminology wrong, especially “tact units”: in German we call it “Taktzyklus”, and the usual English term is “clock cycle”. It’s basically what the 6502 would do 1 million times per second, and the Z80 4 million times. It was not 4x as fast though!

I think the key difference with RISC was that every instruction uses only 1 cycle (or a similarly low number)… reduced complexity, but more instructions to execute to do the same thing. A compiler would be better than a human at targeting a RISC instruction set.
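A rough worked example of that kind of cycle counting at 1 MHz; the cycle counts below are made-up illustrative values, not taken from any real datasheet:

```python
# Toy cycle-counting sketch. Cycle counts are illustrative assumptions,
# not values from a real 6502/Z80 datasheet.

CLOCK_HZ = 1_000_000  # a 1 MHz clock, like the 6502 mentioned above

def elapsed_us(cycle_counts):
    """Total time in microseconds for a sequence of instructions,
    given how many clock cycles each one takes."""
    return sum(cycle_counts) / CLOCK_HZ * 1_000_000

# One complex instruction taking several cycles vs. three simple
# one-cycle instructions doing the same work.
complex_sequence = [6]
simple_sequence = [1, 1, 1]

print(f"Complex sequence: {elapsed_us(complex_sequence):.0f} us")  # 6 us at 1 MHz
print(f"Simple sequence:  {elapsed_us(simple_sequence):.0f} us")   # 3 us at 1 MHz
```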

Stuff from another life.

1 Like

Since the advent of EDA tools from Cadence/Synopsys (1986), traditional hardware design has been transformed for good. What was once tedious manual work is now done by software. Much of hardware design is now the ability to write good scripts and HDL code, with a good understanding of the underlying engineering (logic design, timing, and physical design). It takes only about 10 years to mature in this field these days. The result of all this is that hardware design grew by leaps and bounds and attracted a lot of new designs and talent in the process, resulting in a drop in the average salary of hardware engineers worldwide. Is that going to change? It does not look like it. But I am baffled too as to why EDA companies are selling at such a high premium.
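For a flavor of the scripting side, here is a minimal sketch; the report format and file name are hypothetical stand-ins, since every tool’s output differs:

```python
# Minimal sketch of the kind of glue script used around EDA flows:
# scan a (hypothetical) timing report and print the worst slack found.
# The file name and line format are made-up assumptions for illustration.
import re

SLACK_RE = re.compile(r"slack\s*[:=]?\s*(-?\d+(?:\.\d+)?)", re.IGNORECASE)

def worst_slack(report_path):
    """Return the most negative slack value found in the report, or None."""
    worst = None
    with open(report_path) as report:
        for line in report:
            match = SLACK_RE.search(line)
            if match:
                slack = float(match.group(1))
                if worst is None or slack < worst:
                    worst = slack
    return worst

if __name__ == "__main__":
    print("Worst slack:", worst_slack("timing_report.txt"))
```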