Is ARM ready for server dominance?

The biggest difference between CISC and RISC is that every RISC instruction is designed to take one clock cycle. That has enormous implications for the architecture: it is what makes things like pipelining possible.

No, it’s not just translation. It goes deeper than that. Why do you think the two RISC guys, Hennessy and Patterson, won the Turing Award?

Thanks. Like I said, I do not have hands-on experience with RISC/CISC or machine-code programming, beyond studying these topics in engineering courses. That said, the key point, which you seem to be making too, is that RISC requires more instructions to execute a task than CISC does. Whether that remains a handicap for ARM for a long time to come remains to be seen.

I do not understand the logic that an award to the RISC guys makes RISC superior in some applications. I can name a few people who won the Nobel Peace Prize but whose contribution to world peace had just the opposite effect.

For x86…
C language > CISC (x86) instructions > RISC-like micro-ops > microcode
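
To make that chain concrete, here is a small sketch. The x86-64 instruction shown is what a typical compiler emits for this C; the micro-op split is purely illustrative, since real decodings are micro-architecture specific and not publicly documented.

```c
/* C source: one statement that compiles to one CISC instruction. */
void bump(long *counter) {
    *counter += 1;
}

/*
 * x86-64 (CISC) instruction a typical compiler emits:
 *     addq $1, (%rdi)         ; read-modify-write memory in one instruction
 *
 * Illustrative RISC-like micro-ops the front end might split it into:
 *     load  tmp   <- [rdi]    ; read the counter from memory
 *     add   tmp   <- tmp + 1  ; do the arithmetic
 *     store [rdi] <- tmp      ; write the result back
 */
```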

AFAIK, RISC is ideal for servers and high-performance computing.

INTC won over the other RISC vendors not because of better CPU architecture but because of its outstanding manufacturing operations.

Nvidia will support Arm hardware for high-performance computing

So the era of x86 is ending.

Arm-based server
Arm-based Macs

We already have Arm-based iPhones, iPads, Apple TVs and Apple Watches.

This is a good article on CISC vs RISC.

After reading this article, one thing is clear: if you want hardware to do most of the work, you go with CISC; if you want software to do most of the work, you go with RISC. Software and hardware are logically equivalent. You can have silicon do things for you, or you can have software do things for you. I remember using software modems and software graphics long ago, before we started getting dedicated cards (silicon for the modem or graphics). Silicon (hardware) is fast and efficient; that is why it wins over software for heavy-duty work. To walk one mile, an obese person and an athlete will not differ much in speed and timing, but if you have to walk ten miles with a 100-pound load, the obese person will have a hard time. So, as I said before, it all depends upon the workload and the application. Clearly, RISC machines have become extinct for heavy-duty work, and RISC survives only in miniature applications like phones and other small devices. But we will see what the future throws at us. Most AI activity today is done in software, but Google and Facebook can have silicon (chips) perform these functions, because silicon is fast.

That is what I learned in school, but… but… but… software is so much more flexible: you can release a buggy alpha version and build the plane as you fly it, so to speak. Making modifications to hardware is a killer. So unless the functionality is mature and unlikely to change, you want it in software.

I conceptualize the situation differently; see my explanation above. Using graphics as an example: after we use it for a while, we learn how to factor the implementation into invariant functionality and changing services. Implement the invariant functionality in hardware, and implement the changing services in software.

Can’t believe we are debating CISC vs RISC in 2019. There is no horse more dead than CISC. With RISC, because each instruction takes one cycle, you have more tricks available to speed up the chip. Go dig out your copy of Hennessy and Patterson to learn about pipelining, for example.

Computer architecture is not rooted in the abstract. Each generation makes different tradeoffs because of advances in technology. When CISC came out in the ’60s and ’70s, memory was extremely expensive. It was less so in the ’80s when RISC came out, and it is downright dirt cheap today.

RISC has not produced a successful heavy-duty machine in the last 20 years. When RISC returns to the market successfully, we will surely celebrate. I agree that in the future some technological advancement could suddenly make the “RISC way” more attractive. Till then it is all “vaporware.”

Wow.

Actually, all modern x86 processors, whether from Intel or AMD, are internally RISC. The only thing CISC about Intel chips is the x86 instruction set itself, which is kept intact for backward compatibility so that code compiled for the 386 still runs today without change. The first thing the processor does is break these CISC instructions apart into uOps and run them on its modern RISC core. Intel has been doing that ever since the P6 architecture in the Pentium Pro.

P6 processors dynamically translate IA-32 instructions into sequences of buffered RISC-like micro-operations, and then analyze and reorder the micro-operations in order to detect parallelizable operations that may be issued to more than one execution unit at once. The Pentium Pro was not the first x86 chip to use this technique — the NexGen Nx586, introduced in 1994, also used it — but it was the first Intel x86 chip to do so.

A big part of speeding things up is exploiting parallelism, and that is very hard to do in CISC, where instructions can take different numbers of clock cycles.
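
As a sketch of what “exploiting parallelism” means here: the first function below contains two independent additions, while the second is a serial dependency chain. The dual-issue claim in the comments is an assumption about a generic superscalar, out-of-order core, not a statement about any specific chip.

```c
/* Independent operations: an out-of-order core can issue the two
   additions to two execution units in the same cycle, because neither
   result depends on the other. */
long independent(long x, long y) {
    long a = x + 1;   /* does not depend on b */
    long b = y + 1;   /* does not depend on a */
    return a + b;
}

/* A dependency chain: each step needs the previous result, so the
   additions must execute serially no matter how many units exist. */
long dependent(long x) {
    long c = x + 1;
    long d = c + 1;   /* must wait for c */
    return d + 1;     /* must wait for d */
}
```

Finding that kind of parallelism in the instruction stream is far easier when the operations are simple and uniform; variable-length, variable-latency CISC instructions make the scheduling hardware much more complicated.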

What is the point you are trying to make by digging out notes on 25-year-old technology, which was good at the time? I agree. Technology keeps moving forward as more efficient and innovative ways of solving the problem are found.

Regarding pipelining and parallelism in hardware design: they are not free, and they are not good for everything. Both come at a cost. Pipelining adds delay (latency) to the data path; it is like adding traffic signals on a road to streamline flow. Similarly, parallelism requires extra logic to synchronize the activities performed in parallel; think of it as a metering light at a freeway entrance, or as data arriving in parallel at a narrow gate. It still causes delay. It is like arriving in San Francisco in one hour and then taking another hour to drive within the city to reach your destination. Both pipelining and parallelism can slow down execution. That is why, though good, they are not always the most efficient way to solve a problem.
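
To put rough numbers on that tradeoff, here is a back-of-the-envelope sketch; all figures are invented for illustration. Pipelining raises throughput, but the per-stage register overhead increases the latency of any single instruction, which is exactly the cost described above.

```c
/* Back-of-the-envelope pipeline tradeoff. All numbers are made up
   for illustration; real values depend on the process and circuit. */
#include <stdio.h>

int main(void) {
    double logic_ns = 5.0;  /* total combinational logic delay        */
    double reg_ns   = 0.2;  /* overhead of each pipeline register     */
    int    stages   = 5;    /* pipeline depth                         */

    /* Unpipelined: one instruction finishes every logic_ns. */
    double unpiped_cycle = logic_ns;

    /* Pipelined: cycle time is one stage's logic plus register
       overhead; a single instruction now needs stages * cycle
       to travel the whole pipe. */
    double piped_cycle   = logic_ns / stages + reg_ns;
    double piped_latency = piped_cycle * stages;

    printf("unpipelined: latency %.2f ns, throughput %.2f ops/ns\n",
           unpiped_cycle, 1.0 / unpiped_cycle);
    printf("pipelined:   latency %.2f ns, throughput %.2f ops/ns\n",
           piped_latency, 1.0 / piped_cycle);
    return 0;
}
```

With these made-up numbers, throughput roughly quadruples (0.20 to 0.83 ops/ns) while the latency of any single instruction gets 20% worse (5.0 to 6.0 ns).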

Coming back to my statement: whether CISC is good or RISC is good depends entirely upon the application where it is to be deployed.

@manch is an ex-Intel engineer. I won’t debate with him on semiconductors :sweat_smile:


Only in the Bay Area can a real estate forum spawn discussions on RISC vs CISC…

Based on Manch’s statement that modern x86 CPUs internally break complex instructions into uOps and implement them with an internal RISC engine, it’s really not about RISC vs CISC but rather x86 vs ARM architecture. So far many companies have tried to grow the ARM-based server market without success, so I am skeptical that Amazon, which seemingly has no prior chip-design experience whatsoever, can just put a bunch of people together out of the blue and produce a market grabber almost on the first try. That seems too easy. Then why can’t Microsoft do it? Or maybe Uber can do it too? :slight_smile:

Granted, we are not talking about an ARM-based server competing with x86 on absolute performance; we all know ARM has a long way to go there. What’s new here is ARM beating x86 on some kind of performance-per-dollar metric. Given the cloud business model that AWS provides, which adds an extra layer, these chips are a good alternative for customers that are more sensitive to cost than to performance, provided that the ARM software ecosystem can catch up for these applications.

But then it is also possible that ARM-based servers may not be the optimal solution for these applications in the long run. ARM is still a for-profit company, and it charges royalties (albeit relatively low ones). When the world has produced enough EECS engineers who have too much free time, or can’t find a job, or have retired rich, someone somewhere is going to produce a free CPU for everyone to use. I think it exists already, just not competitive enough to be noticed. But this can change too, especially if some other industry heavyweights throw their support behind these efforts.


That’s why I freaking love the Bay Area. :smile:


I think what @manch is emphasizing is how Intel performs under the hood. Who cares? For most people, these chips are black boxes. I am not even sure a particular instruction set has to be implemented in exactly the same way; if that were true, Intel and AMD chips would be exact copies of each other. There can be more than one way to implement an idea. I am sure there is more than one way to make a four-stroke engine, or a two-stroke engine, or to implement automatic shifting in cars.

Intel employs (way) more SW engineers than HW engineers. Manch’s previous life was in SW, right? Firmware? The OS layers above drivers?

Dear experts, can you start a thread on cloud technology? My knowledge is weak in that area, and I want to benefit from what others know. It will be helpful for investing in cloud companies and understanding the underlying technology.

There’s a topic called cloud kings with a lot of info.