Is there something about RISC that still makes it better than CISC when it comes to per-watt performance? It seems like nobody has had any success making an x86 processor that's as power efficient as ARM or other RISC designs.
> Is there something about RISC that still makes it better than CISC when it comes to per-watt performance?
CLASSIC CISC was microcoded (for example, the IBM S/360 had a feature that let you write custom microcode for compatibility with your inherited equipment, such as IBM 1401 machines or the IBM 7xxx series, or for other purposes), while RISC was pipelined from birth.
Second thing: as I understand it, many CISC machines existed as multi-chip boards or even as multiple boards, so they had big losses on the wires, whereas RISC appeared in the late 1980s as a single die right away (with only the external cache added as a separate IC), but I could be mistaken on this.
> nobody has had any success making an x86 processor that's as power efficient as ARM or other RISC designs
Rumor has it that the Intel Atom (its in-order core was essentially a return to the first-generation Pentium design) was quite good in mobile, but ARM was far ahead of it in software support for a huge number of power-saving features (a modern ARM SoC allows nearly any part of the chip to be turned off at any time, and the OS supports this), and because of that lack of software support, smartphones with Intel chips had poor battery life.
The more or less official explanation was that Intel made a bad power-conversion circuit, so the Atom consumed too much in the states between deep sleep and full speed, but I don't believe that, as it would be too obvious a mistake for a hardware developer.
Sometimes. Far from always. Some would have a complicated hardwired state machine. Some would have a complicated hardwired state machine and be pipelined. Some would have microcode and be pipelined (by flowing the microcode bits through the pipeline and, of course, dropping those that have already been used, so fewer and fewer microcode bits survive at each stage).
Well, as I see you don't have the nerve to answer a simple question about the purpose of minicomputers, I will.
When computers first appeared, they were big, simply because technology limitations made small machines very expensive per unit of work, so scale was used to make computation cheaper.
By the late 1960s and early 1970s, technology had advanced to the point where it became possible to build simplified versions of the big computers for certain limited tasks, though they were still too expensive for wide use.
A simple illustration: an IBM 3033 mainframe with 16 MB of RAM could serve 17,500 3270 terminals, while a PDP of the same era could handle maybe a few dozen (perhaps 50, I don't know exactly), so mainframes, even though they were very expensive, gave a good cost per workplace.
A known example: a PDP was used to control a scientific nuclear reactor. The PDP was chosen not because it had the best MIPS/price ratio, but because it was the cheapest machine adequate for the task, and therefore affordable on a limited budget.
For a very long time, minis stayed in the niche of limited machines used to avoid much more expensive full-scale mainframes. They were used to control industrial automation (CNC), chemical plants, and other small things.
Once microcomputers (a CPU on one chip) appeared, they began eating the minis' space from the bottom, while mainframes kept getting more cost effective (more terminals once cheap modems appeared, etc.) and ate the minis' space from the top.
And in the 1990s, when affordable 32-bit microprocessors and cheap megabytes of RAM appeared, the minis disappeared, because their place had been captured by the micros.
To be honest, I can't think of anything we could not call a microcomputer now, since even IBM Z mainframes have single-chip processors and the largest supercomputers are practically clouds of SoCs (NUMA architecture).
And I must admit, I still see PDPs (or VAXes) at industrial sites, where they still control old machinery from the 1990s (they are quite limited by modern standards, but very reliable, and they still work).
As I remember, the last symmetric-multiprocessor supercomputer was the Cray Y-MP; later machines became ccNUMA, or just NUMA, or even clouds.