My concern about this is that, with the failure of Qualcomm Centriq, there is no industry-standard, affordable, easy-to-buy ARM-based server platform that ordinary people and small/medium-sized businesses can acquire.
It's great that Amazon has ARM based stuff, but it's something proprietary they're purchasing in large quantities from a manufacturer they have a very close relationship with. Undoubtedly the physical hypervisor platform and motherboard these things are running on is something totally bespoke and designed to Amazon's unique requirements.
I can't pull out my visa card and go buy a (atx, microatx, mini-itx) format motherboard for an ARM CPU, the CPU itself, RAM, etc, and build a system to run debian, centos, RHEL, ubuntu whatever on.
This means that, sure, you can get an EC2 ARM based server, but it's something you can't physically own and you'll be paying cloud based service rates forever if you want to keep running it. There are some categories of business and government entities where not having things on-premises, or fully owning and controlling the hypervisor all the way down to the bare metal, is a non starter.
If the ARM platform Amazon is buying becomes truly price/performance competitive with a single/dual socket xeon or threadripper/epyc, it also gives a possible competitive advantage to Amazon over any medium-sized cloud based VM provider out there currently selling (xen, kvm) based VMs on x86-64 hypervisors.
Based on what's available on the market right now I see no signs of there being a viable hardware-purchasing alternative to Intel or AMD based motherboards and CPUs.
> I can't pull out my visa card and go buy a (atx, microatx, mini-itx) format motherboard for an ARM CPU, the CPU itself, RAM, etc, and build a system to run debian, centos, RHEL, ubuntu whatever on.
The difference between ARM and x86 is that there's no standardized ARM interface between the motherboard and the CPU—no ARM equivalent to ACPI. Every integration of an ARM CPU into a board is bespoke. So you have to buy "a board with an ARM CPU on it", not just an ARM CPU and board separately.
But, if you relax that restriction, it's not like it's hard to acquire "a board with an ARM CPU on it." 80% of single-board computers (e.g. the Raspberry Pi) are "a board with an ARM CPU on it." You can wipe pretty much any Android device (not just phones, but also HDMI "streaming boxes", which are a convenient form-factor for basing a workstation on) and install Linux on them. There are also some higher-end development/SDK boards for ARM embedded systems, like the Nvidia Jetson. What more do you need?
> The difference between ARM and x86 is that there's no standardized ARM interface between the motherboard and the CPU—no ARM equivalent to ACPI. Every integration of an ARM CPU into a board is bespoke. So you have to buy "a board with an ARM CPU on it", not just an ARM CPU and board separately.
Not exactly true. ARM server chips do have ACPI. The trick is buying a CPU that is SBSA[1] and SBBR[2] compliant. Then most things work as expected.
My first question would be: why has no industry trade group attempted to define a standard socket, or to reuse an existing physical socket pin-out?
It's not hard to acquire something with an ARM CPU on it, but at prices an ordinary person can afford, they're all in the category of toy computers. Try to find an affordable ARM system with an M.2 2280 NVMe SSD slot like you can find on a $100 desktop x86-64 motherboard, or with multiple PCI-Express 3.0 x8/x16 slots.
I've previously spent many years working for a hardware manufacturer. My personal theory is that this is a real example of a chicken-and-egg problem related to economies of scale. Nobody wants to spend tens of millions of dollars tooling up to produce ARM socketed CPUs, motherboards, and so on, which may or may not be price/performance competitive with current-gen Intel and AMD parts by the time they're ready for release. And there's a huge risk in manufacturing something like that and then discovering that the sales volumes are really low.
Look at the sheer massive quantities that the top-ten Taiwanese motherboard manufacturers churn out every year.
> You can wipe pretty much any Android device (not just phones, but also HDMI "streaming boxes", which are a convenient form-factor for basing a workstation on) and install Linux on them
No you really can't, and phones aren't servers. Take any modern $600 smartphone and try installing something very close to a stock debian or centos on it. Being able to maybe boot a Linux kernel on something doesn't mean that there's anything like the market demand for that particular hardware platform target for a whole distribution.
> My first question would be: why has no industry trade group attempted to define a standard socket, or to reuse an existing physical socket pin-out?
Because ARM is a design licensed to companies that want to make their own chips for their own purposes. Those purposes involve optimizing cost by choosing, per product, whether to integrate various cores onto the SoC, or to leave them as external devices on the board, or to maybe put them into a secondary chipset microcontroller and route some IO pins to connect the two.
In ARM designs, the licensee determines the required pin-out, because the licensee determines what the CPU does vs. what the board does. You can't really have a generic "ARM board", because no two "ARM CPUs" would have the same expectations for what are on that board.
A particular ARM licensee could standardize the socket between their own designs—but in doing so, they'd lose a lot of the advantage of licensing vs. buying an off-the-shelf chip in the first place.
Having an M.2 slot doesn't mean you can put any M.2 SSD in it. M.2 2280 is the most common size and is easy to buy in stores or online; smaller sizes are mostly for OEM devices, and you don't have many choices available.
Not about the M.2 slot, but a shout-out to ExplainingComputers. I graduated as an embedded systems programmer but changed careers along the way. This channel reignited my passion for embedded systems, and now I'm an enthusiastic hobbyist again.
An economic viewpoint on your first question is that there is not enough margin (or energy, leverage, etc.) left in the system-level ecosystem to press for a standard. In the old days, Compaq et al. pushed very strongly for standards to help PC makers move away from IBM. This began to come undone around the time of EISA, which was ratified but failed, and PCI, which was Intel-sponsored. It only got worse after AMD (and other x86 semiconductor makers) lost ground to Intel.
Isn't the answer quite simple? There is no demand.
We have reached the point where shipping 1.2B smartphones per year is called massive, compared to a 200M-unit PC market in which 150M of those are laptops (soldered, not socketed).
You can't buy a self-built ARM PC today, but there will be Qualcomm ARM laptops. And the AWS ARM CPUs are based on ARM's N1 design; it is only a matter of time before somebody makes the same CPU, or a system integrator starts selling CPUs from other ARM vendors such as Fujitsu, Ampere, Huawei, Marvell, or possibly Nvidia and others.
I don't see much of a problem in getting ARM access as consumers.
Sphere is a special-purpose device, not for everyday general computing, and I have yet to see it finally made as secure as advertised (given the use of C without any kind of special security tooling).
ACPI is not "between the board and CPU", it's "between the OS and the computer". Which is actually much more important. Writing drivers for bespoke PCIe, USB and SATA controllers is not fun. ACPI standardizes configuration of these things.
> ACPI is not "between the board and CPU", it's "between the OS and the computer".
Eh, kinda-sorta. ACPI (through the availability of a DSDT in BIOS flash) provides whatever's running on application processors (like CPUs, but also independent coprocessors like mobile baseband processors and server baseboard management controllers) the ability to 1. query "the board" for what's effectively a listing of the available busses, and the devices on those busses; and 2. to initialize those devices with wired address-space regions on those busses, such that nothing conflicts.
Now, certainly, ARM boards that have standardized "network-like" peripheral busses like PCIe, USB, or (maybe) SATA, have an ACPI controller on the SoC, to schedule those devices' address-space regions onto those busses.
But not all ARM devices have those busses. Some are entirely embedded, with the ARM core serving more of the function of a microcontroller than a true application processor. (There are ARM cores in keyboards and mice!) And some—especially cheap—consumer ARM "platforms" only have busses like SPI and I²C. (You know, like an Arduino!) Even older fully-featured ARM "computers", like smartphones and portable game consoles, didn't support ACPI until very recently, either (which is much of why devices like the 3DS and PSVita didn't support Bluetooth or USB OTG HNP.)
In either case, these devices more-often-than-not don't have ACPI controllers, instead "hard-wiring" each peripheral device on the board to a certain address-space range on a certain bus available to the CPU, just like PCs used to do before there were even IRQ jumpers. (One clear example: every Nintendo portable since the GBA was an ARM SoC, but none of them until the Switch have had ACPI. They just had a static IO memory map, that you could program against.)
And even those ARM devices that do have an ACPI controller on-board, almost universally don't use it the way that x86 does, where even chipset-specific devices like Platform Controller Hubs get probed and configured by ACPI rather than just sitting statically on a special bus.
We're not talking about embedded devices in a server thread :)
I'm not sure what you're calling "an ACPI controller". (You seem to be thinking of MMUs and PICs??)
ACPI is a software interface. The tables are the ACPI. It's possible to implement ACPI on a system that originally shipped with U-Boot and Flattened Device Trees. Say, for the Raspberry Pi: https://github.com/tianocore/edk2-platforms/tree/8b72f720d53...
> You can wipe pretty much any Android device (not just phones, but also HDMI "streaming boxes", which are a convenient form-factor for basing a workstation on) and install Linux on them.
Nope. The postmarketOS project is actively trying to make this feasible, and it's quite non-trivial and heavily device-specific.
Enlighten me as to which HDMI "streaming box" I can install (mainline) Linux on. Bonus points for a model where the h264 encoding hardware block is supported.
Amlogic and Allwinner SoCs have varying degrees of media decode support in mainline kernels (h264 is supported, other codecs vary). Armbian supports a bunch of such boards with Linux 5.3. What exactly are you trying to do?
Our current approach is using the Allwinner A20 processor. Investing quite a bit of work in a free driver for the h264 encoding block on there (cedrus vpu).
It was an honest question if we had overlooked something obvious...
It used to be quite coupled to x86 hardware, but nowadays they have a profile that replaces a bunch of fixed hardware (such as the ACPI embedded controller) with an ACPI description of GPIO pins, I2C buses etc.
As a normal person, you're mostly stuck with SolidRun's offerings if you want something that's affordable but serious (i.e. can run PCIe devices).
I run a MACCHIATObin as a desktop with FreeBSD, it's fine, great open source UEFI-ACPI firmware (upstream EDK2), generic ECAM PCIe works (with a quirk but still), but it's not a powerful system. Four A72 cores, single DDR4 channel. It's basically a rather-old-ultrabook level of performance.
Then there's their new thing with the NXP LX2160A 16-core dual-channel chip; there was a loooot of doubt about whether they could achieve good ACPI support on that SoC. But recently NXP have pushed a commit that introduces a generic description of the PCIe controller to the ACPI tables.
And the linked systems are mostly multiples of 48 cores and hundreds of GB of RAM. Totally different class.
A raspberry pi might be good for very small scale experimentation of little apps, but my 4GB model took months to arrive and doesn't come close to matching the performance of the smallest instance I would use at $DAYJOB.
That thing is $750 and probably performs much worse than a two year old Ryzen 1500X ($159 at the time) on a $105 motherboard. Nevermind a new Ryzen 3500 which is $194.
I agree. As someone who's had some "off mainstream" hardware over the years, I'd kind of like to have a 16+ core ARM server to play with, just because that's the kind of dork that I am.
Really though, I think this is about other things: if Amazon can become independent from Intel and AMD, that gives them huge leverage in negotiations with them. It also sets AWS up quite nicely for the time when they are broken up and split out.
> I can't pull out my visa card and go buy a (atx, microatx, mini-itx) format motherboard for an ARM CPU, the CPU itself, RAM, etc, and build a system to run debian, centos, RHEL, ubuntu whatever on.
You can buy an SBC though, so you don't necessarily need to. There are SBCs with more power than the Raspberry Pi (at least up to the 3); they just cost more. I'm starting to get into the habit of using more Raspberry Pis at home for network stuff, and I'm debating setting one up as a web development target platform, something CodePen-like but on my Pi. It beats running a bunch of VMs and is cheaper than buying a whole rig just for VMs.
They lack some important things you need in a datacenter, like hardware management platforms. You also can't upgrade components or add interfaces (which at their price point makes some sense).
SBCs have generally been hobbyist oriented. It would be interesting to see some be datacenter oriented.
I think there is a bias against them because they are so cheap. But they are arm64, now with 4 or 6 cores. Still slow compared to an Intel/AMD desktop, but you can just ssh/sshfs into them from your primary machine. I do this with VS Code and it is great. Even better if your primary machine is Linux-based.
Hmmm, at those prices I would worry about the quality of the SSD. (I've learned to trust a few brands, learned to avoid a few others, the hard way. The 512GB SSDs I'd consider purchasing go for 2X-4X your mentioned cost).
An important update has been made to the microbenchmarks on this page. I will quote my editorial comment for you:
"Editor’s Note: The microbenchmarks in this article have been updated to reflect the fact that running a single instance of stress-ng would skew the results in favor of the x86 platforms, since in SMT architectures a single thread may not be enough to use all resources available in the physical core. Thanks to our readers for bringing this to our attention."
You'll note that the newly-used stress-ng command is: "stress-ng --metrics-brief --cache 16 --icache 16 --matrix 16 --cpu 16 --memcpy 16 --qsort 16 --dentry 16 --timer 16 -t 1m"
The count on the flags under the old numbers was 1. This update shows even better numbers for Arm than we originally produced. Thanks to our assiduous readers for pointing this out.
I think one of the big issues may be with high-performance multi-threaded code. x86 (I am including x64 in this designation) has a much stronger memory model than ARM. This has two implications. First, x86 is a lot more tolerant of data races and missing explicit memory fences. When you port server applications that have been running well on x86 to ARM, you may be in for some surprises as data races and missing fences now manifest as data corruption. The other implication is that on x86, the gap between a sequentially consistent memory order and a relaxed memory order is not that great. Thus, many programmers may use atomics with sequentially consistent memory order to reduce complexity. On x86, this will generally yield decent performance. On ARM, that gap is much bigger and you are liable to have severe performance regressions.
If I'm reading this correctly, you're basically implying that every serious QA department in a large company might benefit from buying some ARM hardware for running their test suites, in order to reveal multithreading bugs and inadvertent data races in their code?
No, that's not quite right. There are things you can do safely in x86's memory model that are not portable to ARM. But they are completely well specified if your target is x86. I.e., the hypothetical QA team that buys ARM hardware may only expose portability issues rather than bugs.
What about Java applications? If the x86 memory model lets you get away with things that are not guaranteed by the Java memory model (in the Java spec) there could be actual threading bugs that would be exposed by running on ARM.
Happily, I know nothing about Java's memory model.
I mostly had C in mind in my earlier comment. Yes, x86 formally guarantees patterns that are not guaranteed by the C standard. The key is that the behavior is implementation-defined, not undefined. Everyday C compilers targeting x86 have x86 memory model semantics.
So that's why I said (with the case of C in mind), no, running on a different memory model wouldn't necessarily expose bugs in the x86 target. Your program might not be portable to another implementation, but that isn't inherently a bug. (Especially given x86's more or less omnipresence in the software space, from pretty low end up to fairly high end systems.)
In modern C (C11 and newer) you would prefer developers use portable memory constructs such as atomic_store_explicit or atomic_load_explicit with some particular memory_order semantics. These are specified in §7.17.3 "Order and consistency." (The C17 publication of the same section might be more clear, I just wanted to illustrate the C language has had portable constructs for this since the 2011 version.) Of course, it is possible that developers use more relaxed semantics than are actually (portably) valid, and it happens to work on x86 just like code that doesn't use C11 memory model atomics.
(And the vast majority of developers should be using higher level constructs like mutexes or rwlocks or existing lock-free data structure libraries, such as ConcurrencyKit[1], instead of messing with complicated memory semantics. I suppose the same is true in Java land.)
In C land, there are starting to be nice tools to detect these kind of things explicitly rather than just observing memory corruption on ARM, such as KTSAN and KCSAN. I don't know if Java has anything similar.
Anyway, I don't know if any of that is useful to you. Sorry for the wall of text.
Do you happen to know of any studies or benchmarks that show how much of a difference in performance there is between strong and relaxed memory consistency models for real-world workloads? It's something that I've been curious about for a while.
One upside of code increasingly being written in languages that are predominantly single threaded like Python/JS is that these issues do not matter as much.
In the case of Scylla, we run on the Seastar engine, which runs single-threaded per core because it is very "greedy." Hence the CPU being pegged at 100%. It wasn't thrashing. We just squeezed everything we could out of it.
We do parallelism across CPUs and nodes. We run single-threaded to get the most out of a CPU in a shared-nothing architecture. Many single-threaded apps aren't written to really take advantage of all a CPU has to offer. But there are also prices to pay to run multi-threaded; context switches, etc.
Not having (thread-level) parallelism encourages designs with small, well-defined interfaces where data crosses threads. In a typical Node.js server setup, with one load balancer and a number of node processes that don't know about each other, you will experience fewer concurrency bugs and care less about NUMA than in the equivalent ASP.NET app (requests scheduled onto a thread pool, code able to share resources at will).
Of course sometimes "small interface" isn't really viable from a performance standpoint and you want a well-engineered multicore application with lots of shared data, and you just can't do that with JS or Python (or at least it's very hard). That's a good reason to choose a different language.
"AWS, the biggest of the existing cloud providers released an Arm-based offering in 2018 and now in 2019 catapults that offering to a world-class spot. With results comparable to x86-based instances and AWS’s sure ability to offer a lower price due to well known attributes of the Arm-based servers like power efficiency, we consider the new M6g instances to be a game changer in a red-hot market ripe for change."
I'm not sure how that conclusion follows from the numbers presented? Yes, the new ARM processor has become much faster than the older one, but it clearly loses against x86 in CPU-heavy benchmarks.
Might be a good option for I/O limited workloads, as the NVMe storage is newer and therefore faster.
Are you sure? The raw IO benchmarks presented in the article are higher in all dimensions for the M6g vs M5, so if IO was the bottleneck on the M5 I'd expect the M6g to move ahead.
Much of this depends on whether an app has true linearity in scale-out. If you can use horizontal scalability, you can get the same (or better) aggregate performance while, as you note, still reap savings both in power and dollars. Similar by analogy to how SSDs allowed you to get "good enough" performance for a database compared to all-RAM instances. You could still meet your SLAs and pocket the difference. It's a game changer in that way.
Agreed that the advantage on storage is not ARM-specific. It is what it is on AWS, and we have shown that. But the whole point is that the Arm platform is doing great.
ARM has recently made some real progress into HPC. HPE is delivering ARM clusters in Europe. Fujitsu has developed an "optimised" ARM CPU with SVE, and it will power Japan's exascale HPC system[1].
I tried ARM servers at Scaleway and honestly, unless your profile is sort of a sysadmin or you're otherwise motivated, it's just dealing with extra issues for less power overall.
Also, AFAIR they were around the same price as the x86 instances.
But then again, I have almost no sysadmin skills, so maybe it was my lack of knowledge.
My experience with ARM (locally anyway, raspberry pi's and pinebooks) is that everything works great, if it works.
ARM binaries available? Great. You're all set. (Hopefully it's for the right generation of ARM, though; I can't get the Steam Link software to run on my Pinebook because it does Raspberry Pi hardware version checks that obviously fail.)
Not available? You're gonna run into one of two scenarios:
Closed source app? Sorry. Just never going to get it (unless you want to run it through qemu at a huge performance hit and YMMV).
Open source app? Compile it yourself! Which takes noticeably longer than on x86 for me. Maybe on a server this works out. For desktop, compiling st myself was actually the path of least resistance to getting a terminal I liked, but I wouldn't have wanted to compile Firefox on ARM without some serious server horsepower. When I used to build Docker images they would sometimes take hours, for things that took less than a minute on x86.
I compile my C++ code on ARM on a daily basis, and here it's OK. In my case the bottleneck was storage. I solved it by building on a mounted network share located on a fast SSD in a Windows PC. Gigabit LAN + a desktop-grade SSD are much faster than the eMMC of my target device.
In the cloud, we believe forward-thinking application developers will have little problem porting their software to support their users on the new chipset. That is definitely more of an issue for on-prem or private use, though.
>Open source app? Compile it yourself! Which takes noticeably longer than x86 for me.
Heh, it's funny, right? I switched from an x86 server to an arm server and now it takes seconds and seconds to log me in using ssh. It's like the server really struggles when crunching numbers.
In fairness, I tried a Raspberry Pi cluster, and it was much slower than my Xeon server. Insanely slower.
But I could run the entire Pi cluster off a 6-port phone charger instead of an 800W power supply. My power bill was one of my driving motivators, but ultimately I went back to x86 for performance.
Disclosure: I currently work at Scaleway but wasn't there at the time when we rolled out the 1st ever ARM baremetal offering.
It was massively cheaper than the x86 counterparts at Scaleway or competitors, thanks in no small part to very clever electrical engineering by my colleagues.
I think we never publicly disclosed power consumption levels but.... let's say our ARM boards are very power efficient :)
ARMv7 as a physical server. No virtualization. Very low performance, very cheap. Good for running your personal web server or similar, but not for anything seeing load. It looks like they have not maintained the kernel for more than a year. They let it die, would be my guess.
ARMv8. A virtualized platform. I have not evaluated it myself yet.
I don't know much about the cloud hosting for ARM stuff (since I don't work in that space), but I have been extremely happy with my ARM home-server setup in my basement. Docker swarm has been extremely nice on my ODroids, and I recently upgraded to the Nvidia Jetson Nano, which has perfectly fine Kubernetes support.
I'll admit that maybe I'm not doing the most elaborate stress tests, but I mostly use them for my video transcoding and my (very) recent interest in machine learning, and I haven't had much issue. The thing that's given me the biggest headache is older versions of Ubuntu's mediocre support of ZFS, which has largely been fixed.
I've been thinking about moving my home server to an ARM-based setup to reduce power consumption/fan noise. My situation is closer to 'glorified NAS' that runs a few additional oddball things, though. Are you just using USB 3.0 to SATA in the cases where such is needed?
My setup is a bit weird now; I have some cases that holster the drives [0], with a low-wattage ATX power supply that exists solely to power the drives. I have a bunch of SATA male-to-female cables on the case [1], and then a bunch of USB 3 to SATA adapters plugged into a couple of USB 3 hubs into my leader Jetson. I have ZFS on Linux set up there, and an NFS share set up on the ZFS mount.
I use Kubernetes to distribute my projects across the cluster, and use NFS for any kind of shared data. It works way better than one might think.
EDIT: Just a note, if you decide for some reason to copy my design, I'm flattered; make sure you either get an ATX power supply that's extremely low wattage or is smart enough to lower the wattage based on load. I made the mistake of taking a power supply out of an old gaming rig the first time I did this, and it ended up consistently using 600W all the time, leading to a hefty power bill one month. I was able to find a power supply that peaked at 200W on ebay for ten bucks, and seems to idle at around 50W, which is much more manageable.
I've debated buying a NUC, but at least on Intel's website, buying just the board cost somewhere in the neighborhood of 500 USD; for that price I can buy 10 Raspberry Pi 4's or ODroid XU4s. Granted, in order to use them to their full potential, you end up having to learn a lot about distributed computing (which is a bonus for geeks like me, but maybe not most people), but if your goal is to use it as a server, the NUCs seemed a bit overpriced to me.
That said, if anyone is looking to stay within the x86/x64 family of CPUs, I actually recommend looking for a used Wyse/Dell thin client on eBay. You can often get a decent quad-core system with USB3.0 and 4-8gb of RAM for around a hundred USD.
The HC1 is great, and I had one, but the problem I had with it was that it doesn't really have any good support for RAID (since it only has one SATA port), making it a little unfit for a real NAS. I ended up having to plug in hard drives via USB, and at that point you're better off buying the ODroid XU4 (or a newer board like the ODroid N2 or the Nvidia Jetson Nano), since it has USB 3.0 built in.
When I was using it, I ended up not using the built-in SATA port and instead just using it as yet-another-node in my cluster of ODroid XU4s
I bought an AArch64 desktop board based on one of the newer NXP manycore CPUs, because it seems to be the first of its kind.
Every SoC vendor should sell an mATX or Mini-ITX board compatible with PC components, if they want server adoption.
That goes especially for any vendor facing the even harder uphill battle of bringing RISC-V to servers: the server market was dominated by PC clones for a reason.
I don't think quite yet. As others have noted, porting x86 apps to ARM can be fraught with issues concerning memory models and concurrency. Especially apps written using unmanaged languages. Newer apps written in things like .NET Core (especially if you can keep any native dependencies out of the equation) are probably going to be a lot easier to port when the time is right.
I think we'd have to be at a point where the ARM server is <50% the cost of the x86 server while offering equivalent real-world performance to make the jump worth it for the average shop. You'd also have to have a very accessible ecosystem of reliable ARM machines that developers could purchase and hack on. There are many businesses that will happily incinerate millions of dollars to keep x86 around just because changing things is frowned upon or otherwise scary.
For some applications ARM is today and it's an excellent approach. But, for most it's still somewhere on the horizon.
Can you meaningfully benchmark stuff in the cloud? It seems that to make any claims on price/performance you'd want to use two local servers dedicated to the benchmark, with hardware as near identical as possible (apart from the CPUs, of course), and you'd want to know the full cost of both servers.
You can benchmark using bare metal instances to eliminate noisy neighbours, and then you can meaningfully benchmark against other types of instance from the same cloud provider to get a relative cost benefit. I'd agree that obtaining the cost benefit in terms of on prem hardware is a much more involved calculation.
Sure - you're benchmarking something a bit different from whether ARM is ready for "server dominance". You're essentially benchmarking Amazon's prices to early adopters of ARM vs the cutthroat x86 cloud marketplace. That may be interesting for many people but tells you little about ARM hardware.
I'd like to see cheap ARM server hardware for small servers available for purchase.
Something relatively low power (also power efficient) that can replace an entry-level Intel Atom server like those offered by Kimsufi (OVH) and Online.net.
The Raspberry Pi 4 with 4GB RAM is getting close in terms of performance, but it lacks some things I'd like to see in a server, namely at least two SATA or NVMe ports and two LAN ports.
If you want to acquire large numbers of ARM boards for servers, our neighbours in Zhongshan are Firefly. They make a cluster server that holds 11 of their own RK3399 six-core 64-bit 'core boards', so 66 cores in total. It's cheap. http://shop.t-firefly.com/goods.php?id=111
At KubeCon this year, there were three different vendors doing ARM-based storage. Two were capable of NVMe and one only SATA/SAS. I am sure the answer to the article's title question is no for right now, but in terms of disaggregated storage, I think the answer is yes!
The fact that ARM is written as Arm really threw me for a loop here. At first, I thought they were talking about a new programming language or something.
That comparison was never done. All CPU tests are on EBS backed instances, and in the end the I/O subsystem is compared in isolation for NVMe-backed instances in both cases.