> deliver significant cost savings over other general-purpose instances for scale-out applications such as web servers, containerized microservices, data/log processing, and other workloads that can run on smaller cores and fit within the available memory footprint.
> provide up to 40% better price performance over comparable current generation x86-based instances for a wide variety of workloads,
From what I read, it's not terribly hard to tell your compiler to target a particular instruction set; you just need to do it. Cost savings and better performance are strong incentives, and Apple moving the Mac platform to ARM will push enough market share that developers will take the time to recompile.
Edit: Forgot to add the source of those quotes: https://aws.amazon.com/ec2/graviton/
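As a rough sketch of what "recompiling for ARM" usually amounts to (not a complete build recipe; toolchain and package names vary by distro, and cross-linking needs the matching sysroot installed), the same C++ source just gets built with a cross toolchain or a target triple:

```cpp
// hello.cpp -- the same source builds for x86-64 or AArch64;
// only the toolchain invocation changes.
#include <cstdio>

int main() {
    std::puts("hello from whichever ISA this was compiled for");
    return 0;
}

// Example invocations (names are illustrative, check your distro's packages):
//   native x86-64:   g++ -O2 hello.cpp -o hello
//   cross AArch64:   aarch64-linux-gnu-g++ -O2 hello.cpp -o hello-arm64
//   or with clang:   clang++ --target=aarch64-linux-gnu -O2 hello.cpp -o hello-arm64
```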
It might or might not be hard to compile for a different CPU. Intel's memory model lets you play fast and loose with multithreaded code without hitting as many race conditions. As a result, code that works fine on Intel often randomly gives wrong results on ARM, and fixing that can be very hard.
Once it is fixed you are fine. Most of the big programs you might use are already fixed, and some languages give you guarantees that make it just work.
What is different on Intel that lets you play fast and loose with multithreading?
Two threads reading and writing the same memory area without any locking would give problems regardless of the ISA, or am I missing something?
So, e.g., on x86 if you store to A and then store to B, another core that sees the store to B is guaranteed to also see the store to A. That guarantee does not exist on ARM.
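A minimal sketch of that message-passing pattern, using C++ std::atomic with relaxed ordering so the hardware's own guarantees are all you get (illustrative only; strictly speaking the C++ model also lets the compiler reorder relaxed stores, which is part of why the portable fix below matters):

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Producer writes the data, then sets a flag; consumer waits on the flag.
std::atomic<int> data{0};
std::atomic<int> flag{0};

void producer() {
    data.store(42, std::memory_order_relaxed);  // store to A
    flag.store(1, std::memory_order_relaxed);   // store to B
}

void consumer() {
    while (flag.load(std::memory_order_relaxed) == 0) { /* spin */ }
    // On x86 (TSO) the two stores become visible in program order, so this
    // effectively always holds. On ARM the flag store may become visible
    // before the data store, and this assert can fire.
    assert(data.load(std::memory_order_relaxed) == 42);
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```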
Two threads reading and writing the same memory area do not necessarily give problems. In fact, a lot of software is built to exploit specific guarantees about how memory accesses are ordered with respect to each other.
ARM processors give very few such guarantees, so code has to work around that.
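And a sketch of the usual portable workaround: state the ordering you actually need (release/acquire here) and the compiler emits the required barriers on ARM, while on x86 the generated code stays essentially a plain store and load. This is a drop-in replacement for the producer/consumer in the sketch above, under the same assumptions:

```cpp
#include <atomic>

std::atomic<int> data{0};
std::atomic<int> flag{0};

void producer() {
    data.store(42, std::memory_order_relaxed);
    // Release: every write before this store is visible to any thread that
    // observes it with an acquire load, on every ISA.
    flag.store(1, std::memory_order_release);
}

void consumer() {
    while (flag.load(std::memory_order_acquire) == 0) { /* spin */ }
    // Now guaranteed on ARM as well as x86.
    int v = data.load(std::memory_order_relaxed);  // v == 42
    (void)v;
}
```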
Disclosure: I work at AWS building cloud infrastructure.
It's good to be skeptical, and I always encourage folks to run experiments using their own trusted methodology. I believe the methodology engineering used to support this overall claim (40% price/performance improvement) is sound; it is not the kind of "benchmarketing" that I personally find troubling in the industry.
With physical hardware we can measure power consumption, heat, and performance, and buy it at retail price.
With a cloud instance we can't measure power consumption or heat, noisy neighbors may be present while benchmarking, and we can't know the real underlying price. I don't blame anyone, but it makes comparing the hardware difficult.