You are talking about a different thing. I don't care what "purpose" you want to achieve. I am merely pointing out that this performance number is mediocre at best, given the enormous computing power thrown at it, whether you like it or not.
To put it into perspective: there are 68 nodes with 98 hardware threads each, which means only 1000/7000 ≈ 140 MB/s per thread, or 280 MB/s per core, and that's not that impressive, to be honest.
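A quick sketch of that arithmetic, using the figures quoted above (68 nodes × 98 hardware threads, 1 TiB/s taken as roughly 1,000,000 MB/s). Note the 140/280 figures round the thread count up to 7000; exact division gives closer to 150/300:

```python
# Back-of-the-envelope per-thread / per-core throughput,
# using the numbers quoted in the comment above (assumptions, not
# figures confirmed by the blog post itself).
nodes = 68
threads_per_node = 98                  # hardware threads per node, as quoted
cores_per_node = threads_per_node // 2  # assuming 2 threads per core

total_threads = nodes * threads_per_node  # 6664
total_cores = nodes * cores_per_node      # 3332

aggregate_mb_s = 1_000_000  # ~1 TiB/s expressed in MB/s, rounded

per_thread = aggregate_mb_s / total_threads
per_core = aggregate_mb_s / total_cores
print(f"{per_thread:.0f} MB/s per thread, {per_core:.0f} MB/s per core")
# prints "150 MB/s per thread, 300 MB/s per core"
```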
Large reads tend to require the least CPU of all the tests we ran in the post. This is especially true in a 3X replication scenario, where reads are serviced by a single OSD, as in the 1 TiB/s test. CPU is far more important for small random writes, and can also be important when using erasure coding and/or msgr-level encryption.
So the premise that you can only achieve 280 MB/s per core is misleading: this cluster wasn't bottlenecked by the CPUs for large reads. Having said that, CPU makes up only a small portion of the overall cost for an NVMe deployment like this. Investing a relatively small amount of money to achieve a higher core-to-NVMe ratio provides a better balance across all workloads and more flexibility when enabling features that consume additional CPU.
1 TiB/s is a benchmark number, and obviously it is meant to impress. That is already a purpose, without further clarification; on the other hand, it does not necessarily mean it cannot serve other purposes - though again, it looks like an extremely expensive cluster for that purpose. And I see no reason to downvote, except that someone's feelings got hurt by a fact. With the actual configuration shown, it is just not that performant or economical, as I said in my reply, if you had read any of it.