Could you name any projects that are based on a blockchain and which could not have been viable with any other technical solution, such as an SQL-based database like PostgreSQL?
Blockchain-based logging systems. You can't go back and edit anything; corrections have to be appended as new records.
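Roughly, the property you get is a hash chain: every record commits to the previous record's hash, so a retroactive edit becomes detectable. A minimal sketch of that idea in Python (the field names and record structure are made up for illustration, not taken from any particular product):

    import hashlib, json, time

    # Toy append-only, hash-chained log: each record commits to the previous
    # record's hash, so editing history in place breaks verification.
    log = []

    def append_record(data):
        prev_hash = log[-1]["hash"] if log else "0" * 64
        record = {"data": data, "ts": time.time(), "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps({k: record[k] for k in ("data", "ts", "prev")},
                       sort_keys=True).encode()
        ).hexdigest()
        log.append(record)

    def verify():
        prev_hash = "0" * 64
        for record in log:
            body = {k: record[k] for k in ("data", "ts", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["prev"] != prev_hash or record["hash"] != expected:
                return False
            prev_hash = record["hash"]
        return True

    append_record({"event": "record created", "id": 42})
    append_record({"event": "correction appended", "id": 42})
    print(verify())                # True
    log[0]["data"]["event"] = "x"  # tamper with history...
    print(verify())                # ...and verification fails

Whether you then need distributed consensus on top of that, or just an append-only table plus independently published checksums, depends on who you are trying to protect the log from.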
There's quite a lot going on in this space because of the obvious benefits, but HN seems very detached from the enterprise world. If you ran a database of land ownership records or patient health data for a whole country, would you really be at ease with only PostgreSQL?
From what I can read, this press release mentions that they have so far reached an agreement to work together on the project, and they are currently _planning_ on using a KSI blockchain. Too early to say if the project will succeed or not.
> If you ran a database of land ownership records or patient health data for a whole country, would you really be at ease with only PostgreSQL?
With proper access controls, a backup system, and proper maintenance, definitely. While the DB engine itself can vary (PostgreSQL, Oracle, whatever Microsoft produces, etc.), SQL-based database engines are already widely used in such scenarios.
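For what it's worth, "append-only" is also something you can enforce at the database level with plain role permissions. A rough sketch with psycopg2 (the connection string, role, and table names are made-up examples):

    import psycopg2

    # Sketch of "proper access controls" in plain PostgreSQL: an application
    # role that may append to and read the records table, but never rewrite
    # history. Connection string, role and table names are made-up examples.
    STATEMENTS = [
        "CREATE ROLE registry_app LOGIN PASSWORD 'change-me'",
        "GRANT SELECT, INSERT ON land_records TO registry_app",
        "REVOKE UPDATE, DELETE, TRUNCATE ON land_records FROM registry_app",
    ]

    with psycopg2.connect("dbname=registry user=postgres") as conn:
        with conn.cursor() as cur:
            for statement in STATEMENTS:
                cur.execute(statement)

That doesn't protect against a malicious superuser, but it does cover the ordinary "nobody quietly rewrites history" requirement.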
That assumes you are dealing with a trustworthy government / corporation. Look at Lebanon: their entire government just resigned overnight. Can the people really trust “proper access controls and maintenance” in such chaos? Would it not be better to have essential records kept track of by the public? Even if you don’t need it now, think of it as an insurance policy.
In most cases the beefiest air cooler plus the lowest tolerable fan curve (meaning no overheating) will do the job just fine, especially if the CPU TDP is in the 35-65 W range. Alternatively, you can limit the CPU's power usage by disabling turbo or using other tricks that force it to run slower but more efficiently.
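On Linux the turbo knob lives in sysfs, but exactly where depends on the CPU frequency driver, so treat the paths in this sketch as assumptions about a typical setup; it also needs root:

    from pathlib import Path

    # Rough sketch of the "disable turbo" trick on Linux (needs root).
    # Which sysfs knob exists depends on the CPU frequency driver:
    #   intel_pstate: .../intel_pstate/no_turbo  (write 1 to disable turbo)
    #   acpi-cpufreq: .../cpufreq/boost          (write 0 to disable boost)
    INTEL_NO_TURBO = Path("/sys/devices/system/cpu/intel_pstate/no_turbo")
    GENERIC_BOOST = Path("/sys/devices/system/cpu/cpufreq/boost")

    def disable_turbo():
        if INTEL_NO_TURBO.exists():
            INTEL_NO_TURBO.write_text("1\n")
        elif GENERIC_BOOST.exists():
            GENERIC_BOOST.write_text("0\n")
        else:
            raise RuntimeError("no known turbo/boost knob on this system")

    if __name__ == "__main__":
        disable_turbo()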
Fan cooling a Raspberry Pi 4 is such an odd thing to do. You are essentially taking one of the pros of the Pi - quiet and reliable operation - and replacing it with a moving part that will not offer too much benefit and will eventually wear out or start having issues.
You can USB or network boot a Pi. Also in my experience over 9 years: just don't use Sandisk cards and you're fine. Samsung (Pro|Evo) branded cards have worked for me.
Network boot adds even more points of failure. USB boot is very attractive, but also very new. I’ll play around with it once it’s out of beta and the setup procedure is established.
I use Samsung Evo cards and haven’t had an issue with them either. My problem is one of confidence. I have no data to suggest that these cards won’t just hard fail at some point in the next five years. Simply doing better than their competitors in the micro SD card space is not good enough.
All of the dead cards in my drawer (where I store them marked with a black cross) are Samsung Evo cards. All my SanDisk Ultras (the real ones, verified on SanDisk's Chinese website) are still running fine after ~3-4 years of usage.
Anyway, the Samsung Evos didn't hard fail for me. They just stopped executing writes (without reporting any errors), so your system seems to work for a while, but after a reboot it's back to where it was previously. It's a pretty stupid way to handle the card's end-of-life condition. I wish Samsung had decided to report write failures instead of simply ignoring the writes and reporting success.
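If you want to catch that failure mode before it bites, a crude probe is to write a block, flush it, drop the page cache, and read it back. It's not bulletproof (the card's own controller can still serve stale data until a power cycle), but it catches the obvious cases. The path and block size below are arbitrary examples, and dropping the page cache needs root:

    import hashlib, os

    # Crude probe for the "writes silently ignored" failure mode: write a
    # random block, fsync it, drop the page cache so the read-back actually
    # hits the card, then compare checksums. Needs root for drop_caches.
    PROBE_PATH = "/var/tmp/sdcard_probe.bin"   # any path on the SD card
    PROBE_SIZE = 4 * 1024 * 1024               # 4 MiB of random data

    def card_keeps_writes(path=PROBE_PATH, size=PROBE_SIZE):
        payload = os.urandom(size)
        expected = hashlib.sha256(payload).hexdigest()

        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())   # push the data past the write-back cache

        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3\n")         # drop page cache so we read from the device

        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()

        os.remove(path)
        return actual == expected

    if __name__ == "__main__":
        print("writes look OK" if card_keeps_writes() else "card is dropping writes")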
Same, I have some units that have been running nonstop for the past 5 years without having to touch them at all, using Evo cards. Also, I don’t think I mounted /var/log as a tmpfs.
I must point out that this does NOT cover ThinkPad T and X series. From the press release:
> Our entire portfolio of ThinkStation and ThinkPad P Series workstations will now be certified via both Red Hat Enterprise Linux and Ubuntu LTS – a long-term, enterprise-stability variant of the popular Ubuntu Linux distribution.
The press release also said "Lenovo is moving to certify the full workstation portfolio for top Linux distributions from Ubuntu® and Red Hat® – every model, every configuration." so I guess the question is: Are only ThinkPad P series laptops considered workstations OR is the plan to bring it to other ThinkPads as well in the long run?
Yes, but note that the P1 workstations are pretty much the same as the X1 Extreme series (but with Nvidia's Quadro "workstation" GPUs rather than Nvidia's "consumer" GPUs).
It looks like the customizable variant of the P1 Gen 2 has different GPU options (including iGPU-only) based on which processor you choose, while the X1 Extreme Gen 2 has a GeForce GTX 1650 in all configurations.
If I had known that when I was picking out an X1 for work, I would have pushed for a P1 with no dedicated GPU instead.
The amount of fine-tuning I had to do for my X1 7th generation with Ubuntu 19.10 was almost unbearable. Sound is still not working properly, and the internal mic is still not working at all.
The only thing I had to do was compile a module for ALSA so I could get high-quality aptX Bluetooth for my wireless headphones, but that’s an IP issue, not something I can blame Fedora or Lenovo for.
My T470p with Fedora is as little of an issue as my work-issued 16” MacBook Pro.
The available power budget and memory bandwidth will be the limiting factors for those new APUs, just like for the current generation. Given the performance requirements that I have seen for VR, I don't think the new APUs will be able to reach that performance level.
For something less demanding it could be OK; I'm not sure where your use case would fit on the performance requirement scale, though.
LowSpecGamer did an experiment with VR and Ryzen APUs, if you want to get an idea of what performance to expect: https://youtu.be/huT6fp7nzwA
Have you tried out ZFS? It's quite good at resolving failures and data corruption and can be configured in all sorts of combinations, depending on your performance and disk failure tolerance requirements.
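To give a concrete idea of the moving parts, here's an illustrative sketch of a two-disk mirror plus a scrub driven from Python; the pool name and device paths are hypothetical, and the zpool commands need root:

    import subprocess

    # Illustrative only: a two-disk ZFS mirror plus a scrub. Pool name and
    # device paths are hypothetical -- substitute your own, and run as root.
    POOL = "tank"
    DISKS = ["/dev/disk/by-id/ata-example-disk-a",
             "/dev/disk/by-id/ata-example-disk-b"]

    def run(*cmd):
        subprocess.run(cmd, check=True)

    def create_mirror_pool():
        # A mirror survives a single-disk failure and can self-heal blocks
        # whose checksums don't match, using the good copy on the other disk.
        run("zpool", "create", POOL, "mirror", *DISKS)

    def scrub_and_report():
        # Scrub walks every block, verifies checksums and repairs what it can.
        run("zpool", "scrub", POOL)
        run("zpool", "status", "-x", POOL)

    if __name__ == "__main__":
        create_mirror_pool()
        scrub_and_report()

A raidz layout trades some performance for better capacity efficiency; a mirror is the simplest place to start.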
I have found myself with a similar issue. Haven't been in the industry for too long (been working for over 3 years), but the constant shitshow that is FE development is really taking a toll on me. No matter what you do, people keep changing things, sometimes for the better, other times for no particular reason other than to work on new shiny things. If what I am writing right now will be thrown out soon or rewritten anyway, then why bother? Same goes for some BE stuff; Go seems to be the new hip thing to do stuff in, repeating the cycle once more.
I have begun trying to find things I truly care about as a countermeasure to this fatigue, and so far it seems to be moving in the direction of helping to repair and refurbish used computer hardware. I know how to do it for my own purposes, it's less mentally taxing, you get to save useful stuff from going to the landfill, and you help cut down on consumption. It probably helps that the results are immediate (broken device -> working device), but going this route will only have a minuscule impact on a larger scale.
> but going this route will only have a minuscule impact on a larger scale.
I think people trick themselves into thinking that "changing the world" is their ticket to happiness. I think looking more locally is the key to happiness. Make your community better in some way and the results are immediate, and you get to be a part of something meaningful and present. If you become the local guy that fixes people's old computer hardware, then you'll be genuinely impacting people's lives for the better.
I remember reading somewhere that nowadays a significant chunk of the instruction set isn't actually implemented directly in hardware, but sort of emulated via CPU microcode that combines existing instructions. Someone correct me if I'm wrong.
Micro-ops are the actual things that can be executed by the hardware. A floating-point FMA unit is going to support a floating point addition, subtraction, fused multiply add (with various intermediate sign twiddles), and integer multiplication and wide multiplication--all without adding much more hardware: you're adding a few xors or muxes to the big, fat multiplier in the middle of it all. Each of these might have distinct micro-ops, or you might be able to separate the processing stages and use a single multiplier micro-op with distinct preprocessing micro-ops for the different instructions. Realistically, though, you are adding new micro-ops, although the overall hardware burden may be light.
The motivation for adding new instructions is generally to get higher performance, so there's going to be pressure to have hardware that executes them well, as opposed to a more naive emulation. But sometimes people add support without making it fast--AMD chips used to (still do? I'm not sure) implement the 256-bit AVX instructions by sending the 128-bit halves through their units in sequence, so they technically supported AVX instructions but didn't see much performance benefit from them.
Back in the high CISC era every instruction would be backed by microcode as a series of instructions like "Load the first argument from memory location X; load the address of the second argument from memory location Y; now use that to get the second argument; store the result in memory location Z;"
Then in the RISC era the instructions being fed to processors more closely matched what was going on inside, though pipelining made that a bit more complicated.
These days a processor will still take the incoming instruction stream and sometimes break up instructions into pieces but it will also sometimes fuse two instructions into a single one like a compare followed by a branch.
That's not really the case. Many complex, obsolete, or non-timing-critical instructions are microcoded, but the large majority of instructions executed by the CPU are not. They are translated to micro-ops, but that's a different thing; normally there is a single micro-op that executes the bulk of the instruction.
A matching socket is not enough though, the motherboard has to support it too. If it's an older one it might not get a BIOS update for the newest generation, and/or the VRMs on cheaper boards might not be able to handle a 16-core chip.
Just an anecdotal observation: I bought a micro-ATX MSI motherboard (a relatively niche product) together with my Ryzen 1700X way back when, and they provide support for the latest Ryzen CPUs via BIOS updates. I'd be surprised if other, more popular boards did not get updates.
So I can run the latest 3000 series with my current first-gen AM4 board, which is great. As you point out though, the 16 and 32 core chips might be too power hungry for some boards.
Is there any confirmation whether this means Ryzen 2 only, or also Ryzen 3? Lots of enthusiasts expect this to include the upcoming Ryzen 3, but the wording is very vague.
Most of what I've seen seems to indicate that Ryzen 3 will be AM4, with Ryzen 4 moving on. I'd be surprised not to see a shift to DDR5 at that point, along with USB3 + Thunderbolt, in that generation in a couple of years.
I would love to see an AM4 3600G and 3700G though, basically a 3600 or 3700 with the addition of a GPU chiplet. There are plenty of people out there that could use more CPU but don't need a discrete GPU for work.
Oh man, this exactly. I'd happily add more heatsink and bump my APU TDP by 20-30 watts, rather than get similar performance by stuffing a whole mountain more hardware in a PCIe slot with its own TDP somewhere around 60-100 watts because of all the bus transceivers and other overhead.
As I understand it, they aim the high-TDP parts at the enthusiast/gamer/workstation market where a discrete GPU is just a given, so there'll probably never be a 3700G. Darn!