I still run an i7-3770k in my homelab (Ivy Bridge, the successor to Sandy Bridge, but largely the same platform with backwards compatibility) and despite being older than dirt it's still a pretty great processor to this day. Not too long ago it was powering my gaming rig and it held up really well to modern titles at 1080p.
The best part is that I got it at a university surplus sale for $60, complete w/ a decent antec case, corsair psu, 16gb of ram and a decent Asus motherboard.
My gaming PC is still a 4770k that I delidded and overclocked the crap out of to like 4.7ghz. Paired with a 3060ti it's pretty much maxed out. It's showing signs of age and I look forward to going team red, but don't want to spend the money yet. 10 years I've gotten out of this chip.
I switched to Intel with Haswell, 4570k I believe. Broke my heart going back to Intel.
I started building computers around 2005, and AMD was the shit. I got caught up in knowing all the spec differences between CPU lines, overclocking potential, and so on. One of my first motherboards was a DFI LANPARTY and there must have been about 70 ways to tweak RAM timing. I've never seen anything like it in modern mobos.
Anyway, K10 or Bulldozer or whatever it was called by AMD was a flop, and Intel just kicked ass. I kept that Haswell probably longer than any other CPU I've used: about 5 years as my primary gaming CPU, before getting a Ryzen 1700.
Manually adjusting timings is somewhat obsolete since the introduction of XMP, but you can still set all the subtimings on modern boards; most people just don't bother (although the gains can be significant, especially if the XMP profile is poor). It's still dozens of parameters.
All my new ASUS boards fortunately have a stealth mode that defeats all onboard RGB. Unfortunately my gaming PC still looks like the SF pride parade because the DIMMs are RGB. My latest workstation though just has regular Ripjaws so it's nice and stealthy.
I’ve never understood the idea of RGB DIMMs. Can they be configured to show useful information about the stick (access, writes, parity errors, bus speed) or is it just for show?
I have a theory that hardware vendors are making Christmas tree-inspired products trying to segment the market between gamers and professionals.
Gaming hardware is typically pretty good in terms of performance/dollar. However, similar computers without all these LEDs marketed as workstations are more profitable for the manufacturers.
Just for show, and it requires software to configure. Fortunately tools like OpenRGB exist, but I'd prefer off by default or a physical switch. That was a nice element of a recent ASRock Taichi card I picked up - it has a physical switch to turn off the RGB.
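For anyone going the software route, here's a minimal sketch of blacking everything out through OpenRGB's SDK server. It assumes the third-party openrgb-python package and that OpenRGB is running with its SDK server enabled (default port 6742); your device list will obviously differ.

```python
# Sketch: turn every detected RGB zone off via OpenRGB's SDK server.
# Assumes the third-party openrgb-python package (pip install openrgb-python)
# and OpenRGB running with its SDK server enabled.
from openrgb import OpenRGBClient
from openrgb.utils import RGBColor

client = OpenRGBClient()                      # connects to 127.0.0.1:6742 by default
for device in client.devices:
    device.set_color(RGBColor(0, 0, 0))       # black == LEDs off
    print(f"Blacked out: {device.name}")
```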
I miss when cases looked really cool and advanced instead of the modern "it's a glass cube with rainbow lights" look. Still think the Cooler Master Cosmos II is the best-looking PC case ever made.
DFI LANPARTY!! What a motherboard! I still remember its physical buttons for reset and power, which were entirely new at the time (and super useful, btw).
I recently replaced my 4770K/3080 Ti rig with a 13600K and can wholeheartedly recommend the upgrade. Such an old CPU is not only holding the GPU back; load times etc. are also like 10-50 times faster now.
Interesting, I have a 3600 too. I thought they weren't overclockable because I read some random forum post saying so. I've been given bad info then? What have you clocked it to? Stock cooler?
I honestly think we're going to see a lot less of this going forward. Recent generations were very hot and both AMD and Intel are now basically designing for the thermal limit (90-100 °C). I doubt there's going to be a lot of 13900Ks or 7950Xes that will last 10 years.
Oh man, I did an upgrade from an i5-2000 series to a Ryzen 5000 series in a machine a while back and it was very worth it. Compared to the Sandy Bridge processor it's super fast for compiling Rust projects.
Worth it to upgrade your storage, as well, to something modern. Makes it even faster for building larger projects.
My 2500k could go up to 4.9 but the juice eventually led to some massive degradation. It lasted me for 8 years until I jumped to a 3700x and now a 5800x3d.
I like the performance but my next CPU is going to be 65w again. These things are hot. But not hot enough to jump on a 7800x3d.
Funny how the best gaming CPU is 65W but the best GPU is about 8 times that. Heat and noise are important.
>I still run an i7-3770k in my homelab ... The best part is that I got it at a university surplus sale for $60
If you don't care about energy efficiency or it's dirt cheap where you live then that's fine, but FWIW a modern 12th gen Intel Alder-Lake N100 NUC clone with quad E-cores[1] will smoke that, at 180 Euros with a TDP of only 10W, and at EU energy prices, the energy savings will make up the extra cost.
I get and support the whole reuse/repair/recycle mantra, but just like with daily driving old cars, at some point it's worth letting them go and upgrading your daily driver to something new and efficient (which you can also reuse/repair/recycle). It's just not worth daily driving such old gas-guzzling tower PCs in 2023, with old CPUs that consume ~95W just to offer less than half the performance of a modern smartphone[2][3], except for bragging rights online over who has the oldest running CPU, or if you don't care about your energy usage.
For example, a modern CPU encoding/decoding a video stream would use way less power thanks to its newer video decoder/encoder hardware IP blocks than a very old CPU going on full blast to SW decode/encode the video stream. The differences can be staggering.
So, while the anecdata of "look at me, I'm still using an ancient CPU and it works great" is fun, there's a balance: how much money and environmental impact do I save by keeping very old hardware in use instead of throwing it away, versus how much do I save by upgrading to something much more efficient and using that for longer under the same reuse/repair/recycle mantra? At some point the balance always tips to the latter.
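For what it's worth, here's a rough way to put numbers on the hardware-vs-software decode point above. This is only a sketch: it measures CPU time (a proxy for power, not a power meter), assumes a Linux box with an ffmpeg build that has VAAPI support and an Intel iGPU at /dev/dri/renderD128, and "sample.mp4" is just a placeholder file name.

```python
# Rough sketch: compare CPU time spent decoding a clip in software vs. with
# VAAPI hardware decode. CPU time is only a proxy for power draw.
import resource
import subprocess

def cpu_seconds(cmd):
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    return (after.ru_utime + after.ru_stime) - (before.ru_utime + before.ru_stime)

sw = cpu_seconds(["ffmpeg", "-i", "sample.mp4", "-f", "null", "-"])
hw = cpu_seconds(["ffmpeg", "-hwaccel", "vaapi",
                  "-hwaccel_device", "/dev/dri/renderD128",
                  "-i", "sample.mp4", "-f", "null", "-"])
print(f"software decode: {sw:.1f} CPU-s, hardware decode: {hw:.1f} CPU-s")
```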
Sandy Bridge consumes that much at full load, but that can also be said of current gen processors (which incidentally usually go even higher).
At idle or low load, Sandy Bridge is plenty economical, just sipping power. These aren't Pentium 4s, which guzzle power like they're draining Texas of all its oil even at idle.
My Sandy Bridge i7 2700K is sipping 10W at idle/low load right now, and that's actually lower than my Alder Lake i7 12700K that's sipping 15W at idle/low load. At high loads they go up to 95W and 150W respectively, which still comes out in favor of Sandy Bridge.
>At idle or low load, Sandy Bridge is plenty economical just sipping power.
"Sipping power" is a bit of an overstatement. Sure, at 35W idle, they were very lower for the time but modern <10nm monolithic die CPUs are way more economically than the older 32nm, 22nm, and 14nm CPUs though. That can't be argued.
A 12th Gen quad core NUC CPU will use at full load what Sandy Bridge quad core uses at idle, while also getting better performance.
Yes, at full load a 12th gen NUC will be more efficient. But if you're just using the old system as a home server, most of the time the CPU is idling and not consuming much more power than a new NUC.
Also, an old system is likely in a case that accepts multiple drives. That NUC is not going to have space for my 3.5" storage drives. Yes, I could invest in NVMe SSDs, but the point of this is low cost and using parts I already have.
I have 10+ old 3.5" HDDs lying around; if one dies I can easily swap it out at zero cost to me and keep my home server going.
>That NUC is not going to have space for my 3.5" storage drives.
We were just comparing CPU power usage. We're going off-road here and splitting hairs trying to cater to the needs of every enthusiast who builds his custom home lab.
Nobody's telling you how you should build your NAS or home lab, but the point about big differences in CPU power consumption across >10 years of progress still stands.
Where do you get this number of 35W from? My Ivy Bridge 3470 idles at 7 watts (CPU only). Most of the energy is consumed by the monitor anyway (20-25 watts), so the energy savings are minuscule.
Idle power consumption is hard to compare because peripherals, configuration and firmware have enormous sway.
However, painting with broad brushes, Sandy Bridge has modern power management and idles comparably to newer CPUs (a lot better than some modern ones even).
(Not the same commenter) HWiNFO64 can report your CPU power consumption. Some digital power supplies can also report power for specific parts, like Corsair iCUE-compatible PSUs.
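On Linux there's also the RAPL interface, which gives a software estimate of package power. A minimal sketch, assuming the intel_rapl powercap driver is present (recent kernels may restrict reading energy_uj to root), and ignoring counter wraparound over such a short interval:

```python
# Sketch: estimate CPU package power from Intel RAPL energy counters on Linux.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0 energy, microjoules

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

start, t0 = read_uj(), time.time()
time.sleep(5)
end, t1 = read_uj(), time.time()

watts = (end - start) / 1e6 / (t1 - t0)  # microjoules -> joules -> watts
print(f"average package power over {t1 - t0:.1f}s: {watts:.1f} W")
```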
Well, if you count the expected power consumption of each component and the limited efficiency of the power supply, plus reports of people idling their 3470-based machines at 12W (full system), yeah, good enough. I mean, what is your point? The claim of 35W idle for an Ivy Bridge CPU alone is ridiculous and should be retracted, IMO.
I've got an Ivy Bridge quad core system in my closet that my UPS states is currently using ~15W of power with a few HDDs in it and an extra NIC as well. 35W idle CPU usage is thoroughly incorrect.
> there's a balance where you need to think how much of the environment and money would I be saving by keeping using very old hardware instead of throwing it away,
... maybe? Think globally. Making CPUs uses an amazing amount of power: making the metallurgical silicon, refining it to polysilicon, refining it again to semiconductor grade, refining it again, and then the whole slice/polish/lithography chain. All of that is still coal powered.
And the chemicals used, and the byproducts created in the chemical manufacture and use, and the amazing quantities of water that are used in the earlier steps of the chain.
Unless you're getting your electricity by burning rainforests, it seems better for the environment to avoid causing new chips to be made, by continuing to use old ones.
It's relatively safe to assume that a $200 CPU contains less than $200 worth of embodied energy. And while the energy used to manufacture the CPU will be cheaper than what a European pays at the outlet, it's not that much cheaper.
$200 is a gross upper limit. It's more likely that a $200 CPU contains $20 of embodied energy. It's highly unlikely that the energy is subsidized by >90%.
>Unless you're getting your electricity by burning rainforests, it seems better for the environment to avoid causing new chips to be made, by continuing to use old ones.
You're also paying for the extra electricity the old CPU uses out of your pocket, plus the extra time it takes to finish certain tasks, lowering your productivity and losing you money.
If you value energy efficiency and productivity, you can upgrade and flip the old CPU on the second-hand market to someone for whom the energy efficiency and productivity gains are less important than using an old cheap CPU instead of buying a new expensive one. Win-win.
The price of a new processor readily covers the energy used to create it (by a significant margin, because all the other stuff that goes into the processor also has to be paid for out of that price).
It's a very safe bet that, assuming you do significant work with the processor, a modern one will more than make up for the energy that went into making it.
Right, but we were talking about the energy embodied in manufacturing a processor. It should be obvious that the price of the processor includes the cost of the energy needed to manufacture it, and therefore there is a clear cap on how much extra energy a processor can consume before it would be obviously better to get a more modern and efficient processor.
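To make that cap concrete, here's a back-of-the-envelope version of the argument. Every number below is an illustrative assumption (EU-ish retail electricity price, an arbitrary 40W delta, 8 hours of use per day), not a measurement:

```python
# Back-of-the-envelope version of the embodied-energy cap argument above.
# All numbers are illustrative assumptions, not measurements.
price_eur          = 200    # new CPU price
retail_kwh_eur     = 0.30   # assumed retail electricity price
embodied_kwh_cap   = price_eur / retail_kwh_eur                  # ~667 kWh upper bound
extra_watts        = 40     # assumed extra draw of the old chip under load
hours_per_day      = 8
extra_kwh_per_year = extra_watts * hours_per_day * 365 / 1000    # ~117 kWh/yr

print(f"embodied-energy cap: {embodied_kwh_cap:.0f} kWh")
print(f"extra use per year : {extra_kwh_per_year:.0f} kWh "
      f"-> cap reached in ~{embodied_kwh_cap / extra_kwh_per_year:.1f} years")
```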
Your points are valid, but we also have to factor in the amount of electricity consumed by a gen-X chip vs a gen-X+1 chip to perform a certain amount of computing over their lifetimes.
Also don't forget the amount of energy required to deliver it to your house, and the fact that that energy is probably more carbon-intensive than the electricity powering your old CPU.
Yeah, it all depends on how much you actually use the machine. My old i7 gaming system runs maybe 7 hours a week on average, and I drive barely 10k kilometers a year with my old ICE car. Upgrading to a new gaming PC and a Tesla would make neither economic nor environmental sense.
Yes, that's exactly why I emphasized "daily driver" like 3-4 times in my argument. If you have a gas-guzzling gaming PC that you use rarely because your daily driver is an M1 or ARM tablet, then that inefficient PC doesn't make a difference.
My point was only that if these old PCs are daily driven as workhorse machines, then the lack of efficiency becomes a real issue.
> For example, a modern CPU encoding/decoding a video stream would use way less power thanks to its newer video decoder/encoder hardware IP blocks than a very old CPU going on full blast to SW decode/encode the video stream. The differences can be staggering.
Is that the case for Teams / Zoom & friends, which are what I'd expect to be the most common video decode/encode uses? I'm expecting my 11th gen i7 laptop to become airborne any day now, with the kind of noise it makes on Teams calls.
Anecdotally, my personal Zen 3 U-series laptop does seem to offload video decoding to the GPU when watching movies, but it still gets fairly warm.
> If you don't care about energy efficiency or it's dirt cheap where you live then that's fine, but FWIW a modern 12th gen Intel Alder-Lake N100 NUC clone with quad E-cores[1] will smoke that
On what do you base this? I've never tried the N100 NUC, but I have tried a 12th gen i5 with 2 P-cores and I forget how many E-cores in a laptop. Compiling Rust, it was comparable to my older laptop with the 11th gen i7. Now, this is a "U" part in an ultrabook, but it's barely faster than a 2013 MBP with an i7 which, if I'm not mistaken, is a 3rd gen Core "H" part.
My i7-3770 system I mentioned in another post here pulls 100 watts from the wall with Prime95 and FurMark running everything at 100%. When actually gaming it's usually under 90 watts.
My old FX-6300 system, though, pulled 250+ watts just existing, lol.
When I was running the stack of SFF business machines I was using vastly more power to cool the room than I was to power the machines.
At $0.12/kWh 1W 24/7/365 costs $1/year. Assuming you don't use sleep mode and/or wake on LAN dropping down from 50W to 10W will save $40/year.
On the environmental virtue side, electronics have embodied energy that can surpass their lifetime electrical consumption, especially if the devices are not always on.
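The dollar figures above fall straight out of the kWh arithmetic; a quick sanity check, assuming the same $0.12/kWh rate:

```python
# The arithmetic above, spelled out.
rate = 0.12                            # $ per kWh
kwh_per_watt_year = 24 * 365 / 1000    # 8.76 kWh per year for a constant 1 W
print(f"1 W around the clock: ${1 * kwh_per_watt_year * rate:.2f}/year")      # ~$1.05
print(f"50 W -> 10 W saves  : ${(50 - 10) * kwh_per_watt_year * rate:.2f}/year")  # ~$42
```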
If you read my comment again, I specifically singled out EU energy prices because I know North America has much cheaper electricity, which is why I specified that if you have cheap energy then it probably won't matter for you.
And Europeans are definitely feeling the CoL increase.
Right. It seems paradoxical to save money by spending money on new machines, versus low-hanging fruit like turning off or sleeping unused nodes, or taking a hard look at your computing footprint and downsizing.
Whether it is a net win for energy consumption depends entirely on what you're using it for and how the entire power profile of the system looks, not just the CPU and not just at full power.
If it uses as much or more power when busy and you're just using it to push more frames in a video game, you're probably not going to singlehandedly bring about the next ice age. Same if the low utilization system power is similar and you spend most of the time messing around on the internet or editing code occasionally punctuated by a compile job, even if that does run faster.
Maybe, but you get the point on energy efficiency. No need to go into the semantics of encoder quality, as that was not the point; it was just an example off the top of my head that resonated with my own anecdotes from when I upgraded from a Core 2 Duo.
Even with SW only workloads, just look at the geekbench scores I posted.
If a 2W smartphone SoC can offer over 2x the performance of a 95W chip, then it's a bit outdated and energy-inefficient, and maybe time to upgrade to something better and more energy efficient, unless you're trying to compete in the "who has the most gas-guzzling terminal/YouTube machine" competition.
Something people often overlook when comparing the power usage of PC components is that you often have to multiply the watt difference by 3x (or 4x) when you factor in cooling (air conditioning, I mean, in regards to how to dissipate that heat generation in your home).
I.e. PC-A might consume 100 watts more than PC-B, but the real difference when it comes to your monthly electric bill is more like 300 to 500 W (in my experience, in a hot area of the country). Those 100 W of heat have to be removed from your home environment to keep whatever ambient temperature you find comfortable.
Edit: it of course works in the opposite direction in terms of your heating bill in the cold months of the year (but without the 3x multiplier).
> is that you often have to multiply the watt difference by 3x (or 4x) when you factor in cooling (air conditioning, I mean, in regards to how to dissipate that heat generation in your home).
Where are you getting these numbers from? Typical central air CoP (Coefficient of Performance) even when it's > 100 F outside is at least 2, typically 3-5. Assuming based on current outdoor temp the CoP is 3, it'll remove 100 W of heat with 33 W of energy. So instead of 100 W -> 300-500 W you're looking at 100 W -> 133 W total.
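In other words, the total wall power is roughly the heat itself plus heat divided by CoP; a quick illustration:

```python
# Total wall power for a given amount of extra PC heat at a given air-conditioner
# CoP: the AC only spends heat / CoP to pump that heat back out of the house.
def total_watts(heat_w, cop):
    return heat_w * (1 + 1 / cop)

for cop in (2, 3, 5):
    print(f"CoP {cop}: 100 W of PC heat costs {total_watts(100, cop):.0f} W total")
```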
Northern Europe, maybe. But A/C is pretty common in the Mediterranean. The only difference is that central A/C is correctly seen as something that makes sense in commercial spaces, not homes.
At homes what is more common are ductless split systems, combined with an architecture where you have windows that can be opened when the weather is not that hot.
It is completely idiotic to cool a whole McMansion when the 3 or 4 people inside are occupying no more than 30% of the space most of the time.
You're mistaken. In most of Europe thermodynamics are still very much applicable; a lack of AC doesn't mean your hot computer stops making your room warmer. The extra heat from an old inefficient computer still ends up in your house, raising your ambient temperature and making you uncomfortable (in the summer) regardless of whether you have AC or not.
Lack of AC doesn't make the "hot computer making my room warm" problem go away; it just makes a stronger case for investing in thermally efficient electronics, because you have no AC to vent the heat out with.
Yes, but we tend to focus on passive cooling rather than active cooling - your comment was all about active cooling and the extra costs thereof, and passive cooling techniques do not imply an extra power cost.
But yes - modern lower power hardware (Apple Silicon, lower power AMD Zen, etc) can be drastically faster than older hardware, at significantly lower power - and massively lower total power consumption for work done.
You can get wins here by making sure even modern hardware is running in the more power-efficient part of its perf/watt curve - for example, ECO mode on modern AMD hardware (105W+ down to 65W, for instance), or reducing the board power limit on GPUs when doing compute (~420W down to 225W for best efficiency on my 3090!). You lose remarkably little performance by doing so.
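As an aside on the GPU power limit point: on NVIDIA cards you can query the current draw and limit with nvidia-smi, and lower the limit with `nvidia-smi -pl <watts>` (needs root, and only works if the board/driver allows it; the 225 W figure is just the example from the comment above). A minimal sketch of the query side:

```python
# Sketch: check an NVIDIA card's current power draw and power limit via nvidia-smi.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,power.draw,power.limit",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True)
for line in out.stdout.strip().splitlines():
    print(line)   # e.g. "NVIDIA GeForce RTX 3090, 212.34 W, 350.00 W"
```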
Exactly. If you have the latest and greatest mobile phone with 8GB (or 12GB) of LPDDR5 on a Snapdragon 8 Gen 2 SoC, you still only have 8.5 Gbps memory bandwidth (max, less in actual phones running it at slower speeds). That's 1 GB/s. An Ivy Bridge i5 3570k has a memory bandwidth of 25 GB/s. Even an ancient Core 2 Duo has 5 GB/s memory bandwidth at stock speeds. Phones are very bad at computing tasks outside of "watching video", "buying things online" and the like.
Something is really off with your calculations. High-end Snapdragon chips run with at least 64-bit buses; combined with the clock speed advantages of LPDDR5, that should put it well ahead of any 128-bit DDR3 system. At least 50GB/s.
The real reason phones are bad at computing tasks is they throttle when put under any sustained load.
16 bits is where you added the data width here, so that would be 52 gigabits a second, no? To get to bytes wouldn't we need to divide by 8?
And I imagine this commenter, talking about 8GB of RAM, is assuming not all 4 channels are populated. So one could imagine two channels being populated: 25.6 Gbit/sec, divided by 8 to get bytes, that's then only 3.2GB/s?
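The numbers in this sub-thread all come from the same peak-bandwidth formula, just with different assumptions about transfer rate, bus width and populated channels. A small illustration (the configurations below are assumptions for the sake of the arithmetic, not a claim about what any particular phone ships with):

```python
# Peak DRAM bandwidth = transfer rate x bus width x channels.
def peak_gbps(mt_per_s, bus_bits, channels=1):
    return mt_per_s * bus_bits * channels / 8 / 1000  # MT/s * bits -> GB/s

print(peak_gbps(6400, 16))      # one 16-bit LPDDR5-6400 channel : 12.8 GB/s
print(peak_gbps(6400, 16, 4))   # a full 64-bit bus (4 channels) : 51.2 GB/s
print(peak_gbps(1600, 64, 2))   # dual-channel DDR3-1600 desktop : 25.6 GB/s
```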
Backwards compatibility to what, MS-DOS? People usually complain about forward compatibility regarding TPM; I've never had any issues with backwards compatibility on recent chips.
How is more money in your pocket virtue signaling?
heh… next to this tower PC is a rack loaded with networking gear and 2x maxed out R720 machines with redundant 1100w power supplies. My electrical bill is around $100/mo.
My home server is still a Gen 8 HP Microserver with a Xeon E3-1230 V2 - so also Ivy Bridge. I wouldn’t mind something more modern and lower power but it seems any alternative with 4+ front drive bays would be quite a bit more expensive - and so it stays.
Ivy Bridge and even Sandy Bridge are still in production at GCE. These things are economically viable long after the (very misguided, extremely outdated) traditional thinking of recycling hardware after 3 years.
IIRC clock for clock even the latest Intel CPUs are only 30-40% faster than Ivy Bridge at best and hover around 20% faster on average. So I don't find it that surprising GCE (or AWS) keeps SB and IB machines around. I imagine VMs on the lower power tiers are far more popular than high power tiers so those older machines still end up with full utilization.
I think the popularity of the ancient machine families on both GCE and EC2 is due to inertia and copypasta and ignorance. People write down "m4.large" and just leave it that way until it stops working. The GCE N1 family that contains SNB and IVB is slower but not cheaper.
That's a good point. I was thinking more about from GCE/AWS side, they'll keep these old machines running because they can shuffle off low priority/power loads to them. But it makes sense from the customer side they're just configuring by rote rather than looking for the best price-performance.
I built a desktop around an i7-2600 (not K) when it released in 2011. Moved it to the basement in 2017, where it's been a server ever since. The 16GB RAM that I loaded it up with is starting to look a bit restrictive, but does its job well.
I will likely replace my current desktop (Ryzen 1800X) when Zen 5 releases, and replace that 2600 with my 1800X, then find someone that would be happy with an old i7 desktop PC.
My 2600 isn't in the same motherboard I originally got it in (or its recalled replacement[0]). I bricked that MSI board when I tried flashing the BIOS from Windows 10. Turns out, that utility didn't support Windows 10, so I'll avoid MSI boards going forward.
I multibox games, so I have quite a few systems that would qualify as "old as dirt".
My secondary system is an SFF HP/Compaq business machine with an i7-3770, 16 GB RAM, an SSD, and a GT 1030 w/ GDDR5. It's my go-to computer for when I need to go somewhere for a while. It's small, easy to pack, and handles 1080p games pretty damned well. I'm sure modern "triple A" games would be upset at the lack of VRAM but I don't really care; I don't find many modern games to be worth my time. I bought the base machine off eBay for $50 a bit over 5 years ago. At one point I had a stack of five SFF HP machines for manual multiboxing in an MMORPG: one i5-2500, 2x i5-3470, and 2x i5-3470S, all connected to USB numpads for easy hotkey usage. The i7-3770 CPU I picked up off eBay for $20 a couple years later.
My third box (still multiboxing) is an old Rosewill case that I put an eBay Dell OptiPlex 7010 mobo in, with an E3-1230 (basically a Xeon i7-3770), 16 GB RAM, an SSD, and a GTX 660. Got the motherboard and CPU off eBay for about $30 total. Benchmarks put the E3 within 1% of my i7-3770. E3-1230 CPUs are going for a fraction of the cost of an i7-3770 these days. I had originally intended to get a new low-end CPU/mobo for this system (I have DDR4 RAM sitting around, etc.) but the prices were out of reach for my budget.
The E3-1230 was $12 shipped, so I'm thinking of buying another to put in my other HP/Compaq SFF machine that has an i5-3470 in it. I call this one my fourth system. It has 16 GB RAM with an SSD in it. The GPU is an HD 7570 1GB card I bought in 2018 for around $12. Funny story: I ended up buying six of those cards for my stack of SFFs because some porch pirate stole one of the cards. I'm sure they were happy to get a card that was worth at most 10 dollars shipped, lol.
All of these systems play almost all my Steam games at 1080p. They also play World of Warcraft, World of Warships, War Thunder, and such at 1080p.
One of my SFF machines ended up at my sister's as a box for her kids to play games on. Cheap durable and easy to clean.
All those systems use $20 SP 512GB SSDs bought off Amazon. An unbelievably cheap price for a drive that has vastly better performance than spinner drives.
If you had told me in the early 2000s that I would be using OEM components to game with, I would have laughed at you. Then again, if you had told me the current state of mainstream gaming, I wouldn't have believed you either.
I'm still running a Sandy Bridge i7-2600k on a workbench/electronics hobby/3D printing area PC. Runs all the applications I use or need for tinkering around. Bought it back in 2011 as part of a gaming PC I built, then transferred the CPU into a mini-ITX motherboard and case so I could mount it under the workbench.
Along with that, I have a 2011 Sandy Bridge Mac mini running Ubuntu Server and about 8 containers (including Cloudflare Tunnel and WireGuard). I'm going to squeeze as much as I can out of them, at least until there is no OS support for either.
I still run the i7-2600k which I purchased new along with the ASUS P8P67 Deluxe motherboard in 2011 (maybe 2012?). It's been overclocked and water cooled from the start, I believe somewhere in the neighborhood of 4.5 or 4.7 GHz, and despite updating my RAM and GPU during that time, I have had practically zero issues whatsoever with the CPU and motherboard. It's certainly not very performant when compared to newer generations of Intel CPU, but for a little bit of gaming here and there I really can't complain!
Me too until about 5 months ago. It's been a great CPU. I'm still using the same machine I bought in 2011 as my workstation, and in 2011 the 3930k was like jumping into the future with its 6-cores. It was definitely worth the cost considering how long it's been useful.
I retired the 3930k CPU 5 months ago when I decided to upgrade the machine to the fastest possible CPU that would go into the socket-2011 motherboard. I found a Xeon E5-2697v2 CPU on eBay for about $40 (currently $30 with free shipping). So it took me from the 6-core/12-thread 3930k up to 12-core/24-threads. It's the final upgrade for this machine, there's really nowhere else for it to go. Maxed out 64GB RAM, RAID10 8-drive array, decent modern video card. I tried to get a TPM module but the BIOS didn't recognize it so it won't install Windows 11 unfortunately, I tried the workarounds but no go.
I don't really need a new modern workstation, but when this system dies (if it dies, it's been a reliable workhorse), I'll get the fastest machine possible at the time and keep that for at least 10 or 15 years.
I've been thinking of upgrading it to an E5-2xxx v2 as well, for years now, heh. But I think I'll just let it slide for maybe another half a year or a year and then retire it altogether. It has really been an amazing run since 2011, and I'm quite positive I could still get a few to five more years out of it.
The E5-2697v2 scores 227 cpumark points faster than an Apple M1 CPU, for the price of $30 on ebay. It's an easy upgrade that makes the system still quite useful, although it runs quite a bit hotter than the M1. But yeah, it's really at EOL now as it won't run the latest OS - I mean it could, but damn Microsoft for locking my system out. I'd keep it going for 5 more years if Windows 10 seemed viable for that long.
Yeah, I'm bummed a lot of these perfectly capable chips are unsupported by Windows 11. Only Intel generation 8 or newer CPUs are officially supported. Installing to an unsupported processor puts you at risk for MS withholding security updates from you in the future.
When MS announced the limitations on CPUs I was kind of gobsmacked. Now I'm just riding windows 10 pro until either all my hardware dies, I get bored with gaming, or an unexpected need arises requiring the change.
My primary machine can run windows 11 but I refuse to "upgrade" out of principle at this point...
Cheap ($35 USD) E5-2650v2 reporting in. Runs all my home server junk (plex, syncthing, etc) and my dev environment (an LXD container that I use via VSCode Remote SSH) without issue.
I ran an eBay-special 3770k at 4.9GHz for years. Aside from occasionally having to hot boot it (that is, boot, then restart immediately so my USB3 ports and NIC would work), it was flawless. Had to keep it on a nice AIO to run at reasonable temps (<80C), but it performed well.
I do miss those days, where you could get lucky and score silicon that'd run happily at 30% higher than its rated speed.
Similarly, I've never bothered to upgrade from my old i7-2600k gaming build from 2011, and will likely use it for many years to come. Paired with a GTX 1070, more RAM, and a larger SSD, it still delivers good enough performance for anything I want (and have time) to play, including AAA games from a few years ago.
I had a 2500m ThinkPad T420 until recently. It could handle everything I did with it, including OBS screen recording and minor audio editing. Half-way modern games were out, but otherwise the device ran great. Sandy Bridge was an incredible turning point for consumer PCs.
I have a Nehalem/Lynnfield i5 750 that still runs like a champ. Won't play modern games without a good GPU but otherwise does everything I ask it to, and is still pretty snappy for its age. The biggest reason it's not my main rig anymore is because of a lack of USB 3 ports.
I had an i5-750 with a Maxwell GPU behind the TV for games with the kids. It worked fine for a lot of things, but it ran out of CPU with Subnautica, which isn't exactly a AAA title, and I wasn't playing it when it first came out but rather after updates had presumably settled down, around 2018+. Overclocking may have given it a longer life, but I generally don't mess with that. If USB3 had been the problem, a USB3 card would have been cheap. At about the same time I added a USB3 card to my main desktop for better SD card read/write speeds; I spent more on a good USB3 SD reader than I did on the USB3 card. Now the gaming PC is up to an i5-6500 with a GTX 1060. It is hanging in there fine for now. I'm only itching to upgrade it for Windows 11 support, but I don't really need Windows 11 on that machine.
I've got an i5 530 in a server in my lab. Since it runs idle for long periods it isn't very power hungry. It is also beefy enough for a lot of different loads.
I've found pretty much everything since the Core 2 Duo to be more than enough for "server" tasks.
Likewise, my i7 920 continues kicking and handling most any random task I throw at it. I retired it as my daily driver in late 2018, but I sure got my mileage out of that thing.
I bought myself and the kids i7-4770/4790 Haswell Dells when they were 5 years old. It’s still my casual/personal/non-work desktop and was the kids machines until this spring when I finally upgraded them so they could play Valorant (TPM-related not performance related).
Can concur. My last personal PC had a 3570k and lasted for a very, very long time. I do wonder if AMD's Ryzen 5 3600 might be in a similar position for this generation.
I had a Sandy Bridge also and after years of gaming, and years in storage, the machine still fired up and played latest games. Haven't since reproduced this.
I would argue Haswell is the real "still works great today" threshold, with its AVX2 support.
eDRAM Broadwell is even more modern. It's got truly beefy integrated graphics and a Ryzen X3D-like cache, but it largely flew under the radar at the time, and Intel never made anything quite like it again.
On a personal machine, it's perfectly reasonable to set mitigations=off - the only place where I run untrusted code is the browser, and browsers had mitigations very early on.
Browser mitigations are about protecting other parts of the browser from JS. Attacks against the kernel are intentionally left out of scope of these mitigations on the assumption that kernel mitigations will handle both JS->kernel as well as any other user process->kernel.
Those cache attacks rely on accurate timing because the method of data exfiltration is faster access to array members that were cached in a speculative access.
By fudging the time (JS doesn't need ns or even µs accuracy) those issues are pretty much solved.
That's not true...at all. There are amplification techniques which work in JS and wasm that amplify a cache eviction into an over 1s delta [1]. Trying to restrict timing as a mitigation has been known to be ineffective since 2019, a bunch of people at Google who worked on the mitigation in Chromium wrote a paper [2] saying as much. Site isolation is the chosen mitigation because it actually works.
I setup an i7 3770k system in 2012; ran it as my primary machine all the way until a couple years ago when I moved to a 5800x.
This is Ivy Bridge, not Sandy, but frankly apart from Intel cheaping out on the TIM by using paste instead of solder it may as well be the same.. arguably worse, since Sandy could generally be overclocked further.
That 3770k system got turned into a Proxmox server; pfsense, TrueNAS Core, few other misc low demand VMs.
Here's the rub: I recently swapped the 3770k for a 3770 because the K model does not support VT-d for some reason. I had to use PCIe passthrough in order to use TrueNAS reliably (NICs and an LSI HBA for the six SATA drives).
Really the biggest limitation for me is core count and RAM; I'm maxed at 32 gigs and ZFS loves RAM. There are other quibbles, like Asus not issuing an updated BIOS to deliver microcode updates for my P8Z77-V, but the Linux kernel has mostly mitigated it in software, to the point that the remaining vulnerabilities are the kind I'm not concerned with.
I'm hoping to build a replacement next year.
I would like to say my 3770k was great for lasting so long, but really it's just that for gaming there was little incentive to upgrade for a long time.
You can install the microcode driver to have Linux provide the updated microcode, if you'd like. (Depending on your distro, it might already be doing it.)
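For reference, a quick way to see which microcode revision the kernel actually loaded; on Debian-based systems (including Proxmox) the updated blobs come from the intel-microcode package and get loaded via the initramfs:

```python
# Quick check of the microcode revision the kernel has loaded, per core (Linux/x86).
revisions = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("microcode"):
            revisions.add(line.split(":")[1].strip())
print("loaded microcode revision(s):", ", ".join(sorted(revisions)))
```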
I'm running Proxmox (Debian underneath, really), which has a meltdown-spectre checker package available. I followed a Debian guide, I believe, for implementing the microcode, and the results are mostly in the green now.
I haven't looked into it but as far as I know I don't think I need to worry about this for the VMs but I could be wrong.
What a coincidence. I too have a proxmox server running on my 3770k with an LSI HBA flashed to IT mode, on the same motherboard. Fortunately it works with Proxmox and that is where I am running/managing ZFS. I never attempted to run TrueNAS w/ pass-thru so didn't encounter that VT-d issue.
A coincidence indeed. It has been a solid platform all these years. Now the question is do you also have it housed in a green Corsair Vengeance C70?
I presume that since you're running an LSI HBA on the same board, you also had to tape over the SMBus pins.
I had debated just running all of the HBA (IT mode as well though I didn't mention it before) connected drives direct on Proxmox with ZFS to side step the VT-d issue but I had some other use cases necessitating PCI-E passthrough that I didn't have a workaround for. The non-k 3770 was cheap enough that I was willing to go for it.
ZFS makes good use of RAM but it actually does not need as much of it as people have been parroting. If you need the RAM for other things you should definitely try reducing ZFS's allocation. (All of this is assuming you are not running dedup)
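If you want to see what the ARC is actually using before tuning it, OpenZFS on Linux exposes the counters in /proc/spl/kstat/zfs/arcstats; a small sketch (the 8 GiB zfs_arc_max value in the comment is just an example):

```python
# Sketch: show how much RAM the ZFS ARC is using and its current ceiling.
# The ceiling can be lowered via the zfs_arc_max module parameter, e.g. a line
# like "options zfs zfs_arc_max=8589934592" in /etc/modprobe.d/zfs.conf (8 GiB).
stats = {}
with open("/proc/spl/kstat/zfs/arcstats") as f:
    for line in f.readlines()[2:]:          # skip the two kstat header lines
        name, _kind, value = line.split()
        stats[name] = int(value)

gib = 1024 ** 3
print(f"ARC size   : {stats['size'] / gib:.1f} GiB")
print(f"ARC ceiling: {stats['c_max'] / gib:.1f} GiB")
```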
Sandy Bridge was the last CPU uArch change where we got a ~20% IPC increase. Since then everything has been spread around ~10%, with the majority of gains leaning towards floating point via new instructions.
I often spend time wondering about the transistor count difference between a modern efficiency core and Sandy Bridge. If you compare the two [1], Sandy Bridge vs Gracemont, it's clear what a difference Gracemont's support for AVX, AVX2, FMA3 and AVX-VNNI instructions makes.
Sandy Bridge was fabbed on 32nm, Gracemont on Intel 7. Would a modern Sandy Bridge on Intel 7 have a similar die size to Gracemont? Or they could have sold me a Sandy Bridge on Intel 7 in a Raspberry Pi-like device.
This was shockingly true. I had originally written (and deleted) a reply that discounted what you said not realizing that UserBenchmark had flipped the CPUs I searched for. The 2500K does legitimately stomp cheap laptops.
Yeah but you're comparing a 95 Watt 2500k Desktop CPU to a 6 Watt N4020 Celeron laptop CPU.
It's really not a fair comparison.
If you want to cite more recent cheap laptop CPUs against a 95 Watt desktop CPU, maybe something like Intel Core i5-8265U @ 1.60GHz would be a better example. It's 1.45x faster than the 2500k and consumes only 15 watts, in a $270 Dell laptop.
You know, that's a great point. I was trying to stick to what I thought was the spirit of the comparison by just grabbing the cheapest laptop I could find, but $270 is well within the "cheap" range, and the instructions per watt are significantly higher on that 8265U.
That's true unless you have a Micro Center nearby. I picked up a new Asus laptop with an i5 1135g7 for $250 about 6 months ago. They are selling a new Lenovo with an i3 1115g4 for $200 currently.
Still rocking my i7-950, air-cooled and overclocked to almost 4 GHz, for the last 14 years without a single major hiccup. Bi-annual cleaning and a re-paste every 2-3 years has probably helped.
It finally showed its age though about 2 years ago when some new games required new instruction sets and couldn't launch at all. A new build is finally in the works.
My i7 2700k is still my Plex server/sometimes gaming rig (mostly game on the switch nowadays). With a 6650xt I was playing Spiderman at 1440p pretty well. I'll update it someday but I honestly don't really have a need to.
I'm using an Alder Lake i7 12700K machine now as my daily driver since last year (2022), but before that I was still using the Sandy Bridge i7 2700K machine I put together in 2012. That's 10-11 years of active service, and I still use it once in a while as a secondary machine with no really noticeable deficiencies. That 2700K is still more than powerful enough for most things I can throw at it today.
I also still have a home server running that's built around a Sandy Bridge i3 2100, too. That CPU's overkill, let alone anything newer, for a server that just handles backups and LAN file sharing.
Sandy Bridge was, in my opinion, the turning point when CPU performance finally met and outpaced common user demands for processing power. In generations prior it was commonplace for CPUs (and thus entire machines) to be upgraded every few years, because performance always became insufficient, but since Sandy Bridge it became practical to just keep using it for the better part of a decade if not more.
I'd make the argument that the Core 2 Duo was the first 10+ year capable chip. For the average person, a Core 2 Duo system from 2007-2008 could easily be used up to 2018. They are now starting to slow down quite a bit on modern web pages, but they're certainly still usable. The biggest thing holding them back is video decoding; I have a couple of C2D MacBooks and the GPU has no hardware video decoding.
Yep, I owned an i5 3470 for several years, and it was very fast for me. Bought a 12500 only because I needed AVX-512 (and it is the best AVX-512 around, with FP16).
1) You need a CPU produced in late 2022, not in 2023. I deliberately checked the date code when I was buying the CPU. 2) You need a motherboard with old microcode. 3) You need to disable microcode updates in your OS, be it Windows or Linux.
It depends on the specific BIOS, but AVX512 can be used on Alder Lake CPUs if you buy a performance cores only processor or the efficiency cores are all disabled.
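A quick way to check whether the CPU, as currently configured and microcoded, exposes AVX-512 to the OS at all (a Linux-only sketch):

```python
# Check which AVX-512 feature flags the kernel reports for this CPU.
with open("/proc/cpuinfo") as f:
    flags = next(line for line in f if line.startswith("flags")).split()
avx512 = sorted(flag for flag in flags if flag.startswith("avx512"))
print("AVX-512 feature flags:", ", ".join(avx512) if avx512 else "none")
```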
I've been curious about the long term changes in performance, both positive (from toolchain and optimizer improvements) and negative (from Spectre / Meltdown and friends).
Chips and Cheese did a similar writeup about Sandy Bridge's AMD contemporary, Bulldozer. I have a Bulldozer, so I've done some testing with interesting results. I'd love to find someone who has a Core i7 2600(K) / 2700(K) to do the corresponding tests on Sandy Bridge.
When you ponder the abstraction upon abstraction that a true "full stack" view of computation entails it induces vertigo.
While somehow it "all works" in the end it is retains the sense of miracle associated with any sufficiently complex engineering.
But I wonder to what extend the complexity and difficulty of redesigning this early layer to meet evolving computational needs is what creates the current sense of stagnation.
I wish I understood this, but lack a lot of the background in CPU architecture beyond a very superficial understanding (ex. what is meant by "branch prediction"). Can anyone recommend me a good source to help me get up to speed on this?
If you have a slow instruction like a load from memory, you could try to execute following instructions that don’t depend on the load (out of order). But what do you do when you run into a conditional branch?
Conditional branches are either taken, and jump somewhere else, or not taken, and just fall through to the next instruction.
Branch prediction (really conditional branch prediction, which is a form of speculative execution - think Spectre) is making that guess. Getting it right makes the code fast. Getting it wrong costs about 10-15 clock cycles and some lost energy. You can throw anything at it, and Intel does: neural-network-like predictors, stack return addresses, recent branch history, … anything.
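To make the "guess" concrete, here's a toy predictor: a single 2-bit saturating counter, the classic textbook scheme (real Intel predictors are far more sophisticated than this). It just shows why highly predictable branches get predicted well and coin-flip branches don't:

```python
# Toy 2-bit saturating-counter branch predictor.
# State 0-1 predicts "not taken", state 2-3 predicts "taken".
import random

def run(pattern, n=10_000):
    state, hits = 2, 0
    for i in range(n):
        taken = pattern(i)
        hits += (state >= 2) == taken                       # was the prediction right?
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return hits / n

print(f"always taken       : {run(lambda i: True):.0%} predicted correctly")
print(f"taken 90% of the time: {run(lambda i: random.random() < 0.9):.0%}")
print(f"coin flip          : {run(lambda i: random.random() < 0.5):.0%}")
```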
You're asking for a course in computer architecture. Buy a textbook. I recommend "Computer Systems" by J. Stanley Warford. We used the fourth edition in our class and I found it very enjoyable. It will cover all the way up from "How do you represent data in binary" to "how do you create an arbitrary digital logic circuit from a spec alone" to "here's a toy computer to play with".
Overcoming the legacy front-side bus architecture, QuickPath Interconnect opened the door for the advances of software-defined networking and software-defined storage.