It goes the other way, too. When Apple put cameras in all of their laptops, the press relentlessly bashed them for wasting BOM on something so useless and expensive. Then the industry realized it was a good idea and followed suit. It was similar with Retina displays -- the term "High Definition" had become synonymous with "good enough" and ground PC monitor advancement to a halt for a decade. Phones were coming out with higher resolutions (not pixel densities, resolutions) than full-size monitors. Then Apple figured out how to market higher resolutions, the press mocked them for wasting money, but word got around that HD might not be the be-all and end-all of display technology, and consumer panel resolutions started to climb again.
Here's a counterexample, a niche that could really use the Apple Bump but hasn't gotten it and probably won't get it: 10 gigabit ethernet. 1GbE became synonymous with "good enough" and got so thoroughly stuck in a rut that now it's very typical to see 1GbE deployed alongside a handful of 10 gigabit USB ports and an NVMe drive that could saturate the sad, old 1GbE port many times over.
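For scale, here's a rough back-of-envelope sketch in Python (the drive speed is an assumed ~3.5 GB/s sequential read for a typical mid-range NVMe drive, not a figure from above):

    # Rough comparison: 1GbE line rate vs. an assumed ~3.5 GB/s NVMe drive.
    gbe_bytes_per_s = 1e9 / 8          # 1 Gb/s line rate ~= 125 MB/s
    nvme_bytes_per_s = 3.5e9           # assumed NVMe sequential read speed
    print(f"1GbE:  {gbe_bytes_per_s / 1e6:.0f} MB/s")
    print(f"NVMe:  {nvme_bytes_per_s / 1e9:.1f} GB/s")
    print(f"ratio: {nvme_bytes_per_s / gbe_bytes_per_s:.0f}x")  # ~28x

Even a modest drive leaves the network port as the bottleneck by more than an order of magnitude.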
Sometimes taking risks results in a Touch Bar or Butterfly Keys. That's just the nature of risks. The only way to have a 100% feature win rate is to limit yourself to copying features that someone else has proven out, but if everyone does that then the industry gets stuck in a rut.
I'm glad Apple exists, even if I don't personally feel the need to fund their experiments.
> 1GbE became synonymous with "good enough" and got so thoroughly stuck in a rut that now it's very typical to see 1GbE deployed alongside a handful of 10 gigabit USB ports and a NVMe drive that could saturate the sad, old 1GbE port many times over.
This has a few reasons:
- 10 GbE was, until quite recently, pretty power intensive, and it is still more expensive and runs hotter than gigabit
- Devices on the LAN, especially those with high bandwidth usage, have become far rarer. A lot has moved to the cloud, and most people's internet connections can't saturate 100 Mbit, let alone gigabit.
- LAN as a whole has become rarer. A lot of people now only use WiFi with their phones or laptops, to the point that most people now have (theoretically) faster WiFi than wired LAN.
Combined, there are few reasons to take on the expense of putting a high-speed ethernet port on a device. Luckily, the introduction of 2.5GbE and 5GbE has decreased the jump a bit, and you see those ports on a few consumer devices now.
10 GbE over copper is still iffy even with CAT6 cabling, which complicates deployments and the user experience. As a result, prosumer devices like recent AMD X570 motherboards and the upcoming Intel Z690-based ones are including 2.5 GbE ports instead. These are rated to work over CAT5E, provide a few hundred MB/s of bandwidth, and need a lot less power on the switch side (something like < 4 W per port seems common), which makes it easier to build low-cost, passively cooled switches around a switching SoC that doesn't need to be terribly sophisticated to hit the latency requirements of 2.5 GbE.
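As a sanity check on that throughput figure, the raw line-rate arithmetic (ignoring framing and protocol overhead) works out like this:

    # Raw line-rate throughput for common Ethernet speeds, ignoring overhead.
    for name, gbps in [("1GbE", 1.0), ("2.5GbE", 2.5), ("5GbE", 5.0), ("10GbE", 10.0)]:
        mb_per_s = gbps * 1e9 / 8 / 1e6
        print(f"{name:>6}: ~{mb_per_s:.0f} MB/s")
    # 2.5GbE lands around 312 MB/s -- "a few hundred MB/s" -- vs. 125 MB/s for 1GbE.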
10GBASE-T is power hungry and unreliable, but dirt-cheap 10GBASE-LR and 25GBASE-LR transceivers work great up to 10km. If only they could figure out how to fit the transceivers into mobile-friendly packaging. But for a workstation they're great.
That's true, I actually run fiber in my home for that reason. I think the problem with fiber, though, is that the technology is pretty unknown to consumers and working with fiber is a lot harder than working with copper cables; fibers take a lot less abuse before breaking, for example. But if someone is going for 10Gbit+ in their home network, I can highly recommend fiber.
I really wish wired LAN would make a comeback. There hasn't been a week without a video conference where someone had internet issues due to WiFi problems. In fact, in my experience, most of the time people talk about issues with their internet, it's really a WiFi issue. But few non-technical (and even technical) people consider connecting devices like TVs or laptops by cable, even if they hardly ever move and the router is close by.
All this talk about how fast WiFi can be has made people think it's all they need. But in reality, building a WiFi network with fast speeds across a whole house while avoiding too much interference from networks around you (especially in inner cities where you can easily have 40+ networks in reach) is more work and more expensive than pulling LAN cables to the right places.
But LAN isn't sexy and no one advertises how fast a copper cable can be; it just doesn't sell products as well as talking about WiFi 6 does.
I am partly with you. But considering you have to actually put wires through walls and install sockets vs. just setting up a WiFi 6 access point, I don't see Ethernet making a huge comeback.
> Combined, there are few reasons to take the expense of putting a high-speed ethernet port on a device. Luckily, the introduction of 2.5GbE and 5GbE has decreased the jump a bit and you see those ports on a few consumer devices now.
I think the only thing driving 2.5/5/10GbE at all is that WiFi Access Points need it.
Compare the cooler for a 2.5GbE card [0] to that of a 10 GbE card. The fact that WiFi (which is what most consumers use) now supports those speeds surely helps, but 2.5GbE is also simply far easier to integrate and power.
I agree that 2.5GbE is easier, and I still think AP backhauls are the primary driver for it. AP makers cannot sell multi-gigabit WiFi APs without a backhaul that can support them.
> 10 GbE was, until quite recently, pretty power intensive and it still is more expensive and hot than gigabit
PCIe 3.0 transceivers were 8Gb/s and supported preemphasis and equalization, closing the sophistication gap with their off-backplane counterparts. How many PCIe3+ transceivers has the average person been running (or leaving idle) for the last decade? These days a typical processor has 16Gb transceivers by the dozens and 10Gb hardened transceivers by the handful. I just counted my 10Gb+ transceivers -- I have 36 and am using... 10 (EDIT: 8/4 more, HDMI is 4x12Gb/s these days).
The reason why 10GbE is expensive has nothing to do with technology, nothing to do with marginal expense, nothing to do with power, and everything to do with market structure. Computer manufacturers don't want to move until modem/router/ap/nas manufacturers move and modem/router/ap/nas manufacturers don't want to move until computer manufacturers move.
These snags don't take much to develop, just "A needs B, B needs A," and bang, the horizontally segmented marketplace is completely immobilized. That's why the market needs vertical players like Apple who can push out A and B at the same time and cut through these snags, or high-margin players like Apple who can deploy A without B and wait for B to catch up. Otherwise these market snags can murder entire product segments, like we've seen happen to LAN.
No, it isn't because of reduced demand. People are recording and editing video more than ever, taking more pictures than ever, streaming more than ever, downloading hard-drive busting games more than ever, and so on. LAN appliances would have eaten a much healthier chunk of this pie if LAN didn't suck so hard, but it does, so here we are.
> Luckily, the introduction of 2.5GbE and 5GbE has decreased the jump a bit
Yaay, PCIe 2.0 speeds. 2003 called, it wants its transceivers back :P
Power is a big differentiator. You need to send 10GbE over 100m (some vendors break the standard and only support 30). Have you ever touched a 10GbE SFP module or the heatsink of a card? They're quite hot, and you need to provide that energy, which is not a problem on a desktop, but a big one on a laptop. If the laptop has RJ45, that is.
> modem/router/ap/nas manufacturers don't want to move until computer manufacturers move
Multi-gigabit ports on modems and routers only make sense once they serve a link that is actually beyond 1 Gbit, which is rare even today. Also, these devices are minimal, and the hardware required to actually route 10 Gbit is a lot more expensive. Even Mikrotik's cheaper offerings today can't do so with many routes or a lot of small packets (no offense to them, their stuff is great and I'm a happy customer; it's still true, though).
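To put rough numbers on that, the worst case for a router is minimum-size packets: a saturated 10 Gbit link means nearly 15 million packets per second to forward. A quick sketch (64-byte frames plus ~20 bytes of preamble and inter-frame gap):

    # Worst-case forwarding rate on a fully loaded 10 Gb/s link.
    link_bps = 10e9
    min_frame_bytes = 64 + 20   # minimum Ethernet frame + preamble/inter-frame gap
    pps = link_bps / 8 / min_frame_bytes
    print(f"~{pps / 1e6:.1f} million packets/s")  # ~14.9 Mpps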
APs are a bit different, as WiFi recently "breached" the Gbit wall (under perfect conditions). But there are already quite a few APs with 2.5 Gbit ports to actually make use of that.
NAS, on the other hand, are a bit held back by the market. Still, higher-end models have offered either 10 Gbit directly or a PCIe slot for a long time now.
> People are recording and editing video more than ever, taking more pictures than ever, streaming more than ever, downloading hard-drive busting games more than ever, and so on. LAN appliances would have eaten a much healthier chunk of this pie if LAN didn't suck so hard, but it does, so here we are.
The professional video editing studios with shared servers are already on 10 Gbit LAN; the hardware has been available for years. Pretty cheap even, if you buy used SFP+ cards. Switching was expensive until recently, but I'd say the number of people who need a 10G link to a lot of computers is even smaller.
And LAN competes with flaky, data-limited, expensive 100 Mbit lines (if you're lucky). 1GbE is beyond awesome compared to that, and yet it lost anyway.
> Yaay, PCIe 2.0 speeds. 2003 called, it wants its transceivers back :P
I'm not happy, either, but it's better to at least go beyond gigabit speed rather than stay stagnant even longer.
> ... a niche that could really use the Apple Bump but hasn't gotten it and probably won't get it: 10 gigabit ethernet
10GbE was a bit of a mistake on several fronts.
We had become used to those 10x iterations with Ethernet, from 10Mb to 100Mb to 1Gb, such that 10Gb seemed like a natural extension. But running that bandwidth over copper remains a significant technical challenge. For a while I was using a Thunderbolt 10GbE controller and it was huge (basically the size of an old 3.5" external HD), and most of it was just a giant heatsink.
In commercial situations, the issues with copper often result in using fiber instead. At that point there are fewer barriers to even higher speeds (eg 25Gb, 40Gb, 100Gb), which make a lot of sense in data centers.
Added to this, there's not a lot of reason to run 10GbE in a home setting or even in many small corporate settings. Even in larger corporate settings, you can go really far with 1GbE using switches, bridges and routers, possibly using higher speed backhaul connection technologies.
What should've happened is what has started to happen in the last few years: interim speeds (eg 2.5Gb and 5Gb). Hopefully these become more widespread and become relatively cheap such that someday they just displace 1GbE naturally.
On top of all of this, Ethernet is an old standard that uses 1500 byte frames. This actually starts to become an issue at 10+ GbE, such that jumbo-frame extensions exist (eg 9000 bytes), but these run into compatibility issues with various hardware and software.
Probably largely because of the 1500 byte frames of Ethernet, the de facto standard for TCP/IP MTU is pretty much 1500/1536 bytes and this has become a self-fulfilling prophecy as more and more infrastructure is deployed that makes this max MTU assumption.
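A quick sketch of why the 1500-byte assumption starts to hurt at higher speeds: the packet rate the host has to sustain scales inversely with frame size (a rough calculation that ignores the ~38 bytes of per-frame Ethernet overhead):

    # Packets per second needed to fill a 10 Gb/s link at different frame sizes.
    link_bps = 10e9
    for frame_bytes in (1500, 9000):
        pps = link_bps / 8 / frame_bytes
        print(f"{frame_bytes} byte frames: ~{pps / 1e6:.2f} million packets/s")
    # ~0.83 Mpps at 1500 bytes vs. ~0.14 Mpps with 9000-byte jumbo frames.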
The scary part? 1GbE is older than I thought. A couple weeks ago I replaced a 1GbE switch (gs524t) at my work and got curious. Said model came out in 2001 or 2002.
Also the parallel port. I remember the drama!