CGTN is a Chinese state-run broadcaster; they themselves chose to use the phrasing “China does thing”. It may be in an effort to present a more unified face for the widely varied research happening in China, as well as showing a more easily-understood headline for Western audiences.
There’s another way to find this out: just open the CGTN homepage and read how they describe government news, domestic news, and international news.
It's more that you don't really get any headlines out of China that aren't from or laundered through Chinese English-language media first. Whereas in the West, universities send their PR pieces to the Western media directly. No normal journalist is trolling Arxiv or Nature or whatever for juicy papers, and certainly not Chinese-language journals: they're just rehashing the media release, usually uncritically. Which is why you hear about every time someone sneezes near a new battery chemistry that "may revolutionise energy storage" but "needs more research into mass production methods" at MIT but almost never at Tsinghua, unless it's done the rounds first and caused a stir in the domain-specific publications.
True, but mostly from rags like the Sun and Express, which are only loosely allied to the concept of journalism in general, let alone technological reporting. Also the Register, but that is deliberately riffing on the tabloid headline tropes.
I believe the same, but also, on paper, China is supposedly communistic, so there should only be one "China" to talk of (and, no, I'm not making some subtle statement about anything else.) Perhaps that's the agenda for CGTN in particular.
Growing up in Poland all we heard in the news was that "American scientists found that..."
The article could mention the relevant institution, but the headline never would, as it would only confuse the reader.
If you think it should be phrased differently, you should tell it to CGTN, China's Beijing-based state-run media. HN merely accurately transcribed their headline, as per the site rules.
I read it as similar to the US espousing how its researchers developed something.
Not really a big deal, though I'd have... you know... linked to the actual paper or maybe mentioned the professor's name more prominently.
I think most stuff I read emphasizes institute and researchers more heavily, but I can see why anyone doing public research might want to expand the scope of credit.
"American scientists discovered that..." really isn't that uncommon though? At least quite common in German news.
I guess when you're inside a country which produces the news, the actual location inside that country matters more than for people outside that country ...
E.g. for the US what's commonly called Americentrism - I guess a similar term exists for China.
It should be noted that this is a technology for producing a semiconductor material with very high performance, but also with very high cost, a cost that is impossible to reduce, because both indium and selenium are among the least abundant elements on Earth (both having an abundance similar to silver, but being much more difficult to mine than silver, because they are very dispersed).
This material will never replace cheap materials, like silicon or silicon carbide, or even gallium nitride, in the bulk of semiconductor devices, e.g. in CPUs and memories, or in power semiconductor devices.
It will be reserved for a few high-speed devices, in special instruments that need high-speed signal processing or in radars or communication devices used in high-frequency bands (obviously these include military applications).
Selenium gets washed down residential shower drains every day in the form of dandruff shampoo. Surely if we can afford that, there's enough to go around for the semiconductor industry?
According to TFA, they have succeeded in producing 2-inch wafers of indium selenide with a low enough density of defects, which nobody had done previously.
The first roadblock in the use of new semiconductor materials is that in the beginning nobody succeeds in making crystals that are both big enough and free enough of defects. For size, being able to make 2-inch wafers is usually the threshold for enabling commercial applications.
The second problem is finding metallization systems that can achieve ohmic contacts and rectifying contacts on the semiconductor crystal, and the third is finding impurities that allow the polarity and the concentration of the charge carriers to be modified over wide enough ranges.
These 2 problems are particularly difficult for wide-bandgap semiconductors, like gallium nitride, but they are unlikely to be difficult for indium selenide, which should behave similarly to zinc selenide or indium phosphide, for which there is much more experience.
There are vast lists of "better than silicon" semiconductor crystals. Most good power mosfets these days are gallium nitride, for example.
Where they all fail is production process, which amounts to transistor size, basically. Sure, you can make a really great, very efficient, measurably improved very much macroscopic transistor. But no one knows how to put a hundred billion of them on a chip, so... basically who cares?
Someday someone will figure it out, maybe. But announcing an exciting new chemistry says little to nothing.
The problem is not the transistor size, but the transistor cost.
It is easy to make transistors as small as on silicon on most other semiconductors, but the cost of the final product would be many times greater.
One important reason is that silicon is made into huge 12-in wafers, while most other semiconductors are made into small 2-inch to 4-inch wafers, like for silicon several decades ago. At each processing step on a silicon production line one machine processes 9 to 36 times more transistors than if another semiconductor were used. Large wafers also waste much less area when making big dies.
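The 9-to-36× figure follows directly from wafer areas; a quick sketch, assuming the standard diameters (12-inch ≈ 300 mm, 4-inch ≈ 100 mm, 2-inch ≈ 50 mm):

```python
# Usable transistor sites scale with wafer area, pi * (d/2)^2,
# so the ratio between two wafer sizes is simply (d1/d2)^2.

def area_ratio(d_large_mm: float, d_small_mm: float) -> float:
    """How many times more area a large wafer has than a small one."""
    return (d_large_mm / d_small_mm) ** 2

# 12-inch (300 mm) silicon wafer vs 4-inch (100 mm) and 2-inch (50 mm) wafers
print(area_ratio(300, 100))  # 9.0  -> lower bound quoted above
print(area_ratio(300, 50))   # 36.0 -> upper bound quoted above
```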
In general, for silicon there exist very big processing machines with very high productivity, while for other materials there is little difference between lab equipment and what can be used for commercial production. An integrated circuit made on a non-silicon material also requires more processing steps and more expensive materials.
In order to keep increasing the performance of CPUs and GPUs, the replacement of silicon with another semiconductor is unavoidable, perhaps a decade from now. However, that will be done only after all other possibilities of improving silicon devices have been completely exhausted, in order to avoid the increase in production costs.
> It is easy to make transistors as small as on silicon on most other semiconductors
Yikes, [citation needed] here. No, it absolutely is not. All the etch and litho chemistry is highly specific to the substrate and dopants. You can't just feed a germanium crystal through an etcher tool in a TSMC fab and get anything but a brick out the other side.
I'm not aware of anyone anywhere doing low-nm lithography on anything but silicon (even in a demo context, or even announcing plans for the capacity), but I'm willing to be educated.
Nobody has done low-nm lithography on integrated circuits on anything but silicon.
The reason is that such integrated circuits could not compete in cost, so developing all the fabrication equipment for them would not be worthwhile.
On the other hand, on special discrete devices, e.g. microwave transistors, and on experimental devices, "low-nm" (which means tens of nm for the most advanced devices) has been done for a long time.
The low fabrication yields, which would be unacceptable for the mass production of integrated circuits, have much less relevance for small and expensive discrete devices and for experimental devices.
This does not appear to be an ingot, as silicon would be prior to being cut, but a film grown on some substrate, itself called a "wafer," so perhaps silicon?
The film itself is one atomic layer in thickness?
I don't know how you would make "wells" that form a FET, either for the source and drain, or for the larger complementary wells of CMOS.
I don't know how advanced the thinking is to do this, or an equivalent.
Many semiconductor materials cannot be grown as ingots that are later cut into wafers, as is done with silicon or germanium; instead, they are grown epitaxially as thin layers on wafers made of other semiconductor or insulator materials that have a compatible crystal structure.
Most gallium nitride devices, like those that are used now in miniature chargers for laptops/smartphones, are made like this.
Using this technique for indium selenide is a standard procedure, not something surprising.
The "wells" are made by doping with various kinds of atoms, which is normally done by ion implantation, i.e. a ion beam inserts the desired impurities into the crystal, at the desired depth.
In very thin devices, like in most modern CMOS technologies, the doped zones no longer look like "wells". For an N-channel MOSFET, you just have from source to drain 3 zones of alternating polarity, n-p-n. The middle zone is surrounded partially or even totally by the gate insulator.
Unfortunately, even though thermally grown silicon dioxide is what enabled the appearance of monolithic integrated circuits and of MOSFET transistors, it eventually had to be replaced.
Silicon dioxide has not been used as the gate insulator in high-performance transistors for around two decades now. The reason is that its dielectric constant is too low, so for very small transistors the gate insulator would have to be too thin: so thin that it is impossible for it not to have holes, and also impossible to prevent electrons from tunneling through it.
Therefore silicon dioxide has been replaced by hafnium dioxide (a part of the hafnium may be substituted with zirconium or rare-earth elements), which has a much higher dielectric constant, and which is also chemically resistant enough to survive the following wafer processing steps.
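The capacitance argument can be made concrete with the equivalent oxide thickness (EOT): a high-k film delivers the same gate capacitance per unit area as a much thinner SiO2 layer, while staying physically thick enough to block tunneling. A minimal sketch, assuming the commonly quoted relative permittivities ε(SiO2) ≈ 3.9 and ε(HfO2) ≈ 20 (literature values for HfO2 vary):

```python
# Gate capacitance per unit area: C = eps0 * eps_r / t.
# EOT is the SiO2 thickness giving the same capacitance as a
# physically thicker high-k film: EOT = t_highk * eps_SiO2 / eps_highk.

EPS_SIO2 = 3.9   # relative permittivity of SiO2
EPS_HFO2 = 20.0  # approximate value for HfO2 (assumption; reports vary)

def eot_nm(t_highk_nm: float, eps_highk: float = EPS_HFO2) -> float:
    """Equivalent oxide thickness of a high-k gate film, in nm."""
    return t_highk_nm * EPS_SIO2 / eps_highk

# A 3 nm HfO2 film behaves capacitively like ~0.585 nm of SiO2,
# far below the thickness where a real SiO2 film would leak badly.
print(eot_nm(3.0))
```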
Because HfO2 cannot be grown in place, but it must be deposited in its entirety, it is much more difficult to ensure that the insulator-semiconductor interface is perfect, but the industry had to solve this problem decades ago, otherwise it would have been impossible to make the transistors smaller.
For semiconductors other than silicon, the gate insulator has always had to be deposited. Because this is hard, metal-insulator-semiconductor FETs not made of silicon or of silicon carbide have only very rarely been used in the past. The transistors that are not made of Si or SiC are typically either Schottky-gate FETs or heterojunction bipolar transistors.
The reserves of indium are extremely small. Any new use that would increase demand would also increase the price. Also that price is for commercially pure indium. After indium is purified enough to be usable in semiconductor devices the price increases many times, possibly much more than 10 times.
Currently, the major consumers of indium are all the screens for monitors, laptops and smartphones, which use indium oxide as a transparent conductor, followed by the LEDs used in lighting and indicators. Gallium-nitride power devices also contain indium, and their use is increasing.
There are no dedicated indium mines. Indium is obtained as a byproduct of the extraction of other metals, primarily from zinc mining, but its concentration in zinc minerals is very small.
In order to produce more indium, one also has to produce more zinc, in an amount several orders of magnitude greater than the amount of indium produced. Once the demand for indium exceeds what is available from current zinc production, further increases in demand will raise the indium price much more steeply.
Apart from indium, among the other chemical elements only the platinum-group metals show such a great mismatch between the amount that would be required by potential applications and the amount that is available on Earth. Selenium and tellurium are also close to them in this respect.
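The "several orders of magnitude" is simple proportionality between the byproduct and its host ore; a sketch, with a purely illustrative (hypothetical) concentration of 50 ppm indium in zinc feedstock:

```python
# If indium occurs at c ppm in zinc feedstock, producing m tonnes of
# indium requires processing roughly m / (c * 1e-6) tonnes of feedstock.

def feedstock_tonnes(indium_tonnes: float, ppm: float) -> float:
    """Tonnes of zinc feedstock needed for a given indium output,
    assuming perfect recovery at a concentration of `ppm`."""
    return indium_tonnes / (ppm * 1e-6)

# Illustrative only: at 50 ppm, one tonne of indium needs ~20,000 tonnes
# of feedstock -- four orders of magnitude more zinc than indium.
print(feedstock_tonnes(1, 50))
```

The exact concentration varies by deposit; the point is that indium output is capped by how much zinc the market wants, not by how much indium anyone wants.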
Well if it's used as a thin film on screens, it wouldn't require much more production to make a few square centimeters more for chips to build a computer to use that screen.
We also ramped production of those screens without some major supply chain problem requiring 10x zinc production.
It's slightly worrying that they talk about its applicability in "smart terminals" rather than personal computers. I hope that's not them saying the quiet part out loud, about the future (or lack thereof) of personal computing.
Personally, I don't care about mass production of SOTA semiconductors. My personal computing resource needs haven't really increased in the last ten years, and I doubt they have for the median person.
I want 10 million dollar factories that can make 10 year old semiconductor chips.
Yeeeeaaaah... that's not how that works. Legacy processes are often difficult or impractical to transport between fabs at the same node, much less fabs at different ones.
GP is saying you get the same functional performance as older chips for less size, power, and cost. Nobody makes main processors in 22nm anymore, for instance. That’s basically what was used in production 10 years ago for processors.
Nobody makes old processors (except for legacy support) because Moore's law has been a thing for 50 years. It has always been cheaper to produce chips at scale with the latest tech. This has justified the creation of 10 billion dollar factories, till recently
Now that the law is close to coming to an end, the economics change. The latest tech provides negligible marginal benefits to the median consumer. So now it is possible to think of commoditizing a single process and making the factories much, much cheaper.
'Main processor' meaning the SoC used in a modern high performance device. Phones, tablets, or computers.
Plenty of lower power or older stuff (RPI included) use older nodes just because that's available. Microcontrollers tend to use higher nodes (22, 40, or 55nm) just because they don't need the super high speed stuff.
Also, the RPI5 uses 16nm, not 22nm. Still not modern, but not unheard of for stuff like SBCs where performance is not particularly important compared to cost.
People do that too. Fab ports to smaller nodes are something that absolutely happens, especially as older legacy nodes close down. It happens all the time.
I suspect there are far smaller mining resources of indium and selenium available on Earth compared to silicon, so I wonder to what extent we can speak about scaling to "mass production"...
Is this also common for the EU/USA? Do we say "UK develops new method for ..." or "Researchers at Cambridge"?
I swear I'm not making any political statements, just wondering why we treat it as a homogeneous entity.