Man, I'm feeling more confident about LK-99 being it. This paper is theoretical, and she finds that particular Cu substitutions onto specific Pb atomic sites are key to enabling a band structure that is usually linked to high-Tc superconductors.
What this means for the more practical minded is that the synthesis of superconducting LK-99 is not trivial and you need to make the appropriate substitutional alloy for this to work.
This is a DFT paper, and a band structure that is usually seen in high Tc superconductors just naturally came out. She also talks about the strong electron-phonon coupling that naturally arose from the structure, which is always necessary for superconductivity.
I am, by far, the most excited I've ever been about this being a RT, ambient pressure superconductor.
If this could be simulated, can you help me understand why we couldn't have used simulation to find promising SC materials to investigate further earlier? Are there just too many permutations to investigate?
It seems to my own naive self that if LK99 is the real deal, we mostly just got lucky finding it.
Not an expert, but it just happens that my lab is full of DFT folks, so I hear a lot about this every week. As people above have already answered the questions, I'll add some extras.
1. Computation cost is large. A single calculation for a small system of ~100 atoms takes about 3 days to 1 week on a supercomputer.
2. Search space is huge. For each composition you can have different atomic (or crystal) structures. And here we are talking about doping, which means introducing impurities into the material. Chemical characteristics differ depending on which atom you swap for the impurity; sometimes you may want to try every possible site (see the sketch after this list).
3. It depends on initial values. Sometimes the initial guess is so bad that the result is totally unusable; then you have to tweak it a little and throw it back at the supercomputer. This cycle might happen a few times for one specific formula and structure.
4. It is not 100% accurate. Often the resulting numbers are off by a few percent or more compared to experimental results, which is a lot. The reason is that the simulation is not full scale; approximations are made here and there to reduce computational cost.
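To make point 2 concrete, here's a toy sketch of how fast the number of doping configurations grows when you choose which host sites to substitute. The site counts are made up purely for illustration:

```python
from math import comb

# Toy example: a supercell with 40 host sites (made-up number) into which we
# want to place a handful of dopant atoms. Each distinct choice of sites is,
# in principle, a separate structure to relax and evaluate.
host_sites = 40
for n_dopants in range(1, 6):
    n_configs = comb(host_sites, n_dopants)
    print(f"{n_dopants} dopants on {host_sites} sites -> {n_configs:,} configurations")

# And that is before considering different compositions, lattice distortions,
# or magnetic orderings, each of which multiplies the count further.
```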
Based on your description, if LK-99 is real, it sounds like there are far more of these materials than zero. If the number of possible combinations is effectively infinite and somebody stumbled onto a working one at random, it stands to reason that the number of superconducting materials out there is very large.
The other thing is that a lot of the results depend entirely on what you put into them. Sounds lovely and obvious, but really isn't. Which basis set would you use for simulating your molecular object of interest? There are hundreds to choose from, all developed for different purposes and under different assumptions (eg "frozen core", Born Oppenheimer, Slater orbitals, etc). It's very much not trivial to know what you need, because what the system does answers that question, and in order to find out what the system does you need to solve it -- for which you need a basis set!
It's hard enough to get out the hyperfine interaction to the right order of magnitude of a simple metal complex, let alone something like superconductivity, which fundamentally is a many, many body problem...
We have been considering exactly this since the start of this year. But there are some major problems that cannot be solved in the short term.
The inorganic crystal structure database (and there is a database with literally this name) is way smaller than what we have for proteins. Also, by nature, a Transformer is hardly useful for crystals because a crystal is repetitive. You don't throw the same sequence at a transformer over and over and hope it will work like magic.
My current understanding is that a Graph Neural Network is perfect for this job, because a graph can exactly describe this kind of repetitive nature of a crystal.
GNNs and graph transformers are the current state of the art methods for this kind of crystal property prediction task. One drawback is that they don’t seem to capture long range structure all that well, and the current generative models (which are really cool) based on GNNs don’t seem to take advantage of symmetry that well
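To make the "repetitive nature" point concrete, here's a minimal sketch of how a periodic crystal becomes a finite graph: nodes are the atoms in one unit cell, and edges are found by also checking periodically shifted copies of the cell. The toy cell and cutoff below are made up for illustration, and no GNN library is involved:

```python
import numpy as np
from itertools import product

# Toy cubic cell with two atoms (lattice and positions are made up).
lattice = np.eye(3) * 4.0                      # 4 Angstrom cubic cell
frac_coords = np.array([[0.0, 0.0, 0.0],
                        [0.5, 0.5, 0.5]])
cart_coords = frac_coords @ lattice
cutoff = 3.5                                   # neighbor cutoff in Angstrom

edges = []
# Compare each atom in the home cell against atoms in the home cell and its
# 26 periodic images; the graph stays finite even though the crystal is infinite.
for i, j in product(range(len(cart_coords)), repeat=2):
    for shift in product((-1, 0, 1), repeat=3):
        if i == j and shift == (0, 0, 0):
            continue                           # skip self-interaction
        image = cart_coords[j] + np.array(shift) @ lattice
        dist = np.linalg.norm(cart_coords[i] - image)
        if dist < cutoff:
            edges.append((i, j, shift, round(dist, 3)))

print(f"{len(cart_coords)} nodes, {len(edges)} periodic edges")
```

A message-passing network then operates on those nodes and edges, so the infinite repetition never has to be unrolled the way a sequence model would need.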
Not so similar, really, but yes, AlphaFold-style generative models could help find realistic structures for a specific composition. However: a) the data is much worse (I would say so, at least...), b) the clever tricks of AlphaFold centered around strings of amino acids don't really apply to particles in a box, and c) the search space might be even larger if you go to interestingly sized systems.
Also, there have been people arguing about the particles-in-a-box situation for a loooong time, and the most promising approach currently is diffusion.
DFT scales horribly so it's phenomenally expensive to run. You have to have some other mechanism for knowing the general atomic layout before entering the DFT realm.
Once you know the atomic positions you can then do little perturbation simulations to model phonon dispersions or ask electron density questions.
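As a rough illustration of "scales horribly": conventional Kohn-Sham DFT is usually quoted as scaling roughly with the cube of the system size, so extrapolating from the days-per-run figure mentioned upthread (the baseline is a rule of thumb, not a benchmark):

```python
# Crude O(N^3) extrapolation; the 3-day baseline for ~100 atoms comes from the
# comment above and is only a rule of thumb, not a measured benchmark.
base_atoms, base_days = 100, 3.0
for n_atoms in (100, 200, 500, 1000):
    est_days = base_days * (n_atoms / base_atoms) ** 3
    print(f"{n_atoms:5d} atoms -> ~{est_days:,.0f} days per single calculation")
```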
Linear scaling DFT is something way more impactful than room-temperature superconductors. What's next? "Hey, I've used my FTL spaceship to verify the material at those friendly aliens' library"?
The fact that there has been no Nobel prize, and that we didn't spend a week around the web arguing "yes, it works!", "no, didn't work for me", "yes, I verified it!", strongly implies that the site is trying to say something different from what we are understanding.
Poking around the website, it seems to be owned by Dassault Systems and available for commercial licensing as part of their "Biovia Material Studio" product. PDF flyer about it here:
I'm not intimately familiar with it; just guessing this is some orbital-free method that they've refined. Nothing new, just another approximation that can't model some effects.
You have to put in the structure and then it's expensive to do the calculation. The space of possible structures is extremely large. If you have candidates then you can run through them, but you can't just random search through trillions of trillions of candidates.
I've worked on the CS side of new material discovery.
The tricky thing is that you don't actually have that much data so ML is not even close to plug and play. If you want to get results you end up needing to pair ML with a lot of theory and some tricky algorithms to help narrow the search space and even then that space is huge.
Progress is being made but I think we're still at least 5-10 years from CS providing a real inflection in materials discovery.
The positive view here is that we are 5-10 years from being able to search for materials computationally using AI. That would potentially be more valuable than a single RTS.
They are doing this sort of thing (that's more of a research institute). The problem is that you are not looking for the compound but for the exact way to manufacture it assuming that the original sample really is superconducting.
The problem is that this stuff is severely nonlinear, and for the raw formula there are not that many degrees of freedom to try. If you get a structure, there is no guarantee it's stable, or accessible by our synthetic techniques.
Obtaining the training data is also likely to be tricky.
Disclaimer: this is not my area of expertise in the slightest.
If we have the ability to computationally determine these things without any experimental data needed, and we know we're looking for a specific band structure, wouldn't we just do an automated search of possible chemistries to find everything producing said band structure?
Then just whittle down that list to the easiest-to-produce and most common materials to test first... what am I missing?
The parameter space for such a search, even with a limited number of candidate materials, is immense. You'd need to guide the search somehow; that band structure might be the one, or it may not be... and every candidate that you flag will have to be synthesized, which may not be all that easy.
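As a caricature of what such a guided search looks like in code, here's a toy skeleton. Everything in it is a hypothetical stand-in: the candidate generator and flatness score are random toy values, whereas in reality each would be an expensive calculation and a hard research problem of its own:

```python
import random

# Toy stand-ins: random numbers instead of real candidate generation,
# electronic-structure calculations, and band-flatness metrics.

def generate_candidates(n):
    return [f"candidate-{i}" for i in range(n)]

def flatness_score(candidate):
    # In reality this would require an expensive band-structure calculation.
    return random.random()

def screen(n_candidates=1000, threshold=0.99):
    # Cost forces you to cap the number of candidates and to set a threshold;
    # everything that passes still has to be synthesized and measured.
    return [c for c in generate_candidates(n_candidates)
            if flatness_score(c) > threshold]

print(len(screen()), "candidates flagged for synthesis out of 1000 screened")
```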
>> If we have the ability to computationally determine these things without any experimental data needed, and we know we're looking for a specific band structure, wouldn't we just do an automated search of possible chemistries to find everything producing said band structure?
Isn't this a plot point in that one Star Trek movie (episode?) where they go back in time and program a current-day computer to do this?
Does anyone know if there's a way to ensure those "particular Cu substitutions" happen at the correct atomic sites? Or, I guess, what's the way forward in terms of synthesizing it?
I'm not an expert in any way, but when I see detailed chemical compositions in an arXiv summary, patents, and multiple publications, it's almost like it's ready to smile at all the scrutiny.
It doesn't reproduce: https://arxiv.org/abs/2307.16802. That doesn't mean there's not something to further investigate, but LK-99, at least as described in the paper, is not it.
It doesn't reproduce in that case, which is a useful data point but may not be the final word. The article linked in this thread suggests why making it may not be all that easy.
I'm not following you. The compound is specified, the process is what is poorly specified and the researchers more or less admit that they do not exactly know what the right method is. This makes it a lot harder to replicate and may well cause a lot of attempts to misfire before someone finds something that works or we give up on the search. What TFA here suggests is that there are some potential complications in manufacturing this that would explain why the original researchers had problems trying to make the compound, the number that I recall is that '1 in 10' tries resulted in a working sample.
That would also be a fantastic way to pull a hoax, because it will result in 10x more effort spent on your hoax. If it turns out that it was a hoax I think the original researchers will find immense gratitude from Pons and Fleischmann for taking over the top spot for the textbook example of bad science. But for now, as far as I can see the jury is still out, and if anything the paper linked here actually improves the chances of it being real a bit more than it is offset in the other direction by a failed replication attempt.
Then why do we have separate names for Graphite and Diamond? Same element, different molecular (crystalline) structures. If CuO25P6Pb9 can exhibit superconducting properties, and reproduction attempts are failing, then there must be a specific isomer that needs to be achieved, the creation of which is not described in the LK-99 paper. I'm just trying to understand how to wade through this stuff, I could be wrong, and I dropped solid state physics because I hated it, but that's how I currently understand the situation.
And just like here that's a function of how the bulk carbon got to be formed, under extreme pressure and temperatures or less pressure and temperature. It's the recipe that makes the difference, not the ingredients.
As far as I understand, we're saying the same thing.
isomer
/'aɪsəmər/
noun
a compound that exists in forms having different arrangements of atoms but the same molecular weight
I'm using isomer to refer to possible different arrangements of "the LK99 compound". I see you're saying allotrope refers to diamond vs graphite (allotrope referring to a single element vs isomer referring to a compound).
That raises a question, what's the name for different crystalline structures of the same compound? Is that still an isomer or something different? I'm out of my element here.
An isomer can have different valences but still have the same overall chemical formula.
So just for an example (I have no idea how to put a tetrahedron in a comment :) ), a chain of C-C-C-C and C-C=C-C (which you likely cannot synthesize) would be isomers, but C-C-C-C in one crystal lattice versus C-C-C-C in another crystal lattice would be allotropes.
The carbon in graphite sits in sheets (hence the possibility of forming graphene), whereas the carbon in a diamond always forms a three-dimensional lattice.
edit: nah, bad example, sorry, I can't find a good way to visualize this in text; the 2nd C-C-C-C chain should have the last C dangling from the 2nd one down.
edit2: the examples are wrong, I should have used a more complex molecule for the isomer, and the carbon can have either three or four other carbons hanging off it for the allotrope. Tricky...
Wow that's a bad one, I never even knew about it, thank you for pointing that out. And yes, he's probably worse, with Pons & Fleischmann I never managed to rule out that they themselves were 'true believers' that had deluded themselves rather than fraudsters. But with this guy there is little doubt.
Oh yeah, he is absolutely fucking wild. There's a great documentary on him, here's part one, the other two parts of the documentary are also on YouTube on the same channel: https://youtu.be/nfDoml-Db64
Defining LK-99 as “the thing you get from following the steps in this paper” and not “the allegedly superconducting material these guys have a sample of” is silly.
You say potato... The paper is fucking vague on all counts, from synthesis to result. "LK-99" at this point is understood to be most likely a mixture of compounds. It is not fully defined, it is not one "thing"; it's a broad proposal that encompasses a family of materials.
Essentially all of my knowledge of materials science comes from reading HN's discussion of this over the last ten days, but I've read repeatedly that these kinds of synthesis processes are not very reliable. Even with well known materials with well known "recipes," someone trained can follow a procedure apparently to the letter and get no result, and then repeat the procedure apparently the same and get a good result.
Semiconductor yield at high-end fabs can be lower than 20%. If you tried that process at a new fab and wound up with a complete failure, would you say "well, that process just doesn't produce any semiconductors", or would you think you might need to refine the implementation of the process?
Ah no, they called the working prototype LK-99, so if it works it is LK-99, and if it does not then it is not. As opposed to what you are saying, that something that works is not LK-99.
Or it's LK-99 as described, 5% of the time... basically they need to tune the process to improve yields. I'm not sure that means it wouldn't be LK-99, if that's all that's going on here.
Even if LK99 isn't the real deal, god has it been an exciting 2 weeks. Though I know absolutely nothing about material science, I have enjoyed the sheer enthusiasm and optimism the scientific community has shown. I feel like I'm part of something unique and special, something which could have only been achieved by the medium of accessible mass communication. The excitement here is palpable. I feel fortunate to be part of this infinitesimally minuscule portion of human history where I can share this moment with so many people.
I have often been sad not to be living in the future, just to have more history to read and to be further along the tech tree, as you say. One of the most interesting things in life is the "story" of life itself. I don't mind not living later, but I'd really like to know what happens!
This paper is by someone from Lawrence Berkeley National Laboratory, who has run some simulations of LK99 and found features that are associated with high temperature superconductors.
In the last paragraph before the acknowledgements, they point to a feature that could make synthesis difficult, then conclude with "Nevertheless, I expect the identification of this new material class to spur on further investigations of doped apatite minerals given these tantalizing theoretical signatures and experimental reports of possible high-TC superconductivity."
(I'm a high school dropout, worked for a physics project once)
"However, substitution on the other Pb(2) does not appear to have such sought-after properties, despite being the lower-energy substitution site. This result hints to the synthesis challenge in obtaining Cu substituted on the appropriate site for obtaining a bulk superconducting sample"
OK I'm starting to actually believe that LK-99 might be the real deal.
It's sort of an amazing time. All the things we projected to be 30 years out are, 40 years later, manifesting. The degree of skepticism is high, as it should be, but the things we knew were achievable, just hard to discover, are rapidly unfolding. What falls next?
(N.b., I know I'm displaying unreasonable hubris and it's still more likely than not an illusion or fabrication, but it certainly feels like a lot of long-term investments are rapidly coming to a head - AI, space, cancer treatments, aging research, EVs, even flying cars and fusion - what a great time to be alive)
I’m currently in a Twitter space with some accounts who know more about this process and this question was answered a few minutes ago. To summarize: no one has yet found a way to “steer” which sites get Cu and which ones don’t. This paper simultaneously makes LK-99 look like the real deal but also points out there may be more, or much more work, to reliably direct the replacement. Someone in the space said “if you find a way to get the Cu to the right site that’s a Nobel Prize”
I don’t think so, not at this point. Their foundational theory and experiments to objectively prove it are more than enough. Anything that follows could be too but it doesn’t diminish what these guys have discovered.
There will be thousands of people working on this now. These guys are the shoulders they all stand on.
I mean, it’s the higher energy site, right? So heat it?
Better yet, do something like a microwave oven tuned to Lead’s resonant frequency to encourage all the sites to be in the higher energy state as the crystal structure is forming.
I'm not an expert on chemistry but it sounds like this would make it ridiculously hard to obtain a high quality sample. Copper can substitute for either lead site; I'm not aware of macroscopic processes that would favor one over the other. Problems like that are usually handled ad hoc. The authors seem to have bumped and shuffled their way there through the darkness.
For context, the preparation of tetrataenite was pursued for decades (first partial success October 2022) even though the structure was well-known and the constituents are just nickel and iron.
Out of curiosity, since this seems like a real inflection point toward trending in that direction, if it becomes increasingly likely that LK-99 or similar material is indeed a high-TC superconductor, what will savvy people be positioning themselves to do? What are good investments? What companies will be started, or what will existing companies be pivoting toward?
Open-ended research grants to anyone with moderate training in experimental science to just throw shit against the wall and try every last possible combination of something, without concern for 'publish or perish' or jockeying for status in academia. Lets get our smartest and most dedicated technical people back in labs rather than off making CRUD apps for 10x academic wages.
If this discovery is true, we just got lucky. Based on the story we know of LK-99 it almost didn't happen, and our current system is not set up to make these kinds of discoveries quickly. Throwing billions at 'just go find stuff that matters' basic research is ultra cheap in comparison to humanity not having a high-tc superconductor.
This has been argued by David Deutsch for a long time and I'm glad to see it being replicated here. People should be free to pursue problems that interest them without fear of not returning results that are not deemed "favourable" to the institution. This will help speed up the creation of new "good explanations" which leads to new knowledge.
* Green energy suddenly becomes way more viable. Megaprojects in the most efficient sites can send energy long-distance and store it with effectively no loss, somewhat mitigating regional variations (especially if we have a high-trust world order where a united global grid is viable). (I read LK99 might have some limitations carrying lots of current but presumably other approaches would do better)
* EVs: improved performance of motors, batteries, charge time, and weight - huge shift for the market. Much safer than most current car batteries too.
* Big breakthrough for computing in the form of fast, cool, and efficient zero-resistance transistors. Step change for cutting-edge component performance, all the cloud hyperscalers would completely revamp their compute. TSMC / ASML probably get huge volumes of new orders.
Obviously the first bet is following the patents. Otherwise, my play would be the industrial companies that build things that build things, like factory automation companies, followed by companies that would see a surge in demand from products incorporating room-temp superconductor technology, like TSMC, ASML (maybe Apple/AWS).
Transmission losses aren't really a big problem for the grid. Cost, geopolitics, and resiliency matters more. I don't expect superconductors to change much here.
Transmission losses aren't a big problem for the grid precisely because we know we will have transmission losses, so we don't do things that will incur unacceptable losses.
The whole "plate a desert with solar power and solve the world energy problems" doesn't work because the desert isn't where the power draw is. Superconductors hypothetically permit the African desert to supply Europe with power.
As this goes from science to engineering, we may find other issues with that plan, beyond the obvious expenses (for example, current limits can be handled with more wires, but if the current limit is sufficiently small that's still a problem: more wires is no solution if the superconducting bundle ends up with a hundred-square-meter cross-section), but superconductors at least put it on the table. Conventional conductors do not.
Transmission losses are a concern because they require building huge, very high-voltage structures.
Superconductors could make transmission lines much more compact, sturdy, weatherproof, and less vulnerable to sabotage. You could run a thick armored cable instead of a set of open-air wires on tall towers.
Superconductors have a limit on the current they can carry before the superconducting phase breaks down. This might put similar limits on the voltages.
I agree that the nature of this is entirely different from transmission losses, but I don't expect 230 V lines carrying tens of thousands of amps. That would probably require an excessive amount of SC material.
You would no longer need high voltages at all. We only use those to cut down on those very transmission losses because we're trying to reduce the amount of current. So you can drop all of the step-up-step-down stuff. On the down side: a short is then practically unlimited in current.
We will still need some step-downs, because current density is limited. But we will need fewer of them, and they won't need the monumental cooling they have now.
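To put rough numbers on why grids bother with high voltage at all, here's a toy calculation (the figures of 100 MW delivered and 10 ohms of line resistance are made up for illustration):

```python
# Toy numbers only: 100 MW delivered through a line with 10 ohms of resistance.
P_delivered = 100e6   # W
R_line = 10.0         # ohms

for V in (400e3, 10e3):                 # transmission voltage
    I = P_delivered / V                 # current needed to deliver that power
    P_loss = I**2 * R_line              # ohmic loss in the line
    print(f"{V/1e3:6.0f} kV -> {I:9,.0f} A, loss {P_loss/1e6:10,.1f} MW")

# At 400 kV the loss is well under 1%; at 10 kV the naive calculation says the
# same wire would dissipate more than the power being delivered, which is why
# grids step voltage up. With zero resistance that trade-off disappears,
# though current-density limits remain.
```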
This seemed likely the case to me as well. I'd be interested in hearing any counterarguments (or better still, actual studies) concerning distribution efficiencies.
The price of electricity is mainly driven by the cost of distribution, not production. Reducing losses in distribution would further increase the cost of the expensive part (upgrading to an SC network) and decrease the cost of the cheap part.
Storage would help, the cheapest production cannot run continuously.
No, exactly not. The cost of infra is dominated by the transmission network and that transmission network is very expensive to build and maintain, a superconducting network would have some massive advantages. Besides, the losses are in the 5 to 7% range for a typical grid which may sound great in theory but is still a function of the distance between the generators and the consumers. So you can't really put those where you want them, you put them where you have to in order to minimize the transmission losses or you'll have to live with larger losses. Superconductors for the grid would give far more freedom in siting and would allow all kinds of neat tricks such as transporting solar power across the planet based on the day/night and seasonal cycle and likewise for wind power depending on where it is currently blowing the most.
Generating costs are a small fraction of the final price of electricity, taxes and transportation are the big ones.
If you just look at superconductors as a replacement for any old piece of wire you're going to miss out on a whole bunch of advantages, it is a clear qualitative difference which enables solutions that are entirely undoable today. Yes, there is HVDC, but the lines are expensive and due to the high voltages involved are not easy to interface to or from. They do have some unique advantages, being DC they allow non-synchronized grids to be connected.
Most superconducting logic families aren't using transistors at all. They use Josephson junctions, which are just two pieces of superconductor separated by a non-superconductor. RSFQ (Rapid Single Flux Quantum) uses millivolt-high, picosecond-long pulses to represent logic 1 and their absence as logic 0, instead of using voltage levels as in CMOS.
Are you aware of any obvious issues with manufacturing Josephson junctions using photolithography? Like, would switching to an LK-99 or similar set us back to the 70s in terms of wafer density/size?
If using that flip-flop, the whole processor will be closer to 100 GHz (typically there are multiple transistors which need to stabilise before you have the result of a computation). But probably those superconductors could enable even faster switches, and maybe we could get 1 THz processors.
I suspect you'd still need very low temperatures to make this work, even with a high-temperature superconductor: low temperatures reduce thermal noise, which may be an issue at such time scales (unless you pump a lot of energy per bit, which means high voltages (limits scaling) or currents/capacitances (also limits scaling)).
Like IDE ribbon cables gave way to sleek SATA cables, we could have another parallel to serial transition from silicon multicore crap to superconducting single core.
No, you'll get superconducting multicore because all that will happen is that the bloat will expand to consume the new cycle budget. This has already happened many times.
I don't think we'd necessarily have to use it for transistors, but we could use it in place of the metal interconnect, which accounts for significant resistive loss.
It's going to be a long time before it's put into use if this paper is correct. While it's exciting that a mechanism has been discovered (possibly), it seems to imply that the current method of synthesizing it is partially luck-based, and not very high quality. Of course, if we do understand how it works, lots of people will put lots of research into making a more reliable process, but it's going to take time. I'm not sure a clear path forward exists.
If it’s shown it’s theoretically possible there’s going to be a gigantic reallocation of resources into productising it and figuring out to make production work. “It’s theoretically possible but hard” is a world of difference away from “unsure if theoretically possible”
I think this is generally true, but also if this is verified it will suddenly have billions of dollars thrown into it overnight... possibly similar to the speed that covid vaccine research was done; there's a TON of money to be made for the first company that can put this discovery to use. This is a once in a lifetime sort of discovery.
That's not that important. What is important is that, if it is true, this is the first member of a new class of superconductors, a whole new family if you will, and once the principles are better understood materials scientists can go about their search in a smaller parameter space, one in which they have proof that at least one set of parameters yields results.
Compared to the steps that have been happening in the last decades this one would be absolutely incredible in terms of temperature range, if I understood it correctly they aren't even sure about the upper limit due to a restriction in their measuring gear.
On one hand, yes, a new family will be discovered. On the other hand, high temperature superconductors like YBCO are very brittle and it does limit the applications. Traditional liquid helium-cooled superconductors still have to be used in many places.
One of the reasons it is so brittle is that it is still very cold, even though it is high temperature for a superconductor. Many materials become brittle when cooled down that far. This is one of the things people hope for with higher-temperature superconductors: that they will be less brittle. But less brittle usually also implies that a material changes shape more easily, and that in turn may affect the superconductivity. For instance, when a large current runs through a superconductor it creates strong magnetic fields, and those strong magnetic fields will actively push against each other, trying to destroy the conductor. A non-rigid superconductor would behave in ways that are not really helpful, for instance by being pushed out of its superconducting domain (which would result in some pretty spectacular fireworks, because suddenly all that power is available to heat up a small segment of the no-longer-superconductor). So there is some chance that all materials that exhibit (useful) superconductivity will end up being somewhat brittle, and will need to be mechanically reinforced.
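To put a number on how hard those fields push: the force per metre between two parallel conductors is mu0 * I1 * I2 / (2 * pi * d). A toy calculation, with made-up but superconductor-scale current and spacing:

```python
from math import pi

MU0 = 4 * pi * 1e-7        # vacuum permeability, N/A^2

# Toy numbers: two parallel conductors 1 cm apart, each carrying 10 kA,
# the kind of current a superconducting cable is meant to handle.
I1 = I2 = 10_000.0         # A
d = 0.01                   # m

force_per_metre = MU0 * I1 * I2 / (2 * pi * d)
print(f"~{force_per_metre:,.0f} N per metre of conductor")   # about 2,000 N/m
```

Two kilonewtons per metre is the sort of load the conductor (or its reinforcement) has to hold without flexing out of its superconducting state.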
Even if it is, we can coat it in epoxy and deal with it. And even if we can't, remember the first semiconductor was germanium. If we have a theoretically understood and practically reproducible RT superconductor, we will very quickly find new, better ones.
I've read that superconducting inductors are handy for making very high-Q filters (no parasitic resistance!) and are even used in places as prosaic as cellular towers (LN2-cooled microstrip structures).
Musicians often use them to generate distortion, which is purely an electrical phenomenon. Germanium components tend to filter high frequencies and don't clip as sharply as silicon, generating tonal effects that can't really be replicated.
But mostly, a lot of early guitar pedals used germanium components, and so they are associated with prestigious historic guitar players.
Here's a video demonstration: silicon first, then halfway through they flip the switches and play the same circuit with germanium components.
It’s not about playing the wave form but creating the wave form.
Guitarists today still use tube amps and germanium transistors (in guitar pedals) for two reasons. The first is that most guitar amps back in the day used tubes (mostly) and the early guitar pedals used Germanium. Guitarists wanted to sound like the earlier musicians that used that technology [1] so they want to use that technology to achieve a certain tone with their instrument. The generation after them wanted to sound like them, so that means old tech for them, too! Repeat until today.
The second reason is that an electric guitar is a combination of a physical and electrical system, and the distortion that is essentially synonymous with electric guitars [2] comes from pushing an amplifier out of its "intended" linear regime [3] into the nonlinear regime where it stops amplifying and starts clipping the signal. The way this nonlinear regime behaves varies with the choice of tubes and transistors, but in general you can't really replicate one with the other. These unique nonlinearities affect both the output sound and, most importantly to me as a player, the way the amplifier responds to my physical technique (e.g., how the sound varies with how hard I hit a string). I have played solid state amps that aim to emulate tube amps, and to me the biggest difference isn't the sound but that physical response. I haven't played the top-of-the-line modeling amps, but this has been my main problem with the practice amps I've tried. As a result, if I'm not playing in my bedroom (pushing a tube amp to distort at apartment-friendly volumes is hard), I play through a tube amp. The differences between silicon and germanium transistors are similar, but more subtle, and I'm someone who owns a lot of pedals and is constantly switching them out to fit my mood.
[1] an interesting counter example is that the Beatles used early solid state amps on at least some albums
[2] Plenty of people play “clean” without distortion, but if you spend time on guitar forums, you see a lot of beginners who ask a question of the form “I just got my first electric guitar, why doesn’t it sound like an electric guitar?”
[3] Early on the goal was to produce high headroom amps that didn’t distort, but this was very challenging. Rock musicians latched on to the distorted sound and then that became a design feature in later amps. However, if you try to make like a 59 Bassman distort, you have to play it loud enough to kill someone. You can also achieve distortion in other ways, e.g. clipping diodes, but that’s not really germane to this discussion.
Well, it's the creation rather than the playback that's important, but you aren't really missing anything. It's not like the various waveshaping functionality of those aren't documented, they're entirely reproducible in software.
This is why most "audiophile" stuff is redundant in my mind, a lot of people seem to act like valves have some sort of magic that we cannot profile and just reproduce in software.
But then again there's nothing wrong with people romanticising stuff as long as it's more "valves are cool because they're retro and nostalgic" over "valves are cool because they're objectively better".
My understanding is that it is due to the lower voltage drop across the base junction. Germanium is 0.3 V vs silicon's 0.7 V, so with germanium you get less "crossover distortion" when the input signal is crossing the zero line.
edit: I understand that typically this is biased out with diodes... but the matching is not perfect, and it is easier to start with half the distortion.
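For anyone curious what that crossover dead band looks like, here's a toy model of an unbiased push-pull stage using the rule-of-thumb 0.3 V / 0.7 V drops from above (it ignores the diode biasing that real circuits use, as noted):

```python
import numpy as np

def push_pull(v_in, v_be):
    """Crude unbiased class-B stage: a dead band of +/- v_be around zero."""
    return np.sign(v_in) * np.maximum(np.abs(v_in) - v_be, 0.0)

t = np.linspace(0, 1, 1000)
v_in = 1.0 * np.sin(2 * np.pi * 2 * t)        # 1 V peak test tone

for name, v_be in (("germanium", 0.3), ("silicon", 0.7)):
    v_out = push_pull(v_in, v_be)
    silent = np.mean(v_out == 0.0)            # fraction of time stuck in the dead band
    print(f"{name}: output is flat-lined {silent:.0%} of the time")
```

With the lower germanium drop, the signal spends far less of each cycle stuck in the dead band, which is the "half the distortion" point above.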
1) This is a simulation result using density functional theory. While a standard method for understanding the electronic structure of materials, it often does not do so accurately when correlations (electronic interactions) are strong. In this kind of context (where strong interactions are expected to be necessary to give something like high temperature superconductivity), what one is looking for from a DFT simulation is an indication of what kind of starting point to extend further and include interactions.
2) What is seen here are features called "flat bands". Essentially, the kinetic energy of the electrons relevant at low energies is only weakly dependent on the (crystal) momentum of the particle. Having lots of different states (different momenta) at similar energy usually means the interactions are more important than in materials where the kinetic energy is larger and more dispersive (depends more strongly on momentum). Here the partially filled d-shells of the Cu atoms appear to make a flat band at low energy. This flat band is partially filled and thus is potentially susceptible to interaction-induced instabilities (see the toy example after this list).
3) Flat bands can come from trivial features of a crystal as well. If you've got isolated atoms far apart enough that their atomic orbitals barely overlap their bands will be flat. Some of this may be at play here since the Cu atoms seem to be quite distant (7-9 Angstroms or so).
4) Flat bands appear in many many kinds of systems (at the level of DFT, even at the level of experiments, etc, etc) and do not necessarily imply superconductivity, let alone high temperature superconductivity. Even if the presence of flat bands is pointing towards stronger and more important interaction effects these interaction effects can stabilize other kinds of order instead (magnetism, charge order, etc).
5) Predicting what instability is realized is hard and can be quite delicate. There are materials where this can be debated (theoretically and sometimes experimentally) for years. Predicting the onset temperature of the order that is produced is hard. I.e. Don't necessarily expect a reliable estimate of the critical temperature from theory.
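To visualise what "flat band" means in point 2, here is a one-dimensional tight-binding toy model with dispersion E(k) = eps0 - 2t cos(ka), where the hopping t sets the bandwidth; a small t gives a band whose energy barely depends on momentum. The numbers are illustrative only, not taken from the paper:

```python
import numpy as np

a = 1.0                                         # lattice spacing (arbitrary units)
k = np.linspace(-np.pi / a, np.pi / a, 201)     # momenta across the first Brillouin zone

# 1D tight-binding dispersion E(k) = -2 t cos(k a); bandwidth = 4 t.
# Toy hoppings chosen purely to contrast "dispersive" with "flat".
for label, t in (("dispersive band", 1.0), ("nearly flat band", 0.02)):
    E = -2.0 * t * np.cos(k * a)
    print(f"{label}: bandwidth = {E.max() - E.min():.2f} (arbitrary energy units)")
```

In the nearly flat case, many momentum states sit at almost the same energy, which is exactly the situation where interactions, rather than kinetic energy, decide what the electrons do.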
> and do not necessarily imply superconductivity, let alone high temperature superconductivity
That's true, but are there superconductors that do not have those flat bands?
If not then it wouldn't be evidence that it is superconducting but it would at least check one more expected property (based on the evidence obtained about superconductors so far).
> That's true, but are there superconductors that do not have those flat bands?
Yes, many. Most (all?) conventional superconductors. High-Tc iron arsenide superconductors discovered ~15 years ago. DFT (without including Hubbard "U" type corrections) for the cuprate high-Tc superconductors also doesn't show flat bands.
Examples that do have flat bands (or similar physics) include the recently discovered twisted bilayer graphene (still very much actively studied), as well as (morally speaking at least) heavy-fermion superconductors (too many to list).
Superconductivity is a phase of matter that can arise in a variety of different ways depending on the details of the underlying physics. So, at least when talking about the microscopic mechanism that stabilizes the superconducting state, there isn't any single theory or one set of predictions/properties.
There's a lot of optimism in this thread, but does DFT (or any theoretical model really) actually have much predictive value in quantum chemistry? I've always gotten the impression that in this field the proof is in the pudding.
There are so many bad DFT papers out there because it's cheap to do DFT compared to growing and measuring samples carefully. DFT is notoriously unreliable as a predictive tool in strongly correlated systems, though when electron correlations are small it works well. I mean, I want this to be true, but I put little stock in DFT that doesn't calculate observables. So yes, you're right.
The prof who taught us computational chemistry during my masters basically said 90% of published results cannot be trusted and most people in this field don't really know what they're doing. Results can look seemingly good and still be way off from reality, even for very simple molecules. This is a crystal lattice. I take DFT and other computational results with a big grain of salt.
GGA-DFT (+ some corrections) as used here seems quite OK to me for this system. For more trust in this, I would like to see similar calculations with other methods to see how similar or different they are. LDA-DFT will most likely not be great (as in most cases), but I would be very interested in some DFT+GW calculations, even though LK99 might not be its strength.
But it isn't used for its predictive value here; it is used to verify that which is already known (or at least, strongly suggested to be known). That's different from coming up with a compound based on some hunch: this is modeling a compound with a known structure to check it for properties consistent with the expectations.
That's radically different from searching for a compound with particular properties, that is a much more error prone process.
Explaining why is valuable. The band gap described in this paper is common to other high temperature superconductors. While I remain skeptical, this gives a glimmer of hope, and if the material is indeed superconducting, analysis like this is useful in further understanding high temperature superconductors. If it's not superconducting, then this research may yet be interesting -- if the analysis is correct, it would be interesting to know what's different.
It's funny to read all those grammatical mistakes in the abstract. They are probably just not native English speakers, but to me it sounds like they were frantically typing the paper as soon as they finally got results after a 20 hour lab marathon and way too much caffeine. :D
What? So you didn't even click the paper being discussed here and just threw in a comment criticizing some entirely different person's mastery of English?
"I don't like the waiters in this restaurant"
"Have you ever eaten here?"
"Oh. No. I mean the other restaurant."
"Which restaurant?"
"I mean one of those other restaurants in town. Don't like 'em. They talk funny."
It isn't the prettiest prose I have ever read, but no obvious outright mistakes stick out to me. It doesn't read any worse than the average HN comment.
Something incredible to note: it took around 5 years from when the transistor was first developed, to when it started to get integrated into consumer goods. LK-99 appears promising (and at the very least, may lead to other tangentially interesting discoveries), and if this is “it”, we could see commercial applications far sooner, especially if the synthesis is relatively straightforward. We couldn’t be on a more exciting timeline.
But that first point-contact transistor, even if it degraded rapidly, actually worked; the challenges were to package it properly and to make it smaller and more reliable. This stuff, assuming it is all true, is more at the level of the first inkling that semiconductor diodes might be a thing. We still have to reach that transistor stage (which would mean we have an expensive way to manufacture a small stretch of usable conductor, say a few cm). Then you can start thinking about high-volume production in any desired length, and commercialization. So from a strict materials science point of view there is still a ton of work to be done even if everything so far turns out to be true. There is a good chance that even if the material isn't superconducting in bulk, small regions of it are (the chances of that are actually higher than that it is all superconducting), and there is still a good chance that they are simply mistaken.
But even if it is just superconducting grains smaller than a millimeter that would already be a massive discovery.
If it has superconducting grains but isn't superconducting in bulk, is it likely that those grains could be separated, oriented, and combined to make a bulk superconductor?
Because I can guess how you'd go about doing the first bit: crush it, put a magnet under it, and scrape off any bits that float...
Even if LK99 turns out to be a dud, it has proven that there's a great need and a great opportunity for more direct communication from scientists about their experiments via the internet: live streams of the synthesis, Twitter threads giving live updates from different teams, etc.
I fully agree that the traditional journal and academic publishing models are having significant negative impact on the conduct of science. I also really like that the Arxiv, open access publishing, open peer review, live science and similar new models represent a refreshing return to less gatekeeping, politics and institutional gamesmanship.
Based on the early successes shown by applying social media platforms like Twitter, Twitch, Discord, etc, to accelerating science, I hope we can evolve better tools and platforms which are equally open but even more suited to the exchange of scientific ideas, results, review and feedback.
I've always felt I love the idea of arXiv as a non-scientist who is interested in following science, but it's so esoteric and hard to follow (the density and format of scientific papers does not help). I would love something that was half arXiv and half Twitter, where papers were natively written for the web in an elegant and cross-linked way, with rich explanations of related topics, interactive graphs with linked datasets, video, jargon explanations, etc.
Most science is way too boring and requires way too much work to reproduce to follow this kind of model. This open-science type of work we are seeing for LK-99 works fantastically when a huge number of scientists are interested in the problem and motivated to study it. Most papers aren't even cited more than a dozen times, if that.
The first, three-author paper was uploaded by someone who wasn't authorized by the others to do so, and since that one was so flawed, the others apparently were forced to quickly upload the six-author version, which was better but still flawed. So it seems the whole thing was intended to happen in a much more orderly manner than it did.
Agree for sure, really love seeing the evidence and discussion in realtime as it shows us a better way to do worldwide science collaboration than traditional publishing. Really speaks to a need for a new medium specific to science's unique characteristics.
One caveat tho is that hype can also distort and there's a limited resource of 'the public's excitement with science' that depletes if it never delivers, so I wouldn't necessarily couple science education/comms with science process so tightly. Most science is lots and lots of boring toil, and I'd hope the $$s flow to the best scientists rather than the ones who tell the most twitter-friendly stories.
I don't want scientists to be turned into influencers or D-grade celebrities having to debase themselves on shows like Joe Rogan in order to get funding or justify their work. Or what happened this week on Twitter where the scientist was obsessed with trying to get above 15m views so he could get monetised.
It is great to learn more about the scientific process in this ad-hoc manner, but I would prefer to let scientists figure out for themselves what works best for them rather than have it driven by the unwashed masses.
There’s a part of me that deeply agrees with you, but I don’t think we need to assume binary extremes here.
LK99 is reinvigorating broader interest in science, and this may be one of the most important things that could possibly happen at a moment in our global history that desperately needs broad support for research and progress. None of this matters if we don’t sort out some pretty major climate/energy problems, and soon.
I agree that it would be problematic for science to become what you describe. What you describe is not some guaranteed outcome of a shift towards more public participation and awareness.
And to be honest, if it does lead to some garbage behavior, that may be a small price to pay for getting the public interested and invested in the process of progress.
e: oh wow holy crap manifold has a lot more liquidity than when I last checked in, sold for an easy profit now that I think it has returned to a more reasonable probability
I really hope this unlocks a class of superconductors and isn't a bizarre oneoff compound, because the EPA of 2023 is not going to let us wire the country with thousands of miles of lead-based ceramic wire.
If this material works out and also turns to be massively toxic, then people thinking the way you do is going to cause a lot of lead poisoning. Not fun.
I have not stated a view on whether the hypothetical material of unknown toxicity should be used or not.
I am making a political claim that bureaucrats will behave in certain, highly predictable ways.
If the material proves to be what it claims to be, it will not be banned. At most, it will be highly regulated, as that will preserve and expand government power.
We have an entire class of amazing fluorocarbon materials, and the EPA didn’t do anything about it. And now it appears that may have been somewhat of a mistake.
Dumb question: why all the fuss? I have checked a bit on the web but I fail to grasp the practical consequences of it (I mean, if LK99 is the real thing). Could someone explain what could be done with that material?
For example, if we can transport electricity with a superconductor over long distances (1000s of km), then what happens? Is it just an incremental progress or is it a huge breakthrough?
Energy efficiency is just one gain. This also potentially unlocks major gains in quantum computing, fusion energy, and traditional computer chip design, allowing another 30 years of Moore's law, plus batteries with zero energy loss leading to ultra-dense batteries and electric aviation.
We'll have only part of that because this is lead based. Lead free regulation is the law of the land in many places and it's getting tighter. We spent decades finding out how absolutely harmful lead is to humans.
Only the industrial sector could claim exceptions but would still need to comply with safe cleanup and disposal.
Unfortunately the problem with electric aviation isn't the storage tech. Although having superconductors around would make the ground infrastructure nicer, if the current density limit works out.
That's part of the problem, as is the fact that batteries don't get lighter through the flight. The energy infrastructure on the ground is the thorny bit, though.
Getting enough energy into enough planes fast enough at enough airports to make a dent. As a guide to the order of magnitude of the problem, consider that a 747 needs roughly 100MW to stay airborne.
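Rough numbers, taking that ~100 MW figure at face value (flight length and turnaround time are assumptions for illustration):

```python
# Back-of-envelope, using the ~100 MW cruise-power figure mentioned above.
cruise_power_mw = 100
flight_hours = 10                        # a long-haul flight
energy_mwh = cruise_power_mw * flight_hours        # 1,000 MWh per flight

turnaround_hours = 1.5                   # time available to recharge at the gate
charge_power_mw = energy_mwh / turnaround_hours     # ~670 MW per aircraft

print(f"Energy per flight: {energy_mwh:,.0f} MWh")
print(f"Charging power needed per aircraft: ~{charge_power_mw:,.0f} MW")
# A busy airport turning around dozens of aircraft at once would need gigawatts
# of delivery capacity; that is the ground-infrastructure problem.
```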
HN has a search function (bottom of the page, not ideal but the search itself works quite well), type in superconductor, then select order by date. That will give you the stories as posted in reverse order.
One high-level, back of napkin example that gets thrown around is that the world would magically get 40% more power due to reduction in transmission losses. The world spends $6,000b a year on energy purposes.
So a $2,400b/year opportunity. And that's just from transmission let alone the many other things it can do.
Theory confirming experiment is quite different from theory predicting new physics. I think the best use of these types of ab initio DFT calculations is to interpret experimental results. Back in grad school, my theoretical chemistry group regularly was able to help clear up confusing experimental findings with these exact type of calculations (in fact using the same program as used in this paper, VASP).
A lot of other researchers close to this type of work have come out to say the same. Lots of materials are like that, simulation is still too approximate etc.
I wonder how one would draw out a wire made of such a material. From the sound of it, the copper atoms have to be in the exact right place in the lattice. Wouldn't naively working with the alloy ruin the superconducting properties? Maybe an actual MatSci person would know.