Certbot has this down to a science. I haven't once had to touch it after setting it up. 6 days doesn't seem like an onerous requirement in light of that.
In an early example of conspiracy theories that would eventually envelop social media, I actually remember internet commenters pointing to the previous generation of these as supposed "proof" that the government was embedding RFID chips in banknotes to track people (following a blog article by Alex Jones): https://news.slashdot.org/story/04/03/02/0535225/do-your-20-...
If I worked for VISA's marketing team I'd want to spread FUD like this and "XX% of dollar bills have cocaine on them; protect your children with a youth MasterCard!"
For those not familiar with using powdered drugs nasally, a credit or debit card is also usually part of the drug-taking toolkit. It's one of those odd occasions when you need both cash and credit.
You usually keep the credit card, though, rather than handing it to someone for a drink 5 minutes later.
Using banknotes is not advisable anyway. If you need to do this shit be smart about it.
There has been legitimate interest in putting RFID chips in high-denomination Euro bills. It's not a wacky conspiracy theory but something actively being considered.
Honestly, I would welcome it in all bills. Cash drawers could become self-accounting, and bill sorters and talliers wouldn't have to rely on magneto-optical sensing.
What never ceases to amaze me about conspiracy theorists is that they keep inventing preposterous “conspiracies” while being totally oblivious to the real world. Why bother putting chips in banknotes when the government can already track everyone in real time through their smartphone?
And phone tracking and recording have been in place since phones went mobile. Sure, nothing smart about them yet, but the BTSes were there, and the NSA was listening at every level.
The CIA/NSA perfected the art of tapping undersea optical cables using special submarines, without interrupting data flows, in the '90s IIRC.
Dating and marriage are way down, politics are increasingly polarized by gender, and the original Tea app was hacked because men were angry about it... sounds a bit like gender war.
Politics is only polarized by gender in the US because US men are ignorantly supporting anti-women politics. This is a very one-sided war on women by conservative men.
It is not just gender wars that are an indicator of health. We have quantitative metrics showing that the more modern a society is, the slower its population grows. Negative population growth is an actuality, especially in Asia. If America didn't have immigration, we'd be going negative too, so immigration is a temporary reprieve. Overall the population of the world is going down, though some countries are still going up (even with no immigration), like India.
The difference between modern society and the past is women have more power.
One theory is that millions of years of evolution conditioned the balance of power to be this way. Women gaining totally equal rights is experimental and new in human civilization in general. Such changes in behavior cause unforeseen side effects even if those changes are morally correct. Slowing population growth is one possibility here.
What changed is technology. Technology has enabled women to gain more power. It made life in nature easier, allowing women to operate on the same level as men: things like greater strength and speed were no longer relevant once manual farming and manual hunting were no longer part of human life.
Once this happened, the only barrier left was culture, and women fought for their rights and overcame it.
But the behavioral changes it enables, like the Tea app or the TeaOnHer app or gender wars, are something we've never seen before.
The population is crashing and we don't know why. There is a correlation between how modern a society is and how slowly its population grows. But is it causal?
Is it because women have changed their behavior so much? Although there is a correlation, we don't know for sure.
Qualitatively we do know that women date up. They want to date men who are more powerful and men they respect. But when they have greater choice in getting what they want (no arranged marriages in modern society) and when their own rising power makes the pool of available men who are more powerful than them much smaller… it seems that a possible logical conclusion is that more women will be willingly single and the population will shrink.
So humanity faces a dilemma: do we go with biology, or do we go with what's morally right? Look at India. Women there still lack a huge amount of power, but the population is expanding. The conditions women suffer under are horrible; they lack many freedoms, and it's not morally right. So it's a hard question, not only to answer but to face head-on.
There is another freedom modern society enables that contributes to the problem as well. It's a freedom for both men and women: the freedom to fuck without having kids. Birth control. Another experiment of modern society.
China just needs to ban all forms of birth control and abortion, and I'm very sure that would solve the population problem. But of course doing so abolishes what should morally be a universal human right.
I know tons of people disagree with me, and that's fine. I posted here because I want to hear that disagreement and discuss it. I don't want to start a gender war, and the only way we can avoid that is to take stuff like this impartially. We need the freedom to criticize either gender, men and women, and not take it personally, because negative generalizations of either gender can be real, and those generalizations may well be causing macro-level impacts on society at large.
We have enough people. We would be better served by maintaining or even decreasing population. The fact that the economic health in the short term relies on unlimited growth doesn't mean the earth can sustain it long term.
It's not black and white. Unlimited growth is bad, but unchecked negative population growth means the end of modern civilization as we know it.
Right now we have negative population growth, and it is not controlled negative growth. We did not intend for this to happen, nor does anyone know how to stop it. So, logically speaking: if the background context and status quo remain the same, expect this trend to continue past criticality.
Even before that, imagine not being able to find a job, product prices rising, hardship, starvation. Ever watch the movie Children of Men?
Imagine not being able to find a job because there are not enough people? Seems unlikely. I have watched the movie (and read the book) but I don't think that's a good place to look for economic analysis.
Yes, because there aren't enough people to run the systems that create jobs.
For example, if an airline can't hire enough people to maintain a plane, the plane can no longer fly, and everyone whose job depends on that plane no longer has one.
There's a small window where it's easier to find a job, but eventually that blows past criticality and jobs start disappearing faster than available people.
You could just train people on the more critical jobs? No; I would prefer not to get into a detailed discussion of how long training for aircraft maintenance in particular takes, or why anyone would wait until their back was against the wall to start reorienting around that need, and would rather address the subject from a higher statistical vantage point.
OK, so what? Some industries are more essential than others. I'd be more receptive if you leavened the alarmism with links to some statistical references or economic projections. I personally think much smaller populations (like globally 50% of current) would be a good thing for both the environment and humanity in general.
You don't need scientific evidence for everything. These are logical and self-evident conclusions. In fact, your own conclusion that declining population is good isn't evidence-based either; you derived it from common sense, but the missing part is the nuance.
Either way, Japan, China, and Korea are all in panic mode right now. This is not some speculative issue I'm making up; it's deadly real, and there is tons of evidence. Look it up. You're only treating it as speculative because you're not as informed about the issue.
It is. Historically, economic health is associated with growth, and population growth is part of economic growth.
Additionally, the trendline of negative population growth ends at zero population. The conclusion of what's happening is the end of civilization itself. It may be that we're just oscillating, but we can't know for sure.
I'm not sure you're aware of the criticality of negative population growth that japan/china/korea are all facing.
Thus it is reasonable to say it is unhealthy, but it is also reasonable to say that it is possibly natural, as unlimited growth also leads to an end.
Can you not call people loonies? It's this type of attack and personalization that adds barriers to actual critical debate.
Assuming negative population growth ends in a total world population of zero is as insane as thinking positive population growth ends in people stacked ten deep in Antarctica.
It seems that way, and of course it likely won't get that extreme, but the scenarios aren't symmetric.
Extinction is a real phenomenon. There have been many extinctions in your lifetime. It is directly a physical possibility, both from a logical standpoint and from an evidence-based standpoint. This isn't just math; it's reality.
People stacked 10 deep is not a physically possible scenario. There aren’t enough resources on earth to sustain that level of population.
> Historically health in economics is associated with growth and Population growth is part of economic growth.
If you measure "health" by growth, then you'll get an association between "health" and growth. This means nothing.
You haven't even tried to give any other definition of "health".
> Additionally the trendline of negative population growth is 0 population.
... and any positive growth tends toward infinity. Furthermore any fixed positive growth rate ends with a spherical mass of flesh expanding at the speed of light, and that happens in finite time. After that you have to slow your growth to cubic, though.
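Just for fun, here's that arithmetic as a toy calculation. Every number in it (the growth rate, per-person mass, flesh density) is invented purely for illustration:

```python
import math

# Toy numbers, all invented for illustration: when does fixed exponential
# growth outrun the cubically-growing sphere that light speed lets us fill?
M0 = 8e9 * 70.0        # current human biomass, kg (~8 billion people @ 70 kg)
r = 0.01               # fixed 1%/year growth rate
c = 3.0e8              # speed of light, m/s
year = 3.15e7          # seconds per year
rho = 1000.0           # density of flesh, roughly water, kg/m^3

t = 1
while M0 * math.exp(r * t) < rho * (4 / 3) * math.pi * (c * year * t) ** 3:
    t += 1
print(f"exponential growth beats the light-sphere after ~{t:,} years")
# prints something on the order of ~12,000 years, even at a modest 1%/year
```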
Neither projection will actually come true. Neither will even come close.
You can't extrapolate anything forever. Furthermore, there are obvious mechanistic reasons to expect the "demographic transition" to be a blip that lasts at most a few centuries. Suggesting that it'll lead to human extinction is, um, loony.
And you haven't said why zero population would even be a problem. If everybody at every step along the way chooses not to reproduce, that means everybody got what they wanted. Well, OK, you may not have gotten what you wanted with respect to other people's reproduction, but you know what? That's none of your business.
> I'm not sure you're aware of the criticality of negative population growth that japan/china/korea are all facing.
I'm aware of all the people who think it's critical. I do not find their claims very compelling.
The best they seem to be able to come up with is that there aren't, or won't be, enough young people to do the work to support all the old people. That probably isn't even true, given that it's a global economy, and we're also on track to replace all human labor, long before there's a global population "shortage".
But if it is true, well, who are they that other people should exist to work for them?
Most of the other reasons for claiming things like that are "critical" tend to come down to nationalism, which is anti-compelling.
> Can you not call anyone a loonie? It’s this type of attack and personalization that adds barriers to actual critical debate.
This was a thread about privacy issues in a phone app. When you drag in your personal obsessions in the context of something totally unrelated, people are going to call you a loony.
TFR (total fertility rate) should be considered the basis of economics rather than GDP. It's no longer difficult to imagine a society in which GDP grows year over year through an automated economy of AI and robotics, while the humans who are supposedly consuming its goods and services are so miserable they can't even be bothered to reproduce themselves.
The genetic basis of sexual reproduction means that over a long enough time span the people who are happy not having kids become irrelevant because they no longer exist.
I find this hand-wavy "I am smart and you are dumb" argument to be lazy and too common on HN. What if I told you I've taken intro to evolution? What if I told you I'm an expert in human evolution? What would you have to say then?
I would be very surprised if a paleoanthropologist said that individual selection was the only thing that mattered. A geneticist maybe, but I'd look at them funny. The only ones I can imagine saying that earnestly are all like 60+ years old.
I'd say you sound like a bullshit artist, because you are just making assertions rather than providing people with information that supports your assertions (based on this whole sub-thread).
So? They lived happy lives, that's all that matters. We're all irrelevant eventually due to death. Over a long enough time period, the Earth is cooked by the Sun and this all is gone. Enjoy the ride, in a century you'll be long forgotten.
Not sure what the point of this question is. If what you're supposing had any truth to it, then this is equivalent to wanting humanity to become extinct. I will concede that many people are dim enough that they can't follow the logic through and therefore don't realize it's equivalent, but there is a minimum number of children that must be born or our species is gone someday (soon).
If you somehow hold the contradictory views that you both are unhappy with children and do not want them, yet wish humanity to continue indefinitely, then what this says is that you're some sort of parasite who does not want to contribute your fair share to the continuation of the species: let someone else do the work, you'll just benefit from it. At minimum, if such people are to be tolerated, they must come to understand that they should have little say in policy, especially policies dealing with the future.
There's a great podcast about the old hacks and warez for AOL that interviews the developers, hackers, and AOL employees from that era: https://aolunderground.com/
It stops +++ATH0 in an IP packet (such as in a ping request, a web page, etc.) from hanging up the modem, by requiring a delay (the guard time) between the escape sequence (+++) and the ATH0 command.
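For anyone curious what that looks like in logic terms, here's a toy sketch (my own illustration, not actual Hayes firmware; real modems also bound the spacing between the plus signs, which I'm skipping):

```python
# Guard-time escape detection: "+++" only switches the modem to command
# mode if it is surrounded by at least GUARD seconds of line silence.
GUARD = 1.0

class EscapeDetector:
    def __init__(self):
        self.last_byte_at = 0.0
        self.plus_run = 0      # consecutive '+' seen after silence
        self.armed_at = None   # time the third '+' arrived

    def on_byte(self, b: str, now: float) -> None:
        silent_before = (now - self.last_byte_at) >= GUARD
        if b == '+' and (self.plus_run > 0 or silent_before):
            self.plus_run += 1
            self.armed_at = now if self.plus_run == 3 else None
        else:
            # Any other byte (e.g. the 'A' of ATH0 arriving immediately)
            # resets the sequence, which is exactly why an inline
            # "+++ATH0" in packet data never hangs up the modem.
            self.plus_run = 0
            self.armed_at = None
        self.last_byte_at = now

    def escaped(self, now: float) -> bool:
        # Command mode only after GUARD seconds of silence follow "+++".
        return self.armed_at is not None and (now - self.armed_at) >= GUARD
```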
There is a project called SCALE that allows building CUDA code natively for AMD GPUs.
It is designed as a drop-in replacement for Nvidia CUDA, and it is free for personal and educational use.
There are still many things that need implementing, the most important being cuDNN and the CUDA Graph API, but in my opinion the list of things supported now is already quite impressive (and keeps improving): https://github.com/spectral-compute/scale-validation/tree/ma...
All of Ollama and Stable Diffusion based stuff now works on my AMD cards. Maybe it’s different if you want to actually train things, but I have no issues running anything that fits in memory any more.
> But when we switch to longer context, we see something interesting happen. WMMA + FA basically loses no performance at this longer context length!
> Vulkan + FA still has better pp but tg is significantly lower. More data points would be better, but seems like Vulkan performance may continue to decrease as context extends while the HIP+rocWMMA backend should perform better.
> (What is bad is that basically every single model has a different optimal backend, and most of them have different optimal backends for pp (handling context) vs tg (new text)).
Anyway, for me, the greatest thing about the Strix Halo + llama.cpp combo is that you can throw one or more eGPUs into the mix, as echoed by the Level1Techs video (https://youtu.be/ziZDzrDI7AM?t=485), which should help a lot with pp performance.
In practical generative AI workflows (LLMs), I think AMD Max+ 395 chips with unified memory are as good as Mac Studio or MacBook Pro configurations at handling big models locally and supporting fast inference speeds (though top-end Apple silicon (M4 Max, Studio Ultra) can reach 546 GB/s memory bandwidth, while the AMD unified-memory system is around 256 GB/s). For inference, I think either will work fine. For everything else, I think the CUDA ecosystem is a better bet (correct me if I'm wrong).
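A rough back-of-envelope for why bandwidth is the number that matters here (my own toy numbers, not a benchmark; the 40 GB model size is an assumption, roughly a ~70B model at ~4-bit quantization):

```python
# Batch-1 decode is roughly memory-bandwidth-bound: every generated token
# reads (approximately) all of the loaded weights, so the upper bound is
# tokens/sec ~= bandwidth / model size in bytes.
def tokens_per_sec(bandwidth_gbs: float, model_gb: float) -> float:
    return bandwidth_gbs / model_gb

model_gb = 40.0  # hypothetical: ~70B parameters at ~4-bit quantization

for name, bw in [("AMD Max+ 395 (~256 GB/s)", 256.0),
                 ("Apple M4 Max / Ultra-class (~546 GB/s)", 546.0)]:
    print(f"{name}: ~{tokens_per_sec(bw, model_gb):.1f} tok/s upper bound")
# -> ~6.4 vs ~13.7 tok/s: both usable, Apple roughly 2x on decode, and
#    neither bandwidth figure helps prompt processing, which is compute-bound.
```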
My impression is the same. To train anything you just need CUDA GPUs. For inference, I think AMD and Apple M chips are getting better and better.
For inference, Nvidia/AMD/Intel/Apple are all generally on the same tier now.
There's a post on GitHub from a madman who got llama.cpp generating tokens for a model running on an Intel Arc, an Nvidia 3090, and an AMD GPU at the same time: https://github.com/ggml-org/llama.cpp/pull/5321
CUDA isn't really used for new code. It's used for legacy codebases.
In the LLM world, you really only see CUDA being used with Triton and/or by PyTorch consumers who haven't moved on to better pastures (mainly because they only know Python and aren't actually programmers).
That said, AMD can run most CUDA code through ROCm, and AMD officially supports Triton and PyTorch, so even the academics have a way out of Nvidia hell.
I get the joke you two are making, but I've seen what academics write in Python. Somehow, it's worse than what academics used to write when Java was taught as the only language for CompSci degrees.
At least Java has types and can be performant. The world was ever so slightly better back then.
There is some truly execrable Python code out there, but it’s there because the barrier to entry is so low. Especially back in the day, Java had so many guardrails that the really bad Java code came from intermediate programmers pushing up against the limitations of the language rather than from novices pasting garbage into a notebook. As a result there was less of it, but I’m not convinced that’s a good thing.
Edit: my point being that out of a large pool of novices, some of them will get better. Java was always more gate kept.
Second edit: Java’s intermediate programmer malaise was of course fueled by the Gang of Four’s promise to lead them out of confusion and into the blessed realm of reusable software.
Largely Vulkan. Microsoft is internally a huge consumer of DirectML, specifically for the LLM team doing Phi and the Copilot deployment that lives on Azure.
It is on Nvidia. Nvidia's code generation for Vulkan kind of sucks, and it affects games too. llama.cpp is about as optimal as it can be on the Vulkan target; it uses VK_NV_cooperative_matrix2, and turning that off loses something like 20% performance. AMD does not implement this extension yet and, due to a better matrix ALU design, might not actually benefit from it.
Game engines that have singular code generator paths that support multiple targets (eg, Vulkan, DX12, Xbone/XSX DX12, and PS4/5 GNM) have virtually identical performance on the DX12 and Vulkan outputs on Windows on AMD, have virtually identical performance on apples-to-apples Xbox to PS comparisons (scaled to their relative hardware specs), and have expected DX12 but not Vulkan performance on Windows on Nvidia.
Now, obviously, I'm making a rather broad statement there; all engines are different, some games on the same engine (especially UE4 and 5) are faster on one vendor or the other, or simply faster on every vendor, and some games are faster on Xbox than on PS, or vice versa, due to edge cases or porting mistakes. I suggest looking at GamersNexus's benchmarks for specific games, or DigitalFoundry's work on benchmarking and analyzing consoles.
It is in Nvidia's best interest to make Vulkan look bad, but even now they're starting to understand that is a bad strategy, and the compute accelerator market is starting to become a bit crowded, so the Vulkan frontend for their compiler has slowly been getting better.
CUDA was largely Nvidia's attempt at swaying Khronos and Microsoft's DirectX team. In the end, Khronos went with something based on a blend of AMD's and Nvidia's ideas, which became Vulkan, and Microsoft duplicated the effort in a Direct3D-flavored way.
So, just use Vulkan and stop fucking around with the Nvidia moat.
A great thing about CUDA is that it doesn't have to deal with any of the graphics and rendering stuff or shader languages. Vulkan compute is way less dev friendly than CUDA. Not to mention the real benefit of CUDA which is that it's also a massive ecosystem of libraries and tools.
As much as I wish it were otherwise, Vulkan is nowhere near a good alternative to CUDA currently. Maybe eventually, but not without additions to the core API and, especially, to the available libraries.
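To make the dev-friendliness point concrete, here's a sketch of a GPU kernel using Numba (my choice of example, just one corner of the CUDA library ecosystem; the equivalent Vulkan compute path needs an instance, device, queue, buffers, descriptor sets, a pipeline, and a separately compiled shader):

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)       # global thread index across the whole launch
    if i < x.size:         # guard against threads past the array end
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads = 256
blocks = (n + threads - 1) // threads
saxpy[blocks, threads](2.0, x, y, out)  # Numba handles host<->device copies

assert np.allclose(out, 2.0 * x + y)
```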
You mean the AI Max chips? ROCm works fine there, as long as you're running 6.4.1 or later, no hacks required. I tested on Fedora Rawhide and it was just dnf install rocm.
Yes it does. ROCm support for new chips, since it's offered under paid support contracts, comes about 1-2 months after a chip comes out (i.e., once they're 100% sure it works with the current, also new, driver).
I'd rather it work and ship late than not work, ship early, and then get gaslit about the bugs (lol Nvidia, why are you like this?).
One major reason I'm working extra years despite being FI is so I have money to provide memory care for my parents if they end up needing it. They have basically nothing to their name, and memory care can easily run into half a million dollars total.
The actual answer, according to the author, is that he only realized this after he already liked the name.
He created it intending it to be APL with each letter incremented by one, but accidentally came up with BQN instead of BQM. He sat with that for an hour, really liked the name, then realized it should have been BQM, which he hated, so he stuck with BQN.
That said, it's an incredibly well-designed language. I honestly have never read about any language (especially one designed by a single person) with the level of creative thought he put into BQN. Some really incredible insights and deep understanding; it's amazing reading his posts and documentation about it. The focus on ergonomics, the brand-new constructs, and the consistency and coherence of how all of his decisions fit together are really impressive.
I'm somewhat sure the author actually mentioned that that was the intention: "Big Question Notation", and basically "apl" + 1. But he realized that it didn't match up.
It's just. So gross. Say it. Sudden interruption of slime coming up your throat. Like walking out the door into a spiderweb. Alphabetically I was mistaken but in every way that matters I was right.
Ordinarily I'd make fun of the Germans for giving such an ugly name to a nice concept, but I've always found "comfortable" to be rather unpleasant too (the root "comfort" is fine).
I've found the best ways to use AI when coding are:
* Sophisticated find and replace, i.e. highlighting a bunch of struct initialisations and saying "Convert all these to Y". (Regex was always a PITA for this, though it is more deterministic; see the sketch after this list.)
* When in an agentic workflow, treating it as a higher level than ordinary code and not so much as a simulated human. I.e. the more you ask it to do at once, the less it seems to do it well. So instead of "Implement the feature" you'd want to say "Let's make a new file and create stub functions", "Let's complete stub function 1 and have it do x", "Complete stub function 2 by first calling stub function 1 and doing Y", etc.
* Finding something in an unfamiliar codebase or asking how something was done. "Hey copilot, where are all the app's routes defined?" Best part is you can ask a bunch of questions about how a project works, all without annoying some IRC greybeard.
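On the first point, here's the kind of regex you'd otherwise hand-roll (a hypothetical example of my own: the `Point` struct and the `.x`/`.y` fields are made up):

```python
import re

# Rewriting positional struct inits
#     Point a = {10, 20};
# into designated initializers
#     Point a = {.x = 10, .y = 20};
# The field names have to be baked into the replacement, which is exactly
# why an LLM that can read the struct definition is less of a PITA here
# (if also less deterministic).
src = """
Point a = {10, 20};
Point b = {3, 4};
"""

pattern = re.compile(r"Point (\w+) = \{(\d+), (\d+)\};")
print(pattern.sub(r"Point \1 = {.x = \2, .y = \3};", src))
```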