In the US, given the current trajectory, I see a frightening and bleak future.* Due to the Trump administration and the people who enable it, the United States is highly degraded and getting worse every day. I'll list what comes to mind...
[x] Authoritarianism. [x] Civil rights abuses. [x] Blatant defiance of the law. [x] Unrepentant selfishness and lack of character. [x] Weaponization of the courts. [x] Loyalty tests at government agencies and in the military. [x] Political prosecutions. [x] Politicization of the Department of Justice. [x] Rampant presidential overreach. [x] The Supreme Court's endorsement and flimsy justification of presidential overreach. [x] Self-destructive trade policy. [x] Ineffective economic policy. [x] Erosion of norms. [x] Concerning presidential cognitive decline. [x] Institutional hollowing-out. [x] Defunding of science. [x] Destruction of USAID. [x] Blatant corruption. [x] Nepotism. [x] Use of the military for domestic purposes. [x] Firing of qualified military leaders. [x] Blatantly self-serving presidential pardons. [x] Firing of qualified civil servants. [x] Deliberately trying to traumatize civil servants. [x] Unnecessary tax breaks for the wealthy. [x] Intimidation of universities. [x] Rollback of environmental protections. [x] Unconstitutional and economically damaging immigration policies. [x] Top-down gerrymandering. [x] Firing of ~19 Inspectors General. [x] Unqualified cabinet members. [x] Relentless lying. [x] Implicit endorsement of conspiracy theories. [x] Public health policies that will lead to unnecessary deaths. [x] A president who 'models' immorality. [x] Tolerance of illegal and immoral behavior of political allies. [x] Prioritization of appearance over substance. [x] Opulent and disgusting displays of wealth. [x] Trampling on independent journalist access. [x] A foreign policy that undermines key alliances. [x] Dismantling of the Department of Education. [x] Undoing key healthcare provisions from the ACA. [x] Negligent inaction regarding AI catastrophic risks. [x] Motive and capability to manipulate voting machines. [x] And more.
Positives? It looks like some durable peace deals are in the works.
Overall, things are dark.
* All the more reason to organize and act. No single person (or even group) can solve any of these problems alone. No person or group can afford to wait for other people to act.
We simply cannot afford to let our shock, anger, or fear get the better of us. We have to build coalitions to turn things around -- including coalitions with people that may vote in ways we don't like or believe things that we think don't make sense. We have to find coalitions that work. We have to persuade and build a movement that can outcompete and outlast Trumpism and whatever comes after it.
I remember using a TUI for a bank in the UK, and them switching to a web-based JavaScript system. Because the TUI forced keyboard interaction, everyone was quick, and we could all fly through the screens finding what we wanted. One benefit was that each screen was a fixed size with no scrolling, so when you pressed the right incantation the answer you wanted appeared in the same portion of the screen every time. You didn't have to hunt for the right place to look. You pressed the keys, which were buffered, looked at the appropriate part of the screen, and more often than not the information you required appeared as you looked.
Moving to a web-based system meant we all had to use mice and spend our days moving them to the correct button on the page all the time. It added hours and hours to the processing.
Exactly. And the OpenAI corporate-speak, acting like they give a shit about our best interests. Give me a break, Sam Altman. How stupid do you think everyone is?
They have proven that they are the most untrustworthy company on the planet.
And this isn't AI fear speaking. This is me raging at Sam Altman for spreading so much fear, uncertainty, and doubt just to get investments. The rest of us have had to suffer for the last two years, worrying about losing our jobs, only to find out the AGI lie is complete bullsh*t.
No extra tooling, no symlinks, files are tracked in a version control system, you can use different branches for different computers, and you can replicate your configuration easily on a new installation.
> I literally thought some unpublished book. But you shouldn't have doubled down on 'next'. Your first para was enough.
Thanks for the feedback.
To focus on "should" for a second. If I would not have written my second paragraph, I would not have made my main point: I'm trying to get people to pay attention to ambiguity more broadly and tamp down this all-too-common tendency for people to think "the way I see things is obvious and/or definitive" which pervades Hacker News like a plague. Perhaps working with computers too much has damaged our cognitive machinery: human brains are not homogeneous nor deterministic parsers of meaning.
Perhaps the second paragraph got some people thinking a little bit. We are discussing Kahneman's life's work, after all. This is a perfect place to discuss our flawed intellectual machinery and our biases. Kahneman would be happy if people here improved their self-understanding and communication with each other.
What they don't tell you is that Lojban is so complicated no one can speak it. Go to the forums and pick almost any thread that contains Lojban, and you'll find people debating about the language. If the supporters can't even speak it right, what hope is there for those who just want to communicate?
One starts off life with little conscious awareness of life’s big questions. If we’re lucky, we might gain some clarity about the most important questions and share what we learn with others.
From evolution’s point of view, individuals are only a bundle for the survival of genes. It is no surprise that we want more, hence society.
For society, one could argue that its core principles (liberty, freedom, security, flourishing, pursuit of happiness, shared narratives, etc.) are only provided to each individual in a time-bounded way. Individuals may be heartened when they have confidence these principles will carry on to the next generation.
I could go on and on but I think you get the point. Almost everything you hear in this field is opinions, extrapolations, and educated guesses rather than anything we could really call a "fact".
> The statement “AI is normal technology” is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it.
A question for the author(s), at least one of whom is participating in the discussion (thanks!): Why try to lump together description, prediction, and prescription under the "normal" adjective?
Discussing AI is fraught. My claim: conflating those three under the "normal" label seems likely to backfire and lead to unnecessary confusion. Why not instead keep these separate?
My main objection is this: it locks in a narrative that tries to neatly fuse description, prediction, and prescription. I recoil at this; it feels like an unnecessary coupling. Better to remain fluid and not lock in a narrative. The field is changing so fast, making description by itself very challenging. Predictions should update on new information, including how we frame the problem and our evolving values.
A little bit about my POV in case it gives useful context: I've found the authors (Narayanan and Kapoor) to be quite level-headed and sane w.r.t. AI discussions, unlike many others. I'll mention Gary Marcus as one counterexample; I find it hard to pin Marcus down on the actual form of his arguments or concrete predictions. His pieces often feel like rants without a clear underlying logical backbone (at least in the year or so I've read his work).
I used a Kinesis Advantage as my main keyboard for close to 10 years. A few years ago I got an Ergodox EZ, but just couldn't get into it. I've been using a NiZ Atom68 as my main keyboard for over a year now and am rather happy with it. I have plans to build and try a Kyria with some ultra-light linear keys soon as well.
These days I think that there are so many good community designed keyboards that it behooves anyone who has the notion they'd like something better than a standard layout keyboard to do a bit of research and testing. For any keyboard layout it is fairly trivial to make a printout and stick it to your desk to get an impression of how it fits your hand size/shape and your preferred resting/neutral position. There's a good layout comparison tool which makes it trivial to narrow in on your preference as well [0].
Recently I've found Ben Vallack [1] to be an excellent resource on keyboard customization, and his philosophy echoes my own, though he shows much more dedication to the craft and exploration of keyboarding than I could ever hope or wish to. He has an excellent series on designing and making your own keyboard [2], as well as well-thought-out explanations and explorations of creating and learning personalized keyboard layouts [3][4]. His more general explorations of usability in computing and beyond have been inspiring as well [5].
I have it. The community is active on Discord. They also have a section for buying/selling Glove80s from users and getting mods done. The firmware is ZMK, with lots of configuration options. I was glad to have bought this over any Dactyl. The physical key layout is amazing. The Red Pro switches are even better than I expected, coming from C MX Reds. The low profile is also great, and I prefer it over standard-height keys. It met my expectations. Check out Ben Frain on YouTube, since he has the Glove80, Moonlander, Kinesis Advantage 360, and a Dactyl. A Dactyl customized to your hand, with low-profile, low-force linear switches, would be enticing.
> I was excited about using alacritty as a wayland-native terminal, but it does not yet work with sway.
Forgive me if I'm being dense, but: do you mean it's possible for a Wayland app to be incompatible with a particular Wayland window manager (or compositor)? That seems like a huge regression from the X11 status quo.
I find the current VC/billionaire strategy a bit odd and suboptimal. If we consider the current search for AGI as something like a multi-armed bandit seeking to identify “valuable researchers”, the industry is way over-indexing on the exploitation side of the exploitation/exploration trade-off.
If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would attempt to incubate new ideas across a very large base of generally smart people from interdisciplinary backgrounds. Give anyone who shows genuine interest some amount of compute resources to test their ideas, in exchange for X% of the payoff should their approach lead to some step-function improvement in capability. The current “AI talent war” is very different from sports, because unlike a star tennis player, it’s not clear at all whose novel approach to machine learning is ultimately going to pay off the most.
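To make the bandit analogy concrete, here is a toy epsilon-greedy sketch (Python). The mapping of "arms" to research directions and every number in it are my own made-up assumptions; it only illustrates the explore/exploit knob, not how research funding actually works.

```python
# Toy epsilon-greedy bandit: "arms" are research directions, "pulls" are grants
# of money/compute, "reward" is the (unknown) payoff of backing a direction.
# All numbers are hypothetical; this just illustrates the explore/exploit trade-off.
import random

def simulate(explore_rate, n_arms=100, n_pulls=10_000, seed=0):
    rng = random.Random(seed)
    true_payoff = [rng.random() for _ in range(n_arms)]  # unknown to the "investor"
    estimates, counts, total = [0.0] * n_arms, [0] * n_arms, 0.0
    for _ in range(n_pulls):
        if rng.random() < explore_rate:
            arm = rng.randrange(n_arms)  # explore: back someone unproven
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit: back the current star
        reward = rng.gauss(true_payoff[arm], 0.1)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean of observed payoff
        total += reward
    return total / n_pulls

for eps in (0.01, 0.1, 0.3):
    print(f"explore_rate={eps}: average payoff ~ {simulate(eps):.3f}")
```

Depending on the seed, near-pure exploitation can lock onto a mediocre arm early; the point is only that the right mix is an empirical question, which is exactly the over-indexing worry above.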
The “bloodbath” will be slow but is quite likely to be significant.
AI / GP robotic labor will not penetrate the market so much in existing companies, which will have huge inertial buffers, but more in new companies that arise in specific segments where the technology proves most useful.
The layoffs will come not as companies replace workers with AI, but as AI companies displace non-AI companies in the market, followed by panicked restructuring and layoffs in those companies as they try to react, probably mostly unsuccessfully.
Existing companies don’t have the luxury of buying market share with investor money; they have to make a profit. A tech-darling AI startup powered by unicorn farts and inference can burn through billions of SoftBank money buying market share.
I struggled reading the papers. Anthropic’s white papers remind me of Stephen Wolfram: a huge pile of suggestive empirical evidence, but the claims are extremely vague (no definitions, just vibes), the empirical evidence seems selectively curated, and there’s not much effort spent building a coherent general theory.
Worse is the impression that they are begging the question. The rhyming example was especially unconvincing since they didn’t rule out the possibility that Claude activated “rabbit” simply because it wrote a line that said “carrot”; later Anthropic claimed Claude was able to “plan” when the concept “rabbit” was replaced by “green,” but the poem fails to rhyme because Claude arbitrarily threw in the word “green”! What exactly was the plan? It looks like Claude just hastily autocompleted. And Anthropic made zero effort to reproduce this experiment, so how do we know it’s a general phenomenon?
I don’t think either of these papers would be published in a reputable journal. If these papers are honest, they are incomplete: they need more experiments and more rigorous methodology. Poking at a few ANN layers and making sweeping claims about the output is not honest science. But I don’t think Anthropic is being especially honest: these are pseudoacademic infomercials.
Hi! I lead interpretability research at Anthropic. I also used to do a lot of basic ML pedagogy (https://colah.github.io/). I think this post and its children have some important questions about modern deep learning and how it relates to our present research, and wanted to take the opportunity to try and clarify a few things.
When people talk about models "just predicting the next word", this is a popularization of the fact that modern LLMs are "autoregressive" models. This actually has two components: an architectural component (the model generates words one at a time), and a loss component (it maximizes probability).
As the parent says, modern LLMs are finetuned with a different loss function after pretraining. This means that in some strict sense they're no longer autoregressive models – but they do still generate text one word at a time. I think this really is the heart of the "just predicting the next word" critique.
This brings us to a debate which goes back many, many years: what does it mean to predict the next word? Many researchers, including myself, have believed that if you want to predict the next word really well, you need to do a lot more. (And with this paper, we're able to see this mechanistically!)
Here's an example, which we didn't put in the paper: How does Claude answer "What do you call someone who studies the stars?" with "An astronomer"? In order to predict "An" instead of “A”, you need to know that you're going to say something that starts with a vowel next. So you're incentivized to figure out one word ahead, and indeed, Claude realizes it's going to say astronomer and works backwards. This is a kind of very, very small scale planning – but you can see how even just a pure autoregressive model is incentivized to do it.
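For readers who want to see what "autoregressive" means mechanically, here is a minimal sketch of the generation loop. The vocabulary and probabilities are a hand-written stand-in of my own, not anything resembling a real model; the only point is the one-token-at-a-time structure described above.

```python
# Minimal sketch of an autoregressive generation loop: the model is called once
# per token, each time conditioned on everything generated so far.
# next_token_distribution is a hypothetical stand-in; a real LLM is a neural net
# trained (at least during pretraining) to maximize next-token probability.
import random

def next_token_distribution(context):
    if not context:
        # Preferring "An" over "A" only makes sense if the model already "knows"
        # the next word starts with a vowel -- the small-scale planning described above.
        return {"An": 0.9, "A": 0.1}
    if context[-1] in ("An", "A"):
        return {"astronomer": 1.0}
    return {"<eos>": 1.0}

def generate(prompt_tokens, max_tokens=10, seed=0):
    rng = random.Random(seed)
    out = list(prompt_tokens)
    for _ in range(max_tokens):
        dist = next_token_distribution(out)
        token = rng.choices(list(dist), weights=list(dist.values()))[0]  # sample one token
        if token == "<eos>":
            break
        out.append(token)  # the new token becomes part of the next call's context
    return out

print(" ".join(generate([])))  # e.g. "An astronomer"
```

Post-training can change which distribution the model outputs, but (as noted above) the loop itself -- generate a token, append it, repeat -- stays the same.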
It's interesting, after my time at Amazon (8 years) -- I struggled to visualize decisions without a document, because you get so used to reviewing things in that way. However, it's EXTREMELY heavy-handed. People will give you comments on the structure of your document (to "raise the bar") instead of the content of the document, and often you can't get documents reviewed until they are aesthetically in the right place for people to digest them. Ultimately, I think it makes sense when you're at a large organization with high attrition (such that you need to keep track of all of the decisions which are made), but otherwise, it's probably not worth it to do a formal document writing process.
The real benefit of doc writing isn’t decision making; it’s education. It allows everyone at Amazon to evaluate the author’s ability to refine their “chain of thought”.
The nice side effect is that the author takes 10x more time in order to save 10x as many L+1 and L+2 leaders (i.e., more expensive people) from spending that same time trying to understand it.
This is just moving the goalposts; MCP is not supposed to solve every problem with agents. It's meant to provide easier, standardised ways for LLMs to interact with external tools, which it has done. Reliability is a completely different problem.
I've tried a few times to get into this, but honestly it's very, very difficult. You pretty much need a degree in formal verification to be able to actually verify anything except toy examples. I kept hitting cases where things wouldn't verify, and the reason was something deep in the implementation that really only the authors would know about.
Also the language is huge, and while it's quite well documented, the level of documentation you want for this is far more than for a normal language.
On the plus side, the authors are really helpful on Github, and the IDE support in VSCode is great.
Every time I see a long, inscrutable discussion about Passkeys, I see a weird avoidance of the "something you know" part of security. Here in the US, courts and law enforcement have every right to get your username, fingerprint, retina scan, face ID, whatever. But they don't have the right to extract something from your brain. Unless I'm missing something basic (which at this point, I don't think is my fault, since this whole thing appears incredibly difficult to explain), Passkeys skip past that whole thing in favor of making it a heck of a lot easier to replace "something you know" with "something you have". Which is a security nightmare.
I hope we never find a time where a person (whether it be a researcher, tinkerer, dreamer, visionary, misfit, whatever) hears "X is already solved" and says "that's good enough for me".
The end of the article [1] reminds me to publish more of what I make and think. I'm no Doug Lenat, and my content would probably just add noise to the internet, but still: don't let your ideas die with you or become controlled by some board of stakeholders. I'm also no open-source zealot, but open source is a nice way to let others continue what you started.
[1]
"Over the last year, Doug and I tried to write a long, complex paper that we never got to finish. Cyc was both awesome in its scope, and unwieldy in its implementation. The biggest problem with Cyc from an academic perspective is that it’s proprietary.
To help more people understand it, I tried to bring out of him what lessons he learned from Cyc, for a future generation of researchers to use. Why did it work as well as it did when it did, why did it fail when it did, what was hard to implement, and what did he wish that he had done differently? ...
...One of his last emails to me, about six weeks ago, was an entreaty to get the paper out ASAP; on July 31, after a nerve-wracking false-start, it came out, on arXiv, Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc (https://arxiv.org/ftp/arxiv/papers/2308/2308.04445.pdf).
The brief article is simultaneously a review of what Cyc tried to do, an encapsulation of what we should expect from genuine artificial intelligence, and a call for reconciliation between the deep symbolic tradition that he worked in with modern Large Language Models."
> The fact that a single car can even cause this much harm is a sign of a much larger problem: car centric transportation is NOT scalable.
This is interesting to me. From the perspective of an internet builder, systems vulnerable to shutdown from individual bad actors are obviously insecure and "not scalable". But too much time in this environment can make us forget that the actual world is about a zillion times less hostile than the internet.
As an educator, I worked with a computer-security-savvy 13-year-old who knew my software background and would occasionally disclose vulnerabilities in the district's systems to me. He was incredulous that the network could be so insecure.
I asked about the security systems governing walking around in the hallway - was there any physical impediment preventing one person from punching, kicking, throwing a rock at another? Of course not - the security is socially constructed. Norms, consequences, etc, that on balance seemed to work out pretty effectively. I tried to make a case that security inside a local network of the size and scope of our school board was also ~90% social construction, but the student really struggled with it. There was a strong belief that because it was computers, you were expected to exploit it if it were vulnerable.
First paragraph is about the IT security mindset bleeding into real life, third is about the IT adversarial mindset corrupting our disposition toward technical equipment generally.
Given how immersed current generations are / have been in the adversarial landscape of the web, should we expect an uptick in the level of generalized adversarialism? Will xyst's (and my student's) assessments of these systems become true? Have they already and I'm just a knob?
Isn't this a cat-and-mouse game? The moment this actually starts causing problems, they will change how parameters work. Maybe the easiest would be to use a single encoded parameter, which would be decoded on the server, so Apple or anyone else won't be able to change a thing about it.
This is a MITM attack where Apple plays the good guy (or control freak, depending on how you feel about it), but MITM attacks are nothing new.
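For what it's worth, a minimal sketch of the "single encoded parameter" idea above might look like the following (Python). The parameter names, the secret handling, and the choice of an HMAC rather than full encryption are all my own assumptions, not anything Apple or any particular vendor does.

```python
# Toy sketch: pack all query parameters into one opaque, authenticated blob that
# only the server can interpret, so a link-rewriting middleman can't selectively
# strip or edit individual parameters without invalidating the whole thing.
# Secret, parameter names, and URL are hypothetical.
import base64, hashlib, hmac, json

SECRET = b"server-side-secret-key"  # known only to the server

def encode_params(params: dict) -> str:
    payload = json.dumps(params, separators=(",", ":")).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()  # 32-byte MAC
    return base64.urlsafe_b64encode(tag + payload).decode()

def decode_params(blob: str) -> dict:
    raw = base64.urlsafe_b64decode(blob.encode())
    tag, payload = raw[:32], raw[32:]
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("parameter blob was modified in transit")
    return json.loads(payload)

blob = encode_params({"campaign": "spring_sale", "ref": "newsletter"})
print(f"https://example.com/landing?p={blob}")
print(decode_params(blob))  # round-trips; any tampering fails verification
```

This only detects tampering and hides the parameter structure; a middleman can still read the values, so a real deployment might encrypt the payload too, which is exactly the kind of escalation that keeps the cat-and-mouse game going.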
(The above phrasing moves the conversation into a different topic from the message chain. That’s ok, but I want us to be clear about what we’re claiming.)
> If you want to do something unambiguously good, do it, don’t start a business.
Oh? And what is unambiguously good? This seems like a rhetorical question, but it isn’t. Try to answer it.
## Ethics
It can be quite hard to know what is unambiguously good, depending on your ethics.
- If you are a consequentialist, you’d have to pick a time frame, and even then you won’t really know until that time comes.
- Other moral philosophies might focus only on intentions, but these are not plausible, because good intentions are easily derailed by ignorance.
- I don’t have a clear answer for myself, so I can be confident that any particular idea of “unambiguous good” will not hold water for N > 1 people.
## Opportunity Cost
Next topic… even if we agree on an act being “unambiguously good”, that isn’t enough. Even the most “obvious” selfless acts might not be the best considering the other options available. To phrase it as a question: considering your opportunity cost, is a small scale good action really worth it?
## Life has Tradeoffs
The phrase “don’t let perfect be the enemy of the good” is apropos here.
If you want a high likelihood of curing a particular disease, you could well find it demands collaboration with flawed pharmaceutical companies.
Is it worth it? Weigh it.
But to dismiss such an idea solely because it is “impure” -- without some meaningful metric that trades off pros and cons -- is foolish and unmoored from reality.
Hello over-the-top idealist, meet entropy. Time to talk about acceptance. Do what you can with what is available. Are you trying to make change or are you more interested in trying to look like a saint?
## Organizational Structure
Practically, if your desired good thing is attainable without using a for-profit business structure, great, consider that. It can work.
Personal actions matter. By all means, give smiles, encouragement, constructive criticism, love, advice, and so on.
Beware: not all kindness (such as advice) will be taken as you intended! (Whoops. Maybe not unambiguously good anymore.)
Community organizing matters. Volunteers can go a long way. (But they aren’t perfect!)
When you want to scale up your impact, you have to accept real world tradeoffs. Not all are clear at the outset.
For-profit or not, you want a plan. Often that plan demands longevity and thus some kind of sustainability. Even if you are happy to spend your capital without an eye towards building an endowment, you’ll want to think about effectiveness.
Even if you are a charity, you might face some trade-offs about which donors you want to let in your tent. Very little money comes without any kind of expectation. The expectations might be clear and totally fair, such as transparency as to how your organization spends the money. Other expectations might be less savory: donors wanting to prop up their image.
## A Footgun Named Naïveté
All in all, the comment above strikes me as naive to the point of undermining one’s own goals.
## Efficiency, Corruption, Impact
It is my view that:
- maximizing impact is hard to do without trading off some efficiency
- reducing corruption is non-linear. Rooting out the big offenders is essential to keeping an organization’s mission intact. From there, fighting corruption will be beneficial but often with diminishing returns.
- At some point, trust and acceptance of human weakness may actually be more cost effective than fighting the lingering forms of corruption; e.g. personal networks having some influence over a process that is supposed to be completely blind
- In many cases, people breaking the rules based on good information and intentions isn’t corruption at all. Sometimes the system is flawed and people work around it. Figuring out which is which costs time and resources.