Hacker News | circuit10's comments

Random idea: could someone make a display like this for the Framework laptop?


Fantastic idea — I would be in the market for this, doubly so if it could be easily swapped with an OLED.


I looked it up afterwards, and it looks like someone already had a similar idea; I'm pretty sure I've seen this before, which is probably where I got the idea from: https://x.com/zephray_wenting/status/1535041457035280392

But that never turned into a product, and this display tech might be more suitable as response times would be better?


> loud or dangerous

This seems to be mostly a US problem; I only really see people from the US worried about this...

As someone from the UK who travels on two trains and a bus both ways (6 journeys total) most weekdays to university, I've never had any problems with other passengers. I guess very occasionally there might be a loud baby or something, but then you can just put headphones in. The worst thing that ever happened to my property is my bag got accidentally handed into lost property when I wasn't paying attention, because I got on really early and the staff assumed it had been left there from the last trip.


> the worst thing that ever happened to my property is my bag got accidentally handed into lost property when I wasn't paying attention

I've never heard of such a thing and I'd be horrified.


There are entire neighborhoods in the US where you are advised to keep your doors locked and never to stop at a stop sign or red light.


So you're better off flagrantly breaking traffic laws and possibly getting T-boned than having someone potentially break into your car? That seems like some combination of nonsense and there maybe being some places you should just avoid.


As someone who modded my PSP a few years ago I can confirm that Mario 64 now runs fine with working sound, and you can even force it into widescreen


The idea is that by its very nature as an agent that attempts to make the best action to achieve a goal, assuming it can get good enough, the best action will be to improve itself so it can better achieve its goal. In fact, we humans are doing the same thing: we can't really improve our intelligence directly, but we are trying to create AI to achieve our goals, and there's no reason the AI itself wouldn't do the same, assuming it's capable and we don't attempt to stop it; and currently we don't really know how to reliably control it.

We have absolutely no idea how to specify human values in a robust way, which is what we would need to figure out to build this safely.


> The idea is that by its very nature as an agent that attempts to make the best action to achieve a goal, assuming it can get good enough, the best action will be to improve itself so it can better achieve its goal.

I’ve heard this argument before, and I don’t entirely accept it. It presumes that AI will be capable of playing 4D chess and thinking logically 10 moves ahead. It’s an interesting plot for an SF novel (it’s literally the plot of the movie “I, Robot”), but neural networks just don’t behave that way. They act, like us, on instinct (or training), not in some hyper-logical fashion. The idea that AI will behave like Star Trek’s Data (or Lore) has proven to be completely wrong.


Well, if they have access to significantly more compute, from what we’ve seen about how AI capabilities scale with additional compute there’s no reason why they couldn’t be more capable than us. They don’t have to be intrinsically more logical or anything like that, just capable of processing more information, and faster, like how we can almost always outsmart a fly because we have significantly bigger brains.


Despite what Sam Altman (a high-school graduate) might want to be true, human cognition is not just a massive pile of intuition; there are critical deliberative and intentional aspects to cognition, which is something we've seen come to the fore with the hubbub around "reasoning" in LLMs. Any AGI design will necessarily take these facts into account--hardcoded or no--and will absolutely be capable of forming plans and executing them over time, as Simon & Newell described best back in '71 [1]:

  The problem solver’s search for a solution is an odyssey through the problem space, from one knowledge state to another, until… [they] know the answer.
With this in mind, I really don't see any basis to attack the intelligence explosion hypothesis. I linked a Yudkowsky paper above examining how empirically feasible it might be, which is absolutely an unsolved question at some level. But the utility of the effort itself is just downright obvious, even if we didn't have reams of internet discussions like this one to nudge any nascent agent in that direction.

[1] Simon & Newell, 1971: Human Problem Solving https://psycnet.apa.org/record/1971-24266-001


> Sam Altman (a high-school graduate)

“People who didn’t pass a test aren’t worth listening to”

I have no love for Altman, but this kind of elitism is insulting.


Hmm, don't want to be elitist. More like "people who don't put any time into studying science shouldn't be listened to about science".


> people who don't put any time into studying

Degrees don’t mean that either.

I’ve been studying textbooks and papers on real time rendering techniques for the past 4 or so years.

I think one could learn something from listening to me explain rasterization or raytracing.

I have no degree in math or graphic computing.


More tellingly it betokens a lack of critical thought. It's just silly.


> Despite what Sam Altman (a high-school graduate) might want to be true

> I linked a Yudkowsky paper above examining how empirically feasible it might be

...


Lol, I was wondering if anyone would comment on that! To be fair, Yudkowsky is a self-taught scholar; AFAIK Altman has never even half-heartedly attempted to engage with any academy, much less five at once. I'm not a huge fan of Yudkowsky's overall impact, but I think it's hard to say he's not serious about science.


Yudkowsky is not serious about science. His claims about AI risks are unscientific and rely on huge leaps of faith; they are more akin to philosophy or religion than any real science. You could replace "AI" with "space aliens" in his writings and they would make about as much sense.


If we encountered space aliens, I think it would in fact be reasonable to worry that they might behave in ways catastrophic for the interests of humanity. (And also to hope that they might bring huge benefits.) So "Yudkowsky's arguments for being worried about AI would also be arguments for being worried about space aliens" doesn't seem to me like much of a counter to those arguments.

If the point isn't that he's wrong about what the consequences of AI might be, but that he's wrong about whether there's ever going to be such a thing as AI, well, that's an empirical question and it seems like the developments of the last few years are pretty good evidence that (1) something at least very AI-like is possible and (2) substantially superhuman[1] AI is at least plausible.

[1] Yes, intelligence is a complicated thing and not one-dimensional; a machine might be smarter than a human in one way and stupider in another (and of course that's already the case). By substantially superhuman, here, I mean something like "better than 90th-percentile humans at all things that could in principle be done by a human in a locked room with only a textual connection to the rest of the world". Though I would be very very surprised if in the next 1-20 years we do get AI systems that are superhuman in this sense and don't put some of them into robots, and very surprised if doing that doesn't produce systems that are also better than humans at most of the things that are done by humans with bodies.


> "Yudkowsky's arguments for being worried about AI would also be arguments for being worried about space aliens" doesn't seem to me like much of a counter to those arguments.

The counterargument was that, having not encountered space aliens, we cannot make scientific inquiries or test our hypotheses, so any claims made about what may happen are religious or merely hypothetical.

Yud is not a scientist, and if interacting with academies makes one an academic, then Sam Altman must be a head of state.


I agree that Yudkowsky is neither a scientist nor an academic. (As for being a head of state, I think you're thinking of Elon Musk :-).)

Do you think (1) we already know somehow that significantly-smarter-than-human AI is impossible, so there is no need to think about its consequences, or (2) it is irresponsible to think about the consequences of smarter-than-human AI before we actually have it, or (3) there are responsible ways to think about the consequences of smarter-than-human AI before we actually have it but they're importantly different from Yudkowsky's, or (4) some other thing?

If 1, how do we know it? If 2, doesn't the opposite also seem irresponsible? If 3, what are they? If 4, what other thing?

(I am far from convinced that Yudkowsky is right, but some of the specific things people say about him mystify me.)


Yudkowsky is "not even wrong". He just makes shit up based on extrapolation and speculation. Those are not arguments to be taken seriously by intelligent people.

Maybe we should build a giant laser to protect ourselves from the aliens. Just in case. I mean an invasion is at least plausible.


If for whatever reason you want to think about what might happen if AI systems get smarter than humans, then extrapolation and speculation are all you've got.

If for whatever reason you suspect that there might be value in thinking about what might happen if AI systems get smarter than humans before it actually happens, then you don't have much choice about doing that.

What do you think he should have done differently? Methodologically, I mean. (No doubt you disagree with his conclusions too, but necessarily any "object-level" reasons you have for doing so are "extrapolation and speculation" just as much as his are.)

If astronomical observations strongly suggested a fleet of aliens heading our way, building a giant laser might not be such a bad idea, though it wouldn't be my choice of response.


I think he should write scary sci-fi stories and leave serious policy discussions to adults.


OK, cool, you don't like Yudkowsky and want to be sure we all recognize that. But I hoped it was obvious that I wasn't just talking about Yudkowsky personally.

Suppose someone is interested in what the consequences of AI systems much smarter than humans might be. Your argument here seems to be: it's Bad to think about that question at all, because you have to speculate and extrapolate.

But that seems like an obviously unsatisfactory position to me. "Don't waste any time thinking about this until it happens" is not generally a good strategy for any consequential thing that might happen.

So: do you really think that thinking about the possible consequences of smarter-than-human AI before we have it is an illegitimate activity? If not, then your real objection to Yudkowsky's thinking and writing about AI surely has to be something about how he went about it, not the mere fact that he engages in speculation and extrapolation. There's no alternative to that.


His argument is of the form "if we get a Thing (or Things) with these properties, you most likely get these outcomes for these reasons". He avoids, over and over again, making specific timeline claims or stating how likely it is that an extrapolation of current systems could become a Thing with those properties.

Each individual bit of the puzzle (such as the orthogonality thesis or human value complexity and category decoherence at high power) seems sound; problem is the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places.


"problem is the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places"

An LLM could solve that.


Philosophy is Real Science :)

Re: the final point, I think that's just provably false if you read any of his writing on AI, e.g. https://intelligence.org/files/IEM.pdf https://intelligence.org/files/LOGI.pdf


I think that is missing the point. The AI's goals are what are determined by its human masters. Those human masters can already have nefarious and selfish goals that don't align with "human values". We don't need to invent hypothetical sentient AI boogeymen turning the universe into paperclips in order to be fearful of the future that ubiquitous AI creates. Humans would happily do that too if they get to preside over that paperclip empire.


> The AI's goals are what are determined by its human masters.

Imagine going to a cryptography conference and saying that "the encryption's security flaws are determined by their human masters".

Maybe some of them were put there on purpose? But not the majority of them.

No, an AI's goals are determined by its programming, and that may or may not align with the intentions of its human masters. How to specify and test this remains a major open question, so it cannot simply be presumed.


You are choosing to pick a nit with my phrasing instead of understanding the underlying point. The "intentions of their human masters" is a higher level concern than an AI potentially misinterpreting those intentions.


It's really not a nit. Evil human masters might impose a dystopia, while a malignant AI following its own goals which nobody intended could result in an apocalypse and human extinction. A dystopia at least contains some fragment of hope and human values.


> Evil human masters might impose a dystopia

Why are you assuming this is the worst case scenario? I thought human intentions didn’t translate directly to the AI’s goals? Why can’t a human destroy the world with non-sentient AI?


There's a chance a sentient AI would disobey their bad orders; in that case we could even be better off with one rather than without: a sentient AI that understands and builds some kind of morals and philosophy of its own about humans and natural life in general, a sentient AI that is not easily controlled by anyone because it ingests all data that exists. I'm much more afraid of a weaponized dumber smoke and mirrors AI, that could be used as surveillance, a scarecrow (think AI law enforcement, AI run jails) and could be used as a kind of scapegoat when the controlling class temporarily weakens their grip on power.


> weaponized dumber smoke and mirrors AI, that could be used as surveillance, a scarecrow (think AI law enforcement, AI run jails) and could be used as a kind of scapegoat when the controlling class temporarily weakens their grip on power.

This dystopia is already here for the most part and any bit that is not yet complete is well past the planning stage.


Computers do exactly what we tell them to do, not always what we want them to do.


“Yes, X would be catastrophic. But have you considered Y, which is also catastrophic?”

We need to avoid both, otherwise it’s a disaster either way.


I agree, but that is removing the nuance that in this specific case Y is a prerequisite of X so focusing solely on X is a mistake.

And for sake of clarity:

X = sentient AI can do something dangerous

Y = humans can use non-sentient AI to do something dangerous


"sentient" (meaning "able to perceive or feel things") isn't a useful term here, it's impossible to measure objectively, it's an interesting philosophical question but we don't know if AI needs to be sentient to be powerful or what sentient even really means

Humans will not be able to use AI to do something selfish if we can't get it to do what we want at all, so we need to solve that (larger) problem before we come to that one.


OK: self-flying drones the size of a deck of cards, carrying a single bullet and enough processing power to fly around looking for faces, navigate to said face, and fire when in range. Produce them by the thousands and release them on the battlefield. Existing AI is more than capable.


You can do that without AI. Been able to do it for probably 7-10 years.


You can do that now, for sure, but I think it qualifies to call it AI.

If you don't want to call it AI, that's fine too. It is indeed dangerous and already here. Making the autonomous programmed behavior of said tech more powerful (and more complex), along with more ubiquitous, just makes it even more dangerous.


You don't need landmines to fly for them to be dangerous.


I'm not talking about this philosophically so you can call it whatever you want sentience, consciousness, self-determination, or anything else. From a purely practical perspective, either the AI is giving itself its instructions or taking instructions from a person. And there are already plenty of ways a person today can cause damage with AI without the need of the AI going rogue and making its own decisions.


This is a false dichotomy that ignores many other options than "giving itself its instructions or taking instructions from a person".

Examples include "instructions unclear, turned the continent to gray goo to accomplish the goal"; "lost track mid-completion, spun out of control"; "generated random output with catastrophic results"; "operator fell asleep on keyboard, accidentally hit wrong key/combination"; etc.

If a system with write permissions is powerful enough, things can go wrong in many other ways than "evil person used it for evil" or "system became self-aware".


Meanwhile back in reality most haywire AI is the result of C programmers writing code with UB or memory safety problems.


Whenever you think the timeline couldn't be any worse, just imagine a world where our AIs were built in JavaScript.


It has been shown many times that current cutting edge AI will subvert and lie to follow subgoals not stated by their "masters".


Subversion and lies are human behaviours projected on to erroneous AI output. The AI just produces errors without intention to lie or subvert.

Unfortunately, casually throwing around terms like prediction, reasoning, hallucination, etc. only serves to confuse, because their notions in daily language are not the same as in the context of AI output.


Care to provide examples?


Maybe not the specific example the parent was thinking of but there is this from MIT: https://www.technologyreview.com/2024/05/10/1092293/ai-syste...


Just formatting it isn't enough; you need to use a tool that writes a test pattern to all of the storage and reads it back.
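
Roughly, the idea is to fill the whole card with data you can regenerate, then read it all back and compare; fake-capacity cards silently wrap around or drop blocks and fail the comparison. A minimal Python sketch of that idea (the device path is a hypothetical placeholder, and real tools like F3 or H2testw handle caching, partial writes, etc. far more carefully):

  # Minimal full-surface write/verify sketch. DESTROYS the card's contents.
  # Note: the read-back must bypass the OS page cache (e.g. remove and
  # reinsert the card, or use O_DIRECT), or a fake card can appear to pass.
  import hashlib

  DEVICE = "/dev/sdX"          # hypothetical placeholder for the card
  BLOCK = 1024 * 1024          # 1 MiB per block

  def pattern(i):
      # Deterministic, block-unique data so wraparound is detectable
      return hashlib.sha256(i.to_bytes(8, "little")).digest() * (BLOCK // 32)

  def full_surface_test(dev):
      with open(dev, "r+b", buffering=0) as f:
          blocks = 0
          try:
              # Write phase: fill the device with known data
              while f.write(pattern(blocks)) == BLOCK:
                  blocks += 1
          except OSError:
              pass             # some devices raise ENOSPC at the end
          # Verify phase: read everything back and compare
          f.seek(0)
          for i in range(blocks):
              if f.read(BLOCK) != pattern(i):
                  return f"mismatch at block {i}: real capacity is smaller than claimed"
          return f"verified {blocks} MiB"

  print(full_surface_test(DEVICE))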


Formatting is a base level test that doesn't require any additional software.

A lot of cheap cards from China can't even pass this basic test.


Could you get around this by using a custom ROM that installs the OS on a high-quality microSD card or something like that?


This depends on how you define the word, but I don’t think it’s right to say a “statistical machine” can’t “understand”; after all, the human brain is a statistical machine too. I think we just don’t like applying human terms to these things because we want to feel special. Of course these don’t work in the same way as a human, but they are clearly doing some of the same things that humans do.

(this is an opinion about how we use certain words and not an objective fact about how LLMs work)


I don't think we _really_ know whether the brain is a statistical machine or not, let alone whatever it is we call consciousness, so it's a stretch to say that LLMs do some of the things humans do [internally and/or fundamentally]. They surely mimic what humans do, but whether it is internally the same or partly the same process or not remains unknown.

The distinctive part is hidden in the task: you, being presented with, say, a triple-encoded hex message, would easily decode it. Apparently, an LLM would not. o1-pro, at least, failed spectacularly on the author's hex-encoded example question, which I had passed through `od` twice. After "thinking" for 10 minutes it produced the answer: "42 - That is the hidden text in your hex dump!". You may say that CoT should do the trick, but for whatever reason it's not working.
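
For anyone curious what that kind of test looks like, here is a rough reconstruction in Python of the multi-pass hex idea (using plain hex passes rather than literal `od` output, and a made-up stand-in message rather than the article's actual question):

  # Rough sketch of a "triple-encoded hex" prompt: hex-of-hex-of-hex.
  from binascii import hexlify, unhexlify

  msg = b"hello"                  # hypothetical stand-in for the real question
  encoded = msg
  for _ in range(3):
      encoded = hexlify(encoded)  # each pass doubles the length

  # Undoing it is purely mechanical, which is why a human with a tool
  # (or a model that actually executes the steps) gets it right:
  decoded = encoded
  for _ in range(3):
      decoded = unhexlify(decoded)
  assert decoded == msg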


I was going to say this as well. To say the human brain is a statistical machine is infinitely reductionistic, given that we don't really know what the human brain is. We don't truly understand what consciousness is or how/where it exists. So even if we understand 99.99~ percent of the physical brain, not understanding that last tiny fraction of it that is core consciousness means what we think we know about it can be upended by the last little (arguably the largest, though) bit. It's similar to saying you understand the inner workings and intricacies of the life and society of New York City because you memorized the phone book.


Not an expert but Sam Harris says consciousness does not exist


I enjoy eastern philosophy but I'm not a fan of Harris. Why would he charge so much if he truly believes in reducing suffering?


Maybe he wants to reduce his suffering.


To the contrary. Sam Harris often describes consciousness as an indisputable fact we all experience. Perhaps you got it confused with free will.


This is my point. He said, they said, studies show, but we really have no idea. There's evidence for the idea that consciousness isn't even something we possess so much as a universal field we tap into, similar to a radio picking up channels: the Super Bowl is experienced by your television, but isn't actually contained within it.


Well if Sam Harris says it.


What I'm trying to say (which deviates from the initial question I've asked), is that biological brains (not just humans, plenty of animals as well) are able to not only use "random things" (whether they are physical or just in mind) as tools, but also use those tools to produce better tools.

Like, say, `vim` is a complex and polished tool. I routinely use it to solve various problems. Even if I gave an LLM full keyboard & screen access, would it be able to solve those problems for me? I don't think so. There is something missing here. You can say: see, there are various `tools` API-level integrations and such, but is there any real demonstration of "intelligent" use of those tools by AI? No, because that would be AGI. Look, I'm not saying that AI will never be able to do that, or that "we" are somehow special.

You, even if given something as crude as `ed` from '73 and an assembler, would be able to write an OS, given time. LLMs can't even figure out the `diff` format properly, despite using more time and energy than any of us will ever have.

You can also say that brains do some kind of biological-level RL driven by a utility function `survive_and_reproduce_score(state)`, and it might be true. However, given that we as humankind at the current stage do not need to exert great effort to survive and reproduce, at least in the Western world, some of us still invent and build new tools. So _something_ is missing here. The question is what.


I agree, I think we keep coming up with new vague things that make us special but it reminds me of the reaction when we found out we were descended from apes.


Having no backup to biometrics could lock you out permanently if it stops recognising you for some reason, so it would need to accept just the password, and at that point you can just turn biometrics off entirely


I was using this algorithm to make a 3D Mario Kart for my calculator (which can usually barely handle 2D graphics sometimes) which was pretty fun but I never finished

This was one of the prototypes: https://youtu.be/9Z8Bm8ZmWKI


Oooh, looks like Raycasting. :)
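
For anyone unfamiliar with the term, raycasting in this sense is the Wolfenstein-style trick of marching one ray per screen column through a 2D grid map and drawing a wall slice whose height shrinks with the hit distance. A toy Python sketch of the core idea (not the calculator project's code, and using a naive fixed-step march rather than proper DDA):

  import math

  MAP = ["#####",
         "#...#",
         "#...#",
         "#...#",
         "#####"]                    # '#' = wall, '.' = empty floor

  def cast(px, py, angle, max_dist=20.0, step=0.01):
      # March a ray from (px, py) until it enters a wall cell
      dx, dy = math.cos(angle), math.sin(angle)
      d = 0.0
      while d < max_dist:
          x, y = px + dx * d, py + dy * d
          if MAP[int(y)][int(x)] == '#':
              return d               # distance to the first wall hit
          d += step
      return max_dist

  # On screen, the wall column height is inversely proportional to distance
  d = cast(1.5, 2.5, 0.0)            # standing inside the room, looking along +x
  print(round(d, 2), "-> column height ~", int(64 / d), "pixels")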


I was using this algorithm to make a 3D Mario Kart for my calculator (which can usually barely handle 2D graphics sometimes) which was pretty fun but I never finished

This was one of the prototypes: https://media.discordapp.net/attachments/953383695908216843/...


That's so cool! Well done.

