I think that even for relatively small n we get cases where mathematics says nope, you can't decide that: the machine goes recursive, so now your decider may be looking at a machine which is itself running deciders, and Kurt Gödel says "No".
Thanks for the hint to go looking some more. I found that Johannes Riebel has proven that BB(745) is undecidable (its value is independent of ZFC). So even for fairly small k there may not be deciders for them.
The suspicion is that this happens maybe as early as BB(15). We just can't prove that whereas we can prove BB(745) is not decidable, and, of course, we've decided BB(5) as we see here.
Scooping the Loop Snooper
(an elementary proof of the undecidability of the halting problem)
No program can say what another will do.
Now, I won't just assert that, I'll prove it to you:
I will prove that although you might work til you drop,
you can't predict whether a program will stop.
Imagine we have a procedure called P
that will snoop in the source code of programs to see
there aren't infinite loops that go round and around;
and P prints the word "Fine!" if no looping is found.
You feed in your code, and the input it needs,
and then P takes them both and it studies and reads
and computes whether things will all end as they should
(as opposed to going loopy the way that they could).
Well, the truth is that P cannot possibly be,
because if you wrote it and gave it to me,
I could use it to set up a logical bind
that would shatter your reason and scramble your mind.
Here's the trick I would use - and it's simple to do.
I'd define a procedure - we'll name the thing Q -
that would take any program and call P (of course!)
to tell if it looped, by reading the source;
And if so, Q would simply print "Loop!" and then stop;
but if no, Q would go right back to the top,
and start off again, looping endlessly back,
til the universe dies and is frozen and black.
And this program called Q wouldn't stay on the shelf;
I would run it, and (fiendishly) feed it itself.
What behaviour results when I do this with Q?
When it reads its own source, just what will it do?
If P warns of loops, Q will print "Loop!" and quit;
yet P is supposed to speak truly of it.
So if Q's going to quit, then P should say, "Fine!" -
which will make Q go back to its very first line!
No matter what P would have done, Q will scoop it:
Q uses P's output to make P look stupid.
If P gets things right then it lies in its tooth;
and if it speaks falsely, it's telling the truth!
I've created a paradox, neat as can be -
and simply by using your putative P.
When you assumed P you stepped into a snare;
Your assumptions have led you right into my lair.
So, how to escape from this logical mess?
I don't have to tell you; I'm sure you can guess.
By reductio, there cannot possibly be
a procedure that acts like the mythical P.
You can never discover mechanical means
for predicting the acts of computing machines.
It's something that cannot be done. So we users
must find our own bugs; our computers are losers!
by Geoffrey K. Pullum
Stevenson College
University of California
Santa Cruz, CA 95064
From Mathematics Magazine, October 2000, pp 319-320.
In principle, analysts could analyze all machines of size K or less, classifying many of them as halting or not; all the rest could in fact be non-halting, but the analyst would never (at any finite time) know for sure whether they are all non-halting.
This is proven. It's known as the halting problem and is a central pillar of computability theory. The proof is due to Alan Turing and is easy to find online.
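If you want the shape of Turing's argument (the poem's P and Q) in code, here is a minimal Python sketch; `halts` and `q` are illustrative names for the hypothetical checker and the self-defeating program, not anything from an actual library.

    # Hypothetical halting checker, the poem's P. Turing's proof shows that
    # no correct implementation of this function can exist.
    def halts(program, data):
        """Return True if program(data) eventually halts (assumed, impossibly)."""
        raise NotImplementedError("no such procedure can exist")

    # The poem's Q: ask P about a program applied to its own source,
    # then do the opposite of whatever P predicts.
    def q(program):
        if halts(program, program):
            while True:   # P said "halts", so loop forever
                pass
        return "Loop!"    # P said "loops", so halt immediately

    # Feeding Q to itself is the contradiction: whatever halts(q, q) returns,
    # q(q) does the opposite, so no halts() can be right about every program.
    # q(q)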
I suspect that in the very near future, the latter will dramatically decrease and the former dramatically increase. I wonder how that tradeoff will be perceived.
As surveillance increases the definition of crime will expand.
Consider the incentives. Surveillance is costly. The only way to justify increasing surveillance costs is to demonstrate increasing intervention in criminal activity. If traditional crime is reduced, new crimes need to be introduced.
Once all the enemies of the state have been eliminated, it becomes mandatory to introduce new enemies of the state so they, too, can be rounded up. Eventually there will be no one left to come for and the surveillance technology will go unmonitored.
You may very well be right about the outcome, though I doubt the government cares enough about justifying expenditures to make money the rationale.
In my experience, it's social crises that tend to be used to justify authoritarian power grabs - whether that's a political killing or a worldwide contagion.
Maybe. If we use our powers too capriciously then they'll deter behaviors other than criminal behaviors. Like that boat of alleged drug traffickers we recently blew up -- that looks more likely to discourage boating within 1000 miles of the US than any particular crime.
Yeah, figured that making it hard to parse would make it more likely people were thoughtful about their replies. In this climate, it's likely to attract a flamewar if I just spell it out.
The increase in crime is a purely political problem emerging from the demands of a certain segment of the middle and upper middle classes, not the government or working class.
Seems like it's been common knowledge for a while that the critic reviews are useless, but the audience reviews are still valuable. You just have to make sure to actually look at the audience score.
I completely lost faith in Rotten Tomatoes after consistently seeing Marvel capeshit, including their worst slop, reliably get high audience and critical scores.
If you want to stay near the bleeding edge with this stuff, you probably want to be on some kind of linux (or lacking that, Mac). Windows is where stuff just trickles down to eventually.
you're not wrong; i really should try just running linux and seeing how good the steam gaming layer is these days. And if SDR# runs on linux under wine or whatever.
These LLM discussions really need everyone to mention what LLM they're actually using.
> AI is awesome for coding! [Opus 4]
> No AI sucks for coding and it messed everything up! [4o]
Would really clear the air. People seem to be evaluating the dumbest models (apparently because they don't know any better?) and then deciding the whole AI thing just doesn't work.
It happens on many topics related to software engineering.
The web developer is replying to the embedded developer who is replying to the architect-that-doesn't-code who is replying to someone with 2 years of experience who is replying to someone working at google who is replying to someone working at a midsize b2b German company with 4 customers. And on and on.
Context is always omitted and we're all talking about different things ignoring the day to day reality of our interlocutors.
My experience is that AI enthusiasts will always say, "well you just used the wrong model". And when no existing model works well, they say, "well in 6 months it will work". The utility of agentic coding for complex projects is apparently unfalsifiable.
Do we know which codebases (greenfield, mature, proprietary etc.) people work on? No
Do we know the level of expertise the people have? No.
Is the expertise in the same domain, codebase, language that they apply LLMs to? We don't know.
How much additional work did they have reviewing, fixing, deploying, finishing etc.? We don't know.
--- end quote ---
And that's just the tip of the iceberg. And that is one iceberg before we hit another one: that we're trying to blindly reverse-engineer a non-deterministic black box inside a provider's black box.
I've used a wide variety of the "best" models, and I've mostly settled on Opus 4 and Sonnet 4 with Claude Code, but they don't ever actually get better. Grok 3-4 and GPT4 were worse, but like, at a certain point you don't get brownie points for not tripping over how low the bar is set.
> You'd be surprised what becomes reasonable and possible once a requirement is set.
In that case, I'm not sure why you're concerned. Let's flip this around: set up our regulations to loosen our EMI radiation restrictions & facilitate our satellites and space exploration. According to your logic, that should be perfectly reasonable to astronomers, if that's what the regulations say, and it should be possible for them to adapt to that.
If that's not what you meant, then astronomy needs to make some concessions.
Sure, that's also a solution. The public then needs to provide more taxpayer funding to perform such research in orbit of earth. Whatever is preferable in the larger picture of public/corporate interest...
That’s a very different proposal - unless you’re intending to amend your previous statement to:
“You'd be surprised what becomes reasonable and possible once a requirement is set [and additional funding is allocated to overcome the negative consequences of said requirement]”
No need to amend, it's the same, the required funding is a factor to determine how reasonable an investment is. For both parties.
It is then up to the taxpayer to define whether the path of performing astronomy research in orbit of earth to preserve a for-profit business-model is more reasonable than defining regulation which allows such research to be performed on earth for a fraction of the cost (but may require for-profit companies to further invest in R&D to comply or re-evaluate their business model).
It's that simple. Astronomy won't be able to provide immediate ROI or a sales-plan of increased revenue to offset the cost-increase when researching in orbit. So if that's the only criterion, then such research is a futile activity and will be stopped.
Astronomy research? Radio spectrum? LEO? Starlink operations?
All of this is global. In the end it'll be about geopolitics, although it should be a field that urgently requires global governance and consensus.
Next time it could be interference between Starlink vs. a Chinese for-profit. It would be good if there's a commonly agreed way of handling such matters in place...
If astronomy is not able to cope with additional regulation without additional funding, then for-profit companies should not be expected to do so either. It's that simple.
> Astronomy won't be able to provide immediate ROI or a sales-plan of increased revenue to offset the cost-increase when researching in orbit.
That's too bad. I was under the impression that "You'd be surprised what becomes reasonable and possible once a requirement is set."
> If astronomy is not able to cope with additional regulation without additional funding, then for-profit companies should not be expected to do so either. It's that simple
So your belief is that for-profit companies should not be required to comply with regulation put in place after they start business in any field.
And for-profit companies who later join to compete with them? They should, because it's not "additional"? Or also not, to ensure a competitive market?
So basically no regulation of any kind shall happen, because companies should not be expected to cope with new regulation if it incurs additional effort for them.
> So your belief is that for-profit companies should not be required to comply to regulation put in place after they start business in any field.
That is not what they said. That's an extremely bad-faith statement that's not even a "misinterpretation", because that implies that there is a valid interpretation, and there isn't - you just made up something completely different and claimed that they said it.
You're really not helping your argument here if you have to resort to lying about other peoples' words in order to try to defend your positions.
I've been hearing that in this case, there might not be anything underneath- that somehow OpenAI managed to train on exclusively sterilized synthetic data or something.
I jailbroke the smaller model with a virtual reality game where it was ready to give me instructions on making drugs, so there is some data which is edgy enough.
If you didn't validate the instructions, maybe it just extrapolated from the structure of other recipes and general description of drug composition which most likely is in Wikipedia.
I took virtual reality in this case to mean coaxing the text model into pretending it's talking about drugs in the context of the game, not graphical VR.
Totally blind in my case, by the way, but the virtual game part was about the prompt. On the other hand, it would be interesting to see if the visual information in a virtual game could be communicated in alternative ways. If the computer has meta info about the 3D objects instead of just rendering info on how to show them, it might improve the accessibility somewhat.
Also, with the rapid advances in vision language models, I would be surprised if we don't see an image-to-text-to-voice system that works with real-time video in the not-so-distant future! Like a reverse "Genie": instead of providing a prompt that generates a world, you provide a streaming video and it spouts relevant information when changes happen, or on demand, for instance...
It would be great to have it as a backup, but it will always be the heaviest solution in terms of computation and responsiveness, so it should be the last one used.
Have you played around with the current vision features? I am pretty sure even gpt-4.1 can give you pretty good descriptions of e.g. screen captures, including being able to "read" and reproduce text.
yes, there are multiple addons giving screen readers the ability to prompt ai-s for image recognition. they work rather well, btw, though the value is often situational. agentic behavior might help further, though it will need some polishing.
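For anyone curious what such addons roughly do under the hood, here is a minimal sketch assuming the OpenAI Python SDK and the gpt-4.1 model mentioned above; the function name and prompt are made up for illustration, not taken from any particular addon.

    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def describe_screenshot(path):
        """Send a saved screen capture to a vision model and return a text
        description that a screen reader could then speak aloud."""
        with open(path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("ascii")
        response = client.chat.completions.create(
            model="gpt-4.1",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Describe this screenshot for a blind user, "
                             "including any text that is visible."},
                    {"type": "image_url",
                     "image_url": {"url": "data:image/png;base64," + image_b64}},
                ],
            }],
        )
        return response.choices[0].message.content

    # e.g. print(describe_screenshot("screenshot.png"))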
I bet those landlords could build housing that was sufficiently resistant to property destruction, which those renters would be happy to pay for at a sufficient rate - everyone would be happy. But it's the myth of consensual housing: isn't there someone you forgot to ask? The housing regulations would (and do) absolutely forbid anything that fit this niche.
Is this suspected or proven? I would love to read the proof if it exists.