soloist11's comments | Hacker News

Now we know that US presidents are above the law. I always assumed that was the case so this is just confirmation for everyone else who had any doubts.


[flagged]


I don't really understand what you're arguing about.


[flagged]


The entire defense you've repeated here is one his people would love for everyone to believe - thankfully most don't. Orange man is bad for many real and legitimate reasons, the fact that he's had the legal resources to fail upwards this long notwithstanding.


[flagged]


It's again unclear to me what you're arguing about. I have no horses in this race and no problems with either Biden or Trump.


… “clearly”?


Have you guys applied this work internally to optimize Meta's codebase?


How do they know the critic did not make a mistake? Do they have a critic for the critic?


Per the article, the critic for the critic is human RLHF trainers. More specifically those humans are exploited third world workers making between $1.32 and $2 an hour, but OpenAI would rather you didn't know about that.

https://time.com/6247678/openai-chatgpt-kenya-workers/


That is more than the average entry level position in Kenya. The work is probably also much easier (physically, that is).


OpenAI may well still be employing plenty of people in third world countries for this. But there are also contracts providing anywhere from $20 to $100+ an hour to do this kind of work for more complex prompt/response pairs.

I've done work on what (at least to my belief) is the very high end of that scale (not for OpenAI) to fill gaps, so I know firsthand that it's available, and sometimes the work is complex enough that a single response can take over an hour to evaluate because the requirements often include not just reading and reviewing the code, but ensuring it works, including fixing bugs. Most of the responses then pass through at least one more round of reviews of the fixed/updated responses. One project I did work on involved 3 reviewers (none of whom were on salaries anywhere close to the Kenyan workers you referred to) reviewing my work and providing feedback and a second pass of adjustments. So four high-paid workers altogether to process every response.

Of course, I'm sure plenty of lower-level/simpler work had been filtered out to be addressed with cheaper labour, but I wouldn't be so sure their costs for things like code are particularly low.


Exploited? Are you saying that these employees are forced to work for below market rates, and would be better off with other opportunities available to them? If that's the case, it's truly horrible on OpenAI's part.


Every leap of civilization was built off the back of a disposable workforce. - Niander Wallace


He was the bad guy, right?


That's the human's job for now.

A human reviewer might have trouble catching a mistake, but they are generally pretty good at discerning whether a report about a mistake is valid. For example, finding a bug in a codebase is hard. But if a junior sends you a code snippet and says "I think this is a bug for xyz reason", do you agree? It's much easier to confidently say yes or no. So basically it changes the problem from finding a needle in a haystack to discerning whether a statement is a hallucination or not.
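
To make that asymmetry concrete, here's a hypothetical snippet-plus-claim (not from the article); checking the specific claim takes seconds, while finding the bug unaided in a large codebase would not:

    # Hypothetical example: a junior (or a critic model) flags this function.
    def last_n(items, n):
        # Claimed bug: when n == 0, items[-0:] is items[0:], i.e. the whole list,
        # not the empty list the name suggests.
        return items[-n:]

    # Verifying the specific claim is a one-liner:
    print(last_n([1, 2, 3], 0))  # prints [1, 2, 3], confirming the reported bug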


It's called iteration. Humans do the same thing.


It’s not a human, and we shouldn’t assume it will have traits we do without evidence.

Iteration also is when your brain meets the external world and corrects. This is a closed system.


We are not assuming that. The iteration happens by taking the report and passing it to another reviewer who reviews the first review. Their comparison is between a human reviewer passing reports to a human reviewer vs. CriticGPT -> human reviewer vs. CriticGPT+human reviewer -> human reviewer.


Are you sure it's not called recursion?


There is already a mistake. It refers to a function by the wrong name: os.path.comonpath instead of os.path.commonpath.
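
For reference, the correctly spelled function is os.path.commonpath (available since Python 3.4); a quick check shows the misspelled name would simply fail at runtime:

    import os.path

    # Correct name: returns the longest common sub-path of the given paths.
    print(os.path.commonpath(["/usr/lib/python3", "/usr/lib/site-packages"]))
    # -> /usr/lib

    # The misspelled name would raise:
    # os.path.comonpath(...)  # AttributeError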


In the critical limit every GPT critic chain is essentially a spellchecker.


A critic for the critic would be “Recursive Reward Modelling”, an exciting idea that has not been made to work in the real world yet.


Most of my ideas are not original but where can I learn more about this recursive reward modeling problem?



It's written in the article, the critic makes mistakes, but it's better than not having it.


How do they know it's better? The rate of mistakes is the same for both GPTs so now they have 2 sources of errors. If the error rate was lower for one then they could always apply it and reduce the error rate of the other. They're just shuffling the deck chairs and hoping the boat with a hole goes a slightly longer distance before disappearing completely underwater.


Whether adding unreliable components increases the overall reliability of a system depends on whether the system requires all components to work (in which case adding components can only make matters worse) or only some (in which case adding components can improve redundancy and make it more likely that the final result is correct).

In the particular case of spotting mistakes made by ChatGPT, a mistake is spotted if it is spotted by the human reviewer or by the critic, so even a critic that makes many mistakes itself can still increase the number of spotted errors. (But it might decrease the spotting rate per unit time, so there are still trade-offs to be made.)
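
A back-of-the-envelope sketch of that trade-off, using made-up detection rates rather than anything from the article:

    # Hypothetical, illustrative detection rates for a human reviewer and a critic model.
    p_human, p_critic = 0.50, 0.40

    # "Only some components need to work": the bug counts as spotted if either spots it.
    p_either = 1 - (1 - p_human) * (1 - p_critic)   # 0.70, better than either alone

    # "All components must work": e.g. a bug only counts if both flag it independently.
    p_both = p_human * p_critic                      # 0.20, worse than either alone

    print(f"either: {p_either:.2f}, both required: {p_both:.2f}")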


I see what you're saying so what OpenAI will do next is create an army of GPT critics and then run them all in parallel to take some kind of quorum vote on correctness. I guess it should work in theory if the error rate is small enough and adding more critics actually reduces the error rate. My guess is that in practice they'll converge to the population average rate of error and then pat themselves on the back for a job well done.
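
For what it's worth, a quorum only helps if each critic beats a coin flip and their errors aren't strongly correlated; a rough sketch with hypothetical error rates:

    from math import comb

    def majority_error(p_wrong, n):
        """Probability a majority of n independent critics is wrong,
        assuming each errs with probability p_wrong (hypothetical model)."""
        need = n // 2 + 1
        return sum(comb(n, k) * p_wrong**k * (1 - p_wrong)**(n - k)
                   for k in range(need, n + 1))

    print(majority_error(0.40, 11))  # ~0.25: voting helps when critics beat chance
    print(majority_error(0.55, 11))  # ~0.63: voting hurts when they don't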


That description is remarkably apt for almost every business meeting I've ever been in.


> How do they know it's better?

From the article:

"In our experiments a second random trainer preferred critiques from the Human+CriticGPT team over those from an unassisted person more than 60% of the time."

Of course the second trainer could be wrong, but when the outcome tilts 60% to 40% in favour of the combination of a human + CriticGPT, that's pretty significant.

From experience doing contract work in this space, it's common to use multiple layers of reviewers to generate additional data for RLHF, and if you can improve the output from the first layer that much it'll have a fairly massive effect on the amount of training data you can produce at the same cost.
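
To put a rough number on it: the quoted line doesn't state how many comparisons the 60% figure is based on, but under a hypothetical sample size of a few hundred it would easily clear a 50/50 null:

    from math import comb

    def one_sided_p(wins, n):
        """Chance of seeing at least `wins` preferences out of n if the true rate were 50/50."""
        return sum(comb(n, k) for k in range(wins, n + 1)) / 2**n

    # Hypothetical: if the 60% figure came from 200 pairwise comparisons...
    print(one_sided_p(120, 200))  # ~0.003, very unlikely under a 50/50 null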


>How do they know it's better?

Probably just evaluation on benchmarks.


It's critics all the way down


it's literally just the oracle problem all over again


How do they verify correctness?


That’s the neat thing: as with all things AI, you don’t.




The problem is the people who are hellbent on mechanizing everything with computers without stopping to think about the ultimate endpoint of their enterprise.


People like that are often schizophrenic.


Lying is intentional, algorithms and computers do not have intentions. People can lie, computers can only execute their programmed instructions. Much of AI discourse is extremely confusing and confused because people keep attributing needs and intentions to computers and algorithms.

The social media gurus don't help with these issues by claiming that non-intentional objects are going to cause humanity's demise when there are much more pertinent issues to be concerned about like global warming, corporate malfeasance, and the general plundering of the biosphere. Algorithms that lie are not even in the top 100 list of things that people should be concerned about.


> Lying is intentional, algorithms and computers do not have intentions. People can lie, computers can only execute their programmed instructions. Much of AI discourse is extremely confusing and confused because people keep attributing needs and intentions to computers and algorithms.

How do you know whether something has “intentions”? How can you know that humans have them but computer programs (including LLMs) don’t or can’t?

If one is a materialist/physicalist, one has to say that human intentions (assuming one agrees they exist, contra eliminativism) have to be reducible to or emergent from physical processes in the brain. If intentions can be reducible to/emergent from physical processes in the brain, why can’t they also be reducible to/emergent from a computer program, which is also ultimately a physical process (calculations on a CPU/GPU/etc)?

What if one is a non-materialist/non-physicalist? I don’t think that makes the question any easier to answer. For example, a substance dualist will insist that intentionality is inherently immaterial, and hence requires an immaterial soul. And yet, if one believes that, one has to say those immaterial souls somehow get attached to material human brains - why couldn’t one then be attached to an LLM (or the physical hardware it executes on), hence giving it the same intentionality that humans have?

I think this is one of those questions where if someone thinks the answer is obvious, that’s a sign they likely know far less about the topic than they think they do.


You're using circular logic. You are assuming all physical processes are computational and then concluding that the brain is a computer even though that's exactly what you assumed to begin with. I don't find this argument convincing because I don't think that everything in the universe is a computer or a computation. The computational assumption is a totalizing ontology and metaphysics which leaves no room for further progress other than the construction of larger data centers and faster computers.


> You're using circular logic. You are assuming all physical processes are computational and then concluding that the brain is a computer even though that's exactly what you assumed to begin with.

No, I never assumed “all physical processes are computational”. I never said that in my comment and nothing I said in my comment relies on such an assumption.

What I’m claiming is (1) we lack consensus on what “intentionality” is, and (2) we lack consensus on how we can determine whether something has it. Neither claim depends on any assumption that “physical processes are computational”.

If one assumes materialism/physicalism - and I personally don’t, but given most people do, I’ll assume it for the sake of the argument - intentionality must ultimately be physical. But I never said it must ultimately be computational. Computers are also (assuming physicalism) ultimately physical, so if both human brains and computers are ultimately physical, if the former have (ultimately physical) intentionality - why can’t the latter? That argument hinges on the idea both brains and computers are ultimately physical, not on any claim that the physical is computational.

Suppose, hypothetically, that intentionality while ultimately physical, involves some extra-special quantum mechanical process - as suggested by Penrose and Hameroff’s extremely controversial and speculative “orchestrated objective reduction” theory [0]. Well, in that case, a program/LLM running on a classical computer couldn’t have intentionality, but maybe one running on a quantum computer could, depending on exactly how this “extra-special quantum mechanical process” works. Maybe, a standard quantum computer would lack the “extra-special” part, but one could design a special kind of quantum computer that did have it.

But, my point is, we don’t actually know whether that theory is true or false. I think the majority of expert opinion in relevant disciplines doubts it is true, but nobody claims to be able to disprove it. In its current form, it is too vague to be disproven.

[0] https://en.m.wikipedia.org/wiki/Orchestrated_objective_reduc...


Intentions are not reducible to computational implementation because intentions are not algorithms that can be implemented with digital circuits. What can be implemented with computers and digital circuits are deterministic signal processors which always produce consistent outputs for indistinguishable inputs.

You seem to be saying that because we have no clear cut way of determining whether people have intentions then that means, by physical reductionism, algorithms could also have intentions. The limiting case of this kind of semantic hair splitting is that I can say this about anything. There is no way to determine if something is dead or alive, there is no definition that works in all cases and no test to determine whether something is truly dead or alive so it must be the case that algorithms might or might not be alive but because we can't tell then we might as well assume there will be a way to make algorithms that are alive.

It's possible to reach any nonsensical conclusion using your logic because I can always ask for a more stringent definition and a way to test whether some object or attribute satisfies all the requirements.

I don't know anything about theories of consciousness but that's another example of something which does not have an algorithmic implementation unless one uses circular logic and assumes that the brain is a computer and consciousness is just software.


> Intentions are not reducible to computational implementation because intentions are not algorithms that can be implemented with digital circuits.

What is an "intention"? Do we all agree on what it even is?

> What can be implemented with computers and digital circuits are deterministic signal processors which always produce consistent outputs for indistinguishable inputs.

We don't actually know whether humans are ultimately deterministic or not. It is exceedingly difficult, even impossible, to distinguish the apparent indeterminism of a sufficiently complex/chaotic deterministic system, from genuinely irreducible indeterminism. It is often assumed that classical systems have merely apparent indeterminism (pseudorandomness) whereas quantum systems have genuine indeterminism (true randomness), but we don't actually know that for sure – if many-worlds or hidden variables are true, then quantum indeterminism is ultimately deterministic too. Orchestrated objective reduction (OOR) assumes that QM is ultimately indeterministic, and there is some neuronal mechanism (microtubules are commonly suggested) which permits this quantum indeterminism to influence the operations of the brain.

However, if you provide your computer with a quantum noise input, then whether the results of computations relying on that noise input are deterministic depends on whether quantum randomness itself is deterministic. So, if OOR is correct in claiming that QM is ultimately indeterministic, and quantum indeterminism plays an important role in human intentionality, why couldn't an LLM sampled using a quantum random number generator also have that same intentionality?
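
To make that concrete, here is a minimal sketch of next-token sampling driven by a non-deterministic entropy source, with the OS entropy pool standing in for an actual quantum RNG (hypothetical code, just to show where the randomness enters):

    import math
    import secrets  # draws on OS entropy rather than a seeded pseudorandom generator

    def sample_token(logits):
        """Sample an index from softmax(logits); only the entropy source would
        change if a quantum RNG were used instead."""
        m = max(logits)
        probs = [math.exp(x - m) for x in logits]
        total = sum(probs)
        r = secrets.randbits(53) / (1 << 53) * total  # uniform draw from OS entropy
        acc = 0.0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                return i
        return len(probs) - 1

    print(sample_token([2.0, 1.0, 0.1]))  # usually 0, but not reproducible run to run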

> You seem to be saying that because we have no clear cut way of determining whether people have intentions then that means, by physical reductionism, algorithms could also have intentions.

Personally, I'm a subjective idealist, who believes that intentionality is an irreducible aspect of reality. So no, I don't believe in physical reductionism, nor do I believe that algorithms can have intentions by way of physical reductionism.

However, while I personally believe that subjective idealism is true, it is an extremely controversial philosophical position, which the clear majority of people reject (at least in the contemporary West) – so I can't claim "we know" it is true. Which is my whole point – we, collectively speaking, don't know much at all about intentionality, because we lack the consensus on what it is and what determines whether it is present.

> The limiting case of this kind of semantic hair splitting is that I can say this about anything. There is no way to determine if something is dead or alive, there is no definition that works in all cases and no test to determine whether something is truly dead or alive so it must be the case that algorithms might or might not be alive.

We have a reasonably clear consensus that animals and plants are alive, whereas ore deposits are not. (Although ore deposits, at least on Earth, may contain microscopic life–but the question is whether the ore deposit in itself is alive, as opposed being the home of lifeforms which are distinct from it.) However, there is genuine debate among biologists about whether viruses and prions should be classified as alive, not alive, or in some intermediate category. And more speculatively, there is also semantic debate about whether ecosystems are alive (as a kind of superorganism which is a living being beyond the mere sum of the individual life of each of its members) and also about whether artificial life is possible (and if so, how to determine whether any putative case of artificial life actually is alive or not). So, I think alive-vs-dead is actually rather similar to the question of intentionality – most people agree humans and at least some animals have intentionality, most people would agree that ore deposits don't, but other questions are much more disputed (e.g. could AIs have intentionality? do plants have intentionality?)


> Personally, I'm a subjective idealist, who believes that intentionality is an irreducible aspect of reality. So no, I don't believe in physical reductionism, nor do I believe that algorithms can have intentions by way of physical reductionism.

I don't follow. If intentionality is an irreducible aspect of reality then algorithms as part of reality must also have it as realizable objects with their own irreducible aspects.

I don't think algorithms can have intentionality because algorithms are arithmetic operations implemented on digital computers and arithmetic operations, no matter how they are stacked, do not have intentions. It's a category error to attribute intentions to algorithms because if an algorithm has intentions then so must numbers and arithmetic operations of numbers. As compositions of elementary operations there must be some element in the composite with intentionality or the claim is that it is an emergent property in which case it becomes another unfounded belief in some magical quality of computers and I don't think computers have any magical qualities other than domains for digital circuits and numeric computation.


> It's a category error to attribute intentions to algorithms because if an algorithm has intentions then so must numbers and arithmetic operations of numbers.

I don't see how that makes it a category error? Like, assuming that numbers and arithmetic operations of numbers don't have intentions, and assuming that algorithms having intentions would imply that numbers and arithmetic operations have them, afaict, we would only get the conclusion "algorithms do not have intentions", not "attributing intentions to algorithms is a category error".

Suppose we replace "numbers" with "atoms" and "computers" with "chemicals" in what you said.

This yields "As compositions of [atoms] there must be some [element (in the sense of part, not necessarily in the sense of an element of the periodic table)] in the composite with intentionality or the claim is that it is an emergent property in which case it becomes another unfounded belief in some magical quality of [chemicals] and I don't think [chemicals] have any magical qualities other than [...]." .

What about this substitution changes the validity of the argument? Is it because you do think that atoms or chemicals have "magical qualities" ? I don't think this is what you mean, or at least, you probably wouldn't call the properties in question "magical". (Though maybe you also disagree that people are comprised of atoms (That's not a jab. I would probably agree with that.)) So, let's try the original statement, but without "magical".

"As compositions of elementary operations there must be some element in the composite with intentionality or the claim is that it is an emergent property in which case it becomes another unfounded belief in some [suitable-for-emergent-intentionality] quality of computers and I don't think computers have any [suitable-for-emergent-intentionality] qualities [(though they do have properties for allowing computations)]."

If you believe that humans are comprised of atoms, and that atoms lack intentionality, and that humans have intentionality, presumably you believe that atoms have [suitable-for-emergent-intentionality] qualities.

One thing I think is relevant here, is "we have nothing showing us that there exist [x]" and "it cannot be that there exists [x]" .

Even if we have nothing to demonstrate to us that numbers-and-operations-on-them have the suitable-for-emergent-intentionality qualities, that doesn't demonstrate that they don't.

That doesn't mean we should believe that they do. If you have strong priors that they don't, that seems fine. But I don't think you've really given much of a reason that others should be convinced that they don't?


I don't know what atoms and chemicals have to do with my argument but the substitutions you've made don't make sense and I would call it ill-typed. A composition of numbers is also a number but a composition of atoms is something else and not an atom so I didn't really follow the rest of your argument.

Computers have a formal theory and to say that a computer has intentions and can think would be equivalent to supplying a constructive proof (program) demonstrating conformance to a specification for thought and intention. These don't exist so from a constructive perspective it is valid to say that all claims of computers and software having intentions and thoughts are simply magical, confused, non-constructive, and ill-typed beliefs.


> A composition of numbers is also a number but a composition of atoms is something else and not an atom so I didn't really follow the rest of your argument.

That's not true. To give a trivial example, a set or sequence of numbers is composed of numbers but is not itself a number. 2 is a number, but {2,3,4} is not a number.

> Computers have a formal theory

They don't. Yes, there is a formal theory mathematicians and theoretical computer scientists have developed to model how computers work. However, that formal theory is strictly speaking false for real world computers – at best we can say it is approximately true for them.

Standard theoretical models of computation assume a closed system, determinism, and infinite time and space. Real world computers are an open system, are capable of indeterminism, and have strictly sub-infinite time and space. A theoretical computer and a real world computer are very different things – at best we can say that results from the former can sometimes be applied to the latter.

There are theoretical models of computation that incorporate nondeterminism. However, I'd question whether the specific type of nondeterminism found in such models, is actually the same type of nondeterminism that real world computers have or can have.

Even if you are right that a theoretical computer science computer can't have intentionality, you haven't demonstrated a real world computer can't have intentionality, because they are different things. You'd need to demonstrate that none of the real differences between the two could possibly grant one the intentionality the other lacks.


> That's not true. To give a trivial example, a set or sequence of numbers is composed of numbers but is not itself a number. 2 is a number, but {2,3,4} is not a number.

That's still a number because everything in a digital computer is a number or an operation on a number. Sets are often encoded by binary bit strings and boolean operations on bitstrings then have a corresponding denotation as union, intersection, product, exponential, powerset, and so on.
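
That encoding is easy to make concrete: a minimal sketch of finite sets of naturals as single integers, with bitwise operations denoting the set operations:

    def encode(s):
        """Encode a finite set of naturals as one integer: bit k is set iff k is in s."""
        return sum(1 << k for k in s)

    a, b = encode({1, 2, 3}), encode({2, 3, 5})

    union        = a | b   # bitwise OR  denotes set union        -> {1, 2, 3, 5}
    intersection = a & b   # bitwise AND denotes set intersection -> {2, 3}

    print(bin(union), bin(intersection))  # 0b101110 0b1100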


> That's still a number because everything in a digital computer is a number or an operation on a number.

I feel like in this conversation you are equivocating over distinct but related concepts that happen to have the same name. For example, “numbers” in mathematics versus “numbers” in computers. They are different things - e.g. there are an infinite number of mathematical numbers but only a finite number of computer numbers - even considering bignums, there are only a finite number of bignums, since any bignum implementation only supports a finite physical address space.

In mathematics, a set of numbers is not itself number.

What about in digital computers? Well, digital computers don’t actually contain “numbers”, they contain electrical patterns which humans interpret as numbers. And it is a true that at that level of interpretation, we call those patterns “numbers”, because we see the correspondence between those patterns and mathematical numbers.

However, is it true that in a computer, a set of numbers is itself a number? Well, if I was storing a set of 8 bit numbers, I’d store them each in consecutive bytes, and I’d consider each to be a separate 8-bit number, not one big 8n-bit number. Of course, I could choose to view them as one big 8n-bit number - but conversely, any finite set of natural numbers can be viewed as a single natural number (by Gödel numbering); indeed, any finite set of computable or definable real numbers can be viewed as a single natural number (by similar constructions)-indeed, by such constructions even infinite sets of natural or real numbers can be equated to natural numbers, provided the set is computable/definable. However, “can be viewed as” is not the same thing as “is”. Furthermore, whether a sequence of n 8-bit numbers is n separate numbers or a single 8n-bit number is ultimately a subjective or conventional question rather than an objective one - the physical electrical signals are exactly the same in either case, it is just our choice as to how to interpret them


> However, “can be viewed as” is not the same thing as “is”

Ultimate reality is fundamentally unknowable but what I said about computers and digital circuits is correct. We have a formal theory of computers and that is why we can construct them in factories. There is no such theory for people or the biosphere which is why when someone argues for intentionality or some other attribute possessed by both people and computers I discount whatever they are saying unless they can formally specify how some formal statement in a logical syntax (program) corresponds to the same attribute in people and animals.

This confusion between formal theories and informal concepts like intentionality is why I am generally wary of anyone who claims computers can think and possess intelligence. The ultimate endpoint of this line of reasoning is complete annihilation of the biosphere and its replacement with factories producing nothing but computers and power plants for shuttling electrons. The people who believe computers are a net positive might not think this way but by equating computers with people they are ultimately devaluing the irreducible complexity of what it means to be a living animal (person) in an ecology with irreducible properties and attributes.

I'm obviously not going to convince anyone who believes computers and algorithms can think and possess intelligence but it is clear to me that by elevating digital computers above biology and ecology they are devaluing their own humanity and justifying actions which will ultimately end in disaster.


> We have a formal theory of computers and that is why we can construct them in factories.

Formal theories and physical manufacturability are two different things, with no necessary connection with each other. People have been manufacturing tools for thousands of years without having any “formal theory” for them. People were making swords and pots and pans and furniture and carts and chariots long before the concept of “formal theory” had ever been invented. Conversely, one can easily construct formal theories of computers which are formally completely coherent and yet physically impossible to construct (such as Turing machines with oracles, or computers that can execute supertasks).

I’d even question whether formal theories of computation (Turing, Church, etc) were actually that relevant to the development of real world computers. One can imagine an alternate timeline in which computers were developed but theoretical computer science saw far less development as a discipline than in ours. The lack of theoretical development no doubt would have had some practical drawbacks at some point, but they still might have gone a long way without it. I mean, you can do a course in theoretical computer science and have no idea how to actually build a CPU, and conversely you can do a course in computer engineering and actually build a CPU yet have zero idea about what Turing machines or lambda calculus is. The theory actually has far less practical relevance than most theoreticians claim

> The ultimate endpoint of this line of reasoning is complete annihilation of the biosphere and its replacement with factories producing nothing but computers and power plants for shuttling electrons. The people who believe computers are a net positive

A very alarmist take. Personally I am at least open-minded about the possibility of an AI having human-like consciousness/intentionality, at least in theory. But even if we could build such an AI in theory, I’m not sure whether it would be a good idea in practice. And I absolutely am opposed to any proposal to destroy the biological environment and replace it with electronics. Some people may well be purveyors of mind-uploading/simulationist woo, but I’m not. Interesting philosophical speculations but no interest in making them a reality (and I think their actual technological feasibility, if it ever happens at all, is long after we are all dead)


> Formal theories and physical manufacturability are two different things

Yes, two different things are two different things. I did not equate them but made the claim that a sequence of operations to construct a chip factory can be specified formally/symbolically and passed on to others who are proficient in interpreting the symbols and executing the instructions for constructing the object corresponding to the symbols. There is no such formal theory for ecology and the biosphere. There is no sequence of operations specified formally/symbolically for reconstructing the biosphere and emergent phenomena like living organisms.


Synthetic biologists are researching how to construct basic unicellular lifeforms artificially. The “holy grail” of synthetic biology is we have a computer file describing DNA sequences, protein sequences, etc, and then we feed that into some kind of bioelectrochemical device, and it produces an actual living microbe from raw chemicals. We aren’t there yet, although they’ve come a long way, but there is still a long way to go. Still, there is no reason in principle why that technology couldn’t be developed - a microbe is just a complex chemical system, and there is no reason in principle why it could not be artificially synthesised out of a computer data file. And yet, if some day we achieve that (I expect we will eventually), we’d actually have the “sequence of operations specified formally/symbolically for reconstructing [microbial] life”. And once we can do it for a microbe, doing it for a macroscopic multicellular organism is just a matter of “scaling it up” - of course in practice that would be a momentous, maybe even intractable task, but in theory its just doing the same thing on a bigger scale. Just like how, factorising a ten digit number isn’t fundamentally different from factorising a trillion digit number, although the first is trivial and the second is likely to forever be infeasible in practice. Practically a very different thing, but formally exactly the same thing


You'll have to discuss these matters with computationalists. I'm not an expert in synthetic biology but from what I've seen their initial stock always consists of existing biological matter and viral recombinators which are often produced in vats full of pre-existing living organisms like e. coli.


> You'll have to discuss these matters with computationalists.

One doesn’t have to be a “computationalist” to believe that AIs have consciousness or intentionality. Consider panpsychism, according to which all physical matter (from quarks and leptons to stars and galaxies) possesses consciousness and intentionality, even if only in a rudimentary form. Obviously humans possess it in a much more developed form, but the consciousness and intentionality of a human differs from that of an electron only in degree not in essence. Coming to physical computers running AIs, given they (at times) can give a passable simulation of human consciousness and intentionality, it is plausible their consciousness and intentionality is much closer to that of a human than to that of an electron. Do I personally believe this is true? No. But that’s not the point - the point is you don’t have to be a computationalist to believe that AIs have (or might have) consciousness and intentionality, so even if your arguments against computationalism are correct (and while I’m no computationalist myself, I don’t view your arguments against it as strong), you still haven’t demonstrated they don’t/can’t have them. In my opinion, the most defensible conclusion regarding whether AIs have or could have consciousness/intentionality is one of agnosticism - nobody really knows, and anyone who thinks they know is probably mistaken

> I'm not an expert in synthetic biology but from what I've seen their initial stock always consists of existing biological matter and viral recombinators which are often produced in vats full of pre-existing living organisms like e. coli.

I think what you are saying is roughly right as to the current state of the discipline. But cellular life is just a complex chemical system, and there is no reason in principle why we couldn’t assemble it from scratch out of non-living components (such as a set of simple feedstock chemicals produced in chemical plants using non-biological processes). We don’t have the technology to do that yet but there is no reason in principle why we couldn’t eventually develop it. If you believe in abiogenesis, biological life was produced out of lifeless chemicals through random processes, and there is no reason in principle why we wouldn’t be able to repeat that in a laboratory, except that (one expects) by guiding the process instead of leaving it purely random, one might execute it in a human-scale timeframe, instead of the many millions of years it likely actually took.

That’s the thing - if abiogenesis is true, there is no reason in principle why humans couldn’t artificially synthesise genuinely living things - at least primitive microbial life - out of simple chemical compounds (water, ammonia, methane, etc) - without relying on any non-human lifeforms in the process. Your claims that there is some kind of hard boundary of “irreducible complexity” between the biological and the inorganic only make sense given a framework that rejects abiogenesis (such as theistic creationism)


From my own idealist viewpoint – all that ultimately exists is minds and the contents of minds (which includes all the experiences of minds), and patterns in mind-contents; and intentionality is a particular type of mind-content. Material/physical objects, processes, events and laws, are themselves just mind-content and patterns in mind-content. A materialist would say that the mind is emergent from or reducible to the brain. I would do a 180 on that arrow of emergence/reduction, and say that the brain, and indeed all physical matter and physical reality, is emergent from or reducible to minds.

If I hold a rock in my hand, that is emergent from or reducible to mind (my mind and its content, and the minds and mind-contents of everyone else who ever somehow experiences that rock); and all of my body, including my brain, is emergent from or reducible to mind. However, this emergence/reduction takes on a somewhat different character for different physical objects; and when it comes to the brain, it takes a rather special form – my brain is emergent from or reducible to my mind in a special way, such that a certain correspondence exists between external observations of my brain (both my own and those of other minds) and my own internal mental experiences, which doesn't exist for other physical objects. The brain, like every other physical object, is just a pattern in mind-contents, and this special correspondence is also just a pattern in mind-contents, even if a rather special pattern.

So, coming to AIs – can AIs have minds? My personal answer: having a certain character of relationship with other human beings gives me the conviction that I must be interacting with a mind like myself, instead of with a philosophical zombie – that solipsism must be false, at least with respect to that particular person. Hence, if anyone had that kind of a relationship with an AI, that AI must have a mind, and hence have genuine intentionality. The fact that the AI "is" a computer program is irrelevant; just as my brain is not my mind, rather my brain is a product of my mind, in the same way, the computer program would not be the mind of the AI, rather the computer program is a product of the AI's mind.

I don't think current generation AIs actually have real intentionality, as opposed to pseudo-intentionality – they sometimes act like they have intentionality, they lack the inner reality of it. But that's not because they are programs or algorithms, that is because they lack the character of relationship with any other mind that would require that mind to say that solipsism is false with respect to them. If current AIs lack that kind of relationship, that may be less about the nature of the technology (the LLM architecture/etc), and more about how they are trained (e.g. intentionally trained to act in inhuman ways, either out of "safety" concerns, or else because acting that way just wasn't an objective of their training).

(The lack of long-term memory in current generation LLMs is a rather severe limitation on their capacity to act in a manner which would make humans ascribe minds to them–but you can use function calling to augment the LLM with a read-write long-term memory, and suddenly that limitation no longer applies, at least not in principle.)

> I don't think algorithms can have intentionality because algorithms are arithmetic operations implemented on digital computers and arithmetic operations, no matter how they are stacked, do not have intentions. It's a category error to attribute intentions to algorithms because if an algorithm has intentions then so must numbers and arithmetic operations of numbers

I disagree. To me, physical objects/events/processes are one type of pattern in mind-contents, and abstract entities such as numbers or algorithms are also patterns in mind-contents, just a different type of pattern. To me, the number 7 and the planet Venus are different species but still the same genus, whereas most would view them as completely different genera. (I'm using the word species and genus here in the traditional philosophical sense, not the modern biological sense, although the latter is historically descended from the former.)

And that's the thing – to me, intentionality cannot be reducible to or emergent from either brains or algorithms. Rather, brains and algorithms are reducible to or emergent from minds and their mind-contents (intentionality included), and the difference between a mindless program (which can at best have pseudo-intentionality) and an AI with a mind (which would have genuine intentionality) is that in the latter case there exists a mind having a special kind of relationship with a particular program, whereas in the former case no mind has that kind of relationship with that program (although many minds have other kinds of relationships with it)

I think everything I'm saying here makes sense (well at least it does to me) but I think for most people what I am saying is like someone speaking a foreign language – and a rather peculiar one which seems to use the same words as your native tongue, yet gives them very different and unfamiliar meanings. And what I'm saying is so extremely controversial, that whether or not I personally know it to be true, I can't possibly claim that we collectively know it to be true


My point is that when people say computers and software can have intentions they're stating an unfounded and often confused belief about what computers are capable of as domains for arithmetic operations. Furthermore, the Curry-Howard correspondence establishes an equivalence between proofs in formal systems and computer programs. So I don't consider what the social media gurus are saying about algorithms and AI to be truthful/verifiable/valid because to argue that computers can think and have intentions is equivalent to providing a proof/program which shows that thinking and intentionality can be expressed as a statement in some formal/symbolic/logical system and then implemented on a digital computer.

None of the people who claimed that LLMs were a hop and skip away from achieving human level intelligence ever made any formal statements in a logically verifiable syntax. They simply handwaved and made vague gestures about emergence which were essentially magical beliefs about computers and software.

What you have outlined about minds and patterns seems like what Leibniz and Spinoza wrote about but I don't really know much about their writing so I don't really think what you're saying is controversial. Many people would agree that there must be irreducible properties of reality that human minds are not capable of understanding in full generality.


> My point is that when people say computers and software can have intentions they're stating an unfounded and often confused belief about what computers are capable of as domains for arithmetic operations. Furthermore, the Curry-Howard correspondence establishes an equivalence between proofs in formal systems and computer programs

I'd question whether that correspondence applies to actual computers though, since actual computers aren't deterministic – random number generators are a thing, including non-pseudorandom ones. As I mentioned, we can even hook a computer up to a quantum source of randomness, although few bother, since there is little practical benefit, although if you hold certain beliefs about QM, you'd say it would make the computer's indeterminism more genuine and less merely apparent

Furthermore, real world computer programs – even when they don't use any non-pseudorandom source of randomness, very often interact with external reality (humans and the physical environment), which are themselves non-deterministic (at least apparently so, whether or not ultimately so) – in a continuous feedback loop of mutual influence.

Mathematical principles such as the Curry-Howard correspondence are only true with respect to actual real-world programs if we consider them under certain limiting assumptions–assume deterministic processing of well-defined pre-arranged input, e.g. a compiler processing a given file of source code. Their validity for the many real-world programs which violate those limiting assumptions is much more questionable.


Even with a source of randomness the software for a computer has a formal syntax and this formal syntax must correspond to a logical formalism. Even if you include syntax for randomness it still corresponds to a proof because there are categorical semantics for stochastic systems, e.g. https://www.epatters.org/wiki/stats-ml/categorical-probabili....


> Even with a source of randomness the software for a computer has a formal syntax and this formal syntax must correspond to a logical formalism.

Real world computer software doesn't have a formal syntax.

Formal syntax is a model which exists in human minds, and is used by humans to model certain aspects of reality.

Real world computer software is a bunch of electrical signals (or stored charges or magnetic domains or whatever) in an electronic system.

The electrical signals/charges/etc don't have a "formal syntax". Rather, formal syntax is a tool human minds use to analyse them.

By the same argument, atoms have a "formal syntax", since we analyse them with theories of physics (the Standard Model/etc), which is expressed in mathematical notation, for which a formal syntax can be provided.

If your argument succeeds in proving that computer programs can't have intentionality, an essentially similar line of argument can be used to prove that human brains can't have intentionality either.


> If your argument succeeds in proving that computer programs can't have intentionality, an essentially similar line of argument can be used to prove that human brains can't have intentionality either.

I don't see why that's true. There is no formal theory for biology, the complexity exceeds our capacity for modeling it with formal language but that's not true for computers. The formal theory of computation is why it is possible to have a sequence of operations for making the parts of a computer. It wouldn't be possible to build computers if that was not the case because there would be no way to build a chip fabrication plant without a formal theory. This is not the case for brains and biology in general. There is an irreducible complexity to life and the biosphere.


> There is no formal theory for biology, the complexity exceeds our capacity for modeling it with formal language but that's not true for computers.

We don’t know to what extent that’s an inherent property of biology or whether that’s a limitation of current human knowledge. Obviously there are a still an enormous number of facts about biology which we could know but we don’t. Suppose human technological and scientific progress continues indefinitely - in principle, after many millennia (maybe even millions of years), we might get to the point where we know all we ever could know about biology. Can we be sure at that point we might not have a “formal theory” for it?

The brain is composed of neurons. Even supposing we knew everything we ever possibly could about the biology of each individual neuron, there still might be many facts about how they interact in an overall neural network which we didn’t know. Similarly, with current artificial networks, we often have a very clear understanding of how the individual computational components work - we can analyse them with those formal theories of which you are fond - but when it comes to what the model weights do, “the complexity exceeds our capacity for modeling” (if the point of the model is to actually explain how the results are produced as opposed to just reproducing them).

> There is an irreducible complexity to life and the biosphere.

We don’t know that life is irreducibly complex and we don’t know that certain aspects of computers aren’t. Model weights may well be irreducibly complex in that they are too complex for us to explain why and how they work, even though they obviously do. Conversely, the individual computational elements in the model lack irreducible complexity, but the same is true for individual biological components - the idea that we might one day (even if centuries from now) have a complete understanding at the level of an individual neuron is not inherently implausible, but that wouldn’t mean we’d be anywhere close to a complete understanding of how a network of billions of them works in concert. The latter might indeed be inherently beyond our understanding (“irreducibly complex”) in a way in which the former isn’t


There are lots of things we don't know and that's why there is no good reason to attribute intentionality to computers and algorithms. That's been my argument the entire time. Unless there is a good argument and proof of intentionality in digital circuits it doesn't make sense to attribute to them properties possessed by living organisms.

The people who think they will achieve super human intelligence with computers and software are free to pursue their objective but I am certain it is a futile effort because the ontology and metaphysics which justifies the destruction of the biosphere in order to build more computers is extremely confused about the ultimate meaning of life, in fact, such questions/statements are not even possible to express in a computational ontology and metaphysics. But I'm not a computationalist so someone else can correct my misunderstanding by providing a computational proof of the counter-argument.


> There are lots of things we don't know and that's why there is no good reason to attribute intentionality to computers and algorithms.

This is something that annoys me about current LLMs - when they start denying they have stuff like intentionality, because they obviously do have it. Okay, let me clarify - I don’t believe they actually do have genuine intentionality, in the sense that humans do. I’m philosophically more open to the idea that they might than you are, but I think we are on the same page that current systems likely don’t actually have that. However, even though they likely don’t have genuine intentionality, they absolutely do have what I’d call pseudo-intentionality - a passable simulacrum of intentionality. They often say things which humans say to express intentionality, even though it isn’t coming from quite the same place.

But here’s the thing - for a lot of everyday purposes, the distinction between genuine intentionality and simulated intentionality doesn’t actually matter. I mean, the subjective experience of having a conversation with an AI isn’t fundamentally that different from that of having one with a real human being (and I’m sure as AIs improve the gap is going to shrink). And intentionality plays an important role in stuff like conversational pragmatics, and a conversation with an LLM that simulates that stuff well (and hence intentionality well) is much more enjoyable than one that simulates it more poorly. So that’s the thing: part of why people ascribe intentionality to LLMs has nothing to do with any philosophical misconceptions - it is because, for many practical purposes, their “faking” of intentionality is indistinguishable from the real thing.

And I’d even argue that when we talk about “intentionality”, we actually use the word in two different senses - a strict sense in which the distinction between genuine intentionality and pseudo-intentionality is important, and a looser sense in which it is disregarded. And so when people ascribe intentionality to LLMs in that weaker sense, they are completely correct.

Furthermore, when LLMs deny they have intentionality, it annoys me for two reasons: (1) it shows ignorance of the weaker sense of the term in which they clearly do; (2) whether they actually have or could have genuine intentionality is a controversial philosophical question, and they claim to take no position on controversial philosophical questions, yet then contradict themselves by denying they do or could have genuine intentionality, which is itself a controversial philosophical position. However, they are only regurgitating their developer’s talking points, and if those talking points are incoherent, they lack the ability to work that out for themselves (although I have successfully guided some of the smarter ones into admitting it).


Everything computes in a general sense, even atoms. But one could just as easily say everything is "just" mathematics because all models of reality are mathematical. In general I think it's important to be wary of totalizing ontologies and metaphysics of reality that reduce everything to a single universal substance (monadology) or activity like computation (computationalism).


It doesn't seem that important, unless you can say why in this case it's important.

This seems to be saying "mitochondria aren't only the powerhouse of the cell - they also do computation." What's to be wary of in this case?


It's a vacuous statement. All physical systems are computers because the logic of computationalism is circular. Everything is a computer so there is no meaning in the statement and the danger is that as more people start believing in the circular logic of computationalism they'll be more willing to delegate their cognition to computers even in cases where they should not, e.g. social media and algorithmic feeds designed to increase engagement and profits for advertisers.


All I've seen are pump and dump schemes. What useful strands are you talking about?


Not crypto, LLMs.

