Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I think the end game here is to create systems which aren't based on the current strategy of utilizing gradient descent (for everything). I don't see a lot of conversation explicitly going on about that, but we do talk about it a lot in terms of AI systems and probability.

You don't want to use probability to solve basic arithmetic. Similarly, you don't want to use probability to govern basic logic.

But because we don't have natural language systems which interpret text and generate basic logic, there will never be a way to get there until such a system is developed.

Large language models are really fun right now. LLMs with logic governors will be the next breakthrough however one gets there. I don't know how you would get there, but it requires a formal understanding of words.

You can't have all language evolve over time and be subject to probability. We need true statements that can always be true, not 99.999% of the time.

I suspect this type of modeling will enter ideological waters and raise questions about truth that people don't want to hear.

I respectfully disagree with Simon. I think using a trusted/untrusted dual LLM model is quite literally the same as using more probability to make probability more secure.

My current belief is that we need an architecture that is entirely different from probability based models that can work alongside LLMs.

I think large language models become "probability language models," and a new class of language model needs to be invented: a "deterministic language model."

Such a model would allow one to build a logic governor that could work alongside current LLMs, together creating a new hybrid language model architecture.

These are big important ideas, and it's really exciting to discuss them with people thinking about these problems.



> a "deterministic language model"

We already have a tool for that: it's called "code written by a programmer." Being human-like is the exact opposite of being computer-like, and I really fear that handling language properly either requires human-likeness or requires a lot of manual effort to put into code. Perhaps there's an algorithm that will be able to replace that manual work, but we're unlikely to discover it unless the real world gives us a hint.


This is futile thinking. Like saying machines don't need to exist because human labor already does.


Interesting point of view but life is not deterministic. There might be a probability higher than zero for 1+1 to be different than 2. Logic is based on beliefs.


There is utility in having things be consistent. It's very convenient that I know the CPU will always have 1 + 1 be 2.




Consider applying for YC's Winter 2026 batch! Applications are open till Nov 10

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: