
If you used GPT as a brain you could provide the current time as an input for it. Otherwise, yeah, GPT doesn't have time within its input by default, but if you did:

Make the instruction: say "Stop!" when 10 seconds are done, then feed it the current time in a loop; it would surely be able to do that.
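A minimal sketch of that loop, using a deterministic stand-in for the actual GPT call (the `mock_gpt` function and the prompt format are assumptions for illustration, not a real API):

```python
import itertools

def mock_gpt(prompt: str) -> str:
    """Stand-in for a real GPT call whose instruction is
    'Say "Stop!" when 10 seconds are done.' (hypothetical)."""
    elapsed = float(prompt.rsplit(" ", 1)[-1])
    return "Stop!" if elapsed >= 10 else "..."

def run_timer_loop(tick_seconds: float = 1.0) -> float:
    """Feed the model the elapsed time on every tick until it says Stop!."""
    elapsed = 0.0
    for _ in itertools.count():
        reply = mock_gpt(f"Elapsed seconds: {elapsed}")
        if reply == "Stop!":
            return elapsed
        elapsed += tick_seconds  # a real loop would time.sleep() here
```

The time here is simulated so the sketch is deterministic; with a real model you would sleep between ticks and inject the wall-clock time instead.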

But I'm not sure how exactly that is related to consciousness, though.

The best way to think of time is probably as a series of ticks anyway, and I assume there is something in people's brains doing that, so it would be reasonable to add a similar mechanism for GPT as well.

GPT's goal is only to act as the intelligent part of the brain, based on the input.



Modify the system enough and it’ll eventually be conscious.

It’s not about a pause token, but the internal processes. You can have a long conversation on the subway with someone without forgetting you’re going home from work. Overflow its context window and GPT-4 has no recourse; it just forgets. The difference is essentially prioritizing information, but LLMs really don’t function like that: it’s all about predicting the next token from a given context.

Give a future generation of AI systems internal working memory, a clock, and the ability to spend arbitrary time updating that internal memory and IMO that’s pretty close to consciousness. At least assuming it was all functional.


But it's not really difficult to inject this mechanism into the context window.

The latest version of GPT-4 Turbo allows for 100k tokens, or roughly 75k words. The whole subway thing and more could easily be kept there, and whatever can't fit can be handled by designing the prompt to always keep a certain number of tokens in context for different layers of memory. The further into the past you go, the less detail you keep (more like a title of your most important learnings throughout life), but at any given time GPT-4 can call a function to fetch extra content about it, if it seems relevant to the situation at hand.

So for example, in each prompt context you would have:

1. A short description of what you have done each year of your life.
2. Key findings and goals that you currently have.
3. The whole current day (or as much of it as seems reasonable).
4. The past weeks, in a bit more detail than the yearly descriptions.
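A sketch of how those layers could be assembled into a single context under a word budget (the layer names, the `budget_words` parameter, and the use of word counts as a stand-in for tokens are all assumptions):

```python
def build_context(yearly, goals, today, recent_weeks, budget_words=70_000):
    """Assemble the layered memory into one prompt context,
    coarsest/oldest layers first, until the word budget runs out."""
    layers = [
        ("Yearly summaries", yearly),        # least detail, furthest past
        ("Key findings and goals", goals),
        ("Past weeks", recent_weeks),
        ("Current day", today),              # most detail, most recent
    ]
    parts, used = [], 0
    for title, lines in layers:
        for line in lines:
            n = len(line.split())
            if used + n > budget_words:
                return "\n".join(parts)      # budget exhausted
            parts.append(f"[{title}] {line}")
            used += n
    return "\n".join(parts)
```

A real version would count actual tokens (e.g. with a tokenizer) rather than words, and would let the model decide what survives in each layer.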

So basically you could try to fit ~70k words representing as much context and as many of the most important details as possible (with GPT itself deciding what is most important).

I've been building an assistant for myself that has such a memory-management system: it gets the past N messages (like 40) in full detail, then summaries from before that time, and in addition it stores older messages and learnings, which are also passed into the context depending on the query they match.
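A rough sketch of that scheme (the class and method names are hypothetical; a real version would call GPT to produce the summaries and likely use embeddings rather than keyword matching for retrieval):

```python
from collections import deque

class MemoryStore:
    """Last N messages verbatim, older ones summarized, plus
    simple query-matched retrieval from an archive."""

    def __init__(self, n_recent: int = 40):
        self.recent = deque(maxlen=n_recent)
        self.summaries = []   # summaries of messages that fell out of `recent`
        self.archive = []     # full old messages, kept for retrieval

    def add(self, message: str):
        if len(self.recent) == self.recent.maxlen:
            oldest = self.recent[0]           # about to be evicted
            self.archive.append(oldest)
            self.summaries.append(oldest[:50])  # placeholder for a GPT summary
        self.recent.append(message)

    def context_for(self, query: str):
        """Summaries + archived messages matching the query + recent messages."""
        matched = [m for m in self.archive
                   if any(w in m for w in query.split())]
        return self.summaries + matched + list(self.recent)
```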

And if you want to compare it to the human process of sleeping: it occasionally goes through all the messages and "compresses" them, extracting the most important findings and short summaries so they can be used in the next day's context.

So to me it's basically just giving it tools, and the other things (memory, longer-term memory, inputs it currently doesn't get) are fine to be solved by other tools. I think the human brain also has different parts working on different things, so it's similar in a sense.

Then, once you have 70k words spent on this historical context, you run the prompt in a loop, allowing it to perform a function each time, like retrieving further info, storing some important fact, etc.
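That loop could look something like this (the action format and tool names are made up for illustration; a real version would use the model provider's function-calling interface):

```python
def agent_loop(context: str, model, tools: dict, max_steps: int = 10):
    """Run the prompt in a loop; each step the model picks a function
    to call, expressed here as a dict like {"fn": ..., "arg": ...}."""
    for _ in range(max_steps):
        action = model(context)
        if action["fn"] == "done":
            return action["arg"]
        result = tools[action["fn"]](action["arg"])
        context += f"\n[{action['fn']} -> {result}]"  # feed result back in
    return context
```

With a scripted mock model that first stores a fact and then finishes, the loop returns the final answer while the `store` tool has received the fact.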

The real problem would be cost: looped 70k-token requests would rack up costs quite quickly.



