
I'm interested in how you came up with this. Do you have a blog post or something I can peruse?


I came up with this fairly early, around the GPT-3.5 release, when I started working on a narrative-based game. Basically GPT will let people do whatever; see this zero-shot adventure example: https://chat.openai.com/share/8b59bbb9-f6a3-42a9-a14e-766195...

To control for that I had the general idea of a grounded hidden truth to anchor the LLM: write the scenario up front and ground all player outcomes in that hidden scenario. That worked well in terms of pushback, but GPT is far too optimistic - i.e. it was impossible to lose a game, or even to die or take permanent wounds. GPT also allowed any kind of problem solving if it was narrated well enough - a common example: you could just keep drinking healing potions regardless of the situation you were in and effectively reset the game.
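For a feel of what that grounding looks like in practice, here's a minimal sketch - the scenario text and message layout are just illustrative placeholders, not the prompts I actually run:

    # the hidden truth lives in the system prompt; the player never sees it
    HIDDEN_SCENARIO = """
    The amulet in the crypt is a fake planted by the baron.
    The real amulet was melted down years ago.
    Any attempt to use the amulet's power must fail.
    """

    def build_messages(history, player_input):
        # ground every outcome in the hidden scenario, and explicitly
        # allow negative outcomes so the model has room to push back
        system = (
            "You are the narrator. All outcomes MUST stay consistent with "
            "this hidden scenario, which the player must never see:\n"
            + HIDDEN_SCENARIO
            + "\nFailure, injury and death are valid outcomes."
        )
        return [{"role": "system", "content": system},
                *history,
                {"role": "user", "content": player_input}]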

The next step was to keep track of item consumption outside of GPT, so I built a database and a few-shot instruction system. GPT still doesn't push back on a player trying to cheat, but there's not much I can do there. I also needed to lead GPT into negative outcomes, so I outsourced the skill rolls. That eventually led to a massive engine running the game, with the LLM just describing outcomes. The pipeline is: natural input > LLM converts it to structured JSON describing the action > engine asks for an action from every NPC in the zone > LLM maps the structures onto game rules > engine executes the rules and updates the game world state > LLM picks the output and describes the result to the player. The world is organized hierarchically, so only a few NPCs get executed at a time, but the world state is tracked as the player explores and returns to past locations.
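A rough sketch of what one turn of that pipeline looks like - the llm and world objects, function names and JSON shapes are simplified placeholders, not the real engine:

    import json, random

    def run_turn(llm, world, player_input):
        # 1. natural input -> structured JSON describing the action
        action = json.loads(llm("Convert to a JSON action: " + player_input))

        # 2. engine asks for an action from every NPC in the current zone
        npc_actions = [json.loads(llm("JSON action for NPC: " + json.dumps(npc)))
                       for npc in world.npcs_in_zone(action["zone"])]

        # 3. LLM maps each structured action onto a game rule...
        rules = [json.loads(llm("Map to a game rule: " + json.dumps(a)))
                 for a in [action, *npc_actions]]

        # 4. ...but the engine owns the dice and the world state, so
        #    failure and item consumption happen outside the LLM
        results = []
        for rule in rules:
            roll = random.randint(1, 20)                # skill roll outsourced from the LLM
            ok = roll + rule.get("modifier", 0) >= rule.get("difficulty", 10)
            world.consume_items(rule.get("items", []))  # tracked in the database, not in the prompt
            results.append({"rule": rule, "roll": roll, "success": ok})
        world.update(results)

        # 5. LLM only narrates what the engine already decided
        return llm("Describe this outcome to the player: " + json.dumps(results))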

Now, the linked GPT is not that full engine. It uses a smaller engine that just handles the skill rolls and a simpler version of the item tracking, and instead of being the game controller it is referenced from a GPT for most of the game. As I said, this approach is a fair bit weaker, but it can give a good enough result if you are compliant with the narrative (i.e. a willing and honest participant in the game); it will absolutely break apart if you challenge it too much.
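To give a feel for what "referenced from a GPT" means: the custom GPT calls out to a small API that owns the dice and the inventory. This is just an illustrative sketch (routes and names made up), not the pastebin engine linked below:

    from fastapi import FastAPI
    from pydantic import BaseModel
    import random

    app = FastAPI()
    inventory = {"healing potion": 3}   # toy in-memory inventory

    class RollRequest(BaseModel):
        skill_modifier: int = 0
        difficulty: int = 10

    @app.post("/roll")
    def skill_roll(req: RollRequest):
        # the GPT describes the attempt; the engine decides if it worked
        roll = random.randint(1, 20)
        return {"roll": roll, "success": roll + req.skill_modifier >= req.difficulty}

    @app.post("/consume/{item}")
    def consume(item: str):
        # no more infinite healing potions: consumption is tracked here
        if inventory.get(item, 0) <= 0:
            return {"consumed": False, "remaining": 0}
        inventory[item] -= 1
        return {"consumed": True, "remaining": inventory[item]}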

This is the originating GPT to which I added the NASA context: https://chat.openai.com/g/g-bLhy3K71r-gpt-and-paper and this is the engine, in all its simplicity: https://pastebin.com/GQuJtEqf



