Hacker News | awwaiid's comments

Very minor and tangential request for future recordings - show the shell prompt and/or put the commentary in shell comments (prefix with "# "). A step closer to IRL.

Speaking of which, an interesting thing to contemplate is whether it is worth automating what you did, or whether making the videos happens rarely enough that you'd start from scratch with a new Manus or other AI session.


Afaik all the large providers flipped the default to contractually NOT train on your data. So no, training data context size is not a factor.


It uses the llm library, so you do the plugin and model management through that. Let's say you've already installed Ollama and pulled the `gemma3n:e2b` model. Then you use the llm CLI to add the Ollama extension:

  llm install llm-ollama
and then you use whatever model you like, anything llm has installed. See https://llm.datasette.io/en/stable/plugins/installing-plugin... for plugin install info.

Here is a sample session. You can't tell from the transcript, but it is very slow on my CPU-only non-Apple machine (each response took around 30 seconds) :)

  >>> from gremllm import Gremllm
  >>> counter = Gremllm("counter", model="gemma3n:e2b")
  >>> counter.value = 5
  >>> counter.increment()
  >>> counter.value
  None
  >>> counter.value
  None
  >>> counter.value
  None
  >>> counter.what_is_your_total() 
  6
  6
... also I don't know why it kept saying my value is None :). The "6" is doubled because one must have been printed and the other is the return value.
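The doubling is easy to reproduce without any LLM involved, since the REPL echoes a non-None return value after whatever the call printed:

  >>> def what_is_your_total():
  ...     print(6)     # printed inside the call
  ...     return 6     # echoed by the REPL afterwards
  ...
  >>> what_is_your_total()
  6
  6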


You know... Because if you get them wet they multiply, per the documentaries


…I read the whole article at OP’s link, many comments off this thread…I even clicked into the college course material in https://news.ycombinator.com/item?id=44468452 …and not once did it occur to me why it was called “wet mode”…not once…

…until your comment. Here! Take my “lived through the 80’s and 90’s” card.


Hm. I'll improve the documentation to make it slightly more obvious. Once I add after_midnight with malicious-compliance mode and bright_light() to freeze implementations, it should be clearer.


Nah, dude. It’s exactly the right amount of subtle that I found it a delight to discover. Making the joke too obvious undercuts it!


I was chatting with Simon Willison (whose LLM library I use to power gremllm) on Discord and he suggested D&D use-cases. Kinda works!!!

    >>> from gremllm import Gremllm
    >>> player = Gremllm('dungeon_game_player')
    >>> player.go_into_cave()
    'Player has entered the cave.'
    >>> player.look_around()
    {'location': 'cave', 'entered_cave_at': '2025-07-02T21:59:02.136960'}
    >>> player.pick_up_rock()
    'You picked up a rock.'
    >>> player.inventory()
    ['rock']
(further attempts at this have ... varying results ...)


I helped Chris Callison-Burch design a class at UPenn called Interactive Fiction, which is a similar context to what Simon suggested. The real magic is that it reframes hallucinations as creative storytelling. The use case is SUPER fun if you imagine the LLM as a dungeon master telling a story that gets expanded over time.

The framework he and I built kept track of the game state over time and allowed saving and loading games as JSON. We could then send the full JSON to an LLM as part of the prompts to get it to react. The neatest part, imo, was when we realized we could have the LLM generate text for parts of the story, then analyze what it said to detect any items, locations, or characters not in the game state, and then have it create JSON representations of the hallucinated objects that could be inserted into the game state. That sealed the deal for using hallucinations as creative storytelling inside the context of a game.
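A rough sketch of that loop, for the curious (hypothetical code; `call_llm` is a stand-in for whatever chat client you use, not the actual framework API):

    import json

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your preferred LLM client here")

    def narrate_and_update(game_state: dict, player_action: str):
        # 1. Send the full game state (plain JSON) plus the player's action.
        story = call_llm(
            "You are the dungeon master. Game state:\n" + json.dumps(game_state)
            + "\nThe player does: " + player_action + "\nContinue the story."
        )

        # 2. Ask which items, locations, or characters the narration mentions
        #    that are missing from the game state, described as JSON.
        new_objects = json.loads(call_llm(
            "Game state:\n" + json.dumps(game_state)
            + "\nNarration:\n" + story
            + "\nList anything mentioned that is missing from the game state as"
            ' a JSON array of {"type": ..., "name": ..., "description": ...}.'
            " Return [] if nothing is missing."
        ))

        # 3. Insert the "hallucinated" objects so they become persistent parts
        #    of the world. Saving/loading a game is just json.dump / json.load.
        for obj in new_objects:
            game_state.setdefault(obj["type"] + "s", []).append(obj)

        return story, game_state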

I assure you the D&D context is very fun! The class website might give you more ideas too: https://interactive-fiction-class.org/

I wasn't officially part of UPenn at the time, so my name isn't listed on the site, but we wrote papers about some of the things we did, such as this one, and you'll see me listed there: https://www.cis.upenn.edu/~ccb/publications/dagger.pdf


Sounds similar to AI Dungeon, which I believe ran on a fine-tuned version of GPT-2 "all the way" back in 2019. And it honestly kind of reminded me of the "Mind Game" in the novel Ender's Game.

https://en.wikipedia.org/wiki/AI_Dungeon


Just want to say that I'm not an AI guy at all, but this has made me more excited about it than anything in a while. Really cool! Did you also do the one where you put "spells" in your code?


Yes. You can still try to get it in one attempt.


Odds in one attempt: 1 in 32

Odds in two attempts: 1 in 1


Your odds add up to more than 1.


Probabilities sum to 1.0. Odds don't sum.


The odds of getting it within 2 attempts are 1 in 1. The odds of getting it on exactly the second attempt are 31 in 32.
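(Assuming the feedback from a wrong first guess pins down the answer exactly, which is what makes the second attempt a sure thing, the arithmetic checks out:)

  >>> from fractions import Fraction
  >>> p_first = Fraction(1, 32)      # win on attempt 1
  >>> p_second = 1 - p_first         # otherwise, guaranteed win on attempt 2
  >>> p_first + p_second             # win within 2 attempts
  Fraction(1, 1)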


Ooops:

You won!

You guessed 01111 in 4 attempts!


Well yeah and a fair coin has P(heads) == P(tails) == 0 for someone who eats it.


Some get joy from creating generative tools/models!


The llama models


Maybe we'll run out of ion?


Sort of. Compact NMC Li-ion cells from laptops and phones often use materials like cobalt, supplies of which are much more limited and problematic than those of lithium. The newer LiFePO4 chemistry does not use cobalt and, importantly, is rather hard to ignite. Its energy density per unit mass is lower, but that matters less for stationary installations.


It doesn't sound like it is wormable -- it doesn't allow any new attacks on external devices.

