Shepherd's Dog is a game I've wanted to create for a long time, but I never got the sheep flocking behaviour just right. The goal of the game is to herd all the sheep into the pen before nightfall. I've asked several models to create this game and I'm particularly impressed with what Claude 3.7 could do with a one-shot prompt.
> Shepherd's Dog is a game I've wanted to create for a long time
Not sure if you're aware, but there was a game like that for PlayStation and GBA, called Sheep! https://en.wikipedia.org/wiki/Sheep_(video_game) Here's some gameplay footage (the player here didn't choose a dog to play with for some reason): https://www.youtube.com/watch?v=SP058CHQj20 The premise of the game is the same: you run the sheep to the designated area over obstacles.
Ah thanks for this. The game above is lovely and it’s really similar to what I had in mind (I was also thinking of lemmings!). I see in the other comments below that this idea of mine has been created as a game a lot of times already. Seems like I’m not as original as I thought haha
Just tried a 1-shot on Grok3 - Thinking and it couldn't get past the start button. Throws an error:
| "67:39 Uncaught ReferenceError: startGame is not defined"
Scope issue.
No barking or dog player model but pretty similar in style to Claude's output.
What's interesting to me about playing with AI codegen is that each model has specific and sometimes overlapping output errors. Claude 3.7 really likes to solve errors by returning dummy data as a 'fallback' when doing client or server calls. A little prompting can reduce this but not eliminate it. 'The tests always pass if you return dummy data.'
It seems like it has some issues, but the result is interesting nonetheless. Just a one-shot like the others, needed a single "Keep going" but otherwise this is the vanilla output from the prompt.
Edit: Looks like you can share an HTML preview of a gist using html-preview.github.io, so here's that. https://html-preview.github.io/?url=https://gist.githubuserc... - It'll go to level 2 if you refresh the page and hit Restart, but I don't think it's possible to clear Level 2. The flock stays too far apart to fit enough sheep in the pen.
Great demos. One-shotting isn't really fair imo; I feel like that might be hard even for a human to do (working without feedback). I'd be curious what DeepSeek would do with a bit more feedback.
Not the same reason at all. In genetics the reason is that you're losing gene variety and eventually recessive genes aren't suppressed anymore. In case of LLM it's just error accumulation.
It's a few days late, but "losing gene variety" isn't the cause. What happens is that genetic errors compound and are more likely to be expressed, i.e. "error accumulation".
How about a number of grad level genetics courses? Does that beat your google search? Because that is what I have. And what I am telling you is what happens.
This is really easily searched (as you said).
You might read up on it if interested. Check out why inbreeding can lead to expression of genetic defects. What is the mechanism? (hint: it's not "losing gene diversity" or "suppression").
`Homozygous, as related to genetics, refers to having inherited the same versions (alleles) of a genomic marker from each biological parent. Thus, an individual who is homozygous for a genomic marker has two identical versions of that marker. By contrast, an individual who is heterozygous for a marker has two different versions of that marker.`
In other words, errors can accumulate and are more likely to be expressed. Not "gene diversity" (this is a topic relating to evolutionary fitness, selection potential etc.), not "suppression". Error accumulation.
I had this conversation before. I point out how your interpretation is insane and doesn't follow logical reasoning, and you accuse me of gaslighting. I don't want to waste anyone else's time. We could just paste to an AI our both initial statements and ask who is more correct, but I'm sure you would either say AIs (all of them or 99% of them) are wrong, or you would interpret them saying I'm more correct, as you being right.
I have no problems being wrong on the Internet. Unfortunately, for some magical reason, in the overwhelming majority of my conversations, I either recognize it within a minute (or one reply when in writing), or never.
Let me give you a simple example; maybe you will understand it better.
Let's say a person has a recessive faulty gene. The gene doesn't get expressed because there is only one copy (recessive). We can notate this Aa (small "a" being the faulty gene, large "A" being the good copy). The person has two copies because they get one from each parent.
So "Aa" has a partner we can notate as "AA" (two good copies of the gene). AA and Aa have a child. What is the chance the child carries the recessive gene? 50%, because the Aa parent passes on "a" half the time while the AA parent always passes on "A". Can the child have two bad copies (i.e. "aa", where the gene gets expressed)? No, they cannot, because only one bad copy is available from the parents. At most they get "Aa"; the other 50% of the time they get "AA".
Let's say AA and Aa have a bunch of kids, the kids intermarry. Then their kids intermarry. Now what is the chance of an individual having two bad copies (i.e "aa"). What is the chance they have 1 bad copy (Aa)?
It's just probability calculations, and expression becomes more probable as more copies of the bad gene build up in the gene pool. I.e., within a population the errors accumulate, and there is a larger chance of the defect being expressed (aa) with continued inbreeding.
This works with desirable genes too, which is why we have so many kinds of dogs, for instance. We select for them and build up copies of the gene variants we want to see, to the point where there is a 100% (or close to it) chance of expression.
Hopefully you get this now. If not, read up on Mendelian genetics and table calculations; maybe that will help you see.
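The argument above is easy to check numerically. Here's a minimal Monte Carlo sketch of it: start from an AA x Aa pairing, then mate random pairs within each generation's offspring ("inbreeding") and watch the expressed-defect (aa) fraction appear. The population size, generation count, and random-mating scheme are illustrative assumptions, not a real population model.

```python
import random

def child(p1, p2):
    """Each parent passes on one of their two alleles at random
    (Mendelian inheritance). Genotypes are normalized so that
    'Aa' and 'aA' are the same string."""
    return "".join(sorted(random.choice(p1) + random.choice(p2)))

def simulate(generations=4, pop=1000, seed=0):
    """Generation 0 is all children of an AA x Aa pairing (no 'aa'
    possible). Each later generation mates random pairs from within
    the previous one. Returns the fraction of 'aa' individuals
    (defect expressed) per generation."""
    random.seed(seed)
    current = [child("AA", "Aa") for _ in range(pop)]
    fractions = []
    for _ in range(generations):
        fractions.append(sum(g == "aa" for g in current) / pop)
        current = [child(random.choice(current), random.choice(current))
                   for _ in range(pop)]
    return fractions
```

Generation 0 has zero "aa" individuals (only one bad copy was available), but once carriers mate with carriers, roughly 6% of each later generation expresses the defect, matching the Punnett-square arithmetic (0.25 chance of an "a" allele from each random parent).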
------------------------
So let me take this back to the original example of LLMs.
Suppose there is a 1% chance an LLM confidently claims a Python library "Foo" exists and does XX when that's not true. This is analogous to a bad copy of the gene. If you train on that output (i.e. "inbreeding"), then use that as a reference (more inbreeding), soon many sources will say "Foo" exists and you'll have a larger chance of getting "Foobarred" information from the LLM.
Seems o3-mini implements the 'boids' algorithm for flocking (likely due to its prevalence online), but I find that it doesn't really fit here.
Indeed, in boids each element has a constant (or minimum) velocity, such that the sheep never stop 'running'. I find Claude's flocking behaviour looks more natural for sheep.
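One way to get the "sheep can actually stop" behaviour is to keep the three classic boids rules (cohesion, alignment, separation) but replace the minimum speed with velocity damping. A rough sketch of that idea is below; this is not Claude's or o3-mini's actual code, and all the radii, weights, and damping values are illustrative guesses, not tuned parameters.

```python
import math

class Sheep:
    """A boids-style agent with no minimum speed."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = self.vy = 0.0

def step(flock, neighbor_radius=50.0, separation_radius=15.0,
         cohesion_w=0.005, align_w=0.05, separation_w=0.1,
         damping=0.9, max_speed=2.0):
    """Advance every sheep by one tick."""
    for s in flock:
        near = [o for o in flock if o is not s
                and math.hypot(o.x - s.x, o.y - s.y) < neighbor_radius]
        if near:
            # Cohesion: steer toward the local centre of mass.
            cx = sum(o.x for o in near) / len(near)
            cy = sum(o.y for o in near) / len(near)
            s.vx += (cx - s.x) * cohesion_w
            s.vy += (cy - s.y) * cohesion_w
            # Alignment: drift toward the neighbours' average velocity.
            s.vx += (sum(o.vx for o in near) / len(near) - s.vx) * align_w
            s.vy += (sum(o.vy for o in near) / len(near) - s.vy) * align_w
            # Separation: push away from sheep that are too close.
            for o in near:
                d = math.hypot(o.x - s.x, o.y - s.y)
                if 0 < d < separation_radius:
                    s.vx -= (o.x - s.x) / d * separation_w
                    s.vy -= (o.y - s.y) / d * separation_w
        # Damping instead of a minimum speed: with nothing pushing it,
        # a sheep's velocity decays toward zero and it stops to graze.
        s.vx *= damping
        s.vy *= damping
        speed = math.hypot(s.vx, s.vy)
        if speed > max_speed:
            s.vx *= max_speed / speed
            s.vy *= max_speed / speed
        s.x += s.vx
        s.y += s.vy
```

With classic boids the speed is clamped from below as well, so the flock never settles; here an undisturbed sheep coasts to a stop within a few dozen ticks, which reads as much more sheep-like. A dog would then be modelled as an extra repulsion force that re-energizes nearby sheep.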
Tip: don't push them into a corner! I got up to lvl 7 without a problem, and then I got them stuck in a corner and that was it :( Poor sheep will spend the night in the cold outside the barn!
Are they doing this now? Oh brother! And here I was thinking WASM would be a good solution to the desktop exe signing problem for my community's roguelike. Instead browser vendors are likely just going to ban the site.
Interesting, I'm also using Edge (with all security settings set to maximum) and it works fine for me. Maybe the difference is that I'm using it on a Mac?
The one that Claude created was a legitimately fun game! If it implemented boids similar to o3-mini, it would be even better. Slap some sprites on it and put it on steam!
On desktop the map is huuuuge and it's not particularly fun waiting for them to slowly move all the way to the opposite corner. It's cool that one can prototype this quickly, but it needs some tweaks from play-testing, as with all games I guess.
I once made a boid-thingy, which this also reminds me of. https://matsemann.github.io/boids-workshop/ (and since the parent game is mostly boid behavior with a goal condition, I guess that's why the LLM is so successful in implementing it?)
The link is the final result with lots of controls, but the idea is that it's a tutorial/workshop where you build it step by step yourself, in Norwegian though https://github.com/Matsemann/boids-workshop
Ha, I've been creating this on-and-off for a while. Just last night I asked various LLMs to implement a boid-with-predator algorithm and all failed hard.
Instead I spent an hour reading through a description and implementing manually and it at least worked.
But yes, boids is a good start, but it requires some work to make it more natural for mammals, which can have a minimum speed of zero.
The political views of the flags are an instant stop for me. I wonder what Rabobank thinks of that.
Keep politics out of tech, especially if you are in the Dutch market. It's such a small market that you would not want something like that to stop a contract in the future, especially seeing that you used to be a zzp'er.
The reality is, the Netherlands is a right-leaning country; look at the last election results.
A lot of people in the Netherlands are very divided about politics (stop the war and side with the US, leave the EU, or give more money to Ukraine and be pro-EU). I hear about this on a daily basis at work, and I am quite sick of hearing about it. As a hiring manager at one of the big four banks in the Netherlands, seeing something like the flags is an obvious sign of which side the OP leans towards. Having enough issues on my plate, and not needing another person to start a debate between team members at work, is enough for me to stop and not hire said person. The OP is or was a zzp'er (freelancer) in the Netherlands. I think it's a little silly to limit your future contracts by bringing politics into tech. What's the purpose of having it there?
- You can play the Claude game here (note: doesn't work on Safari for some reason): https://html-preview.github.io/?url=https://raw.githubuserco...
- o3-mini's version is here: https://html-preview.github.io/?url=https://raw.githubuserco...
- Results of other models and a leaderboard are here: https://github.com/vnglst/when-ai-fails/blob/main/shepards-d...
- Some videos: https://hachyderm.io/@vnglst/114125938185826311