> But the problem is that while kids like it a lot, it doesn't translate to engineering careers.
I think there has always been that, though, like when having a guitar was cool and people thought they'd be famous doing it. Of course only 0.00001% actually managed it, but some carve out a career in music or related areas such as being studio engineers etc. (I did.)
And for some it shows that it is possible: that people like them can be enabled to make their own stuff.
It might be that there are organisations needed to bridge this new gap and get people into more formal engineering, but hopefully they'll also realise that people like them might one day work at top-tier engineering companies.
For me, if I ever say IEC mains lead I get a blank expression. C13 even more so.
"Kettle lead" (Which is notched to indicate it can take a higher temperature and most of cables aren't that, they will be the c13 type), and their face lights up and a cable will be handed to me.
Just one of those things that's wrong, but it's not worth being pedantic over it, imo.
Given the paucity of electric kettles in the USA, I wonder how that term became so widespread.
Ironically, in Europe, where the IEC cables were familiar from kettles, they've largely been superseded by cables hardwired into a base pad onto which the kettle is set.
No, they don't. Look at what has happened when a Tesla has mistaken a nearby motorcycle with two small rear lights for a car that is further away but has the same lighting configuration. It did not end well for the motorcyclists.
I don't think it's wrong, but I do think the models available right now lack the inductive bias required to solve the task appropriately, and have architectural misalignments with the task at hand, meaning that for properly reliable output you'd need impossibly large models and impossibly large/varied datasets. The same goes for transformers for language modelling: an extremely adaptable model, but ultimately not aligned with the task of understanding and learning language, so we need enormous piles of data and huge models to get decent output.
With respect, I disagree. Musk is obsessed with "the best part is no part", which only works if you don't actually need the part. Combine that with an obsession with cost cutting and you get tunnel vision: insisting on a course of action that cannot know with 100% certainty about the world it is trying to navigate. And this has led directly to people dying.
Being obsessed only works when you turn out to be right, and Tesla's system does not work as well as lidar.
Can someone offer me some help? I've just been messing about "vibe coding" little Python apps with a local LLM, Continue and VSCode, and I only got so far with it.
Then I found the free tier of Claude, so I fed in the "works so far" version with the changes that the local LLM had made, and it fixed and updated all the issues (with clear explanations) in one go. Success!
So my next-level attempt was to get all the spec and prompts for a new project (a simple Manic Miner style 2D game using pygame). I used ChatGPT to craft all this and it looked sensible to me, with appropriate constraints for different parts of the project.
Claude created it, but it keeps referring to a method which it says is not present in the code and telling me that I'm running the wrong version (I'm definitely not). I've tried pointing the method out by reference to the line number and the surrounding code, but it's just gaslighting me.
Any ideas how to progress from this? I'm not expecting perfection, but it seems it's just taken me to a higher level before running into essentially the same issue as the local LLM.
All advice appreciated. I'm just dabbling with this for a bit of fun when I can (I'm pretty unwell, so I do things as and when I feel up to it).
It's likely you're running into the "too deep into mediocre code with unclear interfaces and a lot of hidden assumptions" hell that LLMs are generally poor at handling. If you're running into a wall you can't get past, it's better to do a controlled demolition.
i.e., take everything written by ChatGPT and have the highest-quality model you have summarize what the game does and break down all the features in depth.
Then take that document and feed it into Claude. It may take a few iterations, but the code you get will be much better than your attempts at iterating on the existing code.
Claude will likely zero-shot a better application or, at least, one that it can improve on itself.
If Claude still insists on making up new features, then install the context7 MCP server and ask it to use context7 when working on your request.
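If you'd rather script that summarize-then-regenerate loop than paste between chat windows, a rough sketch is below; it assumes the official anthropic Python SDK with an API key in the environment, and the model string and file names are placeholders rather than anything from your setup. Doing the same thing by hand in the free web tier works just as well.

```python
# Rough sketch of the "controlled demolition" loop described above.
# Assumptions: `pip install anthropic`, ANTHROPIC_API_KEY set in the environment,
# and "game.py" / the model string are placeholders to swap for your own.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder; use whatever you have access to

def ask(prompt: str) -> str:
    reply = client.messages.create(
        model=MODEL,
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

# Step 1: have the strongest model available summarize the existing code into a spec.
with open("game.py") as f:
    source = f.read()
spec = ask("Summarize what this game does and break down every feature in depth:\n\n" + source)

# Step 2: regenerate from the spec instead of patching the old code.
fresh = ask("Write a single-file pygame implementation of this specification:\n\n" + spec)
with open("game_v2.py", "w") as f:
    f.write(fresh)
```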
I think I should have made it clearer in my post: the code is Claude's and was done from scratch (the first app was a Mandelbrot viewer which it added features to; this is a platform game).
It's a single file at the moment (I did give a suggested project structure with files for each area of responsibility) and it kind-of-works.
I think I could create the missing method in the class myself, but wanted to see if it was possible by getting the tools to do it; it's as much an experiment in the process as in the end result.
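For what it's worth, by "create the missing method" I just mean stubbing something like this in myself; the class and method names below are made up for illustration, not what's actually in my file.

```python
import pygame

class Player(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()
        self.image = pygame.Surface((16, 16))
        self.rect = self.image.get_rect()
        self.vel_y = 0

    # Hypothetical stand-in for whatever method Claude keeps referring to;
    # a stub like this at least lets the game run while I experiment.
    def apply_gravity(self):
        self.vel_y += 1
        self.rect.y += self.vel_y
```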
Thanks for replying, I shall investigate what you've suggested and see what happens.
You can't. This is a limitation of LLM technology. They can output the most likely token sequence, but if "likely" doesn't match "correct" for your problem then there's nothing you can do.
Also, each LLM has its own definition of what "likely" is - it comes from the training and finetuning secret sauce of that particular LLM.
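To make that concrete, here's a toy sketch of greedy next-token selection, with a fake scoring function standing in for the real network; it's purely illustrative, not how any particular vendor's model works internally.

```python
import math
import random

VOCAB = ["def", "update", "render", "self", "player", "("]  # toy vocabulary

def fake_logits(context):
    # Stand-in for the real network: deterministic pseudo-scores per token.
    rng = random.Random(hash(context) % (2**32))
    return [rng.uniform(-1, 1) for _ in VOCAB]

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_next_token(context):
    # Pick whichever token the model scores as most likely, correct or not.
    probs = softmax(fake_logits(context))
    best = max(range(len(VOCAB)), key=lambda i: probs[i])
    return VOCAB[best], probs[best]

token, p = greedy_next_token("class Player:\n    def ")
print(f"most likely next token: {token!r} (p={p:.2f})")
```

If "most likely" under that scoring happens not to be "correct" for your code, the decoder has no way to know.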
Is it? That certainly hasn't been the experience of my two autistic daughters, who had years of difficulty which led to real problems for a decade or so each during later schooling and early work years. Both are now doing much better, but it's been a long uphill battle for them. The same goes for my two sons, but society is much more tolerant (though not fully) of their symptoms, as they tend to be viewed as 'male' tendencies, just severely amplified.
From what I've read, the cost of an LLM would be greater than that of the current operators, who are effectively enslaved. If it were cheaper then possibly; it would certainly be easier to manage than people who would try to escape their dire situation here.
Profit for the founders and the shareholders is the only definition of success anyone cares about in the States.
The idea that a business could be considered successful by just providing a living wage for its owners and employees, or by contributing to the community, is not a consideration.
People in this country see a single-person startup making a few million dollars as a greater success story than a business providing for the lives and well-being of 20 employees for a decade.
Coming from Android I have to agree, it's terrible. The only help I can offer is that if you press and hold the space bar you can drag to move the cursor to where you need it, but it's still painful. I can only bear iOS because I am using SwiftKey; the default keyboard genuinely stopped me from switching to an iPhone, I found it that bad. And some apps force you to use the default iOS one, which is even worse!