Hacker News | not_that_d's comments

What the post fails to understand is that a lot of the "top" people at big companies just don't understand the regular user because, well, they do not live a life like the regular users do.

The same is true of most software developers.

One can argue that an appropriate response from the regulator for failing an a11y audit is making sure the regulated entity does in fact have a certain percentage of blind people among its ranks, increasing the number for each repeated failure.

Besides my other response, it can also be I am not smart enough for it.


The code is quite easy to follow, to be honest: we have documented a lot of stuff and segmented functionality into libraries that follow an app/feature/models pattern. Almost every service we have has unit tests explicitly describing what the public API is doing, or is supposed to do, in several scenarios; we never test implementation details.

Giving it to new people of course raises questions, but most of them (juniors) could just follow the code given an entry point for the task, from BE to FE.
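As a sketch of what "testing the public API, not implementation details" can look like, here is a made-up toy service (CartService and its methods are hypothetical, not actual code from the project), with each test naming a scenario and asserting only on observable behavior:

```python
import unittest


class CartService:
    """Toy service standing in for one of the segmented libraries."""

    def __init__(self):
        self._items = {}  # internal detail: the tests never touch this

    def add(self, sku, qty=1):
        if qty <= 0:
            raise ValueError("qty must be positive")
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self):
        return sum(self._items.values())


class TestCartServicePublicApi(unittest.TestCase):
    # Each test name describes a scenario of the public API;
    # assertions only use public methods, never _items.

    def test_adding_same_sku_twice_accumulates_quantity(self):
        cart = CartService()
        cart.add("sku-1", 2)
        cart.add("sku-1", 3)
        self.assertEqual(cart.total_items(), 5)

    def test_rejects_non_positive_quantity(self):
        cart = CartService()
        with self.assertRaises(ValueError):
            cart.add("sku-1", 0)


if __name__ == "__main__":
    unittest.main()
```

Because the tests only exercise the public surface, the internal dict could be swapped for any other structure without touching a single test.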

I use the GitHub Copilot premium models available.

> I routinely make an implementation plan with Claude and then step away for 15 mins while it spins - the results aren’t perfect but fixing that remaining 10% is better than writing 100% of it myself.

I have to be honest, I have only done this twice, and the amount of code that needed to be fixed, plus the mental overhead of finding the open bugs, was much worse than just guiding the LLM at every step. But this was a couple of months ago.


What is "Vibe testing"?


He means capturing things that benchmarks don't. You can use Claude and GPT-5 back-to-back on a task where they score nearly identically, and you will still notice several differences. This is the "vibe".


I would assume that it is testing how well and appropriately the LLM responds to prompts.


I don't understand people criticizing this. Didn't they read the article? The new Suzi stuff doesn't want to replace Zigbee, and the new Zigbee version is backwards compatible.

What is the pain there?


I would assume, yet _another_ standard. There are a bunch of them, product builders take a long, long time to implement them properly, and the implementations are often buggy. They often force the consumer to buy yet another gateway/router and learn the ins, outs, and quirks of another protocol that won't work properly in a few years, all while two new competing standards have been introduced. An example: how long has Matter existed? Yet it didn't have a profile for smart plugs with energy monitoring (e.g. the $12 IKEA one). Such a basic use case...

And all this so Samsung et al can siphon off more user data and show more ads.

I fully understand the consumer viewpoint.

But sub-GHz (Suzi) is great news, imo!


> An example: how long has Matter existed? Yet it didn't have a profile for smart plugs with energy monitoring (e.g. the $12 IKEA one). Such a basic use case...

That was added in version 1.3 of Matter, released in the middle of this year. You just need to wait for your smart home ecosystem to support it and for IKEA to release a firmware update.

As far as ecosystems go, Home Assistant (HA) fully supports it, as does Samsung SmartThings. Google has a public beta, from what I've read. Amazon and Apple are on the way.

As far as devices go, all my energy monitoring smart plugs are TP-Link Tapo, and they have been quick to update firmware. I'm using several TP-Link Tapo P110M Matter smart plugs [1] and a Tapo P316M Matter smart power strip [2] with HA.

The P316M, purchased in the middle of October, came with firmware that supported Matter 1.3 out of the box. I simply added it with the "Add device" button on the HA screen and it worked.

The P110Ms, purchased at the start of this month, came with older firmware, so out of the box they did not show energy use in HA. A quick trip to the Tapo app to add them, during which it checks for and installs the latest firmware, brought them up to date. After that, the energy monitoring information showed up in HA.

[1] https://www.amazon.com/dp/B0DKG52WQ4

[2] https://www.amazon.com/dp/B0F5LNYTR7


As an ex audio engineer, I would say that the war ended, and loudness won.


I am living this, but the CEOs of my company are also "active" programmers.

Even though I already hear from them that "it helps them in languages they do not know" (which is also my experience), I get frowned upon if in meetings I do not say that I am "actively using AI to GENERATE whole files of code".

I use AI as a rubber duck, to generate repetitive code, or to support me when going into a new language or technology. But as soon as I understand the technology, I find that most of the code it gives for complete, non-hobby, enterprise-level projects contains either inefficient code or just plain mistakes, which take me ages to fix in technologies that are new to me.


I wonder if they will dedicate resources to helping the development of their open source tools?


I have seen really inept people given manager positions because they were outgoing, then crash after six months in the position, expecting for some reason that we fix all the management issues for them.

Honestly, I have no energy to be as social as work life needs me to be. Maybe that is ok. Maybe not.


For me it is not so. It makes me way faster in languages that I don't know, but slower in the ones I do know, because a lot of the time it creates code that will eventually fail.

Then I need to spend extra time following everything it did so I can "fix" the problem.


My daily experience suggests that this happens primarily when developers aren't as good as they assume they are at expressing the ideas in their head in a structure that the LLM can run with. That's not intended as a jab, just an opportunity for reflection.


But the moment I have the idea in my head is the moment I have the code for it. The time spent is mostly checking the library semantics, or whether there's already a function written for a specific bit. There's also checking that you're not violating some contract somewhere.

A lot of people have a try-it-and-see-if-it-works approach. That can be insanely wasteful in any moderately complex system. The scientist's way is to have a model that reduces the system to a few parameters. Then you'll see that a lot of libraries are mostly surface work and slightly modified versions of the same thing.

