Hacker News | siliconc0w's comments

An EO is not law - the hard part is going to be getting Congress on board. Trump is losing political steam and AI is widely unpopular. Much of this country feels AI is going to take their jobs, poison their children, and raise energy prices.

Right. Congress has the power to preempt state law in an area related to interstate commerce by legislating comprehensive rules. The executive branch does not have the authority to do that by itself.

This is like Trump's "pardon" of someone serving time for a state crime. It does little if anything.

Quite a number of AI-related bills have been introduced in Congress, but very few have made much progress. Search "AI" on congress.gov.


Where do you get this impression? I don’t know anybody who thinks that.

> Trump is losing political steam and AI is widely unpopular.

It seems extremely popular based on my LinkedIn feed! /s


Good thing LinkedIn is such an authentic representation of the vox populi

The problem is that unions are only as strong as the NLRB, which depends on the current administration. One of Trump's first actions was firing a Democratic member, leaving the board unable to form a quorum, so it's not looking good for the rest of his term. The Supreme Court is likely to bless that firing, making the NLRB even more susceptible to executive-branch meddling in the future.

I also would like to see better 'tech' for tech unions to organize, vote on priorities, share grievances, elect representatives, etc. Ideally they'd move to a fixed fee instead of a percentage of compensation. It shouldn't require millions of dollars in overhead to organize.


as much as i hate tech bros that think the solution to every social problem is a new saas app, there are two pieces of tech that would be great for workers:

1) for people that aren't in a union, make labor lawyers easy to use. there could be an app that walks you through gathering evidence about various workplace violations (osha/safety stuff, wage theft, etc) and then hooks you up with lawyers in a two-sided marketplace. workers would get easy representation, lawyers would get a stream of clients that show up with a nicely formatted bundle of evidence. it could even find connected cases that could be bundled into class actions.

2) when everybody worked in the same office/shop floor, you could easily commiserate and start discussions about unionization and collective action. if you're an app-mediated gig-worker (uber drivers, door-dashers, etc) you don't know how to connect with your coworkers. there needs to be a social platform where people would be able to make these connections. to do this, you'd need a way to verify that users are actual employees and put in various protections to make sure management isn't spying on them.


Yeah, the distributed nature of tech makes unionizing naturally difficult - multiple offices with different reporting chains, remote teams, etc. The way CWA handled this for Alphabet is a sort of fake "PR" union, where the company is under no obligation to bargain with you and you don't really have any of the protections.

An app could maybe help here as well by defining more viable bargaining units - like "the QA team" rather than "the NYC office," which may have thousands of employees with different eligibility and reporting chains.


It’s also important to have some line of communication that isn’t owned/monitored by management

Would you be opposed to a tech bro making a startup out of this? :)

No, I’d love to see it!

The first one lends itself to a revenue model really naturally, but I think it'd be hard to make number two into a business. It's not clear how to monetize it while also maintaining a really high degree of trust.


Random Bets for 2035:

* Nvidia GPUs will see heavy competition, with most chat-like use cases moving to cheaper models and inference-specific silicon, but they will still be used at the high end for critical applications and frontier science

* Most Software and UIs will be primarily AI-generated. There will be no 'App Stores' as we know them.

* ICE cars will become niche, largely replaced by EVs; solar will be widely deployed and will be the dominant source of power

* Climate change will be widely recognized due to escalating consequences, and there will be many mitigation efforts (e.g., climate engineering, climate-resistant crops, etc.)


The infamous Dropbox comment might turn out to be right in 10 more years, when LLMs might just build an entire application from scratch for you.

I'd take the other side of most of these. The Nvidia one is too vague - some could argue it's already seeing "heavy competition" from Google and other players in the space - but to make it concrete: I doubt they will fall below 50% market share.

You’re about 20 days short or 345 days late for this HN tradition. ;)

It is interesting, though: if it's all the same anyway, why not write formally verified software? Or write the same spec in three different languages to cross-check correctness.
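A toy sketch of that cross-checking idea (everything here is illustrative, not from any real project): implement the same tiny spec independently a few times, run all implementations on the same input, and flag any disagreement.

```javascript
// Three independent implementations of one spec: "median of three numbers".
function medianA(a, b, c) { return [a, b, c].sort((x, y) => x - y)[1]; }
function medianB(a, b, c) { return Math.max(Math.min(a, b), Math.min(Math.max(a, b), c)); }
function medianC(a, b, c) { return a + b + c - Math.max(a, b, c) - Math.min(a, b, c); }

// Run every implementation on the same arguments and report whether they agree.
// A disagreement means at least one implementation diverges from the spec.
function crossCheck(impls, ...args) {
  const results = impls.map((f) => f(...args));
  const agreed = results.every((r) => r === results[0]);
  return { agreed, results };
}
```

For example, `crossCheck([medianA, medianB, medianC], 3, 1, 2)` reports agreement on the value 2; a bug introduced into any one implementation would surface as `agreed: false`.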

I was working on a new project and I wanted to try out a new frontend framework (data-star.dev). What you quickly find out is that LLMs are really tuned to like React, and their frontend performance drops considerably if you aren't using it. Even with the entire documentation pasted into context and specific examples close to what I wanted, SOTA models still hallucinated attributes and APIs. And it isn't even that you have to use framework X, it's that you need to use X as of the training cutoff date.
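One cheap guard against that kind of hallucination is to lint generated markup against the attribute names actually present in the framework's docs. This is a hypothetical sketch - the allowlist below is made up and is not data-star's real attribute set:

```javascript
// Assumed allowlist, standing in for attribute names parsed out of the
// framework's documentation (these are NOT real data-star attributes).
const documentedAttrs = new Set(["data-on-click", "data-bind", "data-text"]);

// Scan LLM-generated HTML for data-* attributes the docs never mention.
// Anything returned here is a likely hallucination worth re-prompting about.
function findUndocumentedAttrs(html) {
  const attrs = html.match(/data-[a-z-]+/g) ?? [];
  return attrs.filter((a) => !documentedAttrs.has(a));
}
```

Wiring a check like this into the generation loop turns "the model quietly invented an API" into a concrete, retryable error.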

I think this is one of the reasons we don't see huge productivity gains. Most F500 companies have pretty gnarly proprietary codebases that are going to be out of distribution. Context engineering helps, but you still don't get near in-distribution performance. It's probably not unsolvable, but it's a pretty big problem at the moment.


That is the "big issue" I have found as well. Not only are enterprise codebases often proprietary, ground up architectures, the actual hard part is business logic, locating required knowledge, and taking into account a decade of changing business requirements. All of that information is usually inside a bunch of different humans heads and by the time you get it all out and processed, code is often a small part of the task.

AI is an excellent reason/excuse to have resources allocated to documenting these things

“Hey boss we can use AI more if we would document these business requirements in a concise and clear way”

Worst case: humans get proper docs :)


I use it with Angular and Svelte and it works pretty well. I used to use Lit, which at least the older models did pretty badly at, but it's less well known, so that's expected.

Yes, Claude Opus 4.5 recently scored 100% on SvelteBench:

https://khromov.github.io/svelte-bench/benchmark-results-mer...

I found that LLMs sometimes get confused by Lit because they don’t understand the limitations of the shadow DOM. So they’ll do something like throw an event and try to catch it from a parent and treat it normally, not realising that the shadow DOM screws that all up, or they assume global / reset CSS will apply globally when you actually need to reapply it to every single component.

What I find interesting is all the platforms like Lovable etc. seem to be choosing Supabase, and LLMs are pretty terrible with that – constantly getting RLS wrong etc.


As someone who works at an F100 company with massive proprietary codebases, where users have to sign NDAs to even see API docs and code examples: to say that the output of LLMs for work tasks is comically bad would be an understatement, even with feeding it code and documentation as memory items for projects.

I ended up building out a "spec" for Opus 4.5 to consume. I just copy-pasted all of the documentation into a markdown file and added it to the context window. Did fine after that. I also had the LLM write any "gotchas" to the spec file. Works great.

To be fair, it looks like that frontend framework may have had its initial release after the training cutoffs of most of the foundation models. (I looked, because I have not had this experience using less popular frameworks like Stimulus.)

> What you quickly find out is that LLMs are really tuned to like react

Sounds to me like there is simply more React code to train the models on.


The problem I've found with servant leadership in large orgs is that the direct manager usually has little agency over problems. The best you can hope for is that they provide additional context on the good intentions behind the bad decisions. This is essentially by design: a critical role they play is to be the scapegoat and shock absorber for the bad machinery above them.

Most codebases are pretty proprietary and so out of distribution for the AI, which causes poor performance, and you really have to fight some of the training to get it to use internal libraries and conventions.

Still useful, but certainly not PhD-level when it imports X, you remind it that its instructions are to use Y, it apologizes, imports Y, and then immediately imports X again.

So when your project gets cancelled for AI and you haven't gotten a raise, while AI researchers in the same company are getting generational wealth - it does feel pretty bad.


The problems and solutions are all well documented. As the article mentions, there are many existence proofs of cheaper, more effective systems. The real problem is the legalized bribery that prevents any action, and a media environment that pushes people to consume partisan rage slop so we don't hold mediocre politicians accountable.

This sounds like the wrong move - focusing on the product layer and counter-positioning on ads is the way to beat G.

Really sad to see the WFH era ending; it's a much better way to work. Especially as these companies embrace distributed teams, with RTO you now get the worst of both worlds.
