
I don’t get these kinds of tools. A commit message should capture the why of a change, not summarize what it is; anyone can get the what themselves by reading the code if they desire. What you cannot get from the code is the _why_, which only you as the author can provide.


I often start a change by having Cursor read the Slack thread (via MCP) that is motivating the change. After the change is made in the same Cursor thread, it has fairly good context on the _why_ and writes a helpful commit message.


Very nice, the fewer neurons you use the better. In biology they call it "use it or lose it," if my memory serves me correctly.

Neurons that fire together, fry together.


While I’m deeply skeptical of any attempt to derive a commit message from the diff, if the context and motivation are truly captured in the Slack thread or other prior documents and available for summarization, then how many neurons are you really using on rewording what you already hashed out? Especially if someone would otherwise skip a message or write a poor one, this sounds like a great approach for getting at least a first pass to further modify.


There are plenty of commits that don't need an explanation, like mechanical cleanups or refactoring. If your code is appropriately documented, an LLM can often extract the explanation too.


If there truly is no need for an explanation, the commit message is very short and won’t require any substantial effort from the author to write.

A fix often addresses a particular bug, and that bug should be explained in the commit. A refactor has a reason, and that needs to be explained as well.

I’m not saying LLMs can’t do this, but it needs the context, and you will rarely find that in the diff of the commit itself.


I do often ask Claude Code or Gemini CLI to write commits. I agree with you on the why being important. The majority of these are bug fixes accompanied by tests, where the why is easily inferred from the change, the newly added tests, and their comments.


That's a PR description. Commit messages describing why are pretty annoying and useless.



True. But that’s income; to be in the top 10% by net worth you need $1.5M USD, and $12M USD for the top 1%.


Yeah, but net worth is weird because for most people, it just measures age. When you're young, you have nothing in your 401k and you have a brand new mortgage, so you're worth around $0. Negative if you have any student loans.

When you're in your 50s or 60s, the mortgage is repaid, and if nothing blew up, you probably also have a million or two in your 401k, so at that point, it's actually not that hard for a person who had a decent career in the SF Bay Area to be worth $4M+. And many FAANG retirees will probably flirt with $10M+ if they don't spend too much.


Is that in the world, or in the USA? Idk, I thought there would be more millionaires.


Earning $60k/yr would put you in the global 1%.


Isn’t the Lightning Network solving the slowness and high-fees problem?


Supposedly, but it doesn't seem to be happening. The problem with those 2nd layers is that they end up inserting a central point of failure, and avoiding exactly that is the entire point of cryptocurrency.


If human productivity and unemployment increase at the same time, the obvious solution for regulators will be to decrease the work week from 5 to 4 days, or even further.


You only work 5 days?


Telegram has 15 million premium users paying ~$50/year.
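Back of the envelope, that works out to 15M × ~$50 ≈ $750M/year from subscriptions alone.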

They also issue bonds, which is another fun way to collect money.


It’s very hard to trust his words after he’s become the leader of a billion-dollar for-profit company. I miss the old Sam.


Imagine having a mission of “ensur[ing] that artificial general intelligence (AGI) benefits all of humanity” while also believing that it can only be trusted in the hands of the few.

> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.


He's very clearly stating that trusting AI to a few hands was an old, naive idea that they have evolved from, which establishes their need to keep evolving as the technology matures.

There is a lot to criticize about OpenAI and Sama, but this isn't it.


To the benefit of OpenAI. I think LLMs would still exist, but we wouldn't have access to them.

Whether they are a net positive or a net negative is arguable. If it's a net negative, then unleashing them to the masses was maybe the danger itself.


One does not exclude the other.

I work for a remote-only company but use a workspace almost every day. I get to choose my own “office” and the people in it, and I also pick the commute I want; this one is just 5 minutes away.


Imagine a future where state actors have hundreds of AI agents fixing bugs, gaining reputation while they slowly introduce backdoors. I really hope open source models succeed.


I work for a large closed-source software company and I can tell you with 100% certainty that it is full of domestic and foreign agents. Being open source means that more eyes can and will look at something. That only increases the chance of malicious actions being found out ... just like this supply-chain attack.


Reminds me of the scene in Fight Club where the unreliable narrator is discussing car defects with a fellow airline passenger.

Quoting from flawed memory:

Passenger: Which company?

Narrator: A large one.


Why would open source models make this scenario you are painting better?


Because in the closed-source model, the frustrated developer who looked into this SSH slowness submits a ticket for the owner of the malicious code to dismiss.


It’s insane to consider the actual discovery of this to be anything other than a lightning strike. What’s more interesting here is that we can say with near certainty that there are other backdoors like this out there.


Time to start looking at similar cases for sure.


This seems completely unrelated to the grandparent comment’s mention of open source LLMs


You're right, I read the comment as:

> Imagine a future where state actors have hundreds of AI agents fixing bugs, gaining reputation while they slowly introduce backdoors. I really hope open source () succeed.

I guess we can only hope verifiable and open source models can counteract the state actors.


Not necessarily. A frustrated developer posts about it, it catches the attention of someone who knows how to use Ghidra et al., and it gets dug out quite fast.

Except, with closed-source software maintained by a for-profit company, such a cockup would mean a huge reputational hit, with billions of dollars of lost market cap. So there are very high incentives for companies to vet their devs, have proper code reviews, etc.

But with open source, anyone can be a contributor, everyone is a friend, and nobody is reliably real-world-identifiable. So carrying out such attacks is easier by orders of magnitude.


> So, there are very high incentives for companies to vet their devs, have proper code reviews, etc.

I'm not sure about that. It takes a few leetcode interviews to get into major tech companies. As for the review process, it's not always thorough (if it looks legit and the tests pass...). However, employees are identifiable and would take a huge risk being caught doing anything fishy.


Absolutely not. Getting a job at any critical infrastructure software dev company is easier than contributing to the Linux kernel.


Can confirm. I may work at Meta, but I was nearly banned from contributing to an open source project because my commits kept introducing bugs.


We witnessed Juniper generating their VPN keys with Dual EC DRBG, and then the generator constants being subverted, with Juniper claiming not to know how it happened.

I don’t think it affected Juniper’s firewall business in any significant way.


... if we want security, it needs trust anyway. It doesn't matter if it's amazing Code GPT or Chad NSA; the PR needs to be reviewed by someone we trust.

It's the trust that's the problem.

Web of trust purists were right, just ahead of their time.


It would actually be sort of interesting if multiple adversarial intelligence agencies could review and sign commits. We might not trust any particular intelligence agency, but I bet the NSA and China would both be interested in not letting much through, if they knew the other guy was looking.


That is an interesting solution. If China, the US, Russia, the EU, etc. all sign off and say "yep, this is secure," we should trust it, since if one of them finds an exploit, they might assume the others have found it too. This is a little like the idea of a fair division of a cake: if two people want the last slice, you have one cut and the other choose, and since the chooser will pick the bigger piece, the slicer, knowing they will get the smaller one, will cut as equally as possible. In this case the NSA makes the cut (writes the code), and Russia / China chooses whether it's allowed in.
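For what it's worth, the "I cut, you choose" step is small enough to sketch in code. A minimal Python sketch, where the valuation functions and names are illustrative assumptions, not anyone's actual protocol:

    # "I cut, you choose": the cutter divides so the two pieces look
    # equal under their OWN valuation; the chooser then takes whichever
    # piece they value more, so neither party can complain afterwards.
    def cut_and_choose(cutter_value, chooser_value, total=1.0, steps=1000):
        # Cutter searches for the cut point that equalizes the pieces
        # under their own valuation.
        best_cut = min(
            (i / steps * total for i in range(1, steps)),
            key=lambda c: abs(cutter_value(0.0, c) - cutter_value(c, total)),
        )
        left, right = (0.0, best_cut), (best_cut, total)
        # Chooser takes the piece they value more; cutter gets the rest.
        if chooser_value(*left) >= chooser_value(*right):
            return {"chooser": left, "cutter": right}
        return {"chooser": right, "cutter": left}

    # With identical, uniform valuations the result is a 50/50 split.
    uniform = lambda a, b: b - a
    print(cut_and_choose(uniform, uniform))  # chooser and cutter each get half

In the commit-review analogy, "cutting" is writing the code and "choosing" is the adversary's veto over whether it ships.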


NSA makes the cut and China picks the public key to use.

In all seriousness, those people will quickly find some middle ground and will just share keys with each other


Maybe also throw EFF into the mix.


This is why Microsoft bought GitHub and has been onboarding major open source projects. They will be the trusted 3rd party (whether we like it or not is a different story).


That just…doesn’t make any sense.

Everyone starts from zero and works their way up.


Chad NSA

It's called the ANS in Chad.


Imagine a world where a single OSS maintainer can do the work of 100 of today’s engineers thanks to AI. In the world you describe, it seems likely that contributors would decrease as individual productivity increases.


Wouldn't everything produced by an AI explicitly have to be checked/reviewed by a human? If not, then the attack vector just shifts to the AI model, and that's where the backdoor is placed. Sure, one may be 50 times more efficient at maintaining such packages, but the problem of verifiably secure systems actually gets worse, not better.


And be burned out 100x faster


Presumably the state actors are looking for other state actors' bugs, and would try to fix them, or at least fix them to only work for themselves.

That's quite a game of cat and mouse.


Why AI agents?

