Do you know why that policy is in place? Is it fear that the LLM provider will steal company trade secrets?


It's probably both: the risk of leaking your source, and the risk of a copyright violation if it spits out some GPL source verbatim and you check it in.


> if it spits out some GPL source verbatim

Does that ever actually happen? I've only heard of it happening to people who forced the AI's hand by including the comments for said code in the prompt.


There's a setting, "Suggestions matching public code", that can be set to "Block", which I think is sufficient for most people's usage. But many companies don't want to expose themselves to any sort of liability.


While many software companies get away with an “it’ll be fine” attitude, that is not sufficient for all companies and industries. Sometimes things have to be provably correct.


This happens a lot, and you can easily walk into constructing such a prompt without realizing it.


I work in a research facility, so the biggest fear is that our super top secret elite info will leak. The reality is it'd be used to refactor a lot of terrible code.


Use local models if you don't want to send your data to OpenAI.
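
For context, a minimal sketch of what "local" can look like in practice: sending a prompt to a model served on your own machine via Ollama's HTTP API, so no source ever leaves the box. The model name, prompt, and port here are assumptions for illustration (Ollama's default port, and a codellama model you'd have pulled yourself).

    # Minimal sketch: query a locally hosted model over Ollama's HTTP API,
    # so no source code leaves the machine.
    # Assumes Ollama is running on its default port (11434) and that a
    # "codellama" model has been pulled; both are placeholders.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "codellama",
            "prompt": "Refactor for clarity:\n\ndef f(x): return x*2 if x>0 else -x*2",
            "stream": False,  # return one JSON object instead of a token stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

Whether a local model is good enough to replace a hosted one is a separate question, but it does remove the data-leaving-the-building objection entirely.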


> We have a total ban on AI for source code analysis


I'm replying to the fear that it will leak secret info


I'll let our executives know that alternatives exist.



