
> As long as the technical domain knowledge is at least partially published

Most internal technical and business domain logic of companies isn’t published, though. Every time I asked ChatGPT about topics I had actually worked on over the past decade or two, or that I’m currently working on, it basically drew a blank, because they’re just not the category of topics that are discussed in detail (if at all) on the internet. At best it produced some vague generalities.

> once it can consume images and has a way to move a mouse.

That’s quite far from ChatGPT’s current capabilities, which are strongly tied to processing a linear sequence of tokens. Models will certainly improve in that direction as they are combined with image-processing AIs, but that will take a while.



Check out the announcement. GPT-4 accepts mixed-mode inputs of text and images.

Emitting mouse-cursor instructions isn’t a massive leap from current capabilities, given the rate of progress and recent developments around LLM tool use and the like.


I wonder if there will be a race to buy defunct companies for access to their now valuable junky tech-debt ridden hairball code, so they can train on it and benchmark on fixing bugs and stuff. With full source control history they could also find bug resolution diffs.


That source code isn’t worth much without the underlying domain knowledge, large parts of which only exist in the employees’ heads, more often than not. Maybe if the code is really, really well documented. ;)

Companies could in principle train an in-house AI on their corporate knowledge, and will likely be tempted to do so in the future. But that also creates a big risk, because whoever manages to get their hands on a copy of that model (a single file) will instantly have unrestricted access to that valuable knowledge. It will be interesting to see what mechanisms are found to mitigate that risk.


The weights file could be encrypted and require a password before becoming usable.
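A minimal, stdlib-only sketch of that idea: derive a key from a password (scrypt) and use it to encrypt the weights file at rest, so the raw file is useless without the password. The keystream construction here is purely illustrative, not real cryptography; a production system would use an AEAD cipher such as AES-GCM from a proper library. All function names and parameters are my own for illustration.

```python
import hashlib
import secrets

def derive_key(password: bytes, salt: bytes, length: int = 32) -> bytes:
    # Password-based key derivation using scrypt (in the stdlib).
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=length)

def _xor_stream(data: bytes, key: bytes) -> bytes:
    # Illustrative keystream built from repeated hashing -- NOT secure;
    # shown only to keep the sketch dependency-free. Use AES-GCM in practice.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_weights(weights: bytes, password: str) -> bytes:
    # Prepend a random salt so the same password yields different ciphertexts.
    salt = secrets.token_bytes(16)
    key = derive_key(password.encode(), salt)
    return salt + _xor_stream(weights, key)

def decrypt_weights(blob: bytes, password: str) -> bytes:
    salt, ciphertext = blob[:16], blob[16:]
    key = derive_key(password.encode(), salt)
    return _xor_stream(ciphertext, key)
```

Even then, the password has to live somewhere at inference time, so this only raises the bar for casual exfiltration; it doesn’t stop an insider who can run the model.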


I think what you say goes for most jobs. Why would GPT know much detail about being a machinist or luthier?

Eventually, job- and role-specific information will be fed into these models. I imagine corporations will have GPTs trained on all internal communications, technical documentation, and code bases. Theoretically, this should result in a big increase in productivity.



