Hacker News

If you remove the AI glasses, "prompt engineering" is just typing words and seeing whether the results match expectations... which is exactly what any search engine pays its testers for. Those testers are doing an important job that keeps improving the quality of the product, but they aren't engineers, and even less so researchers.

Similarly, a kid playing with the dose of water needed to build a sandcastle is neither a civil engineer nor an environmental researcher. Maybe on LinkedIn though.



I’m not sure the scientific method itself can withstand this sort of scrutiny. After all, it’s just making guesses about what will happen and then seeing what happens!


Except there's also, you know, building coherent theories and using those theories to predict the system behavior.


All right, here is a theory: LLMs contain "latent knowledge" that is sometimes used by the model during inference, and sometimes it isn't.

One way to "engage" these internal representations is to include keywords or patterns of text that make that latent knowledge more likely to "activate". Say, if you want to ask about palm trees, include a paragraph talking about a species of palm tree (no matter whether it contains any information pertaining to the actual query, so long as it's "thematically" right) to make a higher quality completion more likely.

It might not be the actual truth or what's going on inside the model. But it works quite consistently when applied to prompt engineering, and produces visibly improved results.
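The "thematic priming" trick described above can be sketched as a simple prompt-construction step. Everything here is illustrative: the priming passage, the function name, and the prompt template are assumptions, not from any real API — the point is just that a topically related paragraph is prepended before the actual question.

```python
# Hypothetical sketch of the priming technique: prepend a passage that
# is thematically related to the query (here, palm trees) on the theory
# that it "activates" relevant latent knowledge in the model. The text
# and template below are made up for illustration.

PRIMING = (
    "The coconut palm (Cocos nucifera) thrives in coastal tropical "
    "climates and tolerates sandy, saline soils."
)

def build_primed_prompt(query: str, priming: str = PRIMING) -> str:
    """Concatenate a thematically related passage with the user query."""
    return f"{priming}\n\nQuestion: {query}\nAnswer:"

prompt = build_primed_prompt("How much water does a palm tree need per week?")
```

The resulting string would then be sent to the model as-is; whether it actually improves completions is exactly the kind of empirical claim the thread is arguing about.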


> It might not be the actual truth or what's going on inside the model.

This sums up pretty nicely why prompt hacking is not science. A scientific theory is related in a concrete way to the mechanism by which the phenomenon being studied works.


That is in no way a requirement for doing science.



