
It's become more subtle, but it's still there. You can bias the model towards more "expert" responses with the right terminology. For example, a doctor asking a question will get a vastly different response than a layperson asking the same thing. A query with emojis will get more emojis back. Etc.


This is definitely something I’ve noticed — it’s not about naïve role-priming at all, but rather about language usage.

“You are an expert doctor, help me with this rash I have all over” will result in a fairly useless answer, but using medical shorthand — “pt presents w bilateral erythema, need diff dx” — gets you exactly what you’re looking for.


If this holds up, it’s an interesting product idea you could MVP in a day.

Lay person’s description -> translate into medical shorthand -> get the expert response in shorthand -> translate back.

Liability and error are obviously problematic.
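A rough sketch of that pipeline in Python, assuming some chat-completion-style API; `call_model` here is a placeholder you'd swap for a real provider's SDK, not any specific one:

```python
# Lay description -> clinical shorthand -> expert answer -> plain language.
# The model function is injected so any LLM client can be plugged in.

def expert_pipeline(lay_description: str, call_model) -> str:
    """call_model(system_prompt, user_text) -> str is an assumed interface."""
    # Step 1: translate the lay description into clinical shorthand.
    shorthand = call_model(
        "Rewrite the patient's description in clinical shorthand.",
        lay_description,
    )
    # Step 2: ask for the answer in shorthand, to trigger the "expert" register.
    expert_answer = call_model(
        "You are answering a clinician. Respond in clinical shorthand.",
        shorthand,
    )
    # Step 3: translate the shorthand answer back into plain language.
    return call_model(
        "Translate this clinical note into plain language for a patient.",
        expert_answer,
    )
```

The injection also makes it trivial to test the plumbing with a fake model before wiring up a real API.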



