Hacker News

The first part of this, where you told it to ask YOU questions, rather than laboriously building prompts and context yourself, was the magic ticket for me. And I doubt I would have stumbled on that sorta inverse logic on my own. Really great write-up!


This is the key to a lot of my workflows as well. I'll usually tack some form of "ask me up to 5 questions to improve your understanding of what I'm trying to do here" onto the end of my initial messages. Over time I've noticed patterns in the information I tend to leave out, which has helped me improve my initial prompts, plus it often gets me thinking about aspects I hadn't considered yet.
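The habit above can be reduced to a tiny helper, a minimal sketch (the function name and suffix wording are illustrative, not any particular tool's API):

```python
# Hypothetical helper: append a clarifying-questions request to an initial prompt.
CLARIFY_SUFFIX = (
    "Ask me up to {n} questions to improve your understanding "
    "of what I'm trying to do here."
)

def with_clarifying_questions(prompt: str, n: int = 5) -> str:
    """Return the prompt with the clarifying-questions suffix appended."""
    return f"{prompt.rstrip()}\n\n{CLARIFY_SUFFIX.format(n=n)}"

print(with_clarifying_questions("Help me design a caching layer for our API."))
```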


Frankly, getting used to doing this may help our communication with other engineers as well.


promo from L5->L7 confirmed.


The big question is, which level will be replaced by GPT first?


Indeed!

The example prompts are useful. They not only reduced the activation energy required for me to start installing this habit in my personal workflows, but also inspired the notion that I can build a library of good prompts and easily implement them by turning them into TextExpander snippets.
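The prompt-library idea could be sketched as a small lookup of named snippets, analogous to TextExpander abbreviations (all names and snippet text here are hypothetical examples, not a real tool's configuration):

```python
# Hypothetical prompt-snippet library, mimicking TextExpander-style expansion.
SNIPPETS = {
    "clarify": "Ask me up to 5 questions to improve your understanding "
               "of what I'm trying to do here.",
    "todo": "Before answering, generate a todo list of the steps you plan to take.",
}

def expand(prompt: str, *names: str) -> str:
    """Append the named snippets to a base prompt, separated by blank lines."""
    parts = [prompt.rstrip()] + [SNIPPETS[name] for name in names]
    return "\n\n".join(parts)

print(expand("Refactor this module for readability.", "clarify", "todo"))
```

Keeping the snippets in one place makes it easy to refine wording over time as you notice what information you tend to leave out.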

P.S.: Extra credit for the Insane Clown Posse reference!


That's one of the key wins with o1-pro's deep research feature. The first thing it tends to do after you send a new prompt is ask you several questions, and they tend to be good ones.

One idea I really like here is asking the model to generate a todo list.


I add something like "ask me any clarifying questions" to my initial prompts. For larger requests, it seems to start a dialogue of refinement before providing solutions.


Can confirm, this is an excellent tactic when working with LLMs!



