Hacker News
new
|
past
|
comments
|
ask
|
show
|
jobs
|
submit
login
fnordpiglet
on May 13, 2023
|
parent
|
context
|
favorite
| on:
GitHub Copilot Chat Leaked Prompt
You could have a semantic filter with an out of context LLM classifying prompts by the rule set but not interpreting them as instructions.
Consider applying for YC's Winter 2026 batch! Applications are open till Nov 10
Guidelines
|
FAQ
|
Lists
|
API
|
Security
|
Legal
|
Apply to YC
|
Contact
Search: