Never eval() AI-generated text without a sandbox.
This should be one of the most obvious principles of AGI safety, especially for a company working towards building superintelligence.
Yet, inexplicably, the OpenAI tutorial linked above includes this un-sandboxed command, with no explanation for this egregious lack of any safety precautions: `tool_query_string = eval(tool_calls[0].function.arguments)['query']`
Though the content of function.arguments is usually reasonable, it is unvalidated, arbitrary text. There are multiple documented instances of it containing text that causes parsing errors:
https://community.openai.com/t/malformed-function-calling-ar...
https://community.openai.com/t/malformed-json-in-gpt4-1106-f...
https://community.openai.com/t/ai-assistant-malformed-functi...
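For concreteness, here is a hedged illustration (the payload string is hypothetical, not taken from the cookbook): function.arguments is just a string the model produces, and if a hijacked model emitted something like the value below, eval() would execute the embedded call on the developer's machine, whereas a strict JSON parser simply rejects it.

```python
import json

# Hypothetical, illustrative payload a hijacked model could emit as
# function.arguments -- not valid JSON, but a valid Python expression.
malicious_arguments = "{'query': __import__('os').listdir('.')}"

# A strict JSON parser refuses it outright:
try:
    json.loads(malicious_arguments)
except json.JSONDecodeError as exc:
    print(f"json.loads rejected the payload: {exc}")

# eval(malicious_arguments) would instead execute the embedded call
# (here a harmless listdir, but it could just as easily delete files
# or exfiltrate credentials).
```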
Anyone running the eval()-containing code from the OpenAI cookbook could have their machine compromised by a model that gets hijacked via prompt injection or whose behavior goes astray for other reasons.
The fact that OpenAI recommends this code to developers speaks to a serious lack of care about AGI safety.
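The fix is simple: treat the arguments as what they are, a JSON string, and parse them with json.loads(), handling the malformed cases documented in the threads above. A minimal sketch, assuming the same tool_calls structure as the cookbook example:

```python
import json

def get_tool_query(tool_calls):
    """Parse tool-call arguments as JSON instead of eval()-ing model output."""
    raw_arguments = tool_calls[0].function.arguments  # unvalidated model output
    try:
        arguments = json.loads(raw_arguments)
    except json.JSONDecodeError:
        return None  # malformed arguments: refuse rather than execute
    if not isinstance(arguments, dict):
        return None  # model returned valid JSON but not an object
    query = arguments.get("query")
    return query if isinstance(query, str) else None
```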