I assume this is mostly in regard to our anomaly/error detection! Deterministic rules for flagging anomalies plus human feedback help us adapt our flagging system accordingly, so hallucinations won't directly impact flagged anomalies. The rules (patterns) we generate are on the stricter end, so they err on the side of flagging more.
However, the rules themselves aren't deterministically generated (and are therefore prone to LLM hallucinations). To address this, we currently have a simpler system that lets you mark incorrectly flagged anomalies so they can be incorporated into our generated rules. There's room to improve, and we're actively working on it: exposing our generated patterns in a human-digestible manner (so they can be corrected), introducing metrics and more data sources for context, and connecting with a codebase.
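To make the "deterministic rules + human feedback" loop concrete, here's a rough sketch, not our actual code: rules are fixed patterns applied deterministically, and marking a false positive feeds back into what gets flagged. Names like `AnomalyFlagger` and `mark_false_positive` are made up for illustration.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    pattern: re.Pattern  # pre-generated pattern; application is deterministic

@dataclass
class AnomalyFlagger:
    rules: list[Rule]
    # Human feedback: lines the user marked as incorrectly flagged.
    suppressed: set[str] = field(default_factory=set)

    def flag(self, line: str) -> list[str]:
        """Return the names of rules that deterministically match this line."""
        if line in self.suppressed:
            return []
        return [r.name for r in self.rules if r.pattern.search(line)]

    def mark_false_positive(self, line: str) -> None:
        """Record feedback; rule regeneration can also take this into account."""
        self.suppressed.add(line)

# Illustrative usage with hypothetical rules:
flagger = AnomalyFlagger(rules=[
    Rule("timeout", re.compile(r"timed out after \d+ms")),
    Rule("oom", re.compile(r"OutOfMemoryError")),
])
print(flagger.flag("request timed out after 3000ms"))  # ["timeout"]
flagger.mark_false_positive("request timed out after 3000ms")
print(flagger.flag("request timed out after 3000ms"))  # [] after feedback
```

The key point the sketch is meant to show: once a rule exists, whether a given line gets flagged is fully deterministic, and the human-feedback path only ever narrows what's flagged, it never invents new anomalies.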