Thanks, I'm having a little trouble understanding the gap between quick notes and the AI-generated full report.
I worked on safety reporting in another domain, and in that domain, I can't immediately think of how AI could fill in the gaps from notes safely. Every detail of observation, interpretation, and suggestion mattered.
Maybe what you're doing with the construction domain reports is different, or you have found the right line for what you do and don't do?
For example, are you only generating boilerplate parts that fabricate zero observation information? Say, if the note is "rebar exposed, south wall", then that becomes a sentence without anything added, and the rest of the paragraph is copied verbatim from professional standards about what this means?
Or maybe there is potential for the AI to do harm (e.g., an LLM component hallucinates a typical description of exposed rebar, that adds or subtracts some important fact), but you trust the engineer to read closely and catch that every time before they stamp (rather than "rubber-stamp").
That's a valid concern, especially for safety reports. Since we have access to a company's historical reports, we can draw from their previous language and structure when generating full-length content, but it's not exactly boilerplate (unless the users request it that way). If you hit generate on a report multiple times, the wording will vary slightly, but it doesn't extrapolate on the content.
Though it is still the user's responsibility to review the output.