In the publishing industry we call that "cooking a press release": the "news" article was written and mailed out entirely by the subject's PR department (the Mayo Clinic here), and the "journalist" just copied and pasted it. At most they'll reword a couple of paragraphs, not for fear of looking bad, but just to fit the word count of the column they're publishing under.
> where the model extracts relevant information, then links every data point back to its original source content.
I use ChatGPT. Whenever I ask it something 'real/actual' (non-dev), I ask for references in the same prompt. So when I ask it about "the battle of XYZ", I also ask for websites/sources, which I then click and check that the quote is actually from there (a quick Ctrl+F brings up the name/date/etc.).
Since I started doing this, I've gotten near-zero hallucinations. Sounds like they did the same.
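The check itself is scriptable, too. A rough sketch (the function name and URL are made up for illustration; fetch() returns raw HTML, so this is a crude substring match, and cross-origin requests will hit CORS in a browser, so Node 18+ is the easier place to run it):

    // Crude automation of the Ctrl+F step: does the quoted text
    // actually appear in the cited page?
    async function quoteAppearsIn(url, quote) {
      const res = await fetch(url);
      const html = await res.text();
      return html.includes(quote);
    }

    // e.g. quoteAppearsIn("https://example.com/battle-of-xyz", "the battle of XYZ")
    //        .then(found => console.log(found ? "verified" : "not found"));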
I was waiting so long for this to finally arrive in Firefox (and now I can't seem to unsubscribe from the Bugzilla ticket for some reason -- I guess "because Bugzilla"). However, in true FF fashion, I'm sure it'll be another 10 years before "Copy link to selection" arrives like it did for its Chrome friend, so I have an extension to tide me over :-/
TBH, I also previously just popped open dev-tools and pasted the copied text into console.log("#:~:text=" + encodeURIComponent(...)), which was annoying, for sure, but I didn't do it often enough to enrage me. I believe there's a DOM method to retrieve the selected text, which would have made that much, much easier, but I never bothered looking it up.
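(There is one: window.getSelection(). Something like this in the console builds the whole link in one go, assuming the page URL has no existing fragment worth keeping:)

    // Build a text-fragment URL from the current selection.
    const sel = window.getSelection().toString();
    console.log(location.href.split("#")[0] + "#:~:text=" + encodeURIComponent(sel));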
I have an application that does this. When the AI response comes back, code checks the citation pointers to ensure they were part of the request, and flags the response as problematic if any of them are invalid.
The idea is that, hopefully, requests that end up with invalid citations have something in common and we can make changes to minimize them.
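A minimal sketch of that validation step (the field names sourceIds and citations are placeholders, not any particular API's schema):

    // Flag responses whose citations don't point back to sources
    // that were actually part of the request.
    function flagInvalidCitations(request, response) {
      const known = new Set(request.sourceIds);
      const invalid = response.citations.filter(id => !known.has(id));
      return { problematic: invalid.length > 0, invalid };
    }

    // e.g. flagInvalidCitations(
    //        { sourceIds: ["doc-1", "doc-2"] },
    //        { citations: ["doc-2", "doc-9"] }
    //      )  // -> { problematic: true, invalid: ["doc-9"] }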
This sounds like a good technique that can be fully automated. I wonder why this isn't the default behavior or at least something you could easily request.
There was an article about Sam Altman stating that ex- and other OAI employees had called him some bad names and said he was a psychopath...
So I had GPT take on the role of an NSA cybersecurity and crypto profiler, read the thread and the article, and write a profile dossier on Altman, citing its sources...
And it produced a great list of the deep-psychology and other books it used to make its claims,
which basically were that Altman is a deep opportunist and shows certain psychopathological tendencies.
Frankly, the conclusion wasn't as interesting as how it cited the expert sources and the books it used in the analysis.
However, after this, OAI's newer models were less capable of producing this type of report, which was interesting.