The purpose seems clear to me from the explanation provided. Here's what I read between the lines.
1. Send out thousands of letters expecting some to be returned. They may be returned due to deliverability issues, or they may be returned with a reply attached or (probably less commonly) scrawled on the pages of the letter itself. Replies to letters are of course common whether they're expressly requested or not.
2. Give each letter a unique number in your database so you can cross reference the letter to the recipient information (including but not limited to the address) you have stored in your system. The letter may be returned with something else (e.g. another letter) attached so it's important to keep that information correlated.
3. Scanning the original letter is a low cost way to maintain this correlation. When the letters are returned you scan them then send them through a program you have set up to update the system accordingly. The program uses some primitive OCR and probably a checksum to automatically recognize the codes in the original letters. I can imagine this being used to automatically mark bad addresses if a letter is returned without additional context, but its main purpose is probably to route the letter - and any attachments, like other letters - to the appropriate agent.
To support a workflow not unlike the one described above, it is requested that the unique number that identifies the letter be left unobscured. This way OCR can do its job, deliverability issues can be flagged with minimal human involvement, and replies to letters can be put in front of the right person without creating too much organizational overhead.
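The checksum idea in step 3 is speculative, but as a minimal sketch of how a mailing system might generate and validate such codes (here using a hypothetical IBAN-style mod-97 check digit; the real format is unknown):

```python
def make_code(record_id: int) -> str:
    """Append a two-digit mod-97 check to a numeric record ID.

    Hypothetical scheme: shift the ID left two digits, then choose
    check digits so the full number is congruent to 1 (mod 97).
    """
    check = 98 - (record_id * 100) % 97
    return f"{record_id:08d}{check:02d}"

def is_valid(code: str) -> bool:
    """Accept a scanned code only if the mod-97 check holds,
    so single OCR misreads are rejected instead of mis-routed."""
    return code.isdigit() and int(code) % 97 == 1

code = make_code(12345)
assert is_valid(code)                      # freshly generated code passes
assert not is_valid("0001234521")          # one flipped digit is caught
```

A check like this lets the scanning program discard garbled OCR output automatically and only route codes it can trust back to a database record.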
But OP was not planning on returning the letter, so it would never be scanned.
I think the BBC could have solved this preemptively by simply making the letter say "If you are returning the letter, please do not write below this line."
Or it's a template that they use for a lot of things, many of which are intended to be returned, and nobody took the step to remove it since there is no harm in leaving it.
Also:
> Replies to letters are of course common whether they're expressly requested or not.
Or perhaps it is in hopes that some unwitting fee-dodger mails back a flyer with "Bugger all is what you'll be gettin' for license fees, ya bloody parasites!" scrawled across it. As long as the faintly-printed address information below the line is intact, de-anonymization is possible. Note how they kept asking him to send it back REGARDLESS.
> when compared against other 4b and 8b parameter models I would genuinely champion the quality of their responses
You clearly have some very specific models in mind. Even if the latest 4B and 8B models don’t move the needle on the “results you would champion” metric, this does not advance your argument that the state of the art hasn’t significantly progressed from 5 years ago.
He added a lesser option, catastrophically harming humanity, so whatever he meant by the first is immaterial (“there’s a 70% chance of a hurricane or strong winds”). Furthermore, if it wasn’t a high number chosen for dramatic effect the estimated percentage would be completely arbitrary.
No, you’re right, it was chosen because “trust me bro”.
Look, it may well be something he believes, and he’s free to prognosticate (or market) however he likes, but I see absolutely nothing to support the number outside of his own opinion.
Besides, there’s no time limit on p(doom), so it’s completely unfalsifiable (“on a long enough timescale…”), and it’s about the destruction of humanity which means it’s unprovable as well. That, in my view, makes his 70% guess a sensational statement lacking scientific merit.
No, the number is made up and the facts don’t matter so the statement can easily be reimagined as an ad lib.
> There’s a [arbitrary number] percent chance that [technology] will destroy or catastrophically harm humanity
Try these: social media, the Internet, the large hadron collider, Starlink, Neuralink, iPhones, iDrones, quantum computers, regular computers, the 2038 bug, the Y2K bug, electric cars, gasoline cars, the great firewall of China, the not so great firewalls of asbestos, mRNA technology, gain of function research, nuclear bombs, nuclear energy, paper clip manufacturers, scissors.
I’m not saying it’s true that these have a 70 percent chance of destroying or catastrophically harming humanity, but couldn’t you make the argument?
"A preponderance of the evidence" is the standard for a civil case, which is what this would be; "beyond a reasonable doubt" is the standard for a criminal case.
They absolutely can reason and plan; how do you suppose they predict the next token?
That they’re not autonomously solving complex tasks is a bit of a straw man though, and with a bit of creativity we can easily imagine them being combined with models and modalities that do provide executive function and autonomy.
Well, yes, reasoning and planning abilities exist on a spectrum, so it isn’t so much a matter of where to draw the line as a question of degree. As for LLMs, I think their reasoning and planning is some of the most powerful and human-like we’ve seen so far, even if the hidden mechanisms and constraints are different (in some cases, more limited, but in others, vastly superior).
Our brains, however, are highly modular (a "committee of idiots"), so who's to say a portion of them, even a significant one, doesn't operate on similar principles?
Can a collection of around 1.5 billion interconnected cells that predictably respond to signals in their environment using simple rules reason and plan? How about 86 billion? 36 trillion?
These are ballpark counts of the cells in a crow's brain, a human's brain, and a human body. The question is, is it the cells themselves doing the reasoning and planning, or are they just the machinery this disembodied process happens to be running on? I'd argue intelligence is a distributed phenomenon that our DNA is as much a party to as our brains.
Certainly the question of whether humans use DNA to reproduce or DNA uses humans is a matter of perspective.
Also, most of the signers in that letter probably still have OpenAI equity. They are incentivized to pump it. I'm not saying that they are doing this in bad faith. I'm just saying that the incentive is perverse in this case.
I dunno. Quite frankly I’m bored of it and don’t care any more. I suspect it’s just attrition. I have better things to do than deal with some transient technology provider’s existence.
Other than to bitch about this meta point of course :)
Really the metric is: if they died tomorrow would my life be materially impacted. Apple yes, OpenAI nope!
Anyone?