Wouldn't such a tool have to be based on easily outdated or broken blacklists of parts of the file? Complex file formats like DOC can leak data in an immense number of ways.
Here's a stupid but effective way I came up with to delete the data in files when they are too large to be stored in RAM:
Use dd on the device where the file is stored.
Files intended for eventual deletion can be stored on their own dedicated virtual block devices, or "file-backed virtual disks".
Unless things have changed, on OpenBSD these virtual block devices appear as /dev/vnd* and are configured with vnconfig(8).
To create a place to store the file(s), create an empty "backing" file with dd, associate it with a vnd using vnconfig, newfs the vnd, and mount it.
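A minimal sketch of the setup, assuming OpenBSD, a free vnd0, and an existing mount point; the backing-file path, size, and mount point are made-up examples:

```shell
# Create a 1 GB empty backing file (path and size are examples).
dd if=/dev/zero of=/var/backing bs=1m count=1024

# Associate the backing file with the vnd0 virtual disk.
vnconfig vnd0 /var/backing

# Put a filesystem on it and mount it; 'c' is the whole-disk
# partition on OpenBSD.
newfs /dev/rvnd0c
mount /dev/vnd0c /mnt/scratch
```

Files written under /mnt/scratch then live only inside /var/backing, which can be wiped as a unit later.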
To delete all the files on the mounted vnd, either umount it and then dd if=/dev/zero of=/dev/vnd{no}c, or dd if=/dev/zero of=/dev/rvnd{no}c and then umount.
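The first variant (unmount, then overwrite) might look like this; device and mount-point names continue the earlier assumptions:

```shell
# Unmount first so the filesystem is no longer in use.
umount /mnt/scratch

# Overwrite the entire virtual disk through the raw device,
# destroying the filesystem and everything on it.
dd if=/dev/zero of=/dev/rvnd0c bs=1m

# Detach the backing file from vnd0 when done.
vnconfig -u vnd0
```

Since the vnd is only as big as its backing file, the overwrite is bounded and fast compared to wiping a whole physical disk.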
One can also configure a cryptographic disk device on top of the vnd using a random throwaway passphrase.
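On reasonably recent OpenBSD this can be done with softraid crypto via bioctl(8); a rough sketch, where the RAID partition vnd0a and the attached device sd2 are assumptions (the actual sd number is printed by bioctl):

```shell
# Attach the backing file and create a RAID-type partition on it
# (disklabel -E is interactive; add an 'a' partition of type RAID).
vnconfig vnd0 /var/backing
disklabel -E vnd0

# Build an encrypted softraid volume over that partition; bioctl
# prompts for a passphrase and attaches a new disk, e.g. sd2.
bioctl -c C -l /dev/vnd0a softraid0

# Use the encrypted device like any other disk (sd2 is assumed).
newfs /dev/rsd2c
mount /dev/sd2c /mnt/scratch
```

With a random passphrase that is never recorded, simply detaching the volume (bioctl -d sd2) leaves the data unrecoverable even before any overwrite.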