It's a great idea but perhaps needs to be executed a little better. Hoping the OP reads these comments as all of us just being grumpy sysops, takes the positives from them, and works on rip3 with better system-level safety.
And for everyone else, always worth remembering that the author could have instead built: .
Instead they chose to build something, so let's not shit all over their intent.
Not only that, but the comment could have been much better: It could be an issue on GH saying "the default location is insecure, please use ~/.cache" or whatnot.
Which, until recently, lost file metadata such as extended attributes (in very old tmpfs versions, even the timestamps could be truncated).
Whenever exact file copies are desired, they must not pass through /tmp or any other place where tmpfs is mounted, unless you have checked that the tmpfs version is new enough to preserve file metadata.
(On multi-user computers, it was common to copy a file through /tmp when copying between users, because that might have been the only place where both users had access.)
Older versions of tmpfs did not support any extended attributes; later, only certain system attributes were preserved while all user attributes were dropped. Only a year or two ago was tmpfs enhanced to preserve most extended attributes, with some size constraints. Many older systems still run tmpfs versions that lose extended attributes.
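If you want to check your own system, a quick round trip like this shows whether user xattrs survive (the path and attribute name are just examples):

```bash
# Create a file with a user xattr outside tmpfs, copy it into /tmp, and see if the xattr survives
touch /var/tmp/xattr-test
setfattr -n user.demo -v hello /var/tmp/xattr-test
cp --preserve=xattr /var/tmp/xattr-test /tmp/xattr-test
getfattr -n user.demo /tmp/xattr-test   # errors out if tmpfs dropped the attribute
```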
Oh heck, good point. I forgot about extended attributes in my criticism. If you've gone through the effort of actually using SELinux, this could undo it... at least to the point of requiring 'restorecon' to reapply the context policy on deleted-then-restored files.
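For anyone who ends up there, relabeling from the loaded policy is a one-liner (path is hypothetical):

```bash
# Reapply SELinux contexts to restored files, recursively and verbosely
restorecon -Rv /path/to/restored/files
```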
And if you delete too much, the trash folder on tmpfs will fill up to half your available memory. With how terrible Linux is with swap, unintentionally deleting a lot of data while doing something memory-intensive may well cause your system to slow to a crawl, forcing you to either sit it out or reboot and lose your recycle bin.
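If you're not sure what your /tmp is doing, it's easy to check (half of RAM is the tmpfs default unless a size= mount option says otherwise):

```bash
findmnt /tmp    # shows the filesystem type and mount options (look for size=)
df -h /tmp      # shows the current limit and usage
```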
Don't forget 'systemd-oomd', which may try to help. Depending on the configuration of your distribution/desktop environment, there's a non-zero chance your whole DE gets whacked. I'm generally a fan of 'systemd', but the collection of parts requires attention: cgroups and such.
Absolutely none of this is a concern for this project, though. I just find this-then-that stuff funny
The author acknowledges that the tmpdir can be a bad default; that's why you can change it :)
> Graveyard location.
> You can see the current graveyard location by running rip graveyard. If you have $XDG_DATA_HOME environment variable set, rip will use $XDG_DATA_HOME/graveyard instead of the $TMPDIR/graveyard-$USER.
> If you want to put the graveyard somewhere else (like ~/.local/share/Trash), you have two options, in order of precedence:
> Alias rip to rip --graveyard ~/.local/share/Trash
> Set the environment variable $RIP_GRAVEYARD to ~/.local/share/Trash.
> This can be a good idea because if the graveyard is mounted on an in-memory file system (as /tmp is in Arch Linux), deleting large files can quickly fill up your RAM. It's also much slower to move files across file systems, although the delay should be minimal with an SSD.
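In shell terms, the two quoted options boil down to something like this (paths copied from the README excerpt):

```bash
# Option 1: alias rip so it always uses a graveyard on a real disk (higher precedence)
alias rip='rip --graveyard ~/.local/share/Trash'

# Option 2: set the environment variable instead (lower precedence per the README)
export RIP_GRAVEYARD="$HOME/.local/share/Trash"
```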
Kinda bad example, since per the README this tool doesn't follow the xdg-trash spec. Tools that do (such as the file managers from the big DEs) use this directory.
That's not quite how things work: file permissions don't magically go away just because files land in /tmp. This is only a problem if your file permissions are set up wrongly and implicitly rely on parent folders, etc.
But then file permissions being "wrong" or relying on the parent folder is the norm...
I wonder if it does keep ACLs associated with the file.
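Off the top of my head, something like this would answer it (alice is just a placeholder user; `rip graveyard` is quoted from the README above):

```bash
setfacl -m u:alice:r test-file
getfacl test-file    # shows a user:alice:r-- entry
rip test-file
# then run getfacl on the buried copy inside the graveyard ($(rip graveyard) prints its location)
```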
This is a good practical example for the less experienced that just because it’s written in Rust doesn’t mean it’s magically more secure. Who’s gonna write up the CVE?
Is it accurate to say that, by design, deleted files' filenames and other metadata become readable? Or at least those of top-level files (not necessarily files in subdirectories)?
I'm not sure if that's "by design" or a "bug" – I'm not really familiar with this tool. I agree that ideally it should copy the directory permission bits too, or be more restrictive about that in some other way.
It's always laudable when OSS projects get some love, but... I'm slightly put off by programs that try to be witty or funny (e.g. flags like --decompose and --seance)
It's mostly just the first. And to achieve the first, people use languages they're productive in, which is often Rust or Go, thanks to the performance and the ability to compile to a single binary. Sometimes people use Python, Ruby, or Node, and that's okay too.
I'm building a general-purpose undo that will log and let you undo things like chmods, chown/chgrps, mv's, and rm's. It will work with the recursive parameters of those utilities and shunt that out to `find` with `-exec` so it can track the undo of every individual file. It will use the xdg-trash spec for rm'ing files. I haven't pushed it up to GitHub yet, but I have test cases working locally. In particular, it will handle idempotent updates properly: if you, for example, chown a file to the same user and group, it will be recorded as a no-op, so that a later (untracked) change won't get overwritten when you undo a tracked change that would otherwise interfere with it.
It's just plain old bash, nothing fancy like Rust, but it should work.
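Roughly, the chown half looks like this (a simplified sketch with invented names; the real thing also records the no-op case described above):

```bash
# Sketch: before a recursive chown, record each file's current owner so it can be undone later
UNDO_LOG="$HOME/.undo/chown.log"
mkdir -p "$(dirname "$UNDO_LOG")"

undoable_chown() {
    local new_owner="$1" target="$2"
    # Log "old_uid:old_gid<TAB>path" for every file that find visits under the target
    find "$target" -exec sh -c '
        for f in "$@"; do
            printf "%s\t%s\n" "$(stat -c %u:%g "$f")" "$f"
        done' sh {} + >> "$UNDO_LOG"
    chown -R "$new_owner" "$target"
}

# Usage: undoable_chown alice:alice ./some-dir
```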
A small PSA if you're on Windows and, like this tool, want to focus on the "ergonomics" and "performance" of deleting files: disabling real-time protection in the Security Center makes deleting large directory trees about 50% faster and reduces CPU usage by about 80% for me. It's wholly non-obvious to me why this is the case, considering that DeleteFile doesn't even use a file handle. Perhaps it acquires one internally, and that still triggers demand scanning of the file to be deleted?
The scanner needs to scan files being deleted to catch certain kinds of malware, and Windows blocks until the file is actually scanned and deleted. Lots of file operations are like this on Windows. It makes filesystem operations seemingly easier to program and reason about, but much, much slower. I suspect the synchronisation assumptions this allows are also deeply ingrained in legacy code.
Any serious Windows app needs to spawn many threads to work around this performance issue when batch operating on lots of files.
Aliases are not expanded in shell scripts unless the script explicitly opts in. Additionally, scripts run in a non-interactive shell, so they will not load the ~/.bashrc where you probably defined the alias.
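For anyone curious, this is the opt-in being referred to (bash-specific):

```bash
#!/usr/bin/env bash
# Scripts ignore aliases unless expansion is turned on; sourcing ~/.bashrc often won't
# help either, since many bashrc files return early for non-interactive shells.
shopt -s expand_aliases
alias rm='rip'      # define the alias in the script itself
rm some-file        # now expands to: rip some-file
```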
Rust programmers are the punk rockers of the software industry. They're loud, militant, they like to draw attention to themselves, but in the end they can't really deliver anything beyond the same basic riffs.
The command line is for power users, and utilities designed for file manipulation like "rm" should operate as intended (in this case, removing files outright).
Users should understand the risks and benefits of these powerful tools, and those who do should have the freedom to use them without unnecessary interruptions. If someone requires additional safety measures, they can still use the same "rm" utility, which already supports options for added safety such as "rm -i" for interactive mode, or use "mv" (which is designed for moving/renaming files); however, imposing constant prompts or silly defaults would be antithetical to the efficiency and speed that power users expect from command-line operations.
When I use "rm", I expect my files to be removed quickly and efficiently. I believe it is important to note that using "rm" does not actually erase the file's data from the disk; it removes the directory entry for the file and deallocates the inode associated with that file. This means that the data remains on the disk until it is overwritten, making it potentially recoverable. If I want to ensure that files are truly removed, I use "srm" (secure remove). The "srm" utility not only removes the file entry but also overwrites the data on disk multiple times with random patterns which means it truly gets removed (excluding edge cases related to specific file system behaviors such as those using journaling or copy-on-write mechanisms or maintain snapshots or copies of files, and so forth).
EDIT: just tested it, it creates /tmp/graveyard-$USER with 0755.
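Which is easy to confirm, and to work around until the default changes:

```bash
ls -ld "/tmp/graveyard-$USER"     # drwxr-xr-x: other users can list your deleted filenames
chmod 700 "/tmp/graveyard-$USER"  # owner-only, as a stopgap
```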