I have found that duplicated tabs can be useful e.g. for pages where footnotes are not hyperlinked in the text. When this happens I open a duplicate tab and scroll to the bottom of the page on it.
oh, for sure, that's why the extension shows which tabs are duplicated, so I can kill the duplicates individually, and it also has a kill-all-duplicates button
>There's a certain freedom in owning your story publicly. People can't weaponize what you've already made peace with. I think that's what I'm motivated to do here.
Really nice. It also builds some credibility currency; the reputation economy is not as punitive in your case as I thought it would be.
> Why? Genuine curiosity, what's the angle for a market for sublime/zed alternatives? What are they lacking?
In my opinion, Sublime’s biggest gap is that it’s not open-source, and there aren’t many (if any) open-source alternatives that match its feature set, performance, and unique user experience; Sublime just feels especially nice. Zed comes closest, and I think it’s fantastic, but it’s VC-backed, so their focus on profitability will likely shape the user experience over time (as some users are already noticing). Every editor has its pros and cons, and preferences vary, but there’s always room for innovation: even subtle differences can add up to a significantly better user experience. With ecode, I’m aiming to deliver a polished, enjoyable experience while subtly innovating on common editor features. That said, ecode is opinionated in some ways, so it won’t suit everyone, though it’s highly customizable and configurable.
> And congrats on your project, looks interesting.
Sorry, I didn't mean to hate on Sublime; it was pointed out in another comment that the comparison didn't really match, and I sort of agree. The mental model behind it was the one-off use case of opening large files, which I have traditionally done through Sublime.
I love Sublime. I have been using it for years and it's just fantastic software. I have no problems paying for it. But since it is such an important part of my toolbox, not having the source code is a liability. What if they decide to drop support for my platform? What if they decide to shift gears into AI and enshittify the experience?
Every other piece of software in my toolbox is open-source. The scenarios I've described have happened to some of those tools, and I maintain my own forks. Currently, Sublime is the single point of failure in my toolbox.
Impossible to truly know. Writing may well have started with doodles, notes, even jokes on materials like leaves or wood that didn’t survive.
What survives are the "important" texts because you would deliberately put them on durable material. That creates a bias where early writing looks purely transactional.
Same reason we think of pyramids when we think of ancient architecture: stone lasts, wood doesn’t.
That's true, we don't know anything about markings that were made on organic materials.
We do know that art and other markings date tens of thousands of years before the first proto-writing. Writing is specifically about markings that form a language. So doodles and visual jokes (e.g. phalluses) wouldn't count. I don't know what you mean by notes, but writing notes without a language would be difficult I suspect.
But there could have been early languages that were written on organic materials. The main problem is one of bootstrapping: you need to account for how the first writing system developed at all. After that, you can continuously improve over time.
fwiw, `tar xzf foobar.tgz` = "_x_tract _z_e _f_iles!" has been burned into my brain. It's "extract the files" spoken in a Dr. Strangelove German accent
Better still, I recently discovered `dtrx` (https://github.com/dtrx-py/dtrx) and it's great if you have the ability to install it on the host. It calls the right commands and also always extracts into a subdir, so no more tar-bombs.
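Basic usage is just pointing it at the archive, something like this (from memory, and assuming the dtrx-py fork still publishes to PyPI under the plain `dtrx` name, so check its README):

pip install dtrx    # or pipx install dtrx
dtrx foobar.tgz     # calls the right extractor and keeps the contents out of the cwd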
If you want to create a tar, I'm sorry but you're on your own.
I used tar/unzip for decades, I think, before moving to 7z, which handles all the formats I throw at it and has the same switch for when you want to decompress into a specific directory, instead of having to remember which one of tar and unzip uses -d and which one uses -C.
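For reference, the destination switches I could never keep straight (the 7z form is from memory, so double-check, but I believe the path is glued to -o with no space):

tar -xf archive.tar.gz -C target-dir/   # tar: -C
unzip archive.zip -d target-dir/        # unzip: -d
7z x archive.7z -otarget-dir/           # 7z: -o, no space before the path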
"also always extracts into a subdir" sounds like a nice feature though, thanks for sharing another alternative!
For anyone curious, unless you are running a 'tar' binary from the stone ages, just skip the gunzip and cat invocations. Replace .gz with .xz or another well-known file extension for a different compression format.
Examples:
tar -cf archive.tar.gz foo bar # Create archive.tar.gz from files foo and bar.
tar -tvf archive.tar.gz # List all files in archive.tar.gz verbosely.
tar -xf archive.tar.gz # Extract all files from archive.tar.gz
I tried it to check before making the comment. In Ubuntu 25.04 it does not automatically enable compression based on the filename. The automatic detection when extracting is based on file contents, not name.
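So when creating an archive you have to ask for compression yourself, either with the specific flag or with -a/--auto-compress, which does go by the file name. A quick sketch (flag spellings from the GNU tar I have at hand, so verify against your man page):

tar -czf archive.tar.gz foo bar   # -z: force gzip
tar -caf archive.tar.xz foo bar   # -a/--auto-compress: pick the compressor from the suffix
tar -cf archive.tar.gz foo bar    # neither: you get an uncompressed tar despite the name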
-l, --check-links
(c and r modes only) Issue a warning message unless all links to each file are archived.
And you don't need to uncompress separately. tar will detect the correct compression algorithm and decompress on its own. No need for that gunzip intermediate step.
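In other words, where you might once have written the first line, the second does the same thing on any tar from the last couple of decades:

gunzip -c archive.tar.gz | tar -xf -   # the old two-step
tar -xf archive.tar.gz                 # tar sniffs the compression from the file contents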
What value does tar add over plain old zip? That's what annoys me about .tar files full of .gzs or .zips (or vice versa) -- why do people nest container formats for no reason at all?
I don't use tape, so I don't need a tape archive format.
A tar of gzip or zip files doesn't make sense. But gzipping or zipping a tar does.
Gzip only compresses a single file, so .tar.gz lets you bundle multiple files.
You can do the same thing with zip, of course, but...
Zip compresses individual files separately in the container, ignoring redundancies between files. But .tar.gz (and .tar.zip, though I've rarely seen that combination) bundles the files together and then compresses them, so can get better compression than .zip alone.
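If you want to see that effect for yourself, here's a rough sketch (copies of one file, so the cross-file redundancy is extreme; exact numbers will vary):

for i in $(seq 1 100); do cp /etc/services "file$i.txt"; done
zip -q files.zip file*.txt        # zip: each copy compressed independently
tar -czf files.tar.gz file*.txt   # tar.gz: one stream, so the repetition compresses away
ls -l files.zip files.tar.gz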
The zip directory itself is uncompressed, and if you have lots of small files with similar names, zipping the zip makes a huge difference. IIRC in the HVSC (C64 SID music archive), the outer zip used to save another 30%.
Plain old zip is tricky to parse correctly. If you search for them, you can probably find about a dozen rants about all the problems of working with ZIP files.
The problem is it's very non-obvious and thus is unnecessarily hard to learn. Yes, once you learn the incantations they will serve you forever. But sit a newbie down in front of a shell and ask them to extract a file, and they struggle because the interface is unnecessarily hard to learn.
And why is -v the short option for --invert-match in grep, when that's usually --verbose or --version in lots of other places? These idiosyncrasies are hardly unique to tar.
and here is an example from its Wikipedia page, under the "Operation and archive format" section, under the Copy subsection:
Copy
Cpio supports a third type of operation which copies files. It is initiated with the pass-through option flag (p). This mode combines the copy-out and copy-in steps without actually creating any file archive. In this mode, cpio reads path names on standard input like the copy-out operation, but instead of creating an archive, it recreates the directories and files at a different location in the file system, as specified by the path given as a command line argument.
This example copies the directory tree starting at the current directory to another path new-path in the file system, preserving files modification times (flag m), creating directories as needed (d), replacing any existing files unconditionally (u), while producing a progress listing on standard output (v):
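The command being described looks roughly like this (reconstructed from the flags listed above, so treat it as a sketch rather than the exact Wikipedia example):

find . -depth -print | cpio -pdumv new-path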
I think it's the fact that it requires a pipe to work, and that you add files by feeding stdin, that throws me for a loop.
I also use it very infrequently compared to tar -- mostly in conjunction with swupdate. I've also run into file size limits, but that's not really a function of the command line interface to the tool.