
Good luck! It can positively impact the whole city, by a lot.


I'm building a tab manager extension for Chrome that also kills duplicate tabs.

Why?

The one I used died (Manifest V2 only, and never updated). And I wanted to test one-shotting it.

Incredibly it worked!


I have found that duplicate tabs can be useful, e.g. for pages where footnotes are not hyperlinked in the text. When that happens I open a duplicate tab and scroll to the bottom of the page in it.


Oh, for sure. That's why the extension shows which tabs are duplicated and lets me kill duplicates individually, but it also has a kill-all-duplicates button.


Well, I called it out 5 months ago. It wasn't viable. :(

https://news.ycombinator.com/item?id=44117601


>There's a certain freedom in owning your story publicly. People can't weaponize what you've already made peace with. I think that's what I'm motivated to do here.

Really nice. It also builds some credibility currency; the reputation economy is not as punitive in your case as I thought it would be.


I'm building my own version of Circle for the community I run. It was a perfect test to try out all the CLI agents; so far Amp is my favorite.


Why? Genuine curiosity: what's the market angle for Sublime/Zed alternatives? What are they lacking?

And congrats on your project, looks interesting.


> Why? Genuine curiosity: what's the market angle for Sublime/Zed alternatives? What are they lacking?

In my opinion, Sublime’s biggest gap is that it’s not open-source, and there aren’t many (if any) open-source alternatives that match its feature set, performance, and unique user experience; Sublime just feels especially nice. Zed comes closest, and I think it’s fantastic, but it’s VC-backed, so their focus on profitability will likely shape the user experience over time (as some users are already noticing). Every editor has its pros and cons, and preferences vary, but there’s always room for innovation. Even subtle differences can add up to a significantly better user experience. With ecode, I’m aiming to deliver a polished, enjoyable experience while subtly innovating on common editor features. That said, ecode is opinionated in some ways, so it won’t suit everyone, though it’s highly customizable and configurable.

> And congrats on your project, looks interesting.

Thanks! =)


> Sublime’s biggest gap is that it’s not open-source

But yours is also not open-source


It's in the main project repo, since ecode is part of a much bigger project: https://github.com/SpartanJ/eepp/

Here's the explanation: https://github.com/SpartanJ/ecode/?tab=readme-ov-file#source...


It is, or at least the source is present. Check the readme; he has the source in the GUI repo.


It is, the link is in the readme.


Question out of curiosity: why does Sublime need an alternative? As far as I know it's still maintained?


Sorry, I didn't mean to hate on Sublime. It was pointed out in another comment that the comparison didn't really match, and I sort of agree. The mental model that brought that up initially was the one-off use case of opening large files, which I have traditionally done through Sublime.


Sublime is great but falling behind. LSP support being a janky plugin instead of first party is a great example.


I don't think they mean "replacement" but rather "the sublime of ai editors"


I love Sublime. I have been using it for years and it's just fantastic software. I have no problems paying for it. But since it is such an important part of my toolbox, not having the source code is a liability. What if they decide to drop support for my platform? What if they decide to shift gears into AI and enshittify the experience?

Every other piece of software in my toolbox is open-source. The scenarios I've described happened to some of those tools, and I maintain my own forks. Currently, Sublime is the single point of failure in my toolbox.

I would buy a source code license if I could.


> The scenarios I've described happened to some of those tools, and I maintain my own forks.

Which ones?


Impossible to truly know. Writing may well have started with doodles, notes, even jokes on materials like leaves or wood that didn’t survive.

What survives are the "important" texts because you would deliberately put them on durable material. That creates a bias where early writing looks purely transactional.

Same reason we think of pyramids when we think of ancient architecture: stone lasts, wood doesn’t.


That's true; we don't know anything about markings that were made on organic materials.

We do know that art and other markings predate the first proto-writing by tens of thousands of years. Writing is specifically about markings that form a language, so doodles and visual jokes (e.g. phalluses) wouldn't count. I don't know what you mean by notes, but writing notes without a language would be difficult, I suspect.

But there could have been early languages that were written on organic materials. The main problem is a bootstrapping one: you need to account for how the first one developed at all. After that you can continuously improve over time.


Exactly, it's not a good theory, but it's the best one we have.

We just need to keep that in mind and not word it as if it were a fact that writing started with accounting.


Cheers for one more release, hope it gets attention and the necessary funding.


nope, that would be handling tar balls

ffmpeg right after


Tough crowd.

fwiw, `tar xzf foobar.tgz` = "_x_tract _z_e _f_iles!" has been burned into my brain. It's "extract the files" spoken in a Dr. Strangelove German accent

Better still, I recently discovered `dtrx` (https://github.com/dtrx-py/dtrx) and it's great if you have the ability to install it on the host. It calls the right commands and also always extracts into a subdir, so no more tar-bombs.
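
For reference, a minimal dtrx session might look like this (a sketch assuming the dtrx-py fork from PyPI; exact behavior may vary by version):

  pip install --user dtrx   # or: pipx install dtrx
  dtrx foobar.tgz           # picks the right extractor and unpacks into its own subdir

It figures out the format itself, so there are no flags to remember for the common case.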

If you want to create a tar, I'm sorry but you're on your own.


I used tar/unzip for decades, I think, before moving to 7z, which handles all the formats I throw at it and has the same switch for decompressing into a specific directory, instead of having to remember which one of tar and unzip uses -d and which one uses -C.

"also always extracts into a subdir" sounds like a nice feature though, thanks for sharing another alternative!


> tar xzf foobar.tgz

You don't need the z, as xf will detect which compression was used, if any.

Creating is no harder, just use c for create instead, and specify z for gzip compression:

  tar czf archive.tar.gz [filename(s)]
Same with listing contents, with t for tell:

  tar tf archive.tar.gz


Personally I never understood the problem with tar balls.

The only options you ever need are tar -x, tar -c (x for extract and c for create). tar -l if you wanna list, l for list.

That's really it, -v for verbose just like every other tool if you wish.

Examples:

  tar -c project | gzip > backup.tar.gz
  cat backup.tar.gz | gunzip | tar -l
  cat backup.tar.gz | gunzip | tar -x
You never need anything else for the 99% case.


For anyone curious, unless you are running a 'tar' binary from the stone ages, just skip the gunzip and cat invocations. Replace .gz with .xz or another well-known file extension for a different compression.

  Examples:
    tar -cf archive.tar.gz foo bar  # Create archive.tar.gz from files foo and bar.
    tar -tvf archive.tar.gz         # List all files in archive.tar.gz verbosely.
    tar -xf archive.tar.gz          # Extract all files from archive.tar.gz


> tar -cf archive.tar.gz foo bar

This will create an uncompressed .tar with the wrong name. You need a z option to specify gzip.


Apparently this is now automatically determined by the file name, but I still habitually add the flag. 30 years of muscle memory is hard to break!


I tried it to check before making the comment. In Ubuntu 25.04 it does not automatically enable compression based on the filename. The automatic detection when extracting is based on file contents, not name.


If you add a for auto, it will choose the right compression based on the file name:

  tar -caf foo.tar.xz foo

will produce an xz-compressed tarball.


> tar -l if you wanna list, l for list.

Surely you mean -t if you wanna list, t for lisT.

l is for check-Links.

     -l, --check-links
             (c and r modes only) Issue a warning message unless all links to each file are archived.
And you don't need to uncompress separately. tar will detect the correct compression algorithm and decompress on its own. No need for that gunzip intermediate step.
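
So the earlier pipeline collapses to single commands (a sketch with a reasonably modern GNU tar or bsdtar, which sniff the compression from the file contents):

  tar -czf backup.tar.gz project   # create
  tar -tf backup.tar.gz            # list (t, not l)
  tar -xf backup.tar.gz            # extract, gzip detected automatically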


> -l

Whoops, lol.

> on its own

Yes... I'm aware, but that's more options, and unnecessary ones too; just compose tools.


That's the thing. It’s not more options. During extraction it picks the right algorithm automatically, without you needing to pass another option.


Yeah I never really understood why people complain about tar; 99% of what you need from it is just `tar -xvf blah.tar.gz`.


What value does tar add over plain old zip? That's what annoys me about .tar files full of .gzs or .zips (or vice versa) -- why do people nest container formats for no reason at all?

I don't use tape, so I don't need a tape archive format.


A tar of gzip or zip files doesn't make sense. But gzipping or zipping a tar does.

Gzip only compresses a single file, so .tar.gz lets you bundle multiple files. You can do the same thing with zip, of course, but...

Zip compresses individual files separately in the container, ignoring redundancies between files. But .tar.gz (and .tar.zip, though I've rarely seen that combination) bundles the files together and then compresses them, so it can get better compression than .zip alone.
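
An easy way to see the effect on your own data (just a sketch; how big the gap is depends on how much the files have in common):

  tar -czf data.tar.gz data/    # one gzip stream over the whole bundle
  zip -qr data.zip data/        # each file deflated on its own
  ls -l data.tar.gz data.zip    # compare the sizes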


The zip directory itself is uncompressed, and if you have lots of small files with similar names, zipping the zip makes a huge difference. IIRC in the HVSC (C64 SID music archive), the outer zip used to save another 30%.


zip doesn't retain file ownership or permissions.


I think the Mac version may?


Good point. And if I remember right, tar allows longer paths than zip.


Plain old zip is tricky to parse correctly. If you search for them, you can probably find about a dozen rants about all the problems of working with ZIP files.


You forgot the -z (or -a with a recent gnutar).


It’s no longer needed. You can leave it out and it auto-detects the file format.


Except it's tar -t to list, not -l


Whoops, lol. Well that's unfortunate.


    gzip -dc backup.tar.gz | tar -x
You can skip a step in your pipeline.


The problem is it's very non-obvious and thus is unnecessarily hard to learn. Yes, once you learn the incantations they will serve you forever. But sit a newbie down in front of a shell and ask them to extract a file, and they struggle because the interface is unnecessarily hard to learn.


It's very similar to every other CLI program; I really don't understand what kind of usability issue you're implying is unique to tar.


As has been clearly demonstrated in this very thread, why is "Please list what files are in this archive" the option "-t"?

Principle of least surprise and all that.


And why is -v the short option for --invert-match in grep, when that's usually --verbose or --version in lots of other places? These idiosyncrasies are hardly unique to tar.


it was just a reference to xkcd#1168

I wasn't expecting the downvotes for an xkcd reference


I have so much of tar memorized. cpio is super funky to me, though.


cpio is not that hard.

A common use case is:

  $ cpio -pdumv args 
See:

  $ man cpio 
and here is an example from its Wikipedia page, under the "Operation and archive format" section, under the Copy subsection:

Copy

Cpio supports a third type of operation which copies files. It is initiated with the pass-through option flag (p). This mode combines the copy-out and copy-in steps without actually creating any file archive. In this mode, cpio reads path names on standard input like the copy-out operation, but instead of creating an archive, it recreates the directories and files at a different location in the file system, as specified by the path given as a command line argument.

This example copies the directory tree starting at the current directory to another path new-path in the file system, preserving files modification times (flag m), creating directories as needed (d), replacing any existing files unconditionally (u), while producing a progress listing on standard output (v):

$ find . -depth -print | cpio -p -dumv new-path


I think it's the fact that it requires a pipe to work, and that you add files by feeding stdin, that throws me for a loop.

I also use it very infrequently compared to tar -- mostly in conjunction with swupdate. I've also run into file size limits, but that's not really a function of the command line interface to the tool.


nope, it's using `find`.

