
I also wonder how many of the numerous AI proponents in HN comments are subject to the same effect. Unless they are truly measuring their own performance, is AI really making them more productive?


How would you even measure your own performance? You can't exactly go and redo something while forgetting everything you learned along the way the first time.


You could do the same as the study: flip a coin to decide whether to use AI, then write down the task you just did, the time you thought the task took you, and the actual clock time. Repeat and self-evaluate.
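
If you wanted to try that, here's a minimal sketch in Python of what the logging could look like; the CSV filename and field names are just illustrative assumptions, not anything from the study.

    import csv, os, random, time

    LOG = "ai_timing_log.csv"  # assumed filename; use whatever you like

    def start_task(description):
        use_ai = random.choice([True, False])  # the coin flip
        print(f"Task: {description} -> use AI: {use_ai}")
        return {"task": description, "use_ai": use_ai, "start": time.time()}

    def finish_task(entry, estimated_minutes):
        actual_minutes = (time.time() - entry["start"]) / 60
        new_file = not os.path.exists(LOG)
        with open(LOG, "a", newline="") as f:
            w = csv.writer(f)
            if new_file:
                w.writerow(["task", "use_ai", "estimated_min", "actual_min"])
            w.writerow([entry["task"], entry["use_ai"],
                        round(estimated_minutes, 1), round(actual_minutes, 1)])

Call start_task before you begin, finish_task with your own estimate when you're done, and compare the columns after a few weeks.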


Sample size of 16 is already hard enough to draw conclusions from. Sample size of 1 is even worse.


A sample of 16 is plenty if the effect is big enough.

It's also not a sample size of 1; it's a sample size of however many tasks you do, because if you're trying to discern how AI impacts you, you don't care about measuring the effect it has on anyone but yourself.


It's the most representative sample if you're interested in your own performance, though. I really don't care if other people are more productive with AI; if I'm the outlier that's not, then I'd want to know.


Similarly, Mistabishi's Printer Jam from 2009 sounds much as you might expect from the title.

https://www.youtube.com/watch?v=ALNV4ZW33fA


Someone please mention The User and their Symphonies for Dot Matrix Printers (1999 and later): https://www.youtube.com/watch?v=Grb_EIDSVnY


It seems this is because the string "autoregressive prior" should appear on the right-hand side as well, but in the second image it's hidden from view, and this has confused it into placing the string on the left-hand side instead?

It also misses the arrow between "[diffusion]" and "pixels" in the first image.


This smells like an advert. Over the last year I've spent less money on energy by being on Octopus Tracker (which requires a smart meter) than I would have on any fixed tariff.


Many also have not, and are switching away from Agile.

It hit £1 a couple of weeks ago. Ouch.

It's cheap if you use the majority of your energy in the wee hours, right?


Since Firefox 129, tabs show a preview on hover, which is a nice improvement, but it's not actually mentioned on that page.


I do not have that on v130. Got any links to release notes about that one?


https://www.mozilla.org/en-US/firefox/129.0/releasenotes/#no...

It only does it for tabs that you've viewed previously (i.e. not restored from a previous session), and not the current tab.


FF 130 does not do that for me by default. I had to go to about:config and set browser.tabs.hoverPreview.enabled to true.


Unfortunately it seems this doesn't work/integrate with TST.


Yeah same, this was off by default until I just toggled it on. Thanks for the tip!


If this is true, does it mean we no longer need fingerprint-scanning hardware, and can instead just use a microphone and software to unlock a device when the user runs their finger over any convenient surface?


Given the full fingerprint reconstruction rates, it would likely be a while until the tech is reliable enough to do that, if ever.


I used this many, many years ago but switched to Borg[0] about five years ago. Duplicity required full backups with incremental deltas, which meant my backups ended up taking too long and using too much disk space. Borg lets you prune older backups at will; thanks to chunk tracking and deduplication there is no such thing as an incremental backup.

[0] https://www.borgbackup.org/
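
For anyone who hasn't used it, a rough sketch of that create-then-prune cycle (the repo path, source directory and retention numbers are made-up examples, not a recommendation):

    import subprocess

    REPO = "/backups/borg-repo"  # assumed repository path (local or user@host:path)

    # Each archive is deduplicated against everything already in the repo,
    # so there is no separate "full" vs "incremental" backup.
    subprocess.run(["borg", "create", "--stats",
                    f"{REPO}::{{hostname}}-{{now}}", "/home"], check=True)

    # Old archives can be dropped at will; chunks still referenced elsewhere are kept.
    subprocess.run(["borg", "prune", "--keep-daily", "7", "--keep-weekly", "4",
                    "--keep-monthly", "6", REPO], check=True)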


I did the same. I had some weird path issues with Duplicity.

Borg is now my holy backup grail. I wish I could back up incrementally to AWS Glacier storage, but that's just me sounding like an ungrateful beggar. I'm incredibly grateful and happy with Borg!


Agree completely... used duplicity many years ago, but switched to Borg and never looked back. Currently doing Borg backups of quite a lot of systems, many every 6 hours, and some, like my main shell host, every 2 hours.

It's quick, tiny and easy... and restores are the easiest: just mount the backup, browse the snapshot, and copy files where needed.


After Borg, I switched to Restic:

https://restic.net/

AFAIK, the only difference is that Restic doesn't require Restic to be installed on the remote server, so you can efficiently back up to things like S3 or SFTP. Other than that, both are fantastic.


Technically Borg doesn't require it either; you can back up to a local directory and then use `rclone` to upload the repo wherever.

Not practical for huge backups, but it works for me as I'm backing up my machines' configuration and code directories only. ~60MB, and that includes a lot of code and some data (SQL, JSON, et al.)
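
Something like the following sketch, where the repo path and the rclone remote name are assumptions:

    import os, subprocess

    LOCAL_REPO = "/backups/borg-repo"      # assumed local repository
    REMOTE = "myremote:backups/borg-repo"  # assumed rclone remote (S3, B2, Drive, ...)
    SOURCES = [os.path.expanduser(p) for p in ("~/code", "~/.config")]

    subprocess.run(["borg", "create", f"{LOCAL_REPO}::{{now}}", *SOURCES], check=True)
    # Mirror the repo directory to the remote; nothing needs to run on the far end.
    subprocess.run(["rclone", "sync", LOCAL_REPO, REMOTE], check=True)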


Sure, but rsync requires the server to support rsync.


Does rclone require rsync? Haven't checked.


Oh sorry, brain fart, I thought you said rsync. I think rclone uploads everything if you don't have rclone on the server, but I'm not sure.


Pretty sure rclone uploads just fine without server dependencies, yeah. I never installed anything special on my home NAS and it happily accepts uploads with rclone.


It will upload fine, but it can't upload only the changed parts of the file without server support.


That I can't really speak to. I know it does not re-upload identical files at least (it uses timestamps), but I never really checked whether it only uploads file diffs.

Do you have a direct link I can look at?


Nothing offhand, but basically it can't know what's on the server without reading it all, and if it can't do that locally, it'll have to do it remotely. At that point, might as well re-upload the whole thing.

Its front page hints at this, but there must be details somewhere.


I think you’re misunderstanding something. There’s no need, and even no possibility, to have “rclone support” on the server, and also no need to “read it all”. rclone uses the features of whatever storage backend you’re using; if you back up to S3, it uses the content hashes, tags, and timestamps that it gets from bucket List requests, which is the same way that Restic works.

Borg does have the option to run both a client-side and a server-side process if you’re backing up to a remote server over SSH, but it’s entirely optional.
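
For example, this compares local and remote purely via the backend's listing metadata, with nothing running on the far side (remote name and paths are placeholders):

    import subprocess

    # rclone check compares the sizes and hashes reported by the backend's listing API.
    subprocess.run(["rclone", "check", "/backups/borg-repo",
                    "myremote:backups/borg-repo"], check=True)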


Ah, you're right, I got confused between rsync and rclone's server-side transfers.


Not to make this an endless thread, but I have been wondering what the most rsync-friendly backup on-disk layout is. I have found Borg to have fewer files and directories, which I would naively think translates to fewer checks (and the files are not huge, either). I have tried Kopia and Bupstash as well, but they both produce a lot of files and directories, much more than Borg. So I think Borg wins at this, but I haven't checked Restic and the various Duplic[ati|icity|whatever-else] programs in a while (the last time was at least a year ago).


I think the advantage of restic is that you don't need to rsync afterwards; it handles all that for you. Combined with its FUSE backup decryption (it mounts the remote backup as a local filesystem you can restore files from), it's very set-and-forget.
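
For instance, a restore via the FUSE mount can be as simple as the sketch below; the repository URL, mountpoint and file path are placeholders.

    import os, shutil, subprocess, time

    REPO = "s3:s3.amazonaws.com/my-bucket/restic"  # assumed repository location
    MOUNT = "/mnt/restic"                          # assumed (empty) mountpoint

    proc = subprocess.Popen(["restic", "-r", REPO, "mount", MOUNT])
    while not os.path.isdir(os.path.join(MOUNT, "snapshots")):  # wait for the FUSE mount
        time.sleep(1)

    # Snapshots now appear as ordinary directories; copy files straight out of one.
    shutil.copy(os.path.join(MOUNT, "snapshots", "latest", "home", "me", "notes.txt"),
                "/home/me/notes.txt")
    proc.terminate()  # or unmount with `fusermount -u /mnt/restic`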


My problem with Restic was that it did not recognize sub-second timestamps on files. I made test scripts that exercised it (creating files and directories in a hypothetical backup source, and also changing the files), but Restic insisted nothing had changed because the changes were happening too fast.

I modified the scripts to do `sleep 1` between each change, but it left a sour taste and I never gave Restic a fair chance. I see a good amount of praise in this thread, so I'll definitely revisit it when I get a little free time and energy.

Because yeah, it's not expected that you'll make a second backup snapshot <1s after the first one. :D


I'm going to say that was a bit of a niche usage :P


I tried Restic again, but its repo size is 2x that of Borg, which allows you to fine-tune compression where Restic doesn't.

So I'll keep an eye on Rustic instead (it is much faster on some hot paths + allows you to specify the base path of the backup; long story, but I need that feature a lot because I also make copies of my stuff on network disks, and when you back up from there you want to rewrite the path inside the backup snapshot).

Rustic compresses equivalently to Borg, which is not a surprise because both use zstd at the same compression level.


For the "replicating borg repos" use case this doesn't matter, because files are only written once and never modified afterwards.


Works a treat with borgmatic: https://torsion.org/borgmatic


I have an overnight cron job that flattens my duplicity backups from the many incremental backups made over the course of one day into a single full backup, which becomes the new baseline. Subsequent backups over the course of the day are then incremental against that. So I always have a full backup for each individual day with only a dozen or so incremental backups tacked onto it.
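
One way to approximate that, as a sketch (paths and retention are made up; once a full exists, plain duplicity runs are incremental against it):

    import subprocess

    SRC = "/home/me"                     # assumed source directory
    DEST = "file:///backups/duplicity"   # assumed backup target URL

    # Overnight: take a fresh full backup, then drop the previous chain.
    subprocess.run(["duplicity", "full", SRC, DEST], check=True)
    subprocess.run(["duplicity", "remove-all-but-n-full", "1", "--force", DEST], check=True)

    # During the day, repeated runs are incremental against that morning's full:
    # subprocess.run(["duplicity", SRC, DEST], check=True)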

That said, I will give Borg a look.


Same for me. Also, on macOS duplicity was consuming much more CPU than Borg and was causing my fan to spin loudly. Eventually I moved to Time Machine, but I still consider Borg a very good option.


Also, duplicity lets you automatically delete backups older than a certain amount of time; what is the difference?


This site has seemingly solved both of those problems. So isn't HN the modern StumbleUpon, albeit with more focus on technical topics?


SU was always one of the many aggregators in the addth.is toolbar, alongside places like Reddit. They do both serve the same function of making the Internet more discoverable - noting that early Reddit didn't have comments.


I love it when kids' shows do things like this. The BBC show Hey Duggee has an episode called The River Badge which is an homage to Apocalypse Now, recreating a number of similar scenes and pieces of dialogue. Other episodes have references to other movies that kids won't know but that are great fun for adults when they notice them.


> I wonder why they can't reject the flight plan for an aircraft that's already in the air?

You need to know about everything that may be in the air; if you skip the details of a flight that may be airborne, you risk routing another flight through the same space, with the possibility of a collision. So if you can't do that safely, the only option is to shut down: existing flights can continue, but no new flights can be routed until the anomaly is resolved.

