Hacker News | audiodude's comments

Reminds me of when Toy Story 2 was deleted and they found the backups on the laptop of an artist who was working from home.


>artist

Technically, it was the supervising technical director.

The only reason this happened (I don't think "working from home" was very common in 1999) was because she just had a baby! I love this story because it feels like good karma – management providing special accommodations for a new mom saves the show.


It was an SGI workstation that they lugged home, but yeah, pretty much that's how they recovered most of the files. In the end they barely used the material.


Behind the scenes: https://gist.github.com/audiodude/8eb6934539f3f54a2bd1ee6d83...

TODO:

1. Automate registering the domain

2. Automate setting up the DNS

3. Better automate deploying to Netlify (Claude kind of choked on this one)

I also used my trick of immediately changing the Namecheap NS records to AWS Route 53, to avoid getting a parking page and having to wait an hour for the TTL to expire.

The most frustrating part was waiting for Netlify to figure out the Let's Encrypt cert. It takes ages and I have to manually recheck. Fix this, Netlify! (If anyone from Netlify is reading this: when I click "Verify DNS" it says "Success!", but if I reload the page it asks me to verify again.)

Time spent: ~30 minutes

Total cost:

- Claude: $0.56

- Domain: 12 bucks of course ;)

- Hosting (Netlify): $0.00


So....just rot then?


more like a funny sort of dependency drift


Isn't this what TFA suggests?


All day in ChromeOS at work it's either accidentally opening the dev tools in Chrome (you Ctrl-Shift-C'd when you should have just Ctrl-C'd) or killing my server in the terminal (opposite direction). All. Day.


Same in Firefox. Absolutely infuriating, and trivial for them to fix/disable with an about:config setting, even if you absolutely had to keep Ctrl-Shift-C for the masses. I suppose it only affects, oh, everyone in the world who uses both dev tools and a terminal. Which is almost a complete intersection with the people who use dev tools! Or have they always been expecting AI to take over web design entirely, so you no longer need terminals or something?

You can inject JS into every single page to fix it via an extension, but extensions don't have the ability to fix it otherwise. I suppose you could also intercept it in an IME?


While we're talking keyboards, what about emoji? Every messaging app has a different interface for them, some like Discord and Whatsapp allow you to use :smile: shortcuts, but the enumerated names are different across apps. I've occasionally gotten a dedicated OS emoji picker to show up on my Mac, but I don't know how.

On Ubuntu Cinnamon, I managed to create keyboard shortcuts for the 8 or so emoji I use the most by binding something called a "compose" key and modifying a .XCompose file, but it still took other config file gymnastics to make it persist between X sessions.
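For anyone curious, bindings like the ones described above look roughly like this in ~/.XCompose (the key sequences and emoji here are illustrative, not the parent commenter's actual config):

```
include "%L"

# Compose key, then "s m" -> a smile; compose, then "f i" -> fire
<Multi_key> <s> <m> : "😄"
<Multi_key> <f> <i> : "🔥"
```

The `include "%L"` line pulls in the locale's default compose sequences so your custom ones extend rather than replace them.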


`Meta + .` opens an emoji picker on KDE. And you can type unicode with `Control + Shift + u`.

I've been meaning to look into this, but what I'd really love is a composer which can insert any text, emoji, or Unicode character based on whatever alias I give it. I'd probably leverage more of Unicode if I could hit a shortcut and type wd to summon a wave dash (〜).


Windows does a similar thing with Win+. (I forget whether Win maps to the Meta key on Linuxes.)


~/.XCompose might be what you are looking for.

Support varies - GTK needs an environment variable to use it and Qt (since version 5) only uses the first code point of the result. No idea about Wayland support or alternatives.


Control + Command + Space


You can also bind it to the fn key on a MacBook.


Crazy, thanks!


That's just a regular keyboard feature for CJK languages. "today" -> "2025/08/15 Fri", "cry" -> ":crying_face:", etc. It's just a small extension to the regular "keycodes to full UTF-8 gibberish" process that input method editors handle.

There's nothing wrong if someone makes one for en, fr, de, in my opinion. Correction-plus-conversion like "hors douvray" to "hors-d'œuvre" in one keystroke is going to be useful.
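The conversion step being described is, at its core, a lookup from what you typed to a replacement. A toy sketch in Python (the table entries are made up; a real IME consults a much larger, context-aware dictionary):

```python
from datetime import date

# Illustrative abbreviation table, not any real IME's data.
TABLE = {
    "cry": "😢",
    "today": lambda: date.today().strftime("%Y/%m/%d %a"),
}

def convert(typed: str) -> str:
    """Replace the typed key with its expansion, if one exists."""
    hit = TABLE.get(typed, typed)
    return hit() if callable(hit) else hit

print(convert("cry"))    # → 😢
print(convert("hello"))  # no entry, passes through unchanged
```

A real input method also handles candidate selection when one key maps to several expansions, which this sketch skips entirely.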


> I've occasionally gotten a dedicated OS emoji picker to show up on my Mac, but I don't know how.

On mine it shows up when I accidentally press the Fn key, which is very inconveniently placed where every other keyboard places the Ctrl key...


On recent-ish macOS it's Fn-E (nice and mnemonic). They recently added a bunch of Fn key shortcuts, but that's probably the most useful.


To me, it makes sense that "Give me the active/main/primary account", when multiple accounts are signed in, is inherently ambiguous. Which account is the main one? You suggest Accounts.first. Is that the first account that was signed into 3 years ago? Maybe you don't want that one then. Is it the most recently signed into account?

The designer of the API decided that if you ask for "the single account" when there are multiple, that is an error condition.


Is there any intrinsic reason why Jupyter Notebooks can't be imported? You don't know which code blocks to run?


Not really; you can just run the blocks one by one, like `nbconvert --to script` does. Looks like https://pypi.org/project/importnb/ exists.
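The "run the blocks one by one" approach is just walking the notebook's JSON and exec'ing the code cells in order. A minimal sketch (the `nb` dict here is a stand-in for a real .ipynb parsed with `json.load`; real importers also handle magics, errors, and module registration):

```python
def run_notebook_cells(nb: dict) -> dict:
    """Execute the code cells of a parsed .ipynb document in order
    and return the resulting namespace. Toy version: no error
    handling, no IPython magics, no output capture."""
    ns = {}
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            exec("".join(cell["source"]), ns)
    return ns

# Minimal notebook-shaped dict for demonstration
nb = {
    "cells": [
        {"cell_type": "markdown", "source": ["# demo"]},
        {"cell_type": "code", "source": ["x = 2\n", "y = x * 21\n"]},
    ]
}
print(run_notebook_cells(nb)["y"])  # → 42
```

Markdown cells are skipped, which is essentially the ambiguity the parent comment raises: an importer has to decide that only code cells "count."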


If you, like me, have never heard of a .netrc file...

https://everything.curl.dev/usingcurl/netrc.html
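For the similarly uninitiated: it's a plain-text file in your home directory that tools like curl and ftp consult for login credentials, keyed by hostname. A minimal example (host and credentials are placeholders):

```
machine ftp.example.com
login alice
password s3cret
```

With that in place, `curl --netrc ftp://ftp.example.com/file.txt` picks up the credentials without them appearing on the command line or in shell history.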


There might be a funny dynamic with FTP: if a company is still using FTP, it's probably for something important.

(Even if it's a bad idea now, and compromise of it could result in a bad quarter or regulatory action, legacy systems and priorities happen.)


Volunteer for Kiwix here (https://kiwix.org), we do a lot of offline Wikipedia stuff. I've personally worked on MWOffliner (https://github.com/openzim/mwoffliner) which scrapes MediaWikis, primarily Wikipedia.

We have apps for basically every platform. Our PWA even supports IE 11!

You can use the WP1 tool which I'm the primary maintainer of (https://wp1.openzim.org/#/selections/user) to create "selections" which let you have your own custom version of Wikipedia, using categories that you define, WikiProjects, or even custom SPARQL queries.


May I suggest somebody outside your company review the website? It is not clear to me what you do, what the apps do, and so on. The copy is also the kind of abstract "we make the world a better place" copy. From your comment I understand you do good work, and it would be a shame for people new to your products to struggle to understand what you are doing.


I don't agree with your assessment. Did you find the 'About Us' page insufficient?


I don't think the previous comment was trying to be snarky - I can see where they're coming from.

Take my feedback with a grain of salt, as I am entirely not the target audience, but...

"Stay Connected, Always" - weird way to put it, given it's for offline situations. At this stage it sounds like it could be a 4G or portable wifi solution?

"Use our apps for offline content or the Kiwix Hotspot for reliable access." - so it's probably a desktop or mobile app, maybe a web app. What is Kiwix Hotspot, another app? It's unclear that there is hardware involved at this point, or anywhere on the home page, unless I watch the video that hints at it.

The summary in the footer was a lot clearer to me: "Kiwix is an offline reader for online content like Wikipedia, Project Gutenberg, or TED Talks. It makes knowledge available to people with no or limited internet access"

Again, not trying to complain for the sake of it, I think this is a cool project helping under-served communities, but if people can't easily understand what you do, they may not dig deeper.

If I can't tell what is being offered without much thinking or digging, the home page isn't doing as much as it could be.

Perhaps it is ticking the boxes for your target audience if you have done some testing. Great! If not, some quick user testing could help optimise the messaging to make sure what you offer is landing.


The quote from the footer sounds pretty good. That should probably be front and centre.


agreed. It hits the nail on the head.


Thank you for the feedback!

Very valid points. I’m volunteering for Kiwix and can say that they’re aware of the issues with the website. There are plans to renew it and refresh the content/copy, as well as do some user testing.

The core team is small and doing its best to prioritise and focus on what brings the best value to the users.


I love the product and mission. I'd be happy to volunteer some time to help with the messaging and content. Feel free to reach out to me on LinkedIn or X. - Jason Lankow


I honestly thought this was a self contained relay to provide internet access to remote locations...


The opening sentence of the 'About Us' page is very good. The problem is that it is on the About Us page, rather than front and centre on the home page.


It is often true that the people who made a thing are unable to adequately explain it to others, and think their own explanation works just fine. The site isn't self-explanatory.


Neat, thanks! I'm CTO of Ameelio (non-profit) and have been eyeing Kiwix for a while. Getting content to incarcerated people is a unique challenge due to the exceptional security requirements, and an offline solution like Kiwix might fit in well. Being able to narrow down categories is a huge capability for us. Thank you!


Just wanted to comment on what a great mission Ameelio seems to have! Glad you guys are helping some of the most unseen in our society. Kudos!


Feel free to reach out to us (Kiwix); we've already helped NGOs deploy our content to prisons for the exact reason you mention.


Neat, thanks! I'll definitely do that :-)


I had an offline copy of wikipedia from like five years ago, just in case. When I recently needed it I opened the kiwix app and everything was broken by some godforsaken overhaul update. I don't have an offline copy of wikipedia on my new phone anymore.


Does archive.org have a mirror of the iso?


Regarding mwoffliner: Why scrape Wikipedia when you can just download a dump?


If you want to test MediaWiki tooling, Wikipedia is a good test target, because it uses a lot of the features (unsurprisingly) compared to smaller wikis. (OTOH, the latter often have custom extensions, so it's not quite enough.)


Sure, but I understood the parent as saying that the tool primarily serves for scraping Wikipedia.


I was thinking the same. It must take much less space in database form than all the html pages.


It's also kind of bad form to scrape a huge website when there's a downloadable dump available. Save yourself, and more importantly Wikimedia, a whole lot of bandwidth and CPU cycles.


And torrenting the dumps helps distribute them to others as well.


If I'm reading this right, the last full zim archive of all of English Wikipedia is wikipedia_en_all_maxi_2024-01.zim, which is now about 16 months old. Is that right, or are there other, more recent sources?

The current US administration is actively trying to interfere with Wikipedia and censor public speech or information that is detrimental to their disinformation campaign. [1]

Do you know if there is an effort to publish more recent archives? Or do you have any advice on how outside developers could jump in to help with that project?

[1] - https://news.ycombinator.com/item?id=43799302


The Kiwix team is close on this; it's even a direct partnership with Wikipedia to work on the newer APIs and function reliably.


Do you know when this might be back in stock? https://www.prepperdisk.com/products/prepper-disk-premium-ai...


Thanks for asking, we just opened up our second preorder wave. Those will ship mid June.


Is there concern for AI-produced slop in Wikipedia? I have the 2024-01 version which may be out of date, but may also have less slop.


Look at the Wired article on fighting people rewriting the history of Germany, and the George Galloway article. Anyone can edit it, including a recently convicted ex-congressman.


[flagged]


Wikipedia is a collection of objective truth which is apparently fundamentally incompatible with "the right"'s worldview.

