
I see no way to DM on this site, so public it is.

No association with it but have used it multiple times.

https://allfamilypharmacy.com/

Mods if this is not ok, my apologies


Do this course https://github.com/henki-robotics/robotics_essentials_ros2

Totally free; you don't need to buy anything, just a computer.

I've been designing electrical hardware for robots for the last 4 years at a big corporation, and I can tell you the fun and money are in the software, but having another skill is awesome. Robotics is a place where multiple fields converge, and if you find a good team they will help you grow.

Embedded is the adjacent field after you complete the course. Maybe something like the Zephyr project.

If you want to get your feet wet with mechanical: buy an A1 mini and play with Onshape (www.onshape.com) to design your first pieces, like supports for the motors or the board, or try creating your own gripper.

As for electrical engineering, it's the field with the most pitfalls and the most expensive. A wrong voltage will release the magic smoke, and that's another $30 for a board. Tread carefully. Start with the RP2040 or the RP2350; they are cheap and well documented. This skill evolves hand in hand with embedded coding. Start small. Learn about H-bridges and brushed motors before making the jump to the big boys and FOC control with brushless motors. Get a cheap soldering iron; if you can, one that takes clones of the JBC C245 tips. It's the most versatile, you can find them on AliExpress for $45 or $50, and it will be similar to the tools you'll find in the field without breaking the bank.

Search for ROS meetups. I could point you to some depending where in the world you are.

And above all, it will be a long journey. Don't despair; go at your own pace, but don't forget the objective.


To see some contemporary "reverse oil painting on glass" works, see Jessica Frances Grégoire Lancaster's site.

[1] https://www.jfglancaster.com/


According to his website, he has:

https://shark.fish/rustlab2019


Imo, everyone should check https://terminaltrove.com/ from time to time. There, I have found easy replacements to commonly used tools:

find -> fd, time (for runtime comparisons) -> hyperfine, grep -> ripgrep, asciinema + converting to .gif -> t-rec[1], manually creating conventional commits -> koji[2], etc.

[1]https://terminaltrove.com/t-rec/ [2]https://terminaltrove.com/koji/


They faked a lot of their showcases in recent years, and their public offerings are just weird. Ever heard of https://labs.google/fx/tools/image-fx/ or https://labs.google/fx/tools/video-fx ? Because these sites are the consumer-facing video and image model UIs, and literally no normal person knows about them.

I would like to note that Linux is the only kernel that will allow you to do this! The Linux system call interface is stable and defined at the instruction set level. Linking against some system library is absolutely required on every other system.

I've written an article about this idea:

https://www.matheusmoreira.com/articles/linux-system-calls

You can get incredibly far with just this. I wrote a freestanding lisp interpreter with nothing but Linux system calls. It turned into a little framework for freestanding Linux programs. It's been incredibly fun.

Freestanding C is a much better language. A lot of legacy nonsense is in the standard library. The Linux system call interface is really nice to work with. Calling write is not that hard. It's the printf style string building and formatting that I sometimes miss.


Sounds like it's exactly the same "magic" as exercise and calorie restriction. If the body has more energy available than it needs for normal function, it'll use as much as it can and eventually that use goes to things like general inflammation, cancer, and autoimmune disease. If you use available energy to exercise, simply eat less, or disrupt your body's ability to actually use all the excess food you feed it with drugs, then it won't be able to do those bad things and will only maintain essential function.

Obviously, if you restrict too much, you starve. Animals in a state of nature seem to automatically find the right balance and eat roughly what they need. Animals placed into situations in which food is nearly costless and effectively infinite, and where they don't need to be active except to the extent they choose to be for recreation, seem to struggle. Humans suffering from diseases of civilization are one such example, but human pets and livestock seem to have the same problems. Presumably, lab rats are basically like that, too. Life in a plastic cage with no predators and food given to you directly by gods is not very similar to life in the wild.


Somewhat tangential but I've found Claude 3.5 to be ridiculously good at doing this kind of prototyping with React, and you get the preview immediately in the artifact pane to the right of the chat. Mind you I don't ever use React (I'm writing a desktop app in Rust), so I'm probably as unbiased an audience as it gets.

Here are a couple of examples of things people asked for help with and I just iterated with Claude for 5-10 minutes on (code upfront and gif demo at the bottom):

https://gist.github.com/airstrike/7ae444de207e679adca7be6faa...

https://gist.github.com/airstrike/e0e47eaab733277b537923c4d3...

I think given the sheer amount of training data on React, these models tend to really hit it out of the park when they are allowed to use it. Part of me thinks the best approach is to have it model in React first and then translate it to whatever other language/framework you need it in.


With luck, `uv tool install` [1] will solve most of the frustrations with Python programs on Linux/Mac/Windows, since uv combines the functionality of pipx and pyenv and doesn't itself depend on Python. It is a single binary that manages Python interpreters and project environments. uv is very actively developed and seems to be rapidly gaining adoption. It makes me optimistic about its future (and, correspondingly, pessimistic about the future of Python tooling that competes with it).

I already have a success story of helping a friend install ArchiveBox [2] with uv. ArchiveBox required an older version of Python than their system had. The quickest and easiest solution I could think of was:

  pipx install uv
  uv tool install --python 3.8 archivebox
You could also install uv by other means.

[1] https://docs.astral.sh/uv/concepts/tools/

[2] https://archivebox.io/


Loughla is spitting truth here.

Those of you in the rat race, when you get home after a hard day and an hour of sitting in traffic, please, just sit in your car for a minute or two. Shed all of that. Pick a tree in your front yard and name it your worry tree. Before you go in, leave your stress/worries/frustrations with it. It won't stay gone, but the intention will mean that for those first five minutes when you walk in the door you can be present and enjoy your kids, instead of bringing in a ball of intensity they won't understand.

Those memories of 'mommy's home' or 'daddy's home' shouts of joy when you go inside are pure gold for the rest of your life.


For air travel, I really like my Xreal Air glasses now that I have a newer iPhone 16pro. Just plug in the USB-C cable, and you have a virtual 60" screen in front of you which works perfectly for Netflix, etc. And they cost less than 10% of the cost of an AVP, and are not limited to 2-3 hours of battery life (they get power from the phone).

Note that if you have an older (lightning) iPhone, don't bother with these. They require a pair of dongles. Not only does that make things really awkward, but one of the dongles ends up apparently blocking HDCP, and prevents you from using anything but ... your own... downloaded content.


I’ve switched over to devbox, which is a thin, easy-to-use wrapper around Nix on macOS. I haven’t had to install brew yet. Brew was simple in theory, but I always ran into so many issues like this, or various packages polluting everything. Fingers crossed I don’t have to go back.

As someone who hates tinkering with this kinda stuff, I’m surprised how well it works so far


Always cool to see new mutex implementations and shootouts between them, but I don’t like how this one is benchmarked. Looks like a microbenchmark.

Most of us who ship fast locks use very large multithreaded programs as our primary way of testing performance. The things that make a mutex fast or slow seem to be different for complex workloads with varied critical section length, varied numbers of threads contending, and varying levels of contention.

(Source: I wrote the fast locks that WebKit uses, I’m the person who invented the ParkingLot abstraction for lock impls (now also used in Rust and Unreal Engine), and I previously did research on fast locks for Java and have a paper about that.)


This looks extremely cool. This is basically incremental view maintenance in databases, a problem that almost everybody (I think) has when using SQL databases and wanting to do some derived views for more performant access patterns. Importantly, they seem to support a wide breadth of SQL operators, support spilling computation state to disk, and it's open-source! Interestingly, it compiles queries to Rust, so an approach similar to Redshift (which compiles queries to C++ programs).

There's already a bunch of tools in this area:

1. Materialize[0], which afaik is more big-data oriented, and doesn't pipe the results back to your database, instead storing results in S3 and serving them.

2. Epsio[1], which I've never used, seems to be very similar to this product, but is closed-source only.

3. When building OctoSQL[2], this capability was also important to me and it was designed from ground up to support it. Though in practice in a tool like OctoSQL it's pretty useless (was a fun problem to solve though).

There's some things I'm curious about:

- Does it handle queries that involve complex combinations of ordering with limits in subqueries? If, due to a change in an underlying table, a top-n row is added, moving other rows around (and displacing the current n'th), will the subsequent query parts behave as though the order was maintained when computing them, or will it fall apart (imagine a select with a limit reading from a select with a bigger limit)?

- Is it internally consistent[3]? They say it's "strongly consistent" and "It also guarantees that the state of the views always corresponds to what you'd get if you ran the queries in a batch system for the same input." so I think the answer is yes, but this one's really important.

Either way, will have to play with this, and dig into the paper (the link in the repo doesn't work, here's an arXiv link[4]). Wishing the creators good luck, this looks great!

[0]: https://materialize.com

[1]: https://www.epsio.io

[2]: https://github.com/cube2222/octosql

[3]: https://www.scattered-thoughts.net/writing/internal-consiste...

[4]: https://arxiv.org/pdf/2203.16684


I have one where I deployed K9s so I can learn Kubernetes better, and one where I have deployed Harbormaster (http://harbormaster.readthedocs.io/).

The Harbormaster one has a bunch of stuff (Zigbee2MQTT, my smart home stuff, my apps, etc.). I also have a Pi 4 running Octoprint; services on the NUC load instantly, whereas Octoprint feels a bit sluggish.

The NUC is an x86 (well, amd64) box, with a 10W power draw, which is great. I don't think a desktop PC will do less than 100W...


The name is perfect. "Etak" refers to a system of navigation used by Micronesian and Polynesian seafarers to navigate from island to island in the vast Pacific Ocean. Much like this device, it operates by "augmented dead reckoning" (as it says in TFA).

The etak system of navigation involves navigating by stars and ocean swells to get the heading, but a key issue when navigating by dead reckoning over long distances is that if you're a fraction of a degree off you may miss your destination and never know it, so it's also vitally important to know how far you've gone. This is hard when there are no landmarks. The navigators estimated their distance by using intermediary islands off to the side, which they viewed as coming towards them (from their point of view, the navigator stays motionless on the open ocean while the world moves towards them) and past them. These reference islands were called etaks.

However, confusingly, the etaks were generally not visible, being beyond the horizon, and sometimes did not even exist. The navigators would have named etaks that they pictured being just over the horizon, whether they were there or not, and would track their procession past their boat. When the set number of etaks had passed, they would know they were in the vicinity of the destination island. If they were not at the right time of day for birds to be out, they would then hang out in the area waiting to spot the birds leaving or returning at dawn or dusk.

So the system involves dead reckoning plus a system of turning the navigators' own well-developed intuition of how far they had travelled into a formalized system of generally-invisible islands that they used as a mental model to externalize this intuition.

(My knowledge of this is from Cognition in the Wild, Hutchins, E., 1995.)

Edit: D'oh, I should have finished TFA. This is described at the end, although more roughly.


This is fantastic! One can easily imagine some minor refinements that would allow this to be mass producible with very high accuracy. And the applications are abundant. I imagine you could use space-filling curves to make 2D or 3D sensors that could cost-efficiently give robots a sense of touch. Wrapped around something like a flexible tube, you could make it directionally sensitive for proprioception. It's easily possible that other things that affect the air gaps, say temperature differences, could be detected and localized as well.

Microsoft filed this late today with the SEC[1] just before they stopped accepting new filings for the day under their new Cybersecurity Incident disclosure rule[2]. FWIW, two other publicly traded companies have disclosed[3] their breaches since the rule went into effect last month.

[1] https://www.sec.gov/Archives/edgar/data/789019/0001193125240...

[2] https://www.sec.gov/news/press-release/2023-139

[3] https://last10k.com/stock-screeners/cybersecurity


Note that they don't compare with deepseek coder 6.7b, which is vastly superior to much bigger coding models. Surpassing codellama 7b is not that big of a deal today.

The most impressive thing about these results is how good the 1.3B deepseek coder is.


Miles and kilometers can be estimated with the Fibonacci sequence. The conversion factor (mi -> km, about 1.609) is very nearly the golden ratio (about 1.618). For any number in the sequence taken as miles, the subsequent number approximates the distance in kilometers. Your way is probably quicker, but it's a fun bit of information.

Most people know about MediaWiki even if they don't realize they do, because it powers Wikipedia, but I wish more people used it for documentation.

You can create highly specialized templates in Lua, and there's a RDBMS extension called Cargo that gives you some limited SQL ability too. With these tools you can build basically an entirely custom CMS on top of the base MW software, while retaining everything that's great about MW (easy page history, anyone can start editing including with a WYSIWYG editor, really fine-grained permissions control across user groups, a fantastic API for automated edits).

It doesn't have the range of plugins to external services the way something like Confluence has, but you can host it yourself and have a great platform for documentation.


This seems more of a course about how to use Generative AI - does anyone have a good recommendation of a course or book about how they actually work?

I agree that this makes 37 somewhat interesting.

Certainly more interesting than - say - 31.

31 is a prime number too, and therefore somewhat interesting. But surely not as interesting as 37, which, as we just learned, is the median value of the second prime factor of an integer.

Any suggestions of integers which are even more interesting?

And while we are at it, is there an integer which qualifies to be the most interesting?


The Spotify app has become so ridiculously bad I can't use it anymore. I found out about Zotify [1], which can rip my entire collection from Spotify. The API rate-limits anything faster than real-time listening though, so it'll take a few weeks to download everything.

[1] https://zotify.xyz/


https://www.insecam.org/ (now dead) was fun while it lasted. It listed all unsecured security cameras by country, so you could stare at a forklift moving pallets around in a Chinese factory or people parking in a Japanese car park, and similar things.

Edit: It's not dead after all: http://insecam.com/


All APE executables are also unikernels that boot from BIOS. Here's a demo video of the APE executable for SectorLISP (ANSI C version) running in the Blink emulator (which I wrote originally just to test this): https://storage.googleapis.com/justine/sectorlisp2/sectorlis... When you compile an APE program with cosmocc, the first 512 bytes are a master boot record where we embed a BIOS bootloader in the PE DOS stub section of the executable. APE is then able to tell when the BIOS is what loaded it, switch to 32-bit mode, set up the page tables, enter 64-bit long mode, and then call _start() and main().

I personally haven't been focusing on bare metal support myself the past several months, but we've got a talented member of the team (tkchia) who actively works on maintaining metal and expanding its capabilities. See also the list of supported system calls for bare metal here: https://justine.lol/cosmopolitan/functions.html As you can see, we still have a lot of work to do. But it's really cool to have, and I have high hopes for it someday as a cloud native deployment format. Hypervisors are in many ways the new OSes, so why do we need a traditional OS in-between?


I was in Japan when the quake/tsunami/nuclear meltdown happened on the 11th of March 2011. More specifically, my fiancée and I were on some slow train bound for Tokyo from Kusatsu, a charming winter wonderland with thermal spring goodness. Our plan that day was to stop over in Tokyo for a bit of sightseeing before meeting up with friends in Osaka. We obviously didn't want to lug our belongings around, so we'd had the foresight to takkyu-bin all our luggage to Osaka. After some excite.jp translator services and some kanji, the inn staff managed to make a booking and we went on our way.

The train ride was atypical the whole way: the train would stop randomly and the carriages were very full. We even got to see some angry men loudly complaining and tapping on the train driver's door, quite a rare sight in Japan. We later realised it was because of the small quakes that had struck before the big one.

We arrived in Tokyo delayed and had some food around Ueno station; just when we were about to board the train, the big earthquake struck. It was loud at first, then slow rolling, but it went on for too long. You know it's a big one when the local middle school students are cowering and some even start screaming. The station's lights shorted from swinging so much and some sparks flew as well. Then news came that the bullet train was cancelled; in fact, all trains were cancelled. We walked around aimlessly trying to find accommodation for the night but couldn't; everyone was stuck in Tokyo. It was quite funny to see that even with almost everything sold out in the convenience stores, the same few foods and drinks were left untouched everywhere.

In the end we sat and slept in the lobby of a 5-star hotel near Tokyo Station, which was nice enough to invite in all the people still out and about at 2am. The next morning the bullet train was declared fit to run again and phones started working too, and on our way to catch the train to Osaka, we received a call from our accommodation in Osaka; the staff simply said your luggage arrived yesterday and is here waiting for you.


Shouldn't there be games coming out fairly soon (next 1-2 years) that take advantage of Lumen/Nanite/Quixel/Metahuman? From my POV, the future looks bright for Unreal because those technologies give it a pretty crazy edge over its competitors, and that's not even taking into account the Unity fiasco.

wow.

> https://www.cs.umd.edu/~abadi/

> He is best-known for the development of the storage and query execution engines of the C-Store (column-oriented database) prototype, which was commercialized by Vertica and eventually acquired by Hewlett-Packard in 2011, for his HadoopDB research on fault tolerant scalable analytical database systems which was commercialized by Hadapt and acquired by Teradata in 2014, and deterministic, scalable, transactional, distributed systems such as Calvin which is currently being commercialized by Fauna.

That is as impressive a resume as I have ever seen. Dude spawned 3 successful commercial database offerings.

