I began using Jujutsu as my VCS about 2 months ago. Considering most of my work is on solo projects, I love the extra flexibility and speed of being able to safely fixup recent commits. I also love not having to wrangle the index, stashes, and merges.

`lazyjj` [1] makes it easier to navigate around the change log (aka commit history) with single keypresses. The only workflow it's currently missing for me is `split`.

For the times when I have had to push to a shared git repo, I used the same technique mentioned in the article to prevent making changes to other developers' commits [2].
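
For anyone curious, the technique is a one-line setting in the jj config: mark every commit other people can see as immutable, and jj will refuse to rewrite it. A sketch (the revset on the right is my own example; see [2] for the actual defaults):

    # in jj's config.toml
    [revset-aliases]
    "immutable_heads()" = "trunk() | tags() | remote_bookmarks()"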

It's been a seamless transition for me, and I intend to use Jujutsu for years.

[1] https://github.com/Cretezy/lazyjj [2] https://jj-vcs.github.io/jj/latest/config/#set-of-immutable-...


Literally just replaced the flimsy clip-on vent mount charger in my car with a ProClip custom fit mount and a Qi2 charger that ran me ~$115 all-in. I wanted longevity with this solution.

(See also: https://www.proclipusa.com/pages/product-finder)

I was pretty close to picking up the new SE4 to replace my iPhone 14 Pro, and I'm balking at the lack of MagSafe on top of the $599 price tag.


I recently automated this process, however, in a very different way. (A rough sketch of the capture stage follows the list below.)

- CLI run for every job you want to apply for (this is important)

- JavaScript (Deno) with Puppeteer to run the JS for the page

- Create a directory for all the artefacts <yyyy-mm-dd-ms-pagetitle>

- Save the webpage link (artefact)

- Take a screenshot of the page (artefact)

- Extract the HTML (artefact)

- Convert HTML to Markdown with a CLI (artefact)

- Send Markdown to the Grok API to extract just the Job Description as Markdown (artefact)

- Send Job Description and Autobiography to Grok API to generate a Resume (artefact)

- Send Job Description and Autobiography to Grok API to generate a Cover Letter (artefact)

- Use pandoc to convert the Markdown Resume and Cover Letter into Open Document Format (LibreOffice) (artefacts)
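
To make the capture stage concrete, here is a rough sketch, assuming Deno's npm compatibility layer for Puppeteer. File and directory names are illustrative, not the actual private repo:

    // capture.ts -- illustrative sketch only
    import puppeteer from "npm:puppeteer";

    const url = Deno.args[0];
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle0" }); // let the page's JS run

    // <yyyy-mm-dd-ms-pagetitle> directory for this job's artefacts
    const title = (await page.title()).toLowerCase().replace(/\W+/g, "-");
    const dir = `${new Date().toISOString().slice(0, 10)}-${Date.now()}-${title}`;
    await Deno.mkdir(dir, { recursive: true });

    await Deno.writeTextFile(`${dir}/link.txt`, url);                   // artefact: link
    await page.screenshot({ path: `${dir}/page.png`, fullPage: true }); // artefact: screenshot
    await Deno.writeTextFile(`${dir}/page.html`, await page.content()); // artefact: HTML

    await browser.close();

The HTML-to-Markdown conversion and the Grok calls then run over the files in that directory.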

The important differences here are:

- You need to find the job you are interested in. Why automate this?

- Run the CLI `job-hunter https://job.site/jobid` (50sec runtime)

- Open the ODF documents, review, edit, save (human involved is important)

- Use a bash script running the LibreOffice CLI to convert the ODF documents to PDF (the commands are sketched after this list)

- Review the PDFs

- Manually click the apply button on the site and upload the documents
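
The document conversion steps above boil down to two standard CLI invocations (file names are examples):

    # Markdown -> ODF via pandoc
    pandoc resume.md -o resume.odt
    pandoc cover-letter.md -o cover-letter.odt

    # ODF -> PDF via the LibreOffice CLI, after the human review step
    libreoffice --headless --convert-to pdf --outdir . resume.odt cover-letter.odt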

I also keep a spreadsheet with the details for each job I apply for so I can track interactions; think CRM for job applications and recruiters. This could be automated too, but I got a job, so I've lost interest.

Points of interest:

- Markdown is a fantastic format in general, but for LLMs as prompts and documents, it's awesome.

- If you just curl the page HTML, you don't get the recruiters' email addresses in most cases, hence the use of Puppeteer.

- Having all the artefacts saved on disk is important for review before and after the application, including adding notes.

- By using an Autobiography that is extreme in detail, the LLM can add anything and everything about you to the documents.

- Use Grok and support Elon. OpenAI can stick their "Open" where it fits.

- I don't end up having to format the documents that are generated as ODF files; they look great.

I can apply for around 10 to 20 jobs in a day if I try hard. Most of the time it is around 5 because I am doing other things. They are only jobs I'm interested in, though, and I can customise the documents. Also, if I am applying for a job that involves AI, I add a note at the bottom stating the application has been generated by an LLM and customised.

There's probably more interesting points, but you get the idea.

My TODO list includes a CLI switch to just open the page in a Firefox profile so I can authenticate to the site. This removes the stupid "automate auth on every job site" issue: simply authenticate once and keep the cookie in the hunter profile.

The repo is private for the time being, but I could make it public.

Edit: formatting.


I did this as well and landed a job in 3 months. The most tedious part before I automated the process was copying and pasting relevant info into my cover letter, updating details, and creating the Word document for the cover letter plus a copy of my resume in a folder for that company/offer. I also auto-added job details to a Notion table (a Kanban board) where I tracked open applications.

The whole process previously took me half an hour to 45 minutes. Afterwards it took less than 2 minutes. I didn't apply for more jobs, but I could write an application in a fraction of the time and then focus on researching the company and the job.

ChatGPT made the whole process super smooth. We live in wonderful times.


I'm reading articles on managing people all the time (mostly from the Software Lead Weekly newsletter). And recently, I opened Armstrong's Handbook of Human Resource Management Practice to read a few chapters on a problem I'm facing at work.

And WOW, a proper book on the topic is SOOO much better than any random article I find, be it from SWLW, HN, Reddit, or any other source. Articles and posts are easy to like when I already agree with their premise. But the depth of a proper book, from a real source of authority rather than some random person online, looking at the problem from multiple angles: that's so much more insightful and useful.

So instead of hunting for the best articles, I would 100% recommend getting Armstrong, or some other textbook. Or at least High Output Management, as another comment suggested, or some other well-known and well-regarded book. But Armstrong in particular can give you a very deep understanding of most aspects of people management, and it's up to date.


I fail to understand why I should use it over a different embedded vector DB like LanceDB or Chroma. Both are written in more performant languages and have a simple API with a lot of integrations, plus extra power if one needs it.

A related line of work is "Thinking Like Transformers" [1]. They introduce a primitive programming language, RASP, composed of operations that can be modeled with transformer components, and demonstrate how different programs can be written in it, e.g. histograms and sorting. Sasha Rush and Gail Weiss have an excellent blog post on it as well [2]. Follow-on work demonstrated how RASP-like programs can actually be compiled into model weights without training [3].

[1] https://arxiv.org/abs/2106.06981

[2] https://srush.github.io/raspy/

[3] https://arxiv.org/abs/2301.05062


Grinding leetcode is inefficient. What you should be doing is familiarizing yourself with the common patterns you might expect to see in an interview. Look at the blind 75 and https://seanprashad.com/leetcode-patterns/.

Initially, you don't need to solve any of the problems from scratch. Look up the problem on YouTube and someone will walk you through it. This will build your intuition for when to reach for a heap, a DP array, BFS, etc. If you don't know these concepts, then watch another video explaining them. These videos are often 10-15 minutes, so with a 30-minute time commitment a day you can potentially get through 3 a day, getting you through the complete blind 75 (more than enough) in less than a month, or 1 of each of SP's 22 patterns in a couple of weeks.

The great thing is you don’t need dedicated time for this approach, you can often start a video while tackling laundry or doing some dishes.

Then start putting these into practice, but spend no more than 10-15 minutes on a problem. If you can't solve it, go watch the video again. There are so many times where you have the right approach but make a stupid mistake that causes you to flounder, and rewatching lets you pick up a better way of doing it. Eventually you will be solving these in 10-15 minutes, and the time commitment will have remained minimal.

After this, find a new job that is only 40 hours a week and voila you’ve just opened up 10-20 hours for personal projects.


It's also not really an obscure issue. For example, the OWASP password cheat sheet talks about the need to increase the work factor over time - https://cheatsheetseries.owasp.org/cheatsheets/Password_Stor... . I would hope that password manager software would at least know the things that end up on "cheat sheets".
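
For what that looks like in practice, here is a minimal sketch of upgrading the work factor on login, assuming the npm bcrypt package (the target cost is an arbitrary example):

    import bcrypt from "bcrypt";

    const TARGET_COST = 12; // raise over time as hardware gets faster

    async function verifyAndUpgrade(password: string, stored: string): Promise<string | null> {
      if (!(await bcrypt.compare(password, stored))) return null; // wrong password
      // the stored hash encodes its own cost; re-hash if it is below today's target
      if (bcrypt.getRounds(stored) < TARGET_COST) {
        return bcrypt.hash(password, TARGET_COST);
      }
      return stored; // already at the current work factor
    }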

My take as a TPM certified in Scrum: the better and more skilled the team members, the less you need Scrum and other frameworks. Scrum is great for teams composed of new developers who don't yet know how to work together, or teams at companies with poor culture. But the best teams are self-organizing and don't necessarily need the guidance of Scrum. As Gergely mentions, team autonomy is a big factor in job satisfaction (and performance).

But it can still be worth doing in high-performance environments if specific practices are needed. Part of being Agile is adapting processes to teams, and some Scrum practices (relative estimation, for example) can be useful even when not doing Scrum itself.

As an aside, Gergely has a newsletter[1] (of which this is one of the free articles) that is absolutely fantastic. It's $100/yr IIRC and well worth it. Gergely deep dives into topics around tech including hiring, structure, comp, etc.

Gergely also has a book[2] that helped me get hired in SF when I moved here a couple of years ago.

[1] - https://newsletter.pragmaticengineer.com/about [2] - https://thetechresume.com


> "If this old way of doing things is so error-prone, and it's easier to use declarative solutions like Kubernetes, why does the solution seem to need sooo much work that the role of DevOps seems to dominate IT related job boards? Shouldn't Kubernetes reduce the workload and need less men power?"

Because we're living in the stone age of DevOps. Feedback cycles take ages, languages are untyped and error-prone, pipelines cannot be tested locally, and the field is evolving as rapidly as frontend JavaScript did for many years. Also, I have a suspicion that the mindset of the average DevOps person has some resistance to actually using code instead of YAML monstrosities.

There is light at the end of the tunnel though:

- Pulumi (Terraform, but with code; see the sketch after this list)

- dagger.io (modern CI/CD pipelines)
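
As a taste of the "infrastructure as real code" style, a minimal Pulumi sketch in TypeScript, assuming the @pulumi/aws provider (the bucket is just an example resource):

    import * as aws from "@pulumi/aws";

    // a typed, testable resource definition instead of a YAML blob
    const bucket = new aws.s3.Bucket("artifact-bucket", {
      versioning: { enabled: true },
    });

    // stack outputs resolve after deployment
    export const bucketName = bucket.id;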

Or maybe the future is something like Replit, where you don't have to care about any of that stuff (AWS Lambdas suck, btw).


I got one as a gift and was a bit skeptical of it too, as a programmer who does most of the work in code with vim commands, but I have come around on it.

It's useful to have as a clock/date for a few locations at a glance. It opens up some apps too, but the main thing I use it for is having a few reference docs that I can open with a button. The window layout buttons are nice too. The other things I use it for are setting the screen brightness and locking the screen. Not sure I'd buy one, but they are neat to have if you get one as a gift.


I don't quite get the study plan, as it sounds more like cramming. Nothing wrong with cramming per se except that what you learn does not stick, and you will have to repeat the process when you need to switch jobs again.

I think a better system is to regularly study fundamentals and always deep dive in your work. Case in point, I've never had problems passing interviews by FAANG or other hot startups, I always got title bump when switching jobs, and I never spent extra time prepping for interviews. Say you're an ML engineer who builds an image recognition pipeline using Spark and PyTorch. Do not just be content with assembling a number of open-source solutions to make your pipeline work. Instead, study the internals of Spark, understand the math and algorithms behind your image recognition models, read survey papers to understand the landscape of data processing and image recognition or further, machine learning, and implement a few models and try to optimize them. Similarly, if you work on database systems, do not just stop at being familiar with MySQL or Postgres or whatever. Instead, understand how transactions work, what consistency means, how principles of distributed systems play out. Study Jim Gray, Gottfried Vossen, Maurice Herlihy, Leslie Lamport, the database red book and its references... You get the idea.

As for leetcoding, replace it with the study of algorithm design. Study Jon Kleinberg's book or Knuth's writings (no, you don't have to read through his books, but his writings are incredibly insightful even for mortals like us), for instance. Instead of working through hundreds of backtracking problems, study backtracking's general forms.

People tend to underestimate the effect of regular study for years. You'll find that in a few years your knowledge will converge and you will be able to spend less time to incrementally improve your skills, and you will have so many concepts to connect to greatly benefit your projects.


Believe it or not, it's completely free.

It's thanks to TFRC. It's the most world-changing program I know of. It's why I go door to door like the proverbial religious fanatic, singing TFRC's praises, whether people want to listen or not.

Because for the first time in history, any capable ML hacker now has the resources they need to do something like this.

Imagine it. This is a legit OpenAI-style model inference API. It's now survived two HN front page floods.

(I saw it go down about an hour ago, so I was like "Nooo! Prove you're production grade! I believe in you!" and I think my anime-style energy must've brought it back up, since the API works fine now. Yep, it was all me. Keyboard goes clackclackclack, world changes, what can I say? Just another day at the ML office oh god this joke has gone on for like centuries too long.)

And it's all thanks to TFRC. I'm intentionally not linking anything about TFRC, because in typical google fashion, every single thing you can find online is the most corporate, soulless-looking "We try to help you do research at scale" generic boilerplate imaginable.

So I decided to write something about TFRC that wasn't: https://blog.gpt4.org/jaxtpu

(It was pretty hard to write a medieval fantasy-style TPU fanfic, but someone had to. Well, maybe no one had to. But I just couldn't let such a wonderful project go unnoticed, so I had to try as much stupid shit as possible to get the entire world to notice how goddamn cool it is.)

To put things into perspective, a TPU v2-8 is the "worst possible TPU you could get access to."

They give you access to 100.

On day one.

This is what originally hooked me in. My face, that first day in 2019 when TFRC's email showed up saying "You can use 100 v2-8's in us-central1-f!": https://i.imgur.com/EznLvlb.png

The idea of using 100 theoretically high-performance nodes of anything, in creative ways, greatly appealed to my gamedev background.

It wasn't till later that I discovered, to my delight, that these weren't "nodes of anything."

These are 96-CPU, 330GB-RAM Ubuntu servers.

That blog post I just linked to is running off of a TPU right now. Because it's literally just an ubuntu server.

This is like the world's best kept secret. It's so fucking incredible that I have no idea why people aren't beating down the doors, using every TPU that they can get their hands on, for as many harebrained ideas as possible.

God, I can't even list how much cool shit there is to discover. You'll find out that you get 100Gbit/s between two separate TPUs. In fact, I'm pretty sure it's even higher than this. That means you don't even need a TPU pod anymore.

At least, theoretically. I tried getting Tensorflow to do this, for over a year.

kindiana (Ben Wang), the guy who wrote this GPT-J codebase we're all talking about, casually proved that this was not merely theoretical: https://twitter.com/theshawwn/status/1406171487988498433

He tried to show me https://github.com/kingoflolz/swarm-jax/ once, long ago. I didn't understand at the time what I was looking at, or why it was such a big deal. But basically, when you put each GPT layer on a separate TPU, it means you can string together as many TPUs as you want to make however large a model you want.

You should be immediately skeptical of that claim. It shouldn't be obvious that the bandwidth is high enough to train a GPT-3 sized model in any reasonable time frame. It's still not obvious to me. But at this point, I've been amazed by so many things related to TPUs, JAX, and TFRC, that I feel like I'm dancing around in Willy Wonka's factory while the door's wide open. The Oompa Loompas are singing about "that's just what the world will do, oompa-loompa they'll ignore you" while I keep trying to get everybody to stop what they're doing and step into the factory.

The more people using TPUs, the more google is going to build TPUs. They can fill three small countries entirely with buildings devoted to TPUs. The more people want these things, the more we'll all have.

Because I think Google's gonna utterly annihilate Facebook in ML mindshare wars: https://blog.gpt4.org/mlmind

TPU VMs just launched a month ago. No one realizes yet that JAX is the React of ML.

Facebook left themselves wide open by betting on GPUs. GPUs fucking suck at large-scale ML training. Why the hell would you pay $1M when you can get the same thing for orders of magnitude less?

And no one's noticed that TPUs don't suck anymore. Forget everything you've ever heard about them. JAX on TPU VMs changes everything. In five years, you'll all look like you've been writing websites in assembly.

But hey, I'm just a fanatic TPU zealot. It's better to just write me off and keep betting on that reliable GPU pipeline. After all, everyone has millions of VC dollars to pour into the cloud furnace, right?

TFRC changed my life. I tried to do some "research" https://www.docdroid.net/faDq8Bu/swarm-training-v01a-pdf back when Tensorflow's horrible problems were your only option on TPUs.

Nowadays, you can think of JAX as "approximately every single thing you could possibly hope for."

GPT-J is proof. What more can I say? No TFRC, no GPT-J.

The world is nuts for not noticing how impactful TFRC has been. Especially TFRC support. Jonathan from the support team is just ... such a wonderful person. I was blown away at how much he cares about taking care of new TFRC members. They all do.

(He was only ever late answering my emails one time. And it was because he was on vacation!)

If you happen to be an ambitious low-level hacker, I tried to make it easier for you to get your feet wet with JAX:

1. Head to https://github.com/shawwn/jaxnotes/blob/master/notebooks/001...

2. Click "Open in Collaboratory"

3. Scroll to the first JAX section; start reading, linearly, all the way to the bottom.

I'd like to think I'm a fairly capable hacker. And that notebook is how I learned JAX, from zero knowledge. Because I had zero knowledge, a week or two ago. Then I went from tutorial to tutorial, and copied down verbatim the things that I learned along the way.

(It's still somewhat amazing to me how effective it is to literally re-type what a tutorial is trying to teach you. I'd copy each sentence, then fix up the markdown, and in the process of fixing up the markdown, unconsciously osmose the idea that they were trying to get across.)

The best part was, I was connected remotely to a TPU VM the whole time I was writing that notebook, via a jupyter server running on the TPU. Because, like I said, you can run whatever the hell you want on TPUs now, so you can certainly run a jupyter server without breaking a sweat.

It's so friggin' nice to have a TPU repl. I know I'm just wall-of-text'ing at this point, but I've literally waited two years for this to come true. (There's a fellow from the TPU team who DMs with me occasionally. I call him TPU Jesus now, because it's nothing short of a miracle that they were able to launch all of this infrastructure -- imagine how much effort, from so many teams, were involved in making all of this possible.)

Anyway. Go read https://github.com/shawwn/website/blob/master/mlmind.md to get hyped, then read https://github.com/shawwn/website/blob/master/jaxtpu.md to get started, and then read https://github.com/shawwn/jaxnotes/blob/master/notebooks/001... to get effective, and you'll have all my knowledge.

In exchange for this, I expect you to build an NES emulator powered by TPUs. Do as many crazy ideas as you can possibly think of. This point in history will never come again; it feels to me like watching the internet itself come alive back in the 80's, if only briefly.

It's like having a hundred Raspberry Pis to play with, except every Raspberry Pi is actually an Ubuntu server with 96 CPUs and 330GB of RAM, and it happens to have 8 TPU cores, along with a 100Gbit/s link to every other Raspberry Pi.


I went rather deep into ankification of computer science - 5500 cards deep. I've written up my experiences as a series of articles here for anyone interested

https://www.jackkinsella.ie/articles/janki-method


When I'm in an interview I always end up turning the interview around and interviewing the interviewer.

Interviews are a two-way street. They want to make sure you suit the role, but you also need to make sure the role is suitable for you, that you are going to enjoy it and get something from it. I go in with the mental view that they need to sell the job to me.

Doing consultancy I've been at many companies, some great, some bad. I go straight to the questions, trying to dig up as many warts as possible to figure out what I'm letting myself in for, as I've seen how much companies can differ.

What are you working on this week?

What’s a typical day?

What are the current tough problems you have?

Do you do much out of hours work?

Why do you use the tech stack / tools you do?

How do you manage work: do you do Scrum/Kanban/x/y/z? How's your backlog? What are the upcoming releases, and are they on schedule? Who manages features, and who is the product owner?

How’s the team?

What’s been your favourite part of your job so far?

How does the business make money? How much funding do they have, and what happens when that runs out? How are the current projections? Will my job be here in 12 months?

Why are you recruiting for this role? How is staff turnover?

Any bad things I should know?

Etc etc

