I first learned of it reading the intro to American Cake, by Anne Byrn. It covers the history of cakes in America through 125 (updated) recipes.
The current recipe for pound cake calls for 6 large eggs, but the notes on ingredients in the book’s introduction said early recipes needed 12-16 (!!) eggs to get one pound of eggs; modern large eggs run about eight or nine to the pound, so eggs back then were evidently much smaller. Side note: pound cake uses 1 lb each of eggs, flour, sugar, and butter.
I recently bought an older Better Homes and Gardens cookbook, from 1953; I wanted one from before science took over the kitchen too much. I haven’t had a chance to cook anything from it yet, but now I’m wondering whether I’ll have issues cooking from a 70-plus-year-old cookbook, especially when it comes to baked goods.
I’m not into cooking enough to have the patience to experiment and tune things. If something doesn’t work, I’m more likely to get discouraged and order takeout.
Sizes are different, but appliances were also a lot more temperamental back then; the first oven with a temperature control was only developed in the 1920s, and it took a while for them to show up in every home.
If anything, much older recipes tend to be less precise simply because they did not have the technology. Before thermostats were put in ovens, baking was done by feeding a fire by vibes, and then leaving your baked good to sit in the residual heat.
The very first thing I learned to cook as a young kid in the late 1950s was a macaroni and cheese recipe from the BH&G cookbook. It was very different from the creamy mac and cheese recipes that are common today. It didn't have a runny sauce; it had more of a firm, custardy texture. You could scoop up chunks of it with a big serving spoon.
I did some brainstorming with ChatGPT, and we found the recipe below.
Could you check your cookbook to see if it has a recipe like this, and possibly take a photo and send it to me? Email is in my profile. Thanks!
---
Old-Fashioned Baked Macaroni and Cheese (circa 1950s BH&G style)
Ingredients:
1½ cups elbow macaroni (uncooked)
2 cups grated sharp cheddar cheese
2 eggs, beaten
2 cups milk (sometimes evaporated milk was used)
1 tsp salt
Dash of pepper
Optional: breadcrumbs or cracker crumbs for topping
Optional: butter for dotting the top
Instructions:
Cook the macaroni in salted water until just tender. Drain.
In a large bowl, combine the hot macaroni with most of the grated cheese.
In a separate bowl, beat the eggs and mix in the milk, salt, and pepper.
Pour the egg-milk mixture over the macaroni and cheese, then stir gently to combine.
Pour into a buttered casserole dish. Top with the remaining cheese, and optionally a layer of buttered breadcrumbs or crushed crackers.
Bake at 350°F for about 45 minutes, or until set and lightly browned on top.
Very interesting, thanks! That one is very different from what my little sister and I made as kids. Ours was more like the one from ChatGPT that I posted above.
We were big fans of cream of mushroom soup, though. Our favorite was to mix a can of that and a can of tomato soup (with the usual 50/50 dilution with water). We called it "cream of tomato".
My standard cookbook is a 1970s edition of the Joy of Cooking, right before fat became evil and was excised from cookbooks. Everything from how to break down a squirrel to a side of beef.
I have no issues cooking from it with modern ingredients, because everything in it is built from "base" ingredients or from other recipes in the same book.
Generally, spaces around em-dashes are a question of style, neither prescribed nor proscribed by any specific typographical rule. One nice middle ground is a hair space ( ), although it’s a pain to insert.
> spaces around em-dashes are a question of style, neither prescribed nor proscribed by any specific typographical rule
Writing and publishing style guides like Hart's Rules (the Oxford style guide) and the Chicago Manual of Style treat the 'em' dash as a closed ("no spaces") parenthetical dash.
In British use – per Hart's Rules – writers choose the 'en' dash with spaces as the parenthetical dash, where US writers/publishers choose the closed 'em' dash for the same thing.
Imo, there is some conflation of the 'en' dash and the 'em' dash going around, because smart-dash auto-correction easily turns (--) into an 'em' dash, while the 'en' dash (and the 'em' dash without auto-correct) needs a key combo.
In common everyday typing online, I think people will simply use what is convenient and "good enough" -- a single hyphen as an 'en' dash, or a 2-hyphen dash that may or may not auto-correct into an 'em' dash. I prefer mixing spaces with a 2-hyphen 'em' dash, but I'm not a published writer, so I enjoy doing wild things like that.
I configured my Markdown renderer to replace ` -- ` with " — ". Hopefully those narrow spaces make it through HN's rendering — it's much easier when your tooling can do the job for you.
I haven’t run a biz on Stripe, but as an ex-employee, the two potential problems I’d worry about for you immediately are:
1. Account takeovers, helped by your service. Maybe a non-issue in practice, but it’d be a PITA if you get tangled up in that.
2. Handling the long tail of Stripe API version changes. Hopefully it’s straightforward enough, and maybe you can simply require your users to upgrade if they’re on something older that you don’t support. I don’t have any specific thing in mind, but it feels inevitable to me.
But awesome job and good luck! I hope it succeeds, it sounds like a good service to me!
Those are fair points and something to investigate further. For point 1, our tool only uses restricted API keys with minimal access, so both parties would have to agree to use the tool; but I believe we should also protect ourselves legally against such cases. For point 2, Stripe seems to be quite good about backwards compatibility on core objects, but it's a good idea to agree on a minimum supported version. Thanks!
> LLMs open up the door to performing radical updates that we'd never really consider in the past. We can port our libraries from one language to another. We can change our APIs to fix issues, and give downstream users an LLM prompt to migrate over to the new version automatically, instead of rewriting their code themselves. We can make massive internal refactorings. These are the types of tasks that in the past, rightly, a senior engineer would reject in a project until it's the last possible option. Breaking customers almost never pays off, and it's hard to justify refactoring on a "maintenance mode" project.
> But if it’s more about finding the right prompt and letting an LLM do the work, maybe that changes our decision process.
I don’t see much difference between documenting any breaking changes in sufficient detail for your library consumers to understand them vs “writing an LLM prompt for migrating automatically”, but if that’s what it takes for maintainers to communicate the changes, okay!
Just as long as it doesn’t become “use this LLM which we’ve already trained on the changes to the library, and you just need to feed us your codebase and we’ll fix it. PS: sorry, no documentation.”
There's a huge difference between documentation and prompts. Let me give you a concrete example.
I get requests to "make your research code available on Hugging Face for inference" with a link to their integration guide. That guide is 80% marketing copy about Git-based repositories, collaboration features, and TensorBoard integration. The actual implementation details are mixed in throughout.
A prompt would be much more compact.
The difference: I can read a prompt in 30 seconds and decide "yes, this is reasonable" or "no, I don't want this change." With documentation, I have to reverse-engineer the narrow bucket that applies to my specific scenario from a one-size-drowns-all ocean.
The person making the request has the clearest picture of what they want to happen. They're closest to the problem and most likely to understand the nuances. They should pack that knowledge densely instead of making me extract it from documentation links and back-and-forth.
Documentation says "here's everything now possible, you can do it all!" A prompt says "here are the specific facts you need."
Prompts are a shared social convention now. We all have a rough feel for what information you need to provide - you have to be matter-of-fact and specific; you can't be vague. When I ask someone to "write me a prompt," that puts them in a completely different mindset than just asking me to "support X".
Everyone has experience writing prompts now. I want to leverage that experience to get cooperative dividends. It's division of labor - you write the initial draft, I edit it with special knowledge about my codebase, then apply it. Now we're sharing the work instead of dumping it entirely on the maintainer.
I was pretty hand-wavy when I made the original comment. I was implicitly thinking of things like the Python sub-interpreter proposal, which had strong pushback from the NumPy engineers at the time (I don't know the current status, whether it's a good idea, etc; just something that came to mind).
The objections are of course reasonable, but I kept thinking this shouldn't be as big a problem in the future. A lot of times we want to make some changes that aren't _quite_ mechanical, and if they hit a large part of the code base, it's hard to justify. But if we're able to defer these types of cleanups to LLMs, it seems like this could change.
I don't want a world with no API stability of course, and you still have to design for compatibility windows, but it seems like we should be able to do better in the future. (More so in mono-repos, where you can hit everything at once).
Exactly as you write, the idea with prompts is that they're directly actionable. If I want to make a change to API X, I can test the prompt against some projects to validate that agents handle it well, even do direct prompt optimization, and then share it with end users.
Yes, there's a difference between "all documentation for a project" and "prompt for specific task".
I don't think there should be a big difference between "documentation of specific breaking changes in a library and how consumers should handle them" and "LLM prompt to change a code base for those changes".
You might call it a migration guide. Or it might be in the release notes, in a special section for Breaking Changes. It might show up in log messages ("you're using this API wrong, or it's deprecated").
Why would describing the changes to an LLM be easier than explaining them to the engineer on the other end of your API change?
I saw that you didn’t want to use a 3rd party provider, but why not stick a git repo on your VPS (which you are trusting with your data today) and use that to coordinate syncs between your client devices?
I don't think you really get it. Git is distributed. There's no need for "a git server". You already have a machine on which you host the SQL database; you can just use that as yet another git remote.
Thanks for the reply. I do agree with sibling comment from tasuki that I think you’re missing the simpler solution of plain git repos to solve “owning your own data in a future-proof manner”.
If you’re not trying to coordinate work among multiple people, and aren’t trying to enforce a single source of truth with code, you don’t _need_ “git server” software. You just need a git repository (folder & file structure) in a location that you consider to be your source of truth.
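For concreteness, a minimal sketch of what that can look like (the hostname and paths here are placeholders):

    # one-time setup: a bare repository on the VPS acts as the shared remote
    ssh user@vps 'git init --bare ~/notes.git'

    # on each client device
    git remote add vps user@vps:notes.git
    git push -u vps main    # publish local commits
    git pull vps main       # pick up commits made on other devices

No server software beyond sshd is involved; the bare repository is the source of truth, and every clone is a full backup of it.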
I’m not trying to convince you to change it now, especially if you’re happy with what you have, but I would suggest reading some (or all) of https://git-scm.com/book/en/v2
I think the first ~4 subsections of chapter 4 cover what I & tasuki were suggesting could be sufficient for you. If you’re the type of engineer to read through the code of your data storage layer, I think you’d find Chapter 10 (Git Internals) interesting, because it can demystify git. I enjoyed the whole book.
As with any engineering project, I see lots of questions about your choices, and I applaud you for sticking around. I would make very different decisions than you, based on your stated priorities, but that’s okay.
I was curious about what they meant by strength, and the link at the bottom of the article says this is tensile strength. So the comparison to spider silk was actually appropriate.
I also noticed that it’s from 2015, although it was still new to me and interesting.
I’m fascinated that they aren’t requiring an entitlement for all usage of setting & posting notifications through this API. A way to share 64 bits of information (at a time) to any process on the device? That is right in the wheelhouse of tracking a user across apps.
I don’t specifically know the types of things that you’d want to share across apps, but there’s a long history of cross process information channels being removed or restricted.
If the system is storing values for you, and isn’t keeping track of which app they came from, now you’ve got persistent storage across app deletion & re-install, as long as there isn’t a reboot in between.
I think you could easily use it to work around IDFA or IDFV resets, as a simple example.
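For anyone who hasn't played with it, the channel really is as simple as advertised. A minimal sketch, assuming this is the Darwin notify API from <notify.h> (the channel name is made up, and in practice the write and the read would happen in two different apps that agree on the name):

    #include <notify.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int token;
        // Both apps register against the same pre-agreed name.
        notify_register_check("com.example.shared-channel", &token);

        // App A: stash 64 bits. The value is held by the system until
        // reboot, regardless of which app wrote it.
        notify_set_state(token, 0x0123456789ABCDEFULL);

        // App B, later: read the same 64 bits back.
        uint64_t state = 0;
        notify_get_state(token, &state);
        printf("shared state: 0x%llx\n", (unsigned long long)state);

        notify_cancel(token);
        return 0;
    }

That single notify_set_state call is the whole covert write; the read side is just as short.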
> That is right in the wheelhouse of tracking a user across apps.
The design is old. It probably predates Facebook, so it wasn't intentional, as your comment might be read to imply. But it certainly seems ripe for abuse. I'm curious whether it would actually be used for that, since any app that can access the internet already has a better way to share information.
I was interning at Facebook in '07 when the first iPhone was released. Can confirm! Someone was 3rd in line at the Palo Alto Apple store and brought it over to the office.
Though iOS definitely predates 3rd-party apps and the ad-based economy. Which is a bit of a tautology.
This is exactly where my mind went immediately - 64 bits is more than enough for easy (1 line!) unenforced cross-app tracking of a user for advertising purposes; basically a super-cookie for iOS. If they now require an entitlement for this API, it's a privacy win.
The IDFV already supports tracking a user across apps, as long as they are from the same vendor. It resets when all of a vendor's apps are removed from a device. I'm not sure whether the user can reset it themselves, but the vendor could always tie things together using another self-generated identifier stored on the device, as long as any of its apps remain installed, which boils down to the same thing.
I think the approach you describe allows roughly the same thing, except perhaps without (or with different) permissions, and it also works between vendors (who must agree on it upfront).
I think it’s most interesting for 3rd party SDKs (analytics, advertising, others?), because they’re in a position to have their code running in apps from different vendors.
As per the DMA, if it's available to Apple's own apps, it has to be available to third-party apps as well. Of course Apple will fight this tooth and nail to maintain the walled garden that makes them billions per year.
Two more of my favourites for embedded, microcontroller-based systems are:
Introduction to Embedded Systems: Using Microcontrollers and the MSP430 by Manuel Jimenez et al. An excellent textbook with an emphasis on interfacing to an MCU from a hardware perspective. The chapter titled "The Analog Signal Chain" is by itself worth the price of the book.
Patterns for Time-Triggered Embedded Systems by Michael Pont. Full of C code for the 8051, which you can study and then adapt to your specific MCU family. The book is available for free at https://www.safetty.net/publications/pttes. Also check out all of his other books, since they are likewise full of C code examples.
My guess is that the 30k LOC is the 1st-party code, and there's probably a bunch of 3rd-party dependencies that are also compiled as part of that. But maybe I'm wrong.