As soon as I finished reading the article, the first thing that came to mind was Dieter Rams' "10 Principles of Good Design". I have been following his principles as much as I can, as they match, more or less, the UNIX philosophy:
1. Good design is innovative
2. Good design makes a product useful
3. Good design is aesthetic
4. Good design makes a product understandable
5. Good design is unobtrusive
6. Good design is honest
7. Good design is long-lasting
8. Good design is thorough down to the last detail
9. Good design is environmentally-friendly
10. Good design is as little design as possible
If people are interested in the DX7's technical details, I've done some work reverse-engineering it, and some of Yamaha's other FM synths: https://ajxs.me/blog/tag/DX7.html
There are a lot of software emulations of the DX7 that are quite realistic, like the amazing Dexed. Here's one that deserves a lot of credit for how accurate it is: https://github.com/chiaccona/VDX7
I love state machines, and every time I use one my coworkers think I invented it because they've never seen one before.
The data for the state machine in this article might be best prepared by generating it from a program. That generator program doesn't need to care (too much) about performance, since it runs during the build process. I like the idea of doing a lot of work now to save time in the future.
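To make that concrete, here is a minimal sketch of what such a build-step generator could look like; the states, events, and transition rules are invented for illustration. The slow, readable expansion happens at build time, and only the dense table ships with the program:

```python
# build_tables.py -- hypothetical build-step generator for a state-machine lookup table.
# This runs during the build, so it can afford to be slow and straightforward.

STATES = ["IDLE", "RUNNING", "DONE"]
EVENTS = ["start", "tick", "finish"]

# Human-readable transition rules; anything not listed stays in the same state.
RULES = {
    ("IDLE", "start"): "RUNNING",
    ("RUNNING", "tick"): "RUNNING",
    ("RUNNING", "finish"): "DONE",
}

def generate_table():
    """Expand the sparse rules into a dense states x events lookup table of state indices."""
    table = []
    for s in STATES:
        row = [STATES.index(RULES.get((s, e), s)) for e in EVENTS]
        table.append(row)
    return table

if __name__ == "__main__":
    # Emit the table as generated source that the real program includes.
    rows = ",\n    ".join("{" + ", ".join(map(str, row)) + "}" for row in generate_table())
    print(f"static const int TRANSITIONS[{len(STATES)}][{len(EVENTS)}] = {{\n    {rows}\n}};")
```

The generated array can be committed or written into the build directory; either way, the runtime code only ever does a constant-time table lookup.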
For several years I have worked primarily with performance optimizations in the context of video games (and previously in the context of surgical simulation). This differs subtly from optimization in certain other areas, so I figured I'd add my own perspective to this already excellent comment section.
1. First and foremost: measure early, measure often. It's been said so often and it still needs repeating. In fact, the more you know about performance the easier it can be to fall into the trap of not measuring enough. Measuring will show exactly where you need to focus your efforts. It will also tell you without question whether your work has actually led to an improvement, and to what degree.
2. The easiest way to make things go faster is to do less work. Use a more efficient algorithm, refactor code to eliminate unnecessary operations, move repeated work outside of loops. There are many flavours, but very often the biggest performance boosts are gained by simply solving the same problem through fewer instructions (see the sketch after this list).
3. Understand the performance characteristics of your system. Is your application CPU bound, GPU compute bound, memory bound? If you don't know this, you could make the code ten times as fast without gaining a single ms, because the system is still stuck waiting for a memory transfer. On the flip side, if you know your system is busy waiting for memory, perhaps you can move computations into those stalls and get them essentially for free. This is particularly important in shader optimization (latency hiding).
4. Solve a different problem! You can very often optimize your program by redefining your problem. Perhaps you are using the optimal algorithm for the problem as defined. But what does the end user really need? Often there are very similar but much easier problems which are equivalent for all practical purposes. Sometimes because the complexity lies in special cases which can be avoided or because there's a cheap approximation which gives sufficient accuracy. This happens especially often in graphics programming where the end goal is often to give an impression that you've calculated something.
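A toy illustration of point 2, with every name and number invented: the same "which enemies are in range" query written naively, and again with the loop-invariant work hoisted out and the square root removed entirely:

```python
import math

# Naive version: indexes the player tuple and takes a square root on every iteration.
def visible_enemies_naive(player, enemies, radius):
    result = []
    for e in enemies:
        dist = math.sqrt((e[0] - player[0]) ** 2 + (e[1] - player[1]) ** 2)
        if dist < radius:
            result.append(e)
    return result

# Same answer, less work: hoist the per-iteration constants out of the loop and
# compare squared distances so the square root disappears entirely.
def visible_enemies_fast(player, enemies, radius):
    px, py = player
    radius_sq = radius * radius
    return [e for e in enemies
            if (e[0] - px) ** 2 + (e[1] - py) ** 2 < radius_sq]
```

Both functions return the same result, since comparing squared distances preserves the ordering for non-negative values.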
This thread has lots of good advice. I'll add some of mine, not limited to C/C++. If you have the luxury of using a VCS, make full use of it. Many teams use it merely as a collaboration tool, but a VCS can be more than that. Pull the history and build a simple database from it. It doesn't have to be an RDB (though that helps); a simple JSON file or even a spreadsheet is a good start. There is a lot of valuable information to be extracted with just a simple data-driven approach, almost immediately.
* You can find out the most relevant files/functions for your upcoming work. If some functions/files have been changed frequently, then they are going to be the hot spots for your work; focus on them to improve your quality of life (a sketch of this is at the end of this comment). Want to introduce unit tests? Focus on the hot spots. Suffering from lots of merge conflicts? Same thing.
* You can also figure out correlations among the project's source files. Some seemingly distant files are frequently changed together? That might suggest an implicit structure that is not clear from the code itself. This kind of external context can be useful for building a bird's-eye view.
* The real ownership of each module can be inferred from the history. Having a clear ownership model helps, especially if you want to introduce some form of code review. If some code, data, or module seems to have unclear ownership, that might be a signal that refactoring is needed.
* Specific to C/C++ contexts, build-time improvements can be focused on the important modules, in a data-driven way. Incremental build time matters a lot. Break down frequently changed modules rather than blindly removing dependencies on random files. You can even combine this with header dependency analysis to score each module by its real build-time impact.
There could be so many other things if you can integrate other development tools with the VCS. In the era of LLMs, I guess we could even try to feed the project history and metadata to a model and ask for interesting insights, though I haven't tried this. It might need some dedicated model engineering if we want to do this without a huge context window, but my gut tells me this should be worth trying.
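As a starting point for the hot-spot idea above, here's a small sketch that mines change frequency straight out of `git log`; the repository path and time window are placeholder assumptions:

```python
import subprocess
from collections import Counter

def change_counts(repo_path=".", since="2.years"):
    """Count how many commits touched each file, using plain `git log` output."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    files = [line for line in out.splitlines() if line.strip()]
    return Counter(files)

if __name__ == "__main__":
    # The most frequently changed files are good candidates for tests,
    # refactoring, or build-time attention.
    for path, n in change_counts().most_common(20):
        print(f"{n:5d}  {path}")
```

Grouping the same `--name-only` output per commit (blank lines separate commits) also gets you the files-that-change-together signal from the second bullet.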
I watched Tim Hunkin explain sewing machines when I was about 8 and have never lost my fascination with them (or mechanical engineering) since then.
https://youtu.be/8lwI4TSKM3Y
Shout out to Obsidian, my most-used desktop and mobile app of the year. Absolute game changer. Hacker News showed me this and the book “How to Take Smart Notes” and it’s been an immense aid for difficult technical work, and plenty of other things as well.
I felt the same way and recently did all 92 questions of CodeSignal's Python Arcade section (https://app.codesignal.com/arcade/python-arcade). This made me completely fluent in modern Python syntax, and the questions themselves are very easy, so they can be done quickly. You are really only re-learning Python's syntax, as opposed to doing hard algorithmic problems or anything like that (although I do recommend those too, as you'll be able to touch on Python's standard library more).
I guess the one thing you don't get to do with that approach is build something interesting or use `async-await`, but it gets you fluent with the syntax again, which is an important first step.
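For what it's worth, `async-await` itself is quick to pick up once the rest of the syntax is back. A toy sketch, with the "work" simulated by sleeps rather than real I/O:

```python
import asyncio

async def fetch(name, delay):
    # Stand-in for real I/O (an HTTP call, a DB query, ...).
    await asyncio.sleep(delay)
    return f"{name} done after {delay}s"

async def main():
    # The three "requests" run concurrently, so this takes ~2s, not 4.5s.
    results = await asyncio.gather(fetch("a", 1), fetch("b", 2), fetch("c", 1.5))
    for r in results:
        print(r)

asyncio.run(main())
```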
I have a similar display, and also use blue noise dithering. Mine is driven in the backend by a web browser, which means I was able to abuse CSS and mix-blend-mode to do the dithering for me.
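For anyone curious what blue noise dithering does in general (independent of the CSS trick), here's a rough NumPy sketch of the underlying idea: threshold each pixel against a tiled noise texture. A random mask stands in for a real precomputed blue noise texture here, which loses the pleasant spectral properties that make blue noise look good:

```python
import numpy as np

def dither(gray, noise):
    """Threshold an 8-bit grayscale image against a tiled noise texture.

    gray  : 2D uint8 array (the image).
    noise : 2D uint8 array (ideally a precomputed blue noise texture).
    Returns a binary image suitable for a 1-bit display.
    """
    h, w = gray.shape
    nh, nw = noise.shape
    # Tile the noise texture across the image and compare per pixel.
    tiled = np.tile(noise, (h // nh + 1, w // nw + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8) * 255

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = np.linspace(0, 255, 256, dtype=np.uint8)[None, :].repeat(64, axis=0)  # gradient
    white_noise = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for blue noise
    out = dither(image, white_noise)
    print(out.shape, out.dtype)
```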
A few comments for those who want to go down this route:
1) you probably should buy a machine, rather than designing and building one, if your goal is to use the CNC. It will be cheaper and more of your time will be spent CNCing instead of rediscovering the painful lessons learned over the past decades.
2) he says woodworking equipment is only good for wood, but it also works for aluminum
3) ignore all the talk about grbl on arduino due or mega, use FluidNC on ESP32.
4) a dewalt 611 will work better than any spindle that costs less than it, and also many spindles that cost more, if you're working in just wood. This is a controversial opinion, if a spindle makes you happy, go for it.
I've been using https://structurizr.com/ to automatically generate C4 diagrams from a model (rather than drawing them by hand). It works well with the approach for written documentation as proposed in https://arc42.org/. It's very easy to embed a C4 diagram into a markdown document.
The result is a set of documents and diagrams under version control that can be rendered using the structurizr documentation server (for interactive diagrams and indexed search).
I also use https://d2lang.com/ for declarative diagrams beyond C4 (e.g., sequence diagrams), and https://adr.github.io/ for architectural decision records. These are also well integrated with structurizr.
Also worth mentioning, the US Navy Electricity and Electronics Training Series (NEETS), plus other interesting documentation one can find from the top menu here.
This one is a CPU-based rasterizing renderer; it gives you a good understanding of what a GPU graphics pipeline does underneath.
In the graphics world the two common ways of rendering things are either rasterization or raytracing.
Raytracing is how basically all movie/VFX/CGI/offline renderers work (although it has also been used for certain parts of real-time rendering in recent years).
Rasterization is how most real-time renderers, like the ones used for video games, work.
If you're interested in graphics I'd highly recommend implementing a ray-tracer and a rasterizer from scratch at least once to get a good mental model of how they both work.
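If you want a feel for the raytracing half before committing to a full project, the core loop is small enough to sketch in plain Python: shoot a ray per pixel, intersect it with a sphere, and shade by the surface normal. The scene, camera, and ASCII "framebuffer" below are all invented for illustration:

```python
import math

WIDTH, HEIGHT = 40, 20           # tiny "framebuffer", printed as ASCII art
SPHERE_C, SPHERE_R = (0.0, 0.0, 3.0), 1.0
LIGHT = (-0.577, 0.577, -0.577)  # normalized direction towards the light

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c             # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Map the pixel to a point on an imaginary image plane at z = 1.
        dx = (x / WIDTH - 0.5) * 2
        dy = (0.5 - y / HEIGHT) * 2
        length = math.sqrt(dx * dx + dy * dy + 1)
        d = (dx / length, dy / length, 1 / length)
        t = ray_sphere((0, 0, 0), d, SPHERE_C, SPHERE_R)
        if t is None:
            row += " "
        else:
            hit = tuple(d[i] * t for i in range(3))  # camera sits at the origin
            n = tuple((hit[i] - SPHERE_C[i]) / SPHERE_R for i in range(3))
            # Lambertian shading: brightness is the dot product of normal and light.
            shade = max(0.0, sum(n[i] * LIGHT[i] for i in range(3)))
            row += ".:-=+*#%@"[min(8, int(shade * 9))]
    print(row)
```

A rasterizer inverts the loop: instead of asking "what does this pixel see?", it walks over triangles and asks "which pixels does this triangle cover?".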
Imagine we go to the beach, and we observe the waves. A wave is born in the sea, then rolls forward until it dies on the beach. Each wave is different, and how sad it is that it's gone once it reaches the beach.
The thing is, a wave is basically some water particles and energy. The water particles don't go away, and as we know from physics, neither does the energy. So how can a wave die when all of its components don't die?
The truth is, a wave doesn't really exist. It's a concept in our head. "Here is some part of water that is higher than the rest, let's call it a wave". And now the wave can be "born" in the sea and "die" at the beach. But in fact nothing was created nor removed. It's just a concept. Everything is in fact interconnected. The disconnected parts that we see are concepts in our head. A tree, the sun, the rain, grass, ... . Nothing stands on its own.
You and me and everyone here, we are just concepts like the wave. What are you composed of? Some DNA from your ancestors, some cultural influence, the plants and animals you eat, the water you drink. After the concept of "you" dies, everything is still here. Everything that you were composed of is still here.
You're not an entity on your own. You are interconnected with everything around you. You are the water you drink and the water is you. Your thoughts are the thoughts of your ancestors, of your fellow humans, etc, and your thoughts are theirs.
So even if the waves die on the beach, the sea is still there. And therefore, the waves are also still there.
Right now it's in a closed beta, and the size of the closed beta is increasing slowly through the end of the month. The goal is for it to be public around the start of August. That kbin magazine has a link to a signup sheet for the closed beta.
(Disclosure: I was an alpha tester for it, and so far it looks AMAZING. The dev is awesome, it's a really friendly community, and the amount of progress in just 2 weeks of development is really impressive.)
Agree with this; also, it feels like you are reading out the same theories I have somewhere in my brain and that's a weird sensation.
A related theory: being good at math, especially mental math, correlates with aphantasia (not being able to see pictures in your head). People who can see images in their head learn to do arithmetic early on with visual algorithms, which are fundamentally not good for understanding and are rather error-prone (because the brain remembers gestalts, not finicky details like where a decimal point is).
Aphantasiacs are forced to learn to do math differently, and use some different part of the brain as 'scratch space'. In my case it's the language brain: calculations which are set aside live in the same part of your brain that can repeat what was said to you a moment ago without understanding it. Turns out, though, that this verbal part is quite _accurate_ at remembering things, and this makes it easier to juggle multi-step calculations without paper.
I've always felt like I have awful short term memory. I use tiling WMs so I can see information side-by-side. If I need to type something exactly, I forget the exact details almost immediately.
I need to derive things to understand them, e.g., music theory. I'm jealous of people who can memorize and take things at face value, but if I'm looking at a chord, I need to know the components that create that chord, which gets frustrating because music has a lot of rules that seem to be based on vibe and closeness. Two things can be identical but distinct based on their context.
It used to take me 2-3x as long to do homework or labs compared to other classmates. Same with work assignments. It often triggers an imposter-syndrome type of feeling.
Yet I have proof that I'm capable of solving complex problems. I understand certain things almost immediately compared to others; other things I need to study for a long time.
I tend to rarely know an answer on the spot, but I know how to determine many things, by knowing how to find the information needed.
I don't pretend to be a genius, but I have proof via a degree, others' opinions of me, and material results that basically say I'm intelligent to a point.
Once I get into a flow I can retain a fairly complex system in my head but before or after that state, it's a terrible blur where I can barely focus my eyes.
Speed is underrated. I've worked on a lot of side projects and for a long time I couldn't get them done. I spent too long "perfecting" baseline things like folder structure (really) and overall system design. This made things slow-going and I tended to abandon them.
Over time, I started just hacking things together and shipping them, worrying less about perfecting those initial things. (I used YAGNI a lot in my decision-making.) What I learned is that there were so many more things I had to do and had to learn to do to ship. I could only get to those tasks and learn those skills by "skipping" through the earlier tasks. Working quickly helped.
I started thinking of projects as this vertical stack of work that you move up from bottom to top. If you could look holistically at absolutely everything you needed to do to ship a project, you could mark some as having a larger impact on the success of the project than others. Those are things that require more time and energy.
When you move slowly, you have a very small scope of the overall project, just stuff at the bottom, and predictions about the future. You may not really know what's ahead. If you go slowly and try to do everything perfectly down there, you spend a lot of energy on a small subset of tasks which may actually have smaller impact than something in the middle or towards the end.
Speed allows you to move through those early tasks and towards a more holistic view of the entire system, so that you can determine which tasks are high impact and which are not. You might need to double back on an earlier task if you misjudged it as low-impact and ended up spending less time than you should have on it, but at least you're not pouring energy into low-impact tasks on average.
It's not quite the same thing, but building a prototype is a good example of learning the end to end of a system without worrying too much about quality. It gives you an initial idea of what's possible and you use that to get a better picture of what's high and low impact in a project.
I really wish I had more information about some concrete technical problems I've solved in the past. The problem I've often run into is that writing them down is tedious. I used to record myself on video using the native camera app on iOS, but found that the information just gets lost in those videos. That led me to create an app that allows me to record myself on video, then transcribes what I say and synthesizes it using ChatGPT. I've called it Vournal.
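For anyone who wants to roll their own version of that pipeline, here's a rough sketch of its shape using the OpenAI Python SDK; the model choices, prompt, and file name are my guesses for illustration, not how Vournal actually works:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def transcribe_and_summarize(video_path):
    """Transcribe a recorded note, then condense it into a written log entry."""
    with open(video_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Turn this spoken engineering note into a short, searchable log entry."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return summary.choices[0].message.content

# Example (hypothetical file name):
# print(transcribe_and_summarize("debugging-session.mp4"))
```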