Not the OP you're replying to, but a parent nonetheless.
From basically the very moment you realize you're having a kid, you essentially become the #2 person in your own life. It's more so for the mother, but even the father likely needs to do some maturing in preparation for being a good dad.
If you're hoping to be a good parent you need to start weighing every decision on the basis of where it leads. There are simple things like affordability: deciding to make smarter financial decisions because you're going to be supporting another human for the next couple of decades at least. There are more complex things like suddenly seeing your own health as a liability and needing to straighten yourself out if you want to set a good example. Honestly, things get way easier if your life has structure. Good habits around food prep, cleaning, and chores make things way easier, and bad habits make life more difficult than it needs to be.
You also need to adapt quickly. You realize that you don't define your schedule any longer. Whether you sleep at night is up to the kid. As they grow from baby to toddler you start to see how fundamental things like routine have impacted your child, and you can begin to make connections between how you did things 6 months ago and how things are going now. Self-reflection is huge: knowing that your emotions, your reactions, your behavior are a template this person learns from humbles you.
I think that's the new value system that leads to having children being such a burden. There are sooo many expectations of parents that they spend all their time fulfilling those without questioning any of them.
I always knew my dad was a good dad, but only once I had a kid did I really understand how much he had quietly and willingly given up because he loved me. More and more each day I understand him better and continue gaining an appreciation for all that he has done for me. All of this thanks to my daughter making me / giving me the chance to see life from the other side.
So that sounds like: in the past there were well-defined templates for how to live your life, and people followed them; but now you have to make up your own, continuously figuring things out for yourself, and that is harder to do and requires more maturity.
I really enjoyed the write-up. As a father to a 2-year-old as well, I relate so hard to so much of it.
This line sticks out to me:
you see life as a collection of experiences to be sampled
...because I feel like that was my hardest adjustment to becoming a parent while the vast majority of my friend circles are either single or DINKs. You very quickly become a sideline to many people. You get invitations to join for a pint after work, to hit up the trails each weekend, to travel, to do frivolous things on a whim, but you just...can't anymore. I think I struggled with that transition for a long time. Partially worried I would lose my friends and become isolated, partially the FOMO of no longer being able to partake in all the things that were most enjoyable to me.
But thinking about it at this point? I've lost a few friends (not in a breakup sort of way, they just slowly faded out), but the ones who stuck around were always those I was closest to. I do miss out on a lot more of the hobbies I once loved, but at the same time I've traded those for experiences that are new, different, and fulfilling in their own ways. I've truly been a believer that the best reward always comes after a period of suffering or struggle. In the same way that a cold glass of water feels like heaven after a strenuous workout, that immense feeling of reward when your kid takes their first steps, says their first words, their first sentence just crushes all other emotions and swells you with joy and love. Is 2 years of hard work worth that immense wash of emotion? Arguably yes. And I hope to keep getting hit with those experiences.
I've never been shy to say it isn't something for everyone. You sacrifice a lot to have kids. Financially, physically, emotionally, mentally, it all takes a toll, but the rewards are also so much deeper than anything else.
tbf it works both ways. My wife and I are child-free by choice. We haven't lost friends, but they definitely became generally unavailable after the kids arrived. They also made new friends, as kids are an incredibly common thing to relate to. They'll "come back" in 15 years or so, but still.
I could never get used to ergonomic keyboards. I actually spend a considerable amount of my day typing single-handed across the entire QWERTY layout so I don't have to take my hand off the mouse. I developed the habit working on far too many apps with bad or missing keyboard shortcuts or broken tab ordering.
For me, I wanted a smaller keyboard, but one that still included the numpad, and I ended up with this custom build:
I get that. I've practiced with my ergo board for about a year and still don't use it exclusively because of one-handing stuff (though learning a new layout at the same time certainly didn't help).
There's a layout based on multi-key chords called Taipo that has this really interesting "reversible" property: each half is a mirrored copy of the other, so you can type any letter (or e.g. Tab or Return) with either hand. I think that's super cool, but I don't think chording is for me, so I'm trying to imagine what a compromise might look like.
Would be interesting to know how they were testing authentication. Were they using a botnet of some sort? Otherwise, for every "valid" user/pass combo from an external leak they tested, there'd be several failures. A single host (or several) smashing auth attempts should raise flags. They didn't brute-force one user account at a time, but they did brute-force the authentication system in general.
The current info that's been released seems to indicate that they used a botnet over the course of several months and had access to the "last known login location". So there wasn't any "smashing" happening and no "you're signing in from a different location" blocks either.
It isn't just CS2. Apex Legends has players affected by this. Respawn Entertainment has addressed it and said it is reverting bans. I guess part of the issue is that an AMD update turned it on by default, so players who didn't even know it was included were affected.
Yeah... I randomly punched in $40,000 for Gaming and got
(1) Parts that total $9,980, though it claims they total $33,950
(2) Recommendations for parts that are both years out of date and vastly inferior to other options (CPU = AMD Ryzen Threadripper 3990X, GPU = RTX 3090)
So I'm guessing the AI isn't trained on enough data and is going off some poor metrics, like sorting by price, and perhaps recommending multiples without indicating so (was it asking me to double up on RAM or storage?).
I do a lot of support work for Control Systems. It isn't unheard of to find a chunk of PLC code that treats some sort of physical equipment in a unique way that unintentionally creates problems. I like to parrot a line I heard elsewhere: "Every time Software is used to fix a [Electrical/Mechanical] problem, a Gremlin is born".
But often enough, when I find the root cause of a bug or some sort of programmed limitation, the client wants it removed. I always refuse until I can find out why that code exists. Nobody puts code in there for no reason, so I need to know why we have a timer or an override in the first place. Often the answer is that the problem it was solving no longer exists, and that's excellent; but for all the times where that code was put there to prevent something from happening and the client has had a bunch of staff turnover, the original purpose is lost. Without documentation telling me why it was done that way, I'm very cautious about immediately undoing someone else's work.
I suppose the other aspect is trusting my coworkers. They don't (typically) do something for no good reason. If it is in there, it is there for a purpose, and I must trust my coworkers to have done their due diligence in the first place. If that trust breaks down, then every decision becomes more difficult to make.
This is why I comment a "why" for any line of code that's not incredibly obvious. And 100% of the time when it's due to interaction with something outside the codebase (an OS, filesystem, database, HTTP endpoint, hardware, whatever), if it's not some straightforward call to an API or library.
Sleep due to rate limiting from another service? COMMENT. Who's requiring it; the limits, if I know exactly what they are at the time (or a note that I don't, and that this is just an educated guess that seems to work); what the system behavior might look like if they're exceeded (if I know). Using a database for something trivial the filesystem could plausibly do, but in fact cannot for very good reasons (say, your only ergonomic way to access the FS the way you need to, in that environment, results in resource exhaustion via runaway syscalls under load)? Comment. Workaround for a bug in some widely-used library that Ubuntu inexplicably refuses to fix in their LTS release? Comment. That kind of thing.
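To make that concrete, a minimal sketch of the rate-limit case (the vendor behavior, the numbers, and the `fetch_lookup` helper are all invented for illustration):

```python
import time

def fetch_lookup(record_id: str) -> dict:
    ...  # stand-in for the real HTTP call

def lookup(record_id: str) -> dict:
    # WHY: the vendor's lookup endpoint rate-limits us. Their docs don't
    # state the limit; ~6 requests/minute is an educated guess that has
    # worked so far. Exceeding it gets us HTTP 429s, and under load some
    # responses appear to truncate silently. Talk to the vendor before
    # removing or shortening this sleep.
    time.sleep(10)
    return fetch_lookup(record_id)
```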
I have written so very many "yes, I know this sucks, but here's why..." comments.
I also do it when I write code that I know won't do well at larger scale, but can't be bothered to make it more scalable just then, and it doesn't need to be under current expectations (which, 99% of the time, ends up being fine indefinitely). But that may be more about protecting my ego. :-) "Yes, I know this is reading the whole file into memory, but since this is just a batch-job program with an infrequent and predictable invocation and this file is expected to be smallish... whatever. If you're running out of memory, maybe start debugging here. If you're turning this into something invoked on-demand, maybe rewrite this." At least they know I knew it'd break, LOL.
You're doing the lord's work. I often get pushback on doing this, with some variation of "comments bad, code should be self-documenting". This is unwise, because there are "what the code does" comments and "why the code does it" comments, but that turns out to be too nuanced to battle the meme.
“Why” isn’t a property of the code itself, though; it’s rather a property of the process of creating the code.
Because of this, my personal belief is that the justification for any line of code belongs in the SCM commit message that introduced the code. `git blame` should ultimately take you to a clear, stand-alone explanation of “why” — one that, as a bonus, can be much longer than would be tolerated inline in the code itself.
Of course, I’m probably pretty unusual, in that I frequently write paragraphs of commit message for single-line changes if they’re sufficiently non-obvious; and I also believe that the best way to get to know a codebase isn’t to look at it in its current gestalt form, but rather to read its commit history forward from the beginning. (And that if other programmers also did that, maybe we’d all write better commit messages, for our collective future selves.)
I think there's room for the comments having a brief explanation and a longer one in the commit message. Sometimes people need the callout to go looking at the history because otherwise they might not realize that there's anything significant there at all.
The "brief explanation" is what the commit message's subject line is for :) Just turn on "show `git blame` beside code" in your IDE, and you'll get all the brief explanations you could ever want!
...but no, to be serious for a moment: this isn't really a workable idea as-is, but it could be. It needs discoverability — mostly in not just throwing noise at you, so that you'll actually pay attention to the signal. If there was some way for your text editor or IDE to not show you all the `git blame` subject lines, but just the ones that were somehow "marked as non-obvious" at commit time, then we could really have something here.
Personally, I think commit messages aren't structured enough. Github had the great idea of enabling PR code-reviewers to select line ranges and annotate them to point out problems. But there's no equivalent mechanism (in Github, or in git, or in any IDE I know of) for annotating the code "at commit time" to explain what you did out-of-band of the code itself, in a way that ends up in the commit message.
In my imagination, text-editors and IDEs would work together with SCMs to establish a standard for "embeddable commit-message code annotations." Rather than the commit being one long block of text, `git commit -p` (or equivalent IDE porcelain) would take you through your staged hunks like `git reset -p` does; but for each hunk, it would ask you to populate a few form fields. You'd give the hunk a (log-scaled) rating of non-obviousness, an arbitrary set of /[A-Z]+/ tags (think "TODO" or "XXX"), an eye-catching one-liner start to the explanation, and then as much additional free-text explanation as you like. All the per-hunk annotations would then get baked into a markdown-like microformat that embeds into the commit message, that text-editors/IDEs could recognize and pull back out of the commit message for display.
And then, your text-editor or IDE would:
1. embed each hunk's annotation-block above the code it references (as long as the code still exists to be referenced — think of it as vertical, hunk-wise "show `git blame` beside code");
2. calculate a visibility score for each annotation-block based on a project-config-file-based, but user-overridable arbitrary formula involving the non-obviousness value, the tags, the file path, and the lexical identifier path from the syntax highlighter (the same lexical-identifier-path modern `git diff` gives as a context line for each diff-hunk);
3a. if the visibility score is > 1, then show the full annotation-block for the hunk by default;
3b. else, if the visibility score is > 0, then show the annotation-block folded to just its first line;
3c. else, hide the annotation-block (but you can still reveal the matching annotation with some hotkey when the annotated lines are focused.)
Of course, because this is just sourced from (let's-pretend-it's-immutable) git history, these annotation-block lines would be "virtual" — i.e. they'd be read-only, and wouldn't have line-numbers in the editor. If the text-editor wants to be fancy, they could even be rendered in a little sticky-note callout box, and could default to rendering in a proportional-width font with soft wrapping. Think of some hybrid of regular doc-comments, and the editing annotations in Office/Google Docs.
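To make the scoring idea above concrete, here's a toy sketch; every field name, the formula, and the thresholds are invented here, not an existing git or editor feature:

```python
from dataclasses import dataclass

@dataclass
class HunkAnnotation:
    non_obviousness: int   # log-scaled rating chosen at commit time
    tags: list[str]        # arbitrary /[A-Z]+/ tags, e.g. ["XXX", "PERF"]
    summary: str           # the eye-catching one-liner
    body: str = ""         # free-text explanation

def visibility(ann: HunkAnnotation, tag_weight: dict[str, float],
               path_weight: float) -> float:
    # Stand-in for the project-configured, user-overridable formula; a real
    # one would also weigh the file path and the lexical identifier path.
    return ann.non_obviousness * path_weight + sum(
        tag_weight.get(t, 0.0) for t in ann.tags)

def render(ann: HunkAnnotation, score: float) -> str:
    if score > 1:
        return f"{ann.summary}\n{ann.body}"  # 3a: full annotation block
    elif score > 0:
        return ann.summary                   # 3b: folded to first line
    else:
        return ""                            # 3c: hidden; hotkey reveals it

ann = HunkAnnotation(2, ["XXX"], "Sleep works around a vendor rate limit",
                     "Limit is an educated guess; exceeding it gives 429s.")
print(render(ann, visibility(ann, {"XXX": 0.5}, path_weight=0.4)))
```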
---
...though, that's still not going as far as I'd really like. My real wish (that I don't expect to ever really happen) is for us to all be writing code as a series of literate codebase migrations — where your editor shows you the migration you're editing on the left, and the gestalt codebase that's generated as a result of running all migrations up to that one on the right, with the changes from the migration highlighted. You never directly edit the gestalt codebase; you only edit the migrations. And the migrations are what get committed to source-control — meaning that any code comments are there to be the literate documentation for the changes themselves; while the commit messages exist only to document the meta-level "editing process" that justifies the inclusion of the change.
Why? Because the goal is to structure the codebase for reading. Such a codebase would have one definitive way to learn it: just read the migrations like a book, front to back; watching the corresponding generated code evolve with each migration. If you're only interested in one component, then filter for just the migrations that make changes to it (`git log -S` style) and then read those. And if you, during development, realize you've just made a simple typo, or that you wrote some confusing code and later came up with a simpler way to express the same semantics — i.e. if you come up with code that "you wish you could have written from the start" — then you should go back and modify the earlier migration so that it's introduced there, so that new dev reading the code never has to see you introduce the bad version and then correct it, but instead just sees the good version from the start.
In other words: don't think of it as "meticulously grooming a commit history"; instead, think of it as your actual job not being "developer", but rather, as you (and all your coworkers) being the writers and editors of a programming textbook about the process of building program X... which happens to compile to program X.
I don't understand the focus on commit messages. I never read the git log. You can't assume anyone reading the code has access to the commit history or the time to read it. The codebase itself should contain any important documentation.
Well, no, because 1. it's not useful, because 2. most people never write anything useful there (which is a two-part vicious cycle), and 3. editors don't usefully surface it.
If we fix #3; and then individual projects fix #2 for themselves with contribution policies that enforce writing good commit messages; then #1 will no longer be true.
> You can't assume anyone reading the code has access to the commit history or the time to read it.
You can if you're the project's core maintainer/architect/whoever decides how people contribute to your software (in a private IT company, the CTO, I suppose.) You get to decide how to onboard people onto your project. And you get to decide what balance there will be between the amount of time new devs waste learning an impenetrable codebase, vs. the amount of time existing devs "waste" making the codebase more lucid by explaining their changes.
> The codebase itself should contain any important documentation.
My entire point is that commit messages are part of "the codebase" — that "the codebase" is the SCM repo, not some particular denuded snapshot of an SCM checkout. And that both humans and software should take more advantage of — even rely upon! — that fact.
I've been in enough projects that changed version control systems, or had to restart version control from a snapshot for whatever reason (data loss, performance issues with tools due to the commit history, etc.), that I wouldn't want to take this approach.
> amount of time new devs waste learning an impenetrable codebase, vs. the amount of time existing devs "waste" making the codebase more lucid by explaining their changes.
That's a false dichotomy. The codebase won't be impenetrable if there are appropriate comments in it. In my experience, time is better spent making the codebase more lucid in the source code than in an external commit history. The commit messages should be good too, but I only rely on them when something is impossible to understand without digging through and finding the associated ticket/motivation, which is a bad state to be in, so at that point a comment is added. Of course good commit messages are fine too; none of this precludes them.
Agreed on most points, but a good SCM also provides the ability to bisect bugs and to show context that is hard to capture in explicit comments. E.g. what changed at the same time as some other piece of code, or what was changed by the same person a few days before and after some line got introduced.
Regarding your first point:
> I've been in enough projects that changed version control systems
I have the impression that with the introduction of git, it suddenly became en vogue to have tools to migrate history from one SCM to another. Therefore, I wouldn't settle for restarting from a snapshot anymore.
With git you can cut off history that is too old and weighs down the tools. You can simplify older history, for example, while keeping newer history as it is. That is of course not easy, but it can be done with git and some scripting.
> I've been in enough projects that changed version control systems, had to restart the version control from a snapshot for whatever reason (data loss, performance issues with tools due to the commit history, etc) that I wouldn't want to take this approach.
When was that? I've never seen that in 15-20 years of software development; I've seen plenty of projects change VCS but they always had the option of preserving the history.
Sure it's there, but having to wade through a large history of experiments tried and failed when trying to answer why this thing is here right now just feels subpar. Sometimes it's definitely helpful to read the commits which introduced a behavior, but that feels like a fallback from reading the code as it exists now. It works, but it's much slower.
> Sure it's there, but having to wade through a large history of experiments tried and failed when trying to answer why this thing is here right now just feels subpar
That is one of the downsides of trunk-based development. If you keep a history of all the failed experiments, the usefulness of the commit history deteriorates, both for reading commit messages and for bisecting bugs.
> In other words: don't think of it as "meticulously grooming a commit history"; instead, think of it as your actual job not being "developer", but rather, as you (and all your coworkers) being the writers and editors of a programming textbook about the process of building program X... which happens to compile to program X.
If you have to "wade through experiments" to read the commit history, that means that the commit history hasn't had a structural editing pass applied to it.
Again: think of your job as writing and editing a textbook on the process of writing your program. As such, the commit history is an entirely mutable object — and, in fact, the product.
Your job as editor of the commit-history is, like the job of an editor of a book, to rearrange the work (through rebasing) into a "narrative" that presents each new feature or aspect of the codebase as a single, well-documented, cohesive commit or sequence of commits.
(If you've ever read a programming book that presents itself as a Socratic dialogue — e.g. Uncle Bob's The Clean Coder — each feature should get its own chapter, and each commit its own discussion and reflected code change.)
Experiments? If they don't contribute to the "narrative" of the evolution of the codebase — helping you to understand what comes later — then get rid of them. If they do contribute, then keep them: you'll want to have read about them.
Features gradually introduced over hundreds of commits? Move the commits together so that the feature happens all at once; squash commits that can't be understood without one-another into single commits; break commits that can be understood as separate "steps" into separate commits.
After factoring hunks that should have been independent out into their own commits, squashing commits with their revert-commits, etc., your commit history, concatenated into a file, should literally be a readable literate-programming metaprogram that you read as a textbook, that when executed, generates your codebase. While also still serving as a commit history!
(Keeping in mind that you still have all the other things living immutably in your SCM — dead experiments in feature branches; a develop branch that immutably reflects the order things were merged in; etc. It's only the main branch that is groomed in this fashion. But this groomed main branch is also the base for new development branches. Which works because nobody is `git merge`ing to main. Like LKML, the output-artifact of a development branch should be a hand-groomed patchset.)
And, like I said, this is all strictly inferior to an approach that actually involves literate programming of a metaprogram of codebase migrations — because, by using git commit-history in this way, you're gaining a narrative view of your codebase, but you're losing the ability to use git commits to track the "process of editing the history of the process of developing the program." Whereas, if you are actually committing the narrative as the content of the commits, then the "process of editing the history" is tracked in the regular git commits of the repo — which themselves require no grooming for presentation.
But "literate programming of a metaprogram that generates the final codebase" can only work if you have editor support for live-generating+viewing the final codebase side-by-side with your edits to the metaprogram. Otherwise it's an impenetrably-thick layer of indirection — the same reason Aspect-Oriented Programming never took off as a paradigm. Whereas "grooming your commit history into a textbook" doesn't require any tooling that doesn't already exist, and can be done today, by any project willing to adopt contribution policies to make it tenable.
---
Or, to put this all another way:
Imagine there is an existing codebase in an SCM, and you're a technical writer trying to tell the story of the development of that codebase in textbook form.
Being technically-minded, you'd create a new git repo for the source code of your textbook — and then begin wading through the messy, un-groomed commit history of the original codebase, to refactor that "narrative" into one that can be clearly presented in textbook form. Your work on each chapter would become commits into your book's repo. When you find a new part of the story you want to tell, across several chapters, you'd make a feature branch in your book's repo to experiment with modifying the chapters to weave in mentions of this side-story. Etc.
Presuming you finish writing this textbook, and publish it, anyone being onboarded to the codebase itself would then be well-advised to first read your textbook, rather than trying to first read the codebase itself. (They wouldn't need to ever see the git history of your textbook, though; that's inside-baseball to them, only relevant to you and any co-editors.)
Now imagine that "writing the textbook that should be read to understand the code in place of reading the code itself" is part of the job of developing the program; that the same SCM repo is used to store both the codebase and this textbook; and that, in fact, the same textual content has to be developed by the same people under the constraints of solving both problems in a DRY fashion. How would you do it?
Imho, you are missing out on a great source of insight. When I want to understand some piece of code, I usually start navigating the git log from git blame.
Even just one-line commit messages that refer to a ticket can help understanding tremendously.
Even the output of git blame itself is helpful. You can see which lines changed together and in which order. You see which colleague to ask questions of.
One could accomplish part of your vision today by splitting commits into smaller commits. Then you have the relation between hunks of changes and commit-message text at a finer granularity. You can then use branches, with a non-trivial commit message on the merge commit, to document the whole set of commits.
As far as I know, changes to the Linux kernel are usually submitted as a series of patches, i.e. a sequence of commits; effectively a branch, although it is usually not represented as a git branch during submission.
WHY: Using _______ sort because, at the time of writing, the anticipated sets are ... and given this programming language and environment ... this tends to be more performant (or: this code was easiest and quickest to write and understand).

This way, when someone later comes along and says WTF?!, they know why, or at least know some of the original developer's reasoning for choosing that implementation.
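A filled-in illustration of that template (the scenario is invented; the Timsort properties it cites are real):

```python
from collections import namedtuple

Row = namedtuple("Row", ["timestamp", "payload"])
rows = [Row(3, "c"), Row(1, "a"), Row(2, "b"), Row(2, "b2")]

# WHY: Using the built-in sort (Timsort) rather than a hand-rolled
# quicksort: our batches arrive nearly sorted, which Timsort handles in
# close to O(n), and its stability preserves the arrival order of equal
# timestamps, which the downstream report depends on.
# (Written 2024-01; revisit if the input changes shape.)
rows.sort(key=lambda row: row.timestamp)
print(rows)
```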
Completely agree. I find this attitude toward (against?) comments is common in fields where code churn is the norm and time spent at a company rarely exceeds a year or two. When you have to work on something that lasts more than a few seasons, or are providing an API, comments are gold.
Concurrence: the ability of the code to be self-documenting ends at the borders of the code itself. Factors outside of the code that impose requirements on the code must be documented more conventionally. The "sleep 10 seconds to (hopefully) ensure that a file has finished downloading" anecdote from upthread is a great example.
Self-documenting code is perfectly capable of expressing the "why" in addition to the "what". It's just that often the extra effort and/or complexity required to express the "why" through code is not worth it when a simple comment would suffice.
> Self-documenting code is perfectly capable of expressing the "why" in addition to the "what".
I don't think anymore that's true, at least in a number of areas.
In another life, I've worked on concurrent data structures in Java and/or lock-free Java code that I'd at this point call too arcane to ever write again. The code ended up looking deceptively simple, and it was the correct, minimal set of code to write. I don't see any way to express the correctness reasoning for those parts in code.
And now I'm dealing with configuration management: code managing applications and systems. Some of these applications just resist any obvious or intuitive approach, and some others exhibit destructive or catastrophic behavior if approached intuitively. Again, how do you do this in code? The working code is the smallest, most robust set of code I can set up to work around the madness. But why this is the least horrible way to approach it, I cannot express in code. I can express it in comments.
This is a statement which is technically true (so long as your language of choice puts no length limit on names), but unhelpful, since it does not apply in most practical cases.
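For instance, with an invented scenario, the "why" technically can live in a name; it's just rarely worth it:

```python
import time

# The "why" encoded entirely in code, as the technically-true reading allows:
def wait_for_modem_to_finish_dropping_the_line_before_redial() -> None:
    time.sleep(2)

# The same "why" as a comment, without the unwieldy name:
time.sleep(2)  # modem needs ~2s to drop the line before we re-dial
```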
I like to make sure the "why" is documented, but it's hard to get people to care about that.
I remember a former client tracking me down to ask about a bug that they had struggled to fix for months. There was a comment that I'd left 10 years earlier saying that while the logic was confusing, there was a good reason it was like that. Another developer had come along and commented out the line of code, leaving a comment saying that it was confusing!
Hah, I did one of these just last week. There's some sort of silicon bug or incorrect documentation that causes this lithium battery charger to read the charge current at half of what it should be. This could cause the battery to literally explode, so I left a big comment with lots of warnings explaining why I'm sending "incorrect" config values to the charge controller.
It's absolutely imperative that the next guy knows what the fuck I'm doing by tampering with safety limits.
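A sketch of what that comment can look like; the register address, scale factor, and helper here are invented, but the shape is the point:

```python
# WARNING: DO NOT "CORRECT" THIS VALUE.
# The controller's current sense reads back HALF the true charge current
# (silicon bug or wrong datasheet; unresolved with the vendor). We program
# half the desired limit so the part regulates at a true 500 mA. Writing
# the "correct" 500 mA would let the cell charge at ~1 A, over its rating:
# a fire/explosion risk.
DESIRED_CHARGE_LIMIT_MA = 500
REG_CHARGE_LIMIT = 0x04  # invented register address

def configure_charger(write_register) -> None:
    write_register(REG_CHARGE_LIMIT, DESIRED_CHARGE_LIMIT_MA // 2)
```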
I like to add the date the "why" comment was added and the date the comment was last reviewed/verified as still true/necessary (which will rarely differ, because they are seldom reviewed or re-verified).
COMMENT WRITTEN: 2023-03-21
COMMENT LAST REVIEWED/VERIFIED AS STILL TRUE: 2023-05-04
I see your "wait for a file to finish downloading", and raise you a "wait before responding because, for one of our clients, if the latency is below a certain threshold, it will assume that the response was a failure and will discard it". That was a fun codebase.
Back in the 90s, we were trying to debug a crash on a customer’s site (using the then miraculous pcAnywhere). We couldn’t figure it out, and in desperation sent a developer that lived in the same country to observe ‘live’. As soon as he watched us reproduce it, he said “the modem was still disconnecting” - he could hear it, and we couldn’t. The solution, of course, was a sleep statement.
The last part is also useful. Tells the next person where to look when they do run into scaling issues, and also tells them that there wasn't some other reason to do it.
> But often enough when I find a root cause of a bug, or some sort of programmed limitation, the client wants removed. I always refuse until I can find out why that code exists. Nobody puts code in there for no reason, so I need to know why we have a timer, or an override in the first place.
Isn't that just the regular Chesterton's Fence argument though?
The argument the article is specifically written to point out is not enough by itself, because you also need to know what else has been built on the assumption that that code is there?
All my comment is doing is adding a software anecdote to the story. It really is just regular Chesterton's Fence, a term I'd never heard until now but have dealt with for the last several years.
You're not wrong, but in the context of a PLC controlling a motor or gate it is far more segregated than the code you're probably thinking of. Having a timer override on a single gate's position limit sensor would have no effect on a separate sensor/gate/motor.
If the gate's function block had specific code built into it that affected all gates then what you're talking about would be more applicable.
I think they’re thinking of things like the software control for that health machine that could get into a state where it would give a lethal dose to a patient.
Context for those who haven't worked in the field: A PLC is a programmable logic controller. They are typically programmed with ladder logic which grew out of discrete relay based control systems.
Generally they're controlling industrial equipment of some sort, and making changes without a thorough understanding of what's happening now and how your change will affect the equipment and process is frowned upon.
I interned briefly at a company which mainly built industrial control systems. One of ladder logic's most interesting features (and a very mind-bending one if you're coming from any typical programming ecosystem) is that every "rung" is evaluated in parallel, as a physical relay-based control system would have been back in the day.
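A toy model of those semantics in ordinary code (illustrative Python, not how PLCs are actually programmed): the scan freezes an input image, every rung is evaluated against that same snapshot, and the outputs are committed together at the end.

```python
def scan_cycle(inputs: dict, rungs) -> dict:
    snapshot = dict(inputs)   # input image frozen at the top of the scan
    outputs = {}
    for rung in rungs:
        outputs.update(rung(snapshot))  # every rung sees the same snapshot
    return outputs            # output image written out all at once

# Rung order doesn't matter here, because both read the frozen snapshot:
rungs = [
    lambda s: {"motor": s["start"] and not s["estop"]},
    lambda s: {"alarm": s["estop"]},
]
print(scan_cycle({"start": True, "estop": False}, rungs))
# {'motor': True, 'alarm': False}
```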
I remember reading a story like this from the early days of Acorn. The first production sample of the BBC Micro came in, and would crash unexpectedly. Trial and error found that connecting a jumper wire between 2 particular points on the board stopped it crashing, but nobody could work out why it crashed or how that fixed it. They never worked it out and ended up shipping mass quantities of the BBC Micro with the magic jumper wire in place on each one.
> I do a lot of support work for Control Systems. It isn't unheard of to find a chunk of PLC code that treats some sort of physical equipment in a unique way that unintentionally creates problems. I like to parrot a line I heard elsewhere: "Every time Software is used to fix a [Electrical/Mechanical] problem, a Gremlin is born"
At least some of this is cultural. EEs and MEs have historically viewed software less seriously than electrical and mechanical systems. As a result, engineering cultures dominated by EEs/MEs tend to produce shit code. Inexcusably incompetent software engineering remains common among ostensibly Professional Engineers.
You're not wrong. It shows in the state of PLC/HMI development tools. Even simple things like revision control are decades behind in some cases.
I've basically found my niche in the industry as a Software Engineer, though I can't say I see myself staying in the industry much longer. The number of times I've gotten my hands on code published by my EE coworkers only to rewrite it to work 10x faster at half the size with fewer bugs? Yikes. HMI/PLC work is almost like working in C++ at times: there are so many potential pitfalls for people who don't really understand the entire system. But the mentality of EE/ME types in the industry is to treat the software as a second-class citizen.
Even the clients treat their IT/OT systems that way. A production mill has super strict service intervals with very defined procedures to make sure there is NO downtime to production. But get the very same management team to build a redundant SCADA server? Or even have them schedule regular reboots? God no.
I have no clue, but I wonder how unique this attitude turns out to be among MEs/EEs versus, say, everyone dealing with electronics. Because the stories and complaints about mechanical and electrical engineers treating code (and programmers) as second-class, remind me very much of how front-end people complain about how they’re perceived by back-end people. And about how backend people complain about how they’re perceived by systems programming people. And so on up/down the line(s) of abstraction.
> It shows in the state of PLC/HMI Development tools.
I mean, you are talking about upgrading things that are going to be in service for decades, perhaps. The requirements for the programs are generally not complicated: turn on a pump for a time, weigh something, then alarm if some sensor doesn't see something.
Structured text was an improvement over ladder logic: you could fit more of the program into the screen real estate you had, and you could edit it more easily since it was just text. Though that had its own set of issues that needed to be worked through, and it wasn't a panacea.
I think one of the reasons I've seen this happen is that EE and ME programs in university typically teach very little CS ("enough to be dangerous"), and the few coding projects you are required to do are often taught in a way that downplays the importance of the software. Software is often seen as simply a translation or manifestation of a classical mathematical model or control system (or even directly generated by Matlab/Simulink).
Software, being less familiar, is not viewed as a fundamental architectural component, because there often isn't sufficient understanding of the structure or nuance involved in building it. In my experience, software or firmware engineers tend to be isolated from the people who designed the physical systems, and a lot of meaning is lost between the two teams, because the software side does not understand the limitations and principles of the hardware, and the hardware team does not understand the capabilities and limitations of the software.
The worst part is there's no particular reason for this -- infusing proper software development best practices into existing EE/ME coursework isn't that hard.
It's an instance of the larger pattern in which technical degree programs lag industry requirements by decades, as older faculty ossify at the state of the art circa 2-3 years prior to when they received tenure.
IMO one way to help would be to get rid of the entire notion of a "Professor".
Instead, courses should be taught primarily by a combination of professional instructors on permanent contracts and teacher-practitioners supported by those instructors. The professional instructors should take occasional sabbaticals to embed in firms and ensure they're up to date on the industry.
The research side of the university could even more easily replace Professors and tenure with first-line lab managers on 3-5 year contracts, whose job is simply to apply for grants and run labs, and who can teach if they want but are held to the same standards as any other applicant for an adjunct teaching position in any particular term.
I definitely think there are two sides to this. The school I went to had a lot of professors for whom the "ossified 2 years prior to tenure" thing was true, but I also found them effective at teaching the fundamental concepts that don't change.
I think one barrier to better engineering programs in universities is that there typically is an onerous set of "accreditation requirements" which prevents significant modification of the curriculum to adapt to modern needs.
The other barrier is that students culturally appear to not always want to do more coding than needed. Courses involving coding were widely regarded as the most difficult by the people around me, despite something like up to 80% of an EE class going into SW engineering after graduating.
I think that, in general, degree programs are designed for something they're often not used for anymore. The usual line is that they're designed to provide a track into academia, and aren't vocational training. But nowadays degrees seem very ritualistic and ornamental: people mostly seem to do their learning on the job, whatever they do, and the degree itself is just a shibboleth of some sort.
> I think one barrier to better engineering programs in universities is that there typically is an onerous set of "accreditation requirements" which prevents significant modification of the curriculum to adapt to modern needs.
This seems to be rapidly dissolving, at least in California. Several schools, including Stanford, Caltech, and several of the UCs, have dropped ABET accreditation for most of their programs in recent years, with more likely to follow as they come up for renewal.
Why should EEs also be software engineers? These are two distinct specializations.
No sane person would expect a programmer to just design a lithium battery charge circuit that goes in your user's pocket; that'd be reckless and dangerous. I likewise would never expect a programmer to break out an oscilloscope to debug a clock glitch, or characterize a power supply, or debug protocol communication one bit at a time. I wouldn't ask a programmer to do FCC pre-validation, or even to know what that means.
Why then do you want to rely on an EE to produce critical software?
As an EE, I know my limits and how to separate concerns. I keep my bad C++ restricted to firmware and I simply do not write code further up the stack. We have programmers for that. Where the app needs to talk to the device, I'll write an example implementation, document it, and pass it off to the programmer in charge. It's their job to write good code, my job is to keep the product from catching fire.
If you want good code, hire a programmer. If you want pretty firmware, hire a programmer to work with your EEs. If you expect an EE to write code, you get EE code because their specialization is in electronics.
Unless you really want an EE who is also a software engineer, but then you're paying someone very highly skilled to do two jobs at once.
Electronics and software are two different jobs requiring two differently specialized people. It just looks like they overlap.
I think it is the result of how these things tend to be taught. At least at my school, all the EEs and all the computer engineering students had the same first couple of programming classes.
Lots of EEs need to do some programming, and lots of people getting EE degrees end up in programming careers, so it would be a disservice not to teach them any programming at all. In particular, an engineer should be taught enough programming to put together a Matlab or NumPy program, right?
Meanwhile, some of their classmates will go on to program microcontrollers as their job.
Writing programs as a product, and writing programs to help design a product, are two basically different types of jobs that happen to use similar tools, so it isn't that surprising that people mix them up.
This is a discussion that is much larger than what's available in a comment section like this. But I agree with you wholeheartedly.
I think part of the thing is Software Engineers haven't been a thing for as long in the industry. I'm the only Software Engineer I've met doing controls. My supervisor has a CS degree and an Electrical Technician diploma, but never another SE.
Second, I think that up until recently the work done by Control Systems has been within what an EE or ME is capable of, so having a SE hasn't been necessary. I've been with my company for 10 years now, and in that time I've watched the evolution of what my clients are seeking in terms of requirements for their systems.
I primarily work in Agriculture or Food Production. 10 years ago my projects were assembling plants and getting their motors to start, with the required protections then some rudimentary automation to align paths and ensure flow.
Today? I'm building traceability systems to report on exactly which truckload was the origin of contamination for a railcar shipped months later. Or integrating production data with ERP systems. Adding MES capabilities to track downtimes and optimize production usage. Generating tools to do root cause analysis on events... It's a different world, and the skills of a Software Engineer weren't really an important part of the role until quite recently.
No, why? The point was that EEs and MEs, or rather a traditionally engineering-heavy culture, produce bad software (never mind that the first software devs tended to be EEs), so Juicero is a good example of the inverse: a software-leaning culture producing a shitty hardware product.
Uh, the Juicero was spectacular hardware. The mechanical engineering in that beast was absolutely beautiful.
I don't recall what the software was like, but none of that is why it failed; it was simply a moronic business idea. An overpriced subscription for low-quality fruit in a DRM-laden pouch. Nobody wanted it then or now.
The Juicero was terrible hardware in the sense that they could have made a product that was functionally similar but cost far less to make. It seems like they got a hardware engineering team straight out of college and gave them no constraints or budget. You have a giant CNC'd aluminum frame, a custom gearbox, a custom power supply, a custom drive motor, etc. All of this is only necessary because they decided to squeeze the juice out of the bags by applying thousands of pounds of force to the entire surface of the bag at once vs. using a roller or something. They were likely losing hundreds of dollars per unit even when they were selling the press for $600.
Squeezing the entire bag was the selling point. It's not negotiable. The important question is how much money they could have saved without changing that aspect.
I'll never do PLC work again. Forget undocumented code; most of the time there are no schematics for the hardware you're working on, because it was custom-built thirty years ago.
My company is generally good about that. We have lots of overlapping documentation that answers questions like that in different ways, from electrical schemas to QA docs, picture archives of panels and wiring, ticketing systems, spreadsheets of I/O, etc.
I hate PLC work for other reasons. I'm starting to look at going back to a more traditional software role. I'm a bit tired of the road work, and I find the compensation for the amount asked of you drastically underwhelming. This meme is very much relevant:
> Nobody puts code in there for no reason, so I need to know why we have a timer, or an override in the first place.
I would like to think that if I sent out an email about git hygiene, you would support me against the people who don't understand why I get grumpy at them for commits that are fifty times as long as the commit message and mix four concerns, two of which aren't mentioned at all.
Git history is useless until you need it, and then it’s priceless.
I can’t always tell what I meant by a block of code I wrote two years ago, let alone what you meant by one you wrote five years ago.
> commits that are fifty times as long as the commit message
One of my proudest commits had a 1:30 commit:message length ratio. The change may have only been ~3 lines, but boy was there a lot of knowledge represented there!
I work with control systems and have a similar mantra: "You can't overcome physics with software." It's super common to have someone ask if a mechanical/human/electrical/process issue can be fixed with software because people believe that programming time is free. Sometimes it's not even that it's impossible to do in software, but adding unnecessary complexity almost always backfires and you'll wind up fixing it the right way anyway in the end.
Chesterton's Fence is a principle that says change should not be made until the reasoning behind the current state of affairs is understood. It says the rash move, upon coming across a fence, would be to tear it down without understanding why it was put up.
This sounds more like the original Chesterton’s fence than what the article is describing. The article is about understanding something’s actual current purpose, rather than just the intended purpose.
What the article is describing reminds me of the XKCD comic workflow: https://xkcd.com/1172/
A system exists external to its creator's original purpose, and can take on purposes that were never intended but naturally evolve. It isn't enough to say "well, that is not in the spec", because that doesn't change reality.
The unspoken thing here is that PLC code often (usually?) isn't exactly written as text, or in a format readable by anything other than the PLC programming software.
After a year long foray into the world of PLC, I felt like I was programming in the dark ages.
I'm assuming it's a bit better at very big plants/operations, but still.
> "Every time Software is used to fix a [Electrical/Mechanical] problem, a Gremlin is born"
I'm definitely going to use this, and I think there's a more general statement: "Every time software is used to fix a problem in a lower layer (which may also be software), a gremlin is born."
For personal development, it was merely shipping things. The more I published, the better I felt about myself. The more I published, the more I had learned and had to refer back to. Now when I'm taking on tasks I can instantly recall how each of the pieces of the problem can be stitched together from things I did previously (or at least know where to look for foundations to build from).
For the confidence? It was working with others. At the first job I had, I got to sit down with one of the company's programmers as part of my onboarding and watch him work through tickets. After I saw just how flawed everyone was, I felt a lot better about myself. I suppose that's a weird thing to say: oh, he was pretty shit, so I shouldn't feel bad about my poor performance... but that's not really the point I want to make. More that it is wrong to compare your efforts to learn and grow against the final product of others. Once you sit down with experienced devs and see how they shape and form the product, and all the bumps along the way, it doesn't feel so bad to struggle on your own.
Ultimately, the skills I honed that gave me the best boost in confidence were not really the direct programming parts where I put letters and numbers in files. It's the debugging. Understanding how things move and where to look for problems makes me feel like I can solve any problem with the right tools.
Perhaps you're using a broad/generic example to try and make the point but I'll say this:
If a seasoned mechanic is unable to figure out how to reset the Maintenance Reminder or look up how to sync Tire Pressure sensors, run away.
In the same way that one can use knowledge of one programming language as a means to leapfrog into other languages, other skilled trades are similar. Perhaps there's something that could be said about an ICE mechanic trying to dabble in electric, but that's not the point you're making. So yeah, I know you're trying to make a point about lock-in, but if someone I wanted to hire for a task said "Oh, sorry, you have a Volkswagen and I only know how to work on GMC", I wouldn't take my GMC to them either. It shows a fundamental lack of skill, in that they don't understand the broader concepts and their universal applications. If I, a programmer, can figure out my Volkswagen, my GMC, my Mazda, my Nissan, certainly a mechanic can. If my appliance repair specialist can only do Whirlpool when I ask for help on a Bosch, that's a red flag.
One might specialize, sure. But to refuse? Weird. But I fear I might be getting lost in the weeds here, because it's all about the approach. "Sorry, too busy to take on work on things that aren't my specialty": yep, understood. "Sorry, I don't know <model>, I only know <other model>": bad.
I know mechanics in particular can be quite chauvinistic.
In the US, for a very long time, you had to find an "import specialist" mechanic, even long past the point where Japanese brands had gone mainstream. Part of this might have been because of the availability of metric tools at the time; my family had a set of metric wrenches specifically because they had to do occasional light maintenance on their early Datsuns and Toyotas.
I can recall that the mechanic in my neighbourhood was decidedly unwilling to service a new Hyundai in the late '90s. He complained they were 'disposable'.
Specialized items require specialized tools. Specialized tools, like all other tools, require maintenance and they change.
A shop dealing with domestic produced automobiles can significantly reduce profit-bleed by not servicing vehicles that require special tools, special diagnostics, special machines, etc.
It's simply a math equation: do I service enough of these vehicles daily/quarterly/yearly to make those expenditures profitable for me? The shops you're referring to answered no to that question.
It could just be a parts issue. A lot of mechanics will work on pretty much any car, if they have the parts, but if it's not a popular model then they don't want it sitting in their shop for a week while they order parts.
I guess some mechanics will prefer to work with a smaller number of models, because they're much faster if they're familiar with the model, but new models come out every year, and they need to learn how to fix those. If a mechanic can learn to fix the newest VW, they can learn to fix the newest Renault, it just might not be worth their time if they have enough work to do.
I'm not from Poland, but I hope this is hyperbole. Parts are delivered once or twice a day in most cities in Europe, no matter if it is a Renault, a VW, or a Kawasaki motorcycle. Of course a part can have a longer delivery time, but not because it is a Renault instead of an Audi. At least not at any reputable parts business in modern parts of Europe (which I would think Poland is a part of, even though I haven't been there since the '90s).
That's often certain German cars in the US, i.e. some shops will just not work on modern Minis. Plenty of shops will avoid weird, niche cars.
Any mechanic can fix a Citroen, but is it worth the floor time it'd take to get the parts and figure out the French quirks, versus working on something they know, where they'd make the same money in a third of the time?
Having done shade tree work on various cars, I'd totally turn down any Subaru engine bay work if I was already close to swamped.
Once upon a time I was the proud owner of a third-gen RX-7; proud, that is, until the powertrain warranty ended and I had to go outside the dealership network for minor repairs. Basically your options were... go back to the dealership and pay unsubsidized warranty rates (double or triple what an independent mechanic charged for "normal" engine work), or go to questionable-looking characters running "performance" shops who wanted to side-port everything they could get on a lift. And still pay double or triple the normal mechanic rate.
I think you are missing at least one important point. The reason the mechanic can’t work on the other brand of car is not knowledge or skill, but equipment. It costs money to buy the full suite of equipment required to correctly service a particular manufacturer’s vehicles. It often makes much more sense to specialize and make more use of fewer expensive tools than to have tools for everything and have only marginally more business.
Well, that's a very American way of doing things that most places simply don't do. The norm is for tools to be compatible with all brands; at best there's an added option to unlock, or a pay-per-use fee. You can do 99% of the diagnostics with a Wish Bluetooth dongle and a free Android app if you wish, since by law it is an open standard.
Most specialty equipment costs less than a mechanic can earn in a day. You even order the parts from the same company no matter if the bumper is for a Mazda, VW, or an Alfa. Or a Kawasaki motorcycle for that matter. This lock-in behavior is, luckily, mostly illegal.
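For what it's worth, the open-standard part is easy to demo: generic OBD-II PIDs can be read over a cheap ELM327-style dongle with, e.g., the python-OBD library (a sketch, assuming such a dongle is paired):

```python
import obd  # pip install obd; assumes an ELM327-style dongle is connected

connection = obd.OBD()  # auto-detects the adapter's serial port
rpm = connection.query(obd.commands.RPM)
coolant = connection.query(obd.commands.COOLANT_TEMP)
print(rpm.value, coolant.value)
```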
Diagnostics, yes, but try to reset the warning light on a German car after fixing the fault. This can apply even when replacing seemingly purely mechanical parts (I think some suspension parts on BMW/Mercedes). You will need VAG-COM for any VW/Audi, and something else proprietary for Mercedes or BMW.