The Stupid Programmer Manifesto (hasen.substack.com)
206 points by hsn915 on June 15, 2023 | 226 comments


Honestly, the truly 'smart' programmer isn't someone who does or doesn't use a bunch of techniques or best practices, it's the one who can look at the situation and do what's right for that particular project/job.

Most of the things in this post could be the right answer if the project is a weekend side project that's going to get a few hundred views a month, or a website for a small business. Bob's Restaurant doesn't need a Google scale website built with microservices, Docker and SPAs hosted on AWS and S3.

But something like Google or Amazon or Netflix would. Use the right tool for the right job.

Plus, even a 'stupid' or '0.05x' programmer can be a rockstar at a small company or organisation where a lot of complexity isn't required.


Yeah, I read this post as a dig at "cargo cult" programmers who read up on google-scale programming and best practices and unthinkingly apply those constructs to their own Bob's-Restaurant-scale task at hand, where they are often at best an unneeded time and complexity overhead.


I've worked in a number of projects where they decided to move to microservices for reasons they wish they had, while not fully understanding microservices or applying basic litmus tests to where to split off services.

So we ended up with great puzzles like how to link an order service with an inventory service to check if there's enough inventory to fulfill an order, all through a central event bus. Then of course the order service has to update the inventory if the order is confirmed. But of course, the order service also has to talk to the pricing service, and since the price of products varies by day, they'd have to remember the price at the time the order was made. Whose responsibility is that? No idea, that's someone else's service.


There is a youtube video for that https://www.youtube.com/watch?v=y8OnoxKotPQ


We had a dev at my last job pushing to microservice everything (small company, only a few developers). I could tell that it was going to cause more pain than it was going to solve. Plus, every service that you add makes setting up a development machine even more difficult.


Not to mention testing all of this. It's an absolute nightmare.


Replacing function calls with network calls is only going to make things more difficult.


Unfortunately it's not a popular belief anymore, but an [asynchronous] central event bus is a terrible idea on its own, and it goes against what [micro]services are about.

If you want microservices to work you shouldn't have anything central in them, and you should avoid async as much as possible.


That's not at all what I learned from working with microservices. The worst thing you can do is make sync calls between services, then you end up with high aggregate failure rates, and/or large request latency from retries as well as low uptime due to dependency couplings.

One-way async dataflows work much better. That way each microservice stays up to date with all the (potentially delayed) data it needs to respond to requests.


If you have services with lots of dependencies on other services, then you've probably ended up with the worst of both worlds - a distributed monolith.

Typically one-way or two-way is not a choice you can make as an engineer; most processes in your business require two-way, because they are initiated by a user and the user wants a definitive response.

The "potentially delayed" approach is very tempting for engineers, but it should be the exception rather than the rule.

It

- dilutes responsibilities between services (who is responsible for delays? is service A producing too many messages or is service B too slow to process them?)

- makes your SLA vague (message was processed 3 days later, do we treat it as downtime or not?)

- requires more infrastructure & processes (every service has a queue, dead-letter queue, and a process to deal with dead letters)

- requires a ton of monitoring overhead (what delay is acceptable? how do we even measure delay? what if you have different SLAs for different messages? will we have a monitor per message type?)

- introduces a lot of unnecessary complexity and rules (how do you deal with TOCTOU, e.g. admin deactivates a user, but by the time the message gets processed he's no longer an admin)

- ruins user experience (we received your payment information, but we won't immediately tell you that it's wrong).

Despite its downsides, the potentially delayed approach can be a fine tradeoff when it saves you 7-8 digits per year. Most companies never reach this phase.


A lot of this depends on implementation and infrastructure, which is of course an additional detail. In an example I'm recalling, it was a communications app that had services for content, users, groups, sending, and receiving. Sending a message would save content with an id, include user/group recipient ids, and write to the send service with them. Each service, if it accepts a request, completes it unless the service is actually down. The user/group service seems like it could be a sync service, but actually the client caches a list of users/contacts or can search for them.

By abstracting content, the only things that needed to change for new types of content were the content service and clients. Abstracting recipients, which can be users/groups, meant that the only service that needed to care about this detail was the one that replicates sent messages to inboxes in the receiving service. Because of the use of content ids and user/group ids, this is all small idempotent/immutable metadata. The system was complex (yet became manageable over time) and onboarding onto each service was immediate.

I think few people have seen well-bounded microservice contexts, which leads to the idea that it's all a bad distributed monolith. It's also worth remembering that the advantages of microservices 'done right' are scaling to large numbers of developers and isolating failures.


> I've worked in a number of projects where they decided to move to microservices for reasons they wish they had

I've advised engineers to rewrite code before, mostly for resume building to help jumping ship to somewhere better. Basically, use the current job as a way to train on the stack the company you really want to work for is using.

If a company doesn't have stock-based comp and sticks to prevailing wages/CoL there's no incentive to help the company succeed. Especially with non-technical management, the best thing to do for ICs is often to aggressively rewrite (in the language that's trendy right now and that better companies use) and get promoted by showing off a significant output. Then jump to a better company having spent a year training on their stack at the previous company's expense.


It's unprofessional to do this in conflict with your company's interests. Use your own hobby projects to learn instead of leaving a maintenance project for your coworkers.

It also creates an invisible bias towards worse companies in your next position, because smart companies are wary about this kind of behavior, and will be savvy to filter out candidates that have displayed it.


> It's unprofessional to do this in conflict with your company's interests.

Great work happens at the intersection of employee's and company's interests!

Why not simply align the two? Compare the Xerox Alto with the Macintosh...


this might come across as incredibly cynical to some, but it illustrates an important point - languages/frameworks/whole marketed stacks like ELK are merely tools for you to accomplish the task, which in the end is either to reduce costs or increase revenue

so naturally it doesn't matter (in the long term) what you should learn to get a first job, change jobs, change domains - prioritize the ROI the tool would give you in the future, the job market, and your sanity :)


This is a good point. The other side of the coin is that by not moving to a modern day stack, the company slowly finds it harder and harder to attract staff to maintain it. So, keeping up with trends is not just good for the ICs.


Well, they do what's right according to how they learned, which may only be a local 'right' and not a global 'right'. There are so many ways to put things together that you can make it work in a lot of different ways. It becomes more art than engineering or science. Then whether your peers accept that it was the right thing is up to their preferences too.

All that comes down to the large variety of training everyone in the field gets, from university to youtube tutorials.

It's noisy out there


> It becomes more art than engineering or science.

The opposite.

> All that comes down to the large variety of training everyone in the field gets, from university to youtube tutorials.

A big problem is that a lot of the youtube/bootcamp ecosystem focuses on "teaching what they do at the FAANG" to lure people into thinking they could get a job there by enrolling in/buying whatever training they are selling. And then these people repeat what they have been told are "the best practices".

A proper engineering degree teaches about gathering requirements and analyzing the problem space to understand what really needs to be done.


> A proper engineering degree teaches about gathering requirements and analyzing the problem space to understand what really needs to be done.

This. I don't even think you have to bring a degree into it, honestly. This is just at the heart of all engineering (software or otherwise).


>And then these people repeat what they have been told are "the best practices".

And then they get jobs which serve to validate their learning paths (maybe not at FAANG), which gives them confidence about what they learned being the right way to do things.

>A proper engineering degree teaches about gathering requirements and analyzing the problem space to understand what really needs to be done.

Well sure, but a lot of non-engineers do that too. Where's the real distinction at?


> Well sure, but a lot of non-engineers do that too. Where's the real distinction at?

A lot of non-lawyers can discuss law principles and explain laws pretty well; but someone who has passed the bar, I know for a fact, is familiar with certain concepts.

Same for engineers.


That's a great point. Good engineering, from a distance, can look awfully like bad engineering from the point of view of a Youtuber or Blogger focused on trends.


There's also the fact that those complex solutions aren't engineered by a solo developer but by teams, typically even multiple teams each working on a compartmentalized piece of the whole thing.


That too. It's a lot easier to work on a complex project if you've got a large team working on it. Like the teams that the 10s of thousands of devs at your average FAANG company can staff.

Well, that and how most complex projects didn't start that way. A lot of the time, these things were built in a much simpler way, and dealing with increasing demand over the last decade or two caused them to balloon in complexity.


I'd venture to say that, as implemented in the author's post, this could scale to many, many thousands of QPS by just bumping VPS resources. Funnily enough, a lot of these 'simple' techniques actually speed things up, like writing to a local disk or shipping a static binary.

There's no high availability, so if it went down they'd be screwed and have no failover, but it makes you wonder whether it's cheaper to just lose traffic for a few minutes while they're upscaling VPS resources or to pay for all the HA scaling stuff running around the clock.


That last point is key. I’m a fan of SaaS solutions but I also wonder if more organizations could unlock more value by just having a 1x programmer (with well defined _objectives_) on staff.


> I also wonder if more organizations could unlock more value by just having a 1x programmer

They probably already have such programmers. Actual 10x programmers are exceptionally rare. Having an entire team of them is extremely unlikely.


I think that only holds in situations where no problems occur. Often, once a problem arises, we (at least I) begin to reflect on why we didn't follow best practices initially.


OP makes the same mistake as those chaps who use Kubernetes to run an app with 10 lines of code: thinking that there is one right solution for every kind of problem.

OP's approach might work for a personal project or a small company, but when you need to work in a large and complex system you start to appreciate the beauty of tools like Docker.

It's cool to question cargo cult, but don't throw the baby out with the bathwater.


I work at a very large company with very complicated systems. Fighting unnecessary complexity at every opportunity is the only way to keep things near comprehensible. A lot of the article's rules are even more important in complex environments.


I've experienced this at work as well, even at a more modestly sized company.

You should always be chasing the simplest solution for your problem because no matter how smart you are today and how much you understand the problem and the abstractions, a year or two from now you’ll be a different person.

You might have new team members, or fewer, new management, marketing goals, etc. Complexity is inherently difficult to change. By chasing simplicity, you are optimizing for potential change.


Me too, and I wouldn't apply most of the OP's rules to my day job. They just aren't a good fit. Sometimes using the right tool (e.g. Docker) actually decreases the overall complexity of the system, even if it raises it locally.


How would you host the Bob’s Restaurant website?


I've never used them, but Neocities seems well worth considering for simple sites.

https://neocities.org/


Facebook seems to be the ultimate in simplicity for small restaurants. Has a number of advantages, like built in chat, picture hosting and a feed of posts.


You'd hit up GoDaddy for hosting and FTP your webpage files to it. And it'd work just as well as any other website.


Yeah, this would be the way. This hypothetical restaurant would probably only need the following pages:

- Home page

- About Us

- Menu

- [Maybe 1 or 2 info pages]

- A contact page (with or without a form)

That's something that can be easily done with static HTML and CSS, and hosted on just about any random shared hosting service you can imagine. Assuming it doesn't become a major viral hit, it'd probably use a couple of megabytes of space on the hard drive and under a hundred megs of bandwidth.

Maybe up that number a tiny bit if Bob decides he wants to run a blog on the domain and you decide to just do the whole thing in WordPress or Drupal or whatever else.

Either way, a site that small could be built like it's 1980 and it'd be perfectly fine.


I've made several "Bob's Restaurant" websites for clients. Early on I thought just that. "This can be statically implemented using the core web technologies HTML and CSS" - It wowed the client visually at first and they signed off on the product agreeing that the simple site was enough for them. Then the client wanted their Facebook, Google Maps, and Instagram integrated throughout the site over time. Now the site has JavaScript in its stack and external code running on it via APIs.

Early on the client wanted to make changes to the menu, and to have a feed of events from their Google calendar. I implemented a Google Calendar feed script. Enter the world of dates in JS and learning how to use OAuth to hook into their feed. The client was getting confused as to why all this work had to be done for such simple little features and why there was extra cost associated, and at the same time requested that they be able to CRUD menu items. I told them we need a CMS for that. They didn't like hearing there was more time and cost associated. By this time the statically implemented, elegantly designed Home | Menu | About site served the purpose of a visual mockup rather than a website that would help the business retain customers and grow.

It could be said that this story is one of not planning and getting the spec right with the client, which in part is true. The part that was wrong was the assumption that the site could be a simple CSS/HTML static site when in fact, even for a small business, it lacked the features and flexibility needed. Initially, the client and I decided that I would make the changes and updates to the static food and drink menu pages. This turned out to be very inefficient for both parties. Small businesses often don't want to pay hourly rates to have minor updates done on their website, which was the case because I don't work for free.

In the end the HTML/CSS site had a CMS, a database, and multiple external services. I learned from this moving forward and came up with the right stack for the scale of business I work with. However, in the late 2010s I noticed more small businesses opting into products like Squarespace and Wix, and selling certain small businesses (knick-knack shops, local bars and restaurants) on custom site builds was getting harder.

Back to the original post. I get that the point of the blog was to make a statement against unneeded complexity. Sure; however, it's important to remember that the work of producing digital products is inherently technical and complex. Developers should strive to be smart programmers and spend time picking the right tools for the job.


Godaddy's free website builder would do that fine, too.


Bob's restaurant doesn't need to rent a VPS either one would hope.


I can't tell if this is an honest call to keep things simple, or if it's meant to ridicule that idea. Because I strongly, deeply agree with some of these points, and am absolutely horrified by some of the others.


Yeah, I really can't tell if it's satire. Especially things like this:

> I’m not smart enough to figure out how to transform data between different layers of the system, so I just don’t. I use the same representation in the UI layer and the storage layer.

OK, well then you might need to gather a little more work experience and meet the repercussions of decisions like this one.

Or maybe the OP is a contractor and never has to deal with the repercussions of their actions.


> contractor and never has to deal with the repercussions of their actions.

This doesn't match my experience of contracting.

Not caring about code quality and then leaving others holding the bag of sh*t is frequently what "ladder climbers" do, and those are, by definition, on payroll rather than contractors. For a contractor, there is no ladder to climb, but willingness to get one's hands dirty is definitely part of the job description.

Contractors are quite frequently the people who are brought in at extra expense to maintain mission-critical infrastructure when the employee who made it is no longer at the firm, or has moved on to different responsibilities. After having gone through several contracts, a typical contractor will often have a lot more experience with different coding practices and their respective outcomes and a deeper appreciation of good engineering.


> For a contractor, there is no ladder to climb,

The trick is to refuse to adopt the job-title "contractor" - simply refer to oneself as a "consultant" - it goes a long way.

Fun-fact: there's no rule against contractors/consultants earning equity in their clients' projects either: apparently I can artificially raise my hourly-rate by 20% during the post-interview negotiations, then sweet-talk the client into letting me have token equity in exchange for a 16% pay-cut. Simply do this for every small gig you get and eventually it works out as a pretty-nice personal-ladder to early-retirement.

...not that I want to retire, really - the idea terrifies me because I simply can't do anything except work :/


Yes, my experience being a contractor largely backs this up.

Also, remember another possible reason why a company might hire a contractor: to have someone to blame for a project they know is failing.

One of my favorite things about contract work is that it's time-limited. This means that I can be 100% honest about bad dev processes, bad code, bad management, etc., because I won't have to worry about angry managers making my life miserable for any longer than the remainder of my contract period.

I can't count the number of times I've spoken up about something terrible, only to have the permanent devs privately tell me later "I'm so happy you said that. I've been wishing I could bring that up for years"


That's a particularly weird one in light of starting out with "I only use statically typed languages", because it's basically trivial in a statically typed language. You have "type UIData", "type StorageData", and functions/methods to convert back and forth between them. Those functions/methods may need additional parameters, which will be documented by the mere act of calling for them in the function signature. You take the right type in the right place and the language does the rest, basically. Statically-typed languages are so good at this that even refactoring it after the fact is trivial; potentially tedious and lengthy, but generally trivial. You just create the new type you need, put it in one place in the code where it belongs, then start compiling the code and fixing the errors it flags. Repeat until you've converted all uses of the old type to the new type or performed the correct conversion. It'll probably just work the next time you run it. You don't need anything even remotely as strong as Haskell to have an "It Just Works when I compile it" moment.

In dynamically-typed languages this is a nightmare, and this is one of the significant reasons I've stopped using them. Finding a bug six months to six years later from some attempt at this and finding a place that used the old type incorrectly when it was getting the new type, or because the programmer thought they were getting one but in fact got the other, or my favorite, the code is actually called with both types and nobody noticed until way too late... it's just something I've had enough of.
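For instance, a minimal Go sketch of the UIData/StorageData idea above (the fields are made up for illustration, not from the article):

    package user

    import "time"

    // StorageData is a hypothetical shape as persisted in the database.
    type StorageData struct {
        ID        int64
        Name      string
        CreatedAt time.Time
    }

    // UIData is a hypothetical shape as the UI wants to render it.
    type UIData struct {
        ID      int64
        Name    string
        Created string // pre-formatted for display
    }

    // ToUI converts the storage representation into the UI one. The compiler
    // flags every call site that still passes the wrong type.
    func ToUI(s StorageData) UIData {
        return UIData{
            ID:      s.ID,
            Name:    s.Name,
            Created: s.CreatedAt.Format("2006-01-02"),
        }
    }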


But, _what_ is the point of using a different layout/structure for storage vs the UI layer?

Remember I'm a 0.5x developer. If I have to write code to transform data for every kind of entity/object I need to store and have a UI to view and edit, that's way too much. I'm already very slow. No need to slow me down further by telling me I have to write so much extra code that does no useful work.


> the point of using a different layout/structure for storage vs the UI layer?

By way of example, I'll point to the classic example of a New User form UI: that UI will need 2 password inputs (one for the password, the other for the "confirm password") - but your User object/schema/DB-table won't have two separate string password fields - it'll have a single binary/byte[] salted-password-hash field, and you certainly must never show that in the UI.
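A minimal sketch of those two shapes (hypothetical Go structs; the field names are mine, for illustration only):

    package signup

    // NewUserForm is the hypothetical UI-layer shape: what the form collects.
    type NewUserForm struct {
        Email           string
        Password        string
        ConfirmPassword string
    }

    // UserRow is the hypothetical storage-layer shape: what gets persisted.
    // It never carries the plaintext password, and the UI never sees the hash.
    type UserRow struct {
        Email        string
        PasswordHash []byte // a salted hash, e.g. bcrypt output
    }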

As for your (reasonable!) concerns about having to write-out repetitive types/models that all share similar (though hardly ever identical) data/fields/shape: I agree it's tedious, but that's what we have scaffolding-tools for (granted, I often end-up having to write my own scaffolding and templating tools...), so for a simple CRUD application just design your database and have the scaffolding tools take care of the rest, including stubbing-out a functional UI (and this is why Ruby-on-Rails was huge when it first came out: it took-away all of the drudgery in CRUD web-applications - but then all the other frameworks/platforms improved their scaffolding-stories and RoR is certainly less attractive in comparison now).

-----

Oh, and of course, ChatGPT et al. is also incredibly effective at scaffolding entire solutions from scratch, e.g. from last month: https://bit.kevinslin.com/p/leveraging-gpt-4-to-automate-the


> the other for the "confirm password"

That always pisses me off. I can copy/paste the password from the first field into the second. Hell, I use a password manager; I pasted into the first field, so I paste into the second.

It's just a check to make sure they match. But my passwords are complicated enough that I can't remember them long enough to type the characters in the first time; I literally have to copy/paste.

So this anti-pattern assumes that users are using some memorable password like their mother's maiden name, probably for all the services they use. Anyone using sane password practices is penalized with stupid friction.

[Edit] I have an even bigger gripe about asking me to enter my email address twice. If you really aren't sure I entered it correctly, send me an email asking me to confirm.


> So this anti-pattern assumes that users are using some memorable password like their mother's maiden name, probably for all the services they use. Anyone using sane password practices is penalized with stupid friction.

I think you're really, really underestimating how easy it is to make a typo when entering a password into a masked password box, hence the 2 fields.

Also, web-browsers don't let you copy-and-paste the password from one box into another - so I don't know how you're doing that: https://security.stackexchange.com/questions/149326/why-cant...

-----

I appreciate that you're just-as-annoyed with web-form tropes, cliches and irritations as I am - and when I'm building a system or UI for savvy people then sure, I'll do things like skipping the registration form entirely and just use OIDC federation or PassKeys or whatever the current-security-fad-of-the-month is - but my day-job requires me to write software for ...uh... "normal people" and part of that means having to weigh-up the support-costs of users who want a predictable and easy-to-understand experience that isn't too different from what they already know - and those normal-people are what pays my bills.


> Also, web-browsers don't let you copy-and-paste the password from one box into another

Point taken; perhaps simply having that widget that lets you see the password unmasked would be better than forcing you to enter it blind twice (and getting it wrong twice). Fact is I don't try to copy/paste one field into the other; I paste the same stuff into both.

FWIW I'm a normal person. I don't know what OIDC is, and I've never been asked to use Passkeys (I'm proud to know nothing about Apple devices, and I don't entrust my security to "the cloud" or third parties). My password manager is local, backed-up locally. I'm already annoyed once I'm presented with a registration form, and only complete one when I'm forced to. Every extra annoyance increases my rage.


> I don't know what OIDC is

It was a very, very painful learning experience.


Maybe, instead of having two password fields, they should have a single password field that you cannot type in, but can only paste in, forcing everybody to use a password vault.


> That always pisses me off.

It's annoying, but I understand why that's done, so I don't get too upset about it.

But making me type in my email address twice? Grrrr.


You're citing an exception as if it was the rule.

But even in this case, I don't have two representations of the same object. Rather I have two different objects:

Account (persisted)

SignupRequest (not persisted)

The SignupRequest is used to create the Account

The signup form on the UI is about editing the SignupRequest object. This object will be sent from the UI code to the backend as-is. The backend code will use it as-is (ignoring the json encoding/decoding).

There's a code path in the backend that takes SignupRequest and uses it to create a new Account.
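Roughly, that code path looks like this (a hedged Go sketch; the bcrypt hashing is my assumption of the details, not something stated above):

    package signup

    import (
        "errors"

        "golang.org/x/crypto/bcrypt"
    )

    // SignupRequest is edited in the UI and sent to the backend as-is.
    type SignupRequest struct {
        Email           string
        Password        string
        ConfirmPassword string
    }

    // Account is the object that actually gets persisted.
    type Account struct {
        Email        string
        PasswordHash []byte
    }

    // NewAccount is the backend code path that turns a SignupRequest into an Account.
    func NewAccount(req SignupRequest) (Account, error) {
        if req.Password != req.ConfirmPassword {
            return Account{}, errors.New("passwords do not match")
        }
        hash, err := bcrypt.GenerateFromPassword([]byte(req.Password), bcrypt.DefaultCost)
        if err != nil {
            return Account{}, err
        }
        return Account{Email: req.Email, PasswordHash: hash}, nil
    }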


In this case SignupRequest is your contract representation, Account is your storage representation, and the "backend code path" is the transformer/mapping layer.

>I don't have two representations of the same object. Rather I have two different objects

Exactly! Your api contract and your storage are ALWAYS two different objects, because they serve two different concerns. Sometimes by coincidence they can share the same shape, but there's no reason that they need to be coupled together and impossible to change independently, other than the fear of inconvenient "boilerplate" mapping logic. By doing this up front, and not even letting it enter your data model, you create a formal abstraction boundary; it's reserving the right to change two pieces of data independently. Also, mapping/transformation logic can often just be simple, pure, total functions, which are trivial to test and maintain compared to anything that touches I/O.


> Exactly! Your api contract and your storage are ALWAYS two different objects

Not really. I have request objects for everything. "Search" is a request object. List pagination is a request object.

Every function exposed through the RPC API takes a request object and returns a response object.

The response object is often just a collection of objects straight from "the database".

A response to a paginated list request will contain a list of objects straight from the database, in addition to some metadata about the pagination (namely: current page number, total page count).

The crucial part is there's no "transformation" of data as it goes out from the database into the UI. There's some aggregation and grouping (an outer object that contains multiple objects), but that's about it.

Again though, there's a subtlety: some transformations do occur, but they don't occur on the path from the storage to the UI. Instead, every time I store a complex object, I also derive a "simple" version of the object and store it too.

When you request a list of objects, you get the "simple" version, and the UI displays them in summary format. When the user clicks one of the items on the list to see more details about it, the backend sends the "full" object.

Notice the underlying principle: the UI flow dictates how the storage layer stores objects.

This is the antithesis of the common wisdom, where the storage layer does not care about the UI, and it's the job of the intermediate layer to transform data for the needs of the UI.
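To make that concrete, a hypothetical Go sketch of such a paginated response (ArticleSummary and the request/response names are made up for illustration):

    package articles

    // ArticleSummary is a hypothetical "simple" version of the full object,
    // derived and stored alongside it at write time.
    type ArticleSummary struct {
        ID    int64
        Title string
    }

    // ListArticlesRequest / ListArticlesResponse are hypothetical RPC shapes:
    // stored objects go out as-is, plus the pagination metadata.
    type ListArticlesRequest struct {
        Page int
    }

    type ListArticlesResponse struct {
        Items     []ArticleSummary // straight from storage, no per-field mapping
        Page      int
        PageCount int
    }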


Yes really. The fact that you can "often just return an object straight from the database" and have it fulfill the functional requirements of your client is just a coincidence, or more likely it's an invariant that you have decided to enforce. What people have discovered (usually very painfully) is that unavoidable breaking changes to either your client representation or storage representation are bound to happen, and when they do, if you haven't separated these concerns, this will have a ripple effect through the entire application. This may be fine. If your applications are tiny or downtime is okay, then you likely won't care about this. But to casually dismiss this advice as simply always being overcomplicated and overengineered is a grave oversimplification that you may regret someday. Many of the topics in this post fall into this category - people do it for a good reason, and you might not need it, but everything is a tradeoff and "I'm just going to do the stupid simple thing" is not the silver bullet.


> Also, mapping/transformation logic can often just be simple, pure, total functions

I find this is never the case - usually for the exact reason you inadvertently sold as some kind of "benefit": "it's reserving the right to change two pieces of data independently" - because when you need to map from, say, a `SignupRequest` to a `SavedAccount` object, you'll encounter data-members required by `SavedAccount` which cannot be sourced from the `SignupRequest` object - for example, supposing the UX people come to us and say we need to split-up the registration form into 2 pages, such that most fields are still on page 1, but the password boxes are on page 2. Now you need to deal with how to safely persist data from page 1 for use in page 2 (using hidden HTML inputs between pages won't work because that requires POST requests, but both pages should be able to be initially requested with a GET request).


You want computer storage to be flexible, non-redundant, and small. You have a machine capable of quickly and flawlessly integrating data from anywhere; correctness is your main concern.

You want your human representation to be specialized, highly redundant, and as large as needed so anything important is presented. Your users can't find data or keep it in memory; enabling them is your main concern.

Usually, those two sets of constraints lead to the same format on behind-the-scenes admin panels and nowhere else.


Because most of the development work isn't done in the initial creation phase, but in the maintenance, debugging and extension phase.

And if you're a slow and unskilled developer, you want to minimize the pain in the second phase.

(That goes for all developers, skilled or unskilled of course.)


I think of it this way, actually: I have a different structure for every major use in my system. A fairly general variant is having a structure for input, a structure for my internal operations, and a structure for output, but I use that just as an example. Each of them can be used as I outlined above, with methods to transform between them as needed.

This is necessary because each of those things represent completely different needs, because they operate in different domains. In particular, the guarantees are different; the input must effectively be treated as having no guarantees, and you must check them all. Your internal operation may add additional guarantees it provides, things you don't need to check anymore because the mere act of being passed a value of a particular type means that the code can rely on this particular thing being true, thus saving me a ton of code everywhere. For the output, you don't care at all about any guarantees but you need to conform to what the external world needs.
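A minimal Go sketch of that input-vs-internal split (hypothetical OrderInput/Order types; in real code the internal type's fields would likely be unexported so the check can't be bypassed):

    package order

    import "errors"

    // OrderInput is the hypothetical wire-facing shape: no guarantees,
    // every field must be checked.
    type OrderInput struct {
        Email    string
        Quantity int
    }

    // Order is the hypothetical internal shape: holding one means validation
    // already happened, so downstream code doesn't re-check.
    type Order struct {
        Email    string
        Quantity int
    }

    // Validate is the boundary where an OrderInput becomes an Order.
    func Validate(in OrderInput) (Order, error) {
        if in.Email == "" {
            return Order{}, errors.New("email is required")
        }
        if in.Quantity <= 0 {
            return Order{}, errors.New("quantity must be positive")
        }
        return Order{Email: in.Email, Quantity: in.Quantity}, nil
    }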

In general, trying to cover all these bases with one structure is a bad idea which leads to pain. I've seen it many times, where developers try to overload one structure to do too many things.

In specific... it so happens that 95%+ of the time, covering all the needs with one structure works out fine. But I conceptualize this differently than you. I do not say to myself "It's OK, one structure is all I ever need because anything else would be overcomplication." I say "It so happens here that the structures are so similar that I can conveniently elide them down to one without significant loss. I can do this because I have examined all the needs and guarantees and verified that they do not conflict."

But the difference is, as soon as they do conflict, as the codebase changes over time, I split them, leaning on the compiler to guide me through the process, because I know from experience it is not particularly hard. When you need to do this, it is easier to just do the split of types than to try to make one type straddle the gap. (Besides, once you have one type straddling one gap, the odds are by the time you're done it's going to be straddling more than one gap. This tends to happen precisely to those most central types.)

I bet you have at least one type somewhere that is suffering from trying to straddle too many use cases. But I would also bet you don't actually have many such types. It turns out that the majority of the time the elision is safe. But I think it is a useful perspective to still mentally model that as an elision and having separate types as the underlying model. I think the way you are advocating for thinking of it works fine the 95% of the time our models agree, but in that other 5%, someone following my system is going to be a lot happier than someone following yours.

I do a lot of relatively small programming, projects in the single person-year range. I do a lot of this sort of elision, because bringing the full power of generalized architecture to such projects can cost you a lot more than it gains. But I also do this sort of modelling in my head a lot, too, and when I notice a particular elision is starting to cost me something, I will on-demand unelide some particular architectural elaboration on the spot. In fact I am taking a break from this exact process to type this post, as I need to replace an increasingly complicated hard-coded structure of decorator-based plugins for a fully configuration-based model of arbitrary combinations. For any given such re-elaboration, it is perhaps more expensive to do than if I had started with that architectural feature in the first place, but across the full space of possible architectural features I win big versus starting with an architecture that is too heavy weight in many ways that I will ultimately never need in this particular project.


Transforming state data between different layers of the system if done wrong is one of the biggest nightmares in my opinion you could have.


I think the reality is that neither of those outcomes are desirable and your statement doesn't do anything to support the OPs case. Both are wrong.


The fewer state changes, the better.

Managing state is the number one enemy in software.


I agree wholeheartedly. I don't understand why everyone insists on repackaging the same data over and over.

Store it in one format in the database. Read from the database and transform it. Package that into a TO object and send it over the wire. Receive the TO object and repackage it into your local context. Take the local context and repackage a bunch of parts of it into each view model as necessary.

Everyone just wants to bundle up data and throw it over a wall instead of working together to engineer end to end.


Because you don't want to couple your data model with your public contract. It allows you to change them independently.

When it comes time to change your data model, you can't without bubbling it all the way through to the public API, which may not be desirable.

Repackaging the data at every level means you only have to change the transformation at one of those levels.

For small projects this isn't a big deal.

But if you are working on a large project with multiple teams you have public contracts at multiple levels. You don't want to wade through 10 layers and 10 teams of changes because you change the way you store and compute some attribute.


If you are working on a large project with multiple teams and you change something like that you better version it or make sure it's backwards compatible anyways.

Otherwise you're going to break something downstream and you're still going to wade through all of those layers, and now it's less obvious what broke because everyone is re-bundling your data into their own formats.

And I am not advocating for dumping your DB rows directly onto the wire for the frontend to fumble.

I am just saying it's silly to write a frontend and backend in a super modular, decoupled way when they are actually just a single service.


Not if the public view on the data doesn't change. Just the storage. That's sorta the point. To not have to version if you just change the underlying storage model.

For example let's say for whatever reason we were storing a duration as an int. But instead we decided to migrate it to start time and end time.

Do we need to force that change on everyone downstream and add a new major version API? Or can we just compute the old duration from the new attributes in the transformation.

Even in your example do we really need to change the frontend in this case? For small projects the benefits probably don't outweigh the extra boilerplate, but for large projects they absolutely do.
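For example, a hypothetical Go sketch of that migration: storage moves to start/end times while the transformation keeps exposing the old duration attribute, so the public contract doesn't change:

    package booking

    import "time"

    // storedBooking is the hypothetical new storage shape: start and end times.
    type storedBooking struct {
        Start time.Time
        End   time.Time
    }

    // BookingView is the hypothetical public contract: it still exposes the
    // old duration attribute, so consumers don't need a new API version.
    type BookingView struct {
        DurationSeconds int64 `json:"duration_seconds"`
    }

    // toView is the transformation layer: it derives the old attribute from
    // the new storage model.
    func toView(b storedBooking) BookingView {
        return BookingView{DurationSeconds: int64(b.End.Sub(b.Start).Seconds())}
    }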


I disagree completely.

If the data structure needs to be changed, it needs to be changed at the beginning of the system, typically the front end. And downstream code needs to be fixed to accommodate that.

Otherwise you've created 20 different state machines for each part of the system, stacked on top of each other, each expecting a different data structure and returning a different data structure.

So changing anything after the system is sufficiently complex is an exercise in masochism and development will slow to a crawl to avoid breaking one of the 20 downstream black boxes.

There should only be a single data structure contract that all teams follow.

There needs to be a SINGLE data structure passed through the system originating at the beginning of the system. The data structure can be added to by code along the pipeline but never changed.


Am I understanding that you think that there should be just one data structure which is shared between all teams which is a superset of all fields that it could have and people just add data to those fields? So at any given point you don't even know what data is present or not present depending on which point in the pipeline it has passed or not?

At this point you may as well just use an object or dictionary. The type doesn't give you any idea about the actual shape of the data.


Absolutely correct.

> So at any given point you don't even know what data is present or not present depending on which point in the pipeline it has passed or not?

At least with a non mutating data structure with the public contract you can tell if it's passed through a part of the pipeline because a required field is blank or null.

With a mutating data structure that gets changed, say 20 different times throughout the system you have no idea if it's passed through some part of the pipeline or not.

Even better if somehow everything can interact via a centralized database where all state is stored, even intermediate state. It's not just storage. It's also state management.


And they'll also run into major liability issues when they leak customer credit card information, SSN, or something similar because they have a single class that represents the table in the database and they use it in both the frontend and backend.


how does the "structure" of the data have anything to do with leaking information?

Seems like data protection occurs way lower in the stack like https and encryption at rest.


Because of things like mass assignment or IDOR, or injection attacks, presumably.

Handing data from the user (untrusted input) directly to the backend unchanged is going to in 99.9999999% of cases also mean it's unchecked.

"I'm not smart enough to bother sanitizing my input, or to learn about stuff that's someone else's job like security" would fit right into this "manifesto".


Sanitizing data is string based not data structure based though.


Well first off, that's not correct anyways; type-conformancy is a very valid and important part of data sanitization.

E.g. does the SSN consist of 3 valid integers split on dashes? If not, it ain't a proper SSN. Catching that typeError is much safer than trying to roll regex or character allow/blocklists.
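Something like this hypothetical Go sketch (it only checks the dash-separated-integers shape described above, not real SSN rules):

    package ssn

    import (
        "errors"
        "strconv"
        "strings"
    )

    // SSN is a hypothetical validated type: holding one means the raw string
    // already passed the shape check.
    type SSN struct {
        Area, Group, Serial int
    }

    // Parse rejects anything that isn't three dash-separated integers, so a
    // malformed value fails here instead of flowing deeper into the system.
    func Parse(raw string) (SSN, error) {
        parts := strings.Split(raw, "-")
        if len(parts) != 3 {
            return SSN{}, errors.New("expected three dash-separated groups")
        }
        var nums [3]int
        for i, p := range parts {
            n, err := strconv.Atoi(p)
            if err != nil {
                return SSN{}, errors.New("group is not a valid integer")
            }
            nums[i] = n
        }
        return SSN{Area: nums[0], Group: nums[1], Serial: nums[2]}, nil
    }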

But also, the manifesto never mentions data structures.

It says

> I’m not smart enough to figure out how to transform data between different layers of the system, so I just don’t. I use the same representation in the UI layer and the storage layer.

Transforming data means actually changing the data, not the data structure that houses it. The author is explicitly talking about making changes to the data itself.


You shouldn't have to "transform" the data though.

You should validate the data and sanitize the data... but if you're transforming data you're headed into a state management nightmare.

State management is the number one enemy and other than sanitizing and validating both the data and structure should be considered mostly immutable.


You absolutely may need and want to transform data.

You don't want to store code unmodified in a database; that's how you end up with SSTI or stored XSS. You encode special characters in a simple, reversible way (as one example).

Similarly, you don't store passwords untransformed in a db, you hash them. That's a transform.

There are tons of examples like this.


You're talking about securing the data itself, which has nothing to do with its parent structure.


And in the manifesto the author says

> I’m not smart enough to figure out how to transform data between different layers of the system, so I just don’t. I use the same representation in the UI layer and the storage layer.

That has nothing to do with the data structure; it's about how the data is being represented in the backend vs the frontend. It's explicitly about changing the data itself.

Stuff like url-encoding and escaping characters are examples of transforming data to be represented differently in the backend (where e.g. it's encoded or escaped) from the frontend (where it's displayed in "proper" formatting).
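For example (a small Go sketch using the standard html and net/url packages):

    package main

    import (
        "fmt"
        "html"
        "net/url"
    )

    func main() {
        comment := `<script>alert("hi")</script> & more`

        // Stored/transported form: special characters escaped or URL-encoded.
        escaped := html.EscapeString(comment)
        encoded := url.QueryEscape(comment)
        fmt.Println(escaped) // &lt;script&gt;alert(&#34;hi&#34;)&lt;/script&gt; &amp; more
        fmt.Println(encoded)

        // Display form: the reversible transform undone where it's safe to do so.
        fmt.Println(html.UnescapeString(escaped))
    }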


Yeah, other than sanitizing and validating, you shouldn't be transforming data or you're headed into a state management bug nightmare.


So you think that if e.g. Stack exchange is storing code examples in their database, they shouldn't transform the special characters into e.g. url-encoding or some other escaped format?

How are you going to validate it doesn't do something harmful?

And also, you keep mentioning sanitization as though it's not inherently a transform. Stripping out whitespace? That's a transform. Just because it's a one-directional transform doesn't make it not one.


I'm guessing they are thinking of a scenario where SELECT * FROM User_Details gets sent directly to the front end.

So even if all you are displaying is the users name or initials you would still be sending things like SSN and credit card number to the front end


Sanitizing your inputs is a string issue not a data structure issue.

Sanitizing your inputs has been known about for literally almost half a century that should just be default for developers at this point.


>Sanitizing your inputs has been known about for literally almost half a century that should just be default for developers at this point.

Except if you're a "stupid programmer", in which such defaults are irrelevant to you. In such cases, one can only hope they're relying on tooling that sanitizes as much as possible for them.


Data protection also happens at a much lower level. If you're running your binary blob on a server with no firewall you will be hammered with constant root hacks. https and REST encryption will be irrelevant.

Even worse, you may be running a mail server on the same machine.


> hammered with constant root hacks.

That happens to every server exposed to the internet.

Unless there is a targeted ddos event it's usually on the level of a query or two per second AT MOST or so.

It's something most people don't worry about because there's no way around it.

If you need ddos protection you put the server behind cloudflare or something like it.

It's totally fine to raw dog the internet without a firewall in my opinion if the server only has a single web server that you keep updated and sanitize your inputs.

Running other services on the same server without a firewall becomes horrible though, you're right. lol


I think what they're really trying to say is "we can start out with this and in a lot of cases we won't ever have to change it".


No that's not what I'm saying.


Haha, OK! Well, sorry, then.


or the field is "is_rspts_field" and no one will tell them what that means.


I think he's just rebelling a bit against the culture of caring about scale and performance. A culture that has been pushed out by FAANG that also has some academic undertones. The subtext he's giving is: it doesn't really matter.

That's what I read into it, because that's what I identify with. I've met so many fellow CS students (back in the day) who cared about performance optimization. I never cared. I just wanted it to be simple, working and done. I cared about clean code because I had a hard time understanding anything else. I felt allergic to over-engineering because it reflects a whole host of thoughts/emotions that just hurt my brain when I empathize with how it would feel to have them. So I kept things simple too.

Usually the repercussions aren't really there. When they are there, then they need to be there and it's usually a sign that your product and the place in the market you're at is evolving. Code is a living thing. Sometimes a big rewrite is necessary, but if that simple unoptimized code held up for 10 years (which is what I'm experiencing), then I'd argue that's okay.


A lot of these things aren't about scale and performance, but maintainability. Once you get over the initial learning curve, things like static typing, Docker, cloud services managed with Terraform, etc. don't require that much up-front effort and save you a lot of time in the long run.


> I can't tell if this is an honest call to keep things simple, or if it's meant to ridicule that idea.

Me neither.

All of these things involve tradeoffs. Multiple repos vs monorepo highly depends on your organization. Microservices vs monolith has been discussed to death. A simple, locally-driven deploy process is great for solopreneurs but doesn't scale to full companies. Load balancers might be required depending on your availability requirements. Storing all your data on disk is easier than maintaining a database, but aside from performance bottlenecks, it's harder to define a recovery process (RTO/RPO) in the case of a hardware failure. And as much as I dislike them, there are ease-of-use benefits in using dynamically-typed languages.

The biggest technical challenge in most software engineering roles today is evaluating the pros and cons of these types of choices and choosing the right one for your situation.


Yeah this is a sane take. Completely agree, except I think there are more performance bottlenecks with a database than with files on disk, no?


OP here.

I was working on a feature for a webapp that I thought should be doable in a day, but I spent roughly a week on it. When I was done with it, I looked back and thought: wait, why did this take me the whole week? I couldn't come up with a satisfying answer.

So I just decided to accept that I'm not that productive. Maybe there are things I can do to improve my speed of implementing features, but for the time being, this is my speed.

I was going to make a writeup about coming to terms with that.

But somehow as I was writing (originally on Twitter) I linked it in my head to the way I hate modern dev culture: the docker and the webpack, etc. Programming is already hard, why make it 10x harder with all the tools that require complex configurations, etc?

I remember raising this point on HN and other places with other developers, and that there was always push back from people who swear by these tools.

So it clicked in my head: I hate these tools because I'm already slow as it is and I can't take it when these complexities slow me down even further.

Then as I spent more time thinking about the content, I thought this was manifesto-worthy content.

But in terms of the points mentioned: I'm dead serious about all of them. Although what I actually mean might not be obvious from first glance.

When I said I write objects as-is, it appears that most people in this thread thought I'm writing individual files. This is not what I'm doing. I'm using a B-Tree backed key-value store and doing binary serialization. I also have a scheme where I make use of the properties of B-Trees to make indexing/querying possible. I have a whole write up about this topic: https://hasen.substack.com/p/indexing-querying-boltdb

For using HTTP as a medium for RPC, I also have a writeup: https://hasen.substack.com/p/automagic-go-typescript-interfa...

These alternative techniques took time for me to develop, so you could say that it would have been faster for me to just use postgres/docker/aws etc (the standard stack). But this is my point: I really am too stupid to learn them.
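For a rough idea of what the key-value approach looks like (a simplified Go sketch against BoltDB; my actual serialization and indexing scheme is in the linked write-up, this just uses gob for illustration):

    package main

    import (
        "bytes"
        "encoding/gob"
        "log"

        "github.com/boltdb/bolt"
    )

    type Article struct {
        ID    string
        Title string
        Body  string
    }

    func main() {
        db, err := bolt.Open("app.db", 0600, nil)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Serialize the object and store it under its ID in a bucket.
        a := Article{ID: "a1", Title: "Hello", Body: "..."}
        var buf bytes.Buffer
        if err := gob.NewEncoder(&buf).Encode(a); err != nil {
            log.Fatal(err)
        }
        err = db.Update(func(tx *bolt.Tx) error {
            b, err := tx.CreateBucketIfNotExists([]byte("articles"))
            if err != nil {
                return err
            }
            return b.Put([]byte(a.ID), buf.Bytes())
        })
        if err != nil {
            log.Fatal(err)
        }
    }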


There are plenty of reasons why something you think should take a day might take a week. We often (fallaciously) assess a problem as "easy" without considering the context in which it has to be incorporated, i.e. the codebase. The duration is more often a function of the complexity of the codebase, rather than the complexity of the isolated problem.


We've all been there. Things take much longer than expected. This is, as you've already stated, something we come to realize in our careers.

However, I would like to share a different angle, if I may. I don't think your speed is necessarily the problem. I think your time estimation skills might need some work.

It's easy for us to think things "should be simple". Unfortunately, the modern world of software is becoming more and more diverse and complex (a symptom of software "eating the world"). It's only natural that a world which has evolved from 8086s and terminals to interconnected-everything, multi-OS, multi-platform, multi-device magic is going to be orders of magnitude more difficult to build on at any reasonable scale of business. There are just so many small things that can go wrong!

I would gently suggest you stop being so hard on yourself and instead of beating yourself up for "being slow" just take each technology one at a time. Ultimately, the way to survive in this career is embracing change and staying sane while doing it. It's not easy, but it's the only way.

As for you being dumb, I would also like to say that it is clear you choose to deeply understand the topics you put in your head. Many folks "learn enough to get the job done" and very little more. It seems you have an appreciation for deep learning. This is not a bad thing, it is usually this trait over a long time that differentiates a truly senior engineer from an intermediate one. Your path may be slower, but it is probably more thorough and more informative in the long term. Keep in mind though, this breeds imposter syndrome in your mind. When you accept you still have lots to learn, it is easy to feel like you know nothing at all. I would suggest not letting this get to you. We all feel it, it is real, we're all just trying to do our best day-to-day.

Oh, and always 2x pad all your estimates at minimum, cause if we know anything about computing, it's that it always has hiccups.


It's ok man. I don't need to be cheered up.

It's completely fine to accept your situation without giving up on improving it.

The problem is when people refuse to acknowledge reality because they really hate certain labels and are too invested in not having that label apply to them.

I'm slow and it's ok.

I'd like to become faster in the future.

But for now, I'm slow, and that's ok.


I'm also slow. It is also okay.

I have met many, many brilliant engineers. They fly real fast and really far. The funny thing about it is that they tend to crash and burn sometimes too. Sometimes they even break orbit, but even then, someone has to stabilize that orbit from time to time.

Life takes all kinds of people. Folks who are willing to simplify things while ignoring fads are just as useful as those who jump into the unknown of new technologies.

What you think of as a weakness you need to compensate for so you can get work done is what other employers will consider an asset because you refuse to add complexity where it is unwarranted. What you call "slow" others might call wisdom.

I'm really not trying to cheer you up. I'm sharing facts from over a decade writing software being the "slow one".


Wait, so you made your own database? That's the thing I'm not smart enough for. I'd rather just use an existing database.


I mean, not really. The hard work was done by benbjohnson who is now working on https://litestream.io/ and https://fly.io/

He's the one who wrote BoltDB https://github.com/boltdb/bolt

I just put a relatively thin layer on top of it.

Now, to address your point more directly: I'm too stupid to figure out configuration, but not too stupid to figure out code. Code gets compiled and type checked. You can have tests, etc. Tractability for code is much higher than configuration.

With configuration, you have to be really smart and keep many moving parts in your head.

With code, you can be a bit dumb and lean heavily on the tooling.


Skeptical of the person who is "not smart enough to figure out docker", yet is smart enough to know about building a "web application into a self contained statically linked binary executable file".

I think they are smart enough, they just don't want to, or don't see the value in it (even if they've made an uninformed decision)


If you take 'figuring out Docker' as an open-ended problem then it makes sense. Getting an app running in a container is relatively easy, but that isn't really 'figuring out Docker'. That's just scratching the surface. Learning how to optimize that container, make it secure, put it in a registry so you can reuse it, etc means there's a lot to get through before you can say "I've got Docker figured out." There's no 'Done' when it comes to learning Docker. That makes it a harder problem in many people's opinions.


Idk this person claims they “can’t” figure out HTTP verbs and REST. I’d say there’s a very good chance “figure out docker” means “run a program in a container”


Yeah, I'm not smart enough to figure out either. I'm smart enough to rely on the command line tools provided by the framework I'm using.


Yeah, how does one do this? I see something called bakeware that does it for elixir... Is that what the author is using?


Nah they just use Go, so they get that outcome mostly for free, based on their other blog posts anyway.


Golang makes it very easy, e.g.
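For instance, something like this (a minimal sketch, not the author's actual code; it assumes a ./static directory of assets, and the port is arbitrary):

  // main.go: sketch of a self-contained web server with assets baked in.
  package main

  import (
      "embed"
      "io/fs"
      "log"
      "net/http"
  )

  //go:embed static
  var assets embed.FS

  func main() {
      // Strip the leading "static/" so files are served from the site root.
      site, err := fs.Sub(assets, "static")
      if err != nil {
          log.Fatal(err)
      }
      http.Handle("/", http.FileServer(http.FS(site)))
      log.Fatal(http.ListenAndServe(":8080", nil))
  }

Built with CGO_ENABLED=0 go build, that's one file you can scp to a VPS, which is roughly the deployment story the post describes.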


Many doctors watch videos at 1.5X speed or they cannot take in the information. If it's too slow, their brain switches off. I find I have this problem too, and I haven't figured out docker either, even though I'm a ninja at programming.


This appears to be someone railing against these practices, but framed as "defending" them.

Some are universal, but some only apply to certain domains.

I'm not really a fan of tearing down others, but I do feel the industry has a lot of room for improvement. Not sure if these types of screeds will actually make things better, though.


No SQL? It’s satire.


I read this as childish frustration directed at people that are not "as smart" as the author.

Regardless of what he's trying to say, there are much much better ways to present these topics.


It’s important to keep in mind that pretty much all the techniques you’re too „stupid“ for aren’t necessities for small projects, but rather about being able to manage complexity (or workload) at scale. For an MVP or a small product, keeping with the simpler techniques is absolutely fine, and adhering to those standards in many cases can even be considered overengineering.

Though there’s one thing I take issue with. „I’m too stupid“ implies that these techniques are impossible to understand, while I think it’s just a matter of being able to put in the time and effort to learn the intricacies. It’s absolutely fine if the resources (mainly time) aren’t available to you, but I don’t think it’s a matter of cognitive function.


By which I mean „I don’t have the time to learn X“ would be more precise, even if less catchy for a HN headline


It's a rhetorical device


> It’s important to keep in mind that pretty much all the techniques you’re too „stupid“ for aren’t necessities for small projects, but rather about being able to manage complexity (or workload) at scale

Most of the techniques people use to manage complexity at scale are too stupid to actually do that. Their only benefit is that following them prevents people from doing something even worse.

It also prevents people from doing anything better. Instead we dogmatically follow patterns with mountains of useless boilerplate and unnecessary abstraction, but somehow it's considered less complex.

I hate software sometimes.


Boilerplate and abstraction for the sake of it may be bad, sure, but there’s something to be said about having Standards anyone can adhere to. If I use a certain structure of code that requires more boilerplate but also means that anyone who has worked in that framework before knows exactly where to find and how to edit things, that’s a net gain, even more so in bigger teams. Think about tradeoffs, the most lean and elegant solution may not always be the best.


Sigh. I hope this is satire. Or this person is only referring to their personal projects. Otherwise, I agree with their premise and suggest they find a different career.

HTTP Verbs? You really can't get more basic. At least you could try to wrap your brain around the idea that GET reads and POST writes.

Not using SQL? Okay. You're spending a lot of time and effort hand-rolling your own shitty database.

The combination of things suggests to me they can't or don't want to deal with non-trivial mental models. Things like transforming data between layers or figuring out how someone else will read your code.

The next person to work on their stuff is going to have a massive headache figuring out what "made sense" to this person.


> HTTP Verbs? You really can't get more basic.

Every REST API codebase I've ever worked on would like a word. There's TONS of subtleties that determine whether you should be creating a GET, POST, PUT or PATCH for many different use cases.

And this isn't even taking into account the hundreds and hundreds, maybe thousands of endpoints I've come across that were obviously wrong because some Senior or even Principal dev had somehow managed to completely botch them - NOT a rare occurrence.


I didn't say they needed to use REST. REST is a huge mess.

You can simplify it to GET = doesn't change stuff, POST = changes stuff. Basic HTML form tag stuff.

The few things in between, like self-expiring links or counters recording how many times a thing was viewed, can get tossed into GET because the data you're retrieving isn't being modified. (Search can be POST if you don't like query parameters in your urls.)
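In Go terms, the whole split can be as small as this (an illustrative sketch, nothing from the article; the in-memory note list is made up):

  package main

  import (
      "fmt"
      "log"
      "net/http"
      "sync"
  )

  var (
      mu    sync.Mutex
      notes []string
  )

  // notesHandler: GET doesn't change stuff, POST changes stuff.
  func notesHandler(w http.ResponseWriter, r *http.Request) {
      switch r.Method {
      case http.MethodGet:
          // Read-only: safe to refresh, bookmark, or paste the URL to a colleague.
          mu.Lock()
          defer mu.Unlock()
          for _, n := range notes {
              fmt.Fprintln(w, n)
          }
      case http.MethodPost:
          // Mutating: exactly what a plain HTML form with method="post" sends.
          mu.Lock()
          notes = append(notes, r.FormValue("text"))
          mu.Unlock()
          http.Redirect(w, r, "/notes", http.StatusSeeOther)
      default:
          http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
      }
  }

  func main() {
      http.HandleFunc("/notes", notesHandler)
      log.Fatal(http.ListenAndServe(":8080", nil))
  }

The only rule being enforced is the one above: reads under GET, writes under POST.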


GET: get a thing from a place. PUT: put a thing in a place. POST: create a new place, put a thing there, and tell me its location. DELETE: delete the contents of an existing place.

The biggest subtlety I can think of is deciding whether PUT to a place that doesn't exist is valid, and whether DELETE of a place should delete just the object or also the place too, if PUT is only valid for existing places. How to handle that subtlety is application dependent. Personally I like to architect RESTful services such that POST isn't necessary since it's hard to make it idempotent, but sometimes POST is the most comprehensible interface.

Also, I wouldn't bother using REST at all unless I was going to make everything fully discoverable via hypertext[1]. However that's not to say there aren't soundly designed non-RESTful ways to use the HTTP application protocol.

[1] https://en.wikipedia.org/wiki/HATEOAS


I'm okay with it until the database.

For http verb, I'm onboard with just using POST for everything.

The database is one area where you need more care. Even if it's just SQLite, storage persists, and especially if this is for a company, it needs to persist past you.


Unless there's a compelling reason...like a use case for caching (your server or an intermediate) or high volume (POSTs are larger requests), POST for everything is a good default.


Debugging is easier with GET requests, where you can see what is being sent to the server. It's not a huge win, but it's better than POST for everything in my experience.


What do you mean? You can inspect POST request payloads in browser devtools just as easily as GET requests.


It's far faster when you can look at the URL and see the parameters. I have worked on systems with both and I didn't like everything as a POST. If you want to get a page with a list of objects with a filter, it's a lot easier to copy-paste a URL than it is to fart about in developer tools and try to recreate the POST request.


It's also easier to reproduce, for sure. This goes for QA as well as customers, who may not be able to casually inspect the request (or have access to the logs) when reproducing problematic requests.


Yeah, I really hope this is exclusive to personal throwaway projects. Otherwise such thinking can land you and your potential users/clients in very hot water. A belief like avoiding SQL because it's too complicated indicates to me you're incapable of handling data appropriately. Again, totally OK if you're smart enough to understand that you're not going to be handling user data anyway, maybe just saving a theme variable here and there. But none of that is indicated in the article, which is why I'm so confused that the comments ITT are supportive.


> Not using SQL? Okay. You're spending a lot of time and effort hand-rolling your own shitty database.

I've seen this before.

It crashed and corrupted most files because the author didn't understand how file IO really worked (he also created a ramdisk thinking it would be faster).

Fortunately, someone took a memory dump of the process some time before and most data was recovered.


> The combination of things suggests to me they can't or don't want to deal with non-trivial mental models.

You mean… they’re stupid? Like they say in their title?


Being limited in time, effort, or ability isn't stupid. I've got my own mental hardware limitations with reading math and some functional programming languages. Bad enough I don't put in the effort to learn certain things.

Now, writing an article conflating limitations or lack of desire with intelligence shows... poor self-awareness.

But yeah, I prefer not to say stupid when I can be more specific. :)


This post is the programmer equivalent of teenage girls posting photos of themselves captioned with "I'm so ugly" to farm praise from their friends.

I have come to dislike this growing trend that celebrates mediocrity and failure. Posting proudly that you are stupid, terrible and incompetent seems to result in applause and high praise. Why? Why is that a good thing? I hope this was intended to be a parody.


>I have come to dislike this growing trend that celebrates mediocrity and failure.

That's not what this is. It's a sarcastic critique of techniques the author thinks don't justify their weight. It's a comment on the cognitive load required by the plethora of complex "best practices" that often goes unquestioned.

The multi-repo microservices thing really hit home for me. I only recently had to deal with this, and it was horrible. I finally made my peace with it, but it required adding an entirely unnecessary level of abstraction and techniques where merges are the new commits, and repos are the new directories, and so on. And each service was 90% boilerplate and like one endpoint with a few hundred lines of code.

Ultimately I think reducing the cognitive load on your devs, and making the build-test-debug loop fast, is the highest goal of project leadership, but I seem to be in the minority. If you don't like the "stupid" verbiage, I think one benchmark for a successful system should be that it's usable by devs who are mildly drunk.


The problem is people aren't sure this is sarcastic. I've heard people say shit like this.

It reads very much like "I don't like learning new things" rants I see from time to time.

They've found one way they like doing things and stopped. They don't know how difficult any of the stuff they are talking about is. They just know it's something they don't want to learn.


I don't like learning new things.

Unironically.

I like learning good/useful/effective things.

Not "new" things.

Most "new" things are less capable variations of existing things that are better and more mature.


It may be similar to the idea that if you are deemed to have imposter syndrome, then you cannot be an actual imposter, so if people perceive you to have imposter syndrome then it's in some ways a good thing.

It's impossible to know if the author has imposter syndrome or not without some way of measuring his competence. However, I think "imposter" has become a loaded term as a result of the syndrome it's associated with. There are cases describing workplace incompetence, or not being qualified for a job, or being rejected for cheating at interviews, but there is no discussion of imposters in those cases. They're just deemed incompetent or not the right fit and moved on from. Actual imposters who game interviews and do manage to land jobs aren't called imposters either, they're called dishonest, but there's no "dishonesty syndrome."

So I hesitate to reach for explanations of imposter syndrome if I'm not actually qualified for something. It can become used as a crutch to fall back on like a lot of diagnoses, instead of addressing the root causes of qualification. Waking up on that cycle made me realize I have a lot to work on in some areas.


"Imposter syndrome" is one of those terms that has lost most of its value now that it's become widely known. I don't think it has any value when it's self-diagnosed. People seem to think that being in a new job or a new environment where they are the junior automatically means they have imposter syndrome. No, you probably are junior and do need to learn new things every day. You probably do have a lot to learn before you are useful. It's likely that you are out of your depth. That's not imposter syndrome. Imposter syndrome is when you are at the top of your field but you have the irrational belief that you're actually a total fraud, an imposter, someone that doesn't deserve to be there. Yet how many threads have I seen where someone with 10 years experience talks about how they had imposter syndrome a few years earlier?


It's a much needed coping mechanism, because the tech buzzword churn prescribed with an imperative tone is the equivalent of media setting unrealistic expectations for teenage girls.

You may be experienced, but a large part of people in our industry are just out of school and don't have the experience to sort through it.


The “10x 1x 0.5x developer” talk at the start is the only thing giving me that feeling. That I find annoyingly grating. Resigning yourself to some self-flagellating label because you don’t like to use docker is just begging for sympathy. I’d be more open to empathizing with the author if they just wrote about why they like their system. It doesn’t have to be a “why X is better than Y” where you have to defend everything. Just “how I do Z without Y” and people might be interested.


I don't think it's hard to see why people have this reaction when they're inundated with people who are better than they will ever be on social media and sites like HN.

Not everyone can be the best and this seriously affects some people's self confidence, especially when they see the best around every corner.

In 2023 it's not good enough to be good enough, at least not if you believe what the people on the internet have to say, so folks rebel against such things.


TBH I just read it as someone who's found a way to make web apps that works well for them and wants to focus on solving problems rather than toying with their tech stack or copying solutions from contexts that are entirely unlike the one you're working in, e.g. because you're at a much smaller scale.


I think in this case being "stupid" is just a narrative pretext to advocate for simpler practices with less cognitive overhead more than anything else.


I've had similar crises of faith every now and then. I never got into Docker, and every time I read somebody's K8s horror story I get hives. I begrudgingly learned enough about systemd so I wouldn't become a hermit, but I'm not happy about it. I still more or less prefer to do Web things without the SPA-like features.

Some of this is because this is what I know, but some of it is because I'm too dumb to see the benefits of the added complexity. I'm not going to do "WEBSCALE!" things, more than likely, so why not just run a FreeBSD server and, if needed, stand up a jail here and there. Chasing the tech rabbit can be fun, but it is also exhausting.


I feel the same, but docker is actually a net positive in my opinion. You can learn enough of it to be useful in a fairly short period of time, though it does throw up its own set of problems that need debugging from time to time.


Related: The Grug Brained Developer

https://grugbrain.dev/


I love the grug brain article, but this original post is taking it too far.


Thank you for posting the link. This was definitely one of the inspirations (in terms of the provocative titles). But I couldn't remember what it was called exactly.


"Smart" is certainly subjective. And one can, given enough time and practice, "figure out" all the things the author listed.

The problem is time. A lone dev just can't take the time to setup the full stack every time for every project at every client. What to do?

Templates, scripts, pre-made configs ... soon you have a stack that will deploy itself and all you have to think about is structures and algorithms. And now you look like a 10x dev. Not because you "know everything in the universe," but because you leveraged technology ... much like our customers want to leverage technology.


I'm convinced the difference between the "10X" dev and the 0.5X dev isn't intelligence, it is rather the strength of the "rabbit hole detector". The 0.5X dev jumps right in, never to be seen again.


OT ramble: I met a -0.5x dev once. .NET dude, lots of experience. Standup, every day, was something gone awry with dependency injection. Every. Single. Day.


> standup

The one of you who came up with that idea was the true -0.5x engineer.


“Came up with?” None of us invented Agile/Scrum/wtfe malfunction requires daily standup meetings. Management insists, management observes, management signs the paychecks, so … here’s your DSM.


Noob Observation: There is an element of knowing what saves you time, and then, on top of that, why and how it does. I'd divide the savings into two categories:

1. The easy-wins category: don't repeat code, because you'll take twice as long to change it. This is clearly visible to even the author.

2. What I call the "tougher" win category: when it requires more executive function, like breaking a bad habit or managing cognitive load; in short, understanding why it saves you time in the long run. A reasonable example of this is using typing in Python. I and many of you know that you can catch errors way faster using types + Pyre (for example) than by writing tons of unit tests, but it takes a long-time Python programmer a while to get used to putting in the types, and sometimes just to remember to set up and use Pyre, before the time savings sink in. A dev used to pre-Pyre Python might just think "I will write tons of unit tests".

There is obviously a psychological/mental health component to this.


Unironically, I have no problem with any of this at small and maybe even medium scale, which is like 99% or more of applications.

As long as the data written to disk is being backed up, rock and roll!!

When they say 'compiled binary' are they using Java or are they writing web apps in C?


Probably Go. It is really well suited to this sort of thing.


Yeah lol this entire blog post is basically extolling the virtues of golang.


You’re not too stupid, nobody is. If you need to learn it, you can, and you will. Some things are hard to learn but it’s worth it because they help us do things we couldn’t otherwise.

As others have pointed out: don’t use k8s to deploy a web app for your local soccer team. All of the suggestions in this article are fine.

It’s actually kind of smart.

Update: ... except the REST/HTTP part. It doesn't matter for a small friends-and-family website, sure. But if you're building something you want to have grow and stick around and be used by others you'll just be setting yourself up for problems down the line. I dunno why RPC has come back into vogue, it was a pain in the 90s and early aughts that I had hoped we'd abandoned. It doesn't play well with the wider web ecosystem, it's a pain to grow horizontally, and locks you out of some pretty neat tools that can take your website further as you grow without much effort.


I am another stupid programmer.

Even if I have the experience, I still decide to use sqlite, single binary, local files, vps, monorepo. At least until market-fit is proven.


Most projects are created by programmers. That means that projects are not maintained properly, as most programmers focus on code, not on the build scripts. Making projects user friendly is an afterthought. Projects are not advertised properly. They are missing documentation, screenshots, youtube tutorials.

Some projects live long enough to become bloated, big, do-everything projects, full of experimental technologies that were forbidden to use at work.

Adding layers, abstractions, and interfaces may lead to unnecessary complexity or dependencies. How to spot code written by a senior? It is dead simple, obvious. How to spot a project managed by a senior? It is easy to install, well documented, easy to kick off, and uses little to no resources.


Most projects don't need Docker. I have built multiple billion-dollar-a-year company websites from scratch (no, I don't make a lot of money, just a dev) that get tens of thousands of users a day and we barely ever need more than one or two instances of anything, all of which the major cloud providers can scale out of the box with Azure App Service / Elastic Beanstalk / etc. Docker/K8S is pure hype and never fills the "oh it simplifies development on various OS" argument.

Sure, you can orchestrate in K8S to spin up the front end, back end, and database instance, but one day someone has to troubleshoot that. That someone being a dev who doesn't work with K8S all the time - good luck.


> Docker/K8S is pure hype and never fills the "oh it simplifies development on various OS" argument.

This is demonstrably untrue and a hilariously bad take.

Docker absolutely simplifies things. Give me a docker-compose file any day over “install X version of Y and Z version of Q”.


Yea, except I literally never have that problem. Get latest Node and .NET (as an example) and you are good to go after NuGet restore or npm i. Sounds like you guys need to keep your libraries up-to-date. Which you will need to do anyway if you want your project to be maintainable. On a similar vein - don't pull in every single dependency you can all the time. Btw, this isn't directed at you specifically, obviously, just easier to phrase it that way.

Edit: I am actually not arguing just to argue and think about this a lot. A "real world" example is "well, I need to mock out AWS SQS locally and now I have to pull in a fake local queue" or "I need to mock out RabbitMQ locally", etc. The short answer to those things is that you mostly don't - test the endpoint handling queue messages independently / write integration tests for it if you have to. There is no point to having the whole system mocked on your local - SQS/Kafka/whatever will do what it's supposed to do for the most part.


multiple billion per year with tens of thousands of users per day?

Assuming conservatively that this is 1B/year and 100K users/day, that's still $273/user/day. What are you selling?


Medical staffing with 2k+ commissions for placements with pretty high conversion rates. Again, I am just a dev so I don't see any of that.

Inventory tracking and auditing for Fortune 500 companies in a SaaS before that.


For anyone not sure if it's satirical (or perhaps allegorical), a click on the author's profile may help. Juxtapose this article about BoltDB, B-Trees and "moving away from SQL", https://hasen.substack.com/p/indexing-querying-boltdb, with "I'm not smart enough to figure out SQL queries."


  sed -i 's/I’m not smart enough/I have not made the effort/g' {url-in-a-file}
(Pardon my syntax, might not be 100% correct here)

Devs work in a performance business, and most of the time "smart" can be replaced with "effort". Heck, even "experience" often equates to "outcome from prior effort". If you don't care to understand something, therefore making you less "smart", we're often just talking about a choice in applied & focused effort.

Of course, there are caveats to this. Effort alone can't compensate for capability to learn (I keep this bookmarked: https://cdn.shopify.com/s/files/1/0535/6917/products/incompe...) And there are times where effort requires a lot, and we might not have the room/time to simply add that capability to our repertoire.

But all this said, I have found that being "smart" enough is more often just a status of where my efforts have been applied.


Boy, I feel this.

I am also a 0.5x programmer. I write dead-simple C; I don't even touch web development, which I consider more complex than C development with Valgrind.

In fact, I've spent nearly 3 years writing a build system. That's how slow I am, and it's made some acquaintances laugh at how long I've worked without result.

But in reality, I'm not just building a build system. I'm also building my own stack, a la [1], including replacing as much of libc as I can and changing the OS APIs I use.

I think that soon enough, this stack and this simplicity will become my superpowers and make me a 10x programmer.

Being a stupid programmer means you won't be clever. And if you're not clever, you will be able to debug your code, even though debugging is twice as hard as writing. [2] Good debugging is a superpower in itself.

[1]: https://youtube.com/watch?v=443UNeGrFoM

[2]: https://www.defprogramming.com/quotes-by/brian-w-kernighan/


Great point about debugging.

We're currently trying to simplify a monolith that is too hard to maintain, and the biggest issue for us is that debugging is heavily impacted by the call-stack size. Sometimes it's 100 functions before you get to the point you want. All this is due to very complex class and function hierarchies.

I'm responsible for a rewrite of some modules and I'm personally gonna stick with simple stuff.


For someone who's "too stupid" to understand these techs, they certainly can explain them and their drawbacks and alternatives well.

This reminds me of a bit of advice from Austin Kleon that I've used frequently -- "Make bad art, too"[^1]:

> “Good” can be a stifling word, a word that makes you hesitate and stare at a blank page and second-guess yourself and throw stuff in the trash. What’s important is to get your hands moving and let the images come. Whether it’s good or bad is beside the point. Just make something.

This is a perspective I have to come back to, as an engineer building enterprise-scale things, when I'm working on small-scale projects. I don't need to use the same tools I use at work, I can pick "bad"[^2] architecture, I just need to build _something_.

[^1] https://austinkleon.com/2020/04/15/make-bad-art-too/ [^2] "Bad" in that I know precisely how and when and what would bite me in the ass when I try to scale it.


For a side project I used some PHP to get a job done. Why? because you can edit the file on the server and keep trying it out till it works. On 90s style hosting that is cheap and honest. The iteration speed is amazing. CI/CD took <100ms.


When I wrote PHP on the server I had real problems with the "CI" part: I would refresh my page and manually test each change. Do you have Continuous Integration tests (i.e. automated) for this code or are you also doing the manual refresh cycle?

No shame either way; I think the juice isn't worth the squeeze when it comes to automatically testing short-lived code myself.


Sorry that was my silly subtle joke. The CI I am referring to here is refreshing the page! And I agree the juice isn't worth the squeeze.

(I love CI systems at work and wouldn't live without them, but that wait time :-(, so everyone tries to get the OODA loop on their local machine )

One idea I had would be to build an Elm-like backend language BUT with PHP's "edit the file on the server and it runs" nature. Combine that with some source control, forking, and more of a VSCode editor on the server, and you would have a nice DX for small projects.

All those features exist but they live in different languages/stacks, you can't have them all at once (I hope this is where someone replies "you say that... but have you tried X"!)


> I’m not smart enough to figure out docker confiuration, so instead I compile my web application into a self contained statically linked binary executable file.

Good luck getting that to run in different environments because static or not, nearly every executable has dependencies.

> I’m not smart enough to figure out cloud services and auto scaling groups, so I just upload my static binary file using scp to a linux server I rent from a VPS provider.

Which of course you'll need to setup, patch, figure out how to restart apps when they go down and on and on. There's no easy out.

> I’m not smart enough to figure out how to setup all the databases and firewalls and load balancers, so I just embed the http server and data storage engine as libraries in my static binary.

And how do you back that up?

I get the gist of the article, but some things are not always replaceable with an easy alternative. Most alternatives also have disadvantages as well as their own learning curves.

Postgres, Docker, Infra as Code and Kubernetes are pretty much best practices on any development.

Don't kid yourself that the learning curve is not worth it.


> Good luck getting that to run in different environments because static or not, nearly every executable has dependencies.

Go compiled with CGO_ENABLED=0 disagrees, especially since embedding was folded into the standard library.

> Which of course you'll need to setup, patch, figure out how to restart apps when they go down and on and on. There's no easy out.

Recompile, upload, restart. If the VPS server goes down, that is what their tech support is for.

> And how do you back that up?

Either use LVM snapshots and tar, or sigstop/tar/sigcont, and rely on whatever embedded storage engine you are using to be able to recover from what will look like a bog-standard unexpected shutdown. If you are nice, arrange for an API that can take a complete snapshot of your running data.
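If the embedded engine is, say, BoltDB (which the post's author mentions elsewhere in this thread), such a snapshot API can be a single handler that streams a consistent copy of the database file from inside a read transaction. A sketch, assuming the boltdb/bolt package; the route name, file name, and port are made up:

  package main

  import (
      "log"
      "net/http"
      "strconv"

      "github.com/boltdb/bolt"
  )

  // backupHandler streams a consistent copy of the database to the caller.
  // You would want authentication in front of this route before exposing it.
  func backupHandler(db *bolt.DB) http.HandlerFunc {
      return func(w http.ResponseWriter, r *http.Request) {
          err := db.View(func(tx *bolt.Tx) error {
              w.Header().Set("Content-Type", "application/octet-stream")
              w.Header().Set("Content-Disposition", `attachment; filename="app.db"`)
              w.Header().Set("Content-Length", strconv.FormatInt(tx.Size(), 10))
              _, err := tx.WriteTo(w) // the copy is taken inside a read transaction
              return err
          })
          if err != nil {
              http.Error(w, err.Error(), http.StatusInternalServerError)
          }
      }
  }

  func main() {
      db, err := bolt.Open("app.db", 0600, nil)
      if err != nil {
          log.Fatal(err)
      }
      defer db.Close()
      http.HandleFunc("/admin/backup", backupHandler(db))
      log.Fatal(http.ListenAndServe(":8080", nil))
  }

Restoring is then just putting the downloaded file back in place before starting the binary again.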

> Postgres, Docker, Infra as Code and Kubernetes are pretty much best practices on any development.

That is debatable. There is a whole lot to be said about the virtues of a single binary that has no dependencies beyond a vaguely modern Linux distro.


I’ll take a static binary any day over some virtual image, thanks.


Docker images run native on Linux. The virtual part is just to get Docker working on MacOs/Windows


> Good luck getting that to run in different environments because static or not, nearly every executable has dependencies.

Maybe he's using nix to parameterize his builds with the system architecture, but left it out because it didn't fit his idea for a blog post.


No. I'm too stupid to even understand what that means.


Some of the things in this are a bit much, but I think some of this comes from the cult of the 10x developer. I feel like I'm a 10x developer on some very specific things, and a 0.5x developer when it comes to others, but that I'm mostly an average 1x dev. I think a lot of devs are like this, they get really good at some specific stuff, but feel lost as soon as they start looking at some new project on a totally different stack or in a different area. Like when I first migrated to using git from svn I definitely had a productivity drop from screwing stuff up, but I eventually adjusted. Most devs I know went through something similar, shot themselves in the foot a few times and then acclimatized to the new tool, not many instantly switched without a productivity drop.


Smart is not choosing a particular technology (whether simple or complex); smart is choosing the right technology for the problem at hand.

The primary difference between good and bad developers is that good developers understand context. Every tool, technique, and practice was developed in a particular context to solve specific problems. Good developers understand the context and limitations. Bad developers use the golden hammer or follow all “best practices”.

The OP calls all the technologies they don’t use “bullshit”, so they are probably a bad developer. But a good developer might choose exactly the same set of technologies for good reasons.

“Keep it simple” is a great principle, but it is also bordering on a platitude because determining what is simple is not trivial.


I used to consider myself 10x years ago. But now I'm probably like the author. There are too many tools, too many configurations; it's too complicated to get it all to work together. That is no longer 'programming', it is fighting a dozen tools that rarely get along. All of this becomes inertia that takes 10x and slows it down to .5x. The environment is no longer suited to focused work. If the day is spent futzing around with some library that stopped working, then it is not spent on problem solving.

Maybe that is the key, you are 10x if you are solving a real problem. But now you are spending that 10x on solving stupid problems with tooling, which is not aimed at the main problem.

(10x Programmer) - (9x time spent on tooling) = a 1x Programmer.


I feel the same way. I'm never sure what to think when I can write my own UAV firmware, but can't get the toolchain/build dependencies working for popular open source ones.


I’m stupid too. It’s hard to make sense of complexity when it’s senseless and doesn’t directly address a problem, instead being an inner abstraction. Every day there are more things to learn in web development, and the scope of most engineering jobs is too large for most people to perform at a peak without burning out. I guess someone much more stupid is the one who doesn’t realize this, tries to do everything, and ends up doing it badly. Of course you can spin up whatever layer in your stack, but maintaining and debugging it at large is a whole other problem. Usually people who prefer simplicity are the ones wise enough to see the big picture: software in its entire lifecycle.

Being stupid has allowed me to deliver fast and secure software that I can explain to the next maintainer in a very short time, not to mention figure out a problem just by its context in my head. There’s a whole set of principles that makes being a stupid programmer viable, such as Pareto and YAGNI, so much so that often people confuse you with a bright, productive programmer. It came to the point that I like being stupid, because by focusing on the bare minimum, software can be shipped really fast and with great quality overall.


I wonder if it's not that he's not smart enough, but that he doesn't enjoy it. Personally, I loathe developing on the web stack. I find it tedious and boring, and as a result, I do it poorly. Not because I'm not smart enough, but because I dislike it.

All that means is that sort of programming isn't for me. It doesn't mean I'm a terrible programmer.


> embed the http server and data storage engine as libraries in my static binary.

That's very interesting, I've been looking for DB + web server + application server in one process solutions some time ago, but the only thing I found back then was Tarantool. I wonder which solution the author had in mind when he was writing this.


It is all reasonable stuff for most apps that don’t need scale, EXCEPT writing files to disk as a database. Doing that for “blobby” stuff like profile pics is OK, but for shared data that needs granular mutation I feel you will end up with a buggy, ad-hoc, ill-specified half of Postgres in your app.


If you write data to a temporary file and then rename that file to its final name, the rename is an atomic operation on a POSIX filesystem, so you will cleanly overwrite the original file. However, you then need to deal with the risk of lost writes if another process or thread is reading/writing at the same time (i.e., sequence the writes through an in-memory cache, or just architect things so that there is only one writer at a time; of course, if the shape of the data fits SQL and you don't mind compiling C libraries into your app, which might remove the ability to compile statically, plain SQLite in WAL mode might be a better approach since it handles multiple writers for you).
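A minimal version of that write-temp-then-rename dance, as a sketch in Go (the file names are illustrative):

  package main

  import (
      "encoding/json"
      "log"
      "os"
      "path/filepath"
  )

  // saveAtomically writes data to a temp file in the same directory, flushes it,
  // and renames it over the target, so readers only ever see old or new contents.
  func saveAtomically(path string, data []byte) error {
      tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
      if err != nil {
          return err
      }
      defer os.Remove(tmp.Name()) // best-effort cleanup if anything below fails

      if _, err := tmp.Write(data); err != nil {
          tmp.Close()
          return err
      }
      if err := tmp.Sync(); err != nil { // make sure the bytes hit the disk first
          tmp.Close()
          return err
      }
      if err := tmp.Close(); err != nil {
          return err
      }
      return os.Rename(tmp.Name(), path) // atomic on POSIX filesystems
  }

  func main() {
      user := map[string]string{"name": "example"}
      data, err := json.MarshalIndent(user, "", "  ")
      if err != nil {
          log.Fatal(err)
      }
      if err := saveAtomically("user-example.json", data); err != nil {
          log.Fatal(err)
      }
  }

The single-writer caveat above still applies; this only guarantees readers never see a half-written file.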

Even with that risk, for certain types of apps, this can be a way to avoid a database which might need a separate process to backup/dump etc, the skills of a DBA-type of person in some environments, adding yet another database to an enterprise "approved" list, etc. This technique presents a nice, clean view of real data and allows backups of just the live data directory itself. On SSD, it's also much more performant than you might expect.

It's quite possible to make the author's approach work very well with nearly zero long-term maintenance and no external database dependencies, as long as you are careful to manage the expectations and requirements.


Not smart enough for all this, but probably still aware enough of all these issues to find a method that works and actually ship stuff. And I suspect oblivious enough to document their process properly.

I wouldn't have made all the same choices, but I admire the decision-making process.


Bitter humor aside, I think this is a pretty good lament about the current state of software development and its often seemingly unnecessary layers of complexity. But all those things were invented to solve particular complex problems. The issue is that somehow we keep telling ourselves that we need to use these complex solutions to solve simple problems. One of my favorite examples of this is the inappropriate use of XML as a data format - it was intended to be a markup language, but of course it is flexible enough to be a data format, so people started using it like that, and then it became the standard (thank goodness that is changing now).


The main problem is that Google, Amazon, et al. publish how they handled some difficult problem they encountered, and then middle management at Foo Inc. decides that’s the way they need to design their 40-user internal orders app.


Being smart enough to create click bait content is probably more valuable anyway.


There’s always someone smarter and more successful out there. Someone that can do everything you can without even breaking a sweat, and then do more. They have no reason to consider your existence.

Calling yourself stupid is stupid because the graph of those you->betterperson relations has no leaves, and is a totally closed loop. You’re the smarter, better benevolent version of someone else.


I feel like all of these could easily be flipped

>> I’m not smart enough to figure out how to manage multiple repositories with shared code, so I put all my code in one repository.

> I'm not smart enough to manage a monorepo, so I put all my projects in separate repositories.

There's no reasoning behind the opinions other than "bc dumb".


Of course you don't need a hammer if you've got no screws to hit. You probably only worked on very small projects. Use the right tool for the right job. The tradeoffs of each tool in each scenario are one of the most important things we have to learn every day, on every task. If you can't do that... well, good luck.


> I’m not smart enough to figure out docker confiuration, so instead I compile my web application into a self contained statically linked binary executable file.

I'd reverse that for myself. I am not smart enough to cross-compile my project or statically link, so I throw it in a docker container and call it a day.


An argument for a particular kind of opinionated simplicity with humility used as a rhetorical Trojan Horse.


Eh? What's in this alleged horse?


The opinionated simplicity.


As someone whose career started doing ‘stupid programmer’ things, it’s 100% worth investing time in learning how to do a lot of the things which are “too smart”, purely because a lot of these smart things were built to solve problems which will cause a lot of headache down the line.


It sounds like he still has to do a bunch of BS. What stack is this buzz marketing for? Golang?


The "and that's OK" part of the sub-title of this post needs some support!


> so I just store my objects in their entirety as-is on disk.

Every time I decided to use filesystem as db for my program, a couple months down the road that turned out to be a wrong approach.


> I’m not smart enough

No, you're "brilliant." You know how to make something that's simple enough to get the job done.


How can you store program state on disk like that? You'd need serialization and deserialization. A DB might be easier.


> How can you store program state on disk like that? You'd need serialization and deserialization.

You need serialization and deserialization when interacting with a DB, too. You're trusting a library to do most of that for you. This is the difference between a junior engineer and a senior engineer. As a junior engineer, you don't trust your own judgement and use a library. As a senior engineer, you don't trust the library writer's judgement, so you write it yourself.


> As a junior engineer, you don't trust your own judgement and use a library. As a senior engineer, you don't trust the library writer's judgement, so you write it yourself.

What? No. As a senior engineer you realize that loading data from a database is a solved problem, has nothing to do with the core use cases you're addressing, and would be a massive waste of time to build yourself.


Yeah, this is true but I tend to hand craft my persistence to a database. I'm not sure how using files would be any easier. I'd have to manage a bunch of files on top of what I'm already doing.


Dunning-Kruger begs to differ...


You have things like RocksDB, SQLite, etc. that are simple and have serialization/deserialization built in, so there are no external app dependencies.


My first thought was the way early 1990s forum and blog scripts did it, aka the UBB/Movable Type method of storing everything as static text files in a directory somewhere.

That would be really inefficient though. I vaguely recall database driven blog and forum scripts taking over because of how poorly the old solution worked at scale.


Maybe a bunch of JSON files on the local file system? Find the files and read the contents as needed.
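Something like this, perhaps (a sketch; the directory layout and Note struct are invented for illustration):

  package main

  import (
      "encoding/json"
      "fmt"
      "log"
      "os"
      "path/filepath"
  )

  // Note is an illustrative record type: one JSON file per note.
  type Note struct {
      ID    string `json:"id"`
      Title string `json:"title"`
      Body  string `json:"body"`
  }

  // loadNotes reads every *.json file under dir and decodes it into a Note.
  func loadNotes(dir string) ([]Note, error) {
      paths, err := filepath.Glob(filepath.Join(dir, "*.json"))
      if err != nil {
          return nil, err
      }
      var notes []Note
      for _, p := range paths {
          data, err := os.ReadFile(p)
          if err != nil {
              return nil, err
          }
          var n Note
          if err := json.Unmarshal(data, &n); err != nil {
              return nil, fmt.Errorf("%s: %w", p, err)
          }
          notes = append(notes, n)
      }
      return notes, nil
  }

  func main() {
      notes, err := loadNotes("data/notes")
      if err != nil {
          log.Fatal(err)
      }
      log.Printf("loaded %d notes", len(notes))
  }

It works fine until you need cross-record queries or concurrent writers, which is about where the parent comments' warnings start to apply.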


Simple != stupid, even though sometimes it's hard to discern stupid simple from simply stupid.


I like this kind of discussion, but I'm too stupid to know what the author's point is.


I appreciate the attempt at honesty, but it's not true that you're too stupid. You're too lazy. A trained monkey could learn these things (I know, because I learned them!) and you're not stupid if you know how to program. There's no shame in being lazy. Let's call a spade a spade.


It's not an attempt at honesty, it's either allegorical or satire. Other recent posts by author are an article about BoltDB and B-Trees, https://hasen.substack.com/p/indexing-querying-boltdb. Juxtapose with "I'm not smart enough to figure out SQL queries."


Right. And it’s therefore quite lame. “Look at me I’m smart because I use Docker” is for sure a stupid thing to post under your own name. Youth is wasted on the young.


One trusts he's not too dumb to protect his webapp from the OWASP top 10


I guess I’m stupid too. I do all of these things. It’s awesome.


this is for a stupid programmer that doesn't want or need to work for a corporation.

by the way, I'd say:

I'm too stupid to keep my configuration in order, so I use Docker.


I really think he was just making a joke!


If it works, it ain't stupid.


contrarian substack clickbait


Obviously great article (for real) but it's sad that so many people still misunderstand containers and orchestration.

> I’m not smart enough to figure out docker confiuration [sic], so instead I compile my web application into a self contained statically linked binary executable file.

That might be fine for something that needs to run once. What if it crashes? And what about everything else that’s required? Database, centralized cache, queues, workers, load balancers.

You can't build a fully functional product without those, and what are you going to do? Install all of the dependencies on a VM by hand? What if you have to migrate to a different machine for whatever reason? Are you going to write scripts to automate all that, and test them, and change them, and test them again? What if your program crashes? Are you installing supervisor? Will you turn your nice little binary into a daemon and deal with the whole initd/systemd configuration? What about the development environment? Are you going to do all that again when you switch laptops or when you hire someone?

(I personally prefer a PaaS like Heroku, Vercel or Render but that's besides the point)

Complexity very quickly gets out of control. What if you could, instead, define what your project requires in terms of environment, services, dependencies, networking etc. and just give it to a magical (today we'd say "AI powered") DevOps entity that makes sure everything runs smoothly no matter on what machine it's running, assuming it has enough resources? Locally you'd just run a single command and the whole thing would magically come to life.

That's what Docker and Compose/Kubernetes are. A very convenient abstraction layer that makes it very easy to define your underlying architecture in a declarative way and have something else worry about how to get there.

And, let's be honest, it's not even that complicated. You get 80% of the benefits with (less than) 20% of what Docker etc. are capable of. I learned how to use it once 8 years ago and I'm still using it to this day, just occasionally googling the odd command or Dockerfile keyword. I've seen the same with git. So many people just memorize a few commands and can't be bothered to learn how it conceptually works.

I guess I just went on a long rant here, but I'm really surprised that, in a constantly evolving industry like tech, there are so many people who are afraid of learning something new or of changing how they do things, sometimes out of pure irrational spite or fear of whatever is new. Reminds me of the bit from Bret Victor's talk "The Future of Programming" where (paraphrasing) he says "binary people thought assembly was a bad idea and assembly people thought C was a bad [...]" etc. etc.


ouch, right in the feels



