Over 20 years ago, after I had devised a way to use Gray Code to embed a clock into a signal without increasing the signal's modulated bandwidth, one of my engineers said that someday there'd be a "Moyer's Law". I thought that was funny, so I created one myself:
Moyer's Law: "When we can finally print from any computer to any printer consistently, there will be no CS problems left to solve."
Since every good law has a corollary, I created one of those too (simply to be funnier):
The Corollary to Moyer's Law: "The printed dates will still be wrong."
While PDFs have brought us closer to "universal printing", I won't claim we're anywhere close to solving all CS problems. Sadly, date conversion and formatting continue to be problems (hint, consider UTC ISO-8601 or RFC3339 dates/times for the JSON representation).
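e.g. something along these lines (a TypeScript sketch, nothing library-specific):

    // Serialize: always emit UTC ISO-8601 / RFC 3339, never a locale-formatted string.
    const payload = JSON.stringify({ createdAt: new Date().toISOString() });
    // e.g. {"createdAt":"2020-02-17T17:22:45.000Z"}

    // Parse: Date accepts the ISO-8601 profile that toISOString() produces.
    const createdAt = new Date(JSON.parse(payload).createdAt);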
P.S. I don't actually think I'm smart enough to have a law named after me ... nor have I really contributed enough to "our art".
For some reason I've gravitated to printing since my earliest computing days and your law speaks to me deeply. I've recently been working on a source code printing app (yes, I'm nuts) and have been blown away by how bad printers are today. I thought the troubles I had at home were "just me".
Love it :) I think the "consistently connect to a projector" is similar. Moyer's Law seems popular here and has generated a great conversation; I'll raise it as a suggestion on the repo and see what people think!
I guess since it's lasted 20+ years, perhaps it's more prescient than I thought?
EDIT: I see that you've submitted an RFC but it misses (what I think is) the point. It's really the corollary of the law that is important as it states how hard it is to do date/time correctly (and you'll notice that the bulk of the discussion here is on the corollary). As I noted above, this was intended to be funny and including it in a list with real laws is just that much funnier!
I just spent 2 days programming a timezone selector in a React form that changes the displayed date/time as you switch timezones but the underlying UTC representation wouldn't change.
All this without loading 500k JS timezone libraries. I only used Intl API and tz.js [1].
The trick simply was to temporarily shift the date by the difference between the local time and the edited time zone. :)
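Roughly the idea, as a simplified sketch using only the Intl API (offsetMs and shiftForDisplay are names I made up; the real form code handles more edge cases):

    // Offset (ms) of `timeZone` at a given instant, using only Intl:
    // format the instant in that zone, re-read the wall-clock parts as UTC,
    // and take the difference (sub-second precision ignored).
    function offsetMs(date: Date, timeZone: string): number {
      const parts = Object.fromEntries(
        new Intl.DateTimeFormat("en-US", {
          timeZone, hourCycle: "h23",
          year: "numeric", month: "2-digit", day: "2-digit",
          hour: "2-digit", minute: "2-digit", second: "2-digit",
        }).formatToParts(date).map(p => [p.type, p.value]),
      );
      return Date.UTC(+parts.year, +parts.month - 1, +parts.day,
                      +parts.hour, +parts.minute, +parts.second) - date.getTime();
    }

    // Shift the stored UTC instant so a local-time widget *displays* the wall
    // clock of the selected zone; the stored UTC value itself never changes.
    function shiftForDisplay(utc: Date, selectedZone: string): Date {
      const localZone = Intl.DateTimeFormat().resolvedOptions().timeZone;
      return new Date(utc.getTime() + offsetMs(utc, selectedZone) - offsetMs(utc, localZone));
    }

Converting the user's edited value back to UTC is just the same shift in reverse.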
Yeah, not to be rude (we've all been there) but that hack is bound to be broken somehow.
Really this time & timezone stuff should be handled by the OS, system calls or libs should be more sophisticated so we aren't "solving" these problems over and over again.
But I'm about to start ranting about Unicode, so I'll shut up now... ;-)
I agree but I'd like the generic implementation of that. JSON schema never really took off and I believe that part of the reason is that there's not a way to indicate what type might be contained in a string or number (I'm okay with JSON booleans and null). As ugly as it could get, adding XML Schemas to XML documents did in fact help the parser.
The reason I said I'd like the generic version is that there are other types that we use consistently. There's a very nice RFC available for telephone numbers that we've started following, and we can marshal/unmarshal pretty easily from strongly typed languages (where we control the code), but wouldn't it be nice if there was a standard way (within the JSON) to let systems know it was a telephone number?
I dunno, JSON Schema seems to have certainly found a lot of traction where it makes sense--web responses, etc. That may to some extent be a POV thing, as I work on a lot of OpenAPI-consuming and -producing systems myself, but I never really have to dig around to find schemas and the like for stuff I really care about.
VSCode even has JSON Schema support built in, which is cool. I use that a lot.
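For dates specifically, even a tiny schema fragment like this gets you editor hints and validation (a sketch; note that "format" is only an annotation unless your validator opts into enforcing it):

    {
      "type": "object",
      "properties": {
        "arrival": { "type": "string", "format": "date-time" }
      },
      "required": ["arrival"]
    }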
> the objects being mapped to implicitly are the schema
Dynamic languages don't have an enforced schema. You can do it manually, but a schema is easier, special-purpose, declarative.
Plus, data schema languages are typically far more expressive than programming languages. e.g. java doesn't even have nonnullable (which C had, as non-typedef structs). They're closer to a database schema.
a) no mandatory quotes for dicts keys
b) date and time intervals in iso 8601 format.
c) optional type-specifiers for strings, so we can add e.g. ip4 and ip6 addresses. Eg. { remote: "127.0.0.1"#ip4 }
e.g.
{ guests: 42, arrival: @2020-02-17T17:22:45+00:00, duration: @@13:47:30 }
Name "JSON" would be weird, because it wouldn't be JavaScript-compatible syntax anymore. I know, that few people would agree with me, but I would propose `new Date(2020, 2, 17, 17, 22, 45)` syntax, even if nobody uses `eval` to parse JSON, keeping historical heritage is important. And if you need timezone, something like `Date.parse("2011-10-10T14:48:00.000+09:00")` could be used. Now it's not a real constructor or function calls, it's just a syntax, but it's still compatible with JavaScript.
18.2.2020 is easy if you recognize that 2020 must be a year and 18 can't be a month. But if you want to parse 18.2.2020, you probably want the same parser to handle 1.2.2020.
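i.e. the parser has to commit to an ordering convention up front instead of guessing per value. A rough sketch (TypeScript, day-first by convention):

    // Treat "D.M.YYYY" as day-first; guessing per value breaks on
    // dates like 1.2.2020 where either order is plausible.
    function parseDotted(s: string): Date | null {
      const m = /^(\d{1,2})\.(\d{1,2})\.(\d{4})$/.exec(s);
      if (!m) return null;
      const [, d, mo, y] = m.map(Number);
      return new Date(Date.UTC(y, mo - 1, d));
    }

    parseDotted("18.2.2020"); // 2020-02-18
    parseDotted("1.2.2020");  // 2020-02-01 (day-first), not 2020-01-02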
To be universally applicable it would have to support at least date, time, time zone and time zone database version (since time zone definitions keep changing). You would then have to define how each of these look in a backward- and forward-compatible manner and define what every combination of them means. For example, a time and time zone but no date or time zone database could mean that time of day in that time zone on any day, using the time zone database active at that date and time. Not saying it can't be done, but it's a big job.
Unix time still has issues. It officially pauses for a second when leap seconds happen. You can't actually calculate a delta in seconds between two unix timestamps without a database of when the leap seconds were.
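Concretely, around the leap second at the end of 2016 (unix-style time, which is what JS Date uses, pretends 23:59:60 never happened):

    const before = Date.UTC(2016, 11, 31, 23, 59, 59); // 2016-12-31T23:59:59Z
    const after  = Date.UTC(2017,  0,  1,  0,  0,  0); // 2017-01-01T00:00:00Z
    const deltaSeconds = (after - before) / 1000;      // 1 -- but 2 SI seconds
                                                       // actually elapsed, because
                                                       // 23:59:60 was inserted in between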
On calculating a delta, isn't there the exact same problem with UTC timestamps? Unless one of the ends of your delta is the exact 23:59:60 moment, there's no way to account for possible leap seconds in the middle of your range without just having a list of them.
Totally! Just pointing out that unix timestamps don't solve everything (even before getting to relativity).
International Atomic Time (TAI), which differs from UTC by 37 seconds since it doesn't count leap seconds, solves everything I know of. Although the clocks aren't in a single reference frame, the procedure for measuring their differences and averaging them to define TAI is well defined and so sets an objective standard for "what time is it on Earth".
Presumably you mean unix time as a numeric scalar in the JSON. That is still not self-describing - is it time, or just a number? Which scalar data type should your parser use? And is it seconds since epoch or milliseconds since epoch?
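e.g. (TypeScript, made-up timestamp):

    const ts = 1582996965;             // seconds since epoch? milliseconds? the JSON can't say
    new Date(ts).toISOString();        // "1970-01-19T07:43:16.965Z" -- oops, treated as ms
    new Date(ts * 1000).toISOString(); // "2020-02-29T17:22:45.000Z" -- what was meant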
It should be maintained as a numeric scalar until you are going to do something with the value... and if you are going to do something with the value, you should know if it is a date or not.
JSON isn't meant to be self-describing format. There is JSON schema or the like if that is what you are after.
And yet to a very large extent, it is. Strings, numbers, booleans, arrays, and associative maps made the cut. Timestamps would be a pretty reasonable addition. It would certainly cut out all the controversy here.
Then it is a complete non-starter for "just use ______". I work on software that requires millisecond precision (honestly it would benefit from even greater precision) at the transport layer. It's not even really doing anything spectacularly complex or unusual. Seconds simply aren't sufficient for tons and tons of use cases.
"Fortunately, we now have teleconferencing to take our minds off of it."
"Does anyone have an HDMI to DisplayPort adapater on them? What's the phone number for this conference again? What? You only have Slack installed? No, we decided to start using Microsoft Teams for these. Can you speak up? I'm having trouble hearing you. Maybe try dialing in again?"
We had to contact our desktop support group to program the bridge number into the conferencing telephone we use for our stand-ups every day because they'd disabled the ability to program it from the touch-screen and keypad. Cisco VoIP phones now have echo cancelling that I think is as good as the old Polycom Star Trek phones, but having them managed by CCM may be a step backwards.
P.S. We're still manually dialing every day while waiting for our ticket to be processed.
Perhaps home printers are a wasteland, but the copy/printers and plotters at my work are pretty dang reliable, considering they do thousands of pages or thousands of linear feet a day sometimes.
Indeed, but this is the "easy" case, and I suspect your IT guys/gals spent a considerable amount of time setting up and shaking the bugs out of this setup.
It's not that printers are physically unreliable, although they certainly can be. It's the logical complexity of getting an image (or text) from one device via the myriad protocols, connectors (or wireless), page layout languages, etc., correctly onto the printer. And for added fun, even after RMS's long trek, some printers still aren't open enough to drive without secret software.
It reminds me of typical code written by scientists.
IMHO that's where the gap between raw intelligence and software engineering skills (including good practices) is widest. As a result, most of the code ends up as code-golf-style contraptions, incomprehensible to anyone else (including their future selves).
Full disclaimer: I used to be that guy who liked "clever hacks". Now I try to make code readable in the first place (largely thanks to Python philosophy).
I remember in college, even Python often turned into "code golf" style competitions. Could we collapse this 10-line block into an incomprehensible nested list comprehension? Why not!? Surely 1 line is better than 10 lines!
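The same temptation exists outside Python; e.g. in TypeScript the equivalent party trick looks like this (made-up example):

    const xs = [2, 3, 5];
    const ys = [6, 9, 10, 15];

    // One line! Surely better than the loop... right?
    const hits = xs.flatMap(x => ys.filter(y => y % x === 0).map(y => [x, y]));

    // The ten boring lines it replaced:
    const hits2: number[][] = [];
    for (const x of xs) {
      for (const y of ys) {
        if (y % x === 0) {
          hits2.push([x, y]);
        }
      }
    }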
Later in college when we started doing more team projects, comprehensible code became much more important. No one had the time to understand someone's doubly-nested list comprehension. We just wanted to finish the project and get on with life. In this sense, I think the moment you read someone else's shitty code is the moment you realize that you need to write good code yourself.
Still, I'd say it was good practice and helped me think about code in different ways.
There is an element of relativity to this though. I guarantee that there are programmers and teams for whom nested list comprehensions are much more comprehensible than 10 lines of loop code that could be doing anything and have to be carefully analyzed to be understood. This holds for just about any programming technique that you might encounter.
Of course, Kernighan's law still applies, it's just that the definition of "too clever" recedes with skill and experience.
A counterexample: I saw a blog post by a CS professor picking apart a single-line implementation of the Sieve of Eratosthenes. She points out that the first clue is that its performance is bad, like really really bad, and then shows it's not actually implementing the Sieve of Eratosthenes.
I might just be dumb but I've been bitten so many times I'm dubious anything can be verified by inspection.
The most brilliant programmer I've ever worked with wrote hardly comprehensible code. Since they could understand others' code easily, they didn't feel the need to write readable code because, in their words, "What do you mean you don't understand it? It compiles, it runs, it works, you don't need more than that".
Some parts of Python are good for readable code, some are not.
I bemoan lack of nice chaining. Some libraries provide APIs that allow chaining, e.g. Pandas. In other cases, I often find JavaScript (map/filter) and R (dplyr pipe operator) pipelines nicer to read.
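e.g. the kind of pipeline I mean (TypeScript, made-up data):

    const users = [{ name: "Ada", active: true }, { name: "Bob", active: false }];
    const topNames = users
      .filter(u => u.active)  // keep only active users
      .map(u => u.name)       // project to names
      .sort()                 // reads top to bottom, one step per line
      .slice(0, 10);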
However, the Zen of Python can (and IMHO should) be adopted by other languages.
One grad student friend of mine traded me a case of beer for a Saturday of help on his code. It was some code to record spike timings in the olfactory cortex and also control valves (smelly research).
By the 12th nested for-loop, I gave him back the beer.
I was helping a guy with some text-based game he was working on that had some pretty clever bits of code for constructing scene description text.
The problem was that once in a while you'd just get some garbled text on the screen.
Eventually I figured out why: Some of the sentence fragments were malloc'ed, others were returned from the stack. Once the description got over a certain number of bytes he was smashing his own stack. I say 'his' but the problem was introduced by an earlier collaborator.
Buddy, I really want to help you with this but I (much younger me) am not up for unwinding a use-after-free bug of this level of recursion. Good luck, I'm out.
I thought this story was going to end with you consuming the beer on the spot.
On several occasions, and with lesser crimes, I've told the person something like "I think you are confusing yourself with your own code. I want you to change to meaningful variable names (stop recycling variables) and factor out a couple of child functions here, and here. If you still can't see the problem, come get me and we'll try this again."
> I thought this story was going to end with you consuming the beer on the spot.
Haha, no way. Grad students are poor as is. Drinking a month's beer budget in front of the guy would be bad, leaving him with that code was cruel enough.
Twelve nested loops: does that count as the not-so-smart kind of code?
Less code-golf, more hackathon-style "one more copy-paste and it should work". It has some use cases, just readability or maintainability are not among these.
This. I'm not saying I know much about good software engineering practices, but I used to be interested in amateur astronomy and spent time translating Fortran and BASIC code written by astronomers to C, and it was awful bordering on enraging. Several years ago I discouraged someone who called me about a job supporting supercomputing applications for scientists - perhaps subconsciously due to this prejudice.
I am one of those who moved from Perl to Python. As a Perl coder I loved writing clever hacks. After a few months of Python I began appreciating simplicity and elegance in code.
"The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. " - EWD 340 [1]
I've always found this one a bit peculiar - usually debugging is about looking at something on a simpler, lower level than implementation. Rather than looking at a whole set of logic, let's step through it piece by piece, look at each operation. When done slowly with care and understanding of the lower level components in the system, debugging code is simpler than learning and implementing most architectural patterns in all but the most extreme of circumstances (compiler bugs, hardware faults).
Or maybe I just spent too much time in OllyDbg at a young age to notice.
As others have pointed out, while it's easy at a fundamental level, it's not easy if you're dealing with actually complex (and clever) structures.
I don't know if you've ever had that experience in your OllyDbg-using days, but did you ever try to analyse software that was using some form of hashing function, or cryptography (ECDSA, for instance) with OllyDbg, without knowing what it was? The idea is kind of like that. While it's easy to get a concrete idea of what that block of code is doing (feed it an ASCII string and some memory structures, it spits gobbledygook out), it's not so easy to look at it and conclude "Ha! This implements AES-CBC!" without some strong intuition or experience.
I will note that it doesn't actually take all that much reverse engineering experience to be able to recognize most algorithms if you know how they work "in code".
I will usually plop a breakpoint at where the problem is presenting itself (often in a controller or view), scan up the stack to find where I think the problem is actually being caused, then break at that point and walk through the code as you suggest.
If I understand the application really well often I don't need to do the first step, but it can be a handy shortcut.
I think the law could be put more generally: Specifying how to do X using Y tool-set is half as hard as discovering the odd corner case where Y tool-set breaks down.
So, yeah, specifying an architecture may be easier than debugging a routine but discovering the "hole" in the architecture can be twice as hard as specifying it.
Yes. Because if you have to use a debugger to understand a piece of code (your own or someone else's), the author has already failed. Good code can be read and understood without any additional tools.
For complex code you cannot understand it in the debugger. You can understand how it works, but you can't get the design of why it works (or should work if it had been implemented to the design)
I don't know, if I can debug code on my workstation, that does not seem extremely difficult. Time consuming, if anything, but I feel like anyone with sufficient grit could do that, regardless of education.
Tracking down a bug and fixing it with a one-liner does not feel like productive programming (though it brings value to the business).
If I can't debug code on my workstation, e.g. it's running on Kubernetes with service mesh and a cool feature must be added, that's when all hell breaks loose. For example, debugging why Kubernetes deployed on OpenStack will push an image to its internal repository but won't pull from it, that is hell and I'd much rather debug HTTPD than that.
For simple bugs, yes. However, if the bug is a one-in-a-million race condition between threads, where the lock is held/released at the right time except for one place where it isn't even obvious the other thread can write the data, it's another story. This is particularly bad if there are performance considerations and the design ensures some places that look like a race actually are not, so a lock would be a bad thing. Without fully understanding the complex design you cannot add the needed lock without adding unneeded locks that kill performance.
There are limits you can place on the compiler. There are atomics in the compiler. There are times when you know you are safe because there is a future write that is memory barrier protected so even though there is a potential race, before the other threads can read it this thread will write something else. There are times where you know the other thread isn't started yet. There are platform specific ways to flush all the caches long after the race in question which can be tied to something else that is synchronized across threads...
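Roughly the shape of that "a later, barrier-protected write publishes the earlier plain writes" argument, sketched here with JavaScript's SharedArrayBuffer/Atomics rather than the C/C++ presumably under discussion (an illustration with made-up names, not anyone's actual code):

    // Shared between a writer thread and a reader thread (e.g. Web Workers).
    const shared = new Int32Array(new SharedArrayBuffer(8));
    const DATA = 0, READY = 1;

    // Writer: the plain write on its own would be a race...
    function publish(value: number) {
      shared[DATA] = value;             // plain (unordered) write
      Atomics.store(shared, READY, 1);  // ...but this seq-cst store publishes it
    }

    // Reader: only touch DATA after observing the READY store.
    function consume(): number | undefined {
      return Atomics.load(shared, READY) === 1 ? shared[DATA] : undefined;
    }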
> There are times when you know you are safe because there is a future write that is memory barrier protected so even though there is a potential race, before the other threads can read it this thread will write something else.
Often it is difficult to prove this to the compiler.
I don't need to prove it to the compiler. I can prove it by a higher level understanding of my own system. So long as the compiler emits the memory barrier when I need it to, and the other thread stays in the state where it won't be reading the memory in question it doesn't matter what the compiler knows.
How do you mean? On x86 even if you have the assembly, you don’t know what order loads and stores will execute in. Depending on the architecture there will be some constraints, but usually not enough to guarantee a particular order. That is why barriers exist.
Memory barriers (also cache control mechanisms) are documented by the architecture, and are exposed as instructions or similar hardware interfaces. Obviously OO execution exists, but it's not a magical insurmountable fence (see what I did there?) that blocks all optimization. It has well-specified rules on every architecture. If it didn't, concurrent code couldn't be written correctly at all.
And on x86, the rules amount to "the processor will expend herculean effort to make sure that all memory orderings look identical to all readers in the system" (with a few oddball exceptions, read the SDM, yada yada). Really, on x86 you pretty much don't have to worry about this. If two memory operations happen in a given sequence in the disassembly, literally everyone who reads those locations will agree they happened in the order presented, no matter what optimizations the hardware might have done internally.
Sure, as long as DMA or SMP isn’t involved. And I’m not sure why you are bringing up optimizations here? Out of order memory access is itself a fairly important optimization, why would it preclude optimization?
On x86 the architecture guarantees memory read/write order. They are not allowed to make some important optimizations because it would change observed memory read/write. DMA or SMP makes no difference to this.
On most other architectures the CPU can and will change memory read/write order. As a result there is a lot of multi-threaded code that works correctly on x86 that fails when run elsewhere.
FWIW: this gets argued about occasionally, but consensus seems to be that the cited line in the SDM is documenting a misfeature on an older CPU (though the details escape me about which it is). That effect is, IIRC, not observable on current hardware.
Maybe. I’ve implemented a ring buffer that is used between two virtual machine domains. There were a few places where barriers were needed. If they were removed the ring buffer would start corrupting data. These barriers are in addition to the many obviously needed compiler barriers.
No real world compiler exists for that regime that doesn't provide appropriate control over memory ordering. But yes, the problem is difficult and you have to use those features.
> if I can debug code on my workstation, that does not seem extremely difficult. Time consuming, if anything,
I find debugging saves time overall. I don't have to guess about anything, I can see it working or not working, quickly run one liners to confirm. Sometimes getting an application into a certain state takes a bit of wrangling too, and a debugger helps save time by not having to constantly set up that state to see how things have occurred. You can just sit on a breakpoint while you deduce what's going on. A bug that happens at checkout success is a good example. Going through checkout over and over again is time consuming.
> debugger helps save time by not having to constantly set up that state to see how things have occurred
Absolutely. I found out very quickly in my career that I can either stare into the code for hours to see the tiny mistake, or spend 10 minutes stepping through the code to let it tell me what's wrong.
But, it is of course more difficult if you have to debug a complex code base that you don't know. It might take you hours to even know where to place debuggers. So, it can be time consuming, but of course it's still orders of magnitude faster than staring into the code, trying to see where the issue is.
> I don't know, if I can debug code on my workstation, that does not seem extremely difficult.
If you wrote the code, sure. Then you have the context of each step you took to iterate the code to what's there now, with some insight in to where the issue might lie.
Take someone else's code that's the final, non-working version and it's a different story. Debugging is hard.
You're ready for a promotion from software engineer to senior software engineer once you've spent hours debugging some piece of software, only to realize it's actually working as intended.
I agree it's hard to build a simple solution to a complex problem, but not necessarily wrt what we normally talk about as problem solving skills in programming.
What I mean is the hard part is stepping back and actually solving the real problem in the best way. That might be utterly trivial code wise, it might be brute force vs optimal. It might be just solving a readily parallelisable problem on a single thread. It might be a unilanguage monolith instead of microservices in the 'right language for every task', etc.
The smartness involved in making code simple (and I mean simple, not elegant) is more about pragmatism, lateral thinking, discipline and focus on end goals than any sort of analytical smartness.
The intelligence required to write a piece of code =/= the intelligence required to understand it. Usually the second is a lot easier. Writing a fancy Haskell typeclass to structure a problem might require deep expertise or creativity. Working with such an existing solution isn't as hard.
Even in math or science it's harder to create something new than it is to merely imitate it.
It's easier to write code than to read it. This isn't a proven fact, but it is generally accepted by many. It doesn't seem obvious at first why, but "reading code" isn't like "reading".
It's also called the law of "Uphill Analysis, Downhill Invention" which states "It is much more difficult to guess the internal structure of an entity just by observing its behavior than it is to actually create the structure that leads to that behavior."
(There is a blog-post version behind this link as well, in case anybody wanted to skim the contents rather than watch video of the entire talk to find out how it fits in.)
Fancy algorithms or one liners are not all that difficult to understand, because all the information is right there. It's when you're writing convoluted code inside a big system that issues of understanding begin to arise, because the incoming developer is unlikely to have all of the context that made the code make sense to the person who wrote it.
It's hard to write a concise example because it really takes a big application to make the point, but consider:
featureFlag(10, false);
processOrder();
What's flag 10? I have to go figure it out, wastes time, oh turns out it was to disable tax, why are we doing that? Okay in the function that called this one we checked if we were processing a subscription order.
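Even a named constant plus a "why" comment (hypothetical names, obviously) would have saved the trip up the call stack:

    const DISABLE_TAX_FLAG = 10;          // hypothetical name for flag 10
    featureFlag(DISABLE_TAX_FLAG, false); // the caller checked for a subscription order first
    processOrder();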
I would argue that in order to fully and truly understand some code you have to be able to re-create it from scratch. E.g. to explain weird constants, data structures, design decisions, estimations on complexity. Hence, in my eyes, the intelligence required to perform those tasks are the same.
But, there is a backdoor. Solving problems needs immense creativity. Not everyone that easily grasps a piece of code (a domain expert, savant, MIT wizard) could have written it. To my eyes, that doesn't mean that the savant is necessarily of inferior intelligence, but likely less creative in this area.
Yeah, no. This is not imitating something. This is troubleshooting something that does not behave as expected. The issue is that it works in the normal case/flow but it does not in other edge cases. You're in trouble when you no longer understand why you did things in a certain way back when you thought you were "clever".
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
(Brian Kernighan)
The rest of the laws are interesting, so I’d recommend taking a look as well..
A related interpretation: your debugging tools need to be at least as good as your programming tools. Ideally, better.
Debugging a K8s cluster with print statements is hopeless, but if the cleverest code you can write in a dumb editor is going through a good test suite and profiler, you might be fine. How many as-clever-as-possible optimization tricks have been made viable by Valgrind?
I write every piece of code as if someone else is going to have to be able to use it later, even if it's just for me. And I find remarkable overlap between the clarity that some random person would need and the clarity that future-me ends up needing.
This law is actively biting me in the ass right now as we speak ;)
Gotta love T-SQL, where hard things are easy, easy things are hard, and one must perpetually choose between extreme cleverness and extreme verbosity because there is absolutely no middle ground.
> Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
The expression "by definition" is misplaced here. The first proposition "Debugging is twice as hard as writing the code in the first place" is not a definition. At best, we can treat it as an axiom. In any case, the conclusion "if you write the code as cleverly as possible, you are not smart enough to debug it" does not follow by definition.
The meaning of what the quote is saying is clear. It is also clear how the logical reasoning is supposed to be. My - admittedly pedantic - point is that the choice of words is incorrect.
I worked in a technical support department for a long time. I often was the guy between the customer (the technical customers) and the engineering teams.
After a while I figured out pretty good ways to manage my interactions with the engineering teams, despite having no coding experience or really any visibility into the code.
How I worked with Development and Continuation teams was DRAMATICALLY different.
When working with "Development Engineering" (they wrote new code) I always asked them:
"What does X, Y, Z do?"
Effectively I always asked them what they thought the code "should" do and what the results should be. There was no point in presenting them with "OMG it doesn't do the thing" because they would just panic and get defensive / shut down. Rather, I understood they knew what their code does (well, what they thought it did), and asking them that was the key to get them talking / sharing.
After I had their words and phrasing I could better present "Hey we see A, B, C under D conditions. I expected to see X, Y, Z." That would get a lot more buy in from those folks.
I also had to avoid some of the more obvious "hey it does G" where G would obviously break the feature entirely... they often didn't understand use cases (one guy I'm pretty sure didn't even know what the product did, but he could program an ASIC for sure..) so you had to be careful about spelling out customer experiences and rope in a program manager if you felt there was a fundamental problem.
When working with Continuation Engineering (bug fixes, etc.):
These guys were much more receptive to the customer's story on how the customer was using the code / equipment, and you could much more quickly present "Seeing A, B, C, under D conditions. So then I changed L and got M..." and so on. Continuation Engineering grokked the meaning of why someone would change L... and so it was helpful. Continuation didn't often know what the code "should do" (or they weren't confident in their understanding), so just saying "Saw A... that's not right" meant nothing to them. In contrast, Development Engineering needed to talk about the happy path first.
Note that much of this occurred after I gave them a good writeup / heads-up of all the information I had (even if they didn't read it, I never kept anyone in the dark).
It got to the point that if a panicked support call came in and it was end times and someone gave engineering a heads up, they would ask it be assigned to me if I was in the office. Technically I was far from the best tech, but I could talk to engineering.
It was telling how much of a difference there was between debugging and initial development.
Pretty much, just warming them up to get them into debug mode ;)
For me, the customer and I all thought we knew what it should do, but it's always good to spell it out for everyone so we all understand when (or even if) we're seeing an exception.
Lots of "Woah hey is that what the protocol really says?" moments too.
Hit a performance cliff and rewrote a simple divide-and-conquer multithreaded application into a two-thread solution using hand-rolled map and list structures using optimistic concurrency with atomics and paying attention to dining philosophers. Didn't work first time, funnily enough.
For about a day I was terrified that it would never work. Then I found my mistakes (two).
Except in emergencies like this, my time is much better spent writing "inefficient" yet easy to develop code that everyone can understand.
This highlights the importance of writing clean, legible, and well-thought-out code but is also an excellent example of hyperbole and false statistics. I've debugged my code consistently for 15+ years (which if you were to take this as law, would imply I doubled my intelligence with a crazy-high compounding rate, and while I enjoy my ego, that's a bit too much for me).
This is one of the things I learned to really appreciate about Go. Code readability and comprehensibility is way better than any other language I've ever used.
Eh... Not denying the law is correct, but this kind of thinking makes more sense when you're not doing some cloud SaaS solution that can have daily changes due to business needs and continuous deployment. Code can get complicated over time because business moves fast and there isn't time to do everything perfectly. It isn't always people trying to be too clever by half.
Dealing with that at work right now. We got super clever, changed a couple times, tried to be even more clever, and engineering gets to pick up the mess and try to make it all work.
True, but that's too many characters for HN's title field. Which might be why it was submitted with the editorialised version.
> /s/it/your cleverest code
massive nitpick but you should be suffixing rather than prefixing the regex with a slash:
s/it/your cleverest code/
As I said, this is a massive nitpick; literally the only reason I pointed this out was because the submission was about debugging code and I liked the irony of debugging someone's post about debugging code :)
(there is another law somewhere about how people who nitpick are prone to making mistakes in their corrections, thus getting nitpicked themselves -- I'm hoping I'm not an exception hehe)
Another way to think about this: The more steps that exist between the developer and the end product the less likely the developer is to care or identify whether their solution is clever.
Moyer's Law: "When we can finally print from any computer to any printer consistently, there will be no CS problems left to solve."
Since every good law has a corollary, I created one of those too (simply to be funnier):
The Corollary to Moyer's Law: "The printed dates will still be wrong."
While PDFs have brought us closer to "universal printing", I won't claim we're anywhere close to solving all CS problems. Sadly, date conversion and formatting continue to be problems (hint, consider UTC ISO-8601 or RFC3339 dates/times for the JSON representation).
P.S. I don't actually think I'm smart enough to have a law named after me ... nor have I really contributed enough to "our art".
- https://en.wikipedia.org/wiki/Gray_code
- https://www.electronics-notes.com/articles/radio/modulation/...