The most important pattern to learn in C is to allocate a giant arena upfront and reuse it over and over in a loop. Ideally, there is only one allocation and deallocation in the entire program. As with all things multi-threaded, this becomes trickier. Luckily, web servers are embarrassingly parallel, so you can just have an arena for each worker thread. Unluckily, web servers do a large amount of string processing, so you have to be careful in how you build them to prevent the memory requirements from exploding. As always, tradeoffs can and will be made depending on what you are actually doing.
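A minimal sketch of the per-thread arena idea, assuming a fixed-size backing buffer and bump allocation (the names and sizes are just illustrative):

#include <stdio.h>
#include <stdlib.h>

/* One big upfront allocation per worker thread, reused across requests. */
typedef struct {
    unsigned char *base;
    size_t cap;
    size_t used;
} Arena;

int arena_init(Arena *a, size_t cap) {
    a->base = malloc(cap);                    /* the one allocation */
    a->cap = cap;
    a->used = 0;
    return a->base != NULL;
}

void *arena_alloc(Arena *a, size_t n) {
    n = (n + 15) & ~(size_t)15;               /* keep returned pointers 16-byte aligned */
    if (a->used + n > a->cap) return NULL;    /* out of arena space */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

void arena_reset(Arena *a) { a->used = 0; }    /* O(1) "free everything" */
void arena_free(Arena *a)  { free(a->base); }  /* the one deallocation */

int main(void) {
    Arena a;
    if (!arena_init(&a, 1 << 20)) return 1;    /* e.g. 1 MiB per worker thread */
    for (int request = 0; request < 3; request++) {
        arena_reset(&a);                        /* reuse the same memory every iteration */
        char *buf = arena_alloc(&a, 256);
        if (buf) snprintf(buf, 256, "response for request %d", request);
    }
    arena_free(&a);
    return 0;
}

Each worker thread owns one Arena; resetting it at the top of the request loop is the "reuse it over and over" part, and the string-processing concern is just making sure per-request output fits the cap you chose.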
Short-run programs are even easier. You just never deallocate and then exit(0).
Most games have to do this for performance reasons at some point, and there are plenty of variants to choose from. Rust has libraries for some of them, but in C rolling it yourself is the idiom. One variant I used in C++, which worked well as a retrofit, was to overload new to grab the smallest chunk that would fit the allocation from banks of fixed-size chunks. Profiling under load let the sizes of the banks be tuned for efficiency. Nothing had to know it wasn't a real heap allocation, but it was way faster and had zero possibility of memory fragmentation.
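Roughly the same idea in plain C (the version I used was a C++ operator new overload; the bank sizes and counts here are invented and would be tuned by profiling under load):

#include <stdlib.h>

enum { NUM_BANKS = 3 };
static const size_t bank_size[NUM_BANKS]  = { 64, 256, 1024 };
static const size_t bank_count[NUM_BANKS] = { 1024, 256, 64 };

typedef struct Chunk { struct Chunk *next; } Chunk;   /* intrusive free list */

typedef struct {
    Chunk *free_list[NUM_BANKS];
    unsigned char *backing;
} Banks;

int banks_init(Banks *b) {
    size_t total = 0;
    for (int i = 0; i < NUM_BANKS; i++) total += bank_size[i] * bank_count[i];
    b->backing = malloc(total);                        /* one big upfront allocation */
    if (!b->backing) return 0;
    unsigned char *p = b->backing;
    for (int i = 0; i < NUM_BANKS; i++) {
        b->free_list[i] = NULL;
        for (size_t j = 0; j < bank_count[i]; j++) {
            Chunk *c = (Chunk *)p;                     /* carve the bank into chunks */
            c->next = b->free_list[i];
            b->free_list[i] = c;
            p += bank_size[i];
        }
    }
    return 1;
}

/* Hand out the smallest chunk that fits; callers never need to know it isn't malloc. */
void *banks_alloc(Banks *b, size_t n) {
    for (int i = 0; i < NUM_BANKS; i++) {
        if (n <= bank_size[i]) {
            Chunk *c = b->free_list[i];
            if (!c) return NULL;                       /* that bank is exhausted */
            b->free_list[i] = c->next;
            return c;
        }
    }
    return NULL;                                       /* request too big for any bank */
}

void banks_free(Banks *b, void *ptr, size_t n) {
    for (int i = 0; i < NUM_BANKS; i++) {
        if (n <= bank_size[i]) {                       /* same bank banks_alloc picked */
            Chunk *c = ptr;
            c->next = b->free_list[i];
            b->free_list[i] = c;
            return;
        }
    }
}

Because every chunk in a bank is the same size, freeing and reallocating can never fragment the heap; the worst case is simply running a bank dry, which profiling is meant to prevent.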
Most pre-2010 games had to. As a former gamedev from after that period, I can confidently say that it is a relic of the past in most cases now. (Not that I don't care; I just don't have to be that strict about allocations.)
Yeah. Fragmentation was a niche concern of that embedded use case. It had an MMU; it just wasn't used by the RTOS. I am surprised that allocations aren't a major hitter anymore. I still have to minimize/eliminate them in Linux signal processing code to stay realtime.
The normal practical version of this advice, the one that isn't a "guy who just read about arenas" post, is that you generally kick allocations outward; the caller allocates.
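In code, that convention looks something like this (render_greeting is a hypothetical function; the shape just mirrors standard caller-allocates APIs like snprintf):

#include <stdio.h>

/* The callee never allocates: the caller passes a buffer and its capacity and
   decides where that memory came from (stack, arena, malloc, ...). */
int render_greeting(char *out, size_t cap, const char *name) {
    return snprintf(out, cap, "hello, %s", name);   /* bytes needed, excluding the NUL */
}

int main(void) {
    char buf[64];                                   /* caller picks the storage */
    int needed = render_greeting(buf, sizeof buf, "world");
    if (needed >= 0 && (size_t)needed < sizeof buf) puts(buf);
    return 0;
}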
> Ideally, there is only one allocation and deallocation in the entire program.
Doesn't this technically happen with most of the modern allocators? They do a lot of work to avoid having to request new memory from the kernel as much as possible.
Just because there's only one deallocation doesn't mean it's run only once. It would likely be run once every time the thread it belongs to is deallocated, like when it's finished processing a request.
Except this is an article on how to perform technical politics in large organizations. Functional, intelligent, non-nepotistic leadership is the exception, not the rule. It has been this way for a long time, perhaps forever. Dilbert became one of the most circulated comics for good reason. This article is the third guy.
Pretending that identifying stakeholders' needs, communicating the solutions, and delivering them are the keys to succeeding in corporate politics is a joke. It's our parents telling us that we need to be good for Santa Claus. Human politics is an enormously deep subject, and a newbie will get trampled every single time. If you are sitting at a poker table and don't know who the sucker is within five minutes, congratulations, you are that sucker.
> Pretending that identifying stakeholders' needs, communicating the solutions, and delivering them are the keys to succeeding in corporate politics is a joke.
I don't think that's actually true. Identifying the stakeholders' needs is absolutely something that will lead to success in corporate politics. Just don't expect their needs to be about building decent products.
> Functional, intelligent, non-nepotistic leadership is the exception
The majority of marriages end in divorce. This doesn't mean that I should treat all prospective partners as someone I will eventually divorce. That is not healthy for me, the people I interact with, or my future.
Survivorship bias. Get burned a bunch of times and see where your strategy lies. You'd be a fool to keep sinking all your effort into things that devastate your life time and again.
If I kept getting burned, I might think about the types of people I get into relationships with, the type of things I'm learning while dating, and I might talk with friends to see how their relationships are going and see if I could be doing something different. I don't think I would start telling everyone in a relationship to prepare for divorce.
I've been divorced once. That in itself doesn't mean I go around telling people to not get married.
But, it does mean that I forced [1] my 2nd wife to sign a pre-nuptial agreement, and I go around recommending others to do so as well.
[1] she initially refused to sign it, I told her the wedding's off if she doesn't, so she did; she's still unhappy about this and hates me for a day whenever she's reminded of it; this was 5 years ago, we're still married and not divorcing currently; while I know it doesn't sound romantic, it was the right thing to do because people and life circumstances change _a lot_; I hope we will stay together forever and get buried next to each other, but I had the same hope with my 1st wife and then she cheated on me when my then-startup was failing, so now, much wiser, I can see a 1000 ways for such hopes to fall apart
This unhappiness that your wife has will not go away, and you will have to deal with the situation at some point. These hard conversations have a way of finding you.
I won't tell you to tear up the pre-nup, but I highly recommend coming up with a compromise (over time) that meets both of your needs.
I know what you mean; I do think about how to soften this situation often. It'd be easier if she'd be rational about it, but she's not the rational type, which is even more reason to have the pre-nup.
My man, all of us would. I talk to my wife often about this. We are just built differently, and that is what makes the opposite sex so attractive, tbh. That irrationality comes out in different positive ways, I'm sure (or you wouldn't have married); you can't just turn it on and off (unfortunately).
> The majority of marriages end in divorce. This doesn't mean that I should treat all prospective partners as someone I will eventually divorce. That is not healthy for me, the people I interact with, or my future.
You should be aware that it's a possibility and act accordingly. Pretending divorce is impossible is what's unhealthy; preparing for the possibility will make for a healthier marriage and a better future, whether you ultimately divorce or not.
This is pedantic, but if I understand correctly, this is not true anymore. Moreover, this number is inflated by a set of people getting divorced multiple times.
Respectfully, what you should do is first make sure you don't live somewhere that recognizes common law marriage, then commit to that person without actually legally marrying her.
If your C-suite is idiotic or nepotistic, you can absolutely still influence them with good politics; you just need to understand their incentives and frame your arguments that way. You need to understand that you're not playing meritocracy and get your outcomes done in the system you are playing.
In this case that means being in that golf game, or figuring out how you can use corruption to get good outcomes done.
Or, more likely if your moral compass is sound, quit and find an organisation that isn’t like this.
While I agree with you that much of the corporate world does behave this way, companies where founders are still around don't, because they're mission-driven.
We have already removed so much liability in other licensed professions, either through law or practice. It's almost impossible to obtain a malicious prosecution or SLAPP judgement against attorneys. Malpractice lawsuits have been legally capped all over the USA against doctors. Engineering liability is a joke outside of rare circumstances.
The social will for having real accountability and professional ethics just isn't there. If you license software, large companies will just outsource the stamps to small contractors who will legally assume responsibility while nothing else changes. All real accountability will be so sparse and random that all ethical complaints will be ignored. If this liability becomes large enough to affect anything, all tech companies will band together to bribe politicians to limit legal remedies in the laws themselves.
In the end, the only thing that will happen is that large companies will use these regulations to bludgeon smaller competition like they already do.
How are you going to get around Griggs v. Duke Power Co.? AFAIK, personality tests have not (yet) been given the regulatory eye, but testing cognitive ability has.
I have thought about doing this and I just can't get around the fact that you can't get much better performance in JS. The best you could probably do is transpile the JS into V8 C++ calls.
The really cool optimizations come from compiling TypeScript, or something close to it. You could use types to get enormous gains. Anything without typing gets the default slow JS calls. Interfaces can get reduced to vtables or maybe even straight calls, possibly on structs instead of maps. You could have an Int and Float type that degrade into Number that just sit inside registers.
The main problem is that both TS and V8 are fast-moving, non-standard targets. You could only really do such a project with a big team. Maintaining compatibility would be a job by itself.
At least without additional extensions, TypeScript would help less than you think. It just wasn’t designed for the job.
As a simple example - TypeScript doesn’t distinguish between integers and floats; they’re all just numbers. So all array accesses need casting. A TypeScript designed to aid static compilation likely would have that distinction.
But the big elephant in the room is TypeScript’s structural subtyping. The nature of this makes it effectively impossible for the compiler to statically determine the physical structure of any non-primitive argument passed into a function. This gives you worse-than-JIT performance on all field access, since JITs can perform dynamic shape analysis.
It's advertised as that, and it's a cool project, but while it's definitely a statically typed language that reuses TypeScript syntax, it's not clear to me just what subset of the actual TypeScript type system is supported. That's not necessarily bad, since TypeScript itself is very unclear about what its type system actually is. I just think the tagline is misleading.
Probably a better way to think about AssemblyScript is first as a DSL for WASM, and second as providing a subset of TS syntax and authoring semantics to achieve that. The type system is closer to TS than the syntax and semantics. At least that was my experience when I explored it some time back.
The tagline on the site seems to be "TypeScript-like language for WebAssembly", which makes it pretty clear to me that it's not pretending to be a strict subset or anything.
The 2019 paper[1] says: "STS primitive types are treated according to JavaScript semantics. In particular, all numbers are logically IEEE 64-bit floating point, but 31-bit signed tagged integers are used where possible for performance. Implementation of operators, like addition or comparison, branch on the dynamic types of values to follow JavaScript semantics[.]"
I think the even bigger elephant in the room is that TypeScript's type system is unsound. You can have a function whose parameter type is annotated to be String and there's absolutely no guarantee that every call to that function will pass it a string.
This isn't because of `any` either. The type system itself deliberately has holes in it. So any language that uses TypeScript type annotations to generate faster/smaller code is opening itself to miscompiling code and segfaults, etc.
It might be useful for an interpreter, though. I believe that in V8 there is a probabilistic mechanism in which, if the interpreter "learns" that an array consistently contains e.g. numbers, it will optimize for numbers and start accessing the array in a more performant way. TypeScript could be used to inform the interpreter even before execution.
(My supposition, I'm not an interpreter expert)
So - I know this in theory, but avoided mentioning it because I couldn't immediately think of any persuasive examples that didn't involve casts or any/unknown or other things that people might make excuses for (whereas subtype polymorphism is a core, widely used, wholly unrestricted property of the language).
Do you have any examples off the top of your head?
Here's an example I constructed after reading the TS docs [1] about flow-based type inference and thinking "that can't be right...".
It yields no warnings or errors at compile stage but gives runtime error based on a wrong flow-based type inference. The crux of it is that something can be a Bird (with "fly" function) but can also have any other members, like "swim" because of structural typing (flying is the minimum expected of a Bird). The presence of a spurious "swim" member in the bird causes tsc to infer in a conditional that checks for a "swim" member that the animal must be a Fish or Human, when it is not (it's just a Bird with an unrelated, non-function "swim" member).
type Fish = { swim: () => void };
type Bird = { fly: () => void };
type Human = { swim?: () => void; fly?: () => void };

function move(animal: Fish | Bird | Human) {
  if ("swim" in animal) {
    // TSC wrongly infers here that the presence of "swim" implies animal must be a Fish or Human
    onlyForFishAndHumans(animal);
  } else {
    animal;
  }
}

function onlyForFishAndHumans(animal: Fish | Human) {
  // (receives a bird, which is not a Fish or Human)
  if (animal.swim) {
    animal.swim(); // Error: attempt to call "not-callable".
  }
}

const someObj = { fly: () => {}, swim: "not-callable" };
const bird: Bird = someObj;
move(bird);
// runtime error: [ERR]: animal.swim is not a function
This narrowing is probably not the best. I'm not sure why the TS docs suggest this approach. You should really check the type of the key to be safer, though it's still not perfect.
Compilers don't really have the option of just avoiding non-idiomatic code, though. If the goal is to compile TypeScript ahead of time, the only options are to allow it or to break compatibility, and breaking compatibility makes using ahead-of-time TypeScript instead of some native language that already exists much less compelling.
> I think the even bigger elephant in the room is that TypeScript's type system is unsound.
Can you name a single language that is used for high-performance software and whose type system is sound? To speed up the process, note that none of the obvious candidates have sound type systems.
Java, C#, Scala, Haskell, and Dart are all sound as far as I know.
Soundness in all of those languages involves a mixture of compile-time and runtime checks. Most of the safety comes from the static checking, but there are a few places where the compiler defers checking to runtime and inserts checks to ensure that it's not possible to have an expression of type T successfully evaluate to a value that isn't an T.
TypeScript doesn't insert any runtime checks in the places where there are holes in the static checker, so it isn't sound. If it wasn't running on top of a JavaScript VM which is dynamically typed and inserts checks everywhere, it would be entirely possible to segfault, violate memory safety, etc.
I can't speak for the others, but Java allows assigning arrays of subtypes to variables declared as an array of a supertype, which isn't sound:
class A {}
class B1 extends A {}
class B2 extends A {}

A[] arr = new B1[1];  // allowed: Java arrays are covariant
arr[0] = new B2();    // compiles fine, but throws ArrayStoreException at runtime

In the above example, the only way that assigning an array of `B1` to a variable typed as an array of `A` is safe is if only valid `B1` objects are ever put into it, at which point there's no reason not to just type the variable as a `B1` array. It still compiles fine, though!
Because the context here is the idea of using the type system to justify removing those sorts of dynamic checks to generate better code.
The dynamic checks in the Java case are a well-defined and narrowly-targeted part of the language semantics: you get an exception on mismatched array writes, out-of-bounds access, etc., but when an expression produces a value it always matches its type.
TypeScript defers these kinds of type system violations to the underlying JavaScript engine, which makes things work out (sometimes with an exception, but sometimes just proceeding with a value that doesn't match the expression's type) using precisely the dynamism we wanted to get rid of. And this can leak out and cause arbitrarily-far-away parts of the program not to match their types, either.
> Because the context here is the idea of using the type system to justify removing those sorts of dynamic checks to generate better code
It's more specific than that; the discussion is about writing an ahead-of-time compiler, which necessarily wouldn't be running on a JavaScript engine. The compiler could just as easily emit code that always throws a runtime exception instead of emitting an equivalent to whatever the JavaScript would do.
Okay, I think I understand now. My intuition was that "soundness" refers to whether the compiler catches all invalid usage of types, and that soundness is violated if that doesn't happen; it sounds like the way you're using the term, it's measured by whether the invalid usage is caught either at compile-time or run-time, and soundness is violated only if it's not caught by any of the checks. I don't know whether my narrower understanding of soundness is incorrect or not, but it's at least more clear to me now why you grouped Java and JavaScript differently in terms of soundness.
All of these have, at the very least, escape hatches that make the type system unsound overall. And probably other issues; see https://counterexamples.org/ - I can find a few in there for at least Scala and Haskell. Perhaps this is not a satisfying answer to you, but an "unsound type system" is a technical, precise notion, and this is what people who parrot "TypeScript is unsound" are referring to. You cannot just reply "well, there are a few runtime checks, so it's all good."
> an "unsound type system" is a technical, precise notion
Yup. Milner's "can't go wrong", progress and preservation, etc.
> You cannot just reply "well there are a few runtime checks so it's all good."
Sure I can. I really like how Shriram Krishnamurthi describes soundness in Programs and Programming Languages [1]. I can't think of a better definition for soundness than:
"The central result we wish to have for a given type-system is called soundness. It says this. Suppose we are given an expression (or program) e. We type-check it and conclude that its type is t. When we run e, let us say we obtain the value v. Then v will also have type t."
The "we obtain the value v" part is critical. If an expression of type e doesn't produce a value at all (it terminates or throws an exception), then we have also satisfied soundness.
Indeed, note that he also says:
"Any rich enough language has properties that cannot be decided statically (and others that perhaps could be, but the language designer chose to put off until run-time to reduce the burden on the programmer to make programs pass the type-checker). When one of these properties fails—e.g., the array index being within bounds—there is no meaningful type for the program. Thus, implicit in every type soundness theorem is some set of published, permitted exceptions or error conditions that may occur. The developer who uses a type system implicitly signs on to accepting this set."
A term like "soundness" for a programming language should be useful. We could, for example, define "evenality" as a property of programming languages where we say that a language whose built-in atomic types have names that are all an even number of letters has evenality and other languages don't. That's a well-defined concept and we could neatly partition extant languages into whether they have evenality or not. But who cares?
When it comes to soundness, the above definition from PAPL is useful for (at least) two concrete reasons:
1. When a user is reading code, if they see an expression has some type T, they can safely reason that any value the expression evaluates to will have type T and when they are reasoning about code surrounding that expression, they can rely on that fact.
2. Likewise, when a compiler is compiling code, it can safely assume that if an expression has type T, then all subsequent code that depends on the value of that expression can assume it has type T. The compiler can optimize safely and correctly based on that assumption.
Neither of these properties require that all type checks are performed at compile time. If the runtime throws an exception on out of bounds array indices, that still correctly preserves the soundness invariant that the type of an array element access is the type of the array element. The reader might have to think about the fact that the expression could throw. But they don't have to think about it evaluating to the wrong type.
If that's not your definition of soundness and you require a sound language to have zero runtime checks, then I'm not aware of any widely-used language that meets that requirement, nor do I see how it's a particularly useful term.
Note that it's not the case that every language is sound according to the above definition. C, C++, TypeScript, and Dart 1.0 (but not 2.0 and later) are all unsound. In the first two, it's possible to completely reinterpret memory as another type which leads to the majority of software security issues in the world. In the latter two, the only reason that doesn't happen is because the underlying execution environment doesn't rely on the static types of expressions at all.
JVM bytecode is a "language" and is proven to be sound. The languages that compile to that language, on the other hand, are a different kettle of fish.
This is specifically about type systems. It's easy to have a sound type system when you have no type system.
Also, I'm not too familiar with JVM bytecode, but if I load i64 in two registers and then perform floating point addition on these registers, does the type system prevent me from compiling/executing the program?
Can you say more about "proven to be sound"? Are you talking about a sound type system?
Fun fact: Said type system has a 'top' type that is both the top type of the type system as well as the top half of a long or double, as those two actually take two values while everything else, including references, is only one value. Made some sense when everything was 32 bit, less so today.
I doubt it's been proved to be sound. It shows up a lot on https://counterexamples.org/, although if I skim the issues seem to have been fixed since then.
I've run a few times into messages of the sort "you can't use these features together" before and I assume at least sometimes these were lessons that they had to learn the hard way.
I'm a little behind times on Haskell (haven't used it for some years) – there always were extensions that made it unsound, but the core language was pretty solid.
Outside of really funky code, especially code originally written in TS, you can assume the interface is the actual underlying object. You could easily flag non-recognized-member accesses to interfaces and then degrade them back to object accesses.
Suppose you have some interface with fields a and c. If your function takes in an object with that interface and operates on the c field, what you want is to be able to do is compile that function to access c at “the address pointed to by the pointer to the object, plus 8” (assuming 64-bit fields). Your CPU supports such addressing directly.
Because of structural subtyping, you can’t do that. It’s not unrecognized member. But your caller might pass in an object with fields a, b, and c. This is entirely idiomatic. Now c is at offset 16, not 8. Because the physical layout of the object is different, you no longer have a statically known offset to the known field.
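For contrast, here's roughly what a static compiler gets to do in a language with fixed layouts; a small C sketch (struct and field names are just illustrative), where the field access is a single load at a statically known offset:

#include <stdio.h>
#include <stddef.h>

/* Fixed physical layout: "obj->c" is always base pointer + a known offset. */
struct Obj {
    double a;
    double c;
};

double read_c(const struct Obj *obj) {
    return obj->c;   /* one load at offsetof(struct Obj, c), typically 8 */
}

int main(void) {
    struct Obj o = { 1.0, 2.0 };
    printf("offsetof(c) = %zu, value = %f\n", offsetof(struct Obj, c), read_c(&o));
    return 0;
}

The structural-subtyping problem is that a second caller can legally pass an { a, b, c } object, and then there is no single offset for c that works for both layouts.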
I would bet that, especially outside of library code, 95+% of the typed objects are only interacted with using a single interface. These could be turned into structs with direct calls.
Outside of this, you can unify the types. You would take every interface used to access the object and create a new type that has all of the members of both. You can then either create vtables or monomorphize where it is used in calls.
At any point that analysis cannot determine the actual underlying shape, you drop to the default any.
Which is exactly the kind of optimization JIT compilers are able to perform, and an AOT compiler can't do safely without PGO data; even then, it can't re-optimize if the PGO happens to miss a critical path that breaks all the assumptions.
> Because of structural subtyping, you can’t do that
In practice v8 does exactly what you're saying can't be done, virtually all the time for any hot function. What you mean to say is that typescript type declarations alone don't give you enough information to safely do it during a static compile step. But modern JS engines, that track object maps and dynamically recompile, do what you described.
Oh, I thought JIT in your comment meant a single compilation. Either way, having TS type guarantees would obviously make optimizing compilers like v8's stronger, right? You seem to be arguing there's no value to it, and I don't follow that.
My claim is that the guarantees that TS provides aren't strong enough to help a compiler produce stronger optimizations. Types don't just magically make code faster - there's specific reasons why they can make code faster, and TypeScript's type system wasn't designed around those reasons.
A compiler might be able to wring some things out of it (I'm skeptical about obviouslynotme's suggestions in a cousin comment, but they seem insistent) or suppress some checks if you're happy with a segfault when someone did a cast...but it's just not a type system like, say, C's, which is more rigid and thus gives the compiler more to work with.
Contributor to Porffor here!
I actually disagree; there's quite a lot that can be improved in JS during compile time. There's been a lot of work creating static type analysis tools for JS that can do very, very thorough analysis. An example that comes to mind is TAJS (https://www.brics.dk/TAJS/), although it's somewhat old.
> there's quite a lot that can be improved in JS during compile time
I wonder how much performance gain you expect to achieve. For simple CPU-bound tasks, C/Rust/etc. is roughly three times as fast as V8, and Julia, which compiles full scripts and has good type analysis, is about twice as fast. There is not much room left. C/Rust/etc. can be much faster with SIMD, multi-threading, and fine control of memory layout, but an AOT JS compiler might not gain much from these.
In my mind, the big room for improvement is eliminating the cost to call from JS into other native languages. In Node/V8 you pay a memcopy when you pass or return a string from C++ land. If an ahead-of-time compiler for JS can use escape analysis or other lifetime analysis for string or byte array data, you could make I/O, or at least writes from JavaScript to, for example, SQLite, about twice as fast.
Honestly, I’m fine with only some speed up compared to V8, it’s already pretty fast…
My issue with desktop/mobile apps using web tech (JS) is mostly the install size and RAM hunger.
The "node" binary on my laptop is 45MB in size. I guess the browser component may take more disk space than JS runtime. Similarly, I am not sure whether JS runtime or webpage rendering takes more RAM. If it is the latter, an AOT compiler won't help much.
Yea, I came here to say this. Actually, I was able to transpile a few TypeScript files from my project into assembly using GPT just for fun, and it worked pretty well. If someone simply implements a strict TypeScript-like linter that is a subset of JavaScript and TypeScript and transpiles into AssemblyScript, I think that would work better for AOT, because then you can have the more critical portions of the application in AOT and the non-critical parts in JIT, and you get the best of both worlds or something like that. Making JS backwards compatible and AOT sounds way too complicated.
Ecmascript 4 was an attempt to add better types to the language, which sadly failed a long time ago.
It'd be nice if TS at least allowed specifying types like integer, letting some of the newer TS-aware runtimes take advantage of the additional info, even if the main TS->JS compilation just treated `const val: int` the same as `const val: number`.
Yeah, that is why I said TS (or something similar). TS made some decisions that make sense at the time, but do not help compilation. The complexity of its typing system is another problem. I'm pretty sure that it is Turing-complete. That doesn't remove feasibility, but it increases the complexity of compiling it by a whole lot. When you add onto this the fact that "the compiler is the spec," you really get bogged down. It would be much easier to recognize a sensible subset of TS. You could probably even have the type checker throw a WTFisThisGuyDoing flag and just immediately downgrade it to an any.
Because JS code can arbitrarily modify a type, any language trying to specify what the outputs of a function can be also has to be Turing complete.
There are of course still plenty of types that TS doesn't bother trying to model, but it does try to cover even funny cases like field names going from kebab-case to camelCase.
You say you have "thought about doing this"..."[but] you can't get much better performance", then describe the approach requiring things that are described first-thing, above the fold, on the site.
Did the site change? Or am I missing something? :)
You can do inference and only fall back to Dynamic/any when something more specific can't be globally inferred in the program. For an optimization pass this is an option.
What do you mean, "So?" This is maybe one step below being removed from the ability to have a checking account. He has been removed from Internet commerce with Zero Recourse and even gaslit by the company harming him.
This will keep happening because people like you will just flippantly continue saying, "It's not my problem." It will become your problem soon. As companies continue completely screwing a slowly growing percentage of people, that demographic will do what he will likely do next: call his local congressional representative to grab your company by the metaphorical ear.
Internet companies have incredible legal privilege right now. As expected, they are now abusing it as they have grown large enough. I'm glad these companies are filled with people like you because you don't even mask yourself. That is what will make the boot come down faster and harder.
It happens in all fields, at all levels. You cannot trust any papers without reproducing them. The intentional fraud is rampant. There is too much reward for publishing junk as quickly as possible and too little punishment for misbehavior.
Apple is just as sleazy as Google but doesn't have as much power and reach. If I don't want to, I don't have to interact with Apple at all. Google is much harder to avoid, impossible in some domains like email.
It's less about risk aversion than it is about position, size, and complexity. As these things grow, the incentives change and the ability to understand what the organization even is becomes impossible.
A startup starts at the bottom. It begs investors, customers, and employees from a position of optimism and humility. The organization enthusiastically changes itself to find a good balance between those three or it dies. As the organization grows, it starts demanding everyone else change for them instead. Google's interviews are an example. Its famous customer service is an example.
Then we get to size and complexity. Thanks to Dunbar's numbers, we know that there are numerical limits to a human's ability to know people. This makes sense. I can know everything about 6 people, most things about 50, and keep track of about 250 well enough. As the organization grows, your ability to know it disappears. You begin making abstractions. Instead of knowing exactly what Susan does, you say she works in X Department, for Y Initiative, doing Z position.
Google is so big that one person can't understand it anymore. The inevitable reduction to a corporate abstraction occurs and then people treat it like the X Company, which is just like Y Company but makes X instead of Y. Short term revenue and expenses are the only measures at the end.
And in this faceless abstraction, the professional parasite class infests and extracts resources and morale. Eventually the C-suite stops fighting it and joins in on it until only the sheer size and momentum of the company keeps it going. Maybe an investor group will come and force a rework of the company, but not before the company is just a shadow of the shadow of its former self.
I explain it in the third paragraph but to illustrate it further: Consider a function. A function that is 1 line long is immediately understandable. At 10 lines, it is readable within a minute or two. At 100 lines, it is maybe legible to someone who lives in that function. At 1000 lines, it is a black box. Human organizations are the same way.
You might suggest refactoring, which is what companies do too. They create departments, promotion ladders, org charts, and mission statements. The problem is that abstractions leak by design. As your abstractions accumulate and change outside your view, your ability to understand the entirety reduces.
But that has to do with the capabilities of the executives involved; it doesn't make it impossible. Just like in your example, there are many, many developers that can perfectly understand large functions or code bases without issue.
If you have such a code base and you hire people who are not equipped, through inexperience or lack of capability, to manage that code base, that is a resourcing issue.
If your executives cannot understand and control the organization they are tasked with controlling and cultivating, then they should be fired.
Except large code bases do the same. They regularly die when their ability to be understood drops too low. Even with well organized code, they are pushed to add features until they aren't understood at the deepest level. Once you hit millions of lines of code, even when you spend decades in that code base, you still forget changes you made even if you have an overarching picture. That's ignoring other people working on it all the time. The understanding gets reduced to contracts, types, and interfaces.
And most importantly, humans are more complicated than code. With enough time and knowledge, I can accurately tell you what any piece of code does on a single expression or statement. Humans regularly do things they don't even know for purposes they don't understand.
Do you have any resources for learning this? How to untangle the situation? What would happen if resources indeed aren't the problem to tackle, but rather complexity that is hard to untangle?
Too many things are going on, involving too many people, and nobody can possibly keep track of it all in their head. You have to split it up. But by splitting it up, the left hand doesn't really know what the right hand is doing.
So controls and processes are put in place to ensure no bad outcomes are possible, but this also prevents good, innovative outcomes from sprouting.
Fundamentally, it's a loss of trust that can exist in a smaller organization.
Freedom not only allows jerks and kooks, it requires them. We need sovcits and police auditors, because they define the boundary and remind the police and prosecutors, who are apt to forget, where that boundary is.
Everyone demands victims be kind and normal, but that is rarely the case. Kind and normal people stay far away from the boundaries by nature, but the boundaries always shrink when there is no push-back. We need the kooks out there getting tazed so we don't have to.