Here's a concise explanation of SSA. Regular (imperative) code is hard to optimize because, in general, statements are not pure: if a statement has side effects, then optimizing it may not preserve the program's behavior. For example:
1. Removing that statement (dead code elimination)
2. Deduplicating that statement (available expressions)
3. Reordering that statement with other statements (hoisting; loop-invariant code motion)
4. Duplicating that statement (can be useful to enable other optimizations)
All of the above optimizations are very important in compilers, and they are much, much easier to implement if you don't have to worry about preserving side effects while manipulating the program.
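As a minimal sketch (hypothetical code, not from the article), here is how a single side effect blocks all four transformations:

    def impure():
        print("hello")  # side effect: observable output
        return 42

    x = impure()
    y = impure()

    # Even though both calls return 42:
    # 1. Dead code elimination: if x is never used, deleting "x = impure()"
    #    still removes a "hello" from the output.
    # 2. Available expressions: rewriting the second call as "y = x"
    #    also drops a "hello".
    # 3. Reordering a call past other I/O changes the order of the output.
    # 4. Duplicating a call prints "hello" an extra time.
    # Without the print, all four rewrites would be trivially safe.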
So the point of SSA is to translate a program into an equivalent program whose statements have as few side effects as possible. The result is often something that looks like a functional program. (See: https://www.cs.princeton.edu/~appel/papers/ssafun.pdf, which is famous in the compilers community.) In fact, if you view each basic block as a function, phi nodes "declare" the arguments of that basic block, and branches correspond to tail-calling the next basic block with the corresponding values. This has motivated basic block arguments in MLIR.
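To make the blocks-as-functions view concrete, here's a hypothetical counting loop written the way the Appel paper suggests (block names and the informal SSA notation are my own): each basic block becomes a function, each phi node becomes a parameter, and each branch becomes a tail call.

    # Source program:       SSA CFG (informal notation):
    #   i = 0                 entry:   br loop(0)
    #   while i < 10:         loop(i): br (i < 10) ? body(i) : exit(i)
    #       i = i + 1         body(i): i2 = i + 1; br loop(i2)
    #   return i              exit(i): ret i

    def entry():
        return loop(0)      # branch passing a block argument (MLIR style)

    def loop(i):            # the parameter i plays the role of a phi node
        if i < 10:
            return body(i)
        return exit_block(i)

    def body(i):
        i2 = i + 1          # each SSA value is assigned exactly once
        return loop(i2)     # back edge = tail call to the loop header

    def exit_block(i):
        return i

    assert entry() == 10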
The "combinatorial circuit" metaphor is slightly wrong, because most SSA implementations do need to consider state for loads and stores into arbitrary memory, or arbitrary function calls. Also, it's not easy to model a loop of arbitrary length as a (finite) combinatorial circuit. Given that the author works at an AI accelerator company, I can see why he leaned towards that metaphor, though.
> Given that the author works at an AI accelerator company
I actually believe he works at Buf.build and his resume is stale; the author previously posted about hyperpb, their Go Protobuf parser, which is a Buf project.
Maybe it's still "author recently worked at an AI accelerator company", though.