Shouldn't there be a diagram of the complex plane so that people can see what it's the right half plane of? On top of that, there's a picture of a plane (an airplane), which is confusing.
Fascinating subject, though. In engineering class it was quite surprising how this bunch of functions tracing lines and dots on the complex plane would turn out to be relevant to just about everything. Perhaps the first lesson is that even if you know how a system works, you can't just take the inverse function to control what comes out.
> Shouldn't there be a diagram of the complex plane so that people can see what it's the right half plane of?
Author here. Yes! Very fair criticism. I was trying to strike a balance by making the concept approachable for those who don't have a background in complex numbers, but that certainly leaves the name of the concept more confusing. I should add a diagram, at least in a footnote.
And I honestly did not think about the potential for confusion between plane // airplane. An airplane was the most familiar example system I could think of for explaining the concept. Oops!
> Perhaps the first lesson is that even if you know how a system works, you can't just take the inverse function to control what comes out.
That's a great point too. It probably even deserves its own article.
> Stuff like Nyquist criterion just sort of appears out of nowhere as functions.
Black's canonical 1934 paper[1], "Stabilized Feedback Amplifiers", which had an outsized influence on classical control theory in EE, may have something to do with that:
> Results of experiments, however, seemed to indicate something more was involved and these matters were described to Mr. H. Nyquist, who developed a more general criterion for freedom from instability applicable to an amplifier having linear positive constants.
I went down this rabbit hole in grad school, and my opinion is that control theory is good for writing academic papers but has few applications besides the classical ones (mostly in mechanics).
Techniques often need very strong assumptions about the systems being modeled, which severely limits their usefulness.
In fact, CT is sort of the antithesis of the currently most hyped way of modeling systems: machine learning.
Also, systems modeling is not the same as control theory. You could indeed use machine learning to model a system, which you could then control with classical controllers.
On the other hand, control algorithms that use machine learning are a thing.
Really good article. Make changes and corrections to it, sure, but be careful not to over react to the criticism and make too many changes such that you then get even more critical feedback and make more corrections and get worse feedback and bigger corrections and oh god help me worse feedback and bigger changes and stop me please worse feedback and help bigger changes seriously kill me bigger feedback worse changes arggggh
> the potential for confusion between plane // airplane
This reminds me of the famous (possibly apocryphal) story of the algebraic geometer of Middle Eastern descent who was pulled aside by air marshals for talking about how a particular problem could be solved by “blowing up points on a plane”
The main thing missing for me: presumably, if there's a plane, you're graphing something vs. something. What are the X and Y (or, uh, the x and y in x+iy)? Talk of a plane remains rather vague without knowing that. Maybe I missed it, but I looked twice, read the comments on this page, and couldn't see it mentioned at all. There are a couple of graphs/planes, but they seem to be a different kind of thing.
Uh, that first link is just a picture of a right complex half-plane... I know what that is, having done quite a bit of complex maths, but the kind that uses i, not the kind that uses j (e.g. I love Visual Complex Analysis). I just don't know what things are being graphed in the article! Sorry I didn't explain better. OK, thanks for the 13pp article, I will have a look sometime soon. I was hoping someone could just tell me the answer.
edit: I looked at the first few pages of the paper but I feel none the wiser, at all.
edit2: Ah... "the poles and zeros of a transfer function may be complex, and the system dynamics may be represented graphically by plotting their locations on the complex s-plane". The transfer function (whatever that is) is a rational function of the complex variable s, i.e. (in my words) it's a fraction with complex polynomials for numerator and denominator. The zeros are the roots of the numerator and the poles are the roots of the denominator.
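To check my understanding, here's a quick numpy sketch of that roots idea (the coefficients are made up for illustration):

    import numpy as np

    # A made-up transfer function H(s) = (s + 2) / (s^2 + 2s + 5),
    # written as coefficient lists in descending powers of s.
    num = [1, 2]         # numerator:   s + 2
    den = [1, 2, 5]      # denominator: s^2 + 2s + 5

    zeros = np.roots(num)   # roots of the numerator
    poles = np.roots(den)   # roots of the denominator

    print(zeros)   # [-2.]             -> one real zero
    print(poles)   # [-1.+2.j -1.-2.j] -> a complex-conjugate pair

    # The article's "right half plane" is the region these poles must
    # stay out of: stability needs every pole to have a negative real part.
    print(all(p.real < 0 for p in poles))   # True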
Ok, I still don't know what the transfer function is or means or comes from, but am much less in the dark, thank you! :-)
> I know what that is, having done quite a bit of complex maths, but the kind that uses i, not the kind that uses j
Some things in life leave a lasting impression[1]. :eye_roll:
It sounds like what you're looking for is an explanation of root locus analysis[2].
In the simplest control case, a transfer function is nothing more than the ratio of a continuous closed-loop LTI system's output Y(s) to its input X(s) in the Laplace domain, conveniently abstracted as its forward path G(s) and negative feedback path H(s).
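In symbols, that's the standard negative-feedback closed-loop form (using only the G and H names above):

    \frac{Y(s)}{X(s)} = \frac{G(s)}{1 + G(s)H(s)}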
From there, the Routh-Hurwitz method[3] can be used to determine the stability of the system.
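If it helps, here's a bare-bones numpy sketch of building the Routh array (this naive version skips the zero-pivot special cases, and the cubic at the bottom is an invented example):

    import numpy as np

    def routh_array(coeffs):
        """Routh array for a polynomial given in descending powers of s.
        Naive version: assumes no zeros show up in the first column."""
        n = len(coeffs)
        cols = (n + 1) // 2
        table = np.zeros((n, cols))
        table[0, :len(coeffs[0::2])] = coeffs[0::2]
        table[1, :len(coeffs[1::2])] = coeffs[1::2]
        for i in range(2, n):
            for j in range(cols - 1):
                a, b = table[i - 2, 0], table[i - 1, 0]
                table[i, j] = (b * table[i - 2, j + 1]
                               - a * table[i - 1, j + 1]) / b
        return table

    # s^3 + 6s^2 + 11s + 6 = (s+1)(s+2)(s+3): all poles in the left half-plane.
    t = routh_array([1, 6, 11, 6])
    print(t[:, 0])              # [ 1.  6. 10.  6.]
    # No sign changes in the first column -> no right-half-plane poles.
    print(np.all(t[:, 0] > 0))  # True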
I see that /u/metaphor has given you some formal references.
I'd like to chime in with a more intuition-based explanation of what transfer functions are, from my recollections of college control theory classes in both electrical signals and a more general "systems engineering" application:
Basically, the transfer function is a different perspective on modelling/representing a system's output as a function of its input. Classically, when modelling and/or reasoning about a system in physics, the perspective we adopt is that of "input" being the forward advance of time (and sometimes initial conditions), and "output" being the amplitude of the physical quantities or dimensions of the system that interest us. The transfer function, then, is what you get when you switch perspectives and consider the "input" to be a sinusoidal signal (characterized by amplitude and phase over time), and the "output" to be the new amplitude and phase of that signal after "traversing" the system. Of course, you're actually working with a closed loop, but most input/output systems can be modeled as a closed loop if you sufficiently broaden the system's boundaries.
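To make that sinusoid-in/sinusoid-out picture concrete, here's a minimal numpy sketch (the first-order low-pass H(s) = 1/(s + 1) is just an illustrative choice, not anything from the article):

    import numpy as np

    # Evaluate H(s) = 1/(s + 1) on the imaginary axis, s = j*omega,
    # to get the steady-state response to a sinusoid at omega rad/s.
    omega = 1.0
    H = 1.0 / (1j * omega + 1.0)

    gain = abs(H)                     # output amplitude / input amplitude
    phase = np.degrees(np.angle(H))   # phase shift of the output

    print(gain)    # ~0.707: the sinusoid comes out attenuated by 1/sqrt(2)
    print(phase)   # ~-45.0: and lagging by 45 degrees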
This turns out to be useful in several contexts:
- Many physical phenomena are sine waves (or, thanks to Fourier, a sum of sometimes many different sine waves), and oftentimes a system's purpose (to us humans) is to control such a phenomenon precisely along the lines of "do this to the amplitude, and/or adjust the phase like so": damping, feedback loops, more sophisticated processes like hysteresis, maintaining a steady state given incoming perturbations, etc. In these cases the transfer function ends up being the mathematical expression of that system's function in the "domain language" of the problem, so to speak.
- It turns out that often, when working with systems whose "classical" representation involves components like exponentials or sines and cosines of time (which are "just" complex exponentials of those quantities), the corresponding transfer functions are "simple" fractions of polynomials. More precisely, passing into the Lagrange domain transforms a differential equation problem into a complex polynomial fractions problem, which is often much easier to crunch/solve (see the worked example after this list). Furthermore, in the Lagrange domain, de-phasing a signal by pi/2 is equivalent to simply multiplying that signal by 1/(j * signal's frequency) (if I recall correctly). This makes much of the math more accessible to human intuition, and especially on more complex systems that have several "moving parts" the linear quality of polynomials becomes invaluable.
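To make that differential-equation-to-polynomial-fraction point concrete, here's a small worked example (the second-order system is invented for illustration, and zero initial conditions are assumed):

    \ddot{y}(t) + 3\dot{y}(t) + 2y(t) = x(t)
    \;\Longrightarrow\;
    (s^2 + 3s + 2)\,Y(s) = X(s)
    \;\Longrightarrow\;
    \frac{Y(s)}{X(s)} = \frac{1}{(s + 1)(s + 2)}

Two poles at s = -1 and s = -2, both safely in the left half-plane, and no zeros.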
Personally, I remember quickly adopting the transfer function perspective, once I'd grokked it, when trying to reason about the effect of introducing a capacitor into an existing circuit (analog or DC[0]), as well as things like how the material properties of a door contribute to its behavior as a low-pass filter on sound waves. Sitting down and doing the math, the formulas I would arrive at spoke much more clearly to me. Also, you are sort of adopting a "time-agnostic" (or perhaps time-invariant) perspective, where the system itself does not change over time. Instead, its input is characterized by how it behaves over time, and the transfer function (especially when plotted) gives you a clear, direct sense of what the output's "behavior over time" will accordingly be. Notably, it's here that the zeroes of the OP become so meaningful.
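As a sketch of that capacitor case (component values invented; a series R with the output taken across C gives H(s) = 1/(1 + RCs)):

    import numpy as np

    # Hypothetical RC low-pass filter: series resistor, output across the cap.
    R = 1e3     # ohms (invented value)
    C = 1e-6    # farads (invented value)

    f = np.logspace(0, 5, 6)    # 1 Hz .. 100 kHz, one point per decade
    s = 2j * np.pi * f          # evaluate on the j*omega axis
    H = 1.0 / (1.0 + R * C * s)

    gain_db = 20 * np.log10(np.abs(H))
    for fi, g in zip(f, gain_db):
        print(f"{fi:>9.0f} Hz: {g:6.1f} dB")

    # Gain sits near 0 dB well below f_c = 1/(2*pi*R*C) ~ 159 Hz,
    # then rolls off at about -20 dB/decade above it.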
[0]: Part of what initially started making things "tick" for me was when a professor explained that an impulse on an input signal (i.e. a quasi-instant variation, then back to the preceding "steady state" value - a DC current "turning on" and immediately off again), to a transfer function, "looks like" a sine wave signal with a constant amplitude but a monotonically increasing phase offset - again, I forget if the rate is constant, polynomial, exponential, or what.
You had me at "a more intuition-based explanation of what transfer functions are". :-) Thank you so much for this.
edit: By "Lagrange" did you possibly mean to write "Laplace"? I confuse those two gentlemen too. p.s. I just learnt Lagrange was Italian! born Giuseppe Luigi Lagrangia.