It's not satire. If you have some function with several inputs (and possibly several outputs), the Jacobian is a matrix of all the partial derivatives of each output with respect to each input variable[1]. Since derivatives give the slope of a function, if you think of your function as a bumpy surface with the height at each point being the output, this matrix tells you which way (and how far) to change any input if you want to go "up" or "down" in the output.
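For concreteness, here's a minimal sketch (the function and numbers are my own, not from any of the links below): a function with two inputs and two outputs, its Jacobian estimated numerically by nudging each input, and the analytic answer for comparison.

    import numpy as np

    def f(x):
        # f: R^2 -> R^2, f(x, y) = (x^2 * y, 5x + sin(y))
        return np.array([x[0]**2 * x[1], 5 * x[0] + np.sin(x[1])])

    def numerical_jacobian(f, x, eps=1e-6):
        # Column j holds the partial derivatives of every output with respect
        # to input j, estimated by nudging that input a tiny bit each way.
        n_out = len(f(x))
        J = np.zeros((n_out, len(x)))
        for j in range(len(x)):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
        return J

    x = np.array([1.0, 2.0])
    print(numerical_jacobian(f, x))
    # Analytic Jacobian at (1, 2) is [[2xy, x^2], [5, cos(y)]] = [[4, 1], [5, cos(2)]]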
Backpropagation is a way to optimise a neural network. You want to know how best to nudge the weights of the network to optimise some loss function, so what you do is compute the gradient (ie the partial derivatives) of that function with respect to each of the weights. This lets you tweak the weights so your network gets better at whatever task you're trying to get it to learn. See [2] to understand how this works and [3] to understand how this relates to the Jacobian, but generally if you're trying to go "downhill" in your loss function it's easy to see intuitively that knowing which way the function slopes (ie the effect of tweaking each of the weights) is important, and that's what the Jacobian tells you.
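A toy sketch of the "nudge the weights downhill" idea (my own example, not taken from the linked videos): a single linear neuron, a squared-error loss, and a hand-derived gradient used to step the weights against the slope.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))          # 100 samples, 3 input features
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w                          # targets the network should learn

    w = np.zeros(3)                         # start with all-zero weights
    lr = 0.1                                # learning rate (step size downhill)

    for step in range(200):
        pred = X @ w
        loss = np.mean((pred - y) ** 2)
        # Gradient of the loss with respect to each weight: it says which way
        # the loss slopes if you tweak that weight, so step the opposite way.
        grad = 2 * X.T @ (pred - y) / len(y)
        w -= lr * grad

    print(w)    # ends up close to [2.0, -1.0, 0.5]

In a real multi-layer network the gradient is computed layer by layer with the chain rule (that's the "back" in backpropagation), but the update step is the same idea.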
The inverse of a matrix[4] and its transpose[5] are two different operations in linear algebra. Transpose turns rows into columns and columns into rows; the inverse is maybe a little harder to grasp without the background, but you could think of multiplying one matrix by the inverse of another as a little like division (since you can't actually divide matrices).[6]
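A small sketch of the difference (the example values are mine): transpose just flips rows and columns, while the inverse "undoes" the matrix, which is why multiplying by an inverse plays the role that division would.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    print(A.T)                 # transpose: rows become columns
    print(np.linalg.inv(A))    # inverse: A @ np.linalg.inv(A) is the identity

    b = np.array([5.0, 6.0])
    # "Dividing" b by A really means solving A @ x = b:
    x = np.linalg.solve(A, b)  # numerically preferable to inv(A) @ b
    print(A @ x)               # recovers b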
[1] https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinan...
[2] https://www.youtube.com/watch?v=Ilg3gGewQ5U
[3] https://www.youtube.com/watch?v=tIeHLnjs5U8
[4] https://math.libretexts.org/Workbench/1250_Draft_4/07%3A_Mat...
[5] https://math.libretexts.org/Bookshelves/Linear_Algebra/Funda...
[6] algebraists please don't shoot me for that.