
Fun fact: this is only valid for domains that have a notion of "selfness", i.e. where there is such a thing as an "identity matrix" for the quantities.

Consider the following square matrix:

            TSLA AAPL GOOG MSFT
    Alice | 100   5     0    1
    Bob   |  0    30   100   5
    Carol |  2    2     2    2
    Dan   |  0    0     0  1000
An input vector of stock prices gives an output vector of net worths. However, that is about the only way you can use this matrix. You cannot transform the table arbitrarily and still have it make sense, such as applying a rotation matrix -- it is nonsensical to speak of a rotation from Tesla-coordinates to Google-coordinates. The input and output vectors lack tensor transformation symmetries, so they are not tensors.

This is also why Principal Component Analysis and other data science notions in the same vein are pseudoscience (unless you evaluate the logarithm of the quantities; but nobody seems to recognize the significance of unit dimensions and of multiplicative vs. additive quantities).


There's a little more nuance:

1. Technically, the table you shared is better thought of as a two-dimensional tensor, rather than a "graph-like matrix" -- which as you point out must be a linear map from a (vector) space to itself.

2. While not technically "Principal Component Analysis", one could do "Singular Value Decomposition" for an arbitrarily shaped 2-tensor. Further, there are other decomposition schemes that make sense for more generic tensors.

3. (Rotations / linear combinations in such spaces) Given a table of stock holdings, it can be sensible to talk about linear combinations / rotations etc. Eg: The "singular vectors" in this space could give you a decomposition in terms of companies held simultaneously by people (eg: SAAS, energy sector, semiconductors, entertainment, etc). Likewise, singular vectors on the other side would tell you the typical holding patterns among people (and clustering people by those, eg. retired pensioner invested for steady income stream, young professional investing for long-term capital growth, etc). As it turns out, this kind of approximate (low-rank) factorization is at the heart of recommender systems.
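A minimal numpy sketch of points 2 and 3, using a made-up holdings matrix (the numbers and the interpretation of the singular vectors are purely illustrative):

```python
import numpy as np

# Hypothetical holdings matrix: rows = people, columns = stocks.
A = np.array([
    [100.,  5.,   0.,    1.],   # Alice
    [  0., 30., 100.,    5.],   # Bob
    [  2.,  2.,   2.,    2.],   # Carol
    [  0.,  0.,   0., 1000.],   # Dan
])

# SVD is defined for any (even non-square) 2-tensor: A = U @ diag(S) @ Vt.
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Rows of Vt are "typical baskets" of stocks; columns of U are the
# corresponding holding patterns across people. Truncating to the
# largest singular value gives the best rank-1 approximation:
A1 = S[0] * np.outer(U[:, 0], Vt[0])

print(np.allclose(U @ np.diag(S) @ Vt, A))  # reconstruction is exact up to float error
```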


Yeah, this is also the case if the table is not square, as the values can't represent edges any more. So it's more that the rows and columns should index the same "thing".

By the way, by changing the graph representation we can give meaning even to non-square matrices, as described in this article: https://www.math3ma.com/blog/matrices-probability-graphs


I think the most common application is describing mesh connectivity in Finite Element Methods, where each entry in the matrix represents the influence one node has on another. Basically, an N^2 table can be constructed to describe the general dependency between components in any simulation or system.
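A toy sketch of that construction (a hypothetical 1-D mesh of triangle-like elements; in FEM this boolean pattern is where the nonzeros of the stiffness matrix live):

```python
import numpy as np

# Hypothetical mesh: each element lists the node indices it connects.
elements = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
n_nodes = 5

# N x N influence pattern: node i depends on node j iff they share
# an element (the sparsity pattern of the assembled system matrix).
pattern = np.zeros((n_nodes, n_nodes), dtype=bool)
for elem in elements:
    for i in elem:
        for j in elem:
            pattern[i, j] = True

print(pattern.astype(int))
```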


Another notable example is the PageRank algorithm [0]: consider the graph whose nodes are web pages and whose edges are the links between them; you can build the adjacency matrix of this graph and use the algorithm to sort the pages by "popularity" (intuitively, which pages have more links pointing to them).

Let's say that in most cases you have a graph and you consider the corresponding matrix. Doing the inverse is not as useful in practice, except in some cases, as explained in the article.

[0]: https://en.wikipedia.org/wiki/PageRank
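The power iteration at the heart of PageRank can be sketched in a few lines (toy graph; the damping factor d follows the original paper's convention):

```python
import numpy as np

# Toy link graph: adj[i, j] = 1 if page j links to page i.
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

# Column-normalize so each page splits its "vote" among its outlinks.
d = 0.85
n = adj.shape[0]
M = adj / adj.sum(axis=0)          # column-stochastic transition matrix
r = np.full(n, 1.0 / n)

for _ in range(100):               # power iteration
    r = (1 - d) / n + d * M @ r

print(r)  # scores sum to 1; higher = more "popular"
```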


You make a good point about the types of matrices for which a graph representation makes sense, but it seems a bit much to say that PCA is pseudoscience?

If you had a lot of people and a lot of stocks, a low-rank representation of the matrix (probably not PCA per se with that particular matrix, but something closely related) could convey a lot of information about, e.g., submarkets and how they're valued together. Or not, depending on how those prices covary over time.


I disagree... there are more ways to creatively extract information out of this matrix.

For instance, you can normalize along the columns, and build a "recommender system" using matrix factorization.

With that, when a new person comes along with a portfolio, the system will output a probability of this new person acquiring each of the assets they don't already hold.

It's (the very basic) idea of how Netflix recommends movies.
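A minimal sketch of that idea, using a truncated SVD as the matrix factorization (toy 0/1 holdings, purely illustrative -- real recommenders use more elaborate factorizations and normalizations):

```python
import numpy as np

# Hypothetical user-by-asset matrix: 1 = holds the asset, 0 = doesn't.
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 0],
], dtype=float)

# Low-rank factorization via truncated SVD: R ~ U_k S_k V_k^T.
k = 2
U, S, Vt = np.linalg.svd(R, full_matrices=False)
scores = U[:, :k] * S[:k] @ Vt[:k]

# For each user, the unheld assets with the highest reconstructed
# scores are the candidates to recommend.
user = 0
candidates = np.where(R[user] == 0)[0]
print(candidates[np.argsort(-scores[user, candidates])])
```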


When I try to get this point across about techniques like the PCA, I like to show that the measurement units strongly affect the inference.

Really, if your conclusions change depending on whether you measure in inches or centimeters, there’s something wrong with the analysis!


I would disagree and here is why:

> When I try to get this point across about techniques like the PCA, I like to show that the measurement units strongly affect the inference.

In such a case the problem is not with PCA but with the application. PCA is just a rotation of the original coordinate system that projects the data onto new axes aligned with the directions of highest variability. It is not the job of PCA to parse out the origin of that variability (whether it comes from different units or from different effects).

> Really, if your conclusions change depending on whether you measure in inches or centimeters, there’s something wrong with the analysis!

To get a statistical distance one should: subtract the mean if the measurements differ in origin; divide by standard deviation if the measurements differ in scale; rotate (or equivalently compute Mahalanobis distance) if the measurements are dependent (co-vary). The PCA itself is closely related to Mahalanobis distance: Euclidean distance on PCA-transformed data should be equivalent to Mahalanobis distance on the original data. So, saying that something is wrong with PCA because it doesn't take units of measurement into account is close to saying that something is wrong with dividing by standard deviation because it doesn't subtract the mean.


Is the effect of measurement units eliminated by applying something like zero mean unit variance normalization prior to dimensionality reduction?
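A quick numpy experiment suggests yes (synthetic height/weight data, chosen only for illustration): standardizing each column makes the result independent of cm vs. inches, because the unit-conversion constant cancels in (x - mean)/std.

```python
import numpy as np

rng = np.random.default_rng(0)
height_cm = rng.normal(170, 10, 500)
weight_kg = 0.5 * (height_cm - 170) + rng.normal(70, 8, 500)

# The same data twice, with height measured in different units.
X_cm = np.column_stack([height_cm, weight_kg])
X_in = np.column_stack([height_cm / 2.54, weight_kg])

def first_pc(X):
    """Leading principal component via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    return np.linalg.svd(Xc, full_matrices=False)[2][0]

def standardize(X):
    """Zero mean, unit variance per column."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# On raw data the leading direction depends on the choice of units:
print(first_pc(X_cm), first_pc(X_in))

# After standardizing, the conversion factor cancels out entirely,
# so the two datasets (and hence their PCs) coincide:
print(first_pc(standardize(X_cm)), first_pc(standardize(X_in)))
```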


I dunno, there are some semi-useful things you can do.

For example, the transform from (Alice, Bob, Carol, Dan) to (Male, Female) is linear -- it's another matrix that you can compose with the individual-ownership one you have here.

Or, call your individual-ownership matrix A, and say that P is the covariance of daily changes to prices of the four stocks listed. Then A P A' is the covariance of daily changes to the peoples' wealths. The framing as linear algebra hasn't been useless.
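With made-up numbers, that A P A' computation is one line of numpy (the holdings and the price covariance here are both hypothetical):

```python
import numpy as np

# Hypothetical holdings matrix: rows = people, columns = stocks.
A = np.array([
    [100.,  5.,   0.,    1.],
    [  0., 30., 100.,    5.],
    [  2.,  2.,   2.,    2.],
    [  0.,  0.,   0., 1000.],
])

# Made-up covariance of daily price changes for the four stocks
# (diagonal, i.e. assuming independent price moves, for simplicity).
P = np.diag([4.0, 1.0, 9.0, 25.0])

# Covariance of daily changes to each person's wealth: A P A'.
wealth_cov = A @ P @ A.T

# Standard deviation of each person's daily P&L.
print(np.sqrt(np.diag(wealth_cov)))
```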

I kinda get what you're saying though. Like, why would powers of this matrix be useful? It only makes sense if there's some implicit transform between prices and people, or vice versa, that happens to be an identity matrix.

You can make up a story. Say the people can borrow on margin some fraction of their wealth. Then say that they use that borrowing to buy stock, and that that borrowing affects prices. Composing all these transforms, you could get from price to price, and then ask what the dynamics are as the function is iterated.

But, ok, "I'm just going to do an SVD of the matrix and put it in a slide" isn't going to tell anybody much.

Maybe there's a use for a rank-one approximation to this system? Like, "this is pretty close to a situation where there's a single ETF with those stocks in these proportions, and where the people own the following numbers of shares in the ETF"? Maybe if you have millions of people and millions of stocks and wanted to simulate this "stock market" at 100Hz on a TI-83?

I dunno. You can make up stories.


Is there any domain where you can apply arbitrary transformations to a table and still have it make sense? I feel there is some depth to your argument that I cannot infer just from the content of your comment, and I would be keen to look further into it. E.g., in your domain example, would a currency be a coordinate, so that you can move to alternate currencies? Would that be the identity you look for?


looks like Manim.


I had written a Windows script for adding three-finger drag[0] that I use daily, perhaps it could provide some inspiration.

Basically, it is an independent subscriber to RawInput messages that only keeps track of whether or not to send three-finger drag, and posts emulated mouse messages using SendInput. I have a few other scripts that each run as independent userland processes that only monitor their own trigger and nothing else.

Tangentially, my TPMouse[1] script implements inertia in a framerate-independent way, so it uses very few resources while having perfect simulation stability.

A previous discussion where I explained the analytic derivation for this low-resource exact-solution damped inertia can be seen in [10].

[0] https://github.com/EsportToys/PrecisionThreeFingerDrag/blob/...

[1] https://github.com/EsportToys/TPMouse

[10] https://old.reddit.com/r/Trackballs/comments/ym9q2t/tpmouse_...


Fun fact: every videogame on Windows is potentially a userspace keylogger if you let it run in the background while browsing the web, etc.

Basically, any application that uses the Raw Input API can request to receive raw device events even when the application is not running in the foreground, by using the RIDEV_INPUTSINK flag.

The app will then receive every raw device input packet that the hardware sends to the system, replete with timestamps[0] and your mouse position when the event happened[1].

In the case of keyboards it would provide the virtual-key codes and scancodes[10].

Raw Input is used by modern FPS games like Valorant[11], so if you leave such a game running in the background it may potentially be able to observe your every single keystroke while you use your browser, enter passwords, etc.

TPMouse, my opensource trackball-emulation script that lets you use the homerow as a trackball for your cursor[100], uses Raw Input with the RIDEV_INPUTSINK option so that it runs entirely in userspace without needing to hook to low level drivers.

It is certainly a double-edged sword -- for open source it's a convenience blessing since what you're running can be inspected directly, but in the case of closed-source games like Valorant you're relying on your trust in Riot Games's intentions and competence.

[0] https://learn.microsoft.com/en-us/windows/win32/api/winuser/...

[1] https://learn.microsoft.com/en-us/windows/win32/api/winuser/...

[10] https://learn.microsoft.com/en-us/windows/win32/api/winuser/...

[11] https://playvalorant.com/en-gb/news/game-updates/valorant-pa...

[100] https://github.com/EsportToys/TPMouse


With the TPMouse script, I implemented the activation shortcut as [LShift][RShift][C (trackball mode) / G (grid mode) / Q (quit)], which I felt had a nice balance between deliberateness and easy-to-reach (since you are using it with your hands on homerow).

Though because some keyboards have key rollover issues with using both Shifts, [Capslock][<modekey>] is also allowed as an alternative activation shortcut.

[0] https://github.com/EsportToys/TPMouse


if you are only ever gonna use critical damping, you can do something like:

    function crit_response(dt,pos,vel,rate)
       local dissipation = math.exp(-dt*rate)
       local dissipatePos = pos*dissipation
       local dissipateVel = vel*dissipation
       local posCarried = 1 + dt*rate
       local velCarried = 1 - dt*rate
       local posFromVel =     dt
       local velFromPos =    -dt*rate*rate
       local newpos = posCarried*dissipatePos + posFromVel*dissipateVel
       local newvel = velFromPos*dissipatePos + velCarried*dissipateVel
       return newpos , newvel
    end
(Intentionally verbose for didactic purposes, to show how the dissipation step can actually be split out into its own loop from the elastic calculation -- meaning you can use this as a more accurate initial guess in numeric simulations than a plain constant-acceleration approximation.)


I can see an argument for wanting to go slightly to one side or the other of the critical factor for aesthetic reasons, but yes. If you want "critically-damped spring behaviour" you don't have to go the long way round to get it.


I posted a much more concise derivation (and implementation, in just 16 lines) on HN a while back but didn't gain much traction:

https://news.ycombinator.com/item?id=35899215

Here are the 16 lines in full:

   function sprung_response(t,d,v,k,c,m)
      local decay = c/2/m
      local omega = math.sqrt(k/m)
      local resid = decay*decay-omega*omega
      local scale = math.sqrt(math.abs(resid))
      local T1,T0 = t , 1
      if resid<0 then
         T1,T0 = math.sin( scale*t)/scale , math.cos( scale*t)
      elseif resid>0 then
         T1,T0 = math.sinh(scale*t)/scale , math.cosh(scale*t)
      end
      local dissipation = math.exp(-decay*t)
      local evolved_pos = dissipation*( d*(T0+T1*decay) + v*(   T1      ) )
      local evolved_vel = dissipation*( d*(-T1*omega^2) + v*(T0-T1*decay) )
     return evolved_pos , evolved_vel
   end
And the embedded interactive demo in the blog post:

https://www.desmos.com/calculator/ynbtai98ns

I'm hesitant about reposting, as all of my submissions have been inexplicably blocked by HN filters; by the time someone vouches for the post it has already been buried.


Have you tried reaching out to dang ([email protected])? One of my submissions was blocked and I reached out to him - he responded a while later to tell me that the domain was blacklisted (it was an article I'd written while working with some org's PR dept. - but it helped to know why it was getting blocked).


> I'm hesitant about reposting as all of my submissions had been inexplicably blocked by HN filters

Almost exclusively submitting content from the same source might look like spam to the system. I'd try submitting other interesting things that you come across too and see how that goes in the long term.


The automatic edge-panning of topdown games is a behavior inherited from 90s RTS games that has remained largely unchanged.

While it was sufficient for ball-mouse-era casual players, it is actually a detrimental mechanic, largely avoided by competitive players and considered a beginner's footgun that causes bad habits to develop.

In this playable proof-of-concept, the camera displacement is changed from the traditional time-dependent panning to a direct position-based panning.

This way, competitive players can leverage their mouse control/muscle memory to move their cursor and their camera as quickly as their skills allow.
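A minimal sketch of the contrast, assuming a hypothetical 1-D camera (the function names, gain, and virtual-cursor overshoot model are illustrative, not taken from the linked project):

```python
def edge_pan_step(cam, cursor, screen_w, margin=20, speed=600, dt=1/60):
    """Classic time-based panning: while the cursor sits inside the
    edge zone, the camera scrolls at a fixed speed per frame."""
    if cursor < margin:
        cam -= speed * dt
    elif cursor > screen_w - margin:
        cam += speed * dt
    return cam

def position_pan(cam_anchor, cursor, screen_w, gain=3.0):
    """Position-based panning: camera displacement is a direct function
    of how far a virtual cursor has been pushed past the screen edge,
    so camera motion tracks mouse motion one-to-one."""
    overshoot = min(cursor, 0) + max(cursor - screen_w, 0)
    return cam_anchor + gain * overshoot

# Time-based: holding the cursor in the margin keeps scrolling forever.
# Position-based: the camera offset is fixed by the cursor overshoot.
print(edge_pan_step(0.0, 5, 800))        # one frame of fixed-speed scroll
print(position_pan(0.0, 850, 800))       # offset proportional to overshoot
```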


AutoIt is also great especially for quickly prototyping GUIs.

I recently built a proof-of-concept for a modernized method of interacting with RTS cameras[0], which unfortunately could not be achieved with frameworks like SDL due to their abstraction obscuring some of the native OS functions needed to create my idea.

Using AutoIt lets me basically just treat it as a minimal-boilerplate sandbox for making DllCalls. This also means that I can directly listen for, as well as post, raw device messages. For example, I implemented an inertia-based cursor script that basically lets you use your homerow vim keys like a trackball[1], which I now use every day whenever I'm not with my ThinkPad.

[0] https://github.com/EsportToys/NaturalEdgePan

[1] https://github.com/EsportToys/TPMouse


How does AutoIt compare to AHK? I realize AutoIt is not open-source (free EXE), but besides that what are the key differences? The AutoIt VisualBasic-like language looks easier to learn for those of us who grew up on Microsoft. Can anyone confirm that?


I came across this very interesting bit of history while looking for an image to illustrate my RTS camera control concept[0] as an analogy.

It's interesting how much of the wartime "computational drudgery" was frequently performed by women, and how the perception shifted toward it being a more male-dominated occupation after the war.

It would be lovely to hear perspectives from people here on other little-known roles and the social climate during the war, or your insights on how things came to be the way they are during and after it.

[0] https://github.com/EsportToys/NaturalEdgePan#figure-2-this-w...


> the perception of which shifted to becoming a more men-dominated employment after the war

That's a wee bit of revisionism. Read this:

https://www.amazon.com/gp/product/0262535181

