Actually, in the age of ML it more or less does. You wire up the model, specify the metrics to optimize for, and then feed it lots of data. The algorithm figures out the details of how to achieve the specified goal on its own. Have a look at https://cs.stanford.edu/people/karpathy/convnetjs for a hands-on example.
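Roughly what that workflow looks like in code (a minimal sketch assuming a Keras-style setup; the data here is randomly generated just to stand in for "lots of data"):

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data: 1000 examples with 20 features each.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

# "Wire up the model": the developer picks an architecture, not behavior.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# "Specify the metrics to optimize for": a loss and a metric,
# with nothing said about *how* to achieve them.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# "Feed it lots of data": the optimizer adjusts thousands of weights on its own.
model.fit(x_train, y_train, epochs=5, verbose=0)
```

The developer's contribution is the architecture, the objective, and the data pipeline; the actual parameter values that end up driving decisions are never written by hand.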
I suspect you're being rhetorical, but the algorithm and the specific metrics to use are selected by the developer. The data is entirely user-generated: it's the result of collecting those metrics over some period of time. The trained model is the result of feeding the collected data into the chosen algorithm.
The point is that the algorithm is, for all practical purposes, tuning itself. The developer has essentially selected a black box to feed the data into, told it what to optimize for, and given it the ability to wiggle a bunch of unlabeled knobs. Which knobs it should tweak, and in precisely what way, is never specified by the developer. Instead of "show the following things to the following users", the developer just says "maximize the number of videos viewed per visit" and the algorithm tweaks whatever parameters have been made available to it until it finds something that works (see the toy sketch below).
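To make the "wiggle unlabeled knobs until the metric goes up" part concrete, here's a toy sketch. The metric function and the number of knobs are completely made up; the point is only that the developer states an objective and an optimizer blindly searches for parameters that improve it:

```python
import numpy as np

rng = np.random.default_rng(0)

# A bunch of unlabeled knobs the developer never interprets individually.
knobs = rng.standard_normal(50)

# Hidden structure the metric responds to. In a real system this is user
# behavior, not a fixed vector; this is a stand-in for illustration only.
_hidden_preference = rng.standard_normal(50)

def videos_viewed_per_visit(k):
    """Simulated engagement metric: higher when the knobs happen to line up
    with whatever users respond to. The developer only sees the number."""
    return -np.sum((k - _hidden_preference) ** 2)

# "Maximize videos viewed per visit": blind hill-climbing over the knobs.
best_score = videos_viewed_per_visit(knobs)
for step in range(5000):
    candidate = knobs + 0.05 * rng.standard_normal(50)  # wiggle the knobs
    score = videos_viewed_per_visit(candidate)
    if score > best_score:                               # keep whatever works
        knobs, best_score = candidate, score

print(f"final metric: {best_score:.4f}")
```

Nothing in that loop knows or cares *why* a particular knob setting raises the metric, which is exactly the concern raised below.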
Unfortunately, "something that works" is often not what we actually wanted. ML is a bit like a djinn, granting wishes in an unpredictable and borderline malicious manner.