
Let's try a thought experiment. There is a `Feedback` button on the bottom right; imagine that Google developers actually read the reports it generates. Let's assume some numbers:

- Once every ten searches there is that "smart" box.
- Once every thousand "smart" boxes, a user spots an error and clicks `Feedback`.
- There are a hundred developers behind this feature, working 8h/day.

So, according to [the result in a smart box](https://kenshoo.com/monday-morning-metrics-daily-searches-on...), there are 228 million searches per hour. That works out to roughly 22,800 feedback clicks per hour, or about 550,000 per day. Spread over 100 developers working 8 hours each, going through all of them means averaging well over 600 per developer-hour.
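A minimal sketch of that arithmetic, using the assumed rates from the list above and the linked searches-per-hour figure:

```python
# Sanity check of the first estimate; the rates are the assumptions
# from the list above, not real Google figures.
searches_per_hour = 228_000_000
smart_box_rate = 1 / 10          # one "smart" box per ten searches
feedback_rate = 1 / 1_000        # one Feedback click per thousand boxes
dev_hours_per_day = 100 * 8      # a hundred developers, 8h/day each

feedbacks_per_day = searches_per_hour * 24 * smart_box_rate * feedback_rate
print(round(feedbacks_per_day / dev_hours_per_day))  # -> 684 per dev-hour
```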

An alternative way to get this estimate: imagine every tenth user reports one error per year. There are [2 billion G Suite users](https://www.zdnet.com/article/google-g-suite-now-has-2-billi...), so intuitively there should be at least as many Google search users. That is 200 million reports a year, or again about 550,000 a day; divided over the same 800 developer-hours, your developers would need to go through almost 700 feedbacks per hour.
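And the same check for the alternative estimate (again, the one-report-per-ten-users-per-year rate is an assumption):

```python
# Alternative estimate: 2 billion users, one in ten filing one report a year.
users = 2_000_000_000
reports_per_year = users / 10                # 200 million feedbacks per year
feedbacks_per_day = reports_per_year / 365
print(round(feedbacks_per_day / (100 * 8)))  # -> 685 per dev-hour
```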

Given these numbers, how do you think Google engineers should actually react to the feedback?

Disclaimer: I work at Google, but on something not exposed to the outside world. Even so, we hit similar scale issues despite our users being only Google engineers.



Solution: hire 10,000 people to vet the feedback and funnel it into the org. At the volume estimated above, each of them vets about 7 items per hour on an 8-hour shift, and the per-person rate stays the same whether you run one day shift or keep the operation going 24/7 in three shifts.
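A quick check of that staffing math, reusing the daily volume from the sketch above (the 10,000-reviewer headcount is this comment's hypothetical):

```python
# The same daily volume spread over 10,000 hypothetical reviewers.
feedbacks_per_day = 228_000_000 * 24 / 10 / 1_000    # ~547,200, from above
reviewer_hours_per_day = 10_000 * 8
print(round(feedbacks_per_day / reviewer_hours_per_day, 1))  # -> 6.8 per hour
```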

But that would cut into the Profit from the Money Printing Machine.

Saying "We're doing business on a scale that's too big for us to be (profitably) accountable" isn't an acceptable answer.


Imagine Google is a public utility. Would it be worthwhile for society to spend that much manpower funneling feedback in such a brute-force way? To me, it is clearly a waste of money for society, and an inefficient and pointless way to improve the product.

This kind of deontological approach is not very useful unless it's applied to a morally important issue, and even then, a utilitarian/consequentialist approach is needed as a cross-check to make sure the deontological approach doesn't go astray.


Very slippery slope, since you can use that argument to justify anything you want.

The bottom line is this: You guys are actively spreading misinformation and profiting from it. That's a bad thing.


Do you have any alternative ideas on how to make this better?



