Adding to the litany of bad Google ideas from the past 10+ years:
- Killing google reader
- Pointless UI changes
- Multiple chat and videocall apps that cannibalize each other.
- Stadia fiasco
- Shoving AI down our throats in their MAIN PRODUCT
What's the source of this rot? I have a friend at Google who says the place is filled with smart people competing with each other. Perhaps this competition fuels a chaotic lack of coherence? It feels like they have no clear vision for the "Google Ecosystem" and are hopping on the AI bandwagon hoping it will carry them into the future.
Google's Gemini is not mind-blowing, and probably not the top model, but it is without a doubt in the same ballpark as its competitors. I think that's a pretty good sign. Just like Meta, Google did not drop the ball on AI, and it looks like they had their ear to the ground better than, say, MSFT, Apple, or AMZN.
In that sense I can see why investors are happy. What matters is whether Google can continue to innovate, and at a rate faster than the competition.
I know that view is popular, but it seems so short-sighted. Most companies can increase profit whenever they want, but driving out innovators and abusing goodwill never works out long-term. Look at IBM, Oracle, Intel, Boeing, US Steel, GE, Commodore, Quiznos, etc. I feel like Google's stockholders are just having the wool pulled over their eyes. An increase in profit often means a cannibalization of value.
I'm mostly wondering why shareholders go along with it, or even pay a premium on shares that make a lot of money now but may be comparatively worthless in 20 years. Do they really believe this is sustainable? Do they expect Google to just start issuing massive dividends? Are they just hoping for a greater fool?
Wall St wants to see them keeping up with the latest tech. That's why Apple put billions into VR and we will hear nothing of it in 12-24 months. Another reason is talent retention/acquisition. AI will be history in 12-24 months when the bubble bursts and we are left with a mountain of cheap GPUs.
I think it’s safe to assume that in the next few years almost all consumer hardware will be able to run a local LLM comfortably. And no one knows if LLMs can go beyond GPT-4 in a way that will genuinely blow people away.
> (...) almost all consumer hardware will be able to run a local LLM comfortably.
The last thing we want. For example, car manufacturers tried to make voice commands work, and the results are still unreliable; they also experimented with touchscreens, and those are going away because they are a poor and unsafe way to operate a moving vehicle. People want tactile feedback and the ability to operate controls without taking their eyes off the road. Why would anybody want an LLM in their camera, phone, washing machine, or thermostat?
I personally don’t want that. In my opinion, this current iteration of LLM is completely overblown and I am perplexed as to how quickly it is getting adopted into everything.
Of course, LLMs are useful. For small tasks, there is a significant productivity boost to be had, but they are not trustworthy. And that is the main issue.
If we see them as untrustworthy, then perhaps it is necessary to accelerate their exposure in consumer technology as a means of showing that an LLM can cause harm, in whatever form that may come.
It’s very easy to overlook LLMs making things up, but they do (including GPT-4), and if that can’t be solved then it’s safe to assume this hype will be short-lived.
All tech that becomes ubiquitous is based on giving humans answers that do not change on the throw of a coin. Your bank account statement shows the same balance for the same end-of-day query no matter when you request it; your GPS gives you directions to the same place every time you enter the same destination address, not to a place that looks like a probable destination you might want. The best uses for AI are not generative BS, but ML that gives us answers, patterns, and action scripts. Our brains are wired for survival and constantly look for answers, patterns, and scripts that do not change at random. There is a reason movies and stories follow a hero's arc: we want order; life is chaotic enough. Not realising that will be a rude awakening for all investors in generative AI.
People who hide behind procedures, metrics, and the competition driven by them, instead of looking out the window to see whether it's rainy or sunny.
MBA types (which might be unfair here, because this is more like engineer clockwork-headedness) who use "products launched" as a metric, even when you're actually taking down and relaunching the same product over and over.
Google killing "unprofitable" products like Google Reader because it makes sense on paper, except that a) it's a minor line item, b) they are too analytical to measure the impact on goodwill and the brand "soft power" of the product, and c) the existence of the product created demand for RSS producers; it was not simply another reader.
I've heard one link in the chain is that promotions are too tied to Shipping New Things, not improving current products or keeping things from becoming worse.
Accurate. Abject lack of a cohesive vision/strategy, and no leadership to articulate and drive it. It’s a motley collection of fiefdoms competing for perf points.
The relentless drive to make as much money as possible, as quickly as possible. Publicly traded companies tearing at the soft underbelly of society for a few pennies more each quarter.