I'm really looking forward to the Trough of Disillusionment phase of the LLM hype cycle. This insistence on shoehorning it into everything is getting beyond stupid.
This reminds me of when they were shoehorning voice assistants into everything. “Alexa can play music through my smoke alarm? Alexa can start my microwave for me instead of me pushing two buttons? Why not?”
This reminds me of when everybody and their dog was shoehorning blockchain into everything. Blockchain-based pet platforms, pet owners earning tokens for participating in the community, pet-care services fueled by smart contracts, and the like.
Misery might be a bit hyperbolic, but I'm referring to the larger-scale model of engagement being a tier-0 metric for success. Instagram and its cohort stealing people's attention spans is something I'd describe as negative and almost evil, with the larger-scale problem being that the smartest people in our industry have spent the past two decades at ad companies.
In the context of LLMs I think that they're useful tools, and if "we" play it right they can be a great boon. Short-term they'll lead to the enshittification of the internet even more though imo.
What do you expect, exactly? I'm sure every big tech company has had AI in its products for a while now: who do you think filters the spam in your Gmail, if not their AI bots? Or generates the music suggestions in your Spotify?
Why do you think Microsoft would be a hellhole for doing the same? Especially considering all the productivity use cases they've shown for the Office suite.
I swear HN needs to hate everything Microsoft is doing just because.
I think there are two ways to go about implementing AI.
The low-key implementations that assist are the most elegant ways to implement AI functionality. If I can use a product and not realize AI is behind it, the product has successfully utilized it. Spam filters fall into this case. Automatic “radio” stations from streaming services fall into this too.
The worst forms of AI implementation are the kind that spend more screen real estate advertising AI as if the product has something to prove. This hinders my experience as a user because I do not care about AI if it isn’t seamlessly fitting in my workflow.
I’m not sure what’s happening at Microsoft, but their insistence on AI in very unusual places doesn’t give me confidence they want to embrace AI in a manner that’s helpful. It feels like someone’s resume boosting exercise. It gives me the feeling they are desperate.
It was an exaggeration, but most pre-ML AI is just a set of fairly rigid and general rules. The approaches mostly differ in how the rules are added (by a human vs. by a machine) or when (during a "training" process vs. on the fly while being used).
It's true I'll hate on Microsoft for just about anything, but they didn't say "some products", "some services", "some processes"; they said "EVERY SINGLE THING!!!". See the difference?
Given the quote is referring to what they've done "for years and years and years [...] in our product groups" outside of the OpenAI arrangement, the fact that a large number of their products have come to make some use of AI models without much fanfare (search, spell-check, spam filtering, voice dictation, language translation, recommendation systems, ...) is not inherently due to the more recent LLM shoehorning. Machine learning is just the best choice for a good number of tasks.
I'm not talking about LLMs in particular. I guess this is a company-wide mandate to grow knowledge of how to do this stuff well; I mean, that makes sense. But in the trenches (aka the hellhole) it means a lot of bad, bad stuff is being relied on, and it generates calcification of business segments and kafkaesque anti-patterns for the uninitiated. This doesn't only apply to "AI"; it's a generic feature of shoehornings. The problem with the shoehorn is that it's politically costly to resist, even if it makes good business sense to resist at the micro level.
I'd agree that "We're going to shove a chatbot in every single one of our products", like the recent Copilot integrations, would reek of shoe-horning and a possible company wide mandate.
But remarks more along the lines of "Looking back over the past decade, we've made use of ML models in some part of almost all of our products" seems fairly reasonable to me, not necessarily indicative of much other than machine learning being the best tool for an increasing number of tasks. If they weren't using ML-based echo cancellation in Teams calls for instance, they would have a worse product than competitors that do.
I don't claim that Microsoft products are perfect, just that it seems a reasonable use of machine learning. The things I've seen them use ML models for are genuinely useful, and mostly added without too much fanfare years prior to the recent generative AI hype.
I understand, but think that in many relevant cases it'd now be non-ML approaches that would be "forced". Machine learning is just the easiest and best way to accomplish a large range of tasks.
I guess a general application of statistics and control is going to be called ML then? If that's the world we're living in then I wonder how the missing fraction are being governed. Pure malice?
Not all applications of statistics/control are ML - I don't know what I said to give that impression.
A spam filter based on regex or manually-selected criteria and thresholds would not be ML for instance, whereas modern effective spam filters typically do make use of machine learning.
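To make the distinction concrete, here's a minimal sketch (all function names and the toy data are mine, not from any real filter): a hand-written regex rule versus a tiny naive Bayes classifier whose "rules" are word weights learned from labelled examples.

```python
import re
from collections import Counter
from math import log

def rule_based_is_spam(text):
    # Not ML: a human chose these criteria and thresholds by hand.
    return bool(re.search(r"free money|winner!!|claim your prize", text, re.IGNORECASE))

def train(examples):
    # ML: word counts per class are learned from labelled data.
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def learned_is_spam(text, counts, totals):
    # Sum of Laplace-smoothed log-likelihood ratios per word;
    # positive total means the words lean toward the spam class.
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts["spam"][word] + 1) / (totals["spam"] + 2)
        p_ham = (counts["ham"][word] + 1) / (totals["ham"] + 2)
        score += log(p_spam / p_ham)
    return score > 0

examples = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting moved to tuesday", "ham"),
    ("lunch on tuesday maybe", "ham"),
]
counts, totals = train(examples)
print(learned_is_spam("free prize money", counts, totals))       # True
print(learned_is_spam("see you at the meeting", counts, totals)) # False
```

The regex version only ever catches what its author anticipated; the learned version generalizes from the examples it was trained on, which is why modern filters take that route.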
It's well known you can infer the safety of a specification by assessing a small subset of its features. Quod erat demonstrandum. /s
But seriously, this is how backdoors are trivially integrated.