peteradio's comments | Hacker News

> DAG of other devices

It's well known that you can infer the safety of a specification by assessing a small subset of its features. Quod erat demonstrandum. /s

But seriously, this is how backdoors are trivially integrated.


Ok but hands off my chickens, HOA clucker


I've seen self-described "product owners" claim to build using LLM outputs without knowing how to code.


Money won is by definition not lost; therefore I'm rich!


> All he had to do to win was use a different stick shaped object for the test.

His WEEEENNUS?????


> Half the time an engineer is sitting around thinking or drinking all the coffee.

Is that supposed to sound like a bad thing?

> tangible improvements, process cranking, or institutional knowledge, the most valuable people are the admins

Valuable like a splinter in your big toe. "It hurts when I move it!"


> AI models are used in almost every one of our products, services and operating processes at Microsoft

Oh boy. Any insight from Microsoft people on this apparent hell-hole?


I'm really looking forward to the Trough of Disillusionment phase of the LLM hype cycle. This insistence on shoehorning it into everything is getting beyond stupid.


This reminds me of when they were shoehorning voice assistants into everything. “Alexa can play music through my smoke alarm? Alexa can start my microwave for me instead of me pushing two buttons? Why not?”


This reminds me of when everybody and their dog was shoehorning blockchain into everything: blockchain-based pet platforms, pet owners earning tokens for participating in the community, pet care services fueled by smart contracts, and the like.


When the metric for success starts and stops at "engagement"!


The big problem with these things is that the people responsible for the misery they cause will not feel any of the consequences.


In fact, they'll likely be rewarded for delivering results.


What misery? The hyperbole here is astounding.


Misery might be a bit hyperbolic, but I'm referring to the larger-scale pattern of engagement being a tier-0 metric for success. Instagram & co. stealing people's attention spans is something I'd describe as negative, almost evil, and the larger problem is that the smartest people in our industry have been at ad companies for the past two decades.

In the context of LLMs I think they're useful tools, and if "we" play it right they can be a great boon. Short-term, though, they'll lead to even more enshittification of the internet, imo.


What do you expect, exactly? I'm sure every big tech company has had AI in its products for a while now: who do you think filters the spam in your Gmail, if not their AI bots? Or picks the music suggestions in your Spotify?

Why do you think Microsoft would be a hellhole for doing the same? Especially considering all the productivity use cases they've shown for the Office suite.

I swear HN needs to hate everything Microsoft is doing just because.


I think there are two ways to go about implementing AI.

Low-key implementations that quietly assist are the most elegant way to add AI functionality. If I can use a product and not realize AI is behind it, the product has utilized it successfully. Spam filters fall into this case. Automatic “radio” stations from streaming services fall into it too.

The worst forms of AI implementation are the kind that spend screen real estate advertising the AI, as if the product has something to prove. This hinders my experience as a user, because I do not care about AI if it isn’t seamlessly fitting into my workflow.

I’m not sure what’s happening at Microsoft, but their insistence on AI in very unusual places doesn’t give me confidence that they want to embrace AI in a manner that’s helpful. It feels like someone’s resume-boosting exercise. It gives me the feeling they are desperate.


>Who do you think filters the spam in your Gmail, if not their AI Bots?

I would hope it's a purpose-built ML model and not an LLM that was cajoled into doing spam filtering.


So exactly what I just said.

Trained ML models have been in use since long before LLMs came out.


ML models are AI too. And you don't even need statistical models to call your system AI, just a set of conditional blocks behind a decision.


You really think a bunch of conditions can be labeled as AI? Do you work in marketing?

https://miro.medium.com/v2/1*gXZeYDjqLBWqbnGvlr_gyQ.png


It was an exaggeration, but most pre-ML AI is just a set of fairly rigid and general rules. They mostly differ in the sense that rules are added by different means (human vs machine) or at different times (during the "training" process or on the go while being used).
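
E.g. a toy sketch in Python (names and rules made up by me, just to illustrate) of the kind of hand-written rule system I mean:

    def looks_like_spam(msg):
        # every "rule" is a hand-written conditional; no statistics,
        # no training data, yet systems built from piles of these
        # were marketed as AI for decades
        if "free money" in msg.lower():
            return True
        if msg.count("!") > 5:
            return True
        return False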


Is your contention that AI can't be implemented on a Turing machine?


It's true I'll hate on Microsoft for just about anything, but they didn't say "some products", "some services", "some processes"; they said "EVERY SINGLE THING!!!". See the difference?


>they said "EVERY SINGLE THING!!!", see the difference?

Where did they say this? I read "almost every".


Does it make any difference to the context whether they've successfully shoehorned it into only 99%?


Given that the quote is referring to what they've done "for years and years and years [...] in our product groups", outside of the OpenAI arrangement, the fact that a large number of their products have come to make some use of AI models without much fanfare (search, spell-check, spam filtering, voice dictation, language translation, recommendation systems, ...) is not inherently due to the more recent LLM shoehorning. Machine learning is just the best choice for a good number of tasks.


I'm not talking about LLMs in particular. I guess this is a company-wide mandate to grow knowledge of how to do this stuff well; I mean, that makes sense. But in the trenches (aka the hell-hole) it means a lot of bad, bad stuff is being relied on, and it generates calcification of business segments and Kafkaesque anti-patterns for the uninitiated. This doesn't only apply to "AI"; it's a generic feature of shoe-hornings. The problem with the shoe-horn is that it's politically costly to resist, even if it makes good business sense to resist at the micro level.


I'd agree that "We're going to shove a chatbot into every single one of our products", like the recent Copilot integrations, would reek of shoe-horning and a possible company-wide mandate.

But remarks more along the lines of "Looking back over the past decade, we've made use of ML models in some part of almost all of our products" seem fairly reasonable to me, and not necessarily indicative of much other than machine learning being the best tool for an increasing number of tasks. If they weren't using ML-based echo cancellation in Teams calls, for instance, they would have a worse product than competitors that do.


Teams doesn't even do copy-paste or open Microsoft-based formats in under 10 seconds (IME)... I rest my case.


I don't claim that Microsoft products are perfect, just that it seems a reasonable use of machine learning. The things I've seen them use ML models for are genuinely useful, and mostly added without too much fanfare years prior to the recent generative AI hype.


I'm saying that forcing particular technology leads to worse products in many cases.


I understand, but think that in many relevant cases it'd now be non-ML approaches that would be "forced". Machine learning is just the easiest and best way to accomplish a large range of tasks.


I guess any general application of statistics and control is going to be called ML then? If that's the world we're living in, then I wonder how the missing fraction is being governed. Pure malice?


Not all applications of statistics/control are ML - I don't know what I said to give that impression.

A spam filter based on regex or manually-selected criteria and thresholds would not be ML for instance, whereas modern effective spam filters typically do make use of machine learning.
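
As a toy sketch (Python with scikit-learn, data made up by me): the rule version fixes its criteria by hand, while the ML version learns them from labeled examples:

    import re
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # non-ML: a manually-selected criterion, fixed by hand
    def rule_filter(msg):
        return bool(re.search(r"lottery|wire transfer", msg, re.I))

    # ML: the criteria are learned from labeled examples
    msgs = ["win the lottery now", "meeting moved to 3pm",
            "urgent wire transfer", "lunch tomorrow?"]
    labels = [1, 0, 1, 0]  # 1 = spam
    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(msgs), labels)
    print(clf.predict(vec.transform(["free lottery tickets"])))  # [1]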


I hope they are not drawing inspiration from Steve Ballmer's brain.


Now we can take this LLM and paste it right into Windows Write!


It shouldn't be a very large one. Lots of empty space and dead synapses leading nowhere in the source material.


Developers! Developers! Developers! Developers!

becomes

CoPilots! CoPilots! CoPilots! CoPilots!


Co-copilots! Co-copilots! Co-copilots!


I can write that with 2 lines of BASIC, with PRINT and GOTO. It also has the word Developers a few times.


Indeed, in Python you can just eval it.
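
Something like this, I suppose (strictly speaking eval only takes expressions, so the GOTO part needs a real loop):

    print(eval('"CoPilots! " * 4'))  # eval covers the PRINT expression
    while True:                      # ...and this stands in for GOTO 10
        print("CoPilots! " * 4)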


Nuh-uh! We are still warring! Israel winks at Gaza, Iran squeezes the U.S.A.'s ass.


I used to hide porn deep in the Lotus folder cache. We are kindred spirits!

