That's one of the things that drives me nuts about all the public discourse about AI and our future. The vast majority of words written/spoken on the subject are by generic "thought leaders" who really have no greater understanding of AI than anyone else who uses it regularly.
A characteristic of the field since the beginning. Reading What Computers Can't Do in college (early 2000s) was an important contrast for me.
> A great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.
> Dreyfus' last paper detailed the ongoing history of the "first step fallacy", where AI researchers tend to wildly extrapolate initial success as promising, perhaps even guaranteeing, wild future successes.
And the article agrees with you, and is pretty scathing about all the books except Narayanan’s (which is also the only book with a balanced anti-hype perspective):
> A puzzling characteristic of many AI prophets is their unfamiliarity with the technology itself
> After reading these books, I began to question whether “hype” is a sufficient term for describing an uncoordinated yet global campaign of obfuscation and manipulation advanced by many Silicon Valley leaders, researchers, and journalists
Prusa is going to use Bondtech's upcoming INDX system, which swaps out the entire filament path.
Bambu's Vortek seems to just swap nozzles, so while that should cut down on waste, it's going to be much slower (the XL is already much faster than AMS-based printers, but comes at a substantial price increase).
INDX tool changes are expected to be around 8-12 seconds. Vortek would probably be around 30+ seconds.
Interesting, yeah. I'm mainly interested in making multi-material faster without stepping up to the size of an XL, so it looks like the Prusa solution would be a better fit for my needs personally.
If Bambu are only swapping nozzles, it also means they still need something AMS-like to swap the filament path, which somehow feels a bit clunky. I think having separate filament paths is overall a cleaner and simpler design.
I've had two interactions with Wendy's AI drive-through, and the first time I was pleasantly surprised, but the second time it would not stop suggesting add-ons after every single thing I said. It was comically pushy.
A human would have pretty quickly picked up on my increasingly exasperated "no, thanks" and stopped doing it, but the AI was completely blind to my growing frustration, following the upsell directive without any thought.
It reminded me of when I worked in retail as a kid and we were required to ask customers if they needed any batteries at checkout, even if they were just buying batteries. I learned pretty quickly to ignore that mandate in appropriate situations (unless the manager was around).
Makes me wonder how often employees are smart enough to ignore hard rules mandated by far-off management that would hurt the company's reputation if they were actually followed rigidly. AI isn't going to have that kind of sensitivity to subtle cues in human interaction for some time, I suspect.
Everyone who's detached from reality, whether an MBA in HQ or some two-bit commenter in the internet comment section who fancies themselves a central planner, thinks the problem is the people on the ground not following "the rules", when in reality "the rules", in just about any situation where there are rules, are crap if actually followed, and are often knowingly written as crap in response to other crap ("government says we need to tell you to wear this PPE, no exceptions; yes, we know you'll get heat stroke in some conditions; we're not checking <wink>" type stuff).
That was my first thought as well. Every customer-facing job has ridiculous requirements from corporate that any employee with half a brain knows to skip. I wonder how much more exasperating customer service experiences will get with the proliferation of language models that don't know how to soft-pedal this stuff.
One of my line managers described the corporate management style as "Asking for an unreasonably excessive goal in order to motivate people to work towards a reasonable outcome".
That, and the CYA safety stuff, which corporate orders us to follow but does not in all cases actually expect us to follow; if they did, they would have taken their regulations written in blood and asked somebody, "How many more people do we need to hire to implement this?" So the management that needs to actually deliver on hard, visible cleanliness and sales-related metrics relaxes enforcement until barely anybody even knows the policy exists. Part of their job is to be ritually fired when that goes wrong.
You've hit the nail on the head here. AI rollout has this hilarious consequence where "lower" departments have for a long time insulated the c-suite against its worst excesses and worst mistakes. Now that barrier is slowly crumbling due to AI-first mandates, giving the c-suite an incredibly rare opportunity to discover how bad some of its ideas are in practice, with less opportunity to blame those outcomes on others.
I am pretty certain that if you are in an org where the c-suite shifts blame for negative results onto external sources, they will find a way to do the same in the age of AI.
I've always thought of this as the reality grease problem.
We need rules. Yet the infinite variety of reality creates infinite situations in which the rules are counterproductive.
Previously: the ground folks had a brain and bent/ignored certain rules in the interest of getting their job done.
The principal peril of creating a more end-to-end automated, lights-out business is that there is no longer a brain to grease the interface between the c-level and reality.
And c-level is never going to admit their own mistakes.
Ergo, you're going to get a lot of command-heavy companies that plow themselves into the ground over the next 10-20 years, because the low-level people they're going to fire were performing an essential function.
(Note: the easiest escape, inasmuch as I can see one, is radically data-driven management, with frequent random shifts between analogous but independent metrics.)
I'm optimistic that the ease of enforcing rules like this, plus better customer data (maybe via the apps), will lead to a better format. The annoyance grows from the rules causing us to be prompted to do or respond to things we don't want or need. When the Taco Bell guy asks if I want to add sour cream for the third time, I get pretty annoyed. I don't like sour cream, period. But every time they hit me with "would you like to double the chicken", even if I wasn't a yes when driving to the window, I cave when they ask, and both parties are probably happier for it. Management isn't totally wrong here, because there are upsells that all of us would take when presented at the right time. It's a bit like ad targeting; it's just happening in real time at the window.
So the problem in my mind is the format. How do you not ask three questions with every dish? Maybe the screens can help. Now that you have an AI that always follows the rules and can likely work through more complex decision trees quickly "at the window", reasonable chains could start to dial in how this works, becoming more targeted, and active vs. passive at the right times.
I wish I was optimistic that data and compliant robots will be used to make things better for customers.
I think it's far more likely that they will, at best, be used to do whatever horrible and unpleasant things temporarily juice sales numbers. Across our economy we'll see this play out in every customer service interaction, and a wave of perniciously persistent upselling attempts will wash over us all.
After a while, we'll stop noticing that the simple process of buying a soda requires saying no to 15 different requests to subscribe to a service, put our credit card on file, sign up for notifications, and consider buying cookies, a burger, and some fries. But our lives will be worse for it.
I’ve always wondered if that battery spiel paid off. Do you have any stats? I never once was at Radio Shack and was like “yeah let me get some of your batteries” when they asked. Maybe I’m a fringe case.
My understanding is that Ra-Elco essentially moved to Standard Supply after the building burned down. I haven't checked it out myself yet so I can't confirm.
I dunno, but I do know that Standard Supply has a lot of the items I generally need and/or seek out when I'm visiting such places, so that's all good for me. I still miss Ra-Elco, but as long as I can buy the right parts locally when I'm in a real hurry to grab some important little widget or gizmo, that's what matters most to me. Especially important since Radio Shack is no longer that kind of store at all (and hasn't been for many years now).
You can configure it to not be enabled by default; then clicking a little blue circle next to the title will show you the community version.
Plenty of good content is forced to play the clickbait thumbnail/title game and it would be a shame to miss some of it because of YouTube's incentive problems.
GEGL is just a library created to modernize GIMP's image-manipulation pipeline. It forms a DAG of image operations. It's what unlocked non-destructive edits, and porting everything over to it was a pretty massive undertaking (though it probably didn't need to take 20 years...)
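For a sense of what that DAG looks like in practice, here's a minimal sketch using GEGL's C API: a load, blur, and save node wired into a graph, then processed by pulling on the sink. The file paths and blur settings are placeholder values for illustration, not anything GIMP-specific.

```c
/* Minimal GEGL graph sketch: load -> gaussian-blur -> png-save.
 * Assumes GEGL 0.4; compile with something like:
 *   gcc sketch.c $(pkg-config --cflags --libs gegl-0.4)
 */
#include <gegl.h>

int main(int argc, char **argv)
{
  gegl_init(&argc, &argv);

  GeglNode *graph = gegl_node_new();
  GeglNode *load  = gegl_node_new_child(graph,
                      "operation", "gegl:load",
                      "path", "input.png", NULL);
  GeglNode *blur  = gegl_node_new_child(graph,
                      "operation", "gegl:gaussian-blur",
                      "std-dev-x", 4.0,
                      "std-dev-y", 4.0, NULL);
  GeglNode *save  = gegl_node_new_child(graph,
                      "operation", "gegl:png-save",
                      "path", "output.png", NULL);

  /* Wire the nodes into a chain and pull pixels through
   * by processing the sink node. */
  gegl_node_link_many(load, blur, save, NULL);
  gegl_node_process(save);

  g_object_unref(graph);
  gegl_exit();
  return 0;
}
```

Because the edit lives in the graph rather than in the pixels, you can change the blur's std-dev and reprocess without ever touching the original image, which is where the non-destructive editing comes from.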
End users really don't need to know about it. Its exposure in the UI is likely just because a lot of stuff it can do isn't available yet in the traditional GIMP UI.
I'm a data engineer, so DAG/node systems in content-creation software delight me (Blender, Resolve, Houdini). It's terrible that this is hidden behind the name "GEGL Operations". I'm exactly the person who would get it and love it immediately, but would never find it. Sounds like your parent commenter is the same. GIMP has always been a leader in UI self-owns.
"End users really don't need to know about it... a lot of stuff it can do isn't available yet in the traditional GIMP UI"
So why "don't they need to know about it?" And regardless, putting a meaningless label on it is a user-hostile blunder. This blunder is not all that uncommon either. Affinity did it by burying a bunch of stuff under a menu item called "Studio." Not as bad as "GEGL", but still meaningless.
I think you maybe didn't see the "yet" in there. This is software written by unpaid volunteers. There is no user-hostility, only a lack of time and help at implementing things.