> What other industry relies on its customers as implicit developers?
I would say most of them. To list a few:
- restaurants (almost all of them will send you feedback surveys these days; they also rely on you to tell them if they, for example, cooked your steak to the wrong temp)
- property maintenance (again, feedback surveys)
- auto mechanics (if the thing they fixed is still broken, a good mechanic wants to know)
- doctors (they rely heavily on YOU to tell them what's wrong with your body)
- democratic political systems (when working correctly)
- road infrastructure (the city won't fix potholes nobody is reporting, and they won't do anything about badly tuned traffic lights nobody complains about)
- vaccines and medicine (the testing phase may not uncover every possible side effect; they need recipients/users to report those if they happen)
(Please nobody come back with cynical takes on how these aren't helpful in their specific case/location, that's clearly not the point)
I think it's a given that I'm not using perfect metaphors, dissecting them is ignoring the point.
Users operate with different configurations, hardware, and needs. It is literally impossible to release bug-free software. Every developer should try their best, obviously, but NOT requesting that bugs be reported is pure hubris on anyone's part.
The unfortunate situation is that bugs in modern software just seem to… show up, as if their appearance is an ongoing maintenance issue rather than the outcome of something somebody on the development team did.
But, anyone who took the time to write bug-free code went out of business decades ago.
Yes, we know that floppy disks and drives will wear out, and they have few if any sources for new repair parts. So the fact that the system is still more or less working today doesn't mean it isn't doomed; it needs to be replaced before it suffers a catastrophic, unrecoverable failure.
2. There are floppy emulators that replicate the functionality of floppy drives with flash storage.
3. The above two probably absorb all of the demand today, but even if they didn't, the volume is so low that fixed manufacturing costs per unit could likely push unit prices well beyond even $50. The tooling for factories often costs millions, and unless you are selling in high volume, you end up with quite a high fixed cost per unit (rough arithmetic sketched below).
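To make the fixed-cost arithmetic concrete, here is a minimal back-of-the-envelope sketch. Every number in it is an assumption chosen purely for illustration, not a sourced figure:

    # Illustrative only: amortizing a one-time tooling cost over a small
    # production run dominates the per-unit price. All numbers are assumed.
    tooling_cost = 2_000_000        # one-time factory tooling, USD (assumed)
    units_sold = 20_000             # plausible lifetime demand for new drives (assumed)
    variable_cost_per_unit = 15     # parts + assembly per drive, USD (assumed)

    fixed_cost_per_unit = tooling_cost / units_sold             # 100.0 USD/unit
    total_unit_cost = fixed_cost_per_unit + variable_cost_per_unit
    print(f"cost per drive before any margin: ${total_unit_cost:.2f}")  # $115.00

And that is before distribution, support, or any profit margin, which is how a low-volume product blows well past the $50 mark.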
Because it wouldn't be profitable? How many do you think they could sell to a dying market, and what would those manufacturing costs be? What experts could you tap who know this space? They are all gone.
The point is it's enough leeway to be reasonably cautious in the rollout rather than needing to get a big contractor to do a major, and therefore expensive, push.
I feel like a lot of responses here are lecturing about aphantasia rather than SDAM. I learned of SDAM from this article, but it resonates with my own experiences.
I would describe this in terms of telling stories from childhood. Many people I know can spin a narrative around significant events from their childhood, as if they're living it again as they tell it. This is something media has taught me is the normal way of experiencing a memory. But for me, it's just a list of facts. I can tell you various bits about the time I got punched in the face as a child (second grade, his name, my telling him to "make me" before he did it, every teacher not believing I could have been partially at fault), but those are simply fact lookups in list form. Part of that is aphantasia sure, but the other part is the lack of an emotional memory. I don't remember how any of that made me feel, I can just assume based on context. If I felt anything other than what would have made the most sense in context, it's logged as a fact about the incident.
Sadly, that means I have very little actual memory of my childhood. It's mostly a list of incidents and some data points about the incidents. I don't have emotional core memories of my grandparents, just some events associated with them that I know happened, but can't relive.
I'm somewhere in between. I mostly learn a lot like the author of the article says, just incorporating things into my worldview. I am terrible at remembering things like so-and-so's theorem/algorithm/historical proclamation. But when I absorb the idea, it becomes part of my own operating model. I'm terrible at citation.
Despite aphantasia, my autobiographical memory is a weird mixture of gaps and some very solid vignettes/moments where I remember a lot of detail. It's never a long-running scene. Many of these memories are pinned at some traumatic or surprising moment, but some seem to be much more mundane and yet somehow were recorded as if they were pivotal.
I have a pretty high ACE score. Ironically, some of my pivotal memories are meta-moments when I had a sudden veil lift from previously repressed memories. I'm remembering not the original traumatic moments, but the moment of realization that my memory had these decade-plus gaps or eras to it.
Yeah, I describe my imagination to people as kinetic. Even if I'm trying to "see" a static object, it's in a form like a sparkler drawing.
Similarly, I can't hear a particular song in my head even if it's an earworm. Instead, I hear a rough approximation of it, as if I were trying to describe it to someone else (instruments as mouth sounds, bad falsetto, and so on).
I know what you're going for, but no, "we" didn't have a choice at all. A select few did, and they convinced many that it was a great idea. I am part of the we, and I did not choose this.
Why can't it be both? I fully believe that the current strategy around AI will never manifest what is promised, but I also believe that what AI is currently capable of is the purest manifestation of evil.
Am I a psychopath? What is evil about the current iteration of language models? It seems like some people take this as axiomatic lately. I’m truly trying to understand.
Even if current models never reach AGI-level capabilities, they are already advanced enough to replace many jobs. They may not be able to replace senior developers, but they can take over the roles of junior developers or interns. They might not replace surgeons, but they can handle basic diagnostic tasks, and soon, possibly even GPs. Paralegals, technical writers, many such roles are at risk.
These LLMs may not be inherently evil, but their impact on society could be destabilising.
The diffusion-based art generators seem pretty evil. They're trained (without permission) on artists' works, they devalue said works (letting prompt jockeys LARP as artists), and they can then be deployed to compete directly with those same artists, threatening their livelihoods.
These systems (LLMs, diffusion) yield imitative results just powerful enough to eventually threaten the jobs of most non-manual laborers, while simultaneously being not powerful enough (in terms of capability to reason, to predict, to simulate) to solve the hard problems AI was promised to solve, like accelerating cancer research.
To put it another way, in their present form, even with significant improvement, how many years of life expectancy can we expect these systems to add? My guess is zero. But I can already see a huge chunk of the graphic designers, the artists, the actors, and the programmers or other office workers being made redundant.
Making specific categories of work obsolete is not evil by any existing moral code I know. On top of that, history shows that humans are no less employed over the generations as we’ve automated more things. Your entire comment is rooted in fear, uncertainty, and doubt. I have the opposite mindset. I love the fact that we have trained models on large corpuses of human culture. It’s beautiful and amazing. Nobody has the right to dictate how the culture they generate shall be consumed, not me, not you, and not Warhol, not Doctorow, not Lessig. Human-created art always has been and will continue to be valuable. The fact that copyright is a really poor way to monetize art is not an argument that AI is evil. I support all my favorite creators on Patreon, not by buying copies of their work.
That's kind of part of the problem, though. Yes, switching jobs constantly is a solid path to higher wages in fields like tech (at least it was before this year; some of the most competent people I know are struggling to change jobs), but in my experience that act tends to reduce one's number of "give a damns".
How would that work? The point of the Waffle House Index is its association with Waffle House. Without that, you have a unitless statistic, unless you assume every visitor is already in the know.