> Now older and wiser, candidly a lot of folks would be well served by default blue links, og html submit buttons and tables for layouts. A fair bit of modern UI is complete trash: it's the product of a designer and a product person putting the next bullet point on their resume.
If I were emperor of the world I’d make every consumer program pass a battery of tests that included demonstrating sufficient usability for a panel of users from a nursing home, a panel of users with sub-90 IQ who were in a stressful environment and trying to complete other tasks at the same time, blind users, deaf users, et c.
I expect the outcome would be a hell of a lot less twee “on brand” UI elements and a lot more leaning on proven design systems and frameworks, including fucking crucially for appearance. And also a lot less popping shit up on the screen without user interaction (omg those damn “look what’s new!” sorts of pop ups or focused tabs—congrats, some of your users are now totally lost)
If you aren't making a consumer product for nursing home patients with sub-90 IQs, then you'd be wasting your time, and the feedback you got from the exercise wouldn't be useful. In fact, any decisions you made based on it could be wrong. The point isn't to design for the lowest common denominator, but for the users you will actually have, and usability test participants should be recruited with that in mind.
There is some merit to what I assume is your underlying argument, but the way you phrase it isn't helpful.
>The point isn't to design for the lowest common denominator, but for the users you will actually have
Keyword: situational disability
Even a perfectly fit and educated target audience sometimes suffers from conditions, or operates in an environment, that significantly reduce their mental or physical capacity: stress, injury, pregnancy, too many beers, very long nails, terrible weather, a toddler trying to grab your phone, being a non-native speaker, etc. You may even know the user personally, but you never know what's going on in their lives when they use your app. So, general advice: ALWAYS follow accessibility guidelines. Even bad copy may drop your usage by a significant percentage, because there are plenty of people with dyslexia.
Pick your favorite programming language. Do you think it should be tested on people in a nursing home? I'd argue that's the wrong audience. (A programming language isn't a graphical user interface, but it is a user interface!)
A programming language is not a user interface; it is a way to describe commands. The UI in this case would be the way you enter the program, e.g. punch cards, a text editor or IDE, or an AI copilot. People who can write code are a very broad audience, and of course all accessibility requirements must apply.
A programming language is absolutely a user interface. The error messages and diagnostics emitted by the language are the feedback mechanisms. The syntax and semantics are the design of the interface. The documentation is the user manual. Text editors, IDEs, punch cards and AI copilot are all separate UIs on top of whatever programming language you happen to be using.
After all, TUIs are a thing and nobody debates that they are also user interfaces. Just because a programming language is all text doesn’t mean that usability metrics don’t exist for it.
>The error messages and diagnostics emitted by the language are the feedback mechanisms.
The error messages and diagnostics are emitted by tools like the compiler, linker, or interpreter, and are part of their interface. The language standard may codify some error messages, but the language itself cannot present them to you, because a language is not a program.
>Just because a programming language is all text doesn’t mean that usability metrics don’t exist for it.
Just because some usability metrics can be applied to a programming language does not make it a UI. An interface implies interaction. You do not interact with the language itself; it cannot receive UI events and react to them. You interact with the tools that understand it.
You’re being pedantic to a fault. Here’s the definition Wikipedia gives for UI:
> a user interface (UI) is the space where interactions between humans and machines occur[0]
Further, Wikipedia lists a ton of different kinds of user interfaces. Included among those is:
> Batch interfaces are non-interactive user interfaces
And further, here’s a better explanation of how a programming language is a user interface than I can provide here[1]. It really is as simple as the programming language being an interface to the machine, and the programmer being the user of that interface. I don’t understand why you’re arguing so hard against a widely accepted fact. When computers were first made, there was no such thing as a mouse or keyboard; there were punch cards. The only way for a user to interface with the machine was to insert a program on punch cards. Nowadays we have all sorts of input devices that give us new ways to interface with machines, but the most basic way we can interface with a machine is by writing a program expressing our intent for it.
And if you want to be so pedantic, then is a pure HTML/CSS website a UI? There’s no program there, just markup; the only program that runs is the browser. So is the website nothing and the browser the only user interface? Or how about the steering and brakes/accelerator in a car? Those are purely mechanical; are they not a user interface, just because there’s no program? Or how about the original arcade games like Pong? They were directly soldered onto the board. There was no program, just a circuit; no instructions being executed. So does that make those games not a user interface?
Using labels does not make your arguments any stronger; on the contrary. Speaking of which, you quote Wikipedia, but neither the article you refer to nor the article "Programming language" says that a programming language is an interface. Languages by definition are merely syntax and semantics; they are used in interactions, but they do not define an interface themselves. It is not an "is" relationship but an "is used by" relationship. You can write a program on a sheet of paper and put it in a frame on a wall, so that your friends can read it and enjoy the beauty of the algorithm after a couple of bottles of wine, or you can print it on a t-shirt to communicate your identity. In neither case is there an interaction between a human and a machine.
An interface is always about interaction: a keyboard to type the command or the program on, a display presenting an IDE or a command interpreter, etc. So, looking at your examples: HTML is not an interface and an HTML file is not either, but the static website opened in the browser is, because the browser has downloaded the site and now knows how to interface with you. A steering wheel is of course an interface because, as I said in my previous comment, it allows interaction. The arcade-game example is actually the same as the first computers, which did not have an interface for programming (punch cards came later) and had to be re-assembled to run a new program: they did have user interfaces for data input and output.
Your second reference is clearly written for beginners and simplifies things to the point where it becomes nonsense, even saying that "Programming, therefore, generally involves reading and editing code in an editor, and repeatedly asking a programming language to read the code to see if there are any errors in it". Do you still think it was worth quoting?
Now, if you feel that I'm being over-pedantic with this response too, so be it.
Okay, then maybe a better example would be a command-line program like `grep` or `sed`. Should those be tested in a nursing home? I'd argue that's the wrong audience, and testing there would cause you to simplify these tools to a point where they're no longer useful.
(I do think it's notable that you can combine the inputs and outputs of such programs into a shell script, which feels a lot like using a programming language—but this is beside the point I was trying to make.)
Horrible advice for expert tools. If you can assume that the end user is going to learn the tool, you can design it for peak effectiveness after a learning curve. If you have to account for the least capable users and hostage-situation levels of panic, you can't do that, and you end up with a worse product overall.
I think the point is that you can design for peak effectiveness while considering usability, and that makes the tool more effective. There’s a lot more scrutiny on edge cases when designing expert tools.
For “expert tools” I’d argue it’s imperative to consider high-stress interactions, because the stakes of the outcome outweigh the convenience of the expert using it.
You are missing the point: it's obvious that a cockpit needs to account for stress or a crisis. Extending this to CAD software, for example, is nonsense.
I like your confidence, but it also shows a lack of experience and understanding of what engineering is. Expert tools have a much lower tolerance for user mistakes, because there is big money at stake (and sometimes other people's lives). A typo in an Instagram post is not the same as a wrong number in CAD. I have personally seen a construction project where incorrect input in CAD resulted in several dozen foundation piles for a 16-story building being installed outside the site boundary, just because the architect responsible for aligning the building on the site, working in a hurry, confused two fields in the UI. Of course, there was a chain of failures, each step costing more than the previous one, but it could have been prevented if the software cared about the user and did not assume he was a superman.
It is so easy to squeeze as much functionality as possible onto a screen in the name of productivity, but then the quality of labels is sacrificed, click zones become too small, and feedback is reduced to a barely visible message in the status bar. It takes one sleepless night or a family argument for the user to get distracted and make a very expensive mistake.
A UI designer has done a good job if the person paying them thinks they did a good job, not if they actually followed best practices, unless that's how the work gets approved. A frontend developer has done a good job if their tickets are done and their boss likes them, which may or may not include work that's actually accessible or usable. That's the secret I wish I'd known when I started working; I could have avoided the extra personal cost of trying to produce quality results despite there being no incentive structure for it.
Just like morality and law are not the same, the objective quality of a UI designer's work doesn't necessarily have anything in common with their employer's preferences.
You're right that only one of those is paid well, but that's not what GP was talking about.
> You're right that only one of those is paid well, but that's not what GP was talking about.
I didn't say anything about how much someone is paid, just that it is often a job, and whether you keep a job depends overwhelmingly on whether the person paying you is convinced that you're doing it well, which may or may not relate to the objective merit of the work. It doesn't matter if you're making $150k or $20k; it's not wise to prioritize things that nobody paying you asked for.
The exceptions are, of course, things that don't pay at all, in which case your goal is still probably to do the best job you can under the constraints provided. If those are too tight, things get cut, or you don't sign up for it.
So what if we will? That does not mean we will be users of the products we are designing the UI for at that point. Design for actual disabilities that you can reasonably expect your users to have, such as color blindness, not the full spectrum of the human condition.
That said, I do think products should be as simple and clear as possible for a given level of essential complexity.
Countless apps do not even accommodate the users they actually have, and very obviously don't test accordingly. Even the non-lowest common denominator is far lower than you seem to assume.
If you think that a fancy UI rework or a "please pay our subscription" screen is only confusing to people in nursing homes, you are very wrong. They can be nontrivial obstacles to users who work every day, organize conferences, etc.
>>> users from a nursing home, a panel of users with sub-90 IQ who were in a stressful environment and trying to complete other tasks at the same time, blind users, deaf users, et c.
Or we could just give the product manager, designer, and JS engineer a 5-year-old laptop with 8 gigs of RAM, at least 10 browser plug-ins, and every corporate security package...
We have gone from "move fast and break things" to moving at the speed of stupid. Slowing these three groups down might help.
Make it 4 and have a 1st gen i3 and we got a plan.
Most of my family has old Toshiba Satellite laptops from the early 2010s that they don’t throw away because they cost a grand when they bought them.
> a panel of users with sub-90 IQ who were in a stressful environment and trying to complete other tasks at the same time
So long Vim or Emacs ;)
I understand that your example is somewhat tongue in cheek, serving illustrative purposes, but I think good UX is about trade-offs more than a one-dimensional spectrum of convenient vs. inconvenient, and you can't optimize it for the sake of stressed out sub-90 IQ users without hurting the usability for some other groups.
> a panel of users with sub-90 IQ who were in a stressful environment and trying to complete other tasks at the same time, blind users, deaf users, et c.
For me, the first task would be to make absolutely sure that I block any apps designed by you. Such lack of empathy in your wording proves that you cannot possibly be a decent, half-decent, or even mediocre UI designer.
“Think of how stupid the average person is, and realize half of them are stupider than that.”
— George Carlin
You get that, when I was on the other side of that two-way mirror, one of the qualifications to be a user was "can you use a mouse"? As late as 2005, people being able to navigate a basic web form was quite the challenge.
Find someone far removed from tech and ask them if they have used ChatGPT.
That IQ-90 user stressed out of their mind isn't that far from reality. Go look at the "bad/poor/low quality" content on Facebook, or YouTube, or, if you're brave, TikTok. That is the person you're writing an app for.
I know people with PhDs who can run rings around you on "their topic" and can't cross the street without support.
I know teachers who are smart and phenomenal at their jobs whom I had to arm-twist into playing with ChatGPT, because it's not in their wheelhouse or on their radar.
The not-so-bright person who is stressed out is likely a user of your app, the same as the absent-minded PhD or the teacher who is too overworked to care about your new tech widget.
These are real people. And there are a LOT OF THEM (for the average IQ to be 100, there have to be a fair number of people UNDER it). You can go look on FB/YouTube/TikTok and find them. "Stupid" people exist, and there are a lot of them. Making sure your app works for them is good for them, for your company, for your customer support costs... The whole point of usability is to get average, less technical people in and get them testing your app.
To put a fine and final point on it: after 90 IQ, the author went on to talk about blind people. Candidly, your app working for everyone with a disadvantage is just good business, and shows a LOT of empathy, a point you seem to have missed in your indignation.
The only thing that is wrong is when UI doesn't account for these sorts of user contexts (disengaged, disinterested, distracted, disabled ...)
The ChatGPT example is about highlighting a breadth of experience. The world has stupid people, and smart people who will be baffled by bad UI. It has people who might not have the context that we, the tech community, have around emergent tools.
Many of us live so far inside the tech bubble that we forget how people who do "other things" approach technology. This is the gap that UI design should be bridging (and is doing a horrid job of these days).
"sub-90 IQ", when interpreted literally, means the dumbest 25% of the population. I think it's awfully important to remember not to ignore 1/4 of the population.
You’ve just made a map of rich and poor areas with extra steps.
Which is effective. If you’re a parent trying to decide which schools you want your kids in, maps of where the money is and maps of school rankings are damn near interchangeable (mostly not for funding-related reasons, though). You could use either and come to similar conclusions.
Probably because "Uncredited" signifies the absence in credits sequence(s) in general for crew and staff, while the term "Extra" applies just to actors.
People’d find it outrageous if beanie baby collectors were sacrificing 2% of US food production on altars to their beanie babies, because doing so made the beanie babies more valuable to other collectors. Even if some of the collectors were growing their own food for the purpose, et c, et c—it’d still bother a lot of people. Maybe most people.
Nitpick, but this specific example, despite its place in pop culture, wasn't a case of a lawsuit-happy wacko. McDonald's screwed up, after being warned they were screwing up and failing to correct it, and someone got badly hurt.
Yeah, IPv6 is shut off on my Google Fiber router. Stuff on the Internet breaks when it’s on (last tested three or so months back when they sent me a new router).
When we first got Fiber, years ago, Amazon’s store was one of the things that broke until I turned off IPv6. Wouldn’t load at all, on any device on our network.
Works fine for VPNs and such, but I don’t talk to the Internet with it, because my experience has been it’s terribly unreliable.
I'm curious what breaks with IPv6 on. I've been running IPv6 dual stack for over a decade at home and pretty much never have any issues. I think I've run into a prefix change getting bugged in my router's announcements, which required a reboot, but that's happened maybe twice in the last 10+ years and I'm not 100% sure that was truly the issue. My phone pretty much always has an IPv6 address and pretty much never has IP-related connectivity problems. I'm not using Google Fiber's router, though, so that could be the complication.
The typical behavior is that DNS returns an IPv6 address, then whatever-it-is sits there until a timeout, because it’s simply not being routed. I’ve not investigated further because turning off IPv6 fixes the problem and breaks nothing (that I care about). Anything that only returns an IPv4 address from DNS works either way.
My cellular connection supports IPv6, but testing sites report it’s misconfigured in a bunch of ways. I don’t see problems in practice, though. But on my home network, it’s turned off.
This is what I was thinking of with the prefix issue. I've encountered it maybe twice, and rebooting my router ended up with a different prefix, so it seems more like my router just didn't get the new prefix, kept handing out the old one to everyone via SLAAC, and traffic thus wouldn't get routed right.
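If anyone wants to check whether a specific site is hitting that "AAAA resolves but never connects" failure mode, here's a rough Python sketch assuming a dual-stack client; the hostname, port, and timeout below are just placeholders:

    # Rough sketch: resolve the host over IPv4 and IPv6 separately and try a TCP
    # connection on each, so a broken v6 route shows up as "resolved ... but
    # connect failed" while v4 still connects.
    import socket

    def try_connect(host, port=443, family=socket.AF_INET, timeout=5):
        # Resolve `host` for one address family only.
        try:
            infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        except socket.gaierror as exc:
            return f"no {family.name} address ({exc})"
        addr = infos[0][4][0]
        # Attempt the TCP handshake; an unrouted prefix typically ends in a timeout.
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return f"connected to {addr}"
        except OSError as exc:
            return f"resolved {addr} but connect failed ({exc})"

    if __name__ == "__main__":
        host = "www.example.com"  # placeholder: substitute the site that seems to hang
        print("IPv4:", try_connect(host, family=socket.AF_INET))
        print("IPv6:", try_connect(host, family=socket.AF_INET6))

If the IPv4 line connects and the IPv6 line times out, the problem is the v6 route (or a stale prefix), not DNS.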
The big question for us to be able to figure anything out based on this: what did that chart look like in other years?
Healthcare and government are enormous slices of the economy. I'd expect them to be at or near the top of hiring much of the time. IDK about hospitality.