
I have yet to see a chat agent deployed that is more popular than tailored browsing methods. The most charitable explanation is that the tailored browsing methods already in place are the results of years of careful design and battle testing, and that the chat agent provides most of the value a tailored browsing method would, but without any of the investment required to bring a traditional UX to fruition. That may be the case, and if it is, it would be fair to allow chat agents the same time to be refined and improved. I am skeptical that this is the only difference, though. I think chatbots are a way to, essentially, outsource the difficult work of locating data within a corpus onto the user, and users will always be at a disadvantage compared to the (hopefully) subject-matter experts building the system.

So perhaps chatbots are an excellent method for building out a prototype in a new field while you collect usage statistics to build a more refined UX - but it is bizarre that so many businesses seem to be discarding battle-tested UXes for chatbots.



agree.

Thing is, for those who paid attention to the last chatbot hype cycle, we already knew this. Look at how Google Assistant was portrayed back in 2016. People thought you'd be buying Starbucks via the chat. Turns out the Starbucks app has a better UX.


Yeah, I don't want to sit at my computer, which can handle lots of different input methods (keyboard, mouse, clicking, dragging), or my phone, which can handle gestures, pinching, swiping... and try to articulate what I need it to do in English-language conversation. This is actually a step backwards in human-computer interaction. To use an extreme example: imagine if, instead of a knob on my stereo for volume, I had a chat box where I had to type in "Volume up to 35". Most other "chatbot solved" HCI problems are just like this volume-control example, but less extreme.


It's funny, because the chat bot designers seem to be continually attempting to recreate the voice computer interface from Star Trek: TNG. Yet if you watch the show carefully, the vast majority of the work done by all the Enterprise crew is done via touchscreens, not voice.

The only reason for the voice interface is to facilitate the production of a TV show. By having the characters speak their requests aloud to the computer as voice commands, the show bypasses all the issues of building visual effects for computer screens and making those visuals easy to interpret for the audience, regardless of their computing background. However, whenever the show wants to demonstrate a character with a high level of computer mastery, the demonstration is almost always via the touchscreen (this is most often seen with Data), not the voice interface.

TNG had issues like this figured out years ago, yet people continue to fall into the same trap because they repeatedly fail to learn the lessons the show had to teach.


It's actually hilarious to think of a scene where all the people on the bridge are shouting over each other trying to get the ship to do anything at all.

Maybe this is how we all get our own offices again and the open floor plan dies.


Hmm. Maybe something useful will come of this after all!

"...and that is why we need the resources. Newline, end document. Hey, guys, I just got done with my 60 page report, and need-"

"SELECT ALL, DELETE, SAVE DOCUMENT, FLUSH UNDO, PURGE VERSION HISTORY, CLOSE WINDOW."

Here's hoping this at least gets us back to cubes.


Getting our own offices would simply take collective action, and we're far too smart to join a union, err, software developers association to do that.


They’d just have an array of microphones everywhere and isolate each voice - rooms only need n+1 microphones where n is the maximum number of people. That’s already simple to do today, and it’s not even that expensive.
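If anyone wants to play with the idea: here's a toy blind-source-separation sketch using FastICA from scikit-learn, one classic approach. Everything here (the signals, the mixing matrix, the 3-speakers/4-mics setup) is invented for illustration, and a real product would layer beamforming, echo cancellation, and speaker models on top.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Toy cocktail party: 3 "speakers", 4 microphones (the n+1 above).
    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 8000)
    sources = np.c_[np.sin(2 * t),             # speaker 1
                    np.sign(np.sin(3 * t)),    # speaker 2
                    rng.laplace(size=t.size)]  # speaker 3
    mixing = rng.uniform(0.5, 1.5, size=(4, 3))  # made-up room acoustics
    mics = sources @ mixing.T                    # what the 4 mics record

    ica = FastICA(n_components=3, random_state=0)
    voices = ica.fit_transform(mics)  # one column per recovered voice
    print(voices.shape)               # (8000, 3)

With more microphones than speakers, the mixing is overdetermined, which is roughly why the "n+1" helps.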

Profound observation, thank you for this.

Remember Alexa? Amazon kept wanting people to buy things with their voice via assorted Echo devices, but it turns out people really want to actually be in charge of what their computers are doing, rather than talking out loud and hoping for the best.


“volume up to 35”

>changes bass to +4 because the unit doesn't do half increments

“No volume up to 35, do not touch the EQ”

>adjusts volume to 4 because the unit doesn’t do half increments

> I reach over, grab my remote, and do it myself

We have a grandparent who really depends on their Alexa, and let me tell you, repeatedly going "hey Alexa, volume down. Hey Alexa, volume down. Hey Alexa, volume down" gets really old, lol. We just walk over and start using the touch interface.


It's also a matter of incentives. Starbucks wants you in their app instead of as a widget in somebody else's - it lets them tell you about new products, cross-sell/up-sell, create habits, etc.

This general concept (embedding third parties as widgets in a larger product) has been tried many times before. Google themselves have done this - by my count - at least three separate times (Search, Maps, and Assistant).

None have been successful in large part because the third party being integrated benefits only marginally from such an integration. The amount of additional traffic these integrations drive generally isn't seen as being worth the loss of UX control and the intermediation in the customer relationship.


Current LLMs are way better at understanding language than the old voice assistants.

Omg thank you guys. It felt so obvious to me, but nobody was talking about it.

A dedicated UX is better, and a separate app or website feels like exactly the right separation.

Booking flights => browser => Skyscanner => type destination => evaluate options, with AI suggestions on top and a UX to fine-tune if I have out-of-the-ordinary wishes (don't want to get up so early).

I can't imagine a human or an AI being better than this specialized UX.


Hard disagree.

At least in my domains, the "battle-tested" UX is a direct replication of underlying data structures and database tables.

What chat gives you access to is an unstructured input that a clever coder can then sufficiently structure to create a vector database query.

Natural language turns out to be a far more flexible and nuanced interface than walls of checkboxes.
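To make that concrete, here's a minimal sketch of the pattern. The bag-of-words embed() is a stand-in for a real embedding model, and the catalog is invented; an actual system would swap in a proper model and a vector database (FAISS, pgvector, etc.) instead of brute-force cosine ranking.

    import numpy as np

    # Toy vocabulary and bag-of-words "embedding": stand-ins for a
    # real sentence-embedding model.
    VOCAB = ["table", "desk", "chair", "black", "white", "oak", "large"]

    def embed(text: str) -> np.ndarray:
        words = text.lower().split()
        return np.array([words.count(w) for w in VOCAB], dtype=float)

    # Invented catalog; a vector DB would hold these embeddings.
    CATALOG = {
        "black oak table, 100x200cm": embed("black oak table large"),
        "white compact desk":         embed("white desk"),
        "black office chair":         embed("black chair"),
    }

    def search(query: str, k: int = 2) -> list[str]:
        q = embed(query)
        def cosine(v: np.ndarray) -> float:
            denom = (np.linalg.norm(q) * np.linalg.norm(v)) or 1.0
            return float(q @ v) / denom
        return sorted(CATALOG, key=lambda name: -cosine(CATALOG[name]))[:k]

    print(search("I want a large black table"))  # table ranks first

The point is that the unstructured sentence never has to match a checkbox; it just has to land near the right items in embedding space.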


> I have yet to see a chat agent deployed that is more popular than tailored browsing methods.

Not an agent, but I've seen people choose doctors by asking ChatGPT for criteria, and they did make those appointments. Saved them countless web interfaces to dig through.

ChatGPT saved me so much money by searching for discount coupons on courses.

It even offered free-entrance passwords for events I didn't know had such a thing (I asked it where the event was, and it also told me the free-entrance password it found on some obscure site).

I've seen doctors use ChatGPT to generate medical letters - ChatGPT used some Python code for the letters, and the doctors loved the result.

I've used ChatGPT to trim an energy bill to 10 pages, because my current provider generated a 12-page bill in an attempt to prevent me from switching (they knew the other provider did not accept bills of more than 10 pages).

Combined with how incredibly good Codex is, and how easily ChatGPT can create throwaway one-time apps, there's no way the whole agent interface doesn't eat a huge chunk of the traditional UX software we're used to.


> the tailored browsing methods already in place are the results of years of careful design and battle testing

Have you ever worked in a corporation? Do you really think the Windows 8 UI was the fruit of years of careful design? What about Workday?

> but it is bizarre that so many businesses seem to be discarding battle-tested UXes for chatbots

Not really. If the chatbot is smart enough, then the chatbot is the more natural interface. I've seen people who prefer to say "hey Siri, set an alarm for 10 AM" rather than use the UI. Which makes sense, because language is the thing people have literally evolved specialized organs for. If anything, language is the "battle-tested UX", and the other stuff is a temporary fad.

Of course the problem is that most chatbots aren't smart. But this is a purely technical problem that can be solved within the foreseeable future.


> I've seen people who prefer to say "hey Siri, set an alarm for 10 AM" rather than use the UI.

It's quicker that way. Other things, such as zooming in on an image, are quicker with a GUI. Blade Runner makes clear how poor a voice UI is for this compared to a GUI.


With an alarm, there is only one parameter to set. For more complex tasks, chat is a bad UI because it does not scale well and does not offer good ways to arrange information. E.g. if I want to buy something and I have a bunch of constraints, I would rather use a search-based UI where I can quickly tweak those constraints and decide. ChatGPT being smart or not is irrelevant here; it would just be a bad UI for the task.


You're thinking in the wrong categories. Suppose you want to buy a table. You could say "I'm looking for a €400 100x200cm table, black" and those are your search criteria. But that's not what you actually want. What you actually want is a table that fits your use case, looks nice, and doesn't cost much; "€400 100x200cm table, black" is a discrete approximation of your initial fuzzy search. A chatbot could talk to you about what you want and suggest a relevant product.

Imagine going to a shop and browsing all the aisles vs. talking to a store employee. A chatbot is like the latter, but for a webshop.

Not to mention that most webshops have their categories completely disorganized, making "search by constraints" impossible.
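A sketch of what that store-employee flow could look like under the hood, assuming the model is asked to emit structured filters. fake_llm() returns a canned response standing in for a real model call, and the filter schema is made up for the example:

    import json

    def fake_llm(prompt: str) -> str:
        # Stand-in for a real LLM call; returns a canned extraction.
        return ('{"category": "table", "max_price_eur": 400, '
                '"width_cm": 100, "length_cm": 200, "color": "black"}')

    def extract_filters(user_message: str) -> dict:
        prompt = ("Extract shopping filters as JSON with keys category, "
                  "max_price_eur, width_cm, length_cm, color:\n"
                  + user_message)
        return json.loads(fake_llm(prompt))

    filters = extract_filters("Looking for a black table, about 100x200, "
                              "budget around 400 euros")
    print(filters["max_price_eur"])  # 400 -> feed the regular search API

The chat handles the fuzzy conversation; the extracted filters still drive a perfectly ordinary, well-organized search backend.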


Funny, I almost always don't want to talk to store employees about what I want. I want to browse their stock and decide for myself. This is especially true for anything that I have even a bit of knowledge about.


The thing is that "€400 100x200cm table, black" is just much faster to input and validate versus a salesperson, be it a chatbot or an actual person.

Also, the chatbot is just not going to have enough context, at least not in its current state. Why those measurements? Because that's how much room you have; you measured. Why black? Because your couch is black too (bad choice), and you're trying to do a theme.

That's kind of a lot to explain.


Even when going to a shop, I prefer to look into the options myself first. Explaining to a salesperson what I need can take much more time, and then I am never sure whether they're just trying to upsell me, whether I explained my use case well, etc. The only case where I opt for a salesperson first is when I cannot translate my use case into a specification because of the degree of technical or other knowledge required. I can imagine, e.g., somebody who knows nothing about computers asking "I want a laptop with a good battery, I would use it for this and that", the same way they would ask a salesperson or a technical friend. But I cannot imagine using an LLM to look for a table that needs to fit my measurements, or anything else where the product knowledge is accessible. If I know the specifications, opting for an AI chatbot is inefficient. If not, it could help.

> I've seen people who prefer to say "hey Siri, set an alarm for 10 AM" rather than use the UI. Which makes sense, because language is the thing people have literally evolved specialized organs for.

I don't think it's necessary to resort to evolutionary-biology explanations for that.

When I use voice to set my alarm, it's usually because my phone isn't in my hand. Maybe it's across the room from me. And speaking to it is more efficient than walking over to it, picking it up, and navigating to the alarm-setting UI. A voice command is a more streamlined UI for that specific task than a GUI is.

I don't think that example says much about chatbots, really, because the value is mostly the hands-free aspect, not the speak-it-in-English aspect.


Even when my phone is in my hand I'll use voice for a number of commands, because it's faster.


I'd love to know the kind of phone you're using where the voice commands are faster than touchscreen navigation.

Most practical day-to-day tasks on the Androids I've used are 5-10 taps away from the lock screen, and those taps draw far fewer dirty looks from the people around me.


My favorite voice command is to set a timer.

If I use the touchscreen I have to:

1. Unlock the phone - easy, but takes an active swipe

2. Go to the clock app - I might not have been on the home screen, so maybe a swipe or two to get there

3. Set the timer to what I want - and here it COMPLETELY falls down, since it's probably showing how long the last timer I set was, and if that's not what I want, I have to fiddle with it.

If I do it with my voice I don't even have to look away from what I'm currently doing. AND I can say "90 seconds" or "10 minutes" or "3 hours" or even (at least on an iPhone) "set a timer for 3PM" and it will set it to what I say without me having to select numbers on a touchscreen.

And 95% of the time there's nobody around who's gonna give me a dirty look for it.
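For fun, the mapping step behind that flexibility might look something like this toy parser. Purely illustrative; this is not Siri's actual implementation, and real assistants handle far messier phrasing (plus the speech recognition itself).

    import re

    # Seconds per unit, for the phrases mentioned above.
    UNITS = {"second": 1, "minute": 60, "hour": 3600}

    def parse_duration(phrase: str) -> int:
        m = re.match(r"\s*(\d+)\s*(second|minute|hour)s?\b", phrase.lower())
        if not m:
            raise ValueError(f"can't parse: {phrase!r}")
        return int(m.group(1)) * UNITS[m.group(2)]

    for p in ["90 seconds", "10 minutes", "3 hours"]:
        print(p, "->", parse_duration(p), "seconds")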


And less mental overhead. Go to the home screen, find the clock app, go to the alarm tab, set the time, set the label, turn it on, get annoyed by the number of old alarms I should delete so there aren't a million of them. Or just ask Siri to do it.


One thing people forget is that if you do it by hand, you can do it even when people are listening, or when it's loud. Meaning it works more reliably. And in your brain you only have to store one way of doing it instead of two. So I usually prefer the more reliable approach.

I don't know anyone who uses Siri except people with really bad eyesight.


God, I miss physical buttons and controls - being able to do something without even looking at it.

> Not really. If the chatbot is smart enough, then the chatbot is the more natural interface. I've seen people who prefer to say "hey Siri, set an alarm for 10 AM" rather than use the UI. Which makes sense, because language is the thing people have literally evolved specialized organs for. If anything, language is the "battle-tested UX", and the other stuff is a temporary fad.

I do that all the time with Siri for setting alarms and timers. Certain things have extremely simple speech interfaces. And we've already found a ton of them over the last decade+. If it was useful to use speech for ordering an Uber, it would've been worth it for me to learn the specific syntax Alexa wanted.

Do I want to talk to a chatbot to get a detailed table of potential flight and hotel options? Hell no. It doesn't matter how smart it is, I want to see them on a map and be able to hover, click into them, etc. Speech would be slow and awful for that.


An alarm is a good example of an "output only" task. The more inputs that need to be processed, the less well a pure chatbot interface works (think lunch-bowl menus, shopping in general, etc.).

> Of course the problem is that most chatbots aren't smart. But this is a purely technical problem that can be solved within the foreseeable future.

Ah yes, it's just a small detail. Don't worry about it.


I'm sure some very smart chatbots are working on it.


I don't understand how a website for tech people turned into a boomerland of people who pride themselves on not using technology. It's like those people who refuse to use computers because they prefer doing everything the old-fashioned way, and they insist on society following them.


Maybe you can have discussions with a chatbot instead. They always agree with you.


