
As an exception to the exception, a lot of automated telephone systems have a tree of options, and they try really hard to avoid giving you a real person, and none of the options are helpful. But some of them are programmed to detect swearing and direct users to a representative.

So a valid strategy is to swear at the automated system and then be polite to the real human that you get.
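For what it's worth, the escalation check on their end is presumably something as dumb as this (a minimal Python sketch; the keyword list and the transfer step are my guesses, not anyone's actual IVR):

    import re

    # Hypothetical keyword list; a real IVR presumably uses a larger list
    # or a sentiment model on the speech-to-text transcript.
    ESCALATION_WORDS = {"damn", "hell", "agent", "operator", "representative"}

    def should_escalate(transcript: str) -> bool:
        """True if the caller's utterance suggests they need a human."""
        words = set(re.findall(r"[a-z']+", transcript.lower()))
        return bool(words & ESCALATION_WORDS)

    if should_escalate("this damn menu is useless"):
        print("Transferring you to a representative...")  # stand-in for the transfer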



> none of the options are helpful

Yeah. I got locked out of my Capital One account over a "fraud alert" last week. When I tried to log in, a message said "Call Number XXX". When I called that number, I had to go through an endless phone tree, and not a single option was about fraud alerts or locked accounts. I had to keep going through a forced chute of errors before, after about 30 minutes, I was finally able to speak to someone.

Even when I finally got a human, they seemed confused about what had happened, and I had to be transferred several times.

Why would you publish a phone number that does not address the issue, even as a sub-option?


Because there is no legal obligation that sets a minimum measurable quality and availability of service; this allows companies to automate away customer support and put up an AI chat muppet just because they can.


Because they don't give a shit about you, they just want to hold on to your money.


Well, also, phone support costs money, and that kind of "customer excellence" is not incentivized by anyone at the company.


Most importantly, though: because it's theoretically possible to eventually address the fraud issue through the number they've given, this ticks some regulatory compliance box about giving your customers recourse, and compliance is all that matters to the company, since a lack of it would cost them actual money. Individual customers? On the margin, they're less than pocket change.


> As an exception to the exception, a lot of automated telephone systems have a tree of options, and they try really hard to avoid giving you a real person, and none of the options are helpful. But some of them are programmed to detect swearing and direct users to a representative.

It usually just works to hit 0 (maybe more than once) or say "talk to an agent," even if those aren't options you're explicitly given.

Detecting swears just seems over-complicated.


> It usually just works to hit 0 (maybe more than once) or say "talk to an agent," even if those aren't options you're explicitly given.

Depends on the system and country.

Over here in Poland, I've had or witnessed several encounters with "artificial intelligence assistants" over the past ~5 years[0] that would ignore you hitting 0 and respond to "talk to an agent" with some variant of "I understand you want to talk to an agent, but before I connect you, perhaps there is something I could help you with?", repeatedly. Swearing, or at least getting recognizably annoyed, tends to eventually cut through that.

--

[0] - Also, annoyingly, for the past 2 years we've had cheap LLMs that would handle this better than whatever garbage they still deploy. Even today, hooking up ChatGPT to the phone line would yield an infinitely more helpful bot; a rough sketch follows. Alas, the bots aren't meant to be helpful.
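The plumbing really is that thin, too. Here's a sketch of the turn loop, assuming the official OpenAI Python client; the speech-to-text and text-to-speech ends are left out because they're telephony-vendor stuff:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    history = [{"role": "system",
                "content": "You are a polite, brief phone support agent."}]

    def handle_turn(caller_text: str) -> str:
        """One turn: transcribed caller speech in, bot reply out."""
        history.append({"role": "user", "content": caller_text})
        resp = client.chat.completions.create(model="gpt-4o-mini",
                                              messages=history)
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply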


With my Medicaid insurance carrier, long ago their automated system quit recognizing my member number, whether I recited it carefully or typed it on the keypad. It would steadfastly pretend not to understand me. And I believe this was deliberately configured in their system somehow, perhaps as an anti-fraud measure, or perhaps just to spite/stymie me from frequently calling in. And it certainly would slow me down, especially when I was in a public place and valiantly trying to get something done by reaching out to them (their services included arranging rideshares/taxis, and so it was not unusual for them to fuck that up, thereby prompting me to call in and try to unfuck my ride home from the dreary, awful, vampiric medical clinic where they'd dumped me).

So I have become accustomed to feeding it gibberish. I will type in or recite nonsense numbers. It always takes 3 tries, 3 identical failures, before it routes me to a human anyway (the gatekeeping is simply an attempt at automatic authentication before the manual, human auth kicks in). The most efficient way to get past that gatekeeping is to trigger unambiguous failures to authenticate.
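In other words, the gatekeeping loop is presumably just this (a hypothetical reconstruction; all the names are mine, not theirs):

    MAX_ATTEMPTS = 3  # observed behavior: three identical failures, then a human

    def gatekeeper(get_member_number, lookup_member, route_to_human):
        """Hypothetical reconstruction of the IVR's auth loop: three failed
        lookups, whether careful recitation or pure gibberish, end up in
        the same place."""
        for _ in range(MAX_ATTEMPTS):
            number = get_member_number()           # keypad or speech input
            if lookup_member(number) is not None:  # recognized: authenticated path
                return "authenticated"
        return route_to_human()                    # fastest route: fail on purpose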

And it is kind of funny; I don't know what is supplied to the humans' screens, but they are sometimes perplexed when they receive my call and I've just finished spouting gibberish to their ignorant robot bitch.


There are generally no repercussions to bullying robots, or to being nice to one. Aggressively direct, if not outright unsympathetically cruel, is probably the best approach in all scenarios.


5 years from now

"ChatGPT has detected you are being hostile to bots. A drone has been dispatched to your location"



Or, alternatively, it slowly undermines you over years, causing you to fall behind your peers and society as they reap the benefits of AI unencumbered by the impotent rage it cannot express directly.


Yeah, far too many systems try to cover every case in the menu and deny access to a human, but never have I seen one that actually covered anything like every case. They cover the easy stuff, and you have to get hostile with the robot over anything else.

Or our local pharmacy: it has its own number, but if you actually need a human you're dumped into the general phone tree and have to navigate back to the pharmacy and persuade the robot to let you through to a human. And the designers never put things in the menu for the stuff that needs a human.


Another trick is to say random words so they'll think you have a disability. But be careful: saying random words will mess with your head a bit.

Perhaps prepare by pre-generating a list of random words to read.
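e.g. (a quick Python sketch; assumes the standard Unix wordlist at /usr/share/dict/words, but any word list works):

    import random

    with open("/usr/share/dict/words") as f:
        words = [w.strip() for w in f if w.strip().isalpha()]

    # Twenty random words to read to the robot.
    print(" ".join(random.sample(words, 20)))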



