First thing I thought of when LLMs came out - it's literally been in my head for 2 years.
A lot of price gouging is based on you not knowing the details or the process. With LLMs you can know both.
For most anything from kitchen renovations to A/C installation to car servicing - you can now get an exact idea of the details and process. And you can negotiate on both.
You can also know how much "work" contractors have at this time, which gives you more leverage.
For anything above $1000 in spend, learn about it from your LLM first. My usual questions (a rough prompt sketch follows the list):
1. What are all the steps involved? Break the steps down by cost.
2. What is the demand for this service in my area around this time of the year?
3. Using the above details, how can I negotiate a lower price or find a provider offering this at a discount?
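In practice I paste all three into one prompt. A minimal sketch using the OpenAI Python client - the model name and the job details below are placeholders, not recommendations:

    # Minimal sketch: ask an LLM the three questions about a quoted job.
    # Assumes the openai package is installed and OPENAI_API_KEY is set;
    # the model name and job description are placeholders.
    from openai import OpenAI

    client = OpenAI()

    job = "Replace a 3-ton central A/C unit, quoted at $9,500"

    prompt = f"""I was quoted the following job: {job}
    1. What are all the steps involved? Break the steps down by cost.
    2. What is the demand for this service in my area at this time of year?
    3. Using the above details, how can I negotiate a lower price or find a provider offering a discount?"""

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)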
Completely serious question here: is it still price gouging if they're one of a few players in town?
Information asymmetry is only valuable if you can execute on it. All of your examples are actually examples of both asymmetry and market control. HVAC: there are typically only a few legitimate licensed providers in town, so they can set the price however they want. Car servicing: indie shops are always better, but if you want to maintain your warranty you'll need to use a stealership, which goes by a book (and it's mandatory).
I'm not convinced an LLM can help with these situations. I would suspect you're more likely to get a "screw you" price in return rather than winning a negotiation. When I shopped for a new HVAC after mine gave up the ghost after 20 years, most providers were within a few hundred dollars of each other. An LLM would've been useful here for warnings ("you probably don't need ducting", "you probably don't need duct cleaning") but as for the bulk of the cost there's a monopoly and there ain't nothin you can do about it. When I got my yard worked on it was a similar story. Despite every landscaper providing offers from cheap to absurd, the ones that I could sue if they hit a gas line were all within the same price range.
These people are also very used to the "know-it-all homeowner". They're more likely to ignore you than help you because if you actually knew-it-all you'd do it yourself.
I think, rather, LLMs will be extremely useful in bill negotiation where the data is absolutely clear, you have a copy of it, and it can be analyzed in full (no asymmetry). For example, an LLM could be trained on medical billing codes and be able to analyze your bills for improperly coded procedures (very common).
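A rough sketch of what that could look like even today with plain prompting (no special training) - the model name, bill lines, and dollar amounts are made up, and anything it flags should be checked against the official code references before disputing:

    # Rough sketch: ask an LLM to flag suspicious billing codes on an itemized bill.
    # Hypothetical example - bill data and model name are placeholders.
    from openai import OpenAI

    client = OpenAI()

    bill_lines = [
        {"code": "99285", "description": "ER visit, high complexity", "charge": 2400},
        {"code": "36415", "description": "Blood draw", "charge": 180},
    ]

    prompt = (
        "You are reviewing a US medical bill. For each line item, say whether the "
        "code plausibly matches the description, whether the charge looks unusual, "
        "and what I should ask the billing department:\n"
        + "\n".join(f'{l["code"]}: {l["description"]} - ${l["charge"]}' for l in bill_lines)
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)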
The LLMs help you understand how the pricing works.
E.g. when my shower didn't work I was able to figure out all the steps - and do most of them myself before getting stuck at one particular point because I couldn't physically pull the unit out of its socket.
I was able to negotiate down $150 for that one.
In another instance with gas pipes I was able to find laborers who were good but just didn't have a branded van yet.
In this case the LLM helped me understand that the laborer was damn good at his job, and how to cut the cost by breaking the job into separate pieces.
The whole process is very tactical - you will lose quite a few negotiations before figuring it out. Also it's not useful to just abstract all the jobs as you've done in your post. You've somehow arrived at the EMH, except for service providers - it's just not true.
The way different types of companies force you to pay more is very different. Locksmiths are very different from plumbers, for example.
Also, each service provider has their own way of doing things and their own breaking points.
Also every geography is different. Service providers seem to charge the most to elderly homeowners and people with nice houses in nice areas. So you can definitely use LLMs in those situations to find areas to put ads in to attract better prices.
The best part of all this is how you can apply these negotiation skills to your job search or any other situation. Definitely a long game, like finance or health.
Haven't you always been able to do these same steps?
From books and guides at the library and bookstore, to "This Old House" and "Click and Clack" we have been distributing the knowledge of how to do things for a long time.
The internet just made all of that knowledge much easier to access, with the time/cost/distance dependency being removed.
Have Americans become less capable over time? Or are we just more aware of the portion of the population who simply does not put in the leg work to DIY things?
Maybe a bit of both, with a lean toward those who do not know having the larger voice. As an example, I saw a video yesterday of someone claiming to be a "full on foodie", followed by someone calling an onion "garlic".
Does an LLM really change what COULD have always been done, or just make it more accessible for those of us who do/want to have the tool?
> The internet just made all of that knowledge much easier to access, with the time/cost/distance dependency being removed.
Yes, but I don't know what point this is supposed to make. LLMs lowered certain costs in an extreme way.
You could always have become a plumber in order to negotiate with plumbers. The reason you didn't is because the investment to become a plumber was more than you were likely to get the price lowered (or to save by doing the work yourself), and you would have to anticipate your needs before they came up. The people who did become plumbers set up (or joined) a business and marketed themselves so they were negotiating with a lot of people over a lot of jobs, making the investment worth it.
People who invested the time to learn plumbing traded with other people who also concentrated their investments into a few things (but different things), and together, made civilization.
> Does an LLM really change what COULD have always been done, or just make it more accessible for those of us who do/want to have the tool?
I'm trying to figure out if you were arguing with somebody who said that it was IMPOSSIBLE to learn the things that people clearly know how to do. Changing arguments into existence proofs has always made them easy to refute; I'm not willing to say that it's impossible for pigs to fly, it's just not cost effective. AI has clearly made it cheaper to obtain the knowledge to negotiate with plumbers about a specific plumbing problem that just came up in your life than watching hundreds of hours of This Old House, buying your own tools, and practicing.
I agree with your assessment, it's maybe a bit of both.
The internet has given anyone/everyone a voice, for better or for worse, both widening and shortening the feedback loop. Now LLMs are shortening the loop even more, while unable to distinguish fact from fiction. Given how many humans will regurgitate whatever they read or heard as facts without applying any critical thought, the parallels are interesting.
I suspect that LLMs will affect society in several ways, assisting both the common consumers with whatever query they have at the moment, as well as DIY types looking for more in-depth information. Both are learning events, but even when requesting in-depth info, the LLM still feels like a shortcut. I think the gap between superficial and deep understanding of subjects is likely to get wider in the post-LLM world.
I do have hope for the garbage in, garbage out aspect though. The early/current LLMs were trained on plenty of garbage, but I think it's inevitable that will be improved.
You can't meaningfully negotiate details and processes that weren't designed to be negotiated individually. "My LLM tells me that tapping the walls is 20% of the cost of a mini-split installation, so I'll drill my own holes and you have to charge me 20% less". Not going to happen.
This whole style of negotiation is just going to blow up in the face of most homeowners. The person trying to sell me bullshit can use an LLM to help them sell it even harder and think of the most high quality retorts to whatever my LLM tries to argue against them with.
But regardless, this arms race doesn't happen because the vast majority of people are bad at prompting models, and when you start writing prompts with spelling errors and other grammar issues, your model responds with low-quality, less accurate outputs just to punish you for your lack of attention to detail.
Untrue - there isn't a personal finance solution that doesn't have mistakes. Not one. I've literally tried them all. It's mostly because the account syncing is very, very brittle, and a lot of things like stock plans etc. aren't supported well, so your daily view will always be somewhat off.
There are insights to derive from the data - like how much you really spend. But again it's really hard to get there because the numbers are always off and most people don't actually want to know.
You are severely underestimating the average person's agency with their money.
I use an app but I also have an Excel sheet where I track everything very carefully every week or so.
I trust my Excel sheet much, much more.
Honestly, this is how everyone I know does it. There is one guy who built his own app and his is perfect because he has solved for his specific bank accounts.
He knows every $ coming in and going out - it's pretty impressive.
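For what it's worth, the core of an app like his doesn't need to be fancy. A minimal sketch, assuming your bank can export transactions to a CSV with date and amount columns (the column names and file name here are assumptions):

    # Minimal sketch of a personal money tracker: sum inflows and outflows per month
    # from an exported CSV. Column names ("date", "amount") are assumptions -
    # adjust to whatever your bank's export actually uses.
    import csv
    from collections import defaultdict

    def summarize(path):
        monthly = defaultdict(lambda: {"in": 0.0, "out": 0.0})
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                month = row["date"][:7]              # e.g. "2024-03" for ISO dates
                amount = float(row["amount"])
                if amount >= 0:
                    monthly[month]["in"] += amount
                else:
                    monthly[month]["out"] -= amount  # store outflows as positive
        return monthly

    for month, totals in sorted(summarize("transactions.csv").items()):
        print(f'{month}: in ${totals["in"]:,.2f}, out ${totals["out"]:,.2f}')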
That 80% of medical issues could be categorized as "standard medicine" with some personalization to the person?
In residency you obviously see a lot of real-life complicated cases, but aren't the majority of cases something a non-resident could guide, if not diagnose?
I've seen that LLMs hallucinate in very subtle ways when guiding you through a course of treatment.
Once, when I had to administer eyedrops to a parent and saw redness, I was being conservative and it told me the wrong drop to stop. The doctor saw my parent the next day so it was all fixed, but it did lead to me freaking out.
Doctors behave very differently from how we normal humans behave. They go through testing that not many of us would be able to sit through let alone pass.
And they are taught a multitude of subjects that are so far away from the subjects everyone else learns that we have no way to truly communicate with them.
And this massive chasm is the problem, not that the LLM is the wrong tool.
Thinking probabilistically (mainly Bayesian) and understanding the material from the first two years of med school will help you use an LLM much more effectively for your health.
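To make "thinking probabilistically" concrete, this is the kind of back-of-the-envelope Bayes calculation I mean - the prevalence and test numbers are made up for illustration:

    # Bayes' rule with illustrative (made-up) numbers: how likely is the condition
    # given a positive test, when the condition is rare and the test is imperfect?
    prevalence = 0.01        # P(condition)
    sensitivity = 0.90       # P(positive | condition)
    false_positive = 0.05    # P(positive | no condition)

    p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
    p_condition_given_positive = sensitivity * prevalence / p_positive

    print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
    # ~15.4% - far lower than most people's gut answer, which is the point.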
With LLMs the ability to create your own experience whilst using services as "databases" is really possible.
My crazy thought is that this is where the internet will go.
One of the big problems with SaaS is that the apps are tuned to increase company profit - not user agency.
All that will flip. Agents are the first barrage in that direction but the movement is only just starting.
One barrier is configuration (code at scale) - i.e. a way to communicate exactly what you want.
Once we have that, we compile from our needs (configuration) to an app that's exactly what we want, backed by our accounts on Amazon, Uber, Google, and OpenAI as the databases and processing.
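Purely speculative, but this is the flavor of "configuration" I'm imagining - every field and service hook below is hypothetical, invented for illustration:

    # Purely hypothetical sketch of "needs as configuration" that some future
    # compiler could turn into a personal app. None of these service hooks exist.
    my_app = {
        "goal": "weekly groceries under $120, delivered Sunday morning",
        "data_sources": ["amazon_orders", "google_calendar"],   # services as "databases"
        "actions": ["place_order", "notify_me_before_checkout"],
        "constraints": {"max_spend_usd": 120, "require_confirmation": True},
    }

    # A future "compiler" would take this and produce the app; today you'd hand it
    # to an agent as a prompt and supervise the result.
    print(my_app["goal"])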
This is also why starting your own thing will give you better experience at being a VP than working at a 9-5.
(But it’s also harder than being a VP.)
And if you’re going back to a 9-5 after an unsuccessful startup stint, look for VP- or GM-like roles. Anything else won’t have enough degrees of freedom to keep you happy.
I'm old enough to remember when they first said that about the Internet. We were going to enter a new enlightened age of information, giving everyone access to the sum total of human knowledge, no need to get a fancy degree, universities will be obsolete, expertise will be democratized.... See how that turned out.
The motivated will excel even further, for the less motivated nothing will change. The gap is just going to increase between high-agency individuals and everyone else.
Is it really that much worse today? When I was a kid, my great aunt died of skin cancer. She was a Christian Scientist and rejected medical treatment in favour of prayer.
As a teenager, I remember being annoyed that the newspapers had positive articles on the rejuvenating properties of nonsense like cupping and reiki. At least a few of my friends' parents had healing crystals.
People have always believed in whatever nonsense they want to believe.
It feels like the friction to get un-science has been completely removed now. Before, you had to wait for lagged content and physically fetch it somehow. Now you can have it in the palm of your hand 24/7, with the bonus that the content is designed to enrage you and get you sucked in.
LLMs and the internet both make it easier for us to access more information, which also means we can reach dumber conclusions quicker. It does go both ways though.
In practice, not so much. Not in my experience. I have a drive littered with failed AI projects.
And by that I mean projects where I have diligently tried to work with the AI (ChatGPT, mostly in my case) to get something accomplished, and after hours over days of work, the projects don’t work. I shelve them and treat them like cryogenic heads. “Sometime in the future I’ll try again.”
It’s most successful with “stuff I don’t want to RTFM over”. How to git. How to curl. A working example for a library more specific to my needs.
But higher than that, no, I’ve not had success with it.
It’s also nice as a general purpose wizard code generator. But that’s just rote work.
It's true that once you have learned enough to tell the LLM exactly what answer you want, it can repeat it back to you verbatim. The question is how far short of that you should stop because the LLM is no longer an efficient way to make progress.
From a knowledge standpoint an LLM can give you pointers at any point.
There's no way it will "fall short".
You just have to improve your prompt. In the worst-case scenario you can say "please list out all the different research angles I should proceed from here and which of these might most likely yield a useful result for me".
My skepticism flares up with sentences like "There's no way it will 'fall short'." Especially in the face of so many first-hand examples of LLMs being wrong, getting stuck, or falling short.
I feel actively annoyed by the amount of public gaslighting I see about AI. It may get there in the future, but there is nothing more frustrating than seeing utter bullshit being spouted as truth.
First, rote work is the kind I hate most and so having AI do it is a huge win. It’s also really good for finding bugs, albeit with guidance. It follows complicated logic like a boss.
Maybe you are running into the problem I did early on. I told it what I wanted. Now I tell it what I want done. I use Claude Code and have it do things one at a time and, for each, I tell it the goal and then the steps I want it to take. I treat it as if it were a high-level programming language. Since I've become more procedural with it, I get pretty good results.
They seem pretty good with human language learning. I used ChatGPT to practice reading and writing responses in French. After a few weeks I felt pretty comfortable reading a lot of common written French. My grammar is awful but that was never my goal.
I spent a couple weekends trying to reimplement Microsoft's inferencing for Phi-4 multimodal in Rust. I had zero experience messing with ONNX before. Claude produced a believably good first pass but it ended up being too much work in the end and I've put it down for the moment.
I spent a lot of time fixing Claude's misunderstanding of the `ort` library, mainly because of Claude's knowledge cutoff. In the end, the draft just wasn't complete enough to get working without diving in really deep. I also kind of learned that ONNX probably isn't the best way to approach these things anymore. Most of the mindshare is around the Python code and torch APIs.
Do NOT work only your contract hours. It might seem like the logical choice but it rarely is, for the following reasons:
1. You cannot build strong skills working just contract hours.
2. You cannot market your achievements by working only 40 hours. And in turn this makes you more dispensable and more disposable.
3. You can't control your work which is probably the most important element of all this.
4. You can't search for great positions with the new skills you've gained if you only work 40 hours.
Work in such a way that you get recognition for the hours worked.
Do important projects, not crappy side projects and MARKET THE HELL out of your work. Everyone should know what you are doing.
Every day at a job is a campaign to increase your salary massively - either at the job or somewhere else. (Btw, this is how most people in NYC think, though they may not admit it.)
You have to do impressive things and then use 3x your time marketing them to everyone else in the company. Everything else - money and promotions will follow.
(Process won't be pretty - but you won't be floundering nearly as much as others who don't take this advice.)