I've led a change like that - the very core of our data model was compromised from the early days of our company and we knew it... and knew it... and four years into working there I started a serious effort that ended up taking about a year and a half to pay off. These efforts always need a lot of careful planning, and you usually want to work within the constraints of early model decisions as much as possible, but it is quite possible to transition gracefully. When you're doing something like this it's important to be extremely greedy with SMEs' time, to understand as much as you can about the field and future-proof your new solution - our company did that once; there's not a chance it'd do it twice.
I did it for my own startup. Messed up the whole "how do we break down what constitutes a tenant" thing in the initial design at 0 customers. Made me really feel the whole "experience is reading your own code from 5 years ago and wondering what idiot wrote that" thing.
Worked out OK in the end, but took substantial effort to fix.
Someone may correct me but - in three levels of conciseness...
A monad is a wrapper for a computation that can be chained with other such computations.
It's a closure (a functor, to the cool kids) that can be bound and arranged into a more complex composite closure without specifying any actual value to operate on.
It's a lazy operation declaration that can operate over a class of types rather than a specific type (though a single type is just a class of types with one member, so this is more a note on potential than necessary utility). In languages like Haskell, these declarations can be composed and manipulated to easily create large declarative blocks of code that are very easy to understand and lend themselves to abstract proofs about execution.
You've probably used them or a pattern like them in your code without realizing it.
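To make that concrete, here's a minimal sketch in Java rather than Haskell (hedging: `MonadSketch`, `parse`, and `reciprocalPercent` are made-up names for illustration). `Optional` behaves monad-ishly via `flatMap`: you chain functions that return more `Optional`s without ever naming a concrete value up front.

```java
import java.util.Optional;

public class MonadSketch {
    // Each step returns a wrapped value instead of operating on a raw one.
    static Optional<Integer> parse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    static Optional<Integer> reciprocalPercent(int n) {
        // Guard against division by zero by returning the "failure" shape.
        return n == 0 ? Optional.empty() : Optional.of(100 / n);
    }

    public static void main(String[] args) {
        // flatMap is the "bind": it composes the steps, and an empty result
        // anywhere short-circuits the rest of the chain.
        System.out.println(parse("25").flatMap(MonadSketch::reciprocalPercent));   // Optional[4]
        System.out.println(parse("zero").flatMap(MonadSketch::reciprocalPercent)); // Optional.empty
    }
}
```

The composite pipeline is declared independently of any particular input, which is the "arranged without specifying any actual value" property described above.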
Like a lot of enterprise software companies, their customers aren't their end users, so a lot of their incentives aren't actually aligned with producing a good UX.
As someone who works enthusiastically on new, groundbreaking technology... a dumb idea is a dumb idea - and when we pour this many resources into a dumb idea, it prevents us from funding good ideas.
This tunnel has been used as a continued excuse for NIMBYism to block an extension of the monorail, which is the actual solution Las Vegas needs. In fact, the entire Boring Company appears, in hindsight, to have been an effort dedicated solely to derailing SoCal high-speed rail efforts.
I'm very happy to see the article covering the high labor cost of reviewing code. This may just be my neurodivergent self, but I find code in the specific style I write much easier to quickly verify. There are habits and customs (very functional-leaning) I have around how I approach specific tasks, so I can wave through a certain style of function with "let me just double-check that I wrote that in the normal manner later" and continue reviewing a top-level piece of logic, rather than needing to dive into sub-calls to check for errant side effects or other sneakiness I need to be on the lookout for in peer reviews.
When working with peers I'll pick up on those habits and others and slowly gain a similar level of trust, but with agents the styles and approaches have been quite unpredictable and varied. That's probably fair, given that different units of logic may be easier to express in different forms, but it breaks my review habits: I normally keep the developer in mind and can watch for the specific faulty patterns I know they tend to fall into while building up trust around their strengths. When reviewing agent-generated code I can trust nothing and have to verify every assumption, and that introduces a massive overhead.
My case may sound a bit extreme, but I've observed similar habits in others when it comes to reviewing a new coworker's code. The first few reviews of a new colleague should always be done with the utmost care to ensure proper usage of any internal tooling and adherence to style, and also as a fallback in case the interview was misleading. Over time you build up trust and can focus more on known complications of the particular task, or areas of logic they tend to struggle with, while trusting their common code more. When it comes to agentically generated code, every review feels like interacting with a brand-new coworker, and I need to stay vigilant about sneaky stuff.
I have similar OCD behaviors which make reviewing difficult (regardless of AI or coworker code).
specifically:
* Excessive indentation / conditional control flow
* Overly verbose error handling, e.g. catching every exception and wrapping it.
* Absence of typing AND precise documentation, i.e. stringly-typed / dictly-typed stuff.
* Hacky stuff, e.g. using a regex where an actual parser from the stdlib could've been used.
* Excessive ad-hoc mocking in tests, instead of setting up proper mock objects.
To my irritation, AI does these things.
In addition, it can assume it's writing some throwaway script and leave comments like:
// In production code handle this error properly
log.printf(......)
I try to follow two things to alleviate this.
* Keep a `conventions.md` file in the context that warns about all these things.
* Write and polish the spec in a markdown file before giving it to the LLM.
If I can specify the object model (e.g. define a class XYZController, which contains the methods that validate and forward to the underlying service), it helps keep the code the way I want. Otherwise, the LLM can be susceptible to "tutorializing" the code.
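For illustration, a hedged sketch of the kind of skeleton I'd pin down in the spec file before handing it over - every name besides `XYZController` is hypothetical, and the real spec is prose plus this shape rather than finished code:

```java
// Spec-level sketch: the controller only validates and forwards.
record XYZRequest(String payload) {}
record XYZResponse(String body) {}

interface XYZService {
    XYZResponse process(XYZRequest request);
}

class XYZController {
    private final XYZService service;

    XYZController(XYZService service) {
        this.service = service;
    }

    // Validate, then forward to the underlying service - no business logic here.
    XYZResponse handle(XYZRequest request) {
        if (request == null || request.payload() == null) {
            throw new IllegalArgumentException("request payload is required");
        }
        return service.process(request);
    }

    public static void main(String[] args) {
        // Stub service, just to show the validate-and-forward shape end to end.
        XYZController controller = new XYZController(r -> new XYZResponse("ok: " + r.payload()));
        System.out.println(controller.handle(new XYZRequest("ping")).body()); // ok: ping
    }
}
```

Pinning the shape down like this leaves the LLM to fill in bodies rather than invent structure.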
Our company introduced Q into our review process and it is insane how aggressive Q is about introducing completely inane try/catch blocks - often swallowing exceptions in a manner that prevents their proper logging. I can understand wanting to be explicit about exception bubbling and requiring patterns like `try { ... } catch (SpecificException e) { throw e; }` to force awareness of what exceptions may be bubbling up past the current level, but Q often just suggests catch blocks of `{ print e.message; }`, which has never been a preferred approach anywhere I have worked.
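For contrast, roughly the two shapes side by side (a hedged sketch - `processRecord` and `SpecificException` are stand-in names here, not Q's actual output):

```java
class ExceptionShapes {
    static class SpecificException extends RuntimeException {
        SpecificException(String msg) { super(msg); }
    }

    static void processRecord(String record) {
        if (record.isEmpty()) throw new SpecificException("empty record");
    }

    // The shape Q keeps suggesting: the exception is swallowed, so callers
    // never see the failure and it never reaches the real logging pipeline.
    static void swallowed(String record) {
        try {
            processRecord(record);
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }

    // The explicit-bubbling shape: the catch documents what may pass this
    // level without hiding anything from the caller.
    static void explicit(String record) {
        try {
            processRecord(record);
        } catch (SpecificException e) {
            throw e;
        }
    }

    public static void main(String[] args) {
        swallowed(""); // prints "empty record" and carries on as if nothing happened
        explicit("");  // actually surfaces the failure to the caller
    }
}
```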
Q in particular is pretty silly about exceptions in general - it's nice to hear this isn't just us experiencing that!
In the modern corporate world, leadership has entirely insulated itself from customer feedback - if it were plausible to voice your opinion through more appropriate channels I'd advocate for that, but many companies have purposefully shut those channels down.
What better option is there to pass along that message than modestly increasing retraining costs for that position?
I treat service workers with respect, personally, but I am struggling to see what other avenues of communication are still available.
Like I said in my other comment, this is missing the point. This approach won’t be effective. Nothing is actually being communicated to the people making decisions. The difficulty in finding another more effective approach doesn’t change that fact. If you feel passionate about this issue, you should try some of the suggestions by the other commenter.
Ideally you're correct - but it does have the very large side effect of making real estate fraud much more lucrative. This isn't to say that the potential for people to break a law means we shouldn't have the law at all - but higher property taxes necessitate a much larger spend on property value auditors, and require a lot more stringency around property improvement permitting, to ensure that increases in property value are captured and recorded accurately.
How much fraud are we talking? I like to think along Matt Levine's lines that at least some amount of fraud is acceptable - there will always be some of it, and under- or over-regulation creates a net negative.
For sure. Probably should be using the total homestead exemption eligibility as the denominator instead of prop tax receipts. Even then, transfer tax is a "hill" and the homestead exemption is a "valley." But it's hard enough to estimate any of this data from a keyboard in a few seconds.
Sure - but nothing of real revenue-generating value is actually being created[1]. What's missing in that system is a way for money to enter the system. These deals are (in my cynical opinion at least) being inked to create the appearance of large continued investment and market excitement, to pump or sustain valuations. Oracle actually spotlighted the arrangement as future sales in its recent earnings, and that seemed to be what mostly drove its valuation up.
Performative actions to drive up valuation and try and attract more investors absolutely feels bubbly to me.
1. Discounting products that are not only currently operating at a loss but are priced well below the actual resourcing required to produce them.
Customers pay for the compute. There are tons of CSPs selling the capacity to small and large consuming entities alike (both OpenAI/Anthropic + other small outfits we've not heard of).
The fair criticism of the infra $ is asking where the non-VC, non-bank-loan cash stream is - but there could be a lot of B2B deals, and e.g. Meta, TikTok and other behemoths do tend to make plenty of money and pay their bills, and have an extreme thirst for more AI capacity.
Take Oracle for example (as a whole, not just OCI) - tons of customers who are paying for AI-enhanced enterprise products.
It's still early days; as the cost of creating software continues to approach zero, the rules will change in ways that are hard to predict. The effect this will have on other white-collar industries is even more challenging to reason about.
That line of reasoning conveniently left out the explosive datacenter revenue growth that generates huge free cash flows. Even disregarding AI, it's double-digit compounding growth.
NVIDIA's stock may eventually get decimated (though the company itself will be fine - they have a relatively low employee count and insane margins), and the CoreWeaves of the world are definitely leveraged plays on compute that may indeed end up being dotcom-style busts. But a key difference is that the driving forces at the very top - the Microsofts and Amazons of the world - have huge free cash flows, real compute demand growth beyond the AI space, and fortress balance sheets.
I think that's a fair point, and it speaks to one of the indicators that this possible bubble may be different from the dotcom bubble. I think end-user revenue for AI is a pipe dream - but the companies interested in compute have a whole lot of resources, and so long as they're willing to divert those resources to prop up AI, it can keep going for quite a while (at a smaller scale, though).
There is a commonly held belief that there is a level of compute (vaguely referred to as AGI) that would be extremely valuable, and those companies may continue to rationally fund AI research as R&D - though if the VC and loan funding dries up, there will probably be serious fights with the accounting departments. It is good to point out that companies with huge war chests do seem poised to continue investing in this even if VC money dries up due to the lack of end-user profitability. It'll be an interesting shift, but probably not as disastrous as the dotcom bubble burst was.
>What's missing in that system is a way that money is entering the system.
Or maybe not enough money soon enough, and at this scale that could be more of a disaster than it had to be.
So far it's not looking much like a business boom at all compared to the massive investment boom, which is undeniable - and that's where a good amount of the remaining prosperity is emanating from.
If you were a financial person, wouldn't you figure there are a lot bigger bonuses in getting involved with the amount of cash flow being invested, rather than the amount resulting from profits being made in AI right now?
I have yet to see a chat agent deployed that is more popular than tailored browsing methods. The most charitable explanation is that the tailored browsing methods already in place are the results of years of careful design and battle testing, and that the chat agent provides most of the value a tailored browsing method would without any of the investment required to bring a traditional UX to fruition. That may be the case, and if it is, allowing chat agents the same time to be refined and improved would be fair. I'm skeptical that's the only difference, though: I think chatbots are a way to, essentially, outsource the difficult work of locating data within a corpus onto the user, and users will always be at a disadvantage compared to the (hopefully) subject matter experts building the system.
So perhaps chatbots are an excellent method for building out a prototype in a new field while you collect usage statistics to build a more refined UX - but it is bizarre that so many businesses seem to be discarding battle tested UXes for chatbots.
Thing is, for those who paid attention to the last chatbot hype cycle, we already knew this. Look at how Google Assistant was portrayed back in 2016. People thought you'd be buying Starbucks via the chat. Turns out the Starbucks app has a better UX.
Yea, I don't want to sit there at my computer, which can handle lots of different input methods, like keyboard, mouse, clicking, dragging, or my phone which can handle gestures, pinching, swiping... and try to articulate what I need it to do in English language conversation. This is actually a step backwards in human-computer interaction. To use an extreme example: imagine instead of a knob on my stereo for volume, I had a chat box where I had to type in "Volume up to 35". Most other "chatbot solved" HCI problems are just like this volume control example, but less extreme.
It's funny, because the chat bot designers seem to be continually attempting to recreate the voice computer interface from Star Trek: TNG. Yet if you watch the show carefully, the vast majority of the work done by all the Enterprise crew is done via touchscreens, not voice.
The only reason for the voice interface is to facilitate the production of a TV show. By having the characters speak their requests aloud to the computer as voice commands, the show bypasses all the issues of building visual effects for computer screens and making those visuals easy to interpret for the audience, regardless of their computing background. However, whenever the show wants to demonstrate a character with a high level of computer mastery, the demonstration is almost always via the touchscreen (this is most often seen with Data), not the voice interface.
TNG had issues like this figured out years ago, yet people continue to fall into the same trap because they repeatedly fail to learn the lessons the show had to teach.
It's actually hilarious to think of a scene where all the people on the bridge are shouting over each other trying to get the ship to do anything at all.
Maybe this is how we all get our own offices again and the open floor plan dies.
They’d just have an array of microphones everywhere and isolate each voice - rooms only need n+1 microphones where n is the maximum number of people. That’s already simple to do today, and it’s not even that expensive.
Remember Alexa? Amazon kept wanting people to buy things with their voice via assorted echo devices, but it turns out people really want to actually be in charge of what their computers are doing, rather than talking out loud and hoping for the best.
>changes bass to +4 because the unit doesn't do half increments
“No, volume up to 35, do not touch the EQ”
>adjusts volume to 4 because the unit doesn’t do half increments
> I reach over, grab my remote, and do it myself
We have a grandparent who really depends on their Alexa, and let me tell you, repeatedly going “hey Alexa, volume down. Hey Alexa, volume down. Hey Alexa, volume down” gets really old, lol. We just walk over and start using the touch interface.
It's also a matter of incentives. Starbucks wants you in their app instead of as a widget in somebody else's - it lets them tell you about new products, cross-sell/up-sell, create habits, etc.
This general concept (embedding third parties as widgets in a larger product) has been tried many times before. Google themselves have done this - by my count - at least three separate times (Search, Maps, and Assistant).
None have been successful in large part because the third party being integrated benefits only marginally from such an integration. The amount of additional traffic these integrations drive generally isn't seen as being worth the loss of UX control and the intermediation in the customer relationship.
Omg thank you guys. It felt so obvious to me but nobody talked about it.
A dedicated UX is better, and a separate app or website feels like exactly the right separation.
Booking flights => browser => Skyscanner => type destination => evaluate options, with AI suggestions on top and a UX to fine-tune if I have out-of-the-ordinary wishes (don't want to get up so early).
I can't imagine a human or an AI being better than this specialized UX.
> I have yet to see a chat agent deployed that is more popular than tailored browsing methods.
Not an agent, but I've seen people choose doctors by asking ChatGPT for criteria, and they did make those appointments. It saved them countless web interfaces to dig through.
ChatGPT saved me so much money by searching for discount coupons on courses.
It even offered free-entrance passwords for events I didn't know had such a thing (I asked it where the event was and it also told me the free-entrance password it had found on some obscure site).
I've seen doctors use ChatGPT to generate medical letters -- ChatGPT used some medical-letter Python code and the doctors loved the result.
I've used ChatGPT to trim an energy bill to 10 pages because my current provider generated a 12-page bill in an attempt to prevent me from switching (they knew the other provider did not accept bills of more than 10 pages).
Combined with how incredibly good Codex is, and how easily ChatGPT can create throwaway one-time apps, there's no way the whole agent interface doesn't eat a huge chunk of the traditional UX software we're used to.
> the tailored browsing methods already in place are the results of years of careful design and battle testing
Have you ever worked in a corporation? Do you really think that Windows 8 UI was the fruit of years of careful design? What about Workday?
> but it is bizarre that so many businesses seem to be discarding battle tested UXes for chatbots
Not really. If the chatbot is smart enough, then the chatbot is the more natural interface. I've seen people who prefer to say "hey Siri set alarm clock for 10 AM" rather than use the UI. Which makes sense, because language is something people have literally evolved specialized organs for. If anything, language is the "battle tested UX", and the other stuff is a temporary fad.
Of course the problem is that most chatbots aren't smart. But this is a purely technical problem that can be solved within the foreseeable future.
> I've seen people who prefer to say "hey Siri set alarm clock for 10 AM" rather than use the UI.
It's quicker that way. Other things, such as zooming in on an image, are quicker with a GUI. Blade Runner makes clear how poor a voice UI is for this compared to a GUI.
For an alarm, there is only one parameter to set. In more complex tasks, chat is a bad UI because it does not scale well and does not offer good ways to arrange information. E.g. if I want to buy something and I have a bunch of constraints, I would rather use a search-based UI where I can quickly tweak those constraints and decide. Whether ChatGPT is smart or not is irrelevant here; it would just be a bad UI for the task.
You're thinking in the wrong categories. Suppose you want to buy a table. You could say "I'm looking for a €400 100x200cm table, black" and these are your search criteria. But that's not what you actually want. What you actually want is a table that fits your use case and looks nice and doesn't cost much, and "€400 100x200cm table, black" is a discrete approximation of your initial fuzzy search. A chatbot could talk to you about what you want and suggest a relevant product.
Imagine going to a shop and browsing all the aisles vs talking to the store employee. A chatbot is like the latter, but for a webshop.
Not to mention that most webshops have their categories completely disorganized, making "search by constraints" impossible.
Funny, I almost always don't want to talk to store employees about what I want. I want to browse their stock and decide for myself. This is especially true for anything that I have even a bit of knowledge about.
The thing is that "€400 100x200cm table, black" is just much faster to input and validate versus a salesperson, be it a chatbot or an actual person.
Also, the chatbot is just not going to have enough context, at least not in its current state. Why those measurements? Because that's how much room you have - you measured. Why black? Because your couch is black too (bad choice), and you're trying to do a theme.
Even when going to a shop, I prefer to look into the options myself first. Explaining to a salesperson what I need can take much more time, and then I'm never sure if they're just trying to upsell, whether I've explained my use case well, etc. The only case where I opt for a salesperson first is when I can't translate my use case into a specification because of the degree of technical or other knowledge needed. I can imagine, e.g., somebody who knows nothing about computers asking "I want a laptop with a good battery; I'd use it for this and that," the same way they would ask a salesperson or a technical friend. But I can't imagine using such an LLM to look for a table that has to fit certain measurements, or for anything else that doesn't require specialized product knowledge. If I know the specifications, opting for an AI chatbot is inefficient. If not, it could help.
> I've seen people who prefer to say "hey Siri set alarm clock for 10 AM" rather than use the UI. Which makes sense, because language is something people have literally evolved specialized organs for.
I don't think it's necessary to resort to evolutionary-biology explanations for that.
When I use voice to set my alarm, it's usually because my phone isn't in my hand. Maybe it's across the room from me. And speaking to it is more efficient than walking over to it, picking it up, and navigating to the alarm-setting UI. A voice command is a more streamlined UI for that specific task than a GUI is.
I don't think that example says much about chatbots, really, because the value is mostly the hands-free aspect, not the speak-it-in-English aspect.
I'd love to know the kind of phone you're using where the voice commands are faster than touchscreen navigation.
Most of the practical day-to-day tasks on the Androids I've used are 5-10 taps away from the lock screen, and get far fewer dirty looks from those around me.
1. Unlock the phone - easy, but takes an active swipe.
2. Go to the clock app - I might not have been on the home screen, so maybe a swipe or two to get there.
3. Set the timer to what I want - and here it COMPLETELY falls down, since it's probably showing how long the last timer I set was, and if that's not what I want, I have to fiddle with it.
If I do it with my voice I don't even have to look away from what I'm currently doing. AND I can say "90 seconds" or "10 minutes" or "3 hours" or even (at least on an iPhone) "set a timer for 3PM" and it will set it to what I say without me having to select numbers on a touchscreen.
And 95% of the time there's nobody around who's gonna give me a dirty look for it.
And less mental overhead. Go to the home screen, find the clock app, go to the alarm tab, set the time, set the label, turn it on, get annoyed by the number of alarms in there that I should delete so there isn't a million of them. Or just ask Siri to do it.
One thing people forget is that if you do it by hand, you can do it even when people are listening or when it's loud - meaning it works more reliably. And in your brain you only have to store one way of doing it instead of two. So I usually prefer the more reliable approach.
I don't know anyone who uses Siri except people with really bad eyes.
> Not really. If the chatbot is smart enough, then the chatbot is the more natural interface. I've seen people who prefer to say "hey Siri set alarm clock for 10 AM" rather than use the UI. Which makes sense, because language is something people have literally evolved specialized organs for. If anything, language is the "battle tested UX", and the other stuff is a temporary fad.
I do that all the time with Siri for setting alarms and timers. Certain things have extremely simple speech interfaces, and we've already found a ton of them over the last decade-plus. If it were useful to use speech for ordering an Uber, it would've been worth it for me to learn the specific syntax Alexa wanted.
Do I want to talk to a chatbot to get a detailed table of potential flight and hotel options? Hell no. It doesn't matter how smart it is, I want to see them on a map and be able to hover, click into them, etc. Speech would be slow and awful for that.
An alarm is a good example of an “output only” task. The more inputs that need to be processed, the less good a pure chatbot interface is (think lunch bowl menus, shopping in general, etc.).
I don't understand how a website for tech people turned into a boomerland of people who pride themselves on not using technology. It's like those people who refuse to use computers because they prefer doing everything the old-fashioned way and insist on society following them.
Cisco, Level3 and WorldCom all saw astronomical valuation spikes during the dotcom bubble and all three saw their stock prices and actual business prospects collapse in the aftermath of it.
Perhaps the most famous implosion of all was AOL, which merged (sort of) with Time Warner, gaining the lion's share of control through market-cap balancing. AOL fell so destructively that it nearly wiped out all the value of the actual hard assets that TW controlled pre-merger.