DDD suggests continuous two-way collaboration between domain experts and engineers to create a model that makes sense for both groups. Terminology enters the shared language from both sides so that everyone can speak to each other with more precision, leading to the benefits you stated.
Google SERPs are personalized. Likely OP is a Midjourney user which is recorded in his targeting profile.
When OP searches for Midjourney as a Midjourney user, Google’s algorithm infers he might want to consider an alternative because why would an existing user search for the product they’re already using.
We see evidence supporting this: no Midjourney ad showed up for a direct keyword-match query, and only alternatives were triggered.
This is kinda like Amazon retargeting you with alternative toasters after you just bought a new toaster. Most people think this is stupid. Well, the most likely cohort to buy a new toaster is a person that just bought one because they’re not satisfied with their purchase.
They still have to pay to stock, ship, then return and restock the shitty one, then stock and ship the better one. Hard to imagine there's profit in that.
IDK about most people but I rarely bother to return a cheap broken item. It's just not worth my time. I'd just grumble and replace it (but probably not from the same seller).
>When OP searches for Midjourney as a Midjourney user, Google’s algorithm infers he might want to consider an alternative because why would an existing user search for the product they’re already using.
That's insane. Someone searching for something they've searched for in the past is looking for stability of the search results; they're trying to get back to where they've been before. If they wanted different results, they'd change the search query.
Is this the "logic" behind Google and Youtube search results being different each time a query is run?
> This is kinda like Amazon retargeting you with alternative toasters after you just bought a new toaster. Most people think this is stupid. Well, the most likely cohort to buy a new toaster is a person that just bought one because they’re not satisfied with their purchase.
I don't think that makes sense. The goal of Amazon can't be to have you unhappy with shopping on Amazon, if for no other reason than that returns cost money.
You know how you can ask ChatGPT the same thing 3x in a row and get 3 completely different results? Google's basically the same and has been for a long time.
If you and I both ask for something hyper-specific, we'll see the same results. But the more generic the search term is, the more hyper-personalised it gets.
In some ways it makes sense: for example, we shouldn't see the same thing when we search for "restaurants", as we're unlikely to be looking for restaurants on the other side of the world. In many other ways it's annoying and counter-productive.
I oversimplified. :) Main gist is that SERPs are personalized and based on your targeting profile which makes the results non-deterministic, as we're experiencing. Google is the only entity who will ever truly know.
It has links to public sources on the pricing of both LLMs and search, and explains why the low inference prices can't be due to the inference being subsidized. (And while there are other possible explanations, it includes a calculator for what the compound impact of all of those possible explanations could be.)
Prices would be significantly higher if OpenAI was priced for unit profitability right now.
As for the mega-conglomerates (Google, Meta, Microsoft), GenAI is a loss leader to build platform power. GenAI doesn't need to be unit profitable, it just needs to attract and retain people on their platform, i.e. you need a Google Cloud account to use the Gemini API.
I believe the API prices are not subsidized, and there's an entire section devoted to that. To recap:
1) pure compute providers (rather than companies providing both the model and the compute) can't really gain anything from subsidizing. That market is already commoditized and supply-limited.
2) there is no value to gaining paid API market share -- the market share isn't sticky, and there's no benefit to just getting more usage since the terms of service for all the serious providers promise that the data won't be used for training.
3) we have data from a frontier lab on what the economics of their paid API inference are (but not the economics of other types of usage)
So the API prices set a ceiling on what the actual cost of inference can be. And that ceiling is very low relative to the prices of a comparable (but not identical) non-AI product category.
That's a very distinct case from free APIs and consumer products. The former is being given out for no cost in exchange for data, the latter for data and sticky market share. So unlike paid APIs, the incentives are there.
But given the cost structure of paid APIs, we can tell that it would be trivial for the consumer products to be profitably monetized with ads. They've got a ton of users, and the way users interact with their main product would be almost perfect for advertising.
The reason OpenAI is not making a profit isn't that inference is expensive. It's that they're choosing not to monetize like 95% of their users, despite the unit economics being very lucrative in principle. They're making a loss because for now they can, and for now the only goal of their consumer business is to maximize their growth and consumer mindshare.
If OpenAI needed to make a profit, they would not raise their prices on things being paid for. They'd just need to extract a very modest revenue from their unpaid users. (It's 500M unpaid users. To make $5B/year in revenue from them, you'd need just a $10 ARPU. That's an order of magnitude below what's realistic. Hell, that's lower than the famously hard-to-monetize Reddit's global ARPU.)
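Just to make that arithmetic explicit (a trivial sketch using the figures above):

    # Required annual ARPU from unpaid users, figures as above
    unpaid_users = 500_000_000
    target_revenue = 5_000_000_000          # $5B/year
    print(target_revenue / unpaid_users)    # -> 10.0, i.e. $10/user/year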
Yes, I read your entire article and that section, hence my response. :)
1) Help me understand what you mean by “pure compute providers” here. Who are the pure compute providers and what are their financials including pricing?
2) I already responded to this - platform power is one compelling value gained from paid API market share.
3) If the frontier lab you’re talking about is DeepSeek, I’ve already responded to this as well, and you didn’t even concede the point that the 80% margin you cited is inaccurate given that it’s based on a “theoretical income”.
1) Any companies that host APIs using open-weights models (Llama, Gemma, DeepSeek, etc.) in exchange for money. There's a lot of them around, at different scales and at different parts of a hosting provider's lifecycle. Check for example the OpenRouter page for any open-weights model for hosters of that model with price data.
2) (API) platform power having no value in this space has been demonstrated repeatedly. There are no network effects, because you can't use the user data to improve models. There is no lock-in, as the models are easy to substitute due to how incredibly generic the interface is. There is no loyalty, the users will jump ship instantly when better models are released. There is no purchasing power from having more scale, the primary supplier (Nvidia) isn't giving volume discounts and is actually giving preferential allocations to smaller hosting providers to fragment the market as much as possible.
Did you have some other form of platform power in mind?
3) I did not concede that point because I don't think it's relevant. They provide the exact data for their R1 inference economics:
- The cost per node: an 8*H800 node costs $16/hour (≈$0.0045/s) to run (rental price, so that covers capex + opex).
- The throughput per node: given their traffic mix, a single node will process 75k/s input tokens and generate 15k/s output tokens.
- Pricing: $0.35/1M input tokens (weighted for cache hit/miss), $2.2/1M output tokens.
- From which it follows that per-node revenue is $0.35/1M × 75k/s ≈ $0.026/s for input, and $2.2/1M × 15k/s ≈ $0.033/s for output. That's about $0.06/s in revenue, substantially higher than the ~$0.0045/s cost (quick sanity check below).
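Here's that arithmetic in a few lines of Python (all figures are the ones quoted above, not independently verified):

    # Back-of-envelope check of the per-node R1 economics quoted above
    node_cost_per_s = 16 / 3600                 # $16/hour rental for an 8*H800 node
    input_tokens_per_s = 75_000
    output_tokens_per_s = 15_000
    input_price_per_tok = 0.35 / 1_000_000      # cache-weighted
    output_price_per_tok = 2.2 / 1_000_000

    revenue_per_s = (input_tokens_per_s * input_price_per_tok
                     + output_tokens_per_s * output_price_per_tok)
    print(f"cost:    ${node_cost_per_s:.4f}/s")                     # ~$0.0044/s
    print(f"revenue: ${revenue_per_s:.4f}/s")                       # ~$0.059/s
    print(f"revenue/cost: {revenue_per_s / node_cost_per_s:.1f}x")  # ~13x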
Like, that just is what the economics of paid R1 inference are (there being V3 in the mix doesn't matter, they're the same parameter count). Inference is really, really cheap both in absolute cost/token terms and relative to the prices people are willing to pay.
Their aggregate margins are different, and we don't know how different, because here too they choose to also provide free service with no ads. But that too is a choice. If they just stopped doing that and rented fewer GPUs, their margins would be very lucrative. (Not as high as the computation suggests, since the unpaid traffic allows them to batch more efficiently, but that's not going to make a 5x difference.)
But fair enough, it might be cleaner to use the straight cost per token data rather than add the indirection of margins. Either way, it seems clear that API pricing is not subsidized.
Just had a quick glance, but I think I found something to add to the Objection!-section of your post:
Brave's Search API is $3 CPM and includes Web search, Images, Videos, News, Goggles[0]. Anthropic's API is $10 CPM for Web search (and text only?), excluding any input/output tokens from your model of choice[1]; that'd be an additional $15 CPM, assuming 1k tokens per request and Claude Sonnet 4 as a good model, so ~$25 CPM.
So your default "Ratio (Search cost / LLM cost): 25.0x" seems to be more on the 0.12x side of things (Search cost / LLM cost). Mind you, I just skimmed everything in 10 mins and have no experience using either API.
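Spelled out (prices as quoted above; the 1k tokens/request and Sonnet 4 output rate are my assumptions, so treat it as a rough sketch):

    # Rough cost per 1,000 search-augmented requests (CPM)
    brave_search_cpm = 3.0              # Brave Search API, $3 per 1,000 requests
    anthropic_search_cpm = 10.0         # Anthropic web search tool, $10 per 1,000 searches
    sonnet_output_per_mtok = 15.0       # assumed Claude Sonnet 4 output $/MTok
    tokens_per_request = 1_000          # assumption

    # 1,000 requests x 1,000 tokens = 1M tokens -> $15, plus $10 for search = ~$25
    llm_cpm = anthropic_search_cpm + sonnet_output_per_mtok * tokens_per_request / 1_000
    print(llm_cpm)                      # ~25.0
    print(brave_search_cpm / llm_cpm)   # ~0.12 (Search cost / LLM cost)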
Thank you for narrowing your claims, you might want to update your post at the top of the thread to call out your ADV/ACV assumption.
I appreciate all the experience and advice you’re offering on this thread! Take my feedback as a nitpick: as I was reading through your top post, my initial thought was “this isn’t true all the time” because I spent 6 years in 2 separate startups with significant and successful outbound sales where our ADV > $100k.
One company stayed private and profitable while driving revenue north of $80M/yr; and the other company sold enough long-term enterprise contracts to be acquired by a bigger $B company.
Correct. Kinda like how it suddenly came up when Facebook started showing memories of dead friends and relatives to people who neither wanted nor enjoyed it. There are many instances of humanity plowing headfirst into some technology thinking "this will be great!" only to haphazardly run into the unanticipated not-so-great parts.
Not to mention there are literally people creating tech out here _today_ that's recreating _exactly_ what some Black Mirror episodes were talking about years ago. Like interactive chatbots modeled after dead people from voice samples, videos, and messages.
Can you talk through specifically what sprint goals you’ve completed in an afternoon? Hopefully multiple examples.
Grounding these conversations in an actual reality affords more context for people to evaluate your claims. Otherwise it’s just “trust me bro”.
And I say this as a Senior SWE who's successfully worked with ChatGPT to code up some prototype stuff, but haven't been able to dedicate 100+ hours to work through all the minutiae of learning how to drive daily with it.
If you do want to get more into it, I'd suggest something that plugs into your IDE instead of copy/paste with ChatGPT. Try Aider or Roo Code. I've only used Aider, and run it in the VS terminal. It's much nicer to be able to leave comments for the AI and have it make the changes to discrete parts of the app.
I'm not the OP, but on your other point about completing sprint goals fast - I'm building a video library app for myself, and wanted to add tagging of videos. I was out dropping the kids at classes and waiting for them. Had 20 minutes and said to Aider/Claude - "Give me an implementation for tagging videos." It came back with the changes it would make across multiple files: Creating a new model, a service, configuring the DI container, updating the DB context, updating the UI to add tags to videos and created a basic search form to click on tags and filter the videos. I hit build before the kids had finished and it all worked. Later, I found a small bug - but it saved me a fair bit of time. I've never been a fast coder - I stare at the screen and think way too much (function and variable names are my doom ... and the hardest problem in programming, and AI fixes this for me).
Some developers may be able to do all this in 20 minutes, but I know that I never could have. I've programmed for 25 years across many languages and frameworks, and know my limitations. A terrible memory is one of them. I would normally spend a good chunk of time on StackOverflow and the documentation sites for whatever frameworks/libraries I'm using. The AI has reduced that reliance and keeps me in the zone for longer.
I think experiences vary. AI can work well with greenfield projects, small features, and helping solve annoying problems. I've tried using it on a large Python Django codebase and it works really well if I ask for help with a particular function AND I give it an example to model after for code consistency.
But I have also spent hours asking Claude and ChatGPT for help trying to solve several annoying Django problems, and I have reached the point multiple times where they circle back and give me answers that did not previously work in the same context window. Eventually, when I figure out the issue, I have fun and ask it "well, does it not work as expected because the existing code chained multiple filter calls in Django?" and all of a sudden the AI knows what is wrong! To be fair, there was only one sentence in the Django documentation that mentions not chaining filter calls on many-to-many relationships.
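For anyone who hasn't hit this: a minimal sketch of that gotcha, using hypothetical Blog/Entry models (the same shape as the example in the Django docs):

    from django.db import models

    class Blog(models.Model):
        name = models.CharField(max_length=100)

    class Entry(models.Model):
        blog = models.ForeignKey(Blog, on_delete=models.CASCADE)
        headline = models.CharField(max_length=255)
        pub_year = models.IntegerField()

    # One filter() with both conditions: each matching Blog must have a
    # *single* Entry satisfying both.
    Blog.objects.filter(entry__headline__contains="Lennon", entry__pub_year=2008)

    # Chained filter() calls on a multi-valued relationship: each call can be
    # satisfied by a *different* Entry, so this also returns blogs where one
    # entry mentions Lennon and some other entry is from 2008 -- usually not
    # what you meant.
    Blog.objects.filter(entry__headline__contains="Lennon").filter(entry__pub_year=2008)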
In what specific way did this post misrepresent or abuse the Dunning-Kruger concept? (Btw, the graph used is the same one used on the Wikipedia page for DK.) If you’re able to explain what you understand to be misrepresented, you can clear up the misconception for others — like me.
So, instead of engaging in a discussion, and sharing your knowledge, with someone genuinely interested in learning from you — to improve upon the seeming misconception that bothers you — you link to a paper and do nothing to correct your own pet peeve. Maybe consider that human life is finite, no person will ever be able to read or analyze everything, so you can help others when you have a piece of knowledge. Relevant - https://xkcd.com/1053/.
I do see other graphs that tell a different story. Namely, that confidence is a monotonically increasing function of competence. If the data supports the idea that there is a valley of despair where confidence decreases as competence increases, I must be missing it.
This is Commons, not a Wikipedia article. This image is incorrect, has been removed from the enwiki article, and is in fact explicitly tagged with a disputed factual accuracy notice.
Dunning-Kruger described a relationship between people's subjective opinion of their skill, and their performance on a test. They find the subjective curve is less steep than the objective one (low performers believe they are closer to the center than they really are, and so do top performers). There's no "peak of stupid", or anything else on that graph.
Repeating vague associations you've seen on the Internet before is how misinformation spreads.
I dispute nothing you write. Looking at the paper, that graph is not within it.
Either my eyes skipped past it or that dispute notice was added after I linked the image. Regardless it belongs there.
I have previously seen a similarly shaped graph with Dunning-Kruger effect discussions many times, including on Wikipedia I believe. Now I'm curious what the source of the misrepresentation is since it does not appear quite derivable without artistic interpretation from the paper's data.
Regardless, I'm glad to update and add to my beliefs.
Please note that despite the implication that seems to be in your final statement, I did not mean to say the graph was correct, only that it is a graph commonly associated with the paper's message and thus understandable for the author to have used. From that, the use of it doesn't quite come from nowhere. In fact, I didn't really say much at all. While Wikipedia is the first search result, The Decision Lab is next, which has a similar, even more distorted graph on their page [0] and yet is a fairly well esteemed organization.
Glad to improve my knowledge, but the fact that the graph is in common use is not misinformation, even if the graph itself misinforms and isn't from the paper.
Presumably you read the section where Brooks highlights all the forecasts executives were making in 2017? His NET predictions act as a sort of counter-prediction to those types of blindly optimistic, overly confident assertions.
In that context, I’d say his predictions are neither obvious nor lacking boldness when we have influential people running around claiming that AGI is here today, AI agents will enter the workforce this year, and we should be prepared for AI-enabled layoffs.
In what sense is self-driving “here” if the economics alone prove that it can’t get “here”? It’s not just limited coverage, it’s practically non-existent coverage, both nationally and globally, with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in.
It's covering significant areas of 3 major metros, and the core of one minor, with testing deployments in several other major metros. Considering the top 10 metros are >70% of the US ridehail market, that seems like a long way beyond "non-existent" coverage nationally.
You’re narrowing the market for self-driving to the ridehail market in the top 10 US metros. That’s kinda moving the goal posts, my friend, and completely ignoring the promises made by self-driving companies.
The promise has been that self-driving would replace driving in general because it’d be safer, more economical, etc. The promise has been that you’d be able to send your autonomous car from city to city without a driver present, possibly to pick up your child from school, and bring them back home.
In that sense, yes, Waymo is nonexistent. As the article author points out, lifetime miles for "self-driving" vehicles (70M) account for less than 1% of daily driving miles in the US (9B).
Even if we suspend that perspective, and look at the ride-hailing market, in 2018 Uber/Lyft accounted for ~1-2% of miles driven in the top 10 US metros. [1] So, Waymo is a tiny part of a tiny market in a single nation in the world.
Self-driving isn’t “here” in any meaningful sense and it won’t be in the near-term. If it were, we’d see Alphabet pouring much more of its war chest into Waymo to capture what stands to be a multi-trillion dollar market. But they’re not, so clearly they see the same risks that Brooks is highlighting.
There are, optimistically, significantly less than 10k Waymos operating today. There are a bit less than 300M registered vehicles in the US.
If the entire US automotive production were devoted solely to Waymos, it'd still take years to produce enough vehicles to drive any meaningful percentage of the daily road miles in the US.
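Rough math (the ~10M vehicles/year US production figure is my ballpark assumption; the rest is from above):

    us_registered_vehicles = 300_000_000
    us_annual_vehicle_production = 10_000_000   # assumption: ~10M vehicles/year
    target_share = 0.10                         # say, 10% of the fleet

    years = target_share * us_registered_vehicles / us_annual_vehicle_production
    print(years)   # ~3 years -- with *every* US-built vehicle being a Waymo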
I think that's a bit of a silly standard to set for hopefully obvious reasons.
> ..is a tiny part of a tiny market in a single nation in the world.
The calculator was a small device made in one tiny market in one nation in the world. Now we've all got a couple of hardware ones in our desk drawers, and a couple of software ones on each smartphone.
If a self-driving car can perform 'well' (Your Definition May Vary - YDMV) in NY/Chicago/etc., then it can perform equally 'well' in London, Paris, Berlin, Brussels, etc. It's just that the EU has stricter rules/regulations while the US is more relaxed (thus innovation happens 'there' and not 'here' in the EU).
When 'you guys' (US) nail self-driving, it will only be a matter of time til we (EU) allow it to cross the pond. I see this as a hockey-stick graph. We are still on the eraser/blade phase.
If you had read the F-ing article, which you clearly did not, you would see that you are committing the sin of exponentiation: assuming that all tech advances exponentially because microprocessor development did (for a while).
Development of this technology appears to be logarithmic, not exponential.
He's committing the "sin" of monotonicity, not exponentiation. You could quibble about whether progress is currently exponential, but Waymo has started limited deployments in 2-3 cities in 2024 and wide deployments in at least SF (its second city after Phoenix). I don't think you can reasonably say its progress is logarithmic at this point - maybe linear or quadratic.
Speaking for one of those metro areas I'm familiar with: maybe in SF city limits specifically (where they're still at half of Uber's share), but that's 10% of the population of the Bay Area metro. I'm very much looking forward to the day when I can take a robocab from where I live near Google to the airport - preferably, much cheaper than today's absurd Uber rates - but today it's just not present in the lives of about 95+% of Bay Area residents.
> preferably, much cheaper than today's absurd Uber rates
I just want to highlight that the only mechanism by which this eventually produces cheaper rates is by removing the need to pay a human driver.
I’m not one to forestall technological progress, but there are a huge number of people already living on the margins who will lose one of their few remaining options for income as this expands. AI will inevitably create jobs, but it’s hard to see how it will—in the short term at least—do anything to help the enormous numbers of people who are going to be put out of work.
I’m not saying we should stop the inevitable forward march of technology. But at the same time it’s hard for me to “very much look forward to” the flip side of being able to take robocabs everywhere.
People living on the margins is fundamentally a social problem, and we all know how amenable those are to technical solutions.
Let's say AV development stops tomorrow though. Is continuing to grind workers down under the boot of the gig economy really a preferred solution here or just a way to avoid the difficult political discussion we need to have either way?
I'm not sure how I could have been more clear that I'm not suggesting we stop development on robotaxis or anything related to AI.
All I'm asking is that we take a moment to reflect on the people who won't be winners. Which is going to be a hell of a lot of people. And right now there is absolutely zero plan for what to do when these folks have one of the few remaining opportunities taken away from them.
As awful as the gig economy has been it's better than the "no economy" we're about to drive them to.
This is orthogonal. You're living in a society with no social safety net, one which leaves people with minimal options, and you're arguing for keeping at least those minimal options. Yes, that's better than nothing, but there are much better solutions.
The US is one of the richest countries in the world, with all that wealth going to a few people. "Give everyone else a few scraps too!" is better than having nothing, but redistributing the wealth is better.
But this is the society we live in now. We don’t live in one where we take care of those whose jobs have been displaced.
I wish we did. But we don’t. So it’s hard for me to feel quite as excited these days for the next thing that will make the world worse for so many people, even if it is a technological marvel.
Just between trucking and rideshare drivers we’re talking over 10 million people. Maybe this will be the straw that breaks the camel’s back and finally gets us to take better care of our neighbors.
Yeah, but it doesn't work to campaign on an online forum against taking rideshare jobs away from people on the one hand, and on the other say "that's the society we live in now". If you're going to be defeatist, just accept those jobs might go away. If not, campaign for wealth redistribution and social safety nets.
Public transit has a fundamentally local impact. It takes away some jobs but also provides a lot of jobs for a wide variety of skills and skill levels. It simultaneously provides an enormous number of benefits to nearby populations, including increased safety and reduced traffic.
Self-driving cars will be disruptive globally. So far they primarily drive employment in a small set of the technology industry. Yes, there are manufacturing jobs involved, but those are overwhelmingly going to be jobs that were already building human-operated vehicles. Self-driving cars will save many lives, but not as many as public transit does (proportionally, per user). And it is blindingly obvious they will make traffic worse.
Waymo's current operational area in the Bay runs from Sunnyvale to Fisherman's Wharf. I don't know how many people that is, but I'm pretty comfortable calling it a big chunk of the Bay.
They don't run to SFO because SF hasn't approved them for airport service.
I just opened the Waymo app and its service certainly doesn't extend to Sunnyvale. I just recently had an experience where I got a Waymo to drive me to a Caltrain station so I can actually get to Sunnyvale.
The public area is SF to Daly City. The employee-only area runs down the rest of the peninsula. Both of them together are the operational area.
Waymo's app only shows the areas accessible to you. Different users can have different accessible areas, though in the Bay area it's currently just the two divisions I'm aware of.
Why would you consider the employee-only area? For that categorization to exist it must mean it's either unreliable for customers or too expensive because there are too many human drivers in the loop. Either way it would not be considered an area served by self-driving, imo.
There are alternative possibilities, like "we don't have enough vehicles to serve this area appropriately", "we don't have statistical power to ensure this area meets safety standards even though it looks fine", and "there are missing features (like freeways) that would make public service uncompetitive in this area", or simply "the CPUC hasn't approved a fare area expansion".
It's an area they're operating legally, so it's part of their operational area. It's not part of their public service area, which I'd call that instead.
I wish! In Palo Alto the cars have been driving around for more than a decade and you still can't hail one. Lately I see them much less often than I used to, actually. I don't think occasional internal-only testing qualifies as "operational".
Where's the economic proof of impossibility? As far as I know Waymo has not published any official numbers, and any third party unit profitability analysis is going to be so sensitive to assumptions about e.g. exact depreciation schedules and utilization percentages that the error bars would inevitably be straddling both sides of the break-even line.
> with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in
That argument doesn't seem horribly compelling given the regular expansions to new areas.
Analyzing Alphabet’s capital allocation decisions gives you all the evidence necessary.
It’s safe to assume that a company’s ownership takes the decisions that they believe will maximize the value of their company. Therefore, we can look at Alphabet’s capital allocation decisions, with respect to Waymo, to see what they think about Waymo’s opportunity.
In the past five years, Alphabet has spent >$100B to buy back their stock and retained ~$100B in cash. In 2024, they issued their first dividend to investors and authorized up to $70B more in stock buybacks.
Over that same time period they’ve invested <$5B in Waymo, and committed to investing $5B more over the next few years (no timeline was given).
This tells us that Alphabet believes their money is better spent buying back their stock, paying back their investors, or sitting in the bank, when compared to investing more in Waymo.
Either they believe Waymo’s opportunity is too small (unlikely) to warrant further investment, or when adjusted for the remaining risk/uncertainty (research, technology, product, market, execution, etc) they feel the venture needs to be de-risked further before investing more.
Isn’t there a point of diminishing returns? Let’s assume they hand over $70B to Waymo today. Can Waymo even allocate that?
I view the bottlenecks as two things. Producing the vehicles and establishing new markets.
My understanding of the process with the vehicles is they acquire them then begin a lengthy process of retrofitting them. It seems the only way to improve (read: speed up) this process is to have a tightly integrated manufacturing partner. Does $70B buy that? I’m not sure.
Next, to establish new markets… you need to secure people and real estate. Money is essential but this isn’t a problem you can simply wave money at. You need to get boots on the ground, scout out locations meeting requirements, and begin the fuzzy process of hiring.
I think Alphabet will allocate money as the operation scales. If they can prove viability in a few more markets the levers to open faster production of vehicles will be pulled.
> Alphabet has to buy back their stock because of the massive amount of stock comp they award.
Wait, really? They're a publicly traded company; don't they just need to issue new stock (the opposite of buying it back) to employees, who can then choose to sell it in the public market?
This is just a quirk of the modern stock market capitalist system. Yes, stock buybacks are more lucrative than almost anything other than a blitz-scaling B2B SaaS. But for the good of society, I would prefer if Alphabet spent their money developing new technologies and not on stock buybacks / dividends. If they think every tech is a waste of money, then give it to charity, not stock buybacks. That said, Alphabet does develop new technologies regularly. Their track record before 2012 is stellar, their track record now is good (AlphaFold, Waymo, TensorFlow, TPU, etc.), and it is nowhere close to being the worst offender of stock buybacks (I'm looking at you, Apple), but we should move away from stock price over everything as a mentality and force companies to use their profits for the common good.
That's a very hand-wavy argument. How about starting here:
> Mario Herger: Waymo is using around four NVIDIA H100 GPUs at a unit price of $10,000 per vehicle to cover the necessary computing requirements. The five lidars, 29 cameras, 4 radars – adds another $40,000 - $50,000. This would put the cost of a current Waymo robotaxi at around $150,000
There are definitely some numbers out there that allow us to estimate, within some standard deviations, how unprofitable Waymo is.
(That quote doesn't seem credible. It seems quite unlikely that Waymo would use H100s -- for one, they operate cars that predate the H100 release. And H100s sure as hell don't cost just $10k either.)
You're not even making a handwavy argument. Sure, it might sound like a lot of money, but in terms of unit profitability it could mean anything at all depending on the other parameters. What really matters is a) how long a period that investment is depreciated over; b) what utilization the car gets (or alternatively, how much revenue it generates); c) how much lower the operating costs are due to not needing to pay a driver.
Like, if the car is depreciated over 5 years, it's basically guaranteed to be unit profitable. While if it has to be depreciated over just a year, it probably isn't.
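To make the sensitivity concrete, here's a toy sketch (the $150k vehicle cost is from the quote upthread; every other number is a made-up assumption, not Waymo data):

    vehicle_cost = 150_000        # $/vehicle, from the quote upthread
    rides_per_day = 20            # assumption
    revenue_per_ride = 20.0       # assumption, $
    other_opex_per_day = 150.0    # assumption, $ (remote ops, charging, cleaning, insurance)

    operating_margin_per_day = rides_per_day * revenue_per_ride - other_opex_per_day  # $250

    for years in (1, 3, 5):
        depreciation_per_day = vehicle_cost / (years * 365)
        print(years, round(operating_margin_per_day - depreciation_per_day))
    # 1 -161   -> unprofitable
    # 3  113   -> profitable
    # 5  168   -> profitable

Swing any one of those assumed numbers by 2x and the sign can flip, which is exactly the point.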
Do you know what those numbers actually are? I don't.
Here in the product/research sense, which is the hardest bar to cross. Making it cheaper takes time, but generally we have reduced the cost of everything by orders of magnitude when manufacturing ramps up, and I don't think self-driving hardware (sensors, etc.) would be any different.
It’s not even here in the product/research sense. First, as the author points out, it’s better characterized as operator-assisted semi-autonomous driving in limited locations. That’s great but far from autonomous driving.
Secondly, if we throw a dart on a map: 1) what are the chances Waymo can deploy there, 2) how much money would they have to invest to deploy, and 3) how long would it take?
Waymo is nowhere near a turn-key system where they can set up in any city without investing in the infrastructure underlying Waymo's system. See [1], which details the amount of manual work and coordination with local officials that Waymo has to do per city.
And that’s just to deploy an operator-assisted semi-autonomous vehicle in the US. EU, China, and India aren’t even on the roadmap yet. These locations will take many more billions worth of investment.
Not to mention Waymo hasn’t even addressed long-haul trucking, an industry ripe for automation that makes cold, calculated, rational business decisions based on economics. Waymo had a brief foray in the industry and then gave up. Because they haven’t solved autonomous driving yet and it’s not even on the horizon.
Whereas we can drop most humans in any of these locations and they’ll mostly figure it out within the week.
Far more than lowering the cost, there are fundamental technological problems that remain unsolved.
Thanks for linking me to the ICAP framework. ICAP and your "I do something else to learn" methodology generally jibe with the same thing I happened upon by chance after spending a lot of time trying to "learn how to learn" the best way; landing upon SRS, and Anki specifically, as a tool; and then finding a much better process+system. Given how deeply involved you are in the space, I assume you've heard of, and possibly follow, some of Justin Sung's videos and techniques?
He provides some scientific foundation behind the recommendations he makes, specifically his recommendations around mind mapping and _how_ to do it properly. His process puts mind mapping firmly in the _Interactive_ mode. The results are truly unbelievable.
So much so that after investing 20 hours to mind map a book for myself 7 months ago, I can recall practically all the information I mind mapped without rehearsal.
Mind mapping makes up probably 70% of my learning these days, then I have a long-form written system for the other 29%, and sometimes, when I have a little isolated fact that doesn’t fit in either system, I turn to SRS for memorization of the last 1%.
I'm not familiar with Sung's videos, but after a quick perusal of his thumbnails I have some knowledge in the various cognitive science concepts he covers (Cognitive Load Theory, Flow, Mindmaps). I didn't really dig into educational theory until I started teaching Computer Science. I wanted to figure out how to better instruct my students after learning about the high drop/fail rates in intro courses. Once I started to experiment in my classes, I decided I should get the PhD for a pay bump.
Once I was in the program, I focused most of my research on reading where cog sci was being used for STEM, but also for general practice research. I've been training martial arts for almost 20 years now, so some of the research was me double-dipping in how to improve teaching CS and punching people.
Honestly, I still argue that martial arts' spaced repetition was a bigger influence on how I view learning. I need to allocate 2-4 hours, 1-4 times a week, for practice (4 when I was younger and could get away with it); correct due to immediate feedback from my partners; and have a giant support network of people throughout the US that makes me vested in not only the art but their lives as well. I acknowledge the benefits of meta-cognitive methods like planning and self-reflection, but they feel more theory than application.
Planning is great until you're a novice who doesn't know what to train next. Then you are just a struggling student receiving negative reinforcement, which only amplifies any imposter syndrome you already have. Sadly, there isn't much research exploring how physical athletes learn beyond simple spaced repetition. There's some work in interleaved practice [1], but since physical training is more or less "solved", progress is slow.
Instead, I focus on the various lower-level practice activities so students can acquire subskills without needing to program. Then, I heavily encourage building a 'sense of community' [2], not through group projects (which have their own faults) but rather in simply "giving a damn" about your classmates' progress.
At the end of the day, I think learning is heavily a "time on task" [3] problem and determining how to structure lower-level practice and toy examples that encourage you to keep with it and break Carol Dweck's "fixed mindset" [4].
I'd like to dig deeper into how to properly structure practice across ICAP modalities, but the sheer number of variables and even determining how many activities should be in a practice is too complex of a problem without a very large sample size.