> Whoever ultimately owns the AI (or the Bazooka) will always dictate how and where the particular tool is used.
Your take confuses me, because in this case the owner is Meta. So yes, they have to think about what tools they make ("should we design a bazooka?") and how they'll use what they made ("what's the target, and when do we pull the trigger?").
They disbanded the team that was tasked with thinking about both.
From the article:
> RAI was created to identify problems with its AI training approaches, including whether the company’s models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms. Automated systems on Meta’s social platforms have led to problems like a Facebook translation issue that caused a false arrest
So after you've taken 6-12 months to ship, everybody else has already iterated twice by using a hosted model directly and is building a real customer base. You're only at v0.1 with your first customers, who are telling you they actually wanted something else, and now you have to go and not just massage some prompts but recode your compiler, toolchain, and everything else up and down the stack.
Perhaps if you already know your customers and requirements really, really well it can make a lot of sense, but I'd be very sceptical and ask: given how easy it is to do, why are you not validating your concept early with a fully general / expensive / hosted model? Premature optimisation being the root of all evil, and all that.
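To make the "how easy it is" point concrete, here's a minimal sketch of the kind of concept validation I mean, assuming the OpenAI Python SDK with an `OPENAI_API_KEY` set in the environment; the feature, model name, and prompt are all placeholders for whatever you're actually building:

```python
# Prototype the product feature against a hosted model: one prompt,
# no custom toolchain. Assumes `pip install openai` and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

def summarize_ticket(ticket_text: str) -> str:
    """Hypothetical feature under validation: summarize a support ticket."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; swap models freely while iterating
        messages=[
            {"role": "system",
             "content": "Summarize the support ticket in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_ticket("Customer reports the export button hangs on large files."))
```

A day of this in front of real users tells you whether anyone wants the thing at all; only then does it make sense to spend months optimising a cheaper or self-hosted stack behind the same interface.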
Already happened. [1][2]
[1] https://news.ycombinator.com/item?id=36211820
[2] https://www.sec.gov/news/press-release/2023-102