FT uses "underwater" because the deal was $300 Billion and the stock has lost $315 Billion in market cap since the deal. That's a bit of a stretch, but the rest of the article is very good.
And as it won't be obvious to everyone here: Alphaville is one of the few free parts of FT online. You need to create an account to access it, but don't need a paid subscription.
Yeah, these market cap discussions are always a bit meaningless, actually; stocks can be volatile for many reasons… it's not like they actually lost the delta.
So as some of my own feelings/thoughts on this: I've also sat on the "receiving side" of a "free forever" campaign now 2 times in my career. The first time driven by the CEO and the second time driven by the marketing team (and supported by the CEO). In both cases, I knew the truth (sitting on the product management side) that there was no sustainable way to have a "free forever" campaign: there was a finite end in both cases, on a 2-5 year horizon, before we needed to change plans. I advocated against adding the "forever" verbiage knowing this. The first time, I didn't push strongly: it was my mistake.
The second time, I pushed strongly and made sure the entire executive team knew that we would be misleading our users. I pointed to the horizon and talked about the problems with "forever" language. I had to push back very strongly on the marketing team to change the verbiage, and then they silently made updates anyway to add "forever" verbiage. They were eventually fired for this.
But what I find concerning here isn't that the "free" tier went away (it almost always must) but that there's denial and push-back in this set of threads about the verbiage. You made a mistake. Own it and apologize for the verbiage you put out there. Don't deny that it was ever there or argue over pedantic details about where/how that verbiage was placed.
As much as I question why someone would trust a free "forever" offer, I have a lot more questions about a company that is denying what's in the public record.
To be fair the CEO is quite comfortable using the word 'forever'. It was used as the title of the announcement to withdraw the Hobby tier and also specifically used to justify it:
Wow. The guy is a jerk and a liar. The board at PlanetScale needs to get this guy off the internet. He's too much of an asshole to be seen in public.
I have no real horse in this race. I know how to manage my own databases, but I do have people asking me about PlanetScale and asking me to use it for certain projects, and I will absolutely never do so now.
This is the bloody pricing page. If I as the CEO of a SaaS startup don't even know what our pricing page says I should step down. That's our offer, that's the most important page we have. Come on now. Writing "free forever" isn't something some rogue marketing intern does, this is a core positioning decision and something you'd absolutely be part of, if not leading, as the founder.
What are you arguing about? People send you web archive links and you're still stubborn enough to say that it wasn't the case. Shame to even see you here, just shows what kind of company you are from the inside.
I love the website. It stands out amongst a million vanilla SaaS marketing sites all using the same section stack template.
But nobody will actually use it the way they describe in this article. Nobody is going to use the site enough to learn and remember to use your site-specific window management when they need it.
Idk, the UX seems really self-evident to me. Also it's fun. I usually click away from this kind of product immediately but I stayed on this for probably 5-10 minutes just snooping around to see what it was all about.
I think it's revealing that a group that historically values making decisions based on verifiable and accurate information is now jumping to discredit "Vibe Coding" based on rumors that are easily disproven.
2. Replit "AI Deleted my Database" drama was caused by a guy getting inaccurate AI support. All he needed to do was click a "Rollback Here" button to instantly recover all code and data. https://x.com/jasonlk/status/1946240562736365809
What does this eagerness to discredit vibe coding say about us?
It's just human nature. Technology advances exponentially and huge leaps forward are increasingly being compressed into very short amounts of time. Just like how sages were worried about memory after the invention of writing or luddites were worried about job displacement with the industrial revolution; it is natural. What would the Italian Renaissance artists think of Photoshop? People whose livelihood and identity are inherently tied to this discipline can't help but be dismissive. "Vibe Coding" will be "coding" or "programming" in the near future, likely in just a few years as tools evolve. Just like we use text editors and GUIs now to do computing instead of punch cards and a single CLI.
> Just like how sages were worried about memory after the invention of writing or luddites were worried about job displacement with the industrial revolution;
This is pretty exemplary of resolution loss in verbal reasoning.
Which sages were worried about memory after the invention of writing? If you refer to the Phaedrus dialogue, the passage about the invention of writing is just a story that is used to move the conversation along. By that time writing had existed for thousands of years. The dialogue is about rhetoric and learning, not about writing itself.
Luddites are also often quoted by people with a lossy compression of history. The "King Ludd" graffiti and sabotage actions were closely tied to the suppression of labor action and harsh criminal enforcement, including calls for the execution of laborers involved in labor organization. It was a revolt against severe oppression, not some dumb way to try and stop progress.
I suppose it's all a matter of what one is using an LLM for, no?
GPT is great at citing sources for most of my requests -- even if not always prompted to do so. So, in a way, I kind of use LLMs as a search engine/Wikipedia hybrid (used to follow links on Wiki a lot too). I ask it what I want, ask for sources if none are provided, and just follow the sources to verify information. I just prefer the natural language interface over search engines. Plus, results are not cluttered with SEO ads and clickbait rubbish.
Hmm, I don't feel like this should be taken as a tenet of AI. I feel a more relevant takeaway would be less black and white.
Also I think what you're saying is a direct contradiction of the parent. Below average people can now get average results; in other words: The LLM will boost your capabilities (at least if you're already 'less' capable than average). This is a huge benefit if you are in that camp.
But for other cases too, all you need to know is where your knowledge ends, and that you can't just blindly accept what the AI responds with.
In fact, I find LLMs are often most useful precisely when you don’t know the answer. When you’re trying to fill in conceptual gaps and explore an idea.
Even say during code generation, where you might not fully grasp what’s produced, you can treat the model like pair programming and ask it follow-up questions and dig into what each part does. They're very good at converting "nebulous concept description" into "legitimate standard keyword" so that you can go and find out about said concept that you're unfamiliar with.
Realistically the only time I feel I know more than the LLM is when I am working on something that I am explicitly an expert in, in which case I often find that LLMs provide nuance-lacking suggestions that don't always add much. It takes a lot more filling in of context in these situations for it to be beneficial (but it still can be).
Take a random example of a nifty bit of engineering: the powerline ethernet adapter. A curious person might encounter these and wonder how they work. I don't believe an understanding of this technology is very obvious to a layman. Start asking questions and you very quickly come to understand how it embeds bits in the very same signal that transmits power through your house without any interference between the two "types" of signal. It adds data to high frequencies on one end, and filters out the regular power-transmitting frequencies at the other end so that the signal can be converted back into bits for use in the ethernet cable (for a super brief summary). But if I want to really drill into each and every engineering concept, all I need to do is continue the conversation.
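If you want to see that idea in miniature, here's a toy simulation I'd sketch (hypothetical frequencies and parameters, simple on-off keying rather than the OFDM scheme real HomePlug-style adapters use): data rides on a high-frequency carrier over the same wire as the mains, and a high-pass filter at the receiver separates the two.

```python
# Toy sketch of "data over the power line": a 60 Hz mains waveform and a
# 100 kHz data carrier share one wire; a high-pass filter recovers the data.
# All numbers are illustrative, not taken from any real adapter spec.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 1_000_000          # sample rate in Hz (hypothetical)
mains_freq = 60         # household power frequency
carrier_freq = 100_000  # hypothetical data carrier, far above mains
bit_rate = 10_000       # toy bit rate

bits = np.random.randint(0, 2, 50)            # random payload
samples_per_bit = fs // bit_rate
baseband = np.repeat(bits, samples_per_bit)   # rectangular pulses

t = np.arange(len(baseband)) / fs
mains = 170 * np.sin(2 * np.pi * mains_freq * t)                 # ~120 V RMS mains
carrier = 0.5 * baseband * np.sin(2 * np.pi * carrier_freq * t)  # on-off keyed data

line_signal = mains + carrier   # both signals share the same conductor

# Receiver side: high-pass filter strips the mains, keeps the data carrier.
sos = butter(4, 10_000, btype='highpass', fs=fs, output='sos')
recovered = sosfilt(sos, line_signal)

# Crude envelope detection per bit period to decide 0 vs 1.
envelope = np.abs(recovered).reshape(len(bits), samples_per_bit).mean(axis=1)
decoded = (envelope > envelope.mean()).astype(int)

print("bit errors:", int(np.sum(decoded != bits)))
```

Running it should report zero (or near-zero) bit errors, which is the whole trick: the two "types" of signal live at frequencies so far apart that a simple filter can pull them back apart at the other end.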
I personally find this loop to be unlike anything I've experienced as far as getting immediate access to an understanding and supplementary material for the exact thing I'm wondering about.
Above average people can also use it to get average results. Which can actually be useful. For many tasks and usecases, the good enough threshold can actually be quite low.
A broken clock is right twice a day. If you're spending more time writing instructions for the AI, and then rewriting those instructions several times before you get usable code, then you're absolutely wasting your time. At least this is my experience with "AI". Even the fancy autocomplete portion of "AI" gets in my way more than it helps. Of course, YMMV, but I seriously doubt you're doing any better with "AI" than anyone else actually is, and a lot of people using it just don't realize all the time it's wasting; they would if they paid attention and added it all up.