"You will get fewer leads with the 'enterprise style' contact page. You don't have enough leads right now. You don't have low-value self-serve users you want to turn away. Your BDR team is not overflowing with leads. You can make money from having more leads. Fewer leads will generate less revenue. Here are some potential metrics from the two styles of contact pages. Here is how these metrics tie into revenue."
I think an honest message like this, at least communicated via email to the budget owners, would do the trick... or at least absolve one of any guilt.
Also, thank you for having the option to toggle the font. I wrote a CSS rule, but found the toggle later.
I think the last point combined with some real data or case studies would prompt introspection.
Anecdotally I stick to companies with good customer support like glue, even if their product is inferior. It's an absolute wonder to be taken seriously by a company, to have feedback integrated into future products, or just have small issues taken care of without hassle.
You're going to laugh, but this is why I stick with AWS. They've twice helped me with billing issues on my personal account - as in an actual human helping me. They have no idea I manage large (not huge) AWS deployments at my day job. They just demonstrate great customer service to me as a small client.
So they have me as a loyal customer. And advocate, it seems.
My toddler was playing with my Kindle the other day, and he bought a £600 (yes, six hundred) volume of books. I was unable to refund them automatically, and when looking for help I was confronted with a "fuck off" contact page. After finding the option to talk to a human, I was put through within 5 seconds, and the woman had the item refunded in about 1 minute.
Amazon seems to be going for a model where they keep support costs down by making it progressively harder over time to actually contact a person, but when you do manage to you get a good experience. It's an interesting idea, and I suspect that the pleasant surprise at the end makes up for a lot of the frustration getting there.
>They have no idea I manage large (not huge) AWS deployments
I wonder if that's true? Like, how thorough are they about identifying customers? If the same IP address was used to log in to manage two deployments, would customer service see a potential link in their interface?
I'm never quite sure in our supposed data-driven economy how clever companies get with this stuff.
First, if this is private vs corporate, they are probably using a separate laptop, likely with a VPN. Second, doing this kind of shadow profiling is a lot of work with potential legal consequences with little gain, at least for support teams. For fraud detection, that is a completely different thing.
So I think a simpler explanation is more plausible: they are selling AWS at such a premium that they can afford normal human customer service and still make a lot of money.
AWS specifically has a policy of providing strong support regardless of how much money they're getting from you, be it $5/mo or $5,000/mo. They definitely have the resources and signals to connect the dots, but it doesn't necessarily affect whether you get support, unless that SigInt tells them you're abusing the system in some way (e.g., a scammer, spammer, or bad actor). More money certainly seems to get you better support, but even entry-level users still get decent support, and any connecting of the dots that may or may not be happening doesn't seem to have an impact unless you're using the same billing/contact info or account. That said, I can't objectively say what their customer support reps actually see regarding that kind of info, but after two decades of working with clients big and small using or considering AWS, I can confirm their approach to support is genuinely quite good, especially at the cheaper end of their offerings compared to competitors.
Hell, they still treat me well despite my being a very outspoken critic socially; professionally I have steered a lot of clients away from their ecosystem and thus am objectively responsible for very real losses in revenue, though ultimately still surely a rounding error to their bottom line.
For context, these days I primarily work in helping people deploy performant and/or secure storage systems and associated networks. "This is how much money you're wasting by using AWS/the cloud" is a common approach for us, and the most common counter-point is how good AWS support is (and they're not wrong).
TL;DR: I have lots to criticize about AWS, but their support isn't really one of them; it's genuinely good, especially for small users. Also, for many people AWS is perfectly fine, and I still use them off and on myself. I only allege it's a "waste of money" in specific situations, but that's also largely subjective, of course, depending on what's important to you/the client.
AWS support is extremely good. I have had the same experience in personal projects and in turn have quadrupled down on our leverage of their support at my work.
Absolutely. I've communicated with product teams at AWS in my day job, which is perhaps less surprising as I've worked for some large organisations, but I've also been put in contact with product teams for my personal projects when I encountered bugs with AWS SSO, for example.
It's annoying that they actually solve my problems because it would be so easy to hate them as the 900 lb gorilla.
On the professional side, they also often let you interact with their experts and architects directly, as part of your support contract. With most other companies, you either have to go through front-office support exclusively, or pay extra for Professional Services.
I don’t think the person you’re replying to is suggesting literally that exact message, but something like it. Adapt to your client and the type of relationship you have with them. You can transmit that same message with a different tone.
It will still come across as scolding and out of touch. It makes a lot of assumptions that a contractor will never have insight into, and because of that, no matter how soft the wording, it will always come across as self-aggrandizing.
Actually, the default font is much more pleasant than that used on this site (https://lenowo.org/index.php) which I complained about a few days ago - and that site doesn't have an option to make it more readable as far as I can see...
You have to judge it client by client though. Some are amenable to and grateful for a flatly stated analysis and recommendation, even if it goes against their ideas. Some will feel belittled and undermined. You need both sorts to pay their invoices and refer their peers, so you pick your battles.
This has always frustrated me. You wouldn't go to a doctor, hear that you need an appendix removed, and feel "belittled and undermined"!
The 'problem' (it's a problem from my pov) is that clients simply think they know better when it comes to digital/computer/online stuff. They're used to browsing the web, so they think they know what a good website is. They know how to write a letter in MS Word, so they think they can write good web copy. Etc.
> You wouldn't go to a doctor, hear that you need an appendix removed, and feel "belittled and undermined"!
It happens more than you'd think, even in the HN comment section! Go to any thread where the topic is medical or diseases. Plenty of people distrust their doctor and advocate going to the doctor with your own crackpot theory you "researched" on WebMD. There's a huge anti-credential streak, even here. A lot of people see professional service providers of all kinds as "mere gatekeeping implementors of my own ideas" rather than experts in the field.
There’s a site that collects stories about experiences like this. It used to be called Clients From Hell, but got absorbed into a bigger site, called Not Always Right[0]. I suspect some of the stories are apocryphal, but it can be entertaining.
A lot of it is internal politics. As a consultant, you see the tip of the iceberg. There may be rational reasons for seemingly irrational decisions that you're not privy to. Your contact's boss wants it done some particular way, so your contact insists on doing it that way. Or your contact has recommended doing it some way internally, and they don't want to be made to look a fool by an outside consultant. Etc.
> This has always frustrated me. You wouldn't go to a doctor, hear that you need an appendix removed, and feel "belittled and undermined"!
Many people absolutely do. Hell, look at the number of people who refused to take a safe and effective vaccine during a pandemic!
> The 'problem' (it's a problem from my pov) is that clients simply think they know better when it comes to digital/computer/online stuff.
I must also say there is definitely a reasonable point to challenge your doctor. While they're an expert, they're still human. As a software engineer, I expect my non-expert colleagues to challenge me, and I've come up with better ideas as a result.
As a real-life example, I'm currently trying to get treatment for my Morton's neuroma (foot-nerve issue). The orthopaedic consultant wants to do a neurectomy, but I want to investigate alternatives before taking the leap. Why? The alternatives, while they may not work, won't make things appreciably worse, whereas a neurectomy has a 3-6 month recovery if it goes well and can't really be undone if it goes wrong.
An unfortunate yet unsurprising report to those familiar with the literature on cognitive ability. I too donated to similar programs. I hope better computer skills have some sort of impact on earnings, though the prevalence of smartphones probably makes a bigger difference.
I'm surprised at how popular these racist explanations for why the program failed are, instead of exploring the fact that the hardware, software, and training for teachers might have been lacking.
If they were good at what they set out to do, the program would've been successful and desired in Western countries (perhaps with upgraded models). But it wasn't.
I'd say the lack of ability to self-reflect on the shortcomings of the HW/SW/infra, and the willingness of the program's creators to embrace such explanations, is much more telling about the probable cause of failure.
I'm sure most of these supposedly cognitively inferior Peruvian kids are on their laptops right now, playing League or Overwatch, with most of them having smartphones.
In broader strokes, and with the benefit of hindsight, I think the story of Africa is worth exploring: after a century of selfless Western efforts to civilize and develop the continent, very little progress has been made. Then the Chinese moved in with far less noble intentions and a profit motive, and succeeded beyond imagination at civilization-building.
Again, any evidence? What exactly is 'cognitive ability'? A hallmark of the lack of substantive argument is vague terms that can mean anything the speaker likes, and by not defining the term they prevent any substantive critique - nobody really knows what they're talking about (and usually, not the speaker either).
I highly doubt it's all or nothing. While there are likely variations in anything, they can be quite insignificant. For example, everyone, with tiny exceptions, can learn to speak & understand language, and write & read - highly sophisticated cognitive abilities. And they can improve those abilities through education.
These baseless generalities don't show much 'cognitive ability'.
That'd be very amusing, since I'm not white. But no, it just happens to be fairly well documented that some people run faster hardware than others for a wide variety of nature and nurture reasons.
They also run better or worse neural networks on that hardware, which can be educated, but there's no replacement for displacement.
Plenty of white supremacists and their supporters are not white. Plenty are, and say they aren't.
> it just happens to be fairly well documented that some people run faster hardware than others for a wide variety of nature and nurture reasons.
Vague statements like that are unfalsifiable. Of course there are variations in performance - that's absolutely undeniable; most likely I type faster or slower than you do. The questions are, how big are those variations, and how much do they depend on what the person is conceived with biologically? If you want to claim anything, you need to be much more specific about those issues.
Good on you for admitting it, but this popular way of being intentionally wrong just because some baddies have stuck their flag in the hill of truth is anti-scientific. Everyone's trying to protect their personal image at the expense of honesty. I'm constantly encountering people who have wrong beliefs about this stuff because the scientific conclusions are so well hidden from mainstream writing on the topic. Even the person replying to you seems shocked to hear that intelligence is innate. Blank-slatism and everybody's-a-winner has infected popular understanding of intelligence.
Except that raw unadjusted IQ scores for even the "hardest" and supposedly most culturally unbiased test (Raven's Progressive Matrices) have consistently shown a secular gain of about one standard deviation over 30-to-40 years, due to the so-called Flynn Effect; with much of it concentrated at the low end. The whole notion that these tests simply measure some kind of purely "innate" ability is highly implausible to say the least; even more so when you compare across different cultural subgroups and even totally different countries.
Not to mention that any test of "innate" ability should not be affected by training or practice, but all known tests of supposedly innate ability are. Even Binet (yes, the guy who invented the IQ test) found substantial practice effects; these effects were replicated by Gibson (1969).
It's obviously both genetic and environmental. You can limit people with a detrimental environment (extreme example: inflicting brain damage) but can't improve them beyond their natural ceiling. And yes, tests don't purely measure that innate ceiling.
> It's obviously both genetic and environmental. You can limit people with a detrimental environment (extreme example: inflicting brain damage) but can't improve them beyond their natural ceiling.
The question besides the obvious is how close to their ceiling the average human is (or even the 90th percentile). Because the entire discourse about “ceiling” implies that people are somewhat limited by their ceiling. But if 90% of the people are plateauing at 30% of their ceiling because of environmental factors, it makes little sense to talk about the ceiling at all.
The A100 SXM4 has a TDP of 400 watts; let's say about 800 with cooling and other overhead.
Bulk pricing per kWh is about 8-9 cents industrial. We're over an order of magnitude off here.
At a $20k all-in price (MSRP + datacenter costs) for the 80GB version, with a 4-year payoff schedule the card costs 57 cents per hour (20,000/24/365/4), assuming 100% utilization.
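As a sanity check, the arithmetic above can be sketched in a few lines. All figures (wattage, overhead factor, electricity price, card cost, payoff period) are the assumptions stated in the comment, not measured values:

```python
# Back-of-envelope A100 operating vs. capital cost per hour.
# All inputs are the assumed figures from the comment above.
TOTAL_WATTS = 800          # 400 W TDP roughly doubled for cooling/overhead
PRICE_PER_KWH = 0.085      # midpoint of the 8-9 cent industrial bulk price
CARD_COST = 20_000         # 80 GB card, MSRP + datacenter costs
PAYOFF_YEARS = 4

# Electricity cost per hour of continuous operation
power_cost_per_hour = (TOTAL_WATTS / 1000) * PRICE_PER_KWH

# Amortized hardware cost per hour at 100% utilization
capital_cost_per_hour = CARD_COST / (24 * 365 * PAYOFF_YEARS)

print(f"power:   ${power_cost_per_hour:.3f}/hr")    # ~$0.068/hr
print(f"capital: ${capital_cost_per_hour:.2f}/hr")  # ~$0.57/hr
print(f"ratio:   {capital_cost_per_hour / power_cost_per_hour:.1f}x")
```

Under these assumptions the capital cost dominates the electricity cost by roughly 8x; any utilization below 100% widens that gap further.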
I'll second this. Obsidian has a new core feature being beta tested called "base": it allows filtering on properties and other attributes of notes into a table-type view, with display and editing of those properties in the "base" view. This is a huge step forward for a lot of users wanting to make the jump off of Notion.
Whether this is true or not, this is a clever move to publicize. Anyone now being poached by Meta from OpenAI will feel like asking for $100M bonuses, and will possibly feel underappreciated with only a $20M or $50M signing bonus.
Barry Badrinath, down on his luck man-hooker: It's $10 for a BJ, $12 for an HJ, $15 for a ZJ...
Landfill: [Interrupting] What's a ZJ?
Barry Badrinath: If you have to ask, you can't afford it.
Isn't pretty much everyone working at OpenAI already clearly motivated by money over principle? OpenAI had a very public departure from being for-good to being for-money last year...
Lots of people working for AI labs have other AI labs they could work for, so their decisions will be made based on differences of remuneration, expected work/role, location, and employer culture/mission.
The claim above is that OpenAI loses to other labs on most of the metrics (obviously depends on the person) and so many researchers have gone there based on higher compensation.
Millions take a noticeable pay cut; it suppresses wages in many fields.
It’s one of the reasons so many CEO’s hype up their impact. SpaceX would’ve needed far higher compensation if engineers weren’t enthusiastic about space etc.
It’s not like tech companies have a playbook for becoming “sticky” in peoples’ lives and businesses by bait and switch.
They still call it “open” by the way. Every other nonprofit is paying equivalent salaries and has published polemics about essentially world takeover, right?
There are options other than money and virtue signaling for why you'd work a given job.
Some people might just like working with competent people, doing work near the forefront of their field, while still being in an environment where their work is shipped to a massively growing user base.
Even getting 1 of those 3 is not a guarantee in most jobs.
While your other comment stands, there is no separating yourself from the moral implications of who you're working for.
If your boss is building a bomb to destroy a major city but you just want to work on hard technical problems and make good money at it, it doesn’t absolve you of your actions.
If you worked at OpenAI post "GPT-3 is too dangerous to open source, but also we're going to keep going", you are probably someone who is more concerned with the optics of working on something good or world-changing.
And realistically, most people I know well enough who work at OpenAI and wouldn't credit the talent, or the shipping culture, or something similar are people who love the idea of being able to say they're going to solve all humanity's problems with "GPT 999, Guaranteed Societal Upheaval Edition."
Working at an employer that says they're doing good isn't the same as actually doing good.
Especially when said employer is doing cartoonishly villainous stuff like bragging about how they'll need to build a doomsday bunker to protect their employees from the great evi... er, good, that their ultimate goal would foist upon the wider world.
Good point. I was thinking of the "actually doing good" part. There's absolutely a lot of empty corporate virtue signalling, and also some individuals like that. But there are still individuals who genuinely want to do good.
I'm really confused by this comment section. Is no one considering the people they'll have to work with, the industry, the leadership, the customers, the nature of the work itself, the skillset you'll be exercising... literally anything other than TC when selecting a job?
I don't get why this is a point of contention, unless people think Meta is offering $100M to a React dev...
If they're writing up an offer with a $100M sign on bonus, it's going to a person who is making comparable compensation staying at OpenAI, and likely significantly more should OpenAI "win" at AI.
They're also people who have now been considered to be capable of influencing who will win at AI at an individual level by two major players in the space.
At that point even if you are money motivated, being on the winning team when winning the race has unfathomable upside is extremely lucrative. So it's still not worth taking an offer that results in you being on a less competitive team.
(In fact it might backfire, since you probably do get some jaded folks who don't believe in the upside at the end of the race anymore, but will gladly let someone convert their nebulous OpenAI "PPUs" into cash and Meta stock while they coast.)
> even if you are money motivated, being on the winning team when winning the race has unfathomable upside
.. what sort of valuation are you expecting that's got an expected NPV of over $100m, or is this more a "you get to be in the bunker while the apocalypse happens around you" kind of benefit?
$100M doesn't just get pulled out of thin air; it's a reflection of their current compensation. It's reasonable that their current TC is probably around 8 figures, with a good portion that will 10x on even the most miserable timelines where OpenAI manages to reach the promised land of superintelligence...
Also at that level of IC, you have to realize there's an immense value to having been a pivotal part of the team that accomplished a milestone as earth shattering as that would be.
-
For a sneak peek of what that's worth, look at Noam Shazeer: founded an AI chatbot app, fought his users on what they actually wanted, and let the product languish... then Google bought the flailing husk for $2.7 billion just so they could have him back.
tl;dr: once you're bought into the idea that someone will win this race, there's no way that the loser in the race is going to pay better than staying on the winning team does.
Ehh. I think much less of people who "sell out" for like $450k TC. It's so unnecessary at that level, yet thousands of people do it. $100M is far more interesting.