> But if you are in the vicinity of enough successful projects, over a long period of time, there's a good chance that leadership will notice that the common element is you.
This is only true if average tenure of leadership and management is more than a couple of years.
> For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.
I feel the same. I feel like it's a me problem. I was able to build and run massive systems at scale and never used kubernetes. Then, all of a sudden, around 2020, any time I wanted to build or run or do anything at scale, everywhere said I should just use kubernetes. And then when I wanted to do anything with docker in production, not even at scale, everywhere said I should just use kubernetes.
Then there was a brief period around 2021 where everyone - even kubernetes fans - realised it was being used everywhere, even when it didn't need to be. "You don't need k8s" became a meme.
And now, here we are, again, lots of people saying "just use k8s for everything".
I've learned it enough to know how to use it and what I can do with it. I still prefer to use literally anything else apart from k8s when building, and the only time I've ever felt k8s has been really needed to solve a problem is when the business has said "we're using k8s, deal with it".
It's like the Javascript or WordPress of the infrastructure engineering world - it became the lazy answer, IMO. Or the me problem angle: I'm just an aged engineer moaning at having to learn new solutions to old problems.
I think the reason mental health is more public these days is precisely because, historically, it wasn't talked about or addressed.
To extend your physical injury analogy: yes, people get physically injured. People break legs, and because of the focus and progress on treating physical injuries, they wear a cast for a few weeks, and then - for all practical purposes - the injury never happened.
Because the same attention wasn't applied to mental health, I think people realised they were surrounded by the equivalent of people dragging themselves around on the ground because of a broken leg a decade ago that never got fixed. Why would anyone do that? Either because they don't know about the treatment, or because they live in an environment where the idea of getting treatment is seen as a bad or weak or shameful thing.
> Why is it so implausible to say that practically everyone can expect to eventually have to deal with at least one significant mental injury, too?
Just like we expect to walk down the street and see the occasional person with a plaster or bandage to handle a physical injury, if you accept we all have mental injuries, why do you expect to see them handled any more privately than physical ones?
Because historically we haven't handled mental injuries as well as the physical ones. I don't completely disagree with your original points. I think depth, nuance, and accuracy of the conversation matters most of all. There is plenty of superficial, influencer-level chatter in both realms.
> Because historically we haven't handled mental injuries as well as the physical ones.
I think that's the crucial point: "because that's how we've always done it" is the only real justification I can think of for us not tackling mental struggles more head on. If we're brave enough to compassionately question the things we don't normally question, being more open about mental stuff is the right thing to do IMO.
I sometimes wonder if superficial, influencer-level chatter is an early part of the process of normalising tough conversation points. It can let people test the waters in a safe way, signalling that they want to talk about this stuff without yet getting too deep or vulnerable.
I think we can all agree AI is a bubble, and is over-hyped. I think we can ignore any pieces that say "AI is all bad" or "AI is all good" or "I've never used AI but...".
It's nuanced, can be abused, but can be beneficial when used responsibly in certain ways. It's a tool. It's a powerful tool, so treat it like a powerful tool: learn about it enough to safely use it in a way to improve your life and those around you.
Avoiding it completely whilst confidently berating it without experience is a position formed from fear, rather than knowledge or experience. I'm genuinely very surprised this article has so many points here.
Commenting on the internet points this article is getting: I realised I'd been reading most of the popular things here for months, and it was such a huge and careless waste of my time…
So I'm not even surprised it's getting so many internet points. It's not as if they were a sign of quality; if anything, the opposite. Bored, not-very-smart people thinking the more useless junk they consume, the better off they'll become. It doesn't work that way.
I think the stories told about this time in particular will be the same as the stories told about any boom/bust cycle: a frenzied feeling of progress which resulted in a tiny handful of people getting outrageously wealthy, whilst the vast majority of people, and society as a whole, lose a whole lot of time, money and dignity.
I think this is specifically the consequence of smart people working in a bubble: there's no clearly defined problem being solved, and there's no common solution everyone's aiming for, there's just a general feeling of a direction ("AI") along with a pressure to get there before anyone else.
It leads to the false feeling of progress, because everyone thinks they're busy working at the forefront, when in reality, only a tiny handful of people are actually innovating.
Everyone else (including me and the person you responded to) is just wasting time relearning new solutions every week to "the problem with current AI".
It's tiring reading daily/weekly "Advanced new solution to that problem we said was the advanced new solution last month", especially when that solution is almost always a synonym of "prompt engineering", "software engineering" or "prompt engineering with software engineering".
> It's tiring reading daily/weekly "Advanced new solution to that problem we said was the advanced new solution last month"
At least for the current iterations that come to mind here, every advanced new solution solves the problem for a subset of cases, and the advanced new solution after that solves it for a subset of the remaining cases.
E.g. if you are tool calling with a fixed set of 10 tools you don't _need_ anything outlined in this blog post (though you may use it as a token count optimization).
It's just the same as in other programming disciplines. Nobody is forcing you to stay up to date with frontend framework trends if you have a minimally interactive frontend where a <form> element already solves your problem.
Similarly, nobody forces you to stay up-to-date with AI trends on a daily basis. There are still plenty of product problems ready to be exploited that do well enough with the state of AI & dumb prompt engineering from a year ago.
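To make the "fixed set of 10 tools" case concrete, here's a minimal sketch of dispatching from a hard-coded tool registry. The tool names and the fake structured call are my own illustrations, not any particular vendor's API; the point is just that with a fixed registry you need no routing layer or dynamic tool discovery at all.

```python
# Hypothetical tools - stand-ins for whatever your fixed set of ~10 would be.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

def add(a: int, b: int) -> int:
    return a + b

# The entire "tool infrastructure": a hard-coded name -> function map.
TOOLS = {
    "get_weather": get_weather,
    "add": add,
}

def dispatch(call: dict):
    """Look up the tool named in a structured call and invoke it with its args."""
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Pretend the model emitted this structured tool call:
result = dispatch({"name": "add", "arguments": {"a": 2, "b": 3}})
print(result)  # prints 5
```

Everything fancier than this is, as noted above, mostly an optimisation (e.g. trimming which tool schemas you send to save tokens), not a prerequisite.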
Given how many paid offerings Google has, and the complexity and nuance of some of those offerings (e.g. AdSense), I am pretty surprised that Google don't have a functioning drop-in solution for billing across the company.
If they do, it's failing here. The idea of a penny-pinching megacorp like Google failing technically even in the penny-pinching arena is a surprise to me.
There are so, so many different reasons in the real world as to why/how details like that end up in front of the customer.
Is OP happy to work for Satan as long as he appears grammatically accurate, polite and concise?
Alternatively, OP is a nightmare to work with because every single other role in the company has to do things in exactly the way the engineers want, otherwise they're careless morons.
“Nontheistic Satanism, as exemplified by LaVeyan Satanism (practiced by the Church of Satan and First Satanic Church) and The Satanic Temple, holds that Satan does not exist as a literal anthropomorphic entity, but rather as a symbol of a cosmos which Satanists perceive to be permeated and motivated by a force that has been given many names by humans over the course of time. In this religion, "Satan" is not viewed or depicted as a hubristic, irrational, and fraudulent creature, but rather is revered with Prometheus-like attributes, symbolizing liberty and individual empowerment. To adherents, he also serves as a conceptual framework and an external metaphorical projection of the Satanist's highest personal potential.”
You see, Satan is not just a biblical figure, but also the average IT company.
This was a helpful reminder of a particular world view, thank you.
I think some intelligent people are intelligent because of a need for stimulation: they need more new information, so they learn lots and keep learning and the world throws more learnings at them because they get good at it.
Intelligence becomes an emergent property of dealing with, and distracting from, that craving for more information - a beneficial addiction.
So when someone like that stops doing stuff, or that flow of new information and experiences slows with life, that craving/withdrawal becomes sadness.
One solution is to feed the addiction. Learn more. Do stuff. Don't have any stuff to do? Well other people do! Do their stuff for them!
Yeah, intelligent people are good problem solvers & they naturally assume you can solve your way to happiness with a thought, plan, new goal, idea etc., when it's mostly about letting go & living.
We often also get attached to ways of operating that brought success in certain fields of life, subtly tying our self-worth & safety to this identity. These subtle identities we create to navigate life are often the hardest to see. They help in certain ways, but also limit us in others. If we become more aware of these patterns, we can keep them when useful, and take them less seriously when they are limiting.
I thought this article wasn't great, really. It was interesting, but I expected some more takeaways. Like this was the main consistent point I could find:
> they find that the people who score high on one of the many intelligences tend to score high on the others, too, just as Spearman would’ve predicted a hundred years ago.
It keeps coming up through the article, and it feels like the author disagrees with it and doesn't want to accept it, but doesn't give a solid argument against it.
I think intelligence comes down to perceived truth. An intelligent person is someone who demonstrates awareness of something about the world that feels intuitively true that you hadn't heavily considered before.
Intelligence is the word we use to describe people or things that are able to do this truth-revealing with some consistency - for whatever reason.
I think the unhappiness associated with certain types of intellect comes from the clash of that definition of intelligence with the concept of civil society.
The dream of a civil society is one where we all work together equally to solve each other's problems, and we all coexist as peers. The reality of that definition of intelligence is that some people are just a hell of a lot better for society than others.
When a person who doesn't feel intelligent has a problem, the expectation of society is that someone intelligent somewhere, somehow, will have solved that problem for them.
An intelligent person has the opposite experience: they see their problems more clearly, and they don't have the safety net of knowing that the best performers of the species are working to solve those problems for them.
An unintelligent person is expected to take and use value in society. An intelligent person is expected to provide and create that value in society. It's pretty easy to see how some intelligent people might feel hard done by with that arrangement.
The really happy intelligent people I've met seem to be the ones who've accepted their intelligence means a life of hardship serving others, whereas the angry intelligent people seem to have an air of entitlement or expectation or feeling that the world owes them something for their genius.