While earlier detection has been beneficial, there's such a thing as too much; there's a balance to be struck. The detection procedures themselves (even non-invasive imaging) aren't risk free, false positives can set off a chain of events that carry their own harms, and it's not at all uncommon to develop cancers you "die with" instead of "die from" - but once they're detected you have to assume the worst, and treatment causes its own form of harm.
Getting one annually is not recommended for most patients. After my bowel resection, I had annual colonoscopies for five years, then every three years, now every five (the normal recommendation for my age cohort).
Colonoscopies are no big deal from my perspective, but they do carry some risk, bowel perforation being the primary one. The prep is the worst part for most patients; I used to love lemon-lime Gatorade, before I used it once to wash down the liquid laxative...
If you want calorie counting to be of any value, you're going to want to be accurate. Thankfully, we're mostly creatures of habit: source accurate numbers for everything you eat in a week and you're 90% covered for every other week. I recorded 30 items over two weeks, then added only 15 more over the next 2.5 months. Don't sabotage your efforts with tools that don't work.
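As a sketch of that approach, here's a minimal reusable food log in Python. The item names and calorie figures are illustrative placeholders, not sourced nutrition data; the point is just that each item's number is sourced once and then reused for every habitual meal after that:

```python
# Minimal sketch of a reusable food log: source an accurate calorie
# figure for each item once, then reuse it for habitual meals.
# Item names and calorie counts below are illustrative placeholders.

food_db = {}  # item name -> calories per serving, sourced once

def record_item(name, calories):
    """Store an accurately sourced calorie figure for later reuse."""
    food_db[name] = calories

def log_day(items):
    """Total a day's intake from previously recorded items."""
    missing = [i for i in items if i not in food_db]
    if missing:
        # Refuse to guess: unsourced items defeat the purpose.
        raise KeyError(f"Source accurate numbers first for: {missing}")
    return sum(food_db[i] for i in items)

record_item("oatmeal", 300)
record_item("chicken salad", 450)
print(log_day(["oatmeal", "chicken salad"]))  # 750
```

After the first week or two of lookups, most days are just sums over entries you already trust, which is where the "90% covered" effect comes from.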
There was a post here recently about someone who did just this and was able to survive in a competitive niche. The key(?) is to look for niches where a significant slice is owned by PE firms.
>Most developers don't love writing tests, or updating documentation, or working on tricky dependency updates - and I really think we're heading to a world where AI can take the load of that and free me up to work on the most interesting and complex problems.
Where does the "most" come from? There's a certain sense of satisfaction in knowing I've tested a piece of code per my experience in the domain coupled with knowledge of where we'll likely be in six months. The same can be said for documentation - hell, on some of the projects I've worked on we've had entire teams dedicated to it, and on a complicated project where you're integrating software from multiple vendors, the costs of getting it wrong can be astronomical. I'm sorry you feel this way.
> There's a certain sense of satisfaction in knowing I've tested a piece of code per my experience in the domain coupled with knowledge of where we'll likely be in six months.
one of the other important points about writing unit tests is that they aren't just there to confirm the implementation, but to improve upon it through the process of writing tests and discovering additional requirements, edge cases, etc. (tdd and all that)
i suppose it's possible that at some point an ai could be complex enough to try out additional edge cases, or check against a design document or something, and do those parts as well... but idk, it's still after-the-fact testing instead of at design time, so it's less valuable imo
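To make the design-time point concrete, here's a minimal TDD-style sketch. The `word_count` function and its edge cases are hypothetical examples chosen for illustration, not something from the thread; the idea is that writing the assertions first is what surfaces the requirements:

```python
# Tests written before (or alongside) the implementation surface
# requirements a happy-path version would miss: empty input,
# runs of extra whitespace. word_count is a hypothetical example.

def word_count(text):
    """Count words, treating any run of whitespace as one separator."""
    return len(text.split())

# Writing these first is what forces the edge cases into the design:
assert word_count("hello world") == 2       # basic requirement
assert word_count("") == 0                  # edge case: empty input
assert word_count("  spaced   out  ") == 2  # edge case: extra whitespace
```

Testing after the fact can only confirm whatever behavior the code happens to have; here, the empty-input and whitespace cases shape the implementation (hence `str.split()` with no argument) rather than merely documenting it.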
Adopting this feels like a very risky proposition given Google's tendency to drop support for things. Samsung have supported DeX for almost a decade now.
Apart from regulating social media, which a lot of people will say is censorship, I can't think of much else.
We can, however, negate the benefits of using social media to spread misinformation by making it a criminal offense for politicians to knowingly lie or misrepresent the truth. This limits the damage they can cause.
Also, extending that to social media influencers, and the media in general, would make sense.
As for who decides what is true, that would be up to the court system.