The genius of streaming was being more convenient than piracy. With streaming prices going up, recommendations getting worse, and libraries becoming plagued by one-song releases and AI slop, piracy is becoming a thing again. The same is apparently happening for video, as people get tired of paying more for half a dozen streaming services than they used to pay for 300 cable channels.
Apple actually used to have a platform, iTunes, that was decent at providing legitimate music at reasonable prices with a convenient way to play it. I wonder if Apple Music can become that again.
This looks absolutely amazing, but since this is the Internet and people need to complain about stuff, I can't see the knit cloth mat working well with a wheeled chair!
The author hints very briefly that semantic versioning is a hint, not a guarantee, which I agree with - but then I think we should insist that library maintainers treat semantic versioning as a guarantee, and in the worst case, boycott libraries that claim to be semantically versioned but don't follow it in practice.
I don't understand why major.minor.patchlevel is a "hint". It was already an interface contract for shared libraries written in C when I first touched Linux, and that was 25+ years ago, way before the term "semantic versioning" was even invented (AFAICT).
Imagine I make a library for loading a certain format of small, trusted configuration files.
Some guy files a CVE against my library, saying it crashes if you feed it a large, untrusted file.
I decide to put out a new version of the library, fixing the CVE by refusing to load conspicuously large files. The API otherwise remains unchanged.
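Concretely, the fix might look something like this sketch (the class name and the 1 MiB cap are made up for illustration): the signature stays the same, but inputs that used to load are now rejected.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Hypothetical loader; the name and the 1 MiB cap are invented for this example.
    public class ConfigLoader {
        private static final long MAX_CONFIG_BYTES = 1024 * 1024; // the new limit added by the "fix"

        public static String load(Path file) throws IOException {
            long size = Files.size(file);
            if (size > MAX_CONFIG_BYTES) {
                // Previously this method would happily read (and possibly crash on) huge files.
                throw new IOException("Refusing to load a config file of " + size + " bytes");
            }
            return Files.readString(file); // unchanged behavior for small files
        }
    }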
Is the new release a major, minor, or bugfix release? As I have only an approximate understanding of semantic versioning norms, I could go for any of them to be honest.
Some other library authors are just as confused as me, which is why major.minor.patchlevel is only a hint.
The client who didn't notice a difference would probably call it a bugfix.
The client whose software got ever-so-slightly more reliable probably would call it a minor update.
The client whose software previously was loading large files (luckily) without issue would call it major, because now their software just doesn't work anymore.
It's also an almost-real situation (although I wasn't the library developer involved).
You can Google "YAMLException: The incoming YAML document exceeds the limit" - an error introduced in response to CVE-2022-38752 - to see what happens when a library introduces a new input size limit.
What happened in that case is: the library bumped its version from 1.31 to 1.32; then a downstream application updated its dependencies, passed all its tests, and bumped its own version from 9.3.8.0 to 9.3.9.0.
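If I recall the SnakeYAML API correctly (method names from memory, so treat this as a sketch), downstream code that genuinely needed large documents then had to opt back in explicitly, roughly like this:

    import org.yaml.snakeyaml.LoaderOptions;
    import org.yaml.snakeyaml.Yaml;

    public class BigYamlExample {
        public static Object loadBigDocument(String yamlText) {
            LoaderOptions options = new LoaderOptions();
            // Raise the cap introduced in 1.32 (the default is 3 * 1024 * 1024 code points,
            // hence the "exceeds the limit" error) to whatever the application needs.
            options.setCodePointLimit(10 * 1024 * 1024);
            return new Yaml(options).load(yamlText);
        }
    }

A perfectly reasonable knob to have, but nothing about the 1.31 -> 1.32 bump hinted that anyone would need to turn it.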
> Imagine I make a library for loading a certain format of small, trusted configuration files.
> Some guy files a CVE against my library, saying it crashes if you feed it a large, untrusted file.
Not CVE-worthy, as the use case clearly falls outside of the documented / declared area of application.
> refusing to load conspicuously large files [...] Is the new release a major, minor, or bugfix release?
It deserves a major release, because it breaks compatibility. A capability that used to work (i.e., loading a large but trusted file) no longer works. It may not affect everyone, but when assessing impact, we go for the most conservative evaluation.
It can't be a guarantee. Even the smallest patches for vulnerabilities change the behavior of the code. Most of the time this is not a problem, but weird things happen all the time: higher memory usage, slower performance, regressions that are only relevant to a tiny fraction of users, ...
Pretty much. Everything is a breaking change to someone. Best to just ignore sem ver and have a robust automated test suite and deployment process that minimises issues with a bad build.
I don't really think that's accurate. In my last interview cycle, I aced the livecoding portion at each interview and didn't practice any leetcode problems at all. In my normal workflow I write utility scripts in Python using only the standard library pretty regularly. If you know how to write complete, small programs using only the standard library of some language, you'll do fine on livecoding interviews. A lot of people struggle to do this because they only know how to work inside of a framework.
Maybe they didn't give you the same sets of problems most companies use.
Last time I went through any of these, one of the problems was implementing a priority queue, for which I would have had to write a min-heap on the spot. There's no chance I'd be able to do that in 45 minutes with an interviewer breathing down my neck.
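For context, "write a min-heap on the spot" means roughly the following (a bare-bones, int-only sketch with no generics or error handling), which is still a fair amount of fiddly index math to get right live:

    import java.util.ArrayList;
    import java.util.List;

    // Bare-bones int min-heap backing a priority queue.
    public class MinHeap {
        private final List<Integer> heap = new ArrayList<>();

        public void push(int value) {
            heap.add(value);
            int i = heap.size() - 1;
            while (i > 0 && heap.get(i) < heap.get((i - 1) / 2)) { // sift up while smaller than parent
                swap(i, (i - 1) / 2);
                i = (i - 1) / 2;
            }
        }

        public int pop() { // removes and returns the smallest element; assumes the heap is non-empty
            int top = heap.get(0);
            int last = heap.remove(heap.size() - 1);
            if (!heap.isEmpty()) {
                heap.set(0, last);
                int i = 0;
                while (true) { // sift down towards the smaller child
                    int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
                    if (left < heap.size() && heap.get(left) < heap.get(smallest)) smallest = left;
                    if (right < heap.size() && heap.get(right) < heap.get(smallest)) smallest = right;
                    if (smallest == i) break;
                    swap(i, smallest);
                    i = smallest;
                }
            }
            return top;
        }

        private void swap(int a, int b) {
            int tmp = heap.get(a);
            heap.set(a, heap.get(b));
            heap.set(b, tmp);
        }
    }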
In other situations I had easier ones. I don't remember the problems specifically, but I recall one I googled after the interview, and the answer was using fast/slow two pointers to iterate through a list. I spent maybe 20 minutes trying a couple of different approaches and that one never occurred to me. The last time I had to use a two-pointer solution for a problem was at uni, which I left 15 years ago.
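To be clear about what I mean by fast/slow two pointers, the textbook instance is Floyd's cycle detection; my actual problem was some variation of this pattern that I no longer remember:

    // Classic fast/slow two-pointer trick: Floyd's cycle detection on a singly linked list.
    class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    class CycleCheck {
        static boolean hasCycle(ListNode head) {
            ListNode slow = head, fast = head;
            while (fast != null && fast.next != null) {
                slow = slow.next;      // advances one node per step
                fast = fast.next.next; // advances two nodes per step
                if (slow == fast) return true; // the pointers can only meet if there is a cycle
            }
            return false; // fast fell off the end, so the list is acyclic
        }
    }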
But then again, they could still force you to use another language (as I had to during my interviews), and even though strict syntax isn’t required, it still throws the candidate off.
Anyone who has been working corporate jobs for a good number of years won't have a portfolio or a busy GitHub.
I've been coding for 20 years. Almost all of that work is in some private repository behind multiple levels of locks, protected by a stack of NDAs and trade secret agreements.
I can't really explain why, but to me they don't feel the same at all.
I've always finished exams absurdly quickly. I used to finish 60-minute exams in like 20 minutes and, when the professor didn't allow us to leave, sleep the rest of the time, because I had likely spent the previous night drinking and partying my way through college. I'd usually ace most of them.
Thing is, at those times we were working with freshly acquired knowledge we had been practicing a lot. That's not the case with most leetcode interviews. As a senior/staff engineer, I'm not using two pointers to perform little math tricks on a daily basis; I'm troubleshooting cascading timeouts in distributed systems. I'm not worried about how fast I can iterate over a batch of a couple thousand records in a list; I'm worried about how much latency I'll accrue if I rely on the primary source of truth instead of hitting a cached answer.
Code interviews don't measure experience or competence. I don't even think they measure stress as the article mentions. To me, they just measure how much leetcode you practiced for that interview. Nothing more.
Generally, yes. You need an account in some Brazilian bank to use PIX, so these are likely Brazilian nationals living abroad and accepting payments directly to their Brazilian accounts.
This. Mercado Pago is also available in Brazil, and it requires you to deposit funds into its own account before you can use it.
The thing about PIX that most people don't get, including the Europeans in this thread, is that it integrates with whatever bank you already use, so you can pay straight from your bank's checking account, no external app or account necessary.
You can put your credit cards on Mercado Pago and pay with it... But yeah. Before PIX, places were going with PicPay and Mercado Pago... And PIX just "killed" both overnight lol