I've written tons of perl in my life. I would rather keep writing perl than touch python. Every time I see a nice utility and see that it's written in python - tab closed.
Ah, I'd say the exact opposite: python in general is pretty good, but jupyter sucks because the syntax isn't compatible with regular python, and I avoid it like the plague.
Take the code you find in an average notebook, copy it to a .py text file, run it with python. Does it run? In my experience the answer is usually 'no' because of some extra-ass syntax sugar jupyter has that doesn't exist in python.
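For example (a made-up notebook cell, not from any particular project), the IPython-only bits are what blow up in a plain python run:

    # Valid in a Jupyter/IPython cell, a SyntaxError in a .py file run with python.
    %matplotlib inline          # line magic: not Python syntax
    !pip install pandas         # shell escape: not Python syntax

    import pandas as pd
    df = pd.read_csv("data.csv")
    df.head()                   # bare expression: the notebook renders it, a plain script silently drops it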
Back in the day, here in Sokovia, there was a local competitor to Facebook. They had a great start and everything went perfectly for them, but it quickly turned out that the technical side was really bad: sluggish interface, constant outages, etc.
They tried to rewrite the app from scratch twice, and eventually failed.
So yes, making sure you're moving in the right direction at the beginning of your journey is pretty important.
You don't have to overengineer and stay in your shed until you have a feature-complete product, but at least make sure that you're building on the right foundation.
This is actually the same reason Friendster failed in the face of Facebook, pun intended. Friendster simply could not keep up technically and had to shut down. Later, Facebook actually had a more solid technical footing and could scale quickly.
Yes, that's what I meant, my apologies if I conveyed that incorrectly. Friendster's own scale was untenable for them but Facebook was able to handle their own scale much better.
By all means, don’t spend forever agonizing over the perfect schema and never ship. It really does not matter at small scale anyway; DBs are absurdly fast.
Just understand and accept that you are taking on heavy technical debt that will need to be repaid, and that it’s much more difficult to do once you’ve already vertically scaled several times along the way.
If your selling point is consolidating apps, you absolutely have to get the data model right, else you don't solve the problem. Just because you don't go in and sell it that way doesn't mean it's not important as hell. The very reason it's hard to get apps to interoperate is that each one has its own data model. If they used one giant data model... it wouldn't be a problem.
Different apps have different technical problems that can be an enabler or a source of never-ending technical debt. Being able to add new features easily, rather than being stuck scaling, could make or break this product.
This is really interesting. I am someone who does not see the value in over 60% of tests, even less so if they are written by the person who wrote the code. Huge, fragile technical debt. Many times you see developers mocking or stubbing functionality in such a way that the test just reproduces the production code's behaviour through a different mechanism, so you have at least duplicated the work. The best kind of test is your error logs and your user feedback.
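To illustrate the kind of thing I mean (a made-up example, not from any real codebase), the stub re-implements the production logic, so the test can only ever agree with itself:

    # Hypothetical: pricing is an imaginary module under test. The patched
    # discount() uses the same formula production does, so this test verifies
    # the duplicate, not the real code.
    from unittest import mock
    import pricing

    def test_checkout_applies_discount():
        with mock.patch("pricing.discount", side_effect=lambda total: total * 0.9):
            assert pricing.checkout(100) == 90  # still passes even if the real discount() is broken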
I am also someone who, at interview, is a proponent of TDD - but in reality, after landing the job, I have never used it, nor worked anywhere that actually uses it, even where they say they do.
With that in mind: how the fuck do you manage change requests with TDD?
Riddle me this… given:
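A minimal Python sketch of the kind of spec I mean: a toy greet() pinned to one exact message.

    def greet(name):
        return f"Hello {name}"

    def test_greet():
        assert greet("Brandon") == "Hello Brandon"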
We change the code behavior, and now our test fails! But the failure isn't because we introduced a bug, it's because we changed our mind. The new message is now the correct behavior of greet(), and so the test is now incorrect and needs to be updated. Time spent "fixing" the test is pure overhead.
So you have two test specs, since if I am new to a TDD codebase I won’t know which tests, if any, I should change for a given behaviour change. The old one that expects “Hello Brandon”, and now a new one written by me that expects “Hello Brandon welcome to blah”. Now I do the TDD loop: the new test fails, I write my code, the new test passes, but now the old test fails? What do I do? I should never change my tests to satisfy the needs of my code.
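Concretely, with the same toy greet() as above, the suite now contains both of these specs, and they cannot both pass:

    # Old spec, written before the change request:
    def test_greet_old():
        assert greet("Brandon") == "Hello Brandon"

    # New spec, written by me for the change request:
    def test_greet_new():
        assert greet("Brandon") == "Hello Brandon welcome to blah"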
Do a whole back and forth with whoever comes up with the specs?
Or
Edit my code so that on the second invocation it presents the correct message; assuming tests are run in a consistent and deterministic order, this would work.
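Something as deliberately silly as this sketch, which only "works" as long as the old spec always happens to run first:

    # Deliberately silly sketch: greet() keeps state so consecutive calls
    # satisfy both specs, provided the test order never changes.
    _calls = 0

    def greet(name):
        global _calls
        _calls += 1
        if _calls == 1:
            return f"Hello {name}"
        return f"Hello {name} welcome to blah"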
In this instance it would be pretty clear there is a wrong specification so I could go back and ask someone for clarity.
But what if it is not clear? What if it is some intricate or subtle behaviour change where you can’t use your intuition to figure out which test is correct?
How do you even know if the tests that you have written correctly describe the desired behaviour? Pass them by the decision makers, who are most likely non-technical?
The only thing that is important is how your end users interact with your services and what is a dealbreaker to them. Everything else is a nice-to-have. These cannot be tested in any way other than putting your application in their hands. This MO has served me very well in my career so far.
But what about my academic mental exercise? You mean I learned all these tools and techniques for nothing? How will my manager know if I am a top engineer or not if my code is not impenetrable, with ridiculous concepts applied irrationally?
Different scale, different problem. Those "top engineers" at Google are dealing with data that won't even fit on a hard drive (let alone in memory), so stuff like this may as well be pico-optimizations.
The logic makes sense for them. The problem is everyone else copying Google without understanding their own data. Any domain where real-time performance is important needs to put aside the SWE books and pick up a CPU/GPU architecture reference instead.
Half of the money in IT right now is flowing from investors hearing "blahblah AI blahblah" and seeing dollar bills, which in turn makes half of the movement in IT happen in shitty python code. shrugs
This is the driver for a lot of things. Anime. x264 was made to enable better compression of weeb videos. This tech will allow fan dubs to better represent the anime in the videos.
Just saying that if I were working with this person it wouldn't make me think highly of him, and in my fairly extensive experience I can report that there's a strong correlation between silly commit messages and not great code. I didn't mean to imply that I was qualified or skilled enough to evaluate the JIT compiler for Python.
Just to provide an example, your previous comment could have been written something like this: "Being honest though, the guy's commit messages changed my preconceptions about how reliable and well-designed his code will be."
No knowledge of statistics required, Bayesian or otherwise.
OK, fair enough, your suggestion is totally reasonable. However, I've been referring to people's "priors" in informal conversation for about 25 years, to friends, romantic partners, and family as well as academics and programmers, and I know several other people who do the same. Apart from anything else it's a nice non-technical-sounding word. I'm not a Bayesian statistics zealot (I don't even work in statistics any longer). But I definitely think all educated people should be familiar with the _idea_ of Bayesian inference. I think that goes without saying. I'm no expert on such matters, but clearly our own perception/cognition has some sort of Bayesian flavour to it (you think a mammal dimly perceived on the horizon is probably a dog, etc.).

What I'm saying is -- it sounds like perhaps you also have had some involvement with the academic subject -- I think you don't need to push that word quite so far away from mainstream culture. It's perhaps even a little patronizing to mainstream culture? And I think that if we are ever going to overcome CP Snow's Two Cultures problem, then making little gestures like this in the right direction is actually important, especially from people like you and me.