I strongly agree with the idea of using the tools you already have and know to solve the problem at hand, and releasing it. Then observe where it could use help, then look at specific products solving those problems, and only then can you ask the really good questions that pierce the veil of marketing and the comfort of herd mentality.
Using a tool without knowing the reasonable bounds of the domain, your current requirements, and how the tool (Redis in this case) solves the problem isn't good practice.
Case in point: our team went with Redis as the default choice, using it blindly without fully understanding our requirements or how Redis helps us scale.
2 years later we spent 2 sprints, holding back the release, on: trying to understand RDB vs AOF and why we were seeing massive spikes in resource consumption and performance problems triggering pod evictions; running comparison tests to prove which works better and explain why; running QA tests (regression, performance, load); introducing Postgres for queuing; redoing our code to bypass the sync mechanism for how data flows between Redis and Postgres; updating dependencies; migrating existing customer data (various on-site locations); explaining all of this to team members, managers and their managers, installation technicians, and support engineers; and presenting it at engineering townhalls as a case study in bad decisions.
Well, by your admission, you used Redis for a problem domain it wasn't suited for in the first place. How is this an argument for using in-database queues?
> use it blindly without fully understanding our requirements and how redis helps scale
I'm sorry, I don't get how I could come across as advocating the use of Redis blindly. My point is: if your data flow looks like a queue, then use a queue; don't hack a relational DB into becoming a queue. I think that's a reasonable rule of thumb, not going in blind.
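To make "queue-shaped data flow" concrete, here's a toy sketch of the producer/consumer shape in plain Python, with `collections.deque` as a stand-in for the store; the comments note the Redis commands the same operations would map to. The function names and payloads are made up for illustration.

```python
from collections import deque

# Stand-in for a Redis list used as a FIFO queue.
# In Redis you'd do: LPUSH jobs <payload> to enqueue, BRPOP jobs to dequeue.
jobs = deque()

def enqueue(payload: str) -> None:
    # Redis equivalent: LPUSH jobs payload (push onto the head)
    jobs.appendleft(payload)

def dequeue() -> str:
    # Redis equivalent: BRPOP jobs (blocking pop from the tail)
    return jobs.pop()

enqueue("send-email:42")
enqueue("resize-image:7")

assert dequeue() == "send-email:42"   # FIFO: first in, first out
assert dequeue() == "resize-image:7"
```

If the data flow really does look like this (append at one end, consume at the other, no ad-hoc querying), a queue primitive fits it directly; the point above is that emulating this on a relational table is where the hacking starts.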
We needed queues. We used Redis. That fits the domain.
The problem was that there wasn't a good answer to "How much Redis does your team need to know to put it in production?"
We thought we knew it well enough, we thought we knew what we were getting into, and we thought that with so many others using it for this, we should be good. That makes a difference, clearly.
Also, reading your reply I get the impression that the "sync mechanism between Redis and Postgres" was the bottleneck. I'm wondering if you can add some details around it, and also whether this was something that couldn't have been fixed by fine-tuning the Redis config rather than removing it from your stack entirely.
There were many problems, but at the core of it was Redis writing huge amounts of data to disk very frequently.
We could not reduce the frequency (the product would not allow it), and we couldn't find a way to make the writes reliably fast.
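For anyone hitting the same wall: the "huge, frequent disk writes" behavior is governed by a handful of redis.conf persistence knobs, and the fsync policy is usually where it hurts. A hedged sketch of the relevant directives (the values here are illustrative, not a recommendation):

```
# RDB: snapshot the whole dataset if >= 1000 keys changed in 60s
save 60 1000

# AOF: append every write operation to a log file
appendonly yes
# "everysec" trades up to ~1s of data loss for far fewer fsyncs;
# "always" fsyncs every write (slow), "no" leaves flushing to the OS
appendfsync everysec

# Compact the AOF once it doubles in size past 64mb
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```

Note that RDB snapshots and AOF rewrites fork a child process, and copy-on-write under heavy write load can briefly balloon memory usage, which may line up with the pod-eviction symptom described above.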
I like to believe there exists a way of handling this, but the point is that our team had no time to find out how Redis works internally and gain confidence that the new approach wouldn't bring new surprises.
I like to call it "poor engineering", and there are many forms of poor engineering I've seen. Following the analogy: painting the bottom of the bridge the same color as the terrain, then repainting because there were color differences.
Or spending time and money looking for lighter materials even when the bridge is designed to support far more weight than traditional, time-tested materials allow. Justification: "Maybe cars will get heavier and bigger, so saving 3 pounds is a good pursuit."
This "fatality rate" compares those who successfully summit and return to those who die trying. Worded this way, it's a tricky number to pin down: you could be with a group that never planned to include you in the smaller summit party, and still die on the mountain. But it's far from true that a quarter of the people who visit the mountain die.
Nothing beats the fluidity, ubiquity, and sheer aesthetic quality of pen on paper. It's excellent for diagrams and notes as I'm discussing something on a call, or explaining things (mostly to myself) while consuming information-dense video or audio content.
But I don't always have my notebook on me, and I tend to lose bits of paper easily. So to capture thoughts, I use "email thyself" on my phone.
Every few weeks, I clear out my inbox. Most notes go to junk. Some get cross-referenced. Fewer still become actual files in my git repo.
My git repo is just a versioned bunch of files with up to 2 levels of hierarchy.
This works amazingly well for capturing information: no lock-in, no fear of losing content, and it frequently gets cleaned up and cross-referenced, with version history (which I don't mind losing, as I have dates in the text files themselves).
What I haven't figured out (yet) is a sustainable way to regularly go back through the content I've collected to keep it rolling through my memory, without it becoming an overwhelming amount of work.
Refreshing things regularly is the best, most effective way to find patterns and make better connections. Haven't cracked that code yet.
Neither did I. I was introduced to Eclipse before I got my first job, and today, 9 years later, it still gets the job done. I briefly use IntelliJ IDEA CE only for one static analysis plugin that works better in IDEA; most code editing happens in Eclipse.
I don't love Eclipse, and I'm not one to get too attached to an IDE, but it is an incredible piece of open source software. It works.
Also, it's too much effort to rewire the keyboard shortcuts that have made their way into my muscle memory. So Eclipse it is.
Yeah, but... but... that would just get the job done. Nothing to write about, nothing to whine about, no weird dependencies being pulled in, nothing to hand-wave, nothing to yell or humblebrag about. No theatre.
https://davidseah.com/node/compact-calendar/
I keep a PDF and a screenshot version on my desktop and phone. For date ranges, I use the Excel version.