Does anyone truly understand all the little edge cases with CSS?
I've written tons and tons of CSS, and have for a decade. I don't sit and think about the exact interactions; I just know a couple of things that might work if I'm getting something unexpected.
I don't really see how it's possible to commit all of that to memory, unless I literally start working on an interpreter myself.
I think there can be a different way to think about CSS that can help with that feeling of never understanding it all. Recently I’ve heard people influential in the CSS world describe it as a “suggestion” to the browser. The browser has its own styles, the user might have some custom stylesheet on top of the browser’s version, extensions, etc etc and at some point CSS is really more a long list of “suggestions” about how the site should look.
If you embrace that idea to the fullest, you can create some interesting designs/patterns that can be more resilient. The “downside” is that this way of writing CSS will likely make the pixel-perfect head of the marketing department hate you unless they also write code.
I think it’s also okay to say that some ways of writing CSS just aren’t relevant anymore. A good parallel in my mind is building construction and general carpentry. These days, a quick 2x4 stud wall or a set of insulated concrete forms is fast, cheap, and standardized around the world. However, many craftspeople still exist who will create beautiful joinery for what is ultimately a simple thing, and we can appreciate that art on its own. With CSS, I don’t suspect we will ever need to go back to floats or crazy background images or whatever, but it’s nice that those tools are still there, not only for the sake of back compat, but also as a way to tinker and “craft” something bespoke for a special project or just because you like it. Education will eventually catch up, and grid and flexbox will keep gaining popularity until we decide that it’s too complicated and come up with some new algorithm. That can all be true, though, and you can bring value as a developer without knowing every single aspect of the public API.
But you need to, you know, actually float something in text sometimes. I think to do it with flexbox/grid you need JS that calculates heights and then manually splits the text into boxes with those heights, so essentially you are doing the rendering yourself.
Also is there another way to position boxes side-by-side in an inline context without float?
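For what it's worth, a minimal sketch of the two cases being discussed (class names are made up for illustration):

```css
/* An image that text flows around: still float's home turf.
   Flexbox/grid take the content out of the inline formatting
   context, so they can't reproduce this wrap without measuring
   text in JS. */
.pull-figure {
  float: left;
  width: 12rem;
  margin: 0 1rem 0.5rem 0;
}

/* One float-free way to sit boxes side by side while staying
   inline-level: inline-block. The boxes remain in the inline
   formatting context, but surrounding text will NOT wrap around
   them the way it wraps around a float. */
.side-box {
  display: inline-block;
  vertical-align: top;
  width: 12rem;
}
```

So inline-block answers the side-by-side part of the question, but for true text wrap-around, float (or `shape-outside` on a floated box) is still the tool.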
I suppose if your lens is actually a vertical stack of lenses, each with hundreds of inputs and outputs, then why wouldn't this work? Although I cannot fathom fabricating it. Maybe start by finding the absolute simplest/smallest image classifier you can
Yes, by "lens" I meant a possibly large stack of optical elements, each designed to perform a single computational step. Kind of like layers of an NN, but etched on sheets of plastic.
I do like this article a lot for showing how to do this pattern of slurping data and inserting it into a DB, in the context of Arrow Flight.
The concurrency rules of DuckDB are here [1]. Reads and writes need to happen in the same process, but multiple threads within that process can do both.
This is putting a server in front of a DuckDB instance, so all reads/writes are funneled into that one process. DuckDB takes care of the concurrency within the process via MVCC.
You could do the same thing with an HTTP server or other system, but this shows it with the Flight RPC framework.
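The funnel pattern described above can be sketched in plain Python. This is a stand-in, not the article's code: a hypothetical in-memory store plays the role of DuckDB, and a lock plays the role of its internal MVCC. In the real setup, the front door would be a Flight RPC (or HTTP) server and the store a `duckdb.connect()` handle.

```python
import threading


class SingleProcessStore:
    """All reads/writes go through one object in one process.
    A lock stands in for the coordination DuckDB does internally."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def put(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key):
        with self._lock:
            return self._data.get(key)


# One store instance owned by one process; many client threads,
# which is exactly the shape DuckDB's concurrency model allows.
store = SingleProcessStore()


def client(n):
    store.put(f"row-{n}", n * n)


threads = [threading.Thread(target=client, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(store.get("row-3"))  # → 9
```

The point is architectural: clients never open the database themselves; they talk to the single process that does, whatever the transport in front of it is.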
NOTE: I had an incorrect comment for about 2 minutes that I deleted and restructured here. Sorry if you saw that noise.
They can’t really get rid of Cognito. It’s the authentication/authorization platform for external users. There is no migration path that will seamlessly transfer user names and passwords to other mainstream services like CodeCommit.
If you run internal apps on ECS, Cognito effectively gives you BeyondCorp for free. Being able to put Cognito as a rule in your load balancer is choice. I wouldn't use it for anything else, though.
I just hope it's powerful enough that indies can target it along with the Steam Deck, rather than just hope and pray like they did late in the Switch 1's lifecycle. The number of <30fps indie titles on there was sad.
I wouldn't blame Unity for this. It's perfectly capable of running games efficiently on mobile. The problem is people either don't know how to or don't care to optimize their game's performance.
Sure, they're more limited but Unity actually has very good and accessible profiling tools included. It'd be easy for most developers to get quick wins if they've never optimized their game before.
The Switch was weak when it came out. Decent PCs from that same year can handle most of these games just fine. It's not really the developer's fault when the Switch is the only platform with issues, and they're usually not "pushing the envelope" in any way. The fault here is Nintendo's: they didn't prioritize support for ported games, though admittedly they couldn't really foresee the indie game boom, since it wasn't nearly as big of a deal at the time, especially in Japan.
First-party Nintendo titles are more or less the only games that actually manage to "push the envelope" on the Switch, and that's because they have the resources and experience to do it. Even then, some games end up constrained compared to the original vision, because the hardware can't handle it no matter how much insider knowledge you have about how it works and how to use it right.
Thanks to the success of The Witcher 3, I wouldn't call CDPR an indie dev anymore. I'm sure porting that game wasn't easy, but it had a well-resourced studio behind it. Not all games can even make the tradeoffs that were necessary for it to work, though. Factorio, a 2D game, also made by a pretty competent but still indie developer, was ported to the Switch, but its expansion pack Space Age couldn't be.
Sorry, I only meant that the hardware was weak. As a product, the Switch was an overwhelming success, and I don't really think Nintendo made a mistake by choosing weaker hardware at the time. However, it's 9 years later and things are different now. The new platform should try to be more accommodating for ports IMO and the issues with the original are just backdrop.
Kinda. It had to be downscaled to below 720p to get passable frame rates. Compared to almost any PC with a discrete GPU, or any alternative console release it had, the Switch port was a huge step down in visual quality.
But I don't care about any of those things; they don't make the game more fun for me. It was a great port. Buy a different machine if you want to be inside the matrix.
Most indie devs don't have time and money to optimize. They will make the game primarily for the biggest audience, and then make it somewhat playable for everyone else.
The closer Switch is to the Steam Deck, the more likely both will be targeted.
"Beautiful art style" and "cutting-edge graphics" are nowhere near synonymous. They are orthogonally related at best (and many people would even argue that they are opposing goals).
I get sleep paralysis (less so these days) and I absolutely get the instant REM thing. I'm dreaming before I'm asleep every single night, and I'm often dreaming for 5-10 minutes after I wake up. It's just a stream of audiovisual nonsense that doesn't shut off until I'm properly awake. Always figured it was normal.
I follow some mountain bike YouTube channels, not professional riders but content creators, and literally every one of them has a major injury yearly. Broken bones are just normal.
Seth (Alvo, of Berm Peak) is honestly asking for it half the time with the stuff he does for the camera. Trying to 360 a Brompton on a rainy day in a skate park was a recent ridiculous example.