I think you misunderstand that case
"compiled its scores and statistics by employing people to listen or watch the games, then enter the scores on the computer which transmits the scores to STATS' on-line service, to be sent out to anyone using a SportsTrax pager.[1]"
Notice how they watched the games and gathered the statistics that way. The restrictions are about using the scoreboard and the data displays, and reselling/commercialising that data.
It is, however, legal to watch the game and compile and distribute your own stats, because the facts of the game are in the public domain.
Because of this, many betting and data-collection companies have to pay people to watch the game rather than just scrape the scoreboard (which is the context in which I learnt about this). Ironically, at-venue OCR is a common way to get scoreboard data.
I'm not a lawyer, but my interpretation of the lawsuit based on the Wikipedia article is that game results/scores are public facts and hence not copyrightable data. I don't see how the method by which that public data is collected changes anything materially about that case. Are you saying that inferring the score from the scoreboard is what makes this illegal (and if so, why)? What if they inferred the score using motion/ball tracking instead?
If you're using CV to track the players, the ball, etc. from a broadcast, that's fine; the scoreboard, however, is not so straightforward. FWIW, doing CV on a broadcast for accurate scoring of sports is nigh-on impossible due to edge cases, but human-in-the-loop systems exist. There are also numerous in-venue CV systems which automatically collect game and player information.
I don't think it's possible to be in compliance with every law in every jurisdiction simultaneously. There are over 300,000 federal laws in the US, and apparently no one knows how many laws each of the 50 states has. And that's just one of the world's 195 countries.
Things like camera intrinsics and extrinsics are not fixed. 1000 bytes seems small to me given the amount of processing modern cameras do to create a raw image. I could easily imagine storing more information, like the focus point, or other candidate focus points with weights, as part of the image for easier on-device editing by the user.
As an Akamai user, I already serve all my DASH (video) traffic over HTTP/3. Akamai itself only supports HTTP/1.1 for return to origin; LL-HLS forces me to use HTTP/2.
The problem here is really Akamai only supporting HTTP/1.1 to the origin.
Cloudflare, I think, only supports HTTP/2 to origin.
Does Fastly support QUIC to origin yet? Does CloudFront? I could only find information about it supporting QUIC on the last mile.
Maybe more CDN support will drive web server support.
I maintain auto-generated Rust and Zig bindings for my C libraries (along with Odin, Nim, C3, D and Jai bindings), and it's a night-and-day difference (with Zig being near-perfect and Rust being near-worst-case, at least among the listed languages).
> Do you find zig easier than the ffi interface in Rust?
Yes, but it's mostly cultural.
Rust folks have a nasty habit of trying to "Rust-ify" bindings, and then they proceed to do only the easy 80% of the job. So now you wind up debugging both an incomplete set of bindings with strange abstractions and the wrapped library itself.
Zig folks suck in the header file and deal with the library as-is. That's less pretty, but it's also less complicated.
I've somehow avoided Rust, so I can only comment on what I see in the documentation.
In Zig, you can just import a C header. And as long as you have configured the source location in your `build.zig` file, off you go. Zig automatically generates bindings for you. Import the header and start coding.
This is all thanks to Zig's `translate-c` utility that is used under the hood.
Rust, by contrast, requires a lot more steps, including hand-writing the function bindings.
You only hand-write function bindings in simple or well-constrained cases.
In general, the expectation is that you will use bindgen [0].
It's a very easy process:
1. Create a `build.rs` file in your Rust project, which defines pre-build actions. Use it to call bindgen on whatever headers you want to import, and optionally to define library linkage. This file is very simple and mainly boilerplate. [1]
2. Import your bindgen-generated Rust module... just use it. [2]
You can also skip step 1: bindgen is also a CLI tool, so if your C target is stable, you can just run bindgen once to generate the Rust interface module and move that right into your crate.
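To make steps 1 and 2 concrete, here is a minimal sketch of that pattern. It assumes a hypothetical `wrapper.h` that includes the C headers you care about and a system library called `mylib` (both placeholder names), with `bindgen` listed under `[build-dependencies]`:

```rust
// build.rs -- runs before compilation and generates bindings into OUT_DIR.
fn main() {
    // Link against the C library ("mylib" is a placeholder name).
    println!("cargo:rustc-link-lib=mylib");
    // Re-run this script whenever the wrapper header changes.
    println!("cargo:rerun-if-changed=wrapper.h");

    let bindings = bindgen::Builder::default()
        .header("wrapper.h")
        .generate()
        .expect("unable to generate bindings");

    let out_path = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_path.join("bindings.rs"))
        .expect("couldn't write bindings");
}
```

Step 2 is then just pulling the generated module into your crate:

```rust
// src/lib.rs -- include the generated bindings and use them as a normal module.
#![allow(non_upper_case_globals, non_camel_case_types, non_snake_case)]
include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
```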
Exactly.
GPUs have become too profitable and too strategically important not to see several deep-pocketed existing technology companies invest more and try to acquire market share.
There is a mini-moat here with CUDA and existing work, but the start of commodification must be on the <10-year horizon.
I had to convert a bitmask to SVG and wanted to skip the intermediary step, so I looked around for papers about segmentation models outputting SVG and found this one: https://arxiv.org/abs/2311.05276
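For context, the naive intermediary step looks roughly like this. A minimal sketch only (no marching squares or path simplification, so nothing like a real vectorizer): it emits one `<rect>` per horizontal run of set pixels, and `mask_to_svg` plus the tiny example mask are purely illustrative assumptions:

```rust
// Naive bitmask -> SVG: one <rect> per horizontal run of set pixels.
// A real vectorizer (marching squares + path simplification) produces far
// smaller and smoother output; this only illustrates the intermediary step.
fn mask_to_svg(mask: &[Vec<bool>]) -> String {
    let height = mask.len();
    let width = mask.first().map_or(0, |row| row.len());
    let mut svg = format!(
        "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"{width}\" height=\"{height}\">"
    );
    for (y, row) in mask.iter().enumerate() {
        let mut x = 0;
        while x < row.len() {
            if row[x] {
                let start = x;
                while x < row.len() && row[x] {
                    x += 1;
                }
                svg.push_str(&format!(
                    "<rect x=\"{start}\" y=\"{y}\" width=\"{}\" height=\"1\"/>",
                    x - start
                ));
            } else {
                x += 1;
            }
        }
    }
    svg.push_str("</svg>");
    svg
}

fn main() {
    // Tiny 3x4 example mask standing in for a real segmentation output.
    let mask = vec![
        vec![false, true, true, false],
        vec![true, true, true, true],
        vec![false, false, true, false],
    ];
    println!("{}", mask_to_svg(&mask));
}
```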
Frequently more tricky, because you need to measure the moisture for each plant (maintenance is difficult without that), but this is generally the most efficient low-cost method in very arid regions from what I have seen (my dad is a professor in the field, so my exposure is years of unpaid labour as a child and student).
You don't need anything fancy for drip: a small hole in the pipe and a timer on your pump is generally enough. If you really want to go fancy, you can isolate the system and use moisture sensors, which are cheap.
Moisture sensors that measure conductivity are pretty useless unless frequently recalibrated, but time-domain reflectometry sensors are much better and more accurate.