Hacker News | new | past | comments | ask | show | jobs | submit | salamo's comments

Really happy to see additional solutions for on-device ML.

That said, I probably wouldn't use this unless mine was one of the specific use cases supported[0]. I have no idea how hard it would be to add a new model supporting arbitrary inputs and outputs.

For running inference cross-device I have used ONNX, which is low-level enough to support whatever weights I need. For a good number of tasks you can also use transformers.js, which wraps ONNX Runtime and handles things like decoding (unless you really enjoy implementing beam search on your own). I believe an equivalent link to the above would be [1], which is just much more comprehensive.

[0] https://ai.google.dev/edge/mediapipe/solutions/guide

[1] https://github.com/huggingface/transformers.js-examples


I mainly blog for myself in the future, but in a slightly different flavor than the author mentions. If there's a complicated ML concept that I'd really like to understand, explaining it to an audience (even if that audience is myself in the future) is a great way to understand it.

That goes double for visuals, which is another reason I use a custom static site. Can't run JS on Medium. I even built out a client-side search and learned a good amount from that too.


I live in the South Bay and would be willing to "host". But there's no way to announce a meetup or even reach out to some people.


If you want, I can display something similar to this at your location: https://meet.hn/city/43.6534817,-79.3839347/Old-Toronto

Hit me up if interested :)


Ok, I emailed you.


I believe that observation is borne out in the statistics too, but traditional chess training usually centers on finding the best, hard-to-find move in a position rather than avoiding blunders. I think it would be great if there were more blunder-avoidance training: normal positions where a blunder looks attractive and the player needs to avoid it.


The issue is that humans and computers don't evaluate board positions in the same way. A computer will analyze every possible move, and then every possible response to each of those moves, etc. Human grandmasters will typically only analyze a handful of candidate moves, and a few possible replies to those moves. This means human search is much narrower and shallower.

If you want a computer that plays like a human, you will probably need to imitate the way that a human thinks about the game. This means for example thinking about the interactions between pieces and the flow of the game rather than stateless evaluations.


Grandparent was suggesting the hybrid approach where you select a handful of good candidate positions and then explore them (DFS) as far as possible. Which is pretty much how humans work.
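That hybrid can be sketched as a negamax search with a human-style candidate cutoff. The function names and the toy game tree below are made up for illustration; no real engine works off a dict like this.

```python
# Negamax with a candidate-move shortlist: only the k successors that
# look most promising are explored, instead of every legal move.
# `children(p)` returns successor positions; `evaluate(p)` scores a
# position from the side-to-move's perspective.
def search(position, depth, k, children, evaluate):
    succ = children(position)
    if depth == 0 or not succ:
        return evaluate(position)
    # Narrow, like a human: keep the k successors where the opponent
    # (the side to move next) stands worst.
    shortlist = sorted(succ, key=evaluate)[:k]
    # Deep: recurse on each shortlisted line (negamax sign flip).
    return max(-search(s, depth - 1, k, children, evaluate) for s in shortlist)

# Toy two-ply game tree with made-up evaluations.
tree = {"root": ["a", "b", "c"], "a": [], "b": [], "c": []}
evals = {"root": 0, "a": -5, "b": 3, "c": 10}
best = search("root", 2, 2, tree.get, evals.get)
print(best)  # 5: the line through "a" (worth -5 for the opponent) wins out
```

The `k` cutoff is what makes the search narrow and shallow in the human sense; a full engine would instead expand every legal move and rely on pruning.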


I built something like this. It works as long as you're not too high-rated: chessmate.ai. Once players get higher rated it is more difficult to predict their moves because you need to model their search process, not just their intuitive move choice. It's also possible to train on one player's games only so that it is more personalized.

It uses a similar approach to Maia but with a different neural network, so it has somewhat better move-matching performance. On top of that, it runs an expectation-maximization algorithm so that the bot will try to exploit your mistakes.


Really nice work! The tabs other than "play" don't seem to be working, but I was able to try some novelty openings and it certainly felt like it was responding with human moves. It would be great to have the ability to go back/forth moves to try out different variations.

I'm curious how you combined Stockfish with your own model - but no worries if you're keeping the secret sauce a secret. All the best to you in building out this app!


I'm happy you enjoyed it! There are definitely a few rough edges, yes.

Since the whole thing is executed in the browser (including the model) there aren't a ton of secrets for me to keep. Essentially it is expectation maximization: the bot tries to find the move with the highest value. What is "value"? Essentially, it is the dot product between the probability distribution coming out of the model and the centipawn evaluations from Stockfish.

In other words if the model thinks you will blunder with high probability, it will try to steer you towards making that mistake.
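A tiny sketch of that computation as I read the description above (the moves, probabilities, and centipawn numbers are invented for illustration, not taken from the site's actual code):

```python
# "Value" of a candidate bot move = dot product of the model's
# predicted human-reply distribution with Stockfish's centipawn
# evaluations of those replies (from the bot's point of view).
def expected_value(reply_probs, reply_evals):
    return sum(p * e for p, e in zip(reply_probs, reply_evals))

# Hypothetical candidates: a solid move, and a trap that is worse if
# the human finds the refutation, but that they probably won't.
candidates = {
    "Nf3": ([0.7, 0.2, 0.1], [10, 30, 250]),   # human rarely errs here
    "Qh5": ([0.5, 0.5], [-50, 400]),           # 50% chance of a blunder
}
best = max(candidates, key=lambda m: expected_value(*candidates[m]))
print(best)  # Qh5: the trap's expected payoff (175) beats Nf3's (38)
```

This is why the bot steers toward likely blunders: a move that is objectively dubious can still maximize the expectation once the opponent's predicted mistakes are priced in.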


Thank you for the explanation! I completely understand the rough edges, I have some rough ideas out there myself. Would it be alright if I add a link to your site to our chess club's list of online resources?

I can also make a note of it privately and check back in with you in the future. I found it pretty remarkable that it played a human-like response to some niche openings - I actually ended up checking against Stockfish and it played different moves, which is pretty neat.


Sure, go ahead!


Hello! I built a Chess AI also named Chessmate in high school (2005) that made it into a Minecraft mod 10 years later: http://petrustheron.com/posts/chessmate.html

Java source code here: https://github.com/theronic/chessmate


I personally prefer a UI most of the time. It's higher fidelity and cuts through the inherent ambiguity of language.

The exception for me would be situations where I can't use my hands, like driving. I don't want to have to look at a screen. If a voice agent could replicate the functionality of CarPlay, that would be really useful.


We’re working on getting Martin to work better in the car! You’ll be able to either call him over the phone or use him via CarPlay in the next couple months.


I found Cosine Club recently and it's pretty cool. They have a model that just finds the ~50 tracks most similar to the one you're searching for. They use a model from this paper [0], trained on four types of similarity: 1) two samples from the same song, 2) two samples from the same album, 3) two samples from the same artist, 4) two samples from the same record label.

I get pretty good recommendations, and they're different from Spotify which is nice.

[0] https://repositori.upf.edu/bitstream/handle/10230/54158/Alon...
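The "most similar tracks" lookup presumably boils down to cosine similarity over learned embeddings. A minimal sketch with tiny made-up vectors (not their actual embedding model):

```python
import math

# Cosine similarity between two embedding vectors.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical 3-d track embeddings; a real system would use the
# high-dimensional representations the model learned.
tracks = {
    "track_a": [0.9, 0.1, 0.0],
    "track_b": [0.8, 0.2, 0.1],
    "track_c": [0.0, 1.0, 0.3],
}
query = [1.0, 0.0, 0.0]
top = sorted(tracks, key=lambda t: cosine(query, tracks[t]), reverse=True)[:2]
print(top)  # ['track_a', 'track_b']
```

At catalog scale you would swap the linear scan for an approximate nearest-neighbor index, but the ranking criterion is the same.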


CATL has a 650 mile battery, which would definitely put Tesla on the ropes if e.g. Toyota could use it. Range and cost are probably the main barriers to electric cars.

The stove linked in the article is also bonkers. It would have been more impressive with a thermometer readout before they turned it on though. I checked other videos [0] and it seems to be real.

[0] https://www.youtube.com/watch?v=YdawGen0QPc


Is there any info on how to diagnose this problem? Having just put together a computer with the 14900KF, I really don't want to swap it out if not necessary.


There is no reliable way to diagnose this issue on 14th gen; the chip slowly degrades over time and you start getting more and more crashes (usually GPU driver crashes under Windows). I believe the easy way might be to run decompression stress tests, if I remember correctly from Wendell's (Level1Techs) video.

I highly recommend going into your motherboard BIOS right now and manually setting your configuration to the current Intel recommendation, to prevent the chip from degrading to the point where you'd need to RMA it. I have a 14900K and it took about 2.5 months before it started going south, and it was getting worse by the DAY for me. Intel has closed my RMA ticket since changing the BIOS settings to values far below the originals made the system stable again, so I guess I have a 14900K that isn't a high-end chip anymore.

Below are the configs intel provided to me on my RMA ticket that have made my clearly degraded chip stable again:

- CEP (Current Excursion Protection) > Enable
- eTVB (Enhanced Thermal Velocity Boost) > Enable
- TVB (Thermal Velocity Boost) > Enable
- TVB Voltage Optimization > Enable
- ICCMAX Unlimited bit > Disable
- TjMAX Offset > 0
- C-States (including C1E) > Enable
- ICCMAX > 249A
- ICCMAX_APP > 200A
- Power Limit 1 (PL1) > 125W
- Power Limit 2 (PL2) > 188W


OCCT burn-in test with AVX and XMP disabled.

Tbh, XMP is probably the cause of most modern crashes on gaming rigs; it does not guarantee stability. After finding a stable CPU frequency, enable XMP and roll back the memory frequency until you have no errors in OCCT. The whole thing can be done in 20 minutes and your machine will have 24/7/365 uptime.


This is good advice for overclocking, but how does it help with the 13th/14th Gen issue? The issue is not due to clocks, or at least doesn't appear to be.


Running a full memtest overnight and a day of Prime95 with validation is the traditional way of sussing out instability.


It's also a terrible stability test these days, for the same reasons Wendell talks about with Cinebench in his video with Ian (and Ian agrees). It doesn't exercise something like 90% of the chip; it's purely a cache/AVX benchmark. You can have a completely unstable frontend and it'll work fine, because Prime95 fits in icache and doesn't need the decoder, and it's just vector op, vector op, vector op forever.

You can have a system that's 24/7 Prime95-stable yet crashes as soon as you exit out, because the test covers so little of the chip. That's actually not uncommon, due to the changes in frequency state that happen once the chip idles down... and it's been this way for more than a decade; SpeedStep used to be one of the things overclockers would turn off because it posed so many problems versus a stable, constant-frequency load.


Prime95 also completely ignores the high-clock domain, by the way, so a system can be completely Prime95-stable yet fail on a routine task that boosts up! So it's technically not even a full test of core stability either.

