Hacker News

I don't know much about how the feature file distribution works, but in the event of a failure to read a new file, wouldn't logging the failure and sticking with the previous version of the file be preferable?
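A minimal sketch of that "keep the last good file" idea in Rust. Everything here is hypothetical (the struct, function names, and the trivial parse rule are illustrative, not the actual loader): the point is that a failed parse logs and returns the current config instead of propagating a panic.

```rust
// Hypothetical sketch: a loader that keeps serving the last known-good
// config when a newly distributed feature file fails to parse.

#[derive(Clone, Debug)]
struct FeatureConfig {
    features: Vec<String>,
}

// Stand-in parser; the real format and validation rules would differ.
fn parse_feature_file(raw: &str) -> Result<FeatureConfig, String> {
    if raw.trim().is_empty() {
        return Err("empty feature file".to_string());
    }
    Ok(FeatureConfig {
        features: raw.lines().map(|l| l.to_string()).collect(),
    })
}

/// Try to apply a new file; on failure, log and keep the current config.
fn reload(current: FeatureConfig, raw: &str) -> FeatureConfig {
    match parse_feature_file(raw) {
        Ok(new_cfg) => new_cfg,
        Err(e) => {
            eprintln!("feature file rejected ({e}); keeping previous version");
            current
        }
    }
}

fn main() {
    let good = parse_feature_file("f1\nf2").unwrap();
    // A broken update leaves the old config in place instead of crashing.
    let after = reload(good.clone(), "");
    assert_eq!(after.features, good.features);
    println!("serving {} features", after.features.len());
}
```

The key design choice is that `reload` returns a `FeatureConfig` unconditionally rather than a `Result`, so a bad update can never leave the proxy without a usable config.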




That's exactly the point (i.e. just prior to distribution) where a simple sanity check should have been run and the config replacement/update pipeline halted on failure. When they introduced the 200-entry limit in the memory-optimised feature loader, it should have been a no-brainer to insert that sanity check into the config production pipeline.

Or even truncating the feature list to the limit and alerting through logs that there is likely performance degradation in their Bot Management.
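The truncate-and-alert variant could look roughly like this. The 200-entry figure matches the limit discussed in the thread; the function name and log wording are assumptions for illustration.

```rust
// Hypothetical pre-distribution sanity check: if the generated feature
// list exceeds the loader's hard limit, truncate it and warn loudly
// rather than ship a file the edge proxies cannot load.

const FEATURE_LIMIT: usize = 200;

fn sanitize(mut features: Vec<String>) -> Vec<String> {
    if features.len() > FEATURE_LIMIT {
        eprintln!(
            "feature count {} exceeds limit {}; truncating (expect degraded Bot Management)",
            features.len(),
            FEATURE_LIMIT
        );
        features.truncate(FEATURE_LIMIT);
    }
    features
}

fn main() {
    let oversized: Vec<String> = (0..250).map(|i| format!("feat_{i}")).collect();
    let shipped = sanitize(oversized);
    assert_eq!(shipped.len(), FEATURE_LIMIT);
    println!("shipping {} features", shipped.len());
}
```

Whether truncation or a hard pipeline stop is the right call is debatable (truncation silently changes model inputs), but either is strictly better than distributing a file the consumer is known to reject.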

I'm really confused how so many people find it acceptable to bring down your entire reverse proxy because the feature set for the ML model in one of your components was longer than expected.



