AStrangeMorrow's comments | Hacker News

Yeah, for me the three main issues are:

- Overly defensive programming. In Python that means try/except everywhere without catching specific exceptions, hasattr checks, and, when replacing an approach with a new one, adding a whole “backward compatibility” layer in case we need to keep the old approach, etc. That leads to obfuscated errors, silent failures, and bad values triggering old code paths.

- Plainly editing things it is not supposed to. That is, you ask “change A into B” and it does “ok, I did B, but I also removed C and D because they had nothing to do with A” or “I also changed C into E, which doesn’t cover all the edge cases, but I liked it better”.

- Re-implementing logic instead of reusing it.
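The overly broad try/except pattern is easy to illustrate. A minimal hypothetical sketch (the `load_config` functions and the file format are made up for illustration):

```python
import json

# Anti-pattern: a blanket except hides the real error and fails silently.
def load_config_defensive(path):
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        # Any bug (typo, bad path, corrupt JSON) is swallowed here.
        return {}

# Better: catch only the exception you actually expect, and let
# genuine bugs surface with a real traceback.
def load_config(path):
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}
```

The defensive version "works" either way, which is exactly the problem: a corrupt file and a missing file look identical to the caller.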

Oh, the defensive programming! That thing must have been trained on job interview code, or some enterprise stuff. Heaps of "improvements" and "corrections" that retry, stub, and simply avoid correctly doing stuff for no reason (fix deserialization bug that thing just caused? no, why! Let's instead assume API and docs are wrong and stuff is failing silently so let's retry all api calls N times, then insert some insane "default value in case API is unreachable" then run it, corrupt local db by writing that default everywhere, run some brain damaged test that checks that all values are present (they are, clonk just nuked them), claim extraordinary success and commit it with a message containing emoji medals and rockets).

And these "oh, I understand, C is completely incorrect" then proceeding to completely sabotage and invalidate everything.

Or assembling some nuclear Python script like MacGyver and running it, to nuke even the repo itself if possible.

Best AAA comedy text adventure. Poor people who are forced to "work" like that. But the cleanup work will be glorious. If the companies survive that long.


It seems like it is fiction, from what I could find. I was doubting it at times, but given how old it is, it feels like some of the tech wasn’t quite there yet back then.

https://news.ycombinator.com/item?id=37723862


It's fiction; there are breadcrumbs at the top that list it in the "Fiction" category. qntm is good at plausible sci-fi, e.g. https://qntm.org/mmacevedo

It reads like a creepypasta, which is cool, but definitely not convincing.

Yeah love her stuff. And honestly the voice description is part of the music flow at this point.

I feel like that’s kinda how people imagined navigating whatever cyber domain when the first big cyberpunk novels came out


Obviously much simpler Neural Nets, but we did have some models in my domain whose role was to speed up design evaluation.

Eg you want to find a really good design. Designs are fairly easy to generate but expensive to evaluate and score: we can quickly generate millions of designs, but evaluating one can take 100ms-1s, with simulations that are not easy to parallelize on a GPU. We ended up training models that try to predict said score. They don’t predict it perfectly, but you can be 99% sure that a design's actual score is within a certain distance of the predicted one.

So if normally you want to get the 10 best designs out of your 1 million, we can now first have the model pick the best 1000, and you can be reasonably certain your top 10 is a subset of those 1000. So you only need to run your simulation on those 1000.
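The filtering idea can be sketched like this. Both `expensive_score` (standing in for the slow simulation) and `surrogate_score` (standing in for the trained predictor) are toy assumptions, as are the problem sizes; the point is the shape of the pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_score(design):
    # Toy stand-in for a 100ms-1s simulation run; lower is better here.
    return float(np.sum(design ** 2))

def surrogate_score(design):
    # Toy stand-in for the trained model: fast but imperfect.
    return float(np.sum(design ** 2)) + rng.normal(0.0, 0.1)

designs = rng.normal(size=(20_000, 8))

# 1. Cheap pass: score every candidate with the surrogate model.
predicted = np.array([surrogate_score(d) for d in designs])

# 2. Keep a generous shortlist so the true top 10 almost surely survives
#    the surrogate's prediction error.
shortlist = np.argsort(predicted)[:1000]

# 3. Expensive pass: run the real evaluation only on the shortlist.
true_on_shortlist = {int(i): expensive_score(designs[i]) for i in shortlist}
top10 = sorted(true_on_shortlist, key=true_on_shortlist.get)[:10]
```

The shortlist size is the knob: it trades expensive evaluations against the probability that a true top-10 design slips past the surrogate.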


Heuristic branch-and-bound

Yes, Levenshtein in that case gives too big an exploration space. A keyboard-aware edit distance would probably work better: delete and swap still cost 1, but replace and add should be within, say, one key at most.

My guess is also that not all typos are equally likely. There should be a stricter edit metric for 1-keystroke-away edits (that is: delete, swap, or add/replace with a key at most one key away) instead of pure Levenshtein. Like, "Fqcebook" is a more likely typo than "Fjcebook", but they are both edit-1.

Someone should make a qwertyshtein() function.

If I understand correctly from the paper, what qualifies as an edit distance of 1 is pure Levenshtein distance-1, right?

Just curious, because while the edit-1 space can be fairly big, I’d assume all edits have very different probabilities, so the squatted domains probably skew toward higher-probability edits. By that I mean mostly keyboard typos: e.g. on a phone, the “cwt” typo is more likely than “cpt” for “cat” because of the a/w keyboard proximity. I wonder what the squatting rate is when you filter for edits within one keystroke, for example (this only really changes the add and replace types of edits, not delete or swap).
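The qwertyshtein() idea mentioned upthread could be sketched as a check for whether a string is one plausible keystroke away: a deletion, a swap of neighboring characters, a repeated key, or an insertion/substitution using a physically adjacent key. The adjacency map below is a rough lowercase-QWERTY approximation and all names are illustrative:

```python
# Rough QWERTY layout; adjacency = keys one row/column step away.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def _build_adjacency():
    adj = {}
    for r, row in enumerate(QWERTY_ROWS):
        for c, ch in enumerate(row):
            near = set()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < len(QWERTY_ROWS) and 0 <= cc < len(QWERTY_ROWS[rr]):
                        near.add(QWERTY_ROWS[rr][cc])
            adj[ch] = near
    return adj

ADJ = _build_adjacency()

def is_one_keystroke_typo(typo, word):
    """True if `typo` is one plausible keystroke away from `word`:
    a deletion, a swap of neighboring characters, a repeated key,
    or an insertion/substitution using a physically adjacent key."""
    if typo == word:
        return False
    if len(typo) == len(word) - 1:  # deletion: any char may drop out
        return any(word[:i] + word[i + 1:] == typo for i in range(len(word)))
    if len(typo) == len(word) + 1:  # insertion near the surrounding keys
        for i in range(len(typo)):
            if typo[:i] + typo[i + 1:] == word:
                extra = typo[i]
                around = word[max(i - 1, 0):i + 1]
                if any(extra == ch or extra in ADJ.get(ch, set()) for ch in around):
                    return True
        return False
    if len(typo) == len(word):
        diffs = [i for i in range(len(word)) if typo[i] != word[i]]
        if len(diffs) == 2 and diffs[1] == diffs[0] + 1:  # adjacent swap
            i = diffs[0]
            return typo[i] == word[i + 1] and typo[i + 1] == word[i]
        if len(diffs) == 1:  # substitution by an adjacent key
            i = diffs[0]
            return typo[i] in ADJ.get(word[i], set())
    return False
```

With something like this, one could filter the pure Levenshtein edit-1 candidates down to the keystroke-plausible subset before measuring squatting rates.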


Yeah, same as a French speaker first living in the US: I sometimes have to stop myself from calling things “just fine”, “will do”, or “not bad”. These are still used in American English, but I tend to use them in cases where people normally use a more positive/stronger version.

Like at a grocery store: “Is that enough?” “That will do, yes” -> “Yes, that’s perfect”.


Same for a German.

Scales of goodness of expressions are shifted relative to English: "good" (gut) to a German means "it totally fulfills all my needs and expectations, so it is perfect for my purpose". "very good" (sehr gut) means "it exceeds all my expectations" and to a German already sounds like total hyperbole. Anything like "delightful" or "excellent" to a German sounds either totally sleazy or sarcastic.

When something is not perfect but adequate and we are happy with it, we would say something like "not bad", "it's fine", or "you can leave it like that", which to the English-speaking world has totally different connotations and can lead to rather interesting misunderstandings.

And especially "not bad" ("nicht schlecht") can be confusing in that it is sometimes something rather positive: in German, said in the right tone of voice, it can mean "this is surprisingly good".


I really enjoy writing some of the code. But some is a pain. It's never fun when the HQ team asks for API changes for the fifth time this month. Or, for that matter, writing the 2000 lines of input and output data validation in the first place. Or refactoring that ugly dictionary passed all over the place into a proper class/dataclass. Handling config changes. Lots of that plumbing work.

Some tasks I do enjoy coding. Once in the flow it can be quite relaxing.

But mostly I enjoy the problem-solving part: coming up with the right algorithm, a nice architecture, the proper set of metrics to analyze, etc.


Yeah, at this point I basically have to dictate all implementation details: do this, but do it this specific way; handle xyz edge cases by doing that; plug the thing in here using that API. That basically expands 10 lines of instructions into 100-200 lines of code.

However if I just say “I have this goal, implement a solution”, chances are that unless it is a very common task, it will come up with a subpar/incomplete implementation.

What’s funny to me is that complexity has inverted for some tasks: it can ace a 1000-line ML model for a generic task I give it, yet will completely fail to come up with a proper solution for a 2D geometry problem that mostly involves high-school-level maths and can be solved in 100 lines.

