
Strongly agree that AI isn't magic, but I think you're making too broad a statement here. AI can certainly be superhuman in some areas, e.g. chess and Go. Whether or not human-level is the maximum depends on how the training data is created. If you have to rely on human experts to produce the labels (this is a dog, this is a cat, etc.), then it's going to be hard to design a system that beats human performance. But for chess and Go, you can get around that by using self-play.

In the case of matching writings to their authors, you can get around it by having a bunch of people each produce several pieces of writing. Even if no human expert could tell whether two pieces were from the same person, you can still give the network the correct labels during training, because the ground truth of who wrote what is known.
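To make that concrete, here's a rough sketch of how the labels could be built. Plain Python; the corpus, the function name, and the one-negative-per-positive ratio are all made up for illustration. The point is just that the labels come from known authorship rather than from human judgment:

    import random
    from itertools import combinations

    # Hypothetical corpus: author id -> list of text samples,
    # collected by having many people each write several pieces.
    corpus = {
        "author_a": ["sample text 1", "sample text 2", "sample text 3"],
        "author_b": ["another piece", "yet another piece"],
        "author_c": ["more writing", "even more writing"],
    }

    def make_pairs(corpus, negatives_per_positive=1, seed=0):
        """Build (text1, text2, label) pairs: label 1 if same author, else 0.
        Labels come from known ground truth, not from human judges."""
        rng = random.Random(seed)
        authors = list(corpus)
        pairs = []
        for author, texts in corpus.items():
            # Positive pairs: every pair of texts by the same author.
            for t1, t2 in combinations(texts, 2):
                pairs.append((t1, t2, 1))
                # Negative pairs: pair one of those texts with someone else's.
                for _ in range(negatives_per_positive):
                    other = rng.choice([a for a in authors if a != author])
                    pairs.append((t1, rng.choice(corpus[other]), 0))
        rng.shuffle(pairs)
        return pairs

    pairs = make_pairs(corpus)
    # These labeled pairs can then train any pairwise model,
    # e.g. a Siamese network with a contrastive loss.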

Of course, a model can only work with information that actually exists. My gut says that writing leaks plenty of information about identity, so it should at least be possible to identify the author of large chunks of text (over 1000 words, say). But I could be wrong about that.


