Hacker News

Yes, I think that in the end this would be true. AGI itself would be transformational, partly because it would be easily linked with computers' "superhuman" powers of memory, throughput, etc.

But on the way to AGI - which seems to me a long way off, despite some of the interesting arguments made here - it's not really true that "being very humanlike" is a big commercial advantage. As long as we have (say) less than the intelligence of a 2-year-old child, or a chimp, we'd rather have computers doing very un-human things.



"Human-level intelligence/performance" is a term that is often used in many ML tasks to indicate a top-level performance by a very performant human, to compare the performance to, performance at that specific task which is being discussed. Perhaps not world's best human (but sometimes, like in AlphaStar), but at least someone competent at the task (for example in https://github.com/syhw/wer_are_we). It is just a term to use, to gauge and compare how well a network operates.

You could of course find cases where this term takes on another meaning (or none at all), since English is a flexible language, but in contexts where we are obviously discussing AI/ML, let's just use the de facto term and make everyone's lives easier.



