This seems to me a crucial point not often discussed by the AI Risk folks such as Bostrom and Yudkowsky. Whether it’s a bug or a feature of the AI Risk industry is harder to know: is it a thorn in the side of their project, or a boon for (potentially endless) fundraising? Only time will tell, or it won’t.
This is from Superintelligence cannot be contained: Lessons from Computability Theory (Alfonseca et al. 2016):
Another lesson from computability theory is the following: we may not even know when superintelligent machines have arrived, as deciding whether a machine exhibits intelligence is in the same realm of problems as the containment problem. This is a consequence of Rice’s theorem, which states that any non-trivial property (e.g. “harm humans” or “display superintelligence”) of a Turing machine is undecidable.
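The standard proof of Rice’s theorem is a reduction to the halting problem, and it can be sketched in a few lines of code. The names here (`decides_harm`, `build_wrapper`, `halts`) are my own illustrative inventions, not anything from the paper: assume someone hands us a hypothetical total decider `decides_harm` for the non-trivial property “harms humans”; we could then use it to decide halting, which Turing proved impossible.

```python
# Sketch of the Rice's-theorem reduction. All names here are
# hypothetical, chosen for illustration only.

def build_wrapper(machine, input_data, harmful_program):
    """Build a program that first runs `machine` on `input_data`,
    then behaves like `harmful_program`.

    If `machine` halts on `input_data`, the wrapper has the
    'harmful' property; if it loops forever, the wrapper computes
    nothing at all (and so, assuming the empty behavior is benign,
    lacks the property).
    """
    def wrapper(x):
        machine(input_data)        # may never return
        return harmful_program(x)  # reached only if machine halted
    return wrapper

def halts(machine, input_data, decides_harm):
    """If a total `decides_harm` existed, this function would decide
    the halting problem -- contradiction. Hence no such decider
    can exist, for 'harm humans' or any other non-trivial property."""
    return decides_harm(build_wrapper(machine, input_data, None))
```

The point of the contradiction: `halts` never needs to run the wrapper, only ask `decides_harm` about it, so the impossibility of deciding halting transfers directly to the impossibility of deciding “harms humans”, “displays superintelligence”, or any other non-trivial behavioral property.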
I have a short article coming out soon in an IEEE publication, which builds on this insight.