The potential for artificial intelligence (AI) to surpass human intelligence has been debated for many years, and advancements like ChatGPT have only intensified these discussions. In 2021, researchers argued that it is almost certainly impossible to control a super-intelligent AI: doing so would require building and analysing a comprehensible simulation of its behaviour, a task deemed infeasible given the very complexity that makes the AI super-intelligent.
The concern extends beyond the inability to set 'do no harm' rules for AI. As the researchers highlighted, conventional robot ethics assumes systems with narrow, predictable behaviour; it does not map onto a multi-faceted superintelligence capable of mobilising diverse resources to pursue objectives humans may not even comprehend.
Drawing on Alan Turing's 1936 'halting problem', the team reasoned that while we can determine for some specific programs whether they will reach a conclusion or loop endlessly, no general algorithm can decide this for every possible program. Hence a super-intelligent AI, capable in principle of holding every possible computer program in its memory, becomes uncontainable: no containment procedure could verify in advance that its behaviour would remain harmless.
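The asymmetry at the heart of this argument can be illustrated with a minimal sketch. The function below (a hypothetical helper, not from the cited study) runs a program for a bounded number of steps: it can positively confirm that a program halts, but when the budget runs out it can only report "unknown", never a definitive "loops forever".

```python
# Sketch of Turing's asymmetry: halting can be confirmed by running a
# program, but a finite checker can never prove non-halting in general.

def halts_within(program, steps):
    """Run `program` (a generator function) for at most `steps` steps.
    Returns True if it halted, or None if the budget ran out (unknown)."""
    it = program()
    for _ in range(steps):
        try:
            next(it)  # advance the program by one step
        except StopIteration:
            return True  # the program finished: halting is confirmed
    return None  # budget exhausted: cannot distinguish a loop from a slow halt

def terminating():
    """A program that counts down and stops."""
    n = 10
    while n > 0:
        n -= 1
        yield

def looping():
    """A program that never finishes."""
    while True:
        yield

print(halts_within(terminating, 1000))  # True: halting confirmed by observation
print(halts_within(looping, 1000))      # None: undecided, not a proven "no"
```

No matter how large the step budget, the checker can only ever say "halted" or "still unknown"; Turing's proof shows this gap cannot be closed by any cleverer algorithm.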
Alternatives, such as teaching AI ethics or restricting its capabilities, were also rejected. Limiting its reach might undermine its purpose – to solve problems beyond human capabilities. With the potential arrival of uncontrollable superintelligence, the study underscores the need for serious introspection about AI development directions. Earlier this year, prominent tech figures, including Elon Musk and Steve Wozniak, echoed these concerns, advocating for a pause in AI work to explore its safety, recognising the profound societal risks it poses.
Source: ScienceAlert
To mitigate potential threats, it is important to implement additional cybersecurity measures with the help of a trusted partner like INFRA www.infrascan.net, or you can try it yourself using check.website.