
    A.I. makers must create and observe new ‘laws of robotics’
    • June 30, 2023

    Not long ago, the artificial intelligence (A.I.) bot ChatGPT, as a “courtesy,” sent me a copy of an abbreviated biography of me that it had written.

    ChatGPT, developed by the San Francisco firm OpenAI, was wrong about both my birth date and my birthplace. It listed the wrong college as my alma mater. I had not won a single award it said I did, and it ignored those I actually won. Yet it got enough facts right to make clear this was no mere phishing expedition, but a version of the new real thing.

    Attempts at correction were ignored.


    All along, I knew this could be dicey: providing correct information to fix the errors could have led to identity theft or, worse, directed criminals to my door.

    The experience recalled the science fiction stories and novels of Isaac Asimov, who prophetically devised a generally recognized (in Asimov’s fictional future) set of major laws governing intelligent robots.

    In his 1942 short story “Runaround,” Asimov first put forward these three laws, which would become staples in his later works:

    The first law is that a robot may not harm a human being or, through inaction, allow a human to come to harm. The second law is that a robot must obey orders given to it by humans, except where those orders conflict with the first law. And the third law is that a robot must protect its own existence, as long as doing so does not conflict with the first two laws.

    These fictitious laws were reminiscent of the U.S. Constitution, open to constant reinterpretation: new questions arose about what constitutes harm and whether sentient robots should be condemned to perpetual second-class, servant status.

    It took more than 30 years, but eventually others tried to improve on Asimov’s laws. Altogether, four authors proposed more such “laws” between 1974 and 2013.

    All sought ways to prevent robots from conspiring to dominate or eliminate the human race.

    The same threat was perceived in May by more than 100 technology leaders, corporate CEOs and scientists who warned that A.I. poses an existential threat to humanity. Their 22-word statement warned that “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    President Biden joined in during a California trip, calling for safety regulations on A.I.

    Difficult as it has been to secure international cooperation against those other serious threats, pandemics and nuclear weapons, no one can assume A.I. will ever be regulated worldwide, which is the only way to make such rules or laws effective.

    The upshot is that a pause, not a permanent halt, in the advancement of A.I. is needed right now.

    For A.I. has already permeated the essentials of human society: it is used in college admissions, hiring decisions, police work, driving cars and trucks, and generating fake literature and art.


    An old truism suggests that “Anything we can conceive of is probably occurring right now someplace in the universe.” The A.I. corollary might be that if anyone can imagine an A.I. robot doing something, then someday a robot will do it.

    And so, without active prevention, someone somewhere will create a machine capable of murdering humans at its own whim. It also means that someday, without regulation, robots able to conspire against human dominance on Earth will be built, perhaps by other robots.

    Asimov, of course, imagined all this. His novels featured a few renegade robots, but also noble ones like R. Daneel Olivaw, who created and nurtured a (fictitious) benevolent Galactic Empire.


    In part, Asimov was reacting to events of his day, which saw some humans exterminate whole groups of other humans on a huge, industrial scale. He witnessed the rise and fall of vicious dictatorships, more despotic than any of today’s.

    Postulating that robots would advance to stages far beyond even today’s A.I., he conceived a system where they would co-exist peacefully with humans on a large scale.

    But no one is controlling A.I. development now, leaving it free to go in any direction, good or evil. Human survival demands limits on this, as Asimov foresaw. If we don’t demand them today, not even a modern Asimov could predict the possible consequences.

    Email Thomas Elias at [email protected].


