How does Asimov's second law deal with contradictory orders from different people?

The second law states that:

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

But it says nothing about property. What if a robot owned by someone is given an order by someone else?

Surely if the same person gives contradictory orders, the last one will be executed:

Move that box upstairs. Wait no, just put it on the porch.

But what if a robot is carrying groceries outside, and someone says:

Hey you, stop what you're doing and take those groceries to my apartment.

It shouldn't agree to do that.

So, do people just say "Only obey me" when they buy a robot? Does that mean a robot ignores every order from non-owners, even a small request like helping an old lady cross the road? The robot would have to tell the difference between harmless orders from strangers and orders that cause non-physical harm (like stealing their stuff, smashing their car, etc.).

If a household owns a robot, they'll say something like "Only obey me and my family", but then what happens when orders from family members contradict each other?

How does it work exactly? Is it just managed by some part of the AI less fundamental than the 3 rules?

I've never read Asimov, sorry if it's explained in the first chapter of the first book. I didn't see it discussed anywhere.

by Teleporting Goat 31.07.2019 / 17:31

8 answers

Bear in mind that the Robot stories are short stories or novellas published over a very long time span, so they are not always 100% consistent with one another, though the overall framework is.

As highlighted in the comments, a positronic brain's reaction to orders, situations and the environment is generally described in-universe as the outcome of the "potentials" those stimuli induce on the Three Laws. There is no AI at work as we know it today, with the Robot deciding whether or not to do something; it is more like a compulsion towards a general line of action, driven by the priorities between the Laws and the Robot's understanding of its surroundings. To give you an idea, one of the stories (if I remember well) discusses a scenario in which a strong enough order, repeated continuously under different wording and reinforced with information on how badly the person issuing it would be affected if it were not followed, could have led to a breach of the First Law through the simple build-up of potential on Second Law adherence. The idea is rejected as impossible in practice, but it is discussed.

Also take into account that there is an in-universe evolution in how the Laws are enforced by design; in the first stories, which take place earliest in-universe, the Laws are rigid, literal absolutes with no room for interpretation. As the in-universe timeline progresses, they become more flexible and open to interpretation by the Robot, partly as an attempt by U.S. Robots to prevent scenarios such as a third party ordering a Robot to, for instance, "get that bike and throw it into the river" when the bike's owner is not present to countermand the order.

This flexibility is often presented as a re-interpretation of the First Law: "harming a human" is originally taken to mean exclusively physical harm, and slowly comes to include the concept of mental harm or distress. Under this reading of the First Law, breaking it means building up enough negative potential against it, as judged by the Robot's understanding of the human mind and social conventions, which leads to amusing side-effects (read the books!).

So the bike will not be thrown into the river, because the robot guesses the owner will not like it (negative potential against the First Law).

So, what does all this rambling mean for your question:

  • An early Robot will tend to do literally as ordered if there is no evident violation of the First Law, or of a stronger previous order. A way to protect your Robot from abuse would be to issue a second order, in strong terms, not to obey counter-orders. This generates two positive potentials that a single counter-order will find difficult to overrule.
  • A later Robot will tend to be more mindful of the negative psychological impact of following an order or taking a course of action, which leads to amusing side-effects (seriously, read the books!).
  • Any Robot of whatever period, faced with a situation where contradictory lines of action carry the same potential on all the Laws, or where every line of action breaks the First Law (later interpreted as a high enough negative potential against it, not necessarily implying physical damage), will enter a loop looking for an impossible solution and lock up (see the toy sketch after this list).
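
Purely as an illustration of the "potentials" idea described above, here is a toy Python sketch. Asimov never specifies any actual algorithm, so every name and number here (Action, choose_action, SECOND_LAW_WEIGHT, the example orders) is invented for this answer; it only shows the general shape: orders add potential on the Second Law, the First Law dominates everything, and an exact tie between contradictory actions ends in a frozen brain.

    from dataclasses import dataclass

    SECOND_LAW_WEIGHT = 1_000  # orders outweigh self-preservation by construction

    @dataclass
    class Action:
        description: str
        harms_human: bool = False     # First Law: would this harm a human, as the robot sees it?
        order_potential: float = 0.0  # Second Law: net weight of standing orders; strong wording
                                      # and repetition add potential, counter-orders subtract it
        self_harm: float = 0.0        # Third Law: risk to the robot itself

        def net_potential(self) -> float:
            return SECOND_LAW_WEIGHT * self.order_potential - self.self_harm

    class FrozenBrain(Exception):
        """No acceptable line of action, or an exact tie between contradictory ones."""

    def choose_action(candidates: list[Action]) -> Action:
        # First Law is absolute here: anything the robot reads as harming a human is out.
        acceptable = [a for a in candidates if not a.harms_human]
        if not acceptable:
            raise FrozenBrain("every line of action violates the First Law")
        best = max(acceptable, key=lambda a: a.net_potential())
        if sum(a.net_potential() == best.net_potential() for a in acceptable) > 1:
            raise FrozenBrain("contradictory actions with equal potential")
        return best

    # Example: the owner's standing order outweighs the stranger's counter-order.
    keep_going = Action("finish delivering the owner's groceries", order_potential=2.0)
    obey_stranger = Action("take the groceries to the stranger's flat", order_potential=1.0)
    print(choose_action([keep_going, obey_stranger]).description)
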
31.07.2019 / 19:13

In some stories, robots weigh the importance of an order based on the urgency of the language used and on who gave the order.

I'm thinking of Little Lost Robot, as mentioned by NKCampbell in a comment. The premise is that a supervisor gets very angry at a robot and tells it to "Get lost!" using extremely strong and colorful language; the robot responds by hiding among other identical-looking models. It is explained that simply ordering the robot to reveal itself won't work, as that would not countermand the direct order to get lost, given in the strongest possible language by 'the person most authorized to command it':

“They are?” Calvin took fire. “They are? Do you realize one of them is lying? One of the sixty-three robots I have just interviewed has deliberately lied to me after the strictest injunction to tell the truth. The abnormality indicated is horribly deep-seated, and horribly frightening.”

Peter Bogert felt his teeth harden against each other. He said, “Not at all. Look! Nestor 10 was given orders to lose himself. Those orders were expressed in maximum urgency by the person most authorized to command him. You can’t counteract that order either by superior urgency or superior right of command. Naturally, the robot will attempt to defend the carrying out of his orders.

I also seem to recall a comment elsewhere to the effect that a robotics expert would know better how to phrase an order to be as strong as possible, and would likely overcome an order given by an ordinary person.

In general, interpreting how the laws apply to a specific situation is very much a part of the challenge in-universe. Simple models are at risk of burning out their brains if faced with a difficult dilemma involving conflicting laws, orders, or situations. So there's not supposed to be an easy answer to the kinds of questions you're asking, but there are some reasonable principles to follow.

01.08.2019 / 04:25

As far as I can recall, he doesn't.

It's important to remember that Asimov was writing the robot stories for Astounding, and his readers liked logical puzzles, surprise consequences and clever gimmicks, none of which would be present in a simple story of two different people telling a robot to do two different things.

The closest I can recall is "Robot AL-76 Goes Astray", where a robot built for use on the Moon somehow gets lost in the backwoods. It invents a disintegration tool, causes some havoc, and is told by someone to get rid of the disintegrator and forget about building more. When the robot is finally found, the owners want to know how it built the disintegrator, but it can't help: it has forgotten everything.

This wasn’t a case of contradictory orders, but it was an order by an unauthorized person.

31.07.2019 / 18:03

There was a case in the short story "Runaround" where a robot had two conflicting objectives. In that case it was because he was built with an extra-strong Third Law, so he could not decide between obeying human orders and self-preservation.

He essentially tries to satisfy both conditions, running in circles at a safe distance from his dangerous destination and otherwise acting erratic and drunk.

I would think a similar thing would happen in the case of two conflicting orders. The robot would do what it could to satisfy both, but could become stuck if it was impossible.

In the above story, the deadlock is resolved when the human operators put themselves in danger, thus invoking the First Law. I would think in most cases committing a crime would be considered a minor violation of the First Law, so smart enough robots would try to avoid it. But as Zeiss mentioned, it would really depend on the model of robot.

31.07.2019 / 19:10

The answer to this question depends strongly on the level of sophistication of the robot. The robot stories (most of them were short stories, but a number were novel-length) covered an internal time span of centuries -- even though the core of the stories involved Susan Calvin from her graduate days until her old age.

Early on, robots weren't "smart" enough to know the difference between sensible orders and nonsensical ones. Later, they reached a level of sophistication where they could debate amongst themselves exactly what constitutes a human, and conclude that they themselves are more qualified for that moniker than any of these biological entities. Somewhere in between, the Second Law acquired some level of qualification relative to the order giver (orders from small children and known incompetents could be ignored). It was out of this latter capability that robots eventually decided that concern for harm to humans (and humanity) overrode all their orders: humans must be protected from themselves, to the point where they were all declared incompetent to give orders to a robot.

31.07.2019 / 18:20

In the extreme, contradictory orders can lead to the permanent shutdown of a robot. This is dealt with in Robots and Empire, where a robot, R. Ernett Second, was under strict orders not to reveal anything about his master. When he is forcibly ordered to do so anyway, his brain freezes irreversibly.

“Will you answer my questions and accept my orders, Ernett?”
    “I will, madam, if they are not counteracted by a competing order.”
    “If I ask you the location of your base on this planet—what portion of it you count as your master’s establishment—will you answer that?”
    “I may not do so, madam. Nor any other question with respect to my master. Any question at all.”
    “Do you understand that if you do not answer I will be bitterly disappointed and that my rightful expectation of robotic service will be permanently blunted?”
    “I understand, madam,” said the robot faintly. [...]
    Gladia said, in a voice that rang with authority, “Do not inflict damage on me, Ernett, by refusing to tell me the location of your base on this planet. I order you to tell me.”
    The robot seemed to stiffen. His mouth opened but made no sound. It opened again and he whispered huskily, “... mile...” It opened a third time silently—and then, while the mouth remained open, the gleam went out of the robot assassin’s eyes and they became flat and waxen. One arm, which had been a little raised, dropped downward.
    Daneel said, “The positronic brain has frozen.”

01.08.2019 / 18:43

The robot would only have two real options.

It could malfunction, as in Runaround, for example, where the robot acted drunk. Or it could shut down entirely.

Or, it could prioritize one order over another. The key there is that the priority scheme is not dictated by the Laws themselves, which means it is flexible to the design of a specific robot and its intended purpose: prioritize by time, rank, ownership, or more complex schemes involving a broad understanding of the situation (a toy sketch of such a scheme is below).
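
To make that concrete, here is a hypothetical Python sketch of such a priority scheme. Nothing in the stories specifies fields like rank or is_owner, or a resolve function; they are invented here only to show that the tie-breaking logic lives outside the Three Laws themselves.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Order:
        text: str
        issued_at: datetime
        is_owner: bool = False  # was the order given by the robot's owner?
        rank: int = 0           # e.g. a supervisor outranks a bystander
        urgency: int = 1        # how forcefully the order was phrased

    def second_law_priority(order: Order) -> tuple:
        # The Second Law only says "obey"; this particular ordering (owner first,
        # then rank, then urgency, then recency) is purely a design choice.
        return (order.is_owner, order.rank, order.urgency, order.issued_at)

    def resolve(orders: list[Order]) -> Order:
        return max(orders, key=second_law_priority)

    # Example: the stranger's order is more urgent, but the owner's still wins.
    owner = Order("Carry the groceries home", datetime(2019, 7, 31, 17, 0), is_owner=True)
    stranger = Order("Bring those groceries to my apartment",
                     datetime(2019, 7, 31, 17, 5), urgency=3)
    print(resolve([owner, stranger]).text)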

Basically, this is a work-around of the Second Law, achieved by playing with the definition of "order": "That command I got isn't really an order, because it was given by a criminal." But if the Laws can be gamed that way, they can also be gamed by defining "human" in a super-narrow fashion, like the Solarians did.

And suddenly the laws may as well not exist.

There's a fundamental conflict between the very concept of an absolute rule of behavior and an intelligent actor.

02.08.2019 / 06:33

In fact there's a whole Asimov story about this problem: The Robots of Dawn.

And the answer is simple: a mental block, then the "death" of the robot.

Two situations happen:

  • first, a robot has an order to find Elijah Baley and guide him somewhere, but Baley (a human) refuses to go. This refusal makes the robot "bug out", and Baley ends up complying with the initial order so as not to break the robot, which might be expensive. This is of course an expository situation for the main plot of the story

  • and the main plot: Jander is a humanoid robot who suffered a mental block; Elijah must investigate why, and the general hypothesis is contradictory orders...

02.08.2019 / 14:45