Evil Robots?

Quick one tonight. Tomorrow is Day 11 of continuous work and there’s one more before the next day off. I’m worn out. Good thing Stimpy is back. We’ll be back to normalish.

The Daily Moonhowler has some vague story about the imminent creation of Skynet and Terminators. They seem to think that since a robot cannot make a moral decision, any form of artificial intelligence is automatically evil.

It’s all based on a theoretical moral dilemma, the classic trolley problem. There are five people on the tracks and a runaway wagon heading for them. You have access to a lever that will switch the wagon to another track with one person on it. What do you do? It’s supposed to be unsolvable, but under modern British justice it’s easily solved.

Whatever you do, don’t touch the lever. Shout a warning, since you can see all these people but don’t switch tracks.

If the wagon hits the five idiots who shouldn’t have been on the bloody tracks in the first place, it’s a terrible accident. If you deliberately aim it at the one person on the other track, that’s murder. If you thought about it, it’s premeditated.

So you ignore the lever and shout a warning.

The robot will also ignore the lever if it’s programmed according to Asimov’s robot rules. Under those rules it can never harm a human or, through inaction, allow a human to come to harm. Therefore, if it cannot warn the humans, it will attempt to derail the wagon by throwing itself on the tracks. If it has artificial intelligence it will know that it can be repaired even if totally smashed, but a human cannot.
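Asimov’s laws are essentially a strict priority ordering, which makes them easy to sketch in code. Here’s a minimal toy illustration; the actions and their outcomes are all invented for the example, not anything from Asimov or from real robotics:

```python
# Toy sketch of Asimov-style decision-making. The robot scores each
# candidate action in strict priority order: harm to humans first,
# harm to itself only as a tie-breaker. All values here are assumptions.

ACTIONS = {
    # action: (humans harmed, robot destroyed)
    "do nothing":              (5, False),
    "pull the lever":          (1, False),
    "shout a warning":         (0, False),  # assumes the humans can move in time
    "throw itself on tracks":  (0, True),   # smashed, but repairable
}

def choose_action(actions):
    # Tuples compare element by element, so minimising (harm, destroyed)
    # enforces the First Law before the Third.
    return min(actions, key=lambda a: actions[a])

print(choose_action(ACTIONS))  # → shout a warning
```

The point of the tuple ordering is that self-preservation never outweighs human safety: if shouting a warning were impossible, the robot would pick self-destruction over letting anyone be hit.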

If it was programmed by a rail geek it will hold the lever halfway (assuming old-style cable connections), or switch tracks while the wagon is on the points so that it derails there. A properly built robot’s reactions would be fast enough even if the wagon is travelling at speed.

The robot will look for a means to save all six. It will not choose between five or one. Is that evil? Surely ‘evil’ lies in deliberately aiming a wagon at one person, even when that action might save five.

Besides, they never define who these people on the tracks are. If they are politicians we’ll need another wagon.

Artificial intelligence is still going to be only as good as the programmers. It might lead to a programmed intelligence that can develop, adapt and go beyond its original program, but robots, no matter how powerful, have something we don’t have: an ‘off’ switch. Which can be built with a remote operating facility.

There are robots with some autonomous functions already, but they are pre-programmed. True artificial intelligence is some way off yet but there are quite a few very clever people working on it. One day it’ll happen.

When it does, the inventors will realise there’s no actual use for it beyond stellar exploration and fierce chess games. Robot vacuum cleaners already exist; you don’t need a smart machine to hoover the floor. Robot assembly lines already exist. They do the same thing over and over and need no intelligence. The only use for an artificial intelligence is to send it into space. It won’t need food supplies and does not have a finite lifespan, so it can take hundreds of years to reach other stars and send back selfies to its Farcebok page.

Will it get bored? Will it go mad? Depends on how you program it. It could ‘sleep’ in a shutdown mode for years at a time. Even if it goes totally doolally, it’ll be light years away when that happens so it’s no danger.

Maybe it will encounter planets populated by robots. Oh wait, it’ll pass one on the way out. Mars is currently populated only by robots but they aren’t really ‘robots’ in the science fiction sense. They are still remotely controlled. Even so they are the only things moving around there.

If we send an artificial intelligence to Mars it could direct mining and smelting robots and produce steel and other metals and build big tripod war machines and… maybe that’s not such a good idea.

Finally, if any Daily Can’tbearsed ‘researchers’ ever read this, it’s ‘I, Robot’. Not iRobot, you halfwitted festering donkey’s bellends.

Steve Jobs didn’t write it.


16 thoughts on “Evil Robots?”

  1. An iRobot would look fabulous and be very desirable, not to mention expensive, although not as good as it should be. It would do stuff for you which you really don’t want and a new model would come out frequently and you’d have to queue to get one. Then you’d have to flog the old one which you’d kept covered so it was in excellent condition for the next person to use.

    I liked Iain M. Banks’ robots. They could be floating suitcases, or down to the size of a fag packet (can I say that at this time of the day?), or just a mind in a giant spaceship called the “I can’t be arsed today so piss off”. I wouldn’t mind a knife missile which could also be a wee tiny drone. Just to make sure that I had some sort of leverage, shall we say, should I encounter any resistance, which is apparently futile…


  2. Any robot powerful enough to act as a robot butler would be as strong as a human, without any real check on this strength. Robots this strong in factories operate in closed cages, and automatically stop if the cage door is opened; anything this powerful in a home would be lethal.

    No, what we will actually get is a lot of weaker, smaller machines of insect-level intelligence. Robotic rubbish-pickers for the streets, group-acting tidying-up robots for homes, even fluffy, cuddly cat-like robot pets (built with a lot of padding for the times when one isn’t in the mood for an ultra-cute fluff-bot and simply boots it across the room instead).

    Robotic security devices would likely also be developed; noisy, extremely annoying things which, when activated, mob an intruder, screaming their heads off and deliberately trying to trip him up while spraying smelly dye at him. This wouldn’t exactly be dangerous, but try explaining to a visiting copper why you are coloured bright green and currently smell like an open sewer, despite having been standing under a shower for the last hour.


    • “Not iRobot, you halfwitted festering donkey’s bellends.”
      Marvellous! Made me laff me head off! A bleedin robot that could iron would’ve made my life so much easier but hey … I guess most inventors are guys … why would they need to invent a robot to iron, when their Missus does it for them!! Now now guys !!! Calm down, tis only a joke! hehe. Keep up the good work Leggy. Me luffs ya long time!!


      • The washing machine and the vacuum cleaner were invented by men who had to do the cleaning themselves.

        We are lazy buggers. If we have to do it ourselves we’ll invent a machine 😉


  3. If the train was a passenger train, de-railing it would also not be an option for the robot.

    Required to make a decision in which there were no acceptable alternatives, the robot would fry its circuits and, in effect, commit suicide.

    Isaac already wrote about that happening.

    As for ‘morals’, why should robots have to be better at such stuff than humans? Humans are not very good at making ‘moral’ decisions.


    • Good point. Morality is a perspective. ISIS think it’s morally acceptable to cut off innocent heads. We in the West don’t think it’s morally acceptable to hurt someone’s feelings.

      Somewhere in between is a good compromise.


  4. Asimov’s Laws were a breakthrough in robot storytelling. And if I’m remembering correctly, a good number of his stories involved exploring various niches and dilemmas involving them.

    At the moment I’m trying to recall what the series was…. it’s a sci-fi thing involving a vastly evolved artificial intelligence roughly galactic in size and spread of its net. Somehow it begins using human brains as part of its network while it is teleporting them around the place. Very nicely done trilogy or quadrilogy or somesuch. Anyone know what I’m talking about?



  5. What is a moral decision in a postmodernist world, where moral relativism seems to hold sway? What if the dilemma is perceived, by the robot/AI, as five sheep or one? Would it just go down the pub (to get well oiled) unless it had an investment in sheep?

    What would a computer lover be like?
    Any computer lover would love you,
    Until .. they turn … the elec..tri..city .. o…f….f
    – William R Strickland

    Perhaps all this will lead to a super cybernetic machine, connecting every computer in the universe? In a return to an earlier blog theme, when all the computers are connected, the question is asked:-
    “Is there a god?”,
    “Yes, now there is a God”.
    – Fredric Brown’s ‘Answer’
    ( http://www.roma1.infn.it/~anzel/answer.html )
    The switch, in this case, becomes fused shut.

    You never know with AI, even an off switch may not be fail safe …..


    • ‘Artificial intelligence’ is a confused concept. There may be intelligence in a machine but that is just intelligence: the ‘artificial’ part applies to the machine, not to the intelligence. So, there is no reason to suppose that there would be any significant difference between the way a human thinks and the way a robot would think.


      • So if the robot thinks as we do it’s intelligent – therefore an ‘artificial’ intelligence, if you spoke to it over a phone, would not be distinguishable from a human.

        That was Turing’s test, I think.

        If a machine gets to the point of intelligence then it isn’t artificial any more. It is, in effect, a created living thing.

        The Progressives will have apoplexy over that one.

