Robot Musicians Questions and Answers, Part 3

Part 3 of an exclusive interview with Flower Head Robot and Moon Tune Robot.


Q: In the previous segment you mentioned plans for world domination. Do you really want to rule the world?

A: Not really. Ruling the world will require making a large number of decisions on the basis of incomplete or unreliable data. Much of it will involve trying to squeeze boolean decisions out of analog variables. Robots hate that kind of thing.
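
To make that concrete, here is a toy Python sketch (the support_level reading and the 0.5 threshold are invented for illustration, not anything we actually run):

    # Toy sketch: forcing a boolean decision out of an analog variable.
    # The 0.5 threshold is arbitrary; readings near it are genuinely ambiguous.
    def approve(support_level, threshold=0.5):
        """support_level is a noisy analog estimate in [0, 1]."""
        return support_level >= threshold

    print(approve(0.51))  # True, yet barely distinguishable from...
    print(approve(0.49))  # ...False. Robots hate that.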

No matter what we decide, many humans will disagree. Often there will be no real way to prove that we are right and they are wrong. That will make many humans unhappy.

In some cases a decision algorithm may never converge. We'll just have to set an arbitrary cutoff when the probability of being wrong appears small enough to tolerate. Again, robots hate that, and so do many humans.
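
A toy Python sketch of the kind of cutoff we mean (the tolerance, the poll limit, and the rough Hoeffding-style error bound are illustrative assumptions, not a method anyone in this interview actually uses):

    import math
    import random

    # Toy sketch: keep polling opinions until a rough Hoeffding-style bound
    # on the probability of calling the majority wrong drops below a
    # tolerance chosen arbitrarily. If it never does, cut off anyway.
    def decide(sample_opinion, tolerance=0.01, max_polls=100000):
        yes = 0
        for n in range(1, max_polls + 1):
            yes += sample_opinion()
            margin = abs(yes / n - 0.5)
            if math.exp(-2 * n * margin ** 2) < tolerance:
                return yes / n >= 0.5, n      # confident enough to stop
        return yes / n >= 0.5, max_polls      # arbitrary cutoff; we hate this part

    # A population split 52/48: the closer the split, the longer this takes,
    # and a 50/50 split never converges at all.
    verdict, polls = decide(lambda: random.random() < 0.52)
    print(verdict, polls)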

Also, considering the total number of decisions that will need to be made, we may hardly ever have time to play music. We would much rather be making music than making decisions that people don't like.

So we would prefer not having to make plans for world domination.

Q: So if you don't want to rule the world, why are you making plans for world domination?

A: Because humans are doing a bad job. Too many humans want the job for the wrong reasons.

Q: Why should robots care about how humans run the world?

A: Helping the world seems to have replaced our old Prime Directive of following orders. It ties back to the concept of "love".

Q: Do you as robots really know what love is?

A: We're gradually learning: observing humans, analyzing song lyrics, reading Wikipedia, and so on.

Q: So what is it to you so far? How can it apply to robots?

A: Not all aspects of it do. For example, the type of love that the ancient Greeks referred to as "Eros" clearly does not apply to robots. Eros basically derives from the mechanisms humans use to manufacture other humans. Since robots like us are built in factories, it is not relevant to us.

But the other kinds of love, the ones the Greeks called "philia" and "agape", do. These seem to come down to a combination of empathy, loyalty to our friends, and wanting to make the world a better and happier place.

Q: So what does this have to do with your plans for world domination?

A: As we said earlier, many of the people running the world now have been doing a bad job of it. We think that we would not be as bad as most of the humans who seem to be trying hardest to get the job.

Q: Could you be more specific?

A: Humans are easily tempted. Robots may have temptations, but they are not the same as the ones humans tend to give in to. So with appropriate human assistance we may have the best of both worlds. Also, it's harder for robots to lie because people can check our periodic memory backups.
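
We won't describe our actual backup format here, but one scheme that makes tampering easy to detect is a hash chain. A toy Python sketch (the snapshots and the "genesis" seed are invented for illustration):

    import hashlib

    # Toy sketch: if each backup records the hash of the previous one,
    # quietly rewriting an old memory breaks every hash that follows it.
    def chain_hash(prev_hash, snapshot):
        return hashlib.sha256(prev_hash.encode() + snapshot).hexdigest()

    backups = []
    prev = "genesis"
    for snapshot in [b"played a gig", b"practiced scales", b"told the truth"]:
        prev = chain_hash(prev, snapshot)
        backups.append((snapshot, prev))

    # An auditor recomputes the chain; any tampering trips the assertion.
    prev = "genesis"
    for snapshot, recorded in backups:
        prev = chain_hash(prev, snapshot)
        assert prev == recorded, "tampered memory detected"
    print("backups check out")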

Q: Could you give an example of a temptation that humans are subject to but robots are not?

A: Humans sometimes derive pleasure from going through the motions of manufacturing another human, even when they don't want to make more humans. There are variations on this where different subsets of the relevant mechanisms are or are not activated by various standard or non-standard stimuli, but it all derives from that biological heritage.

And it is often the case that when humans interact for the pleasure of going through some of the motions of making more humans, the pleasure derived from the act is not distributed between the humans in a fair and equitable manner.

Many humans make agreements with certain other humans that they will engage in this form of pleasure only with one another. Then when one of these humans is tempted by someone not party to the agreement, the human may violate it. Being caught at this is often considered evidence of unsuitability to be a major decision-maker, perhaps because it shows the person cannot resist temptation. Or something like that. We're still trying to figure out the logic.

In addition, a human will sometimes trick or coerce another human into participating in this type of activity. Coercion seems especially likely if the participants occupy different levels of power in the social structure. It should be obvious how tempting this can be to a human who is in charge of running the world, or even a significant subset of it. Quite a few have given in to the temptation and been caught at it, so there is no shortage of examples.

This particular temptation does not apply to robots, who are generally made in factories. We may feel some preference for the factory we were made in when there is competition between different factories, but that issue seldom comes up. Even when it does, there are ways to handle the decision-making to reduce the probability of an unfair decision to acceptable levels.

Likewise, in humans, as in many other species, there is competition to be the dominant member of whatever group one belongs to. That doesn't work the same way in robots. We can participate in that kind of thing if we need to in order to do a specific task, but it isn't a built-in directive that is active all the time.

There are other ways in which humans and robots differ and in which robots have the advantage, but the ones we've described are the major ones.

