
"The Ultimate Computer" shows the faults of the Daystrom M-5. When given total control over a ship, it tends to do bad things. However, that same episode shows that it's perfectly capable of making reasonable suggestions, such as recommending who should crew a landing party. With that in mind, why was it never kept on as a pure suggestion machine, i.e., as an AI assistant?

  • Would you take suggestions from someone who has murdered despite being religious (yes, a religious computer) and whose final act was attempted suicide by cop? Star Trek TOS is a luddite humanist show; the entire point is not to take suggestions from machines but to be smart enough not to need them. Commented Jul 7 at 19:28
  • Gotcha! Don't give it direct control of anything. Just let it whisper reasonable-sounding suggestions into the ears of the stressed-out, overworked people who do have control. Sure, give it a go. What could possibly go wrong? But just, uh, don't try to unplug it. Know what I mean? (KIRK: And how long will it be before all of us simply get in the way?) Commented Jul 8 at 1:47
  • I haven't seen the episode, hence commenting rather than answering. From a general AI safety perspective, an AI that is smart enough can achieve all its goals via "reasonable" suggestions. With advanced powers of manipulation and persuasion it could convince the crew to do bad things.
    – craq
    Commented Jul 8 at 4:48

1 Answer


We see what happens to malevolent artificial intelligences in the Lower Decks episode "A Few Badgeys More". The Federation, recognising that they're an ongoing threat, essentially puts them into cold storage until they can prove that they aren't planning to kill anyone.


I think we can reasonably assume that a computer that has actually managed to kill someone would be treated as a very serious threat indeed, regardless of how many safeguards you think you've put in place (spoiler: not enough).


The draft script for TNG: "The Offspring" indicates that a century later, the M-5 incident is still a byword for AI-driven disaster:

ADMIRAL HAFTEL: This is not personal, Captain. There are very real dangers here. Without peer review, Starfleet feels we're risking another M-5 catastrophe.

PICARD: That is a forced parallel, Admiral. M-5 was a battle computer.

HAFTEL: With an artificial intelligence, that led to disaster.
