Love the story! You got me thinking about stuff like what Asimov wrote, about the Three Laws of Robotics. The First Law being: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Asimov hypothesized that robots and AI would evolve to a point where they would realise that humans were so destructive to each other and to the environment that the logical step to ensure world peace would be to eliminate humans altogether. Hence the programming of the Three Laws of Robotics into the positronic brains of all robots as a safety feature. Asimov was on to something. Is AI currently becoming too smart for our own good?


Written by Brian Loo Soon Hua

I create daily sci-fi and fantasy art at Xenolocution: https://medium.com/xenolocution I also write true ghost stories at https://medium.com/haunted-office
