Asimov's Three Laws of Robotics were designed as universal constraints for any thinking machine powerful enough to harm us:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

On paper, the logic is flawless. You could even express it as a function:

```go
// isAsimovCompliant sketches the laws as a hard constraint: any action
// that would allow harm to a human is rejected outright.
func isAsimovCompliant(willAllowHarmToHuman bool /* , further inputs for the other laws */) bool {
	if willAllowHarmToHuman {
		return false
	}
	// ... analogous checks for the Second and Third Laws ...
	return true
}
```

The defining property of this function is that it is a hard constraint. No matter what input you feed the system, the law either permits or forbids the action deterministically, every time. The rules don't bend.

We don't have humanoids walking among us just yet, despite Elon's promises. But we do have modern generative AI. Our guardrails are delivered as system…