
A few years ago I argued that utilitarian and Kantian ethics, with the trolley problem as their framing question, were suited for programming robots but not for human beings. It turns out I was wrong, not about the human beings, but about the robots. For the past two years my day job has been Associate Director of Northeastern University's Ethics Institute, which has a particular focus on AI and data ethics. My colleague John Basl regularly stresses the need for people in AI ethics to have both technical and philosophical expertise, so we put together programs (like the AIDE Summer institute for graduate students) to help them get it. What I'm writing about today is a reason that combined expertise matters: something you might get wrong without it. To me it's obvious why the philosophical expertise matters: engineering won't tell you what action is morally right to take. But Basl pointed out something I'd got wrong by not having the technical expertise, something that…
