“As our daily lives involve ever more sophisticated computers, we will find that ascribing little thoughts to machines will be increasingly useful in understanding how to get the most good out of them” but “we must be careful not to ascribe properties to a machine that the particular machine doesn’t have”
This is Jessica. Maybe one of the biggest crimes of academic computer science (besides routinely ignoring prior work and making up social science to suit our needs) is our tolerance for abuse of language. We take technical things and inject them with social significance without thinking through what we’ve implied. This is perhaps forgivable in the early stages of research, when we’re trying to get more people excited about exploring some direction, but at some point people start taking things more seriously and we find ourselves committed to terminology that overreaches. Then the question becomes what, if anything, we should do about it. Previously it didn’t feel like such a crime to talk about intelligence or learning, because nothing really worked that well, so the labels were clearly aspirational. But now it’s much easier to believe the simulacra, and it becomes harder to tell when we are using such terms as a predictive convenience versus a scientific claim versus a marketing device. There…