
The cultural battles over AI have broken down along predictable lines in the past few years, with critics rightfully calling out the big AI platforms for training on content without consent, recklessly building without considering environmental impact, and designing platforms that are unaccountable because their code and weights (the parameters that describe how an AI model works) aren’t open for third parties to evaluate. The AI zealots have done themselves no favors, not only dismissing all of these valid criticisms but also making increasingly outlandish and extreme claims about the capabilities of the Big AI platforms, while simultaneously scaremongering about the brutal effect they’ll have on people’s lives and careers. It’s no wonder public sentiment about AI has become so negative. But a small cohort of us who are curious about LLMs as a technology, yet deeply critical of Big AI companies for their impact on society, have been asking what “good” AI would look like. Is…
