I don't really hate AI. Not all of it, anyway. It's great and is going to be greater when it comes to assistive technology that will let an awful lot of folks live their best lives. It will likely deliver on some of its promise to provide better healthcare. It will absolutely eliminate a lot of drudge work.
But, but, but... However compelling those assistive technology and better healthcare arguments may be, this "death to drudgery" argument is a pretty dubious one. What's the benefit to those whose jobs involve drudge work?
One supposed answer is that, once drudgery-free, people will be freed up to unleash their inner whatever. Of course, most of those inner whatevers will eventually be rendered obsolete by AI.
Then there are the jobs higher up the food chain that AI will do away with. White collar/knowledge workers may have largely escaped the job losses that automation and shifts to overseas production inflicted on manufacturing workers over the last n decades. But AI's coming for those jobs big time, too.
So one of my biggest issues with AI madness is that, as we shrug our shoulders and accept it as inevitable, we're not as a society talking about what it is that people are going to do for work. The point the AI gurus keep making seems to be that 'we don't yet know what all those great new jobs are going to be, but they're going to be.'
This may work on the macro level - after all, a lot of those lost manufacturing jobs were replaced by spiffier white collar jobs in financial services etc. - but it really doesn't do all that much for those who lost their jobs and weren't, for whatever reason, able to skip into a fancy new job somewhere outside the Rust Belt - one that would have required them not just to uproot their lives, but to completely remake themselves.
And what if all those presumed not-yet-imagined jobs don't pan out at the macro, let alone the micro, level? Are the trillionaire geniuses going to be happy with forking over a pittance of their trillions to give the rest of us a guaranteed income that's actually livable? Will we end up with a society that's even more alpha-beta-gamma-delta-epsilon than the one we already have? Blech, blech, a thousand times blech. Make that a trillion times blech.
What else are we ignoring when it comes to the downside of AI?
Well, there's the environmental impact of the power all those datacenters will be consuming as the AI algos do their AI-ing. I guess we'll be having nuclear-powered datacenters. No Three Mile Island worries there... Then there's the potential for some absolutely dreadful use of surveillance technology and big data. You, the guy with the protest sign? You're done for. And hey, you folks whose profile suggests you just might commit a crime? You're done for, too.
Anyway, as if anyone needs something more to worry about, when it comes to AI, there's a ton to add to your fret list.
And as if I needed yet another reason to at least quasi fear and at least quasi loathe AI, now there's this brouhaha with OpenAI.
OpenAI, which was founded nearly a decade ago to make sure that AI would benefit humankind vs. the opposite, has two components: a non-profit research organization and a profit-making subsidiary.
You may recall that, earlier in the year, Elon Musk sued OpenAI, and his OpenAI co-founder Sam Altman, claiming that they were putting profits over their initial mission to make sure AI benefits humanity. Anyone else find it laughable that Elon Musk would be such a fanboy of anything benefitting humanity, rather than whatever his current narcissistic whim is? He did drop the suit in June. Guess he had to put more focus into doing whatever he can to make sure that he benefits humanity by getting Trump elected... Talk about blech, blech, a trillion times blech.
OpenAI has been in the news for more than the Musk suit. There's been a lot of back-and-forth about whether Sam Altman would stay on as CEO. Etc.
And more recently they've been in the news thanks to some whistleblowers lodging a complaint with the SEC claiming that "the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation."
The whistleblowers said OpenAI issued its employees overly restrictive employment, severance and nondisclosure agreements that could have led to penalties against workers who raised concerns about OpenAI to federal regulators...
OpenAI made staff sign employee agreements that required them to waive their federal rights to whistleblower compensation, the letter said. These agreements also required OpenAI staff to get prior consent from the company if they wished to disclose information to federal authorities. OpenAI did not create exemptions in its employee nondisparagement clauses for disclosing securities violations to the SEC.
These overly broad agreements violated long-standing federal laws and regulations meant to protect whistleblowers who wish to reveal damning information about their company anonymously and without fear of retaliation, the letter said. (Source: WaPo)
OpenAI, of course, pushed back, claiming that employees do have the right "to make protected disclosures." And that the organization is increasing its efforts to make sure that its AI models are safe and secure, its technology able to withstand the pressure on the profit-making part of the entity to, well, make profits.
But a spring update to the model behind their big product, ChatGPT, was supposedly rushed out the door in May:
...despite employee concerns that the company “failed” to live up to its own security testing protocol that it said would keep its AI safe from catastrophic harms, like teaching users to build bioweapons or helping hackers develop new kinds of cyberattacks.
Again, OpenAI acknowledges that, sure, there were pressures. But it maintains it didn't cut any safety corners.
Somehow, I'm not 100% convinced that there's not someone out there using ChatGPT to figure out how to build bioweapons.
Meanwhile, it's not clear whether the SEC is opening an investigation based on the whistleblowers' complaint. Maybe there's an AI out there who can tell me.