This site needs a lot of updates and a ton of spring (multiple springs’) cleaning. However, it’s a particular mess right now because I updated my Ubuntu server to Jammy and its newer PHP destroyed my (dated) WordPress theme.
So… a lot to come there, along with updating pages et al.
As with any declaration of this breadth, I have varying opinions about individual points within the statement, but in broad strokes it captures my concerns.
Note that I do not in any way limit this to OpenAI’s efforts only, but to all machine learning efforts (“ML”, or what is typically called “AI”, artificial intelligence, these days). There are numerous ethical concerns, not the least of which are biases caused by the data the models are trained on, the potential for abuse, the unlikely but not entirely implausible risk of runaway sentience, the risk of unknowingly creating a feeling/suffering entity, and, perhaps most importantly to me, the risk of greatly furthering social inequality through technology capture and the mass destruction of jobs with no sharing of gains: a mass destruction with no planning for UBI or other reasonable new social welfare structures. (And, to note, the risk is not limited to the statement’s “fulfilling” jobs – income is needed by all regardless.)
I grew up with science fiction authors the likes of Asimov, where, while AI had its own set of problems, at least there was some hope that the benefits of AI would be communally shared in terms of financial gain and quality of life. Our current societal deference to “rugged individualism” and “winner takes all” is not compatible with a future where jobs are broadly lost to AI. AI, in that case, will not be rescuing humanity, but potentially enslaving the majority, left to eke out a living on whatever livable-wage jobs remain.
Before we jump headlong into this technology, we as a society need not only to understand the ethical and moral considerations, but also to ensure the societal checks and balances are already in place beforehand, to avoid unnecessary damage and suffering.
Finally, as multiple friends have noted, the greatest risk from AI is not AI itself, but the humans who use and control AI. That is where we most need to step back, pause, and reevaluate.
It’s been quite some time since I’ve posted to this site, in part because I had to debug a plugin issue, but also because I’ve spent too much time on Twitter and (no small deal, and by far the best part) had a son to shepherd to adulthood.