A few years ago, I blogged about AI. I was concerned about kids using it to cheat on assigned compositions. I bemoaned the impact on artists displaced by AI-created images. I resented the work of authors being downloaded to teach AI how to write, essentially stealing our work without compensation. Was I ever naive! Today, objecting to AI is like trying to close Pandora's box after everything inside has already escaped!
I am not unaware of the potential AI represents, but I am concerned that its positive potential was recognized and rushed forward without an understanding of the full impact. Clearly, positive possibilities are still being discovered, while others are already at work. What concerns me is whether AI has a conscience.
I cannot explain how all of that works, but as I understand it, the intelligence of humans, discovered and developed over more generations than I can imagine, is fed into the massive storage of AI. What once took researchers hours, months, or years to discover or create can now be accessed through an AI search with significant speed. The wisdom of generations has been downloaded.
Obviously, the benefits of that are enormous. However, the ethical impact was not carefully examined before this monster of human intelligence was set free. When you think about what it can do, it is difficult to decide where to start in controlling its potential power.
Once I was concerned about taking human work without compensation. That remains an issue, but now I realize that far more concerns exist. The more responsibilities are transferred to AI, the more important the issues become. To list a few: should AI be responsible for upholding human values like fairness, accountability, and safety? Before 'turning AI loose,' should we have built in ethical guidelines, risk management, safeguards against bias and unintended consequences, and accountability?
The implementation of AI is not just an American decision. Other nations are involved, and developing common rules and standards requires international cooperation. The rapid pace of AI development has outrun the speed of regulation, and defining and standardizing AI across the world requires nations to come together to establish not only ethical principles but also safety and regulatory agreements. Assuming that is accomplished, who becomes the watchdog with the authority to hold offenders to account?
Assuming that is settled, have we really considered whether humans might have created artificial intelligence with consciousness? Some would suggest that we don't even fully understand how our own intelligence works; lacking that knowledge, how can we control AI? We have already gone past the point of pausing to figure out the ramifications of AI before implementing it, and we are already benefitting from its positive uses.
There are, however, those who wonder if we have ventured into the world of the 1968 film "2001: A Space Odyssey." I will pause for now, but some are already looking ahead to ask whether we are moving too fast.