A Tale of Two Priorities
“Those who cannot remember the past are condemned to repeat it” - George Santayana
A Lesson from History
In the mid-1990s I worked in IT for a county government, and we embarked on a project to implement Internet connected computers in a collection of schools.
In a time of transparency sheets and overhead projectors, we were installing switched 100Mb Ethernet in every classroom and bonded T1s (at a whopping 1.5 megabits each) for the Internet. It was groundbreaking, and we had to make up most of the software infrastructure as we went.
It was an achievement featured in local news, and we became advisors to school districts in many areas on how to make it happen, from funding to architecture. And then we let people actually start using it.
We had our hands full: we had given people unfettered access to the Internet at a time when there were no local regulations for its use in schools, and when governance of content on the Internet itself was lacking.
The Internet was stocked with easily accessible content ranging from inappropriate to illegal, and I recall a discussion with our leadership that outright blocking content may be a violation of rights because the infrastructure was taxpayer funded. We resorted to monitoring only and notifying offenders that we knew what they were doing and would impose appropriate punishments.
In addition, we began seeing issues with theft and vandalism of technology. Cameras were installed, and when reviewing footage to find perpetrators, we also found evidence of students using stairwells and classrooms for more familiar activities, something we had not considered. As a result, we had to make the camera installations more obvious and the signage more visible.
Lastly, many people were using the technology for the sake of using the technology, not for curriculum advancement. As a response we developed suggestions for enhancing class subjects and began training sessions to teach the teachers (many of whom did not want to use their new computers at all) about computers and the Internet.
By jumping in headfirst to innovate before guidelines and thoroughly developed use cases existed, we saw firsthand how important it is for governance to advance in proportion with technology.
We implemented the technology because we could - there was funding available, we knew enough to be dangerous, and we wanted to do something good.
We also entirely underestimated what would happen when the public was exposed to the Internet's unfiltered content.
The lesson is that if we had guardrails on the technology, appropriate legislation, use cases for deployment, and training for the users, we could have avoided many of the complications we encountered.
Where are we now?
People are far from being technophobes; in fact they're at the other extreme of the spectrum. Combine that obsession with human competitive nature and the current "AI race" and I feel like we're right back to the mid-90s again.
The push to AI has exploded because of the transformer model architecture: low supervision requirements, ease of access, impressive generative capabilities, and a natural language interface.
All of which, as with computers in the classroom, can be beneficial or detrimental.
The most significant risk, though, is when AI is both.
AI models designed to help approve need-based loans that reject the very people in need. Models that can find new drug therapies for serious illnesses but don't factor in potentially harmful side effects. Or models that create content designed to educate but produce results that are inherently biased or inaccurate.
We also have the issue of competing priorities - everyone wants to make the next big announcement. Reach the next milestone. Be first. All of that is done through innovation. Without proper governance of the development, training and use of those innovations, what is first to market could well end up creating a mess.
Even with AI governance, we're seeing potential contradictions. The new EU AI Act includes conditions for exemption as well as outright bans, and some public models today could easily fall into both categories.
Never has personal accountability been more essential: accountability to organizations, and to understanding the impact of our actions in a global context.
...and I don't expect that people can do this without appropriate guidance.
We are starting to see large AI companies band together to work on responsible AI development and use. There's a strong focus on legislation for governance, on developing AI governance tools, and even on helping businesses self-evaluate their readiness for AI.
All of these efforts, coupled with some practical self-restraint, could be a wonderful path forward to productive, positive-impact AI innovation.
Every Advance Has Risks
This is well known and accepted. Balancing risk versus reward for AI is tricky because of how close to the benefit/detriment line so many AI use cases sit.
So let's not forget what we've learned from the past. Let's not get so caught up with the desire to innovate that we forget to govern. And let's keep working to apply AI in ways that help serve business, personal, and community needs.