A Tale of Two Priorities

“Those who cannot remember the past are condemned to repeat it” - George Santayana


A Lesson from History

In the mid-1990s I worked in IT for a county government, and we embarked on a project to implement Internet-connected computers in a collection of schools.

In a time of transparency sheets and overhead projectors, we were installing switched 100Mb Ethernet in every classroom and bonded T1s (at a whopping 1.5 megabits each) for the Internet. It was groundbreaking, and we had to make up most of the software infrastructure as we went.

It was an achievement featured in local news, and we became advisors to school districts in many areas on how to make it happen, from funding to architecture. And then we let people actually start using it.

We gave people unfettered access to the Internet at a point when there were no local regulations for using it in a school, and when governance of content within the Internet itself was lacking. We had our hands full.

The Internet was stocked with easily accessible content ranging from inappropriate to illegal, and I recall a discussion with our leadership that outright blocking content might be a violation of rights because the infrastructure was taxpayer funded. We resorted to monitoring only, notifying offenders that we knew what they were doing and would impose appropriate punishments.

In addition, we began seeing issues with theft and vandalism of technology. Cameras were installed, and when reviewing footage to find perpetrators, we also found evidence of students using stairwells and classrooms for more familiar activities, something we had not considered. As a result, we had to make the camera installations more obvious and the signage more visible.

Lastly, many people were using the technology for the sake of using the technology, not for curriculum advancement. In response, we developed suggestions for enhancing class subjects and began training sessions to teach the teachers (many of whom did not want to use their new computers at all) about computers and the Internet.

By jumping in head-first to innovate before establishing guidelines and thoroughly developed use cases, we saw firsthand why technology and governance must advance in proportion to each other.

We implemented the technology because we could: there was funding available, we knew enough to be dangerous, and we wanted to do something good.

We also entirely underestimated what would happen when the public was exposed to general content.

The lesson is that if we had guardrails on the technology, appropriate legislation, use cases for deployment, and training for the users, we could have avoided many of the complications we encountered.


Where are we now?

People are far from being technophobes; in fact, they're at the other extreme of the spectrum. Combine that obsession with human competitive nature and the current "AI race," and I feel like we're right back in the mid-90s again.

The push toward AI has exploded because of the transformer model architecture: low supervision, easy accessibility, impressive generative capabilities, and a natural language interface.

All of which, as with computers in the classroom, can be beneficial or detrimental.

The most significant risk, though, is when AI is both.

AI models designed to help approve need-based loans that instead reject people in need. Models that can find new drug therapies for serious illnesses but don't factor in side effects that could be harmful. Or models that create content designed to educate but produce results that are inherently biased or inaccurate.

We also have the issue of competing priorities: everyone wants to make the next big announcement. Reach the next milestone. Be first. All of that is done through innovation. Without proper governance of the development, training, and use of those innovations, what is first to market could well end up creating a mess.

Even with AI governance, we're seeing potential contradictions. The new EU AI Act contains conditions for exemption as well as outright bans, and some public models today could easily fall into both categories.

Never has personal accountability been more essential: accountability to organizations, and to understanding the impact of our actions in a global context.

...and I don't expect that people can do this without appropriate guidance.

We are starting to see large AI companies band together to work on responsible AI development and use. There's a large focus on legislation for governance, on developing AI governance tools, and even on helping businesses self-evaluate their readiness for AI.

All of these efforts, coupled with some practical self-restraint, could be a wonderful path forward to productive, positive-impact AI innovation.


Every Advance Has Risks

This is well known and accepted. Balancing risk versus reward for AI is tricky because of how close to the benefit/detriment line so many AI use cases can sit.

So let's not forget what we've learned from the past. Let's not get so caught up with the desire to innovate that we forget to govern. And let's keep working to apply AI in ways that help serve business, personal, and community needs.

