Considering Technology Impacts

The excitement surrounding the promise of AI has real-world impacts far beyond how technologists can enhance their customers’ businesses. Just as we apply AI-powered solutions to day-to-day work issues, people are also attempting to solve massive societal problems with AI – and the results aren’t necessarily positive.

Lawyer and anthropologist Petra Molnar explored this theme in her book The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence, a dystopian story inspired by the real-life algorithmic technology deployed at the US-Mexico border against migrants, like the robot dogs the U.S. Department of Homeland Security began using in 2022.

As we move toward a world where life and death are determined by the algorithm, what should IT providers know about the wider impact of AI’s applications? I welcomed Molnar, who also runs the Refugee Law Lab at York University, onto a bonus episode of The Business of Tech to tell us more about her perspective.

From the growing business of border control technology to the risks of over-automation and the pros and cons of regulation, here’s what we talked about.

The AI Boom at the Border

Molnar has spent the past six years researching how technologies are playing out at the border and how private businesses are driving migration outcomes. I asked her to give an overview of how big an industry this has become, and she explained that technology now touches the entire immigration journey.

Even before people move, data analytics plays a role through social media scraping. At the border itself, surveillance is carried out with experimental technology like robodogs and AI-powered lie detectors. Once someone is in the country, voice reporting, immigration detention, and even algorithmic decision-making by the government are all at play.

How did technology adoption accelerate so quickly in this space? Molnar, who previously worked as a refugee lawyer representing migrants in court, summarized the boom as a reaction to problematic inefficiencies and never-ending backlogs.

“A lot of governments say, well, we need more efficiency. We need more technology to help us with our processes. But the thing is, the system in and of itself has problems. And you don't fix a broken system by putting a band-aid solution on it that actually exacerbates some of these issues,” she said. “But that's really where it comes from, this kind of need to control or manage migration that a lot of states feel.”

And when the private sector steps up and offers ready-made solutions, governments often opt for experimental technology over tried-and-true approaches like training more officers or hiring more lawyers.

The Problem with Over-Engineering

Everyone wants to solve the same problem, but Molnar believes we’ve reached the point of over-engineering and over-automating. How exactly are we missing the mark?

On a philosophical level, Molnar says it's about who gets to define what the problem even is:

“Right now, people on the move, refugees, people who are stateless, they're seen as the problem to be managed. And when that's the logic that animates what we innovate on and why, what we're really talking about is a massive power differential between the people who are on the move and exercising their internationally protected right to asylum,” she said.

As a result of this power imbalance, migrants have become a group that the government allows private actors to experiment upon with emerging technology. Automation, like robodogs and AI-powered lie detectors, is treated as an immediate fix for a really complex problem rather than one possible intervention amid the host of issues that surround migration.

What’s Missing From Regulation

Although the US is behind in terms of regulation (compared to the EU, for example), we still have plenty of compliance to worry about. So how is it that this sector has so few regulations in place to govern this kind of tech?

Molnar explained that borders have always been seen as a sort of lawless frontier, and businesses actually benefit financially from this mentality:

“When it comes to tech, right, there is not a lot of incentive to regulate a lot of this high-risk technology, because it is this kind of lucrative laboratory, you test stuff out at the border. And then you can say, well, okay, you know, it works there, let's implement it somewhere else,” she said.

Take, for example, those robodogs. After being introduced at the US-Mexico border, they went on to be used by the New York City Police Department. And as of now, there isn’t much regulation to protect the human beings they may impact:

“This stuff doesn't just stay at the border. It actually bleeds over into other spaces of public life. But we are seeing this massive gap in terms of governance and regulation. And if I put my lawyer hat on, it's a big problem, right? Because at the end of the day, we don't have a lot to pin responsibility for when things go wrong,” she said.

At the very least, Molnar is advocating for a societal conversation about no-go zones for this type of experimental technology. Are we really okay with robodogs in public spaces? Are we okay with facial recognition at the airport? If not, why not? Molnar believes we skipped a few steps in a conversation that we need to have.

Listeners know that I've been advocating for IT companies and solution providers to leverage work like what NIST is doing around ethical frameworks to help keep their customers out of trouble. When I asked Molnar why that isn’t enough, she pointed to what she calls ‘ethics washing’: no matter how much companies promote ‘AI for good,’ that mission statement is not actually enforceable.

Like me, she believes that the old adage ‘regulation stifles innovation’ needs to be retired:

“When you have regulation that allows you to actually think about the human impact of what you're doing and then work collectively, also with those who are affected and impacted by the tech, you actually end up with a much healthier ecosystem of innovation, and you can come up with projects and technologies that are actually beneficial to people.”

The State of Unregulated Border Technology

To illustrate just how much unregulated, experimental technology is influencing how people on the border are treated, I asked Molnar to share what her research has revealed on this front.

In short, it’s big business with little to no incentive to bring humans into the conversation.

She’s had the chance to attend a variety of conferences designed to connect the private sector’s tools and innovations with the government, essentially showcases of wares for sale. In her experience, military tech like tanks and machine guns and border tech like facial recognition are being conflated without much critical thought:

“One of the reasons why is because, again, it's big business. And you don't see human rights lawyers there. You don't see impacted communities. Even the media has been having a hard time getting into these spaces because they're seen as this kind of bilateral kind of space where a lot of deals get made, and a lot of money changes hands. And again, that kind of sets up what we innovate on and why.”

The business incentives, in her view, are establishing the norms of what we innovate on and why.

“We need to pay attention to that and how power operates in this space,” she said.

Can AI Solve Racism?

You might be wondering how AI can be used for good in this space. We’ve all heard the premise that AI may be able to root out discrimination better than humans, so I asked Molnar how true that is.

Technology-wise:

“At face value, it makes sense that AI would be replicating or maybe even creating biases that are already present in our world as it exists. But that doesn't mean that we can't kind of push for different types of models and different types of thinking when it comes to this kind of technology,” she said.

Philosophically:

“It ultimately comes down to who makes the decision of what the priority is, right? Maybe this is a way that technology or new ways of thinking can help us. But it can't happen again without the involvement of those who are impacted and affected. And also those who have training in human rights law and ethics and sociology and anthropology,” she said.

In an industry where people talk in such silos, Molnar believes we need to build new tables where different ways of thinking can come together.

The Takeaway For Technologists

What should someone in your position know about research like Molnar’s? In her words:

“Technology is not neutral. It replicates power in society. That's why building these bridges between different disciplines, different lived experiences is super important.”

In my words, understanding customer pain points and getting into the system remains the key execution piece.

How are you handling the high stakes of AI implementation? What do you think is missing from the conversation?

As always, my inbox is open for insights, stories, questions, and whatever else is on your mind.
