Thoughts on Future Tech Impacts

As a third-generation mariner (my great-grandfather worked in lumber, breaking a longer streak), I've had many thoughtful discussions on the evolution of radar: salty old sea captains complaining about a younger generation that doesn't understand relative motion or how a ship actually works. I grew up in a transitional period where I learned paper mo-boards. (Mo-boards, or maneuvering boards, are a technique for tracking relative motion by plotting vectors on formatted paper, either to predict or force the distance, time, and angle at which you'll cross another ship, or to figure out which course and speed to sail for a proper wind envelope for aircraft.) Even on my first ship, we also had radars that would track multiple contacts and do the processing to give the same information. One takeaway from those many conversations was that the younger generation, who practiced more on the newer technology, could process more information quickly, but was worse at understanding the actual physics and at troubleshooting. I suspect we're going to see this effect frequently, as if we're all the younger generation now, as AI spreads beyond big companies using it with large datasets mostly for advertising.

Mo-boards take about 30 seconds to a minute, depending on complexity, for a skilled person to answer. Radar processing is effectively instant. Mo-boards, however, force thought about the problem at hand. We don't control the course and speed of other ships; those values are assumptions that their current values will stay constant into the future. What sometimes happens to people watching radars and accepting answers without doing the work is that a ship suddenly changes course or speed, or an input value that was previously missing gets captured, and suddenly there is a risk of collision with little time to react and limited situational awareness. Machine learning algorithms have incredible predictive power, but in what situations will we find ourselves scrambling because the prediction is strange, or because the suggestion has collateral impacts that make it unreasonable to follow?
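The calculation both the mo-board and the radar's tracker perform rests on the same constant-velocity assumption described above. As a minimal illustration (not any particular radar vendor's algorithm), here is a sketch of the closest-point-of-approach math; the function name and the flat-plane, nautical-mile/knot units are my own simplifying assumptions:

```python
import math

def closest_point_of_approach(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (hours_to_cpa, distance_at_cpa), assuming both ships hold
    course and speed -- the same assumption a paper mo-board or a radar
    tracker bakes in. Positions in nautical miles, velocities in knots.
    """
    # Work in the relative frame: target position and velocity minus own
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]

    v2 = vx * vx + vy * vy
    if v2 == 0.0:                    # no relative motion: range never changes
        return 0.0, math.hypot(rx, ry)

    # Minimize |r + v*t|^2  ->  t = -(r . v) / |v|^2
    t = -(rx * vx + ry * vy) / v2
    t = max(t, 0.0)                  # a negative t means CPA is already astern

    dx, dy = rx + vx * t, ry + vy * t
    return t, math.hypot(dx, dy)

# Target 10 nm due north heading south at 10 knots; own ship stopped.
t_cpa, d_cpa = closest_point_of_approach((0, 0), (0, 0), (0, 10), (0, -10))
print(t_cpa, d_cpa)   # 1.0 0.0 -- collision course in one hour
```

The moment the other ship maneuvers, the inputs to this function are stale and the answer is wrong, which is exactly the failure mode described above: the machine's output was never more than a projection of assumptions.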

Lots of other thoughts on this subject, such as:

- System reliability and interdependencies getting more fragile as chains get longer on the back of better predictions, making it impossible to move away from certain technologies (people often overestimate probability in series: 70% success three times in a row is only about 1/3 successful as a system, which feels weird)
- Biases and discrimination shielded by ambiguity
- What bootstrapping infinitely looks like in generative AI
- What generative AI will do when fully employed in election cycles
- Changes in attention, with how people consume media becoming massively shorter as reading becomes even more skimmy
- The effects on policy of not knowing causative factors for a prolonged period
- Liability and accountability for AI-caused deaths, both intentional and accidental
- Massive positive and negative labor shocks
- Massive swings in shifting resources and production
- Consolidation of wealth
- Control of the systems and resources that let people innovate
- Feedback loops learning to placate or construct figurative walls instead of informing
- People getting super boring as they follow contextless instructions for repetitive tasks

I'm actually really pro expansion of machine learning and AI. Solutions to problems that have been sticky for years will be solved at a rapid pace. I'm excited to navigate the changes and chaos of the next 20 years. Hopefully we can get the policies, investment, and inventions more right than wrong.
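The series-probability point is easy to verify and worth making concrete, since the falloff surprises people. A quick sketch of how per-step success rates compound across a chain of independent steps:

```python
# Independent steps chained in series succeed end-to-end with the product
# of the per-step probabilities, so 70% per step is ~34% at three steps --
# the "about 1/3" figure above -- and collapses fast after that.
p_step = 0.7
for n in (1, 2, 3, 5, 10):
    print(f"{n:2d} steps in series -> {p_step ** n:.3f}")
```

At ten chained 70% steps the system succeeds under 3% of the time, which is why long dependency chains built on merely-good predictions get fragile.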
