To Business Leaders of the Future,

The recent rise of generative AI tools such as ChatGPT, Google Bard, and Microsoft's Copilot has left many of us asking, "What's next?" Generative AI is built on deep learning, a subset of machine learning. In a nutshell, a typical machine learning algorithm gives a computer a predetermined goal along with a set of inputs. The computer then makes random input decisions to see whether they move it closer to the goal. If an input works, the computer repeats it on the next try; if it fails, it tries something new. This process is repeated thousands, if not millions, of times until the computer can successfully complete the goal. This video shows how machine learning works by training software to complete a level of Mario. Machine learning has become a staple of modern technology through applications like facial recognition, medical imaging software, and penetration testing.
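That guess-keep-or-discard loop can be sketched in a few lines of Python. This is a hypothetical, stripped-down illustration of trial-and-error learning (the function name and numbers are made up for the example), not how any particular AI product is implemented:

```python
import random

# A minimal sketch of trial-and-error learning: the "agent" makes a
# random change to its guess, keeps changes that move it closer to the
# goal, and discards changes that don't.
def learn_by_trial_and_error(goal, attempts=10_000, seed=0):
    rng = random.Random(seed)
    guess = 0.0
    for _ in range(attempts):
        candidate = guess + rng.uniform(-1, 1)  # try something new
        if abs(goal - candidate) < abs(goal - guess):
            guess = candidate  # the change helped, so keep it
        # otherwise the change is discarded and a new one is tried
    return guess

print(learn_by_trial_and_error(42.0))  # converges near 42.0
```

After enough attempts, the guess lands very close to the goal; the Mario example works the same way, just with button presses instead of numbers.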

The difference between traditional machine learning and the deep learning used by our new AI assistants is that while machine learning grows and develops through experience, deep learning systematically layers and categorizes information in a manner that mimics the neurons in the human brain. This means that information can be retrieved and even compiled into unique responses when the program is given a specific prompt. So when you ask ChatGPT how to fix your leaky faucet, it goes down a rabbit hole of information, gathers what is most relevant, and compiles it into easy-to-follow instructions within a matter of seconds.
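To make "layers that mimic neurons" concrete, here is a toy sketch of stacked layers in plain Python. The weights and inputs are arbitrary numbers chosen for the example, and real deep learning systems have millions of learned weights rather than hand-picked ones:

```python
import math

# One "layer" of toy neurons: each output neuron sums its weighted
# inputs plus a bias, then squashes the result into (0, 1) with a
# sigmoid activation, loosely mimicking a neuron firing.
def layer(inputs, weights, biases):
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Two stacked layers: the output of one becomes the input of the next.
hidden = layer([0.5, -0.2], weights=[[0.1, 0.4], [-0.3, 0.8]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[0.6, -0.5]], biases=[0.2])
print(output)  # a single value between 0 and 1
```

The "deep" in deep learning simply means many such layers are stacked, letting the system build up increasingly abstract categories of information.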

While all of this can be very useful, the nature of deep learning means that AI models come with various limitations and flaws that must be considered. Perhaps the largest challenge is that the training data for these models typically comes from massive internet archives containing both useful and harmful information. To train these models to understand multiple perspectives, diverse and contradictory information is typically not removed from the training data sets. While this helps the AI generate a vast range of useful knowledge, it also means that biases can come "baked in" and can result in incorrect or even harmful outputs. This is why the phrase "garbage in, garbage out" is used so frequently in regard to deep learning models. Feeding a model diverse ideas, opinions, and perspectives is part of training AI to develop humanlike intelligence, but in doing so, AI also inherits the flaws and fallacies of humanity itself. As individuals and businesses begin to trust and rely on deep learning AI models, the question that must be asked is "Who moderates AI training information, and to what extent can deep learning be trusted and utilized?"

The world's largest companies have become data hoarders. According to a 2019 article posted by Forbes, Google typically stores anywhere between 2 and 5 GB of data about each individual in the U.S. This includes your search history, your shopping habits, your location and map data, and more. What it doesn't include is your photos, your emails, your social media pages, or other information that you consciously put on their servers. Google's information alone is enough to develop a personalized profile of each individual, used for targeted advertising and recommendations. Imagine combining the billions of gigabytes of information that Google and other companies have collected about each of their customers. A deep learning algorithm trained on that data could essentially become an expert on every person with access to the internet.

You might be asking yourself, "So what? Who cares if companies know a ton of information about me? If I'm not doing anything wrong, I have nothing to worry about, right?" While we don't know what the future holds, we are only just beginning to understand the uses of deep learning AI models. A recent article from Bloomberg discusses the idea of a "social credit score" being piloted in China. Imagine a system where people who drive poorly, buy too many video games, or waste food at restaurants receive a lower social credit score. That score can be reviewed by landlords, travel companies, and college acceptance committees and used to determine "social worthiness." While this is admittedly a very drastic example of malicious data use, the data each person generates is already being used by companies in the U.S. for things like hiring decisions and talent management.

Companies like Findem are using AI to gather and analyze applicant data from across the internet to help recruiters make informed decisions about job candidates. AI tools can determine whether candidates are well suited to a specific position, and they can give recruiters metrics on diversity and previous employment. While these tools help companies better understand their candidates, they compile data and develop insights from a candidate's online footprint rather than their real-life interactions. The result is an image based solely on the potentially biased and polarizing information that candidates share and post online.

As we head into the future, deep learning AI models will inevitably become valuable tools for both personal and professional use. But being aware of data privacy and online biases is essential as we continue to debate the ethics of using AI in professional settings. My plea to the business leaders of the future is this:


Use and develop AI tools to give you insights into the truly important aspects of your employees, your customers, and your business, but do not forget to gather data through your own individual experience and draw insights and conclusions based on your unique individual perspective. Remember to step out of your office and your management meetings to visit the humans behind your business; they are more than a bunch of data in the cloud. Their values, their goals, and their potential are all meant to be experienced and shared with you, a fellow human being. As insightful as our AI tools can and will become, there will always be value in making decisions based on the interactions that you share with your workforce and customers. Those experiences will determine the "humanity" of your business and will be invaluable in facing the unknown challenges that the digital future holds.


Sincerely,

Adam Poll, MHR


Sources

MarI/O - Machine Learning for Video Games

Forbes: How Much Does Google Really Know About You? A lot.

Bloomberg: China's 'social credit' system ranks citizens

Findem
