Challenges during AI implementation, and alternative ways of overcoming them as emerging solutions: a work in progress...
Why is artificial intelligence important?: AI is important for its potential to change how we live, work and play. It has been effectively used in business to automate tasks done by humans, including customer service work, lead generation, fraud detection and quality control. In a number of areas, AI can perform tasks much better than humans. Particularly when it comes to repetitive, detail-oriented tasks, such as analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors. Because of the massive data sets it can process, AI can also give enterprises insights into their operations they might not have been aware of. The rapidly expanding population of generative AI tools will be important in fields ranging from education and marketing to product design.
Indeed, advances in AI techniques have not only helped fuel an explosion in efficiency, but opened the door to entirely new business opportunities for some larger enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders to taxis, but Uber has become a Fortune 500 company by doing just that.
AI has become central to many of today's largest and most successful companies, including Alphabet, Apple, Microsoft and Meta, where AI technologies are used to improve operations and outpace competitors. At Alphabet subsidiary Google, for example, AI is central to its search engine, Waymo's self-driving cars and Google Brain, which invented the transformer neural network architecture that underpins the recent breakthroughs in natural language processing.
Augmented intelligence vs. artificial intelligence
Some industry experts have argued that the term artificial intelligence is too closely linked to popular culture, which has led the general public to hold improbable expectations about how AI will change the workplace and life in general. To counter these misconceptions, they have suggested using the term augmented intelligence to distinguish AI tools that support humans from AI systems that act fully autonomously.
Augmented intelligence -- also known as intelligence amplification, or IA -- applies machine learning and predictive analytics to data sets not to replace human intelligence, but to enhance it. Some researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that most implementations of AI will be weak and simply improve products and services. Examples include automatically surfacing important information in business intelligence reports or highlighting important information in legal filings. The rapid adoption of ChatGPT and Bard across industry indicates a willingness to use AI to support human decision-making.
Artificial intelligence. True AI, or artificial general intelligence (AGI), is closely associated with the concept of the technological singularity -- a future ruled by an artificial superintelligence that far surpasses the human brain's capacity to understand it or how it is shaping our reality. This remains within the realm of science fiction, though some developers are working on the problem. Many believe that technologies such as quantum computing could play an important role in making AGI a reality, and that we should reserve the term AI for this kind of general intelligence.
Ethical use of artificial intelligence
While AI tools present a range of new functionality for businesses, the use of AI also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.
This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because human beings select what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.
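This dependence on training data can be illustrated with a minimal sketch. The data, groups, and "model" below are entirely hypothetical: a naive classifier trained on historically skewed loan decisions simply learns to reproduce that skew.

```python
# Hypothetical sketch: a model inherits bias from its training data.
# A naive majority-vote classifier trained on skewed historical loan
# decisions reproduces the skew present in those decisions.

from collections import Counter

# Hypothetical historical decisions, skewed by past human bias:
# group "A" applicants were approved far more often than group "B".
training_data = (
    [("A", "approved")] * 80 + [("A", "denied")] * 20
    + [("B", "approved")] * 30 + [("B", "denied")] * 70
)

def train_majority_by_group(rows):
    """'Train' by learning the most common outcome per group."""
    counts = {}
    for group, outcome in rows:
        counts.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train_majority_by_group(training_data)
print(model)  # the learned rule mirrors the historical skew
```

No real learning algorithm is this crude, but the point carries over: if the training data encodes a biased pattern, the fitted model encodes it too.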
Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable, such as those in deep learning and generative adversarial network (GAN) applications.
Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.
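One crude way to probe such a black box is to perturb each input and observe how the output moves. The sketch below is a toy illustration of that idea (the model, feature names, and weights are all hypothetical stand-ins), not a real explainability method such as SHAP or LIME.

```python
# Hypothetical sketch: probing an opaque credit-scoring model by
# perturbing one input at a time, a crude local sensitivity check.

def black_box_score(income, debt, years_employed):
    """Stand-in for an opaque model; internals assumed unknown."""
    return 0.5 * income - 0.8 * debt + 2.0 * years_employed

def sensitivity(model, inputs, delta=1.0):
    """Shift each input by delta and record how the score changes."""
    base = model(*inputs)
    effects = {}
    for i, name in enumerate(("income", "debt", "years_employed")):
        perturbed = list(inputs)
        perturbed[i] += delta
        effects[name] = model(*perturbed) - base
    return effects

effects = sensitivity(black_box_score, (50.0, 20.0, 3.0))
print(effects)  # each value approximates that input's marginal effect
```

With thousands of interacting variables, as in real credit models, such local probes rapidly lose interpretive power, which is exactly why explainability remains a regulatory sticking point.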
In summary, AI's ethical challenges include the following:
1. Bias due to improperly trained algorithms and human bias.
2. Misuse of AI to create deepfakes and phishing content.
3. Legal concerns, including AI libel and copyright issues.
4. Elimination of jobs due to the growing capabilities of AI.
5. Data privacy concerns, particularly in the banking, healthcare and legal fields.
AI governance and regulations
Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, U.S. Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.
The European Union is considering AI-specific regulations. Meanwhile, the EU's General Data Protection Regulation (GDPR) places strict limits on how enterprises can use consumer data, which already constrains the training and functionality of many consumer-facing AI applications.
Policymakers in the U.S. have yet to issue AI legislation, but that could change soon. A "Blueprint for an AI Bill of Rights" published in October 2022 by the White House Office of Science and Technology Policy (OSTP) guides businesses on how to implement ethical AI systems. The U.S. Chamber of Commerce also called for AI regulations in a report released in March 2023.
Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of technologies that companies use for different ends, and partly because regulations can come at the cost of AI progress and development. The rapid evolution of AI technologies is another obstacle to forming meaningful regulation of AI, as are the challenges presented by AI's lack of transparency that make it difficult to see how the algorithms reach their results. Moreover, technology breakthroughs and novel applications such as ChatGPT and Dall-E can make existing laws instantly obsolete. And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals from using the technology with malicious intent.
What are your views, thoughts and opinions, buddies?