The Generative AI Revolution and Its Implications for Utilities and Beyond
Generative AI Generated Cute Fox! @Pixabay

The technical curiosity and the “experimentalist persona” in me got me hooked on ChatGPT from its launch in November 2022. ChatGPT (Chat Generative Pre-trained Transformer) and DALL-E (launched in January 2021) are generative AI tools developed by OpenAI, in which Microsoft has invested over $11 billion to date. I have been using generative AI extensively since its launch to understand it better and, most importantly, to figure out how we can apply it to solve business problems. The surrounding sensation and news coverage have created a genuine fad problem for CIOs and CTOs globally: corporates are rushing to write blank checks to tech companies to figure out how it can improve the bottom and top lines, universities and schools are encouraging students to use the technology to support their learning, heads of state are holding briefings with tech company CEOs, and most Fortune 500 CEOs are talking about AI implications for their workforce planning. The business wants it now and wants it fast. One reason is that the learning curve for generative AI is practically nil because the tools are so user-friendly. Our built-in ability to ask questions or be imaginative is all it takes to start using them. That user-friendliness makes the platforms super attractive: employees come pre-trained, since they are already using the technology on their personal devices, so extending it to their jobs faces minimal adoption impediments. This pleasantly resembles the onset of COVID, when the global workforce started to work remotely and CIOs suddenly became the most important people in the company, able to keep the entire workforce productive remotely, quickly, and safely.

To put things into perspective, the Google search trends below show how ChatGPT’s launch created a global sensation not just around itself but also around searches related to artificial intelligence, though the relationship is not a perfect correlation, let alone causation. ChatGPT has created a global interest in AI among the general population.

Google search trends for ChatGPT and Artificial intelligence

Additionally, these searches originated disproportionately from countries with some of the highest young labor-force participation, high per-capita internet penetration, and rapidly growing economies.

From a consumer-centric adoption perspective, ChatGPT earned the badge for the fastest climb to 100 million users among most popular consumer technologies launched in recent history, although Pokémon GO still holds the record for fastest adoption.

With its widespread engagement, ChatGPT carries a “consumer-centric” mystique, making it accessible and highly comprehensible to users. Like most new AI technologies, however, it exhibits errors, biases, and misinformation that reveal its weaknesses. Although the vertical growth trajectory is unsustainable, GPT remains one of the most significant technological advancements of the last 100 years. It reminds me of Neil Armstrong’s quote, “One small step for man, one giant leap for mankind,” which I would adapt to “one small step for mankind, one giant leap for AI.” It is capable of revolutionizing human interaction and information consumption. One of the challenges with AI has been the abstract nature of its inner workings, which demanded a higher level of understanding and thinking; with ChatGPT in the hands of consumers, that gap has closed, because consumers no longer need to know how the complex algorithms work, only how to use them. Remember, ChatGPT was fine-tuned using reinforcement learning from human feedback, which means the interactions and feedback of the people who use it can make future versions of the model smarter and more capable. The tool keeps improving as more and more users adopt it. Hence it is a blessing in disguise, because we may be building something that becomes more intelligent than mankind itself.

The launch of generative AI platforms like ChatGPT and Bard caused an average 25% drop in the valuation of some of the large education services providers, such as Chegg and Pearson. Google lost $100 billion in market value after Bard generated incorrect information in a demo. Adding to the woes, ChatGPT has proved to be an excellent alternative to Google Search, which commands over 85% of the global online search market. For the first time in my memory, Google’s search dominance faces a legitimate threat. The overall impact on cross-industry market valuations is still being assessed.

Let’s face it: the launch of ChatGPT and other generative AI technologies on the path to widespread adoption means the following, and this is not an exaggeration by any stretch:

It sets up a perfect storm: a mash-up of lesser-understood technologies converging at once.

  • A traditional chatbot or image composer on steroids that enables natural-sounding conversational interactions or becomes a quick alternative for creative brainstorming.
  • An LLM (Large Language Model) that performs advanced language-related functions such as generating text and images, reasoning, summarizing, and providing answers that resemble a human’s, minus the emotions.

It can hold an intelligent, emotionless conversation with its users, giving the appearance of understanding and processing the context of the conversation, in ways that:

  • Are engaging, insightful, and humorous; in some cases, interactions with ChatGPT can even appear wise and convincing.
  • Produce biased and incorrect responses, depending on the inputs used to train the model.
  • Create a psychological urge to believe responses that are factually incorrect, because the eloquent, expertly convincing conversational delivery makes them feel correct.
  • Fuel peer-to-peer marketing claims that the man-made algorithm can “comprehend” and is “knowledgeable,” a human framing of the technology that is highly deceptive.

It has massive market implications that will shake up several industries:

  • It offers significant potential opportunities but also entails disruption and risk.
  • With the advent of the Internet, freely available data and information can be consumed by these AI tools at will. ChatGPT was built on over 45 terabytes of internet data, which makes outputs from the AI tools questionable, given that the sources used to train the models cannot be fully trusted.
  • Laws govern the conduct of humans, and sometimes the machines that humans use, such as cars. But what happens when those cars exhibit human-like driving behaviors using AI tools integrated into the vehicle’s operating system? Who is responsible when an AI violates the law?
  • It will drive a completely new level of automation in the production of software and services.

As with every technology, there are advantages and disadvantages. A few top-of-mind considerations from an ethical and legal perspective include:

  • Contextless responses and false answers: the model predicts the next words statistically, without a true understanding of the content it outputs, and user feedback is not applied immediately to correct the output. Establishing controls with human supervision and review of outputs can alleviate some of these concerns.
  • Inaccurate training data yields biased, prohibited, and wrong outputs: data access, governance, and classification policies applied to the training data help debias the model and increase the accuracy of its outputs.
  • Increased risk exposure, whether reputational, legal, or financial: plagiarism, intellectual property, and copyright infringements may occur if proprietary or restricted data is used to train the tool, so strong policies and guardrails are needed in a business context.
  • Production of deepfakes that activists and bad actors can use to spread misinformation campaigns, fake news, personalized impersonation, false correlations, manipulative content, fraud, and abuse via false reviews, fake research papers, spam, and phishing.
  • Because training happens in the backend using proprietary and poorly understood algorithms and technology, there is a lack of transparency about how outputs are generated behind the scenes.
  • IT departments already focused on low-code/no-code initiatives will now have access to generative tools that can write code, and novice developers leveraging this generated code to build software products and services can introduce zero-day vulnerabilities that threaten the organization’s cybersecurity posture. Gartner predicts that by 2026, developers outside of IT departments will account for at least 80% of the users of low-code development tools, up from 60% in 2021, and that the global market for low-code development technologies will reach $26.9 billion in 2023, a 19.6% increase from 2022. The ability to communicate with computers in natural language opens application development to more people, many of whom have less understanding of the cybersecurity risks.
  • Because LLMs require immense processing power to ingest data and train, the necessary hyperscale infrastructure is available to only a select few companies such as Google, Microsoft, AWS, IBM, and Apple, which will result in a concentration of power.
  • Exposure of confidential and personal information on the dark net can pose significant cybersecurity risks for companies and the general public.

I have sufficiently established the good, the bad, and the ugly of generative AI tools. However, technological advancements like these have always led to a renaissance in industry. Consider electricity: Thales of Miletus observed static electricity around 600 BC, yet it wasn’t until the 1700s that Benjamin Franklin was credited with the discovery of electricity, and it took roughly another 150 years before Thomas Alva Edison’s invention of the practical light bulb in 1879 turned electricity into a commodity and sparked a global revolution in its everyday use. Similarly, the history of machine intelligence dates back to the early 1930s, when Georges Artsrouni designed his “mechanical brain,” and it has taken 90+ years for AI to become commoditized.

It is important to stress that when we talk about AI we are talking about learning; learning is what makes us, and things, intelligent. Only humans can distinguish what is morally and ethically good from bad, a fundamental difference between humans and everything else in the universe. Just as human intelligence learns from lived experience, generative AI is fundamentally dependent on the data used to train its models. Therefore, the first step for CIOs and business leaders is to ask a logical set of foundational questions so that sufficient risk awareness exists before jumping into generative AI. One may not have answers to all of these questions and more, but asking them generates the awareness that guides adoption of the technology.

  1. What is the starting point to bring this technology into the corporate environment?
  2. What data should feed the generative AI models, and how good is its quality?
  3. How to get started with a safe approach to drive adoption in a corporate environment without compromising the cyber posture?
  4. Where do we start adoption? Do we start small and gradually adopt? Should IT departments be the sacrificial lamb before allowing the business to use it? Or should IT and Business collaborate to create the adoption model?
  5. Should there be a dedicated team that can focus on driving AI adoption? To whom will the team report? How will the team be held accountable for their actions?
  6. Do we have good data categorization, classifications, and segregation policies?
  7. What are the data security implications of hosting the data in the cloud?
  8. What kind of computing and storage capabilities do we have access to?
  9. How effectively are our IT policies implemented, governed, and controlled?
  10. Which data can be used to train the generative AI model, and what risks does that amplify?
  11. Do we have the policies and controls to govern the AI tools if and when rolled out?
  12. How to monitor and control the usage of the tools within the organization?
  13. How do we distinguish between content that is AI generated vs. human-generated?
  14. How do we train the employees in using the AI tools the right way?
  15. At what point do we expand the adoption of AI? Should we consider a paywall scenario?
  16. If generative AI is used for software development, how do we differentiate AI-generated code from human-generated code and handle testing, vulnerabilities, and potential data leakages? What controls are in place to ensure that organizational data is not compromised while feeding it to the AI models?
  17. With most tech companies claiming some kind of AI tool integrated into their products and services, how do we know which tech partner can keep the data and IP safe and avoid third-party exposure?
  18. Should we invest in building the AI capability organically or inorganically? (I have often seen CIOs position data and analytics capabilities as AI competence; they are not the same, though they complement each other.)
  19. As with any innovative technology, how do we justify the initial investment in generative AI for adoption?
  20. What are the implications of plagiarizing and copyright violations?

The list of questions is endless, but one must start somewhere rather than trying to answer them all first.
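To make questions 6 and 10 above concrete, here is a minimal, hypothetical sketch of a classification gate that lets only approved data reach a generative AI pipeline. The labels, policy set, and document shape are illustrative assumptions, not any vendor’s API:

```python
# Hypothetical sketch: a classification gate deciding which documents may be
# fed to a generative AI pipeline. Labels and policy are illustrative
# assumptions, not a real product's API.

ALLOWED_FOR_AI = {"public", "internal"}  # assumed policy: no confidential/BCSI data

def filter_for_ai(documents):
    """Split documents into those permitted for AI ingestion and those blocked."""
    ok, held = [], []
    for doc in documents:
        label = doc.get("classification", "unclassified").lower()
        (ok if label in ALLOWED_FOR_AI else held).append(doc)
    return ok, held

docs = [
    {"id": 1, "classification": "public", "text": "Outage FAQ"},
    {"id": 2, "classification": "confidential", "text": "BCSI asset map"},
    {"id": 3, "classification": "internal", "text": "Crew handbook"},
]
permitted, blocked = filter_for_ai(docs)
```

The point is less the code than the prerequisite it exposes: a gate like this only works if the categorization and segregation policies from the question list already exist.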

Let’s look at some use cases, specifically for an integrated utility that manages all links of the energy value chain: origination, marketing & development, sourcing & integration, generation, transmission, distribution, and retail. In general, business processes where data is well organized and categorized are good candidates for generative AI.

  1. Land lease management – Large utilities engaged in power transmission enter into long-term land leases that involve settlements and annuity payments. For national grid operators, these can run into billions of dollars in payments and complex contract management. A large stash of lease data has already been digitized and stored in their systems; it can be used to train a generative AI model for easy access, conversational querying, new lease modeling, scenario planning, and more.
  2. Information search and constructive retrieval – Many utility functions, such as transmission and distribution line maintenance, power plant operations, and asset maintenance, are required by regulation to maintain detailed historical records of all changes and maintenance activities over the lifetime of an asset. The related documentation and record keeping can over time become a major business-process overhead and hurt crew productivity. Ingesting these records into a generative AI model lets crews access the content through conversational interaction instead of wading through complex software user interfaces.
  3. Remote Electricity Infrastructure Inspection and Monitoring (REIIM) – Reliability and emergency response are among the most important functions of an integrated electric utility. The uptime of the electric grid depends heavily on the uptime of the underlying transmission and distribution assets, so routine proactive and reactive monitoring is critical. Many utilities have already started using unmanned inspection vehicles such as rovers and drones for remote inspection and monitoring. Key inputs to this monitoring are imaging data and, in emergency response, weather data. One way to leverage generative AI is to train a model on historical weather data, imagery from past storms, and emergency procedures; based on the specific situation, the system can help dispatch unmanned monitoring vehicles early to address potential issues.
  4. Customer service – Customer service is a critical business function for a utility, and the goal of maintaining CSAT scores drives several strategic initiatives within the organization. Most utilities must retain historical customer information in their systems for regulatory and other reasons. That historical data can be used to train a generative AI model offered to representatives to improve their productivity, or as a customer self-service tool that engages customers better and potentially reduces the cost to serve. Such models can improve chatbot intent identification, summarize conversations, answer customer questions, direct customers to resources, and help them make decisions. Another subset of use cases is feeding voice-to-text transcripts of pre-recorded customer calls into the model for sentiment analysis and cross-correlation.
  5. Design generation – Generative AI tools like DALL-E and Firefly can generate designs for deploying solar panels and wind turbines based on the various terrain conditions, locations, and weather data ingested into the models.
  6. Emergency response planning – Climate change will increasingly create weather-related risks for utilities. Historical storm response information can be used to train AI models that help create emergency response plans for a given weather condition.
  7. Content augmentation, text manipulation, and generation – An organization’s content can be used to supply product or service recommendations, write product descriptions, manage schedules, summarize and compose emails and replies, draft common documents, and provide plain-language descriptions of specialized information and recommendations.
  8. IT Development and Support – One of the constant challenges for any CIO is creating a best-in-class user experience in IT support; it is an almost never-ending struggle, and ineffective IT support can make or break a CIO’s reputation, something I have witnessed in many of the tech leadership roles I have held. One way to improve the support experience is to give users the power to self-serve their IT issues. IT departments sit on a trove of knowledge articles and OEM help instructions that can be used to train a generative AI model and provide conversational self-service. In a recent ROI analysis we did, we simulated that 87% or more of issues could be resolved through self-service, which can significantly reduce IT support costs and improve users’ NPS scores. In software development, generative AI can generate, translate, explain, and verify code: converting requirements into code, converting code from one programming language to another, correcting erroneous code, and providing code explanations.
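As a toy illustration of the information-retrieval use case above, the sketch below ranks maintenance records against a crew member’s natural-language question using simple term-overlap (cosine) similarity. A production system would use embeddings plus a generative model; every record and name here is invented for illustration:

```python
import math
from collections import Counter

# Toy retrieval sketch (invented data): rank historical maintenance records
# against a crew member's question so the best match can be handed to a
# generative model for a conversational answer.

RECORDS = [
    "Replaced cracked insulator on transmission tower 47 after storm damage",
    "Annual substation transformer oil test completed, results nominal",
    "Distribution feeder 12 recloser firmware upgraded to latest version",
]

def tokens(text):
    return [t.lower().strip(".,?") for t in text.split()]

def cosine(a, b):
    """Cosine similarity between two token lists via term-frequency vectors."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def best_record(question):
    q = tokens(question)
    return max(RECORDS, key=lambda r: cosine(q, tokens(r)))

hit = best_record("What storm damage repairs were done on the transmission towers?")
```

Even this crude bag-of-words step shows why the article stresses well-organized data: retrieval quality is bounded by how cleanly the records are captured in the first place.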

Legal considerations and implications of AI tools are complex and will require advice from our friends in legal and regulatory affairs. The world of AI, and of robotics that leverages AI to complement human tasks, is an interesting situation because laws are made to govern humans, not “human-like” things. The legal and regulatory aspects of AI remain uncharted waters, and each legal situation only creates more questions.

  • Will AI be tried as a person or a thing? Who is culpable when a CEO uses information provided by his or her teams, who may have used AI tools to submit a proposal or generate content?
  • How will the legal system treat reinforcement learning? What if the AI-controlled traffic signal learns that it’s most efficient to change the light a couple of seconds earlier than previously done, but that causes more drivers to run the light and causes more accidents?
  • The prevailing legal framework holds a developer liable for interactions with software, like robotics, only where the developer was negligent or could foresee intentional harm. For example, in 2007 the state of New York did not find the defendant liable when a robotic loading system injured a worker, because the court found that the manufacturer had complied with regulations. In reinforcement learning-based AI systems, there is no human fault and no foreseeability of such an injury, so traditional tort law would say the developer is not liable. That reminds me of Terminator-like dangers if AI keeps proliferating with no responsibility, accountability, or governance.
  • Most global legal systems, developed over the last 1,000 years, are based on precedent, meaning laws are usually enacted or regulations passed only after an undesirable incident occurs, not in anticipation of one. Current laws will need to adapt to these technological changes soon. But what if legal scholars used generative AI to research and formulate the legal framework? It is unlikely that we will enter a dystopian future where AI is held responsible for its actions, given personhood, and hauled into court. That reminds me of the movie I, Robot, in which one AI system plans to kill a human while another AI-based robot tries to stop it. Such a future would assume that the legal system, developed over 1,000 years of common law in courts around the world, could adapt to the new situation of an AI. AI systems are by design artificial, so ideas such as liability or a jury of peers are essentially meaningless for them. But the question remains whether the AI should be liable if something goes wrong and someone gets hurt, and if so, who or what will be tried?
  • I have heard many CIOs and CEOs say that AI systems do not have feelings or emotions. There is a bit of irony here: if the day ever comes when AI systems truly have feelings and emotions, they will be little different from humans. Uncontrollable emotions and bad feelings are among the drivers that lead humans to commit crimes.
  • Many financial institutions use AI-based securities trading software to place their bets. If a broker acts on advice given by an AI system without any intention to cause financial distress, but the decision turns out to be adverse, how do we try the case, even though the broker merely used the AI system to make decisions? In general, we don’t regulate non-human behavior, such as that of animals, plants, or other parts of nature; snakes aren’t liable for biting us. Given the limits of the court system, the reality is that the world will need a standard for AI in which manufacturers and software developers agree to abide by general ethical guidelines, through a technical standard mandated by a treaty, a consortium, or international regulation. And that standard would apply only where it is foreseeable that the algorithms and data can cause harm.
  • The need to assess the quality of AI outputs, including the avoidance of harm to humans, will only grow as AI takes control of more and more hardware. Not all AI models are created equal: two models built for the same task by different developers will behave very differently. Training an AI can be affected by a multitude of factors, including random chance. AI-generated deepfakes pose a significant threat to personal identity; one can be falsely accused of wrongdoing while the burden of proof falls on the accused to prove innocence.
  • Standardizing the ideal neural network architecture or a safe reinforcement learning algorithm is difficult, as some architectures handle certain tasks better than others, and history shows that technology only evolves with time and adoption.
  • AI systems are purpose-built: switching from an AI designed to recognize art to one designed to understand text would require a complete or significant change to the underlying neural network. While an architecture standard could have benefits, many researchers think it would limit what they can accomplish, and proprietary network architectures would likely remain common even with a standard in place.
  • ChatGPT has already sparked intense debate among lawmakers and regulators, and some universal ethical code will likely emerge, conveyed as a technical standard for developers, formally or informally. However, an ethical code is no substitute for enacting laws that govern and enforce. We will most likely need some form of government intervention, which does not seem far off, considering that many CEOs, heads of state, security consortiums, and people of influence have raised alarms over how AI can affect the future of society.
  • In April 2023, Gizmodo reported that police in China arrested a man for allegedly using ChatGPT to spread a fabricated story about a train crash.

Emerging cybersecurity threats due to generative AI cannot be taken lightly. As many forward-thinking and me-first enterprises rush to adopt generative AI, the cybersecurity implications of these tools are far-reaching and complex. According to a recent Salesforce survey of roughly 500 senior IT leaders, 71% believe generative AI is likely to “introduce new security risks to data.” Using generative AI, threat actors can generate new and complex types of viruses, malware, phishing schemes, and other cyber threats that evade conventional monitoring and detection measures. Such attacks may have significant repercussions: data breaches, financial losses, and reputational damage.

  • Sam Altman, the OpenAI cofounder, characterized ChatGPT as “incredibly limited, but good enough at some things to create a misleading impression of greatness,” and strongly urges companies not to rely on ChatGPT for essential tasks. JPMorgan prohibits the use of ChatGPT in the workplace, and Amazon and Walmart have warned their staff to “exercise caution” when utilizing AI services in the execution of their duties.
  • Some of my CISO friends, myself included, have been scrambling to figure out how to safely and securely onboard generative AI tools into the enterprise IT landscape without risking the established cybersecurity posture. Many enterprises are still assessing the potential risks associated with such AI tools. Until we fully understand the impact of generative AI, enterprises can focus on awareness, education, processes, policies, ethical code-of-conduct training, and governance of these technologies. Monitoring is another consideration: user-centric behavioral monitoring and a zero-trust platform with anomaly detection can enhance threat detection, response, and mitigation, minimizing cybersecurity compromise from insider threats. One could also implement checks and balances to detect and eliminate bad or fraudulent content before it goes into training the models, rather than relying on an automated ingestion process; this method depends on context and existing cybersecurity controls.
  • In general, with the widespread adoption of generative AI, enterprises must adopt a proactive, comprehensive, risk-weighted approach to cybersecurity, given the evolving and complex nature of cyber threats. Regulated utilities with CIP assets must proceed with utmost caution when deciding to adopt generative AI; BCSI data must be secured and segregated with the necessary access controls to avoid accidental exposure to generative AI algorithms.
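One hypothetical guardrail along these lines is to redact obviously sensitive patterns from a prompt before it ever leaves the enterprise boundary for an external AI service. The patterns below, including the account-number format, are illustrative assumptions and no substitute for real data classification, DLP tooling, and human review:

```python
import re

# Illustrative guardrail sketch only: scrub a few sensitive patterns from a
# prompt before sending it to an external generative AI service. Patterns
# (including the ACCT- format) are invented assumptions.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\bACCT-\d{6,}\b"),  # hypothetical account-number format
}

def redact(prompt: str) -> str:
    """Replace each sensitive match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

clean = redact("Customer jane.doe@example.com, account ACCT-0012345, reported an outage.")
```

A regex pass like this catches only well-formed identifiers; free-text BCSI or confidential narrative still requires the classification and access controls discussed above.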

In closing, like any new technology reaching the inflection point on its adoption curve, AI has three potential trajectories: it can stay the course, take a nosedive, or bend exponentially upward. In my opinion, we will see AI adopted at an exponential rate, with generative AI fueling its growth. History has taught us how radical technologies can profoundly impact mankind. Start with the invention of the wheel, which completely changed how humans move; fast forward thousands of years and you will see the same pattern of radical transformations.

  • The introduction of fiat currency replaced the barter system outright and created the modern banking system as we know it today. Next-generation cryptocurrencies have already positioned themselves as an alternative to the fiat system.
  • The history of the automobile dates to the 15th century, when Leonardo da Vinci created designs and models for transport vehicles. It wasn’t until 1886 that Karl Benz patented a vehicle powered by a gas engine, and in 1908 Henry Ford took the automobile to the mass public, completely redefining the railroad industry and creating a new mass market.
  • In 1927, when television was invented, it changed the broadcasting and entertainment industry as we know it today.
  • When Frank McNamara invented the credit card in 1949, little did he expect that he would plunge Americans into nearly $1 trillion in credit card debt. It transformed the banking and financial services industry forever; while a blessing for the banking industry, it also created a debt spiral in which the entire world is stuck today, and trying to get out of that spiral will severely strain the financial system.
  • When Motorola demonstrated the first handheld cellular phone in 1973, it created a whole new industry and ecosystem that forever changed how humans communicate.
  • In 1983, when President Ronald Reagan opened the Navstar (GPS) system to civilian use, it completely revolutionized multiple industries; today it is an integral part of our daily lives, and it is difficult to imagine a day without GPS.
  • Sir Tim Berners-Lee invented the World Wide Web in 1989; little did we imagine the revolutionary force that would transform the entire world in so many ways.
  • When Karlheinz Brandenburg and his team developed the MP3 format in the early 1990s, little did he realize that it would revolutionize the music industry.

The above are just a few notable technologies that changed the course of human history when they hit the inflection points of their adoption curves. The common denominator is that none of them could come close to replacing human beings, because they could not learn and self-improve with usage, and that is where generative AI and its variants make the difference.

As a technology thought leader, I have never missed an opportunity to adopt a new technology that can transform business. I have always tried to challenge my teams and push the boundaries, and I have used AI/ML technologies since 2010 to solve business problems.

  • In 2015 we partnered with IBM to leverage its Watson Assistant for chatbots and conversational AI to augment customer service functions. We trained the model with over half a million recorded customer calls, converted from speech to text, and ingested the text into Watson’s learning model. Within months, IBM Watson was handling complex customer queries, and overall customer satisfaction with Watson-answered calls was higher than with the trial group of agents. However, the ROI on the investment was negative due to high upfront costs.
  • Flotek, a specialty chemicals company, changed its business model using IBM Watson AI; on its website the company describes itself as 'a technology-driven, specialty chemistry, and data company that serves customers across industrial, commercial and consumer markets.' Yet in early 2022, IBM sold off the data assets of its Watson Health unit to private-equity firm Francisco Partners. The two firms did not disclose the purchase price, but earlier reports pegged the value at around $1 billion, as customer demand and ROI never reached scale. A 2021 article in the New York Times explains why IBM Watson Health was a flop.
  • Another similar example came in 2016, when we beta-tested IPsoft's Amelia platform to train a conversational chatbot that could interact with customers over a chat interface on the internet. We soon realized that our data needed to be better structured and categorized for the ML and AI models to work. While the platform proved extremely convincing, almost like interacting with a real human, the high back-end data ETL and maintenance costs sank the business case.
  • In 2018 we used image recognition and AI-based learning to train a model to identify defects in finished products on the production line, feeding it over half a million images of what constitutes a defect. Within months, our defect rate at the customer dropped a whopping 70%, and our returns due to manufacturing defects fell by nearly half. There are many such examples where AI applied to point use cases has proven extremely beneficial.
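To make the defect-detection idea above concrete, here is a minimal sketch of training an image classifier to separate "good" from "defective" products. This is not the production system described in the anecdote; it uses small synthetic grayscale "images" (a bright blob stands in for a visible flaw) and a simple scikit-learn model purely to illustrate the supervised workflow of labeling images, training, and measuring held-out accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_image(defective: bool) -> np.ndarray:
    """Generate a synthetic 16x16 grayscale image, flattened to a vector.

    Good products are low-intensity noise; defective ones contain a
    bright 4x4 blob at a random location (a stand-in for a flaw).
    """
    img = rng.normal(0.2, 0.05, size=(16, 16))
    if defective:
        r, c = rng.integers(0, 12, size=2)
        img[r:r + 4, c:c + 4] += 0.8  # inject the "defect"
    return img.ravel()

# Build a balanced labeled dataset: label 1 = defective, 0 = good.
X = np.array([make_image(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

# Hold out 25% of the images to estimate real-world performance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

A real deployment would replace the synthetic data with labeled production photos and the linear model with a convolutional network, but the train/evaluate loop is the same shape.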


Imagine our kids learning in school without the need for a teacher. What will managers do if employees can consult an AI to do their jobs? What will happen to our libraries? How will human creativity change forever? The effect of mass adoption of AI by the public will be far-reaching, and it may be only a matter of time before AI overtakes human beings; it feels inevitable.

Please do not hesitate to send your feedback to [email protected].

Thank you!

Bebe Kanter

Founder of a business dedicated to real estate and environmental programs, specializing in sustainable housing solutions and eco-friendly practices in the industry.

11 months ago

Unfortunately, until DeSantis gets behind electric utility reform, Florida cannot participate in the VPP revolution.

A. Spencer Wilcox

Energy Industry Security, Technology and Risk Executive | CISSP, CISA, CPP, SSCP

1 year ago

Well written, Vishnu. AI hallucination will become a significant challenge, and detection of drift will be imperceptible until such time as it becomes intolerable. How do we begin to think about the design and definition of guardrails for detecting drift before it goes out of control? This seems a self-evident problem now, when results are Dalí-esque (Q: What is the square root of infinity? A: Potato), but when the results drift imperceptibly over time, how does one assess the moment where drift from "right-thinking" in an LLM began? How does one assess its decisions from that point forward so that one can remediate the thinking of the AI? How does one unravel the automated decisions from that point? I particularly appreciate your point about legal defense of those decisions. How does one unravel due-diligence obligations when one was dependent upon a machine to perform the due diligence, but the machine optimized itself out of the requirement? This promises to be a challenging process problem over the next several years. Your last point is particularly salient: as we approach the singularity, what is the impact on society? Let's not forget the human impact of this technology, which will outlast all of us.

Ganesh Parthasarathy

Client Partner - Technology & Business Relationship - Utilities Business Unit

1 year ago

Thought-provoking questions and a nicely written article on AI and GPT. Yes, it is better to start early rather than searching later for answers to all the questions. Congratulations on the journey.

Bill Murphy

I am a Difference Maker, who works with Difference Makers for the Purpose of Making a Difference. | Securing the Health, Peace of Mind and Lives of a Billion People.

1 year ago

Vishnu Murali, I like how you compared AI to technologies from the past. I do think we will need a Chief AI Officer. Possibly the first ones will be CIOs, CTOs, and CDOs.

Jason Linkswiler

CoResolute CEO | Driving CRM Transformation for Mid-Market Clients Cost Effectively | AI Managed Services | Workflow Optimization

1 year ago

Well written Vishnu, with great depth. We should catch up soon.
