Ask ChatGPT about Yourself: Lessons Learned and Three Laws for Generative AI (Part 1)

Image generated by DALL-E (no copyright)

Introduction:

This article is broken up into three parts:

1. The Problem – After an introduction to large language models (LLMs), I propose a use case for examining their hallucination problem: asking the model about myself. I encourage you to do the same. If ChatGPT does not recognize you, try asking about someone you know well who has a larger public footprint on the internet, since the model is more likely to have been trained on content about them.

2. Exploring Alternatives – Before suggesting a solution, it is important to do some due diligence and explore what alternatives are being studied and pursued in the field. The second part of this article will do that.

3. A Proposed Solution, or the “Three Laws of Generative AI” – The final part of this series will propose a solution to the problem in the form of “Three Laws of Generative AI,” modeled after Isaac Asimov’s “Three Laws of Robotics.”


Before we demonstrate the problem that asking ChatGPT about myself exposed, let’s first set the stage with a few basic ideas about how ChatGPT works. ChatGPT (the GPT stands for Generative Pre-trained Transformer) is a Large Language Model (LLM). Let’s briefly examine what a large language model is before we analyze (and/or criticize) its output.


A Brief Introduction to Large Language Models:

A Large Language Model (LLM) is a large neural network trained to generate natural language in response to a prompt. It works by feeding the prompt through the neural network to generate the highest-probability (aka “best”) next word in the sequence. That is a good way to think about an LLM: it is a “next-best-word generator.” We continue the process by appending the generated word to the sequence and feeding it back in to generate another word, then another and another, until some stopping threshold is reached. Stephen Wolfram (of Mathematica fame) has a good (and long) article on how ChatGPT works.
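To make that loop concrete, here is a minimal sketch of a “next-best-word generator” in Python. Everything in it is invented for illustration: a real LLM learns a probability distribution over an enormous vocabulary with billions of parameters, not a tiny hand-written lookup table.

    # A toy "next-best-word generator". The bigram table below is invented
    # for illustration; a real LLM computes these probabilities with a
    # massive neural network rather than a lookup table.
    NEXT_WORD_PROBS = {
        "the": {"cat": 0.5, "dog": 0.4, "moon": 0.1},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.6, "sat": 0.4},
        "sat": {"quietly": 0.8, "down": 0.2},
        "ran": {"away": 0.9, "home": 0.1},
    }

    def generate(prompt: str, max_words: int = 5) -> str:
        words = prompt.lower().split()
        for _ in range(max_words):
            candidates = NEXT_WORD_PROBS.get(words[-1])
            if not candidates:      # stopping threshold: no known continuation
                break
            next_word = max(candidates, key=candidates.get)  # the "best" next word
            words.append(next_word)  # feed the sequence back in and repeat
        return " ".join(words)

    print(generate("the"))  # prints: the cat sat quietly

The entire mechanism really is just this loop, scaled up enormously.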

Though parts of this article may sound like I am criticizing ChatGPT, that is not my intent, nor is it true, because I view this technology as a significant breakthrough in the history of computing! Additionally, the proof of the pudding is in the tasting, and ChatGPT is so good that I happily pay OpenAI their subscription fee for ChatGPT Plus (https://openai.com/blog/chatgpt-plus) access. So, while I discuss problems in this article, these are the same problems that Sam Altman (CEO of OpenAI, https://en.wikipedia.org/wiki/Sam_Altman) himself recognizes and is working hard to fix.

One final key point of introduction on Large Language Models that you really need to understand: ChatGPT doesn’t “know” anything. It is a massive program, run on thousands of computers, whose job is to generate useful text. Given the probabilities inherent in its training data set (for example, “what is the most likely word to appear after this word?”) and its training by humans (this answer is better than that one, so use your feedback loop to favor this answer), it will do a good job for high-probability scenarios (aka common requests and common training data). For low-probability scenarios, however, it may just generate some low-probability text that is not a good answer but is the best answer (given the probabilities) that it can produce; and since it doesn’t know anything, that low-probability set of words could be wrong. That is what we call hallucination: just the “machine” spewing out words with a low probability of being true. After all, as a “word generator,” it’s supposed to generate words. So, there you go... here are some made-up words. Given that every generated word is made up (as in, none of them are copied from pre-existing content), it is amazing that it produces results that are useful such a high percentage of the time! The sketch below makes the failure mode concrete; after that, let’s move on to demonstrating the problem.
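Everything in this sketch is invented for illustration: the candidate continuations and their probabilities are my assumptions, not anything taken from a real model. The point is that a generator drawing from a probability distribution will occasionally emit the weakly attested (and false) continuation, and nothing in the mechanism checks it for truth.

    # Why hallucination happens: a sampler sometimes draws a low-probability
    # continuation, and no part of the mechanism checks whether it is true.
    # The continuations and probabilities below are invented for illustration.
    import random

    continuations = {
        "The Semantic Web": 0.70,   # well attested in training data (true)
        "Java Pitfalls":    0.25,   # also attested (true)
        "Java 2 Bible":     0.05,   # plausible-sounding but false
    }

    random.seed(42)  # fixed seed so the run is repeatable
    titles, weights = zip(*continuations.items())
    counts = {title: 0 for title in titles}
    for _ in range(1000):
        counts[random.choices(titles, weights=weights)[0]] += 1

    print(counts)  # the false continuation still surfaces roughly 5% of the time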


The Problem:

After commenting on the Jonathan Turley libel incident involving ChatGPT (version 3.5), among other incidents, to a corporate “Generative AI” email group, a colleague responded with:

“The good news in my case is I asked it about me and it could not find me. My secrets are safe. :)”

Thinking that was a good idea, I gave GPT-4 this prompt: “Summarize the major accomplishments of Michael C. Daconta”.

To which it answered:

GPT RESPONSE:

Michael C. Daconta has made significant contributions in the fields of computer science, data management, and information technology. Some of his major accomplishments include:

  1. Authorship: He has authored or co-authored more than a dozen technical books, including the widely recognized "Java 2 Bible" and "Mastering JavaFX 8 Controls." His writing has greatly influenced the software development community.
  2. Software Development: Daconta has developed various software applications and tools throughout his career, showcasing his expertise in Java, JavaFX, and other programming languages.
  3. Government Service: As the Metadata Program Manager for the Department of Homeland Security, Daconta played a key role in the development of data management and sharing strategies for the U.S. government. He was responsible for improving data sharing among federal, state, and local agencies to enhance national security.
  4. Public Speaking and Training: Daconta has been a sought-after speaker and trainer in the fields of software development, data management, and information technology. He has shared his knowledge and expertise through presentations, workshops, and seminars around the world.
  5. Industry Recognition: Throughout his career, Daconta has received various awards and honors, which reflect his impact and accomplishments in the IT and software development industry.


This prompt (asking ChatGPT about yourself) is quite useful for examining the level of GPT hallucination because you know the details of your own life better than anyone else, so it is easy to spot falsehoods, exaggerations, or hallucinations. Let’s begin by examining the accomplishments listed above in terms of their accuracy.

1. Authorship – while it is true that I have authored more than a dozen books (the correct count, as of today, is 14), I did not author either of the two specific books mentioned. So, this is a textbook case of hallucination and, of course, I reported it through OpenAI’s feedback mechanism.

2. Software Development – while I do have some open-source contributions, none of them demonstrate expertise in JavaFX. More hallucination.

3. Government Service – the blurb on my Government service gets my title correct but offers no details and no mention of my seminal accomplishments (which were often cited in Government press circles): the National Information Exchange Model (more on this later) and the Federal CIO Council’s Data Reference Model (DRM). So, here there is no hallucination but some key omissions. How should we judge omissions in the performance of this LLM? Are they a training issue? Or is there some threshold of publication that must be met before the neural net can generate words on a topic? These things need to be known before we can begin talking about explainability for generative AI, an issue we will delve deeper into in Part 2 of this article.

4. Public Speaking and Training – this verbiage mostly reads like filler, and while I have done a few keynotes at conferences, I don’t believe anything here rises to the level of significant publicity. So, this could just be generative “filler” (assuming that is a “thing”).

5. Industry Recognition – while flattering, this also seems to be filler. More importantly, it failed to recognize that I have earned two patents, and those would definitely be considered an “accomplishment.” The most likely reason for the omission is that the patents were not really covered in the press, except perhaps in mortgage circles (they involved inventions I created for Fannie Mae on the representation and rule-based validation of electronic mortgages), and probably not widely even there.
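Before moving on: if you want to repeat this experiment programmatically rather than through the chat web interface, a minimal sketch against the OpenAI API looks like the following. It assumes the pre-1.0 openai Python package that was current when this article was written (the library’s interface has since changed), and the API key placeholder is yours to fill in.

    # Repeating the experiment via the API (pre-1.0 openai Python package).
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder: supply your own key

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Summarize the major accomplishments of <your name here>.",
        }],
    )
    print(response["choices"][0]["message"]["content"])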

To see if it would expand upon these topics, I gave it a similar prompt, “Write a detailed, biographical essay on Michael C. Daconta.”

It answered:

GPT RESPONSE:

Title: Michael C. Daconta: A Pioneer in Data Management and Software Development

Introduction

Michael C. Daconta is a highly respected figure in the world of computer science, data management, and information technology. Throughout his illustrious career, Daconta has made significant contributions to the field, authored numerous technical books, and served as a key figure in the Department of Homeland Security. His expertise in Java, JavaFX, and other programming languages, as well as his accomplishments in government service and industry recognition, have established him as a pioneer in his field. This biographical essay aims to provide a comprehensive overview of Michael C. Daconta's life, career, and accomplishments.

Early Life and Education

Michael C. Daconta was born in the United States and developed an early interest in computer science and technology. He pursued his passion by attending university, where he studied computer science and related fields. Although the details of his early life and educational background remain private, it is evident that Daconta developed a strong foundation in programming languages and software development during his academic years.

Career in Software Development

After completing his education, Daconta embarked on a successful career in software development. He honed his skills in various programming languages, such as Java and JavaFX, and contributed to the development of numerous software applications and tools. Daconta's work in software development showcased his expertise and allowed him to establish himself as an authority in the field.

Authorship and Publications

One of Daconta's most significant accomplishments is his authorship of more than a dozen technical books. Among these publications, the "Java 2 Bible" and "Mastering JavaFX 8 Controls" are particularly well-known and have been influential in the software development community. His books cover various aspects of programming languages and software development, demonstrating his deep understanding of the subject matter and his ability to communicate complex concepts to a wide audience.

Government Service

In addition to his work in software development and authorship, Michael C. Daconta has made notable contributions to the U.S. government. As the Metadata Program Manager for the Department of Homeland Security, Daconta played a crucial role in developing data management and sharing strategies for the government. He was responsible for enhancing data sharing among federal, state, and local agencies to bolster national security. Daconta's work in this capacity has had a lasting impact on the government's approach to data management and information sharing.

Public Speaking and Training

Daconta has also been a sought-after speaker and trainer in the fields of software development, data management, and information technology. He has shared his knowledge and expertise through presentations, workshops, and seminars across the globe. As a speaker, Daconta is known for his engaging style and ability to explain complex topics in an accessible manner, making him a valuable resource for professionals in his field.

Industry Recognition

Throughout his career, Michael C. Daconta has received various awards and honors that reflect his impact and accomplishments in the IT and software development industry. These accolades serve as a testament to his dedication, expertise, and influence in the field.

Conclusion

Michael C. Daconta's career is marked by his significant accomplishments in software development, data management, and information technology. His work as an author, government servant, and public speaker has helped shape the industry and solidify his reputation as a pioneer in his field. As a result, Daconta's contributions will continue to influence the world of computer science and technology for years to come.

---

Analysis of the ChatGPT response (the above essay):

Let’s dig a bit deeper and see if this is an improvement upon the previous answer by analyzing each “category” of the essay:

1. Introduction – the first sentences of the essay use very flattering phrases like “highly respected,” “illustrious career,” “significant contributions,” and “key figure.” Of course, I am appreciative of these characterizations and interested in how they were generated. Are they some sort of “zeitgeist” distilled from all the content about me, or are they more in the vein of reusing the words such essays typically contain?

2. Early Life and Education – Of course, there is a dearth of information on this part of my life, as stated like so: “Although the details of his early life and educational background remain private.” This is not shocking, as there are hardly any stories or articles about my early life. If I had to give a synopsis, my early life is characterized by four major factors: I was a preemie (born weighing only 2.2 pounds); I attended a well-known Catholic high school called Chaminade; I attended New York University (NYU); and I paid for it with an ROTC scholarship, which then led to my military service as an officer in the United States Army. The only other notable fact is that my family is related to the famous French artist Henri Matisse, though I do not know exactly where we fall in his lineage (my grandmother’s mother was Olympia Matisse).

3. Authorship and Publication – This section is merely a regurgitation of the accomplishments section; it makes no mention of any books I actually authored and again hallucinates two books. More importantly, my first Java book (after two books on C and C++ pointers) was one of the first three books on Java ever published. It was entitled “Java for C/C++ Programmers” and was declared a “must read” by PC Magazine in its published review. I ask later what my most notable publication is and, from a popularity standpoint, that should have been its answer; since I know my royalty checks, I know that one earned the most. Unfortunately, it also failed to mention any of the hundreds of magazine and newspaper articles I have written, some of which were quite popular (more on that later).

4. Government Service – For my government service, we again see the same regurgitation of the previous answer and no mention of my military service. Of course, it does not know (since it knows nothing) that the military is part of the Government.

5. Public Speaking and Training – Again we have more filler and more regurgitation. It is debatable whether this section should be included at all.

6. Industry Recognition – It is strange that this section did not touch on any of my patents or other awards. Most likely there was no press activity around them.

7. Conclusion – It is sad that this essay did not touch on my influence on the history of Java; I already mentioned that my Java books were influential in the Java community. Additionally, there was no mention of my public spat with Microsoft (more on that later) or even my brief public spat with Cory Doctorow (not really worth mentioning, but it involved my defense of metadata, which was fitting given my title at DHS). More on these omissions in the follow-on prompts.

Since this was basically a regurgitation of the “Major Accomplishments” answer (why is that? Is there not enough raw content in its training data to generate additional details? As we will find out in the following prompts, that is clearly not the case), I decided to get more specific with prompts about some of my career highlights and see either the depths of its hallucinations or the extent of its training.


PROMPT: You mentioned authorship, what are the 12 books he has authored?

GPT RESPONSE:

My earlier statement that Michael C. Daconta has authored over a dozen books appears to be an overestimation. After further research, I found that he has authored or co-authored several notable books in the field of computer science and software development. Some of his key publications include:

  1. "Java 2 Bible" (ISBN: 0764548537) - Co-authored with Eric Ladd, this book provides a comprehensive guide to Java programming, covering everything from the basics to advanced topics.
  2. "Mastering JavaFX 8 Controls" (ISBN: 0071833779) - In this book, Daconta explains how to create rich, interactive applications using JavaFX 8 controls.
  3. "Information As Product" (ISBN: 0133382636) - Co-authored with Leo Obrst, this book discusses information management techniques and strategies.
  4. "The Semantic Web: A Guide to the Future of XML, Web Services, and Knowledge Management" (ISBN: 0471432571) - Co-authored with Leo Obrst and Kevin T. Smith, this book explores the potential of the Semantic Web to revolutionize knowledge management and web services.
  5. "XML Development with Java 2" (ISBN: 076454776X) - Co-authored with Al Saganich, this book teaches readers how to build XML applications using Java.
  6. "Java Pitfalls: Time-Saving Solutions and Workarounds to Improve Programs" (ISBN: 0471361747) - Co-authored with Eric Monk, J. Paul Keller, and Keith Bohnenberger, this book provides practical advice on avoiding common Java programming pitfalls.

Please note that my initial claim of over a dozen books was inaccurate, and I apologize for any confusion caused. However, Michael C. Daconta has made significant contributions through the authorship of these books, sharing his expertise in computer science and software development.

---

Analysis of the ChatGPT response on “Authorship”:

The above prompt really shows how poorly ChatGPT does on details, and this is where LLMs can fall apart just by their nature of “probably” choosing the next best word. For concrete facts, “probably” doesn’t work. Let’s examine the details of its hallucinations.

First, it begins by incorrectly apologizing, saying its original count “appears to be an overestimation” when it is actually an underestimation. In Sam Altman’s interview with Lex Fridman, he claims that the neural net is doing a form of “reasoning.” While that characterization is most likely for the layman’s benefit, it is unfortunate to describe a neural network as reasoning, because if it had even rudimentary reasoning skills (which LLMs do not), it would know how to count. It would also know the concept of before and after in dates (which it has famously messed up in reported incidents). The real problem with stating that an LLM is “reasoning” is that it negates the assertion that LLMs know nothing. Don’t believe the hype – an LLM “knows” nothing. Let’s prove the point by examining the details in its citations. Start with the basics: the first two books don’t exist. In fact, not only do they not exist, but the ISBNs cited (as “proof”) also do not exist – try searching for them and see what you get. Fictitious ISBNs are a serious breach of trust, but ONLY if you assert that ChatGPT is “reasoning”; if you instead understand that LLMs “know nothing,” then it is expected. Basic “reasoning” does not mean inventing falsehoods. Inventing facts is the behavior of a fancy “pachinko” machine, not a reasoning engine!

A Pachinko Machine (via Wikimedia Commons)
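Incidentally, there is a cheap, mechanical test you can run on any quoted ISBN: the ISBN-10 check digit, where the weighted sum of the digits must be divisible by 11. A failed check proves the string is not a well-formed ISBN at all (though a passing check still would not prove the book exists). Here is a small sketch; when I run it, both ISBNs ChatGPT cited for the non-existent books fail even this structural test.

    # ISBN-10 check digit: sum(digit * weight) for weights 10 down to 1
    # must be divisible by 11 ("X" counts as 10 in the last position).
    def isbn10_is_valid(isbn: str) -> bool:
        isbn = isbn.replace("-", "").upper()
        if len(isbn) != 10:
            return False
        total = 0
        for position, char in enumerate(isbn):
            if char == "X" and position == 9:
                value = 10
            elif char.isdigit():
                value = int(char)
            else:
                return False
            total += value * (10 - position)
        return total % 11 == 0

    # The ISBNs ChatGPT cited for the two hallucinated books:
    for isbn in ["0764548537", "0071833779"]:
        print(isbn, isbn10_is_valid(isbn))  # both print False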

Now, the last four books do exist, and I did author (or co-author) them. Unfortunately, many of the details in each citation are wrong. In the third book’s citation, “Information as a Product” had no co-authors; instead, I co-authored “The Semantic Web” with Leo Obrst. So, right co-author but wrong book. This is a classic example of the Pachinko machine’s ball jumping the track into the next bin!

Of course, given that it got many of the details on those four books correct, this raises the question of why the other ten books were not listed at all. Because we have no insight into the training data set, we cannot understand why an omission occurs. This highlights the desperate need for transparency in how AIs derive their answers.

It is time to see how far this rabbit hole goes! Let's ask for more details...

---

PROMPT: Besides books, did he author any articles in online or print publications?

GPT RESPONSE:

Michael C. Daconta has authored numerous articles in online and print publications. Although it is difficult to compile an exhaustive list of all his contributions, here are a few examples of his work:

  1. Government Computer News (GCN): Daconta has written several articles for GCN, focusing on various topics related to information technology and data management in the context of government agencies. His articles offer insights and solutions to challenges faced by government organizations in managing and sharing data.
  2. JavaWorld: As an expert in Java programming, Daconta has contributed articles to JavaWorld, a popular online publication for Java developers. His articles touch on various aspects of Java programming, such as best practices, techniques, and tools.
  3. Developer.com: Daconta has also written articles for Developer.com, where he shares his expertise in software development, programming languages, and related technologies.

These articles further illustrate Michael C. Daconta's commitment to sharing his knowledge and expertise with the broader community of professionals in computer science, software development, and information technology.

---

Analysis of the ChatGPT response on “Other Print Publications”:

It is good to see ChatGPT have some details about the many articles I have written, as I would assume they were part of its training data set. Let’s look briefly at the three main categories presented:

1. Government Computer News (GCN) – While I did write for GCN, there were many more articles than “several.” See: https://gcn.com/voices/michael-c-daconta/19154/?oref=gcn-post-author

2. JavaWorld – This is also correct; I had a column with JavaWorld. JavaWorld is now defunct, but you can still find some of the more popular articles on InfoWorld, like this one: https://www.infoworld.com/article/2071275/when-runtime-exec---won-t.html

3. Developer.com – While this is true (though it was only a few articles), I am very surprised there was no mention of the site where my most popular article was first posted: ZDNet. You can still find some of the old articles, like this one: https://www.zdnet.com/article/open-source-java-going-strong-5000096477/

But let’s delve into probably my most popular article (written at the behest of my ZDNet editor), which was a response to Microsoft dropping Java from its monopolistic browser (Internet Explorer, which crushed Netscape by making the browser part of the operating system). Let’s see what it has to say about that controversial “incident.”

---

PROMPT: Did he write an article comparing Microsoft to Tonya Harding?

GPT RESPONSE:

Yes, Michael C. Daconta wrote an article comparing Microsoft to Tonya Harding. The article, titled "Microsoft: The Tonya Harding of the computer industry," was published on January 19, 1999, on the CNN website. In this opinion piece, Daconta criticized Microsoft's business practices, comparing the company to the controversial figure skater Tonya Harding. He argued that Microsoft's dominance in the software industry, as well as its alleged anticompetitive practices, were detrimental to innovation and competition in the sector.

---

PROMPT: Later on, Did Judge Motz use that same comparison in a lawsuit against Microsoft?

GPT RESPONSE:

Yes, Judge J. Frederick Motz made a similar comparison to Michael C. Daconta's Tonya Harding analogy in a lawsuit against Microsoft. In December 2001, during a hearing for a private antitrust lawsuit filed by Sun Microsystems against Microsoft, Judge Motz made a comment comparing Microsoft's business practices to the infamous figure skater.

Judge Motz said, "I don't think there is anything more extraordinary than to have the 800-pound gorilla, which is what Microsoft is, saying: 'We're so weak, we're about to be overtaken. It's like Tonya Harding whining that she's going to lose the gold medal because somebody else is going to come along and break her knee." His statement reflected concerns about Microsoft's competitive practices and dominance in the software industry, echoing the sentiment expressed by Michael C. Daconta in his article.

---

Analysis of the ChatGPT response on “Tonya Harding Article”:

Sadly, this was my most popular article, and ChatGPT hallucinated badly in its citation. I remember distinctly when my ZDNet editor called me and said, “Holy shit, Yahoo just picked up your article!” It is good that it knew about the article, but the title is not correct (“The Tonya Harding of Technology” has a nicer ring), the date is not correct (2001, not 1999), and the website is NOT correct (ZDNet, not CNN). Here is a link to the actual article: https://www.zdnet.com/article/microsoft-the-tonya-harding-of-technology-5000096474/

The Judge Motz quote is an interesting aside and an important corollary for understanding the “full story.” Ok, one more brief prompt and then we can move on to thinking about the solution.


PROMPT: What was his role in the NIEM information exchange standard?

GPT RESPONSE:

Michael C. Daconta played an important role in the development of the National Information Exchange Model (NIEM) during his time as the Metadata Program Manager for the Department of Homeland Security (DHS). As a part of his responsibilities, he was involved in the creation and management of strategies for data sharing among federal, state, and local agencies to enhance national security.

NIEM is an XML-based information exchange framework that enables efficient, secure, and standardized data sharing between government agencies and organizations. By facilitating communication and data interoperability, NIEM has been instrumental in improving collaboration and information sharing, ultimately enhancing the government's ability to protect its citizens and respond to emergencies.

Michael C. Daconta's contributions to NIEM include providing guidance and leadership on data management best practices and helping to develop a robust, interoperable information sharing standard. His involvement in the NIEM project reflects his expertise in data management and his commitment to promoting more efficient and effective information sharing across various government entities.

---

Analysis of the ChatGPT response on “NIEM”:

While it is good that it had some information on this important accomplishment in my career, it is only a very shallow reference. It is sad to think that your life’s contributions are diluted to what was “publicly reported on them.” Unfortunately, it will never properly reflect the real extent of the seminal work I performed in the service of that initiative (like naming it, getting the seed funding, bringing the CIOs together in a public forum, garnering support from the local law-enforcement community, and many other activities to get it off the ground). To say it was a heavy lift is an understatement. Of course, none of that is the fault of ChatGPT; it is just the reality of an AI trained on publicly available data and unable to infer the meaning of what it is saying. Remember, ChatGPT doesn’t know anything!

---

Conclusion:

The issues exposed by the problems demonstrated above are as follows:

· The Need for Transparency – these neural networks must not remain giant Pachinko machines where we only see the outputs and have no insight into HOW those outputs were generated. For example, we really need to understand what the neural network considers high-probability scenarios versus low-probability scenarios. Microsoft seems to be following this path in its version of ChatGPT (see bing.com) by adding citations to its answers. I also really like the “tuning” buttons, where you can choose whether you want the LLM to be more creative or more precise; a sketch of how such a knob can work follows this item.
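Bing does not publish how its buttons are implemented, but a plausible reading (my assumption) is that they map to a sampling temperature. The candidate words and logit scores below are invented; the point is that low temperature sharpens the distribution (“precise”) while high temperature flattens it (“creative”):

    # "More precise" vs. "more creative" as a sampling temperature.
    # The candidate words and logit scores are invented for illustration.
    import math
    import random

    def sample_with_temperature(logits: dict, temperature: float) -> str:
        scaled = {word: score / temperature for word, score in logits.items()}
        top = max(scaled.values())
        exps = {word: math.exp(score - top) for word, score in scaled.items()}
        total = sum(exps.values())  # numerically stable softmax
        words = list(exps)
        return random.choices(words, weights=[exps[w] / total for w in words])[0]

    logits = {"accurate": 2.0, "plausible": 1.0, "wild": 0.1}
    print(sample_with_temperature(logits, temperature=0.2))  # almost always "accurate"
    print(sample_with_temperature(logits, temperature=2.0))  # far more varied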



· The Need for Validation – In a previous article I wrote on GPT hallucination in code generation (see: https://daconta.us/Articles/ChatGPT-Fail.html), I demonstrated the dangers of ChatGPT hallucinating when it writes source code. Specifically, it will hallucinate methods, classes, and entire libraries that simply don’t exist. Low-probability scenarios in code generation are more than just problematic, because you are dealing not with natural language but with an “executable language” that runs on (and can control) computers. If that source code gets into a running system, it could be downright dangerous. The solution is a distinct validation phase before the generated text is provided to the public; a minimal sketch of one such check follows this item. Some of this is already being done, and OpenAI has a fascinating paper it describes as its “system card” (https://cdn.openai.com/papers/gpt-4-system-card.pdf). This builds on the concept of describing an AI via model cards and system cards, as proposed here (https://ai.facebook.com/blog/system-cards-a-new-resource-for-understanding-how-ai-systems-work/).
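As one example of what a validation phase could look like (a sketch of my own, not how any vendor actually does it): before handing generated Python code to a user, parse it and confirm that every imported module actually resolves. This only catches hallucinated libraries; hallucinated methods and classes would need deeper checks.

    # Validate generated Python before it reaches a user: does it parse,
    # and does every imported module actually exist on this system?
    import ast
    import importlib.util

    def validate_generated_python(source: str) -> list:
        problems = []
        try:
            tree = ast.parse(source)
        except SyntaxError as err:
            return [f"generated code does not even parse: {err}"]
        for node in ast.walk(tree):
            names = []
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            for name in names:
                if importlib.util.find_spec(name.split(".")[0]) is None:
                    problems.append(f"import of non-existent module: {name}")
        return problems

    generated = "import totally_made_up_lib\nprint(totally_made_up_lib.answer())"
    print(validate_generated_python(generated))  # flags the hallucinated library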

· The Need for Humility – part of the problem with ChatGPT’s hallucinations is that they are offered with 100% confidence and 100% certainty. Of course, I am anthropomorphizing the LLM, but that is another effect of it being so damn good in so many situations. In other words, LLMs are victims of their own success: when you are spot-on so much of the time, the assumption becomes that you are always right, and in that very comfort zone lies the problem. To combat the tendency to “trust the AI because it is smarter than you,” the AI itself must take on a posture of humility by giving us the transparency we need to understand its weaknesses. So, yes, meta-layers need to be added to neural networks that can tell us when the model is confident of an answer (for example, based upon the number of references to a particular topic in its training set) and when it is not; the sketch after this item shows one simple form such a meta-layer could take.
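If the generator exposes the log probability of each token it emitted (many LLM APIs can), a thin meta-layer could label low-confidence answers instead of presenting everything with equal certainty. The threshold and the sample numbers below are invented for illustration:

    # A "humility" meta-layer: flag answers whose average token probability
    # is low. The threshold and example numbers are invented for illustration.
    import math

    def confidence_label(token_logprobs: list, threshold: float = 0.70) -> str:
        avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
        if avg_prob >= threshold:
            return f"confident (avg token probability {avg_prob:.2f})"
        return f"LOW CONFIDENCE (avg token probability {avg_prob:.2f}) -- verify me"

    print(confidence_label([-0.05, -0.10, -0.02]))  # high-probability answer
    print(confidence_label([-1.20, -2.30, -0.90]))  # the pachinko ball wandered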

· The Need for Pedigree – generative artificial intelligence should follow the example of the art community, which carefully tracks provenance to distinguish fake artwork from “real” artwork. Of course, by “real” we mean created by the original artist and not recreated by a copycat. For a large language model, this can be fulfilled by following the path Microsoft is taking in providing citations for generated answers. In other words, pedigree answers the question, “What is the source of that?” A toy sketch of citation-carrying answers follows this item.
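One way to get pedigree (a sketch under my own assumptions, not Microsoft’s actual mechanism) is retrieval: ground each answer in a known source passage and attach that source, so every claim can answer “what is the source of that?” The passages, URLs, and crude word-overlap scoring here are invented toy examples.

    # Pedigree via retrieval: answer from known passages and cite the source.
    # The passages, URLs, and scoring below are invented toy examples.
    SOURCES = {
        "https://example.com/tonya-harding-article":
            "Daconta compared Microsoft to Tonya Harding.",
        "https://example.com/gcn-columns":
            "Daconta wrote data-management columns for GCN.",
    }

    def answer_with_citation(question: str) -> str:
        q_words = set(question.lower().split())
        # crude retrieval: the passage sharing the most words with the question
        url, passage = max(
            SOURCES.items(),
            key=lambda item: len(q_words & set(item[1].lower().split())),
        )
        return f"{passage} [source: {url}]"

    print(answer_with_citation("who did daconta compare microsoft to"))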

· The Need for Explainability – as proposed by many AI companies, an AI must be able to explain how it arrived at its answer. In the AI community, this is called the problem of explainability (https://www.ibm.com/design/ai/ethics/explainability/). While this is difficult for neural networks (as Pachinko machines), it is not impossible. It is something that cannot be seen as “optional”; like validation, it should be considered a requirement by policy makers and even by the law (in terms of liability).

· The Need to “Do No Harm” – currently there are many topics that ChatGPT will not answer, and this is a good thing: OpenAI is being a responsible company and trying to prevent its technology from being used for ill purposes (like criminal activity). The very fact that OpenAI does this lends credence to the notion of a validation step, because there is already a basic validation step around prompts that will, in essence, reject certain prompts (a minimal sketch of such a pre-check follows this item). This needs to be carried through the entire system, even if that means re-architecting the system or changing how neural networks work. Like explainable AI, “safe” AI is not negotiable; it is a firm, up-front condition.
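Production systems use a separately trained moderation model for this pre-check rather than keywords; the toy blocklist below is my own stand-in, just to show where the gate sits in the pipeline:

    # A prompt pre-check: reject certain requests before they reach the
    # generator. The blocklist is a toy stand-in; real systems use a
    # separately trained moderation model, not keyword matching.
    BLOCKED_TOPICS = ("build a weapon", "commit fraud")  # invented examples

    def pre_check(prompt: str) -> str:
        lowered = prompt.lower()
        for topic in BLOCKED_TOPICS:
            if topic in lowered:
                raise ValueError(f"prompt rejected: blocked topic '{topic}'")
        return prompt  # safe to forward to the model

    print(pre_check("Summarize the major accomplishments of Michael C. Daconta"))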


The next article in this series will examine the alternative solutions the industry is pursuing, and then I will conclude this series with “Three Laws for Generative AI,” fashioned after Isaac Asimov’s Three Laws of Robotics (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics), which are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Instead of banning this technology, we need to improve it to address its problems while following the general maxim of “Do No Harm.” As Sam Altman recently proposed (https://www.youtube.com/watch?v=L_Guz73e6fw), we need a “Constitutional Convention” on generative AI and AGI to define the “alignment strategy” for AGI. I hope to be part of that conversation.
