About large-model-based AI systems, because why not?
The public availability of ChatGPT, DALL-E 2, and related or adjacent systems like Bing has set the news cycle on fire, and every company from A to Z is trying to incorporate large-model artificial intelligence into its products, regardless of whether these things are well suited to the task. If you want to write a sales pitch, brief article, or essay, ChatGPT can help and will even provide references. It will gladly argue any position you assign it, with supporting references, in a clear and concise manner. If you want to generate some artwork or a photo like the one above, DALL-E 2 is your tool. Bing's implementation, meanwhile, is very Microsoft: embrace and extend, like most of the company's history of building software. It has live access to the Internet and can search for information to support even a very complex query that requires multiple lookup and summarization steps.
This all sounds quite lovely, and marketing, sales, and research folk are tapping into these systems in droves. There is an inherent issue, though. These systems were trained on publicly available Internet content, which varies widely in quality, accuracy, and bias, and their outputs vary in much the same way. With DALL-E 2, the outputs can often be nightmarish, misogynistic, or copyright-infringing. With ChatGPT, the extensive disclaimers bolted onto it can't entirely hide or prevent the system's ability to produce biased or inaccurate content.
With Bing's live, Internet-connected extension of ChatGPT, this gets worse. As an example, I asked a fairly complex question with a numerical answer I already knew, but one that would require both researching publicly available Internet content in multiple places and solid math skills. It asked me two relevant questions it needed answered to do the work (even though one of them was already answered in my question), and after I provided those (one of them again), poof! It gave me a wrong answer. I could tell where it went astray, pointed out the issue, and asked it to revise its answer. It acknowledged the error, researched the necessary adjusting data, and recalculated. Poof! Wrong again. Again I could see what it was missing, pointed out the problem, and asked for another revision. It acknowledged the error once more, did the necessary research and calculations, and, poof! It finally spat out the right number.
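For readers who want to poke at this failure mode themselves, here is a minimal sketch of that ask / correct / retry loop using the OpenAI Python client. The model name, the sample question, and the hand-typed feedback are my own placeholders, not details from the exchange above.

```python
# A minimal sketch of the correct-and-retry loop described above, using the
# OpenAI Python client. Model name and question are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{
    "role": "user",
    "content": "Roughly how many gallons of fuel does a 747-400 burn "
               "flying JFK to LHR at typical cruise? Show your steps.",
}]

for attempt in range(1, 4):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    answer = response.choices[0].message.content
    print(f"Attempt {attempt}: {answer}\n")

    # In the exchange above, a human spotted the flaw each time; here the
    # correction is simply typed in by hand.
    feedback = input("Correction (blank to accept): ")
    if not feedback:
        break
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": feedback})
```

The key point the sketch makes is that the model never self-corrects: each revision only happens because a human who already knows the answer appends a correction to the conversation.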
It had to do it this way because it doesn't understand anything about the topic or how things work in the real world. People will absolutely anthropomorphize this thing, like the Google engineer who was fired for sharing chats and claiming the model was sentient. I'll agree that it's extremely smart and capable, and that it can simulate sentience very well. It really isn't sentient, though. Some have compared it to predictive cell-phone text software on steroids. It's more than that, but not by much.
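To make the "predictive text on steroids" comparison concrete, here is a toy next-word predictor (entirely my own illustration): it just counts which word most often follows which. Large language models are vastly more sophisticated, but the core move, predicting the next token, is the same.

```python
# A toy next-word predictor: count which word follows which, and always
# suggest the most frequent follower. LLMs do something far richer, but
# the fundamental operation -- predict the next token -- is the same.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word the model does not understand "
    "the world the model predicts text"
).split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict(word: str) -> str:
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(predict("the"))    # -> "model" (most common follower of "the")
print(predict("model"))  # -> "predicts"
```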
This simulated intelligence causes two very real problems. The first is like Tesla's "Full Self-Driving" Autopilot: people will trust it when it isn't actually capable of the task and hasn't earned the trust it has been given. Imagine training one as an FAA air traffic controller in an environment with no significant terrain or weather and perfect data. It would do just fine. Add terrain variations, weather, spotty radar returns, and poor communications, and things will get ugly very quickly. Handling edge cases well just isn't possible for something that doesn't actually understand the environment it operates in. Automated weapons systems are already getting these kinds of models, and there is much to fear in a very real, impending Skynet-style scenario.
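The edge-case argument can be made concrete with a toy experiment (my own illustration, not from any real ATC or Tesla system): a simple classifier trained on clean, well-separated data scores nearly perfectly there, then degrades sharply once the same measurements arrive with real-world noise.

```python
# Illustrative sketch of "trained in a clean world" breaking down:
# fit a classifier on noise-free data, then score it on the same task
# once the measurements are noisy and overlapping.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean "training world": two well-separated clusters.
X_clean = np.vstack([
    rng.normal(-2, 0.5, (500, 2)),
    rng.normal(2, 0.5, (500, 2)),
])
y = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X_clean, y)
print("clean accuracy:", model.score(X_clean, y))   # ~1.0

# "Real world": same labels, but noisy sensor readings.
X_noisy = X_clean + rng.normal(0, 2.5, X_clean.shape)
print("noisy accuracy:", model.score(X_noisy, y))   # noticeably worse
```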
On the flip side, these models can absolutely reduce the man-hours and costs of researching and creating content similar to what they were trained on. Will they be properly restricted to those purposes? Absolutely not. Bing is just the tip of the public-adoption iceberg as firms race to throw similar AI systems into every product they make. In my own company, we are exploring ChatGPT and DALL-E 2 to help produce some marketing content. Marketing and graphic-design professionals are doing that work not to replace any humans, but to supplement them so they can be more effective and efficient. That is the best use case I can see. Beyond that, getting a good answer to your queries is a complete crapshoot, but I'm sure someone will be using something similar to write school textbooks soon. Be careful, and good luck!
IT Director at Bloomfield Homes
I decided to use ChatGPT recently to help me work on a somewhat complex SQL MERGE script after our SQL guru departed for greener pastures a few weeks ago. It was actually quite useful in helping me write the necessary code and debug my own missteps. By useful, I mean probably ten times faster than my own reading, testing, and debugging would have been. One huge surprise came when I ran across a source record field that was larger than the target record field, which terminated the statement. I knew there was a command I could use to truncate that field's data to fit the smaller target (not usually good practice, but acceptable for this use case), but I wasn't familiar with it at all. After supplying my code and asking how to do this WITHOUT specifying the actual field, ChatGPT came back with corrected code for the right field. This wasn't some psychic trick or omniscience on its behalf; the field names were such that this one was more likely to contain long text than the others. Still, it was unexpected to get a correct answer to a question that didn't supply enough information for that answer. Marginal coders will be replaceable in short order.
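The fix the commenter alludes to is presumably something like T-SQL's LEFT() applied inside the MERGE. Here is a hedged sketch of that pattern; the table names, column names, 255 length, and connection string are all hypothetical, executed here via pyodbc.

```python
# A hedged sketch of the kind of fix described above: truncating an
# oversized source column inside a T-SQL MERGE so it fits the narrower
# target column. All identifiers and lengths are hypothetical.
import pyodbc

MERGE_SQL = """
MERGE INTO dbo.records_target AS tgt
USING dbo.records_source AS src
    ON tgt.id = src.id
WHEN MATCHED THEN
    UPDATE SET tgt.notes = LEFT(src.notes, 255)  -- truncate to fit NVARCHAR(255)
WHEN NOT MATCHED THEN
    INSERT (id, notes)
    VALUES (src.id, LEFT(src.notes, 255));
"""

conn = pyodbc.connect("DSN=warehouse;Trusted_Connection=yes")  # placeholder DSN
with conn:  # pyodbc commits on clean exit from the block
    conn.execute(MERGE_SQL)
```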