ChatGPT and the Drone Industry

Not long ago, a Drone Industry Insights team member received an email that read: “Do you find yourself in a race against the clock to spot accurate data and acquire deep market comprehension? See how AI + GPT can help you build a comprehensive market analysis at 10X speed.”

On the one hand, as a company that specializes in acquiring and understanding market data, this sounds like a great offer to expedite our work. On the other hand, looking beyond the initial excitement of such an easy win, one has to wonder: How exactly is ChatGPT going to acquire “deep market comprehension” at 10x the speed of a team that has been doing it for a decade? Let’s explore these issues more deeply.

The Promise of Artificial Intelligence

At this point, just about everyone has heard the promise of AI doing things "at 10x the speed," or the warning that AI will "replace them," or something in between. In these contexts, AI is presented either as a one-size-fits-all solution for every industry or as a threat that will make jobs obsolete. Of course, there are cases of both: amazing victories (e.g., AI-generated videos) as well as epic fails (e.g., Google Gemini's image generation). But by and large, artificial intelligence is presented as an existential threat to many things (including humanity itself) because it "knows everything."

This is precisely why the Drone Industry Insights team carried out a case study: to see just how useful a platform like ChatGPT could be to complement and facilitate the arduous and time-consuming process of analyzing and understanding the drone market.

Case Study: Using ChatGPT for Data about Drones

We tested three instances of ChatGPT: the generic (paid) ChatGPT, a customized DroneGPT, and an even more specific DII Data Explorer (focused solely on DII's research and data). We then ran the same series of four questions against each, requesting: 1) a direct factual answer, 2) ignoring the previous command and providing some general advice, 3) elaborating on that advice and providing facts to back it up, and 4) verifying the data and providing actionable advice. Since an in-depth discussion of each chat would require too much space, the analysis below focuses on the key takeaways. Here are the results.

1. To test ChatGPT's ability to provide factual data, we focused on drone stocks, which are often under-reported. The answers were generic and underwhelming. ChatGPT gave a response similar to a Google search, listing aviation and military drone companies, but only 3 of the 10 were proper drone stocks. DroneGPT also listed three proper drone stocks but included major corporations like Amazon and Alphabet, which aren't true "drone stocks." The best response came from the DII Data Explorer, which listed only drone stocks. However, this response relied on a pre-determined list made available to that GPT, so it provided no information beyond what it had already been given.

2. For the second task (providing advice for a commercial drone business), all three GPTs displayed ChatGPT's inherent problem with drone technology: a lack of data. Every response mentioned "revolutionizing," a word that appears in just about every reply a user gets regarding drone technology, and all of them named the same industries ("agriculture, construction, real estate, and logistics") as if these were the first or only ones where commercial drones are making a difference. Another issue all three had in common was a complete lack of sources or data to support the advice, which would normally be standard business practice.

3. The third prompt (elaborating on the advice and providing data) yielded the requested data, but brief searches quickly showed that the figures were likely hallucinations. ChatGPT provided the eye-catching claim that drones "reduce crop cost by 85%," which it attributed to the American Farm Bureau Federation. Further investigation showed that the source never actually published such a number.

4. The final prompt (verifying the data and providing actionable insights) revealed the full extent of the problem with using any of these instances of ChatGPT for this sort of inquiry. Not only did ChatGPT attribute different data to the same source, but it also failed to provide any legitimate URL or source to corroborate the data. Moreover, on drone regulation, all three GPTs offered the insight that "Understanding and complying with these regulations is crucial," yet none gave an actual source a user could use to understand or comply with any regulation (other than name-dropping "the FAA"). Finally, the DII Data Explorer (which performed best overall) claimed that "real estate agents using drones for property marketing saw a 68% increase in listings sold (DroneDeploy)." However, this figure was never published or reviewed by Droneii. It most likely derives from Drone Genuity (i.e., it was not DroneDeploy, so the attribution was wrong), and even digging deeper into the sources that mention this figure, no actual study is cited anywhere it appears (i.e., even the original source, and therefore its integrity, is unclear).
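For readers who want to reproduce a comparison like the one above, the protocol can be sketched as a small harness that runs the same prompt battery against several models and collects the answers side by side. This is a minimal sketch under stated assumptions: the model names and the `ask` function are placeholders (the original study used the ChatGPT web interface, not an API), and a real run would replace the stub with a call to a chat-completion API.

```python
# Hypothetical sketch of the case-study protocol: the same four prompts
# are run against each model, and responses are collected per model so
# they can be compared row by row.

PROMPTS = [
    "List the top publicly traded drone stocks.",
    "Ignore the previous request and give general advice for a commercial drone business.",
    "Elaborate on that advice and provide facts to back it up.",
    "Verify the data you cited and give actionable advice.",
]

# Placeholder identifiers, not the actual product names used in the study.
MODELS = ["generic-gpt", "drone-gpt", "dii-data-explorer"]

def run_battery(models, prompts, ask):
    """Return {model: [answer per prompt]} for side-by-side comparison."""
    return {model: [ask(model, prompt) for prompt in prompts] for model in models}

# Stubbed `ask` so the harness runs offline; swap in a real API call as needed.
def stub_ask(model, prompt):
    return f"[{model}] response to: {prompt[:30]}..."

results = run_battery(MODELS, PROMPTS, stub_ask)
```

Keeping the prompt list and model list as plain data makes it easy to add a fourth model or a fifth prompt without touching the harness itself.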

In conclusion, this simple case study of three different instances of ChatGPT revealed serious limitations when it comes to using ChatGPT to analyze and understand the drone market. Some may argue in favor of more sophisticated prompt engineering (which we have also tested, though without comparing the different GPTs), yet the root of the issue remains the same: even basic prompts lead to responses that both lack unique, reliable data and misrepresent the source and veracity of the data provided.

Machine Learning vs General [Market] Intelligence

Ultimately, the root of the problem lies in what is described as algorithmic learning vs “artificial general intelligence” (AGI). Algorithmic learning, particularly machine learning, involves AI systems designed to perform specific tasks by learning from data. These systems excel at identifying patterns and making predictions within a narrow scope, such as language processing or image recognition. In contrast, Artificial General Intelligence (AGI) aims to replicate humans' broad, adaptable intelligence, enabling AI to understand, learn, and apply knowledge across a wide range of tasks without being explicitly programmed for each one. While algorithmic learning is widely used today for specific applications, AGI remains a theoretical goal, representing the pursuit of creating machines with human-like cognitive abilities.

ChatGPT is an AI tool that generates text by recognizing patterns from the data it has been trained on (i.e. algorithmic learning). It can give clear and relevant answers to specific questions but doesn't have the broad understanding to make connections (without being told [how] to do so). This means it can't independently research or fully understand complex areas in the drone market, such as drone applications and how to use drones commercially. While ChatGPT can help summarize existing information, it can't analyze market trends [without taking already-existing analyses] or grasp the competitive landscape [without human guidance regarding what is considered a competitor]. This highlights the difference between specialized AI like ChatGPT, the ambitious-but-not-yet-existent general artificial intelligence that researchers aim to develop, and the role of subject matter experts.

Furthermore, ChatGPT doesn't create new information or determine when something should be excluded; it only uses what it has learned. This is a limitation when it comes to judging what is or isn't a "drone stock," or simply avoiding generic clichés such as "revolutionary" (as above). It also makes it harder to go into further detail or derive helpful advice when the initial answer is already questionable. Perhaps this will change if and when a true artificial general intelligence is developed, but in the meantime, it is simply a terrible idea to rely on ChatGPT for actual decisions that affect business strategy in the drone market.

Conclusion

Over the past couple of years, a slide from a 1979 IBM presentation has become increasingly popular: "A computer can never be held accountable. Therefore, a computer must never make a management decision." Many have taken the opportunity to coin a "modern version": "A computer can never be held accountable, so it is increasingly used to make management decisions." Once a computer can be blamed for bad decisions, the managers or leaders who relied on it are "no longer responsible" for the mistake, while the computer itself cannot be held accountable either. In short, it's a win-win for bad decision-making.

#chatgpt #artificialintelligence #droneindustry #research #ai

Emmanuel Fobi

Mechanical Engineer | 8+ years in Avionics & Electromechanical Troubleshooting | AI & Data Analytics Enthusiast | FSE Expert | Passionate about Semiconductor Materials, Robotics & Battery Tech

5 months

Interesting perspective.

Robert Wood

sUAS Pilot/Inspections/Photogrammetry/Thermography

5 months

Thanks for the amazing article. Finally a clear and sensible statement about the true state of current AI models. Highlighting these current model limitations is very important.

Liana Mansour

Experienced Partnerships & Client Relations Leader | Driving Growth & Innovation in High-Tech & Public Sectors | Strategic Project Management & Business Development

5 months

Very interesting article! It would be interesting to run a similar test about drone market data on waldo.fyi.
