Artificial Intelligence outrunning CEOs, CTOs, Corporate Lawyers & Boards
The Year of Artificial Intelligence
Finally. This is the "Year of Artificial Intelligence". AI for short.
Uh oh.
The rapid advancements in AI are quietly disrupting business models and demanding corporate change. Change in operations. Change in software methodologies. All the while adding liability in hidden ways.
Isn't the excitement akin to the '90s, when the web captured everyone's attention?
Like the '90s, things are moving so fast that Corporate Boards, CTOs, CEOs and Lawyers need to do some deep thinking or they will be in trouble.
AI Assistants change coding forever
Let's take one example: AI Coding Assistants. In 1996, Microsoft released IntelliSense in its Visual Basic product. IntelliSense is autocomplete, but for code. What is autocomplete? You see it every time you type into a browser or a search bar: it guesses what you're typing and lets you accept its guess to save keystrokes. As a programmer, it's extremely helpful. But it won't really write code for you. It only completes the text you're currently typing, ensuring proper syntax.
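To make the distinction concrete, here is a minimal sketch of classic prefix-based autocomplete. This is a hypothetical illustration, not any vendor's implementation: it can only finish a word you have already started typing, which is exactly why AI assistants that generate whole blocks of code are such a leap.

```python
def autocomplete(prefix, vocabulary):
    """Classic autocomplete: return known symbols that start with what was typed."""
    return sorted(word for word in vocabulary if word.startswith(prefix))

# A tiny "symbol table" an editor might know about
symbols = ["print", "printf", "process_data", "parse_args"]

print(autocomplete("pri", symbols))  # ['print', 'printf']
```

Note what it cannot do: given `"pri"` it offers `print` and `printf`, but it will never write the loop you were about to type. That is the gap the new assistants fill.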
Recently, however, AI Coding Assistants like Microsoft's GitHub Copilot, Amazon's CodeWhisperer, and independent Tabnine, have taken this to a whole other level. They write out large chunks of code for you.
Andrej Karpathy, former Sr. Director of AI at Tesla and founding member of OpenAI, tweeted that "Copilot has dramatically accelerated my coding, it's hard to imagine going back to "manual coding". Still learning to use it but it already writes ~80% of my code, ~80% accuracy. I don't even really code, I prompt. & edit."
Wait. WHAT?
One of the best, most efficient programmers of our time doesn't even really code anymore? He has become a "Prompt Engineer"? And the result is that he DRAMATICALLY accelerated his coding? WOW. That tweet was written on December 30, 2022.
Viewed "only" 1.7M times at the time of this writing, I guarantee you that tweet has sold a ton of GitHub Copilot licenses. Heck, I personally got half a dozen people on Copilot by mentioning the tweet in private conversation.
CTOs need to consider the impact of Coding Assistants
"So what?" you say. "This is great, now my developers will be much more efficient." Yes. True. But the ramifications are deep.
If you are a Digital Agency, this impacts every aspect of your business model. Are your developers using it? If yes, are they admitting it? Do they have a second and third job? Did they quiet quit, while taking a second job?
IT teams have learned, since at least the iPhone, that there's a limit to how much they can dictate what employees do. If a tool is convenient, policy or no policy, employees will break the rules and use the tools they want.
Copilot has dramatically accelerated my coding...it already writes ~80% of my code, ~80% accuracy. I don't even really code, I prompt. & edit. — Andrej Karpathy
Which AI coding tool are they using? "Does it matter?", you ask? Yes. Yes, it matters a lot. Your CTO needs to drop everything and understand the tools. Play with them. See how they work. Read the critical articles. How does it impact Infosec? How about Accessibility?
AI is moving so fast that I must digress from the point for a moment. While searching for articles to link to in the last paragraph, I found out that someone, at least, did pay attention to accessibility. A GitHub R&D team, GitHub Next, released a super cool tool called "Brushes". It behaves like a Photoshop brush, acting on the code underneath with a swipe. Who could think up such a thing? Wow. The homepage of GitHubNext.com is filled with cool experiments they are running.
Information Security, legality and contracts
What might a CTO learn from her explorations? Copilot is allegedly writing insecure code. I thought Copilot had the upper hand because Microsoft owns GitHub. I figured they had the rights to use the GitHub corpus.
It turns out that Tabnine was also able to mine the same corpus. However, they chose a different approach. They foresaw considerable legal issues, issues GitHub is already dealing with.
They also highlight some potential quality issues with approaches like Copilot's. Check out Tabnine's code privacy page:
Tabnine’s generative AI only uses open-source code with permissive licenses for our Public Code trained AI model
They further assure the user that unlike so many of these AI projects we are naively using, "your code, and AI data are NEVER used to train any models other than private code models."
That's not all. They claim to be picky about the data they use to train their models. "Trained code is filtered to ensure quality and avoid outdated code, esoteric code, auto-generated code, and other edge cases. The model is updated regularly to capture recent developments."
Do CodeWhisperer and Copilot do the same? They might; I don't know. But CTOs need to be opinionated about which tool is used. Does the entire organization need to use the same tool? Or is it up to each developer's preference? In this case, Legal should be brought in. Is Copilot using the entire corpus? Can your organization get sued for using copyrighted code that Copilot produced? Should your attorneys be calling the AI providers and getting indemnification clauses inserted?
Shouldn't you have a project measuring the impact of these tools on your organization's velocity? If your team can go 10 times faster, do you demand that every developer use them? How about your projects in flight? Do you change your roadmap? Do you offer clients money back for coming in under budget? Or do you book a profitable quarter? How about the next project? Do you stay silent until the client wakes up and realizes that either cost should go down or scope has to increase?
Every consideration the CTO of a Digital Agency might have goes double for an Enterprise Company.
These are earthquakes. Good ones, but earthquakes nonetheless. They need to be handled with intention.
Will I be charged for letting my vendor use my data for the next product?
That's not all. The AI is hungry. Hungry for training data. When you use their products, they want permission to use your data. And they're slipping it into the TOS. Even lawyers don't read the TOS before using a website.
What will happen to the Artists?
Et tu, Brute? The indignity of it all for artists. People are scared. Will clients still pay for art, or will generative AI replace them? That's why the TOS is key. There is speculation that Adobe will use its own users' designs to sell generative AI; Adobe Creative Cloud automatically opted all users into a "content analysis" program. In this case, I suspect it will turn out to be a false alarm. I don't think Adobe will risk upsetting its users this way. It's not worth it. There are other data sources. Time will tell.
AI will make personalized spear phishing and social engineering attacks ubiquitous
Infosec (information security, i.e. protection against hackers) will get much harder. Researchers are hard at work demonstrating that ChatGPT can be tricked into helping with phishing and spear-phishing attacks. This is obvious. I mean, there's a product out now called BHuman. It uses AI to help you create thousands of personalized messages for your prospects, all generated dynamically from a recording of you, realistically showing you saying whatever is edited into a script. How long until this is used to convince people the sender is someone else and get access to treasure?
If you're a CTO, how do you protect against it? Spear phishing and social engineering may get so good, that we'll have to be way more proactive to get a handle on it. I have an idea.
Chaos Monkey to the rescue?
How about we take a page from Chaos Monkey? Netflix wanted to increase its fault tolerance and become an anti-fragile system. To achieve that, Adrian Cockcroft, when he was the cloud architect at Netflix, created Chaos Monkey, a tool that would randomly take down servers and systems. No, this wasn't done to make employees angry enough to quit. It was to make sure Netflix's systems were not dependent on any one server or system. This became famous in the world of DevOps and Engineering.
I suggest that we train employees not to be fooled by phishing attempts and social engineering by doing the infosec version of Chaos Monkey. This exists to some degree with red teaming, a security assessment method that simulates an attacker's perspective to identify vulnerabilities in an organization's systems and networks.
Infosec firms should make red teaming an ongoing and more high profile activity. It should serve as a training exercise for all employees to get better at foiling spear phishing and social engineering attempts.
Employees should know that a red team is always trying to fool them, and that results will be collated and discussed whether they foil the ruse or fall for it. This isn't to be mean. It's the only way to raise awareness. Once every year or two won't be enough.
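The collation piece above can be as simple as a running log per employee. Here is a hypothetical sketch, not a real security product, of how ongoing drill results might be tracked so trends can be reviewed in training rather than used punitively:

```python
from collections import defaultdict

class DrillLog:
    """Collate results of ongoing red-team phishing drills
    so per-employee trends can be reviewed over time."""

    def __init__(self):
        # employee -> list of outcomes (True = foiled the ruse, False = fell for it)
        self.results = defaultdict(list)

    def record(self, employee, foiled):
        self.results[employee].append(foiled)

    def foil_rate(self, employee):
        attempts = self.results[employee]
        # None means the employee hasn't been tested yet this cycle
        return sum(attempts) / len(attempts) if attempts else None

log = DrillLog()
log.record("alice", True)   # spotted the simulated phish
log.record("alice", False)  # fell for the next one
log.record("bob", True)
```

Because drills run continuously, the interesting number is the trend in each foil rate across rounds, not any single pass or fail.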
How about copyright clearance?
Is the generative AI your employees are using, perhaps without your knowledge, clear of copyright? There are already a bunch of tools that create art out of thin air. In fact, Bryce Drennan created "imaginAIry", a GitHub project that lets you create and edit art from the command line! But some of these tools are getting sued by the likes of Getty Images and some artists.
Moving fast, breaking things, dealing with lawsuits and paying for lobbyists is an old VC playbook, so I suspect most of these companies will weather the legal storm.
That said, should Fortune 500 companies worry about getting roped into the expensive legalities? At the very least, they should modernize and tighten up all contracts: from employment contracts, ensuring employees are required to follow the company's guidelines, all the way to vendor contracts, making sure vendors aren't putting your company in harm's way.
I've spoken with executives at more modern public companies with a tech bent. They and their legal teams are paying attention. They can't sleep, worried about their business models getting disrupted, about all the things I mention in this article. But how about the executives at legacy companies? My sense is that many are not even thinking about it.
In the meantime, I'm diving into all these details in order to be ready to help executives and even boards navigate this new landscape. Companies will need a good solid audit and recommendations of what to do next. If there was ever a need for an innovation center and a Corporate VC strategy, it is now.
Joe Devon is a serial entrepreneur. He is the Chair of the GAAD Foundation and CEO of Yuda Digital, a consultancy that provides Fractional C-Suite Services, Digital Transformation and AI audits.