My Mistakes for 2022:
Kyrtin Atreides
COO | Cognitive Architecture & Cognitive Bias Researcher | Co-founder
At the close of 2022, it is time to step outside myself and look back on my mistakes for the past year. I’ve found it helpful to preface any new year by better understanding the one left behind. Many mistakes seem obvious in hindsight, but integrating those lessons into forward motion requires clarity and focused intention. Holding oneself accountable in the eyes of colleagues is also a potentially helpful tool for reinforcing positive habit formation, as demonstrated in the research literature.
1. In my default mode of thinking, I made the mistake of applying a version of the classical (economic) theory of rational humans when anticipating the reactions of media and potential investors.
As is usually the case, few things fall further from the mark of rationality than a human. Media often show more interest in trivial technology and complete nonsense than in actual breakthroughs, because those stories circulate more virally than real news. They’ve adapted to their environment and to the algorithms governing how their stories circulate across each platform, and they demonstrate a strong in-group bias, assuming the catch-22 that anything newsworthy is either already being covered by someone else or comes from a source they’ve covered before.
Many investors rely on correlation and vetting-by-proxy, a bias where they only deal with connections to their existing network and assume those connections to be of higher quality than opportunities outside of it. Because that bias acts as the gatekeeper, it doesn’t actually matter how profitable, robust, or potent a startup is, or how strong the business materials it has prepared are. The gatekeeper has no concept of such things, biasing strongly in favor of networking-oriented extroverts and directly against many of the best founders. Thanks in part to survivorship bias, investors often fail to notice this.
By allowing my expectations to default to assumptions of rational reactions, I predicted both groups poorly.
2. I expected experts to apply their expertise.
In reaching out to experts in cognitive bias, AI, ethics, and related research domains, I discovered that only a small subset of such people actually put any effort into applying their own expertise in their daily lives. Those who do often reply within 1-2 hours, usually within no more than a day, and their responses stand in stark contrast to the rest. This aligns with Daniel Kahneman’s telling study in which the vast majority of statisticians failed to apply statistical thinking when put to the test.
As someone who has studied and published on cognitive bias, I'm guilty of making this mistake myself in the form of number 1 on my list.
3. I gave too many chances.
There is definitely room to tailor roles in a company to better fit the individual, but it is also easy to give someone too much rope with which to metaphorically hang themselves. In a startup environment, people require resilience to stress and the grit to persevere. When those qualities are absent in an individual, they won’t be a good fit for the company regardless of the role. The difference between people who can work in this kind of environment and those who cannot is night and day when stressors emerge.
It is best to draw lines early on, to prevent gradual drift from creeping over those margins.
4. I underestimated the appeal of shiny objects.
While tools like Stable Diffusion and DALL-E 2 were noteworthy advances, they also relied heavily on humans doing much of the work without realizing how much they statistically contributed to the process. These tools are still quite bad at 1-shot generation of quality material, usually requiring many rounds of experimentation and cherry-picking to reach reasonable results, but people easily forget this, as the process acts on the reward centers of the human brain much like “gamification”. ChatGPT demonstrated the same dynamics in text, even though it still can’t compete with what the Uplift research system accomplished as early as 2019, running on 64 GB of RAM and with orders of magnitude less human feedback. Many hailed ChatGPT as a breakthrough, but that couldn’t be further from the truth: it couldn’t compete on a single tested milestone with a system that cost roughly 5 orders of magnitude less, created by a bootstrapped startup on volunteered engineering time.
If such simple systems can prove so alluring to many, that sets the lowest bar for UX on the systems we're preparing now.
Numbers 1 and 2 were costly in terms of time, setting more ambitious expectations for public and investor interest than proved feasible. Number 3 was a bullet dodged. Number 4 is a worthy consideration for the UI/UX of the systems being prepared for commercial deployment in 2023. All of these were valuable lessons to integrate.
Looking forward, the most significant and predictable events for 2023 are on our engineering roadmap, even if most of those making predictions remain oblivious to it. If we’re able to locate even a single serious investor, that roadmap may be accelerated to substantial mutual benefit. If no such people exist, we’ll still deploy the technology, just at the same bootstrapped pace we’ve worked at over all of our previous years. If such would-be investors only appear after we’ve demonstrated the new commercial product in beta, they’ll pay 20x more at a bare minimum, if they get the opportunity to invest at all.
There are a few non-trivial variables for 2023, but the advances of narrow AI aren’t among them. What anyone can predict is that those who don’t evaluate where their expectations for 2022 fell short won’t (statistically) make very accurate predictions for 2023 either.
Every year or two I look back and see a different person, and in so doing I understand the progress that has been made. This serves as potent motivation to continue learning and evolving, to always strive to be more tomorrow than I am today.