Desktop, Touch, Browser, Now AI? The Next OS in Computing
Remember the first time you touched a computer screen instead of typing commands?
We’ve lived through distinct epochs of human-computer interaction: the cryptic beauty of command lines, the intuitive dance of graphical interfaces, & the ubiquity of browser-based computing in the SaaS era.
It’s different now. When I manage spreadsheets, I don’t want to manipulate formulas anymore. Instead, I want to instruct the computer the way I’d explain a task to a colleague: “run the correlations on these variables to see if anything meaningful pops out, then plot it, & add it to the deck.”
When I parse research reports, I don’t want to read them line by line; I want to ask for summaries & surprising conclusions.
When I program computers, I don’t want to work at the level of lines & functions & arguments; I want to work at the level of the overall style of a webpage, like this blog’s theme.
When I read an email, I want to command an AI to follow up on it in a week, or to research the new prompt-engineering technique in this newsletter & send it to my Kindle so I can read it that night, or to compare the revenue multiples of healthcare-records companies to horizontal CRM.
So, I’ve taken to keeping an AI open on a separate monitor. That’s step one.
But I’m starting to run models on my laptop so I can fire all kinds of questions at its feet & watch it dance. Can it draft an email, or critique a blog post well?
Running Llama 3.3 on my computer is nearly the same as asking any of the major alternatives - more private & a bit slower - which for the moment works for me. It buys me time to read the AI’s output and reflect on this new working relationship.
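For the curious, the plumbing is simple. Here’s a minimal sketch, assuming the model is served through Ollama (one common way to run Llama 3.3 locally); the prompt is just a placeholder:

```python
# Minimal sketch: query a locally served Llama 3.3 through Ollama's HTTP API.
# Assumes Ollama is installed and the model pulled (`ollama pull llama3.3`);
# by default Ollama listens on localhost:11434.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3.3") -> str:
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete response instead of streamed chunks
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

print(ask_local_model("Critique this blog post draft: ..."))
```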
Talking to the computer through the transcription program I built (which I’ll share soon) reinforces the sense of collaboration.
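The program itself isn’t ready to share, but the core loop is small. A rough sketch of the idea using the open-source whisper package, not my actual code:

```python
# Rough sketch of the speech-to-text step, using the open-source
# openai-whisper package (pip install openai-whisper), not the actual program.
import whisper

model = whisper.load_model("base")          # small model, runs locally
result = model.transcribe("question.wav")   # any recorded audio clip
spoken_prompt = result["text"]

# The transcript becomes the prompt for the local model
# (ask_local_model is the helper from the sketch above):
print(ask_local_model(spoken_prompt))
```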
I can see a world where I’m not flipping between applications. Instead, I’m telling the AI to send an email or research multiples & that’s my interface - that’s my OS. Not the command line, not the desktop, not the browser.
The AI choreographs all that work underneath.
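To make the choreography concrete, here’s a hypothetical sketch of that dispatch loop. The tools are stubs and the routing is illustrative only; nothing here is a real product:

```python
# Hypothetical sketch of the choreography: the model maps an instruction
# to a tool, the loop executes it. Every tool here is a stub/placeholder.
import json

def send_email(to: str, body: str) -> str:
    return f"drafted email to {to}"       # stub

def research(topic: str) -> str:
    return f"research notes on {topic}"   # stub

def schedule_followup(note: str, days: int) -> str:
    return f"reminder set for {days} days out"  # stub

TOOLS = {"send_email": send_email, "research": research,
         "schedule_followup": schedule_followup}

def dispatch(instruction: str) -> str:
    # Ask the model to pick a tool and its arguments, as JSON.
    # (ask_local_model is the helper from the earlier sketch; a real system
    # would validate the model's output before executing anything.)
    plan = json.loads(ask_local_model(
        f"Map this instruction to one of {list(TOOLS)} and its arguments. "
        f'Reply only with JSON like {{"tool": "...", "args": {{}}}}.\n'
        f"Instruction: {instruction}"
    ))
    return TOOLS[plan["tool"]](**plan["args"])

print(dispatch("Follow up on this email in a week"))
```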
Would love to understand this workflow better. What are the actual tools you're using here? Is the idea to keep something like ChatGPT open in a tab and then switch over to it every time you have an AI task? How does it integrate with other tools and the OS? Maybe it all gets answered when we see your transcription program.
Multi-Asset Investor (Private Equity & Venture Capital Focused) | Investment Banker, Family Office & Board Advisor
I'm using Raycast, which allows me to make it part of the command interface. I know it's a keystroke away. Trying out Apple Intelligence, but not getting the results I hoped for.
CEO for Growth Companies
This feels even more transformative than my first interaction with the GUI of my Mac128. And I don't think it's hyperbole to view AI as the emerging OS in all forms of computing, from personal to enterprise.
Co-Founder and CEO @ Nobie | ex JP Morgan, Census
Agree. AI at the application level has started to change human behavior (the ChatGPT moment). But AI at the OS level is going to redefine how humans interact with software (and what they expect from it). I think the emergent properties and changes in human behavior from that sort of ambient/ubiquitous AI will surprise us.