GitHub & Git for non-technical founders

If you've ever nodded along when developers mentioned "pushing to main" or "creating a pull request" while secretly having no idea what they were talking about, you're not alone. GitHub sits at the centre of nearly every tech project, yet it remains a mysterious black box for many non-technical founders. And that knowledge gap is costing you.

In this post:
- A step-by-step walkthrough below, in which Ben Tossell demonstrates setting up a GitHub project and using Git commands
- Real-world translations of the GitHub jargon developers assume everyone understands
- A step-by-step system for going from complete GitHub novice to confidently managing your tech projects
- The exact workflow successful non-technical founders use to deploy their first websites
- Critical mistakes that cause communication breakdowns between founders and developers (and how to avoid them)
- A practical reference guide with essential GitHub terms and Git commands explained in plain English

What is GitHub?

Let's simplify what GitHub actually is. Think of GitHub as a super-powered Google Docs for code. That's it. Google Docs lets multiple people edit documents simultaneously and keeps a history of changes. GitHub does the same thing for code, but with extra safeguards to prevent people from breaking things. Ever restored an old version of a Google Doc after someone made unwanted changes? GitHub does that for code, but with industrial-strength controls that make it nearly impossible to lose your work.

Why GitHub matters for non-technical founders
As a founder building with AI coding tools, GitHub fundamentals let you:
1/ Store and organize AI-generated code securely
2/ Track changes across multiple AI sessions
3/ Experiment freely without breaking what works
4/ Deploy independently without developer assistance

Version control systems like GitHub are what turn AI code suggestions into actual products you can deploy and control. The GitHub workflow for absolute beginners: https://lnkd.in/dzqr6xPp
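The beginner workflow described above boils down to a handful of commands. Here is a minimal sketch you can run in a terminal with a recent version of Git installed; the project file names, identity details and branch name are all hypothetical:

```shell
cd "$(mktemp -d)"                        # throwaway folder for this demo
git init -q -b main                      # start tracking the folder with Git
git config user.name "Founder"           # identity attached to your snapshots
git config user.email "founder@example.com"

echo "# My First Project" > README.md    # create a first file
git add README.md                        # stage: tell Git what to save
git commit -qm "Initial commit"          # commit: save a permanent snapshot

git switch -c landing-page               # branch: experiment safely
echo "<h1>Hello</h1>" > index.html
git add index.html
git commit -qm "Add landing page"

git switch main                          # main is untouched by the experiment
git log --oneline                        # history on main shows only the initial commit
```

On GitHub itself you would then connect this folder to a repository with `git remote add origin <url>` and publish your snapshots with `git push`, which is where the "pushing to main" jargon comes from.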
Ben's Bites
Online News
Your daily dose of what's going on in AI. In 5 minutes or less, with a touch of humour. Read by over 60,000 others.
About us
Your daily dose of what's going on in AI. In 5 minutes or less, with a touch of humour. Read by over 60,000 others from Google, a16z, Sequoia, Amazon, Meta and more.
- Website
-
https://www.bensbites.co/
External link for Ben's Bites
- Industry
- Online News
- Company size
- 2-10 employees
- Type
- Public company
Ben's Bites employees
-
Amie Pollack
Technologist
-
Ben Tossell
Founder at Ben’s Bites. Helping you use AI to work smarter, not harder. Follow for daily AI tips and tutorials.
-
Daniel Díez
Doing stuff @Ben's Bites & @Bidcrunch
-
Shanice Stewart-Jones
COO of Ben's Bites - The 115k+ AI Learning Community | AI Education for Founders, SMEs & Creators
Posts
-
Codeium's new Windsurf Tab upgrades next-line suggestions in the code editor with enhanced context from your terminal (errors and results), your clipboard (copied information) and Cascade (Windsurf's agent). They say it's for professional developers, but we like it all the same. Also, it's unlimited for all users (just a bit slower on free plans).
-
Gemini has two new features (again): Canvas and Audio Overviews. Canvas brings an Artifacts-like preview into Gemini for collaborating with AI on writing and coding. Since it's Google, the output can be exported to Drive and Google Docs. Audio Overviews bring the hype of NotebookLM, an AI podcast generated from your documents, to Gemini. Both features are available to all Gemini users. NotebookLM also got an upgrade: you can now create interactive mind maps from your documents. More info: https://lnkd.in/dA_A3GUR
-
On my mind… Do AI models make us better problem solvers, or just faster ones?

While watching a session from Steph Smith's Internet Pipes last night, I learned about the OFFSET function in a spreadsheet, a perfect solution to a common problem. It made me wonder: would AI have told me about this solution? In general, does AI help you get better at solving unseen problems?

So I ran a little experiment. I shared the problem with various AI models (without any prompting techniques) and asked them to help me fix it. Only o1 and o3-mini-high acted like real experts, immediately highlighting two common solutions (including the OFFSET function). Other thinking models (Claude, Gemini, o3-mini) did hint at OFFSET but buried the lede under much more complex answers; I had to act as the expert to find what was best for me. The rest weren't any help at all, just lists of overcomplicated answers.

This is a single instance, and prompting might change the results, but it does hint that the "reasoners" or "thinking models" are better teachers. And it matters because when you're vibe coding, you encounter new problems every day and end up in the "you don't know what you don't know" trap. So next time you feel like it, try a new chat with these models.

By Keshav Jindal
-
Google has been shipping lately, and the current batch of updates polishes the Gemini app. It has three major upgrades:

1/ Deep Research in Gemini is now available to free users too. It also has a new model underneath that does more than just summarize web articles: it tries to reason over them. The reasoning can still be hit or miss, but the vast number of sources it searches makes it stand out.

2/ That new model is an upgraded version of Gemini 2.0 Flash Thinking (still experimental). It has better performance and a longer context window, allowing it to answer complex questions and use multiple apps like Search, YouTube and Notes in a single go.

3/ Gemini has a new mode called Personalization. It asks to connect your Google Search history to Gemini and uses it when you ask questions. For example: when we asked a question about Slack's API, it looked at our search history, figured out that we were using it with Webflow's API, and tailored its answer accordingly.

More info: https://lnkd.in/dz2JX5wy
-
Two AI model launches flew under the radar. Both target different backbone tasks of modern AI applications and smoke the old competition.

1/ Mistral AI released a new OCR model. It turns scanned PDFs into typed text for easier use and processes 1,000 pages per dollar. https://lnkd.in/gsSE77uV

2/ Google released a new text embedding model to power RAG apps. It's still experimental, but it performs 10% better than other embedding models on the market. https://lnkd.in/gimca8AZ
-
OpenAI has three new tools for building agents. Developers can now use web search, file search and computer use. To make these easier to use, OpenAI has a new Responses API and an Agents SDK. The Completions API will stay, but the Assistants API's days are numbered, with retirement expected in 2026. More info: https://lnkd.in/ggRYFF_D
-
Google also released Gemma 3, its open LLM. It comes in four sizes: 1B, 4B, 12B and 27B. The 27B variant beats DeepSeek AI's V3 and can be trained further to surpass even DeepSeek R1 (the hyped one). Two important improvements in Gemma 3 are a) its context window, which now supports 128k tokens, and b) vision, i.e. it can understand images. More info: https://lnkd.in/gMWqfmkw
-
Google released native image generation for developers. If you've used any image-generation tool and felt like it was a little dumb, this fixes it. Native generation lets the same model generate both text and images (vs calling a separate model like DALL·E 3 or Imagen). It also enables natural, selective editing of images without changing the entire image. You can use it in Google AI Studio by changing the model to "Gemini 2.0 Flash Experimental". Plus, another upgrade in Google AI Studio: you can now share a YouTube link and ask questions about the VIDEO, not just the transcript. More info here: https://lnkd.in/g43Wu42f