Can Bad AI be Good?
This week we look at the fallout from the ChatGPT launch. How concerned should we be that ChatGPT is often wrong? Can we trust it? Should we use it? If so what for?
Contents
Editorial: Can Bad AI be Good?
Editorial
Can Bad AI be Good?
Shawn Purcell is head of data science at SignalRank. He sent me an article this week, written by Alan Kay. Alan was a leading innovator at Xerox PARC. He led or worked on graphical user interfaces, object-oriented programming and specifically the Smalltalk language. He conceived the “Dynabook”, an early concept close to that of a laptop.
Shawn’s brother Steve worked for Alan Kay at Parc, in the 1970s.
The article was written by Alan Kay - THE EARLY HISTORY OF SMALLTALK. It starts with a couple of incredible thoughts that are highly relevant to ChatGPT and the reaction to it today. I have bolded the key parts.
Most ideas come from previous ideas. The sixties, particularly in the ARPA community, gave rise to a host of notions about "human-computer symbiosis" through interactive time-shared computers, graphics screens and pointing devices. Advanced computer languages were invented to simulate complex systems such as oil refineries and semi-intelligent behavior. The soon-to-follow paradigm shift of modern personal computing, overlapping window interfaces, and object-oriented design came from seeing the work of the sixties as something more than a "better old thing." That is, more than a better way: to do mainframe computing; for end-users to invoke functionality; to make data structures more abstract. Instead the promise of exponential growth in computing/$/volume demanded that the sixties be regarded as "almost a new thing" and to find out what the actual "new things" might be. For example, one would compute with a handheld "Dynabook" in a way that would not be possible on a shared mainframe; millions of potential users meant that the user interface would have to become a learning environment along the lines of Montessori and Bruner; and needs for large scope, reduction in complexity, and end-user literacy would require that data and control structures be done away with in favor of a more biological scheme of protected universal cells interacting only through messages that could mimic any desired behavior.
Early Smalltalk was the first complete realization of these new points of view as parented by its many predecessors in hardware, language and user interface design. It became the exemplar of the new computing, in part, because we were actually trying for a qualitative shift in belief structures—a new Kuhnian paradigm in the same spirit as the invention of the printing press—and thus took highly extreme positions which almost forced these new styles to be invented.
Kay and his colleagues were not organic to their present. They were, of course, building on the past, but were seeking to leap forward to a completely new paradigm and not make merely iterative change.
Their first attempts were lacking in all kinds of ways, but they were formative of the next 50 years in computing software, hardware, networking, user interfaces and much more.
The breakthroughs and the limits come out nicely in this opening paragraph of the paper:
I'm writing this introduction in an airplane at 35,000 feet. On my lap is a five pound notebook computer—1992's "Interim Dynabook"—by the end of the year it sold for under $700. It has a flat, crisp, high-resolution bitmap screen, overlapping windows, icons, a pointing device, considerable storage and computing capacity, and its best software is object-oriented. It has advanced networking built-in and there are already options for wireless networking. Smalltalk runs on this system, and is one of the main systems I use for my current work with children. In some ways this is more than a Dynabook (quantitatively), and some ways not quite there yet (qualitatively). All in all, pretty much what was in mind during the late sixties.
ChatGPT is a modern embodiment of many of these things. It seems almost futuristic. You can ask it questions and get plausibly correct answers from it across a very wide array of human knowledge. It is not a mimic, it is reasoning - at least in what it selects to answer from.
It is also really bad at answering correctly. Ben Thompson (below) has a great example concerning its knowledge of Hobbes and Locke: it entirely mistakes what they agreed on and where they differed.
I did a similar test using Adam Smith, David Ricardo and Karl Marx, asking it about Marx’s understanding of capitalism’s “tendency for the rate of profit to fall”.
ChatGPT thought that only Marx had the theory, but that is not right.
Smith and Ricardo both understood that capitalism has a tendency to concentrate more and more technology into production, at the expense of labor, thus reducing profit rates.
Since then there have been many stories about what ChatGPT has gotten wrong. Many of them are in this week’s reading below.
But what is eye-opening is what it gets right. And the likelihood that, over time, more and more will be right as it learns from its errors. OpenAI’s decision to build ChatGPT and make it free is vindicated by the learning it gets from the users who interact with it.
The end game is clear - a fabulous technology capable of simplifying human endeavor and reducing the time needed to perform complex and simple tasks. The starting point is flawed, but not so flawed that it isn’t great for many tasks.
Here is my request for it to write a song:
Write a song with a chorus in the style of Bruce Springsteen. Make the topic be about the feelings of a young person who is optimistic for the future but believes that change is needed. Make it appear to be a conversation between him and his mother. Personalize it by giving the Mother a first name, like the Beatles in Eleanor Rigby.
Verse 1: Mama, I know you're worried 'bout me I can see it in your eyes But I've got a fire burning deep inside And I won't let it die
Chorus: Martha, Martha, don't you see I've got big dreams and they won't wait I know the world ain't always kind But I've got to try to make a change
Verse 2: Mama, I know you had your struggles And you did the best you could But things ain't the same as they were in your day And I've got to make them good
Chorus: Martha, Martha, don't you see I've got big dreams and they won't wait I know the world ain't always kind But I've got to try to make a change
Bridge: I know it's not gonna be easy But I won't let that hold me back I'll keep pushing forward, no matter what
Chorus: Martha, Martha, don't you see I've got big dreams and they won't wait I know the world ain't always kind But I've got to try to make a change
Outro: Martha, Martha, don't you worry I'll make a difference in this world I won't let anything stand in my way I'll chase my dreams and make them real.
OK, I agree; not a hit. But still, not bad.
When the Yahoo directory was first published we all thought it was great. Then AltaVista came along, and indexing meant it could scale far better than a hand-crafted directory. And then Google figured out ranking and because of that still owns search. ChatGPT is the Yahoo directory of this moment. It will lead to change we cannot yet imagine, and fast. Yahoo to Google was 1994 to 1998 or so. A lot happened in 1995-98.
So, is ChatGPT bad? Judged by whether it is right, definitely. But is it good? For me there is no question about it. I am thrilled to see what happens.
A lot more this week on the Twitter papers, Sam Bankman Fried, and much more.
This is the last That Was The Week of the year. I am headed to a family vacation in my wife’s home country, South Africa. Back on January 8th. So to all of you: Happy Holidays and a wonderful 2023. I can’t wait.
I will send an email before the year’s end linking to a new newsletter from my company @SignalRank. It will be the first full explanation of the impact of data and AI on venture capital’s ability to identify and get access to the best startups. I hope you will read it and subscribe.
Video
The Video and Podcast with @kteare and @ajkeen accompanying That Was The Week is recorded separately and delivered to paying subscribers via email on Friday or Saturday each week.
To subscribe, go to our home at Substack.
ChatGPT
The promise and the peril of ChatGPT
The AI era is dawning — are any of us ready?
On Monday, StackOverflow, a question-and-answer platform for developers to get help writing code, said it would temporarily ban users from posting answers generated by the buzzy new bot ChatGPT. The bot, which is a free product of the artificial intelligence startup OpenAI, has captivated tech enthusiasts since its surprise release on Wednesday. But while it can often be shockingly accurate in its answers, it can also be loudly and confidently wrong.
The result was that StackOverflow was filling up with wrong answers to difficult questions, worsening the quality of the site. Here’s?James Vincent at?The Verge:
“The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” wrote the mods (emphasis theirs). “As such, we need the volume of these posts to reduce [...] So, for now, the use of ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after this temporary policy is posted, sanctions will be imposed to prevent users from continuing to post such content, even if the posts would otherwise be acceptable.” […]
This is one of several well-known failings of AI text generation models, otherwise known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use these to predict what words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”
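The "predict the next word from statistical regularities" idea can be illustrated with a deliberately tiny sketch: a bigram counter over a toy corpus. This is only a stand-in for the neural networks and subword tokens real LLMs use, but it shows why such a system can be fluent without having any rules about how the world works.

```python
from collections import Counter, defaultdict

# Toy illustration: count which word follows which, then "predict" the
# statistically most frequent follower. Real LLMs do something vastly more
# sophisticated, but the core objective - next-token prediction from
# patterns in text - is the same.

def train_bigrams(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    # Return the most frequent follower, or None if the word is unseen.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = [
    "the model predicts the next word",
    "the model learns patterns from text",
]
counts = train_bigrams(corpus)
print(most_likely_next(counts, "the"))  # "model" follows "the" most often here
```

Note that nothing in the counter knows whether a continuation is true, only whether it is frequent, which is exactly the "fluent bullshit" failure mode described above.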
Stack Overflow’s move to ban ChatGPT capped off an unusually eventful three-day period in tech, in which early adopters alternately thrilled at the potential of a powerful new set of capabilities and recoiled at the tool’s high potential for harm and disruption.
By?Rebecca Heilweil?Dec 7, 2022, 3:00pm EST
A few weeks ago, Wharton professor Ethan Mollick told his MBA students to play around with GPT, an artificial intelligence model, and see if the technology could write an essay based on one of the topics discussed in his course. The assignment was, admittedly, mostly a gimmick meant to illustrate the power of the technology. Still, the algorithmically generated essays — although not perfect and a tad over-reliant on the passive voice — were at least reasonable, Mollick recalled. They also passed another critical test: a screening by Turnitin, a popular anti-plagiarism software. AI, it seems, had suddenly gotten pretty good.
It certainly feels that way right now. Over the past week or so, screenshots of conversations with ChatGPT, the newest iteration of the AI model developed by the research firm OpenAI, have gone viral on social media. People have directed the tool, which is freely available online, to make jokes, write TV episodes, compose music, and even debug computer code — all things I got the AI to do, too. More than a million people have now played around with the AI, and even though it doesn’t always tell the truth or make sense, it’s still a pretty good writer and an even more confident bullshitter. Along with the recent updates to DALL-E, OpenAI’s art-generation software, and Lensa AI, a controversial platform that can produce digital portraits with the help of machine learning, GPT is a stark wakeup call that artificial intelligence is starting to rival human ability, at least for some things.
ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw
The articulate new chatbot has won over the internet and shown how engaging conversational AI can be—even when it makes stuff up.
LIKE MANY OTHER people over the past week, Bindu Reddy recently fell under the spell of ChatGPT, a free chatbot that can answer all manner of questions with stunning and unprecedented eloquence.
Reddy, CEO of Abacus.AI, which develops tools for coders who use artificial intelligence, was charmed by ChatGPT’s ability to answer requests for definitions of love or creative new cocktail recipes. Her company is already exploring how to use ChatGPT to help write technical documents. “We have tested it, and it works great,” she says.
ChatGPT, created by startup OpenAI, has become the darling of the internet since its release last week. Early users have enthusiastically posted screenshots of their experiments, marveling at its ability to generate short essays on just about any theme, craft literary parodies, answer complex coding questions, and much more. It has prompted predictions that the service will make conventional search engines and homework assignments obsolete.
Yet the AI at the core of ChatGPT is not, in fact, very new. It is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web. That model, which is available as a commercial API for programmers, has already shown that it can answer questions and generate text very well some of the time. But getting the service to respond in a particular way required crafting the right prompt to feed into the software.
A.I. content has fully invaded our feeds and, as more of these services launch and turn out to be actually sort of good, I’ve found that our ability to talk about these tools is breaking down and getting fuzzier. In fact, even keeping up with what these services are and how people are using them has become pretty difficult.
If we’re talking about A.I. art, the big three are DALL-E 2, Midjourney, and Stable Diffusion. As for video, the most exciting is Luma AI’s neural radiance field (NeRF) app. If we’re talking about text A.I., the newest and most impressive is ChatGPT, which was created by OpenAI, the same company that made DALL-E 2. And, as for the services that turn your own photos into A.I. art, Midjourney can do it, but Lensa, created by Prisma Labs, and a Chinese app called Different Dimension Me, which was created by Tencent, have also become super popular recently. Lensa costs money and generates a pack of avatars after scanning photos on your camera roll. Different Dimension Me specializes in turning a singular photo into an anime character.
I’ve heard a bunch of different frameworks for thinking about the current A.I. explosion. The most common is the “Photoshop argument,” which dismisses very real concerns about A.I. art by claiming that we’re just treating generative A.I. the way we treated digital art programs 20 years ago. (For the few zoomers that might be reading this, in high school, I had to ask for permission to use Adobe apps to complete my final project in art class because, at the time, Photoshop and similar programs were not allowed.) I don’t think this is quite right.
I also recently heard what I’ll call the “Napster argument”. While reporting a?Fast Company?piece this month, technologist Andy Baio told me he thinks it’s possible that the larger A.I. firms end up being dragged into court and are forced to show exactly what content was in the data set that trained their A.I. As Baio explained, this wouldn’t kill the smaller world of A.I. copyright infringement, but it would make the bigger A.I. companies have to keep their data above board. I imagine A.I. lawsuits would also lead to more tools like Luma or Lensa, that ask users to provide their own data to scan. This argument’s logical endpoint would also be “Spotify, but for A.I. data,” which is as troubling as it is interesting to consider.
AI Homework
Posted on Monday, December 5, 2022,?
Ben Thompson
It happened to be Wednesday night when my daughter, in the midst of preparing for “The Trial of Napoleon” for her European history class, asked for help in her role as Thomas Hobbes, witness for the defense. I put the question to ChatGPT, which had just been announced by OpenAI a few hours earlier:
This is a confident answer, complete with supporting evidence and a citation to Hobbes’ work, and it is completely wrong. Hobbes was a proponent of absolutism, the belief that the only workable alternative to anarchy — the natural state of human affairs — was to vest absolute power in a monarch; checks and balances was the argument put forth by Hobbes’ younger contemporary John Locke, who believed that power should be split between an executive and legislative branch. James Madison, while writing the U.S. Constitution, adopted an evolved proposal from Charles Montesquieu that added a judicial branch as a check on the other two.
The ChatGPT Product
It was dumb luck that my first ChatGPT query ended up being something the service got wrong, but you can see how it might have happened: Hobbes and Locke are almost always mentioned together, so Locke’s articulation of the importance of the separation of powers is likely adjacent to mentions of Hobbes and Leviathan in the homework assignments you can find scattered across the Internet. Those assignments — by virtue of being on the Internet — are probably some of the grist of the GPT-3 language model that undergirds ChatGPT; ChatGPT applies a layer of Reinforcement Learning from Human Feedback (RLHF) to create a new model that is presented in an intuitive chat interface with some degree of memory (which is achieved by resending previous chat interactions along with the new prompt).
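The "memory by resending" pattern Thompson describes can be sketched in a few lines: the chat feels stateful, but each turn simply concatenates the prior exchanges into the new prompt. The transcript format and the `generate()` stub below are illustrative assumptions, not OpenAI's actual API.

```python
# Minimal sketch of chat "memory" via prompt concatenation.
# generate() is a hypothetical stand-in for a call to a language model.

def generate(prompt):
    # A real implementation would call a model here; we return a canned reply.
    return "(model reply)"

class ChatSession:
    def __init__(self):
        self.history = []  # list of (user, assistant) turns

    def ask(self, user_message):
        # Rebuild the full transcript every turn so the model "remembers"
        # earlier exchanges - the model itself is stateless.
        transcript = ""
        for user, assistant in self.history:
            transcript += f"User: {user}\nAssistant: {assistant}\n"
        transcript += f"User: {user_message}\nAssistant:"
        reply = generate(transcript)
        self.history.append((user_message, reply))
        return reply

session = ChatSession()
session.ask("Who was Thomas Hobbes?")
session.ask("What did he argue?")  # the first turn is resent inside this prompt
print(len(session.history))  # 2 turns retained client-side
```

The design consequence is worth noting: because the whole history is resent each turn, longer conversations mean longer prompts, which is why real chat systems eventually truncate or summarize old turns.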
What has been fascinating to watch over the weekend is how those refinements have led to an explosion of interest in OpenAI’s capabilities and a burgeoning awareness of AI’s impending impact on society, despite the fact that the underlying model is the two-year-old GPT-3. The critical factor is, I suspect, that ChatGPT is easy to use, and it’s free: it is one thing to read examples of AI output, like we saw when GPT-3 was first released; it’s another to generate those outputs yourself; indeed, there was a similar explosion of interest and awareness when Midjourney made AI-generated art easy and free (and that interest has taken another leap this week with an update to Lensa AI to include Stable Diffusion-driven magic avatars).
Welcome to State of AI Report 2022
Published by Nathan Benaich and Ian Hogarth on 11 October 2022.
This year, new research collectives have open sourced breakthrough AI models developed by large centralized labs at a never before seen pace. By contrast, the large-scale AI compute infrastructure that has enabled this acceleration, however, remains firmly concentrated in the hands of NVIDIA despite investments by Google, Amazon, Microsoft and a range of startups.
Produced in collaboration with my friend Ian Hogarth, this year’s State of AI Report also points to an increase in awareness among the AI community of the importance of AI safety research, with an estimated 300 safety researchers now working at large AI labs, compared to under 100 identified in last year's report.
Small, previously unknown labs like Stability.ai and Midjourney have developed text-to-image models of similar capability to those released by OpenAI and Google earlier in the year, and made them available to the public via API access and open sourcing. Stability.AI’s model cost less than $600,000 to train, while Midjourney’s is already proving profitable and has become one of the leaders in the text-to-image market alongside OpenAI’s Dall-E 2. This demonstrates a fundamental shift in the previously accepted AI research dynamic that larger labs with the most resources, data, and talent would continually produce breakthrough research.
Meanwhile, AI continues to advance scientific research. This year saw the release of 200M protein structure predictions using AlphaFold, DeepMind’s advancement in nuclear fusion by training a reinforcement learning system to adjust the magnetic coils of a tokamak, and the development of a machine learning algorithm to engineer an enzyme capable of degrading PET plastics. However, as more AI-enabled science companies appear in the landscape, we also explore how methodological failures like data leakage and the ongoing tension between the speed of AI/ML development and the slower pace of scientific discovery might affect the landscape.
The report is a collaborative project and we’re incredibly grateful to Othmane Sebbouh, who made significant contributions for a second year running, and Nitarshan Rajkumar, who supported us this year, particularly on AI Safety. Thank you to our Reviewers and to the AI community who continue to create the breakthroughs that power this report.
Updating my blog: a quick GPT chatbot coding experiment
2022 Dec 06
The GPT chatbot has been all the rage the last few days. Along with many important use cases like writing song lyrics, acting as a language learning buddy and coming up with convincing-sounding arguments for arbitrary political opinions, one of the things that many people are excited about is the possibility of using the chatbot to write code.
In a lot of cases, it can succeed and write some pretty good code especially for common tasks. In cases that cover less well-trodden ground, however, it can fail: witness its hilariously broken attempt to write a PLONK verifier:
(In case you want to know how to do it kinda-properly, here is a PLONK verifier written by me)
But how well do these tools actually perform in the average case? I decided to take the GPT3 chatbot for a spin, and see if I could get it to solve a problem very relevant to me personally: changing the IPFS hash registered in my vitalik.eth ENS record, in order to make the new article that I just released on my blog viewable through ENS.
Picture Limitless Creativity at Your Fingertips
Artificial intelligence can now make better art than most humans. Soon, these engines of wow will transform how we design just about everything.
PICTURE LEE UNKRICH, one of Pixar’s most distinguished animators, as a seventh grader. He’s staring at an image of a train locomotive on the screen of his school’s first computer. Wow, he thinks. Some of the magic wears off, however, when Lee learns that the image had not appeared simply by asking for “a picture of a train.” Instead, it had to be painstakingly coded and rendered—by hard-working humans.
Now picture Lee 43 years later, stumbling onto DALL-E, an artificial intelligence that generates original works of art based on human-supplied prompts that can literally be as simple as “a picture of a train.” As he types in words to create image after image, the wow is back. Only this time, it doesn’t go away. “It feels like a miracle,” he says. “When the results appeared, my breath was taken away and tears welled in my eyes. It’s that magical.”
Stack Overflow, the go-to question-and-answer site for coders and programmers, has temporarily banned users from sharing responses generated by AI chatbot ChatGPT.
The site’s mods said that the ban was temporary and that a final ruling would be made some time in the future after consultation with its community. But, as the mods explained, ChatGPT simply makes it too easy for users to generate responses and flood the site with answers that seem correct at first glance but are often wrong on close examination.
“The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” wrote the mods (emphasis theirs). “As such, we need the volume of these posts to reduce [...] So, for now, the use of ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after this temporary policy is posted, sanctions will be imposed to prevent users from continuing to post such content, even if the posts would otherwise be acceptable.”
Essays of the Week
In 1973, the BBC broadcast an episode of The Ascent of Man, a series on the history of science written and presented by the academic Jacob Bronowski, a short scene from which is commonly considered to be among the greatest moments of British, or any other, television. The episode concerns the quest of scientists for absolute knowledge, and it ends with Bronowski walking slowly in the grounds of Auschwitz.
In a single, unrehearsed take lasting more than two minutes, he walks towards a pond of rainwater, delivering his devastating account of what can happen when humankind “aspires to the knowledge of gods”. He pauses at the edge of the pond, and then steps into it, in shoes that are demonstrably not fit for purpose. He bends down and picks up a handful of mud, representing the ashes of the millions who died in the camps. “We have to close the distance between the push-button order and the human act,” he says as the film goes into slow-motion, and then finally a freeze-frame. “We have to touch people.”
I was reminded of the sequence watching the first episode of Simon Schama’s History of Now, as Schama stands on the Prague balcony where Václav Havel addressed the protesters of 1989’s Velvet Revolution. Not for any historical parallels, but for the way in which the presenter, visibly moved by his discussion of the subject, suddenly seems to go off-script and appears to improvise.
Why DPI is king
"If 'cash is king' for businesses, 'DPI is king' for LPs. In order to best predict the success of the current generations of funds, you should take a close look at their current DPI," writes Erez Shachar, Managing Partner at Qumra Capital
Benchmarking and measuring venture capital performance has always been challenging. The metrics available are numerous, with huge variability, and VCs tend to highlight the ones that best fit their needs. Multiples are the most common measures of how much money an investment has made, but their limitation is that they are time agnostic. Highlighting that a fund has generated “3x” while its peer has generated “2x” is not very useful if we don’t know how long the process has taken. This is where IRRs come in, incorporating the amount of time required to generate those returns. But this metric is also limited, because if a fund manager doesn’t call the capital committed, its IRR may still be high but make no money for investors. So how can an investor evaluate the performance of a fund? Let’s take it from the top.
There is an abundance of acronyms being used for measuring performance – Distributions to Paid In Capital (DPI), Total Value to Paid In Capital (TVPI), Residual Value to Paid In Capital (RVPI), Multiple on Invested Capital (MOIC), Net Asset Value (NAV), Fair Market Value (FMV), Gross IRR, net IRR, loss ratio, and many more are used interchangeably. While some KPIs are objective, performance is basically measured by the funds themselves, based on each fund’s assessment of the Fair Market Value of its holdings in the portfolio companies. In most cases these valuations are based on last-round valuations, but in recent years, marking up companies that have not raised capital for a long period of time became common, and private-company book valuations became even fuzzier.
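The arithmetic behind the main acronyms above is simple and worth seeing once: DPI is cash distributed divided by capital paid in, RVPI is the remaining (unrealized) value divided by capital paid in, and TVPI is their sum. The cash flows below are made-up numbers for illustration, not from any real fund.

```python
# Illustrative sketch of the fund metrics defined in the excerpt above.
# All figures are hypothetical.

def fund_metrics(paid_in, distributions, residual_value):
    dpi = distributions / paid_in      # realized multiple (cash returned)
    rvpi = residual_value / paid_in    # unrealized multiple (marked holdings)
    tvpi = dpi + rvpi                  # total value multiple
    return dpi, rvpi, tvpi

# A fund that called $100M, returned $150M in cash, and still holds $150M:
dpi, rvpi, tvpi = fund_metrics(100.0, 150.0, 150.0)
print(dpi, rvpi, tvpi)  # 1.5 1.5 3.0: a "3x" fund, but only half realized
```

This makes the essay's point concrete: the "3x" headline (TVPI) depends on the fund's own marks in RVPI, while DPI is actual cash in LPs' hands, which is why "DPI is king".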
Friday, 2 December 2022
In the immediate aftermath of Twitter’s mass layoffs and subsequent resignations, there were widespread reports that the staffing situation and collective brain drain were so dire that the site would collapse. Two weeks later — with World Cup soccer drama fueling record usage — such concerns seem to have been overblown.
Twitter Inc.’s mass exodus of employees leaves the platform vulnerable to a broad range of malfunctions. The social network will succumb to a major glitch at some point, technologists predict. It’s just a matter of when. [...]
Multiple teams that were critical for keeping the service up and running are completely gone, or borrowing engineers from other groups, according to people familiar with the matter. That includes infrastructure teams to keep the main feed operational and maintain tweet databases. #RIPTwitter trended on the site, as users and departed employees predicted an imminent shutdown and said their goodbyes.
Joseph Menn and Cat Zakrzewski at The Washington Post, “Twitter Death Watch Captivates Millions”:
Several critical teams essential to keeping the site functioning were cut to a single engineer or none by the departures Thursday, leaving the company partially on autopilot and likely to crash sooner or later, engineers said.
“I know of six critical systems (like ‘serving tweets’ levels of critical) which no longer have any engineers,” a former employee said. “There is no longer even a skeleton crew manning the system. It will continue to coast until it runs into something, and then it will stop.”
Remaining and departing Twitter employees told The Verge that, given the scale of the resignations this week, they expect the platform to start breaking soon. One said that they’ve watched “legendary engineers” and others they look up to leave one by one. [...]
Multiple “critical” teams inside Twitter have now either completely or near-completely resigned, said other employees who requested anonymity to speak without Musk’s permission. That includes Twitter’s traffic and front end teams that route engineering requests to the correct backend services. The team that maintains Twitter’s core system libraries that every engineer at the company uses is also gone. “You cannot run Twitter without this team,” a departing employee said.
Two weeks later and it seems they can run Twitter without that team. Or, perhaps, it’s just been luck and collapse is imminent.
Podcasts of the Week
Harry Stebbings with Martin Casado on 20VC
1. From $1.26BN Founder to Leading Enterprise Investing for a16z
2. The VC Model is Broken and Why
3. Surviving a Crash - What Founders Need To Know
4. The Changing Guard at a16z
5. The Makings of a Great Board
ChatGPT?
with Benedict Evans and Toni Cowan-Brown
DECEMBER 4TH, 2022 | 40:31 | S3:E25
EPISODE SUMMARY
When machine learning started really working, back in 2012-14, the demos were amazing, but it wasn't immediately obvious how universal the applications would be. The same with generative AI now: the demos are cool, but what will they mean? How will this generalize to change search or law firms?
SHOW CONTRIBUTORS
Benedict Evans, Toni Cowan-Brown
News of the Week
Apple has "fully resumed" advertising on the Twitter social network, Twitter CEO Elon Musk said today during a two-hour Twitter Spaces chat highlighted by Bloomberg. Musk also confirmed that Apple is Twitter's largest advertiser.
Just five days ago, Musk accused Apple of hating "free speech," "making moderation demands," and ceasing ad spending, causing a slew of press coverage about a potential battle brewing between Apple and Twitter. Musk publicly claimed that Apple had "mostly stopped" offering ads on Twitter and that it had also threatened to "withhold Twitter from its App Store."
Then, two days after making those statements, Musk met with Apple CEO Tim Cook and ended up reversing course. After the meeting, Musk said that there had actually been a "misunderstanding" about Twitter potentially being removed from the App Store, and Cook "was clear that Apple never considered doing so."
Thursday December 8, 2022 2:45 am PST by?Sami Fathi
Apple yesterday announced that end-to-end encryption is coming to even more sensitive types of iCloud data, including device backups, messages, photos, and more, meeting the longstanding demand of both users and privacy groups who have rallied for the company to take this significant step forward in user privacy.
iCloud end-to-end encryption, or what Apple calls "Advanced Data Protection," encrypts users' data stored in iCloud, meaning only a trusted device can decrypt and read the data. iCloud data in accounts with Advanced Data Protection can only be read by a trusted device, not Apple, law enforcement, or government entities.
Following the announcement, the Electronic Frontier Foundation (EFF), a group that has long called for Apple to enable end-to-end encryption and take more steps to safeguard user privacy, put out a statement applauding the new feature and Apple's renewed commitment to privacy.
Venture Capital Investment More Effective for Fintech Start-Ups Than Credit Availability - AlphaWeek
Venture capital availability has a greater impact on the formation of new fintech start-ups than credit availability from banking institutions in countries with a strong fintech scene, according to new research from Vlerick Business School.
The researchers found that a one-standard-deviation increase in venture capital availability increased the number of start-ups formed the next year by 26% for an average country. Meanwhile, a one-standard-deviation increase in credit availability from banks increased the number of start-ups formed the next year by 12.5% for an average country.
Interestingly, however, the confidence band around the effect of credit availability is generally much wider than the confidence band around the effect of VC availability, even though credit is less effective at stimulating new fintech start-ups.
These findings come from research by David Veredas, Professor of Finance and Sustainability at Vlerick Business School, alongside Dr. Dimitrios Kolokas, Doctoral Researcher at Vlerick, as well as colleagues from Carlson School of Management and University of Exeter Business School. The researchers wanted to understand how countries can stimulate entrepreneurship growth in new, emerging industries, and whether venture capital and credit markets affect fintech entrepreneurship differently in each country.
To do so, the researchers reviewed fintech entrepreneurship data across 53 countries, for the years 2009-2017. The researchers reviewed the investment at a country-level in both venture capital for fintech start-ups, as well as credit availability, and then reviewed the impact this had on start-up formations the following year.
Venture veteran Danny Rimer has shot to the top of the Midas List Europe thanks to writing the first check for $20 billion design startup Figma.
Rimer has been a newer presence on the Europe list after returning to London in 2018, but had been a pillar of the international Midas List for more than a decade thanks to investments in Dropbox, Etsy, and Patreon from Index Ventures' San Francisco office, which he set up in 2011.
Gené Teare, December 8, 2022
Four newly minted unicorn companies joined The Crunchbase Unicorn Board in November 2022, while three companies were removed.
High-value closures
Three cryptocurrency companies vaporized $45 billion in value from the board. Bahamas-based cryptocurrency platform FTX and its U.S. counterpart based in San Francisco, FTX US, entered bankruptcy proceedings. And New Jersey-based crypto lender BlockFi was shuttered in late November. BlockFi had been bailed out by a loan from FTX in June 2022 to cover its losses.
FTX was the most highly valued unicorn to close, based on an analysis of Crunchbase data, followed by Pittsburgh-based self-driving startup Argo AI, which shuttered in October as it was unable to raise new funds. Argo AI — funded by Ford and VW — was last valued at $12.4 billion in July 2021 in a partnership funding with Lyft.
Theranos is the third most highly valued company to close. It was valued at $9 billion in a 2014 funding and shut down in September 2018.
And earlier this year, Celsius Network, another crypto lender, filed for bankruptcy in July.
Tonight, Matt Taibbi broke a potentially huge political corruption story by publishing a detailed account of the censoring of the NY Post's Hunter Biden laptop article. But he didn't place it at the New York Times, Washington Post, or any other media outlet. He tweeted it.
If you've been following the story, this isn't that surprising: Elon's made it clear he wants Twitter to become the "most accurate source of information in the world." And, true to his word, he broke a big story on the platform today.
We’re all for that mission. But tonight, the experience of reading Taibbi’s scoop was suboptimal and headache-y. It seemed to take forever for him to update the thread with the next tweet, and on mobile, you had to open the horizontally-aligned text screenshots, zoom in, then scroll back and forth to read the content.
Here’s the thread in its entirety, with its text screenshots enlarged (we ran them through an image-to-text translator, copied the text into a vertically-aligned Notes window, and screenshotted them) so that it’s easier to read. Enjoy.
Boston-based Circle Internet Financial called off its proposed merger agreement with special-purpose acquisition company Concord Acquisition Corp. — ending a year-and-a-half-long SPAC saga which would have valued the company at $9 billion.
"We are disappointed the proposed transaction timed out, however, becoming a public company remains part of Circle's core strategy to enhance trust and transparency, which has never been more important," said Jeremy Allaire, co-founder and CEO of Circle, in a release.
A SPAC story
Circle's proposed merger with blank-check firm Concord, which is backed by former Barclays boss Bob Diamond, has been its own long and winding story.
The company — an issuer of USD Coin, a type of stablecoin — announced in July 2021 it would merge with Concord in a deal that would value the company at $4.5 billion.
Startup of the Week
Coordinating people, projects and their various locations has become a headache for companies in the post-pandemic world of remote and hybrid working.
There is expensive office space to maintain and logistics to figure out — which employees are actually going to be in, and who's working remotely each day. Let's face it: earlier hot-desk management software — if it was ever used — is no longer up to the task.
Into this world stepped Chargifi, a startup we wrote about in 2015. The team had spent seven years building Chargifi, which let users access mobile power via a free app in any public location with a "Chargifi Spot" — such as a bar, stadium, hotel or office. But when the pandemic hit, demand completely dried up, for obvious reasons.
The team decided to repurpose the software and products they had already built for managing wireless charging networks in offices.
Relaunching as Kadence, the startup now coordinates people, places and projects to enable hybrid co-working inside teams.
It’s now raised a $10 million seed funding round led by Kickstart Fund with participation from Manta Ray, Hambro Perks and Vectr7, as well as Shadow Ventures and Forward VC.
Kadence also attracted angel investors including Cal Henderson, co-founder and CTO at Slack; Shaun Ritchie, CEO and founder of Teem; and Nick Bloom, Stanford professor, research leader and worldwide authority on remote work.