GP Bullhound's weekly review of the latest news in public markets.

This week’s update covers key takes from the AWS re:Invent conference, more data points in AI and GPU availability, and software reporting, including Salesforce.

Our Technology Predictions 2024 report is out. Find out what's in store for the tech sector in 2024.

Market: A generally quiet week, with some last-quarter (and full-year) results rolling in.

Portfolio: We significantly reduced our position in Workday this week (see comments below), balanced in part by our significant portfolio position in Salesforce.

AWS held its annual re:Invent conference this week. Jensen Huang turned up on stage, and it all returned to chips and Nvidia.

The opening keynote on Monday night was about the move to serverless computing as the next iteration of the cloud. The way companies have run and built their business has changed significantly over the past 20 years: if you were a startup in the early 2000s, you bought Sun servers and Cisco routers and built your application on that hardware. Then cloud came along and allowed you to rent a server (and rent specific capacity) within a third-party datacentre, effectively moving up-front capex cost to ratable opex. The next step has been the move to serverless – getting rid of servers and letting you run your workloads across a broad layer of compute and storage infrastructure, scaling automatically for the capacity you need. It has the benefits of delivering much better efficiency and, for the end customer, being much more cost-effective – because you only pay for the compute you use. In a world where compute – driven by AI – is exploding, that becomes even more important. Efficiency means access to cheaper compute, and more affordable AI compute will drive more innovation and AI use cases (a la OpenAI/ChatGPT).
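To make the pay-for-what-you-run point concrete, below is a minimal sketch of a serverless function in Python, written in the AWS Lambda handler style. The event shape and greeting logic are our illustrative assumptions, not anything announced at re:Invent – the point is simply that there is no server to provision, and billing accrues only while the function runs.

```python
import json

# Minimal AWS Lambda-style handler: there is no server to provision or
# patch. The platform spins instances up and down with request volume,
# and you are billed only for the time each invocation actually runs.
def handler(event, context):
    # 'event' carries the request payload; this shape is illustrative.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```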

Amazon already has its own custom silicon – Trainium (AI training) and Graviton (general-purpose CPU) – and launched new generations of both this week. Both new iterations feature much more memory bandwidth – the Trainium2 3x more and the Graviton4 75% more than the prior generations. That goes back to the importance of memory as the performance bottleneck we’ve spoken about (and, relatedly, Micron positively pre-released this week around AI demand).
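To see why memory bandwidth is so often the binding constraint, a rough roofline-style sketch helps; every figure below is an illustrative assumption, not a specification of Trainium, Graviton, or any other chip mentioned here.

```python
# Roofline-style sketch: achievable throughput is capped by whichever
# runs out first - raw compute (FLOP/s) or memory bandwidth (bytes/s).
# All numbers are illustrative assumptions, not vendor specifications.
peak_flops = 200e12   # assumed 200 TFLOP/s of compute
bandwidth = 1e12      # assumed 1 TB/s of memory bandwidth

# Arithmetic intensity = FLOPs performed per byte fetched from memory.
# Big matrix multiplies sit high; many AI inference workloads sit low.
for intensity in (2, 50, 500):
    achievable = min(peak_flops, bandwidth * intensity)
    print(f"{intensity:>3} FLOP/byte -> {100 * achievable / peak_flops:5.1f}% of peak")

# At low intensity the chip idles waiting on memory, which is why a 3x
# bandwidth uplift of the kind cited above moves performance so much.
```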

We’ve said before that it makes sense for hyperscalers to build their own chips?– they have?enough utilisation, specific use cases, and specific workloads to apply to ASICs.?And there’s no doubt that?no player wants to be entirely tied into one very powerful supplier in Nvidia.?It’s important that they build?ASICs and not GPUs, however?– these have much higher performance but?are very specific to the application and have much more limited scope in terms of workloads that can run on them.?

While ASICs will rebalance some workloads, AI GPUs will remain the majority of the market because we are still so early on in AI use cases and, therefore, the flexibility that comes with a GPU vs a custom chip is much more important. It’s not to say that there won’t be a point in the future when we have more ASIC deployments – that will likely come with more stabilisation in the applications and algorithms that run on chips (telecom base stations run on ASIC equivalents – that speaks to the maturity of that market). But, in the short to mid-term, we expect GPUs – where Nvidia and AMD are really the only credible offerings – to represent the bulk (90%+) of AI infrastructure and for ASICs not to cannibalise GPU workloads. Look at the comments below from Dell – it might not be able to get hold of any Nvidia GPUs for 39 weeks, but, equally, it can’t sub them out for ASICs.

As an aside, AMD said at a competitor conference this week that it expects to exceed its prior $2bn target for its pure GPU offering (we think it will come closer to $5bn).

That gets us to Nvidia and Jensen on stage with AWS CEO Adam Selipsky. The reality is that while it’s more cost-effective for Amazon to run what workloads it can on its own chips, AWS (and Google and Azure) customers want to train their models using Nvidia, as developers remain tied into CUDA. AWS will now be the first to offer DGX Cloud – Nvidia’s AI training as a service – which allows enterprises to take their data and plug it into Nvidia’s pre-trained models, or more easily build their own models with their own datasets, running on the hyperscalers.

This makes sense for Nvidia – if more of the innovation happens further up the stack, on top of large language models, the risk is that value shifts away from CUDA. Nvidia presumably hopes that customers choose to implement the full Nvidia stack, with both CUDA and DGX Cloud as the competitive moat, ultimately controlling more of the value chain.

It’s less clear why the hyperscalers want to do this. Nvidia is trying to go direct to the customers. In a perfect world for Nvidia,?enterprises will choose Nvidia for their cloud service first and foremost. Whether running on AWS, Azure, or GCP will be secondary?– we assume that Nvidia will implement its architecture so that you won’t get wildly different experiences on whichever cloud provider it’s built upon. There is?potential for a value shift and price competition for hyperscaler capacity (why wouldn’t I go for the cheapest cloud service if Nvidia assures me the experience is the same?)

It seems like Nvidia is abstracting away the middle layer – the hyperscaler capacity – and capturing the value at both the front end (owning the customer) and the back end (chips). Will these partnerships continue in a more normal GPU supply world? Still, for now, Nvidia is the kingmaker, and everyone has to play nice – the hyperscalers need as close a relationship with Nvidia as possible.

The other announcements worth noting from the event were around LLMs and AI services, which are particularly interesting in the context of the OpenAI debacle. Amazon announced it is rolling out a workplace chatbot, Amazon Q, to compete directly with Microsoft’s Copilot. What it’s missing is the broader integration Microsoft can offer across its full office suite, which is why it’s priced more cheaply – $20 per month vs $30 for Copilot.

Finally, and in the context of last week’s OpenAI news, the focus of Amazon’s LLM service Bedrock was very much on being LLM-agnostic. “There’s not going to be one model to rule them all,” Adam Selipsky pitched. We’ve talked about the uncertainties in LLMs – how many will there be, will there be a winner, and will different models work better for different use cases? For Amazon, the OpenAI drama was a perfect opportunity to double down on the fact that it hasn’t bet the house on one LLM (with an apparent shot fired towards Microsoft).

Onto results and newsflow:

AI servers still GPU supply gated – Nvidia visibility

  • Talking of servers: while we don’t own it, the HPE call this week helped articulate the current state of play in AI, and we’d highlight several points from it. In the context of Amazon’s serverless keynote, there are still many instances where servers are needed, although there is no doubt that more and more of the intermediate layers of hardware and software are being abstracted away.
  • HPE called out exploding AI demand seven times on the call, and it is seeing that demand, with a backlog of $3.6bn in HPC and AI. The open question is when that backlog converts into revenue, which comes down to when HPE can get hold of Nvidia chips to fulfil the orders – it is currently GPU supply gated. The expectation is that revenue starts accelerating from Q2 2024 as GPU supply increases, and that 2024 will be a significant growth year.
  • Elsewhere, its edge/traditional business continues to face challenges, with the inventory correction still playing out and cannibalisation from AI spend.
  • Dell also reported this week. The most important thing for us (outside of cautious demand in its PC business and continued pricing pressure) was its commentary around Nvidia lead times. As with HPE’s commentary, demand for Dell’s AI servers significantly exceeds supply (the pipeline tripled in the quarter), with the primary constraint being Nvidia chips. Last quarter, it had a 39-week lead time, and that hasn’t changed. That’s effectively 39 weeks of visibility for Nvidia. Dell’s COO said:

I wish I could tell you that the backlog – or the lead time – was less than 39 weeks. I can’t today. We are on the phone, working every available channel opportunity, as you might imagine, with our supply chain capabilities, to improve supply, to improve supply availability. We’ve offered our services. We’ll help where we can. I’m hopeful for the day to tell you that supply has improved greatly. Lead times have reduced, and we can work the backlog down faster. That’s our job. I don’t have those answers today. It’s 39 weeks. We’re trying to continue to get more supply. That’s where we are. As we look forward into 2024, there’s clearly alternatives coming. There’s work to be done in those alternatives, software stacks have to be taken care of, resolving the opportunities around them. But there’s more options coming. Their adoption rate, we’ll see. But right now, that multibillion-dollar pipeline that I referenced, the backlog that we’ve talked about is Nvidia-based 39-week lead time, we’re working our behinds off every day to get more supply.

Portfolio view: We don’t own any dedicated server providers – as per our intro on Amazon Cloud and serverless, ultimately, much more of the value either lies with the underlying chips or up the value chain at the application layer. The server providers have low barriers to entry and little pricing power. But it’s clear evidence that the AI server shift is happening, with Nvidia chips – the key components everyone wants to get their hands on – still limited by supply.

Semis – AI beneficiaries beyond GPUs

  • Marvell (owned) reported a solid set of results, but more important was it raising (again) its AI revenue targets, which drove the data centre beat and guide.
  • Marvell focuses on chips for connectivity across data centres. Among other chipsets, it makes DSPs (Digital Signal Processors), which take inputs from the switch ASICs, convert them from analogue to digital, and perform operations like signal conditioning, equalisation, and error correction before converting them back to analogue and pushing them onward (see the toy sketch after this list).
  • The scale of data and bandwidth required in AI processing and AI clusters demands much higher optics performance, which is driving demand for Marvell’s PAM4 DSP platform (much of it sitting directly alongside Nvidia’s H100 and A100 infrastructure).
  • To give some context on the upward momentum around AI: in Q1, Marvell indicated that it was on track to double its AI-based revenue to $400m in 2023 on demand pull-in for its DSP chipsets, and to double it again in 2024. In Q3, Marvell said it expected to exit Q4 at a quarterly run rate of $200m – annualised, that is already the $800m 2024 target. Now, Marvell thinks it will land “significantly above” that $200m forecast.
  • It’s helping to drive the Q4 datacentre forecast of 35% sequential growth, on top of the in-quarter datacentre beat (vs cons ~20%).?
  • It is worth noting that the “traditional” cloud is also back to yr/yr growth and is expected to grow sequentially – that market is looking much healthier all around.
  • The rest of Marvell’s business was more mixed – carrier, consumer and enterprise storage are all seeing corrections.
  • Marvell has an additional opportunity around custom silicon (ASICs), working with hyperscalers (back to the first comments on Amazon, though we don’t yet know which hyperscalers Marvell is working with). Marvell didn’t size it explicitly on the call, but it’s an area to watch closely and could be a significant opportunity, ramping as early as next year.
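As flagged in the DSP bullet above, here is a toy sketch of that signal chain in Python/NumPy: generate a PAM4 symbol stream, distort it with a simple channel, then equalise and slice it back to symbols. The channel taps, equaliser coefficients, and noise level are all invented for illustration – this is a didactic model of the steps described, not Marvell’s implementation.

```python
import numpy as np

# Toy PAM4 link: 4 amplitude levels encode 2 bits per symbol.
# A didactic sketch of the DSP steps described above (equalisation,
# slicing) - not Marvell's actual implementation.
rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])         # ideal PAM4 levels
symbols = rng.integers(0, 4, size=1000)
tx = levels[symbols]

# Channel: inter-symbol interference (each sample leaks into the next)
# plus additive noise - the distortions a DSP must undo.
channel = np.array([1.0, 0.4])
rx = np.convolve(tx, channel)[: len(tx)] + rng.normal(0, 0.1, len(tx))

# Equalisation: a short FIR filter approximating the channel inverse.
eq = np.array([1.0, -0.4, 0.16])                  # truncated 1/(1 + 0.4z^-1)
eq_out = np.convolve(rx, eq)[: len(tx)]

# Slicing: map each equalised sample back to the nearest PAM4 level.
decided = np.argmin(np.abs(eq_out[:, None] - levels[None, :]), axis=1)
print("symbol error rate:", np.mean(decided != symbols))
```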

Portfolio view: We own Marvell and see it as a clear (though perhaps less appreciated) beneficiary of AI; in some cases, its DSPs are built into AI systems at a more-than-one-to-one attach rate with GPUs.

AI memory inflexion driving pricing and revenue upgrades

  • Micron positively pre-announced on better pricing in memory – specifically around HBM demand (which we’ve commented on before – and note Amazon’s chips above).
  • We’ve spoken before about the semiconductor content increases moving from a traditional enterprise server to an AI server – the most significant increase is in GPU (which typically isn’t present at all in a conventional server), but there is also a considerable?memory upgrade??6-8x more DRAM content.?
  • Die sizes of HBM (High Bandwidth Memory), which AI requires given its higher processing speeds, are twice the size of equivalent-capacity standard DRAM dies – that’s important because larger die sizes naturally limit industry supply growth (which both supports pricing and ultimately requires more semicap equipment); see the back-of-envelope below.
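A quick back-of-envelope (with purely illustrative inputs) shows why the larger HBM die matters for supply: if an HBM die takes roughly twice the wafer area per bit, every wafer shifted from standard DRAM to HBM removes bits from total industry output, tightening supply before any demand uplift.

```python
# Back-of-envelope: effect of an HBM mix shift on total DRAM bit supply.
# All inputs are illustrative assumptions, not industry data.
wafer_capacity = 100.0    # arbitrary "bit units" per wafer of standard DRAM
hbm_area_penalty = 2.0    # HBM die ~2x the area per bit (per the note above)

for hbm_mix in (0.0, 0.1, 0.2):   # share of wafers allocated to HBM
    bits = ((1 - hbm_mix) * wafer_capacity
            + hbm_mix * wafer_capacity / hbm_area_penalty)
    print(f"HBM wafer share {hbm_mix:.0%} -> bit supply {bits:.0f} (baseline 100)")
```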

Portfolio view: This memory downturn is the worst the market has seen in over 10 years. While much of it results from extraordinary circumstances (pandemic, inflation), some of it is also the nature of the industry. We don’t own any memory players in the portfolio: the commoditised nature of the market and the reliance on every player staying rational on supply mean it doesn’t fit our sustainable return on invested capital process. Indeed, while Micron upgraded revenues, less of that is dropping through to the bottom line than you would hope, given ongoing underutilisation charges. That said, pricing is helpful in the context of semicap equipment spending: a higher-price environment typically sees more capex, with Lam Research particularly exposed in the portfolio.

Software demand holding up – billings and cRPO dynamics need to be monitored closely

  • Salesforce (owned) delivered a beat and raise. Revenue and forward-looking cRPO were both better (+10% cc and +13% cc, respectively, with cRPO accelerating very slightly from last quarter).
  • It also narrowed FY24 revenue guidance to the top of the range and raised FY24 operating margin guidance again (the fourth raise in a row).
  • MuleSoft and Tableau, in particular, delivered better results around data (pitched as at least in part driven by AI demand).
  • Salesforce’s execution and ability to increase operating margin without hurting growth have been among the most impressive stories in software this year.?
  • On top of that, it has the benefit of price increases on their way (announced over the summer; these will start to have an impact over the next 1-3 years). If it gets AI pricing that can begin to reaccelerate growth, it’s easy to see Salesforce compounding earnings in the high teens.
  • While it’s difficult with Benioff to get past the heavy dose of PR/bullishness, there are reasons to believe Salesforce stands a good chance of being one of the enterprise software winners in AI with a suite of products. Given the product set around data cloud, marketing and commerce clouds and slack, it is well set up to utilise AI. In just the same way as Microsoft operating AI across its suite of apps is compelling, Salesforce should be able to do the same.?
  • The “new AI stat” this quarter was that 17% of the Fortune 100 are Einstein GPT Copilot customers.
  • It’s unclear how much of this is explicitly priced as a standalone product (Einstein GPT is priced at $50 per month) vs bundled with the broader sales suite, so the debate around AI monetisation remains.
  • Workday also reported this week – a small (1-2%) beat on cRPO, subscription revenue and profitability, and FY24 guidance was raised slightly.
  • Overall commentary was positive – an improvement in net new logo growth, and Workday is still benefitting from spend consolidation in its customer base.
  • Snowflake (not owned) reported a sequential product revenue increase in Q3 and slightly better Q4 guidance. Management commented: “Consumption trends have improved. We are seeing stability in customer expansion patterns,” noting that nine out of 10 customers saw qtr/qtr consumption growth. This suggests that the worst of cloud optimisation might now be behind us (relevant for everyone, but particularly the hyperscalers, to which we have exposure). Elastic’s reporting last night showed much better than expected cloud consumption and spending trends.
  • On cyber, Zscaler and Crowdstrike reported good customer demand, which now appears to be stabilising. Crowdstrike had some of the same comments around billings and contract duration that we heard from Palo Alto (see also the portfolio comments below), but, overall, it was a very solid week for software reporting.

Portfolio view: We think Salesforce continues to execute and is on a path to high-teens compounded earnings – YTD share price performance is >80%, which is astounding, but it’s almost all earnings-driven (FY24 estimates have gone from $5.5 this time last year to what will come in over $8). It remains the perfect profitable growth story and trades on <25x next year’s EPS; see the compounding sketch below.
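As referenced above, a hedged sketch of the compounding arithmetic: the ~$8 starting EPS is per the note, while the 17% growth rate and three-year horizon are our illustrative assumptions, not forecasts.

```python
# Illustrative compounding: what "high teens" earnings growth does to EPS.
# The 17% rate and 3-year horizon are assumptions, not forecasts.
eps = 8.0            # roughly where FY24 estimates are landing (per the note)
growth = 0.17

for year in range(1, 4):
    eps *= 1 + growth
    print(f"Year {year}: EPS ~${eps:.2f}")

# At a constant ~25x multiple, the share price would compound at the
# same high-teens rate as earnings - the core of the thesis above.
```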

We significantly reduced our position in Workday in the fund this week, given a lack of conviction around short-term results and new management creating uncertainty in the outlook. With the shares back close to the highs they reached before guidance was downgraded in September, we felt the risk/reward balance was to the downside. We will revisit it as we continue to reassess the risk/reward of our portfolio positions.

More broadly in software, we’re digging into contract durations and their impact on billings/cRPO. A few more datapoints this results season suggest we’ve been through a period of companies securing sales with multi-year discounts – something customers were happy to do when money was free – which flattered billings (but hurts revenue down the line) and which is now rolling off. Ultimately, rolling back to shorter-duration contracts doesn’t impact the P&L and earnings. Still, it’s important to scrutinise why any metric might be deteriorating and what it might say about underlying demand; the sketch below makes the mechanics concrete.
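To make the duration mechanic concrete, a sketch with a hypothetical contract (all figures invented): a three-year deal recognises identical revenue whether billed upfront or annually, but the billings optics differ sharply.

```python
# Hypothetical 3-year, $300k contract: revenue recognition is identical,
# but billings timing differs - which is why billings/cRPO can wobble
# while the P&L does not. Figures are illustrative, not from any company.
tcv, years = 300_000, 3
annual_revenue = tcv / years            # $100k recognised each year either way

billed_upfront = [tcv, 0, 0]            # multi-year prepay (often discounted)
billed_annually = [tcv // years] * 3    # shorter-duration billing

for y in range(years):
    print(f"Year {y + 1}: revenue ${annual_revenue:,.0f} | "
          f"billings upfront ${billed_upfront[y]:,} | annual ${billed_annually[y]:,}")
```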

Retail – Cyber week and China smartphone data

  • Robust early reports from Black Friday point to US online spending strength – Adobe Analytics reported spending up 7.5% from last year.
  • We’ll know more once we get early reports next year, but one thing to be mindful of is the share shift towards the Chinese players (Temu in particular).
  • In China’s domestic retail, Counterpoint released data around China Singles’ Day, which showed iPhones significantly underperforming Huawei’s Mate 60 and Xiaomi. Xiaomi announced it was starting a 2 trillion RMB “ambition project” to support localisation and a China-made iPhone.

Portfolio view: Still no real signs of weakness in the consumer, though from a technology company perspective, it’s still an area to which we have limited exposure.

For enquiries, please contact: Inge Heydorn, Partner, at [email protected]; Jenny Hardy, Portfolio Manager, at [email protected]; or Nejla-Selma Salkovic, Analyst, at [email protected].

About GP Bullhound

GP Bullhound is a leading technology advisory and investment firm, providing transaction advice and capital to the world’s best entrepreneurs and founders. Founded in 1999 in London and Menlo Park, the firm today has 14 offices spanning Europe, the US and Asia.
