Marvell Technology, Inc. (NASDAQ:MRVL) Citi’s 2023 Global Technology Conference September 6, 2023 1:00 PM ET

Company Participants

Matt Murphy - CEO

Ashish Saran - VP, Investor Relations

Conference Call Participants

Atif Malik - Citi

Atif Malik

Welcome to Day 1 of Citi Global Technology Conference. My name is Atif Malik. I cover U.S. semiconductors, semiconductor equipment and communication equipment stocks here at Citi. It's my pleasure to welcome Matt Murphy, CEO of Marvell Technology; and Ashish Saran, VP, Investor Relations.

I'm going to kick it off with my questions first. And then I'll open it up to the audience. If you have a question, just wait for the mic to come to you, and then you can ask your questions.

Welcome, Matt.

Matt Murphy

Great to see you. Thank you.

Question-and-Answer Session

Q - Atif Malik

Matt, I'm going to start with the topic that is on everyone's mind, artificial intelligence, and you guys are a great play on that theme. When it comes to AI, Marvell is one of the companies that has multiple AI opportunities. It is uniquely positioned from compute to networking to electro-optics. Can you walk us through your strategy and why you think you are best positioned to address this theme?

Matt Murphy

Sure. Happy to do it, and great to see everybody. And just a quick side note, I was reflecting. This was the first conference I did when I became CEO of Marvell in 2016. And my esteemed friend here had a sell rating on the Company, and I think our stock was at $10. It was a grim situation. And I think the sell rating was probably still warranted at the time. Anyway, how far we've come, right?

Fast forward to your question about AI. Just as a quick overview, the pivot we made back then was to really refocus Marvell on what we believed, really what I believed, would be the biggest SAM growth opportunity in the semi industry, which was the growth of data infrastructure and data platform companies driving huge semi growth, right?

And so fast forward, we're seven years later. I think that's played out well. And within that data infrastructure SAM that's grown, AI and generative AI, and accelerated computing even more broadly, has become a massively important growth driver as a part of our strategy, if you follow me. And I would say that we think we're at the very beginning of a long cycle here.

And I'm almost thinking about accelerated computing as kind of that, just like we thought about data infrastructure as a platform for us seven years ago, okay? And within that, meaning it has the same attributes: it's going to be high performance, it's going to be large SAM growth, and it's going to be driven by multiple products and technologies to be successful.

So the three you mentioned. One was on the connectivity side, right, which is really our high-speed optical communications products, where we have a very strong leadership position in the technology that powers inside-data-center communications, both in traditional cloud infrastructure as well as in AI clusters.

We have leading technology that also connects data centers together, because that's going to be a more and more important part of the equation as you scale out your data centers and move more inference and processing to the edge of the network, close to where the consumers are. We have a growing position in custom silicon, where we think growth will really accelerate with accelerated computing.

We can talk about that. Very strong offering in this 5-nanometer cycle we're in right now, and we've got some pretty exciting programs that we won several years ago, right, that are now coming to fruition, because this isn't a business in infrastructure where you can just decide, oh, wow, this is super exciting, how do I get in on that? I mean, we won some of these programs back in 2020, and now they're finally going to production.

But that's an exciting opportunity, especially as that whole compute SAM really opens up with the move to acceleration. And then in networking, in switching technology, we have a growing position really driven by the acquisition of a company we did called Innovium. And we're now in a great position with our latest product, which is a 5-nanometer technology, 51.2 terabit switch.

It's done on the Marvell process flow. It has our own IP, our own SerDes, but it has the great innovation and architecture from Innovium. And so those three things, if you think about it -- processing the data, moving the data inside data centers and around data centers -- are fundamental, right, to the performance of these types of systems. So we can talk in more detail about those, but we're kind of in the heart of where all the action is right now.

Atif Malik

Sure. Just to start with the electro-optics side. That section of your business seems to be the primary driver right now for AI sales, with sales growing to a $200 million quarterly run rate, an $800 million annual run rate, way ahead of schedule. Can you pull back the curtain a little bit and dive deeper into how cloud providers are approaching their build?

Matt Murphy

Sure. Yes, to your point, we're very pleased with how that business has really strengthened throughout the year. I think we're in the middle of a broader semi cycle correction, and the folks that are levered to AI right now are obviously doing extremely well. And I'd say those order trends, forecasts, everything has really continued to improve every month, really ticking up starting back when ChatGPT was announced, and then there was a little bit of a lag.

And just to put it into context, the $200 million a quarter, with, as you said, most of it coming from this electro-optics area. This technology came from an acquisition we did of a company called Inphi, which, if you remember, I think you followed them or maybe your colleague did. But when we bought the Company, trailing 12 months revenue was around $680 million for Inphi. It was projected, the year we were going to close it, to do about $800 million. I think the Street had them at $810 million.

So fast forward two years later, and we're talking about exiting the year, just on electro-optics for AI, at $200 million a quarter. That's the size of what we predicted the whole company to be just two years ago. So it's been a tremendous asset for us. And to answer your question, how they're thinking about deploying is, yes, very broad based, and you're seeing it in NVIDIA's numbers and the CapEx trend changes. It's a massive deployment cycle. And all of it is driven by obviously needing the right AI GPU technology or, in some cases, our customers' custom chips.

But all of those have optical interconnect attached, every single system. And in some cases, the attach ratio is actually more than 1:1, right, in terms of AI elements versus connectivity. Now the ASPs are obviously much different, but it's driving a tremendous growth cycle. We've had upsides, big upsides, that we're doing a great job of meeting with our manufacturing partners. So yes, we see strong growth in that business clearly this year, and we see strong growth again going into next year in that part of the portfolio, both on the between-data-center, or DCI, side as well as the inside-data-center side.

Atif Malik

Matt, are you seeing most of the demand on 800G? Or are you also seeing some demand for your Inphi products on 400G? And the beauty is that you guys are agnostic to InfiniBand or Ethernet. So maybe just talk about 800G versus 400G demand.

Matt Murphy

Yes. If I characterize it at a high level, the way to think about it, and this is how we sort of project going forward, is that the AI systems are going to drive the highest-frequency and highest-performance optics. And so today, almost all of the growth in 800 gig is due to AI. And then the way to think about the rest of the PAM portfolio is that traditional cloud infrastructure is still really at either 200 gig or 400 gig PAM, and some haven't even upgraded to it quite yet, but that's coming next year.

So the NRZ transition is not done yet, NRZ being the old technology that has now moved to PAM. What we see going forward, and we announced at OFC our next-generation product, which is double the bandwidth at 1.6 terabit per second, is that that will be deployed in AI first. And then we'll see the traditional cloud stuff move to 800 gig. And by the time that moves to 1.6T, we'll probably be on 3.2T. So that's the cycle we're on. And I'd say the AI refresh cycle looks to be around half as long.

So call it 18 to 24 months versus three to five years on the other side. So I'd say the new product development intensity has actually picked up on the optical side. And given that the raw computing capacity has gone up so much, there are real throughput limitations to actually getting the data on and off the card, the cluster, et cetera. And so I think it's going to drive a pretty big refresh cycle on optics, on switching and on data center interconnect as a tailwind, or a byproduct, of all the growth in AI.
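[Editor's illustration] The cadence described here, bandwidth doubling each optics generation, with AI refreshing roughly twice as fast as traditional cloud, can be sketched as below. The start years and cycle lengths are assumptions drawn from the ranges quoted above, not company guidance:

```python
# Hypothetical sketch of the optics refresh cadence described above.
# Dates and cycle lengths are illustrative assumptions, not guidance.

def generations(start_gbps, start_year, cycle_years, n):
    """Return (year, speed) pairs, doubling per-module bandwidth each cycle."""
    out = []
    speed, year = start_gbps, start_year
    for _ in range(n):
        out.append((year, speed))
        year += cycle_years
        speed *= 2
    return out

# AI clusters: ~18-24 month refresh (assume 2 years), starting at 800G in 2023.
ai = generations(800, 2023, 2, 4)
# Traditional cloud: ~4-year refresh (assumed), starting at 400G in 2023.
cloud = generations(400, 2023, 4, 4)

for (yr_ai, s_ai), (yr_cl, s_cl) in zip(ai, cloud):
    print(f"AI {yr_ai}: {s_ai}G    cloud {yr_cl}: {s_cl}G")
```

Under these assumed dates, the sketch reproduces the pattern Matt describes: each speed grade lands in AI clusters first and reaches traditional cloud a generation or two later.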

Ashish Saran

And maybe, Atif, just to add, I think the key takeaway to your question is that not only are we seeing a lot of growth in AI, which is fairly expected, but in our cloud business, even the non-AI portion, networking in particular, is also growing very strongly. It grew very strongly sequentially Q2 to Q3, and we're expecting that to continue, right? So that's the other thing to keep in mind: we're seeing broad growth within the infrastructure. AI, clearly, faster, but even the non-AI portion, after going through maybe a couple of quarters of an inventory correction very early in the year, has started to come back again.

Matt Murphy

Yes. I think that's an important point, because we've had some great meetings this morning, and it continues to be a worry on investors' minds: with the shift, the hard pivot to AI CapEx spend, what gets impacted on the traditional cloud infrastructure side, and who gets impacted, because there's only so many dollars available, right? But what we're seeing in our business real time, given our product mix, which on the traditional systems is not really compute intensive, it's really networking and connectivity intensive, is that business is growing really well. And we guided our third quarter in data center up kind of mid-teens.

We said that was with a headwind, by the way. There's a piece of that data center business that's on-premise, that's legacy, that's actually down. So you've got to think, okay, if that's down, then the rest of it's up. And we said, obviously, AI is up a bunch. But the traditional cloud infrastructure stuff is up like double digits plus, Q2 to Q3. That's what we guided. And we said it was going to keep going in Q4, and it was going to grow through next year. So yes, for the reasons I mentioned earlier, our product mix lends itself to growing in both segments of the cloud, if you will, even if there's a CapEx shift.

Atif Malik

Sure. So that's an interesting observation. Is this specific to you, that the non-AI part of the cloud is also growing? Or is there a lag effect between compute and networking in when things grow?

Matt Murphy

Yes, I don't think it's specific to us. I mean, I think you could ask our large peer competitors, and they're probably seeing that business grow. And I actually look out to next year, really the next couple of years, and there's a big Ethernet refresh cycle coming again, right?

Because remember, most of the networking today is done on the 12.8 terabit per second switch platforms that are out there. We have some portion of that. We have one large competitor that does really well there. Mostly, the industry skipped the 25.6 generation, and everyone kind of waited for the next one, which is at 51.2, because you get a quadrupling of bandwidth.

So those products are going to be released to the market industry-wide, ourselves and really one other large competitor. And I think that's going to drive a very significant networking silicon TAM expansion and upgrade cycle over the next few years. And that will be driven by some of the AI stuff, but also it's been like four years, right, since the 50-gig I/O generation; this is the 100-gig transition.
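[Editor's illustration] A quick sanity check on the generational arithmetic here (the generation list is taken from the discussion, not from a product roadmap):

```python
# Ethernet switch silicon generations double aggregate bandwidth each step.
gens_tbps = [12.8, 25.6, 51.2]

# Each generation is 2x the prior one...
assert all(b == 2 * a for a, b in zip(gens_tbps, gens_tbps[1:]))

# ...so with most of the industry skipping the 25.6T generation, as Matt
# describes, the jump from 12.8T straight to 51.2T is a quadrupling.
print(51.2 / 12.8)  # 4.0
```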

So anyway, it's pretty exciting, because you've got this networking tailwind that's a little bit independent, but obviously helped by AI. And then on connectivity, we actually have more content and more dollars if it's an AI system versus not, right? So that trend benefits us too.

Atif Malik

Great. Just staying on Inphi. How do you see the changing AI landscape impacting DSPs versus linear drive versus co-packaged optics? I mean, it sounds like AI is accelerating everything, and that should help you guys protect your 90%-plus market share.

Matt Murphy

Yes. So I think there's kind of a short-term view of this and then there's the longer-term view. And in the short term, all of these current systems that are out there being deployed, AI systems, I mean, these designs and qualifications were done two or three years ago, okay? So a lot has been baked already, because the qual process has been long. These systems have been under development.

And so for the current generations we see in the foreseeable future, this need for pluggable optics is only going to continue, okay? And that's going to be the preponderance of all the deployments for a long time. And there are a lot of reasons for that. But the main ones are scalability, interoperability, assurance of supply and the fact that once it's qualified, it's qualified. And if you need to change it, it's pluggable, so you can remove it. And there are some challenges with some of the other technologies you mentioned.

But I would say, longer term, our view is, if you really believe the accelerated computing trend is going to drive a massive disruption in the TAM for semis, the number of ports that is going to be deployed is going to explode, okay? And our view is we never want to get caught in the innovator's dilemma. We're not head in the sand, well, we have this one business. We're looking at, hey, how do you develop and deliver the best solution for these customers, right? So we're not opposed at all.

We can get content, by the way, in linear direct drive. It's not a problem. We may not get the full DSP, but we can get a TIA, we can get a driver, we can actually help our customers solve problems. Same in the area of co-packaged optics or silicon photonics. I mean, we're shipping high volume today of silicon photonics. You look at every Marvell/Inphi 100-gig or 400-gig ZR module that we ship for data center interconnect, we have our own silicon photonics inside. We're in high-volume production. So we have that technology for sure.

The question is, when is it needed? Is it deployable at scale? Does it work technically or not? And then what's the trade-off between, hey, I can just ramp up today because I know I can get access to a myriad of pluggable optics supply from a number of companies, or do I go more proprietary and bespoke? And our view at Marvell is we're prepared to supply the necessary technology to the industry to really enable and drive accelerated computing. And it's not a negative.

People shouldn't go, oh my gosh, if that happens -- and that was a worry back at OFC, right -- Marvell's business is going to disappear this year because somebody showed a demo of linear direct drive. We've known what linear direct drive is for a long time. It hasn't made sense yet, you know what I mean? And I don't think that's played out. I think our optics business has only gotten stronger this year, and we're going to get stronger next year.

But we're not burying our heads in the sand. And I think if we do this right, we can be the invaluable supplier to our customers by providing a suite of options for them. And the pie will only get bigger if you can do things in a cost-optimized way. So if somebody doesn't need pluggables, that's okay. We have a whole plan to go address that. And if somebody wants to do really, really dense customized designs that are controlled by them with silicon photonics, that's something that we can invest in as well.

So we have the building blocks. It's really about what actually makes sense and what's going to get deployed. So we're looking at that more than, here's a PowerPoint we could show, and it's got some cool things. And that's fine, but I think we tend to look at things very practically at the end of the day at Marvell. And so we're prepared. We're okay, because the number of ports is going to be so big, it doesn't even matter.

Ashish Saran

Now in a realistic time frame, meaning three to five years, for pluggables, it's not just our view, I think the industry view remains that that is the technology of choice. We're already shipping 800-gig. We have already announced, and we're the only ones who have announced, actually, a 200 gigabits per wavelength product, which is a 1.6T product, which is absolutely critical for increasing densities when you go to these next-generation clusters.

So the reality also is it's not just about what can compete on the current generation, which is where these alternative technology demos have taken place. The reality is we've already gone to 1.6T, and you should believe we have 3.2T on the roadmap, right? And the feedback from our customers, which is what matters at the end of the day, is that pluggables are the primary choice; for certain niche applications, they absolutely want us to investigate alternative solutions on a longer time frame in case we do need them at some point. So that's kind of the summary, I would say, of where we see the industry going.

Matt Murphy

And I would add a final point. The way to think about it, too, is the faster the beat rate of these upgrade cycles for AI systems, the longer, quite frankly, pluggables last. Because otherwise, you're making a trade-off that says, let me slow everything down, try this brand-new technology and hope it works, so I can save a dollar. And I think there will be a time for that, for sure. But our thesis internally is, as long as our customers want to keep cranking at this level of product development, pluggables are going to be around for a very, very long time, because it just doesn't make sense to halt everything and try to switch to something new.

Atif Malik

Great. Just to finish off the data center discussion. Parts of that business were weaker on the last earnings call. Storage, though it's growing, kind of remains subdued because of your end customers, and there's demand weakness in Fibre Channel. So walk us through what parts of your data center business are slowing down. And when do you expect those areas to stabilize?

Matt Murphy

Sure. Well, on the storage side, as it relates to data center, I guess our view was, well, it can only go up. I mean, it basically completely bottomed, right, in our first quarter. And so we actually said, hey, good news, it grew in Q2, and it's going to grow again in Q3. But the real million-dollar question is, when does it come back to whatever run rate you want to pick? A lot of people want to say, kind of the pre-pandemic level. And we strongly believe it has to come back to at least close to where it was at some point, right, whether it gets to 80% of that or 85% of that.

So yes, storage data center grew from Q1 to Q2, and it's going to grow again Q2 to Q3. But we said on the last call, we just wanted to reset expectations, because it's hard for us to predict. We're back in the supply chain on this, and it's really hard for us to read through all the layers of the chain to get to where the actual inventory is and what's going on there. But our view, looking all the way through to the end customer level, is that the TCO case for them to continue investing in new storage technologies to drive all this exabyte growth is still intact. That was sort of the reassuring thing, right?

On the hard drive side, on the flash side, despite the inventory and kind of how those companies in the middle are doing, the end usage is considered to be just mission-critical for these large cloud companies. I mean, it's actually how they measure, in a lot of ways, the value of their customers: hey, how many petabytes are you going to bring me, right? Because that's storage in, and it's really hard to get the storage out. So we think exabytes will continue to come back and grow. We think that in the cloud, the preponderance will still be hard drive-based and near-line drive-based.

It's just very hard for us to predict when it comes back. And I think even if we try, we're not going to get it right. So we've just pushed it out to the right and said, at some point, it comes back. And then what was a headwind becomes a tailwind at some point. But we said it pushed out meaningfully from where we were before, which was basically, a quarter or two ago, we thought that by the fourth quarter we would probably be back, maybe a little lower than we were, but kind of getting back in line. And it seems like the whole industry slid that a couple of quarters to the right.

Ashish Saran

Yes, I think the way I think about it is, it comes back at some point, and that's a net positive for us. In the near term, it quite frankly doesn't matter. We're powering right through some of these kind of mini downturns, right, whether it's storage, whether it's enterprise on-prem. If you look at our overall data center footprint, revenue from cloud is significantly, significantly higher than enterprise; enterprise has become a much smaller part of the business, and that's going to continue, quite frankly, even when it comes back, because the cloud portion just keeps growing much, much faster. And with AI, it's got a kicker on acceleration, essentially, right? So as I look at the back half of this year, as we guided, we said we'll be up mid-teens sequentially, which is all driven by cloud, which is growing a lot faster, and you should expect that's going to be a bigger number as you get into Q4.

Matt Murphy

Yes. There's always the glass half full, half empty, right? Take the example of 2019, when we had that fun little downturn in the industry. If you remember, there was the whole 2018 tariff thing and the correction. There was a storage correction we got hit with, and we took a lot of pain during that cycle. If you remember, we still had some legacy PC hard drive exposure that always needed to kind of get wrung out. And instead of taking like two years to wring out, we wrung it out in like two quarters. And then that exposure was gone.

So to the point, when things actually came back, we had gotten it out of the way. And kind of to your point, on the mix in our data center business of cloud and AI versus enterprise on-prem, as painful as it is to go through a down cycle, the good part, the growthy part, will be a higher percentage of that business just structurally, even when the legacy stuff normalizes, if that makes sense. So the mix just gets better. The glass-half-empty view is it hurts now; the glass-half-full view is you feel better on the other side.

Atif Malik

Got it. And then on the custom ASIC compute side, you guys have talked about working with two hyperscale customers. And we also hear from your competitors that they're involved with certain hyperscalers. So the question I get from investors is, how do we get confidence in you ramping up sales with those hyperscalers? And longer term, is there increasingly more competition in this market, too, from Asian fabless players coming in as well?

Matt Murphy

Yes. Well, I think the first question is the million-dollar question. I wish we could provide better visibility to investors at this point. But I think everybody understands how dynamic and how fast-moving the whole Gen AI thing is. I mean, just look at our optics, which is like a small portion of that, and how much our view of what that business could do this year changed in like two quarters.

And so now, trying to call the ball on next year's ramp of some of these custom programs, it's just a bit early. They're both tracking in line with what we had said in the last couple of quarters in terms of new product development and qual activities, so that's positive. I think we need more time to really give investors a better view of the real scope and revenue expectation. The good news is that a year from now, that business will have ramped up. It will be at a certain level.

And then we'll be able to actually understand the run rate. And then, in theory, '25 gets a little easier, if you know what I mean. But it's just very new for us. On the competitive side, look, I think the custom silicon and custom ASIC market has really moved; the TAM has really moved to data center, right? It used to be heavy enterprise, heavy carrier, consumer, and the volume has really shifted. And so companies are trying to move there.

I just think, to do the really bleeding-edge, state-of-the-art, Tier 1 hyperscale-class custom ASICs at the bleeding edge of complexity, there's a huge barrier. And in my view, there's really us and one very, very good competitor in North America who have the process technology, packaging, IP, scale, supply chain relationships, long-term planning, focus, suite of products to sell, and who can be completely trusted to actually deliver, right, and deliver in a system for three, four, five years and be able to meet all of the requirements that are needed.

And I think there is also geopolitical concern as well, when you talk about who's going to trust their business ultimately to a partner. I think more and more, the U.S. guys are going to look at the U.S. guys and so forth. And it's just really hard to do. And I think it only gets harder, and the distance only grows, quite frankly, as you have to make these technology jumps, because it's not just nanometers. It's not, well, I had 5 nanometers, and now I'm going to 3 nanometers. You've got to double the I/O speeds.

You've got a new CPU subsystem, you've got a whole new suite of IPs you've got to go develop, and everything just gets harder. And because Moore's Law is slowing down, you're just not getting the bang for the buck as much as you used to. So you're having to solve thermals in a different way, and power management. So I think the complexity is going up dramatically with each of these generations, as are the cost to do it and the scale required. And I just think it's going to be more and more rarefied who can really compete for the long term there.

Atif Malik

Let's see if there are any questions in the audience. If you have a question, please raise your hand, and the mic will come to you.

Unidentified Analyst

Can you just talk a little bit about your shareholder returns and capital allocation going forward? I know that you all talked about repaying some debt, and maybe earlier this year that you were going to resume shareholder returns. But given some of the macro headwinds that you face, and leverage kind of trending higher right now after going down for a couple of years, how do you weigh capital allocation versus the change in direction in leverage?

Matt Murphy

Sure. Yes. No, thanks for the question. And to your exact point, I think we've made really good progress on the leverage of the Company over the last few years. We definitely leaned in to buy, and stretched, to get Inphi. And over time, we've continued to drive the trailing 12-month EBITDA up, and we just paid down $500 million in June, so that's moving in the right direction.

To your point on some of the macro stuff we're dealing with, it's definitely impacted free cash flow and a few other items; you guys all see that. We're having to work through that in the cycle. But our view very much is to focus like crazy on that and then really resume buybacks and shareholder return. I mean, we've focused on shareholder return. We worked really hard to put together this portfolio of technology.

And we did a lot of M&A to go do it. So there were times when we were off spending money over there. But our view has been very consistent really since our Analyst Day in 2021, which is: we did what we needed to do to put together the portfolio we needed. And now it's really about driving our organic growth, and anything excess that we've got is really focused back to shareholders.

So we're laser-focused on it. We're a little behind relative to where I wanted to be, just given, quite frankly, this macro pocket. And at the same time, we've got some of our businesses going through an inventory correction. And then we've got this massive AI upside, and that, quite frankly, has tied up some capital, right, because we're doing a bunch of wafer starts and having to buy ahead. So I hope that answered your question. Do you want to add to it?

Ashish Saran

Yes. I don't think anything has actually changed from a financial policy standpoint. As Matt said, our focus is organic growth. We said we were going to pay that $0.5 billion of debt down, which we did. And we did start buybacks, basically in Q3. And you should expect nothing to really change going forward, right? So we're back on a growth track. As you'll see, the operating leverage is starting to kick back in, and you'll see more of that as we go through the back half of this year into next year. So overall, I would say nothing has really changed, and you should expect us to remain very consistent from a financial policy perspective.

Unidentified Analyst

Yes. I have a quick question on the [indiscernible] business, or maybe even [indiscernible] both of them together. I guess one thing that would be helpful would be, as we move forward, as we hopefully move towards more inference versus training, is that where custom silicon will have a bigger opportunity? Or is it when models start to get optimized? Just wanted to have a better understanding of when custom [indiscernible].

Matt Murphy

Okay. I could hear that. I'm worried the mic wasn't on. Did everybody get the question? Let me just repeat it real quick. The high-level question was: you have AI, you have GPUs, you have custom ASICs. What does that mix look like over time? And maybe a second question: how much of your stuff is training versus inference? How does that play out? How do we think about that?

Yes. We've had a view on this for probably four years, okay, which was the following. And this was when AI was much more nascent, but we were involved in it. Some of you may remember when we acquired Cavium, we had an AI chip; they had an inference chip. We brought it over. It was called M1K. We had a whole team working on it, and we actually shut it down in 2019: got out, closed it, packed it in.

Our view then, myself and my President, Raghib, was that this AI market was going to be NVIDIA plus the hyperscale companies doing their own custom chips. That's it. No one else was going to be successful. And I think we were right, at least so far. And of course, NVIDIA has just gotten much bigger than anybody could have comprehended, which is just an amazing job they've done. At the same time, the growth of custom for AI, driven by one large company now, but I think there's more coming, that's definitely happened, too. That's a much bigger spend.

So our view is it's going to continue to coexist. That's our view. And I can't really get into us and what we do and what our mix is going to look like and what we're working on with our customers. That's obviously very, very sensitive, and there are already enough articles and rumors about Marvell on this or somebody on that and it's this chip. So I really can't comment on our mix. But our view is that those continue to coexist, because there are different opportunities. In broader accelerated computing, it's not as if you make one chip and you solve everything.

And I think the people that are doing their own custom programs are going to keep doing those and keep optimizing. And you see all of them are also announcing with NVIDIA, too. So I think people are going to figure out how to make all this work. And in the end, this will all be good for anybody that believes accelerated computing is a real game-changing, disruptive industry trend. And of course, for us, we provide all sorts of the basic building blocks to enable all that, even outside custom silicon, right, with our optics and our switching products.

Unidentified Analyst

Maybe just as a follow-up to that. As you said, there's this $4 billion of a competitor's business, and $3 billion of that is one product. Do you think that's the way custom silicon goes? Is it some lumpy whale hunting, or is it lots of projects that you can build up into a sizable business?

Matt Murphy

Yes. Hard to say, but I would say, based on what's happened so far -- let me say this differently. To justify the investment to do one of these chips, it's going to, by nature, be very large and very significant. There's honestly no such thing as a small 5-nanometer design, much less a 3-nanometer design. I don't know exactly what you're going to spend, but somebody's got to be willing to spend $1 billion over its lifetime. It's got to be a big enough thing to justify the cost to develop it.

So I think there are fewer and fewer big, big sockets as you go to these newer nodes, and so, by design, any custom program is going to be significant, I would say. So the stakes get higher, but also the value you can deliver -- if you do it really well, if you can really nail it, the TCO savings you get from that spend could actually be enormous. And that's not just an AI thing. That's also in other markets that we serve, or other opportunities in data center. These companies are always going to look at the TCO and the return. But so far, that trend is only going in one direction.

Unidentified Analyst

Can I ask a question on the attach rate of PAM-4 DSPs to GPUs? I guess you said in some cases it goes above 1:1. My question is, what use cases drive that attach rate to go up? And as we move to 1.6T and beyond in the future, how does that impact the attach rate?

Ashish Saran

Maybe I'll take that. So in AI, the reality is the attach rate of our DSPs to an accelerator is almost always well above 1:1. The 1:1 was one example, essentially, which is attaching clusters to each other directly. But that's just the first level. Think about it from, like, server to top-of-rack connections: the first hop is already optical in an AI system, versus typically not in a traditional server infrastructure. So you get the 1:1 right there itself.

But then remember, you've got an entire layer of leaf and spine switches, which have to connect to each other, right, to actually form the network, which has a huge number of additional optical connections. Now every customer is slightly different. So we can't give you an exact ratio. But the key point is, to your question is the attach rate is significantly higher than 1:1 in AI starting today.

Now, even using the highest-speed optical connectivity, which is 800 gigabits per second, you're actually not getting full throughput, right? The total bit rate within those clusters is almost 10x higher than what's coming out of those clusters. So what happens as you go forward is you'll most likely see the density of optical connections go up in terms of the physical number of connections. That's one way you fix the problem.

And the second way you fix it is, to your point, you go to the next generation. You go from 800-gig to 1.6T. In reality, both will happen, right? You'll go to more connections when possible, as well as go to the next higher speed. And that's how you essentially get more bandwidth. So that's how I would look at it.
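The bandwidth arithmetic behind that answer can be sketched in a few lines. All figures below (an 80 Tb/s cluster, the specific link rates) are illustrative assumptions, not Marvell or customer numbers; the sketch just shows why, for a fixed amount of traffic, doubling the link rate from 800G to 1.6T halves the number of optical connections needed, while keeping 800G means scaling the connection count instead:

```python
import math

# Back-of-the-envelope model of optical link count versus link speed.
# All numbers are illustrative assumptions, not vendor figures.

def links_needed(cluster_tbps: float, link_gbps: float) -> int:
    """Optical links required to carry cluster_tbps of aggregate traffic."""
    return math.ceil(cluster_tbps * 1000 / link_gbps)

# Hypothetical cluster needing 80 Tb/s of optical bandwidth:
print(links_needed(80, 800))    # with 800G optics -> 100 links
print(links_needed(80, 1600))   # with 1.6T optics -> 50 links
```

In practice both levers move at once, which is the point Ashish makes: more physical connections per system and a higher rate per connection.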

Atif Malik

Okay. We're almost out of time. Matt and Ashish, thank you for coming to Citi Conference.

Matt Murphy

Thank you.

Ashish Saran

Thank you, Atif, thank you for hosting us.