5G Will Connect Nearly Everything... and Will Completely Change Your Business

Watch an hour of television and you’ll probably see at least one commercial touting the benefits of a 4G network. (Yes, we can all hear you now.)

But that will change sooner than you think, because 5G is already on the horizon. It has the potential to create a flexible network that connects almost everything, from smart parking meters to self-driving cars, with near-instantaneous connectivity. And a few startups are already poised to take advantage -- and to change their respective industries.

One of those startups is Rift.io, which provides an open source platform to construct and automatically deploy scalable, virtualized network services. The Massachusetts company has raised funding from the likes of North Bridge Venture Partners and, more recently, Intel Capital.

Intel Capital made it possible for me to speak with Rift.io founder and CTO Matt Harper about the problem his company solves and how 5G will help solve it. Matt is a smart and accomplished guy: he has been VP of Technology at Affirmed Networks and System Architect for Starent Networks and for the USR/3Com Total Control multi-access system… and he holds 25 hardware and software communication systems patents.

So: if you sense I struggled to keep up… you’re right.

When I talk to startups I like to start with "who," because "who" always informs "why."

That’s certainly true in our case. Our CEO was a founder of Starent Networks, which was acquired by Cisco. The products they created now carry the vast majority of 3G and 4G mobile data in the world.

A number of our people were involved in developing broadband access, dialup access, robotics, credit card authorization systems… we have tremendous experience building telecommunications-grade services that are deployed by Fortune 1000 companies. We basically built the last generation of hardware-based 3G and 4G equipment, and the architecture for those solutions was essentially cloud-in-a-box.

Which leads us to why…

We saw a fundamental transformation taking place in the industry. Internet and telecom service providers were being asked to deliver more and more new types of services, very quickly, and to deliver them at a radically lower cost. In short, they were asked for much greater service velocity at an order-of-magnitude lower cost per bit.

In simple terms, they looked at the model of companies like Google and Facebook with envy, but fundamentally there is a huge difference between what those companies do and what service providers do. A service provider's fundamental business is building networks and deploying packet-transit functions on top of those networks. Firewalls, routers, tunneling boxes, subscriber management boxes… those are the things they do. That is very different from what over-the-top, web-scale companies do.

We knew the problem space, and we decided that to accomplish what they want to accomplish, service providers need a platform like the web-scale companies have. They needed a mashup of web-scale and telco technologies.

That makes sense… but I have no idea how you get from here to there.

This new platform is like a table with four legs. One leg is effectively a platform of open hardware. Instead of vendor-specific commercial boxes, the community needed generic servers like the web-scale guys use, plus programmable white-box networking switches.

On the computer side, Intel integrated PCI Express lanes directly onto their CPU sockets, enabling high-throughput I/O. That effectively turned general-purpose CPUs into network processing units: developers realized they could now use standard Intel processors to do what dedicated NPUs had done in the past. That's the first leg of the table.
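
To make that concrete, here's a toy sketch of packet processing on a general-purpose CPU: a user-space Python loop that reads raw Ethernet frames and applies a firewall-style rule, the kind of job that once required a dedicated NPU. This is purely illustrative -- real NFV data planes use kernel-bypass frameworks (DPDK-style) written in C -- and the interface name eth0 is an assumption:

```python
# Toy illustration of packet processing on a general-purpose CPU:
# read raw Ethernet frames in user space and apply a firewall-style
# rule. Real NFV data planes use kernel-bypass frameworks in C;
# this only shows the shape of the idea. Linux-only, requires root.
import socket
import struct

ETH_P_ALL = 0x0003  # capture every Ethernet protocol

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
sock.bind(("eth0", 0))  # assumes an interface named eth0

BLOCKED_ETHERTYPES = {0x0806}  # e.g. drop ARP, purely for illustration

while True:
    frame, _addr = sock.recvfrom(65535)
    # Bytes 12-13 of an Ethernet II frame hold the EtherType.
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype in BLOCKED_ETHERTYPES:
        continue  # the "firewall" drops the frame
    # A real VNF would forward, tunnel, or meter the packet here.
```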

Another leg was creating an open set of platforms to control these programmable servers, along with software to control the programmable switching devices. With the advent of OpenStack and OpenDaylight, it became possible to control large pools of computers and switching architecture.
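
As a rough illustration of what that control layer looks like in practice, here is a minimal sketch using the openstacksdk Python client to boot a VNF virtual machine on an OpenStack cloud. The cloud name and the image, flavor, and network names are hypothetical placeholders:

```python
# Minimal sketch of programmatic control over pooled compute via the
# openstacksdk client. "my-cloud", "vnf-image", "vnf.small", and
# "mgmt-net" are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="my-cloud")  # assumes a configured clouds.yaml entry

image = conn.compute.find_image("vnf-image")
flavor = conn.compute.find_flavor("vnf.small")
network = conn.network.find_network("mgmt-net")

server = conn.compute.create_server(
    name="firewall-vnf-1",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)  # block until ACTIVE
print(server.name, server.status)
```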

The third leg is one we wanted to address: automation. Web-scale companies have done quite a bit of automation, but much of that work has been done without a model in place; effectively, it's all ad hoc automation. What we need is model-driven automation. Orchestration is automation based on a model: the idea that people are going to need components to be connected, and you need standardized elements so they can be automated in conjunction. You shouldn't have to write custom code for each element you want to connect. We took on that leg and built an ETSI MANO-based orchestration framework.
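
To make the model-driven idea concrete, here is a hypothetical, heavily simplified network-service model sketched in Python. It is loosely in the spirit of ETSI MANO descriptors (real OSM descriptors are YAML and far richer); the point is that one generic routine can deploy whatever the model declares, with no per-element custom code. The infra object stands in for an infrastructure API:

```python
# Hypothetical, heavily simplified network-service model in the spirit
# of ETSI MANO descriptors. Because every element follows the same
# schema, one generic routine can deploy any service.
SERVICE_MODEL = {
    "name": "branch-office-service",
    "vnfs": [
        {"id": "fw", "image": "vfirewall", "instances": 2},
        {"id": "rtr", "image": "vrouter", "instances": 1},
    ],
    "links": [  # virtual links connecting VNF ids
        {"from": "fw", "to": "rtr"},
    ],
}

def deploy(model, infra):
    """Generic, model-driven deployment: walk the model and call the
    infrastructure API; nothing here is specific to any one VNF."""
    for vnf in model["vnfs"]:
        for n in range(vnf["instances"]):
            infra.boot(f'{vnf["id"]}-{n}', vnf["image"])
    for link in model["links"]:
        infra.connect(link["from"], link["to"])
```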

The fourth leg is compelling VNFs (Virtualized Network Functions; think software-only routers or firewalls). You can have better orchestration, but if telco-grade VNFs don't show up to the party, that's a problem. Unlike telco hardware with full internal redundancy, web-scale servers are not nearly so robust -- so we decided to take on that leg as well. We developed a hyper-scale engine that shows how to build distributed VNFs that are model-driven, elastic, and resilient… and we dogfooded that distributed application framework by building our orchestrator on top of it. Essentially, we simplify building compelling telco-grade VNFs that can run on off-the-shelf hardware.
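
One plausible shape for that resilience is a reconciliation loop: a supervisor that assumes any instance can die at any moment and continually converges the running set back to the counts the model declares. This sketch is my illustration, not Rift.io's actual engine; list_healthy and boot stand in for a real infrastructure API:

```python
# Sketch of resilience on unreliable hardware: failures are treated as
# the normal case, and the loop converges reality toward the model.
# list_healthy/boot are stand-ins for a real infrastructure API.
import time

DESIRED = {"fw": 2, "rtr": 1}  # instance counts declared in the model

def reconcile(infra):
    while True:
        for vnf_id, want in DESIRED.items():
            have = len(infra.list_healthy(vnf_id))
            for _ in range(want - have):
                infra.boot(vnf_id)  # replace failed instances
            # (a fuller version would also scale down surplus instances)
        time.sleep(5)  # re-check continuously; never assume a steady state
```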

Why would you model on unreliable hardware?

Traditionally, vendors who build telco software assume only a single failure can happen at a time. Unlike a commodity server, legacy telco hardware itself had to be five-nines reliable (99.999% availability), with everything fully redundant… basically, it needed to be nearly impossible to have multiple faults at any given time.

In a cloud world, losing multiple servers at once is actually very common. Delivering telco-grade reliability on top of web-style infrastructure is what we want to address.
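
Some back-of-the-envelope arithmetic shows why the single-failure assumption breaks down in the cloud (the failure rates below are illustrative, not measured):

```python
# Back-of-the-envelope math with illustrative, not measured, numbers.
# A five-nines telco chassis is down ~5 minutes per year. Commodity
# servers are individually far worse, so a fleet sees failures constantly.
fleet = 100          # commodity servers hosting one distributed VNF
p_down = 0.001       # assume each server is 99.9% available (three nines)

# Probability that at least one of the 100 servers is down right now:
p_any_down = 1 - (1 - p_down) ** fleet
print(f"P(at least one server down) = {p_any_down:.1%}")       # ~9.5%

# Expected number of simultaneously failed servers:
print(f"Expected concurrent failures = {fleet * p_down:.2f}")  # 0.10
```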

I get the four legs. But how do you get there?

For this whole thing to work, and to break down all the vendor-specific silos, there needs to be a level playing field for various vendors to bring in their VNFs.

The operator needs to control that environment, and the only way to do that and create a level playing field is to have an open source community. We worked with Intel, Telefonica, and others to build a service-provider-led community, where the service providers come up with the requirements instead of the commercial VNF vendors.

Keep in mind there are only a few service-provider-led communities. Typically, open source tends to be led by a single vendor, who then pushes their agenda in that community.

That's why having this be operator-led makes such a big difference. The key to NFV (Network Functions Virtualization) is to give the service providers control. And there's good precedent for the operators running the show: carriers have learned they need to act as the system integrators for their own networks, and that standards shouldn't exist just in hardware, but also in software. They need an infinitely wide chassis with software they control, so vendors that build the associated hardware can't create silos.

How does 5G play into all this?

Let's fast-forward to 5G, specifically network slicing in 5G. Operators need to be able to provide differentiated services in their Evolved Packet Core (EPC) and offer different SLAs (service level agreements) for different types of data, different levels of resiliency, and different types of inline services… and do all of that much faster than existing 4G networks built on custom hardware allow. And they need to be very agile, so they can roll out new types of services and innovate new solutions quickly… perhaps even on a per-customer basis. That's how this technology rolls into 5G.
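
As a hypothetical sketch of what slicing means in configuration terms: each slice below is the same schema with different SLA parameters, so a new differentiated service -- even a per-customer one -- becomes a data change rather than a hardware project. The field names are illustrative, not 3GPP-exact:

```python
# Hypothetical sketch of 5G network slices as data. Each slice reuses
# the same schema with different SLA parameters, so differentiated
# services become configuration, not custom hardware. Field names are
# illustrative, not 3GPP-exact.
SLICES = {
    "massive-iot": {
        "latency_ms": 100, "availability": "99.9%",
        "inline_services": [],            # bare-bones, pennies per session
    },
    "autonomous-vehicles": {
        "latency_ms": 5, "availability": "99.999%",
        "inline_services": ["firewall"],  # safety-critical traffic
    },
    "enterprise-video": {
        "latency_ms": 30, "availability": "99.99%",
        "inline_services": ["firewall", "dpi"],
    },
}
```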

That means you’re building for a future state, not the present.

As I mentioned earlier, many of our folks helped build 3G and 4G, so we see what will happen with 5G. We know new types of services like the Internet of Things are going to proliferate, and instead of selling EPC capacity for a dollar or two per session, you need to be able to offer it for pennies per session. We see the inevitability of these markets requiring a radically different platform.

Fundamentally, all software is that way. You always want to out-innovate yourself. If you're not innovating and building new products that make your old products obsolete, someone else will.

5G will be transformational… but what are the barriers to proliferation?

In the telco world, introducing new technologies like 5G takes place on very long cycles. It's very capital intensive. If you look at what it costs to roll out new radio technologies -- particularly something like 5G, which will require more cell sites because of its spectrum requirements -- there has to be a business case that justifies it.

The same is true of the investments required to build ASICs (Application-Specific Integrated Circuits) for handsets. Those are long cycles requiring huge investments. What will gate 5G are the compelling business cases to build devices and deploy services.

In a way that’s like auto manufacturing; it’s hard to start a car company, because the capital requirements are so big.

There’s a famous quote from the ex-CEO of AMD, Jerry Sanders, that “Only real men have fabs.” As technologies get more mature and more complex, as you integrate more and more components, the capital expense is just tremendous.

If you want to roll out a 5G service across the spectrum, the cost is in the billions of dollars. It’s not a startup business. It’s a tough business just based on the investment in IP, and then the physical investment to build a fab. Even with the technology we’re building, the cost involved to roll out 5G means the major players will have to drive widespread use.

The business case will be driven by customer demand. The question will be when, not if.

Absolutely. These transformations give the consumer a lower cost per bit and, importantly, a lower cost for the first bit. Many IoT applications are low bandwidth, but they also need devices that are super low cost and super low power. The IoT is a distributed sensor network: devices transmit small amounts of data, so what will make it transformational for the consumer is lowering the cost of the first bit.
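
Plugging Harper's own per-session figures into some rough arithmetic shows why (the fleet size is a hypothetical illustration):

```python
# Illustrative numbers only, using Harper's figures: EPC capacity sold
# at "a dollar or two per session" versus "pennies per session."
devices = 10_000_000          # hypothetical IoT fleet, one session each
per_session_4g = 1.50         # dollars (midpoint of "a dollar or two")
per_session_5g = 0.02         # "pennies per session"

print(f"4G-era core cost: ${devices * per_session_4g:,.0f}")  # $15,000,000
print(f"5G-era core cost: ${devices * per_session_5g:,.0f}")  # $200,000
# A sensor fleet earning cents per device only closes at the second price.
```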

What is your biggest challenge?

The challenge with orchestration is building a strong community.

It’s great being a vendor in isolation. You can do whatever you want, you can capture some customers and innovate with those specific customers… but for this market to be transformational, the key is to capture an entire community.

If you look at OSM (Open Source MANO), the amount of community engagement is tremendous. We have more than 60 members now; some of the recent notable additions are Verizon and CableLabs. The fact that some of the traditional players, who usually wait, are not waiting is a game changer. We also see operators realizing the way they’ve operated in the past -- getting both their VNFs and their infrastructure from the same vendor -- doesn’t provide them with an optimal solution. They need an open infrastructure they can bring all the vendors into, so they can eliminate silos and make sure their VNFs actually work.

You’ve mentioned open source a couple of times. Was there any hesitation about going the open source route?

We had no hesitation about open source. In fact, that was our plan from day one. Open source is the key to making this market work, because the people who have been in this space for a long time have a vested interest in keeping things closed.

What we’re attempting to do is disrupt a space, and to do that you need disruptive technology -- but you also need inflection points to insert that technology. 5G is one of those inflection points, and the IoT is another compelling use case… The economics are rapidly changing for the operators, and they need a fundamentally new way of building services and maintaining service agility.

Service providers really need this technology, too. Not having it is an existential threat. 

One of the problems is that we are treating RF spectrum the same way oil and fossil fuels were once treated: as cheap and infinite resources. In reality, spectrum, like oil, is getting scarcer by the day, and not all regions of the electromagnetic spectrum have the same properties in terms of propagation and the ease and cost of generating signals. With this in mind, assuming that the world's electromagnetic wants (rather than needs) can be endlessly pushed into higher and higher frequency parts of the spectrum is, in my view, seriously unrealistic.

Victor Hernando

Contract & Scope Management

7y

The key to full deployment will be the business case. And that is not good for shaping the future.

Matt Green

AI-enabled Smart Communities and Organisations - Data Driven Digital Transformation - Reality Modelling

7y

the future is forming ... it's feasible, flexible and fast

A few 5G benefits:
- Much higher bandwidth per device, ~50x-1000x (think augmented reality, VR, video)
- Radically lower latency (self-driving cars communicating in real time, augmented reality, etc.)
- Support for a massive number of devices (100B+ devices - IoT)
- Much lower device power (improved battery life for IoT devices, low-power USB-powered 5G data dongles)

Tony Isaac

Sr. Engineering Manager at TCP Software

7y

What problem does 5G really solve? I'm an IT professional, and after reading the interview, I don't get it. Mr. Harper said a lot of stuff that sounded great, but he didn't really say anything concrete, as far as I could tell. It was "5G will make everything better" without actually illustrating what kind of thing 5G would make better. I'm all for 5G, but I'm wary of the "Internet of Things."
