WLAN Vendors: The NBASE-T Ball is in Your Court

I’m going to echo the position of Marcus Burton, Lee Badman, and Andrew von Nagy. Despite the marketing push, there really is no need for a > 1 Gbps link to an 802.11ac Wave 2 AP. Not at the access layer, at least.

https://www.youtube.com/watch?v=2dVMs5_Kgew

This Wi-Fi gauge goes up to 6.8 Gbps, so that means we need a 10 gig port for every AP, right?

Back to reality: 802.11ac Wave 2 tops out at roughly 7 Gbps with 8 spatial streams and 160 MHz channel bandwidth, but most deployments will use 4 spatial streams at most, cutting that in half. If you’re lucky you’ll use 40 MHz wide channels, cutting that four-fold. Then take off another 40% for layer 2 overhead and you get roughly 500 Mbps at half duplex on the wire. Maybe a gigabit with 80 MHz channel width and absolutely ideal conditions that don’t exist outside of the lab.
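Here’s that derating arithmetic as a quick sketch. The stream counts, channel widths, and overhead percentage are the rough assumptions from the paragraph above, not measured numbers or vendor specs:

```python
# Back-of-envelope derate of the 802.11ac Wave 2 headline rate,
# following the rough assumptions above (illustrative only).

headline_gbps = 6.9        # 8 spatial streams, 160 MHz, 256-QAM, short guard interval

rate = headline_gbps
rate *= 4 / 8              # realistic radios: 4 spatial streams instead of 8
rate *= 40 / 160           # 40 MHz channels instead of 160 MHz
rate *= 1 - 0.40           # ~40% lost to layer 2 and protocol overhead

print(f"Rough usable throughput: {rate * 1000:.0f} Mbps, half duplex")
# -> roughly 500 Mbps, well within a single gigabit uplink
```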

Even if you have room for 160 MHz wide channels, which might be possible if the FCC expands 5 GHz unlicensed spectrum and client adapters are updated to support those new channels, what is it that clients are using that calls for > 1 Gbps of throughput? Remember that Wi-Fi is an access layer technology. What applications are you supporting today, or anytime in the foreseeable future, that call for > 1 Gbps to a small group of clients?

Probably nothing coming over a WAN link. That leaves LAN applications. The list of potential applications provided by the NBASE-T Alliance doesn’t establish the need very well. Security cameras and signage systems?

It’s hard to think of many scenarios where it will be necessary to provide > 1 Gbps of throughput at the access layer. I can’t think of any of those that wouldn’t be better served by a wired solution.

Point-to-point Wi-Fi links in the distribution layer could benefit from NBASE-T, but how short must those links be to support 256-QAM, and is 8 spatial streams really possible outdoors with limited multipath? I don’t know the answers to those questions, but I don’t imagine that many locations exist that need > 1 Gbps of throughput that haven’t already provided that with fiber. We’re talking serious edge cases here, not typical enterprise Wi-Fi.

In any event, despite the hype, it will be a long time before the need for > 1 Gbps switchports extends outside of the network core and distribution layers.

WLAN vendors have an important decision to make here. Because 802.11ac appears to be the major justification for NBASE-T switches, I imagine they are under a lot of pressure right now. To generate real interest in these new NBASE-T switches, AP’s will have to be built with NBASE-T interfaces that use the faster speeds. I assume those interfaces will be more expensive than standard gigabit interfaces. Given that NBASE-T supports 10 Gbps, I bet they will be a whole lot more expensive than the gigabit interfaces used today. Just a guess, though. Time will tell what this stuff really costs.

Will WLAN vendors become accessories to this marketing crime and include these potentially expensive interfaces in their Wave 2 AP’s? Aruba and Ruckus recently joined Cisco in the NBASE-T Alliance. We’ll have to wait and see their plans for the technology.

I hope that 802.11ac Wave 2 enterprise AP’s are still made with standard gigabit interfaces. Specialty AP’s like those used in point-to-point links could benefit from multigigabit interfaces, but the AP’s that are sold by the dozens for typical enterprise purposes do not need them. The added cost of an underutilized NBASE-T interface is not justified by real world needs.

Perhaps the usual product cycle will repeat itself. The first Wave 2 AP’s will have the highest end hardware and NBASE-T interfaces. Then the mid- and low-range AP’s that follow and actually get sold will have gigabit interfaces.

Whatever the case, it’s going to be interesting to see how the marketing hype about 802.11ac Wave 2 evolves as more people get clued in to its real world performance.

Hotspot 2.0 Can Disrupt the Cellular Marketplace

When it comes to cellular in the U.S. there are two major carriers, AT&T and Verizon, and everybody else. While Sprint and T-Mobile also compete in the national market, they have far fewer subscribers and a reputation for poor coverage. This has essentially been the state of affairs since Cingular bought AT&T Wireless in 2004 and continued business under the AT&T brand. There are some smaller regional competitors, but their market share is limited, and their customers roam onto one of the national networks when they leave their regional service area.

I think the combination of Hotspot 2.0 and Voice-over-Wi-Fi (VoWiFi), or “Wi-Fi Calling” as it’s known, has the potential to disrupt the current cellular marketplace dynamics.

Sprint and T-Mobile have been dropping their prices to try to attract customers away from the Big Two (AT&T and Verizon) for years, even offering to pay early termination fees and give trade-in credit for phones, but it appears that this has largely been unsuccessful. When you can’t make a call from within your own home or office, who cares how cheap the service is?

Part of the problem for T-Mobile is that much of the spectrum they own is higher in frequency than their competitors’, so it doesn’t penetrate buildings as well due to the increased attenuation that occurs as wavelength decreases. That’s a tough problem to solve.
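To put a rough number on the frequency penalty, here’s a free-space path loss comparison. The 700 MHz and 1900 MHz figures are just illustrative stand-ins for low-band versus PCS-band spectrum, not an account of any carrier’s actual holdings, and building materials add further frequency-dependent loss on top of the free-space figure:

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for distance in metres and frequency in MHz."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

# Illustrative comparison: low-band spectrum vs. PCS-band spectrum
for f_mhz in (700, 1900):
    print(f"{f_mhz} MHz over 500 m: {fspl_db(500, f_mhz):.1f} dB")
# -> the 1900 MHz signal loses roughly 9 dB more before it even reaches
#    the wall; penetration loss through the wall is worse still
```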


VoWiFi and Hotspot 2.0 can change all of that.

VoWiFi extends the network’s voice coverage into the subscriber’s home and office, where subscribers can easily connect their phones to the Wi-Fi network, which takes care of that concern. Sprint and T-Mobile could also partner with SOHO Wi-Fi router manufacturers so that Hotspot 2.0 roaming integration is preconfigured for their networks on these products. Imagine if a subscriber could buy a NETGEAR “T-Mobile Edition” router and have VoWiFi calling work out of the box, without any configuration on their phones.

Imagine if Sprint and T-Mobile aggressively pursued Hotspot 2.0 integrations with major public Wi-Fi providers. Their subscribers would have seamless VoWiFi coverage in the areas where they currently have the biggest problem: indoors. As public Wi-Fi continues to expand, the voice coverage for these carriers could expand right along with it.

In fact, if we assume a properly designed WLAN, in very high density environments the indoor service for these carriers could be superior to the Big Two. Ever gone to a ballgame and found yourself unable to make a call or use data in a full stadium? That’s a common experience, and Wi-Fi roaming integration solves it. Wi-Fi was designed to meet LAN access needs like this. Why not actually use it that way?

This could make Sprint and T-Mobile attractive again. I don’t imagine the costs would be very significant, since it doesn’t involve building new towers or deploying more of their own hardware, although they would probably need to compensate large public Wi-Fi operators for the use of their networks. That would still allow them to keep their service priced below the Big Two.

Cellular data offload is commonly thought of as a driver for the adoption of Hotspot 2.0. Voice coverage expansion for smaller carriers may be more important.

Channel Planning isn’t Easy for Algorithms

If you’ve ever had to create a manual channel plan where spectrum is scarce, you know how hard it is to get it right. You run out of virgin spectrum, and then you face the difficult choice of channel reuse. Often, what looks acceptable on an architectural plan doesn’t hold up to post-deployment validation. Two AP’s six classrooms apart on the same channel can hear each other at a loud and clear -65 dBm RSSI. Reuse the same channel in the classroom directly above, and the signal disappears below the noise floor. An extra inch of concrete makes all the difference. To get it right, you have to test, change, check, test, change, check, and so on.

A 2.4 GHz channel plan

Given the challenge, it’s no surprise that I’ve never encountered an automatic channel selection algorithm that produced better results. At least, not in high density designs where spectrum is scarce, which is more and more just about everything I design. AP’s directly adjacent to one another end up on the same channel, blasting away at max transmit power.

Speaking of power, I’ve also never encountered an algorithm that satisfactorily handled transmit power control in a high density network. They always turn things up WAY too high. As in, I’m manually taking AP’s that were auto-set to 12-20 dBm down to 4-6 dBm to shrink their cells away from the AP’s that share their channel. That can mean a 10x reduction in power! And even when power levels are auto-set to an acceptable level, I’ve yet to meet an algorithm that proportionally adjusts an AP’s receive sensitivity to accommodate the smaller cell.
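For a sense of scale, here’s the dBm-to-milliwatt conversion behind that claim. The 16 dBm and 6 dBm values are just one illustrative pair from the ranges above:

```python
def dbm_to_mw(dbm: float) -> float:
    """Convert transmit power from dBm to milliwatts."""
    return 10 ** (dbm / 10)

# One illustrative pair from the ranges above: auto-set 16 dBm vs. manual 6 dBm
print(f"16 dBm = {dbm_to_mw(16):.1f} mW")   # ~39.8 mW
print(f" 6 dBm = {dbm_to_mw(6):.1f} mW")    # ~4.0 mW -- a 10 dB cut is a 10x power reduction
```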

Another thing algorithms don’t do well is handle DFS channels. I use DFS channels in many high density designs, but there are always some clients that don’t support them. The best thing to do in that case is to evenly distribute DFS channels throughout the WLAN, and only use them where AP density would otherwise cause non-DFS channel overlap. In those environments I like to alternate non-DFS channels with DFS channels so that clients without DFS support are still within range of a 5 GHz radio they can use. My experience with channel selection algorithms has been that a group of adjacent AP’s may all be set to a DFS channel, creating a 5 GHz dead zone for clients that don’t support DFS channels.
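Here’s a minimal sketch of that alternation idea. The AP names and the split into channel pools are illustrative, not output from any vendor’s planning tool:

```python
# A minimal sketch of alternating non-DFS and DFS channels along a row of
# adjacent APs, so DFS-incapable clients are never far from a usable radio.

NON_DFS = [36, 40, 44, 48, 149, 153, 157, 161]   # US 5 GHz channels without radar-detection rules
DFS     = [52, 56, 60, 64, 100, 104, 108, 112]   # channels subject to DFS

def alternate_channels(ap_names):
    """Interleave non-DFS and DFS channels across a list of adjacent APs."""
    plan = {}
    for i, ap in enumerate(ap_names):
        pool = NON_DFS if i % 2 == 0 else DFS
        plan[ap] = pool[(i // 2) % len(pool)]    # simple reuse within each pool
    return plan

print(alternate_channels([f"AP-{n}" for n in range(1, 9)]))
# -> every other AP stays on a non-DFS channel, so clients without DFS support
#    always have a nearby 5 GHz cell, while DFS channels relieve reuse pressure
```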

What gives?

This is all too bad, because auto channel/power features would be ideal: they dynamically adjust to changes in the RF environment. A neighbor puts up a new AP on one of your channels and, without intervention, the algorithm moves your AP to clean spectrum elsewhere. In urban environments this is a highly desirable feature, because there is so much RF in your environment that is out of your control.
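The appeal fits in a few lines. This is purely illustrative logic, not any vendor’s RRM implementation: when the environment changes, re-pick the channel whose loudest neighbor is weakest.

```python
# Illustrative only: a naive re-selection pass a controller might run
# whenever the scanned RF environment changes.

def pick_cleanest_channel(neighbor_rssi_by_channel, candidates):
    """Return the candidate channel whose loudest neighbor (RSSI in dBm) is weakest."""
    def worst_neighbor(ch):
        return max(neighbor_rssi_by_channel.get(ch, [-100]))   # -100 dBm ~= empty channel
    return min(candidates, key=worst_neighbor)

# A neighbor lights up on channel 36 at -60 dBm; the AP quietly moves off it.
scan = {36: [-60], 40: [-90], 44: [-85]}
print(pick_cleanest_channel(scan, [36, 40, 44]))   # -> 40
```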

Every WLAN vendor offers automatic channel/power selection. They all ticked that box a long time ago. But who’s got an algorithm that actually works?