The State of Guest Wi-Fi Security

encryption_lock

Most guest Wi-Fi networks today are open SSIDs with no encryption, fronted by a captive portal that requires users to click through some terms and conditions. It would be nice to secure these networks the same way we do with internal SSIDs, with mutual authentication of the client and network and strong layer 2 encryption, but that has proven too difficult to accomplish without a high degree of friction. You could make users suffer through a lengthy and confusing onboarding process, but imagine doing that at every location where there is guest Wi-Fi. Not good. I agree with Keith Parsons’ take: Guest Wi-Fi should be fast, free, and easy. Security should be too.

How can we make this better? The Wi-Fi Alliance is certifying devices for a new security protocol called Opportunistic Wireless Encryption (OWE). Its certification is called Wi-Fi Enhanced Open, but I’ll refer to it as OWE for the purposes of this blog. OWE adds encryption to open WLANs with no client authentication, but it does not provide server authentication, which leaves users vulnerable to man-in-the-middle (MitM) attacks. The authors of the RFC understood this, and wrote that “the presentation of the available SSID to users should not include special security symbols such as a ‘lock icon.’” Aruba Networks has already announced support for OWE, and I hope other vendors follow suit.

Unfortunately, the Wi-Fi Alliance did not make OWE support mandatory in WPA3; it’s a separate, optional certification. Perhaps they will right this wrong by requiring OWE support in the Wi-Fi 6 certification, which could require WPA3 support just as 802.11n required WPA2 support. Why not tack OWE onto Wi-Fi 6 as well?

Secure Guest Wi-Fi with Hotspot 2.0/Passpoint

I once believed that Hotspot 2.0/Passpoint (HS2.0) was the future of secure guest Wi-Fi, because it allowed for anonymous authentication to a WPA2-Enterprise network. The problem is that users are still required to go through a high-friction onboarding process on every anonymous HS2.0 WLAN they wish to use. That means dealing with captive portals, terms and conditions, installing configuration profiles, etc.

HS2.0 does allow for automatic authentication with user creds from other identity providers. That would allow a user to log in with pre-installed creds from their cellular carrier, Facebook, Amazon, Google, Apple, etc.

Telcos are the best choice here, as their creds are already installed on mobile phones to authenticate with their cellular networks. However, telcos are unlikely to open their authentication service to WLAN operators, for a couple of reasons:

  • They want to be paid for providing this service, but SMBs and many large enterprises don’t want to pay to increase the security of their guest networks.
  • It gives an implied endorsement of the security, quality, and reliability of the WLAN, which the telco knows nothing about.

That’s why you see telcos integrating with Boingo, for example, but not smaller players.

But what if there were an HS2.0 open roaming consortium that federated authentication from any identity provider that wanted to join? Something like eduroam for anyone.

The biggest problem is that WLAN authentication in such a scenario tells you nothing about the identity or security of… the WLAN. Users authenticate with their identity provider’s RADIUS servers, and the result is strong encryption in the air, but no guarantee of security on the wired network. Because the authentication is abstracted away from the network they are using, users get no information about the identity of the wired LAN their bits are traversing. HS2.0 provides no identity verification of the network that users are actually on.

This is a smaller problem in eduroam, where most WLANs are run by higher education institutions that agree to operate their networks a certain way. There is some homogeneity there, and users can expect similar security and terms of use between member networks.

An open roaming consortium would allow users to authenticate to a university’s WLAN and a dingy laundromat’s WLAN as if there was no difference. In fact, roaming between those networks would happen automatically without any user interaction. That’s an acceptable risk when all the networks in the consortium are similar (eduroam), but it isn’t when nothing can be assumed about the quality and security of member networks in an open roaming consortium.

Is it reasonable to assume an end-user wants to connect to any WLAN that supports their HS2.0 creds? My answer to that is a definite “no.” One benefit of the non-HS2.0 model is that a user must express an intent to connect to a new WLAN, which gives them the ability to decide if it is trustworthy or not. HS2.0 circumvents this process, and if it becomes more open and widespread, users may end up connecting to networks they don’t trust.

Secure Guest Wi-Fi with an On-Premises Solution

There are several on-premises BYOD or secure guest Wi-Fi (SGW) onboarding solutions. They don’t solve the high-friction onboarding problem mentioned previously; they compound it, because the credentials they issue cannot be used between networks. Users must wrestle with a high-friction onboarding process on every SGW network they want to use.

The fundamental problem with Hotspot 2.0 and on-premises solutions is that they require client credentials. Authenticating users is not a requirement for SGW in my opinion, and I imagine that’s a common view. It creates unnecessary complexity for users and administrative overhead for WLAN operators. We need a solution for anonymous SGW.

An HTTPS-like Solution

For secure guest Wi-Fi, a security model similar to HTTPS would be great. Client identity is not important, but the WLAN’s identity should be verified, not just the RADIUS server’s. Strong encryption must be used, wireless network access must be resistant to MitM attacks, and users should only connect to an SGW network when they have expressed the intent to do so.

Additionally, all of the necessary configuration and complexity to accomplish this should be handled by the WLAN operator. For the end-user, it should “just work.”

Take the example of HTTPS: A web admin requests and is issued a DNS-validated TLS certificate signed by a public certificate authority. She then installs the cert on her web server, configures it for strong encryption, and adds an HTTP to HTTPS 301 redirect. Now visitors to the website are able to verify the website’s identity and connect to it with strong encryption, and they had to do nothing to get those security benefits except run a modern web browser. SGW should be just as easy for end users.
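As a rough sketch of that web-side recipe in nginx terms (the server name and certificate paths here are placeholders, and the exact directives will vary with your setup):

server {
    listen 80;
    server_name www.example.com;
    # redirect all plain HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
    # modern protocol versions only
    ssl_protocols TLSv1.2;
}

All of that work lives with the web admin; the visitor just types in the URL.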

OWE gets us halfway there, but crucially, does not address the threat of MitM attacks. We need a WLAN-centric public key infrastructure (PKI) for that, and that’s the rub. Suddenly there’s a lot of administrative overhead to make this work. Perhaps it would look something like this:

An “Open RADIUS Certificate Authority,” or ORCA, would only issue certs to validated network operators, and those certs could only be used with specific SSIDs.

ORCA’s root cert would have to be preinstalled and trusted by client devices for EAP authentication.

Wi-Fi clients would connect to an ORCA-enrolled SGW SSID and authenticate anonymously, then validate the ORCA-signed cert presented by the RADIUS server. The client verifies that the cert has not been revoked and that it is connecting to an SSID the cert is permitted to be used with. The session is encrypted and the WLAN’s identity is verified. Clients only connect to ORCA-enrolled WLANs when they intend to, by clicking/tapping on the SSID in their Wi-Fi menu/settings.

All the end user has to do is tap/click on the SGW SSID to connect to it. Everything else is handled by the client device, the WLAN, and ORCA.

Ta da, we now have low-friction SGW, but for all this work, what have we really gained, today, in 2018?

If you run a packet capture on an open guest network today, you’ll see DNS queries, a whole lot of TLS sessions, and not much else. Yes, SGW would add another layer of security on top of this, but at what cost? Making ORCA work is no small task, if it is even achievable in the first place.

Conclusions

OWE gives us layer 2 encryption, so that passive sniffing doesn’t reveal those DNS queries anymore. While OWE doesn’t address MitM rogue AP attacks, coupling it with 802.11w protected management frames, which is required for Wi-Fi Enhanced Open certification, adds resistance to malicious deauth attacks.

The work necessary to make my SGW scheme function doesn’t balance with the small gain in security. It’s better to take a perimeterless networking approach (e.g. BeyondCorp), only deploy hardened applications, and assume the networks your users use will not be trustworthy. If your applications don’t expose their data to network-level interception or abuse, then have at it. How can an end-user ever truly know whether a network is trustworthy anyway?

We can add a bit more security through OWE to help obscure the small amount of guest network traffic that remains unencrypted, and 802.11w protected management frames to prevent some rogue AP attacks. That’s going to have to be good enough.

Roaming Analysis using only a Mac and Wireshark

There are many ways to examine the roaming performance of a Wi-Fi client. Perhaps the gold standard is to follow the client with a laptop running OmniPeek and several Wi-Fi adapters, all capturing frames on different channels. I’m also impressed with 7signal’s recent update to Mobile Eye, which now logs roaming data as well. But what if you don’t have that, or want to do something quickly with a Mac without switching to Windows and hooking up your Wi-Fi adapter array?

Using a Mac laptop to capture frames on a single channel with Airtool, you can still get valuable information about the roaming performance of a Wi-Fi client with a few Wireshark display filters and some I/O Graphs magic.

The process is simple. Discover the channel the test client is using, and start an over-the-air capture on that channel. Take your Mac and the test client and move out of the current AP’s cell so the client roams away, then come back so that the client roams back. Repeat as necessary until you have captured both a roam-away and a roam-back.

roam_capture
Let’s roam

Now it’s time to look at the captured frames. First, let’s build a display filter to show only the frames to/from the test client, as well as all of the AP’s beacon frames. We’re including the AP’s beacon frames so that we can see the changes in RSSI as the client moves away from and then back towards the AP.

wlan.addr == aa:bb:cc:dd:ee:ff || ( wlan.ta contains 11:22:33:44:55 && wlan.fc.type_subtype == 8)

aa:bb:cc:dd:ee:ff is the MAC of the test client. 11:22:33:44:55 is the first five octets of the AP’s BSSID. By matching on the first five octets of the AP’s BSSID rather than the exact BSSID, we preserve the beacon frames from all of the AP’s BSSIDs, which gives us more data points for calculating the AP’s RSSI.

Once the filter is applied, export only the displayed packets to a new file that we’ll generate the graphs from. Open the new file, and now we can configure the I/O Graphs. These are some of the display filters I use:

7925_roam_graph.png
The roam-away is on the left, and the roam-back is on the right.

AVG Tx Data Rate needs to be set with the test client MAC address, and AP RSSI needs to have the first five octets of the AP’s BSSID.
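As a rough sketch, I/O Graph rows along these lines will reproduce something similar, using the placeholder MAC addresses from above (the Y-axis field names assume a reasonably recent Wireshark with the 802.11 radio information dissector; older captures may need radiotap.dbm_antsignal instead):

Client data frames:  wlan.addr == aa:bb:cc:dd:ee:ff && wlan.fc.type == 2
Client retries:      wlan.addr == aa:bb:cc:dd:ee:ff && wlan.fc.retry == 1
AVG Tx Data Rate:    wlan.ta == aa:bb:cc:dd:ee:ff   (Y axis: AVG of wlan_radio.data_rate)
AP RSSI:             wlan.ta contains 11:22:33:44:55 && wlan.fc.type_subtype == 8   (Y axis: AVG of wlan_radio.signal_dbm)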

By zooming into the beginning of the graph, we can observe the client’s data frames, retries, Tx data rate, and the RSSI at which it roamed away. A benefit of dBm being measured in values less than zero is that it is separated from the rest of the data on the graph, so we have layer 1 data below 0, and layer 2 data above.

7925_roam_follow_roam

This Cisco 7925G phone roams away before the AP’s RSSI drops to -70 dBm, and before retries start to increase. We see similar good behavior when it roams back, below.

7925_roam_follow_back

Let’s take a look at a Wi-Fi client that roams poorly. Here’s a client-that-shall-remain-nameless roaming away from an AP. You can see retries spiking and its data rate plummeting well before it roams away. The AP’s RSSI drops into the -80s for most of a minute before it decides to roam!

bad_roam-filtered
This graph includes the test client’s average Tx data rate.

Of course, this approach has some limitations. Before you decide a client like the one above is a sticky client, you must know that it was in range of a louder AP, operating on a channel it supports, when it started having trouble. Otherwise it’s doing exactly what it should be doing: trying to maintain the only association it can.

You know when the client decided to roam, but you don’t know how long it took.

As you move away from the AP, you might see the AP’s RSSI spike to 0. That happens when your laptop’s adapter is unable to demodulate beacon frames from the AP due to poor SNR.

Also, the AP RSSI is measured by a Mac laptop that is following the test client. Unless the test client is the same model of Mac laptop, it will probably hear the AP differently, most likely with less sensitivity. My MacBook Pro is a 3×3:3 client, and the two test clients I looked at for this blog are both 1×1, so it’s reasonable to assume the Mac benefits from a significant increase in RSSI from MRC. Taking that into consideration, the poor roaming from the client-that-shall-remain-nameless is probably even worse than it looks.

Splunking Wi-Fi DFS Events

splunk-logo

One aspect of wireless networking that I’ve always struggled with is visibility into DFS events. Usually I catch them by chance, by noticing two nearby APs on a site map using the same non-DFS channel, or maybe by casually looking through logs, but I’ve never felt like I had the reporting and alerting that should be in place for DFS events, because they can be very disruptive. An AP will abruptly change the channel it is operating on, and if it switches back, it may observe a “quiet period” of 60 seconds in which it does not transmit any data. Not good.

Enter Splunk.

Splunk is a powerful log analysis tool that you can think of as “Google for the data center.” It takes log data from almost any source and makes it as searchable as Google has made the web. For wireless network engineers, that means you can quickly and easily search syslog and SNMP data, build reports, and create alerts. Splunk Light is free and will process up to 5 GB of data a day, which should be plenty for most WLANs. It also runs easily on macOS if you just want to demo it locally.

Using Splunk, I very quickly created this dashboard of real DFS data from SNMP traps coming from a Cisco WLC. It’s still a little rough around the edges (I need to figure out how to clean up those AP names and channels), but it already shows me a lot of valuable data.

splunk-dfs-report
Yes, DFS is a problem at this site.

I can easily create email alerts too, so that an email is triggered when a DFS event occurs, or when, say, 10 DFS events occur within 30 minutes.

How To

I installed Splunk on a Mac, then set up the built-in snmptrapd to listen for incoming traps and log them to a file. For snmptrapd to interpret the SNMP traps from a Cisco WLC, download the Cisco MIBs and copy them to /usr/share/snmp/mibs/. Then you can start snmptrapd.

Here’s the CLI one-liner to do that:

sudo snmptrapd -Lf /var/log/snmp-traps --disableAuthorization=yes -m +ALL

Next, configure the WLC to send SNMP traps to the Splunk box by adding its IP address under Management -> SNMP -> Trap Receivers. While you’re there, go to Trap Controls and turn on everything you want to analyze.

wlc-snmp

Even though DFS events are only reported via SNMP traps, it’s still a good idea to send syslog messages to Splunk too, so do that under Management -> Logs -> Config. Set the Syslog Level to “Informational” to get a lot of good data; “Debugging” is probably way too much. The Syslog Facility isn’t important.

wlc-syslog

Monitor the file snmptrapd is writing traps to, to make sure it is working. Run this command on the Mac and you should see traps streaming in. If not, you have some troubleshooting to do.

tail -f /var/log/snmp-traps

Now add the file to Splunk under Data -> Data inputs -> Files & directories, and you should be able to see the traps in searches.

Have a look at Splunk’s documentation on SNMP data for more setup help. Setting up syslog is easier. Under Data -> Data inputs -> UDP add UDP port 514 with the Source type “syslog.”

Once the data is coming into Splunk you can start searching it and creating fields. Search “RADIO_RC_DFS” (with quotes) to see all the DFS traps. From that search click “Extract new fields” and select the tab delimiter to parse the data. Give the AP name field a label, and then you can create visualizations of DFS events by AP name. Any search can also be used to trigger an alert, such as an email.
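As a sketch of what the searches behind a dashboard panel or alert might look like (the source path matches the snmptrapd log file above, and ap_name is whatever label you gave the extracted AP name field):

source="/var/log/snmp-traps" "RADIO_RC_DFS" | timechart count by ap_name
source="/var/log/snmp-traps" "RADIO_RC_DFS" | stats count | where count >= 10

The first drives a chart of DFS events per AP over time; the second, run over a 30-minute window, works as the trigger condition for the “10 events in 30 minutes” email alert.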

Cisco has published a WLC SNMP Trap Guide as well as a WLC syslog Guide that is helpful when working with this data. Find the messages you are looking for in those guides, then search for them in Splunk.

From there it’s all up to your own creativity. DFS events are just scratching the surface of Splunk’s potential. You can look at authentication events, monitor RRM, and there might be some interesting roaming analysis that can be done with this data as well. I’m sure there are some bright engineers out there who have taken this a lot farther. Please share your work!

Use Let’s Encrypt Certificates with FreeRADIUS

lets_encrypt

Let’s Encrypt is a certificate authority that generates TLS certificates automatically, and for free. It’s been great for web server administrators because it allows them to automate the process of requesting, receiving, installing, and renewing TLS certificates, taking the administrative overhead out of setting up a secure website. And did I mention it’s free and supported by all the major web browsers now?

Getting all of that to work with a RADIUS server is challenging, however, mostly because of the way Let’s Encrypt works. The Let’s Encrypt client runs on a web server with a public domain name. The client requests a TLS cert from Let’s Encrypt, and before Let’s Encrypt issues the cert, it verifies that the client controls the domain name it is requesting a cert for and that the client can put some hidden files on the server’s website. Do you see the problem? Unless you run a public-facing web server on your RADIUS server (unlikely), Let’s Encrypt will not issue certs to your server. It needs a web server it can interact with in order to validate the domain name of the client’s request.

Why use a certificate from a public CA like Let’s Encrypt for 802.1X/PEAP authentication? While a private CA offers more security, a public CA has the advantage of having a pre-installed root certificate on virtually all RADIUS supplicants, including BYOD clients that are unmanaged. If you don’t have an MDM or BYOD onboarding solution, you can’t get your private root cert onto BYOD clients very easily.

Unmanaged clients are a security risk, however, because the end-user can easily override security warnings that occur when connecting to an evil twin network with a bogus cert. A good MDM solution will allow network admins to configure BYOD clients properly so that TLS failures cannot be bypassed.

A few considerations before you get too excited:

  • Again, a better, more secure solution is to use a private CA and distribute the RADIUS server cert to clients using an MDM solution and/or BYOD onboarding solution.
  • Let’s Encrypt certs are only good for three months at a time, and some supplicants will prompt users to accept the new certificate when it is renewed.
  • Build in some error handling, logging, and notification. E.g. an email from the web server when the cert renewal routine runs, including its output, and an email from the RADIUS server when it copies the new certs and reloads FreeRADIUS.
  • It works as root, but there’s probably a way to accomplish this without using root. Do it that way.
  • You can accomplish the same thing with Windows servers and Powershell.
  • You broke it, not me.

To get this working, we need a public web server with the same domain name as you’d use in your RADIUS server’s cert common name. This means internal domain names with a .local TLD won’t work.

I set up two Ubuntu servers: one running the nginx web server with a public IP, and another on my local network running FreeRADIUS. The web server will run the Let’s Encrypt client and create and renew the certs. The RADIUS server will copy those certs from the web server and use them for PEAP authentication. Once set up, the process of renewing and installing the certs on the RADIUS server happens automatically, just like it would on a web server.

First, a public DNS A record needs to be set up for the domain name that will be used as the TLS cert common name (we’ll use radius1.example.com), pointed at the IP address of the web server.

Once that is done, you can install and run the Let’s Encrypt client on the web server. It works with Apache too, but if you prefer nginx like me, follow these directions to get it set up with Ubuntu 14.04 or Ubuntu 16.04. Don’t skip over the part about using cron to run the renewal routine.
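If you’re using the certbot client, the initial request looks something like this (a sketch only; the linked directions cover the details, including the cron-driven renewal):

sudo certbot certonly --nginx -d radius1.example.com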

Now that we have the certs on the web server, we’ll turn our attention to the RADIUS server. The first thing we need to do is set up SSH public key authentication between the two servers. I used the root account on both servers to do this, so that I would have permissions everywhere I needed them. With public key authentication in place, securely copying the certs in the future can happen automatically, without getting stopped by a password request. Here are instructions to get that working.
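The short version of that setup, run as root on the RADIUS server (the key type is just an example):

root@freeradius:~# ssh-keygen -t ed25519
root@freeradius:~# ssh-copy-id root@radius1.example.com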

Now we’ll start configuring FreeRADIUS on the RADIUS server. I’m assuming you already have a working FreeRADIUS server. I’m using FreeRADIUS 3, and you should be too. I like to use a separate directory for the Let’s Encrypt certs.

root@freeradius:~# mkdir /etc/freeradius/certs/letsencrypt/

Now let’s try copying the certs from the web server to this directory on the RADIUS server. If public key authentication is working, you should not be prompted for a password.

root@freeradius:~# scp root@radius1.example.com:/etc/letsencrypt/live/radius1.example.com/fullchain.pem /etc/freeradius/certs/letsencrypt/
root@freeradius:~# scp root@radius1.example.com:/etc/letsencrypt/live/radius1.example.com/privkey.pem /etc/freeradius/certs/letsencrypt/

Did it work? If so, you should see the certs in the new folder we created.

root@freeradius:~# ls /etc/freeradius/certs/letsencrypt/
fullchain.pem  privkey.pem

Now we need to configure FreeRADIUS to use the Let’s Encrypt certs for PEAP authentication. I have a previous blog about using different CAs for PEAP and EAP-TLS on FreeRADIUS that should come in handy here. If you are using EAP-TLS too, be sure not to change that CA from your private CA! All we need to do now is modify /etc/freeradius/mods-enabled/eap with our new certs in the TLS section used for PEAP.

root@freeradius:~# nano /etc/freeradius/mods-enabled/eap

tls-config tls-peap should be changed to:

…
tls-config tls-peap {
 private_key_file = ${certdir}/letsencrypt/privkey.pem
 certificate_file = ${certdir}/letsencrypt/fullchain.pem
…

If you aren’t using multiple TLS configurations, this section is named tls-config tls-common; leave the name alone and just update the certificate paths.

Reload FreeRADIUS for the change to take effect.

root@freeradius:~# service freeradius reload
 * Checking FreeRADIUS daemon configuration...               [ OK ] 
 * FreeRADIUS daemon is running
 * Reloading FreeRADIUS daemon freeradius                    [ OK ]

Now when connecting to the WLAN that is configured to use this RADIUS server for 802.1X/PEAP authentication, the client is presented with a valid Let’s Encrypt server certificate.

mac_cert_challenge

OK, we have a working FreeRADIUS server using Let’s Encrypt certs for 802.1X/PEAP authentication. Now let’s automate the process of getting renewed certs from the web server to the RADIUS server. We’ll use scp and cron to get this done.

On the RADIUS server, add these commands to root’s crontab, with the appropriate domain names.

root@freeradius:~# crontab -e
# m h dom mon dow command
0 3 * * 1 scp root@radius1.example.com:/etc/letsencrypt/live/radius1.example.com/fullchain.pem /etc/freeradius/certs/letsencrypt/
0 3 * * 1 scp root@radius1.example.com:/etc/letsencrypt/live/radius1.example.com/privkey.pem /etc/freeradius/certs/letsencrypt/
5 3 * * 1 service freeradius reload

At 3:00 AM every Monday, cron will copy the TLS certs from the web server, then reload FreeRADIUS at 3:05 AM to put them into production. Now the Let’s Encrypt certs are automatically installed on the RADIUS server a few minutes after they are renewed on the web server. The certs are good for three months at a time and renewable one month in advance, so you’ll get renewed certs automatically installed every two months.

Presto! You now have Let’s Encrypt certs automatically renewed and installed on your RADIUS server. While a private CA is a better solution for 802.1X authentication, this isn’t bad for a $0 software stack.

Clear To Send Podcast Episode 62: K12 Wi-Fi Deployments

podcast_logo

I recently had the pleasure of joining Rowell Dionicio on the Clear to Send Podcast to talk about Wi-Fi in K12 schools. Clear To Send is a great podcast about enterprise wireless networking and a great way to stay current with the Wi-Fi community.

We talked about K12 requirements, challenges, funding, my design process, security, and everyone’s favorite K12 subject, 1 AP per classroom!

After listening to the podcast, I thought about some other K12 Wi-Fi considerations that I didn’t bring up on the air.

  • K12 often has requirements for mDNS applications like Apple AirPlay for Apple TV or Google Cast for Chromecast. This is a challenge in an enterprise network because mDNS does not cross layer 2 boundaries. It’s important to consider that when designing a new WLAN and selecting the vendor. Many WLAN vendors do have features that can assist with relaying mDNS traffic between VLANs. Be careful to limit this traffic to only the VLANs where it is required.
  • Excessive multicast traffic can be a burden on channel utilization when it is not controlled. Many WLAN vendors have features that intelligently filter broadcast/multicast traffic, instead of always forwarding it out the AP radio interfaces at the lowest data rate. If you are dealing with mDNS or large subnets (common in K12) it’s worthwhile to understand how the WLAN can manage broadcast/multicast traffic.
  • MSPs are a great way to get well-designed enterprise Wi-Fi into small to medium-sized schools that don’t have the internal resources to handle it themselves. MSPs can be hired to support and operate the WLAN after installing it, which gives them an incentive that VARs who just sell the hardware might not have: to design the WLAN properly. E-Rate funding is now available to reimburse schools for managed services contracts with MSPs.
  • eduroam is available for K12 schools, not just higher education. Check it out!
  • It’s hard to listen to the sound of your own voice.

I really enjoyed talking Wi-Fi with Rowell and I’d love to return to the podcast in the future. Maybe we can talk about healthcare Wi-Fi next? Thanks Rowell!

Have a listen here: CTS 062: K12 Wi-Fi Deployments – Clear To Send

802.11ac Encryption Upgrade

encryption

The security features provided by the IEEE 802.11 standard haven’t changed much since the 802.11i amendment was ratified in 2004, better known by its Wi-Fi Alliance certification name, WPA2. 802.11w protected management frames were introduced in 2009, but it is only recently that Wi-Fi chipsets for client devices have included support for them. WPA2 introduced the robust CCMP encryption protocol as a replacement for the compromised WEP-based encryption schemes of the past. CCMP utilizes stronger 128-bit AES encryption keys. As a general rule of thumb, if you aren’t using CCMP on a Wi-Fi network designed for security, you’re doing it wrong. It’s been out for a long time, and older protocols have well-established weaknesses.

11ac

However, there are some new encryption changes in the 802.11ac amendment which have mostly flown under the radar. Besides 256-QAM, wider channels, and MU-MIMO, 802.11ac now includes support for 256-bit AES keys and the GCMP encryption protocol. Galois/Counter Mode Protocol is a more efficient, performance-friendly encryption protocol than CCMP.

A few interesting nuggets from section 11.4 of the 802.11ac amendment:

The AES algorithm is defined in FIPS PUB 197-2001. All AES processing used within CCMP uses AES with either a 128-bit key (CCMP-128) or a 256-bit key (CCMP-256).

And…

CCMP-128 processing expands the original MPDU size by 16 octets, 8 octets for the CCMP Header field and 8 octets for the MIC field. CCMP-256 processing expands the original MPDU size by 24 octets, 8 octets for the CCMP Header field, and 16 octets for the MIC field.

By the way, you can download the 802.11ac amendment or the entire 802.11-2012 standard from the IEEE here for free. For more on these security changes read sections 8.4.2.27 and 11.4 of the 802.11ac amendment.

It seems odd that these changes were included in the 802.11ac amendment, and not in a separate security-focused amendment like 802.11w and 802.11i. Nothing wrong with it, just unexpected. I’m curious to see whether the 802.11ax amendment includes security changes as well.

Why the addition of 256-bit AES keys? It could have something to do with a few chinks in the armor of 128-bit AES keys. The current attacks appear to be impractical, but future attacks that take advantage of quantum computing may put 128-bit AES keys at risk. NIST thinks that larger key sizes are needed to defend symmetric AES keys like those used in WPA2 against quantum computer attacks, which they say will be operational within the next 20 years. I’ll take their word for it.

Because the amendment only specifies CCMP-128 as mandatory for RSN compliance, it’s very unlikely that we’ll see CCMP-256/GCMP-256 in use anytime soon. Further, enabling 256-bit cipher suites effectively disables support for all non-802.11ac clients, as well as 802.11ac clients that only support the mandatory cipher suites (most of them?). That’s because CCMP-256 and GCMP-256 pairwise keys are only compatible with 256-bit group keys, breaking backwards compatibility with legacy clients. There are also a lot of 802.11n clients out there that aren’t going away anytime soon, so actually deploying CCMP-256/GCMP-256 will require a separate CCMP-256/GCMP-256-only SSID. Excited yet?

Further, I can’t find any documentation that suggests that infrastructure vendors have implemented CCMP-256/GCMP-256 at all, just a few slide decks here and there with an overview of the changes. These cipher suites appear to be optional, so I wonder if any VHT clients or APs actually support them today, and when they will in the future. The Linux Wi-Fi configuration API cfg80211 and driver framework mac80211 have added software support for them. That’s about all the implementation I have found. Perhaps PCI compliance or Wi-Fi Alliance certification will eventually force the issue, or perhaps it will go the way of 802.11n Tx beamforming and never be implemented. There are a lot of obstacles to overcome before 256-bit keys become practical.

However, a VHT client can negotiate a GCMP-128 RSNA within a BSS that uses a backwards-compatible CCMP-128 group key, and the 802.11 standard does support multiple pairwise cipher suites within a BSS (remember TSNs?). That allows the GCMP-128 pairwise cipher suite to be used alongside everyday CCMP-128 pairwise and group keys on real, production networks.

To tell if a BSS is using one of the new cipher suites in a packet capture, look at a beacon frame’s RSN information element. The cipher suite selector OUI is always 00-0F-AC for the CCMP/GCMP encryption protocols; it’s the cipher suite type that distinguishes between the specific cipher suites. For example, 00-0F-AC:4 is the default CCMP-128, 00-0F-AC:9 indicates GCMP-256, and 00-0F-AC:10 indicates CCMP-256. Group keys for a BSS with protected management frames have their own suite type numbers. Look for multiple pairwise cipher suites to find support for the new stuff. Here’s the table of the new cipher suites. I’m on the lookout for 00-0F-AC:8 (GCMP-128), but I’ve yet to find a beacon frame with it advertised.

Table 8-99—Cipher suite selectors

OUI       Suite type  Meaning
00-0F-AC  4           CCMP-128 – default pairwise cipher suite and default group cipher suite for data frames in an RSNA
00-0F-AC  6           BIP-CMAC-128 – default group management cipher suite in an RSNA with management frame protection enabled
00-0F-AC  8           GCMP-128 – default for a DMG STA
00-0F-AC  9           GCMP-256
00-0F-AC  10          CCMP-256
00-0F-AC  11          BIP-GMAC-128
00-0F-AC  12          BIP-GMAC-256
00-0F-AC  13          BIP-CMAC-256

It’s interesting that GCMP-128 is the default for a DMG STA, which is a directional multi-gigabit station defined in the 802.11ad amendment for operation in the 60 GHz band.

The standard limits the mixing of cipher suites so that the key sizes of the pairwise and group keys must match, and GCMP group keys can only be used with GCMP pairwise keys.
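If you want to go hunting for these in your own captures, a Wireshark display filter on the RSN information element does the trick. Assuming current Wireshark field names, something like this flags beacons advertising GCMP-128, GCMP-256, or CCMP-256 pairwise suites:

wlan.fc.type_subtype == 8 && (wlan.rsn.pcs.type == 8 || wlan.rsn.pcs.type == 9 || wlan.rsn.pcs.type == 10)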

 

 

Hardening TLS for WLAN 802.1X Authentication

encryption_lock

This post outlines some configuration changes that can enhance the security of the 802.1X EAP methods PEAP and EAP-TTLS, which use a temporary layer 2 TLS tunnel to protect a less secure inner authentication method. While EAP-TLS doesn’t create a full TLS tunnel, it does use a TLS handshake to provide keying material for the four-way handshake, so it needs strong TLS too.

Standard 802.1X security best practices should also be implemented, such as using strong passwords, disabling insecure EAP methods, disabling TKIP, proper supplicant configuration, deploying SHA-2 certificates, and anonymous outer usernames. The focus here is the TLS tunnel exclusively.

Not all RADIUS servers can implement all of these suggestions, but some can certainly do more than others. My experience has been with Microsoft NPS and FreeRADIUS servers so that is what I’ll refer to when discussing specific implementations. I welcome input from Aruba ClearPass and Cisco ISE administrators on configuring those servers as well.

Why go through all the trouble? It turns out the same encryption techniques that web clients and servers use to protect data in HTTPS sessions are also used when EAP methods rely on a TLS encrypted session. Ask any web server admin, and they’ll tell you that not all HTTPS is created equal. The same vulnerabilities that web server admins deal with exist in TLS-assisted EAP methods used on the WLAN as well. There is a lot to be learned from the TLS best practices recommended for web server admins.

At the end of the day, the TLS session is all that stands between user credentials and would-be hackers. It needs careful consideration to verify that it is meeting current security standards.

Here’s what to do.

Disable SSL

We’re talking specifically about SSLv2 and SSLv3 here, not TLS, even though the whole family is often referred to simply as “SSL.” SSLv2 and SSLv3 were cracked long ago.

Consider TLS Versions

TLS 1.2 is the most secure TLS version available, so why not disable TLS 1.0 and TLS 1.1? Right now, supplicant support for TLS 1.1 and TLS 1.2 is far from universal, and TLS 1.0 with strong ciphers is still considered secure. Keep TLS 1.0 enabled for now.

Disable Weak Cipher Suites

Cipher suites are the specific encryption algorithms that are used in a TLS session. Supplicants and servers support a broad range of them, and some of them are better than others. Many RADIUS servers have older insecure cipher suites enabled by default. This allows old supplicants that do not support newer cipher suites to still function. Unless you have older supplicants, you can disable many of these cipher suites to enhance 802.1X security.

A current listing of strong cipher suites can be found at Cipherli.st. While the website focuses on web server configuration, TLS is TLS.

Be aware that EAP-TLS requires TLS_RSA_WITH_3DES_EDE_CBC_SHA.

Microsoft NPS

Microsoft NPS relies on Schannel to provide encryption for TLS-tunneled EAP methods. In order to control the protocols Schannel uses, an administrator must alter these registry keys. Note that changing these keys affects all TLS functionality on the server, so if you run IIS or RDS with TLS, these changes will affect those applications as well. Proceed with caution. The registry keys can be found in:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\]
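As an example of the pattern, disabling SSLv3 for inbound TLS connections means adding values like these (shown in .reg format; the other protocols follow the same Enabled/DisabledByDefault layout under their own subkeys):

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000
"DisabledByDefault"=dword:00000001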

A full listing of cipher suites supported by Schannel can be found here.

If the prospect of manually editing dozens of registry keys on a Windows Server doesn’t appeal to you, the good people at Nartac Software have developed an application that lets you manage these changes in a user-friendly GUI. IIS Crypto makes all of the registry settings necessary for this, and also includes some handy templates, including Best Practices, PCI, FIPS 140-2, and Defaults.

Here is IIS Crypto displaying the default Schannel configuration of a Windows Server 2012 R2 server. There is a lot not to like here…

iis_crypto_defaults

And here is the Best Practices template. Note the obsolete protocols and cipher suites that are disabled, and that the order in which cipher suites are preferred is updated as well.

iis_crypto_bp

Be aware that manually taking control of the Schannel TLS configuration means you’re in charge of it going forward. If Microsoft updates the default configuration, your manual config may still be in place. Stay up-to-date on new TLS vulnerabilities and periodically review your configuration for needed changes.

FreeRADIUS

FreeRADIUS 3 is the current supported stable release and you should be thinking about upgrading to it if you have not already. SSLv2 and SSLv3 are not supported by FreeRADIUS 3, only TLS 1.0, TLS 1.1, and TLS 1.2.

For FreeRADIUS to require stronger cipher suites, add this to the EAP-TLS configuration in the “eap” configuration file. Alternatively, specify a colon-separated list of specific cipher suites.

cipher_list = "HIGH"
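A colon-separated list would look something like this instead; treat these two suites as illustrative rather than a recommendation, and check Cipherli.st for current guidance:

cipher_list = "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256"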

Also be aware that FreeRADIUS 2.2.6 and 3.0.7 contain a critical bug that prevents successful TLS 1.2 sessions from starting. You should update these servers as soon as possible.

Harden Supplicants Too

Few 802.1X supplicants allow you to alter their TLS configuration. The best thing to do with supplicants is to routinely install system updates and retire clients that are EOL.

Documentation for the TLS capabilities of client supplicants is hard to come by. Microsoft published an update to Windows 7 and above to allow the use of TLS 1.1 and TLS 1.2 in its 802.1X supplicant, though for now it must be configured manually. wpa_supplicant for Linux added TLS 1.2 support in version 2.0 and enabled it by default in version 2.6. TLS 1.2 is the default TLS version used in the supplicants for Windows 10, Mac OS 10.11, iOS 9, and Android 6.0 (Update: It appears that Apple has deferred their decision to default to TLS 1.2 in iOS 9/Mac OS 10.11 until a later release).

Lab it Up

To know definitively what a client supplicant is capable of, run a packet capture on TLS-tunneled EAP authentication and observe the TLS negotiation frames, or TLS handshake, that occur right after 802.11 association and EAP identity request/response frames.

The client will send a “Client Hello” frame, which Wireshark will mark as a TLS protocol frame. This frame includes the TLS version requested by the client along with its supported cipher suites. The TLS version is the highest version the client supports.

tls_client_hello

Next, the RADIUS server will respond with a “Server Hello” frame which specifies the TLS version and cipher suite to be used during the TLS session, and includes the server certificate as well. The server will choose the best cipher suite that both client and server support and the highest TLS version that both support as well.

tls_server_hello

A few more frames are exchanged to set up the TLS session, and then EAP authentication takes place within the encrypted TLS session. It’s these first two frames that are of most concern when documenting client TLS capabilities.
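To jump straight to those two frames in a capture, a display filter along these lines works (recent Wireshark versions use the tls prefix; older ones use ssl.handshake.type instead):

eap && (tls.handshake.type == 1 || tls.handshake.type == 2)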

This is also a useful technique to verify that highly secure TLS encryption is occurring in production.