The Best Wi-Fi Mesh-Networking Kits for Most People: Wirecutter Reviews


I describe the testing process in detail below; if you’re curious, read that section to understand how each kit dealt with the challenges of our test house and its simulated busy network. If you want only the overview, the following stacked graphs show aggregate performance for each kit across all of our test locations and workloads.

We believe that mesh’s purpose isn’t to make Wi-Fi fast somewhere, it’s to make Wi-Fi fast everywhere. So we tested throughput only in the two absolute toughest-to-reach spots in the house: the far bedroom on the top floor, and the spot on the bottom floor farthest from an access point. For single-hop and two-piece setups, that’s the den right below the living room; for multi-hop configurations it’s the downstairs bedroom.

As the chart shows, every single mesh kit we tested trounced our stand-alone router pick, the Netgear R7000P, at long range. At close range, the R7000P is as fast or faster than all of the mesh kits’ access points, but having a slower mesh access point nearby is always better than having a faster router that’s far away. That’s why mesh kits beat the stuffing out of any stand-alone router in a large space like ours.

This is the same stacked throughput graph, this time including only multi-hop results. If you have a long, narrow house, or multiple hard-to-reach floors, you’ll probably want a multi-hop configuration.

In the above graphs, we look at relatively simple maximum throughput from the absolute toughest-to-reach spots in the house. Upstairs, we test in the far upstairs bedroom, at a site 43 feet and four interior walls away from the router. Downstairs we test in the farthest possible site from an access point. If there are access points upstairs in the far bedroom and the kitchen (for three-piece, single-hop setups), that means our test point downstairs is in the den, right below the living room. If there’s only one satellite access point upstairs on the living room TV island (for two-piece kits), that makes our downstairs test point the farthest corner of the downstairs bedroom.

We’re testing using real HTTP downloads of a 1 MB test file here. Testing with real HTTP, just like you use when downloading things at home, means we’re testing things you’ll actually benefit from, rather than just getting a big number to wave around excitedly.
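At its core, a test like this just times a real HTTP download and converts bytes and seconds into megabits per second. Here’s a minimal sketch of that idea; the URL and filename are placeholders for your own LAN test server, not Wirecutter’s actual rig:

```python
import time
import urllib.request

def throughput_mbps(num_bytes: int, seconds: float) -> float:
    """Convert a download size and elapsed time into megabits per second."""
    return (num_bytes * 8) / (seconds * 1_000_000)

def measure_download(url: str) -> float:
    """Time one real HTTP download and return its throughput in Mbps."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    return throughput_mbps(len(data), time.monotonic() - start)

# Placeholder URL -- point this at any machine on your LAN serving a
# 1 MB file (e.g. one started with `python -m http.server`).
# print(measure_download("http://192.168.1.10:8000/test-1mb.bin"))
```

Because the measurement rides on a real HTTP transfer, it reflects the same overhead your downloads see, rather than the synthetic best case an iPerf-style tool reports.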

We also included a multi-hop only version of the same graph. If you have a long or tall house and your main router is stuck all the way at one end of it, you should only consider kits that perform well in multi-hop deployments.

The long version

These illustrations show where the pieces of each mesh kit were placed in our real-world test environment. Illustrations: Kim Ku

These throughput graphs are worthwhile for giving you an idea of your best-case performance when you’re the only one on the network, but they don’t tell the most important story. To tell it, we needed to abandon the single-laptop model and distribute four separate laptops around the house for simultaneous testing, simulating a busy little real-world network; we detailed above how we set up the network and which tests run on which laptop. As a brief recap: laptops upstairs simulate a 4K video stream, a VoIP call, and a large-file download, while a laptop downstairs, in the bedroom or den (depending on mesh configuration), attempts to browse the Web.

The Web browsing test is both the most realistic example of your experience actually using the Wi-Fi, and the “canary in the coal mine” that almost always fails before anything else does. By running it in the area of the house farthest from the networking closet (and as far as possible from an access point) we test the worst-case scenario for a mesh networking kit.

Modern webpages consist of a large number of resources that must all be fetched before the page renders in your browser—the HTML of the page itself, the CSS stylesheets that control how it’s formatted, the JavaScript libraries that control how you interact with it, the images the page has to arrange itself around, and more.
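To make that concrete, here’s a small sketch using Python’s standard-library HTML parser and a made-up page; it tallies the external resources the page references, each of which is a separate fetch your Wi-Fi has to complete:

```python
from html.parser import HTMLParser

class ResourceCounter(HTMLParser):
    """Tally the external resources a page references, by type."""
    def __init__(self):
        super().__init__()
        self.counts = {"css": 0, "script": 0, "img": 0}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "stylesheet":
            self.counts["css"] += 1
        elif tag == "script" and "src" in attrs:
            self.counts["script"] += 1
        elif tag == "img" and "src" in attrs:
            self.counts["img"] += 1

# A toy page standing in for a real one; real pages reference far more.
page = """<html><head>
<link rel="stylesheet" href="a.css"><link rel="stylesheet" href="b.css">
<script src="app.js"></script>
</head><body><img src="x.png"><img src="y.png"><img src="z.png"></body></html>"""

counter = ResourceCounter()
counter.feed(page)
print(counter.counts)  # {'css': 2, 'script': 1, 'img': 3}
```

Run the same idea against a real page and the count balloons into the dozens or hundreds, which is exactly why one slow or dropped fetch can stall the whole render.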

This is a YSlow analysis of my Facebook feed. It loaded more than 150 separate resources after a single click! At least half of those resources can single-handedly prevent the entire page from displaying in your browser until they’re loaded.

As a result, what seems like a simple task—click a link, get a webpage—is actually very complex, and a relatively small number of errors and slowdowns can magnify rapidly into “webpages don’t load and I have to hit refresh.” Our Web browsing test, by more closely simulating what your browser really has to do behind the scenes, exposes those problems more accurately.

This chart shows the mean latency of each workload and site tested during our multiple-client tests. The most important figure here is the largest one: Web browsing latency. A big browsing bar means frustration waiting for slow page loads.

For Web browsing, latency—how long it takes between a request and a response—is more important than raw throughput. By looking at the latency of requests made while all four laptops are busy simultaneously, we can get a good measure of how well the network functions during a busy time, instead of when everybody else is asleep and you have the Wi-Fi all to yourself.
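A latency probe of this kind can be as simple as timing each small request in a loop while the other laptops keep the network busy. The sketch below is a generic harness; the `fetch` callable is a stand-in for whatever small HTTP GET you run against your test server:

```python
import time
from typing import Callable, List

def sample_latencies(fetch: Callable[[], None], n: int = 50) -> List[float]:
    """Time n back-to-back requests; return per-request latency in seconds."""
    samples = []
    for _ in range(n):
        start = time.monotonic()
        fetch()  # e.g. a small HTTP GET for one page element
        samples.append(time.monotonic() - start)
    return samples
```

Collecting individual samples rather than one aggregate number is what lets you see the occasional multi-second stall that an average would hide.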

The results here diverge significantly from the simple, single-device throughput tests: The two-piece Orbi kits move from the top half of the pack to the bottom, for example, and the kits in a multi-hop configuration rocket to the top. What we’re seeing here is the importance of having client devices closer to an access point. Yes, you sacrifice some throughput, but for well-designed kits, the added reliability is well worth it. The Google Wifi and TP-Link Deco M5 are notable exceptions—neither has a particularly robust backhaul connection or manages it particularly well, so the downstairs client benefits very little or not at all from their multi-hop, unlike the Eero, Orbi, and Velop kits.

Stacked mean latency again, but only for the multi-hop kits. These are the only numbers you should consider if you’ve got a long, narrow house, or one with multiple hard-to-reach stories.

Because testing Web browsing in the farthest part of the house is so revealing of the quality of a mesh-networking kit, let’s take a deeper look at the latency over a five-minute test run with each kit. We broke this into three charts, from best performers to worst, for ease of reading.

We’re looking at the time it takes to fetch our 16-element “webpage,” by percentile. The median (50th percentile) results are on the left, the worst (99th percentile) results on the right; lower results are better. Note that all four of our top performers were in multi-hop configuration: If you want stuff to happen faster when you click, you want an access point as close to you as possible.

Orbi RBK50 leads the pack of our next five performers. Though it didn’t do quite as well as the Eero + 2 Beacons, it does offer higher throughput, no cloud dependencies, and fewer devices to place.

Everything else is here. None of these kits had a decent result at the 75th percentile (i.e., one request out of every four), or even a very good one at the median. Note that some kits did great in multi-hop configuration but wound up here in the “yuck” pile when deployed entirely upstairs in a standard “star” topology—access-point placement matters!

In the three graphs above, we look at the top, middle, and lowest performance groups individually, as measured by the time to load a webpage downstairs during multiclient testing. All four of the top performers are kits in multi-hop configuration; if stuff happening fast when you click is a priority, you want to have an access point nearby.

Rather than just looking at the mean (average) of all results during our five-minute test window, here we look across the spectrum of results, from good to bad. The median—or typical—result is the 50th percentile, over on the left. Moving right, the 75th percentile shows you how bad one out of every four clicks will be, and the 99th percentile, at the far right, is essentially the worst result of the five-minute run. If a kit rockets off the chart at the 75th percentile, that means one frustratingly slow page load out of every four. This is a tough test: We’re not modeling an easy time when nobody’s home, we’re modeling a time when several devices are busy doing various things. We think that’s fair, though. A good network should be able to satisfy you all the time.
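The percentile math itself is simple enough to sketch with Python’s standard library. The sample latencies below are made up for illustration; a real run collects hundreds of them:

```python
import statistics

# Made-up page-load times in seconds from one hypothetical test run;
# note the single 6-second outlier that a plain average would dilute.
latencies = [0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 1.1, 1.5, 2.0, 6.0]

# quantiles(n=100) yields the 1st..99th percentile cut points.
pcts = statistics.quantiles(latencies, n=100)
median, p75, p99 = pcts[49], pcts[74], pcts[98]

print(f"median={median:.2f}s  75th={p75:.2f}s  99th={p99:.2f}s")
```

The gap between the median and the 99th percentile is what separates a network that merely benchmarks well from one that never makes you hit refresh.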

TP-Link’s Deco M5 was the only kit tested that actually did worse in multi-hop placement than in “star” (all satellites connected directly to the router) placement. Technically, this indicates poor backhaul quality and/or management in the Deco M5 kit. Practically, this means that adding extra M5 access points is unlikely to significantly improve the quality of a Deco M5 network. You can significantly improve an Orbi, Eero, or Plume network by adding another access point in a hard-to-reach spot, but you shouldn’t expect similar gains with M5.

