One of the more common mapping questions I get asked is about predicting VDSL speeds – who might benefit from an upgrade to Fibre to the Cabinet (FttC), and who might be caught in the “NGA but not superfast” trap, where premises are connected to an upgraded cabinet but don’t really benefit from it.
VDSL, just like its sibling technology ADSL before it, delivers diminishing speed with distance: the further from a cabinet a customer resides, the slower the speed they can expect. This decay is fairly well documented by manufacturers, so it can be predicted where the cabinet location and the copper network routes are known. Often, though, only the location of the cabinet is known – or at least it is the only thing relatively easily found out – so some method of estimating the speed is needed, based on assumptions about the network.
The traditional method is simply to draw a circle of a given radius around a cabinet; since at around 1.4 km of wire VDSL is likely to deliver around 24 Mbps, a 1 km radius is often used to make some provision for winding roads. This can be misleading, as there is no standard degree of complexity in our road networks – some cabinets may well be located at the crossroads of a clean radial network, while others sit at the heart of a labyrinth of twists and turns.
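The radial method is trivial to reproduce in PostGIS. The sketch below is an assumption about table layout, not a script from this project: it supposes a hypothetical `cabinets` table with a point geometry column `geom` in a metric projection such as British National Grid (EPSG:27700), so that buffer distances are in metres.

```sql
-- A minimal sketch of the radial method. Assumes a hypothetical
-- "cabinets" table (id, geom) with geom in a metric CRS such as
-- EPSG:27700, so ST_Buffer's distance argument is in metres.
SELECT id,
       ST_Buffer(geom, 1000) AS coverage  -- 1 km circle around each cabinet
FROM   cabinets;
```

The resulting polygons can be loaded straight into QGIS as a layer for comparison with the routed version.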
A better method is to use the road network and a “drive time” or isochrone algorithm, but this is more complex and more resource-hungry when a large number of cabinets are needed. However, it is possible to build a model using this approach which works very well for clusters of cabinets using QGIS, PostGIS 2.x, and the OpenStreetMap road network data loaded into a PostgreSQL database – all free resources!
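One common way to get OpenStreetMap data into a routable form – an assumption on my part, not necessarily the exact workflow used here – is the osm2pgrouting tool, which builds the node and edge tables that the routing functions need. The database name and extract filename below are placeholders.

```shell
# One possible loading workflow (filenames and database name are
# hypothetical). Requires PostgreSQL with the PostGIS and pgRouting
# extensions installed, plus the osm2pgrouting tool.
createdb osm_routing
psql -d osm_routing -c "CREATE EXTENSION postgis; CREATE EXTENSION pgrouting;"
osm2pgrouting --file devon.osm --conf mapconfig.xml \
              --dbname osm_routing --username postgres --clean
```

This produces a `ways` edge table and a `ways_vertices_pgr` node table that the routing query can work against.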
In the map above, the radial model (hatching slanting left) was simply achieved by creating a 1 km buffer around the cabinet. The road version (hatching slanting right) was created with an SQL script that combines the pgRouting driving-distance function with the new PostGIS concave hull function, applied to a routable copy of the OpenStreetMap road layer. Rather than using the usual speed or time as the cost, this approach uses distance; I found it easiest to create an SQL function that takes the id of the OSM network node nearest the cabinet and the wire-line distance as inputs, and returns the geometry of the polygon. I also recommend using a current version of QGIS to benefit from the multiprocessor support, especially if you plan to model more than one cabinet at a time.
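A function along these lines can be sketched as follows. This is my hedged reconstruction of the approach described, not the author’s exact script: it assumes a routable `ways` table (as produced by osm2pgrouting) with columns `gid`, `source`, `target`, `length_m` and a matching `ways_vertices_pgr` node table, and uses metres of cable as the cost rather than time.

```sql
-- Sketch of a catchment function: walk the network outwards from
-- start_node with pgr_drivingDistance, using edge length in metres
-- as the cost, until wire_m is exhausted; then wrap the reached
-- nodes in a concave hull. Table and column names are assumptions
-- based on a standard osm2pgrouting load.
CREATE OR REPLACE FUNCTION cabinet_catchment(start_node bigint,
                                             wire_m double precision)
RETURNS geometry AS
$$
  SELECT ST_ConcaveHull(ST_Collect(v.the_geom), 0.9)
  FROM   pgr_drivingDistance(
           'SELECT gid AS id, source, target, length_m AS cost FROM ways',
           start_node,
           wire_m,
           false) AS dd          -- false = treat the network as undirected
  JOIN   ways_vertices_pgr AS v ON v.id = dd.node;
$$ LANGUAGE sql STABLE;

-- Example: the polygon reachable within 1.4 km of wire from node 1234.
-- SELECT cabinet_catchment(1234, 1400);
```

The second argument to `ST_ConcaveHull` (here 0.9) controls how tightly the hull hugs the reached nodes; values nearer 1 approach a convex hull, so it is worth experimenting for your road network.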
Even in this simple case, based on a cabinet location in Thurlestone in Devon, it’s clear that a simple radial method would not be a good predictor of who would benefit from upgrading a cabinet: some would be unlikely to receive the service they might otherwise have expected, while others may benefit unexpectedly. In this case the latter group appears to outnumber the former, but that will vary from area to area.
While the routing approach adds a layer of detail, it remains a somewhat idealised model. The key constraints are that it assumes optimally routed cabling radiating from the cabinet, when the actual routes cannot be known, and that the quality of the cabling is also unknown. As a result this approach will tend to produce an optimistic maximum coverage, with the true extent of high-speed services likely to be somewhat less, but it is still a somewhat better solution than a simple circle.
So which method is best?
If the need is to consider the likely impact on a large area, producing aggregate estimates of the numbers expected to benefit, then the radial approach is probably good enough. If, however, you are considering an intervention and need to see who is likely to benefit over a smaller area, then the routed road network method is worth the extra processing cycles.