07.23.25
The fight over BEAD funding for satellite
Last week, the Washington Post published an article about Starlink’s supposed capacity limitations, based on a paper from X-Lab. This is part of the larger fight over the future of BEAD funding and how much should be redirected from fiber to satellite, with a rival ITIF paper arguing the opposite: that “LEOs Don’t Belong in BEAD” is a myth.
SpaceX has also been lobbying hard on this topic, publishing a network update that notes speeds and latency have both been improving in the US even with more than 2M active users, while regulatory chief Dave Goldman has highlighted his conversations with the FCC, NTIA and others “about how Starlink will make gigabit speeds available to people across the country”. Countering that, several articles have been published suggesting that Starship might never succeed, which would leave SpaceX unable to launch the larger V3 satellites that the company is “targeting to begin launching…in the first half of 2026”.
As one might expect, the lobbyists take extreme positions and the reality is somewhere in the middle: fundamentally there must be some limit to how much it is worth spending on fiber deployment to the most rural and remote locations, when Starlink (and in the future hopefully Kuiper) can provide a high-quality, cost-effective residential broadband service. But on the other hand, putting fiber in the ground is a long-term investment, and it is comparing apples and oranges to equate that to the cost of a Starlink user terminal, which the company expects to have a useful life of three years.
The X-Lab paper suggests that Starlink shouldn’t be funded by BEAD in areas where the population density is more than 6.7 Broadband Service Locations (BSLs) per square mile (which corresponds to limiting the addressable market to just over 3M homes around the country). However, when Starlink had waitlists in parts of the US such as the Pacific Northwest in January this year (since replaced by “congestion charges”), these were in regions with an average of about 4-5 customers per square mile, based on Starlink’s estimated US subscriber base in the area deemed “sold out”.
Since not all households would be expected to actually subscribe to internet service, this suggests that Starlink has already seen plenty of demand in areas at or above the proposed 6.7 BSL per square mile density limit, and those customers certainly found it worth paying for, even if the uplink speeds often fell short of the BEAD benchmark. Regardless of when/if Starship actually gets to orbit, even the current Falcon 9 launch tempo is allowing the capacity of the Starlink service to improve significantly over time, so this proposed cutoff seems too low a limit on where Starlink can usefully provide service.
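As a rough illustration of that arithmetic (the take rate here is purely my assumption for illustration, not a reported figure):

```python
# Rough version of the density argument above. The 4-5 customers/sq mi and
# 6.7 BSL/sq mi figures come from the discussion; the take rate is assumed.
customers_per_sq_mi = 4.5   # midpoint of the ~4-5 customers per square mile estimate
assumed_take_rate = 0.5     # assumed share of locations subscribing to Starlink

implied_bsl_density = customers_per_sq_mi / assumed_take_rate
print(f"Implied location density: ~{implied_bsl_density:.1f} BSLs per sq mi (proposed cutoff: 6.7)")
```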
More to the point, the calculations in the paper simply don’t match the actual constraints on the Starlink service. The assumption is that only one satellite can serve a given cell, but any Starlink user would recognize that’s not how it works in practice: if you set up a portable Starlink terminal and take it down each evening, the app may tell you to point it northeast one day and west the next. That’s because the system is load balancing across the multiple satellites serving a given cell.
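Here’s a minimal sketch of that load-balancing idea; the satellite names and load figures are invented for illustration:

```python
# A cell is not tied to a single satellite: the scheduler can point each
# terminal at whichever visible satellite has the most spare capacity.
visible_satellites = {"sat-northeast": 0.82, "sat-west": 0.35, "sat-southwest": 0.60}  # fraction of capacity in use

def pick_satellite(loads: dict[str, float]) -> str:
    """Return the least-loaded satellite currently visible from the cell."""
    return min(loads, key=loads.get)

print(f"Point the terminal at: {pick_satellite(visible_satellites)}")
# A day later a different set of satellites is overhead, so the answer changes.
```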
At the moment, the primary constraint on the downlink is the FCC’s limit on spectrum re-use (known technically as Nco=1), which means a given channel can only be used once in each cell across Starlink’s 2GHz of downlink spectrum (10.7-12.7GHz). While the efficiency of spectrum use varies (for example, it’s lower for a Starlink Mini than for a regular terminal), a reasonable estimate is ~3-4bps/Hz. So 2GHz of spectrum would equate to a maximum of ~7Gbps in a cell, which isn’t too different from the 6Gbps assumed in the paper. However, the FCC has allowed Starlink’s Gen1 and Gen2 satellites to be counted separately for the purposes of the re-use limit, so the current theoretical maximum downlink speed in a cell is actually twice this level. And now the FCC is consulting on loosening these limits further.
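As a back-of-envelope check on those numbers (the 3.5bps/Hz midpoint and the factor of two for Gen1/Gen2 are estimates from this post, not official figures):

```python
# Back-of-envelope downlink ceiling per cell using the figures above.
downlink_spectrum_hz = 2.0e9   # 10.7-12.7 GHz Ku-band downlink
spectral_efficiency = 3.5      # midpoint of the ~3-4 bps/Hz estimate
reuse_factor = 2               # Gen1 and Gen2 counted separately by the FCC

per_cell_gbps = downlink_spectrum_hz * spectral_efficiency * reuse_factor / 1e9
print(f"Theoretical downlink ceiling per cell: ~{per_cell_gbps:.0f} Gbps")
# ~14 Gbps, versus the 6 Gbps assumed in the X-Lab paper
```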
The X-Lab paper focuses more on uplink capacity as the key density constraint, and it is certainly the case that the amount of spectrum available to Starlink is more limited there, because only 500MHz of Ku-band spectrum is allocated to the uplink (14.0-14.5GHz) compared to 2GHz for the downlink. However, the primary determinant of uplink capacity for Starlink end users is the number of timeslots allocated to uplink transmission, because the network uses Time Division Duplex (TDD) and was originally configured to allow transmission only 10% of the time. That limit was intended to ensure the terminal cannot produce enough radiation to heat up the head of someone standing in front of it (what the FCC refers to as SAR limits). Over time SpaceX has been able to improve this percentage (now 15.5% of the time for uncontrolled use), and professionally installed terminals can go even higher. So there’s no reason to conclude that the 0.4Gbps per beam assumed in the paper is a hard limit.
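To see why, here’s a sketch of how the per-beam ceiling scales with the duty cycle, assuming (my assumption, for illustration) that the paper’s 0.4Gbps figure corresponds to the original 10% limit:

```python
# If uplink throughput scales roughly with the TDD transmit duty cycle,
# raising that duty cycle raises the per-beam ceiling proportionally.
baseline_gbps = 0.4    # per-beam uplink capacity assumed in the X-Lab paper
baseline_duty = 0.10   # original 10% transmit-time limit

for duty in (0.10, 0.155, 0.20):   # original limit, current uncontrolled limit, hypothetical professional install
    scaled = baseline_gbps * duty / baseline_duty
    print(f"duty cycle {duty:.1%}: ~{scaled:.2f} Gbps per beam")
```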
On the other side of the lobbying effort, the ITIF paper ignores the fact that the BEAD funding mechanisms are extremely poorly suited to fund satellite deployments, as I discussed in this thread on X/Twitter. BEAD has been set up so you bid for money to deploy infrastructure in a particular geographical area, regardless of how many customers actually sign up. That makes sense when funding fiber or even wireless infrastructure: if you build a tower or lay a fiber line, the only way to make a return is to sell service within that coverage area. However, if you fund a satellite operator to build more LEO satellites, then those satellites will spend only a tiny fraction of 1% of the time over that area as they go around the Earth, and can devote 99%+ of the orbit to earning money from more valuable customers. So there is no real incentive for a satellite operator to actually sell service to the unserved customers.
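A crude back-of-envelope illustrates just how small that fraction is; the project area size is a made-up example:

```python
# To first order, the share of an orbit a LEO satellite spends over a region
# is that region's share of the Earth's surface (ignoring inclination, which
# changes the factor somewhat but not the conclusion).
earth_surface_sq_mi = 197e6   # total surface area of the Earth
project_area_sq_mi = 1_000    # hypothetical rural BEAD project area

fraction = project_area_sq_mi / earth_surface_sq_mi
print(f"Share of each orbit spent over the project area: ~{fraction:.5%}")
# ~0.0005%, leaving 99.999%+ of the orbit free to serve customers elsewhere
```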
The best way to square this circle would be to provide affordability instead of deployment incentives (i.e. a subsidy for terminals and/or monthly service), so that the satellite operator only earns money when end users in these unserved areas actually sign up, which was how the Affordable Connectivity Program (ACP) was structured. Otherwise the satellite operator is getting paid for something they are already doing: Starlink has over 7000 satellites in orbit already and is launching dozens every week, so why pay them to launch a few hundred more? One possibility is to structure reimbursement payments “based on the number of subscribers the provider serves and/or enrolls” rather than “in equal installments throughout the period of performance”.
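A quick sketch contrasts the two reimbursement structures; all the dollar amounts and subscriber counts are invented for illustration, not actual BEAD figures:

```python
# Contrast deployment-based installments with per-subscriber reimbursement.
award = 1_000_000            # hypothetical award for an unserved project area
period_years = 4
expected_subscribers = 500   # hypothetical locations expected to sign up
actual_subscribers = 50      # what happens if take-up is poor

# "in equal installments throughout the period of performance"
deployment_based = (award / period_years) * period_years   # paid in full regardless of take-up

# "based on the number of subscribers the provider serves and/or enrolls"
subscriber_based = (award / expected_subscribers) * actual_subscribers

print(f"Deployment-based payout: ${deployment_based:,.0f}")
print(f"Subscriber-based payout: ${subscriber_based:,.0f}")
```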
And when it comes to bidding, why wouldn’t any satellite operator bid a very low amount for the right to deploy service in unserved areas? If they can prevent terrestrial broadband technologies like fiber and wireless from getting subsidies for deployment, then they have a captive market to themselves. Certainly if both Starlink and Kuiper are bidding against one another, and these reimbursements are independent of the number of customers served, it would be logical for their deployment bids to be particularly low, since the cost of simply making service available is essentially zero. We saw in the RDOF auction (when Starlink didn’t face any meaningful competition from other satellite operators) that SpaceX was able to undercut terrestrial technologies, but the fight over whether or not they would actually receive their $885M in winning bids made absolutely no difference to the number of satellites that the company put into orbit.
So in conclusion, satellite has a great opportunity to enhance broadband service in rural areas, potentially in more places than the very lowest density parts of the country. But unless the BEAD payments are linked to the number of customers served, the program will not do a good job of helping consumers realize those benefits.
