As the FCC’s incentive auction draws to a close, some further clues emerged about the bidding when the FCC split licenses between reserved and unreserved spectrum. What stood out was that in Los Angeles, San Diego and 10 other smaller markets (incidentally all located in the southwestern US), only 1 license is classified as reserved. That means only 1 bidder has designated itself as reserve-eligible when bidding for these licenses, and that bidder only wants a single 5x5MHz block of spectrum. In contrast, in LA there are five 5x5MHz blocks going to non-reserved bidders (and 1 block spare).
This leads me to believe that T-Mobile may not be holding quite as much spectrum as anticipated, at least in that part of the country, while some potentially reserve-eligible bidders (i.e. entities other than Verizon and AT&T) have not designated themselves as reserve-eligible. That election can be made on a PEA-by-PEA basis, but it would be very odd for a major bidder like Comcast not to designate itself as reserve-eligible. On the other hand, speculators whose intention is to sell their spectrum to Verizon or AT&T very likely would not want to be reserve-eligible, since that could cause additional problems in a future sale transaction.
A plausible conclusion is that if T-Mobile’s bidding is more constrained, then Comcast may be bidding more aggressively than expected, but primarily in areas where it already has cable infrastructure (i.e. not Los Angeles, San Diego, etc.), and T-Mobile, AT&T and Comcast may all end up with an average of roughly 10x10MHz of spectrum on a near-national basis. We already know that one or more speculators are bidding aggressively, due to the gap between gross and net bids (note that the FCC reports this gap without regard to the $150M cap on DE discounts, so it could be a single aggressive player with $2B+ in exposure). The balance of the 70MHz of spectrum being sold would then be held by other players, with these holdings likely skewed towards more saleable larger markets, including Los Angeles.
It’s interesting to note that speculation is now revving up about the transactions to come after the auction is complete, with most attention focused on whether Verizon is serious about a bid for Charter, or whether this is a head fake to bring DISH to the table for a spectrum-focused deal, after Verizon apparently sat out the incentive auction. Incidentally, Verizon’s expressed interest in Charter would also tend to support the notion that Verizon believes Comcast may want to play a bigger role in the wireless market, by acquiring a significant amount of spectrum in the incentive auction and perhaps even buying a wireless operator at a later date.
However, when you look at Sprint’s recent spectrum sale-leaseback deal, which was widely highlighted for the extraordinarily high valuation that it put on the 2.5GHz spectrum band, Verizon’s need for a near term spectrum transaction is far from compelling. I’m told that the appraisal analysis estimated the cost of new cellsites that Verizon would need to build with and without additional 2.5GHz spectrum, but that either way, there is no need for Verizon to engage in an effort to add substantial numbers of macrocells until 2020 or beyond, given its current spectrum holdings and the efficiency benefits accruing from the latest LTE technology. And if mmWave spectrum and massive MIMO are successful, then Verizon’s need for spectrum declines considerably.
So it seems there is little reason for Verizon to cave now, and pay Ergen’s (presumably high) asking price, when it does not need to start building until after the March 2020 buildout deadline for DISH’s AWS-4 licenses. It would not be a surprise if Verizon were willing to pay the same price as is achieved in the incentive auction (i.e. less than $1/MHzPOP), but the question is whether Ergen will be prepared to accept that.
Of course, DISH bulls suggest that the FCC will be happy to simply extend this deadline indefinitely, even if DISH makes little or no effort to offer a commercial service before 2020. The most important data point on that issue will come in early March 2017, when DISH passes its initial 4 year buildout deadline without making any effort to build out a network. Will the FCC take this opportunity to highlight the need for a large scale buildout that DISH promised in 2012 and the FCC noted in its AWS-4 order? Certainly that would appear to be good politics at this point in time.
“…we observe that the incumbent 2 GHz MSS licensees generally support our seven year end-of-term build-out benchmark and have committed to “aggressively build-out a broadband network” if they receives terrestrial authority to operate in the AWS-4 band. We expect this commitment to be met and, to ensure that it is, adopt performance requirements and associated penalties for failure to build-out, specifically designed to result in the spectrum being put to use for the benefit of the public interest.”
“In the event a licensee fails to meet the AWS-4 Final Build-out Requirement in any EA, we adopt the proposal in the AWS-4 NPRM that the licensee’s terrestrial authority for each such area shall terminate automatically without Commission action…We believe these penalties are necessary to ensure that licensees utilize the spectrum in the public interest. As explained above, the Nation needs additional spectrum supply. Failure by licensees to meet the build-out requirements would not address this need.”
Given the current status of the FCC incentive auction, which is making broadcasters (or at least their auction advisers) suicidal and leaving Wall St analysts perplexed, it’s important to note that this really is a “great game” with billions of dollars at stake for the winners and losers. So I thought it might be helpful to summarize the winners and losers in previous large FCC auctions, and take a stab at predicting how this time will be different.
2006 AWS-1 auction: Winner: SpectrumCo, Loser: Wireless DBS (DISH/DirecTV), Biggest Loser: Verizon
In the AWS-1 auction, SpectrumCo picked up a national 20MHz block of licenses at the cheapest price per MHzPOP of any participant due to smart advice from Paul Milgrom, which saved them over $1B, as highlighted in this excellent paper. In contrast, Wireless DBS, the partnership of DISH and DirecTV pulled out early without buying any licenses, while Verizon paid the most for its F-block spectrum and didn’t even come away with a national footprint because it ran out of eligibility.
2008 700MHz auction: Winner: Verizon, Loser: Google, Biggest Loser: AT&T
In the 700MHz auction, AT&T painted a target on its back by buying Aloha’s lower C-block spectrum just before the auction. That made it entirely predictable that AT&T would want to acquire the adjacent lower B-block, allowing Verizon to park eligibility in that block and push up the price, while leaving Google to bid against itself for the upper C-block with its open access conditions. This was so obvious that I pointed the situation out while the auction was still going on, even though the bidding was anonymous. Verizon ended up getting the 22MHz upper C-block spectrum very cheaply, while AT&T paid at least $5B more for a similar amount of spectrum.
2014-15 AWS-3 auction: Winner: DISH, Loser: T-Mobile, Biggest Loser: AT&T
In the AWS-3 auction, DISH confused all the other bidders and most external observers by bidding through three entities simultaneously, and ultimately acquiring all of its licenses via its two Designated Entities, Northstar and SNR, while pushing up the prices to astonishingly high levels. This forced T-Mobile to exit from the auction without gaining the spectrum it wanted, but more importantly, AT&T’s fixed going-in position of “get 10x10MHz everywhere” caused it to spend far more than either DISH or Verizon (which was either smarter or just read my blog post on what was happening). Again AT&T spent at least $5B more than necessary in the auction.
It’s notable that AT&T has been the biggest loser in both the 700MHz and AWS-3 auctions and has wasted over $10B in the process. But as I noted above, I think this time will be different, presumably because AT&T has hired some smart consultants and decided to play the game strategically rather than conforming to a fixed spectrum target from the start. So my prediction for the incentive auction is as follows:
2016-17 Incentive auction: Winner: AT&T, Losers: T-Mobile, DISH, Biggest Loser: Broadcasters
AT&T appears to have been the driving force in Stage 1 of the auction, threatening to strand DISH in a handful of expensive top licenses (New York, Los Angeles, Chicago and San Francisco) and forcing DISH to exit. Then, with Comcast also trying to get out after its MVNO deal with Verizon, Verizon not even playing the game, and AT&T set to win the FirstNet spectrum, AT&T clearly holds the winning hand. AT&T can now keep dropping the licenses it held at the end of Stage 1 until broadcasters are forced to accept a tiny fraction of their originally expected receipts. It can leave T-Mobile (plus a bunch of spectrum speculators in various DEs) holding most of the spectrum, which AT&T can later strand by supporting the broadcasters in their efforts to delay the transition and by ensuring that the band remains non-standard, because AT&T and Verizon won’t bother supporting it. And it can screw DISH by setting a new national benchmark of ~$0.90/MHzPOP for low band spectrum (helpfully also making sure T-Mobile doesn’t need any more spectrum from DISH because it has a surfeit of low band holdings).
Am I giving AT&T too much credit? After all, there is not much existing evidence that they know how to behave smartly in FCC auctions. Perhaps, but on the other hand, I think this is the scenario that best fits what we’ve seen so far (though by stating it so explicitly, I do worry that I might trigger a rush for the exits in the next stage(s) of the forward auction).
What will broadcasters do now? Will they cave on price and accept less than $14B for 84MHz of spectrum cleared (so the auction can close at the Stage 4 reserve price)? Will this drag on further, with both the dollars raised and spectrum sold falling further? That’s unclear, but either way, it’s not going to be a Happy New Year if you are a broadcaster trying to sell your spectrum.
Here’s a question for FCC incentive auction watchers: why did Stage 1 of the forward auction stop suddenly in Round 27 with proceeds of $23.1B? After all, that was substantially more than the first component (reserve price) target of $15.9B and dramatically less than the second component target (clearing costs) of $88.4B. So was it just random, or was there a deliberate decision by one or more large bidders to stop in that round by dropping demand to match supply in all of the top 40 high demand markets?
If you analyze the data carefully, you can see that stopping in Round 27 was in fact precisely calibrated to match the reserve price target in Stage 4 and beyond, when it resets to a subtly different formulation. To be specific, “the first component, which aims to ensure that winning bids for forward auction licenses reflect competitive prices, will be satisfied if, for a given stage of the auction:
The clearing target is at or below 70 megahertz and the benchmark average price per MHz-pop for Category 1 blocks in high-demand PEAs in the forward auction is at least $1.25 per MHz-pop; or
The clearing target is above 70 megahertz and the total proceeds associated with all licenses in the forward auction exceed the product of the price benchmark of $1.25 per MHz-pop, the forward auction spectrum benchmark of 70 megahertz, and the total number of pops associated with the Category 1 blocks in high-demand PEAs.”
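For concreteness, the two-pronged test quoted above can be sketched as a small function. This is only an illustration of the shape of the calculation; the pop count used below is a made-up round number, not the FCC’s actual Category 1 high-demand pop total.

```python
PRICE_BENCHMARK = 1.25      # $ per MHz-pop
SPECTRUM_BENCHMARK = 70     # MHz (forward auction spectrum benchmark)

def first_component_met(clearing_target_mhz, avg_price_per_mhz_pop,
                        total_proceeds, cat1_high_demand_pops):
    """Sketch of the FCC's first (competitive price) component test."""
    if clearing_target_mhz <= 70:
        # At or below 70 MHz: the benchmark average price for Category 1
        # blocks in high-demand PEAs must reach $1.25/MHz-pop
        return avg_price_per_mhz_pop >= PRICE_BENCHMARK
    # Above 70 MHz: total proceeds must exceed
    # $1.25/MHz-pop x 70 MHz x (Category 1 high-demand pops)
    return total_proceeds > PRICE_BENCHMARK * SPECTRUM_BENCHMARK * cat1_high_demand_pops

# Illustrative only: with a hypothetical 100M pops, the above-70MHz
# threshold would be $1.25 x 70 x 100M = $8.75B in total proceeds.
threshold = PRICE_BENCHMARK * SPECTRUM_BENCHMARK * 100e6
```

Note that the test switches from an average-price condition to a total-proceeds condition once the clearing target exceeds 70 megahertz, which is why the stopping point of the forward auction matters so much for later stages.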
[UPDATED 12/21] It’s clear that Round 27 was the first round in which “the benchmark average price per MHz-pop for Category 1 blocks in high-demand PEAs in the forward auction is at least $1.25 per MHz-pop” (although this will only be achieved in Stage 4 if one or more of the spare licenses in Los Angeles is taken up). Thus, at least one bidder was looking ahead to a situation where the auction would have to go into Stage 4 or beyond (the FCC pointed out in its public notice that the starting price for high demand markets in Stage 4 was $1.22/MHzPOP). That conclusion very likely explains why we saw no further bidding in Stages 2 and 3, as additional bids were dropped. It also tends to confirm that DISH was no longer present at the end of Stage 1 to force up the price of spectrum above the minimum necessary.
Now we’ll have to see how the game continues (and you can read more about who we think is responsible in our industry report for subscribers published last week), but the carefully calibrated outcome of Stage 1 ensures that the first component can be met as soon as one or both of the spare licenses in Los Angeles are taken up, but (if they still have eligibility to play with in the top 40 markets) the bidders could continue to drop license demand and simply wait until the clearing costs drop below the total forward auction bids. That would mean a realized average price for spectrum across the US as a whole of less than $0.90/MHzPOP.
When could that happen? Well, with FCC staff apparently suggesting that as little as 40MHz of spectrum might be sold, it could be a while yet, and net proceeds might be as low as $10B (at 40MHz sold in Stage 7) or $12-13B (at 50MHz sold in Stage 6). With $1.9B deducted from that figure for repacking costs, broadcasters could quite plausibly be left with little more than $10B in reverse auction payments. That might be too pessimistic, but at this stage it seems like a decent bet that the final net proceeds in the forward auction will be below the $19B raised (from 52MHz of spectrum sold) in the 700MHz auction back in 2008 and essentially certain that the average price per MHzPOP will be lower than the $1.28/MHzPOP achieved back in 2008.
Probably the most surprising thing about today’s announcement that Softbank is investing $1B in OneWeb as part of a $1.2B funding round is the lack of a spoiler announcement from SpaceX. That’s happened in the past on both occasions when OneWeb made a major announcement: in January 2015 (when OneWeb announced its initial agreements with Qualcomm and Virgin) and in June 2015 (when OneWeb announced its initial $500M equity round).
In fact one of the more important fights that is going on behind the scenes is related to regulatory priority in terms of ITU filings, where SpaceX is some way behind. OneWeb is acknowledged to have the first filing for an NGSO Ku-band system, but also needs access to the Ka-band for its gateway links. That led Telesat to request that the FCC deny OneWeb’s petition for a US license, based on “Canadian ITU filings associated with Telesat’s Ka-band NGSO system [that] date back to 2012 and January 6, 2015” whereas “the earliest ITU filing date priority for OneWeb is January 18, 2015.” LeoSat also claimed that it had priority over OneWeb in November 2016, based on “French ITU filings for LeoSat’s Ka-band MCSAT-2 LEO-2 network [that] date back to November 25, 2014.”
However, OneWeb now appears to have attempted something of an end run around these objections, acquiring rights to the French MCSAT LEO network (which has an ITU advance publication date of April 2, 2013) from Thales Alenia Space. That’s particularly odd because LeoSat, which states specifically in its FCC application that it “will deploy the LeoSat System in conjunction with Thales Alenia Space,” might now find TAS’s own filings being used against it.
UPDATE (12/20): I’m told that the relevant ITU coordination dates for the different Ka-band NGSO proposals are as follows:
Telesat (Comstellation): December 20, 2012
LeoSat (MCSAT2 LEO2): November 25, 2014
OneWeb’s newly acquired MCSAT LEO filing: December 3, 2014
SpaceX: December 27, 2014
OneWeb’s original Ka-band filing: January 18, 2015.
That would imply that OneWeb has now jumped ahead of SpaceX at the ITU, but remains behind Telesat and LeoSat, although I’m sure there will be many arguments to come.
All this fighting to be first in line at the ITU will also have to take into account the FCC’s attempt to clarify the rules for new NGSO systems in an NPRM released on Thursday, December 15. The FCC’s rules state that NGSO systems should share spectrum through the “avoidance of in-line interference events” and the NPRM proposed new language in an attempt to make this more explicit. However, this language is far from clear about whether the sharing of spectrum is required on a global basis or just in the US, specifically the key paragraph in the newly proposed §25.261 states:
(a) Scope. This section applies to NGSO FSS satellite systems that communicate with earth stations with directional antennas and that operate under a Commission license or grant of U.S. market access under this part in the 10.7-12.7 GHz (space-to-Earth), 12.75-13.25 GHz (Earth-to-space), 13.75-14.5 GHz (Earth-to-space), 17.8-18.6 GHz (space-to-Earth), 18.8-19.4 GHz (space-to-Earth), 19.6-20.2 GHz (space-to-Earth), 27.5-29.1 GHz (Earth-to-space), or 29.3-30 GHz (Earth-to-space) bands.
whereas the existing language states:
(a) Applicable NGSO FSS Bands. The coordination procedures in this section apply to non-Federal-Government NGSO FSS satellite networks operating in the following assigned frequency bands: The 28.6-29.1 GHz or 18.8-19.3 GHz frequency bands.
The pertinent question here, which is left unresolved by the proposed changes quoted above, is whether a “satellite network” consists of both an FCC-licensed satellite system and the earth station it is communicating with, and if so whether both of these or just the satellite system itself must “operate under a Commission license or grant of U.S. market access” according to the new text. If it is the former, then the new rules will clearly apply only in the US (where the earth station is licensed by the FCC), whereas if it is the latter, then the rules could be taken to imply that any recipient of a satellite system license from the FCC in the current processing round may have to agree to comply with these sharing rules on a global basis.
It therefore seems that regulatory lawyers will have plenty of work for the next year arguing on behalf of their clients. However, OneWeb will have the money to move forward quickly and extend its lead over other NGSO systems, apart from O3b, which is currently building its next batch of 8 satellites. It remains to be seen if other systems will catch up, but Telesat (which has already ordered two test satellites) is potentially best positioned to be a third player, especially if it can secure Canadian government backing for universal service in the Arctic region.
Then we need to see how the market evolves. Greg Wyler highlighted his ambitions for OneWeb to serve 100M people by 2025 and after the alliance with Softbank, this will most likely be in the form of cellular backhaul from tens or hundreds of thousands of small cells in remote areas, just as Softbank already does at over 6000 cell sites in Japan using IPStar capacity. In contrast, O3b should continue its plans to serve highly concentrated demand hotspots, like remote islands needing connectivity to the outside world and large cruise ships.
Most of the other NGSO proposals, including Telesat and SpaceX, appear to have a fairly similar plan to O3b, with small beams used to serve a select number of demand hotspots. So the question then becomes, how much concentrated demand exists for satellite connectivity? O3b will generate roughly $100M of revenues in 2016 and has a clear path to growth into the $200M-$300M range. But is it a multi-billion dollar opportunity and is there room for one or more additional systems in this niche? And can new systems overtake O3b, given its multi-year lead in this market? Only time will tell, but if OneWeb can maintain its focus on low cost cellular backhaul and gain anchor tenant commitments from Softbank, Bharti Airtel and perhaps others, these competitive dynamics are going to be much more of an issue for O3b.
UPDATED Feb 5, 2017
There’s been a lot of recent news about Chinese investments in satellite companies, including the planned takeover of Spacecom, which is now being renegotiated (and probably abandoned) after the loss of Amos-6 in September’s Falcon 9 failure, and the Global Eagle joint venture for inflight connectivity.
There were also rumors that Avanti could be sold to a Chinese group, which again came to nothing, with Avanti’s existing bondholders ending up having to fund the company instead in December 2016. The latest of these vanishing offers was a purported $200M bid from a Chinese company, China Trends, for Thuraya in mid-January 2017, which Thuraya promptly dismissed, saying it had never had discussions of any kind with China Trends.
Back in July Inmarsat was also reported to have approached Avanti, but then Inmarsat declared it had “no intention to make an offer for Avanti.” I had guessed that Inmarsat appeared to have done some sort of deal with Avanti, when the Artemis L/S/Ka-band satellite was relocated to 123E, into a slot previously used by Inmarsat for the ACeS Garuda-1 L-band satellite (as Avanti’s presentation at an event in October 2016 confirmed).
However, I’m now told that the Indonesian government reclaimed the rights to this slot after Garuda-1 was de-orbited, and is attempting to use the Artemis satellite to improve its own claim to this vacant slot before these rights expire. I also understand that with Artemis almost out of fuel, various parties were very concerned that the relocation would not even work and the Artemis satellite could have been left to drift along the geostationary arc, an outcome which thankfully has been avoided.
The action by the Indonesian government seems to hint at a continued desire to control its own MSS satellite, which could come in the shape of the long rumored purchase of the SkyTerra-2 L-band satellite for Indonesian government use, similar to the MEXSAT program in Mexico. If that is the case, then presumably the Indonesians would also need to procure a ground segment, similar to the recent $69M contract secured by EchoStar in Asia (although that deal is for S-band not L-band).
Meanwhile Inmarsat still appears to be hoping to secure a deal to lease the entire payload of the 4th GX satellite to the Chinese government, which was originally expected back in October 2015, when the Chinese president visited Inmarsat’s offices. That contract has still not been signed, apparently because the Chinese side tried to negotiate Inmarsat’s price down after the visit. Although Inmarsat now seems to be hinting to investors that the I5F4 satellite will be launched into the Atlantic Ocean Region for incremental aeronautical capacity, last fall Inmarsat was apparently still very confident that a deal could be completed in the first half of 2017 once the I5F4 satellite was launched.
So it remains to be seen whether Inmarsat will be any more successful than other satellite operators in securing a large deal with China or whether, just like many others, Inmarsat’s deal will vanish into thin air. China has already launched its own Tiantong-1 S-band satellite in August 2016, as part of the same One Belt One Road effort that Inmarsat was hoping to participate in with its GX satellite, and a smartphone for the Tiantong-1 service “will retail from around 10,000 yuan ($1,480), with communication fees starting from around 1 yuan a minute — a tenth of the price charged by Inmarsat.” Thus Inmarsat potentially faces growing pressure on its L-band revenues in China, and must hope that it can secure some offsetting growth in Ka-band.
Although there have been plenty of news articles describing the proposed 4000 satellite constellation that SpaceX filed with the FCC last week, to date there has been no analysis of how technically plausible this proposal actually is. That is perhaps unsurprising because the Technical and Legal Narratives included with the submission omit or obscure many of the most salient points needed to analyze the system and determine how realistic the claims made in SpaceX’s Legal Narrative actually are.
In particular, SpaceX claims that it has “designed its system to achieve the following objectives”:
High capacity: Each satellite in the SpaceX System provides aggregate downlink capacity to users ranging from 17 to 23 Gbps, depending on the gain of the user terminal involved. Assuming an average of 20 Gbps, the 1600 satellites in the Initial Deployment would have a total aggregate capacity of 32 Tbps. SpaceX will periodically improve the satellites over the course of the multi-year deployment of the system, which may further increase capacity.
High adaptability: The system leverages phased array technology to dynamically steer a large pool of beams to focus capacity where it is needed. Optical inter-satellite links permit flexible routing of traffic on-orbit. Further, the constellation ensures that frequencies can be reused effectively across different satellites to enhance the flexibility and capacity and robustness of the overall system.
Broadband services: The system will be able to provide broadband service at speeds of up to 1 Gbps per end user. The system’s use of low-Earth orbits will allow it to target latencies of approximately 25-35 ms.
Worldwide coverage: With deployment of the first 800 satellites, the system will be able to provide U.S. and international broadband connectivity; when fully deployed, the system will add capacity and availability at the equator and poles for truly global coverage.
Low cost: SpaceX is designing the overall system from the ground up with cost effectiveness and reliability in mind, from the design and manufacturing of the space and ground-based elements, to the launch and deployment of the system using SpaceX launch services, development of the user terminals, and end-user subscription rates.
Ease of use: SpaceX’s phased-array user antenna design will allow for a low-profile user terminal that is easy to mount and operate on walls or roofs.
What is particularly interesting is that the application says nothing whatsoever about the size of the user terminal that will be needed for the system. One hint that the user terminals are likely to be large and expensive is that SpaceX assures the FCC that “[t]he earth stations used to communicate with the SpaceX System will operate with aperture sizes that enable narrow, highly-directional beams with strong sidelobe suppression”. More importantly, by analyzing the information on the satellite beams given at the end of the Schedule S, it is clear that the supposed user downlink capacity of 17-23Gbps per satellite assumes a very large user terminal antenna diameter, because there are only 8 Ku-band user downlink beams of 250MHz each per satellite, and thus a total of only 2GHz of user downlink spectrum per satellite.
In other words this calculation implies a link efficiency of somewhere between 8.5 and 11.5bps/Hz. For comparison, OneWeb has 4GHz of user downlink spectrum per satellite, and is estimated to achieve a forward link efficiency of 0.55bps/Hz with a 30cm antenna and up to 2.73bps/Hz with a 70cm antenna. Put another way, OneWeb is intending to operate with twice as much forward bandwidth as SpaceX but with only half as much forward capacity per satellite.
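The arithmetic behind this implied efficiency is simple to check directly; a quick sketch, using only the figures quoted above (8 Ku-band user downlink beams of 250 MHz each, and the claimed 17-23 Gbps per satellite):

```python
# SpaceX Schedule S figures quoted in the post
beams = 8
beam_bandwidth_mhz = 250

# Total user downlink spectrum per satellite: 8 x 250 MHz = 2 GHz
downlink_spectrum_ghz = beams * beam_bandwidth_mhz / 1000

# Implied link efficiency: Gbps divided by GHz is the same as bps/Hz
low_efficiency = 17 / downlink_spectrum_ghz    # 8.5 bps/Hz
high_efficiency = 23 / downlink_spectrum_ghz   # 11.5 bps/Hz
```

For comparison, the estimated OneWeb figures above (0.55 bps/Hz for a 30cm antenna, 2.73 bps/Hz for a 70cm antenna) sit well below even the bottom of this range, which is why the SpaceX numbers point towards very large user terminals.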
That’s because OneWeb is intending to serve small, low cost (and therefore less efficient) terminals suitable for cellular backhaul in developing countries, or for internet access from homes and small businesses in rural areas. In contrast SpaceX’s system appears much more focused on large expensive terminals, similar to those used by O3b, which can cost $100K or more, and are used to connect large cruise ships or even an entire Pacific Island to the internet with hundreds of Mbps of capacity. While this has proved to be a good market for O3b, it is far from clear that this market could generate enough revenue to pay for a $10B SpaceX system. Even then, an assumption that SpaceX could achieve an average downlink efficiency of 10bps/Hz seems rather unrealistic.
SpaceX is able to gain some increased efficiency compared to OneWeb by using tightly focused steered gateway and user beams, which the Technical Narrative indicates will provide service in “a hexagonal cell with a diameter of 45 km” (Technical Annex 1-13). But there are only 8 user downlink beams per satellite, and so the total coverage area for each satellite is extremely limited. A 45km diameter hexagon has an area of 1315 sq km (or 1590 sq km for a 45km circle). Taking the more generous measure of 1590 sq km, over 5000 cells would be needed to cover the 8 million sq km area of the continental US. And SpaceX states (Technical Annex 2-7) that even in a fully deployed constellation, 340 satellites would be visible at an elevation angle of at least 40 degrees. So this implies that even when the constellation is fully deployed, only about half the land area of CONUS will be able to be served simultaneously. And in the initial deployment of 1600 satellites, potentially only about 30% of CONUS will have simultaneous service.
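The coverage numbers above are easy to reproduce. In this sketch, the 8 million sq km CONUS area and the 340-satellite visibility figure are taken directly from the filing as described above; the cell areas follow from the stated 45 km hexagon (treating its diameter as the distance across corners):

```python
import math

cell_diameter_km = 45
radius = cell_diameter_km / 2

# Regular hexagon with circumradius 22.5 km: ~1315 sq km
hex_area = 3 * math.sqrt(3) / 2 * radius ** 2
# More generous 45 km circle: ~1590 sq km
circle_area = math.pi * radius ** 2

conus_area_sq_km = 8_000_000
cells_needed = conus_area_sq_km / circle_area      # just over 5000 cells

beams_per_satellite = 8
visible_satellites = 340                           # at >=40 deg elevation, full constellation
coverage_fraction = visible_satellites * beams_per_satellite / cells_needed
# ~0.54, i.e. only about half of CONUS served simultaneously
```

Even on the generous circular-cell measure, 340 visible satellites with 8 user beams each can only illuminate roughly 2700 of the 5000+ cells at once, which is the basis for the “about half” figure.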
SpaceX could use beamhopping technology, similar to that planned by ViaSat for ViaSat-2 and ViaSat-3, to move the beams from one cell to another within a fraction of a second, but this is not mentioned anywhere in the application, and would be made even more challenging, especially within the constraints of a relatively small satellite, by the need for avoidance of interference events with both GEO and other LEO constellations.
In summary, returning to the objectives outlined above, the claim of “high capacity” per satellite seems excessive in the absence of large, expensive terminals, while the “worldwide coverage” objective is subject to some question. Most importantly, it will likely be particularly challenging to realize the “low cost” and “ease of use” objectives for the user terminals, if the phased array antennas are very large. And the system itself won’t be particularly low cost, given that each satellite is expected to have a mass of 386kg: taking the Falcon Heavy launch capacity of 54,400kg to LEO and cost of $90M, it would take at least 32 Falcon Heavy launches (and perhaps far more given the challenge of fitting 140 satellites on each rocket), costing $2.8B or more, just to launch the 4425 satellites.
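The launch-cost estimate above follows from straightforward arithmetic, using only the figures quoted in the paragraph (386 kg per satellite, 54,400 kg Falcon Heavy capacity to LEO, $90M per launch, 4425 satellites):

```python
import math

sat_mass_kg = 386
fh_capacity_kg = 54_400
fh_cost_usd = 90e6
total_satellites = 4425

# Satellites per launch, limited by mass alone (volume would likely allow fewer)
sats_per_launch = fh_capacity_kg // sat_mass_kg        # 140

# Minimum number of launches and total launch cost
launches = math.ceil(total_satellites / sats_per_launch)   # 32
total_launch_cost = launches * fh_cost_usd                 # $2.88B
```

Note that 140 satellites per rocket is a mass-only upper bound; if fairing volume or deployment constraints cut that number, the launch count and cost scale up proportionally.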
Instead one of the key objectives of the narrow, steerable beams in the SpaceX design appears to be to support an argument that the FCC should continue with its avoidance of in-line interference events policy, with the spectrum shared “using whatever means can be coordinated between the operators to avoid in-line interference events, or by resorting to band segmentation in the absence of any such coordination agreement.”
This continues SpaceX’s prior efforts to cause problems for OneWeb, because OneWeb provides continuous wide area coverage, rather than highly directional service to specified locations, and therefore (at least in the US, since it is unclear that the FCC’s rules could be enforced elsewhere) OneWeb may be forced to discontinue using part of the spectrum band (and thereby lose half of its capacity) during in-line events.
OneWeb is reported to be continuing to make progress in securing investors for its system, and it would be unsurprising if Elon Musk continues to bear a grudge against a space industry rival. But given the design issues outlined above, and the many other more pressing problems that SpaceX faces in catching up with its current backlog of satellite launches, it is rather more doubtful whether SpaceX really has a system design and business plan that would support a multi-billion dollar investment in a new satellite constellation.
Now that Trump has won the White House, the opportunity for Globalstar to secure approval for its Terrestrial Low Power Service (TLPS), first proposed four years ago, has finally disappeared. Instead of a 22MHz WiFi Channel 14, that was supposed to have access to a “massive and immediate ecosystem” (an assertion that was challenged by opponents), Globalstar is now asking for a low power terrestrial authorization in only its 11.5MHz of licensed spectrum.
That takes us back essentially to the compromise that Jay Monroe rejected in summer 2015, apparently because he didn’t believe that it would be possible to monetize the spectrum for low power LTE. However, with the FCC still keen to allow Iridium to share more of the L-band MSS spectrum for NEXT, and even Google supporting the concept of Globalstar using only its licensed spectrum for terrestrial operations, an approval seems very plausible in the near term, albeit with a further comment period required on the proposed license modification, as Globalstar acknowledges in its ex parte letter.
UPDATE (11/11): This email, produced earlier in the year by the FCC in response to a FOIA request, gives some further insight into the key June 2015 meeting with Globalstar that I referred to in my post. With its reference to “the conditions for operation in Channels 12 and 13” and changes to “out-of-band emission levels in the MSS licensed spectrum” it seems clear that FCC staff were contemplating operation by unlicensed users right up to the 2483.5MHz boundary at least, presumably in conjunction with some reciprocity for Globalstar to operate below 2483.5MHz. Thus the deal proposed by FCC staff (although not necessarily validated with Commissioners’ offices) and rejected by Globalstar appears to have been somewhat different from this latest proposal from Globalstar (and perhaps more similar to the Public Knowledge proposals of shared use that came to the fore later in 2015). However, it seems hard to argue that the deal on the table in summer 2015 wouldn’t have been more favorable to Globalstar (due to the ability to actually offer a full 22MHz TLPS WiFi channel), if approved by Commissioners, than Globalstar’s latest proposal.
So the question now becomes, is there value in a non-standard 10MHz TDLTE channel, which is restricted to operate only at low power? Back in June 2015, I noted that there clearly would be some value for standard high power operation, but the question is a very different one for a low power license. After all, even Jay didn’t believe this type of authorization would have meaningful value last year.
Of course, it’s only to be expected that lazy analysts will cite the Sprint leaseback deal, which supposedly represented a huge increase in the value of 2.5GHz spectrum (though in practice this deal included cherry-picked licenses for owned spectrum in top markets, and the increase in value was actually quite modest). And they will also presumably overlook the impact of the power restrictions and lack of ecosystem.
What is really critical is whether Globalstar could use such an approval to raise further funds before it runs out of money next year. Globalstar’s most recent Q3 10-Q admitted that “we will draw all or substantially all of the remaining amounts available under the August 2015 Terrapin Agreement to achieve compliance with certain financial covenants in our Facility Agreement for the measurement period ending December 31, 2016 and to pay our debt service obligations.”
In other words, Globalstar does not have the money to pay its interest and debt payments in June 2017. And with an imminent Terrapin drawdown of over $30M in December, Globalstar really needs an immediate approval to get its share price up to a level where Terrapin won’t be swamping the market with share sales next month. So how will the market react to the prospects of a limited authorization, and will investors be willing to put up $100M+ just to meet Globalstar’s obligations under the COFACE agreement in 2017?
It’s important to note that the biannual debt repayments jump further in December 2017 and Globalstar will not be able to extend the period in which it makes cure payments beyond December 2017 unless “the 8% New Notes have been irrevocably redeemed in full (and no obligations or amounts are outstanding in connection therewith) on or prior to 30 June 2017”. Thus it’s critical that the financing situation is resolved through a major cash injection in the first half of 2017. As a result, it looks like we should find out pretty soon whether this compromise is sufficient for Thermo (or more likely others) to continue funding Globalstar.
Back in November 2014, I published my analysis of what was happening in the AWS-3 spectrum auction to scorn from other analysts, who apparently couldn’t believe that Charlie Ergen would bid through multiple entities to push up the price of paired spectrum. Now we’re seeing relatively little speculation about who is doing what in the incentive auction (other than an apparently mystifying consensus that it will take until at least the end of September to complete Stage 1), so I thought it would be useful to give my views about what is happening.
The most important factor to observe in analyzing the auction is that overall demand relative to the amount of spectrum available (calculated as first round bidding units placed divided by total available supply measured in bidding units) has been considerably lower than in previous large auctions (AWS-1, 700MHz) and far short of the aggressive bidding seen in the AWS-3 auction.
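This demand metric is simple enough to express directly (a minimal sketch; the example figures are the Stage 1 totals cited in the 8/30 update later in this post, not first-round data):

```python
# Demand relative to supply, as described above: bidding units placed
# divided by the total available supply measured in bidding units.
def demand_ratio(bidding_units_placed: float, supply_units: float) -> float:
    return bidding_units_placed / supply_units

# Example: 2.096M units placed against 2.177M units of supply gives a
# ratio below 1.0, i.e. demand has fallen short of available supply.
print(round(demand_ratio(2.096e6, 2.177e6), 3))
```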
That’s attributable partly to the absence of Social Capital, but much more to the 100MHz of spectrum on offer, compared to the likelihood that none of the five remaining potential national bidders (Verizon, AT&T, T-Mobile, DISH and Comcast) needs more than about 30MHz on a national basis.
What’s become clear so far over the course of the auction is that most license areas (Partial Economic Areas) are not attracting much excess demand, apart from the top PEAs (namely New York, Los Angeles and Chicago) in the first few rounds. I said before the auction that DISH’s best strategy would probably be to bid for a large amount of spectrum in a handful of top markets, in order to drive up the price, and that appears to be exactly what happened.
However, it now appears we are very close to reaching the end of Stage 1, after excess eligibility dropped dramatically (by ~44% in terms of bidding units) in Round 24. In fact a bidder dropped 2 blocks in New York and 3 blocks in Los Angeles, without moving this eligibility elsewhere, somewhat similar to what happened on Friday, when one or more bidders dropped 5 blocks in Chicago, 3 blocks in New York and 1 block in Los Angeles during Round 20.
However, a key difference is that a significant fraction of the bidding eligibility that moved out of NY/LA/Chicago during Round 20, ended up being reallocated to other second and third tier markets, whereas in Round 24, total eligibility dropped by more than the reduction in eligibility in New York and Los Angeles. It is natural that a bidder such as T-Mobile (or Comcast) would want licenses elsewhere in the country if the top markets became too expensive, whereas if DISH’s objective is simply to push up the price, then DISH wouldn’t necessarily want to bid elsewhere and end up owning second and third tier markets.
This suggests that DISH has been reducing its exposure in the top three markets, in order to prevent itself from becoming stranded with too much exposure there. My guess is that DISH exited completely from Chicago in Round 20 and is now reducing exposure in New York and Los Angeles after bidding initially for a full complement of licenses there (i.e. 10 blocks in New York and Chicago and 5 blocks in Los Angeles).
If DISH is now down to about 8 blocks in New York and only 2 blocks in Los Angeles, then its maximum current exposure (if all other bidders dropped out) would be $4.52B, keeping DISH’s exposure under what is probably a roughly $5B budget. Of course DISH could potentially drop out of Los Angeles completely and let others fight it out (for the limited allocation of 5 blocks), if its objective is simply to maximize the end price, but this may not be possible in New York, because there are 10 license blocks available, which could give Verizon, AT&T, T-Mobile and Comcast enough to share between them.
Regardless, with the price increasing by 10% in each round, the price per MHzPOP in New York and Los Angeles would exceed that in the AWS-3 auction before the end of this week, implying that a resolution has to be reached very soon. If DISH is the one to exit, then it looks like Ergen will not be reallocating eligibility elsewhere, and DISH’s current eligibility (256,000 bidding units if it is bidding on 8 blocks in New York and 2 in Los Angeles) is likely higher than the excess eligibility total of all the remaining bidders combined (~182,000 bidding units at the end of Round 24 if all the available licenses were sold). This implies that a rapid end to Stage 1 of the auction is now likely, perhaps even this week and almost certainly before the end of next week, with total proceeds in the region of $30B.
Of course we will then need to go back to the next round of the reverse auction, but it looks plausible that convergence may be achieved at roughly $35B-$40B, potentially with as much as 80-90MHz sold (i.e. an average price of ~$1.50/MHzPOP). If DISH is forced out in Stage 1, then prices in key markets would probably not go much higher in future rounds of the forward auction, so the main question will be how quickly the reverse auction payments decline and whether this takes 1, 2 or 3 more rounds.
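The implied ~$1.50/MHzPOP average can be roughly checked as follows (a sketch assuming ~316M covered POPs, which is my assumption rather than an auction statistic):

```python
# Rough consistency check: total proceeds = MHz sold x POPs x price per MHzPOP.
POPS = 316e6  # assumed US covered population, not an official auction figure

def proceeds_usd(mhz_sold: float, price_per_mhzpop: float) -> float:
    return mhz_sold * POPS * price_per_mhzpop

# At $1.50/MHzPOP, 80-90MHz sold implies roughly $38B-$43B in proceeds,
# broadly consistent with the $35B-$40B convergence range suggested above.
for mhz in (80, 90):
    print(mhz, round(proceeds_usd(mhz, 1.50) / 1e9, 1))
```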
Also, based on the bidding patterns to date, it seems likely that Comcast may well emerge from the auction with a significant national footprint of roughly 20MHz of spectrum, potentially spending $7B-$10B. In addition, unless the forward auction drops to only 70MHz being sold, all four national bidders could largely achieve their goals, spending fairly similar amounts except in New York and Los Angeles, where one or two of these players are likely to miss out. In those circumstances, it will be interesting to see who would feel the need to pay Ergen’s asking price of at least $1.50/MHzPOP (and quite possibly a lot more) for his AWS-3 and AWS-4 spectrum licenses.
UPDATE (8/30): Bidding levels in New York and Los Angeles dropped dramatically in Round 25 (to 10 and 8 blocks respectively), with total bidding units placed (2.096M) now below the supply of licenses (2.177M) in Stage 1. This very likely means that DISH has given up and Stage 1 will close this week at an even lower price of ~$25B, with convergence of the forward and reverse auction values probably not achieved until the $30B-$35B range. This lower level of bidding activity increases the probability that 4 stages will now be required, with only 70MHz being sold in the forward auction at the end of the day.
It’s been interesting to see the various reactions to today’s announcement from the FCC that Stage 1 of the Reverse Auction concluded with a total clearing cost of $86.4B (apparently excluding nearly $2B for the $1.75B relocation fund and other auction costs).
Most opinions, including my own, were that this amount is laughable in view of how much wireless operators have available to spend on buying spectrum. Some have suggested this means that broadcasters are pricing themselves out of the auction by asking for an excessive amount of money. But the reality is that the FCC set the initial prices (of up to $1B per station) and all broadcasters had to decide was whether or not to participate and if so, at what point to drop out.
Importantly, if the FCC had no excess supply of TV stations willing to offer their spectrum in the auction, then it was obligated to freeze the bids at the opening price. It seems very unlikely that if a broadcaster was willing to participate at an opening bid of say $900M (in New York) then it would decide to drop out at $800M or even $500M. And notably the total opening bids if the FCC moved every single station off-air would be only $342B.
So even though the FCC has described broadcaster participation in the auction as “strong”, it seems that this statement may be code for “somewhat disappointing” because it has proved impossible to obtain sufficient participation to lower the opening bids in a number of key markets, if the full 126MHz target set by the FCC is to be cleared.
Of course the FCC would have been criticized if it had set a lower initial clearance target and it subsequently became evident that sufficient participation existed to reach the maximum. However, it now seems plausible that Round 1 of the forward auction could go nowhere, because there is little reason for participants to reveal their bidding strategies if it is essentially impossible for the clearing costs to be covered. That will probably also lead to criticism of the FCC for miscalculating the level of demand for spectrum, and certainly broadcasters will be highlighting that they apparently value spectrum more highly than the wireless carriers.
As a result, we are likely to see multiple rounds of the reverse auction, in which the clearing target is gradually reduced, until a more reasonable level of clearing costs (perhaps $30B or so) is reached. Although we could see quite a sharp reduction in clearing costs in Round 2 once more markets are unfrozen, it may need as many as 3 more rounds, with 84MHz cleared (representing 70MHz of spectrum to be auctioned), assuming the FCC incrementally reduces the target from 100MHz auctioned to 90MHz to 80MHz to 70MHz. At that point DISH could have even more reason to bid up the prices aggressively, because less spectrum will be available to its competitors, especially T-Mobile, so we might actually end up with the final forward auction bids exceeding the clearing costs by $10B+.
But for now, speculation as to which broadcasters declined to participate is likely to intensify. My suspicion is that fewer of the small and non-commercial broadcasters than expected decided to participate. After all, as one station in Pennsylvania told the WSJ back in January, “it won’t consider going off the air…because it would lose its PBS affiliation and go against the station’s stated mission of serving the public”. That would mean more of the reverse auction proceeds potentially going to commercial ventures, especially those that were bought up by investment firms with the explicit aim of selling their licenses.
Moreover, it may even be reasonable to guess at some of the markets which may have been frozen at the opening bids: for example, it seems likely that this must include some of the biggest cities, such as New York or Chicago, for such a high total clearing cost to have been reached. No doubt investors will be contemplating what that might mean for those companies that own broadcast licenses in these areas, especially if they have indicated their willingness to participate.
As I predicted last week, TLPS missed its chance for approval on April 22, despite Jay Monroe being convinced that it was in the bag when he presented at the Burkenroad conference earlier that day. He presumably had been assured of that by Globalstar’s General Counsel, Barbee Ponder, who thought they had answered all the FCC’s questions in late March and seemingly didn’t bother to follow up after that point.
Now today we have seen an experimental license filing from Microsoft to test TLPS in Redmond, WA. Microsoft’s application states:
“Microsoft will test terrestrial operations in the 2473-2483.5 MHz unlicensed band and the adjacent 2483.5-2500 MHz band, consistent with Globalstar Inc.s proposal to operate a terrestrial low-power service on these frequencies nationwide (see IB docket no. 13-213). Microsoft seeks to quantify the affect [sic] of such operations on the performance and reliability of unlicensed operations in the 2.4 GHz ISM band.”
The application also includes the incidental admission that Gerst is correct that the Ruckus APs have been modified (by removal of coexistence filters) from the approved versions (the testing will include “the use of an intentional radiator in the 2473-2483.5 MHz unlicensed band that has not received an equipment authorization as ordinarily required under 47 C.F.R. § 15.201”), although it should be noted that Microsoft plans to use different APs from those in Globalstar’s own tests, including a consumer model which was one of Microsoft’s primary concerns.
The duration of the experimental license is requested to be 6 months, from May 23 to November 23, suggesting that we may not see results until the fall. This could perhaps permit FCC consideration of the results after the November election if Microsoft identified no problems whatsoever (or if the FCC sets a hard deadline for further testing, though as noted below Bloomberg is reporting that the initial authorization will last at least a year), but more likely it will set the scene for additional back and forth between Globalstar and its opponents in the period before the next FCC Chairman gets his or her feet under the desk in spring 2017.
UPDATE (5/13): Despite Microsoft’s experimental application, Globalstar’s TLPS proposal has finally made it onto the FCC’s circulation list this afternoon. That raises the question of whether Microsoft’s application was made with Globalstar’s cooperation (as I had assumed) or if Microsoft anticipated the issuance of an order that all sides acknowledged would require more testing and simply jumped the gun in preparing to conduct its own testing after that point (which now seems the most plausible explanation).
So now the focus will shift to what this order contains. It seems to basically be taken for granted that there will be increased sharing of L-band spectrum with Iridium (though that would come in a separate parallel ruling by the International Bureau on delegated authority) and that additional power limits will be imposed as an interim measure, probably at a 200mW level. Bloomberg is also reporting that there will be constraints on the number of APs that may be deployed, with a limit of 825 in the first year, and “the FCC will assess whether they cause interference to other services”. However, prior to the rejected deal last summer the FCC also contemplated changes to the OOBE restrictions that would permit increased use of Channels 12 and 13 by terrestrial users, and it will be interesting to see if these changes are still present, or if they have been modified, perhaps due to concerns about possible impacts on Bluetooth LE users in the upper part of the unlicensed spectrum.