UPDATED Feb 5, 2017
There’s been a lot of recent news about Chinese investments in satellite companies, including the planned takeover of Spacecom, which is now being renegotiated (and probably abandoned) after the loss of Amos-6 in September’s Falcon 9 failure, and the Global Eagle joint venture for inflight connectivity.
There were also rumors that Avanti could be sold to a Chinese group, which again came to nothing, with Avanti’s existing bondholders ending up having to fund the company instead in December 2016. The latest of these vanishing offers was a purported $200M bid from a Chinese company, China Trends, for Thuraya in mid-January 2017, which Thuraya promptly dismissed, saying it had never had discussions of any kind with China Trends.
Back in July Inmarsat was also reported to have approached Avanti, but then Inmarsat declared it had “no intention to make an offer for Avanti.” I had guessed that Inmarsat had done some sort of deal with Avanti when the Artemis L/S/Ka-band satellite was relocated to 123E, into a slot previously used by Inmarsat for the ACeS Garuda-1 L-band satellite (as Avanti’s presentation at an event in October 2016 confirmed).
However, I’m now told that the Indonesian government reclaimed the rights to this slot after Garuda-1 was de-orbited, and is attempting to use the Artemis satellite to improve its own claim to this vacant slot before these rights expire. I also understand that with Artemis almost out of fuel, various parties were very concerned that the relocation would not even work and the Artemis satellite could have been left to drift along the geostationary arc, an outcome which thankfully has been avoided.
The action by the Indonesian government seems to hint at a continued desire to control its own MSS satellite, which could come in the shape of the long rumored purchase of SkyTerra-2 L-band satellite for Indonesian government use, similar to the MEXSAT program in Mexico. If that is the case, then presumably the Indonesians would also need to procure a ground segment, similar to the recent $69M contract secured by EchoStar in Asia (although that deal is for S-band not L-band).
Meanwhile Inmarsat still appears to be hoping to secure a deal to lease the entire payload of the 4th GX satellite to the Chinese government, which was originally expected back in October 2015, when the Chinese president visited Inmarsat’s offices. That contract has still not been signed, apparently because the Chinese side tried to negotiate Inmarsat’s price down after the visit. Although Inmarsat now seems to be hinting to investors that the I5F4 satellite will be launched into the Atlantic Ocean Region for incremental aeronautical capacity, last fall Inmarsat was apparently still very confident that a deal could be completed in the first half of 2017 once the I5F4 satellite was launched.
So it remains to be seen whether Inmarsat will be any more successful than other satellite operators in securing a large deal with China or whether, just like many others, Inmarsat’s deal will vanish into thin air. China already launched its own Tiantong-1 S-band satellite in August 2016, as part of the same One Belt One Road effort that Inmarsat was hoping to participate in with its GX satellite, and a smartphone designed for Tiantong-1 “will retail from around 10,000 yuan ($1,480), with communication fees starting from around 1 yuan a minute — a tenth of the price charged by Inmarsat.” Thus Inmarsat potentially faces growing pressure on its L-band revenues in China, and must hope that it can secure some offsetting growth in Ka-band.
Although there have been plenty of news articles describing the proposed 4000 satellite constellation that SpaceX filed with the FCC last week, to date there has been no analysis of how technically plausible this proposal actually is. That is perhaps unsurprising because the Technical and Legal Narratives included with the submission omit or obscure many of the most salient points needed to analyze the system and determine how realistic the claims made in SpaceX’s Legal Narrative actually are.
In particular, SpaceX claims that it has “designed its system to achieve the following objectives”:
High capacity: Each satellite in the SpaceX System provides aggregate downlink capacity to users ranging from 17 to 23 Gbps, depending on the gain of the user terminal involved. Assuming an average of 20 Gbps, the 1600 satellites in the Initial Deployment would have a total aggregate capacity of 32 Tbps. SpaceX will periodically improve the satellites over the course of the multi-year deployment of the system, which may further increase capacity.
High adaptability: The system leverages phased array technology to dynamically steer a large pool of beams to focus capacity where it is needed. Optical inter-satellite links permit flexible routing of traffic on-orbit. Further, the constellation ensures that frequencies can be reused effectively across different satellites to enhance the flexibility and capacity and robustness of the overall system.
Broadband services: The system will be able to provide broadband service at speeds of up to 1 Gbps per end user. The system’s use of low-Earth orbits will allow it to target latencies of approximately 25-35 ms.
Worldwide coverage: With deployment of the first 800 satellites, the system will be able to provide U.S. and international broadband connectivity; when fully deployed, the system will add capacity and availability at the equator and poles for truly global coverage.
Low cost: SpaceX is designing the overall system from the ground up with cost effectiveness and reliability in mind, from the design and manufacturing of the space and ground-based elements, to the launch and deployment of the system using SpaceX launch services, development of the user terminals, and end-user subscription rates.
Ease of use: SpaceX’s phased-array user antenna design will allow for a low-profile user terminal that is easy to mount and operate on walls or roofs.
What is particularly interesting is that the application says nothing whatsoever about the size of the user terminal that will be needed for the system. One hint that the user terminals are likely to be large and expensive is that SpaceX assures the FCC that “[t]he earth stations used to communicate with the SpaceX System will operate with aperture sizes that enable narrow, highly-directional beams with strong sidelobe suppression”. More importantly, by analyzing the information on the satellite beams given at the end of the Schedule S, it is clear that the supposed user downlink capacity of 17-23Gbps per satellite assumes a very large user terminal antenna diameter, because there are only 8 Ku-band user downlink beams of 250MHz each per satellite, and thus a total of only 2GHz of user downlink spectrum per satellite.
In other words this calculation implies a link efficiency of somewhere between 8.5 and 11.5bps/Hz. For comparison, OneWeb has 4GHz of user downlink spectrum per satellite, and is estimated to achieve a forward link efficiency of 0.55bps/Hz with a 30cm antenna and up to 2.73bps/Hz with a 70cm antenna. Put another way, OneWeb is intending to operate with twice as much forward bandwidth as SpaceX but with only half as much forward capacity per satellite.
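For those who want to check the arithmetic, the implied efficiencies work out as follows (a quick sketch using only the figures quoted above):

```python
# Implied forward-link spectral efficiency from the SpaceX Schedule S figures:
# 8 Ku-band user downlink beams x 250 MHz = 2 GHz of user spectrum per satellite.
spacex_bw_ghz = 8 * 0.250
for capacity_gbps in (17, 23):
    print(f"{capacity_gbps} Gbps / {spacex_bw_ghz:.0f} GHz = "
          f"{capacity_gbps / spacex_bw_ghz:.1f} bps/Hz")

# OneWeb, for comparison: 4 GHz of user downlink spectrum but roughly half the
# forward capacity per satellite (~10 Gbps vs SpaceX's assumed 20 Gbps average),
# consistent with the 0.55-2.73 bps/Hz terminal-dependent estimates cited above.
oneweb_bw_ghz = 4.0
print(f"OneWeb: ~{20 / 2 / oneweb_bw_ghz:.1f} bps/Hz average")
```

The 8.5-11.5 bps/Hz range is what drives the conclusion that very large user antennas are being assumed.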
That’s because OneWeb is intending to serve small, low cost (and therefore less efficient) terminals suitable for cellular backhaul in developing countries, or for internet access from homes and small businesses in rural areas. In contrast SpaceX’s system appears much more focused on large expensive terminals, similar to those used by O3b, which can cost $100K or more, and are used to connect large cruise ships or even an entire Pacific Island to the internet with hundreds of Mbps of capacity. While this has proved to be a good market for O3b, it is far from clear that this market could generate enough revenue to pay for a $10B SpaceX system. Even then, an assumption that SpaceX could achieve an average downlink efficiency of 10bps/Hz seems rather unrealistic.
SpaceX is able to gain some increased efficiency compared to OneWeb by using tightly focused steered gateway and user beams, which the Technical Narrative indicates will provide service in “a hexagonal cell with a diameter of 45 km” (Technical Annex 1-13). But there are only 8 user downlink beams per satellite, and so the total coverage area for each satellite is extremely limited. A 45km diameter hexagon has an area of 1315 sq km (or 1590 sq km for a 45km circle). Taking the more generous measure of 1590 sq km, over 5000 cells would be needed to cover the 8 million sq km area of the continental US. And SpaceX states (Technical Annex 2-7) that even in a fully deployed constellation, 340 satellites would be visible at an elevation angle of at least 40 degrees. So this implies that even when the constellation is fully deployed, only about half the land area of CONUS will be able to be served simultaneously. And in the initial deployment of 1600 satellites, potentially only about 30% of CONUS will have simultaneous service.
SpaceX could use beamhopping technology, similar to that planned by ViaSat for ViaSat-2 and ViaSat-3, to move the beams from one cell to another within a fraction of a second, but this is not mentioned anywhere in the application, and would be made even more challenging, especially within the constraints of a relatively small satellite, by the need for avoidance of interference events with both GEO and other LEO constellations.
In summary, returning to the objectives outlined above, the claim of “high capacity” per satellite seems excessive in the absence of large, expensive terminals, while the “worldwide coverage” objective is subject to some question. Most importantly, it will likely be particularly challenging to realize the “low cost” and “ease of use” objectives for the user terminals, if the phased array antennas are very large. And the system itself won’t be particularly low cost, given that each satellite is expected to have a mass of 386kg: taking the Falcon Heavy launch capacity of 54,400kg to LEO and cost of $90M, it would take at least 32 Falcon Heavy launches (and perhaps far more given the challenge of fitting 140 satellites on each rocket), costing $2.8B or more, just to launch the 4425 satellites.
Instead one of the key objectives of the narrow, steerable beams in the SpaceX design appears to be to support an argument that the FCC should continue with its avoidance of in-line interference events policy, with the spectrum shared “using whatever means can be coordinated between the operators to avoid in-line interference events, or by resorting to band segmentation in the absence of any such coordination agreement.”
This continues SpaceX’s prior efforts to cause problems for OneWeb, because OneWeb provides continuous wide area coverage, rather than highly directional service to specified locations, and therefore (at least in the US, since it is unclear that the FCC’s rules could be enforced elsewhere) OneWeb may be forced to discontinue using part of the spectrum band (and thereby lose half of its capacity) during in-line events.
OneWeb is reported to be continuing to make progress in securing investors for its system, and it would be unsurprising if Elon Musk continues to bear a grudge against a space industry rival. But given the design issues outlined above, and the many other more pressing problems that SpaceX faces in catching up with its current backlog of satellite launches, it is rather more doubtful whether SpaceX really has a system design and business plan that would support a multi-billion dollar investment in a new satellite constellation.
Now that Trump has won the White House, the opportunity for Globalstar to secure approval for its Terrestrial Low Power Service (TLPS), first proposed four years ago, has finally disappeared. Instead of a 22MHz WiFi Channel 14, which was supposed to have access to a “massive and immediate ecosystem” (an assertion that was challenged by opponents), Globalstar is now asking for a low power terrestrial authorization in only its 11.5MHz of licensed spectrum.
That takes us back essentially to the compromise that Jay Monroe rejected in summer 2015, apparently because he didn’t believe that it would be possible to monetize the spectrum for low power LTE. However, with the FCC still keen to allow Iridium to share more of the L-band MSS spectrum for NEXT, and even Google supporting the concept of Globalstar using only its licensed spectrum for terrestrial operations, an approval seems very plausible in the near term, albeit with a further comment period required on the proposed license modification, as Globalstar acknowledges in its ex parte letter.
UPDATE (11/11): This email, produced earlier in the year by the FCC in response to a FOIA request, gives some further insight into the key June 2015 meeting with Globalstar that I referred to in my post. With its reference to “the conditions for operation in Channels 12 and 13” and changes to “out-of-band emission levels in the MSS licensed spectrum” it seems clear that FCC staff were contemplating operation by unlicensed users right up to the 2483.5MHz boundary at least, presumably in conjunction with some reciprocity for Globalstar to operate below 2483.5MHz. Thus the deal proposed by FCC staff (although not necessarily validated with Commissioners’ offices) and rejected by Globalstar appears to have been somewhat different from this latest proposal from Globalstar (and perhaps more similar to the Public Knowledge proposals of shared use that came to the fore later in 2015). However, it seems hard to argue that the deal on the table in summer 2015 wouldn’t have been more favorable to Globalstar (due to the ability to actually offer a full 22MHz TLPS WiFi channel), if approved by Commissioners, than Globalstar’s latest proposal.
So the question now becomes, is there value in a non-standard 10MHz TDLTE channel, which is restricted to operate only at low power? Back in June 2015, I noted that there clearly would be some value for standard high power operation, but the question is a very different one for a low power license. After all, even Jay didn’t believe this type of authorization would have meaningful value last year.
Of course, it’s only to be expected that lazy analysts will cite the Sprint leaseback deal, which supposedly represented a huge increase in the value of 2.5GHz spectrum (though in practice this deal included cherry-picked licenses for owned spectrum in top markets, and the increase in value was actually quite modest). And they will also presumably overlook the impact of the power restrictions and lack of ecosystem.
What is really critical is whether Globalstar could use such an approval to raise further funds before it runs out of money next year. Globalstar’s most recent Q3 10-Q admitted that “we will draw all or substantially all of the remaining amounts available under the August 2015 Terrapin Agreement to achieve compliance with certain financial covenants in our Facility Agreement for the measurement period ending December 31, 2016 and to pay our debt service obligations.”
In other words, Globalstar does not have the money to pay its interest and debt payments in June 2017. And with an imminent Terrapin drawdown of over $30M in December, Globalstar really needs an immediate approval to get its share price up to a level where Terrapin won’t be swamping the market with share sales next month. So how will the market react to the prospects of a limited authorization, and will investors be willing to put up $100M+ just to meet Globalstar’s obligations under the COFACE agreement in 2017?
It’s important to note that the biannual debt repayments jump further in December 2017 and Globalstar will not be able to extend the period in which it makes cure payments beyond December 2017 unless “the 8% New Notes have been irrevocably redeemed in full (and no obligations or amounts are outstanding in connection therewith) on or prior to 30 June 2017”. Thus it’s critical that the financing situation is resolved through a major cash injection in the first half of 2017. As a result, it looks like we should find out pretty soon whether this compromise is sufficient for Thermo (or more likely others) to continue funding Globalstar.
I’ve been thinking a lot about the failure of Google Fiber and about whether it holds any wider lessons on Silicon Valley’s ability to compete effectively as an owner and builder of telecom networks, or indeed in other large-scale, capex-intensive businesses (such as cars).
One conclusion I’ve come to is that there may be a fundamental incompatibility between the planning horizon (and deployment capabilities) of Silicon Valley companies and what is needed to be a successful operator of national or multinational telecom networks (whether fiber, wireless or satellite). The image above is taken from Facebook’s so-called “Little Red Book” and summarizes pretty well what I’ve experienced living and working in Silicon Valley, namely that the prevailing attitude is “There is no point having a 5-year plan in this industry” and instead you should think just about what you will achieve in the next 6 months and where you want to be in 30 years.
In software that makes a lot of sense – you can iterate fast and solve problems incrementally, and scaling up (at least nowadays) is relatively easy if you can find and keep the customers. In contrast, building a telecom network (or a new car design) is at least a two or three year effort, and by the time you are fully rolled out in the market, it’s four or five years since you started. So when you start, you need to have a pretty good plan for what you’ll be delivering (and how it’s going to be operated at scale) five years down the road.
For an existing wireless operator or car company that planning and implementation is obviously helped by years of experience in operating networks or manufacturing facilities at scale. But a new entrant has to learn all of that from scratch. And it’s not like technology is transforming the job of deploying cell towers, trenching fiber or running a vehicle manufacturing line. Software might change the service that the end customer is buying, but it’s crazy to think that “if tech companies build cars and car companies hire developers, the former will win.”
Of course self-driving cars will drastically change what people do with vehicles in the future. But those vehicles still have to be made on a production line, efficiently and with high quality. Mobile has changed the world dramatically over the last 30 years, but AT&T, Deutsche Telekom, BT, etc. are still around and absorbed some of the most successful wireless startups.
Moreover, Silicon Valley companies simply don’t spend capex on anything like the scale of telcos or car companies. In 2015 Alphabet/Google’s total capex for all of its activities worldwide was $9.9B and Facebook’s capex was only $2.5B (surprisingly, at least to me, Amazon only spent $4.6B, though Apple spent $11.2B and anticipated spending $15B in 2016).
But the US wireless industry alone invested $32B in capex in 2015, which is more than Facebook, Google, Amazon and Apple put together, and that excludes the $40B+ spent on spectrum in the AWS-3 auction last year. In the car industry, GM and Ford each spent more than $7B on capex in 2015. So in round numbers, total wireless industry and car industry capex on a global basis are both of order $100B+ every year, a sum that simply can’t be matched by Silicon Valley.
So when Silicon Valley companies aren’t used to either planning for or spending tens of billions of dollars on multi-year infrastructure developments, why are people surprised when it turns out Google can’t support the investment needed to build a competitive national fiber network? (Indeed it’s not been widely reported, but I’m told that earlier this year Google’s board also turned down a $15B+ partnership with DISH to build a new national wireless network.) Or when it appears “The Apple dream car might not happen” and “Google’s Self-Driving Car Project Is Losing Out to Rivals“?
Instead it appears that we may be shifting towards a model where the leading Silicon Valley companies work on new technology development and “give away the blueprints…so that anyone from local governments to Internet service providers can construct a new way to get Internet signals into hard-to-reach places“. Similarly, Google could “enable [rather] than do” in the field of self-driving cars. Whether that will lead to these technologies being commercialized remains to be seen, but it does mean that Facebook and Google won’t have to change their existing ways of working or radically increase their capital expenditures.
Undoubtedly some other Silicon Valley companies will end up trying to build their own self-driving cars. But after Tesla’s (continuing) struggles to ramp up production, it seems more likely that most startups will end up partnering with or selling their technology to existing manufacturers instead. And similarly, in the telecom world, does anyone believe Google (or any other Silicon Valley company) is going to build a new national wireless broadband network that is competitive with AT&T, Verizon and Comcast?
It seems to me that about the best we could hope for is for Google to push forward the commercialization of new shared access/low cost frequency bands like 3.5GHz (e.g. as part of an MVNO deal with an existing operator) so that the wireless industry no longer has to spend as much on spectrum in the future and can deliver more data at lower cost.
However, that’s not necessarily all bad news. It seems almost quaint to look back a year or two at how wireless operators were reportedly “terrified” of Facebook and concerned about how Project Loon could “hand Google an effective monopoly over the Internet in developing countries.”
If Facebook and Google are now simply going to come up with clever technology to reduce network costs (rather than building rival networks) or even just act as a source of incremental demand for mobile data services, then that will be good for mobile operators. Those operators may just be “dumb pipes,” but realistically, despite Verizon’s (flailing) efforts, that’s pretty much all they could hope for anyway.
Back in November 2014, I published my analysis of what was happening in the AWS-3 spectrum auction to scorn from other analysts, who apparently couldn’t believe that Charlie Ergen would bid through multiple entities to push up the price of paired spectrum. Now we’re seeing relatively little speculation about who is doing what in the incentive auction (other than an apparently mystifying consensus that it will take until at least the end of September to complete Stage 1), so I thought it would be useful to give my views about what is happening.
The most important factor to observe in analyzing the auction is that overall demand relative to the amount of spectrum available (calculated as first round bidding units placed divided by total available supply measured in bidding units) has been considerably lower than in previous large auctions (AWS-1, 700MHz) and far short of the aggressive bidding seen in the AWS-3 auction.
That’s attributable partly to the absence of Social Capital, but much more to the fact that, of the five remaining potential national bidders (Verizon, AT&T, T-Mobile, DISH and Comcast), none is likely to need more than about 30MHz on a national basis, compared to the 100MHz of spectrum on offer.
What’s become clear so far over the course of the auction is that most license areas (Partial Economic Areas) are not attracting much excess demand, apart from the top PEAs (namely New York, Los Angeles and Chicago) in the first few rounds. I said before the auction that DISH’s best strategy would probably be to bid for a large amount of spectrum in a handful of top markets, in order to drive up the price, and that appears to be exactly what happened.
However, it now appears we are very close to reaching the end of Stage 1, after excess eligibility dropped dramatically (by ~44% in terms of bidding units) in Round 24. In fact a bidder dropped 2 blocks in New York and 3 blocks in Los Angeles, without moving this eligibility elsewhere, somewhat similar to what happened on Friday, when one or more bidders dropped 5 blocks in Chicago, 3 blocks in New York and 1 block in Los Angeles during Round 20.
However, a key difference is that a significant fraction of the bidding eligibility that moved out of NY/LA/Chicago during Round 20, ended up being reallocated to other second and third tier markets, whereas in Round 24, total eligibility dropped by more than the reduction in eligibility in New York and Los Angeles. It is natural that a bidder such as T-Mobile (or Comcast) would want licenses elsewhere in the country if the top markets became too expensive, whereas if DISH’s objective is simply to push up the price, then DISH wouldn’t necessarily want to bid elsewhere and end up owning second and third tier markets.
This suggests that DISH has been reducing its exposure in the top three markets, in order to prevent itself from becoming stranded with too much exposure there. My guess is that DISH exited completely from Chicago in Round 20 and is now reducing exposure in New York and Los Angeles after bidding initially for a full complement of licenses there (i.e. 10 blocks in New York and Chicago and 5 blocks in Los Angeles).
If DISH is now down to about 8 blocks in New York and only 2 blocks in Los Angeles, then its maximum current exposure (if all other bidders dropped out) would be $4.52B, keeping DISH’s exposure under what is probably a roughly $5B budget. Of course DISH could potentially drop out of Los Angeles completely and let others fight it out (for the limited allocation of 5 blocks), if its objective is simply to maximize the end price, but this may not be possible in New York, because there are 10 license blocks available, which could give Verizon, AT&T, T-Mobile and Comcast enough to share between them.
Regardless, with the price increasing by 10% in each round, the price per MHzPOP in New York and Los Angeles would exceed that in the AWS-3 auction before the end of this week, implying that a resolution has to be reached very soon. If DISH is the one to exit, then it looks like Ergen will not be reallocating eligibility elsewhere, and DISH’s current eligibility (256,000 bidding units if it is bidding on 8 blocks in New York and 2 in Los Angeles) is likely higher than the excess eligibility total of all the remaining bidders combined (~182,000 bidding units at the end of Round 24 if all the available licenses were sold). This implies that a rapid end to Stage 1 of the auction is now likely, perhaps even this week and almost certainly before the end of next week, with total proceeds in the region of $30B.
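The closing logic here is simple enough to state in a few lines (figures are my estimates from this post; the per-market bidding-unit weights are not reconstructed here):

```python
# Estimated bidding units, from the analysis above.
dish_eligibility = 256_000   # if DISH holds 8 blocks in NY and 2 in LA
others_excess = 182_000      # all remaining bidders combined, end of Round 24

# If DISH exits without reallocating its eligibility elsewhere, aggregate
# demand falls below supply and Stage 1 of the forward auction can close.
if dish_eligibility > others_excess:
    print("DISH's exit alone would eliminate all excess demand -> Stage 1 closes")
```

That is why a rapid end to Stage 1 follows from DISH withdrawing, rather than requiring other bidders to drop out as well.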
Of course we will then need to go back to the next round of the reverse auction, but it looks plausible that convergence may be achieved at roughly $35B-$40B, potentially with as much as 80-90MHz sold (i.e. an average price of ~$1.50/MHzPOP). If DISH is forced out in Stage 1, then prices in key markets would probably not go much higher in future rounds of the forward auction, so the main question will be how quickly the reverse auction payments decline and whether this takes 1, 2 or 3 more rounds.
Also, based on the bidding patterns to date, it seems likely that Comcast may well emerge from the auction with a significant national footprint of roughly 20MHz of spectrum, potentially spending $7B-$10B. In addition, unless the forward auction drops to only 70MHz being sold, all four national bidders could largely achieve their goals, spending fairly similar amounts except in New York and Los Angeles, where one or two of these players are likely to miss out. In those circumstances, it will be interesting to see who would feel the need to pay Ergen’s asking price of at least $1.50/MHzPOP (and quite possibly a lot more) for his AWS-3 and AWS-4 spectrum licenses.
UPDATE (8/30): Bidding levels in New York and Los Angeles dropped dramatically in Round 25 (to 10 and 8 blocks respectively), with total bidding units placed (2.096M) now below the supply of licenses (2.177M) in Stage 1. This very likely means that DISH has given up and Stage 1 will close this week at an even lower price of ~$25B, with convergence of the forward and reverse auction values probably not achieved until the $30B-$35B range. This lower level of bidding activity increases the probability that 4 stages will now be required, with only 70MHz being sold in the forward auction at the end of the day.
It’s been interesting to hear the feedback on my new ViaSat profile that I published last weekend, especially with regard to ViaSat’s supposed technical advantages over the HTS competition. As I noted in the report, ViaSat has apparently been struggling with its beamhopping technology, reducing the capacity of its upcoming ViaSat-2 satellite from an originally planned 350Gbps (i.e. 2.5 times the capacity of ViaSat-1) to around 300Gbps at the moment.
However, even that reduced target may require extra spectrum to achieve, with ViaSat asking the FCC in late May for permission to use 600MHz of additional spectrum in the LMDS band. Fundamentally this appears to be due to the reduced efficiency that ViaSat now expects to achieve relative to that set out in its original beamhopping patent. The patent suggested that for a ViaSat-2 design (with only 1.5GHz of spectrum, rather than the 2.1GHz ViaSat now intends to use), the efficiency could be as high as 3bps/Hz on the forward link (i.e. 225Gbps) and 1.8bps/Hz on the return link (i.e. 135Gbps) for a total of 360Gbps of capacity. But at Satellite 2016, ViaSat’s CEO indicated that an efficiency (apparently averaged between the forward and return links) of only 1.5bps/Hz should be expected, no better than existing HTS Ka-band satellites and nearly 40% lower than ViaSat originally estimated.
A notable side-effect of this additional spectrum utilization (even assuming approval is granted by the FCC) is that new terminals will be required, including replacement of both the antenna and the modem for aircraft that want to make use of the extended coverage of ViaSat-2. That’s why American Airlines is waiting until the second half of 2017 for this new terminal to be developed, before it starts to install ViaSat’s connectivity on new aircraft.
While the FCC’s Spectrum Frontiers Order yesterday does contemplate continued use of the LMDS band for satellite gateways (though utilization by user terminals appears more difficult), it looks like other Ka-band providers intend to shift more of their future gateway operations up to the Q/V-band, rather than building hundreds of Ka-band gateways as ViaSat will need for its ViaSat-3 satellite. That decision could reduce the costs of competing ground segment deployments substantially, while retaining continuity for user links. Thus, as a result of the lower than expected beamhopping efficiency, it remains to be seen whether ViaSat’s technology will now be meaningfully superior to that of competitors, notably SES and Inmarsat who both appear poised to invest heavily in Ka-band.
SES gave a presentation at the Global Connected Aircraft Summit last month, depicting its plans to build three new Ka-band HTS satellites for global coverage as shown above, and the first of these satellites could be ordered very shortly, because as SES pointed out in its recent Investor Day presentation, it has EUR120M of uncommitted capex this year and nearly EUR1.5B available in the period through 2020.
Meanwhile Inmarsat is hard at work designing a three satellite Inmarsat-7 Ka-band system, with in excess of 100Gbps of capacity per satellite. Although the results of the Brexit referendum may complicate its efforts, Inmarsat is hoping to secure a substantial European Commission investment later this year, which would replace the four proposed Ka-band satellites that Eutelsat had previously contemplated building using Juncker fund money.
So now it appears we face (at least) a three-way fight for the global Ka-band market, with deep-pocketed rivals sensing that ViaSat may not have all the technological advantages it had expected, and Hughes poised to secure at least a 6 month (and possibly as much as a 9-12 month) lead to market for Jupiter-2 compared to ViaSat-2. Victory for ViaSat is far from certain, and perhaps even doubtful, but beyond 2020 Ka-band appears very likely to be the dominant source of GEO HTS capacity.
It’s been interesting to see the various reactions to today’s announcement from the FCC that Stage 1 of the Reverse Auction concluded with a total clearing cost of $86.4B (apparently excluding nearly $2B for the $1.75B relocation fund and other auction costs).
Most commentators, myself included, found this amount laughable in view of how much wireless operators have available to spend on buying spectrum. Some have suggested this means that broadcasters are pricing themselves out of the auction by asking for an excessive amount of money. But the reality is that the FCC set the initial prices (of up to $1B per station), and all broadcasters had to decide was whether or not to participate and, if so, at what point to drop out.
Importantly, if the FCC had no excess supply of TV stations willing to offer their spectrum in the auction, then it was obligated to freeze the bids at the opening price. It seems very unlikely that a broadcaster willing to participate at an opening bid of, say, $900M (in New York) would then decide to drop out at $800M or even $500M. And notably, the total of the opening bids, if the FCC moved every single station off-air, would be only $342B.
So even though the FCC has described broadcaster participation in the auction as “strong”, it seems that this statement may be code for “somewhat disappointing” because it has proved impossible to obtain sufficient participation to lower the opening bids in a number of key markets, if the full 126MHz target set by the FCC is to be cleared.
Of course the FCC would have been criticized if it had set a lower initial clearance target and it subsequently became evident that sufficient participation existed to reach the maximum. However, it now seems plausible that Round 1 of the forward auction could go nowhere, because there is little reason for participants to reveal their bidding strategies if it is essentially impossible for the clearing costs to be covered. That will probably also lead to criticism of the FCC for miscalculating the level of demand for spectrum, and certainly broadcasters will be highlighting that they apparently value spectrum more highly than the wireless carriers.
As a result, we are likely to see multiple rounds of the reverse auction, in which the clearing target is gradually reduced, until a more reasonable level of clearing costs (perhaps $30B or so) is reached. Although we could see quite a sharp reduction in clearing costs in Round 2 once more markets are unfrozen, it may need as many as 3 more rounds, with 84MHz cleared (representing 70MHz of spectrum to be auctioned), assuming the FCC incrementally reduces the target from 100MHz auctioned to 90MHz to 80MHz to 70MHz. At that point DISH could have even more reason to bid up the prices aggressively, because less spectrum will be available to its competitors, especially T-Mobile, so we might actually end up with the final forward auction bids exceeding the clearing costs by $10B+.
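The ladder of clearing targets implied here can be sketched in a few lines. The 126MHz and 84MHz clearing targets (yielding 100MHz and 70MHz of licensed spectrum) are discussed above; the intermediate 114MHz and 108MHz targets are taken from the FCC’s published band-plan scenarios, so treat those as my gloss rather than anything in this post. The difference in each case goes to guard bands and the duplex gap:

```python
# Clearing target (MHz of UHF spectrum cleared) vs. spectrum actually
# licensed to wireless carriers at each stage of the incentive auction.
clearing_ladder = [
    (126, 100),  # Stage 1: the maximum target, apparently frozen in key markets
    (114, 90),
    (108, 80),
    (84, 70),    # the level at which ~$30B of clearing costs looks plausible
]

for cleared_mhz, licensed_mhz in clearing_ladder:
    overhead = cleared_mhz - licensed_mhz
    print(f"{cleared_mhz}MHz cleared -> {licensed_mhz}MHz auctioned "
          f"({overhead}MHz guard bands/duplex gap)")
```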
But for now, speculation as to which broadcasters declined to participate is likely to intensify. My suspicion is that fewer small and non-commercial broadcasters than expected decided to participate. After all, as one station in Pennsylvania told the WSJ back in January, “it won’t consider going off the air…because it would lose its PBS affiliation and go against the station’s stated mission of serving the public”. That would mean more of the reverse auction proceeds potentially going to commercial ventures, especially those that were bought up by investment firms with the explicit aim of selling their licenses.
Moreover, it may even be reasonable to guess at some of the markets which may have been frozen at the opening bids: for example, it seems likely that this must include some of the biggest cities, such as New York or Chicago, for such a high total clearing cost to have been reached. No doubt investors will be contemplating what that might mean for those companies that own broadcast licenses in these areas, especially if they have indicated their willingness to participate.
As I predicted last week, TLPS missed its chance for approval on April 22, despite Jay Monroe being convinced that it was in the bag when he presented at the Burkenroad conference earlier that day. He presumably had been assured of that by Globalstar’s General Counsel, Barbee Ponder, who thought they had answered all the FCC’s questions in late March and seemingly didn’t bother to follow up after that point.
Now today we have seen an experimental license filing from Microsoft to test TLPS in Redmond, WA. Microsoft’s application states:
“Microsoft will test terrestrial operations in the 2473-2483.5 MHz unlicensed band and the adjacent 2483.5-2500 MHz band, consistent with Globalstar Inc.s proposal to operate a terrestrial low-power service on these frequencies nationwide (see IB docket no. 13-213). Microsoft seeks to quantify the affect [sic] of such operations on the performance and reliability of unlicensed operations in the 2.4 GHz ISM band.”
The application also includes the incidental admission that Gerst is correct that the Ruckus APs have been modified (by removal of coexistence filters) from the approved versions: the testing will include “the use of an intentional radiator in the 2473-2483.5 MHz unlicensed band that has not received an equipment authorization as ordinarily required under 47 C.F.R. § 15.201”. It should be noted, however, that Microsoft plans to use different APs from those in Globalstar’s own tests, including a consumer model which was one of Microsoft’s primary concerns.
The duration of the experimental license is requested to be 6 months, from May 23 to November 23, suggesting that we may not see results until the fall. This could perhaps permit FCC consideration of the results after the November election if Microsoft identified no problems whatsoever (or if the FCC sets a hard deadline for further testing, though as noted below Bloomberg is reporting that the initial authorization will last at least a year), but more likely it will set the scene for additional back and forth between Globalstar and its opponents in the period before the next FCC Chairman gets his or her feet under the desk in spring 2017.
UPDATE (5/13): Despite Microsoft’s experimental application, Globalstar’s TLPS proposal has finally made it onto the FCC’s circulation list this afternoon. That raises the question of whether Microsoft’s application was made with Globalstar’s cooperation (as I had assumed) or if Microsoft anticipated the issuance of an order that all sides acknowledged would require more testing and simply jumped the gun in preparing to conduct its own testing after that point (which now seems the most plausible explanation).
So now the focus will shift to what this order contains. It seems to basically be taken for granted that there will be increased sharing of L-band spectrum with Iridium (though that would come in a separate parallel ruling by the International Bureau on delegated authority) and that additional power limits will be imposed as an interim measure, probably at a 200mW level. Bloomberg is also reporting that there will be constraints on the number of APs that may be deployed, with a limit of 825 in the first year, and “the FCC will assess whether they cause interference to other services”. However, prior to the rejected deal last summer the FCC also contemplated changes to the OOBE restrictions that would permit increased use of Channels 12 and 13 by terrestrial users, and it will be interesting to see if these changes are still present, or if they have been modified, perhaps due to concerns about possible impacts on Bluetooth LE users in the upper part of the unlicensed spectrum.
So now the Kerrisdale report has been released, along with a prebuttal from Citigroup, claiming that Kerrisdale is “Absolutely Not a Thesis Changer”. However, as I noted last week, the biggest issue in thinking about the future for DISH is likely its capital structure, which neither report addresses at all.
I also wish that both of them were better at math when they try to assess whether today’s wireless networks are operating at capacity (though perhaps it’s hardly surprising, when these calculations have been a recurring problem for New Street Research, CTIA, the FCC, Ofcom and even the ITU). Citi criticize Kerrisdale for considering New Albany, Ohio as a representative location, given its lower-than-average population density amongst US metropolitan areas (suggesting that 1000 people per sq km is more representative than 258 people per sq km), and also allege that Kerrisdale “ignores the variance in usage during the day” (suggesting that 40% of traffic needs to be accommodated in each of the two-hour morning and evening rush hours).
By making these adjustments, Citi claims that average utilization with a 25MHz downlink spectrum allocation would be 280% in the morning and evening peaks, compared to the 15% daily average estimated by Kerrisdale. Of course Citi exaggerate on the upside and Kerrisdale exaggerate on the downside.
The correct calculation for a “typical” situation should take the average number of subscribers per cellsite (144M subs on 48.6K cellsites for Verizon, according to Citi’s own report, or 2965 subs/site, compared to 1773 in Kerrisdale’s report and 7092 subs/site in Citi’s report) and the average busy hour ratio for mobile traffic (6.9% according to Cisco’s Feb 2016 VNI report which says the busy hour has 66% more traffic than average, although carriers typically build to around an 8% busy hour, perhaps 9%-10% in very peaky locations) and should also derate by the share of traffic carried on the downlink (around 87% in the US according to Sandvine), which neither report takes into consideration.
That would result in a daily downlink traffic of around 258GB per site (vs 177GB for Kerrisdale and 709GB for Citi) and a busy hour traffic of 17.8GB rather than the 142GB estimated by Citi. Then, using their own assumptions about capacity per site, each site would see a busy hour utilization of 35%, not 15% and certainly not 280%, suggesting that there is some headroom on most cellsites, just as you would expect, but that carriers will need to continue to upgrade their networks in years to come, to cope with future traffic growth, especially in peak locations.
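For what it’s worth, these figures can be reproduced with a few lines of arithmetic. Two caveats: the ~100MB/sub/day traffic figure is inferred from the Kerrisdale and Citi numbers quoted above (177GB over 1773 subs and 709GB over 7092 subs respectively), and the busy-hour capacity per site is backed out of Citi’s own 280% utilization claim on a 142GB busy hour, so treat both as assumptions rather than carrier data:

```python
# Re-derivation of the "typical" cellsite busy-hour utilization.
subs_per_site = 144e6 / 48.6e3   # Verizon: ~2963 subs/site (Citi's data)
daily_gb_per_sub = 0.1           # ~100MB/sub/day (inferred, see lead-in)
downlink_share = 0.87            # Sandvine: ~87% of US traffic is downlink
busy_hour_share = 0.069          # Cisco VNI: busy hour = 6.9% of daily traffic

daily_downlink_gb = subs_per_site * daily_gb_per_sub * downlink_share
busy_hour_gb = daily_downlink_gb * busy_hour_share
site_capacity_gb = 142 / 2.80    # backed out of Citi's 280% figure (assumption)

print(round(daily_downlink_gb))                  # ~258 GB/site/day downlink
print(round(busy_hour_gb, 1))                    # ~17.8 GB in the busy hour
print(f"{busy_hour_gb / site_capacity_gb:.0%}")  # ~35% busy-hour utilization
```

The point is simply that the answer lands between the two reports’ extremes: some headroom on most cellsites, but not enough to defer network upgrades indefinitely.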
That brings us back to my original thesis: Verizon might find DISH’s spectrum useful, but its network is not in danger of imminent collapse without it. Instead Verizon might prioritize increased use of small cells, sectorization and beamforming, and treat buying spectrum as the last thing to do, as Bob Azzi (former Sprint CTO) suggested on today’s Tegus call that I participated in.
This is a poker game, and if Ergen can prolong his license term (by building out fixed wireless broadband, as all of the call participants agreed would be logical) then he may be able to wait for Verizon to come to the table. If DISH can push up the price of spectrum in the incentive auction and prevent LightSquared/Ligado from offering a midband alternative then Ergen will be in a stronger position. So it seems more logical to me to talk about where that money will come from over the next couple of years, and not whether DISH will be forced to sell at a discounted price or Verizon will be forced to pay whatever DISH demands.
It’s interesting to hear that Kerrisdale is apparently raising $100M to short DISH stock, after its previous attacks on Globalstar and Straight Path. As I said back in October 2014 when Kerrisdale mounted its attack on Globalstar, all spectrum (even that owned by Globalstar) does have value; it’s just a question of what someone will pay for it.
The spectrum bubble has clearly deflated considerably since late 2014 (despite DISH’s successful efforts to push up the price of the paired spectrum sold in the AWS-3 auction) with DISH’s share price roughly halving since that time, and other spectrum plays (like LightSquared/Ligado) also trading at much lower price levels.
This reduction in valuation clearly seems to be the result of lower anticipated prices in the upcoming incentive auction. Last week’s FCC announcement that the initial clearing target will be at the maximum level of 126MHz, allowing 100MHz to be auctioned in most of the country (other than close to the Mexican border), will reduce the initial reserve price from $1.25/MHzPOP to $0.875/MHzPOP (although that could rise if broadcasters bid too much in the reverse auction to give up their spectrum).
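One way to see that reserve-price arithmetic (my reading of the mechanics, not an FCC formula quoted here): the $1.25/MHzPOP figure corresponds to a 70MHz benchmark, so spreading the same total reserve over 100MHz of auctioned spectrum lowers the per-unit floor proportionately:

```python
# Reserve price scales inversely with the amount of spectrum auctioned,
# holding the total dollar reserve constant.
reserve_at_70mhz = 1.25                          # $/MHzPOP benchmark
reserve_at_100mhz = reserve_at_70mhz * 70 / 100  # same total over 100MHz
print(reserve_at_100mhz)                         # 0.875
```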
However, arguments that DISH’s share price will fall much further seem to rely on DISH being forced to sell its spectrum at an even more discounted price, due to impending buildout deadlines, competition (not least from Ligado, which secured a Public Notice from the FCC on its revised spectrum plans a couple of weeks ago) and very low prices in the incentive auction. It’s useful to look back at what has happened previously when operators have overpaid for spectrum, notably Verizon’s lower A and B block purchases in the 700MHz auction in 2008. In that case Verizon simply waited until someone (T-Mobile and AT&T respectively for those blocks) was prepared to pay the same as Verizon had paid (plus its cost of capital over the holding period).
If DISH is not under time or competitive pressure, then Ergen could also potentially wait to realize a reasonable price, at least in line with what DISH has already invested in its spectrum holdings (including the $10B spent on AWS-3 spectrum). Verizon might even decide to do a deal to buy spectrum from DISH sooner rather than later if it believes Ergen can wait indefinitely. And all of these problems can be addressed by spending money: bidding up the prices in the incentive auction this year, outbidding Ligado for the 1675-80MHz spectrum next year, and building out a fixed broadband network in the 1695-1710/1995-2020MHz band (as I suggested in November) to meet its FCC buildout deadlines.
Those three items could represent a sizable amount of money: perhaps $3B-$5B in the incentive auction (something that does not appear to be contemplated by most of the analysts covering the auction), $1B-$2B for 1675-80MHz and $2B-$3B for a fixed wireless broadband network, for a total of $6B-$10B over the next couple of years. And at first blush many would view that as another negative for DISH’s equity.
But the key issue here is to understand DISH’s capital structure and how the money flows around. DISH has raised its existing debt in the form of unsecured bonds at the DBS subsidiary, with no recourse to the parent entity which holds DISH spectrum assets. And the DBS bonds are not protected from being structurally subordinated to new bank debt at DBS (indeed part of the plan to fund a bid for T-Mobile last year was to raise $10B-$15B of new bank debt at DBS).
So if Ergen agrees with his kids that it’s crazy to be in the pay TV business long term, then it makes absolute sense to raise say $6B+ of bank debt at DBS, flow that up to the parent company to support its spectrum plans, and (depending on how quickly the pay TV business declines and whether a spectrum deal can be done in the interim) eventually file Chapter 11 for the DBS subsidiary (but not the DISH parent company). In fact raising that money sooner rather than later would make sense – DISH would have to convince the bankruptcy court that the subsidiary was not insolvent at the time of the transfer, unless that transfer happened several years before the filing. Note also that DISH has no need to spin off a spectrum holdco – the parent is exactly that, once it has extracted all of the equity value from the DBS subsidiary via this bank debt.
What does all this mean? All of the upside from any spectrum deal flows to DISH’s equity, but relatively little of the downside from no deal being struck: that downside is much more likely to be a problem for the DBS bonds. So if Kerrisdale’s plan is to short the equity, then it may face an asymmetric exposure risk. In fact, some people taking the other side of Kerrisdale’s trade might even decide to be long DISH equity and buy credit default swaps on the DBS bonds. That would create the possibility that they could win from a spectrum deal taking place, but also win if DISH can’t do a deal this year and has to raise money to wait out Verizon and/or AT&T.
In conclusion, remember what DISH said in response to rumors of the Kerrisdale report: “We will continue to manage the business for the long-term benefit of our shareholders as we have done over the last 35 years.” After all, Charlie Ergen’s a shareholder, with much of his wealth tied up in DISH stock, but as far as I know he’s not a DBS bondholder.