12.19.16

Me first…

Posted in Broadband, Regulatory, Spectrum at 10:46 am by timfarrar

Probably the most surprising thing about today's announcement that Softbank is investing $1B in OneWeb, as part of a $1.2B funding round, is the lack of a spoiler announcement from SpaceX. That happened on both previous occasions when OneWeb made a major announcement: in January 2015 (when OneWeb announced its initial agreements with Qualcomm and Virgin) and in June 2015 (when OneWeb announced its initial $500M equity round).

In fact, one of the more important fights going on behind the scenes relates to regulatory priority in terms of ITU filings, where SpaceX is some way behind. OneWeb is acknowledged to have the first filing for an NGSO Ku-band system, but it also needs access to the Ka-band for its gateway links. That led Telesat to request that the FCC deny OneWeb's petition for a US license, based on "Canadian ITU filings associated with Telesat's Ka-band NGSO system [that] date back to 2012 and January 6, 2015" whereas "the earliest ITU filing date priority for OneWeb is January 18, 2015." LeoSat also claimed in November 2016 that it had priority over OneWeb, based on "French ITU filings for LeoSat's Ka-band MCSAT-2 LEO-2 network [that] date back to November 25, 2014."

However, OneWeb now appears to have attempted something of an end run around these objections, acquiring from Thales Alenia Space the rights to the French MCSAT LEO network, which has an ITU advance publication date of April 2, 2013. That's particularly odd because LeoSat, which states specifically in its FCC application that it "will deploy the LeoSat System in conjunction with Thales Alenia Space," might now find TAS's own filings being used against it.

UPDATE (12/20): I’m told that the relevant ITU coordination dates for the different Ka-band NGSO proposals are as follows:
Telesat (Comstellation): December 20, 2012
LeoSat (MCSAT2 LEO2): November 25, 2014
OneWeb’s newly acquired MCSAT LEO filing: December 3, 2014
SpaceX: December 27, 2014
OneWeb’s original Ka-band filing: January 18, 2015.
That would imply that OneWeb has now jumped ahead of SpaceX at the ITU, but remains behind Telesat and LeoSat, although I’m sure there will be many arguments to come.
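As a sanity check, the priority queue implied by these dates can be reproduced by simply sorting them (a quick sketch; the dates are those reported above, not independently verified):

```python
from datetime import date

# ITU coordination dates for the Ka-band NGSO filings, as reported above
filings = {
    "Telesat (Comstellation)": date(2012, 12, 20),
    "LeoSat (MCSAT2 LEO2)": date(2014, 11, 25),
    "OneWeb (acquired MCSAT LEO)": date(2014, 12, 3),
    "SpaceX": date(2014, 12, 27),
    "OneWeb (original Ka-band)": date(2015, 1, 18),
}

# An earlier coordination date means a higher place in the ITU queue
priority_order = [name for name, _ in sorted(filings.items(), key=lambda kv: kv[1])]
for rank, name in enumerate(priority_order, 1):
    print(f"{rank}. {name} ({filings[name].isoformat()})")
```

Sorting confirms that the acquired MCSAT LEO filing moves OneWeb ahead of SpaceX, but still behind Telesat and LeoSat.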

All this fighting to be first in line at the ITU will also have to take into account the FCC’s attempt to clarify the rules for new NGSO systems in an NPRM released on Thursday, December 15. The FCC’s rules state that NGSO systems should share spectrum through the “avoidance of in-line interference events” and the NPRM proposed new language in an attempt to make this more explicit. However, this language is far from clear about whether the sharing of spectrum is required on a global basis or just in the US, specifically the key paragraph in the newly proposed §25.261 states:

(a) Scope. This section applies to NGSO FSS satellite systems that communicate with earth stations with directional antennas and that operate under a Commission license or grant of U.S. market access under this part in the 10.7-12.7 GHz (space-to-Earth), 12.75-13.25 GHz (Earth-to-space), 13.75-14.5 GHz (Earth-to-space), 17.8-18.6 GHz (space-to-Earth), 18.8-19.4 GHz (space-to-Earth), 19.6-20.2 GHz (space-to-Earth), 27.5-29.1 GHz (Earth-to-space), or 29.3-30 GHz (Earth-to-space) bands.

whereas the existing language states:

(a) Applicable NGSO FSS Bands. The coordination procedures in this section apply to non-Federal-Government NGSO FSS satellite networks operating in the following assigned frequency bands: The 28.6-29.1 GHz or 18.8-19.3 GHz frequency bands.

The pertinent question here, which is left unresolved by the proposed changes quoted above, is whether a "satellite network" consists of both an FCC-licensed satellite system and the earth station it is communicating with, and if so, whether both of these or just the satellite system itself must "operate under a Commission license or grant of U.S. market access" according to the new text. If it is the former, then the new rules will clearly apply only in the US (where the earth station is licensed by the FCC), whereas if it is the latter, then the rules could be taken to imply that any recipient of a satellite system license from the FCC in the current processing round may have to agree to comply with these sharing rules on a global basis.

It therefore seems that regulatory lawyers will have plenty of work for the next year arguing on behalf of their clients. However, OneWeb will have the money to move forward quickly and extend its lead over other NGSO systems, apart from O3b, which is currently building its next batch of 8 satellites. It remains to be seen whether other systems will catch up, but Telesat (which has already ordered two test satellites) is potentially best positioned to be a third player, especially if it can secure Canadian government backing for universal service in the Arctic region.

Then we need to see how the market evolves. Greg Wyler has highlighted his ambition for OneWeb to serve 100M people by 2025, and after the alliance with Softbank this will most likely take the form of cellular backhaul from tens or hundreds of thousands of small cells in remote areas, just as Softbank already does at over 6000 cell sites in Japan using IPStar capacity. In contrast, O3b should continue its plans to serve highly concentrated demand hotspots, like remote islands needing connectivity to the outside world and large cruise ships.

Most of the other NGSO proposals, including Telesat and SpaceX, appear to have a fairly similar plan to O3b, with small beams used to serve a select number of demand hotspots. So the question then becomes, how much concentrated demand exists for satellite connectivity? O3b will generate roughly $100M of revenues in 2016 and has a clear path to growth into the $200M-$300M range. But is it a multi-billion dollar opportunity and is there room for one or more additional systems in this niche? And can new systems overtake O3b, given its multi-year lead in this market? Only time will tell, but if OneWeb can maintain its focus on low cost cellular backhaul and gain anchor tenant commitments from Softbank, Bharti Airtel and perhaps others, these competitive dynamics are going to be much more of an issue for O3b.

12.08.16

Chinese checkers, Indonesian intrigue…

Posted in Broadband, Financials, Handheld, Inmarsat, Operators, Regulatory, Services, Spectrum at 9:18 am by timfarrar

UPDATED Feb 5, 2017

There’s been a lot of recent news about Chinese investments in satellite companies, including the planned takeover of Spacecom, which is now being renegotiated (and probably abandoned) after the loss of Amos-6 in September’s Falcon 9 failure, and the Global Eagle joint venture for inflight connectivity.

There were also rumors that Avanti could be sold to a Chinese group, which again came to nothing, with Avanti’s existing bondholders ending up having to fund the company instead in December 2016. The latest of these vanishing offers was a purported $200M bid from a Chinese company, China Trends, for Thuraya in mid-January 2017, which Thuraya promptly dismissed, saying it had never had discussions of any kind with China Trends.

Back in July Inmarsat was also reported to have approached Avanti, but then Inmarsat declared it had “no intention to make an offer for Avanti.” I had guessed that Inmarsat appeared to have done some sort of deal with Avanti, when the Artemis L/S/Ka-band satellite was relocated to 123E, into a slot previously used by Inmarsat for the ACeS Garuda-1 L-band satellite (as Avanti’s presentation at an event in October 2016 confirmed).

However, I’m now told that the Indonesian government reclaimed the rights to this slot after Garuda-1 was de-orbited, and is attempting to use the Artemis satellite to improve its own claim to this vacant slot before these rights expire. I also understand that with Artemis almost out of fuel, various parties were very concerned that the relocation would not even work and the Artemis satellite could have been left to drift along the geostationary arc, an outcome which thankfully has been avoided.

The action by the Indonesian government seems to hint at a continued desire to control its own MSS satellite, which could come in the shape of the long rumored purchase of the SkyTerra-2 L-band satellite for Indonesian government use, similar to the MEXSAT program in Mexico. If that is the case, then presumably the Indonesians would also need to procure a ground segment, similar to the recent $69M contract secured by EchoStar in Asia (although that deal is for S-band, not L-band).

Meanwhile Inmarsat still appears to be hoping to secure a deal to lease the entire payload of the 4th GX satellite to the Chinese government, which was originally expected back in October 2015, when the Chinese president visited Inmarsat’s offices. That contract has still not been signed, apparently because the Chinese side tried to negotiate Inmarsat’s price down after the visit. Although Inmarsat now seems to be hinting to investors that the I5F4 satellite will be launched into the Atlantic Ocean Region for incremental aeronautical capacity, last fall Inmarsat was apparently still very confident that a deal could be completed in the first half of 2017 once the I5F4 satellite was launched.

So it remains to be seen whether Inmarsat will be any more successful than other satellite operators in securing a large deal with China or whether, just like many others, Inmarsat's deal will vanish into thin air. China launched its own Tiantong-1 S-band satellite in August 2016, as part of the same One Belt One Road effort that Inmarsat was hoping to participate in with its GX satellite, and Tiantong-1 is served by a smartphone which "will retail from around 10,000 yuan ($1,480), with communication fees starting from around 1 yuan a minute — a tenth of the price charged by Inmarsat." Thus Inmarsat potentially faces growing pressure on its L-band revenues in China, and must hope that it can secure some offsetting growth in Ka-band.

11.22.16

What about the dish?

Posted in Broadband, Regulatory, Services, Spectrum at 4:42 pm by timfarrar

Although there have been plenty of news articles describing the proposed 4,425 satellite constellation that SpaceX filed with the FCC last week, to date there has been no analysis of how technically plausible this proposal actually is. That is perhaps unsurprising, because the Technical and Legal Narratives included with the submission omit or obscure many of the most salient points needed to analyze the system and determine how realistic the claims made in SpaceX's Legal Narrative actually are.

In particular, SpaceX claims that it has “designed its system to achieve the following objectives”:

High capacity: Each satellite in the SpaceX System provides aggregate downlink capacity to users ranging from 17 to 23 Gbps, depending on the gain of the user terminal involved. Assuming an average of 20 Gbps, the 1600 satellites in the Initial Deployment would have a total aggregate capacity of 32 Tbps. SpaceX will periodically improve the satellites over the course of the multi-year deployment of the system, which may further increase capacity.

High adaptability: The system leverages phased array technology to dynamically steer a large pool of beams to focus capacity where it is needed. Optical inter-satellite links permit flexible routing of traffic on-orbit. Further, the constellation ensures that frequencies can be reused effectively across different satellites to enhance the flexibility and capacity and robustness of the overall system.

Broadband services: The system will be able to provide broadband service at speeds of up to 1 Gbps per end user. The system’s use of low-Earth orbits will allow it to target latencies of approximately 25-35 ms.

Worldwide coverage: With deployment of the first 800 satellites, the system will be able to provide U.S. and international broadband connectivity; when fully deployed, the system will add capacity and availability at the equator and poles for truly global coverage.

Low cost: SpaceX is designing the overall system from the ground up with cost effectiveness and reliability in mind, from the design and manufacturing of the space and ground-based elements, to the launch and deployment of the system using SpaceX launch services, development of the user terminals, and end-user subscription rates.

Ease of use: SpaceX’s phased-array user antenna design will allow for a low-profile user terminal that is easy to mount and operate on walls or roofs.

What is particularly interesting is that the application says nothing whatsoever about the size of the user terminal that will be needed for the system. One hint that the user terminals are likely to be large and expensive is that SpaceX assures the FCC that “[t]he earth stations used to communicate with the SpaceX System will operate with aperture sizes that enable narrow, highly-directional beams with strong sidelobe suppression”. More importantly, by analyzing the information on the satellite beams given at the end of the Schedule S, it is clear that the supposed user downlink capacity of 17-23Gbps per satellite assumes a very large user terminal antenna diameter, because there are only 8 Ku-band user downlink beams of 250MHz each per satellite, and thus a total of only 2GHz of user downlink spectrum per satellite.

In other words this calculation implies a link efficiency of somewhere between 8.5 and 11.5bps/Hz. For comparison, OneWeb has 4GHz of user downlink spectrum per satellite, and is estimated to achieve a forward link efficiency of 0.55bps/Hz with a 30cm antenna and up to 2.73bps/Hz with a 70cm antenna. Put another way, OneWeb is intending to operate with twice as much forward bandwidth as SpaceX but with only half as much forward capacity per satellite.
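The arithmetic behind these efficiency figures is straightforward and can be checked directly, using only the numbers quoted above:

```python
# Implied downlink spectral efficiency, from the figures in the filing
spacex_bandwidth_ghz = 8 * 0.25          # 8 user beams x 250 MHz = 2 GHz per satellite
eff_low = 17.0 / spacex_bandwidth_ghz    # Gbps divided by GHz gives bps/Hz
eff_high = 23.0 / spacex_bandwidth_ghz

# OneWeb comparison: 4 GHz of user downlink spectrum at an estimated 0.55-2.73 bps/Hz
oneweb_best_capacity_gbps = 4.0 * 2.73   # best case with a 70cm antenna

print(f"SpaceX implied efficiency: {eff_low:.1f}-{eff_high:.1f} bps/Hz")
print(f"OneWeb best-case capacity: ~{oneweb_best_capacity_gbps:.1f} Gbps per satellite")
```

That is, SpaceX's claimed 17-23 Gbps over 2 GHz implies 8.5-11.5 bps/Hz, while OneWeb's 4 GHz at its best-case efficiency yields only around 11 Gbps per satellite.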

That’s because OneWeb is intending to serve small, low cost (and therefore less efficient) terminals suitable for cellular backhaul in developing countries, or for internet access from homes and small businesses in rural areas. In contrast SpaceX’s system appears much more focused on large expensive terminals, similar to those used by O3b, which can cost $100K or more, and are used to connect large cruise ships or even an entire Pacific Island to the internet with hundreds of Mbps of capacity. While this has proved to be a good market for O3b, it is far from clear that this market could generate enough revenue to pay for a $10B SpaceX system. Even then, an assumption that SpaceX could achieve an average downlink efficiency of 10bps/Hz seems rather unrealistic.

SpaceX is able to gain some increased efficiency compared to OneWeb by using tightly focused steered gateway and user beams, which the Technical Narrative indicates will provide service in “a hexagonal cell with a diameter of 45 km” (Technical Annex 1-13). But there are only 8 user downlink beams per satellite, and so the total coverage area for each satellite is extremely limited. A 45km diameter hexagon has an area of 1315 sq km (or 1590 sq km for a 45km circle). Taking the more generous measure of 1590 sq km, over 5000 cells would be needed to cover the 8 million sq km area of the continental US. And SpaceX states (Technical Annex 2-7) that even in a fully deployed constellation, 340 satellites would be visible at an elevation angle of at least 40 degrees. So this implies that even when the constellation is fully deployed, only about half the land area of CONUS will be able to be served simultaneously. And in the initial deployment of 1600 satellites, potentially only about 30% of CONUS will have simultaneous service.
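These coverage numbers follow from simple geometry; the sketch below reproduces them, assuming (as the 1315 sq km figure implies) that the 45 km is the hexagon's corner-to-corner diameter:

```python
import math

cell_diameter_km = 45.0
# Regular hexagon with a 45 km corner-to-corner diameter
hex_area = 3 * math.sqrt(3) / 2 * (cell_diameter_km / 2) ** 2   # ~1315 sq km
# The more generous measure: a 45 km diameter circle
circle_area = math.pi * (cell_diameter_km / 2) ** 2             # ~1590 sq km

conus_area_sqkm = 8.0e6
cells_needed = conus_area_sqkm / circle_area                    # >5000 cells to tile CONUS

sats_visible = 340   # visible at >=40 deg elevation with the full constellation
beams_per_sat = 8
simultaneous_fraction = sats_visible * beams_per_sat * circle_area / conus_area_sqkm

print(f"Cells to cover CONUS: {cells_needed:.0f}")
print(f"Fraction of CONUS served simultaneously: {simultaneous_fraction:.0%}")
```

The result is about 54%, i.e. "about half" of CONUS served simultaneously even with the full constellation deployed.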

SpaceX could use beamhopping technology, similar to that planned by ViaSat for ViaSat-2 and ViaSat-3, to move the beams from one cell to another within a fraction of a second, but this is not mentioned anywhere in the application, and would be made even more challenging, especially within the constraints of a relatively small satellite, by the need for avoidance of interference events with both GEO and other LEO constellations.

In summary, returning to the objectives outlined above, the claim of “high capacity” per satellite seems excessive in the absence of large, expensive terminals, while the “worldwide coverage” objective is subject to some question. Most importantly, it will likely be particularly challenging to realize the “low cost” and “ease of use” objectives for the user terminals, if the phased array antennas are very large. And the system itself won’t be particularly low cost, given that each satellite is expected to have a mass of 386kg: taking the Falcon Heavy launch capacity of 54,400kg to LEO and cost of $90M, it would take at least 32 Falcon Heavy launches (and perhaps far more given the challenge of fitting 140 satellites on each rocket), costing $2.8B or more, just to launch the 4425 satellites.
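The launch-cost estimate at the end of that paragraph can be reproduced as follows (a rough check only; it ignores fairing volume limits, which is why the real number of launches could be far higher):

```python
import math

n_sats = 4425
sat_mass_kg = 386
fh_to_leo_kg = 54_400     # Falcon Heavy capacity to LEO
fh_cost_musd = 90         # per-launch cost

launches = math.ceil(n_sats * sat_mass_kg / fh_to_leo_kg)   # mass-limited minimum
sats_per_launch = math.ceil(n_sats / launches)              # ~140 satellites per rocket
launch_cost_busd = launches * fh_cost_musd / 1000

print(f"{launches} launches, ~{sats_per_launch} satellites each, ${launch_cost_busd:.2f}B")
```

The mass budget alone requires 32 launches at roughly 140 satellites per rocket, for about $2.9B in launch costs before any volume or deployment constraints are considered.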

Instead one of the key objectives of the narrow, steerable beams in the SpaceX design appears to be to support an argument that the FCC should continue with its avoidance of in-line interference events policy, with the spectrum shared “using whatever means can be coordinated between the operators to avoid in-line interference events, or by resorting to band segmentation in the absence of any such coordination agreement.”

This continues SpaceX’s prior efforts to cause problems for OneWeb, because OneWeb provides continuous wide area coverage, rather than highly directional service to specified locations, and therefore (at least in the US, since it is unclear that the FCC’s rules could be enforced elsewhere) OneWeb may be forced to discontinue using part of the spectrum band (and thereby lose half of its capacity) during in-line events.

OneWeb is reported to be continuing to make progress in securing investors for its system, and it would be unsurprising if Elon Musk continues to bear a grudge against a space industry rival. But given the design issues outlined above, and the many other more pressing problems that SpaceX faces in catching up with its current backlog of satellite launches, it is rather more doubtful whether SpaceX really has a system design and business plan that would support a multi-billion dollar investment in a new satellite constellation.

11.10.16

Back to the Future Part 2…

Posted in Globalstar, Iridium, Operators, Regulatory, Spectrum at 10:31 am by timfarrar

Now that Trump has won the White House, the opportunity for Globalstar to secure approval for its Terrestrial Low Power Service (TLPS), first proposed four years ago, has finally disappeared. Instead of a 22MHz WiFi Channel 14 that was supposed to have access to a "massive and immediate ecosystem" (an assertion challenged by opponents), Globalstar is now asking for a low power terrestrial authorization in only its 11.5MHz of licensed spectrum.

That takes us back essentially to the compromise that Jay Monroe rejected in summer 2015, apparently because he didn’t believe that it would be possible to monetize the spectrum for low power LTE. However, with the FCC still keen to allow Iridium to share more of the L-band MSS spectrum for NEXT, and even Google supporting the concept of Globalstar using only its licensed spectrum for terrestrial operations, an approval seems very plausible in the near term, albeit with a further comment period required on the proposed license modification, as Globalstar acknowledges in its ex parte letter.

UPDATE (11/11): This email, produced earlier in the year by the FCC in response to a FOIA request, gives some further insight into the key June 2015 meeting with Globalstar that I referred to in my post. With its reference to "the conditions for operation in Channels 12 and 13" and changes to "out-of-band emission levels in the MSS licensed spectrum" it seems clear that FCC staff were contemplating operation by unlicensed users right up to the 2483.5MHz boundary at least, presumably in conjunction with some reciprocity for Globalstar to operate below 2483.5MHz. Thus the deal proposed by FCC staff (although not necessarily validated with Commissioners' offices) and rejected by Globalstar appears to have been somewhat different to this latest proposal from Globalstar (and perhaps more similar to the Public Knowledge proposals of shared use that came to the fore later in 2015). However, it seems hard to argue that the deal on the table in summer 2015 wouldn't have been more favorable to Globalstar (due to the ability to actually offer a full 22MHz TLPS WiFi channel), if approved by Commissioners, than Globalstar's latest proposal.

So the question now becomes, is there value in a non-standard 10MHz TDLTE channel, which is restricted to operate only at low power? Back in June 2015, I noted that there clearly would be some value for standard high power operation, but the question is a very different one for a low power license. After all, even Jay didn’t believe this type of authorization would have meaningful value last year.

Of course, it's only to be expected that lazy analysts will cite the Sprint leaseback deal, which supposedly represented a huge increase in the value of 2.5GHz spectrum (though in practice that deal included cherry-picked licenses for owned spectrum in top markets, and the increase in value was actually quite modest). And they will also presumably overlook the impact of the power restrictions and the lack of an ecosystem.

What is really critical is whether Globalstar could use such an approval to raise further funds before it runs out of money next year. Globalstar’s most recent Q3 10-Q admitted that “we will draw all or substantially all of the remaining amounts available under the August 2015 Terrapin Agreement to achieve compliance with certain financial covenants in our Facility Agreement for the measurement period ending December 31, 2016 and to pay our debt service obligations.”

In other words, Globalstar does not have the money to pay its interest and debt payments in June 2017. And with an imminent Terrapin drawdown of over $30M in December, Globalstar really needs an immediate approval to get its share price up to a level where Terrapin won’t be swamping the market with share sales next month. So how will the market react to the prospects of a limited authorization, and will investors be willing to put up $100M+ just to meet Globalstar’s obligations under the COFACE agreement in 2017?

It's important to note that the biannual debt repayments jump further in December 2017, and Globalstar will not be able to extend the period in which it makes cure payments beyond December 2017 unless "the 8% New Notes have been irrevocably redeemed in full (and no obligations or amounts are outstanding in connection therewith) on or prior to 30 June 2017". Thus it's critical that the financing situation is resolved through a major cash injection in the first half of 2017. As a result, it looks like we should find out pretty soon whether this compromise is sufficient for Thermo (or more likely others) to continue funding Globalstar.

09.14.16

Silicon Valley’s planning problem…

Posted in DISH, General, Operators, Spectrum at 3:18 pm by timfarrar

I’ve been thinking a lot about the failure of Google Fiber and if there are any wider lessons about whether Silicon Valley will ever be able to compete effectively as an owner and builder of telecom networks, or indeed in other large scale capex intensive businesses (such as cars).

One conclusion I’ve come to is that there may be a fundamental incompatibility between the planning horizon (and deployment capabilities) of Silicon Valley companies and what is needed to be a successful operator of national or multinational telecom networks (whether fiber, wireless or satellite). The image above is taken from Facebook’s so-called “Little Red Book” and summarizes pretty well what I’ve experienced living and working in Silicon Valley, namely that the prevailing attitude is “There is no point having a 5-year plan in this industry” and instead you should think just about what you will achieve in the next 6 months and where you want to be in 30 years.

In software that makes a lot of sense – you can iterate fast and solve problems incrementally, and scaling up (at least nowadays) is relatively easy if you can find and keep the customers. In contrast, building a telecom network (or a new car design) is at least two or three years’ effort, and by the time you are fully rolled out in the market, it’s four or five years since you started. So when you start, you need to have a pretty good plan for what you’ll be delivering (and how it is going to be operated at scale) five years down the road.

For an existing wireless operator or car company that planning and implementation is obviously helped by years of experience in operating networks or manufacturing facilities at scale. But a new entrant has to learn all of that from scratch. And it’s not like technology is transforming the job of deploying celltowers, trenching fiber or running a vehicle manufacturing line. Software might change the service that the end customer is buying, but it’s crazy to think that “if tech companies build cars and car companies hire developers, the former will win.”

Of course self-driving cars will drastically change what people do with vehicles in the future. But those vehicles still have to be made on a production line, efficiently and with high quality. Mobile has changed the world dramatically over the last 30 years, but AT&T, Deutsche Telekom, BT, etc. are still around and have absorbed some of the most successful wireless startups.

Moreover, Silicon Valley companies simply don’t spend capex on anything like the scale of telcos or car companies. In 2015 Alphabet/Google’s total capex for all of its activities worldwide was $9.9B and Facebook’s capex was only $2.5B (surprisingly, at least to me, Amazon only spent $4.6B, though Apple spent $11.2B and anticipated spending $15B in 2016).

But the US wireless industry alone invested $32B in capex in 2015, which is more than Facebook, Google, Amazon and Apple put together, and that excludes the $40B+ spent on spectrum in the AWS-3 auction last year. In the car industry, GM and Ford each spent more than $7B on capex in 2015. So in round numbers, total wireless industry and car industry capex on a global basis are both of order $100B+ every year, a sum that simply can’t be matched by Silicon Valley.
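The comparison is easy to verify from the 2015 capex figures above (in US$B):

```python
# 2015 capex figures quoted above, in US$B
tech_capex = {"Alphabet/Google": 9.9, "Facebook": 2.5, "Amazon": 4.6, "Apple": 11.2}
us_wireless_capex = 32.0

big_four_total = sum(tech_capex.values())   # ~28.2
print(f"Big four tech capex: ${big_four_total:.1f}B vs "
      f"US wireless industry: ${us_wireless_capex:.0f}B")
```

Even the four largest tech companies combined spent less on capex worldwide than the US wireless industry alone.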

So when Silicon Valley companies aren't used to either planning for or spending tens of billions of dollars on multi-year infrastructure developments, why are people surprised when it turns out Google can't support the investment needed to build a competitive national fiber network? (Indeed, it's not been widely reported, but I'm told that earlier this year Google's board also turned down a $15B+ partnership with DISH to build a new national wireless network.) Or when it appears that "The Apple dream car might not happen" and "Google's Self-Driving Car Project Is Losing Out to Rivals"?

Instead it seems that we may be shifting towards a model where the leading Silicon Valley companies work on new technology development and "give away the blueprints…so that anyone from local governments to Internet service providers can construct a new way to get Internet signals into hard-to-reach places". Similarly, Google could "enable [rather] than do" in the field of self-driving cars. Whether that will lead to these technologies being commercialized remains to be seen, but it does mean that Facebook and Google won't have to change their existing ways of working or radically increase their capital expenditures.

Undoubtedly some other Silicon Valley companies will end up trying to build their own self-driving cars. But after the (continuing) struggles of Tesla to ramp up, it seems more likely that most startups will end up partnering with or selling their technology to existing manufacturers instead. And similarly, in the telecom world, does anyone believe Google (or any other Silicon Valley company) is going to build a new national wireless broadband network that is competitive with AT&T, Verizon and Comcast?

It seems to me that about the best we could hope for is for Google to push forward the commercialization of new shared access/low cost frequency bands like 3.5GHz (e.g. as part of an MVNO deal with an existing operator) so that the wireless industry no longer has to spend as much on spectrum in the future and can deliver more data at lower cost.

However, that’s not necessarily all bad news. It seems almost quaint to look back a year or two at how wireless operators were reportedly “terrified” of Facebook and concerned about how Project Loon could “hand Google an effective monopoly over the Internet in developing countries.”

If Facebook and Google are now simply going to come up with clever technology to reduce network costs (rather than building rival networks) or even just act as a source of incremental demand for mobile data services, then that will be good for mobile operators. Those operators may just be “dumb pipes,” but realistically, despite Verizon’s (flailing) efforts, that’s pretty much all they could hope for anyway.

08.29.16

What’s Charlie’s game now?

Posted in AT&T, DISH, Operators, Regulatory, Spectrum, T-Mobile, Verizon at 3:44 pm by timfarrar

Back in November 2014, I published my analysis of what was happening in the AWS-3 spectrum auction to scorn from other analysts, who apparently couldn’t believe that Charlie Ergen would bid through multiple entities to push up the price of paired spectrum. Now we’re seeing relatively little speculation about who is doing what in the incentive auction (other than an apparently mystifying consensus that it will take until at least the end of September to complete Stage 1), so I thought it would be useful to give my views about what is happening.

The most important factor to observe in analyzing the auction is that overall demand relative to the amount of spectrum available (calculated as first round bidding units placed divided by total available supply measured in bidding units) has been considerably lower than in previous large auctions (AWS-1, 700MHz) and far short of the aggressive bidding seen in the AWS-3 auction.

That's attributable partly to the absence of Social Capital, but much more to the 100MHz of spectrum on offer, compared to the fact that of the five remaining potential national bidders (Verizon, AT&T, T-Mobile, DISH and Comcast), none is likely to need more than about 30MHz on a national basis.

What’s become clear so far over the course of the auction is that most license areas (Partial Economic Areas) are not attracting much excess demand, apart from the top PEAs (namely New York, Los Angeles and Chicago) in the first few rounds. I said before the auction that DISH’s best strategy would probably be to bid for a large amount of spectrum in a handful of top markets, in order to drive up the price, and that appears to be exactly what happened.

However, it now appears we are very close to reaching the end of Stage 1, after excess eligibility dropped dramatically (by ~44% in terms of bidding units) in Round 24. In fact a bidder dropped 2 blocks in New York and 3 blocks in Los Angeles, without moving this eligibility elsewhere, somewhat similar to what happened on Friday, when one or more bidders dropped 5 blocks in Chicago, 3 blocks in New York and 1 block in Los Angeles during Round 20.

However, a key difference is that a significant fraction of the bidding eligibility that moved out of NY/LA/Chicago during Round 20, ended up being reallocated to other second and third tier markets, whereas in Round 24, total eligibility dropped by more than the reduction in eligibility in New York and Los Angeles. It is natural that a bidder such as T-Mobile (or Comcast) would want licenses elsewhere in the country if the top markets became too expensive, whereas if DISH’s objective is simply to push up the price, then DISH wouldn’t necessarily want to bid elsewhere and end up owning second and third tier markets.

This suggests that DISH has been reducing its exposure in the top three markets, in order to prevent itself from becoming stranded with too much exposure there. My guess is that DISH exited completely from Chicago in Round 20 and is now reducing exposure in New York and Los Angeles after bidding initially for a full complement of licenses there (i.e. 10 blocks in New York and Chicago and 5 blocks in Los Angeles).

If DISH is now down to about 8 blocks in New York and only 2 blocks in Los Angeles, then its maximum current exposure (if all other bidders dropped out) would be $4.52B, keeping DISH’s exposure under what is probably a roughly $5B budget. Of course DISH could potentially drop out of Los Angeles completely and let others fight it out (for the limited allocation of 5 blocks), if its objective is simply to maximize the end price, but this may not be possible in New York, because there are 10 license blocks available, which could give Verizon, AT&T, T-Mobile and Comcast enough to share between them.
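The exposure arithmetic above can be sketched as follows. The per-block prices used here are hypothetical round prices chosen only to be consistent with the ~$4.52B total stated above; the post does not give the individual block prices.

```python
# Sketch of DISH's maximum current exposure if it is down to 8 blocks in
# New York and 2 in Los Angeles. The per-block prices are ASSUMED values
# consistent with the ~$4.52B total cited in the post, not actual figures.
ny_price_per_block = 0.48e9   # assumed current price of one NY block ($)
la_price_per_block = 0.34e9   # assumed current price of one LA block ($)

exposure = 8 * ny_price_per_block + 2 * la_price_per_block
print(f"Max exposure: ${exposure / 1e9:.2f}B")  # under a ~$5B budget
```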

Regardless, with the price increasing by 10% in each round, the price per MHzPOP in New York and Los Angeles would exceed that in the AWS-3 auction before the end of this week, implying that a resolution has to be reached very soon. If DISH is the one to exit, then it looks like Ergen will not be reallocating eligibility elsewhere, and DISH’s current eligibility (256,000 bidding units if it is bidding on 8 blocks in New York and 2 in Los Angeles) is likely higher than the excess eligibility total of all the remaining bidders combined (~182,000 bidding units at the end of Round 24 if all the available licenses were sold). This implies that a rapid end to Stage 1 of the auction is now likely, perhaps even this week and almost certainly before the end of next week, with total proceeds in the region of $30B.

Of course we will then need to go back to the next round of the reverse auction, but it looks plausible that convergence may be achieved at roughly $35B-$40B, potentially with as much as 80-90MHz sold (i.e. an average price of ~$1.50/MHzPOP). If DISH is forced out in Stage 1, then prices in key markets would probably not go much higher in future rounds of the forward auction, so the main question will be how quickly the reverse auction payments decline and whether this takes 1, 2 or 3 more rounds.
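The implied average price works out as proceeds divided by (MHz sold × POPs covered). The US POP count used below (~310M) is my assumption, not a figure from the post:

```python
# Rough check of the ~$1.50/MHzPOP average implied by $35B-$40B of proceeds
# for 80-90MHz sold. The ~310M POP figure is an assumption of this sketch.
def price_per_mhzpop(proceeds_usd, mhz_sold, pops=310e6):
    return proceeds_usd / (mhz_sold * pops)

low  = price_per_mhzpop(35e9, 90)   # low proceeds, more spectrum sold
high = price_per_mhzpop(40e9, 80)   # high proceeds, less spectrum sold
print(f"${low:.2f}-${high:.2f} per MHzPOP")
```

The range brackets the ~$1.50/MHzPOP figure cited above.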

Also, based on the bidding patterns to date, it seems likely that Comcast may well emerge from the auction with a significant national footprint of roughly 20MHz of spectrum, potentially spending $7B-$10B. In addition, unless the forward auction drops to only 70MHz being sold, all four national bidders could largely achieve their goals, spending fairly similar amounts except in New York and Los Angeles, where one or two of these players are likely to miss out. In those circumstances, it will be interesting to see who would feel the need to pay Ergen’s asking price of at least $1.50/MHzPOP (and quite possibly a lot more) for his AWS-3 and AWS-4 spectrum licenses.

UPDATE (8/30): Bidding levels in New York and Los Angeles dropped dramatically in Round 25 (to 10 and 8 blocks respectively), with total bidding units placed (2.096M) now below the supply of licenses (2.177M) in Stage 1. This very likely means that DISH has given up and that Stage 1 will close this week at an even lower price of ~$25B, with convergence of the forward and reverse auction values probably not achieved until the $30B-$35B range. This lower level of bidding activity increases the probability that 4 stages will now be required, with only 70MHz being sold in the forward auction at the end of the day.
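Applying the demand metric described at the top of the post (bidding units placed divided by bidding units of supply) to these Round 25 figures:

```python
# Demand/supply ratio for Round 25, using the bidding-unit totals quoted
# in the update above. A ratio below 1.0 means demand no longer exceeds
# supply, so Stage 1 can close.
placed = 2.096e6   # total bidding units placed in Round 25
supply = 2.177e6   # total bidding units of available licenses
ratio = placed / supply
print(f"demand/supply = {ratio:.3f}")
```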

07.15.16

Hopping mad…

Posted in Aeronautical, Broadband, Operators, Spectrum, ViaSat, VSAT at 2:28 pm by timfarrar

It’s been interesting to hear the feedback on my new ViaSat profile that I published last weekend, especially with regard to ViaSat’s supposed technical advantages over the HTS competition. As I noted in the report, ViaSat has apparently been struggling with its beamhopping technology, reducing the capacity of its upcoming ViaSat-2 satellite from an originally planned 350Gbps (i.e. 2.5 times the capacity of ViaSat-1) to around 300Gbps at the moment.

However, even that reduced target may require extra spectrum to achieve, with ViaSat asking the FCC in late May for permission to use 600MHz of additional spectrum in the LMDS band. Fundamentally this appears to be due to the reduced efficiency that ViaSat now expects to achieve relative to that set out in its original beamhopping patent. The patent suggested that for a ViaSat-2 design (with only 1.5GHz of spectrum, rather than the 2.1GHz ViaSat now intends to use), the efficiency could be as high as 3bps/Hz on the forward link (i.e. 225Gbps) and 1.8bps/Hz on the return link (i.e. 135Gbps) for a total of 360Gbps of capacity. But at Satellite 2016, ViaSat’s CEO indicated that an efficiency (apparently averaged between the forward and return links) of only 1.5bps/Hz should be expected, no better than existing HTS Ka-band satellites and nearly 40% lower than ViaSat originally estimated.
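A quick sanity check on those efficiency numbers (taking the simple mean of the forward and return link efficiencies is my assumption; a capacity-weighted average would differ slightly):

```python
# Checking the capacity arithmetic above: the patent's per-link efficiencies
# imply 225 + 135 = 360Gbps total, and the ~1.5bps/Hz figure cited at
# Satellite 2016 is "nearly 40%" below the patent's forward/return average.
patent_fwd = 3.0   # bps/Hz, forward link (-> 225Gbps in the patent design)
patent_rtn = 1.8   # bps/Hz, return link (-> 135Gbps)
patent_avg = (patent_fwd + patent_rtn) / 2   # simple mean: 2.4 bps/Hz

current = 1.5                          # bps/Hz per ViaSat's CEO
shortfall = 1 - current / patent_avg   # fractional reduction vs. the patent

total_gbps = 225 + 135                 # patent-era total capacity estimate
print(f"shortfall = {shortfall:.1%}, patent total = {total_gbps}Gbps")
```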

A notable side-effect of this additional spectrum utilization (even assuming approval is granted by the FCC) is that new terminals will be required, including replacement of both the antenna and the modem for aircraft that want to make use of the extended coverage of ViaSat-2. That’s why American Airlines is waiting until the second half of 2017 for this new terminal to be developed, before it starts to install ViaSat’s connectivity on new aircraft.

While the FCC’s Spectrum Frontiers Order yesterday does contemplate continued use of the LMDS band for satellite gateways (though utilization by user terminals appears more difficult), it looks like other Ka-band providers intend to shift more of their future gateway operations up to the Q/V-band, rather than building hundreds of Ka-band gateways as ViaSat will need for its ViaSat-3 satellite. That decision could reduce the costs of competing ground segment deployments substantially, while retaining continuity for user links. Thus, as a result of the lower than expected beamhopping efficiency, it remains to be seen whether ViaSat’s technology will now be meaningfully superior to that of competitors, notably SES and Inmarsat who both appear poised to invest heavily in Ka-band.

SES gave a presentation at the Global Connected Aircraft Summit last month, depicting its plans to build three new Ka-band HTS satellites for global coverage as shown above, and the first of these satellites could be ordered very shortly, because as SES pointed out in its recent Investor Day presentation, it has EUR120M of uncommitted capex this year and nearly EUR1.5B available in the period through 2020.

Meanwhile Inmarsat is hard at work designing a three satellite Inmarsat-7 Ka-band system, with in excess of 100Gbps of capacity per satellite. Although the results of the Brexit referendum may complicate its efforts, Inmarsat is hoping to secure a substantial European Commission investment later this year, which would replace the four proposed Ka-band satellites that Eutelsat had previously contemplated building using Juncker fund money.

So now it appears we face (at least) a three-way fight for the global Ka-band market, with deep-pocketed rivals sensing that ViaSat may not have all the technological advantages it had expected, and Hughes poised to secure at least a 6 month (and possibly as much as a 9-12 month) lead to market for Jupiter-2 compared to ViaSat-2. Victory for ViaSat is far from certain, and perhaps even doubtful, but beyond 2020 Ka-band appears very likely to be the dominant source of GEO HTS capacity.

06.29.16

Speaking in code…

Posted in DISH, Financials, Operators, Regulatory, Spectrum at 1:54 pm by timfarrar

It’s been interesting to see the various reactions to today’s announcement from the FCC that Stage 1 of the Reverse Auction concluded with a total clearing cost of $86.4B (apparently excluding nearly $2B for the $1.75B relocation fund and other auction costs).

Most opinions, including my own, were that this amount is laughable in view of how much wireless operators have available to spend on buying spectrum. Some have suggested this means that broadcasters are pricing themselves out of the auction by asking for an excessive amount of money. But the reality is that the FCC set the initial prices (of up to $1B per station) and all broadcasters had to decide was whether or not to participate and if so, at what point to drop out.

Importantly, if the FCC had no excess supply of TV stations willing to offer their spectrum in the auction, then it was obligated to freeze the bids at the opening price. It seems very unlikely that if a broadcaster was willing to participate at an opening bid of say $900M (in New York) then it would decide to drop out at $800M or even $500M. And notably the total opening bids if the FCC moved every single station off-air would be only $342B.

So even though the FCC has described broadcaster participation in the auction as “strong”, it seems that this statement may be code for “somewhat disappointing” because it has proved impossible to obtain sufficient participation to lower the opening bids in a number of key markets, if the full 126MHz target set by the FCC is to be cleared.

Of course the FCC would have been criticized if it had set a lower initial clearance target and it subsequently became evident that sufficient participation existed to reach the maximum. However, it now seems plausible that Round 1 of the forward auction could go nowhere, because there is little reason for participants to reveal their bidding strategies if it is essentially impossible for the clearing costs to be covered. That will probably also lead to criticism of the FCC for miscalculating the level of demand for spectrum, and certainly broadcasters will be highlighting that they apparently value spectrum more highly than the wireless carriers.

As a result, we are likely to see multiple rounds of the reverse auction, in which the clearing target is gradually reduced, until a more reasonable level of clearing costs (perhaps $30B or so) is reached. Although we could see quite a sharp reduction in clearing costs in Round 2 once more markets are unfrozen, it may need as many as 3 more rounds, with 84MHz cleared (representing 70MHz of spectrum to be auctioned), assuming the FCC incrementally reduces the target from 100MHz auctioned to 90MHz to 80MHz to 70MHz. At that point DISH could have even more reason to bid up the prices aggressively, because less spectrum will be available to its competitors, especially T-Mobile, so we might actually end up with the final forward auction bids exceeding the clearing costs by $10B+.
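The stage-by-stage progression sketched above can be laid out explicitly. The pairing of clearing targets (MHz of broadcast spectrum cleared) with MHz actually licensed comes from the FCC's published band-plan options, not from the post itself, so treat the intermediate figures as my assumption:

```python
# Reverse-auction stages: (MHz cleared from broadcasters, MHz licensed to
# carriers). The 126->100 and 84->70 pairings are given in the post; the
# intermediate pairings are taken from the FCC band-plan options.
stages = [
    (126, 100),  # Stage 1 (frozen at $86.4B clearing cost)
    (114, 90),   # Stage 2
    (108, 80),   # Stage 3
    (84, 70),    # Stage 4
]
# Reaching 70MHz auctioned would take 3 more reverse-auction stages after
# Stage 1, matching the "as many as 3 more rounds" estimate above.
rounds_after_stage1 = len(stages) - 1
print(f"{rounds_after_stage1} more stages to reach {stages[-1][1]}MHz auctioned")
```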

But for now, speculation as to which broadcasters declined to participate is likely to intensify. My suspicion is that fewer of the small and non-commercial broadcasters than expected might have decided to participate. After all, as one station in Pennsylvania told the WSJ back in January, “it won’t consider going off the air…because it would lose its PBS affiliation and go against the station’s stated mission of serving the public”. That would mean more of the reverse auction proceeds potentially going to commercial ventures, especially those that were bought up by investment firms with the explicit aim of selling their licenses.

Moreover, it may even be reasonable to guess at some of the markets which may have been frozen at the opening bids: for example, it seems likely that this must include some of the biggest cities, such as New York or Chicago, for such a high total clearing cost to have been reached. No doubt investors will be contemplating what that might mean for those companies that own broadcast licenses in these areas, especially if they have indicated their willingness to participate.

05.11.16

Jay cries uncle (or not)…

Posted in Globalstar, Operators, Regulatory, Spectrum at 5:47 pm by timfarrar

As I predicted last week, TLPS missed its chance for approval on April 22, despite Jay Monroe being convinced that it was in the bag when he presented at the Burkenroad conference earlier that day. He presumably had been assured of that by Globalstar’s General Counsel, Barbee Ponder, who thought they had answered all the FCC’s questions in late March and seemingly didn’t bother to follow up after that point.

Now today we have seen an experimental license filing from Microsoft to test TLPS in Redmond, WA. Microsoft’s application states:

“Microsoft will test terrestrial operations in the 2473-2483.5 MHz unlicensed band and the adjacent 2483.5-2500 MHz band, consistent with Globalstar Inc.s proposal to operate a terrestrial low-power service on these frequencies nationwide (see IB docket no. 13-213). Microsoft seeks to quantify the affect [sic] of such operations on the performance and reliability of unlicensed operations in the 2.4 GHz ISM band.”

The application also includes the incidental admission that Gerst is correct that the Ruckus APs have been modified (by removal of coexistence filters) from the approved versions (the testing will include “the use of an intentional radiator in the 2473-2483.5 MHz unlicensed band that has not received an equipment authorization as ordinarily required under 47 C.F.R. § 15.201”) although it should be noted that Microsoft plans to use different APs from those in Globalstar’s own tests, including a consumer model which was one of Microsoft’s primary concerns.

The duration of the experimental license is requested to be 6 months, from May 23 to November 23, suggesting that we may not see results until the fall. This could perhaps permit FCC consideration of the results after the November election if Microsoft identified no problems whatsoever (or if the FCC sets a hard deadline for further testing, though as noted below Bloomberg is reporting that the initial authorization will last at least a year), but more likely it will set the scene for additional back and forth between Globalstar and its opponents in the period before the next FCC Chairman gets his or her feet under the desk in spring 2017.

UPDATE (5/13): Despite Microsoft’s experimental application, Globalstar’s TLPS proposal has finally made it onto the FCC’s circulation list this afternoon. That raises the question of whether Microsoft’s application was made with Globalstar’s cooperation (as I had assumed) or if Microsoft anticipated the issuance of an order that all sides acknowledged would require more testing and simply jumped the gun in preparing to conduct its own testing after that point (which now seems the most plausible explanation).

So now the focus will shift to what this order contains. It seems to basically be taken for granted that there will be increased sharing of L-band spectrum with Iridium (though that would come in a separate parallel ruling by the International Bureau on delegated authority) and that additional power limits will be imposed as an interim measure, probably at a 200mW level. Bloomberg is also reporting that there will be constraints on the number of APs that may be deployed, with a limit of 825 in the first year, and “the FCC will assess whether they cause interference to other services”. However, prior to the rejected deal last summer the FCC also contemplated changes to the OOBE restrictions that would permit increased use of Channels 12 and 13 by terrestrial users, and it will be interesting to see if these changes are still present, or if they have been modified, perhaps due to concerns about possible impacts on Bluetooth LE users in the upper part of the unlicensed spectrum.

Do the math…

Posted in DISH, Financials, Operators, Spectrum at 1:58 pm by timfarrar

So now the Kerrisdale report has been released, along with a prebuttal from Citigroup, claiming that Kerrisdale is “Absolutely Not a Thesis Changer”. However, as I noted last week, the biggest issue in thinking about the future for DISH is likely its capital structure, which neither report addresses at all.

I also wish that both of them were better at math, when they try to assess whether today’s wireless networks are operating at capacity (though perhaps its hardly surprising when these calculations have been a recurring problem for New Street Research, CTIA, the FCC, Ofcom and even the ITU). Citi criticize Kerrisdale for considering New Albany, Ohio as a representative location, given its lower than average population density amongst US metropolitan areas (suggesting that 1000 people per sq km is more representative than 258 people per sq km), and also allege that Kerrisdale “ignores the variance in usage during the day” (suggesting that 40% of traffic needs to be accommodated in each of the 2 hour long morning and evening rush hours).

By making these adjustments, Citi claims that average utilization with a 25MHz downlink spectrum allocation would be 280% in the morning and evening peaks, compared to the 15% daily average estimated by Kerrisdale. Of course Citi exaggerate on the upside and Kerrisdale exaggerate on the downside.

The correct calculation for a “typical” situation should take the average number of subscribers per cellsite (144M subs on 48.6K cellsites for Verizon, according to Citi’s own report, or 2965 subs/site, compared to 1773 in Kerrisdale’s report and 7092 subs/site in Citi’s report) and the average busy hour ratio for mobile traffic (6.9% according to Cisco’s Feb 2016 VNI report which says the busy hour has 66% more traffic than average, although carriers typically build to around an 8% busy hour, perhaps 9%-10% in very peaky locations) and should also derate by the share of traffic carried on the downlink (around 87% in the US according to Sandvine), which neither report takes into consideration.

That would result in a daily downlink traffic of around 258GB per site (vs 177GB for Kerrisdale and 709GB for Citi) and a busy hour traffic of 17.8GB rather than the 142GB estimated by Citi. Then, using their own assumptions about capacity per site, each site would see a busy hour utilization of 35%, not 15% and certainly not 280%, suggesting that there is some headroom on most cellsites, just as you would expect, but that carriers will need to continue to upgrade their networks in years to come, to cope with future traffic growth, especially in peak locations.
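The full chain of arithmetic above can be reproduced in a few lines. The per-subscriber daily usage (~100MB total) is backed out from the figures quoted (e.g. 177GB over 1773 subs in Kerrisdale's report) and is an assumption of this sketch, as is backing out site capacity from Citi's own 280% figure:

```python
# Reproducing the "typical site" busy-hour calculation above.
subs_per_site   = 144e6 / 48.6e3   # ~2963 Verizon subs/site (Citi's figures)
daily_per_sub   = 0.1              # GB/day total, ASSUMED from 177GB/1773 subs
downlink_share  = 0.87             # US downlink traffic share (Sandvine)
busy_hour_share = 0.069            # Cisco VNI: busy hour 66% above average

daily_dl = subs_per_site * daily_per_sub * downlink_share   # ~258 GB/site/day
busy_dl  = daily_dl * busy_hour_share                       # ~17.8 GB busy hour

# Site capacity backed out from Citi's own numbers (142GB busy hour at
# 280% utilization implies ~51 GB/hour of downlink capacity):
capacity = 142 / 2.80
util            = busy_dl / capacity          # ~35% (the corrected figure)
kerrisdale_util = (177 / 24) / capacity       # ~15% (flat daily average)
print(f"daily {daily_dl:.0f}GB, busy {busy_dl:.1f}GB, util {util:.0%}")
```

The same ~51 GB/hour capacity figure reproduces all three utilization numbers (280%, 35% and 15%), which suggests the disagreement is entirely about traffic assumptions, not capacity.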

That brings us back to my original thesis: Verizon might find DISH’s spectrum useful, but its network is not in danger of imminent collapse without it. Instead Verizon might prioritize increased use of small cells, sectorization and beamforming, and treat buying spectrum as the last thing to do, as Bob Azzi (former Sprint CTO) suggested on today’s Tegus call that I participated in.

This is a poker game, and if Ergen can prolong his license term (by building out fixed wireless broadband, as all of the call participants agreed would be logical) then he may be able to wait for Verizon to come to the table. If DISH can push up the price of spectrum in the incentive auction and prevent LightSquared/Ligado from offering a midband alternative then Ergen will be in a stronger position. So it seems more logical to me to talk about where that money will come from over the next couple of years, and not whether DISH will be forced to sell at a discounted price or Verizon will be forced to pay whatever DISH demands.
