I’ve been thinking a lot about the failure of Google Fiber and if there are any wider lessons about whether Silicon Valley will ever be able to compete effectively as an owner and builder of telecom networks, or indeed in other large scale capex intensive businesses (such as cars).
One conclusion I’ve come to is that there may be a fundamental incompatibility between the planning horizon (and deployment capabilities) of Silicon Valley companies and what is needed to be a successful operator of national or multinational telecom networks (whether fiber, wireless or satellite). The image above is taken from Facebook’s so-called “Little Red Book” and summarizes pretty well what I’ve experienced living and working in Silicon Valley, namely that the prevailing attitude is “There is no point having a 5-year plan in this industry” and instead you should think just about what you will achieve in the next 6 months and where you want to be in 30 years.
In software that makes a lot of sense – you can iterate fast and solve problems incrementally, and scaling up (at least nowadays) is relatively easy if you can find and keep the customers. In contrast, building a telecom network (or a new car design) is at least a two or three year effort, and by the time you are fully rolled out in the market, it’s four or five years since you started. So when you start, you need to have a pretty good plan for what you’ll be delivering (and how it’s going to be operated at scale) five years down the road.
For an existing wireless operator or car company that planning and implementation is obviously helped by years of experience in operating networks or manufacturing facilities at scale. But a new entrant has to learn all of that from scratch. And it’s not like technology is transforming the job of deploying celltowers, trenching fiber or running a vehicle manufacturing line. Software might change the service that the end customer is buying, but it’s crazy to think that “if tech companies build cars and car companies hire developers, the former will win.”
Of course self-driving cars will drastically change what people do with vehicles in the future. But those vehicles still have to be made on a production line, efficiently and with high quality. Mobile has changed the world dramatically over the last 30 years, but AT&T, Deutsche Telekom, BT, etc. are still around and absorbed some of the most successful wireless startups.
Moreover, Silicon Valley companies simply don’t spend capex on anything like the scale of telcos or car companies. In 2015 Alphabet/Google’s total capex for all of its activities worldwide was $9.9B and Facebook’s capex was only $2.5B (surprisingly, at least to me, Amazon only spent $4.6B, though Apple spent $11.2B and anticipated spending $15B in 2016).
But the US wireless industry alone invested $32B in capex in 2015, which is more than Facebook, Google, Amazon and Apple put together, and that excludes the $40B+ spent on spectrum in the AWS-3 auction last year. In the car industry, GM and Ford each spent more than $7B on capex in 2015. So in round numbers, total wireless industry and car industry capex on a global basis are both of order $100B+ every year, a sum that simply can’t be matched by Silicon Valley.
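As a quick sanity check on the comparison above (a sketch in Python, using the fiscal 2015 capex figures as quoted):

```python
# 2015 capex in $B, as cited above (fiscal-year figures; approximate)
silicon_valley = {"Alphabet/Google": 9.9, "Facebook": 2.5, "Amazon": 4.6, "Apple": 11.2}
total_sv = sum(silicon_valley.values())  # 28.2

us_wireless_capex = 32.0  # 2015 US wireless industry capex, $B

print(f"Big-four Silicon Valley capex: ${total_sv:.1f}B")
print(f"US wireless industry capex:    ${us_wireless_capex:.1f}B")
# The US wireless industry alone outspends all four companies combined
assert us_wireless_capex > total_sv
```

And that is before counting the $40B+ of AWS-3 spectrum purchases, or the car industry at all.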
So when Silicon Valley companies aren’t used to either planning for or spending tens of billions of dollars on multi-year infrastructure developments, why are people surprised when it turns out Google can’t support the investment needed to build a competitive national fiber network? (Indeed, it’s not been widely reported, but I’m told that earlier this year Google’s board also turned down a $15B+ partnership with DISH to build a new national wireless network.) Or when it appears “The Apple dream car might not happen” and “Google’s Self-Driving Car Project Is Losing Out to Rivals”?
Instead it appears that we may be shifting towards a model where the leading Silicon Valley companies work on new technology development and “give away the blueprints…so that anyone from local governments to Internet service providers can construct a new way to get Internet signals into hard-to-reach places”. Similarly, Google could “enable [rather] than do” in the field of self-driving cars. Whether that will lead to these technologies being commercialized remains to be seen, but it does mean that Facebook and Google won’t have to change their existing ways of working or radically increase their capital expenditures.
Undoubtedly some other Silicon Valley companies will end up trying to build their own self-driving cars. But after the (continuing) struggles of Tesla to ramp up, it seems more likely that most startups will end up partnering with or selling their technology to existing manufacturers instead. And similarly, in the telecom world, does anyone believe Google (or any other Silicon Valley company) is going to build a new national wireless broadband network that is competitive with AT&T, Verizon and Comcast?
It seems to me that about the best we could hope for is for Google to push forward the commercialization of new shared access/low cost frequency bands like 3.5GHz (e.g. as part of an MVNO deal with an existing operator) so that the wireless industry no longer has to spend as much on spectrum in the future and can deliver more data at lower cost.
However, that’s not necessarily all bad news. It seems almost quaint to look back a year or two at how wireless operators were reportedly “terrified” of Facebook and concerned about how Project Loon could “hand Google an effective monopoly over the Internet in developing countries.”
If Facebook and Google are now simply going to come up with clever technology to reduce network costs (rather than building rival networks) or even just act as a source of incremental demand for mobile data services, then that will be good for mobile operators. Those operators may just be “dumb pipes,” but realistically, despite Verizon’s (flailing) efforts, that’s pretty much all they could hope for anyway.
Last June, I pointed out that CTIA had taken the odd (but hardly surprising) decision to run away from its own data on US mobile data traffic growth in 2014, which showed only a 26% increase, in favor of an error-strewn Brattle Group paper that used Cisco’s February 2015 mobile VNI estimates to supposedly show the “FCC’s mobile data growth projection in 2010 was remarkably accurate.”
Now that Cisco has released its February 2016 mobile VNI report, CTIA’s attempt to bury its own data has backfired, because Cisco has retrospectively revised its prior estimates of North American mobile data traffic downwards by more than one third. Cisco’s estimate of end-of-year North American mobile data traffic in 2014 has been reduced from 562PB/mo in the Feb 2015 mobile VNI to only 360PB/mo in the new release, which makes it almost the same as the estimate of 2013 traffic in last year’s publication (345PB/mo). Similarly the estimate of 2015 traffic has been reduced from 849PB/mo in last year’s publication to only 557PB/mo this year, a loss of more than an entire year’s growth.
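The scale of the revision can be checked directly from the numbers quoted above (a sketch; the figures are Cisco’s North America monthly traffic estimates, in the units of Cisco’s reports):

```python
# Cisco North American mobile data traffic estimates (monthly, end of year)
est_2014_old, est_2014_new = 562, 360  # Feb 2015 VNI vs Feb 2016 VNI
est_2015_old, est_2015_new = 849, 557

cut_2014 = 1 - est_2014_new / est_2014_old  # ~36%, i.e. more than one third
cut_2015 = 1 - est_2015_new / est_2015_old  # ~34%
print(f"2014 estimate cut by {cut_2014:.0%}, 2015 estimate cut by {cut_2015:.0%}")
```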
The chart below highlights the impact of this massive revision to Cisco’s estimates, showing that when combined with previous revisions, the latest estimate of traffic in 2014 is less than half the figure projected by Cisco back in February 2010 (which was used by the FCC as one of the traffic estimates in its infamous October 2010 paper).
Ironically, Cisco’s latest revision actually brings its estimate of US mobile data traffic for 2014 (323PB/mo at the end of the year – from the online tool; the North America figure above includes Canada) below CTIA’s own total of 4.1EB for the year as a whole (i.e. an average of 340PB/mo), meaning that in fact CTIA would have been better off using its own data. Of course that wouldn’t have had the desired effect of making the FCC feel good about its hopelessly flawed 2010 paper and justifying CTIA’s own attempt to disinter this methodology and use it to claim a spectrum crisis is still imminent.
UPDATE (2/4): Oddly, in a blog post Cisco asserts that the revision was so that the “2014 baseline volume adjustment now aligns with CTIA’s figures.” That’s actually wrong: the Cisco estimate represents an end-of-year figure (the first line of the report states “Global mobile data traffic reached 3.7 exabytes per month at the end of 2015”) and given that the monthly traffic is growing (one assumes relatively consistently) through the year, it is implausible for the final month of the year to have less traffic than the average for the year as a whole. However, it is certainly very ironic that Cisco has chosen to (try and) align its traffic figures with CTIA at precisely the time when CTIA wants to bury its own numbers in favor of more optimistic growth estimates.
Specifically, Cisco’s latest VNI forecast shows that the FCC’s estimate that traffic would grow by 35 times between 2009 and 2014 (a reduction from Cisco’s own estimate of 48 times growth) was itself roughly 60% too high, because according to the latest Cisco data, North American traffic grew by only 22 times between 2009 and 2014 (unless of course Cisco has also made some undisclosed retrospective revision to its 2009 data).
As usual, Cisco has not provided any detailed explanation of this dramatic change in its numbers, though part of the cause seems to be a further increase in the assumption of offloaded traffic in 2014 and 2015: the latest forecast claims offloaded traffic on a global basis will grow from 51% in 2015 to 55% in 2020, whereas the previous version claimed growth from 45% in 2014 to 54% in 2019. However, this certainly can’t account for the scale of the change, and Cisco must have revised many other individual elements of its model in order to reduce its estimates by such a large amount.
Not content with disinterring the FCC’s infamous October 2010 working paper that most thought had been completely discredited five years ago, last month CTIA went on to commission Brattle Group to produce a new “updated” version of the FCC’s forecasts.
Ironically enough this new report confirms that the FCC was totally wrong in 2010, because the total amount of spectrum in use at the end of 2014 was only 348MHz, not the 822MHz that the FCC projected. Despite this clear demonstration of how ludicrous the original projections were, Brattle reuses the same flawed methodology, which ignores factors such as the fact that new cellsites are deployed for capacity, not coverage, and so the ability to support traffic growth is in no way proportional to the total number of cellsites in the US.
Now Verizon’s Q2 results, announced today, highlight another fundamental flaw in the methodology used by Brattle, in terms of the projected gains in spectral efficiency. Brattle assumes that the gain in spectral efficiency between 2014 and 2019 is based on the total amount of traffic being carried on 3G, 4G LTE and LTE+ technologies, so with 72% of US traffic in 2014 already carried on LTE, there is relatively little scope for further gains.
This is completely the wrong way to account for the data carrying capacity of a certain number of MHz of spectrum, since it is the share of spectrum used in each technology that is the critical factor, not the share of traffic. Verizon highlighted that only 40% of its spectrum is used for LTE at present, while 60% is still deployed for 2G and 3G, despite the fact that 87% of traffic is now carried on LTE. Of course once that 60% of 2G and 3G spectrum is repurposed to LTE, Verizon’s network capacity will increase dramatically without any additional spectrum being needed.
Brattle’s methodology would suggest that moving the rest of Verizon’s traffic to LTE would only represent a gain of 5% in capacity (assuming an improvement from 0.72bps/Hz to 1.12bps/Hz) but in fact moving all of Verizon’s spectrum to LTE would produce a gain of 27% in network capacity (and an even bigger improvement once LTE Advanced is considered). Adjusting for this error in the methodology reduces the need for more spectrum very sharply, and once it is considered that the incremental cellsites will be deployed to add capacity, not coverage, the need for additional spectrum above the current 645.5MHz is completely eliminated.
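The flaw can be illustrated with a toy calculation. This is a sketch only: the per-technology spectral efficiencies below are assumed round numbers for illustration, not Verizon’s actual figures. The point is structural: network capacity scales with (spectrum share × spectral efficiency), not with the share of traffic each technology happens to carry today.

```python
# Assumed spectral efficiencies (bps/Hz) -- illustrative values only
eff_legacy, eff_lte = 0.5, 1.5

lte_spectrum_share = 0.40  # Verizon: only 40% of spectrum on LTE today

# Capacity per MHz of total holdings, now vs after refarming everything to LTE
capacity_now = lte_spectrum_share * eff_lte + (1 - lte_spectrum_share) * eff_legacy
capacity_all_lte = 1.0 * eff_lte

gain = capacity_all_lte / capacity_now - 1
print(f"Capacity gain from refarming all spectrum to LTE: {gain:.0%}")
# With these assumed efficiencies the gain is ~67% -- an order of magnitude
# more than the ~5% that a traffic-share weighting would suggest, because
# 60% of the spectrum is still running at legacy efficiency.
```

The exact gain depends on the real efficiency numbers, but weighting by traffic share will always understate the refarming headroom whenever the legacy share of spectrum exceeds its share of traffic.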
Back in October 2012, despite their misleading press release, CTIA’s own data indicated that there had been a significant slowdown in data traffic growth and confirmed that the emperor/FCC Chairman had no clothes when talking about the non-existent spectrum crisis. Now it seems CTIA is at it again, releasing an error-strewn paper today on how the FCC’s October 2010 forecasts of mobile data traffic have supposedly proven to be “remarkably accurate.”
This groveling attempt to “renew the effort to bring more licensed spectrum to market” is clearly designed to distract from CTIA’s own release of its year end 2014 wireless industry survey results last week, which showed that US mobile data traffic only grew by 26% last year (from 3230PB in 2013 to 4061PB in 2014) compared to growth of 120% in 2013, a dramatic slowdown which CTIA conveniently ignores.
Instead CTIA is praising the “solid analytical foundation” of the FCC’s October 2010 paper, which was recognized at the time, by myself and others, to be fundamentally flawed. So perhaps it’s not so ironic that CTIA’s new paper mischaracterizes the data that the FCC used, stating that the forecasts “were remarkably accurate: In 2010, the FCC’s growth rate projections predicted mobile data traffic of 562 petabytes (PBs) each month by 2014; the actual amount was 563 PBs per month.”
Firstly, the FCC did not actually state an explicit projection of mobile data traffic, instead giving an assessment of growth from 2009 to 2014, as the (simple arithmetic) average of growth projections by Cisco, Yankee and Coda (the use of an arithmetic average is in itself erroneous in this context; a geometric average of multipliers should be used instead).
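The averaging point is easy to demonstrate: the arithmetic mean of growth multipliers is systematically pulled up by the highest forecast, whereas the geometric mean is the appropriate average for quantities that compound. (A sketch; the three multipliers below are placeholders for illustration, not the actual Cisco/Yankee/Coda projections.)

```python
from math import prod

# Three hypothetical 5-year growth multipliers (placeholders only)
multipliers = [47.0, 35.0, 25.0]

arithmetic_mean = sum(multipliers) / len(multipliers)
geometric_mean = prod(multipliers) ** (1 / len(multipliers))

print(f"arithmetic mean: {arithmetic_mean:.1f}x")  # pulled up by the outlier
print(f"geometric  mean: {geometric_mean:.1f}x")   # correct average for compounding growth
# By the AM-GM inequality the geometric mean is always lower unless all
# the multipliers are equal, so the FCC's method biases the forecast upwards.
assert geometric_mean < arithmetic_mean
```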
Secondly, the FCC was projecting US mobile data traffic, not North American data traffic, which is the source of the quoted 563PB per month (which is taken from Cisco’s February 2015 mobile VNI report). We can see the difference, because the February 2010 Cisco report (available here) projects growth for North America from 16.022PB/mo in 2009 to 773.361PB/mo in 2014, a multiplier of 4827%, whereas the FCC paper quotes Cisco growth projections of 4722% from 2009 to 2014. (The reason for the difference is that growth in Canada was expected to be faster than the US, because Canada was expected to partially catch up with the US in mobile data traffic per user over the period.)
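The 4827% multiplier quoted above follows directly from Cisco’s February 2010 North America figures:

```python
traffic_2009 = 16.022   # PB/month, North America, Cisco Feb 2010 VNI
traffic_2014 = 773.361  # PB/month, Cisco's Feb 2010 projection for 2014

multiplier_pct = traffic_2014 / traffic_2009 * 100
print(f"Projected growth: {multiplier_pct:.0f}%")  # 4827%, vs the 4722% the FCC quotes for the US
```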
If CTIA had bothered to look at Cisco’s mobile VNI tool, which gives data for major countries, it could have easily found out that Cisco estimates US mobile data traffic grew by 32 times between 2009 and 2014, not 35 times as the FCC forecast, let alone the 47 times that Cisco forecast back in February 2010.
Moreover, CTIA completely fails to mention that Cisco’s figure for 2014 (which according to the VNI tool is 531.7PB/mo for the US, rather than the 562.5PB/mo for North America that CTIA quotes) is completely different from (and far higher than) CTIA’s own data, which is based on “aggregated data from companies serving 97.8 percent of all estimated wireless subscriber connections” and so should obviously be far more accurate than Cisco’s estimates.
However, CTIA is instead running away from its own data, stating in a footnote to the new paper that:
“Note that participation in CTIA’s annual survey is voluntary and thus does not yield a 100 percent response rate from all service providers. No company can be compelled to participate, and otherwise participating companies can choose not to respond to specific questions. While the survey captures data from carriers serving a significant percentage of wireless subscribers, the results reflect a sample of the total wireless industry, and does not purport to capture nor reflect all wireless providers’ traffic metrics. CTIA does not adjust the reported traffic figures to account for non-responses.”
Compare that disclaimer to the report itself, which notes that “the survey has an excellent response rate” (of 97.8%) and that it is adjusted for non-responses (at least so far as subscribers are concerned):
“Because not all systems do respond, CTIA develops an estimate of total wireless connections. The estimate is developed by determining the identity and character of non-respondents and their markets (e.g., RSA/MSA or equivalent-market designation, age of system, market population), and using surrogate penetration and growth rates applicable to similar, known systems to derive probable subscribership. These numbers are then summed with the reported subscriber connection numbers to reach the total estimated figures.”
CTIA’s wireless industry survey states that total US mobile data traffic was 4061PB in 2014, equating to an average of 338.4PB/mo over the year. Even allowing for the fact that Cisco estimates end-of-year traffic, not year averages, it is hard to see how the CTIA number for Dec 2014 could be more than 400PB/mo, some 25% less than Cisco.
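That claim can be sanity-checked with a simple steady-growth model (a sketch: it assumes traffic grew smoothly month on month at CTIA’s reported 26% annual rate through 2014, which is of course an approximation):

```python
annual_total = 4061   # CTIA: total US mobile data traffic in 2014, PB
annual_growth = 1.26  # CTIA: 26% growth over 2013

r = annual_growth ** (1 / 12)  # implied month-on-month multiplier

# December's traffic d satisfies: sum over the 12 months of d / r^k = annual_total
december = annual_total / sum(r ** -k for k in range(12))
print(f"Implied Dec 2014 traffic: {december:.0f} PB/mo")  # ~375 PB/mo, well under 400
```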
If we instead compare growth estimated by CTIA’s own surveys (which only provide data traffic statistics back to 2010), then the four year growth from 388PB in 2010 to 4061PB in 2014 is a multiplier of 10.47 times, whereas the FCC model is a multiplier of 13.86 times (3506%/253%) and Cisco’s projection is a multiplier of 19.51 times (4722%/242%).
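The three multipliers can be reproduced from the figures quoted above:

```python
# Four-year growth, 2010-2014
ctia = 4061 / 388    # CTIA's own survey data: 2014 vs 2010 traffic, PB/year
fcc = 3506 / 253     # FCC model: cumulative growth % to 2014 vs % to 2010
cisco = 4722 / 242   # Cisco Feb 2010 projection, same construction

for name, mult in [("CTIA", ctia), ("FCC", fcc), ("Cisco", cisco)]:
    print(f"{name}: {mult:.2f}x")  # 10.47x, 13.86x, 19.51x respectively
```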
Thus by any rational and undistorted analysis, the FCC’s mobile data traffic growth projections have proven to be overstated. Likely reasons for this include the increasing utilization of WiFi (which was dismissed by the FCC paper, stating that “the rollout of such network architecture strategies has been slow to date, and its effects are unclear”) and the effect of dilution, as late adopters of smartphones use far fewer apps and less data than early adopters.
Nevertheless, what the data on traffic growth does confirm is that the FCC’s estimate of a 275MHz spectrum deficit by 2014 was utter nonsense. Network performance has far outpaced expectations, despite cellsite growth being far slower than predicted (3.9% compared to the 7% assumed in the FCC model) and large amounts of spectrum remaining unused: if we simply look at the Brattle paper prepared for CTIA last month, it’s easy to calculate that of the 645.5MHz of licensed spectrum identified by Brattle, at least 260MHz remains undeployed (12MHz of unpaired 700MHz, 10MHz of H-block, 65MHz of AWS-3, 40MHz of AWS-4, 20MHz of WCS, and all but around 40MHz of the 156.5MHz of BRS/EBS).
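The “at least 260MHz” figure is just the sum of the bands itemized above:

```python
# Licensed but undeployed spectrum, MHz (bands as itemized from the Brattle paper)
undeployed = {
    "unpaired 700MHz": 12,
    "H-block": 10,
    "AWS-3": 65,
    "AWS-4": 40,
    "WCS": 20,
    "BRS/EBS (156.5 less ~40 deployed)": 156.5 - 40,
}
total = sum(undeployed.values())
print(f"Total undeployed: {total}MHz")  # 263.5MHz, hence "at least 260MHz"
```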
Thus in 2014, the US didn’t require 822MHz of licensed spectrum as the FCC forecast (which would have increased to 861MHz if the FCC model was corrected to the supposed traffic growth of 32x, as estimated by Cisco, and the actual number of 298,055 cellsites, as reported by CTIA), but instead, as CTIA proclaims, US mobile operators enabled “Americans [to] enjoy the best wireless experience in the world” with less than 400MHz of actual deployed spectrum.
I’m told that after a fair amount of difficulty and a month or two of delay, Greg Wyler has now successfully secured commitments of about $500M to start building the OneWeb system, and he will announce the contract signing with Airbus at the Paris Air Show next week. The next step will be to seek as much as $1.7B in export credit financing from COFACE to support the project with an objective of closing that deal by the end of 2015.
This comes despite Elon Musk’s best efforts to derail the project, culminating in an FCC filing on May 29. That filing proposes the launch of 2 Ku-band test satellites in late 2016, which would presumably be aimed at ensuring OneWeb is forced to share the spectrum with SpaceX, as I predicted back in March.
Clearly Musk is not happy about the situation, since I’m told he fired Barry Matsumori, SpaceX’s head of sales and business development, a couple of weeks ago, after a disagreement over whether the SpaceX LEO project was attracting a sufficiently high public profile.
Most observers appear to think that Musk’s actions are primarily motivated by animus towards Wyler and question whether SpaceX is truly committed to building a satellite network (which is amplified by the half-baked explanation of the project that Musk gave in his Seattle speech in January, and the fact that I’m told SpaceX’s Seattle office is still little more than a sign in front of an empty building).
Google also demonstrated what appears to be a lack of enthusiasm for satellite, despite having invested $900M in SpaceX earlier this year, when its lawyers at Harris, Wiltshire & Grannis asked the FCC on May 20 to include a proposal for WRC-15 that consideration should be given to sharing all of the spectrum from 6GHz to 30GHz (including the Ku and Ka-bands) with balloons and drones (see pp66-81 of this document). Needless to say, this last minute proposal has met with furious opposition from the satellite industry.
However, one unreported but intriguing aspect of SpaceX’s application is the use of a large (5m) high power S-band antenna operating in the 2180-2200MHz spectrum band for communication with the satellites. Of course that spectrum is controlled by DISH, after its purchase of DBSD and TerreStar, and so it’s interesting to wonder whether SpaceX has sought permission from DISH to use that band, and if so, what interest Charlie Ergen might have in the SpaceX project.
Nevertheless, it looks like Wyler is going to win the initial skirmish, though there are still many rounds to play out in this fight. In particular, if Musk truly believes that the LEO project, and building satellites in general, are really going to be a source of profits to support his visions of traveling to Mars (as described in Ashlee Vance’s fascinating biography, which I highly recommend) then he may well invest considerable resources in pursuing this effort in the future.
If that’s the case, then the first to get satellites into space will have a strong position to argue to the FCC that they should select which part of the Ku-band spectrum they will use, and so Wyler will also have to develop one or more test satellites in the very near future. Fortunately for him, Airbus’s SSTL subsidiary is very well placed to develop such a satellite, and I’d expect a race to deploy in the latter part of 2016, with SpaceX’s main challenge being to get their satellite working, and OneWeb’s challenge being to secure a co-passenger launch slot in a very constrained launch environment.
The Independent Group has today (September 26) issued a new statement on MH370.
The previous statement dated September 9 is available here.
In summary, we continue to believe that the ‘most probable’ end point is located further to the south than any of the currently announced potential search areas.
The report I wrote jointly with LS Telcom on “Mobile spectrum requirements estimates: getting the inputs right” has now been published and is available here. This report critiques the ITU Speculator model, concluding that (as I noted earlier), the Speculator model traffic assumptions vastly exceed any reasonable traffic density that can be expected in 2020, even at events such as the Superbowl or World Cup Final.
The analysis in this report was also the basis of the poster on “Lies, Damn Lies and Mobile Statistics: Forecasting Future Demand for Wireless Spectrum” that I presented at TPRC42 in Arlington VA last weekend, and I was very gratified to receive an award for the best poster at the conference. If you are interested in discussing this work further, then please do get in touch.
The independent group analyzing the loss of MH370 has now issued a new statement, responding to the release of the June 26 ATSB report.
Last week’s Wall St Journal article and my blog post highlighted that the MH370 search area was poised to move to the southwest, and yesterday this shift was confirmed by Inmarsat.
Although the location of this new search area has not yet been released, the independent team that has been analyzing the publicly available data felt it was appropriate to provide a statement, given below, with our best estimate of the highest probability (but not the only possible) location for a potential search. In this way, we hope to provide information which can clearly be seen to be completely independent of any locations that might be published by the search team in the near future.
The statement is as follows:
Shortly after the disappearance of MH370 on March 8th, an informal group of people with diverse technical backgrounds came together on-line to discuss the event and analyze the specific technical information that had been released, with the individuals sharing reference material and their experience with aircraft and satellite systems. While there remain a number of uncertainties and some disagreements as to the interpretation of aspects of the data, our best estimates of a location of the aircraft at 00:11UT (the last ping ring) cluster in the Indian Ocean near 36.02S, 88.57E. This location is consistent with an average ground speed of approximately 470 kts and the wind conditions at the time. The exact location is dependent on specific assumptions as to the flight path before 18:38UT. The range of locations, based on reasonable variations in the earlier flight path, results in the cluster of results shown. We recommend that the search for MH370 be focused in this area.
We welcome any additional information that can be released to us by the accident investigation team that would allow us to refine our models and our predictions. We offer to work directly with the investigation team, to share our work, to collaborate on further work, or to contribute in any way that can aid the investigation. Additional information relating to our analysis will be posted on http://duncansteel.com and http://blog.tmfassociates.com. A report of the assumptions and approaches used to calculate the estimated location is being prepared and will be published to these web sites in the near future.
The following individuals have agreed to be publicly identified with this statement, to represent the larger collective that has contributed to this work, and to make themselves available to assist with the investigation in any constructive way. Other members prefer to remain anonymous, but their contributions are gratefully acknowledged. We prefer that contact be made through the organizations who have published this statement.
Brian Anderson, BE: Havelock North, New Zealand;
Sid Bennett, MEE: Chicago, Illinois, USA;
Curon Davies, MA: Swansea, UK;
Michael Exner, MEE: Colorado, USA;
Tim Farrar, PhD: Menlo Park, California, USA;
Richard Godfrey, BSc: Frankfurt, Germany;
Bill Holland, BSEE: Cary, North Carolina, USA;
Geoff Hyman, MSc: London, UK;
Victor Iannello, ScD: Roanoke, Virginia, USA;
Duncan Steel, PhD: Wellington, New Zealand.
Back in February, I wrote an article for GigaOm, questioning the unrealistic projections of future data traffic produced by the ITU Speculator model. Since then the conclusions of one of the studies I mentioned, conducted by Real Wireless for Ofcom in June 2013, have been amended to reduce the modeled traffic per sq km by a factor of 1000 (from 10 PB per sq km per month to 10 TB per sq km per month in suburban areas), by the simple expedient of changing the label on the chart axis in Figure 44. The new version of the report fails to give any explanation of why this thousandfold “error” occurred, or indeed how the new results are consistent with the ITU model (which of course does project traffic demand of petabytes per sq km per month by 2020).
Ofcom claimed by way of explanation, in a statement to PolicyTracker, that “since the report has served its purpose we do not plan to carry out any further work to update it,” but one therefore has to wonder exactly what that purpose was, if not to exaggerate future demand for mobile spectrum and/or shore up a model which even Ofcom now apparently considers to be in error by a factor of 1000.
Just to give another illustration of quite how badly wrong the Speculator model is, I thought it might be helpful to compare the predicted levels of traffic demand with that experienced during the Superbowl in 2014, which is documented in a Computerworld article from earlier this year. That article highlights that AT&T carried around 119 GB of traffic in the busiest hour of the game, while Verizon carried roughly 3 times as much as AT&T. Broadly, we can therefore estimate that the total amount of data traffic across all mobile networks in the busiest hour of what is widely viewed as the most extreme situation for mobile demand in the entire US (if not the whole world) is around 500GB in the square kilometer in and around the stadium (depicted in red below).
For comparison, the Speculator model projects that by 2020, the typical level of everyday demand that needs to be accommodated by mobile networks (excluding WiFi) in a dense urban public area will be 51 TB per hour per sq km, one hundred times more than the traffic level experienced in the busiest hour at the Superbowl in 2014.
When AT&T reports that data usage in the busiest hour of the game has increased by only a factor of four in the last 3 years, is it really credible to expect traffic at the Superbowl to increase by 100 times in the next 6 years? And even if traffic at the Superbowl itself grows by leaps and bounds, why should global spectrum allocations be set based on traffic in the busiest hour at the busiest location in the whole of the US? Clearly, a more rational conclusion is that the Speculator model is simply wrong, and cannot possibly be representative of typical scenarios for mobile data usage in 2020.
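The hundred-fold gap follows directly from the numbers above (a rough sketch; the 500GB total is my order-of-magnitude estimate across all operators, as described):

```python
att_busy_hour_gb = 119                       # AT&T, busiest hour, Superbowl 2014
verizon_busy_hour_gb = 3 * att_busy_hour_gb  # Verizon reportedly ~3x AT&T
all_networks_gb = 500                        # ~476GB for AT&T+Verizon, rounded up for others

speculator_tb_per_hr_km2 = 51  # ITU Speculator: dense urban demand projected for 2020

ratio = speculator_tb_per_hr_km2 * 1000 / all_networks_gb
print(f"Speculator demand is ~{ratio:.0f}x the 2014 Superbowl busy hour")  # ~100x
```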