_Note: Welcome to Friday Fudge #3 of hopefully many to come._
As I was saying back in August of last year… YouTube is difficult to understand as a business model for a lot of folks. Heck, a lot of this Internet stuff is.
What’s changed since then? Not a lot, really. But… welcome to 2010 and the ever elusive (illusive?) promise of the P WORD.
Please wait… YouTube profitability for Google is -still- buffering
As anyone who remembers the buffering messages of early 90s video streaming technology will recall, the waiting is the hardest part. Just as the background colors in the Tom Petty & The Heartbreakers video hint at Google’s colorful logo, there is finally a hint at YouTube profitability.
In June 2009, frustration with waiting for profitability was on the rise, and coverage reflected as much. Indeed, another point of view at the time was to qualify just how bad the losses might be. Absent strong guidance from Google, it’s easy to see how analysts would begin to color YouTube differently as a result of that frustration. To understand the frustration is to understand the limits of traditional modeling in emerging services models, where there is no guidance on where revenues will emerge or on the underlying costs to deliver those services.
Early in the life of the YouTube acquisition, the revenue assumption was coarsely positioned as “running ads”. Years later, the sources of revenue are as varied as the content one might find on YouTube. Indeed, the past two years of rapid feature rollouts and development within YouTube showcase, and tantalize with, scenarios for far more than just “running ads” to generate revenue:
- Full feature-length movies
- Deep audience analytics
- Storage for HD content
- Large-screen formats
- Partner programs for news media
- Location-based services options
Beyond the less than specific origins of revenues, the specifics of the costs to deliver these services are also of concern. Backing into estimates of operational costs for properties such as Google’s YouTube or Amazon’s EC2/S3 etc… (with all due respect to the analysts) is a bit like trying to measure the speed of light in the times of Empedocles.
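To see why such reverse engineering is so fragile, consider a toy back-of-the-envelope model. Every figure below (server counts, per-server costs, bandwidth volumes and prices) is invented purely for illustration, not an actual Google or YouTube number; the point is only how far the resulting estimate swings when each unknown input is varied within a plausible range.

```python
# Hypothetical sensitivity sketch: none of these figures are real
# Google/YouTube numbers. They show how a reverse-engineered cost
# estimate can vary several-fold across plausible input guesses.

def annual_cost(servers, cost_per_server, bandwidth_pb, cost_per_pb):
    """Naive operating-cost model: hardware amortization plus bandwidth."""
    return servers * cost_per_server + bandwidth_pb * cost_per_pb

# "Low" and "high" ends of guessed input ranges (all assumptions).
low = annual_cost(servers=200_000, cost_per_server=1_000,
                  bandwidth_pb=100, cost_per_pb=50_000)
high = annual_cost(servers=500_000, cost_per_server=3_000,
                   bandwidth_pb=300, cost_per_pb=200_000)

print(f"low estimate:  ${low:,}")    # $205,000,000
print(f"high estimate: ${high:,}")   # $1,560,000,000
print(f"spread: {high / low:.1f}x")  # the estimate varies roughly 7-8x
```

With only four unknowns, each guessed within an ordinary range, the estimate already spans nearly an order of magnitude; the real cost structure of a titan-scale operator has many more unknowns than four.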
Titans of this size have remarkably nimble ways of disrupting assumed models for calculating costs. These are companies not only regarded for their impact on scaling problems but also for the heft of the footprint they place on sourcing avenues. Indeed, it might be argued that such special cases are the outliers in the sweeping graphs for a given pricing model.
So where is the intercept?
The speed at which technology platforms are turned over in scale problems as large as those faced by these Internet titans represents a drastic shift in thinking. A better assumption might be to envision purchasing patterns in ways that are neither enterprise nor carrier. In other words, burn your existing models.
Instead, focus should be placed on several other aspects:
- Location: shared network computing models within multi-use data centers, plus novel peering relationships. While the size of the plate is a sensational metric, it is more important to consider the mix of business made possible by location selection.
- Goals: the business goals for allocated infrastructure. It is important to remember that a Google vs. Amazon sizing could be close in raw numbers yet not applicable in estimates, since the business goals differ for each.
- Multi-use: unique business mixing. These are drastically different companies in how they use their facilities; no enterprise pattern fits, and a collection of data centers does not create a carrier model.
Lastly, there has been great speculation about what Apple will do with its planned North Carolina datacenter. Of note: as of now, the only thing to do is speculate, and assume that it is an excellent location. No concrete goals or multi-use disclosures have been made by Apple at this time.
In conclusion, any model built on enterprise or carrier assumptions, however those assumptions are tuned, leaves most analyst predictions far from accurate. When approaching the scale and vibrant refresh requirements of the current crop of Internet infrastructure titans, it is best to stick with public financial reporting, avoid reverse engineering the secret-sauce costs, and wait for the revenue numbers to be shared.