First, congratulations to @KQED, and particularly to one of our top innovators, @TimOlsonSF and his team, on the launch of the very cool KQED Video web app for the Chrome browser. It’s accessible through the Google Chrome web store. KQED serves up its own productions through YouTube -- but in a very clean and unobtrusive implementation. My only complaint is that hitting the back button during playback dumps you out of the site, requiring you to re-enter it to view more.
Congratulations, too, to Jason Seiken and his team at PBS for the PBS Kids Video and PBS Kids PLAY! apps, and to Kinsey Wilson and his team at NPR for the NPR for Chrome browser. Job One will be to get my daughter in Wyoming to download Chrome for Mac and the PBS apps for my grandkids and the NPR app for herself and her husband.
Web apps are going to be huge, delivering in a web browser an experience that duplicates or even exceeds what one gets on a tablet.
Although I tried Chrome almost as soon as it came out, I hadn’t really spent much time with it until a few months ago. Now it’s up all the time, for reasons other than its app store -- largely its ability to do search in the URL window. The web apps it supports are sure to make it an even more frequent choice. --Dennis
P.S.: KQED’s app shows up as a “Popular” choice in the “News and weather” category.
Like most of you, I rely on searches via Google or Bing to make the web functional. I’ve been noticing more and more that I have to go deeper into search results than I once did to find links of value, especially when I’m trying to find objective information on products. Monetizers are getting smarter about search engine optimization, and big money is being invested in it. Too often, the early pages of results are a thicket of commercial exhortations and “content farm” garbage -- pages designed to provide just enough content to rise in the search ranks but that are really thinly disguised spam. An entrepreneur turned academic, Vivek Wadhwa [@vwadhwa], took this on in a TechCrunch post early this month:
… But it turns out that you can’t easily do such searches in Google any more. Google has become a jungle: a tropical paradise for spammers and marketers. Almost every search takes you to websites that want you to click on links that make them money, or to sponsored sites that make Google money. There’s no way to do a meaningful chronological search. …
Since then he’s written more about this elsewhere, and it has started quite a conversation in the blogosphere and tweetosphere. On Tuesday, Feb. 1, this will be the subject of a conference streamed live on BigThink.com from 10 a.m. to 2 p.m. Pacific Time. Wadhwa will be moderating a roundtable including reps from Google, Bing, and Blekko. --Dennis
How often do you see something like this on the web?
Loading Tweets seems to be taking a while. Twitter may be over capacity or experiencing a momentary hiccup. Try again or visit Twitter Status for more information.
I encounter this almost every week on Twitter’s web site, or some Twitter client I use cites API problems. I don’t mean to pick on Twitter; there are plenty of examples: Tumblr’s been down. Amazon has an outage. Comcast, too. WikiLeaks sympathizers have been harassing certain sites with DDoS attacks. Sunday-night scheduled maintenance on web sites and cellular systems makes it a good time to sit down and read a physical book.
I’ve spent a career running broadcast and distance learning telecom systems that are designed for many more “nines” of reliability than I perceive some of our major web operations to be achieving (99.9% = 526 minutes per year outage, 99.99% = 53 minutes per year outage). I’d be surprised if, in the aggregate, Twitter is even getting three nines.
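The “nines” arithmetic in the parentheses above is simple enough to sketch. Here’s a minimal calculation that reproduces those downtime figures:

```python
# Downtime implied by each "nines" availability level.
# 99.9% ~= 526 min/yr and 99.99% ~= 53 min/yr, as cited above.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct):
    """Minutes of allowable outage per year at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% availability -> {downtime_minutes_per_year(pct):.1f} min/yr")
```

Run it and the jump from three nines to four is stark: an order of magnitude less downtime for each added nine.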
If we’re serious about making web and mobile media competitive with broadcast media, then someone needs to figure out how to improve the reliability we’re getting today.
Update 14 December 2010: I'm "promoting" Stephen Hill's comments below on how difficult this level of reliability is to achieve. The point of my post was to suggest that we have a long way to go before we can put live radio on the web with the scale and reliability to which we've become accustomed in broadcasting -- and Stephen makes that point even better. --Dennis
_________________________
The rhetoric of this post betrays some of the differences between traditional broadcast infrastructure and the new era of web-based infrastructure.
Web services are built on hardware and network infrastructure, but operate entirely on software. Even highly standardized software like the Apache web server has hundreds of variables in setup and operation which increase entropy and decrease reliability. When you add custom software to create any kind of practical web service, the variables (and therefore the possible bugs) multiply exponentially.
I'm not a CTO or even close, but as a small web music service provider, we have been forced to grapple with the Inescapable Truths of Online Reliability, which go something like this:
1. Reliability is inversely proportional to complexity in a hardware/software system.
1A. The larger the number of users and/or the more functionally sophisticated the site, the more complex the hardware/software system must be -- and therefore the less reliable.
2. Reliability can be bought at a premium by adding additional servers, load balancing, "hot spares" and redundant functionality. However:
2A. Each increase in real or virtual (cloud) hardware and software makes the overall system more challenging to manage. More servers are also more attractive to attack and increase security issues unless the right preemptive steps are taken to defend them.
3. You can buy "five nines" of uptime (= 5 min/year of downtime) for a big premium, but it can never be 100% guaranteed. Each .9 increase in reliability will be roughly 5 to 10x more costly. Besides, all a guaranteed Service Level Agreement really gets you is a better attitude from the vendor and a credit when things inevitably fuck up.
4. The growth curve of the most highly visible and successful Internet sites (like Twitter) makes the problem of scaling infrastructure under load 100x more difficult.
5. It makes more sense to plan for minimizing recovery time after an outage, not preventing them completely.
6. Except for the goal of 100% reliability, broadcast infrastructure management practices are largely irrelevant. The valid comparison would be to an entire broadcast network, not a single station. Broadcast infrastructure remains at a relatively fixed size once operational regardless of the number of listeners/viewers, and can be optimized over time. Digital network infrastructure has to "scale" and change constantly over time to support millions of users and is much more difficult to optimize and manage.
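Points 1 and 2 in the list above can be made concrete with a little probability arithmetic: the availabilities of components a request must pass through in series multiply (so complexity hurts), while redundant components in parallel fail only when all of them fail at once. A small sketch, with illustrative numbers only:

```python
# Illustration of points 1 and 2 above (numbers are hypothetical).
# In series, availabilities multiply; in parallel, failure
# probabilities multiply.

def serial(availabilities):
    """Availability of components chained in series."""
    a = 1.0
    for x in availabilities:
        a *= x
    return a

def parallel(availabilities):
    """Availability of redundant components in parallel."""
    p_fail = 1.0
    for x in availabilities:
        p_fail *= (1 - x)
    return 1 - p_fail

# Five "three nines" components in a chain end up *worse* than any
# single part, while a redundant pair is far better than either alone.
print(f"5-component chain: {serial([0.999] * 5):.4%}")
print(f"redundant pair:    {parallel([0.999, 0.999]):.6%}")
```

The chain lands around 99.50%, the pair around 99.9999% -- which is exactly why redundancy is bought at a premium (point 2) and why each added component drags the whole system down (point 1).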
Considering the above, it is quite a remarkable achievement that some sites, like Google, Amazon, Flickr, Yahoo and Facebook, are as day-to-day reliable as they are. Twitter is a particularly troubled example of a site that has had difficulty keeping up with its growth.
BOTTOM LINE ON WEB RELIABILITY: Easy to say -- very, very hard to do.
It's nice to see that CPB is still investing in PBCore, the public media metadata standard. Today, it announced a 2.0 development project. The press release:
The Corporation for Public Broadcasting today announced the launch of the PBCore 2.0 Development Project. ¶ The PBCore 2.0 Development Project will expand the existing PBCore metadata standard to increase the ability, on one hand, of content producers and distributors using digital media to classify and describe public media content (audio and video) and, on the other, of audiences to find public media content on a variety of digital media and mobile platforms. ¶ The PBCore 2.0 Development Project will also work to enhance the PBCore standard to ensure that it will be able to satisfy the demands of multiplatform digital content as well as an evolving World Wide Web. Since PBCore’s development in 2005, it has become not only one of the most widely-used metadata standards in the world, but also the basis of other metadata standards. At the same time, in the last five years, the number of digital media applications that would benefit from PBCore has grown significantly. An updated PBCore will benefit not only public broadcasters, but all users of metadata standards based on PBCore. ¶ PBCore 2.0 will be managed by WGBH, AudioVisual Preservation Solutions and Digital Dawn. For more information on the PBCore 2.0 Development Project, please go to www.pbcore.org.
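To give a feel for what a descriptive metadata record of this kind looks like, here is a minimal sketch built with Python's standard XML library. The element names (pbcoreDescriptionDocument, pbcoreIdentifier, pbcoreTitle, pbcoreDescription) are drawn from the PBCore vocabulary as I recall it -- treat them as illustrative and check the authoritative schema at www.pbcore.org before relying on them.

```python
# Hypothetical minimal PBCore-style descriptive record.
# Element names are assumptions to be verified against pbcore.org.
import xml.etree.ElementTree as ET

doc = ET.Element("pbcoreDescriptionDocument")
ET.SubElement(doc, "pbcoreIdentifier", source="example.org").text = "episode-001"
ET.SubElement(doc, "pbcoreTitle").text = "A Sample Program"
ET.SubElement(doc, "pbcoreDescription").text = (
    "Descriptive metadata like this is what lets audiences find "
    "public media content across platforms."
)

print(ET.tostring(doc, encoding="unicode"))
```

It's records like this -- consistent, machine-readable descriptions of audio and video assets -- that let producers classify content once and have it surface across many platforms.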
The concept of a coming "digital dark age" has been around since the early years of the web, originally covering the possibility that content would become orphaned due to advances in file and storage formats. Andrei Codrescu updated the concept in an NPR commentary earlier this month. And, although he doesn't use the term, John C. Dvorak singles out search engine optimization as contributing to the death of the internet. He writes:
... The problem with the technique is that it ruins the search experience for users and also requires the search engine folks to constantly work on countermeasures to minimize the impact of SEO techniques. SEO techniques then adapt to the changes, and then begins round two, then three, etc. ...
In a post titled “RSS is the new BetaMax, says Apple sales numbers,” Steve Gillmor writes:
The data from the Apple earnings call illustrates the difficult time RSS will have staving off micromessaging. Most telling was the turf the iPhone took out of iPod sales. Every one of those iPhone sales, no matter whether they are the new 3GS or the $100 3G model, reduce the reasons for syncing to a Windows or Mac box in order to bring down podcasts. ¶ As realtime accelerates, streaming text and media services are much easier to leverage than slower ones such as RSS readers. ...