How much video can the Internet bear?

April 29, 2008

Some people think that the rapidly growing amount of streaming video and peer-to-peer file exchange will bring the Internet to the verge of collapse. If you listen to Cisco, this won’t happen. In their White Papers “The Exabyte Era” and “Global IP Traffic Forecast and Methodology, 2006–2011” Cisco comes to the conclusion that video and peer-to-peer on the Net will continue to grow, but not enough to threaten the Internet. According to Cisco, P2P traffic’s share will even decrease from 62 percent (2006) to 30 percent (2011). At the same time Internet video will grow to 30 percent of consumer traffic.

However, Cisco leaves a door open for “traffic surprises”. The huge volume of video makes Internet traffic less predictable, which could lead to “unexpected scenarios” that may prove Cisco’s forecast “too conservative” (in their own words) or, put more bluntly, wrong (in my words).

Despite the uncertainties of Cisco’s predictions, the White Papers contain some interesting insights. According to Cisco, Internet video to PC via YouTube and other providers is only the first step. The next step will be Internet video to TV screen, and beyond 2015, the third wave of video traffic will be dominated by video communications. The bandwidth burden of high-definition video communications will be the most difficult to mitigate.

Mainly driven by Internet video, Cisco expects global IP traffic to grow to 29 exabytes per month by 2011, of which 17 exabytes will be related to video. Just in case you are wondering what exactly an exabyte is: it equals one quintillion bytes – that is a “one” with 18 “zeros”. To give you an idea of the order of magnitude: all words ever spoken by human beings could be stored in about 5 exabytes of data, according to the Berkeley study “How Much Information?” published in 2003.
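
Just to put that figure into perspective with a bit of back-of-the-envelope arithmetic: the few lines of Python below convert the forecast 29 exabytes per month into an average global throughput. The 30-day month is my own simplifying assumption, not a figure from the Cisco White Papers.

    # Back-of-the-envelope arithmetic: what average global throughput
    # corresponds to a forecast of 29 exabytes per month?
    # The 30-day month is a simplifying assumption, not a Cisco figure.

    EXABYTE = 10**18                    # one quintillion bytes
    monthly_traffic = 29 * EXABYTE      # forecast for 2011, in bytes
    seconds_per_month = 30 * 24 * 3600  # assumed 30-day month

    bytes_per_second = monthly_traffic / seconds_per_month
    terabits_per_second = bytes_per_second * 8 / 10**12

    print(f"{bytes_per_second:.2e} bytes/s, "
          f"about {terabits_per_second:.0f} Tbit/s on average")

The result is roughly 90 terabits per second as a continuous global average – and actual peak loads would, of course, be considerably higher than that.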

Apart from the question of how much video the Internet can bear, we should also ask ourselves how much video we humans can bear.


Surveillance on the rise

December 31, 2007

The year 2007 saw the further rise of surveillance and the decline of privacy worldwide. This is the main result of the recent Privacy & Human Rights Report by the US-based Electronic Privacy Information Center and the UK-based Privacy International. The survey included 47 countries, compared to 37 countries in the previous year. Some of the findings were to be expected; others are rather surprising.

The tightening of immigration controls in many countries, particularly the United States, was accompanied by the implementation of databases as well as identity and fingerprinting systems. In addition, the 2007 rankings show an increasing trend among governments to archive data on the geographic, communications and financial records of their citizens and residents.

The lowest-ranking countries among all those surveyed continue to be Malaysia, Russia and China. In terms of statutory protections and privacy enforcement, the US is the worst-ranking country in the democratic world. In terms of overall privacy protection, the United States has performed very poorly, being out-ranked by both India and the Philippines and falling into the category of endemic surveillance.

In view of the ongoing fight against terrorism, this is no surprise. More surprising, however, is that major European countries with a traditionally high level of privacy protection rank among those that have significantly increased surveillance and data collection.

The most telling example of this trend is Germany: in the 2007 rankings, Germany dropped from 1st to 7th place behind Portugal and Slovenia. The highest-ranking country in terms of privacy protection is now Greece, followed by Romania and Canada. The worst ranking EU country is the United Kingdom, which again fell into the “black” category of endemic surveillance along with Russia and Singapore.

There is not much hope that 2008 will be better in terms of privacy protection. Especially in Germany, a further decay of privacy protection is already under way. The discussion in democratic countries will continue on how far the privacy and freedom rights of citizens should be limited in order to protect a democratic society whose main purpose is the protection of each citizen’s rights.

The vision of a universal digital library

November 28, 2007

On 27 November 2007, Heise News reported that the Universal Digital Library, also known as the Million Book Collection, has by now digitised more than 1.2 million books. This is about one percent of all books. The non-commercial project has been driven by Carnegie Mellon University in cooperation with partners from China, India and Egypt (Bibliotheca Alexandrina). The financial support was mainly provided by the US National Science Foundation (NSF), while China and India provided about 2,000 man-years of labour. The mission of the Universal Digital Library (UDL) is to “foster creativity and free access to all human knowledge”. Thus, the digitised books are freely available on the UDL website.

The Web database of 1.2 million scanned books, as it is right now, is still far from being satisfactory. If you try to access it, you face – even with a decent broadband connection – long page loading times. I tried to find the autobiography of Benjamin Franklin and received, after a couple of minutes, a list of several editions of this book. In order to view one of these editions, it is necessary to download a proprietary viewer called DjVu, which is provided by LizardTech, a Seattle-based software company that is part of the Japanese technology company Celartem Technology Inc.

I see basically two problems with this: firstly, the software is not very user-friendly, and the graphical user interface does not offer high usability. Secondly, using proprietary software is not fully in line with the vision of permanently providing access to the scanned books. An immediate barrier is that you have to install the DjVu viewer in the first place, which may already be a problem for some Internet users. Furthermore, a proprietary viewer format could become outdated in a couple of decades.

In comparison, the approach by Project Gutenberg appears to be more perspicacious. They save every e-book in ASCII plain-text format, which should guarantee a relatively high readability of electronic documents irrespective of the operating system and the text viewer/editor used. Admittedly, the scope of Project Gutenberg is far more limited, as they currently have about 20,000 e-books in their Internet library – very little compared to the 1.2 million UDL books, but still very impressive considering that this is all based on volunteer work. Instead of several editions of Franklin’s autobiography I just get one, but this is completely sufficient for me, and the usability of an easily downloadable txt file is much higher than that of downloading books page by page, as in the case of DjVu.
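
To illustrate how low the technical barrier of plain text really is, here is a minimal Python sketch that fetches an e-book and prints its opening lines – no special viewer needed. The URL is only an assumed example; the actual location of Franklin’s autobiography on gutenberg.org may differ.

    # Minimal sketch: a plain-text e-book needs nothing beyond an HTTP
    # client and a text display. The URL is an assumed example location;
    # check gutenberg.org for the real path of the book you want.

    import urllib.request

    URL = "https://www.gutenberg.org/files/148/148-0.txt"  # assumed path

    with urllib.request.urlopen(URL) as response:
        text = response.read().decode("utf-8", errors="replace")

    print(text[:500])  # the first few hundred characters, readable as-is

Any HTTP client and any text editor would do the same job, which is exactly the kind of robustness that makes plain text so future-proof.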

There is another global library project emerging, the World Digital Library under the auspices of UNESCO. Its purpose is to provide free access via the Internet to important cultural documents from all over the world in a multilingual format. In October, a prototype of the Internet platform was presented, which looks impressive. However, this is still in an early phase, and it doesn’t replace the Universal Digital Library or Project Gutenberg.

In addition to these non-commercial projects there are also projects driven by commercial interests, most prominently Google Book Search. Finding Franklin’s autobiography there is also no problem, but in this case, I am referred to commercial online bookshops. A nice feature of Google Book Search is the map of places mentioned in the book. My vision for a global digital library is to have the scope of the Universal Digital Library and UNESCO’s World Digital Library combined with the perspicacious approach of Project Gutenberg and the usability of Google Book Search.

Milon Gupta