Image Dissectors



The Nature of the Internet

Simon Pitt | Internet | Sunday 26th April 2009

With the Internet being, basically, so big that numbers don't really work anymore, you might be excused for wondering how I'm going to provide any sort of meaningful review of it. Well, a good place to start would be to look at the most popular websites, to get an idea of what the Internet is mainly used for, if not what is mainly on it.

So, with that preamble, here is a list of the current top ten Internet sites. The list is relatively static - it takes something new and exciting to knock any of these off the top spots:
  1. Google
  2. Yahoo
  3. YouTube
  4. Facebook
  5. Windows Live
  6. Microsoft Network (MSN)
  7. Wikipedia
  8. Blogger
  9. MySpace
  10. Baidu
Not really any surprises there, except for Baidu. If you're reading this, chances are you haven't heard of it because it's the leading Chinese language search engine. In Chinese.

These may seem like a fairly random set of websites but there is something that holds them together: in a way, none of them really have anything on them. Four of them are search engines, so they get their content from the rest of the Internet. YouTube's content is provided by its users uploading their own videos. Facebook and MySpace are social networking sites. Wikipedia articles are written by members of the public and Blogger is written by users. MSN is an indexing site of the Internet. Yes, welcome to the world of user-generated content.

Of these top ten sites, none of them employ the people who create the content - if you like, they are more like content management systems. You could, if you wanted to, argue that they cynically manipulate users for their own gain. Take something like Facebook or YouTube - brands that are worth amounts so big they make your brain want to call in a sickie. These brands gain all of their worth from their users, with whom they do not share their vast, endless wealth. I'm not suggesting that they should, just observing that they don't.

In fact, it's not until you get to the 47th most popular website in the world (bbc.co.uk) that you actually reach a site where the people who run it generate the content themselves, rather than getting their users to generate it for free. And even then, that site has the mind-explodingly boring "have your say" section, where they pretty much beg their users to vomit their ill-informed and ignorant opinions all over their articles.

My thought for the day is that this is an inevitable part of the Internet, and is, in fact, created by part of its underlying structure.

In 1995, if you wanted to make a webpage, you had to create each page manually. This made sites like Wikipedia or Blogger effectively impossible, since there was no way you could have a body of users providing content. More than this, it made sites that presented vast arrays of content, like the OED or any shopping site, pretty much impossible as well. This was something of a problem, and so a succession of geeks worked on something called "PHP". This originally stood for "Personal Home Page", but that was too simple and understandable, so they changed it to the recursive mindfuck: "PHP: Hypertext Preprocessor". When combined with MySQL, web pages could be created dynamically from a database.

For example, the following piece of code will provide a near-infinite number of webpages:
<?php
// Assumes a database connection is already open.
$var = mysql_real_escape_string($_GET['q']);
$result = mysql_query("SELECT * FROM Database WHERE ID='$var'");
while ($row = mysql_fetch_array($result)) {
    echo $row['Content'];
}
?>
The site will take the "q" value from the web address, look up that value in a database and dump out the content that corresponds to that "q" value. (Don't worry if you don't understand that, by the way; just skip that bit and try to catch up later. There's a funny bit later on with a joke about "Human Resources".)

But, anyway, back to the point: suddenly, these few lines of code and a database have resulted in a huge and easily expandable dynamic website.

This was something of a revolution. Using PHP, a user could write one page of code, connect it to a database, and create a near-limitless number of webpages. This was brilliant for sites like Amazon, which could store all of their product information in a database and then spit it out when users searched for products. This, however, still relied on someone working for the site putting all the information into the database in the first place.

The real revolution came when site creators realised that if you allowed users to add information to the database as well, you didn't even have to bother creating the content either. All you had to do was create a generalised formula for the pages, and let them run themselves.
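
To make that concrete, here is a minimal sketch in the same style as the snippet above; the form field and table names are made up for illustration, and again a database connection is assumed to be open. A visitor fills in a form, the page shoves their text into the same database, and the generic page earlier will happily serve it back out to everyone else:
<?php
// Hypothetical companion page: whatever the visitor typed into a form
// (a field called "content" here) goes straight into the database.
// Assumes a database connection is already open.
$content = mysql_real_escape_string($_POST['content']);
mysql_query("INSERT INTO Database (Content) VALUES ('$content')");
echo "Thank you for your contribution.";
?>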

This is the principle behind pretty much all of the sites in the top ten - Facebook, MySpace, YouTube and Wikipedia all have a generalised PHP formula for an "article", and then user data populates the database. This principle isn't limited to the major sites, however. PHP is installed on around 20 million websites, and MySQL on around 11 million. If you're looking to understand the Internet, this might be a good way to start.

Yet there is something intrinsic about MySQL and PHP and the way they make you think that, I think, influences the nature of the Internet. It's a common observation from literary critics and those who write stuff about books that the form you write in (whether that's a novel, a short story or an episode of EastEnders) affects, at some level, what you choose to say and how you choose to say it. My suggestion is that PHP and MySQL work in the same way.

This way of writing websites is the ultimate in scalability; it is the ideal business model. At the most basic level, the site creator could create one generalised page, and from there the site could expand infinitely with, essentially, no additional effort. More than that, PHP and MySQL force you to think in generalised terms. The interface forces you to think of the actual content of the site, that which is at the heart of it, as pure data, and not to worry about what it actually is. Many computer scientists are, really, failed mathematicians, and this is something that appeals to mathematicians (since maths, at its heart, is about coming up with formulae that do everything, so you don't have to do any more maths) and to the sort of people who find the following joke funny:
How many members of a particular demographic does it take to change a lightbulb?
(N+1). 1 to change the lightbulb and N to act in a stereotypical manner.
In a way, this is treating users and humans as a commodity. This attitude is one that has been developing for some time. The term "human resources" is so ubiquitous in companies now that people tend to forget that "resources" are things that you "use up".

But my suggestion here is that this is taken to its logical conclusion on, and because of, the Internet. At its heart, if you manage sites, the world wide web encourages you to generalise and not to create content. This creates two levels on the Internet: those in control, who create simple generalised formulae to take advantage of the near-limitless "workforce", and those who actually provide the content. The Internet creates, in a way, a "virtual sweatshop", enslaving users to provide content for the myriad of sites that rely on user-generated content for their popularity.

And that's what's so great about the Internet.

SP


