MyBrain.net

The colonization of real-time and other trends in Web 2.0

The neurological turn in recent web criticism exploits “the obsession with anything related to the mind, brain and consciousness”. Geert Lovink turns the discussion to the politics of network architecture, exploring connections between the colonization of real-time and the rise of the national web.

“Sociality is the capacity of being several things at once.”
G. H. Mead

 

Web 2.0 has three distinguishing features: it is easy to use; it facilitates the social element; and users can upload their own content in whatever form, be it pictures, videos or text. It is all about providing users with free publishing and production platforms. The focus on how to make a profit from free user-generated content came in response to the dotcom crash. At the height of dotcom mania all attention was focused on e-commerce. Users were first and foremost potential customers. They had to be convinced to buy goods and services online. This is what was supposed to be the New Economy. In 1998 the cool cyberworld of geeks, artists, designers and small entrepreneurs got bulldozed overnight by “the suits”: managers and accountants who were after the Big Money provided by banks, pension funds and venture capital. With the sudden influx of business types, hip cyberculture suffered a fatal blow and lost its avant-garde position for good. In a surprising turn of events, the hyped-up dotcom entrepreneurs left the scene equally fast when, two years later, the New Economy bubble burst. Web 2.0 cannot be understood outside of this context: as the IT sector takes over the media industry, the cult of “free” and “open” is nothing but ironic revenge on the e-commerce madness.


During the post-9/11 reconstruction period, Silicon Valley found renewed inspiration in two projects: the vital energy of the search start-up Google (which successfully managed to postpone its IPO for years), and the rapidly emerging blog scene, which gathered around self-publishing platforms such as blogger.com, Blogspot and LiveJournal. Both Google’s search algorithm and Dave Winer’s RSS invention (the underlying blog technology) date back to 1997-98, but managed to avoid the dotcom rage until they surfaced to form the duo-core of the Web 2.0 wave. Whereas blogging embodied the non-profit, empowering aspect of personal responses grouped around a link, Google developed techniques that enabled it to parasitize other people’s content, a.k.a. “organizing the world’s information”. The killer app turned out to be personalized ads. What is sold is indirect information, not direct services. Google soon discovered it could profit from free information floating around on the open Internet, anything from amateur video to news sites. The spectacular rise of user-generated content has been fuelled by the IT industry, not the media sector. Profit is no longer made at the level of production, but through the control of distribution channels. Apple, Amazon, eBay and Google are the biggest winners in this game.

Let’s discuss some recent Web 2.0 criticism. I leave out justified privacy concerns as addressed by danah boyd and others, in part because they have already received wide coverage. Andrew Keen’s The Cult of the Amateur (2007)[1] has been regarded as one of the first critiques of the Web 2.0 belief system. “What happens”, Keen asks, “when ignorance meets egoism meets bad taste meets mob rule? The monkey takes over.” When everyone broadcasts, no one is listening. In this state of “digital Darwinism” only the loudest and most opinionated voices survive. What Web 2.0 does is “decimate the ranks of our cultural gatekeepers”. Whereas Keen could still be read as a grumpy and jealous response from the old media class, this is no longer the case with Nicholas Carr’s The Big Switch (2008),[2] in which he analyses the rise of cloud computing. For Carr this centralized infrastructure signals the end of the autonomous PC as a node within a distributed network. The last chapter, entitled “iGod”, indicates a “neurological turn” in net criticism. Starting from the observation that Google’s intention has always been to turn its operation into an Artificial Intelligence, “an artificial brain that is smarter than your brain” (Sergey Brin), Carr turns his attention to the future of human cognition: “The medium is not only the message. The medium is the mind. It shapes what we see and how we see it.” With the Internet stressing speed, we become the Web’s neurons: “The more links we click, pages we view, and transactions we make, the more intelligence the Web makes, the more economic value it gains, and the more profit it throws off.”

In his famous 2008 Atlantic Monthly essay “Is Google making us stupid? What the Internet is doing to our brains”, Carr takes this argument a few steps further and argues that constant switching between windows and sites and frantic use of search engines will ultimately dumb us down. Is it the responsibility of individuals to monitor their Internet use so that it does not have a long-term impact on their cognition? In its extensive coverage of the ensuing debate, Wikipedia refers to Sven Birkerts’ 1994 study The Gutenberg Elegies: The Fate of Reading in the Electronic Age, and the work of developmental psychologist Maryanne Wolf, who pointed out the loss of “deep reading” capacity. Internet-savvy users, she states, seem to lose the ability to read and enjoy thick novels and comprehensive monographs. Carr’s next book, The Shallows: What the Internet Is Doing to Our Brains, will appear in 2010. Carr and others cleverly exploit the Anglo-American obsession with anything related to the mind, brain and consciousness – mainstream science reporting cannot get enough of it. A thorough economic (let alone Marxist) analysis of Google and the free and open complex is seriously uncool. It seems that the cultural critics will have to sing along with the Daniel Dennetts of this world (loosely gathered on edge.org) in order to communicate their concerns.

The impact on the brain is an element picked up by the Frankfurter Allgemeine Zeitung editor and Edge member Frank Schirrmacher in his book-length essay Payback (2009).[3] Whereas Carr’s take on the collapse of the white male’s multi-tasking capacities had the couleur locale of a US IT-business expert a.k.a. East Coast intellectual, Schirrmacher moves the debate into the continental European context of an aging middle class driven by defensive anxiety over Islamic fundamentalism and Asian hypermodernity. Like Carr, Schirrmacher seeks evidence of a deteriorating human brain that can no longer keep up with iPhones, Twitter and Facebook on top of the already existing information flows from television, radio and the printed press. We are on permanent alert and have to submit to the logic of constant availability and speed. Schirrmacher speaks of “I exhaustion”. Most German bloggers responded negatively to Payback. Apart from factual mistakes, what concerned them most was Schirrmacher’s implicit anti-digital cultural pessimism (something he denies) and the conflict of interest between his role as newspaper publisher and as critic of the zeitgeist. Whatever the cultural media agenda, Schirrmacher’s call will be with us for quite some time. What place do we want to give digital devices and applications in our everyday life? Will the Internet overwhelm our senses and dictate our worldview? Or will we have the will and vision to master the tools?

The latest title in a growing collection is virtual reality pioneer Jaron Lanier’s You Are Not a Gadget (2010),[4] which asks: “What happens when we stop shaping the technology and technology starts shaping us?” Much like Andrew Keen, Lanier defends the individual, pointing to the dumbing-down effect of the “wisdom of the crowd”. In Wikipedia, unique voices are suppressed in favour of mob rule. This also crushes creativity: Lanier asks why the past two decades have not resulted in new music styles and subcultures, and blames the strong emphasis on retro in contemporary, remix-dominated music culture. Free culture not only decimates the income of performing artists, it also discourages musicians from experimenting with new sounds. The democratization of digital tools has not led to the emergence of “super-Gershwins”. Instead, Lanier sees “pattern exhaustion”, a phenomenon in which a culture runs out of variations on traditional designs and becomes less creative: “We are not passing through a momentary lull before a storm. We have instead entered a persistent somnolence and I have come to believe that we will only escape it when we kill the hive.”

Thierry Chervel of the German cultural aggregator Perlentaucher writes: “According to Schirrmacher the Internet grinds down the brain and he wants to regain control. But that is no longer possible. The revolution eats its children, fathers, and those who detest it.”[5] If you do not want to go into complaint mode you end up celebrating the “end of control”. The discussion will eventually have to shift to who is in charge of the Internet. The Internet and society debate should be about the politics and aesthetics of its network architecture and not be “medicalized”. So instead of repeating what the brain faction proclaims, I would like to turn to trends that need equal attention. Rather than mapping the mental impact and wondering whether something can be done to tame the net’s influence, or discussing over and again the fate of the news and publishing industries, let us study the emerging cultural logic (such as search). Let us dig into the knowledge production of Wikipedia, and study the political forces that operate outside of the mainstream structures. Let us look at new forms of control.

The colonization of real-time

There is a fundamental shift away from the static archive towards the “flow” and the “river”. Protoblogger Dave Winer promotes it on Scripting News[6] and Nicholas Carr writes sceptical notes about it in his blog series The Real Time Chronicles.[7] We see the trend popping up in metaphors like Google Wave. Twitter is the most visible symptom of this transitory tendency. Who responds to yesterday’s references? History is something to get rid of. Silicon Valley is gearing up for the colonization of real-time, away from the static web “page” that still refers to the newspaper. Users no longer feel the need to store information and the “cloud” facilitates this liberating movement. If we save our files at Google or elsewhere, we can get rid of the clumsy, all-purpose PCs. Away with the ugly grey office furniture. The Web has turned into an ephemeral environment that we carry with us, on our skin. Some have even said goodbye to the very idea of “search” because it is too time-consuming an activity, often with unsatisfactory outcomes. This could, potentially, be the point at which the Google empire starts to crumble – and that is why they are keen to be at the forefront of what French philosopher of speed Paul Virilio described a long time ago: these days, live television is considered too slow, as news presenters turn to Twitter for up-to-the-second information. Despite all the justified calls for “slow communication”, the market is moving in the opposite direction. Soon, people may not have time to pull some file from a dusty database. Much like in finance, the media industry is exploring possibilities to maximize surplus value from the exploitation of milliseconds. But unlike hedge funds, this is a technology for all. Profits will only grow if the colonization of real-time is employed on a planetary scale.

Take Google Wave. It merges e-mail, instant messaging, wikis and social networking. Wave integrates the feeds of Facebook, Twitter and other accounts into one real live event happening on the screen. It is a meta online tool for real-time communication. Seen from your “dashboard”, Wave looks like you are sitting on the banks of a river, watching the current. It is no longer necessary to approach the PC with a question and then dive into the archive. The Internet as a whole is going real time in an attempt to come closer to the messiness, the complexities of the real-existing social world. But one step forward means two steps back in terms of design. Just look at Twitter, which resembles ASCII email and SMS messages on your 2001 cell phone. To what extent is this visual effect conscious? The raw, typo-ridden HTML look may be less a technical imperfection than a pointer to the unfinishedness of the Eternal Now in which we are caught. There is simply no time to enjoy slow media. Back in Tuscany mode, it is nice to lie back and listen to the offline silence, but that is reserved for quality moments.

The pacemaker of the real-time Internet is “microblogging”, but we can also think of the social networking sites and their urge to pull as much real-time data out of their users as possible: “What are you doing?” Give us your self-shot. “What’s on your mind?” Expose your impulses. Frantically updated blogs are part of this inclination, as are frequently updated news sites. The driving technology behind this is the constant evolution of RSS feeds, which makes it possible to get instant updates of what’s happening elsewhere on the web. The proliferation of mobile phones plays a significant background role in “mobilizing” your computer, social network, video and photo camera, audio devices, and eventually also your TV. The miniaturization of hardware combined with wireless connectivity makes it possible for technology to become an invisible part of everyday life. Web 2.0 applications respond to this trend and attempt to extract value out of every situation we find ourselves in. The Machine constantly wants to know what we think, what choices we make, where we go, who we talk to.
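For readers who want the mechanism behind the metaphor: what follows is a minimal sketch of how a client turns RSS into a real-time stream, simply by polling a feed and surfacing entries it has not yet seen. The feed URL and the polling interval are illustrative assumptions, not any particular service’s settings.

```python
# A minimal sketch of RSS-driven "instant updates": poll a feed and
# print entries that have not been seen before. Real-time services
# shorten the interval or switch to push protocols, but the shape of
# the update stream is the same.
import time

import feedparser  # third-party library: pip install feedparser

FEED_URL = "https://example.org/feed.rss"  # hypothetical feed address
seen = set()  # ids of entries already shown


def poll(url: str) -> None:
    """Fetch the feed once and print any entries not yet seen."""
    for entry in feedparser.parse(url).entries:
        uid = entry.get("id") or entry.get("link")
        if uid and uid not in seen:
            seen.add(uid)
            print(entry.get("title", "(untitled)"))


if __name__ == "__main__":
    while True:
        poll(FEED_URL)
        time.sleep(60)  # one minute; "real-time" is just a smaller number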

There is no evidence that the world is becoming more virtual. The cyber-prophets were wrong here. The virtual is becoming more real. It wants to penetrate and map out our real lives and social relationships. We are no longer encouraged to act out some role, but forced to be “ourselves” (which is no less theatrical or artificial). We constantly log in and create profiles in order to present our “selves” on the global marketplace of employment, friendship and love. We can have multiple passions but only one certified ID. Trust is the oil of global capitalism and the security state, required by both sides in any transaction or exchange. In every rite de passage, the authorities must trust us before they let both our bodies and information through. The old idea that the virtual is there to liberate you from your old self has collapsed. It is all about self-management and techno-sculpturing: how do you shape the self in real-time flow? There is no time for design, no time for doubt. System response cannot deal with ambivalence. The self that is presented here is post-cosmetic. The ideal is to become neither the Other nor the better human. Mehrmensch, not Übermensch. The polished perfect personality lacks empathy and is straight-out suspect. It is only a matter of time until super persons such as celebrities reveal their weaknesses. Becoming better implies revealing who you are. Social media invite users to “administer” their all-too-human sides beyond merely hiding or exposing controversial aspects. Our profiles remain cold and unfinished if we do not expose at least some aspects of our private lives. Otherwise we are considered robots, anonymous members of a vanishing twentieth-century mass culture. In Cold Intimacies, Eva Illouz puts it this way: “It is virtually impossible to distinguish the rationalization and commodification of selfhood from the capacity of the self to shape and help itself and to engage in deliberation and communication with others.”[8]

Every minute of life is converted into “work”, or at least availability, by a force exerted from the outside. That is the triumph of biopolitical interpretations of informational capitalism. At the same time, we appropriate and incorporate technology into our private lives, a space of personal leisure, aiming to create a moment for ourselves. How do we balance the two? It seems an illusion to speed up and slow down simultaneously, but this is exactly how people lead their lives. We can outsource one of the two and deal with either speedy or slow tasks according to our character, skill set, and taste.

Netizens and the rise of extreme opinions

Where has the rational and balanced “netizen” gone, the well-behaved online citizen? The Internet seems to have become an echo chamber for extreme opinions. Is Web 2.0 getting out of control? At first glance, the idea of the netizen is a mid-1990s response to the first wave of users that took over the Net. The netizen moderates, cools down heated debates, and above all responds in a friendly, non-repressive manner. The netizen does not represent the Law, is no authority, and acts like a personal advisor, a guide in a new universe. The netizen is thought to act in the spirit of good conduct and corporate citizenship. Users were to take social responsibility themselves – it was not a call for government regulation and was explicitly designed to keep legislators out of the Net. Until 1990, the late academic stage of the Net, it was presumed that all users knew the rules (also called netiquette) and would behave accordingly. (On Usenet there were no “netizens”: everyone was a pervert.) Of course this was not always the case. When misbehaviour was noticed, the individual could be convinced to stop spamming, bullying, etc. This was no longer possible after 1995, when the Internet opened up to the general public. Because of the rapid growth of the World Wide Web, with the browsers that made it so much easier to use, the code of conduct developed over time by IT engineers and scientists could no longer be passed on from one user to the next.

At the time, the Net was seen as a global medium that could not easily be controlled by national legislation. Perhaps there was some truth in this. Cyberspace was out of control, but in a nice and innocent way. The image of a task force, installed in a room next to the office of the Bavarian prime minister to police the Bavarian part of the Internet, was endearing and somewhat desperate. At the time we had a good laugh about this predictably German measure.

9/11 and the dotcom crash cut the laughter short. Over a decade later, there are reams of legislation, entire governmental departments and a whole arsenal of software tools to oversee the National Web, as it is now called. Retrospectively, it is easy to dismiss the rational “netizen” approach as a libertarian Gestalt, a figure belonging to the neo-liberal age of deregulation. However, the issues the netizen was invented to address have grown exponentially, not gone away. These days we would probably frame them as a part of education programs in schools and as general awareness campaigns. Identity theft is a serious business. Cyberbullying amongst children does happen and parents and teachers need to know how to identify and respond to it. Much as in the mid-1990s, we are still faced with the problem of “massification”. The sheer number of users and the intensity with which people engage with the Internet is phenomenal. What perhaps has changed is that many no longer believe that the Internet community can sort out these issues itself. The Internet has penetrated society to such an extent that the two have become one and the same.

In times of global recession, rising nationalism, ethnic tension and collective obsession with the Islam Question, comment cultures inside Web 2.0 become a major concern for media regulators and the police. Blogs, forums and social networking sites invite users to leave behind short messages. It is particularly young people who react impulsively to (news) events, often posting death threats to politicians and celebrities without realizing what they have just done. The professional monitoring of comments is becoming a serious business. To give some Dutch examples: Marokko.nl has to oversee 50,000 postings on a daily basis, and the right-wing Telegraaf news site gets 15,000 comments on its selected news items daily. Populist blogs like Geen Stijl encourage users to post extreme judgments – a tactic proven to draw attention to the site. Whereas some sites have internal policies to delete racist remarks, death threats and libellous content, others encourage their users in this direction, all in the name of free speech.

Current software enables users to leave behind short statements, often without giving others the possibility to respond. Web 2.0 was not designed to facilitate debate. The “terror of informality” inside “walled gardens” like Facebook is increasingly becoming a problem. If the Web goes real-time, there is less time for reflection and more technology that facilitates impulsive blather. This development will only invite authorities to interfere further in online mass conversations. Will (interface) design bring a solution here? Bots play an increasing role in the automated policing of large websites. But bots merely work in the background, doing their silent jobs for the powers-that-be. How can users regain control and navigate complex threads? Should they unleash their own bots and design tools in order to regain their “personal information autonomy”, as David d’Heilly once put it?
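To make the figure of the moderation bot less abstract, here is a minimal sketch of the kind of silent background job described above: a filter that holds comments matching a blocklist for human review. The patterns and the comment queue are invented for illustration; real systems at the scale of Marokko.nl or Telegraaf combine such rules with human moderators.

```python
# A minimal sketch of a comment-policing bot: hold for review any
# posting that matches a blocklist pattern, publish the rest.
# The patterns below are illustrative assumptions, not a real policy.
import re

BLOCKLIST = [r"\bkill\b", r"\bdeath threat\b"]  # hypothetical patterns
PATTERNS = [re.compile(p, re.IGNORECASE) for p in BLOCKLIST]


def needs_review(comment: str) -> bool:
    """True if the comment matches any blocklisted pattern."""
    return any(p.search(comment) for p in PATTERNS)


incoming = [
    "Interesting article, thanks.",
    "Someone should kill that politician.",
]
for comment in incoming:
    verdict = "held for review" if needs_review(comment) else "published"
    print(f"{verdict}: {comment}")
```

The point of the sketch is the symmetry in the argument above: the same few lines can run in the background for the powers-that-be, or be unleashed by users themselves in defence of their “personal information autonomy”.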

The rise of the national web

Web 2.0 can be seen as a specific Silicon Valley ideology. It also simply means a second stage of Internet development. Whereas control over content may have vanished, control within the nation state is on the rise. Due to the rise of the worldwide Internet user base (now at 1.7 billion), the focus has shifted from the global potential towards local, regional and national exchanges. Most conversations are no longer happening in English. A host of new technologies are geo-sensitive. The fact that 42.6 per cent of Internet users are located in Asia says it all. Only around 25 per cent of content is in English these days. Such statistical data represent the true Web 2.0. What people care about first and foremost is what happens in their immediate surroundings – and there is nothing wrong with that. This was predicted in the Nineties; it just took a while to happen. The background of the “national web” is the development of increasingly sophisticated tools to oversee the national IP range (the IP addresses allocated to a country). These technologies can be used in two directions: to block users outside the country from viewing, for example, national television online, or visiting public libraries (such as in Norway and Australia, in the case of new ABC online services). They can also prevent citizens from visiting foreign sites (mainland Chinese residents are not able to visit YouTube, Facebook, etc.). In a recent development, China is now exporting its national firewall technology to Sri Lanka, which intends to use it to block the “offensive websites” of exile Tamil Tiger groups.
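The technical core of “overseeing the national IP range” is simpler than the politics around it. Below is a minimal sketch, using invented documentation ranges in place of a country’s actual allocation: the same membership test blocks outsiders from a national service or insiders from a foreign one, depending only on which side of the request it is applied to.

```python
# A minimal sketch of IP-range filtering, the mechanism behind the
# "national web". The CIDR blocks are documentation ranges standing
# in for a real national allocation, which registries publish per country.
import ipaddress

NATIONAL_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # hypothetical allocation
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical allocation
]


def is_domestic(client_ip: str) -> bool:
    """True if the address falls inside the national IP range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in NATIONAL_RANGES)


# Direction one: a national broadcaster refusing foreign viewers.
print(is_domestic("192.0.2.7"))     # False -> deny the stream
# Direction two: a national firewall stopping domestic users at the border.
print(is_domestic("203.0.113.42"))  # True -> apply the blocklist
```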

The massive spread of the Internet really only happened in the past 5 to 10 years. The Obama campaign was a significant landmark in this process. Representation and participation, in this context, are outworn concepts. “Democratization” means that firms and politicians have a goal and then invite others to contribute to it. In this age of large corporations, big NGOs and governmental departments, it is all too easy to deploy Web 2.0 strategies as a part of your overall communication plan. True, open-knowledge-for-all has not arrived everywhere yet – and there is still a role to play for the Web 2.0 consultant. But Web 2.0 is certainly no longer an insider tip. A lot is already known about web demographics, usability requirements and what application to use in what context. One would not use MySpace to approach senior citizens. It is known that young people are reluctant to use Twitter – it just isn’t their thing.

These are all top-down considerations. It gets more interesting if you ask the Netizen 2.0 question. How will people themselves start to utilize these tools bottom-up? Will activists start to use their own Web 2.0 tools? Remember that social networking sites did not originate in a social movement setting. They were developed as post-dotcom responses to the e-commerce wave of the late 1990s, which had no concept of what users were looking for online. Instead of being regarded merely as consumers of goods and services, Web 2.0 users are pressed to produce as much data as possible. Profiles are abstracted from so-called “user-generated content” and then sold to advertisers as direct-marketing data. Users do not experience the parasitic nature of Web 2.0 immediately.

From a political point of view, the rise of national webs is an ambivalent development. In design terms it is all about localization of fonts, brands and contexts. Whereas communicating in one’s own language and not having to use Latin script keyboards and domain names can be seen as liberating, and necessary for bringing on board the remaining 80 per cent of the world’s population that is not yet using the Internet, the new digital enclosure also presents a direct threat to the free and open exchanges the Internet once facilitated. The Internet turns out to be neither the problem nor the solution for the global recession. As an indifferent bystander, it does not lend itself easily as a revolutionary tool. It is part of the Green New Deal, but is not driving these reforms. Increasingly, authoritarian regimes such as Iran are making tactical use of the Web in order to crack down on the opposition. Against all predictions, the Great Chinese Firewall is remarkably successful in keeping out hostile content, whilst monitoring the internal population on an unprecedented scale. It proves that power these days is not absolute but dynamic. It is all about control of the overall flow of the population. Dissidents with their own proxy servers that help to circumvent the Wall remain marginal as long as they cannot transport their “memes” into other social contexts. As the jargon says: regardless of your size or intent, it is all about governmentality, how to manage complexity. The only way to challenge this administrative approach is to organize: social change is no longer techno warfare between filters and anti-filters, but a question of “organized networks” that are able to set events in motion.

[1] Andrew Keen, The Cult of the Amateur, Nicholas Brealey Publishing: London, 2007.

[2] Nicholas Carr, The Big Switch, W.W. Norton & Company: New York, 2008.

[3] Frank Schirrmacher, Payback, Blessing Verlag: Munich, 2009. See also: "Die Politik des iPad" ["The politics of the iPad"], http://www.faz.net/s/Rub475F682E3FC24868A8A5276D4FB916D7/Doc~E4C9B52F05C0C4D6AA6E031D952812B10~ATpl~Ecommon~Scontent.html.

[4] Jaron Lanier, You Are Not a Gadget, Alfred A. Knopf: New York, 2010. See also responses to Lanier's Edge.org essay "Digital Maoism", http://www.edge.org/discourse/digital_maoism.html.

[5] Thierry Chervel, "Fantasie über die Zukunft des Schreibens" ["Fantasy on the future of writing"], http://www.perlentaucher.de/blog/134_fantasie_ueber_die_zukunft_des_schreibens#521.

[6] Dave Winer, Scripting News, http://www.scripting.com/.

[7] Nicholas Carr, The Real Time Chronicles, http://www.roughtype.com/index.php.

[8] Eva Illouz, Cold Intimacies: The Making of Emotional Capitalism, Polity Press: Cambridge, 2007.

Published 18 March 2010
Original in English

© Geert Lovink / Eurozine
