Tara Brabazon: The Rice-A-Roni Paradox

Can a person use Wikipedia, Google and every other social and digital tool out there and still be critical of them? I guess I am partly one of those born before 1980 whom Clay Shirky writes about (p. 303 of the 2008 hardcover edition of Here Comes Everybody):

One reason many of the stories in this book seem to be populated with young people is that those of us born before 1980 remember a time before any tools supported group communications well. For us, no matter how deeply we immerse ourselves in a new technology, it will always have a certain provisional quality.  -Shirky

That is close to my vantage point: the doubting user who doesn’t see online status updates as a birthright.

In researching a future Wikipedia post, I came across a critic of Google and Wikipedia who doesn’t strike the same chord as the anti-Wiki or anti-Google crowds I’ve encountered.  Tara Brabazon doesn’t take the kind of evil-empire approach that makes mountains out of molehills, but she does take exception to Google and Wikipedia becoming the default position for young scholars-in-training, and particularly to Clay Shirky’s book regarding accessibility and ‘his assumption that “we” can learn about technology from technology.’  She is no anti-social-media demagogue.  Her credentials include: Professor of Media at the University of Brighton, United Kingdom; Visiting Professor at Edge Hill’s SOLSTICE CETL; Fellow of the Royal Society for the encouragement of Arts, Manufactures & Commerce (RSA); Director of the Popular Culture Collective; and Programme Leader of the Master of Arts in Creative Media.

Below is a video of Brabazon talking about her recent book, The University of Google.

From the product description:

Information is no longer for social good, but for sale. Tara Brabazon argues that this information fetish has been profoundly damaging to our learning institutions and to the ambitions of our students and educators. In “The University of Google”, she projects a defiant and passionate vision of education as a pathway to renewal, where research is based on searching and students are on a journey through knowledge, rather than consumers in the shopping centre of cheap ideas. Angry, humorous and practical in equal measure, “The University of Google” is based on real teaching experience and on years of engaged and sometimes exasperated reflection on it.

So not anti-social media, just against social media technology replacing the role of teachers.  Her critique of Here Comes Everybody is something that I’ve thought about but have never written about here.

They are shielded through the flawed assumption that if more “people” (and as a visitor to Second Life, I use this word advisedly …) are involved in doing “something” then it becomes important. When we were at high school, this was called mob rule. Now it is called social networking. … Older citizens, the poor, the illiterate and the socially excluded are invisible in Shirky’s “everybody”. Once more, the US, and occasionally the UK, is “the world” in the world wide web. The hypothesis is clear: the internet/web/Web 2.0 changed “everything”. The question remains: for whom?

I think she makes some valid points, and merely assuming that those indirectly affected by social media are somehow a part of it (as the sole commenter on the post does) is, in my opinion, careless.  If information is power, and access to information is necessary to gain power, then lack of access is connected to powerlessness.  The irony is that I was ready to buy one of Ms. Brabazon’s ten books, but found they were too expensive for me to purchase.  I understand that a writer has got to eat, but does that mean the price of admission is more risotto than Rice-A-Roni?



Filed under Uncategorized

Response#9 Wikipedia . . .

Should we trust Wikipedia or an expert-led encyclopedia more? How could Wikipedia be set up to better provide accuracy? Should it be open to everyone or just verified “experts”?

I think the potential benefits of the long tail, the limitless space in which to accumulate information, and the organic nature of Wikipedia make it a formidable resource for storing and gathering information.  But should we trust Wikipedia over an expert-led encyclopedia like the standard-bearer, Britannica? Should students be allowed to use Wikipedia as a primary source for their papers? Is the potential of the tragedy of the commons too big a cross to bear for the sake of disseminating information? Is anonymous posting on Wikipedia a gateway to misrepresentation? Finally, is there reason to believe that there may be an inadvertent bias toward white males of European descent? Probably, no, no, not quite, and not sure.

On the issue of trust, I favor timeliness over institutions.  I will admit that, like newspapers and some books, Wikipedia can be prone to errors, but as Nature magazine found, so can Encyclopedia Britannica. I would venture to guess that, particularly for the more popular articles, Wikipedia has not only the ability to correct errors, but to let anyone come in and make the correction themselves.  The group monitoring that each category undergoes ensures that there isn’t a kind of stop-the-presses moment or, worse, an acceptance of the error after it’s too late to recall the copies (something that may occur with a print reference).  So why can’t students use Wikipedia as an original source?

My belief is that information is most constructive when it is static.  Wikipedia is a great resource for general, timely overviews, but when formulating an idea, static information provides the sturdiest foundation for critical thinking. This can be particularly important when it comes to dissent by a minority view.  Even if this minority viewpoint is not accepted as correct, having the ‘mistakes’ visible means that others can learn from them.  Using Wikipedia as the primary or sole source would be akin to believing that the first item that comes up for every term you type into a search engine will be exactly the item you need.  The act of rejecting a premise is as important as accepting one.

The story of John Seigenthaler is the often-cited cautionary tale of the limits of Wikipedia, and additional stories of the possible MI5 mole SlimVirgin, EssJay, and vandalism on Wikipedia Watch, if true, represent a disturbing vulnerability in Wikipedia’s open approach.  That said, Britannica makes mistakes a 12-year-old can find, textbooks can tell half-truths or outright lies, and ‘journalists‘ can make up stories out of thin air.  Wikipedia has demonstrated a willingness to refine its process to mitigate the possibility of the tragedy of the commons, just as Britannica, modern textbooks, and the MSM eventually corrected their own vulnerabilities.  The advent of WikiScanner also goes a long way toward mitigating ‘foul play’ on the playground of the ‘people’s encyclopedia,’ while many textbooks have articles without attribution and newspapers are still allowed to print articles citing ‘sources close to . . .’

The last question was first brought to my attention by Antonella Weyler’s post last week about the relative homogeneity of Wikipedia authors.

If so, does the fact that the 83% of contributors are men, mostly white, put Wikipedia’s “representativeness” in check? Who is expressing the knowledge and experiences of the ones who have no access to internet: 93% of the population in African, 80% in Asia, 75% in the Middle East, and 70% in Latin and Caribbean American? Is Wikipedia articles biased by a limited perspective?

This might simply be an access issue, but again, the over-representation of a white, mostly male techno-class isn’t unique to the web; the same argument can be made about the gender inequity among college professors or the racial inequity in media.

When it comes down to it, many of the issues with Wikipedia seem to be issues of scale and access.  If it were possible for Britannica to have over 32,000 contributors to its English-language version (as one report has it for Wikipedia), would people be criticizing it too? If Britannica could manufacture and distribute its volumes for free, would people criticize Britannica for being too accessible?

Leave a comment

Filed under My fellow Classmates, Responses

Upcoming post AND Thoughts on what makes a good blog: Rule 1.

I am working on a post regarding the fall of good writing since the dawn of digital media and web 2.0.  Right now, the inspiration for my post is Death of Writing on the Loose Wire Blog, which I came across when I typed ‘death of writing’ into Google.  The post will be about how, now that everything I type is digital, it seems (or is) weak in comparison to writing longhand or with a typewriter.  I know typing on a typewriter is an anachronism, but I am having trouble reconciling which is better: the power of scarcity/permanence of errors in the typewriter world or the power of the many in the digital age of web 2.0.

Scarcity creates a level of urgency that, when it hits, soars. The particular challenge of typing words on a manual typewriter, and their place on the physical page, means that one poor choice affects the rest of the page.  This creates a strong filter for mediocre work (leaving in the X’d-out mark of the mistake, or even using whiteout, doesn’t erase mistakes; it only covers them, like mistakes in our own lives).  Our mistakes offline usually leave a permanent mark somewhere, even when well hidden, and while holding on to them is not good, forgetting and/or never learning from them keeps us from evolving. In the world of crowdsourcing, your mistakes are often pointed out by someone else, but they often become little digital cautionary tales for others, potentially leading to strong work in aggregate. My question is: does it make the individual writer better when the pressure of making mistakes is off?  If anyone comes across anything along these lines, let me know.  Thanks.  I hope to pull something together in the next few days.


I don’t know if I am following protocol here, but I wanted to post on some of my classmates’ blog posts from each week’s assignments (this week, crowdsourcing) to make my list of what I am looking for in a blog and what makes me want to linger on a site (visuals, content). I am trying to practice the approach below but don’t always hit the mark.

1. Lead with clever or new: I have a short attention span and get bored easily. This week’s posts by Angie and Antonella brought something to the table that I hadn’t thought about. Angie’s recent post on Girliegirl1965 brought the new technology (crowdsourcing) to the traditional activity of practicing faith.  I don’t know if it was her intent, but it got me thinking about how the church has long been a place of belief, relationships and sometimes gossip, and how crowdsourcing is very similar. Antonella fed my quiet distrust of homogeneity with In Wikipedia we trust. Should we? My biggest beef (and likely misstep) with the rise of social media is that the demographics of the technocratic class skew largely white and male (or at least it feels that way).  This doesn’t necessarily mean that Wikipedia and the like are biased.  But I think race, or identifying with a particular race, does determine context and perspective.  I don’t think it can be avoided, but I also don’t think it should be taken for granted.

What I learned from Angie and Antonella: DON’T BURY THE LEAD.  One of the ways good blogs are like good writing is that, though blog writing is often more free-form, the best part of a post can often come at the end.  If it takes reworking the beginning and putting the juiciest links (with the cleverest anchor text) at the top, do it.

Leave a comment

Filed under My fellow Classmates, Writing Projects

Response#8 Crowdsourcing vs. Groupthink OR Looking for 20% OR Why halfbakery.com gets it right

My questions for evaluating crowdsourcing sites:

1. Does it follow Clay Shirky’s Principle of Promise, Tool, Bargain?

2. Who can actually join this ‘crowd’ or How easy is it to join the crowd?

3. Would I want to join this ‘crowd’?

4. Can I be myself and still be a part of the crowd (avoid groupthink)?

Josh Catone has even come up with rules (at the bottom of the post) for successful crowdsourcing in a ReadWriteWeb post from 2007 that were helpful, but for me the key is whether a particular activity meets my personal sustained-engagement threshold (with me as the barometer for what any yahoo would do).  I poked around the crowdsourcing directory and a few other listings for different types of crowdsourcing sites (both ones clearly with marketing in mind and others meant for amusement; hat tip to a classmate).  I admit the ones that are still going strong are genuinely neat.  Some were a little creepy, like Perverted-Justice; some seemed uncomfortably corporate, like YourEncore; and some were silly, like Halfbakery.  What I couldn’t find was a crowd that is something more than momentarily interesting.  I look at something like Threadless and I am certainly impressed with the collective intelligence, and it follows Catone’s rules.  I can’t confirm that Threadless follows the 80/20 rule, but I think I’m definitely in the 80.  Where I think sites fall a little short is when the psychological lift of viewing, let alone engaging, is higher than any red-blooded lurker is willing to go.  Threadless averages 5.6 pageviews per visitor, based on Alexa.

If I had to pick one site that I keep coming back to, it’s Halfbakery. I enjoy the mix of funny posts like the tumbleweed dispenser and creative ones like the phobia alarm clock, and I appreciate that its intent isn’t to be a repository for the next big idea like Cambrian House or BzzAgent, but really just a place to share weird ideas.

From What the halfbakery isn’t:

The site is also not a resource to help people guide their inventions from conception to completion. This is the place where you post the things you’re not going to be working on – because you can’t be bothered, or you don’t know how to, or because it’s not such a stellar idea after all.

The site is also not a marketplace where owners of patents find interested developers. Such sites exist (some are listed under links), but this isn’t one of them.

And finally, sending me email isn’t a good way of contacting the Dunkin’ Donuts corporation (but clicking on the preceding link is).

Any site that keeps my short attention span for more than two pages is about halfway to being a success in my book.  Halfbakery is a simple enough concept and its interface is ridiculously self-explanatory. The titles of the inventions draw you in for at least 3-5 of them, and the comments can be informative and are generally funny, but the format is such that people don’t fall into the Borg mentality; stupid or brilliant (favoring stupid), your idea is given a fair shake by the group.   This is a fun group of 20%, and I am about 50% sure that if I come up with something off-the-wall crazy, I will post it on this site.

Links that I found interesting that I couldn’t fit into this post:

Dumbness of Crowds by Kathy Sierra

Top 100 Digg users control 56% of homepage content -80/20 Rule

Digital Maoism by Jaron Lanier

Leave a comment

Filed under Responses

Response#7 Most Surprising Thing About Social Media Class . . .

It has to be that the sheen of our web 2.0 world has faded a little since 2007-2008, when I started and then withdrew from a similar class. When I first read Shirky in mid-2008, I felt what it must have felt like to read The Cluetrain Manifesto in 1999.  His was the kind of theoretical approach to the Brave New World of social media that I was both in awe of and excited by, eager to see how the world could change because of this medium that thrived in spite of the traditional hindrances of money and resources.  I appreciated the case studies because they were largely framed around us, the users of the world, as global underdogs who could take down thieves safely from behind our laptops, almost shift the balance of power in entrenched political machines, and take down one of the world’s most powerful religious institutions.  Shirky, in explaining the force of many-to-many, pitted us (the aggregate collaborators) against powerful though narrowly focused institutions.

I wasn’t foolish enough to think that social media would mean a kind of perpetual digital revolution (Mao by way of Google), or that the kind of global realignment Shirky writes about would happen 15 months after he declared it, but maybe I hoped I was wrong.  The Obama campaign, or Graff’s ‘First Campaign,’ may be the culprit in skewing my perception; that campaign forced traditional media to wake up and take a gigantic leap forward. Tommaso Sorchiotti’s slideshare presentation aptly and comically depicts this.  The quixotic narrative of the 2008 presidential cycle lifted the resurrection narrative of the power of the web, and vice versa.  Both are a bit of mythology, of course, but I do think that the ascendance of the Obama campaign, and the side story of its online strategy, made the most compelling argument for the power of social media.  But it’s 2009, almost 2010, and I am left thinking:

What’s Next?

All new media tools herald a new dawn in how we communicate, but eventually each falls victim to the tragedy of the commons, right? I guess I thought the villain of this tragedy would be some lone-gunman type, and not the MSM or the social media tools themselves.   Social media, web 2.0 tools, and their companies are no different. Google, which could do no wrong in my book two years ago, has become just another company doing what it can to bend reality to its own creepy interests.  Twitter just saw its ultimate social potential when it was the go-to medium for reporting the Iran election protests, by the US government’s own reckoning no less, and yet it bothers me that Ashton Kutcher has more followers than NPR.  When AIG has an RSS feed, it makes me want to stock my house with canned goods and wait for the Mayan prophecy.  I am not saying that social media and web 2.0 tools have jumped the shark, just that we seem to have made this leap and no one really knows what is next. Web 3.0 and the semantic web are probably years, if not a decade, away.  We are in a grey area where these tools are being actively adopted by media institutions like the Washington Post and the New York Times not because they generate revenue, but because it seems the only way to stay relevant.   Perhaps I thought that in a web 2.0 world, Moore’s Law applied to social change as well as computing speed.  And maybe it does, but it doesn’t feel like we’ve culturally reached even a tenth of the journey to critical mass.  Perhaps this is still an odd time to look back on the last six to ten years of the web 2.0 social media explosion and study it as you would artifacts.  It’s no longer new, yet we haven’t reached the point in this journey where we are closer to the end than the beginning.  Then again, there is something to Shirky’s notion that technology doesn’t get socially interesting until it becomes technologically boring, and this world of many-to-many communications isn’t boring yet.

Leave a comment

Filed under Responses

Response#6 The Time Vampire of MMOGs

I am dating myself when I say that my videogaming experience was the Commodore 64 and Atari in the early 80s. My folks loaded up on educational games (most of whose names I can’t remember), Frogger II (I never had Frogger I, but I was able to follow the intricate plot points nonetheless) and Indiana Jones and the Raiders of the Lost Ark (I loved the movie so much as a kid that I would keep playing the game to the third level but could never figure out how to get past it; the world of cheats and hacks was not yet a possibility).   Nintendo came later (I was Link) and I played Super Mario Brothers and the Legend of Zelda like an addict.  I got a Sega Genesis, which I thought would be my last game console, until my father actually bought Nintendo Wiis for every child in the family (3) as well as for some of his friends (2).  Both sides of my grandparents have passed away, so it could be worse.  The Wiis were impossibly hard to get in the town where he lives, so when they were finally scheduled to come in to the Sears, my dad actually waited outside the store the night before to buy five to hand out to friends (completely insane).

Anyway, the point of most games that I played was to save someone, save the world, or stop someone from doing something bad.   I was the good guy, I did the right thing, and the fate of the world was riding on me.  That is a lot of pressure for a kid to put on himself, but that kind of simple narrative fantasy makes sense when you are a kid who otherwise has no power, wealth or love.

What baffles me about MMOGs like Second Life and World of Warcraft is: 1. I am not there to save the world and be done with it.  2. I am not the center of attention.   3.  The simple narrative of Good vs. Evil seems blurred.  Some may argue that this third one is closer to real life.  But what is fun about that? I live in a world where good and evil are blurred every day; why would I want my fantasy world to be the same?  Second Life in particular is the kind of time vampire I can’t abide, both because of its occasional choppiness and because it’s a straight simulation of the world where you have to decide the narrative.  I don’t know why anyone would play, let alone exchange actual money, to do the boring things people do in the real world.   “Buy a digital representation of shoes that look nothing like real shoes? Sign me up!”

I can appreciate some MMOGs for their ability to foster teamwork, help those with mobility issues, and keep the Chinese employed.  But I want to be the only good guy in my fantasy world.

1 Comment

Filed under Responses

Response#5 Google Fail, or Does the World Need Google?

libberding: It’s funny when something goes down on the intarwebz (like the current google fail) and I turn to Twitter to see if it’s true.-Twitter (9:48PM Oct 20, 2009)

Reading John Battelle’s ‘The Search’ has done much to demystify the world’s largest media organization, Google, for me.  It’s one thing to see an article or post here and there about copyright, privacy issues, or censorship, but it is quite another to see a narrative that depicts the steady devolution of the company of ‘Don’t Be Evil’ into a corporate giant that will marshal its lawyers to defend its brand against its own fans. I have an easier time believing Battelle because I personally have nothing to gain from not believing him, unlike with other culturally ‘positive’ brands like Apple (whose stock price affects me). Despite the reality that a corporation, like any institution, must first ensure its own existence (à la Shirky), and that this means doing things that might appear unseemly, I still can’t relegate Google to the likes of News Corp or the heyday of the Hearst media empire. This might also be because I use Google products almost hourly.  Telling me Google is evil is like telling me pens are evil: sure, I could refrain from writing with pens, but eventually I’m going to have to sign my name.  If Larry Page, Sergey Brin and Hubert Chang’s master plan was to make Google so ubiquitous that people don’t even give it a second thought, it worked.  That is a step above brand loyalty. But has Google really made the world a better place?
A funny thing happened tonight.  Google failed. Just for 30 minutes, maybe an hour.  And while there were a few moments of me going through the five stages of grief because I couldn’t check my Reader or Gmail for the hundredth time today, in the end I hit up Yahoo and Twitter and was just fine. It’s not the first time a server or something went down at Google, but this question popped into my head for the first time: does the world need Google? I know it’s so pervasive and diversified a company that many question the need for certain parts, but that’s not the same thing as need. I haven’t done an exhaustive search, but I did find an article from Geekpreneur that gives an OK assessment of why we need Google (1. ease of use, 2. a level playing field for small online businesses, etc.), but it seems more a laundry list of why Google is easy, popular and technologically savvy.  It’s not as if it takes a tenured professor of semantics to figure out Yahoo Search.
We are talking about likely the most powerful data aggregator in the world, but doesn’t that make it just an aggregator? Would the world be worse off if we had to settle for Yahoo? Or Bing, or any of the other hundreds of search engines? Probably not.  Google’s greatest achievement might not be any of its innovative tools, its unique path to corporate dominance, or its unimaginable store of data. In the end, Google’s greatest achievement may be in convincing the world that we need it.  Not being evil is not the same as being good.

Leave a comment

Filed under Responses