Wednesday, April 22, 2009

Learning to Speak in Code


“I hope somehow to be able to speak what Manan Ahmed calls ‘future-ese,’ to be able to learn (some of) the language of the programmer over the course of this year so that I can begin to ‘re-imagine’, as Ahmed has exhorted, the old in new ways. I’m excited, if duly daunted, by the prospects.” ~ Quoted from my first blog post, 10 September 2008.

* * * * *
If I ever met Manan Ahmed, whose Polyglot Manifestos I and II were two of the very first assigned readings for our Digital History class, I would let him know that, like any effective manifesto, his inspired me to take a certain course of action this year – to sign up for the role of Programmer for the digital exhibit that the class would be preparing on the work of Dr. William Harvey.

Incidentally, if I ever did meet Manan Ahmed, I would also casually let him know that I hold him entirely responsible for the sleepless nights I had, agonizing over the code for the program I was attempting to write for an interactive exhibit on Harvey.

I might add here that I knew as much about programming as I did about APIs and mashups prior to this year, which is to say, nada.

(Accusatory) jesting aside, I’ve been reflecting on what it has been like to learn programming from scratch over the course of this school year. I was inspired, as mentioned, by Ahmed’s call for historians to be more than simply scholars submerged in past-ese, with no regard for how their studies might be made relevant to a modern audience (i.e. in present-ese) or re-imagined in the age of mass-digitization (i.e. in future-ese). How compelling was his call for historians to be “socially-engaged scholar[s],” how apt his challenge for us to become polyglots – master “togglers,” if you will, between past-ese, present-ese, and future-ese – apt especially to those of us with public history ambitions, who had entered the program interested in communicating the past to a general audience in new (i.e. digital) ways. [1]

“All that is required,” Ahmed wrote simply (alas, too simply), as a directive for historians willing to venture into the programmer’s world, “is to expand our reading a bit.” [2]

After my eight-month foray into programming, the words “all” and “a bit” in Ahmed’s above statement strike me as just a tad understated. I agree that reading was certainly a major part of my process of learning how to program this year: I pored over, highlighted, marked up, and even wrote conversational notes to the authors of my text (such as the occasional “not clear!”). But I think Ahmed might have also mentioned that not only reading but practicing, experimenting, fumbling, failing, and, yes, even agonizing are all part of the process of learning how to speak some of the programmer’s language.

Like immersion into any new language, programming has its own set of daunting rules to absorb; break any one of them and you won’t be understood – at all. The program simply won’t run. (I don’t know how many error messages I gnashed my teeth at.) As well, like any language, there is always more than one way to say the same thing – and some ways are more “logical,” “eloquent,” or just plain clearer than others; concision and verbosity, I’ve learned, apply as much in the programmer’s world as they do in the writer’s. (I’ve also observed that my tendency to be wordy carries over into the world of code. In fact, I was delighted to learn about the concept of iteration, where lines of repetitive code could be magically – well, okay, mathematically – reduced to a few simple lines, using a variable and a loop. If only paring down written text were so easy!)
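To give a sense of what I mean, here is a minimal sketch of that reduction – written in Python purely for illustration, and not necessarily the language our exhibit actually used:

    # The wordy way: one line of code per exhibit station
    print("Harvey exhibit station 1 is ready")
    print("Harvey exhibit station 2 is ready")
    print("Harvey exhibit station 3 is ready")

    # The iterative way: the same three lines of output,
    # produced by a loop and a single variable
    for station in range(1, 4):
        print(f"Harvey exhibit station {station} is ready")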

Needless to say, I found the immersion into the programmer’s language very challenging. It was challenging (and, I will admit, even harrowing at times) because not only was I trying to accumulate basic knowledge of the new language, I was also brainstorming ideas for an interactive exhibit on Harvey at the same time. In some ways, it felt like I was trying to devise a Shakespearean sonnet in Chinese with the vocabulary of a second grader (which is pretty much the extent of my vocabulary in Chinese). All I could envision was something rudimentary at best.

It was challenging to design an exhibit as I was learning the new language simply because I did not know if the ideas that I or others had were actually possible, or, more precisely, would actually be possible for me to learn how to do within the time limit. (I also discovered a humorous difference between the kinds of ideas thrown out by those in Programming versus those in non-Programming roles; the “anything is possible” optimism that technology seems to inspire was not so readily exhibited by those of us who had confronted, and would still have to confront, the befuddling intricacies of code.)

Despite all the challenges, uncertainties, and, yes, even secret fears that the particular interactive exhibit I was working on might not come to fruition, things worked out. We hosted our Digital Exhibit on Harvey in early April; all programs functioned; no computers crashed (thank goodness). Looking back to September and my reasons for deciding to learn how to program, I think I am glad, after all, that Ahmed had made it sound so simple. With just a bit of reading, he had written coaxingly, the socially-conscious scholar would be well on his or her way to programming, to filling that gap between the public and the past, and between computer scientists and the future of history. If he had spelled out all the emotions one was apt to go through when learning how to program, I’d probably not have taken it on and thus would have missed out on learning to speak a new language, on learning to speak in code.

____________________________

[1] Manan Ahmed, "The Polyglot Manifesto I," Chapati Mystery, http://www.chapatimystery.com/archives/univercity/the_polyglot_manifesto_i.html.

[2] Manan Ahmed, "The Polyglot Manifesto II," Chapati Mystery, http://www.chapatimystery.com/archives/univercity/the_polyglot_manifesto_ii.html.

Thursday, April 2, 2009

Reading the Landscape

This week’s Public History readings examine the relationship between history and the environment. Both Rebecca Conard’s and David Glassberg’s articles mention a key idea that environmental historians take for granted: that there is nothing natural about “nature”, nothing inevitable about the way that physical landscapes have evolved over time. The presupposed dichotomy between the urban and the “wild”, between human beings, on the one hand, and the “natural” environment, on the other, is not so clear-cut at all. Rather, as Glassberg and Conard show, individuals, communities, organizations, and governments have played an important (if at times unnoticed or unemphasized) role in shaping the physical landscape. [1]

Both authors point out how the environment has often reflected the heavy hand of human agency, exercised to make landscapes conform to certain ideas about what is desirable. Their discussions of national parks, in particular, suggest that what a landscape does not show is just as important as – or even more important than – what it does show. Speaking of national parks in the western United States, Glassberg writes that “the landscapes tourists encountered in these parts, seemingly inhabited only by elk and buffalo, would not have existed if the native peoples had not first been defeated and removed to reservations, and the wildlife populations carefully managed to encourage picturesque megafauna and discourage pesky wolves.” [2] Similarly, Conard mentions how the desire of the US National Park Service to present parks as “pristine” and “uninhabited” spaces was influenced by ideas about the “romantic wilderness”; such an approach to national parks meant that visitors would not see that “these landscapes were ‘uninhabited’ only because U.S. Indian removal policies either had killed the former inhabitants or had relocated them to reservations” [3].

What’s missing from the physical landscape, then, is as instructive as what is apparent to the naked eye. How to convey a landscape's significance and complexity to a general (and often uninformed) audience, in terms of its cultivated image as well as the absence or removal of elements of its historical development, remains an important task for the public historian. It's a task that, as Conard strongly suggests, would benefit from discussion and collaboration among those who are intimately involved in preserving and presenting the history of the environment: historic preservationists, environmentalists, and land managers. [4]

In essence, Glassberg’s and Conard’s articles remind me that the landscape is also a source of historical information. It can be “read” as a historical text for insights into the changing values of a community, region, or nation over time. “Landscapes,” as Glassberg writes, “are not simply an arrangement of natural features, they are a language through which humans communicate with one another.” [5] Of course, as the author shows, this language is a complex one, reflecting conflicting interpretations and understandings of the environment. These conflicts also raise important questions about how one conception of the landscape comes to dominate others (and thus to shape its preservation and development in specific ways), requiring us to ask, as Glassberg does, “whose side won out and why?” [6]

___________________________

[1] Rebecca Conard, “Spading Common Ground,” in Public History and the Environment, edited by Martin V. Melosi and Philip V. Scarpino (Florida: Krieger, 2004), 3-22; David Glassberg, “Interpreting Landscapes,” in ibid., 23-36.

[2] Glassberg, 25.

[3] Conard, 6.

[4] Ibid., 4-5, 8.

[5] Glassberg, 29.

[6] Ibid.

Tuesday, March 31, 2009

Present(ing) History

When I was studying history as an undergraduate student, I was particularly fascinated by discussions about historiography. Perhaps it was the influence of my English Lit background, but I tended to do close readings of historical accounts, approaching them almost as literary texts that reflected much about the assumptions and attitudes, biases and values of the writer. It was therefore interesting to be asked in certain history classes to analyse the works of historians not primarily for what they revealed about the past, but for what insights they provided about the particular way of doing history that was “in vogue” at the time.

Over the course of this year, I’ve seen how the idea of the present’s imposition on the past is as applicable to public history as it is to traditional, scholarly history. History in the public realm is certainly as much (or perhaps even more) about the present – that is, the “present” of whoever is, or was, writing the history, composing the plaque text, or curating the exhibit, for instance – as it is about the past.

Museums, for example, do not necessarily present information about the actual lives and experiences of women from a particular time period, as Helen Knibb’s article, “‘Present but not Visible’: Searching for Women’s History in Museum Collections,” suggests. Instead, the artifacts on display may reveal more about the preoccupations and personal tastes of curators, or about the collecting or donating impulses of those whose items are on display. With regards to the latter, Knibb suggests that women may have simply donated items they thought were important from the standpoint of the museum or of society, rather than in relation to their own experiences. She raises the interesting question of whether “museum collections tell us more about how women collect than how they lived their lives.” [1] Knibb’s article reminds me that museums themselves are constructed sites that are very much influenced by contemporary concerns.

The idea that public history is as much about the time period of the people presenting the history as it is about the history being presented is, I’m sure, hardly startling. But it does remind me of the need which underlies the rationale for these blogs – the need for self-reflexivity. As history students, my peers and I have been trained to read historical accounts critically, with an eye open to their constructed nature, to the ways in which an account reflects the biases of the historian and the preoccupations of his or her time. As public history practitioners, we will have to direct that critical gaze inwards, to assess how our own assumptions and biases are shaping the histories we will help to produce. Moreover, we will also have to negotiate our way through the assumptions and biases of others, who, in the collaborative realm of public history, will also have a stake – sometimes a very substantial one – in the history-making process. Given how contentious history in the public realm can be, not only the need for critical self-reflection but also the ability to practice what Rebecca Conard has called the “art of mediation” [2] are crucial requirements for the practicing public historian.

__________________________

[1] Helen Knibb, “‘Present but not Visible’: Searching for Women’s History in Museum Collections,” Gender & History 6 (1994): 355, 361-362. The quote is from page 362.

[2] Rebecca Conard, “Facepaint History in the Season of Introspection,” The Public Historian 25, no. 4 (2003): 16. JSTOR, http://www.jstor.org/.

Monday, March 30, 2009

Information vs. History

"Would a complete chronicle of everything that ever happened eliminate the need to write history?" -- St Andrews final exam question in mediaeval history, 1981

"To give an accurate and exhaustive account of that period would need a far less brilliant pen than mine" -- Max Beerbohm

* * * * *

About a year ago, I took a short creative non-fiction course on the topic of writing historical narratives for a general audience. The instructor, Dr. Richard Mackie, emailed the class the above quotes, to stimulate thoughtful reflection about the nature of history and historical writing. (The first quote was actually a question that Dr. Mackie himself encountered as a History student at St. Andrews in the 80s.) These quotes have come to mind lately as I’ve been ruminating about the implications of doing history in a digital age.

The era of the Internet has, I think, made the idea of a “complete chronicle” of our current times more conceivable than ever before. The Web has certainly made it possible for virtually anyone, irrespective of gender, class, ethnicity, etc., to share their thoughts, ideas, photos, videos, even “statuses” (i.e. what one is doing at a precise moment in time) continuously. Provided that all of this electronic data is adequately preserved, there is going to be a vast abundance of information available for anyone a generation or two (or more) down the road who is curious about the interests, opinions, tastes, preoccupations, etc. of ordinary people in our time.

Yet such information, no matter how detailed, is not the same as history. The chronicling of people’s lives, even on as minute a level as that expressed in an article about “lifelogging” by New Yorker writer Alec Wilkinson, [1] results only in the production of information. It is the interpretation of that information – the piecing together of disparate parts into a coherent and (hopefully) elegant narrative that pulls out (or, more accurately, constructs) themes and patterns – that transforms it into history, into a meaningful story about the past.

What’s interesting, of course, is that although no historian (I think) would ever claim to write the history on any subject, discussions about the potential of history in the digital age have sometimes suggested that history can be more complete than ever before. The idea of hypertextual history, for instance, where readers of a historical account can click on links leading them to pertinent primary source documents on the topic, say, or to other similar or divergent viewpoints about the particular subject they’re examining, has almost a decentring effect at the same time that it provides more information. It can be easy for readers, I think, to be overwhelmed by the profusion of hyperlinks within a text, and perhaps to never finish reading the actual article to learn the historian’s particular approach to the past.

The beauty of history, I think, is not that it claims to be a complete, exhaustive chronicle that leaves no stone unturned in its examination, but that it presents one angle on the past, a new way of understanding something that is extraordinarily complex and, for that reason, is open to – and I’d even say requires – multiple interpretations. History is, after all, a story as opposed to a record book, a narrative as opposed to mere facts.
______________________________

[1] Wilkinson’s interesting article recounts how computer guru Gordon Bell has been involved in a “lifelogging” experiment, in which he wears a Microsoft-developed device called a SenseCam around his neck that takes continual pictures of his day-to-day experience and allows him to record his thoughts at any given point in time if he so wishes. According to Wilkinson, Bell “collects the daily minutiae of his life so emphatically that he owns the most extensive and unwieldy personal archive of its kind in the world.” Alec Wilkinson, “Remember This? A Project to Record Everything We Do in Life,” NewYorker.com, May 28, 2007, http://www.newyorker.com/reporting/2007/05/28/070528fa_fact_wilkinson.

Saturday, February 28, 2009

Google Tales for the Future

Sometimes, I wonder what historians of the future are going to be writing about when they examine the early twenty-first century. No doubt, the term “digital revolution” is going to creep into more than one monograph of the future about our present-day times. Cultural historians (if cultural history is still in vogue) might also, I think, take some delight in tracing the ways in which Google has entered into modern consciousness. Perhaps they’ll trace the moment when Google ceased to be only a proper noun, when the phrase “Let’s google it!” first appeared, and then flourished, in popular discourse. Or maybe they’ll explore the ways in which Google has become a part of popular culture and everyday life, to the point of inspiring satirical responses expressed in, you guessed it, digital ways.

Here are some anecdotes to help that future cultural historian.

* * * * *

A while ago, a friend told me an amusing story about how the father of one of her friends was confused about the nature of the Internet. He had never used it before (yes, there are still such folks), and he didn’t quite know what it was all about. So, one day, he asked his son to explain, framing his question according to the only term that he was familiar with – or had heard often enough: “Is Google,” he asked innocently, “the Internet?” The son choked back a gasp of unholy laughter and proceeded to explain the phenomenon of the Internet to his father. However, if he had simplified his response, if he had said that Google was, in a way, the Internet, he might not have been all that wrong.

* * * * *

During Christmas dinner with my family this past winter, Google (of all topics) entered into our conversation. I don’t remember how exactly. All I recall is that my mom, who (yes, it’s true) had never heard of Google before, perked up when she heard the term at the dinner table, probably because of its odd sound. “Google?” she said, brows furrowed, “what is Google?” To that, my dad, without missing a beat, responded (in Chinese) that Google “is the big brother of the Internet.” Now, “big brother” (or “dai lo”) in Cantonese, when used in a figurative sense, simply means someone who is to be respected, some important or dominant figure or force. But I couldn’t help laughing at the Orwellian overtones that my father’s comment had unwittingly implied. He had meant big brother; I, of course, had heard Big Brother, Chinese-style.

* * * * *

Back in September, Dr. Don Spanner, my archival sciences professor, showed the class a video clip called Epic 2015. Its opening lines were captivatingly ambiguous: “It is the best of times,” said the solemn narrator, “it is the worst of times.” We were entranced by the video’s fictitious yet somewhat chilling projection of the world in 2015, which involved no less than the merging of two powerful companies (Google and Amazon) to become Googlezon, an entity whose information-making and dissemination power had eclipsed even the might of the New York Times. At the end of the clip, Don joked that the first time he watched it, he just wanted to sit in a corner and stare at paper for a long, long time. We all laughed – and, perhaps, shivered inside a bit too.

Subsequently, I mentioned the clip to a friend, remarking how interesting it was to see just how big Google had become, as evidenced by the fact that it was inspiring such responses as Epic 2015, with its subtle questioning of the Google empire and its cultural hegemony. My friend in turn enlightened me further about other similar responses. He asked if I had ever heard of “The Googling.” I hadn’t. So he emailed me links to several clips on YouTube, which explore Google’s services (such as its mapping devices) in a new – and, of course, hilariously sinister – way. To view them…simply google “The Googling.” :) (There are five parts.)

* * * * *

To the cultural historian of the future:

It was true. Google was (is?) ubiquitous, to the point that it entered into dinner table conversations and was mistaken (or correctly identified?) for the Internet. Even to the point of inspiring satirical YouTube clips and prophetic visions of a Google-ized world. That is, of course, when you know something is big – when it becomes the subject of cultural humour and unease, negotiated and even resisted in satirical ways.

So, we embraced Google even while scrutinizing it at arm's length. We questioned Google even while googling. It’s what we did in the early twenty-first century.

Monday, February 23, 2009

Pieces of History

One of my best friends and I have a tendency to reminisce about our shared experiences. During these (sometimes admittedly nostalgic) moments of looking back, I am always amazed at the different things that have stood out for each of us – a telling word, gesture, or expression that she or I would never have recalled without the presence of the other.

In a way, then, my friend and I help make each other’s history more complete by remembering details that the other has forgotten. In a way, too, it means that the past – or that particular version being remembered in bits and pieces – becomes quite spontaneous for us, entirely dependent on the course of the conversation, on the ebb and flow of memory on that particular day. Reminiscing about the same experience with my friend years later, I find that other aspects surface; the past is, one might say, renewed and re-created in each instance of remembrance, a mental landscape that is both familiar and yet full of surprising colour too.

I think one of the interesting aspects of conducting oral history interviews – which I had the privilege of doing recently with one of the former staff members at a local health care institution – is observing that very organic and spontaneous process of memory in play. While I, of course, did not share in any of my interviewee’s experiences, bringing only my knowledge of certain aspects of the institution’s history to the table, it was interesting to see how certain memories surfaced for her based on the flow of the conversation.

My understanding of this institution’s history informed the questions that I prepared. Yet the interview was by no means confined to these questions. They became starting points, triggering memories of other aspects of my interviewee's experience – ones that I had not thought in advance to ask about and perhaps ones that she had not revisited until that moment in time. Another day, another interviewer, would undoubtedly bring other memories to the surface, revealing new pieces of a multifaceted history that can be tapped and reconfigured in so many ways.

And speaking about fragments of the past, I left the interview with an unexpected piece of history – literally. My interviewee was excited and eager to give me a brick that she had kept from the first building of her former work place, constructed in the late 19th century. Embedded with the shape of an animal, it now sits at the foot of my desk, a tangible piece of the past that stands in contrast to the transience and spontaneity of memory.

Saturday, January 31, 2009

Dabbling in the Digital Age

[Photograph: “dream”]

Photography is one of those activities that I can lose myself completely in. The hours I spend on it are time freely given (and hardly felt). Although I’ve been lucky enough to capture a few photographs that I’m pleased with (including the one above, which was modestly granted an honorable mention in the Geography Department’s fundraising contest for United Way), I’ve always considered myself just a tinkerer of sorts. A dabbler, if you will, whose yearning to be “artistic” has been mostly helped by technology. (I credit my Nikon camera completely for taking good shots.)

* * * * *

As it turns out, the digital age is apparently very amenable to those with tinkering and dabbling tendencies.

That, at least, was the (hopeful) sense that I got from reading Jeff Howe’s article on “The Rise of Crowdsourcing.” In it, Howe traces the ways in which companies are tapping into “the latent talent of the crowd.” He brings up the example of iStockphoto, a company that sells images shot by amateur photographers – those who do not mind (who, in fact, I’m guessing, would be thrilled about) making a little bit of money doing what they already do in their spare time: take pictures.

According to Howe, the increasing affordability of professional-grade cameras and the assistance of powerful editing software like Photoshop mean that the line between professional and amateur work is no longer so clear-cut. Add to that the sharing mechanisms of the Internet, and the fact that photographs taken by amateurs sell for a much lower price than those of professionals, and it seems inevitable that some ingenious person would have thought up a way to apply crowdsourcing to stock photography sooner or later.

Howe provides an even more striking example of how the expertise of the crowd is being plumbed these days. Corporations like Procter & Gamble are turning to science-minded hobbyists and tinkerers to help them solve problems that are stumping their R&D departments. Howe mentions the website InnoCentive as one example of the ways in which companies with a problem and potential problem-solvers are finding each other on the web: the former post their most perplexing scientific hurdles on the site, and anyone who is part of the network can then take a stab at solving them. Those who succeed are handsomely compensated. And a good number, in fact, do succeed. According to InnoCentive’s chief scientific officer, Jill Panetta, 30% of all problems posted on the website have been solved. That is, to quote Panetta, “30 percent more than would have been solved using a traditional, in-house approach.”

What’s intriguing about all of this is the fact that the solvers, as Howe says, “are not who you might expect.” They may not necessarily have formal training in the particular field in which the problem arises; their specialty may lie in another area altogether. Yet, it is this very diversity of expertise within the crowd of hobbyists that contributes to the success of such networks as InnoCentive. As Howe puts it, “the most efficient networks are those that link to the broadest range of information, knowledge, and experience.” The more disparate the crowd, in other words, the stronger the network. [1] I love the ironies of the digital age.

* * * * *

I’ve been wondering lately about whether history could benefit at all from the diverse knowledge and background of the crowd, whether crowdsourcing – posting a problem or request out in the virtual world in the hopes that someone might have the expertise to be able to fulfill it – could apply to a non-scientific discipline.

In other words, would a History version of InnoCentive work? A network where historical researchers could poll the crowd for information or materials or insight to help fill research gaps…where they could tap into the memories, artifacts, anecdotes, records, ephemera (and even the ways people understand the past) of a diverse group and thereby possibly access information that might never have made it into the archives for formal preservation? How would the writing and construction of history change if, instead of primarily drawing upon the 5 to 10% of all records that ever make their way into an archives, researchers could tap into the personal archives of a disparate crowd made up of the “broadest range of information, knowledge, and experience”? (Let us put aside, for the moment, the issues of the integrity of the record and its provenance when we talk about “personal archives.” I realize that the shoebox in the attic is not nearly as reassuring a sight as the Hollinger box of the archives.) It seems probable to me that some of the 90 to 95% of records that never make their way into an archival institution are still extant, and that there could be valuable research material in them that could very well change one’s argument about the past. Would crowdsourcing be one way to get at that material?
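Purely as a daydream on paper, here is a toy sketch – in Python, with entirely hypothetical names, standing in for no real platform or API – of what the bones of such a request board might look like:

    from dataclasses import dataclass, field

    @dataclass
    class ResearchRequest:
        """One posting on a hypothetical 'InnoCentive for history'."""
        topic: str        # the research gap a historian wants filled
        details: str      # what the crowd might hold: memories, records, ephemera
        responses: list = field(default_factory=list)

        def contribute(self, contributor: str, item: str) -> None:
            # Anyone in the network can offer material from their personal archives
            self.responses.append((contributor, item))

    # A researcher posts a gap; the crowd answers from attics and shoeboxes
    request = ResearchRequest(
        topic="Local health care institution, ca. 1890s",
        details="Photographs, letters, or recollections of the original building",
    )
    request.contribute("retired staff member", "snapshot of the first building, 1950s")
    print(f"{len(request.responses)} response(s) so far")

The interesting questions, of course, lie precisely in what a sketch like this leaves out: how one would verify the provenance of what the crowd offers, and who vouches for the shoebox in the attic.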

* * * * *

P.S. Of course, I just realized that I’m raising the above questions without considering a crucial aspect in all the examples of crowdsourcing that Howe mentioned: money. Those who answered the call – whether the amateur photographer or the scientific tinkerer – were paid for their services (ranging from a dollar all the way to an impressive $25,000). To pay someone for a piece of history raises a whole other set of questions…

__________________________

[1] Jeff Howe, "The Rise of Crowdsourcing," Wired, http://www.wired.com/wired/archive/14.06/crowds.html.