
Experts and experiments: focusing on the future

It’s been a long time now since I blogged here. Life has been busy, not least due to a new full-time role at the University of Manchester Library, working in market research and data analysis. For a little insight into just one of the many projects I am involved in there, here is a guest blog I wrote for the fascinating Books Right Here Right Now project.


A scholarly guinea pig takes a break from study to consider the future of reading. 

Hopefully I’ll find time to write more here again soon. If only I could finish the PhD at long last…


Citizen Journalism

Today I gave a guest lecture over at Liverpool John Moores University on ‘Citizen Journalism’ – a fascinating and relatively recent phenomenon that can be linked to the rise of mobile/digital technologies, the internet, and multimedia content, as well as to disillusionment with, or disengagement from, the ‘traditional’ press. Even if you’re not familiar with the theories or debates around Citizen Journalism, you’ve doubtless seen countless examples of it in practice already, including via the most mainstream news broadcasters and online publishers. Citizen Journalism is, in essence, non-professional journalism, conducted by ‘ordinary’ members of the public. Naturally there is a lot to debate about the potential of Citizen Journalism, how it contrasts with or complements professional journalism, and how it might develop in future. What insights can we gain if we analyse it in terms of style, content, politics, economics? I’m making my slides available here, so please feel free to download them and let me know your own views. The embedded version below might have a few formatting glitches – apologies! Click the options button for notes.



Many thanks to Dr Iqbal Akthar and his lovely students for their input and enthusiasm, and for making me feel extremely welcome. Despite the cold and the rain, it was really nice to be in Liverpool again. 🙂

Information’s Future

Thursday saw me make my first trip to the Steel City, to present a paper at Sheffield University’s inaugural iFutures conference. Having been a student again for two years now, I found the 5am start a bit of a shock to the system, so I was very happy to find a lovely little on-campus cafe selling amazingly fluffy two-egg omelettes and a decent Fairtrade coffee (extra strong, naturally). Wolfing these down, and wondering why in 30 years I’d never before heard of Yorkshire’s “famous” Henderson’s Relish (have you?), I perused the day’s programme and gave my slides a final once-over. The conference – tagline: “the next 50 years”, since Sheffield’s iSchool is currently celebrating its 50th birthday – was run entirely by postgrads and aimed to provide a “forum for students to present forward-thinking and challenging research” in an “encouraging environment”. The organisers had accordingly “blocked” (in tongue-in-cheek fashion) their iSchool seniors from attending, focussing instead on attracting an audience of young and early-career academics. This worked out well: the event was no less intellectual, stimulating or professional, but for the students presenting, the day was made less intimidating, with ideas exchanged and space carved out more freely, without fear of overt supervisory objections.

Topics included the impact of ICTs on informal scientific communication, Institutional Repositories in Thailand, chemoinformatics, telehealth project management, the ways in which public libraries can proactively support and respond to their communities, and a “radical” new approach to the analysis of classification schemes. A post-lunch Pecha Kucha session saw us voting via an “audience participation device” for the best and most engaging presenter. Pecha Kucha, if you haven’t come across it, is a trendy but very fun method of rapid-fire presentation: 20 slides are pre-programmed to stay on screen for only 20 seconds each (six minutes and forty seconds in all), meaning that the presenter ends up “pitching” a vision as much as opening up a debate, and therefore has to be more creative. Facing stiff competition, Simon Wakeling’s take on the Future of the Filter Bubble was judged most worthy of a prize. My own full-length paper, which was also well received, was more traditional, describing a methodology for assessing academics’ attitudes toward new media and explaining why that matters.


So what is the future of our field, which might broadly be called “Information Science”? Predicting the future is a dubious enterprise, and in an age of almost maniacal technological development, it becomes even harder to know what is scientifically probable and what is just science fiction. Still, disclaimers aside, we can make some informed speculations based on current socio-technical trends. Two impressive keynote speakers – Professor Diane Sonnenwald (University College Dublin and the University of North Carolina at Chapel Hill) and Vanessa Murdock (Principal Applied Researcher at Microsoft Research) – were on hand to share their views. Coming from quite different perspectives, both shared thoughts about where information science should, or might, concentrate its energies. As a group, we possess much expertise that could help solve pressing social and environmental problems: failing health, climate change, inequality, global insecurity. While remedies for these might be figured out by analysts of the “big data” coming from scientific sensors and digitally mediated environments, disaster prevention initiatives and “crisis informatics” will only be successful if those creating systems, strategies and technologies are supported by experts able to assess their impacts on work patterns, task performance, and their wider (often unconsidered) socio-cultural effects.

Describing her own research into 3D medical telepresence devices, Professor Sonnenwald emphasised that information professionals must make sure we are “at the table” when research projects and funding priorities are discussed institutionally and internationally. The kind of analyses that we undertake may lead to short-term headaches for those developing products – for example, one of her studies showed a particular device to be more flawed than its initial backers supposed – but in the long run, this is a good thing, not just for them but for all of us. It’s cheaper to address design issues pre- rather than post-production, and, economics aside, we must make sure that the groups whose problems we try to solve are not inadvertently given more of them by shimmering but naively designed solutions. In an age of algorithms and automation, information science is far from redundant.


Vanessa Murdock focussed on how we can map the world and its preoccupations through the harvesting and analysis of social media data. Location-aware and location-based services on smartphones and web browsers are one obvious example; Microsoft and others are working hard to build the “hyper-local” as well as the personalised into their products. If you’re in Oslo and you fancy a pizza, wouldn’t it be nice to see at a click which restaurant near you has a menu to match your dietary requirements, what other customers thought of it, and where, based on your tastes, you might go afterwards? Less trivially, it would be valuable for sociologists, political economists and others to discover reliably and precisely where most tweets about Revolution X are coming from, in order to ascertain the demographics of those tweeting and what percentage of the population they actually represent. Naturally, such applications are not without their issues. We need to think deeply about privacy, data protection, regulation and – at a technical level – the reliability of services based on data which are often difficult to interpret syntactically and semantically. Further, aren’t companies really just servicing the “Technorati”, treating them as typical of the needs and preferences of humanity when in fact they are only a small (and, it might be argued, insubstantial) minority? Reminding us of the need to understand the difference between solutions that work on “toy data” or simplified abstract models, and those which work when applied to reality, Murdock also pointed out that while “you should take the noble path and build things which are useful when possible, there is also a role for building things which are cool!”
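Out of curiosity, here is roughly what the simplest version of that aggregation looks like in code. This is only a toy sketch in Python – the posts, cities and population figures are all invented for illustration, and a real pipeline would harvest geotagged data from a platform API and grapple with the sampling biases discussed above – but it shows the basic move from raw geotagged posts to per-capita counts:

```python
from collections import Counter

# Hypothetical geotagged posts mentioning our topic. In reality these
# would be harvested from a social media API; every value here is invented.
posts = [
    {"text": "march downtown today", "city": "Oslo"},
    {"text": "crowds gathering again", "city": "Oslo"},
    {"text": "solidarity from over here", "city": "Manchester"},
]

# Assumed population figures, used to normalise the raw counts.
population = {"Oslo": 700_000, "Manchester": 550_000}

counts = Counter(post["city"] for post in posts)
for city, n in counts.most_common():
    per_100k = n / population[city] * 100_000
    print(f"{city}: {n} posts (~{per_100k:.1f} per 100,000 residents)")
```

Even this crude normalisation hints at the interpretive problem: a burst of tweets tells us about the tweeting minority, not necessarily about the city as a whole.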

Sheffield has about 60 PhD students working in the two main research groups of its Information School, and it seems that the culture there is as lively as it is cutting edge. All of the presenters were really impressive and I’d like to thank the committee for putting together such a fun event. 🙂

Transformed Realities

This Thursday, our Digital Transformers Symposium finally took place and I am delighted to report that it was a huge success! Nineteen speakers presented diverse work from across the Arts and Humanities, sharing ideas and findings with one essential common theme – the ways in which digital technologies are transforming things, be they individuals, societies, or our modes of experiencing and understanding. The day kicked off with a keynote speech by Dr. Jim Mussell, who took as his example the serial publication of Charles Dickens’s work in the Victorian magazine Household Words. This might seem an odd place to start – even if we are talking about digitised versions. After all, aren’t electronic versions just surrogates for higher-quality originals? Dr. Mussell convinced us otherwise. By making use of digital tools for bibliographic analysis which at first glance seem utterly alien to the contexts of the works they are used to study, we may in fact find ourselves closer to the “truth” of a printed piece and its place in history. Magazines and books are objects. They are not the essence of a work but are “records of a set of cultural practices” – whether those practices involve binding and ink or binary and hyperlinks. By thinking about how digital versions of texts relate to their non-digital forebears, we might better appreciate that our interpretation of the past is always just that: an interpretation, set within an imposed and artificially linear narrative. Instead of being treated as deficient, inauthentic and lacking, new interfaces to old texts should be valued as enhancements – as transformative.


Ioanna Zouli presents her paper on “video practices and the dynamics of telepresence”.

The day proceeded with four themed panels – Participation and Community Engagement; Methodological Challenges; Shifting Structures of Communication; and Audio-Visual Experiences. Two play sessions were also on offer – Introducing the geographic dimension to your research: GIS for the Humanities (led by Dr. Paty Murrieta Flores) and Meaning and meaning-making: a social semiotic multimodal approach to contemporary issues in research (led by Professor Gunther Kress). All of our paper presenters were early-career researchers, from around the UK and beyond. For me, one of the best parts of the symposium was the sense of community that seemed to emerge once the sessions got underway. Even when discussing work far removed from their own, the audience were supportive and enthusiastic. This may be one of the key positives of digital media within the academy – particularly for young researchers. Whether you are a cultural theorist, a linguist, or an information scientist, a realignment of disciplinary boundaries creates opportunities to identify shared and new perspectives, enhanced by engagements with digital tech. Dr. Patricia Murrieta Flores explored with us how Geographic Information Systems, initially designed for engineers, scientists and planners, have become fruitful and fascinating tools for archaeologists and historians, who use them to identify and model patterns and trends of the earth, its artefacts, its people and their geo-politics, across space and time (a toy code sketch of the kind of operation involved follows below).

Dr Patricia Murrieta Flores gave us a hands-on introduction to GIS in the Humanities

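To make the GIS idea a little more concrete, here is a minimal sketch of the sort of question a digital historian might pose, written in Python with the geopandas library. The sites, coordinates and periods are entirely made up; the point is simply how little code is needed to project coordinates and measure distances once the data are in place:

```python
import pandas as pd
import geopandas as gpd

# Invented archaeological sites, purely for illustration.
df = pd.DataFrame({
    "site": ["Hillfort A", "Villa B", "Camp C"],
    "lon": [-2.24, -2.98, -1.47],
    "lat": [53.48, 53.41, 53.38],
    "period": ["Iron Age", "Roman", "Roman"],
})

# Build a GeoDataFrame in WGS84, then project to the British National Grid
# so that distances come out in metres rather than degrees.
sites = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df["lon"], df["lat"]), crs="EPSG:4326"
).to_crs(epsg=27700)

# A simple spatio-temporal question: how far is each Roman site
# from its nearest Iron Age neighbour?
roman = sites[sites["period"] == "Roman"]
iron_age = sites[sites["period"] == "Iron Age"]
for _, row in roman.iterrows():
    d = iron_age.distance(row.geometry).min()
    print(f"{row['site']}: {d / 1000:.1f} km from nearest Iron Age site")
```

Real projects layer terrain, road networks and uncertainty on top of this, but the underlying operations are much the same.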

The day concluded with a stimulating and lively debate on the perils and potentials of Open Access publishing as it relates to the Humanities and to universities more generally. It is difficult to know at present whether the “Gold” route (publication made open by the journal itself, typically funded by article processing charges) or the “Green” route (self-archiving in an open repository) will prevail; most likely institutions will use some mixture of the two, with both becoming competitors in an increasingly uneven and costly publishing ecosystem based around entrenched and outmoded (?) notions of prestige and value. Those who can afford it will be driven towards Gold, with Green and its laudable aims of equity and freedom being pushed into the role of second best.


A spirited panel of experts led a debate on the potential costs and benefits of Open Access

Our expert panel (Drs. Cathy Urquhart, Paul Kirby, Alma Swan and Stephen Pinfield) hinted now and then at positive transformative potentials stemming from OA – Alma Swan in particular sees OA as a welcome tonic to old-fashioned models – but overall it seemed a rather gloomy picture, dictated as ever by economics and elitist notions of bettering one’s peers. Many academics wish to see a culture of openness, experimentation and sharing, with contributions valued on their merit. The harsh realities of convention and money make that something of a pipe dream. Budgets for article processing charges will be limited, so managers will be forced to ask which articles represent the best financial “return on investment” – too busy and pressurised to judge them on anything other than proxy criteria of quality that do not consider the intellectual value of a work in its own right. Well, that’s the doomsday scenario. However naively, I very much hope that freer forms of communication will emerge to combat it!

We will be uploading slides, videos and other materials to the official Digital Transformers website over the coming month, so please do check there for more details on the excellent papers and presentations given by the members of our nascent ECR network. :-)

A New Network Emerges

After months of hard work and planning, we have at last finalised the schedule for our Digital Transformers Symposium, happening on Thursday the 23rd of May at MMU. Working in academia – and in particular on a PhD – it’s easy to get caught up in stress and uncertainty of various kinds. So it really is great to be able to pour energy into a community-based event like this, which everyone seems to be looking forward to. All of the workshops and papers sound amazing, and Jo and I really couldn’t be more pleased at the quality and scope of the submissions.

There aren’t any places left for the Symposium now (we only have room for 40), but tickets are still available for our Open Access Debate, which opens to a wider audience later in the day. Hope to see some of you there! 🙂

Creative Change

It’s about time that I posted something on a very impressive exhibition I went to see last weekend at the Cornerhouse Gallery – Rosa Barba’s Subject to Constant Change. Dealing with many of the themes currently preoccupying me as I delve into explorations of technology, Barba’s work thoughtfully and coolly expresses much of what 21st-century academics are busy analysing – the essence(s) of digital and “post-digital” environments. Viewing her work, we are invited to consider materiality, memory, technology, technique, the relationship between past and present, and the problematic nature of linear narratives. Complex relationships between text/performance, reader/viewer, the fixed and the slippery are all considered – for instance in Time Machine, which is part script, part novella, part invention, and which looks like a projection although it is really a print.

In one darkened gallery space, colour films run on projectors modified so that the speed and intensity of their wheels and their light alter in ways not possible on the unmodified original equipment. A series of statements and phrases apparently detached from all context appear flickering on the wall – and as I enjoyed the playful hints of meaning evoked by their flowery italic script, I also found myself fascinated by the mechanics of the projectors themselves. How much does the technology used to display these words contribute to their possible meaning and our interpretation of them? What associations are created when new and old approaches are combined? When rhythm varies and intensity is altered? Can we appreciate the past more fully by melding it with the present? At the same time we realise how both will forever evade being cemented.


In the second gallery, a pair of projectors work together to show us the two parts of Subconscious Society. A crowd of local people dressed somehow “timelessly” appear to haunt the neglected interior of the Manchester Albert Hall, moving around it as though defiantly detached from some imagined authentic context and accompanied by a soundtrack of fleeting observations. One staff member (who was very keen to get feedback and discuss the installations with us) revealed with a little amusement that some visitors have been puzzled. “Why are you not doing it all on digital? Why are you using this old equipment? Isn’t it more difficult and expensive?” Well, yes. And there have been some problems – bulbs overheating, projectors stalling, film getting caught. Such difficulties are in themselves a thought-provoking part of the exhibition. The medium is as much a part of the message as is the content. Really. Barba’s refusal to embrace a lazy and straightforward “logical” modernity is what gives her exhibition its power.

Former configurations of society – and technology – may appear to be obsolete, but the point is that their imprints and patterns remain to resonate in our own time, reimagined, reasserted, reinterpreted. Using such techniques will become less and less possible for artists in the future. It’s hard to find not only the spare parts and the film – she is apparently using some of the last of Kodak’s old stock – but also the technicians able to handle and maintain such equipment. In an era when digital technology lets amateurs do almost everything at the touch of a button (this blog is just one example!), it is nice to reflect on the highly skilled and patient operators/artists of the past, who understood both the physics and the metaphysics.

Humanities by any other name

On Monday night, I attended the latest in a series of thought-provoking events taking place within the Institute of Humanities and Social Science Research at MMU. As part of their Annual Research Programme, Dr David M. Berry (currently based at the University of Swansea and author of several books on digital cultures, software and code) had been invited to give a talk on the fundamental nature of Digital Humanities scholarship. Given the current changes taking place within MMU and many other universities as educational technologies arrive on campus, the talk naturally drew a large audience.

Berry took a rather critical approach in his lecture, raising a number of issues and problems around Digital Humanities as both an academic discipline and a brand. Given how enthusiastic he is about DH, his criticism is highly informed and cannot be said to be of the reactionary sort. And really that was his whole point: as academics we must continue to raise difficult, challenging questions about the subject areas within which we are embedded. It was refreshing to have the all-too-tangible tensions between scholarly and business imperatives recognised in relation to DH. In terms of my own research, such debates are vital to understanding how academics in different fields relate to, understand, and use digital and new media.

Enriching and challenging tradition

Key philosophical questions about the nature(s) of digital environments and techniques are often overlooked by proponents of DH (although not, it must be said, by cultural and media theorists). Many nascent Digital Humanists are unsure what the term means – or what the core epistemic assumptions and problematics underlying their discipline are. Partly this is because Digital Humanities is an emerging and multi-disciplinary field, without clear historical traditions or organisational roots. Partly, too, it is because, for many universities, “Digital Humanities” is something of a buzzword, its surface-level appeal considered enough in itself to attract new students and academics.

The danger is that Digital Humanists will become lost in computational formalisms, technologically determinist methodologies, and the quantitative, structural logic of engineers. They may lose sight of both the wider and the more detailed perspectives afforded by traditional methods for illuminating truths about discourse and humanity. There is also the risk – in a target-focused managerial culture – of being dazzled to the point of critical amnesia by the large public audiences that digital projects can garner, compared with the audiences available for “gold standard” outputs like monographs.

Yet so long as we are careful not to sell or neglect our fundamental principles, the Digital Humanities have much to offer. The Understanding Shakespeare project that Dr Berry showed us during his afternoon workshop was one such example. Multiple German translations of Shakespeare have been scanned, OCR’d and marked up, ready to be represented and queried digitally and visually. Analysing text and metadata computationally can reveal known and previously unknown correspondences and differences between editions, whether in terms of structure or content. As with many other semantic-web-based tools (e.g. Gephi, Google Ngram and IBM’s Many Eyes), parameters can be set by researchers in a few easy steps and huge corpora can be explored – something almost impossible to do manually.
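As a toy illustration of what “analysing text computationally” can mean at its very simplest, here is a short Python sketch comparing relative word frequencies across two editions. The two snippets are invented stand-ins for full marked-up texts, and real projects like the one Berry showed use far richer markup and metadata:

```python
from collections import Counter
import re

def freqs(text):
    """Relative word frequencies for a lowercased, crudely tokenised text."""
    words = re.findall(r"[a-zäöüß]+", text.lower())
    return {w: n / len(words) for w, n in Counter(words).items()}

# Invented snippets standing in for two whole German editions.
edition_a = "sein oder nichtsein das ist hier die frage"
edition_b = "sein oder nicht sein das ist die frage"

fa, fb = freqs(edition_a), freqs(edition_b)

# Which words differ most in relative frequency between the editions?
diffs = {w: fa.get(w, 0.0) - fb.get(w, 0.0) for w in set(fa) | set(fb)}
for w, d in sorted(diffs.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5]:
    print(f"{w:12s} {d:+.3f}")
```

Scaled up to whole corpora, exactly this kind of counting is what lets such tools surface lexical and structural divergences that no one could tabulate by hand.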

For me, the take-home message was that the Digital Humanities – regardless of specific instantiations within individual institutions – must “extend their critique to include society, politics, the economic and the cultural.” Many researchers are already doing this and I certainly aim to do so in my own work. At the same time, Humanities scholars must not forget the “traditional” core concerns of their fields – i.e. the human subject, speculative knowledge, interpretation, and the value of focused, close readings – even as they rearticulate those concerns in exciting ways via computational methods.