Researching the ways in which new media and digital technologies are positioned in modern universities means reading about the changing models of governance, management and administration that shape how research and teaching are conducted. This proves revealing in unexpected ways and – appropriately, given my focus on disciplinary characteristics – there are clear differences between fields. In the Humanities, and to a lesser extent the Social Sciences, authors (even when offering some critique) discuss the ‘essence’ of digital media, often imbuing them with a quasi-magical quality. In these largely techno-determinist narratives, new media and digital technologies directly enact a “paradigm shift” that is inevitable because they innately (so it appears) alter our social relations and our relations of space-time. In fact, these technologies are agents or instruments of a shift whose origins are located elsewhere. Authors writing about what might be called ‘digitalism’ and science tend to be more prosaic. They acknowledge quite coolly that the increasing dominance of digital technologies on campus and in research ‘workflows’ is directly linked to the priorities of the market economy. This is obvious when we think about the emphasis on “innovation” (developing commodifiable products of interest to commercial private sector companies, who may in turn outsource their R&D activities) and on training students to adopt this same mind-set while charging them ever-increasing fees. University infrastructure becomes the final service-platform of a top-down government agenda. This agenda is reinforced by managers and normalised (through various means) in the minds of those working at lower levels – i.e. research and teaching staff.
Clearly digital technologies offer many exciting opportunities, regardless of debates on their nature. They are fascinating to explore, which is why I have chosen them as my topic of research. But we must remain aware of the wider systems of which they are a part, and this is why I am interested in considering them in relation to the academic “habitus”. Krull (2000) reminds us that nowadays, most Western governments view the funding of Higher Education as a “strategic investment” and that, with limited finances available to support that investment, a focus on “public-private partnerships” and inter-disciplinarity is the natural outcome of current political thinking, focused as it is on “knowledge economies”1 . At the same time, “market populism” and “consumer democracy” have become “ideological lodestones against which all new policies [in the public sector] must be evaluated”, as Deem, Hillyard and Reed (2007) explain. The formation and dominance of neo-liberal New Managerialism (NM) and New Public Management (NPM) theories are part of a “cultural revolution” with a series of inter-linked effects upon “the discursive strategies, organizational forms and control technologies” embedded within and used to legitimate public services. Networks, personalisation and customisation are among the concepts it privileges. Universities are “by no means exempt from these underlying structural pressures and the ideological momentum that they generate”, having become more like “workplaces” than “communities of scholars”2 . The restructuring inspired by NM and NPM has significant and long-term consequences for academic communities. I would argue, uncontroversially (?), that the promotion of digital media is one of those consequences.
This leads me to some more general thoughts about academia which are only loosely relevant to my work but which are certainly relevant to my status as a junior member of the academic community. Whether the changing management models mentioned above are good or bad depends on the position of the observer – specifically, on what the observer and his/her community stand to gain or lose. Naturally, it can be hard to criticise a system on which you rely for employment, especially when the changes you are critical of seem utterly inevitable and “just the way things are now”. It is hard to know what the effects of changing technologies will be. It is easier for a journalist or other commentator to decry what is happening to HE (for instance, the current funding cuts) than it is for a low-grade academic on a temporary contract or in a department facing an uncertain future. Clearly, if attitudes based on a combination of fear and the desire for self-preservation (if not whole-hearted subscription to the new ideals) become endemic, there are likely to be depressing consequences. I’m sure we can all imagine those. The nightmare scenario is a group of ‘yes men’ and women unquestioningly serving highly paid superiors who do not necessarily possess or appreciate the core intellectual and social values of the average academic. Rather than scholars choosing the methods and tools that best fit a particular course or project, those methods and tools will be imposed from above. More positively, it may be possible for those in all fields of enquiry to adapt to the new market-centred regime without totally compromising. Despite inevitable concessions, traditional values and ideals might quietly be defended and promoted from within. This enterprise is a difficult one – a little like brokering for peace when outside the negotiating room there are bodies and ‘collateral damage’.
1Krull, W. (2000). Beyond the Ivory Tower: Some Observations on External Funding of Interdisciplinary Research in Universities. In Nico Stehr and Peter Weingart (eds.), Practising Interdisciplinarity. University of Toronto Press, pp. 260-270.
Recently, I’ve started to analyse data gathered via the online questionnaire which is central to my thesis. This means getting acquainted with a little more statistical analysis than I am comfortable with – i.e. pushing beyond the basics of mean, median, mode, and standard deviation, although naturally I first had to refresh my memory of those as well. Armed with a copy of IBM’s excellent SPSS program (essentially, software for undertaking both basic and complex statistical analyses) and a copy of Julie Pallant’s SPSS Survival Manual, the complicated cycle begins: deriving numbers from text, recoding existing numbers into other numbers ➞ extrapolating from those numbers other, more illuminating numbers ➞ interpreting these new numbers and turning them back into narrative and prose. Let’s just say that translating my research questions into something that conforms to the mystical world of dependent and independent variables is an intriguing process. Initial tests have led me to make a number of observations, some of which I think are worth sharing, especially with other humanities/social science researchers:
- Contrary to popular misconceptions about the coolly objective, operating-manual style of science, there are, if you care to look beyond the basics, almost as many disagreements about method, applicability and interpretation when it comes to statistics as there are about whether or not God exists. Well, okay, maybe not quite as many. But you get my point.
- The reassuring tone of a beginner’s textbook is wonderful but also dangerous. Particular authors will recommend making certain assumptions and using certain techniques that other authors argue just as convincingly against. Choosing one over the other may appear a trivial decision, if (as a novice) you are even aware of the debate to begin with. In reality, the decision you make about which author to trust can make a huge difference to the output you end up with – an output that cuts (or seems to cut) right to the heart of your research.
- Debates flagged up in various books are troubling and usually glossed over – can we really charge ahead with parametric tests when data does not look very normal? To what extent is it justifiable to manipulate (i.e. alter) data so that different more “robust” tests can be used? If I will never in a million years understand the maths behind a given procedure, how confident can I ever really be about using it?
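For what it’s worth, even a crude screen can show when the normality assumption is shaky. Here is a minimal sketch in Python rather than SPSS, purely for illustration – the data are invented, the skewness formula is the basic sample version, and the |skewness| > 1 cut-off is just a common rule of thumb, not a substitute for a proper test like Shapiro–Wilk:

```python
import statistics

def looks_skewed(data, threshold=1.0):
    """Crude non-normality screen via sample skewness.
    |skewness| > 1 is a common rule-of-thumb warning sign."""
    n = len(data)
    m = statistics.mean(data)
    s = statistics.stdev(data)
    skew = (sum((x - m) ** 3 for x in data) / n) / s ** 3
    return abs(skew) > threshold

# A right-skewed invented sample (e.g. minutes spent on a questionnaire,
# with one respondent leaving the browser tab open):
times = [2, 3, 3, 4, 5, 5, 6, 7, 40]
print(looks_skewed(times))  # True -- think twice before a t-test
```

If a screen like this flags a long tail, the textbooks then diverge: transform the data, switch to a non-parametric test, or press on regardless – which is exactly the kind of judgement call I mean.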
As a result of all of this, statistics are often sloppily applied or deliberately misused; researchers proceed from all the wrong assumptions because they don’t really know what they are dealing with, or because they already know what result they want – safe in the knowledge that most readers will skip ahead to the conclusions without digging very deeply. Naturally, there will be differences according to academic field (very relevant for my work!) in how statistics are perceived, used and justified. Young Min Baek writes of statistics in communication studies:
Like most social scientific terms, statistical terms and their findings are academically and/or socially (re)constructed facts. Statistical methods are not given, but created and (re)constructed for specific reasons in various disciplines before the birth of the communication field. Methodological myths, such as subjectivity or neutrality, are reinforced by learning of statistics as something given, not as something constructed. Learning something established does not demand critical minds that statistics can be changed for more appropriate understanding of communication. Communication students simply learn statistics from a communication methodology course, or an introductory statistics course. Most, if not all, students rarely have an interest in how statistical terms or concepts are born and (re)constructed throughout intellectual history in diverse academics. They just learn the basic logic and its applications to the understanding of social worlds.1
A friend who knows just a little bit more about all this than me suggested:
If you want to get some excitement out of statistics, ignore classical probability theory and use quantum probabilities. Statistics could be more fun than the usual Kolmogorovian bore, if only statisticians would not be so boring themselves…
Hmm. Right. I think maybe what he means by that is that standard statistical methods do not capture the subtlety at the heart of chaotic “reality”. But I can’t be sure. Software helps us but also flatters us, letting us click buttons and tick boxes to pretend that we are in some ways mathematicians. For that, I am grateful but also (as a “truth-seeker”) a little concerned. How far I can go beyond learning the basic logic is unclear, but at least I am aware of some of these issues. I have plenty more analysis ahead of me, and I’m sure it’s going to continue being challenging, infuriating, fun, and informative. Right now though, I feel like Mulder in The X-Files – the truth is out there, but I’m not sure if I will ever be able to prove it, or even if proof is the most relevant concept… watch this space!
Thursday saw me take my first trip to the Steel City, to present a paper at the University of Sheffield’s inaugural iFutures conference. Having been a student again for 2 years now, the 5am start was a bit of a shock to the system, so I was very happy to find a lovely little on-campus cafe selling amazingly fluffy two-egg omelettes and a decent Fairtrade coffee (extra strong, naturally). Wolfing these down and wondering why, in 30 years, I’d never before heard of Yorkshire’s “famous” Henderson’s Relish (have you?), I perused the day’s programme and gave my slides a final once-over. The conference – tagline: “the next 50 years”, since Sheffield’s iSchool is currently celebrating its 50th birthday – was run entirely by Postgrads and aimed to provide a “forum for students to present forward-thinking and challenging research” in an “encouraging environment”. The organisers had accordingly “blocked” (in tongue-in-cheek fashion) their iSchool seniors from attending, focussing instead on attracting an audience of young/early-career academics. This worked out well; the event was no less intellectual, stimulating or professional, but for the students presenting, the day was made less intimidating in that ideas could be exchanged and space carved out more freely without fear of overtly supervisory objections.
Topics included the impact of ICTs on informal scientific communication, Institutional Repositories in Thailand, Chemoinformatics, telehealth project management, the ways in which public libraries can pro-actively support and respond to their communities, and a “radical” new approach to the analysis of classification schemes. A post-lunch Pecha Kucha session saw us voting via an “audience participation device” for the best and most engaging presenter. Pecha Kucha, if you haven’t come across it, is a trendy but very fun method of rapid-fire presentation – 20 slides are pre-programmed to be on screen for only 20 seconds each, meaning that the presenter ends up “pitching” a vision as much as opening up a debate and therefore has to be more creative. Facing stiff competition, Simon Wakeling’s take on the Future of the Filter Bubble was decided most worthy of a prize. My own full-length paper, which was also well received, was more traditional, describing a methodology for assessing academics’ attitudes toward new media and why that matters.
So what is the future of our field, which might broadly be called “Information Science”? Predicting the future is a dubious enterprise, and in an age of almost maniacal technological development, it becomes even harder to know what is scientifically probable and what is just science-fiction. Still, disclaimers aside, we can make some informed speculations based on current socio-technical trends. Two impressive keynote speakers – Professor Diane Sonnenwald (University College Dublin and the University of North Carolina at Chapel Hill) and Vanessa Murdock (Principal Applied Researcher at Microsoft Research) – were on hand to share their views. Coming from quite different perspectives, both shared thoughts about where information science should, or might, concentrate its energies. As a group, we possess much expertise that could help solve pressing social and environmental problems: failing health, climate change, inequality, global insecurity. While remedies for these might be figured out by analysts of the “big data” coming from scientific sensors and digitally mediated environments, disaster prevention initiatives and “crisis informatics” will only be successful if those creating systems, strategies and technologies are supported by experts able to assess their impacts on work patterns, task performance, and their wider (often unconsidered) socio-cultural effects.
Describing her own research into 3D medical Telepresence devices, Professor Sonnenwald emphasised that information professionals must make sure we are “at the table” when research projects and funding priorities are discussed institutionally and internationally. The kind of analyses that we undertake may lead to short-term headaches for those developing products – for example, one of her studies showed a particular device to be more flawed than its initial backers supposed – however in the long run, this is a good thing not just for them but for all of us. It’s cheaper to address design issues pre- rather than post-production, and, economics aside, we must make sure that the groups whose problems we try to solve are not inadvertently given more of them by shimmering but naively designed solutions. In an age of algorithms and automation, information science is far from redundant.
Vanessa Murdock focussed on how we can map the world and its preoccupations through the harvesting and analysis of social media data. Location-aware and location-based services on smartphones and web-browsers are one obvious example; Microsoft and others are working hard to build the “hyper local” as well as the personalised into their products. If you’re in Oslo and you fancy a pizza, wouldn’t it be nice to see at a click which restaurant near you has a menu to match your dietary requirements, what other customers thought about it, and where, based on your tastes, you might go afterwards? Less trivially, it would be valuable for sociologists, political economists and others to discover with reliability precisely where most tweets about Revolution X are coming from, in order to ascertain the demographics of those tweeting them and what percentage of the population they actually represent. Naturally such applications are not without their issues. We need to think deeply about privacy, data protection, regulation and – at a technical level – the reliability of services based on data which are often difficult to interpret syntactically and semantically. Further, aren’t companies really just servicing the “Technorati”, treating them as typical of the needs and preferences of humanity when in fact they are only a small and (it might be argued) insubstantial minority? Reminding us of the need to understand the difference between solutions that work on “toy data” or simplified abstract models, and those which work when applied to reality, Murdock also pointed out that while “you should take the noble path and build things which are useful when possible, there is also a role for building things which are cool!”
Sheffield has about 60 PhD Students working in the two main research groups of their Information School, and it seems that the culture there is as lively as it is cutting edge. All of the presenters were really impressive and I’d like to thank the committee for putting together such a fun event. 🙂
This Thursday, our Digital Transformers Symposium finally took place and I am delighted to report that it was a huge success! Nineteen speakers presented diverse work from multiple Arts and Humanities subjects, sharing ideas and findings with one essential common theme – the ways in which digital technologies are transforming things – be they individuals, societies, or the ways in which we experience and understand. The day kicked off with a keynote speech by Dr. Jim Mussell, who took as his example the serial publications of Charles Dickens in Victorian magazine Household Words. This might seem an odd place to start – even if we are talking about digitised versions. After all, aren’t electronic versions just surrogates for higher quality originals? Dr. Mussell convinced us otherwise. By making use of digital tools for bibliographic analysis which at first glance seem utterly alien to the contexts of the works they are used to study, we may in fact find ourselves closer to the “truth” of a printed piece and its place in history. Magazines and books are objects. They are not the essence of a work but are “records of a set of cultural practices” – whether those practices involve binding and ink or binary and hyperlinks. By thinking about how digital versions of texts relate to their non-digital forebears, we might better appreciate that our interpretation of the past is always just that: an interpretation, within an imposed artificially linear narrative. Instead of being treated as deficient, inauthentic and lacking, new interfaces to old texts should be valued as enhancements – as transformative.
The day proceeded with four themed panels – Participation and Community Engagement; Methodological Challenges; Shifting Structures of Communication, and Audio-Visual Experiences. Two play sessions were also on offer – Introducing the geographic dimension to your research: GIS for the Humanities (led by Dr. Patricia Murrieta Flores) and Meaning and meaning-making: a social semiotic multimodal approach to contemporary issues in research (led by Professor Gunther Kress). All of our paper presenters were young, early-career researchers, from around the UK and beyond. For me, one of the best parts of the symposium was the sense of community that seemed to emerge once the sessions got underway. Even when discussing work far removed from their own, the audience were supportive and enthusiastic. This may be one of the key positives of digital media within the academy – particularly for young researchers. Whether you are a cultural theorist, a linguist, or an information scientist, a realignment of disciplinary boundaries creates opportunities to identify shared and new perspectives, enhanced by engagements with digital tech. Dr. Patricia Murrieta Flores explored with us how Geographic Information Systems, initially designed for engineers, scientists and planners, have become fruitful and fascinating tools for archaeologists and historians, who use them to identify and model patterns and trends of the earth, its artefacts, its people and their geo-politics, across space and time.
The day concluded with a stimulating and lively debate on the perils and potentials of Open Access Publishing as it relates to the Humanities and to Universities more generally. It is difficult to know at present whether the “Gold” or the “Green” route to Open Access publishing will prevail; most likely institutions will use some mixture of the two, with both becoming competitors in an increasingly uneven and costly publishing ecosystem based around entrenched and outmoded (?) notions of prestige and value. Those who can afford it will be driven towards Gold, with Green and its laudable aims of equity and freedom being pushed into the role of second-best.
Our expert panel (Drs. Cathy Urquhart, Paul Kirby, Alma Swan and Stephen Pinfield) hinted now and then at positive transformative potentials stemming from OA – Alma Swan in particular sees OA as a welcome tonic to old-fashioned models – but overall it seemed a rather gloomy picture, dictated as ever by economics and elitist notions of bettering one’s peers. Many academics wish to see a culture of openness, experimentation and sharing, with contributions valued for their merit. The harsh realities of convention and money make that something of a pipe dream. There will be limited budgets to pay article processing charges, so managers will be forced to ask which articles represent the best financial “return on investment” – too busy and pressurised to judge them on anything other than proxy criteria of quality that do not consider the intellectual value of a work in its own right. Well, that’s the doomsday scenario. However naively, I very much hope that freer forms of communication will emerge to combat that!
We will be uploading slides, videos and other materials to the official Digital Transformers website over the coming month, so please do check there for more details on the excellent papers and presentations that were given by the members of our nascent ERC network.
After months of hard work and planning, we have at last finalised the schedule for our Digital Transformers Symposium, happening on Thursday the 23rd of May, at MMU. Working in academia – and in particular on a PhD – it’s easy to get caught up in stress and uncertainty of various kinds. So it really is great to be able to pour energy into a community-based event like this, which everyone seems to be looking forward to. All of the workshops and papers sound amazing and Jo and I really couldn’t be more pleased at the quality and scope of submissions.
There aren’t any places left for the Symposium now (we only have room for 40): but tickets are still available for our Open Access Debate which opens to a wider audience later in the day. Hope to see some of you there! 🙂
On Monday night, I attended the latest in a series of thought-provoking events taking place within the Institute of Humanities and Social Science Research at MMU. As part of their Annual Research Programme, Dr David M. Berry (currently based at Swansea University and author of several books on digital cultures, software and code) had been invited to give a talk on the fundamental nature of Digital Humanities scholarship. Given the current changes taking place within MMU and many other universities as educational technologies arrive on campus, the talk naturally drew a large audience.
Berry took a rather critical approach in his lecture, raising a number of issues and problems around Digital Humanities as both an academic discipline and as a brand. Given how enthusiastic he is about DH, his criticism is highly informed and cannot be said to be of the reactionary sort. And really that was his whole point: as academics we must continue to raise difficult, challenging questions about the subject areas within which we are embedded. It was refreshing to have the all-too-tangible tensions between scholarly and business imperatives recognised in relation to DH. In terms of my own research, such debates are vital to understanding how academics in different fields relate to, understand, and use digital and new media.
Key philosophical questions about the nature(s) of digital environments and techniques are often overlooked by proponents of DH (although not, it must be said, by Cultural and Media theorists). Many nascent Digital Humanists are unsure what the term means – or what the core epistemic assumptions and problematics underlying their discipline are. Partly this is because Digital Humanities is an emerging and multi-disciplinary field, without clear historical traditions or organisational roots. Partly also it is because, for many Universities, “Digital Humanities” is something of a buzzword, with a surface level appeal considered enough in itself to attract new students and academics.
The danger is that Digital Humanists will become lost in computational formalisms, technologically-determinist methodologies, and the quantitative structural logic of engineers. They may lose sight of both the wider and the more detailed perspectives brought about by traditional methods for illuminating truths about discourse and humanity. There is also the risk – in a target-focused managerial culture – of being dazzled to the point of critical amnesia by the large public audiences that digital projects can garner when compared with the audiences available for “gold standard” outputs like monographs.
Yet so long as we are careful not to sell out or neglect our fundamental principles, Digital Humanities have much to offer. The Understanding Shakespeare project that Dr Berry showed us during his afternoon workshop was one such example. Multiple German translations of Shakespeare have been scanned, OCR’d and marked up, ready to be represented and queried digitally and visually. Analysing text and metadata computationally can reveal known and previously unknown correspondences and differences between editions, whether in terms of structure or content. As with many other data analysis and visualisation tools (e.g. Gephi, the Google Ngram Viewer and IBM’s Many Eyes), parameters can be set by researchers in a few easy steps and huge corpora can be explored – something almost impossible to do manually.
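To give a toy flavour of the kind of computational comparison involved – my own sketch, not the project’s actual method; the function, the tokenisation and the mini-“editions” are all invented for illustration:

```python
from collections import Counter
import re

def biggest_frequency_shifts(edition_a, edition_b, top=1):
    """Which words differ most in relative frequency between two
    editions of a text? (A toy stand-in for real corpus analysis.)"""
    def rel_freqs(text):
        # Naive tokeniser: lowercase runs of letters (incl. German umlauts)
        tokens = re.findall(r"[a-zäöüß]+", text.lower())
        return {w: c / len(tokens) for w, c in Counter(tokens).items()}
    fa, fb = rel_freqs(edition_a), rel_freqs(edition_b)
    diffs = {w: abs(fb.get(w, 0.0) - fa.get(w, 0.0)) for w in set(fa) | set(fb)}
    return sorted(diffs, key=diffs.get, reverse=True)[:top]

# Invented mini-"editions":
a = "the king the king speaks"
b = "the king speaks softly"
print(biggest_frequency_shifts(a, b))  # ['softly']
```

Real projects of course work at a vastly larger scale, with proper tokenisation, lemmatisation and TEI-style markup, but the underlying logic – count, normalise, compare – is much the same.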
For me, the take home message was that the Digital Humanities – regardless of specific instantiations within individual institutions – must “extend their critique to include society, politics, the economic and the cultural.” Many researchers are already doing this and I certainly aim to do so in my own work. At the same time, Humanities scholars must not forget the “traditional” core concerns of their fields – i.e. the human subject, speculative knowledge, interpretation, and the value of focused, close readings – even as they rearticulate those concerns in exciting ways via computational methods.
Last week, a fellow MMU PhD student (now also teaching at the University of Sheffield) and I received some excellent news – a funding bid that we submitted to the Arts and Humanities Research Council under their Collaborative Skills Development Call was successful! 🙂 The AHRC have funded a number of exciting and exemplary projects around Digital Transformations over the past year or so, and our Symposium will be a great opportunity for both of our departments to take part in that, extending and enriching their current Digital Humanities research. In May, we will be hosting a one-day event for UK-based Postgrads and Early Career Researchers. To quote from our official documentation:
Combining workshops, presentations, and networking opportunities, The Digital Transformers Symposium 2013 will be run jointly by Manchester Metropolitan University’s Department of Languages, Information and Communications and the University of Sheffield’s Information School (iSchool). The Workshop offers an exciting opportunity for all across the Humanities to explore the methodological and conceptual approaches and techniques required for the study of digital arenas. Further, the event will act as a platform for the creation of a young, cutting-edge academic network sustainable long-term.
We aim to include a range of exciting papers and discussions that make room not only for positive examples of DH practice but also for critiques and debates about some of its more problematic aspects; for instance, in terms of methodological foundations or the reliability of data. We’re also going to include a number of hands-on ‘play sessions’, where the 40-50 people taking part can get to grips with various types of digital research tool and learn more about how to use them.
Sorry I can’t be more specific at the moment, but of course it all depends on what ideas come in from potential contributors when we issue the Call for Papers in January. We do have a wishlist of presenters and some clear ideas about possible thematic strands – for instance, narratives of old and new media, media archaeology, data visualisation techniques, text mining, and Open Access Publishing, the last of which will be highly pertinent to participants from every discipline. Once the day’s schedule is in place, a major challenge will be making sure we create an event that participants find exciting, fun, and the sort of thing they’d like to continue being a part of longer-term. Watch this space (and others) for details of a forthcoming dedicated website and the CfP!