It’s been a long time now since I blogged on here. Life has been busy, not least due to a new full-time role at the University of Manchester Library, working in market research and data analysis. For a little insight into just one of the many projects I am involved in there, here is a guest blog I wrote for the fascinating Books Right Here Right Now project.
Hopefully I’ll find time to write more again here soon. If only I can finish the PhD at long last…
Researching the ways in which new media and digital technologies are positioned in modern universities means reading about the changing models of governance, management and administration that shape how research and teaching are conducted. This proves very revealing in unexpected ways and—appropriate to my focus on discipline characteristics—there are clear differences in work from different fields. In the Humanities, and to a lesser extent, the Social Sciences, authors (even when offering some critique) discuss the ‘essence’ of digital media, often imbuing them with a quasi-magical quality. In these largely techno-determinist narratives, new media and digital technologies directly enact a “paradigm shift” that is inevitable because they innately (so it appears) alter our social relations, and relations of space-time. In fact, these technologies are agents or instruments of a shift with origins located elsewhere. Authors writing about what might be called ‘digitalism’ and science tend to be more prosaic. They acknowledge quite coolly that the increasing dominance of digital technologies on campus and in research ‘workflows’ is directly linked to the priorities of the market economy. This is obvious when we think about the emphasis on “innovation” (developing commodifiable products of interest to commercial private sector companies, who may in turn outsource their R&D activities) and on training students to adopt this same mind-set while charging them ever-increasing fees. University infrastructure becomes the final service-platform of a top-down government agenda. This agenda is reinforced by managers and normalised (through various means) in the minds of those working at lower levels—i.e. research and teaching staff.
Clearly digital technologies offer many exciting opportunities, regardless of debates on their nature. They are fascinating to explore, which is why I have chosen them as my topic of research. But we must remain aware of the wider systems of which they are a part, and this is why I am interested to consider them in relation to the academic “habitus”. Krull (2010) reminds us that nowadays, most Western governments view the funding of Higher Education as a “strategic investment” and that with limited finances available to support that investment, a focus on “public-private partnerships” and inter-disciplinarity are the logical outcomes of current political logic, focused as it is on “knowledge economies”1. At the same time, “market populism” and “consumer democracy” have become “ideological lodestones against which all new policies [in the public sector] must be evaluated”, as Deem, Hillyard and Reed (2007) explain. The formation and dominance of neo-liberal New Managerialism (NM) and New Public Management (NPM) theories are part of a “cultural revolution” with a series of inter-linked effects upon “the discursive strategies, organizational forms and control technologies” embedded within and used to legitimate public services. Networks, personalisation and customisation are among the concepts it privileges. Universities are “by no means exempt from these underlying structural pressures and the ideological momentum that they generate”, having become more like “workplaces” than “communities of scholars”2. The restructuring inspired by NM and NPM has significant and long-term consequences for academic communities. I would argue, uncontroversially (?), that the promotion of digital media is one of those consequences.
This leads me to some more general thoughts about academia which are only loosely relevant to my work but which are certainly relevant to my status as a junior member of the academic community. Whether the changing management models mentioned above are good or bad depends on the position of the observer: specifically, on what the observer and his/her community have to gain or lose. Naturally, it can be hard to criticise a system on which you rely for employment, especially when the changes you are critical of seem utterly inevitable and “just the way things are now”. It is hard to know what the effects of changing technologies will be. It is easier for a journalist or other commentator to decry what is happening to HE (for instance, the current funding cuts) than it is for a low-grade academic on a temporary contract or in a department facing an uncertain future. Clearly, if attitudes based on a combination of fear and the desire for self-preservation (if not whole-hearted subscription to the new ideals) become endemic, there are likely to be depressing consequences. I’m sure we can all imagine those. The nightmare scenario is a group of ‘yes men’ and women unquestioningly serving highly paid superiors who do not necessarily possess or appreciate the core intellectual and social values of the average academic. Rather than scholars choosing the methods and tools that best fit a particular course or project, these will be imposed from above. More positively, it may be possible for those in all fields of enquiry to adapt to the new market-centred regime without totally compromising. Despite inevitable concessions, traditional values and ideals might quietly be defended and promoted from within. This enterprise is a difficult one—a little like brokering for peace when outside the negotiating room there are bodies and ‘collateral damage’.
1 Krull, W. (2010). Beyond the Ivory Tower: Some Observations on External Funding of Interdisciplinary Research in Universities. In N. Stehr and P. Weingart (eds.), Practising Interdisciplinarity. University of Toronto Press, pp. 260–270.
Recently, I’ve started to analyse data gathered via the online questionnaire which is central to my thesis. This means having to get acquainted with a little bit more statistical analysis than I am comfortable with – i.e. pushing beyond the basics of mean, median, mode, and standard deviation, although naturally I first had to refresh my memory of those as well. Along with a copy of IBM’s excellent SPSS program (essentially, software for undertaking both basic and complex statistical analyses) and a copy of Julie Pallant’s SPSS Survival Manual, the complicated cycle of deriving numbers from text, recoding existing numbers into other numbers ➞ extrapolating from those numbers other, more illuminating numbers ➞ interpreting and then turning these new numbers back into narrative and prose, begins! Let’s just say that adjusting my research questions into something that will conform with the mystical world of dependent and independent variables is an intriguing process. Initial tests have led me to make a number of observations, some of which I think are worth sharing, especially with other humanities/social science researchers:
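For anyone curious what the “basics” half of that cycle actually involves, here is a minimal sketch in Python rather than SPSS (the response values are invented purely for illustration, e.g. Likert-style answers recoded as 1–5):

```python
import statistics

# Hypothetical questionnaire responses, already recoded from text to numbers (1-5)
responses = [4, 3, 5, 2, 4, 4, 3, 5, 1, 4]

mean = statistics.mean(responses)      # arithmetic average
median = statistics.median(responses)  # middle value of the sorted list
mode = statistics.mode(responses)      # most frequent value
stdev = statistics.stdev(responses)    # sample standard deviation

print(f"mean={mean}, median={median}, mode={mode}, stdev={stdev:.3f}")
```

The interpretive work, of course, starts only after numbers like these are turned back into prose.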
- Contrary to popular misconceptions about the coolly objective operating manual-style of science, there are, if you care to look beyond basics, almost as many disagreements about method, applicability and interpretation when it comes to statistics as there are about whether or not god exists. Well, okay, maybe not quite as many. But you get my point.
- The reassuring tone of a beginner’s textbook is wonderful but also dangerous. Particular authors will recommend making certain assumptions and using certain techniques that other authors argue just as convincingly against. Using one over the other may appear a trivial decision, if you are even aware (as a novice) of the debate to begin with. In reality, the decision you make about which author to trust can make a huge difference to the output you end up with. An output that cuts (or seems to) right to the heart of your research.
- Debates flagged up in various books are troubling and usually glossed over – can we really charge ahead with parametric tests when the data do not look very normal? To what extent is it justifiable to manipulate (i.e. alter) data so that different, more “robust” tests can be used? If I will never in a million years understand the maths behind a given procedure, how confident can I ever really be about using it?
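That first worry – “does my data look normal enough?” – can at least be probed empirically rather than by eyeballing a histogram. A small sketch using SciPy’s Shapiro–Wilk test (the samples here are randomly generated stand-ins, not my questionnaire data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented example data: one roughly normal sample, one heavily skewed one
normal_ish = rng.normal(loc=50, scale=10, size=200)
skewed = rng.exponential(scale=10, size=200)

# Shapiro-Wilk: the null hypothesis is that the sample was drawn from a
# normal distribution; a small p-value suggests the parametric assumption
# behind tests like the t-test is shaky.
for name, sample in [("normal-ish", normal_ish), ("skewed", skewed)]:
    stat, p = stats.shapiro(sample)
    print(f"{name}: W={stat:.3f}, p={p:.4f}")
```

Even here the textbook disagreements resurface: some authors say such tests are over-sensitive with large samples and under-powered with small ones, which rather proves the point above.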
As a result of all of this, statistics are often sloppily applied or deliberately misused; researchers proceed from all the wrong assumptions because they don’t really know what they are dealing with, or because they already know what result they want. And since most readers skip ahead to the conclusions anyway, it can be assumed that nobody will dig very deeply. Naturally, there will be differences according to academic field (very relevant for my work!) in how statistics are perceived, used and justified. Young Min Baek writes of statistics in communication studies:
Like most social scientific terms, statistical terms and their findings are academically and/or socially (re)constructed facts. Statistical methods are not given, but created and (re)constructed for specific reasons in various disciplines before the birth of the communication field. Methodological myths, such as subjectivity or neutrality, are reinforced by learning of statistics as something given, not as something constructed. Learning something established does not demand critical minds that statistics can be changed for more appropriate understanding of communication. Communication students simply learn statistics from a communication methodology course, or an introductory statistics course. Most, if not all, students rarely have an interest in how statistical terms or concepts are born and (re)constructed throughout intellectual history in diverse academics. They just learn the basic logic and its applications to the understanding of social worlds.1
A friend who knows just a little bit more about all this than me suggested:
If you want to get some excitement out of statistics, ignore classical probability theory and use quantum probabilities. Statistics could be more fun than the usual Kolmogorovian bore, if only statisticians would not be so boring themselves…
Hmm. Right. I think maybe what he means by that is that standard statistical methods do not capture the subtlety at the heart of chaotic “reality”. But I can’t be sure. Software helps us but also flatters us, letting us click buttons and tick boxes to pretend that we are in some ways mathematicians. For that, I am grateful but also (as a “truth-seeker”) a little concerned. How far I can do any more than learn the basic logic is unclear, but at least I am aware of some of these issues. I have plenty more analysis ahead of me, and I’m sure it’s going to continue being challenging, infuriating, fun, and informative. Right now though, I feel like Mulder in The X-Files – the truth is out there, but I’m not sure if I will ever be able to prove it, or even if proof is the most relevant concept…watch this space!
Presenting a paper at Sheffield University’s inaugural iFutures conference, Thursday saw me taking my first trip to the Steel City. Having been a student again for 2 years now, I found the 5am start a bit of a shock to the system, so I was very happy to find a lovely little on-campus cafe selling amazingly fluffy two-egg omelettes and a decent Fairtrade coffee (extra strong, naturally). Wolfing these down and wondering why, in 30 years, I’d never before heard of Yorkshire’s “famous” Henderson’s Relish (have you?) I perused the day’s programme and gave my slides a final once-over. The conference – tagline: “the next 50 years”, since Sheffield’s iSchool is currently celebrating its 50th birthday – was run entirely by Postgrads and aimed to provide a “forum for students to present forward-thinking and challenging research” in an “encouraging environment”. The organisers had accordingly “blocked” (in tongue-in-cheek fashion) their iSchool seniors from attending, focussing instead on attracting an audience of young/early-career academics. This worked out well; the event was no less intellectual, stimulating or professional, but for the students presenting, the day was made less intimidating in that ideas could be exchanged and space carved out more freely without fear of overt supervisory objections.
Topics included the impact of ICTs on informal scientific communication, Institutional Repositories in Thailand, Chemoinformatics, telehealth project management, the ways in which public libraries can pro-actively support and respond to their communities, and a “radical” new approach to the analysis of classification schemes. A post-lunch Pecha Kucha session saw us voting via an “audience participation device” for the best and most engaging presenter. Pecha Kucha, if you haven’t come across it, is a trendy but very fun method of rapid-fire presentation – 20 slides are pre-programmed to be on screen for only 20 seconds each, meaning that the presenter ends up “pitching” a vision as much as opening up a debate and therefore has to be more creative. Facing stiff competition, Simon Wakeling’s take on the Future of the Filter Bubble was decided most worthy of a prize. My own full-length paper, which was also well received, was more traditional, describing a methodology for assessing academics’ attitudes toward new media and why that matters.
So what is the future of our field, which might broadly be called “Information Science”? Predicting the future is a dubious enterprise, and in an age of almost maniacal technological development, it becomes even harder to know what is scientifically probable and what is just science-fiction. Still, disclaimers aside, we can make some informed speculations based on current socio-technical trends. Two impressive keynote speakers – Professor Diane Sonnenwald (University College Dublin and the University of North Carolina at Chapel Hill) and Vanessa Murdock (Principal Applied Researcher at Microsoft Research) – were on hand to share their views. Coming from quite different perspectives, both shared thoughts about where information science should, or might, concentrate its energies. As a group, we possess much expertise that could help solve pressing social and environmental problems; failing health, climate change, inequality, global insecurity. While remedies for these might be figured out by analysts of the “big data” coming from scientific sensors and digitally mediated environments, disaster prevention initiatives and “crisis informatics” will only be successful if those creating systems, strategies and technologies are supported by experts able to assess their impacts on work patterns, task performance, and their wider (often unconsidered) socio-cultural effects.
Describing her own research into 3D medical Telepresence devices, Professor Sonnenwald emphasised that information professionals must make sure we are “at the table” when research projects and funding priorities are discussed institutionally and internationally. The kind of analyses that we undertake may lead to short-term headaches for those developing products – for example, one of her studies showed a particular device to be more flawed than its initial backers supposed; in the long run, however, this is a good thing not just for them but for all of us. It’s cheaper to address design issues pre- rather than post-production, and, economics aside, we must make sure that the groups whose problems we try to solve are not inadvertently given more of them by shimmering but naively designed solutions. In an age of algorithms and automation, information science is far from redundant.
Vanessa Murdock focussed on how we can map the world and its preoccupations through the harvesting and analysis of social media data. Location-aware and location-based services on smartphones and web-browsers are one obvious example; Microsoft and others are working hard to build the “hyper local” as well as the personalised into their products. If you’re in Oslo and you fancy a pizza, wouldn’t it be nice to see at a click which restaurant near you has a menu to match your dietary requirements, what other customers thought about it, and where, based on your tastes, you might go afterwards? Less trivially, it would be valuable for sociologists, political economists and others to discover with reliability precisely where most tweets about Revolution X are coming from in order to ascertain the demographics of those tweeting them and what percentage of the population they actually represent. Naturally such applications are not without their issues. We need to think deeply about privacy, data protection, regulation and – at a technical level – the reliability of services based on data which are often difficult to interpret syntactically and semantically. Further, aren’t companies really just servicing the “Technorati”, treating them as typical of the needs and preferences of humanity when in fact, they are only a small and, it might be argued, insubstantial minority? Reminding us of a need to understand the difference between solutions that work on “toy data” or simplified abstract models, and those which work when applied to reality, Murdock also pointed out that while “you should take the noble path and build things which are useful when possible, there is also a role for building things which are cool!”
Sheffield has about 60 PhD Students working in the two main research groups of their Information School, and it seems that the culture there is as lively as it is cutting edge. All of the presenters were really impressive and I’d like to thank the committee for putting together such a fun event. 🙂
I’ve just started a summer research placement project with the Manchester Digital Laboratory – aka MadLab – and it’s already proving to be an eye-opener. The theme of our project is communities, which are MadLab’s raison d’être. But although I’ve heard people talking about it more than once over the past year, I have to admit I’ve never actually been there before now. Seeing the space and how it’s used is pretty inspiring. Around 50 groups use MadLab regularly, with many more hiring it for one-off or special events – performances, workshops, training sessions. At the same time, it’s friendly, down-to-earth and totally unpretentious, buzzing with a relaxed creativity that attracts groups as diverse as Android developers, poets, and budding taxidermists, who drop in and out to share ideas, crack on with work, and generally have a nice time doing what they are passionate about with others who feel the same. It could be hard to find space otherwise. So, that’s the sales pitch, right? Well, actually, it’s entirely accurate. So it seems to me anyway. Finding useful and exciting ways to demonstrate what MadLab is all about – using data, graphics, and the 9 days we have available to us – is the aim of our MadLab Community Networking Project. It’s going to be an interesting challenge!
With input and advice from MadLab’s Dave Mee and DARE’s David Jackson, graphic designer/researcher Anna Frew and myself are going to be gathering, organising and manipulating information about the techies, creatives and other enthusiasts who bring MadLab to life. What are the characteristics of these groups and what are the connections between them? Who and what drives them? How active are they and how do they intersect with public or private sector organisations elsewhere in the city? There are myriad ways to look at the data. Sifting through different sources and different types of documentation, we can identify what we know and what we need to know. Then we can start gathering information from the groups themselves, fleshing everything out and filling in the gaps. Our aim is to shed new light on MadLab, mapping and modelling the networks that operate inside and around it and making it clearer how they fit within its ecosystem. My task is to bring some structure to a bundle of data and metadata, and enrich it. After which, Anna will begin to create some at once beautiful and informative visualisations, giving us multiple perspectives on MadLab’s communities. Naturally this will all end up online at some point. Or so I imagine. The details aren’t yet entirely clear since we’re only just getting started. If you want to know more about our emerging workflows and thought-processes, please do go over to Anna’s blog and read her excellent write-up of what we’ve been doing in Weeks 1 and 2.
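As a toy illustration of the kind of network modelling involved, groups can be treated as nodes and linked whenever they share a member (all names and memberships below are invented; the real MadLab data is far richer and messier):

```python
from collections import defaultdict
from itertools import combinations

# Invented example: which (hypothetical) groups each person attends
memberships = {
    "alice": {"Android Developers", "Poets"},
    "bob": {"Android Developers", "Taxidermists"},
    "carol": {"Poets", "Taxidermists"},
    "dave": {"Android Developers"},
}

# Build a weighted edge list: two groups are linked whenever someone
# attends both; the weight counts how many people they share.
edges = defaultdict(int)
for groups in memberships.values():
    for a, b in combinations(sorted(groups), 2):
        edges[(a, b)] += 1

for (a, b), weight in sorted(edges.items()):
    print(f"{a} -- {b} (shared members: {weight})")
```

An edge list like this is exactly the sort of structured intermediate that a designer can then turn into a network visualisation.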
This Thursday, our Digital Transformers Symposium finally took place and I am delighted to report that it was a huge success! Nineteen speakers presented diverse work from multiple Arts and Humanities subjects, sharing ideas and findings with one essential common theme – the ways in which digital technologies are transforming things – be they individuals, societies, or the ways in which we experience and understand. The day kicked off with a keynote speech by Dr. Jim Mussell, who took as his example the serial publications of Charles Dickens in Victorian magazine Household Words. This might seem an odd place to start – even if we are talking about digitised versions. After all, aren’t electronic versions just surrogates for higher quality originals? Dr. Mussell convinced us otherwise. By making use of digital tools for bibliographic analysis which at first glance seem utterly alien to the contexts of the works they are used to study, we may in fact find ourselves closer to the “truth” of a printed piece and its place in history. Magazines and books are objects. They are not the essence of a work but are “records of a set of cultural practices” – whether those practices involve binding and ink or binary and hyperlinks. By thinking about how digital versions of texts relate to their non-digital forebears, we might better appreciate that our interpretation of the past is always just that: an interpretation, within an imposed artificially linear narrative. Instead of being treated as deficient, inauthentic and lacking, new interfaces to old texts should be valued as enhancements – as transformative.
The day proceeded with four themed panels – Participation and Community Engagement; Methodological Challenges; Shifting Structures of Communication, and Audio-Visual Experiences. Two play sessions were also on offer – Introducing the geographic dimension to your research: GIS for the Humanities (led by Dr. Paty Murrieta Flores) and Meaning and meaning-making: a social semiotic multimodal approach to contemporary issues in research (led by Professor Gunther Kress). All of our paper presenters were young, early-career researchers, from around the UK and beyond. For me, one of the best parts of the symposium was the sense of community that seemed to emerge once the sessions got underway. Even when discussing work far removed from their own, the audience were supportive and enthusiastic. This may be one of the key positives of digital media within the academy – particularly for young researchers. Whether you are a cultural theorist, a linguist, or an information scientist, a realignment of disciplinary boundaries creates opportunities to identify shared and new perspectives, enhanced by engagements with digital tech. Dr. Patricia Murrieta Flores explored with us how Geographic Information Systems, initially designed for engineers, scientists and planners, have become fruitful and fascinating tools for archaeologists and historians, who use them to identify and model patterns and trends of the earth, its artefacts, its people and their geo-politics, across space and time.
The day concluded with a stimulating and lively debate on the perils and potentials of Open Access Publishing as it relates to the Humanities and to Universities more generally. It is difficult to know at present whether the “Gold” or the “Green” route to Open Access publishing will prevail; most likely institutions will use some mixture of the two, with both becoming competitors in an increasingly uneven and costly publishing ecosystem based around entrenched and outmoded (?) notions of prestige and value. Those who can afford it will be driven towards Gold, with Green and its laudable aims of equity and freedom being pushed into the role of second-best.
Our expert panel (Drs. Cathy Urquhart, Paul Kirby, Alma Swan and Stephen Pinfield) hinted now and then at positive transformative potentials stemming from OA – Alma Swan in particular sees OA as a welcome tonic to old-fashioned models – but overall it seemed a rather gloomy picture, dictated as ever by economics and elitist notions of bettering one’s peers. Many academics wish to see a culture of openness, experimentation and sharing, with contributions valued for their merit. The harsh realities of convention and money make that something of a pipe dream. There will be limited budgets to pay article processing fees; hence managers will be forced to ask which articles represent the best financial “return on investment”, too busy and pressurised to judge them on anything other than proxy criteria of quality that do not consider the intellectual value of a work in its own right. Well, that’s the doomsday scenario. However naively, I very much hope that freer forms of communication will emerge to combat that!
We will be uploading slides, videos and other materials to the official Digital Transformers website over the coming month, so please do check there for more details on the excellent papers and presentations that were given by the members of our nascent ERC network.
After months of hard work and planning, we have finalised the schedule for our Digital Transformers Symposium, happening on Thursday the 23rd of May, at MMU. Working in academia – and in particular on a PhD – it’s easy to get caught up in stress and uncertainty of various kinds. So it really is great to be able to pour energy into a community-based event like this, which everyone seems to be looking forward to. All of the workshops and papers sound amazing and Jo and I really couldn’t be more pleased at the quality and scope of submissions.
There aren’t any places left for the Symposium now (we only have room for 40): but tickets are still available for our Open Access Debate which opens to a wider audience later in the day. Hope to see some of you there! 🙂