The Trouble With Big Data: insights from Jennifer Edmond, Jörg Lehmann, Mike Priddy and Nicola Horsley 

In this episode of our Open Scholar stars series, I am talking with a whole group: Jennifer Edmond (Trinity College Dublin), Jörg Lehmann (University of Tübingen), Mike Priddy (Data Archiving and Networked Services, DANS) and Nicola Horsley (Leeds Beckett University), on the occasion of the publication of their Open Access book, The Trouble With Big Data: How Datafication Displaces Cultural Practices. We discuss inherent disciplinary and cultural biases in doing data-driven research, digitization agendas, data and power, data metaphors, open access book publishing and more. Enjoy!

Hi Jennifer, Jörg, Nicola and Mike, and thanks for joining us! Could you start off by telling us a bit about what motivated the creation of your book? Who are the target audiences?


Jennifer: The book very much brings together the experiences of the KPLEX (Knowledge Complexity) project, on which we all worked together in 2017-2018. KPLEX was born of a frustration with some of the communication failures we had observed in Digital Humanities (DH) projects, where certain terms (like ‘data’ or ‘search’) would seem to function in an interdisciplinary context, but in fact be hiding a failure to communicate, due to different understandings of the implications of the terms, variations that depend heavily on epistemic culture. This left us wondering how widespread this kind of phenomenon was in non-DH big data research, and how we could explore the relationship between big data and culture. The answer seemed to be that the tensions were everywhere, which emboldened us to undertake a project using our varied cultural analysis skills to explore the kinds of questions that the move toward big data as a research and social knowledge-making paradigm leaves behind.

Jörg: A key motivation for writing the book was the insights we gained while conducting the KPLEX project. For me as a historian, these findings had to be integrated into long-term perspectives, e.g. with respect to how big data changes current epistemologies and the way research is conducted, and how power is exerted by those few companies that are able to deal with big data. Another motivation was to discuss the discourses around big data in a broader context, such as the claims to objectivity accompanying big data and the algorithms implemented to analyze them. Target audiences are (digital) humanists, people who deal with the digitization of cultural heritage, and critical observers of the field of big data, machine learning and “AI”.

Nicola: For me, as a sociologist, I wanted to expose the hidden processes that are elided by representations of datafication and bring to light the impact that people, in a wide variety of roles, have on what becomes big data, as well as the implications of big data practices for our relationship with social science and humanities knowledge. I was interested to expand on the cases of complexity we uncovered in our research to consider how far the effects of this paradigm shift might reach if we don’t re-centre the rigorous pursuit of (complex) knowledge as our cultural objective.

Let us come back to the KPLEX project for a question. Could you highlight the most surprising, most unexpected outcomes of this experience of data-focussed collaboration “with a group of people locked in a conflict”? What would be your advice for such multidisciplinary teams working on digitization?

Jennifer: I might have already answered this above, but the problem with building large scale, opaque systems in interdisciplinary contexts is that even if everyone has a basic understanding of the whole system (and a deep understanding of parts of it), there will always be places where we rely on heuristics, metaphors, models or other simplifications to bridge between the solid blocks.  If we aren’t really careful with these, it can lead to loss of time at best, and critical failures at worst.  This is of course even worse ‘out in the wild’, where consumers are using systems they don’t understand at all.  We are a very adaptable species, and of course we use tools we can’t build or fix all the time, but when that tool doesn’t just get us from point a to point b, but has an impact on how we understand the world down to the level of our own identities, the doorway to critical failures is wide open.

Returning to the question of research teams, I think that to manage these risks you need to be very explicit, in particular about terminology, models, concepts, and conceptualisations of success. You have to start from the understanding that things won’t always be clear, and let yourself be happily surprised when a critical look at the underlying valences proves you wrong! This is the ‘hidden work package’ of interdisciplinary projects: successful ones will always tacitly or explicitly manage this risk somehow (although sometimes that strategy is simply to keep the experts siloed in a more multidisciplinary structure).

Do you think data is still “a dirty word” across the arts and humanities domain?

Jörg: Sure, this is still the case. In the arts and humanities domain the term “data” is seen as a synecdoche for quantitative research, in the sense that data are equivalent to numbers. And while the domain has become aware of the quantitative-qualitative dichotomy in recent years, the capability of data to be both qualitative and quantitative is most often not perceived, nor are the bridges that exist between the two methodological approaches.

In the second chapter of your book, you investigate how different communities bring completely different conceptualizations to the word data, and what far-reaching epistemological consequences follow from them. Could you tell us about the most influential data metaphors you identified?

Jennifer: When I began digging into it, I was actually surprised how very broad the field of metaphors was (and how much work already existed on it). Of course my favourite was the one I discovered, quite literally: this idea of big data as a transcendent good has a lot of variations, from ‘the new oil’ to ‘the new bacon’, and it really messes with your mind to think about what kind of message this leaves you with: more than love, more than money, we need… big data! Another, rather sinister, category is the pool of extractive metaphors, like (again) data as oil, but also found in terms like data mining. These obscure the created nature of data, as if they could not have biases or gaps; they ground data in our minds as something so fundamental, and yet in reality they are anything but.

What are the biggest challenges you see in “transforming culture into data”? Do you see them well addressed in national and European digitization agendas?

Mike: It is unlikely that we will ever see a significant proportion of cultural objects (of interest to researchers) become digitised.

With museums each holding millions of objects and archives containing kilometres of document shelving, the ambition to digitise will always outstrip the available resources to do so. Thematic project digitisation, and the digitisation of documents upon request by some cultural heritage institutions, will simply lead to increased hiddenness of the unrequested and unpopular heritage resources, as researchers move to search engines for discovery, which prioritise those resources that have more inbound hyperlinks, obfuscating potentially more useful resources in the long tail of the search results. This is where interacting with the skilled and knowledgeable cultural heritage practitioner will outperform the mass results of the search engine.

Moreover, the process of scanning or photographing digitally does not, in itself, produce usable data. For textual documents, further processing and interpretation, requiring human intervention, is necessary to create data usable in knowledge extraction, linguistic pipelines, semantic graphs, etc. This is currently likely to be an additional bottleneck in the creation of historic cultural heritage data.
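To make that bottleneck concrete, here is a minimal sketch of such a scan-to-data pipeline. It is purely illustrative and not from the book: it assumes the pytesseract OCR wrapper and the spaCy NLP library are installed, and a hypothetical scanned page image named page_001.png.

```python
# Illustrative scan-to-data pipeline (a sketch, not the project's tooling).
# Assumptions: Tesseract + pytesseract are installed, spaCy's small English
# model is available, and "page_001.png" is a scanned document page.
from PIL import Image
import pytesseract
import spacy

# Step 1: OCR turns the image into raw text. For historical typefaces,
# handwriting or degraded paper, this output is typically noisy.
raw_text = pytesseract.image_to_string(Image.open("page_001.png"))

# Step 2: human intervention. As a crude stand-in for that labour,
# flag tokens with no alphanumeric characters for manual review.
suspicious = [tok for tok in raw_text.split()
              if not any(ch.isalnum() for ch in tok)]
print(f"{len(suspicious)} tokens flagged for manual review")

# Step 3: only corrected text is worth feeding into a linguistic
# pipeline, e.g. named-entity recognition for a semantic graph.
nlp = spacy.load("en_core_web_sm")
doc = nlp(raw_text)  # in practice: the human-corrected text
for ent in doc.ents:
    print(ent.text, ent.label_)
```

Even this toy pipeline illustrates the point: the scan alone yields noisy text that needs human correction before any downstream extraction is trustworthy.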

The book concludes by summarizing the key ethical questions in ensuring that datafication processes remain in line with the public good, rather than favouring the monocular vision associated with monopolistic companies. Could you mention tendencies in this respect that our readers should be especially aware of?

Nicola: I think it was important to us not to replicate the norms of the solutions culture we’re critiquing! Our research allowed us to reflect on threats and opportunities relating to the tension between commercial imperatives and the public good, new manifestations of inequality between the data-rich and data-poor, new minoritising subjectivities, the redistribution of power over knowledge, the fetishisation of data as truth, and the rejection of complexity as undesirable contamination. But it’s not our role to prescribe a particular model for improving on current practice. What we advocate is the recognition of humans and their biases in every step of knowledge production, and that the design of systems that change cultural heritage practitioners’ practice be driven by those practitioners themselves.

We were careful not to represent the trouble with big data as a choice between the dark side of technology and a Luddite retreat; rather, the march of datafication offers an opportunity for us to take stock of our knowledge environment and consider how well we agree on, and strive to serve, a set of priorities that will benefit our supposed aim of furthering human knowledge.

Speaking of the public good, publishing your book openly, for everyone to read, is a powerful gesture in this direction. Could you tell us about this experience? Do you think open access book publishing is still a privilege?

Jennifer: I work with a lot of scholars from around the world, and have become very sensitive to the privileges I have, associated with the country, job, institution etc. that I happen to belong to. Anything I can individually do to mitigate these inequalities is important for me to do. But personally, I have had a really great experience publishing an open access book – the 2020 volume Digital Technologies and the Practices of Humanities Research (Open Book Publishers). It was really inspiring to be able to see how many people were reading this book, and where. I really wanted the same for this new book, so when Bloomsbury offered us the opportunity to appear in the open access collection, it was not a question of if, but how!

Jörg: Open access publishing is definitely still a privilege, but the transition to open access will take place soon, and it will have massive consequences which fall under the rubric of “digital transformation”. The point here is that formerly, at a time when books were printed, publishing houses could be understood as a kind of quality filter. Currently this is no longer, or only partially, the case, and the challenge for us researchers is to establish quality control mechanisms beyond mere proofreading and beyond the quality assurance provided internally by the authors’ collective.

Mike: Even though publishing an open access book may still be a privilege, at least reading it will not be! I am very much aware that many of those who contributed through interviews and the like, and their fellow cultural heritage practitioners, do not necessarily have access to an academic library.

Nicola: Exactly. Providing open access to the book was really the least we could do. I think we would have felt a little deflated, if not hypocritical, if our critique of inequitable access to knowledge was inaccessible. That said, there were a lot of moments when writing this book that felt quite meta, if that word can still be used freely. In our research we, of course, looked for sources using the kind of searching tools we argue are problematic, and at times I was tempted to re-structure the book as a kind of autoethnography of the struggle to preserve our epistemic agency when researching how datafication displaces cultural practices … which is why it’s important to have good co-authors who keep you from falling into a black hole and ensure that the book is useful and accessible to a wide audience in that sense too.

Thank you for your time! It has been great to learn from your insight and experience!



