Assessing the Societal Impacts of Research in Violently Divided Societies (VDS)
The need for the systematic evaluation of the societal impacts1 of research is pressing and challenging in both non-conflict and conflict contexts. However, as we explore in the current chapter, and throughout this book, it is even more complex in VDS. The interplay of context, knowledge production and research utilisation is not easily untangled, let alone measured. The most obvious positive impact of research on a VDS is to increase the practical knowledge base of crucial development stakeholders — institutions, policy-makers and civil society — through analysis characterised ideally by methodological rigour, timeliness, relevance and usability. However, research (even when methodologically sound and scientifically valid) may also exacerbate tensions, for example, if its implications or conclusions are perceived to be threatening to the interests of one or more groups.2 As such, we need to understand the political and societal contexts within which research is embedded and through which impacts are mediated. We need to ask, therefore, how individuals and institutions appropriate (or misappropriate) research, and apply (or misapply) it, for the purposes of influencing policy and practice.
The evaluation of research, however, faces a number of particular challenges. First, conventional approaches to programme and policy evaluation, including the evaluation of research programmes, are largely linear. While such approaches are the mainstay of many funders, they are not designed to be applied to research, much less to research in violence-prone contexts. To the extent that evaluations are undertaken on initiatives in VDS, they focus on development, humanitarian and peacebuilding programmes rather than research per se. While research and research activities are often housed within projects and programmes that include multiple types of interventions such as training, education or service delivery (to name a few), mainstream approaches for evaluating these sorts of programme interventions are ill-suited for evaluating research. Similarly, these sorts of programmes are often funded by international aid agencies, rather than research funders. In these cases, research is but one component, embedded within a larger, multi-faceted package of interventions. Consequently, evaluations focus on the overall impact of the project/programme, rather than the research component per se.
Second, in the world of international aid, too often evaluation adopts an almost singular focus on accountability — typically accountability for resource efficiency. While accountability for resource use is of unquestionable importance, the learning function of evaluation is also essential. This tendency to eclipse the learning potential of evaluation inhibits prospects for the generation of knowledge in situations and contexts of deep complexity. It is precisely in these environments that we are most in need of innovative thinking and new ways of viewing old problems. Relatedly, standard approaches to programme evaluation also tend to place more emphasis on the tangible, short-term outputs of activities, rather than on the more subtle, and less easily measured, outcomes and impacts of research within a programme.
Third, conventional approaches to the evaluation of research have been dominated by two modes of assessment — peer review and bibliometric analysis. While these approaches have their strengths, one of the principal problems with both is that they tend to assess the merits of short-term research results (outputs such as papers, articles and books or research management processes) but tell us virtually nothing about research effectiveness — how research is used to influence social change and contribute to solving societal problems. Despite some advancements, bibliometrics and peer review dominate academic settings and, in the absence of viable alternatives, tend also to dominate the evaluation of extra-academic research — that is, research that is led by the broader policy research community and which includes think tanks, not-for-profit firms, governmental, non-governmental and inter-governmental actors. These approaches, while limiting enough in academia, are even less helpful in VDS settings where the social change objectives of much research are inextricably linked to, and influenced by, politics and the dynamics of violence.
Research institutes (inside and outside the walls of a university) and agencies that fund research have been involved in generating knowledge oriented towards policy and practice change. So too have they been involved in nurturing innovation in the theory and practice of evaluation. Nonetheless, our sense is that, to date, these two broad fields of peace research and evaluation have developed in isolation from each other. The current book attempts to bridge these divides by mapping out the most vexing challenges — and promising avenues — to understanding and evaluating the impact of research in VDS. The central concern of the book is to systematically examine how practice can be improved to better understand the difference that research makes in VDS. In tackling this issue, it quickly became apparent that the book and its contributors would need to explore largely uncharted intellectual terrain, answering two important questions: (a) Why is evaluating research so much more difficult in contexts affected by violence? and (b) What can we apply from current evaluation practice in the Global North and South to our assessment of the impacts of research in VDS?
To interrogate these questions, this book brings together four groups seeking improvements in the conduct and use of research and evaluation in VDS: those who undertake research of different types (researchers); those who commission research or the evaluation of research (referred to as funders, donors or grant makers); those who make use of research in the course of working in violence-prone environments (practitioners); and those who evaluate research and non-research interventions (evaluators). Of course, these groups are not mutually exclusive. For example, researchers may serve as evaluators; evaluators often undertake research; and decision-makers within donor agencies may be active or retired researchers.
These four groups work within VDS or on the issues that affect these societies. They are regularly called upon to make snap decisions in high-risk, high-stakes contexts. It is hoped that this book will help to prepare them (and the students who will eventually take their places) to anticipate, identify and respond effectively in these kinds of environments. By assembling contributors from these groups, our intention is to draw on, and harness, the experience and expertise needed to bring distinct fields of professional and intellectual activity together, and to develop a more self-conscious and systematic understanding of, and approach to, the evaluation of research on and in VDS.
The result is an exploration of the ways that research, power and politics interact in VDS. The analytical lens used to explore these interactions is drawn from the field of evaluation. We believe that this is a novel and fruitful approach for understanding more clearly and systematically the positive and negative role of research — and indeed any intervention — in VDS. An evaluative lens offers the possibility of bridging theory and practice, and ideas and impact. This particular focus locates the book on the interface between evaluation research and peace and conflict studies.
The objective of this introductory chapter is threefold: first, to present the rationale and objectives of the book; second, to orient the reader to the central concepts, issues, debates and challenges related to the conduct and evaluation of research in and on VDS3; and lastly, to sketch out the principal themes of the book.
The Two Faces of Research: Positive and Negative Societal Impacts
In many ways, we have only a rudimentary understanding of the societal and political effects or impacts of research — whether the research is undertaken by think tanks, research institutes, non-governmental organisations or by individual researchers inside or outside the walls of a university.4 Within university settings, there is increasing debate around how to assess the quality and extra-academic impacts of research (i.e., how research is used and contributes to larger goals of social and economic betterment). As discussed in the literature review undertaken in Chapter 2, the intensity and motivations of this debate are complex — and are tied as much to funding politics as to the desire to optimise impacts beyond the walls of the university. Outside the university setting, efforts are also being made to evaluate the spread, use and influence of social change-focused research on policy and practice. The fruits of these efforts and the advances made in thinking about research effectiveness (understood as research use, influence and extra-academic or societal impacts) are evident in the chapters that make up this book.
In orienting ourselves to the evaluation of research, we need to bear in mind that research may have both positive and negative effects. As a social good, research can increase opportunities for cooperation and collaboration within and between divided groups. It can inform public policy decision-making, particularly in settings where new spaces arise for dialogue, and where there is an increase in the receptivity of policy-makers to make use of research findings.
The above excerpt has been taken from the chapter ‘Research, Impact and Politics in Violently Divided Societies: An Evaluative Lens for Small Scale Maps’.
Excerpted with permission from
Evaluation in the Extreme: Research, Impact and Politics in Violently Divided Societies
Edited by Kenneth Bush and Colleen Duggan
Sage India
ISBN 978-9351503941
348pp.