I completely understand the need to evaluate public diplomacy programs: they must “receive financial support from government agencies, foundations, corporations, and/or individual donors,” and demonstrating their worth to those funders is how they solidify that support (Olberding). Evaluation is a very complicated issue, however, considering that most programs pursue long-term goals of changing, so to speak, the “hearts and minds” of local people in the target country (some more than others).
I really appreciated Julia’s post this week, which discussed more concretely the ideas presented by the Foreign and Commonwealth Office (FCO) and the British Council. As she notes, these organizations have grounded their evaluations in an awareness of the long-term and ongoing challenges of assessment. As Banfield’s statement indicates, however, these long-term effects and shifting impacts are “very difficult to measure year to year.” Professor Hayden’s lecture this week emphasizes the same point, citing a statement Tara Sonenshine made as Under Secretary for Public Diplomacy and Public Affairs in a speech about public diplomacy and the challenge of evaluating its long-term impacts. He suggests that her statement is accurate but not enough of a solution. I agree, though unfortunately I have not yet found a solution of my own.
The State Department’s International Visitor Program has done a somewhat effective job of demonstrating its impact. Evaluators in Philadelphia found, among other results, that “97.1% of hosts/resources agreed that hosting and/or interacting with foreign visitors participating in exchange programs promotes mutual understanding among Americans and foreigners; 94.2% reported having ‘basic to advanced knowledge’ about the culture and country of the foreign visitors immediately after the hosting experience, compared to 75.3% before the experience….” These statistics clearly show an impact on the people involved, and Olberding and Olberding further note that “youth peacebuilding or exchange programs can impact not only the exchange students who travel and stay in other countries (that is, the direct participants), but also the other individuals who are involved with the program in less direct ways, including chaperones who may travel with the exchange students, host families, and students and teachers in the host school (that is, indirect participants).” Is this enough? That is, does knowing that a wide range of people involved in such exchanges (one example of a public diplomacy program) are affected in a largely positive way help us evaluate the overall success of such programs? A more favorable opinion of another country does not necessarily lead to further action on that individual’s part. So although I personally find these programs and interactions extremely valuable in the long term, giving people around the world a more humane and deeper understanding of their neighbors, I wonder whether that claim is enough to convince a foundation to keep paying for them.
Going back to Julia’s post, I very much hope that the evidence-based, innovative evaluative model she describes will work and change the future of public diplomacy. I agree with her that the “influence tracker” sounds the most interesting, and I would like to continue learning about these efforts to support such evaluation measures.
For much of the semester I’ve been harping on the importance of objectives and assessments, or, in the parlance of this week’s focus, measurement and evaluation. As Kim and Julia have noted, evaluation is appropriate and necessary to measure a program’s effectiveness against the resources being applied to it. Although measurement and evaluation may be relatively new to the PD field, it is well established in other areas of study. The “Dirty Dozen” challenges outlined by Banks highlight many of the difficulties “inherent in measuring success in public diplomacy.” For the most part, though, these are the same challenges faced in virtually any other arena. For example, the Department of Defense equivalent of public diplomacy is, in many ways, “Information Operations,” defined as “the integrated employment, during military operations, of information related capabilities in concert with other lines of operation to influence, disrupt, corrupt, or usurp the decision making of adversaries and potential adversaries while protecting our own.” Implicit in the doctrine on information operations is that one of the most challenging aspects of conducting them is assessing them. It is easy to measure whether an action was carried out, and immensely more difficult to determine whether it succeeded in influencing or persuading the intended audience. Julia’s example highlights the importance of developing clear objectives, program goals, and metrics to measure, rather than simply asking whether the program worked. This makes evaluation possible even when it is difficult.