Analyzing Social Networks

Second edition, to be published in January 2018 by Sage.
Stephen P Borgatti – University of Kentucky, USA
Martin G Everett – University of Manchester, UK
Jeffrey C Johnson – University of Florida, USA

Publisher’s blurb: “Designed to walk beginners through core aspects of collecting, visualizing, analyzing, and interpreting social network data, this book will get you up to speed on the theory and skills you need to conduct social network analysis. Using simple language and equations, the authors provide expert, clear insight into every step of the research process—including basic maths principles—without making assumptions about what you know. With a particular focus on NetDraw and UCINET, the book introduces relevant software tools step-by-step in an easy-to-follow way.

In addition to the fundamentals of network analysis and the research process, this new Second Edition focuses on:

  • Digital data and social networks like Twitter
  • Statistical models to use in SNA, like QAP and ERGM
  • The structure and centrality of networks
  • Methods for cohesive subgroups/community detection
Supported by new chapter exercises, a glossary, and a fully updated companion website, this text is the perfect student-friendly introduction to social network analysis.”

Detailed contents list here

 

The Ethics of Influence: Government in the Age of Behavioral Science

by Cass R. Sunstein, Cambridge University Press, 2016

Contents:

1. The age of behavioral science;
2. Choice and its architecture;
3. ‘As judged by themselves’;
4. Values;
5. Fifty shades of manipulation;
6. Do people like nudges? Empirical findings;
7. Green by default? Ethical challenges for environmental protection;
8. Mandates – a very brief recapitulation;
Appendix A. American attitudes toward thirty-four nudges;
Appendix B. Survey questions;
Appendix C. Executive Order 13707: using behavioral science insights to better serve the American people;

Amazon blurb: “In recent years, ‘nudge units’ or ‘behavioral insights teams’ have been created in the United States, the United Kingdom, Germany, and other nations. All over the world, public officials are using the behavioral sciences to protect the environment, promote employment and economic growth, reduce poverty, and increase national security. In this book, Cass R. Sunstein, the eminent legal scholar and best-selling co-author of Nudge (2008), breaks new ground with a deep yet highly readable investigation into the ethical issues surrounding nudges, choice architecture, and mandates, addressing such issues as welfare, autonomy, self-government, dignity, manipulation, and the constraints and responsibilities of an ethical state. Complementing the ethical discussion, The Ethics of Influence: Government in the Age of Behavioral Science contains a wealth of new data on people’s attitudes towards a broad range of nudges, choice architecture, and mandates.”

Book Review by Roger Frantz (pdf)

Norms in the Wild: How to Diagnose, Measure, and Change Social Norms

Cristina Bicchieri, Oxford University Press, 2016. View Table of Contents

Publisher summary:

  1. Presents evidence-based tools for assessing and intervening on various social behaviors
  2. Illustrates the role of mass media and autonomous “first movers” at the forefront of wide-scale behavioral change
  3. Provides dichotomous models for assessing normative behaviors
  4. Explains why well-tested interventions sometimes fail to change behavior

 

Amazon blurb: “The philosopher Cristina Bicchieri here develops her theory of social norms, most recently explained in her 2006 volume The Grammar of Society. Bicchieri challenges many of the fundamental assumptions of the social sciences. She argues that when it comes to human behavior, social scientists place too much stress on rational deliberation. In fact, many choices occur without much deliberation at all. Bicchieri’s theory accounts for these automatic components of behavior, where individuals react automatically to cues – cues that often point to the social norms that govern our choices in a social world.

Bicchieri’s work has broad implications not only for understanding human behavior, but for changing it for better outcomes. People have a strong conditional preference for following social norms, but that also means manipulating those norms (and the underlying social expectations) can produce beneficial behavioral changes. Bicchieri’s recent work with UNICEF has explored the applicability of her views to issues of human rights and well-being. Is it possible to change social expectations around forced marriage, genital mutilations, and public health practices like vaccinations and sanitation? If so, how? What tools might we use? This short book explores how social norms work, and how changing them–changing preferences, beliefs, and especially social expectations–can potentially improve lives all around the world.”


How to Measure Anything: Finding the Value of Intangibles in Business [and elsewhere]

3rd Edition, by Douglas W. Hubbard

pdf copy of 2nd edition available here

Building up from simple concepts to illustrate the hands-on yet intuitively easy application of advanced statistical techniques, How to Measure Anything reveals the power of measurement in our understanding of business and the world at large. This insightful and engaging book shows you how to measure those things in your business that until now you may have considered “immeasurable,” including technology ROI, organizational flexibility, customer satisfaction, and technology risk.

Offering examples that will get you to attempt measurements – even when it seems impossible – this book provides you with the substantive steps for measuring anything, especially uncertainty and risk. Don’t wait – read this book and find out:

  • The three reasons why things may seem immeasurable but are not
  • Inspirational examples of where seemingly impossible measurements were resolved with surprisingly simple methods
  • How computing the value of information will show that you probably have been measuring all the wrong things
  • How not to measure risk
  • Methods for measuring “soft” things like happiness, satisfaction, quality, and more
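Hubbard’s “value of information” point can be made concrete with a toy expected-value-of-perfect-information (EVPI) calculation. The scenario and all the numbers below are invented for illustration; they are not taken from the book:

```python
# Toy EVPI sketch (hypothetical numbers): a project pays 400 if it
# succeeds and loses 200 if it fails; we believe success has probability 0.6.
p_success = 0.6
payoff_success, payoff_failure = 400.0, -200.0

# Decision under current uncertainty: invest only if expected payoff > 0.
ev_invest = p_success * payoff_success + (1 - p_success) * payoff_failure  # 160.0
ev_now = max(ev_invest, 0.0)  # best expected payoff without more information

# With perfect information we would invest only in the success scenario.
ev_perfect = (p_success * max(payoff_success, 0.0)
              + (1 - p_success) * max(payoff_failure, 0.0))  # 240.0

# EVPI: the most any measurement of this uncertainty could be worth.
evpi = ev_perfect - ev_now  # 80.0
```

If a proposed measurement costs more than the EVPI, it is not worth doing – which is how this kind of calculation can reveal that “you probably have been measuring all the wrong things.”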

Amazon.com Review: Now updated with new research and even more intuitive explanations, this is a demystifying account of how managers can inform themselves to make less risky, more profitable business decisions. This insightful and eloquent book will show you how to measure those things in your own business that, until now, you may have considered “immeasurable,” including customer satisfaction, organizational flexibility, technology risk, and technology ROI.

  • Adds even more intuitive explanations of powerful measurement methods and shows how they can be applied to areas such as risk management and customer satisfaction
  • Continues to boldly assert that any perception of “immeasurability” is based on certain popular misconceptions about measurement and measurement methods
  • Shows the common reasoning for calling something immeasurable, and sets out to correct those ideas
  • Offers practical methods for measuring a variety of “intangibles”
  • Adds recent research, especially with regard to methods that seem like measurement but are in fact a kind of “placebo effect” for management – and explains how to tell effective methods from management mythology
  • Written by recognized expert Douglas Hubbard, creator of Applied Information Economics

How to Measure Anything, Second Edition illustrates how the author has used his approach across various industries, and how any problem, no matter how difficult, ill-defined, or uncertain, can lend itself to measurement using proven methods.

See also Julia Galef’s podcast interview with the author.


Fact Checking websites serving as public evidence-monitoring services: Some sources

These services seem to be getting more attention lately, so I thought it would be worthwhile compiling a list of some of the kinds of fact checking websites that exist, and how they work.

Fact checkers have the potential to influence policies at all stages of the policy development and implementation process – not by promoting particular policy positions based on evidence, but by policing the boundaries of what should be accepted as factual evidence. They are responsive rather than proactive.

International

American websites

  • PolitiFact – a fact-checking website that rates the accuracy of claims by elected officials and others who speak up in American politics.
  • FactCheck.org – monitors the factual accuracy of what is said by major U.S. political players in the form of TV ads, debates, speeches, interviews and news releases.
  • Media Bias/Fact Check – claims to be “the most comprehensive media bias resource on the internet”, but its content is mainly American.

Australia

United Kingdom

Discussions of the role of fact checkers

A related item, just seen…

  • This site is “taking the edge off rant mode” by making readers pass a factual knowledge quiz before commenting: “If everyone can agree that this is what the article says, then they have a much better basis for commenting on it.”

Update 20/03/2017: Read Tim Harford’s blog posting on The Problem With Facts (pdf copy here), on the communication value of eliciting curiosity.

Monitoring and Evaluation in Health and Social Development: Interpretive and Ethnographic Perspectives

Edited by Stephen Bell and Peter Aggleton. Routledge 2016. View on Google Books

“Interpretive researchers thus attempt to understand phenomena through accessing the meanings participants assign to them.”

“…interpretive and ethnographic approaches are side-lined in much contemporary evaluation work and current monitoring and evaluation practice remains heavily influenced by more positivist approaches.”

“…attribution is not the only purpose of impact evaluation.”

“Lack of familiarity with qualitative approaches by programme staff and donor agencies also influences the preference for quantitative methods in monitoring and evaluation work.”

Contents

1. Interpretive and Ethnographic Perspectives – Alternative Approaches to Monitoring and Evaluation Practice

2. The Political Economy of Evidence: Personal Reflections on the Value of the Interpretive Tradition and its Methods

3. Measurement, Modification and Transferability: Evidential Challenges in the Evaluation of Complex Interventions

4. What Really Works? Understanding the Role of ‘Local Knowledges’ in the Monitoring and Evaluation of a Maternal, Newborn and Child Health Project in Kenya

Part 2: Programme Design

5. Permissions, Vacations and Periods of Self-regulation: Using Consumer Insight to Improve HIV Treatment Adherence in Four Central American Countries

6. Generating Local Knowledge: A Role for Ethnography in Evidence-based Programme Design for Social Development

7. Interpretation, Context and Time: An Ethnographically Inspired Approach to Strategy Development for Tuberculosis Control in Odisha, India

8. Designing Health and Leadership Programmes for Young Vulnerable Women Using Participatory Ethnographic Research in Freetown, Sierra Leone

Part 3: Monitoring Processes

9. Using Social Mapping Techniques to Guide Programme Redesign in the Tingim Laip HIV Prevention and Care Project in Papua New Guinea

10. Pathways to Impact: New Approaches to Monitoring and Improving Volunteering for Sustainable Environmental Management

11. Ethnographic Process Evaluation: A Case Study of an HIV Prevention Programme with Injecting Drug Users in the USA

12. Using the Reality Check Approach to Shape Quantitative Findings: Experience from Mixed Method Evaluations in Ghana and Nepal

Part 4: Understanding Impact and Change

13. Innovation in Evaluation: Using SenseMaker to Assess the Inclusion of Smallholder Farmers in Modern Markets

14. The Use of the Rapid PEER Approach for the Evaluation of Sexual and Reproductive Health Programmes

15. Using Interpretive Research to Make Quantitative Evaluation More Effective: Oxfam’s Experience in Pakistan and Zimbabwe

16. Can Qualitative Research Rigorously Evaluate Programme Impact? Evidence from a Randomised Controlled Trial of an Adolescent Sexual Health Programme in Tanzania

Rick Davies Comment: [Though this may reflect my reading biases…] It seems that this strand of thinking has not been at the forefront of M&E attention for a long time (i.e. perhaps since the 1990s – early 2000s), so it is good to see this new collection of papers, by a large group of both old and new faces (33 in all).

Case-Selection [for case studies]: A Diversity of Methods and Criteria

Gerring, J., Cojocaru, L., 2015. Case-Selection: A Diversity of Methods and Criteria. Available as pdf

Excerpt: “Case-selection plays a pivotal role in case study research. This is widely acknowledged, and is implicit in the practice of describing case studies by their method of selection – typical, deviant, crucial, and so forth. It is also evident in the centrality of case-selection in methodological work on the case study, as witnessed by this symposium. By contrast, in large-N cross-case research one would never describe a study solely by its method of sampling. Likewise, sampling occupies a specialized methodological niche within the literature and is not front-and-center in current methodological debates. The reasons for this contrast are revealing and provide a fitting entrée to our subject.

First, there is relatively little variation in methods of sample construction for cross-case research. Most samples are randomly sampled from a known population or are convenience samples, employing all the data on the subject that is available. By contrast, there are myriad approaches to case-selection in case study research, and they are quite disparate, offering many opportunities for researcher bias in the selection of cases (“cherry-picking”).

Second, there is little methodological debate about the proper way to construct a sample in cross-case research. Random sampling is the gold standard and departures from this standard are recognized as inferior. By contrast, in case study research there is no consensus about how best to choose a case, or a small set of cases, for intensive study.

Third, the construction of a sample and the analysis of that sample are clearly delineated, sequential tasks in cross-case research. By contrast, in case study research they blend into one another. Choosing a case often implies a method of analysis, and the method of analysis may drive the selection of cases.

Fourth, because cross-case research encompasses a large sample – drawn randomly or incorporating as much evidence as is available – its findings are less likely to be driven by the composition of the sample. By contrast, in case study research the choice of a case will very likely determine the substantive findings of the case study.

Fifth, because cross-case research encompasses a large sample claims to external validity are fairly easy to evaluate, even if the sample is not drawn randomly from a well-defined population. By contrast, in case study research it is often difficult to say what a chosen case is a case of – referred to as a problem of “casing.”

Finally, taking its cue from experimental research, methodological discussion of cross-case research tends to focus on issues of internal validity, rendering the problem of case-selection less relevant. Researchers want to know whether a study is true for the studied sample. By contrast, methodological discussion of case study research tends to focus on issues of external validity. This could be a product of the difficulty of assessing case study evidence, which tends to demand a great deal of highly specialized subject expertise and usually does not draw on formal methods of analysis that would be easy for an outsider to assess. In any case, the effect is to further accentuate the role of case-selection. Rather than asking whether the case is correctly analyzed readers want to know whether the results are generalizable, and this leads back to the question of case-selection.”

Other recent papers on case selection methods:

Herron, M.C., Quinn, K.M., 2014. A Careful Look at Modern Case Selection Methods. Sociological Methods & Research.
Nielsen, R.A., 2014. Case Selection via Matching. http://www.mit.edu/~rnielsen/Case%20Selection%20via%20Matching.pdf

Overview: An open source document clustering and search tool

Overview is an open-source tool originally designed to help journalists find stories in large numbers of documents, by automatically sorting them according to topic and providing a fast visualization and reading interface. It’s also used for qualitative research, social media conversation analysis, legal document review, digital humanities, and more. Overview does at least three things really well.

  • Find what you don’t even know to look for.
  • See broad trends or patterns across many documents.
  • Make exhaustive manual reading faster, when all else fails.

Search is a wonderful tool when you know what you’re trying to find – and Overview includes advanced search features. It’s less useful when you start with only a hunch or an anonymous tip, when there are many different ways to phrase what you’re looking for, or when you are struggling with poor-quality material and OCR errors. By automatically sorting documents by topic, Overview gives you a fast way to see what you have.

In other cases you’re interested in broad patterns. Overview’s topic tree shows the structure of your document set at a glance, and you can tag entire folders at once to label documents according to your own category names. Then you can export those tags to create visualizations.

Rick Davies Comment: This service could be quite useful in various ways, including clustering sets of Most Significant Change (MSC) stories, micro-narratives from SenseMaker-type exercises, or collections of Twitter tweets found via a keyword search. For those interested in the details, and preferring transparency to apparent magic, Overview uses the k-means clustering algorithm, which is explained broadly here. One caveat: the processing of documents can take some time, so you may want to pop out for a cup of coffee while waiting. For those into algorithms, here is a healthy critique of careless use of k-means clustering, i.e. not paying attention to cases where its assumptions about the structure of the underlying data are inappropriate.
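For readers curious what k-means actually does, here is a minimal pure-Python sketch of the algorithm on toy two-dimensional “document” vectors. The data and the two dimensions are invented for illustration; Overview itself works on high-dimensional representations of the full document text:

```python
import random

def assign(points, centroids):
    """Assignment step: group each point with its nearest centroid."""
    clusters = [[] for _ in centroids]
    for p in points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        clusters[dists.index(min(dists))].append(p)
    return clusters

def kmeans(points, k, iterations=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # pick k initial centroids from the data
    for _ in range(iterations):
        clusters = assign(points, centroids)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its previous centroid).
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, assign(points, centroids)

# Hypothetical word-count vectors: (mentions of "health", mentions of "market").
docs = [(9, 1), (8, 0), (10, 2), (1, 9), (0, 8), (2, 10)]
centroids, clusters = kmeans(docs, k=2)
```

Note that the caveat above applies even to this toy version: k-means assumes roughly compact, similarly sized clusters, and it will partition whatever data it is given into k groups whether or not k groups are really there.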

It is the combination of keyword search and automatic clustering that seems most useful to me… so far. Another good feature is the ability to label clusters of interest with one or more tags.

I have uploaded 69 blog postings from my Rick on the Road blog. If you want to see how Overview hierarchically clusters these documents, let me know; I will then enter your email address, which will let Overview give you access. It seems, so far, that there is no simple way of sharing access (but I am inquiring).

Feminist Evaluation & Research: Theory & Practice


Sharon Brisolara PhD (Editor), Denise Seigart PhD (Editor), Saumitra SenGupta PhD (Editor)
Paperback: 368 pages | Publisher: The Guilford Press | Publication date: March 28, 2014 | ISBN-10: 1462515207 | ISBN-13: 978-1462515202 | Edition: 1
Available on Amazon (though at an expensive US$43 for a paperback!)

No reviews are available online as yet, but links will be posted here when they become available.

CONTENTS

I. Feminist Theory, Research and Evaluation

1. Feminist Theory: Its Domain and Applications, Sharon Brisolara
2. Research and Evaluation: Intersections and Divergence, Sandra Mathison
3. Researcher/Evaluator Roles and Social Justice, Elizabeth Whitmore
4. A Transformative Feminist Stance: Inclusion of Multiple Dimensions of Diversity with Gender, Donna M. Mertens
5. Feminist Evaluation for Nonfeminists, Donna Podems

II. Feminist Evaluation in Practice

6. An Explication of Evaluator Values: Framing Matters, Kathryn Sielbeck-Mathes and Rebecca Selove
7. Fostering Democracy in Angola: A Feminist-Ecological Model for Evaluation, Tristi Nichols
8. Feminist Evaluation in South Asia: Building Bridges of Theory and Practice, Katherine Hay
9. Feminist Evaluation in Latin American Contexts, Silvia Salinas Mulder and Fabiola Amariles

III. Feminist Research in Practice

10. Feminist Research and School-Based Health Care: A Three-Country Comparison, Denise Seigart
11. Feminist Research Approaches to Empowerment in Syria, Alessandra Galié
12. Feminist Research Approaches to Studying Sub-Saharan Traditional Midwives, Elaine Dietsch
Final Reflection. Feminist Social Inquiry: Relevance, Relationships, and Responsibility, Jennifer C. Greene

 

The Science of Evaluation: A Realist Manifesto

Pawson, Ray. 2013. The Science of Evaluation: A Realist Manifesto. UK: Sage Publications. http://www.uk.sagepub.com

Chapter 1 is available as a pdf. Hopefully other chapters will also become available this way, because this 240-page book is expensive.

Contents

Preface: The Armchair Methodologist and the Jobbing Researcher
PART ONE: PRECURSORS AND PRINCIPLES
Precursors: From the Library of Ray Pawson
First Principles: A Realist Diagnostic Workshop
PART TWO: THE CHALLENGE OF COMPLEXITY – DROWNING OR WAVING?
A Complexity Checklist
Contested Complexity
Informed Guesswork: The Realist Response to Complexity
PART THREE: TOWARDS EVALUATION SCIENCE
Invisible Mechanisms I: The Long Road to Behavioural Change
Invisible Mechanisms II: Clinical Interventions as Social Interventions
Synthesis as Science: The Bumpy Road to Legislative Change
Conclusion: A Mutually Monitoring, Disputatious Community of Truth Seekers

Reviews
