By R. Craig Lefebvre, Chief Maven for socialShift
Keeping up with the evolution of social marketing research and practice can be a challenge. To help with that, I do an annual review of peer-reviewed papers that document how the field is developing (here are the links to the 2011, 2012, and 2013 selections).
One of my priorities in reviewing this work is assessing how it helps strengthen the evidence base for the discipline. Simply collecting more stories, or case studies, about social marketing needs to end; we need a stronger focus on research with better descriptions of methods, collection and analysis of relevant data – not convenient data – and the use of experimental designs.
Pick up most textbooks on social marketing and read the references. I have, and I was stunned by the lack of citations to research studies about the effectiveness of social marketing (my highest count was 10 – about as many as are in this post – and it was something I deliberately set out to change with my book). I have heard from colleagues that they wouldn’t, or couldn’t, teach a course in social marketing in their department using available textbooks because there was a lack of ‘scientific rigor’ in them (or words to that effect).
I am also interested in how social marketing is presented outside our immediate orbit. Practitioners can carry on about the ‘art’ of social marketing (I do it as well), but if social marketing is to be taken seriously by others (leading academic institutions and policy makers to name two) it needs data, not stories, to demonstrate its value.
One of the studies that caught my attention was “Effectiveness of a combination prevention strategy for HIV risk reduction with men who have sex with men in Central America: a mid-term evaluation,” published in BMC Public Health, by a team of researchers affiliated with Population Services International, led by Rebecca Firestone.
Although you might expect a comparison group in most effectiveness studies, this study is an example of what happens when you are involved in large-scale programming across five countries in Central America to reduce HIV risk: a comparison group is simply not practical. The correct decision in this case, in my mind, is to go big and leave the small stuff to other investigators.
The authors present a wealth of data, and I was particularly impressed to see the attention to measures of program exposure – and that exposure was related to behavioral outcomes. Measuring the relevant, not convenient, variables is the lesson in this study.
R. Craig Lefebvre, PhD is an architect and designer of public health and social change programs. He is the chief maven at socialShift, the social|design, marketing and media consultancy located in Sarasota, FL. He was the Chief Technical Officer at Population Services International (PSI), where he led PSI’s Innovation teams in capacity-building, social marketing, and research & metrics across HIV, malaria, child survival, clean water, and reproductive health programs.
This post is excerpted from this original.
Image credit: StockMonkeys.com via Flickr (CC BY 2.0)