Evaluating Violence Against Women Research Reports

Sandra K. Beeman With contributions from Carol Arthur

Over the past 15 years, the number of journal articles, books, and reports presenting the results of research on violence against women has grown dramatically. Academic journals such as Violence Against Women are devoted specifically to publishing such research. This research potentially provides individuals and organizations working to end violence against women with information that can help improve services to battered women and their families; better understand the lives of battered women and their families; develop programs based on sound research knowledge; and contribute to the development of public policies that support battered women and their families.

 

This document provides introductory guidelines for the use and evaluation of research reports. The purpose of this document is to help advocates become more skilled and more confident about reading and understanding research reports. What are the different forms of research reports and where can they be found? Can we believe what we read? How can a non-researcher critically read and analyze research reports? How can we judge the quality of research? What can be done with the results of research?

 

What to read and where to find it

 

Findings of research on violence against women are available in a variety of forms and from a variety of places. Professional journals such as Violence Against Women, the Journal of Interpersonal Violence, Aggression and Violent Behavior, Violence and Victims, and the Journal of Family Violence include research conducted by psychologists, social workers, sociologists, advocates, and others. In addition to professional journals, findings of research are presented at domestic violence conferences, described in the popular press, found on websites devoted to ending violence against women, and are available as publications from government agencies or private research organizations. With so many research reports available, how do we know what to read? After we locate a research report, how do we know whether it's worth reading?

 

Not all research is created equal - either in its scientific quality or its practical value. There are several questions to consider when deciding whether or not to read a research report, and if we do choose to read it, whether or not to trust what we read. These questions include: Who is the researcher? What is their professional background? Do they represent a particular ideological perspective? Who funded the research? Who published the research? Although we often like to think of science as objective, most researchers now recognize that everyone brings values, beliefs, and prejudices to their research. This doesn't mean these values and beliefs necessarily bias their research, but the informed consumer of research needs to ask these questions to determine if the findings can be trusted or if there is reason to be skeptical.

Research reports published in scientific journals are subject to peer review. That is, these reports are read and reviewed by independent reviewers or referees who help the editor of the journal decide whether or not to publish the research. These referees often conduct "blind" reviews - in other words, they are not aware of the identity of the author or authors. Research published in scientific journals thus gives the reader some confidence in the scientific credibility of the research findings. Scientific credibility, however, does not necessarily mean that the findings represent "the truth." There is an extensive literature on the philosophical and methodological disagreements about the ability of different types of research methods to generate "truths," a discussion of which is beyond the scope of this paper. It is important, however, for readers of research not to confuse scientific credibility with truth.

 

Research released directly from an organization sponsoring the research does not usually go through the peer review process. If the report contains enough information about how the study was done, it may still be possible to judge the credibility of the research. The next sections provide guidance for our own critical analysis of research - whether or not the research report has been subjected to the peer review process.

 

Can we believe what we read?

 

No research is perfect. The key to making maximum use of research findings is knowing enough about research to critically read and understand the findings. Research can be categorized in a variety of ways: by purpose (exploratory, descriptive, and explanatory); by design and method (experimental, field, survey); or by underlying philosophy (feminist, phenomenological, positivist). One simple and straightforward way of categorizing research is often labeled quantitative and qualitative. This distinction will be used in this article because the two categories of research have broadly different implications for judging the "truth value" and generalizability of the findings.

 

Quantitative and Qualitative Research

 

Quantitative research will be used in this article to refer to research conducted in a positivist tradition. Research conducted in this tradition generally includes experiments, quasi-experiments, and surveys, and uses statistical manipulations of numbers to process data and summarize results. Two important concerns in quantitative research have to do with its internal and external validity. Validity refers to the "truth" of the research findings: was the study designed, and were the data collected and analyzed, in a way that gives us confidence in its conclusions? And even if we have this confidence, is there any reason to believe that the findings are also true beyond this particular study? These questions must be answered in quantitative research because its goals are generally to answer questions about the relationships between variables in a way that lets us have confidence in the findings beyond the study at hand (i.e., the findings are representative of some larger truth).

 

Qualitative research will be used in this article to refer to research conducted in an interpretive or critical tradition. Research conducted in this tradition generally includes ethnographies, naturalistic observation, or intensive interviewing studies, and uses some type of content analysis of words or texts to generate themes, which summarize the results of the study. Qualitative research has the same concern as quantitative research about the truth value of its findings, but in qualitative research this concern is often referred to as trustworthiness or credibility. The goals of qualitative research are not usually to generalize from the findings to some larger truth, but rather to explore or generate truths for the particular sample of individuals studied or to generate new theories. There is often an emphasis in qualitative research on perception or lived experience.

 

It is important to keep these distinctions between quantitative and qualitative research in mind when using the following questions to guide our critical analysis of research.

           

Being a critical consumer of research: Five basic questions

 

In their guide to reading and understanding research, Locke, Silverman, and Spirduso (1998) recommend using the following five basic questions to guide the critical analysis of research reports. Those questions are:

 


1.     What is the report about?

2.     How does the study fit into what is already known?

3.     How was the study done?

4.     What was found?

5.     What do the results mean?

 

What is the report about?

 

The statement of purpose and abstract should provide us with enough information to know if we are interested in reading the entire study. It should indicate if the study was exploratory, descriptive, explanatory, or if it was an evaluation of a program. The purpose of the study is usually found in the abstract and in the first part of the report or article. An example of a statement of purpose from a quantitative study is: "The purpose of the present investigation was to empirically evaluate the effectiveness of a sexual assault education program" (Breitenbecher & Scarce, 1999, p. 459). The statement of purpose of a qualitative study may read something like the following: "This study identified variables related to change in abusive behavior through qualitative analyses of interviews with nine reformed batterers" (Scott & Wolfe, 2000, p. 827).

 

How does the study fit into what is already known?

 

The answer to this question is generally found in the review of the literature and rationale of the study. The literature review should describe studies relevant to the issue at hand and include recently published studies on the topic. Authors of research reports often provide their own analysis of the relevance of a study. For example, a literature review often ends with a statement such as "This study contributes to the existing literature in the following manner: by choosing to study a real life sexual assault education program that was designed and implemented by specialists in rape education - a program that is currently in use on a large university campus, by focusing on the incidence of sexual assault among program participants as an outcome variable, by using a 7-month follow-up period within which to assess program effectiveness - a follow-up period that is longer than in any of the published literature to date, and by evaluating the relation between participants' histories of sexual victimization and program effectiveness" (Breitenbecher & Scarce, 1999, p. 462). The literature review may also provide the rationale for the use of particular methods. For example, "A review of published quantitative studies emphasizes both the paucity of research on variables related to change in abusive men and the lack of compelling results" (Scott & Wolfe, 2000, p. 828). In this example, the authors go on to describe the use of qualitative methods to "clarify or elaborate quantitative theories" (p. 829).

 

In evaluating the value of a particular study in terms of contributing to a larger knowledge base, we should ask: Does the study provide new knowledge? Does it test a new program? Does it contribute to what we know and don't know?

 

How was the study done?

 

This is the methods section. The methods section of a research report should describe how the sample was selected, how key concepts were defined, the design that was used, and the methods of data collection and analysis that were used. Information in this section helps us decide whether or not we have confidence in the ìtruth valueî and generalizability of the findings to people other than those studied.

 

Sample: Whether quantitative or qualitative, the research report should clearly describe the "subjects" of the research and how they were selected for study. This description may include information about the age, race, and other demographic characteristics of the research subjects along with their geographical location - urban, rural, Midwestern, Southern, etc. It is important to know whether or not the researcher hoped to generalize the findings of the research beyond this sample. If they did, did they provide us with enough of a description of the sample to determine to whom the findings can be generalized? For example, when reading the results of a program evaluation, we should ask how the individuals in the sample are the same as or different from individuals served by our own programs.

 

Research reports should also describe the size of the sample and how the researcher arrived at that number of subjects. Quantitative studies generally have larger samples than qualitative studies. This is true for several reasons. In quantitative research, the researcher will usually conduct statistical analyses of their findings. The ability to use certain statistical tests depends on having a sample of a certain size. In addition, the quantitative researcher usually hopes to generalize, and can generalize the study findings with more confidence with larger samples. This is not the case in qualitative research where the purpose is usually not to generalize, but to generate theories, explore, or better understand something in depth. In addition, sample sizes are usually small in qualitative research because of the type of data collection and data analysis procedures used, which limit the number of individuals from whom we may collect the data. Finally, the researcher should describe the limitations of their sample or their sampling procedure.

 

In the Breitenbecher and Scarce (1999) study, the sample was described in this way: "Participants in this investigation were 275 women recruited from a large Midwestern university community. These women were recruited through advertisements in the university newspaper and flyers posted at various locations on campus describing a research project investigating sexual experiences among women" (p. 462). The sample is further described as follows: "the majority of the participants were single (94%), heterosexual (92%), Caucasian (84%), 18-21 year old (72%) undergraduate students (84%)" (p. 462). If we were trying to decide whether the results of this study might apply to women in our own programs, we would need to ask whether the women in the sample are similar to or different from those served by our programs.

 

In Scott and Wolfe's (2000) study, the sample is described as "all men (N=9) who were deemed by themselves, their counselors, and their partners as having been successful at changing their abusive behavior" (p. 830). The article also indicates that the study was conducted as part of ongoing longitudinal research at a community agency in London, Ontario that provides a "feminist oriented group treatment program for voluntary and court-ordered men who are abusive toward their intimate partners" (p. 830). Although it was not the goal of this qualitative study to generalize the findings beyond the study participants, additional information about the age, race, and socio-economic status of these men, and whether they were court-ordered or voluntarily involved in the program, would allow the reader to better understand the sample from whom the data were generated.

 

Key Concepts: How were key concepts, such as domestic violence or sexual assault, defined? This is a particularly important issue because definitions often differ for legal, social services, clinical, and scientific purposes. Readers of research reports describing adult domestic violence should carefully consider whether the author's definition is the same as or different from the definition they hold, and its implications for interpreting the results of a study. In the Scott and Wolfe (2000) study mentioned earlier, the authors were interested in interviewing "reformed batterers." This definition is very important - in this case, "men who were deemed by themselves, their counselors, and their partners as having been successful at changing their abusive behavior through treatment" (p. 830).

 

If we are reading the report of an evaluation, the definition of the intervention should be very clear as well. For example, in the evaluation of the sexual assault education program described in Breitenbecher and Scarce (1999), the program was defined as follows: "The program . . . highlighted such issues as the following: the prevalence of sexual assault among college populations; the existence of rape myths; the existence of sex role socialization practices that promote a rape-supportive environment; and a six-point redefinition of rape that emphasizes rape as an act of violence and power, as humiliating and degrading, and as a community issue affecting all men and women. . . . The education program incorporated both lecture-style presentation and solicitation of group discussion" (p. 463). The reader must decide if the definitions used in the study align with their own.

 

Research Design: How does the author describe the design? If it was experimental, was there random assignment to groups? Random assignment - in other words, randomly determining which individuals receive the experimental intervention and which do not - helps guard against systematic sources of error. In an experiment, the researcher hopes to demonstrate that the intervention resulted in a change in the group of subjects. For example, a batterer's treatment program hopes to reduce the use of violence. By randomly assigning some individuals to receive the experimental intervention and some to not receive it, the researchers strengthen their case that the intervention, rather than some other systematic difference between groups, is the cause of the reduction in use of violence. In the Breitenbecher and Scarce (1999) study, the 275 women in the sample were randomly assigned to either the treatment (the sexual assault education program) or control (no program) condition.
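For readers curious about the mechanics, here is a minimal sketch of random assignment in Python. The participant count mirrors the 275 women in Breitenbecher and Scarce's sample, but the code itself is purely illustrative and is not taken from either study:

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the participants and split them as evenly as
    possible into treatment and control groups."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)           # chance, not the researcher, decides group membership
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (treatment, control)

# With 275 participants, one group gets 137 and the other 138.
treatment, control = randomly_assign(range(275), seed=42)
```

Because every participant has the same chance of landing in either group, any pre-existing differences tend to balance out across the two groups, which is exactly the "guard against systematic error" the paragraph above describes.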

 

Sometimes researchers cannot or will not randomly assign subjects to groups. Either the groups already exist (for example, comparing two different existing interventions) or the researcher is unable to randomly assign for ethical or practical reasons. In these cases, the researcher should discuss the equivalence or nonequivalence of groups. In other words, the researcher should provide information describing the groups of individuals to convince us that the groups were very similar before receiving the interventions.

 

Another important question to ask is: does the design fit the question and purpose? If the researcher hopes to demonstrate the causal effects of an intervention, then an experimental or quasi-experimental design is appropriate. If the purpose of the research is to measure attitudes, a survey design is usually appropriate. If the researcher hopes to develop theory, or explore an issue in depth, a qualitative design is appropriate. The researcher should provide a rationale for the selection of the design based on the purpose of their study.

 

Data Collection: This section of the report describes the methods and procedures used to collect data about the variables of interest. Measurement will be based on the definition of the study's key concepts (discussed above). The authors may describe the use of existing measuring instruments, e.g., the Conflict Tactics Scale (Straus, 1979). When existing or standardized instruments are used, the authors should provide some information about their validity (whether the instrument measures what it was intended to measure) and reliability (whether the instrument measures consistently when used repeatedly). The authors may also describe measuring instruments designed by them for the study. For example, Breitenbecher and Scarce (1999) describe the Sexual Assault Knowledge Survey (SAKS) designed for use in their study. In this case, the authors provide examples of the questions and report on their own assessment of the reliability of the scale.
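To show what a reliability statistic looks like in practice, here is a sketch that computes Cronbach's alpha, one common index of internal consistency (how well the items of a scale hang together). The items and scores are invented for the example; this is not the SAKS or any instrument from the studies discussed:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    item_scores: one list per item, each holding one score per respondent."""
    k = len(item_scores)
    item_vars = sum(pvariance(item) for item in item_scores)
    # Total scale score for each respondent
    totals = [sum(resp) for resp in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Three hypothetical survey items answered by five respondents
items = [
    [3, 4, 4, 2, 5],
    [3, 5, 4, 2, 4],
    [2, 4, 5, 1, 5],
]
alpha = cronbach_alpha(items)  # values near 1 indicate high internal consistency
```

When a report states a reliability coefficient like this, the reader's job is simply to check that it is reported at all and that it is reasonably high for the way the scale is being used.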

 

In a qualitative study, the authors are likely to describe the use of repeated, in-depth, or unstructured interviewing. This type of data collection generally requires the development of rapport with the research subjects, and often includes multiple interviews with the same individual. Scott and Wolfe (2000) describe the use of semi-structured hour-long interviews "conducted in a quiet, private room by a skilled clinical interviewer" (p. 831). The authors also provide examples of the questions asked in the interviews, and techniques used by the interviewers to elicit answers.

 

No matter the type of data collection used, the authors should provide us with some sample questions or interview topics. The authors should also describe who collected the data, where the data were collected, and how the data were recorded. The importance of this section lies in knowing the extent to which the data collection methods allowed the authors to collect data that answer the research question(s).

 

Data Analysis: In the methods section of a research report, the authors describe the data analysis procedures used. It is often very difficult for a reader not trained in the use of statistical methods - and sometimes even for readers who are trained - to determine if the researchers used the appropriate data analysis techniques. If the results of the research report are very important, consider consulting a researcher to help determine if the data analysis methods used were appropriate for the type of data collected.

 

Generally, in quantitative studies, the researcher will describe the use of statistical software to analyze data. The purpose of data analysis in quantitative research is to descriptively summarize the findings of the study and to determine if there are relationships between variables. Thus the authors should describe the use of univariate (one variable) descriptive statistics and bivariate (relationships between two variables) statistics. Most often, the researchers will also use some type of multivariate (more than two variables) analyses, which allow them to examine the relationships among multiple variables. If the researchers hope to generalize beyond their own sample - and this is usually the case in quantitative research - they will also use inferential statistics to help them determine if the relationships found between variables are simply due to chance.
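The univariate and bivariate statistics described above can be made concrete with a short sketch. The data are hypothetical (hours of programming attended and knowledge scores), and the Pearson correlation shown is just one common bivariate statistic:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Bivariate statistic: Pearson correlation between two variables."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) ** 0.5
                  * sum((b - my) ** 2 for b in y) ** 0.5)

# Hypothetical data for six participants
hours = [1, 2, 2, 3, 4, 5]
scores = [10, 12, 11, 14, 15, 18]

# Univariate description: summarize one variable at a time
m, sd = mean(scores), stdev(scores)

# Bivariate description: do the two variables move together?
r = pearson_r(hours, scores)  # values near +1 mean a strong positive relationship
```

A reader does not need to compute these statistics; the point is to recognize them when a report presents a table of means, standard deviations, and correlations.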

 

In qualitative research, the approach to data analysis is very different. Qualitative researchers often use software to analyze data, but in this case the software does not summarize the data numerically; rather, it helps the researcher sort and group data. Although most data from qualitative studies are words, qualitative researchers often also count or summarize data numerically. Qualitative researchers usually develop coding categories - either developed before the data are collected, based on theory or past knowledge, or developed during the data collection and data analysis stage.
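A simplified sketch of how coded qualitative data might be sorted and tallied follows. The respondent labels and theme codes are invented for illustration and are not from Scott and Wolfe's study:

```python
from collections import Counter

# Hypothetical coded interview excerpts: each excerpt has been
# assigned a theme code by the researcher during analysis.
coded_excerpts = [
    ("R1", "taking responsibility"),
    ("R2", "empathy for partner"),
    ("R1", "taking responsibility"),
    ("R3", "taking responsibility"),
    ("R2", "support from counselor"),
    ("R3", "empathy for partner"),
]

# Tally how often each theme appears in the data
theme_counts = Counter(code for _, code in coded_excerpts)

# And how many distinct respondents each theme came from -
# this is what lets a report show a theme is not one person's view
respondents_per_theme = {
    code: {r for r, c in coded_excerpts if c == code}
    for code in theme_counts
}
```

This is the same logic behind a results section that says a theme "appeared across a range of respondents" rather than in a single interview.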

 

 

What was found?

 

This is the results section of a research report. This section summarizes the data that were collected and how the research questions were answered with those data. In quantitative studies, the results section tells us whether or not the data supported the original hypothesis. For example, was the sexual assault education program effective in increasing participants' knowledge about sexual assault? In quantitative studies, the findings are usually reported in the form of numbers and statistics, and they are often presented in tables or graphs. Findings generally include descriptive statistics that describe one variable (e.g., frequencies or counts, means or medians, standard deviations or ranges); statistics that describe the relationship between two or more variables (e.g., chi-square, correlation statistics, results of multiple regression or logistic regression); and statistics that analyze the difference in means between two groups (e.g., results of t-tests or ANOVA). If the authors used inferential statistics, they will report a "p-value." The p-value represents the probability that the finding reported occurred by chance rather than because there is a "true" relationship between variables or a "true" difference between experimental and control groups on some measured variable. For example, if a research report indicates a significant difference (p = .01 or p < .01) between experimental and control groups, it means that a difference that large would occur by chance only 1 time in 100, or less. It is important to remember that significance, when used to refer to statistical significance, does not necessarily mean the findings have practical significance or importance.
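One way to build intuition for what a p-value represents is a permutation simulation: shuffle the group labels many times and count how often chance alone produces a difference as large as the one actually observed. The scores below are made up for illustration, and this is not the analysis used in either study cited:

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10000, seed=0):
    """Estimate the probability of a difference in means at least this
    large arising if group labels had been assigned purely by chance."""
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)                      # re-deal the labels at random
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical knowledge scores for treatment and control groups
treatment_scores = [14, 15, 13, 16, 15, 14]
control_scores = [11, 12, 10, 12, 11, 13]
p = permutation_p_value(treatment_scores, control_scores)
# A small p means a difference this large rarely happens by chance alone
```

A p-value reported in a journal article is usually computed from a statistical test rather than a simulation like this one, but the interpretation is the same: how often would chance alone produce a result this extreme?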

In qualitative studies, the findings are usually reported in the form of words - quotes from interviews or samples of text that represent other similar findings from the study. The quotes from interviews are organized around themes identified by the researcher. Some qualitative researchers will include a respondent identification or code number next to quotes. These are included to demonstrate to the reader that not all quotes were taken from one or two interview respondents but rather represent a range of respondents.

What do the results mean?

 

This content is often found in the discussion or conclusion section of the research report and is the researcher's interpretation of the meaning of the results. In this section, the author steps back from the reporting of data and tries to "make sense" of the findings. Locke, Silverman, and Spirduso (1998) suggest four major things to look for in the discussion and implications section of a research report: 1) the author's take on the meaning of the data just reported (what is most important, what might have been unexpected, what is the importance of the findings); 2) any difficulties encountered by the researcher in the conduct of the study and their implications (e.g., a low response rate, difficulty recruiting subjects); 3) the contribution of the study to the larger literature; and 4) whether the conclusions match the findings reported in the previous section.

 

Questions to ask about the results section include: Did their data answer their question? Did the data support their hypothesis? Are the conclusions grounded in the findings, or do the authors speculate? Did the researchers discuss the limitations of their study? Are the implications of the research findings clear? What can we learn from the findings that may help us to improve services, to better understand the lives of battered women and their families, to develop programs, or to impact public policy?

 

Many research reports include a section titled "implications for practice or policy." Other research reports provide very little information about the author's perspective on such implications. For example, the Breitenbecher and Scarce study (1999) included just two sentences about such implications: "Researchers are encouraged to include incidence of sexual assault in future outcome studies. In addition, rape education organizations are encouraged to conduct empirical evaluations of their programs in order to add to the knowledge base in this area" (p. 475).

 

Conclusion

 

The results of research on violence against women provide individuals and organizations working to end violence against women with information that can help improve services, better understand the lives of battered women and their families, develop programs based on sound research knowledge, and provide information to influence public policies that support victims of violence. Other online documents emphasize the important contributions to research made by individuals and organizations working to end violence against women - including collaboration between researchers and practitioners, and evaluating the outcomes of domestic violence service programs. Being a knowledgeable consumer of research is an equally important contribution to research on violence against women. Research reports often contain language and concepts that are unfamiliar to their readers, and often generate as many questions as answers. Although we may wish for a research "travel guide" that provides us with absolute answers on the "best of" research designs, methods, types of sample, definitions of concepts, and the like, it is hopefully clear from this article that these answers instead rest on informed and critical judgment. It is hoped that this article provides some beginning guidance in how to make these judgments.

 

Author of this document:

Sandra K. Beeman, Ph.D.
sandrabeeman@mac.com

Consultant:

Carol Arthur
Executive Director
Domestic Abuse Project
carthur@mndap.org

 

Distribution Rights: This Applied Research paper and In Brief may be reprinted in its entirety or excerpted with proper acknowledgement to the author(s) and VAWnet (www.vawnet.org), but may not be altered or sold for profit.

Suggested Citation: Beeman, S. (2002, March). Evaluating Violence Against Women Research Reports. Harrisburg, PA: VAWnet, a project of the National Resource Center on Domestic Violence/Pennsylvania Coalition Against Domestic Violence. Retrieved month/day/year, from: http://www.vawnet.org


References

 

Breitenbecher, K., & Scarce, M. (1999). A longitudinal evaluation of the effectiveness of a sexual assault education program. Journal of Interpersonal Violence, 14(5), 459-478.

 

Scott, K., & Wolfe, D. (2000). Change among batterers: Examining men's success stories. Journal of Interpersonal Violence, 15(8), 827-842.

 

Straus, M. (1979). Measuring intrafamily conflict and violence: The Conflict Tactics Scale. Journal of Marriage and the Family, 41, 75-88.

 

 

Recommended for Further Study:

 

Edleson, J. & Bible, A. (1998). Forced bonding or community collaboration? Partnerships between science and practice in research on woman battering. Retrieved from www.vaw.umn.edu/Documents/Collab.htm

 

Girden, E. R. (1996). Evaluating research articles. Thousand Oaks, CA: Sage.

 

Locke, L., Silverman, S., & Spirduso, W. (1998). Reading and understanding research. Thousand Oaks, CA: Sage.

 

Sullivan, C. & Alexy, C. Evaluating the outcomes of domestic violence service programs: Some practical considerations and strategies. Retrieved from www.vawnet.org/vnl/library/general/AR_evaldv.html

 

 

