Course Schedule
Week 1: Definition, History, and Social Context of Evaluation Research

Required:

  • Go through the Orientation page (link is available in the left menu)
  • Rossi, Lipsey, & Freeman: Chapters 1 and 12
  • The Colorado Trust, "The Importance of Culture in Evaluation: A Practical Guide."

Recommended:

  • Alkin, Evaluation Roots (Read this book to gain a deeper understanding of the
    history and trends in evaluation research.)
  • Fischer, Evaluating Public Policy: Chapter 1 (This is a critical theory view of program evaluation.)
  • Fitzpatrick, Sanders, & Worthen, Program Evaluation. (Read the entire book for an understanding of alternative models of evaluation, the contexts of evaluation, and practical advice.)

None.

Week 2: Tailoring Evaluations, Identifying Issues, and Formulating Questions

Required:

  • Rossi, Lipsey, & Freeman: Chapters 2 and 3

Recommended:

  • Fischer, Evaluating Public Policy: Chapters 2–10
  • Chen, Practical Program Evaluation: Chapters 4 and 5 (These chapters are particularly good because they include practical advice on how to engage stakeholders in designing a program evaluation study.)

First quiz is due.

Week 3: Needs Assessment

Required:

  • Rossi, Lipsey, & Freeman: Chapter 4
  • Elliott, N. L., Quinless, F. W., & Parietti, E. S. (2000). Assessment of a Newark neighborhood: Process and outcomes. Journal of Community Health Nursing, 17(4), 211–224.
  • Holton, E. F., Bates, R. A., & Naquin, S. S. (2002). Large-scale performance-driven training needs assessment: A case study. Public Personnel Management, 29(2), 249–267.

Recommended:

  • McDavid & Hawthorn, Program Evaluation and Performance Measurement: Chapter 6
  • Lacey, J. (2009). Alcohol brief interventions: Exploring perceptions and training needs. Community Practitioner (The Journal of the Health Visitors' Association), 82(6), 30–33.

For more detailed information on needs assessments, read the following books in their entirety:

  • Altschuld, J. W., & Eastmond, J. N. (2009). Needs assessment: Phase I—getting the process started. Los Angeles: Sage.
  • Altschuld, J. W., & Kumar, D. D. (2009). Needs assessment: An overview. Los Angeles: Sage.
  • Altschuld, J. W., & White, J. L. (2009). Needs assessment: Analysis and prioritization. Los Angeles: Sage.
  • Stevahn, L., & King, J. A. (2009). Needs assessment: Phase III—taking action for change. Los Angeles: Sage.
  • Witkin, B. R. (1995). Planning and conducting needs assessments. Thousand Oaks, CA: Sage.

None.

Week 4: Program Theory

Required:

  • Rossi, Lipsey, & Freeman: Chapter 5
  • Engel-Cox, J. A., Van Houten, B., Phelps, J., & Rose, S. W. (2008). Conceptual model of comprehensive research metrics for improved human health and environment. Environmental Health Perspectives, 116(5), 583–592.
  • Bond-Zielinski, C., & Moss, M. (2009). Using the logic model to develop family and consumer sciences programming.
    Journal of Family and Consumer Sciences, 101(1), 53–56.

Recommended:

  • Bamberger et al., RealWorld Evaluation: Chapter 9 (This chapter includes discussions of causality issues, mixed methods, and related topics.)
  • Chen, Practical Program Evaluation: Chapters 2, 4, and 5
  • McDavid & Hawthorn, Program Evaluation and Performance Measurement: Chapter 2 (The authors use a somewhat different terminology and discuss “program technologies.”)
  • United Way of America, Measuring Program Outcomes: A Practical Approach (In parts of this book, the authors discuss how to develop and apply logic models. The logic model was developed by the United Way, and this book is where it was originally formulated.)

First analytical assignment will be posted.

Week 5: Assessing and Monitoring Program Process

Required:

  • Rossi, Lipsey, & Freeman: Chapter 6
  • Carswell, S. B., & Hanlon, T. E. (2009). A preventive intervention program for urban African American youth attending an alternative education program: Background, implementation, and feasibility. Education & Treatment of Children, 32(3), 445–469.
  • Young, D. R., et al. (2009). Process evaluation results from a school- and community-linked intervention: The Trial of Activity for Adolescent Girls (TAAG). Health Education Research, 23(6), 976–986.

Recommended:

  • Vedung: Chapters 9 and 13

First analytical assignment is due.

Week 6: Measuring and Monitoring Program Outcomes

Required:

  • Rossi, Lipsey, & Freeman: Chapter 7
  • Hendricks, M., Plantz, M. C., & Pritchard, K. J. (2008). Measuring outcomes of United Way–funded programs: Expectations and reality. In J. G. Carman & K. A. Fredericks (Eds.), Nonprofits and evaluation. New Directions for Evaluation, 119, 13–35.

Recommended:

  • Vedung: Chapters 9 and 13
  • United Way of America, Measuring Program Outcomes: A Practical Approach

Second quiz is due.

Week 7: Assessing Program Impact: Randomized Experiments

Required:

  • Rossi, Lipsey, & Freeman: Chapter 8
  • McKay, H., Sinisterra, L., McKay, A., Gomez, H., & Lloreda, P. (1978). Improving cognitive ability in chronically deprived children. Science, 200(4339), 270–278.
  • “Getting out the Youth Vote: Results from Randomized Field Experiments”

Recommended:

  • Bingham & Felbinger: pp. 57–59; pp. 77–78
  • Solberg, A. (1983). Community posthospital follow-up services. Evaluation Review, 7, 96–109. (This is an example of the posttest-only control group design.)
  • Duflo, E., “Scaling Up and Evaluation.” (This is a methodologically sophisticated application of experimental design in complex social conditions.)
  • Banerjee A., et al. “Remedying Education.” (This is a methodologically sophisticated application of experimental design in complex social conditions.)
  • Langbein & Felbinger: Chapters 3, 4, and 5
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin. Entire book.
  • Morçöl, G. (2002). A new mind for policy analysis: Toward a post-Newtonian and postpositivist epistemology and methodology. Westport, CT: Praeger. pp. 37–47.

None.

Week 8: Assessing Program Impact: Quasi-Experimental Designs

Required:

  • Rossi, Lipsey, & Freeman: Chapter 9
  • Newcomb, T. M. (1984). Conservation program evaluations: The control of self-selection bias. Evaluation Review, 8, 425–440. (This is an example of the pretest-posttest comparison group design. The researcher used a t-test to compare the groups.)
  • Powell, J. V., Aeby, V. G., Jr., & Carpenter-Aeby, T. (2003). A comparison of student outcomes with and without teacher facilitated computer-based instruction. Computers & Education, 40, 183–191. (This is an example of the posttest-only comparison group design.)

Recommended:

  • Bingham & Felbinger: pp. 109–119; pp. 120–121
  • Moran, G. E. (1985). Regulatory strategies for workplace injury reduction: A program evaluation. Evaluation Review, 9, 21–33. (This is an example of interrupted time-series analysis. The author used regression analysis with dummy variables.)
  • Langbein, L., & Felbinger, C. L. (2006). Public program evaluation: A statistical guide. Armonk, NY: M.E. Sharpe. Chapter 6
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin. Entire book.
  • Morçöl, G. (2002). A new mind for policy analysis: Toward a post-Newtonian and postpositivist epistemology and methodology. Westport, CT: Praeger. pp. 47–54.

Second analytical assignment will be posted.

Week 10: Assessing Program Impact: Reflexive Designs

Required:

  • Rossi, Lipsey, & Freeman: Chapter 9 (pp. 289–295 only)
  • Goetz, E. G. (1995). A little pregnant: The impact of rent control in San Francisco. Urban Affairs Review, 30(4), 604–612. (This is an example of a simple time-series design. Fully grasping the article requires some background in time-series analysis, which we do not cover in this course; however, you can still understand what the author tried to do. Ignore the mathematical parts of this article.)

Recommended:

  • Bingham & Felbinger: pp. 153–154; pp. 166–167
  • Langbein, L., & Felbinger, C. L. (2006). Public program evaluation: A statistical guide. Armonk, NY: M.E. Sharpe. Chapter 7
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin. Entire book.
  • Morçöl, G. (2002). A new mind for policy analysis: Toward a post-Newtonian and postpositivist epistemology and methodology. Westport, CT: Praeger. pp. 47–54.

Second analytical assignment is due.

Week 11: Detecting, Interpreting, and Analyzing Program Effects; Meta-Analysis

Required:

  • Rossi, Lipsey, & Freeman: Chapter 10
  • Ennett, S. T., Tobler, N. S., Ringwalt, C. L., & Flewelling, R. L. (1994). How effective is drug abuse resistance education? A meta-analysis of Project DARE outcome evaluations. American Journal of Public Health, 84(9), 1394–1401.
  • Wilson, S. J., & Lipsey, M. W. Effects of school-based social information processing programs. (A Campbell Collaboration paper)

Recommended:

  • Langbein & Felbinger: Chapter 9
  • Bingham & Felbinger: pp. 225–226; pp. 237–239
  • Hunter & Schmidt (entire book)
  • Lipsey, M. W., & Wilson, D. B. (2000). Practical meta-analysis. Thousand Oaks, CA: Sage. Entire book.

None.

Week 12: Measuring Efficiency

Required:

  • Rossi, Lipsey, & Freeman: Chapter 11
  • Malitz, D. (1984). The costs and benefits of Title XX and Title XIX family planning services in Texas. Evaluation Review, 8, 519–536. (This is an example of ex-post cost-benefit analysis.)
  • Graveley, E. A., & Littlefield, J. H. (1992). A cost-effectiveness analysis of three staffing models for the delivery of low-risk prenatal care. American Journal of Public Health, 82(2), 180–184. (This is an example of ex-post cost-effectiveness analysis.)

Recommended:

  • Bingham & Felbinger: pp. 181–182; pp. 194–196; p. 239
  • Jones, Bumbarger, Greenberg, Greenwood, & Kyler, “The Economic Return on PCCD’s Investment in Research-Based Programs: A Cost-Benefit Assessment of Delinquency Prevention in Pennsylvania.”
  • Roman, Chalfin, Reid, J., & Reid, S. “Impact and Cost-Benefit Analysis of the Anchorage Wellness Court”

Third quiz is due.

Week 13: Preparation for Class Papers
  • None.

None.

Week 14: Presentations of Class Papers
  • None.

Upload your PowerPoint presentation to the Discussion Forum for Class Paper Presentation.

Week 15: Revisions on Class Papers
  • None.

Make revisions in your paper and prepare it to submit next week.

Week 16: Class Paper is Due
  • None.

Submit your paper through turnitin.com. See the turnitin submission information at the course website.
