A Systematic Evaluation of a Soccer Club's College Advisory Program

Seung Youn (Yonnie) Chyung, Stacey E. Olachea, Colleen Olson, Ben Davis
Copyright: © 2015 | Pages: 33
DOI: 10.4018/978-1-4666-8330-3.ch009

Abstract

The College Advisory Program offered by Total Vision Soccer Club aims to provide young players with the opportunity to learn how to navigate the collegiate recruiting process, market themselves to college coaches, and increase their exposure to potential colleges and universities. A team of external evaluators (the authors of this chapter) conducted a formative evaluation to determine what the program needs to do to reach its goal. Following a systematic evaluation process, the evaluation team investigated five dimensions of the program and collected data by reviewing various program materials and by conducting surveys and interviews with players and their parents, upstream stakeholders, and downstream impactees. By triangulating these multiple sources of data, the team concluded that, although the program had several strengths, most program dimensions were rated as mediocre. The team provided evidence-based recommendations for improving the quality of the program.
Chapter Preview

Organizational Background

Human performance improvement (HPI) involves the use of systematic and systemic approaches to closing gaps in organizational outcomes by employing cost-effective solutions (Chyung, 2008; Van Tiem, Moseley, & Dessinger, 2012). The International Society for Performance Improvement (ISPI) is a leading organization that provides professional standards, principles, and ethical guidelines to the community of practitioners involved in various types of HPI processes. For example, HPI practitioners are expected to: 1) focus on results, 2) take a systemic view, 3) add value, 4) establish partnerships with clients and stakeholders, 5) determine needs or opportunities for improvement, 6) determine causal factors for performance gaps, 7) design solutions to close the performance gaps, 8) ensure solutions’ conformity and feasibility, 9) successfully implement recommended solutions, and 10) evaluate the results and impact of the implemented solutions (ISPI, n.d.). Of these 10 expectations, evaluating implemented solutions or interventions is the one where HPI practitioners most often encounter barriers, and as a result evaluation is performed rather infrequently (Gordon, 2003; Guerra, 2003).

Evaluations conducted in the context of HPI investigate the quality, value, and significance of the interventions that have been implemented to improve current performance levels. The implemented solutions can be instructional programs such as educational or training courses, workshops, and e-learning programs; non-instructional programs such as incentive programs, employee engagement programs, and electronic performance support systems; or a combination of both. Because these solutions or interventions are referred to as programs, evaluations conducted in the HPI context can be characterized as program evaluations. The evaluation model best known among HPI practitioners is undoubtedly Kirkpatrick’s (1996) four-level model, which is designed to evaluate training programs. However, since training is the required solution only about 10% of the time (Dean, 1997), HPI practitioners also need to know how to evaluate programs that are not training programs.

When conducting program evaluations, whether of training or non-training programs, HPI practitioners need to follow a systematic process involving steps such as: 1) identify the program to be evaluated (a.k.a. the evaluand), 2) identify the overall purpose and type of evaluation to be conducted (e.g., formative vs. summative, goal-based vs. goal-free), 3) identify stakeholders of the program, 4) review an existing, or develop a new, program logic model, 5) identify dimensions of the program to be investigated, 6) determine data collection methods, 7) identify or develop data collection instruments, 8) collect data, 9) analyze data against rubrics, and 10) synthesize dimensional results, draw conclusions, and make recommendations (Chyung, Wisniewski, Inderbitzen, & Campbell, 2013; Davidson, 2005; Scriven, 2007). This chapter describes a case study in which a youth soccer club’s college advisory program was evaluated using these 10 steps. The 10-step procedure is based on the Key Evaluation Checklist (KEC) developed by Michael Scriven (2007), whose approach to evaluation is recognized as consumer-oriented and needs-oriented. Other approaches to program evaluation include David Fetterman’s (2001) empowerment evaluation, Robert Stake’s (2004) responsive evaluation, and Michael Patton’s (2012) utilization-focused evaluation [see Stufflebeam and Shinkfield (2007) for other program evaluation approaches and comparisons among them]. The evaluation team chose the KEC framework for its needs-oriented approach (i.e., assessing whether the program is meeting actual consumers’ needs) and its explicit guidance for conducting comprehensive and systematic evaluations (Davidson, 2005).
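Steps 9 and 10 (analyzing data against rubrics and synthesizing dimensional results) can be illustrated with a brief, hypothetical sketch. The dimension names, mean survey scores, and rubric cut-offs below are illustrative assumptions rather than data from the actual evaluation; they simply show how mean stakeholder ratings might be mapped to qualitative rubric levels and rolled up into an overall judgment.

```python
# Hypothetical sketch: rating program dimensions against a rubric.
# Dimension names, scores, and cut-offs are illustrative only.

# Mean survey ratings (1-5 scale) per program dimension,
# e.g., averaged across players, parents, and other stakeholders.
dimension_scores = {
    "Program content": 3.2,
    "Delivery and communication": 2.9,
    "Exposure to college coaches": 3.6,
    "Perceived value to players": 4.1,
    "Support for parents": 2.7,
}

def apply_rubric(score: float) -> str:
    """Map a mean rating to a qualitative rubric level."""
    if score >= 4.5:
        return "Excellent"
    if score >= 3.5:
        return "Good"
    if score >= 2.5:
        return "Mediocre"
    return "Poor"

# Step 9: rate each dimension against the rubric.
ratings = {dim: apply_rubric(score) for dim, score in dimension_scores.items()}
for dim, rating in ratings.items():
    print(f"{dim}: {rating} ({dimension_scores[dim]:.1f}/5)")

# Step 10: synthesize dimensional results into an overall judgment.
overall = sum(dimension_scores.values()) / len(dimension_scores)
print(f"Overall mean rating: {overall:.1f}/5 -> {apply_rubric(overall)}")
```

In practice, a rubric of this kind would be agreed upon with stakeholders before data collection, and the quantitative ratings would be triangulated with interview findings and document reviews before drawing dimensional conclusions.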
