Saturday, September 19, 2009

Assignment 2: ECUR 809

Suggested Evaluation Model Re: ECS Program for Children with Severe Disabilities


Some key factors make this program more suitable to some evaluation models than others:
1. The program is customized and, in a sense, new every time with regard to goals and objectives. Specific goals-based evaluation would have to be done on a case-by-case basis. However, the general process could be examined and compared (timelines, reasons for adapting goals, and/or barriers to achieving goals).
2. The unique and individualized nature of the content (individual children’s needs direct the choice of activities) makes it challenging to attempt generalized comparisons.
3. The variation in program contexts (centre-based, in-home, and possibly a combination of both) adds to the difficulty of using a broad brush to paint the overall program results.
4. The various and changing stakeholders (children, parents, teachers and/or teacher assistants, speech-language pathologists, and the funder, Alberta Education) make systematic comparisons difficult at best.

Depending on the resources available (time and money), I’d approach the evaluation using parts of Provus’ Discrepancy Model (DIPPS) to compare and contrast the two contexts of home (one-to-one) and classroom (group). Since the Design content is specific and the Installation appears established, with the criteria regulated quite firmly, I’d focus on the Process, Product, and cost-benefit analysis.

Some possible methods to collect data: observation and program visits; interviews with staff and parents; case studies involving in-home and centre-based program participants (as feasible and appropriate); key stakeholder surveys (pre- and post-program); participant assessment results (pre- and post-program); and a cost-benefit analysis that includes a review of documentation to assess the budget, other similar existing programs, and program schedules.

I would classify this as a participatory or collaborative approach, with all stakeholders contributing advice, expertise, and rich perspectives in order to best serve the needs of the children who benefit from this program.

Resource: Medicine Hat Catholic Separate Regional Division No. 20, Student Services. (2009).
Student Services Program Description: ECS Programming for Children with Severe Disabilities. www.mhcbe.ab.ca

Friday, September 11, 2009

Assignment 1 ECUR 809

Assignment 1: ECUR 809
September 11, 2009
For: Jay Wilson
By: Joanne LaBrash

Lessons Learned from a State-Funded Workplace Literacy Program

Project Summary/Background

An evaluation was completed on a set of ten workplace literacy pilot programs that were publicly funded by a state government (the Indiana Department of Workforce Development). One of the identified purposes for the projects stemmed from “the perception that Indiana’s competitiveness and growth may be constrained by a mature, incumbent workforce that has deficient basic skills” (p. 2). These pilot programs were developed to improve those perceived deficient skills.

This evaluation is both formative and summative: formative because it examines a pilot project (its future improvement, adaptation, or termination depends on the evaluation results), and summative because it quantified scores to measure learner gains (1,800 participant surveys and test scores were collected and analyzed).

The model showed elements of Provus’ Discrepancy Model, in that it compared intentions with actual results in the areas of DIPPS: design and content, installation, process, product, and cost-benefit analysis (to varying degrees). Both qualitative and quantitative data were used, and I think the combination of methods adds valuable layers of perspective.

Qualitative Focus

Two site visits occurred at ‘most’ of the 10 funded projects. I would have liked all 10 sites to be visited, which would have given a more thorough analysis and evaluation. The first visit focused on planning and implementation (design, installation, and process), and the second visit, near the end of the project, focused on outcomes and lessons learned (product and cost-benefit analysis). The visits involved examination of administrative challenges, instructional delivery, contextualization, motivations of both employee participants and businesses, and a general cost-benefit analysis for workers, companies, literacy providers, and government.

The struggle with implementation was highlighted. None of the sites were familiar with the pre- and post-test requirements, which led to strained relationships with workers and employers when it was determined that a third assessment appraisal was needed. This confusion led to inconsistency across all sites, with only three sites administering the pre-assessment test and only one-third completing post-tests. The results showed higher scores for pre-tests than post-tests overall! I tended to discount the validity and usefulness of the data for these reasons.
Another implementation failure was that the computer course required internet access, which was not always available. The resource materials also revealed problems (especially with the computer course), including “pushback from students (and employers) who felt it was too technical” (p. 4).

Quantitative Focus

The data analyzed quantitatively were a survey of participants, the adult student assessment test scores and resulting learning gains, and earnings histories. I noticed a failure to draw any strong connection between the earnings histories and the desired impact; the state hoped to reduce the use of public assistance benefits. Possible additional contributing factors and a longer time frame were neglected. Therefore, the information gathered seemed irrelevant and a bit misguided, although it probably looked good in a graph. Statistics tend to hold more appeal and appear objective, but to what purpose?
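To make the learner-gain idea concrete, here is a minimal sketch of how gains from pre- and post-tests might be computed. The report does not publish its procedure, so the participant records and field names below are hypothetical; only the general post-minus-pre logic is assumed.

# Hypothetical illustration of a pre/post learning-gain calculation (Python).
# The records and scores are invented, not taken from the Upjohn report.
records = [
    {"participant": "A", "pre": 210, "post": 224},
    {"participant": "B", "pre": 198, "post": None},   # missing post-test, as at many sites
    {"participant": "C", "pre": 215, "post": 211},    # post-test lower than pre-test
]

# Only participants with both scores can contribute a gain, which is why
# the missing post-tests undermined the product evaluation.
paired = [r for r in records if r["pre"] is not None and r["post"] is not None]
gains = [r["post"] - r["pre"] for r in paired]
average_gain = sum(gains) / len(gains) if gains else None
print(f"paired participants: {len(paired)}, average gain: {average_gain}")

The point of the sketch is simply that a gain score is only as trustworthy as the pairing of pre- and post-tests behind it, which is exactly where the pilot sites fell short.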

Overall, I found the evaluation very effective, with strong examination of design, content, installation, and process. Its weakest area was the evaluation of the product. In my view, the attempts to measure outcomes and changes were riddled with testing and resource-material problems that undermined the intended validity. All of the qualitative data and most of the quantitative data were directly connected to the findings and were purposefully and thoughtfully examined.

Reference:

Hollenbeck, Kevin, and Timmeney, Bridget. (2009). Lessons Learned from a State-Funded
Workplace Literacy Program (Working Paper 09-146). W.E. Upjohn Institute for Employment Research. Indiana.
http://ssrn.com/abstract=1359182