Wednesday, November 18, 2009

Survey Says!

Assignment 5: Survey Design

Purpose of Survey:
This survey targets facilitators and coordinators who work in a literacy organization. The purpose of the survey is to gather information and opinions regarding current program processes (how the programs are planned and supported) and current outcome measurement (the methods used and their efficiency).
The results could help guide future budgeting for resources and professional development (e.g., program evaluation techniques). Another potential benefit of this survey is the opportunity for staff to reflect on their own practice; reflection is a critical activity that is often set low on a long list of other priorities. This reflective exercise would also help set the stage for the upcoming evaluation plan that will be initiated over the next few months.
I shared Draft 1 of the survey with 5 staff: 2 coordinators and 1 staff member from a workplace literacy program, 1 coordinator from a family literacy program, and the executive director of the organization. I received feedback from 4 of the 5. Based on their feedback and comments, I revised the survey; the result is Draft 2.

Comments on Survey Design:
Since the organization is small (10 staff), I purposely left out demographic questions that I believe do not affect the nature of the data sought and could very likely identify each ‘anonymous’ staff member. I tried to collect only pertinent information. (In my experience, data is often collected from program participants that is no longer relevant, or perhaps never was. I have seen survey templates designed for one program’s requirements duplicated in other programs whose requirements did not match the data collected.) I could have asked how long the staff had been with the organization, or how long they had facilitated programs in other organizations, but I didn’t deem it relevant to my purpose. My purpose is to have staff share their current perspectives on program development, support and evaluation, which will hopefully involve reflection on their own levels of expertise. Although it is not the main focus, I believe professional development should be based on current need and not years of experience. (I may be mistaken; I’m open to any challenges to this perspective.)

Rationale for changes in Surveys:
1. Question 2 was added and reworks the vocabulary: (2) How do you know you are making a difference? (What would convince others that your program is successful?) I believe this wording helps remove the tendency to view the word ‘evaluation’ with fear and trembling, and it offers a clearer, alternative way of looking at the idea of evaluation.
2. Question 3: ‘statistics’ was added, since I had omitted it from Draft 1. In my view this is an output, and I’m not as interested in it as in longer-term outcome measurements. However, it is an evaluation technique and belongs here.
3. Question 4 was added to accommodate the request for more depth surrounding staff confidence levels: a) Please rate your confidence level regarding your expertise in evaluation (e.g., terms, methods) b) Please explain the reasons for your rating. I think this gauge of facilitator perception is an important factor when planning a participatory program evaluation.
4. Question 6: It was suggested that I remove ‘(if any)’. I had included it to allow the option of not answering if not relevant. However, I removed it to reduce wordiness.
5. Question 9: ‘Websites’ was added as a resource option. Oops: a major omission in Draft 1.
6. Question 10: Draft 1’s question 7 was changed to: a) Have you felt adequately supported/helped by supervising staff? b) Have you felt adequately supported/helped by your colleagues? c) Please share specific examples of when you felt supported and/or how you responded in situations where you experienced a lack of support. This change was based on 2 differing responses:
a. The executive director had strong feelings about the term ‘superior’: “I choke when I see ‘superior’- is there a better way of putting that?” I changed the term to “supervising staff” in question 10.
b. Based on feedback from another staff member, I removed the rating scale regarding employee support, since the sample size is very small (only 10 staff, including 3 supervisory staff) and honest ratings may be hard to obtain. Instead, I added a request for examples of both support and lack of support, along with the strategies used to address the latter. This gives the question a more positive spin and elicits specific instances that could be used in the future to support and guide staff.
7. One staff member suggested using this survey as a planning tool as well as a data collection device: “Perhaps respondents could be asked to set a goal for their own/program (evaluation) improvement for the next six months/year.” I didn’t add it to the survey because of the extra time it would require of participants, but I think it’s a great idea to present in another format, such as during the evaluation process. I struggled with whether it would be too much to ask in a survey...thoughts?

Staff Evaluation Survey Draft 1
Thanks for your feedback! This information will help guide future program/project planning. Please contact me at phone #:_________ or by email: ________ with any additional suggestions or comments.
1. What do you see as the strengths of your program/project?

2. Regarding resource materials (such as: program-specific manuals & curricula, related reference books & magazines, & research documents), please rate the resources available to plan/develop your program on the scale below:

0 (Low)   1   2   3   4   5   6   7   8   9   10 (High)
3. How would you improve your resource materials?
____include more resource books (e.g., research, theory, collections of activities/ideas)
____adapt/expand current curriculum/program topics/themes
____adapt/expand current curriculum/program handouts
____no improvements are needed
____other:___________________________________________________

4. Which methods of evaluation do you currently use to measure outcomes?
___post-program participant surveys ___interviews ___anecdotal records
___photos ___focus groups
___other:_______________________________________________________

5. Do you feel satisfied with the evaluation methods you currently use?

Yes, I’m satisfied___ Somewhat satisfied___ Not satisfied___

6. What (if any) methods of evaluation do you suggest to improve measurement of your program/project goals?
7. a) On the scale below, rate the level of support &/or guidance you have received from your superior(s):

0 (Low)   1   2   3   4   5   6   7   8   9   10 (High)
b) On the scale below, rate the level of support &/or guidance you have received from your colleagues:

0 (Low)   1   2   3   4   5   6   7   8   9   10 (High)
c) Please explain the reason(s) for your choices.
8. Are there any further suggestions that you can share to improve your program?

Thank you for taking the time to share your feedback!

Staff Evaluation Survey Draft 2
Thanks for your feedback! This information will help guide future program/project planning. Please contact me at phone #:_________ or by email: ________ with any additional suggestions or comments.
1. What do you see as the strengths of your program/project?
2. How do you know you are making a difference? (What would convince others that your program is successful?)

3. Which methods of evaluation do you currently use to measure outcomes?
___post-program participant surveys ___interviews ___anecdotal records
___photos ___focus groups ___statistics
___other:_______________________________________________________

4. a) Please rate your confidence level regarding your expertise in evaluation (e.g., terms, methods):

0 (Low)   1   2   3   4   5   6   7   8   9   10 (High)
b) Please explain the reasons for your rating:
5. Do you feel satisfied with the evaluation methods you currently use?
___Yes, I’m satisfied ___Somewhat satisfied ___Not satisfied
6. What methods of evaluation do you suggest to improve measurement of your program/project goals?
7. What percentage of time do you give towards evaluation for your project?
___under 5% ___5 – 10% ___11 – 25% ___26 – 40% ___above 40%
8. Regarding resource materials (such as: program-specific manuals & curricula, related reference material, & research/theory information), please rate the resources available to plan/develop your program on the scale below:

0 (Low)   1   2   3   4   5   6   7   8   9   10 (High)
9. How would you improve your resource materials?
____include more resource books (e.g., research, theory, collections of activities/ideas)
____adapt/expand current curriculum/program topics/themes
____adapt/expand current curriculum/program handouts
____websites
____no improvements are needed
____other:___________________________________________________

10. a) Have you felt adequately supported/helped by supervising staff?
___ regularly ___ occasionally ___ rarely

b) Have you felt adequately supported/helped by your colleagues?
___ regularly ___ occasionally ___ rarely

c) Please share specific examples of when you felt supported and/or how you responded in situations where you experienced a lack of support:

11. Are there any further suggestions that you can share to improve your program?
Thank you for taking the time to share your feedback!
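Once responses to Draft 2 come back, even a 10-person data set benefits from a consistent tally. The sketch below is a minimal, hypothetical example of tabulating the 0–10 confidence rating (question 4a) and the evaluation-methods checkboxes (question 3); the CSV file name and column names are my own assumptions, not part of the survey itself.

```python
# Hypothetical sketch: tabulating Draft 2 responses collected in a CSV.
# Column names ("q4a_confidence", "q3_methods") and the file name are
# assumptions made for illustration only.
import csv
from collections import Counter
from statistics import mean, median

def summarize(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Question 4a: 0-10 confidence rating.
    ratings = [int(r["q4a_confidence"]) for r in rows if r["q4a_confidence"]]
    print(f"Q4a confidence: n={len(ratings)}, "
          f"mean={mean(ratings):.1f}, median={median(ratings)}")

    # Question 3: checkbox methods, assumed stored as a semicolon-separated list.
    methods = Counter()
    for r in rows:
        for m in r["q3_methods"].split(";"):
            if m.strip():
                methods[m.strip()] += 1
    print("Q3 evaluation methods:")
    for method, count in methods.most_common():
        print(f"  {method}: {count} of {len(rows)} respondents")

if __name__ == "__main__":
    summarize("staff_survey_draft2.csv")
```

Because the group is so small, I would report only simple counts and central tendencies like these, rather than anything that could single out an individual respondent.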

Thursday, October 15, 2009

ECUR 809 Assignment 3: Evaluation Assessment: A Literacy Organization

Context
Organization A is a volunteer literacy organization that provides free literacy services to individuals, families, workplaces and the community.
Their vision: To support the creation of a community that values literacy.
Their mission: To provide literacy services for adults and families.
This non-profit organization offers many programs to adults through 4 focus areas:
1. One-to-one tutoring program
2. Family literacy programs and services
3. Facilitator trainings
4. Workplace programs
Management is developing an Accountability Framework to provide a roadmap for the agency (covering all 4 focus areas), to meet the needs of major funders, and to share the framework with four other agencies once it is completed.
Stakeholders Involved
The audience for the evaluation information is management and staff in the initial phase, with funders and 4 other organizations at a later stage. The approach to the evaluation would be participatory, especially when examining outcomes and measures, not only to ensure the information is relevant and to strengthen commitment, but also to enable staff, after this evaluation is completed, to conduct and use evaluations of their own that are realistic, relevant and doable in their practice.

Purpose of Evaluation
The purpose is to improve the measurement of program outcomes, in order to enhance and verify the impact of literacy programs/services and to:
· improve program effectiveness
· guide future program development
· prepare and justify budgets
· focus board members’ attention on program issues and values
· improve and promote clarity and collaboration with funders by employing a common set of terms and language
· show funders that benefits produced merit continued support
· prove that informal education is not only valuable, but also valid and measurable
· reinforce the positive impact staff and volunteers make as a result of their dedication and hard work.

Type of Evaluation
The evaluation would be outcomes-based. It is a long-term project (2 years) that will require examining and identifying the current outcomes for the 4 main focus areas and the services and programs found within each area. Identification, enhancement and/or development of measures and efficient methods to collect information would follow. It is important to note that many participants have low literacy levels, so creative, non-print-based techniques to collect data will be required.
Within the One-to-one tutoring focus area (and possibly the Family Literacy focus area), Adult Literacy Benchmarks will be integrated with the existing program outcomes, and measures will need to be developed to address them. This will require close collaboration and planning with management and coordinators throughout all phases of the project. Since this project has yet to begin, these are preliminary plans that will change as more information and details develop.
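As a preliminary aid (and only a sketch), the hypothetical structure below shows one way the identified outcomes and their measures could be organized per focus area so that outcomes still lacking measures are easy to spot. The sample outcome, the measure, and the non-print flag are placeholder assumptions, not Organization A's actual framework.

```python
# Hypothetical sketch of an outcomes-and-measures inventory per focus area.
# All example entries are placeholders for illustration only.
from dataclasses import dataclass, field

@dataclass
class Measure:
    method: str       # e.g. survey, focus group, anecdotal record
    non_print: bool   # flag creative/non-print techniques for low-literacy participants

@dataclass
class Outcome:
    statement: str
    measures: list[Measure] = field(default_factory=list)

# The organization's four focus areas.
framework: dict[str, list[Outcome]] = {
    "One-to-one tutoring": [
        Outcome("Learners progress against Adult Literacy Benchmarks",
                [Measure("tutor anecdotal records", non_print=True)]),
    ],
    "Family literacy": [],
    "Facilitator trainings": [],
    "Workplace programs": [],
}

# A simple gap report: which outcomes still need measures developed.
for area, outcomes in framework.items():
    if not outcomes:
        print(f"{area}: no outcomes identified yet")
        continue
    for o in outcomes:
        status = "measures in place" if o.measures else "measures still needed"
        print(f"{area}: {o.statement} -> {status}")
```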

Data Collection
· Program documents (program evaluations, anecdotal records)
· Funding proposals for programs (& other documents that list inputs, outputs, activities and outcomes for Organization A)
· Past and current reports, both external (to funders and board members) and internal (staff monthly reports)
· Staff and management interviews and/or surveys
· Observing program participants (re: the program evaluations)
· Conducting focus groups

Saturday, September 19, 2009

Assignment 2: ECUR 809

Suggested Evaluation Model Re: ECS Program for Children with Severe Disabilities


Some key factors make this program more suitable to some evaluation models than others:
1. The program is customized and, in a sense, new every time with regard to goals and objectives. Specific goals-based evaluation would have to be done on a case-by-case basis. However, the general process could be examined and compared (timelines, reasons for adapting goals, and/or barriers to achieving goals).
2. The unique and individualized nature of the content (individual children’s needs direct the choice of activities) makes it challenging to attempt generalized comparisons.
3. The variation in program contexts (centre-based, in-home, and possibly a combination of both) adds to the difficulty of using a broad brush to paint the overall program results.
4. The various and changing stakeholders (children, parents, teachers and/or teacher assistants, speech/language pathologists, and the funder, Alberta Education) make systematic comparisons difficult at best.

Depending on the resources available (time and money), I’d approach the evaluation using parts of Provus’ Discrepancy Model (DIPPS) to compare and contrast the two contexts of home (one-to-one) and classroom (group). Since the Design content is specific and Installation appears established, with the criteria regulated quite firmly, I’d focus on the Process, Product and Cost-benefit analysis.

Some possible methods to collect data: observation and program visits, interviews with staff and parents, case studies involving in-home and centre-based program participants (as feasible and appropriate), key stakeholder surveys (pre- and post-program), participant assessment results (pre- and post-program), and a cost-benefit analysis that includes a review of documentation to assess the budget, other similar existing programs, and program schedules.

I would classify this as a participatory or collaborative approach, with all stakeholders contributing advice, expertise, and rich perspectives in order to best serve the needs of the children who benefit from this program.

Resource: Medicine Hat Catholic Separate Regional Division No. 20, Student Services. (2009).
Student Services Program Description: ECS Programming for Children with Severe Disabilities. www.mhcbe.ab.ca

Friday, September 11, 2009

Assignment 1 ECUR 809

Assignment 1: ECUR 809
September 11, 2009
For: Jay Wilson
By: Joanne LaBrash

Lessons Learned from a State-Funded Workplace Literacy Program

Project Summary/Background

An evaluation was completed on a set of ten workplace literacy pilot programs that were publicly funded by a state government (the Indiana Department of Workforce Development). One of the identified purposes for the projects came from “the perception that Indiana’s competitiveness and growth may be constrained by a mature, incumbent workforce that has deficient basic skills” (p. 2). These pilot programs were developed to improve those perceived deficient skills.

This evaluation is both formative and summative: formative because it examines a pilot project (whose future improvement, adaptation, or termination depends on the evaluation results), and summative because it quantified scores to measure learner gains (1,800 participant surveys and test scores were collected and analyzed).

The model showed elements of Provus’ Discrepancy Model, in that it compared intentions with actual results in the areas of DIPPS: design and content, installation, process, product and cost-benefit analysis (to varying degrees). Both qualitative and quantitative data were used and I think that the combination of methods adds valuable layers of perspective.

Qualitative Focus

Two site visits occurred at ‘most’ of the 10 funded projects. I would have liked all 10 sites to be visited, which would have given a more thorough analysis and evaluation. The first visit focused on planning and implementation (design, installation and process), and the second visit, near the end of the project, focused on outcomes and lessons learned (product and cost-benefit analysis). The visits involved examination of administrative challenges, instructional delivery, contextualization, the motivations of both employee participants and businesses, and a general cost-benefit analysis for workers, companies, literacy providers, and government.

The struggle with implementation was highlighted. None of the sites were familiar with the pre- and post-test requirements, which led to strained relationships with workers and employers when it was determined that a third assessment appraisal was needed. This confusion led to inconsistency across all sites, with only three sites administering the pre-assessment test and only one-third completing post-tests. Overall, the results showed higher scores for pre-tests than post-tests! I tended to discount the validity and usefulness of the data for these reasons.
Another implementation failure was that the computer course required internet access, which was not always available. The resource materials (especially for the computer course) also revealed problems, with “pushback from students (and employers) who felt it was too technical” (p. 4).

Quantitative Focus

The data analyzed quantitatively were a survey of participants, the adult student assessment test scores and resulting learning gains, and earnings histories. I noticed a failure to draw any strong connection between the earnings histories and the desired impact; the state hoped to reduce the use of public assistance benefits. Possible additional contributing factors and a longer time frame were neglected. Therefore, the information gathered seemed irrelevant and a bit misguided, although it probably looked good in a graph. Statistics tend to hold more appeal and appear objective, but for what purpose?
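To make concrete what the learning-gains analysis depends on, the sketch below (with invented scores, purely for illustration) shows that a gain can only be computed for participants who have both a pre-test and a post-test score, which is precisely what the inconsistent test administration undermined.

```python
# Hypothetical sketch of a learning-gains calculation; the participant
# records are invented for illustration, not taken from the report.
from statistics import mean

# (participant_id, pre_score, post_score); None = test not administered.
records = [
    ("A1", 212, 228),
    ("A2", 198, None),   # no post-test completed
    ("A3", None, 241),   # no pre-test administered
    ("A4", 225, 219),
]

# Gains can only be computed for participants with both scores.
matched = [(pre, post) for _, pre, post in records
           if pre is not None and post is not None]
gains = [post - pre for pre, post in matched]

print(f"{len(matched)} of {len(records)} participants have matched scores")
print(f"mean gain: {mean(gains):.1f} points")
```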

Overall, I found the evaluation very effective, with a strong examination of design, content, installation and process. Its weakest area was the evaluation of the product. In my view, the attempts to measure outcomes and changes were riddled with testing and resource-material problems that undermined the intended validity. All of the qualitative data and most of the quantitative data were directly connected to the findings and were purposefully and thoughtfully examined.

Reference:

Hollenbeck, Kevin, & Timmeney, Bridget. (2009). Lessons Learned from a State-Funded Workplace Literacy Program (Working Paper 09-146). W.E. Upjohn Institute for Employment Research. http://ssrn.com/abstract=1359182