Evaluation Theories/Week 12: Connecting Theory to Practice: Politics and Interpersonal Relationships

Type classification: this is a notes resource.


Information about Next Class, “Evaluation Procedures”


Note on Registration for “Evaluation Procedures” - The way the class works is that Tiffany teaches one section Wednesday afternoon, 1-4 p.m.; Tarek teaches the morning section, 9 a.m.-12 p.m.

There is no difference between the sections: we grade all final exams together; they’re all aligned; the requirements for the class are the same; we try to keep the sections as similar as possible.

The final class in the sequence is “Evaluation Practicum.” The idea is that you get the foundational knowledge in the first course, then theory, then practice embedded in a real program; and then …

The practicum teams are about 4 people. They work to understand the culture and the client, and develop a logic model.

There is a workshop at the end of the summer… a 2-unit elective can come in through these professional development workshops.

Reflection papers: there are two of them; I’m making #2 optional. If you want to do it, great; if you don’t want to, you don’t have to. If you do decide to do it, then the 1st reflection paper will be worth 2.5% and the second 2.5%.

The papers will ask you to reflect on practice; they might help you integrate thoughts that will help for the final exam. Q: If you got a Check or Check+, how does that figure into the percentage of the grade? A: It would be converted to a numerical value, represented as 5% within the total. A “check” is like a B, a “check-plus” is like an A, and a “check-minus” is lower than a B.

Check & Check+: Full credit; check minus was “a little bit less than” full credit.
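
A minimal sketch of how that conversion might work in code (the exact numeric values are assumptions; class only specified that check and check-plus earn full credit and check-minus a little less):

```python
# Hypothetical conversion of check-style marks into the reflection-paper
# component of the course grade. The 0.85 for "check-" is an assumption;
# class only said it is "a little bit less than" full credit.
CHECK_VALUES = {"check+": 1.00, "check": 1.00, "check-": 0.85}

def reflection_component(mark: str, weight: float = 5.0) -> float:
    """Contribution of a check-style mark, in percentage points of the
    course total, for a component worth `weight` percent."""
    return CHECK_VALUES[mark] * weight

print(reflection_component("check"))   # 5.0 points (full credit)
print(reflection_component("check-"))  # 4.25 points (a little less)
```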


Politics and People


Agenda:

  1. Quotes
  2. Guest Presenter: Bret Levine on Politics in Evaluation
  3. Politics & Interpersonal Issues in Evaluation (with CDC Framework)
  4. Fitzpatrick Interview Discussions

Quotes


“Politics is the art of looking for trouble, finding it everywhere, diagnosing it incorrectly, and applying the wrong remedies.” - Groucho Marx

Q: How does that apply to eval? A: If you’re looking for something wrong, you can find it anywhere. A: Constructivist paradigm: if it’s political to them, they won’t be looking at things as objectively; they might make assumptions about people’s motives. M: Politics completely pervades all parts of the evaluation process. I think it’s our job not to ignore it but to embrace it, and recognize that it’s there. We often get it wrong: we try to figure out the connections, the political power, and the issues, and a lot of times we don’t do a good enough job at it; we might miscalculate. I think it hearkens back to being as thoughtful as possible, to integrate this without ignoring it.

“A theory of evaluation must be as much a theory of political interaction as it is a theory of how to determine facts.” - Cronbach (1980)

Q: What does this say to you?

  1. A: J: That we’re talking about program evaluation again rather than …
  2. A: …
  3. M: If you take an objective social science approach, you might not ask, “Where does politics play a part?” - but it does.

“There are five key variables in evaluation use. They are, in order of importance: people, people, people, people, and people.” - Patton (1997)

Q: What does this mean?

  1. A: People put evaluations into use, and they’re completely intertwined with the politics. They come with their own hidden agendas: and if you don’t figure out what those are …

With that brief intro, we’ll move on to Bret Levine.

Bret Levine: Background on Politics in Evaluation


There is currently no published article anywhere that empirically tests politics in evaluation, to my or my advisor’s knowledge.

There are:

  1. Anecdotal stories
  2. Personal accounts

But there is nothing that actually measures it. What I’ll do is give an account of Chelimsky (2007) doing an evaluation at the Department of Defense. It is not empirical.

Now, she did a project dealing with sensitive information, found something that didn’t add up, and reported it in the evaluation; they got defensive and attacked her credibility and methodology. It took 5 years, 8 more evaluations, and a review committee for them to go back to her report and say, “Yeah, it was right.”

When threatened, or when results don’t really agree with what the stakeholder wants, suddenly everybody becomes a methodology expert.

Empirically testing politics in evaluation


Q: Why isn’t it done? A: It’s pretty difficult to operationalize.

  1. It’s deeply contextual: it plays on people’s motivations.
  2. It can occur at any stage of the evaluation, from beginning to end.
  3. Both evaluators and stakeholders can be guilty of it.
  4. Political things can happen after the evaluation has ended.
    1. (J: Two big reasons in my view:
      1. Fear of light being shed in which things look bad ‘’out of context’’; in other words, light being shed inappropriately.
        1. The response to #1 is good stakeholder management, and a process that allows people to give the further information they need to show that things aren’t all bad.
      2. Fear of light being shed on bad things that would go under the radar otherwise.
        1. The response to #2 is harder than #1; it falls under two categories: suppressing information in the report, and suppressing dissemination of the report. Analyzing the effects of the report if it gets into so-and-so’s hands: does this information NEED to be reported? These are deep questions: who is at risk, and what is their attitude? These are political questions.)
  5. Q: There is an entire study of politics …

How could we operationalize this?

  1. You could do a meta-evaluation: observe two sets of evaluations as they occur, to try to detect politics in an evaluation.
  2. If you wanted to test it experimentally, you could try to construct a very careful experiment.

Tarek Azzam’s dissertation was on something similar to politics in evaluation. He found that:

  1. Evaluators were most likely to respond to stakeholders that had the greatest amount of perceived control.


Q: What are the potentially dangerous implications of evaluators saying, “you’re not reporting everything,” when the evaluator is not a content expert? A: I think you should always be objective, because that’s your job: you shouldn’t be swayed by bias in your opinions. You have to be as objective as possible, so that you don’t get wrapped up in it.

Q: How do evaluators keep from getting into a “muck”? A: To quote Michelle Bligh: there is always politics, even if you think you’re not involved. It’s omnipresent, and whether you care to address it or not is up to you.


Politically Responsive Evaluation

In Azzam & Levine (2014), we basically theorized what we call “Politically Responsive Evaluation” (PRE): a way to understand culture in a more nuanced way.

It’s a framework for setting up how to analyze a situation. We made an attempt to empirically test politics in evaluation.

  1. We surveyed MTurkers and high school principals:
    1. “Based on these test scores, do you want to adopt this program?”
  2. As a manipulation, we added political context: “incentives.”
    1. I.e., “Hey, by the way: good test scores are tied to money for your district, accolades, really nice rewards for you.”
  3. We found that MTurkers were affected by politics as we operationalized it (see the analysis sketch after this list).
  4. High school principals were also influenced, but not significantly. (J: Are they just more politically savvy in what they SAY they would adopt? :))
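
A minimal sketch of how one might test such a manipulation statistically (the counts are invented for illustration; a chi-square test of adoption decision by condition is one standard choice):

```python
# Hypothetical analysis of the incentive manipulation: did respondents in the
# "political incentive" condition adopt the program at a different rate than
# those given a neutral framing? Counts below are made up.
from scipy.stats import chi2_contingency

#                adopt  don't adopt
incentive    = [  64,      36]   # framing tied to money, accolades, rewards
no_incentive = [  48,      52]   # neutral framing

chi2, p, dof, expected = chi2_contingency([incentive, no_incentive])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # small p would suggest the framing mattered
```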

We have yet to really measure it, and it’s difficult. The reason I wasn’t able to explain our test in full is that it was very difficult to operationalize.

If you’re interested, both Tarek and I are doing research here; I would have you contact Tarek.

We’re going through a paradigm shift: we’re in the age of analysis, because there’s too much information. (J: and not enough. WAY not enough.)

Be as interpersonal as you can be with everybody: with stakeholders, with evaluators. But just be patient! There will be people who act politically based on the hard work you do. Be mindful. Be objective.

Politics & Interpersonal Issues in Evaluation


We’ll start with the CDC Framework:


CDC Phase I: Engage Stakeholders


Deciding whom to engage and developing rapport:

  1. In theory:
    1. Significant up-front time (Greene & Preskill)
    2. Engage all possible stakeholders (House & Greene)
    3. Actively involve all stakeholders in all stages (Fetterman & Patton)
    4. (J: If you have a language to describe this with in the most efficient manner possible, that seems to be it; then you just try to focus on use and improvement of that language. Language can be something like BPMN; something formal - and with empirical backing.)
  2. In practice:
    1. Build trust and rapport
      1. Who are the gatekeepers? (It might not be who you think it is)
      2. Our field is unfortunately overrun by charlatans. (J: How do you measure that? You’d have to do an evaluation to know whether their evaluation was terrible! - What are the impacts of them ‘’not’’ knowing what they’re doing, compared with a “professional” evaluator?)
    2. Clearly articulate your agenda - if you have one. (“Mine is to help them improve and identify what we can learn about children’s development.”)
    3. Have a sense of humor. (If I’m trying to get people to come meet with me, it’s usually easier if I’m funny and interesting.)
      1. If meeting with you is this dreaded thing, where they think it will be painful (“I will ask them all these hard questions that they don’t know answers to, and they’ll feel dumb at the end of the meeting”), then if I call and want to schedule a meeting they’ll say, “Oh, yeah, we have time in a couple of months…”
      2. If you don’t have a sense of humor, you should develop one. It makes it so much easier to negotiate that people piece, and that politics piece.
    4. Develop your ‘’’interpersonal’’’ effectiveness
    5. Have conversations early on about how the tradeoff will pay off.
      1. By providing me money to do this work, they’re taking money away from service providers.
    6. Understand hidden agendas and politics: who’s connected to whom? (J: there is a great case study in Maritza Salazar’s class that provides information on this.)

Complex political dynamics:

  1. Funder
  2. Contractor
  3. Sub-Contractor
  4. Powerful person X

So, I understand the relationships: Powerful Person X is:

  1. A supervising agent to the Funder
  2. The boss of the big boss of the Contractor
    1. That is already a challenging political situation: the contractor doesn’t have as many degrees of freedom, because their boss is Powerful Person X.
  3. What I didn’t know is that the sub-contractor is ALSO connected to Powerful Person X, and they are like BFFs! (“‘’Best Friends Forever’’” - a colloquial American term, often considered typical of young girls in Southern California.)

___

So, about Powerful Person X: it took me about a year to figure out that they were BFFs, because nobody will tell you. When I asked, the subcontractor never said, “Oh yeah, we go way back! I ran his political campaign!” And that makes me realize, “Ah! That’s why there is a connection! That’s why everyone is so scared about touching sub-contractor X.”

If anyone has really good suggestions as I write this report about egregious use of funds, and lack of movement, and all these things that people will be very upset about …


CDC Phase II: Describe the Program


Dealing with Faulty Program Theory


Theory:

Practice

  1. Often implicit and vague
  2. Disagreement among key stakeholders
  3. Evaluation priorities might be poorly matched
  4. Resources may not allow for intensive observation.


Q: When you say program theories are often implicit or vague, does that mean the organization doesn’t have a program theory? (J: could we call these “process theories”? Because the process of learning in this class, for example, would be one thing - but I’m not sure it would be considered a “program.”) A: Some examples from my own practice:

  1. We have 10 logic models, which one do you want?
    1. They don’t have a clear idea of what their program is supposed to do.
    2. There were ten of them… they all have different outcomes. And they said, “Well, we have different funders.” And I said, “Well, which one is right?” And they said, “I don’t know! We don’t really know what our vision and mission are, because we have so many!”
      1. That could mean they don’t have strong leadership at the top driving vision and mission;
      2. They’re committed to thinking in that way, so they might be open to collaborating with us to find a “real” program theory.
  2. “our activities don’t link to any of our funder outcomes”
    1. This is where people say, “They don’t link to funder outcomes because the funders are stupid; we take the money and run; we don’t think those outcomes are important.”
  3. “It’s the colors, man!”
    1. Probably the most egregious example: it was a curriculum evaluation; a big textbook that publishers publish; if you use the book, you’re supposed to increase student learning. We were trying to get a sense of the pedagogical features of the text that would produce the student learning outcomes (SLOs) they were trying to achieve: “Why do you think your book / curriculum is supposed to improve student learning?” And this was their response! It could have been about visual presentation, “capturing student ….” - but no, it was, “the other textbooks use orange and green, and we’re using this blue with…” (J: What percent of the variance in student learning outcomes can visual presentation account for? :) What’s the model for how it works? (J: Trust -> Effort -> …); they might be going for “sustainability of attention.”)
    2. Q: what does that indicate? A: … M: they’re getting nowhere in getting a theory of change;
    3. Q: So, what’s your next step? You’re dealing with people who are experts on the curriculum. A: Can you ask the textbook company? M: We had to bring in new stakeholders who said, “it wasn’t just the colors: there is a pedagogy beneath this.”
    4. Q: What other implications does this have? A: If they’re selling the program, they’re … M: Usually the people selling are also the professional development people - they’re the internal experts, and they have no clue how the program works. And if they don’t know, they won’t be able to teach it.
  4. M: We were evaluating the effectiveness of that curriculum for learning - for the publisher.
    1. M: In the report, we might say, “In order to do this, you’ll need a good implementation system.” This was a summative evaluation. The first part in this process is …
    2. I hope you see that, paying attention to the subtleties, you have to modify your response to the types of responses you get.



CDC Phase III: Focus the Evaluation Design


Theory:

  1. Stakeholders actively engaged in deciding what counts as credible evidence (Greene & Patton)

Practice

  1. Lack of agreement on what counts as credible evidence. (J: Shouldn’t we have a rubric for what counts as credible evidence?)


Politics Influencing Designs


NCLB -> SBR -> RCT

We need to do an RCT to be considered SBR in order to get approved by NCLB and get on the state adoption list.

  1. SBR: “Scientifically Based Research.” The way this term was operationalized focused on RCTs as the best, with quasi-experiments if you don’t have RCTs.
  2. Textbook publishers then had to dedicate money to evaluating their products. Before that, they were doing market studies on usability: what do people like; maybe some learning outcomes; but it was much more about how to position the product within the market, which is very different from “how does this product produce the ___?”
  3. In order to get on the state approved list and make millions of dollars, …

Why would an RCT be the wrong design?

  1. A: If they implement differently, you don’t know - implementation.
    1. M: Absolutely the right answer. These studies cost hundreds of thousands of USD. What are we going to use as our control? No textbook? Ideal - but who wants to be assigned to that? (2 students out of ≈60 in this class.)
    2. At what level could we randomly assign? Student level? No. Teacher level? Possibly (but what threats would we be concerned with? Discussion between teachers: “cross-contamination”; resentful demoralization; a lot of issues). If we randomly assign at the school level: demography; differences in implementation; funding (a lot more than $300,000 USD to detect meaningful effects for learning; see the power sketch after this list). We only had money to randomly assign at the teacher level.
    3. If we randomly assign, who do you think will be a better implementer: the people who have used the other textbook for years and have developed ways to teach with it, or teachers who got the new textbook 2 days before school started, with a 4-hour PD (professional development) session on “colors”? (A reference to an example above.) A: The incumbent - who is supposedly going to be outperformed by the competitor. (J: You need a process model for the whole …)
    4. People do really bad RCTs because they don’t have the budget to do more, and they have to have an RCT to get on the state adoption list.
    5. However, nowadays people don’t really care about NCLB anymore, and because it has waned, publishers are going back to market surveys rather than efficacy studies. (J: What is the relationship between efficacy and market position? If the information is out there, can you actually sell more? Or does knowing what really works not give you any … What percent of the variance is efficacy in this? :))
  2. Who would you use as a control?
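
A minimal sketch of the power arithmetic behind those design choices (effect size, ICC, and class size are assumed values; the design-effect formula for cluster randomization is standard):

```python
# Rough sample-size arithmetic for a teacher-level (cluster-randomized) design.
# All inputs are assumptions for illustration, not the actual study's numbers.
from scipy.stats import norm

def n_per_arm(d: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Students per arm for a two-sample comparison, ignoring clustering."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 / d ** 2

d = 0.20    # a small but "meaningful" learning effect (assumed)
icc = 0.15  # intraclass correlation of students within a class (assumed)
m = 25      # students per teacher (assumed)

n_flat = n_per_arm(d)                 # ~392 students/arm if students were independent
deff = 1 + (m - 1) * icc              # design effect: ~4.6 here
n_clustered = n_flat * deff           # ~1805 students/arm once clustering is priced in
print(f"teachers per arm: {n_clustered / m:.0f}")  # ~72 teachers/arm
```

Numbers like these are why detecting learning effects at higher levels of assignment quickly exceeds a $300,000 budget.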

CDC Phase IV: Gather & Analyze Evidence


Theory

Practice

Lessons learned:

  1. Identify the gate-keepers!
    1. The CEO is not the gatekeeper of anything! I have to talk with the people who have influential power with the staff: who is the person who is most respected?
    2. I drove to every one of the 58 sites during an evaluation; it took me several months, but every day I was out at a different site, meeting with people, chatting with them, letting them know that I cared about them at that ground level. I spread the evaluation word: “What are we doing here? What is my role? How can I help you?” So when it came to, “Hey, we need you to add duties to your plate,” they were completely, “Sure, Bet! Of course I can help you” - because you’re fun and easy to share with; you bring cookies. (“Bet” is an informal version of “Betty” (or Rebecca); its use in an American context shows informality and personability.)
    3. This is different from the dry evaluator who submits a report so it can sit on your shelf.
    4. Collecting data over time requires trust: you have to trust them, and they have to trust that you’re going to do something with the data.
  2. You have to listen to everyone, not just the CEO.
    1. Most of the CEOs I work with are just introductory contacts; they want to make sure I’m appropriate.
    2. When dealing with them during the implementation of the project, they’re at a different level; they’re just trying to bring money into the organization.
  3. Strategies for making data collection easy:
    1. Intentional engineering: pre-populating names on Scantron forms. We had our staff fill in all the demographic bubbles, so that once the teachers got the forms, all they had to do was give them to the right kid. You wouldn’t believe how far that went with teachers who usually have to do all that work themselves. (J: EXACTLY! I’ve been saying this about our evaluation in the Office of Institutional Effectiveness (OIE) here at CGU.) A sketch of this kind of pre-population appears after this list.
  4. Thank-You Notes go a long way too.
  5. ___ (Pull in from powerpoint)
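
A minimal sketch of that kind of pre-population: merging a class roster into prefilled answer-sheet records so the only thing left blank is the response itself (file names and column names are hypothetical):

```python
# Hypothetical pre-population of answer-sheet records from a roster, so
# teachers only have to hand each form to the right kid.
import csv

with open("roster.csv", newline="") as src, \
     open("prefilled_sheets.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)  # assumed columns: student_id, name, grade, teacher
    fields = ["student_id", "name", "grade", "teacher", "response"]
    writer = csv.DictWriter(dst, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        # Demographics are filled in ahead of time; only "response" stays blank.
        record = {k: row[k] for k in fields[:-1]}
        record["response"] = ""
        writer.writerow(record)
```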


CDC Phase V: Justify Conclusions


‘’’Theory’’’

  1. Evaluators serve as educators to teach different stakeholders how to interpret and use the evaluation findings.
  2. Previous rapport and relationships with key stakeholders allow for frank discussions of negative findings.

‘’’Practice’’’

  1. Face-time often limited after results have been delivered
  2. Stakeholders endorse positive findings, and find fault with methods when findings are negative.
    1. I build into my contracts time to discuss the evaluation report and do something with it; instead of pre-populating all the results, I left them open.
    2. Q: I read recently about a data-visualization person in evaluation, Stephanie Evergreen, who started putting evaluation conclusions in fortune cookies; another person puts evaluation results in the bottom of chocolates. (J: You could then do a study of the difference between real fortune cookies and evaluation findings for effective decision-making…)
    3. (J: In many cases, you need summative evaluation to be able to do formative evaluation)

M: What systems do they have in place to communicate? Do they tell CEO -> mid-level -> staff?

When you think …

CDC Phase VI: Ensure Use and Share Lessons Learned


‘’’Theory’’’

‘’’Practice’’’

  1. Organizational characteristics might inhibit use.
    1. Some parts of the organization might not embrace it the same way.
  2. Eval findings are proprietary: not publicly available.
  3. The program lacks a system to use data for decision-making.


Multiple strategies:

  1. Deliver in multiple formats
    1. E.g., staff-level, teacher-level, org-wide
  2. Engage staff in an activity to critically think about findings
  3. Deliver findings verbally as we go: get staff and leadership perspective on findings informally in a safe space
  4. Sticky role of recommendations:
    1. Say, “Hey, I was looking at some data, and it seems that people don’t like this data… What are your thoughts on that? Does that resonate with you?” - ‘’’So that by the time they get the results, nothing is new.’’’
  5. Your stakeholders should ‘’know’’ what’s in it. (J: Talk about the data as you get it… but you have to make sure politically that there is a platform for that, so that it won’t interfere… if you get into fights early, it could take away from your ability to perform the evaluation…)

___ (example)

To what extent can we deliver sticky recommendations? (J: I think that means ones that will “stick around” - have effect)

  1. I felt after 3 years that I really knew the context and culture; but as an external evaluator, I don’t want to be too specific with recommendations:
    1. I focus on global recommendations, like “link this with this…”; “if this is what you want to improve, then ____”; as opposed to specific recommendations.


Wrap-up


Politics is everywhere: it’s your job to figure out what it is. If you hear words like “It’s because of the colors!”, that might make you worried: you might need new partners for the process. That’s why engaging the stakeholders so early is important, because it will influence the work you do.

Politics is really important: there are power dynamics, context, stakeholder involvement, environmental and contextual factors. It’s hard to say, “Here’s a prescriptive list of everything you need to think of,” because maybe the people who said “it’s just the colors” don’t matter; but in this case they were the ones who would be driving the entire study, the entire curriculum effort. They have a lot of power!

Getting the right people to the table, and getting them to communicate effectively is absolutely critical.

For me, interpersonal skill has helped me negotiate these difficult political situations.

  1. I try to be funny: I often fall flat, but I am at least entertaining to myself!

10-Minute Break. (14:39:47)

Activity:


You have 20 minutes to discuss this:

What program was being evaluated? What was the purpose of the evaluation, and how did the evaluator arrive at this purpose? Do you think the approach, design, and methods were a good match for context and purpose? What political issues were particularly salient in their evaluation? How did the evaluator respond to these issues?


Fetterman:

1. Program Being Evaluated: STEP, the Stanford Teacher Education Program; a 12-month teacher-in-training program (p. 100), 1997-1998.

2. Purpose - Two Stages: Formative, 1997-1998: “The first phase of the evaluation was formative, designed to provide information that might be used to refine and improve the program.”

The second stage of this evaluation was summative in nature, providing an overall assessment of the program (Fetterman, Connors, Dunlap, Brower, Matos, & Paik, 1999).

Purpose: evaluate …

2.1 How the evaluator arrived at purpose: Stakeholder interview?

3. Do you think the approach, design, and methods were a good match for context and purpose?

Yes, very: “The only way to have a good understanding and an accurate assessment, understanding real-world activity right in front of your eyes, is to be immersed.” (p. 114)

  • 3.1 Approach: A:
  • 3.2 Design:
  • 3.3 Methods:

3.4 Relation to Context:

Q: To what extent would this approach be contingency-based? M: A lot of what he talked about in it was.

Outcomes: 90% of recommendations adopted.

4. Political Issues that were salient

- His status as a member of the overarching department.
- One specific incident, and how he handled it, was really good.
- People feeling they didn’t have any voice: a political act.
- The personal nature of the politics: that it would be personally disconcerting, and uplifting.

5. Evaluator response to those issues: A: 1. His status: he tried to maintain objectivity. 2. His response to the …

Back together; 15:20. For Fetterman - what was the program? - Teacher Ed.

Honest feedback early on; being authentic and courageous. Address political climate; he was self-aware; knew he had credibility and could use it.

Aware of all the roles he had at the school; how they had issues with him being there; stepping in; all these different roles; and one of them was as a faculty member.

(J: Expectation setting: if you’re able to say, “Hey, this is what we expect,” you can try to defuse political issues that would be based on fear and so on.)

Gary Henry - Evaluation of Georgia Education System


Won a grant to construct an accountability system for the State of Georgia public school system.

First of its kind to measure things other than ___.

The way it was designed was political to begin with: the way it was published, making all these ratings and results public to everyone, could raise a lot of anger.

Henry’s reaching out to the media, and trying to train them to use the system as a tool, shows how evaluators can be proactive in helping facilitate use.

___


Abbreviations used in this document:

  • Q: = Question
  • A: = Answer
  • M: = Mentor
  • J: = Note-taker comments



2014-04-09 13:04:18 Claremont Graduate University PSYCH 315z – Comparative Evaluation Theory