Scholarship Policy and Procedures

Introduction

Proctor et al. (2009) define implementation research as “the study of processes and strategies that integrate evidence-based effective treatment into routine use” (p. 5). Driven by a concern with complexity and the processes by which outcomes are reached, policy implementation scholarship is most often qualitative, with many implementation scholars utilizing case study methodology (Sandfort & Moulton, 2015). However, qualitative studies of implementation are not the predominant source from which policymakers receive data on evidence-based programming. Instead, it is quantitative program and policy evaluations that actively inform policymakers (Spencer & Ritchie, 2002). While complexities are embraced in qualitative implementation research, such scholarship has limited influence in a policymaking culture that values simplicity, which leads to an overreliance on data generated by quantitative scholarship. Though empirical studies can provide data on the effect and impact of reforms, they are limited by their choice of variables, may suffer from bias in their application of statistical methods, and often do not include concrete analysis of the implementation processes by which outcomes are reached. Thus, relying solely on quantitative methods can further contribute to reductionism in policy design (Dumas & Anderson, 2014; Flyvbjerg, 2011; Spencer & Ritchie, 2002).

Researchers in education need not wholly shift their methodological approach; there is merit in both quantitative and qualitative data collection. However, there is a need for scholars to apply more dynamic lenses to their studies. The future of policy design and implementation would benefit from statistical studies that incorporate analyses of implementation processes within their evaluations of programs (Weiss, 1998; Sandfort & Moulton, 2015). Qualitative implementation studies, in turn, should develop tools (Sandfort & Moulton, 2015) and testable conceptual frameworks, and should undertake hypothesis testing of theories and frameworks in real-world settings (Proctor et al., 2009).

Background

Case studies of implementation across policy contexts have led to knowledge regarding effective processes by which implementation should be undertaken (Meyers et al., 2012; Sandfort & Moulton, 2014) and how change can be sustained (May & Finch, 2009). Case studies in education policy have examined how the practices, behaviors, and culture and climate of schools affected the way that federal reforms such as No Child Left Behind (NCLB), curricular reforms such as Reading First, or comprehensive school reforms such as Success for All were enacted. Qualitative implementation analyses like those explored in Part A have promise because they offer policymakers “a theory of social action grounded on the experiences…of those likely to be affected by a policy decision or thought to be a part of the problem” (Walker, 1985, p. 19). The in-depth analysis provided by case studies also offers quantitative researchers a specific set of testable constructs to incorporate into statistical research (Spencer & Ritchie, 2002). However, the preponderance of variables (Matland, 1995; Proctor et al., 2009) and the inconsistent language and definitions of terms (Meyers et al., 2012) are not conducive to the current policy design paradigm, which is outcomes-focused and “evidence-based”.

Currently, the succinct manner in which quantitative researchers are able to communicate outcomes via their research findings contributes to their widespread use by policymakers eager to employ “evidence-based” practices and programs within their policy designs (Spencer & Ritchie, 2002). Overreliance on research methods that communicate outcomes, without also including the processes by which those outcomes are reached, poses significant challenges to education stakeholders as they attempt to replicate the successes of “evidence-based” programs.

Implementation processes, policy environment, and organizational context are all complex and interrelated factors that affect the results of policy innovations. In 1986, Malcolm Goggin asserted, “implementation behaviors are shaped by the decision making environment, the type of policy at stake, and the characteristics of both the implementing organizations and the people who manage the program” (p. 330). Excluding any of these factors provides reformers with an incomplete understanding of what it takes to make and sustain change. Without holistic knowledge regarding interventions and implementation processes, policymakers divert educational funds and resources toward innovations that may prove unsuccessful in practice.

Case Studies: Limited?

Implementation studies focus on the uptake of innovations in particular contexts, meaning they often rely on case study methodology (Coburn, 2006). Proctor et al. (2009) argue that this is a detriment to the field because it yields “anecdotal evidence” rather than findings from “highly controlled experiments”. Sandfort and Moulton (2015) further this claim, stating, “Research designs do little to help figure out which parts of the intervention are causal or what factors drive positive results”. In their view, implementation research tends to focus on “what it takes to inspire or motivate actors”. In an effort to produce data that can be understood by policymakers and applied in policy design, scholars within implementation studies have called for the development of tools (Sandfort & Moulton, 2014), a shared implementation language (Meyers et al., 2012; Proctor et al., 2009), singular frameworks by which theories of implementation can be tested (Meyers et al., 2012; Proctor et al., 2009), and the use of more “hypothesis driven statistical approaches” (Proctor et al., 2009). Proponents of these shifts assert that implementation research that does not seek to develop mechanisms and tools by which effective implementation can be reached is not conducive to broadening understanding within the field (Sandfort & Moulton, 2014) and, in turn, will not lead to improved policy design or outcomes. Without this methodological and analytical shift, policymakers will be left relying on quantitative measures to design future policies and implementation plans.

Show Me the Evidence

Ron Haskins and Greg Margolis (2014) contend in “Show Me the Evidence: Obama’s Fight for Rigor and Results in Social Policy” that nine out of ten social policies fail. Haskins and Margolis (2014) argue that the widespread failure of social policy has given rise to the current evidence-based movement, which evaluates programs for their impact in order to more effectively allocate scarce resources. Under this “evidence-based” regime, particular types of research and data are privileged, and evidence of programmatic success becomes the primary criterion that determines whether programs are replicated or defunded (Haskins & Margolis, 2014). Proctor et al. (2009) argued, however, that there is not a lack of evidence-based programs, but rather a lack of evidence-based implementation. Sandfort and Moulton (2015) claim that research-based interventions can only be beneficial if there are studies that examine implementation processes. Moreover, Blase et al. (2015) write that although “data is necessary to produce change, it cannot prompt the adoption of change or create or sustain change in practice in schools or classrooms” (p. 2). In short, statistical data are limited by their ability to generate and test variables. Further, though the empirical findings of studies are often viewed as less susceptible to bias, scholars have elucidated how the analytic assumptions applied to statistical modeling can undermine the validity of findings. Finally, quantitative evaluations of policies that do not include a concrete analysis of implementation processes pose challenges for future attempts at implementation and replication across contexts. Proctor et al. (2009) assert that the culture of an organization “may wield the greatest influence on acceptance of empirically supported treatments and willingness and capacity of provider organizations to implement treatments”. Thus, without an understanding of how change processes began and were enacted, the characteristics of an organization, or the human behavior that must be amended in order for change to take hold in a school, the true ability to replicate effectively is limited.

Randomized Control Trials

Randomized controlled trials are heralded as a premier means of providing scientific evidence about policies and educational interventions due to their experimental design (Sandfort & Moulton, 2015). Fixsen et al. (2009) defined evidence-based as “two or more randomized group designs, professionally done by two or more groups of investigators that examine outcomes of a program” (p. 531). In education, the most statistically rigorous research studies can hope to be featured in the What Works Clearinghouse, an outgrowth of the Institute of Education Sciences (IES). In contentious political climates, many reformers and policymakers are concerned with identifying innovations that work and subsequently taking them to scale (Elmore, 1996; Schneider, 2011). This leads many policymakers to rely on quantitative measures that apply “simplicity and precision” to find answers to complex problems.

The value of quantitative methods, in addition to their ability to provide causal or relational data regarding policy outcomes, is that statistical methods allow researchers to perform subgroup analyses to better understand how targeted populations are affected by policies. Knowledge of the differential effects of reforms and schooling practices on underserved populations has been a powerful use of statistical analysis. However, it is imperative that experimental studies be applicable to real-world settings (Brown, 1992, as cited in Anderson & Shattuck, 2012).

Limitation of Variables

What is considered in policy analysis and evaluative studies is crucial to policy design and implementation. The limitation of research that focuses on outcomes is that, while it may acknowledge the complexities that schools face, the political and local implementation environments and processes in which reforms are enacted are often treated as tangential rather than central (Elmore, 1980; Fullan, 1992; Leithwood et al., 2009). While statistical analyses can provide information regarding the measurable impact that a program has on a particular set of outcome variables, they are bound by the variables included in their scales and survey instruments (George & Bennett, 2005). Therefore, though quantitative studies may be able to say that a program was effective or ineffective in moving a particular measure, they are not necessarily able to say why or how. Though outcome-centered studies include demographic, socio-economic, and other measurable factors, they often miss the interpersonal, school-based contextual factors that directly shape implementation behaviors and processes and the subsequent success of programs. Selection bias inherent in some school reforms poses a further challenge to replicating programmatic outcomes. In an applied field such as education, our research methods must reflect the complexities of the environments we study (Durlak & DuPre, 2008; Weaver-Hightower, 2008). Education is political (Henig et al., 2001), politics are racialized (Sizemore, 2008), and schools, as social institutions, are imbued with the racial and political nature of education (Gillborn, 2005). An example of how a lack of broader socio-political analysis limits policy evaluation can be seen in evaluations of charter school policies.

Studies of the efficacy of charter schools and voucher programs abound in education policy research (Carlson et al., 2013; Imberman, 2011; Witte et al., 2014). However, Frankenberg and colleagues (2010) called the charter school movement a civil rights failure, citing segregation and the absence of gains in student outcomes. Chapman and Donnor (2015) note that many market reforms claim that providing choice to students of color will increase their educational opportunity by impeding and interrupting the monopolization of public education by traditional public schools. The authors argue, however, that market ideology that does not contend with white privilege and white racism causes policies to become “entrench[ed] in racial inequality” (p. 140); failing to attend to the social and societal implications of such market-based policies does a disservice to students (Lipman, 2013).

Limitations of methodological assumptions

Hanushek (1996) asserts that the statistical methods researchers employ, and the assumptions that inform those methods, are crucial to understanding the results produced in research. Borman et al. (2006) illustrated this in their analysis of quasi-experimental studies of comprehensive school reforms such as Success for All. The authors found that the effects reported in studies of Success for All are undermined when the statistical analysis does not account for the programmatic requirement that 80% of a school’s faculty agree to adopt the reform. Including this requirement in the statistical analysis is crucial, as it naturally addresses one of the key facets of implementation: willingness to change (Borman et al., 2006; Henig et al., 2001; Proctor et al., 2009; Thapa et al., 2013). Employing a different statistical method and a sample that included schools with evidence of both “good and bad” implementation results, the authors generated smaller effect sizes than researchers who utilized a randomized design.

Implications for Policy Design and Implementation

In 1992, Miles and Fullan declared, “Education reform can’t be achieved if leaders and participants don’t internalize and habitually act on basic knowledge of how change takes place”. This proclamation is echoed by Bryk and colleagues (2015), who argue, “Most education reforms reflect at best a partial understanding of system dynamics, and some seem almost oblivious to the fundamental character of the phenomena they seek to change” (p. 58). This lack of attention to understanding change is perpetuated by an overreliance on quantitative measures. While evidence is crucial to policy design, we must also understand what particular forms of data can and cannot tell us (Mason, 2017). Without a dynamic approach to the study of policy implementation, policymakers will continue designing and implementing programs with limited success. The failure and abandonment of policy often leads to policy churn (Blase et al., 2015). The consequences of policy churn are not simply monetary or politically frustrating; the constant turnover of policies frustrates stakeholders and actually undermines future change efforts within schools (Payne, 2008).

Method and Analysis for the future

Spencer and Ritchie (2002), in their article “Qualitative Analysis for Applied Policy Research”, provide a framework that could prove beneficial to qualitative case study scholars. They describe qualitative data as a mechanism by which greater understanding can be reached and which can directly inform future quantitative analysis by elaborating upon and corroborating findings gathered via quantitative methods. The work of qualitative analysis, in their view, is “detection and the tasks of defining, categorizing, theorizing, explaining, exploring, and mapping” (p. 176). It is within these qualitative tasks that Spencer and Ritchie (2002) outline their “Framework” strategy for applied research. Their framework is founded upon the words and accounts of participants, is flexible and easily adjusted throughout analysis, and allows for easy access to the raw qualitative material. The promise of this systematic approach to data analysis is that it allows the “analytic process, and the interpretations derived from it, [to] be viewed and judged by people other than the primary analyst” (p. 177).

In an effort to produce scholarship that can provide a holistic understanding of policy outcomes, both qualitative and quantitative researchers should seek to incorporate a single scale that can measure the organizational contexts found to affect the implementation, uptake, and sustainability of reforms. The Quality Implementation Framework put forth by Meyers et al. (2012), along with Normalization Process Theory (May & Finch, 2009), represent implementation frameworks and conceptualizations that can be adapted to education contexts, in addition to scales that are specific to educational contexts and the policymaking process writ large.

If quantitative methodologists employed a reliable instrument that measured such contextual factors, in the same way that demographic variables such as race, SES, and gender are collected, they would be able to contribute a wealth of information regarding not just programmatic outcomes but also the contexts in which programs were implemented, embedding greater attention to implementation processes within evaluative scholarship.
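To make this suggestion concrete, the sketch below is a minimal, hypothetical illustration only: it uses simulated data and invented variable names (e.g., an “implementation_climate” score) rather than any instrument or study cited here, and it simply shows how such a contextual scale could sit alongside demographic covariates in a standard program evaluation model, both as a covariate and as a moderator of the program effect.

```python
# Illustrative sketch only (simulated data, hypothetical variable names).
# Shows how an organizational-context score could be entered into an
# outcome model alongside demographic covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),               # program participation (0/1)
    "ses": rng.normal(0, 1, size=n),                      # socio-economic status index
    "implementation_climate": rng.normal(0, 1, size=n),   # hypothetical contextual scale
})

# Synthetic outcome in which the program effect depends on implementation
# climate, mimicking the claim that context moderates programmatic outcomes.
df["achievement"] = (
    0.20 * df["treated"]
    + 0.30 * df["ses"]
    + 0.40 * df["implementation_climate"]
    + 0.25 * df["treated"] * df["implementation_climate"]
    + rng.normal(0, 1, size=n)
)

# Model that treats context as both a covariate and a moderator of treatment.
model = smf.ols("achievement ~ treated * implementation_climate + ses", data=df).fit()
print(model.summary())
```

In such a specification, the interaction term would indicate whether programmatic effects vary with organizational context, which is precisely the kind of information an outcomes-only evaluation omits.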

Conclusion

Policy implementation scholars are often able to capture the complexities of implementing environments, processes, and the behaviors of implementing actors in their scholarship. These data have the potential to greatly influence policy design and the implementation practices of school organizations. However, the complexity of the findings and the conclusions reached is often not conducive to the evidence-based, outcomes-focused culture of policymaking, which leads to an overreliance on econometric and other quantitative policy evaluation methods that are limited by their variables and assumptions and often do not include data regarding implementation processes. Without adequate knowledge of the complexities of environments and processes, the ability to replicate programmatic outcomes is threatened. As such, there is a need for qualitative researchers to employ analytical and methodological approaches that make their research more applicable to policy. Additionally, quantitative researchers should endeavor to include data regarding implementation processes within their studies of the efficacy of particular educational interventions.
