Why have the results of the Follow Through evaluation failed to impact the
policies and practices of the educational community? Why have the most effective
approaches for educating children not been widely disseminated? Why has the
knowledge gained from the Follow Through evaluation not been used to reform
education in America? The answers to these questions may be found in part by
looking at how instruction is viewed by the various elements of the educational
establishment.
Follow Through provides an opportunity for such an
analysis because it revealed how the educational industry collectively conceived
of, planned, conducted, and interpreted a large scale educational experiment.
When I was in graduate school, I wrote a lengthy paper in which I traced the
history of Project Follow Through, looked at its implications for education, and
analyzed the contingencies that determine educational practices. This article is
condensed from the paper, which will be published this fall by the Cambridge
Center for Behavioral Studies. I made a vow that I would tell the story of
Project Follow Through to anyone who would listen.
History
Many people know the history of Project Follow Through far better than I,
because they lived it. As I understand it, it goes something like this. In 1964
Congress passed the Economic Opportunity Act, which initiated a range of
programs intended to fight poverty. The federal policy that emerged from the EOA
was influenced by a growing consensus that education would be the antidote to
poverty by providing the skills necessary to break out of the cycle of poverty. One of
the best known programs to develop from this rationale was Head
Start.
Head Start began in the summer of 1965. It was immediately popular
and continues to enjoy tremendous public support today. It is commonly believed
that Follow Through received impetus from the success of Head Start and from a
study showing that gains made by Head Start children dissipated when they began
school. In reality, the decision to initiate Follow Through was probably a
function of both conviction and expediency. In any event, in February 1967,
President Johnson requested that Congress establish a program to "follow
through" on Head Start. The outcome was Public Law 90-92, authorizing Project
Follow Through. Although it has been referred to as the largest and most
expensive educational experiment, Follow Through was not initially conceived as
an experiment, but as a comprehensive social services program. However, before
the program got underway, budget cuts forced a reconceptualization and Follow
Through was converted to a longitudinal experiment aimed at finding effective
methods for teaching disadvantaged children. The Follow Through experiment
involved close to 10,000 children from 120 communities each year from 1968 to
1976. Follow Through continued as a service program until funding was eliminated
in 1995.
Design
The design of the Follow Through experiment was called planned variation.
Based on the notion that a variety of curricula and instructional methods could
be designed, implemented and evaluated, the planned variation approach was
intended to reveal differences in effectiveness among different teaching
approaches.
A plan was devised that made it possible to implement a
variety of educational models in local school districts, while avoiding the
appearance of unwarranted federal intervention. The Office of Education
contracted with developers of educational approaches who then acted as sponsors
of their model and worked cooperatively with districts to implement the model in
Follow Through classrooms.
Each sponsor was responsible for translating
the model's particular approach to education into practice. This included
selecting or developing instructional materials, and training teachers in the
model's instructional method. The Follow Through sponsors' task of designing a
complete curriculum for the entire school day had never before been attempted
in educational reform.
The selection of sponsors began in January of
1968. Individuals or groups who were involved in developing new approaches for
teaching young children were invited to present information about their
programs. Sixteen developers subsequently submitted formal proposals, twelve of
which were accepted. The approaches represented the entire spectrum of
assumptions about instruction, ranging from the carefully controlled approach of
the Direct Instruction and Behavior Analysis models to child-centered approaches
such as Bank Street and Open Education. Ten additional sponsors were added over
the following three-year period, not because they offered unique approaches to
compensatory education, but because they offered the possibility of enlarging
the Follow Through constituency.
The selection of sites progressed
synchronously with sponsor selection. From among a group of 225 nominated school
districts, a total of 51 were selected, based on their ability to begin a
comprehensive services program before the start of the school year, their
willingness to participate in the planned variation experiment, and their working
relationship with local community action agencies.
Sites and sponsored
models were paired during a four-day conference held in Kansas City in February
1968. In an effort to increase cooperation in implementing the various models,
local representatives were encouraged to choose the model they believed was most
compatible with the goals and interests of their district. Each model was
implemented in a variety of sites, where children received daily instruction in
the model. Performance data were collected when children entered the program and
at the end of each school year until they completed third grade.
Evaluation
The evaluation of this enormous project was complex and expensive. The data
were collected by Stanford Research Institute and analyzed by Abt
Associates.
Eleven outcome measures were included in the national
evaluation. All sponsors agreed upon the outcome measures, which were intended
to assess performance in different learning domains including basic academic
skills, general problem-solving skills, and the development of
self-concept.
For evaluation purposes Abt Associates divided models into
three broad categories according to their areas of primary emphasis. The
typology was based on each sponsor's program description and the stated goals
and objectives of the model. The Basic Skills category included
models that focused primarily on directly teaching fundamental skills in
reading, arithmetic, spelling, and language. The Cognitive-Conceptual category
included models intended to develop "learning-to-learn" and problem-solving
skills. Models in the Affective-Cognitive category emphasized development of
self-concept and positive attitudes toward learning, and secondarily,
"learning-to-learn" skills. Nine of the major models included in the national
evaluation are described by model type in Table 1.
For each outcome
subtest, Abt evaluators compared the performance of a group of Follow Through
children at a given site with a comparison group. This process resulted in more
than 2,000 comparisons. The difference between a Follow Through group and the
comparison group was used as the measure of effect. An effect was judged to be
educationally meaningful if the difference 1) was statistically significant and
2) was at least one quarter standard deviation. When Follow Through scores
exceeded non-Follow Through scores the outcome was considered positive. When
non-Follow Through scores surpassed Follow Through scores, the outcome was
considered negative. Average effects were computed for individual models, as
well as for model types.
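To make the decision rule concrete, the following short Python sketch classifies a single Follow Through versus comparison-group contrast under the two criteria just described. It is purely illustrative: the variable names, the simple pooled standard deviation, and the use of a two-sample t-test are my assumptions, not the Abt procedure, which relied on covariance-adjusted analyses.

    # A minimal sketch of the Abt classification rule for one subtest contrast.
    # Assumptions (not from the Abt reports): raw score lists, a two-sample
    # t-test for significance, and a simple pooled standard deviation.
    from statistics import mean, stdev
    from scipy.stats import ttest_ind  # illustrative choice of significance test

    def classify_effect(ft_scores, comparison_scores, alpha=0.05):
        """Return 'positive', 'negative', or 'null' for one comparison."""
        diff = mean(ft_scores) - mean(comparison_scores)   # the measure of effect
        sd = stdev(ft_scores + comparison_scores)          # rough pooled spread
        _, p = ttest_ind(ft_scores, comparison_scores)     # criterion 1: significance
        # Criterion 2: the difference must be at least one quarter standard deviation.
        if p < alpha and abs(diff) >= 0.25 * sd:
            return "positive" if diff > 0 else "negative"
        return "null"  # not educationally meaningful under the two criteria

Averaging such classified differences over sites and subtests would then yield the per-model and per-model-type average effects that the report describes.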
Abt Associates produced yearly reports, which
were published in four volumes titled Education as Experimentation: A Planned
Variation Model. Volume IV (Stebbins et al., 1977) provides the most
comprehensive evaluation of the differential effectiveness of the models. The
following findings of average effects for model types are paraphrased (in
italics) from Volume IV-A (pp. 135-148).
Models that emphasized basic
skills succeeded better than other models in helping children gain these
skills. Groups of children in Basic Skills models performed significantly
better on measures of academic skills than did non-Follow Through groups. Abt
evaluators concluded that a Basic Skills model would be preferable if an
educator was concerned with teaching skills such as spelling, math computation,
language, and word knowledge. Note that the Abt report refers to the superiority
of a model type. However, it is not inclusion in a category that leads to
educational effectiveness, but the particular instructional materials and
procedures used. The Direct Instruction model had an unequivocally higher
average effect on scores in the basic skills domain than did any other
model.
Where models emphasized other skills, the children they served
tended to score lower on tests of basic skills than they would have done without
Follow Through. With the exception of the Florida Parent Education model,
all Cognitive-Conceptual and Affective-Cognitive models had more negative than
positive outcomes on measures in the basic skills domain. That is, performance
of students in the comparison group was superior to that of the Follow Through
students in those models. At the end of third grade, children taught in these
models had achievement scores that were, in fact, lower than would have been
predicted in the absence of "compensatory" education. Thus, four years spent in
the majority of models actually increased the educational deficits that Follow
Through was intended to remediate.
No type of model was notably more
successful than the others in raising scores on cognitive-conceptual skills.
No model type had an overall average positive effect on measures in this domain,
which included reading comprehension and problem solving. One model that did
have considerable impact on cognitive-conceptual skills was the Direct
Instruction model. Not one model in the Cognitive-Conceptual category obtained a
positive average effect on these measures, even though these models'
instructional programs emphasized development of these skills. Models that
focused on cognitive-conceptual skills were incapable of influencing
standardized measures of those skills after four years of
instruction.
Models that emphasized basic skills produced better
results on tests of self-concept than did other models. On the average,
children in models the evaluators classified in the Basic Skills category
performed better on affective measures than did children in Cognitive-Conceptual
or Affective models. All models in the Basic Skills category had positive
average model effects. The only other model to demonstrate a positive average
effect was the University of Florida's Parent Education model. In every case, the
models that focused on affective development had negative average effects on
measures in this domain.
The Direct Instruction and Behavior Analysis
models ranked first and second, respectively, in average effects on affective
measures. Both of these approaches stress careful structuring and sequencing of
curriculum materials that are designed to limit the number of errors and ensure
successful performance. In addition, they both rely on frequent measurement of
the child's progress in order to provide immediate remediation. These models
view positive self-concept as an outcome of skill acquisition. In other words,
rather than considering self-concept a necessary prerequisite for learning, they
contend that instruction resulting in academic success leads to improved
self-concept. The data uphold this view.
It would be a mistake, however,
to claim that instruction in a Basic Skills model leads to academic success and
improved self-concept. Significant differences on both categories of measures
were observed for only two of the Basic Skills models, Direct Instruction and
Behavior Analysis. In other words, describing the result as a "Basic Skills"
effect does not identify the specific instructional variables that lead to
significantly better performance in both outcome areas. The fact remains,
however, that no model classified as "Affective" had a positive average effect
on affective measures.
The average effects for nine individual models are
represented in Figure 1. The centerline of the figure indicates no difference
between students in a Follow Through model and comparison students. Notice that
the Direct Instruction model is the only model to show sizable positive effects
on all measures. The majority of models show considerable negative effects
(performance below the level of the comparison group) on all measures. These
findings clearly show the Direct Instruction model to be superior on these
measures compared with traditional programs and with other Follow Through
models.

Figure 1: This figure shows the average effects of nine Follow Through models on measures of basic skills (word knowledge, spelling, language, and math computation), cognitive-conceptual skills (reading comprehension, math concepts, and math problem solving), and self-concept. Adapted from Engelmann, S., & Carnine, D. (1982), Theory of instruction: Principles and applications. New York: Irvington Publishers.
The evaluation was not only costly, but controversial. At least three
other major reanalyses of the data were independently conducted. None of these
analyses showed significant disagreement with respect to the achievement data. Results
of the national evaluation and all subsequent analyses converge on the finding
that the highest achievement scores were attained by students in the Direct
Instruction model. The Follow Through experiment was intended to answer the
question "what works" in educating disadvantaged children. If education is
defined as the acquisition of academic skills, the results of the Follow Through
experiment provide a clear answer to the question.
Dissemination
The purpose of the Follow Through planned variation experiment was to
identify effective educational methods. However, there is little utility in
identifying effective methods if they are not then made accessible to school
districts. The Joint Dissemination Review Panel (JDRP) and the National Diffusion
Network (NDN) were created to validate and disseminate effective educational programs.
In 1977, Follow Through sponsors submitted programs to the JDRP. "Effectiveness"
was, however, broadly interpreted. For example, according to the JDRP, the positive
impact of a program need not be directly related to academic achievement. In
addition, a program could be judged effective if it had a positive impact on
individuals other than students. As a result, programs that had failed to
improve academic achievement in Follow Through were rated as "exemplary and
effective." And, once a program was validated, it was packaged and disseminated
to schools through the National Diffusion Network.
The JDRP's validation
practices did not go unchallenged. According to former Commissioner of
Education, Ernest Boyer, "Since only one of the sponsors (Direct Instruction)
was found to produce positive results more consistently than any of the others,
it would be inappropriate and irresponsible to disseminate information on all
the models..." (quoted in Carnine, 1984, p. 87). However, commissioner Boyer's
concerns could not prevent the widespread dissemination of ineffective
instructional approaches. The JDRP apparently felt that to be "fair" it had to
represent the multiplicity of methods in education. Not only did this practice
make it virtually impossible for school districts to distinguish between
effective and ineffective programs, it defeated the very purpose for which the
JDRP and NDN were established.
Funding Decisions
The effect of the Follow Through evaluation may also be measured by the
extent to which the findings influenced decisions about funding. While all
Follow Through models received budget cuts over the years, the disbursement of
available funds was not based on effectiveness, but on a non-competitive
continuation basis. In fiscal year 1982, the funding formula was changed so that
sponsors with JDRP-validated programs received the lowest level of funding,
while the highest level of funding went to those sponsors that had not been
validated!
Not surprisingly, funding ineffective programs at a higher
level did not make them effective. Not one additional program was validated
during the following year. Yet the same funding policy continued to be
implemented, favoring ineffective programs. It is clear that increased financial
support by itself does not lead to increased performance by students. How
children are taught is critically important.
The results of the Follow
Through evaluation also failed to influence decisions about allocation of
federal research funds. Planned variation makes it possible to identify the best
performing programs and then subject them to further analyses. Instead, the
Office of Education and the National Institute of Education agreed to spend $12
million to develop and study new Follow Through approaches, with the
primary concern being "whether or not an approach can be put in place and
maintained, not with the effectiveness of the approach in improving student
outcomes" (Proper and St. Pierre, 1980, p. 8) [emphasis added]. According to
Stallings (1975), the Direct Instruction model was not only the most effective;
it and the Behavior Analysis model were also the most easily implemented. If
information about implementation was needed, these two models provided a good
starting point. The plan that was pursued shows total neglect of the findings of
the Follow Through evaluation and astonishing disregard for the academic
achievement of students.
Perhaps even more disturbing is the fact that
twenty years after the publication of the Follow Through evaluation, there is
little evidence that the results have altered educational practices in American
classrooms. The majority of schools today use methods that are not unlike the
Follow Through models that were least effective (and in some cases were most
detrimental). Barriers at all levels of the educational system preclude
widespread adoption of the model that was most effective.
The Educational Establishment
The history of Follow Through and its effects constitute a case study of how
the educational establishment functions. As in other bureaucracies, it is
composed of parochial vested interests that work either to maintain the status
quo or to advance a self-serving agenda. As a result, the largest educational
experiment in history (costing almost one billion taxpayer dollars) has been
effectively prevented from having the impact on daily classroom practices that
its results clearly warranted. Let's look at some factors that operate at each
level of the educational establishment to influence decisions about teaching
methods and materials.
Policymakers. Follow Through demonstrated that
public policy is based on public support, not on empirical evidence. Thus, the
position that officials adopt with respect to teaching methods is most likely to
be congruent with the position of the majority. Because the Direct Instruction
model represents a minority view in education, it was not surprising that
policymakers failed to take a strong position in support of the Follow Through
results.
Although some policymakers may have formal training in
education, they typically rely on input from education professionals
when developing and supporting programs. The influence of stakeholders in
traditional educational practices can be seen throughout the history of Project
Follow Through. Planning committees, advisory boards, and task forces were
composed of representatives of universities and research centers. These
professionals usually represent educational philosophies that the Follow Through
results suggest do not, and cannot, lead to the development of effective
teaching methods. For example, the chairman of the Follow Through National
Advisory Committee was the dean of the Bank Street College of Education, whose
model was ineffective in improving academic achievement or affective
measures.
Clearly some professionals with a self-interest have the power
to influence educational policy in a direction that will not necessarily lead to
improved education. In fact, some social policy analysts assert that in
situations where administrators are strongly convinced of the effectiveness of a
program, it is likely that an evaluation will be disregarded. This is tragically
illustrated in California, where policymakers enamored with Whole Language were
seemingly incapable of attending to data showing serious declines in students'
reading performance, including a national assessment on which California
students placed last. By ignoring outcome data, policymakers continue to make
educational decisions that negatively impact children. And the most vulnerable
learners are those who are most seriously harmed.
An additional problem
is that policymakers frequently rely on information that others provide them.
Thus their decisions are often based on incomplete and inaccurate data that
reflect not what research has revealed, but the biases of program
administrators and supporters. An Office of Education document that was read at
an appropriations meeting claimed that "when contrasting all Follow Through
children with their non-Follow Through comparisons... there emerge large
differences in achievement, motivation, and intense effects" (U. S. Congress,
1974, p. 2361), a statement leading senators to believe that the Follow Through
program as a whole was successful and should be continued. John Evans, OE's
Acting Deputy Commissioner for Planning, Budgeting, and Evaluation, explained to
Congress that:
...Follow Through is made up of a different set of alternative ways of approaching alternative education, different models, different programs. And the task and central purpose of that program...is to find out which of those methods or approaches are more or less effective. The evaluation evidence we have compiled indicates just what we would expect from that kind of experiment: namely, that some of those models and approaches are very reassuringly effective, and the kinds of things we would want to see disseminated and used more broadly...other models are not successful and not effective and not the kinds of things we would want to carry on or continue to fund or support indefinitely (U. S. Congress, 1974, p. 2360).
This example illustrates how reports and interpretation of results may
serve as a source of confusion when decision makers are faced with the task of
determining the fate of a program.
Admittedly, policymakers
are more likely to be influenced by social and political contingencies than by
empirical data. However, others may be expected to pay more heed to the findings
of major research programs in their field.
Colleges of Education.
Project Follow Through was unique because it examined not only instructional
programs, but the educational philosophies from which they were developed. While
the Follow Through models varied greatly in their specifics, each may
generally be considered to represent one of two broad philosophies of
education. The majority of models were based on philosophies of "natural growth"
(Becker and Carnine, 1981) or what Bijou (1977) referred to as "unfolding."
According to these models, learning involves changes in cognitive structures
that are believed to develop and mature in the same manner as biological organs.
Whole Language is an example of instruction derived from this philosophy. It is
based on the belief that reading develops naturally given sufficient exposure to
a print-rich environment.
The second philosophical position is concerned
with principles of learning or "changing behavior" (Becker and Carnine, 1981).
From this perspective, teaching involves specifying what is to be taught and
arranging the environment in such a way that the desired change in behavior
results.
Although the data from Follow Through support the latter
position, the majority of colleges of education espouse a philosophy of
cognitive restructuring. Thus, the data from Follow Through fail to support the
philosophy that dominates colleges of education. This obviously made it
difficult for educators to accept the Follow Through findings, and they responded
by discrediting the evaluation, voicing specific objections to
the Direct Instruction model, and questioning the model's values. For
example, educators are fond of accusing direct teaching approaches of ignoring
the "whole child" by emphasizing academic achievement at the expense of
affective development. The Follow Through data clearly show that no such
trade-off occurs. The Direct Instruction model was more effective than any other
model on measures of self-esteem. A second objection is that Direct
Instruction is reductionistic and results in only rote learning of non-essential
skills. Yet, the data show that students in the Direct Instruction model
demonstrated superior performance on measures of complex cognitive skills. In
contrast, not a single model that set out to improve these cognitive skills was
able to do so.
Although effective methods may be rejected simply because
of their philosophical underpinnings, it is possible that they are rejected for
more practical reasons as well. If teachers are to become competent in the use
of effective teaching methods, teacher training programs must be restructured
and those who are responsible for teacher training must themselves become
proficient in those methods. Effective restructuring will require changes not
only in what is taught, but in how it is taught as well. The training paradigm
underlying most teacher training programs has little to recommend it, with
students spending the majority of their time listening to lectures about theory
and method. Sponsors of Follow Through models found that lectures about teaching
had little impact upon actual teaching practices. Training was most successful
when it included modeling of the desired behaviors, opportunities for teachers
to practice, and feedback about their performance (Bushell, 1978). This has
important implications not only for preservice training of teachers, but for how
schools provide inservice training.
Teachers. Probably the biggest
obstacle is the fact that the instructional methods a teacher uses are most
likely to be those taught during his or her own training. Although it is assumed
that teachers have acquired the skills necessary to teach their students, in
reality teachers are woefully unprepared. For example, there are currently
thousands of teachers in classrooms who do not know how to teach beginning
reading, because the professors who "taught" them adhered to a philosophy of
"natural growth." One teacher confided to me, "I do not know how to teach
reading to someone who doesn't already know how to read"! If our teachers do
not, by their own admission, know how to teach, how will our children
learn?
Teachers may not seek out empirically validated methods, such as
Direct Instruction, because they fail to recognize that their current methods
are not effective. Student failure is more likely to be attributed to deficits
within the child or to external factors such as the child's home life, than to
ineffective instruction. Furthermore, many teachers are not even aware that
methods exist that would enable them to be more effective. In many instances,
the only information teachers have about Direct Instruction is misinformation.
And, even if teachers did know there was a better way to teach, how would they
acquire the necessary skills? Surely not by returning to the schools where they
received their initial teacher training.
Teachers who are motivated to
look for and use effective methods often run into opposition. For example, if
Direct Instruction materials have not been approved for purchase by curriculum
committees, teachers will, in effect, be unable to purchase those materials.
Even if appropriate materials can be obtained, teachers may be forbidden to use
them. In addition, districts often refuse to provide funds for teachers to
attend Direct Instruction conferences and training sessions, preferring to send
them to receive information about the most current fads.
School
Districts. The fact that effective teaching methods are available does not
mean that they will be adopted. According to Alan Cohen (personal communication,
1992), "We know how to teach kids, what we don't know is how to get the public
schools to do it!" Because there are no incentives for adopting effective
methods or penalties for adopting ineffective ones, the choice of instructional
programs will be made based on other factors. One factor that determines whether
a particular method will be adopted is how greatly it differs from existing
practices. The best candidates for adoption are those most similar to ongoing
practices, because they are least disruptive. Stallings and Kaskowitz (1974)
described the behavior of teachers in Direct Instruction Follow Through
classrooms as "quite atypical of generally practiced classroom behavior" (p.
220). This atypicality decreases the probability of adoption, because the model
requires so much change.
Financial incentives may also influence adoption decisions. While
funding may provide the inducement to adopt an innovation, monitoring is needed
to ensure its continued implementation. One way that Follow Through differed
from other federally funded programs was that in exchange for funding,
particular instructional practices were specified and monitored. This system of
supervision resulted in a higher degree of fidelity of implementation of the
model than might otherwise be expected. However, schools are generally not
organized to provide the level of supervision that Follow Through model sponsors
found necessary to ensure fidelity of implementation.
Publishers.
Much, perhaps most, of what a teacher does is determined by the materials he
or she uses. Yet, those who develop instructional materials typically do not
have the skills required to develop effective materials. Few educational
programs are based on state-of-the-art programming principles. Worse yet,
materials are not field tested to ensure their effectiveness with children. The
publishing industry does not initiate the development of instructional
materials, but instead reacts to the demands of the educational marketplace.
California provides a good illustration of this dependent relationship. In
California the state adopts an instructional framework. Criteria for
instructional materials are then derived from the framework. Publishers are
provided these criteria and busily get to work developing instructional
materials that conform to them. They submit their materials during the textbook
adoption process and panels evaluate the extent to which the materials
correspond to the specified criteria. Noticeably absent from these criteria is
any mention of measured effectiveness. Given this process, a program could meet
every single criterion and be recommended for adoption, and not be effective in
teaching a single child! But field tests are expensive, and the prevailing
contingencies provide absolutely no incentive for publishers to conduct them in
order to provide learner verification data, because such data are not considered
in textbook selection and adoption. (See "Why I Sued California," Engelmann, ADI
News, Winter 1991.)
The Public. Although the public is not
typically considered part of the educational establishment, it can be included
in this discussion because it supports education. What the public has supported
is a system which has continued to neglect effective methods of instruction. Of
course, the public's support has been innocent because it is generally unaware
of instructional options and their differential effectiveness. Parents and
others have been led to accept that the failure of a great many students to
learn is due to deficits in the children. The general public has no way of
knowing that children's achievements are largely a function of how they are
taught. However, this may be changing.
Toward the Future
The American public's dissatisfaction with public education is becoming
increasingly clear. The failures of public education have been well publicized.
Endless studies and reports call attention to important factors such as
improving curricula, increasing teacher salaries, expanding the length of the
school day and/or year, and a variety of other changes. Although some of these
changes may be necessary, they will not be sufficient to produce the substantial
academic improvement that is possible. The critical factor that has been
historically ignored is instructional method. Our educational problems will not
be solved until it is recognized that how well students learn is directly
related to how well they are taught.
Is there any evidence that research
is beginning to influence educational policy and practice? Recent events in
California may point to progress in that direction. The Report of the California
Reading Task Force (1995) stresses effective teaching and recommends that every
school and district implement a "reading program that is research based" (p. 3).
In February of this year, Assembly Bill 3075 (1996) was introduced in the
California State legislature. This bill would amend the minimum requirements for
a teaching credential to include satisfactory completion of comprehensive
reading instruction "that is research-based" and includes "the study of direct,
systematic, explicit phonics." In September of 1995, Governor Wilson signed
Assembly Bill 170, referred to as the ABC bill, requiring the State Board of
Education to "ensure that the basic instructional materials that it adopts for
mathematics and reading...are based on the fundamental skills required by these
subjects, including, but not limited to systematic, explicit phonics, spelling,
and basic computational skills." It is possible that these developments offer
hope for the future. I will close with the words of Leonardo da
Vinci: "Tell me if anything ever was done."
References
Becker, W. C. & Carnine, D. (1981). Direct instruction: A behavior theory
model for comprehensive educational intervention with the disadvantaged. In S.
W. Bijou & R. Ruiz, (Eds.), Behavior modification: Contributions to
education (pp. 145-210). Hillsdale, NJ: Erlbaum.
Bijou, S. W. (1977).
Practical implications of an interactional model of child development.
Exceptional Children, 14, 6-14.
Bushell, D. (1978). An engineering
approach to the elementary classroom: The Behavior Analysis Follow-Through
project. In A. C. Catania, & T. A. Brigham (Eds.), Handbook of Applied
Behavior Analysis ( pp. 525-563). New York: Irvington
Publishers.
Carnine, D. W. (1983). Government discrimination against
effective educational practices. Proceedings of the Subcommittee on Human
Resources Hearing on Follow Through Amendments of 1983, 99-103. Washington, DC:
U.S. Government Printing Office.
Carnine, D. W. (1984). The federal
commitment to excellence: Do as I say, not as I do. Educational
Leadership, 4, 87-88.
Engelmann, S. (1991). Why I sued California.
ADI News, 10, 4-8.
Engelmann, S. & Carnine, D. (1982).
Theory of instruction: Principles and applications. New York: Irvington
Publishers.
Every child a reader: The report of the California reading
task force. (1995). Sacramento: California Department of
Education.
Proper, E. C., & St. Pierre, R. G. (1980). A search for
potential new Follow Through approaches: Executive summary. Cambridge,
Mass.: Abt Associates. (ERIC Document Reproduction Service No. ED 187
809)
Stallings, J. A., & Kaskowitz, D. H. (1974). Follow Through
classroom observation evaluation (1972-1973). Menlo Park, CA: Stanford
Research Institute.
Stebbins, L. B., St. Pierre, R. G., Proper, E. C.,
Anderson, R. B., & Cerva, T. R. (1977). Education as experimentation: A
planned variation model (Volume IV-A: An evaluation of Follow Through).
Cambridge, Mass.: Abt Associates.
U. S. Congress, Senate. (1974, May
29). Hearings before a subcommittee of the Committee on Appropriations.