Task Force on Empirically Supported Treatments (Section on Clinical Psychology of the Canadian Psychological Association)

Responses to the Task Force's Reports
Thirty years ago Paul (1967) recommended that the field of
psychotherapy research focus its efforts on determining which treatments work for which
patients under what conditions. The Division 12 initiative to define a list of empirically
supported treatments is an effort to address Paul's challenge. A review of the literature that has emerged in response to the Task Force initiative reveals general
agreement with the need to identify effective treatments for specific conditions. However,
concerns have been raised with several aspects of the initiative. These can be categorized
as concerns with the terminology used and the process undertaken by the Task Force,
concerns based on the methodology used in most psychotherapy research, and concerns
stemming from the potential impact of compiling such a list. (For a detailed response by the first chair of the Task Force to many of the issues discussed below, see Chambless, 1996a, 1996b.)
Concerns with Terminology and Process
A number of commentators have taken issue with the term
"validated" treatment, often pointing out that validated implies a greater
degree of precision and authority than is supported by current research (e.g., Garfield,
1996, 1998). As stated in the 1996 Task Force report, the committee members acknowledged
the legitimacy of this criticism and indicated that the term "empirically
supported" is preferable. However, there is more to this than simple semantics, for
even among supporters of the initiative, there is ambivalence about the use of the term
validated. Empirical validation, like science in general, is an ongoing process;
validation, therefore, is never complete. Even if one accepts this stance, there may be
problems in the actual determination of whether a treatment meets the requirements for
designation as validated or supported. For example, Garfield (1996) noted that the Task
Force was inconsistent in the application of its own criteria. Specifically, several of
the studies cited as supporting the validity of certain treatments employed very small
samples from which definitive conclusions could not be drawn. Others included statements by
the investigators that reflected a need for caution in accepting the results as
definitive.
A related concern was expressed by Wampold (1997) regarding disagreements within the scientific community about the current status of psychotherapy research. He noted that the strategy of the Task Force was to
start with an empty set of empirically supported treatments to which were added those
treatments that met the established criteria. He pointed out that this strategy may be
inconsistent with the state of psychotherapy research, which often indicates that most bona
fide treatments are equally efficacious, with differences between treatments often
tending to be small and limited to one or two outcome measures. His proposition was that
one should begin with the assumption that all bona fide treatments belong on the list and
that treatments can then be removed once they have been found to be inferior to others on
a preponderance of measures. Although appealing, such a strategy overlooks the possibility
that the reason many treatment studies find few differences among treatments is that they
lack the statistical power to detect any such differences (Kazdin & Bass, 1989).
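Kazdin and Bass's power argument can be illustrated with a brief calculation. The sketch below is hypothetical: it uses Python's statsmodels library, and the effect size and sample sizes are illustrative values chosen for the example, not figures drawn from any particular study.

    # Illustrative power calculation for a two-arm comparative psychotherapy
    # trial: a small true difference between two bona fide treatments
    # (Cohen's d = 0.3), 30 clients per condition, two-tailed alpha of .05.
    # (All values are hypothetical, chosen only to illustrate the argument.)
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Power of a typical small comparative trial: roughly 0.2, far below
    # the conventional 0.8 benchmark.
    power = analysis.solve_power(effect_size=0.3, nobs1=30, alpha=0.05)
    print(f"Power with 30 clients per group: {power:.2f}")

    # Clients per group needed to reach 80% power for the same small
    # difference: roughly 175 per condition.
    n_needed = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05)
    print(f"Clients per group for 80% power: {n_needed:.0f}")

Under these assumptions, a study of conventional size has only about a one-in-five chance of detecting a small but real difference between two effective treatments, which is why the absence of observed differences cannot be taken as evidence of equivalence.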
The Task Force has also been criticized for engaging in
limited consultation prior to publication of its report (Garfield, 1996). If a listing of
treatments is to be useful in routine practice, then some attention should be paid to the
nature of services actually delivered in typical service settings. This would ensure that
all forms of clinical intervention had an equal opportunity to be represented on the list
before any list of treatments was disseminated. To illustrate this point, several
interventions with supporting empirical evidence that are used in health psychology and/or
rehabilitation settings were not listed until the third version of the list was published.
Although no listing will ever be complete, the absence of an efficacious intervention from the list could have negative consequences.
Concerns with Methodology Used in Psychotherapy Research
Some psychologists have noted that the conclusions of the
Division 12 Task Force are limited by the methodological considerations involved in most
psychotherapy research. For example, Wampold (1997) and Garfield (1998) argued that
efficacy research has primarily focused on detecting differences between various treatment
approaches and has attempted to homogenize treatments and clinicians in a way that may
have diluted the most important ingredients of successful intervention. In other words,
the emphasis on evaluating treatment strategies has led to a neglect of the contribution
of the qualities of the clinician, the nature of the therapeutic relationship, and the
intricacies of clinical judgment necessary to determine how best to respond to clients' concerns.
In current treatment research, the requirement that treatments be delivered in a standardized manner has necessitated the use of treatment manuals.
The use of a manual is a key component of the current criteria for designating a treatment as empirically supported. A number of criticisms have been leveled at the use of treatment manuals, including that they focus the researcher's attention on
clinician adherence rather than clinician competence (Wampold, 1997) and that they can
only present general principles of a treatment approach, making them potentially
unsuitable as tools to guide the delivery of treatment (Havik & VandenBos, 1996;
Levant, 1995; Smith, 1995; Strupp & Anderson, 1997). In response, supporters of the
use of treatment manuals argue that they facilitate dissemination of a treatment (Dobson
& Shaw, 1988) and that they provide optimal strategies for interventions in routine
practice (Wilson, 1996, 1997). Furthermore, the depiction of treatment manuals as
requiring strict adherence to a set of techniques that must be followed in a lock-step
fashion is clearly at odds with explicit statements in many manuals that flexibility in
the application of the procedures is essential to the success of the treatment (e.g.,
Beck, Rush, Shaw, & Emery, 1979). As suggested by Addis (1997), the skill in the use
of a manual lies somewhere between the poles of total, inflexible adherence and sole
reliance on clinical judgment. However, even proponents of the use of treatment manuals
acknowledge that an important gap in the literature is the limited data on the
effectiveness of treatment delivered by clinicians using manuals under usual clinical
conditions. If treatment manuals are to be used to guide intervention in clinical
settings, it is essential that data be gathered on this topic.
Relatedly, a key recommendation of the Task Force was that
training programs prepare students in at least two of the empirically supported
treatments. Although some suggestions are emerging (Calhoun, Moras, Pilkonis, & Rehm,
1998), there is not as yet a widely accepted definition of competence that could be used
to ensure that students receive adequate training. Moreover, with some exceptions (e.g.,
Shaw & Dobson, 1988), we have not been able to develop easily applied measures of core
competence for most treatments. If the expectation is that clinical programs should engage
in such training using treatment manuals as a resource, then teaching and supervision manuals may also be needed.
Finally, there are also concerns about the use of efficacy
studies to guide the delivery of clinical services. Many critics have suggested that
efficacy studies cannot be used to "validate" psychotherapy as it is conducted
in the field (e.g., Goldfried & Wolfe, 1998; Seligman, 1995; Shapiro, 1996). The
nature of most clinical treatment research requires, among other elements, the screening
of clients/patients for suitability, random assignment to treatment condition, and
intensive training and monitoring of clinicians providing the intervention. As in all
research, efforts to enhance the internal validity of a study result in lowered external
validity. Accordingly, the extent to which the results of most treatment research
generalize to routine practice is unclear.
The potential problems associated with developing service
delivery systems based on psychotherapy research have been discussed for many years (e.g.,
Kazdin, Kratochwill, & VandenBos, 1986; Parloff, 1979). If nothing else, the Division
12 initiative has caused this issue to resurface and to be taken much more seriously by
researchers and practitioners alike. There is now much more appreciation of the critical
need for effectiveness research that explicitly examines the impact of treatment in
routine clinical settings. However, the issue of the generalizability of therapy research
cuts both ways. If one contends that most psychotherapy research is not directly
generalizable to routine practice, then, logically, one must also refrain from citing the
findings of large-scale meta-analyses of this literature (e.g., Smith, Glass, &
Miller, 1980) to support the position that psychotherapy is effective in treating a wide
range of disorders and psychological problems.
Concerns with the Impact of the Report
The reaction of many practitioners to the Task Force initiative has been highly charged, to say the least. For example, one prominent opponent
has described the initiative as irresponsible and has characterized the list of
empirically supported treatments as a clear example of blacklisting (Silverman, 1996).
Such reactions appear to be due to several inter-related concerns that stem from the
tension between science and practice, and from economic factors related to current service
delivery. The first is that, as discussed above, evidence from efficacy trials is given
precedence over the clinical experience of licensed professionals. A second concern is
that potential clients and third-party payers may misinterpret the Task Force list as
indicating that only these treatments are effective. Despite disclaimers by the Task
Force, there is significant worry that any treatment not on the list will be viewed as
ineffective, rather than merely untested (Sleek, 1997).
A third concern is that the list consists of predominantly
short-term, cognitive-behavioral interventions. As a result, clinicians offering long-term
treatments (i.e., more than 20 sessions) may not be able to defend their practice to
companies providing reimbursement for psychological services. The type of research
demanded for designation as empirically supported (e.g., use of treatment manuals, two or
more treatment studies) may not be easily accomplished for forms of long-term therapy, as
the research would be both organizationally demanding and very expensive. In the context
of granting agencies that give preference to funding research on short-term treatments, it
may be exceedingly difficult to develop the database necessary to evaluate long-term
treatments (cf. Task Force on Promotion and Dissemination of Psychological Procedures,
1995). This is seen by some as serving to undermine the legitimacy of psychodynamic and
psychoanalytic interventions and privileging cognitive-behavioral approaches.
A fourth issue is that there are currently major gaps in
the availability of empirically supported treatments for a number of psychological
disorders. As a primary example, the list of empirically supported treatments is almost
entirely devoid of treatments for personality disorders. One may conclude that this
reflects the absence of effective treatments for such disorders; however, it may be
equally accurate to explain the limited representation of treatments for personality
disorders on the basis of the criteria developed by the Task Force and the aforementioned
difficulties in conducting research on longer-term interventions.
All of these concerns exacerbate the fears of American
practitioners who are coming to terms with the monumental changes in the delivery of
psychological services brought about by the growth of managed health care systems. Over
the past two decades, the dominant form of service delivery in the United States has
become one in which mental health services are managed by third-party payers (e.g.,
insurance companies, government agencies, employers). Because cost containment is a cornerstone of managed health care, managed care organizations are attracted to cost-efficient
interventions with demonstrable outcomes (Mash & Hunsley, 1993). Unfortunately, all
too often the focus of these organizations has been on reducing costs by limiting access
to services. In such an environment, many practitioners are concerned that any evidence
that could be used to reduce payment for services will be used in a forceful and imprecise
manner: specifically, they are worried that the disclaimers offered by the Task Force will
be ignored in the rush of managed care organizations to disallow reimbursement for any
intervention not designated as empirically supported. There continue to be concerns about
the inappropriate use of empirical evidence by managed care entities (Sleek, 1997);
however, based on information provided to us by two prominent American psychologists who
are actively involved in the process of practice guideline development, no managed care
entity has used the APA Task Force listing to certify or deny payments for psychotherapy
services (Steven Hayes, personal communication, December 12, 1997; Kirk Strosahl, personal
communication, January 5, 1998).