1. Introduction
The evaluation of psychotherapeutic interventions in the mental health field is complex. Particularly challenging is the evaluation of fidelity, the process of determining whether the intervention under study was implemented as intended. When evaluating fidelity, two elements must be considered: adherence, which refers to the degree to which an intervention protocol is followed [1,2,3,4,5], and competence, which refers to how proficiently the intervention is performed [6].
The development of a reliable fidelity rating tool for psychotherapeutic interventions is particularly difficult when developers attempt to reduce a complex intervention to a simple scale [7], perhaps explaining why few fidelity assessment tools exist in the literature. One such tool is the Sharing the Patient’s Illness Representations to Increase Trust (SPIRIT) Intervention Fidelity Assessment Tool [8], used to evaluate fidelity in the delivery of a psycho-educational program for patients with end-stage renal disease and their surrogate decision makers. Although that study reported acceptable inter-rater reliability, only one interventionist was evaluated in the trial. Another psychometric fidelity assessment tool, developed by the RAND Corporation [6], was used to assess the fidelity of the Building Recovery by Improving Goals, Habits, and Thoughts (BRIGHT-2) intervention, in a study of five addiction counselors trained to deliver group cognitive behavioural therapy for depression.
Creating a fidelity scale that captures the complexity of a single-provider model, as previously described in the literature, is challenging. The challenge is compounded when a program-based intervention comprises many parts, is often delivered by multiple practitioners [9], and is disseminated by training practitioners in diverse professional settings. The effectiveness of a psychotherapeutic intervention relies in large part on the non-specific factors of psychotherapy [10] and on the dynamic and individualized nature of the therapeutic relationship [8], both of which are difficult to capture and measure in a fidelity scale. Further, measurement of fidelity may be confounded by variables such as client characteristics and severity of symptoms [11]. Once a scale is developed, verifying its reliability and training raters to use it are resource intensive [8]. More importantly, even a validated tool may not produce useful information. For instance, data collected using the tool may indicate that fidelity to the method is poor, leading to the conclusion that the person delivering the intervention lacks proficiency; yet a highly skilled practitioner may appropriately stray from the model, compromising fidelity in order to meet the specific needs of the individual client and to maintain the therapeutic relationship [8,11].
Despite the challenges posed by fidelity measurement, it remains a necessary part of the delivery and evaluation of psychotherapeutic interventions. Knowing whether an intervention is delivered proficiently and as intended is necessary to determine treatment efficacy (or lack thereof), the exact mechanism of any changes it produces [1,11], and whether the intervention’s success or failure is attributable to the method or to its delivery [8,12,13]. The evaluation of a complex psychotherapeutic intervention that does not consider fidelity risks measuring the effects of the program as delivered rather than as designed, and can lead to erroneous conclusions regarding the efficacy of the intervention [9,14,15].
The Reitman Centre CARERS (Coaching, Advocacy, Respite, Education, Relationship, Simulation) Program (RCCP) is a group psychotherapeutic intervention developed for carers looking after family members with dementia, with demonstrated efficacy in improving caregiving competence, coping capacity and mental well-being [16]. It combines therapeutic principles with a targeted approach to education and skills training, and formal Problem Solving Techniques (PST) adapted to the needs of family carers [16]. Four to six family carers participate in the 10-week group intervention at a time. The RCCP is delivered by a specially trained group leader and includes a novel therapeutic use of simulation, a validated experiential learning tool used in the education of health and other professionals. A specially trained standardized patient is also present to interact with carers, giving them the opportunity to practice new skills (for example, carrying out difficult conversations with the care recipient) while coached by the group leader. Upon completion of the 10-week intervention, participants are asked to complete an anonymous satisfaction survey of 20 attitudinal statements relating to the key components of the program: tailored dementia and carer education (5 statements), problem solving therapy (3 statements), therapeutic simulation (4 statements), group structure (3 statements), and general satisfaction (5 statements). Participants respond to each statement on a 5-point Likert scale (strongly agree, agree, neither agree nor disagree, disagree, strongly disagree).
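Analyzing such a survey requires coding the Likert responses numerically so that average satisfaction scores per statement or program component can be computed. The sketch below is a hypothetical illustration; the study does not specify its actual coding scheme.

```python
# Hypothetical numeric coding for the 5-point Likert scale
# (5 = strongly agree ... 1 = strongly disagree).
LIKERT = {
    "strongly agree": 5,
    "agree": 4,
    "neither agree nor disagree": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def mean_satisfaction(responses):
    """Average numeric score for a list of Likert responses to one statement."""
    scores = [LIKERT[r.strip().lower()] for r in responses]
    return sum(scores) / len(scores)
```

A statement's average over, say, ["strongly agree", "agree", "disagree"] would be (5 + 4 + 2) / 3.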
As the clinical efficacy of the RCCP became known, demand grew among health professionals for training in the RCCP methods, leading to the creation of the Reitman Centre CARERS Training Program for Professionals. Health care professionals from community and major dementia care partners have requested and received the formal training necessary to become RCCP group leaders. Initially delivered by mental health clinicians in the hospital setting, the RCCP has been adapted for broader dissemination in different geographical and cultural settings, locally, nationally and internationally.
In 2017, the Ministry of Health of Ontario, Canada, approved and funded a province-wide initiative to address the needs of family care partners providing care at home to people with dementia in the community. This program, called the Enhancing Care for Ontario Care Partners Program (EC program), scaled CARERS to address the needs of both urban and rural family care partners in Canada’s largest province, home to about 16 million people (2024). The lead organization, Sinai Health, an academic health sciences centre in Toronto, established and maintains a formal province-wide network of Alzheimer Society and hospital partner agencies to implement CARERS in all the health regions of the province. Mental health practitioners at each of 12 sites were trained to deliver the program. A training program located at Sinai Health is an embedded, funded component of the EC program, designed to maintain the network of practitioners by training new practitioners in response to staff changes and attrition. Maintaining fidelity to the evidence-based, effective model (Chiu et al., 2013; Sadavoy et al., 2020, 2021) is an ongoing key component of both the program and the training program.
Dissemination of the RCCP, and ongoing examination of its effectiveness and of the methods used to train health professionals in its delivery, require a dependable method of assessing implementation. As no standardized fidelity rating tool existed, the RCCP Fidelity Assessment Tool (RCCP-FAT) was developed to monitor and evaluate not only adherence to the RCCP methods and principles but also the competence of the professionals trained to deliver it. This paper 1) describes the development of the RCCP-FAT; 2) presents exploratory data on its usability and inter-rater reliability; and 3) discusses the value of, and challenges inherent in, creating a tool to measure a complex group psychotherapy intervention.
2. Methods
2.1. Design and Development of the RCCP-FAT
The development of an assessment tool employs multiple research methodologies and designs [6] and two major phases: 1) tool design and development; and 2) tool evaluation. In phase 1, existing validated tools [6,19,20,21] were identified from the literature and adapted by a panel of Reitman Centre mental health clinicians and external clinical experts. An iterative process was employed in which the panel convened four times to work collectively on item writing and item checking. The process also included considerable deliberation on the difference between adherence and competence and on how best to capture each in one tool. External expert review ensured the face validity of the tool, that is, that its rating criteria properly represent the group psychotherapy constructs they are intended to measure.
The final version of the RCCP-FAT includes a preamble that describes the purpose of the tool, each of its components, and instructions for use. The tool assesses trained group leaders’ fidelity in the following seven components of the RCCP: Group Structure, Dementia Education, Problem Solving Techniques, Therapeutic Coaching of Simulation, Vertical Group Cohesion, Horizontal Group Cohesion, and Global Rating of Fidelity. Each component has 3 to 8 items to which fidelity scores are assigned. Each item is defined, and detailed descriptors indicate what constitutes a specific score. Scores on each item range from 1 to 5: 1 “Unsatisfactory”, 2 “Needs Improvement”, 3 “Satisfactory”, 4 “Very Good” and 5 “Excellent”. A global score is also assigned for each of the seven components, allowing raters to evaluate the overall competence of the group leader and how well a specific component was delivered. The contents of the RCCP-FAT and the theoretical foundation of the seven components of the RCCP model are summarized in Table 1. An example of the scoring descriptors and criteria for one RCCP component, Problem Solving Techniques, may be found in Figure 1.
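The item-plus-global scoring structure described above can be sketched as a simple data model. This is an illustrative abstraction only (the RCCP-FAT itself is a rating instrument, not software), and the item names below are hypothetical.

```python
from dataclasses import dataclass
from statistics import mean

# Score labels as defined in the RCCP-FAT.
LABELS = {1: "Unsatisfactory", 2: "Needs Improvement",
          3: "Satisfactory", 4: "Very Good", 5: "Excellent"}

@dataclass
class ComponentRating:
    """One RCCP-FAT component: per-item scores (1-5) plus a global score."""
    name: str
    item_scores: dict   # item name -> score, 1..5
    global_score: int   # rater's overall judgment for the component

    def item_average(self) -> float:
        """Average of the itemized scores, for comparison with global_score."""
        return mean(self.item_scores.values())
```

With hypothetical items, `ComponentRating("Problem Solving Techniques", {"defines problem": 4, "generates solutions": 5}, 5).item_average()` yields 4.5, the quantity later correlated against the global score.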
2.2. Development of Rater Training Materials
Two sets of video clips of RCCP sessions were produced to train raters in the use of the RCCP-FAT. Clinically-based, semi-scripted scenarios written by a media and simulation expert (LJN) and vetted by clinical experts (JS, VW) were used in the production of the demonstration clips. Simulated scenarios allow for standardization of scoring and interpretation of item descriptors on the RCCP-FAT, and are practical teaching tools.
Both sets of demonstration clips showed a group leader delivering a simulated RCCP group. Simulated patients with experience in RCCP methods were trained to portray group participants. Attention was paid to ensuring that the clips showed participants enacting a variety of behaviours and affects that approximate an authentic group experience and effectively demonstrate the items on the RCCP-FAT. The first set of clips provided raters with examples of experienced RCCP group leaders demonstrating the methods and techniques of the intervention as intended. These clips were used to train raters in the use of the tool and to ensure their understanding of item descriptors and their application. The second set of clips showed the group leader demonstrating the RCCP methods with different degrees of proficiency, and allowed learners to understand the nuances that would distinguish an “Excellent” rating from a “Satisfactory” rating, for example, in a specific item. All demonstration video clips were assessed and rated by an expert panel (JS, VW, LJN) using the RCCP-FAT prior to use in training, to ensure usability and to provide baseline scores for comparison with trained raters’ scoring.
2.3. Recruitment and Training of Volunteer Raters, and Research Ethics Clearance
Fifteen volunteers with relevant backgrounds (experience in education, in the practice of psychotherapy, and/or in facilitating group interventions) were recruited from various postgraduate programs in Toronto, Ontario, and trained (Figure 2). Raters were first oriented to the foundational principles and methods of the RCCP by completing a self-directed e-learning program. They then participated in 8 hours of training, which included the following:
- Discussion and questions regarding the RCCP and the self-directed e-learning modules;
- Systematic review of the components of the RCCP-FAT;
- Practice using the RCCP-FAT with standardized video clips;
- Discussion of scored items, item consensus, and scoring challenges;
- Further practice using the RCCP-FAT with standardized video clips; and
- Discussion of scored items, item consensus (or lack thereof), and rationale for scoring differences.
This research was conducted in accordance with the Declaration of Helsinki (1964) and was approved by the Sinai Health System Research Ethics Board. All trained volunteer fidelity raters, RCCP group leaders, and carers participating in the RCCP provided informed consent prior to engaging in the study.
2.4. Data Collection
Group leaders from the Reitman Centre experienced in the delivery of the RCCP were notified of the study and informed that two trained volunteer raters would observe 3 sessions of an RCCP group they led. Observation of these specific sessions allowed the key components of the RCCP, as described in Table 1, to be assessed. Twelve cycles of RCCP groups, facilitated by different group leaders, were observed and assessed by paired raters using the RCCP-FAT. As paired raters assessed 3 sessions in each of the 12 cycles, a total of 36 sessions were assessed. This number has been reported in the literature on group intervention competence and adherence measurement as the number needed to compute inter-rater reliability for fidelity assessment tools [21]. Given the time-intensive nature of the study, group leaders were rated by different pairs of raters over time [22]. Written user feedback on the RCCP-FAT provided by the volunteer raters was also reviewed.
2.5. Data Analysis
Rater Agreement on the RCCP-FAT
For the purpose of data analysis, raters were randomly assigned to two groups, ‘Rater Group A’ and ‘Rater Group B’. Raters within each group were then randomly paired for each RCCP session observed, to calculate the inter-rater reliability of the RCCP-FAT. Inter-rater reliability was first examined as the proportion of observed agreement between raters for each tool item (p_o = number of observed agreements ÷ total number of observations). Since some agreement can be expected by chance alone, the weighted kappa statistic was also examined as a measure of inter-rater reliability, with κ values between 0.01 and 0.20 indicating slight agreement, 0.21 to 0.40 indicating fair agreement, and values above 0.41 considered moderate to substantial agreement [23].
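To make the two statistics concrete, a minimal pure-Python sketch of observed agreement and a linearly weighted kappa follows. Linear weighting is one common choice for ordinal scales; the study does not specify its weighting scheme, so treat this as illustrative.

```python
from collections import Counter

def observed_agreement(a, b):
    """Proportion of items on which two raters gave identical scores."""
    assert len(a) == len(b) and a
    return sum(x == y for x, y in zip(a, b)) / len(a)

def weighted_kappa(a, b, categories=(1, 2, 3, 4, 5)):
    """Linearly weighted Cohen's kappa for two raters' ordinal scores.

    Disagreements are penalized proportionally to their distance on the
    1-5 scale. Raises ZeroDivisionError if both raters are constant
    (expected disagreement is zero).
    """
    n, k = len(a), len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # Distance-based disagreement weights: 0 on the diagonal, 1 at the extremes.
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    # Observed joint distribution of score pairs.
    obs = [[0.0] * k for _ in range(k)]
    for x, y in zip(a, b):
        obs[idx[x]][idx[y]] += 1 / n
    # Expected joint distribution under chance (product of marginals).
    pa, pb = Counter(idx[x] for x in a), Counter(idx[y] for y in b)
    exp = [[(pa[i] / n) * (pb[j] / n) for j in range(k)] for i in range(k)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * exp[i][j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp
```

Perfect agreement gives κ = 1, and agreement no better than chance gives κ ≈ 0, matching the interpretation thresholds cited above.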
Correlation Between Itemized and Global Scores for Each Fidelity Assessment Component
As previously described, the RCCP-FAT evaluates seven components of the RCCP. Raters also gave a global score for each of the seven components measured. The global scores are intended to allow raters to evaluate the overall competence of the group leader and how well a specific component was delivered. To investigate the utility of global scoring, a correlational analysis was conducted between the average fidelity scores of all items in each component and the global score for the respective component.
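This analysis amounts to a Pearson correlation between per-component item averages and global scores. The sketch below uses made-up ratings purely for illustration.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical ratings of one component across sessions:
# each entry is (itemized scores, rater's global score).
sessions = [([4, 5, 4], 4), ([3, 3, 4], 3), ([5, 5, 5], 5), ([2, 3, 3], 3)]
item_averages = [mean(items) for items, _ in sessions]
global_scores = [g for _, g in sessions]
r = pearson_r(item_averages, global_scores)  # high r -> global score tracks items
```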
Correlation Between Fidelity Ratings and RCCP Participants’ Satisfaction
The participant satisfaction survey data were collated and analyzed to determine whether carers’ perception of RCCP’s impact correlated with the overall treatment fidelity scores of trained group leaders as assessed by the RCCP-FAT. Correlation coefficients were calculated for the global scores of four sections in the RCCP-FAT – Dementia Education, Problem Solving Therapy, Therapeutic Simulation (the three main methods used in the RCCP), and Overall Global Score – and the average carers’ satisfaction survey scores related to these program components. For example, the global score given by raters for the “Dementia Education” section in the RCCP-FAT was correlated with the average satisfaction score of the Dementia Education-related statement on the satisfaction survey completed by carers: “The program improved my understanding of the behavioural symptoms associated with dementia.”
3. Results
Descriptive statistics of observations and ratings using the RCCP-FAT were calculated. In all, 11 trained raters participated in the study and rated 12 RCCP group cycles. In each group cycle, 3 of the 10 sessions of the RCCP group were observed and rated, giving a total of 36 observed sessions. There were 1188 possible items to be rated by each of the two rater groups over the 36 observed sessions. Rater Group A and Rater Group B rated 67.7% and 65.5% of all possible items, respectively. Some items were not rated in each observed session because not all sections of the tool pertained to every group session; for example, items under Problem Solving Therapy and Therapeutic Simulation were not rated during Session 1 of the RCCP, when these methods are not employed. The mean scores given by Rater Groups A and B were 4.12 and 4.09 respectively, on a scale of 1 to 5. A frequency count revealed that both Rater Groups awarded a high cumulative percentage of scores of 3, 4 and 5 (97.1% and 96.6% for Rater Groups A and B respectively), with a score of 5 being awarded notably frequently (38.1% and 40.0% for Rater Groups A and B respectively).
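The frequency count reported above reduces to a simple tabulation of score percentages; a sketch of that calculation (with invented scores, not the study data):

```python
from collections import Counter

def score_distribution(scores):
    """Percentage frequency of fidelity scores 1..5, plus the cumulative
    percentage of scores 3-5 (the statistic reported in the Results)."""
    counts = Counter(scores)
    n = len(scores)
    pct = {s: 100 * counts[s] / n for s in range(1, 6)}
    cumulative_3_to_5 = pct[3] + pct[4] + pct[5]
    return pct, cumulative_3_to_5
```

For example, `score_distribution([5, 5, 4, 3, 2])` reports 40% fives and an 80% cumulative share for scores 3 to 5.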
In terms of rater agreement, there was 54.3% agreement on the overall scoring of the RCCP groups, with a weighted kappa of 0.32 (95% CI, 0.2681–0.3729), indicating fair agreement. The kappa statistics by RCCP-FAT component (Table 2) revealed fair to moderate agreement for all components except Horizontal Cohesion, which showed only slight agreement.
The correlation between the average fidelity scores of all items in each of the seven sections of the RCCP-FAT and the global score of the corresponding section was calculated and is shown in Table 3. There was a positive, statistically significant correlation for all components evaluated (R between 0.833 and 0.929; p < 0.01).
The correlations between the global scores of four sections of the RCCP-FAT (Dementia Education, Problem Solving Therapy, Therapeutic Simulation, and Overall Global Score) and the average carers’ satisfaction survey scores related to these program components are also reported in Table 3. The analysis indicated a significant, positive correlation between the global simulation score and carers’ satisfaction with simulation (r = 0.626; p < 0.01).
4. Discussion
The RCCP-FAT was developed to monitor and evaluate the intervention integrity of the RCCP, a multi-method intervention for informal dementia carers. The process of developing the tool systematically crystallized the clinical elements of the RCCP and helped to standardize training methods by creating a framework for providing feedback to learners that matches the items on the RCCP-FAT. It also helped clarify the clinical methods of the RCCP and highlighted its complexity.
The need for a standardized approach to training methods, including a reliable method of measuring treatment fidelity, was obvious early in the RCCP training process. From our training experience, a lack of consistency in therapeutic skills was noted among different groups of learners, who were mental health clinicians with a variety of backgrounds and experience. Learners whose current practice did not include therapeutics could have difficulty integrating new methods therapeutically. The development of the RCCP-FAT had an important impact on the standardization of training methods for incoming RCCP group leaders, in that training objectives, goals and learning activities are now closely linked to RCCP-FAT components and items. Specifically, the RCCP-FAT enhanced the development and inclusion of RCCP group leader training materials that could address a broad range of psychotherapeutic skills and experience in learners [13]. Assessment of therapeutic competency within the RCCP-FAT ensures that non-specific group therapy factors (warmth, genuineness, empathy and cohesion) are also systematically taught to new group leaders. Learners are observed during training and receive immediate expert feedback designed to match the items on the RCCP-FAT. This use of the RCCP-FAT to guide training and, later, the mentoring of new group leaders is essential to ensuring that the evidence-based methods of the RCCP are retained during its scaling and dissemination, as new group leaders are trained to deliver the RCCP nationally and internationally.
The development of the RCCP-FAT followed the principle that a psychometric fidelity scale should measure an “intangible collection of abstract concepts and principles” [26], such as warmth/genuineness, empathy and therapeutic alliance. Consistent with previous studies, user feedback from our fidelity raters indicated that assessing the adherence and competency of a psychotherapeutic intervention is especially challenging because of its dynamic and commonly individualized nature, involving both the clinician and the client [4,8,27]. Competency also includes the ability to adhere flexibly to a given intervention [24], a concept that is similarly difficult to capture and measure on a standardized tool. Indeed, as demonstrated in a study by Yeates et al. [31], it is common for more experienced group leaders to make normative rather than criterion-based judgments. In our study, the expectation of flexibility is built into the RCCP methods, in that each of the systematized treatment elements (dementia education, Problem Solving Therapy and therapeutic simulation) is designed to flexibly meet the specific learning needs of individual carers. Thus, a clinical or therapeutic decision to skip a certain element during a group session may be misinterpreted by the volunteer raters as a missed therapeutic opportunity, affecting the fidelity scoring.
Rating scales are used with the understanding that “mastery of the parts (i.e. discrete skills on a checklist) does not indicate competency of the whole” [28], and global ratings may capture a more accurate picture of expertise than binary checklists [29]. In addition, global ratings allow the assessment of integrated skills [30]. In this study, the global rating for each component was found to be highly correlated with the average scores of all items within the same component. Data analysis also indicated a significant, positive correlation between the global simulation score and carers’ satisfaction with simulation. This suggests that, for the simulation component of the RCCP, as fidelity to the methods increased, the satisfaction of the carers also improved, a common phenomenon observed in fidelity measurement [1].
In conclusion, the RCCP-FAT demonstrates the value of a systematic fidelity tool in informing psychotherapy training and best practice. It functions as a mentoring guide, has shifted the approach to the design of other educational materials, and has thereby informed all the health professional training activities delivered according to the Reitman Centre CARERS Program model. It has also improved the clinical integrity of the CARERS Program delivered at the Reitman Centre and its satellite sites, as it continues to provide a common language for clinical discussion of the RCCP methods and for training and mentoring other health professionals.
5. Limitations
The challenges in conducting a fidelity measurement study described in the literature were encountered. Specifically, the use of volunteer raters with inconsistent availability might have affected rater agreement, despite steps taken to ensure standardization in training and scoring. Rating was also limited to 3 sessions from each 10-week cycle. This was done primarily to keep the necessary commitment of the volunteer raters to a minimum while still allowing sufficient assessment of the main methods used in the RCCP. Rating all 10 sessions might have improved the reliability of the findings. Future studies with a larger sample size would allow for statistically sound conclusions.
Although detailed and specific descriptors and definitions were used to guide scoring of each component and the individual items within components, certain non-specific group therapy factors, such as warmth/genuineness, empathy and therapeutic alliance, remain difficult to measure because it is hard to decontextualize them from the therapeutic process.
Author Contributions
M Chiu, LJ Nelles, and A Lawson were responsible for study design, data collection and analysis. M Chiu and LJ Nelles wrote the introduction, methods, results and discussion sections of the manuscript. V Wesson wrote the introduction and discussion sections and has delivered the intervention to carers. J Sadavoy was the founding Reitman Centre program director, designed the intervention and evaluation framework, supervised the study design and critically reviewed the manuscript.
Funding
This work was supported by the Continuing Education Development Fund, Faculty of Medicine, University of Toronto, Toronto M5G 1V7 Canada, No. 8470775.
Acknowledgments
The authors would like to acknowledge Dr. Rhonda Feldman, Ms. Gita Lakhanpal and Ms. Sarah Gillespie at the Reitman Centre for their involvement in the development of the fidelity assessment tool; Drs. Molyn Leszcz, Paula Ravitz, and Robert Maunder for expert reading of the tool.
Ethics Approval
The study was approved by the Sinai Health System Research Ethics Board.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Conflicts of Interest
None to declare.
References
- Bellg AJ, Resnick B, Minicucci DS, Ogedegbe G, Ernst D, Borrelli B, et al. Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23(5):443-451. [CrossRef]
- Dumas JE, Lynch AM, Laughlin JE, Phillips SE, Prinz RJ. Promoting intervention fidelity. Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial. Am J Prev Med. 2001;20(1 Suppl.):38–47. [CrossRef]
- Perepletchikova F, Kazdin AE. Treatment integrity and therapeutic change: Issues and research recommendations. Clin Psychol. 2005;12:365-383. [CrossRef]
- Santacroce SJ, Maccarelli LM, Grey M. Intervention fidelity. Clin Nurs Res. 2004;53(1):63–66. [CrossRef]
- Stein KF, Sargent JT, Rafaels N. Intervention research: establishing fidelity of the independent variable in nursing clinical trials. Adv Nurs Res. 2007;56(1):54–62. [CrossRef]
- Hepner KA, Howard S, Paddock SM, Hunter SB, Osilla KC, Watkins KE. A fidelity coding guide for a group cognitive behavioral therapy for depression [Full Analysis on the Internet]. Santa Monica: RAND Corporation; 2011. 84 p. Available from: http://www.rand.org/content/dam/rand/pubs/technical_reports/2011/RAND_TR980.pdf [Accessed on 12th January, 2017].
- Lenaway D, Halverson P, Sotnikov S, Tilson H, Corso L, Millington W. Public Health Systems Research: Setting a National Agenda. Am J Public Health. 2006;96(3):410–3. [CrossRef]
- Song MK, Happ MB, Sandelowski M. Development of a tool to assess fidelity to a psycho-educational intervention. J Adv Nurs. 2010;66(3):673-682. [CrossRef]
- Teague GB, Mueser KT, Rapp CA. Advances in fidelity measurement for mental health services research. Psychiat Serv. 2012;63(8):765-771. [CrossRef]
- Ardito RB, Rabellino D. Therapeutic Alliance and Outcome of Psychotherapy: Historical Excursus, Measurements, and Prospects for Research. Front Psychol. 2011;2:270. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3198542/pdf/fpsyg-02-00270.pdf [Accessed 20th January 2017].
- Hogue A, Henderson CE, Dauber S, Barajas PC, Fried A, Liddle HA. Treatment adherence, competence, and outcome in individual and family therapy for adolescent behavior problems. Journal of Consulting and Clinical Psychology. 2008;76:544–555. [CrossRef]
- Sheridan SM, Swanger-Gagne M, Welch GW, Kwon K, Garbacz SA. Fidelity measurement in consultation: Psychometric issues and preliminary examination. School Psych Rev. 2009;38(4):476.
- Mars T, Ellard D, Carnes D, Homer K, Underwood M, Taylor SJC. Fidelity in complex behaviour change interventions: a standardized approach to evaluate intervention integrity. BMJ Open. 2013;11(3):1-7. [CrossRef]
- Schoenwald SK, Garland AF, Chapman JE, Frazier SL, Sheidow AJ, Southam-Gerow MA. Toward the effective and efficient measurement of Implementation fidelity. Adm Policy Ment Health. 2011;38(1):32-43. [CrossRef]
- Dobson D, Cook TJ. Avoiding type III error in program evaluation: Results from a field experiment. Eval Program Plann. 1980;3:269–276. [CrossRef]
- Margison FR, McGrath G, Barkham M, Clark JM, Audin K, Connell J, et al. Measurement and psychotherapy. Evidence-based practice and practice-based evidence. Br J Psychiatry. 2000;177(2):123-30. [CrossRef]
- Chiu M, Wesson V, Sadavoy, J. Improving caregiving competence, stress coping, and mental well-being in family carers of individuals with dementia: Piloting the Reitman Centre CARERS program. World J. Psychiatry. 2013;3:65-75. [CrossRef]
- Richey R, Klein J. Developmental research methods: Creating knowledge from instructional design and development practice. Journal of Computing in Higher Education. 2005;16(2):23-38. [CrossRef]
- Lu W, Yanos PT, Gottlieb JD, et al. Using fidelity assessments to train frontline clinicians in the delivery of cognitive-behavioral therapy for PTSD in persons with serious mental illness. Psychiatr Serv. 2012;63(8):785-792. [CrossRef]
- North Carolina Evidence Based Practices Centre. Family Psychoeducation Fidelity Scale. http://www.ncebpcenter.org/assets/FPE_Protocol.pdf. Updated 2002. [Accessed 12th August 2016].
- Hepner KA, Hunter SB, Paddock SM, Zhou A, Watkins KE. Training addiction counselors to implement CBT for depression. Adm Policy Ment Health. 2011;38(4):313-323. [CrossRef]
- Hallgren KA. Computing inter-rater reliability for observational data: An overview and tutorial. Tutor Quant Methods Psychol. 2012;8(1):23-34. [CrossRef]
- Viera AJ, Garrett JM. Understanding Interobserver Agreement: The Kappa Statistic. Fam Med 2005;37(5):360-363.
- Breitenstein SM, Gross D, Garvey C, Hill C, Fogg L, Resnick B. Implementation Fidelity in Community-Based Interventions. Res Nurs Health [Internet]. 2010 Apr;33(2):164-173. Available from: http://onlinelibrary.wiley.com/doi/10.1002/nur.20373/abstract [Accessed 22nd August 2016].
- Norcross JC, Wampold BE. Evidence-based therapy relationships: research conclusions and clinical practices. Psychotherapy (Chic) [Internet]. 2011 Mar;48(1):98-102. Available from: http://psycnet.apa.org/?&fa=main.doiLanding&doi=10.1037/a0022161 [Accessed 13th August 2016].
- Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):166-e7. [CrossRef]
- Carroll KM, Nich C, Sifry RL, Nuro KF, Frankforter TL, Ball SA, et al. A general system for evaluating therapist adherence and competence in psychotherapy research in the addictions. Drug and Alcohol Depend. 2000;57(3):225–238. [CrossRef]
- Ginsburg LR, Tregunno D, Norton PG, Smee S, de Vries I, Sebok SS, Medves J, et al. Development and testing of an objective structured clinical exam (OSCE) to assess socio-cultural dimensions of patient safety competency. BMJ Quality & Safety. 2015;24(3):188–194. [CrossRef]
- Kim J, Neilipovitz D, Cardinal P, Chiu M. A comparison of global rating scale and checklist scores in the validation of an evaluation tool to assess performance in the resuscitation of critically ill patients during simulated emergencies (abbreviated as “CRM simulator study IB”). Simul Healthc. 2009;4(1):6-16. [CrossRef]
- Schuwirth LW, van der Vleuten CPM. Programmatic assessment: from assessment of learning to assessment for learning. Med Teach. 2011;33(6):478–485. [CrossRef]
- Yeates P, O'Neill P, Mann K, Eva K. 'You're certainly relatively competent': assessor biases due to recent experiences. Medical Education. 2013;47(9):910-922. [CrossRef]
- Knowles MS. The Modern Practice of Adult Education. Englewood Cliffs: Cambridge Adult Education; 1984.
- Imel, S. Using adult learning principles in adult basic and literacy education. Available from http://www.calpro-online.org/eric/docs/pab/00008.pdf Updated 1998. [Accessed 29th April 2016].
- Yalom ID, Leszcz M. The Theory and Practice of Group Psychotherapy. New York: Basic Books; 2005.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).