Quality care for individuals facing chronic illnesses, functional limitations, or both has become a paramount goal in healthcare services worldwide. Person-centered care (PCC) is a cornerstone of this objective, contributing not only to goal attainment but also to the delivery of high-quality health maintenance and medical care [1, 2, 3]. Beyond upholding fundamental human rights, PCC offers a spectrum of advantages for both care recipients and providers [4, 5]. Its successful implementation, however, requires a specific skill set among healthcare professionals, equipping them to navigate the complexities of this evolving field [6]. The core components of PCC include [7]: a tailored, goal-oriented care strategy rooted in individual preferences; continuous evaluation of both the plan and personal objectives; collaborative support from a multidisciplinary team; seamless coordination across all healthcare and support services; ongoing provider education and information exchange; and a commitment to quality improvement driven by feedback from individuals and their caregivers.
The application of PCC is increasingly documented in the academic literature. A notable example is McCormack’s mid-range theory [8], a widely recognized framework that outlines the principles of PCC and their practical application. This framework serves as a valuable resource for healthcare practitioners and researchers, particularly within hospital environments. PCC, as detailed in this framework, is defined as “an approach to practice that is developed through the establishment and nurturing of therapeutic relationships among all care providers, service users, and those significant to them, grounded in values of respect for individuals, the inherent right to self-determination, mutual respect, and understanding” [9].
A key principle of PCC is the inclusive definition of the ‘person’ at the heart of care. This encompasses not only the individual receiving care but also all parties involved in the care process [10, 11]. PCC emphasizes the necessity of equipping professionals with relevant competencies and methodologies. As highlighted earlier, caregivers are pivotal in shaping the quality of life for those in their care [12, 13, 14]. Recognizing the demanding nature of caregiving, it is crucial to prioritize the well-being of caregivers themselves. Emerging research on professional caregivers suggests that implementing PCC can yield substantial benefits for both the individual receiving care and their caregiver [15].
Despite the growing body of research and the widespread integration of PCC into health policy and research discourse [16], its implementation is not without challenges. A significant hurdle is the absence of a universally accepted definition of PCC [17], leading to complexities in areas like efficacy assessment [18, 19]. Furthermore, the inherent subjectivity in defining PCC dimensions and the infrequent utilization of standardized assessment tools present considerable challenges [20]. These limitations and the recognized need for standardized measurement spurred the development of the Person-Centered Care Assessment Tool (P-CAT) [21]. The P-CAT was conceived as a concise, cost-effective, user-friendly, versatile, and comprehensive instrument designed to deliver reliable and valid PCC measurements for research purposes [21].
The Person-Centered Care Assessment Tool (P-CAT)
Numerous tools are available to evaluate PCC from diverse perspectives, including those of caregivers and care recipients, and across various healthcare settings such as hospitals and residential care facilities. Among these, the P-CAT stands out as a particularly concise and straightforward tool that encapsulates the essential elements of PCC as defined in existing literature. Initially developed in Australia, the P-CAT was designed to assess the person-centered approach within long-term care environments for older adults with dementia. However, its application has broadened significantly, now encompassing diverse healthcare settings including oncology and psychiatric units [22, 23].
The P-CAT’s widespread adoption as a leading tool for measuring PCC by healthcare professionals [25, 26] can be attributed to its brevity, ease of administration, adaptability across various medical and care contexts, and its potential for emic applicability. Emic characteristics refer to constructs that maintain cross-cultural relevance, exhibiting comparable structure and interpretation across different cultures [24]. The P-CAT’s adaptability is further evidenced by its successful translation and validation in countries with diverse cultural and linguistic backgrounds. Since its inception, the P-CAT has been adapted for use in Norway [27], Sweden [28], China [29], South Korea [30], Spain [25], and Italy [31], demonstrating its broad international relevance.
The P-CAT consists of 13 items, each rated on a 5-point ordinal scale ranging from “strongly disagree” to “strongly agree,” with higher scores indicating a greater degree of person-centeredness. The tool is structured around three core dimensions: person-centered care (7 items), organizational support (4 items), and environmental accessibility (2 items). The original validation study (n = 220) [21] reported satisfactory internal consistency for the total scale (α = 0.84) and test-retest reliability of r = 0.66 over a one-week period. A subsequent reliability generalization study in 2021 [32], which analyzed the internal consistency of the P-CAT across multiple studies, found a mean α of 0.81 across the 25 samples included in the meta-analysis; the mean age of the sample was the only variable significantly associated with the reliability coefficient. Regarding internal structure validity, factor analysis supported a three-factor structure accounting for 56% of the total variance. Content validity was rigorously assessed through expert reviews, literature analysis, and stakeholder input [33].
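To make the scoring rules and reliability figures concrete, the minimal sketch below scores a hypothetical 13-item response matrix and computes Cronbach’s alpha for the total scale. The response data, variable names, and the item-to-dimension assignments (7 + 4 + 2 items) are illustrative assumptions only; the actual item mapping should be taken from the original instrument [21].

```python
import numpy as np
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


# Placeholder responses: 220 raters, 13 items on a 5-point scale
# (1 = strongly disagree ... 5 = strongly agree). With random data alpha will be
# near zero; load real ratings to obtain a meaningful estimate.
rng = np.random.default_rng(42)
responses = pd.DataFrame(
    rng.integers(1, 6, size=(220, 13)),
    columns=[f"item_{i}" for i in range(1, 14)],
)

# Illustrative grouping that mirrors the three published dimensions (7 + 4 + 2 items);
# the true item-to-dimension mapping is defined in the original instrument [21].
subscales = {
    "person_centered_care": [f"item_{i}" for i in range(1, 8)],
    "organizational_support": [f"item_{i}" for i in range(8, 12)],
    "environmental_accessibility": ["item_12", "item_13"],
}

total_score = responses.sum(axis=1)  # higher totals indicate greater person-centeredness
print(f"total-scale alpha: {cronbach_alpha(responses):.2f}")
for name, cols in subscales.items():
    print(f"{name}: mean subscale score = {responses[cols].sum(axis=1).mean():.1f}")
```

The same `cronbach_alpha` helper can be applied to each subscale separately when the multidimensional structure, rather than the total score, is of interest.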
It is noteworthy that the consistent trends observed across various P-CAT validation studies might be influenced by a long-standing validity framework that categorizes validity into content, construct, and criterion validity [34, 35]. However, a re-evaluation of the P-CAT’s validity within a contemporary validity framework, which would offer a refined understanding of validity, has not yet been undertaken.
The Evolution of Scale Validity
Historically, validation has been viewed as a process focused primarily on the psychometric properties of measurement instruments [36]. In the early 20th century, with the increasing use of standardized tests in education and psychology, two primary definitions of validity emerged. The first defined validity as the extent to which a test measures its intended construct, while the second described it in terms of an instrument’s correlation with a specific variable [35].
However, validity theory has progressed significantly over the last century. The current understanding emphasizes that validity should be grounded in specific interpretations tailored to a particular purpose. It should not solely rely on empirically derived psychometric properties but should also be supported by the theoretical foundation of the measured construct. This evolution distinguishes between a classical approach (Classical Test Theory or CTT) and a modern approach to validity. Modern validity perspectives are generally characterized by: (a) a unified concept of validity and (b) validity judgments based on inferences and interpretations drawn from test scores [37, 38]. This advancement in validity theory led to the development of frameworks designed to guide the process of gathering evidence to support the appropriate use and interpretation of measurement scores [39].
The “Standards for Educational and Psychological Testing” (“Standards”), published by the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME) in 2014, serves this purpose. The “Standards” provide guidelines for evaluating the validity of score interpretations based on their intended applications. Two key conceptual shifts are central to this modern view of validity: first, validity is a unified concept centered on the construct itself; second, validity is defined as “the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests” [37].

The “Standards” propose five sources of evidence to assess different facets of validity [37]: test content, response processes, internal structure, relationships to other variables, and consequences of testing. According to AERA et al. [37], evidence based on test content concerns the relevance of the administration process, subject matter, wording, and format of test items to the construct they are designed to measure; it is primarily assessed using qualitative methods, although quantitative approaches can also be employed. Evidence based on response processes focuses on the cognitive processes and interpretations of items by respondents and is typically examined qualitatively. Evidence based on internal structure examines the interrelationships between test items and the underlying construct, using quantitative methods. Evidence based on relationships with other variables involves comparing the measured construct with theoretically relevant external variables and is assessed quantitatively. Finally, evidence based on the consequences of testing analyzes both intended and unintended outcomes that might arise from sources of invalidity, primarily using qualitative methods.
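Purely as an illustration of how this framework can organize a review (it is not an artifact of the “Standards” themselves), the sketch below tabulates which of the five sources of evidence a given validation study reports. The class, field names, and the example study are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict

# The five sources of validity evidence named in the "Standards" [37].
SOURCES = (
    "test_content",
    "response_processes",
    "internal_structure",
    "relations_to_other_variables",
    "consequences_of_testing",
)


@dataclass
class ValidationStudy:
    """Hypothetical coding record: which evidence sources a study reports."""
    citation: str
    evidence: Dict[str, bool] = field(default_factory=lambda: {s: False for s in SOURCES})

    def summary(self) -> str:
        covered = [s for s, present in self.evidence.items() if present]
        return f"{self.citation}: {len(covered)}/{len(SOURCES)} sources ({', '.join(covered) or 'none'})"


# Example: a (fictional) adaptation reporting only content and internal-structure evidence.
study = ValidationStudy(citation="Hypothetical P-CAT adaptation")
study.evidence["test_content"] = True
study.evidence["internal_structure"] = True
print(study.summary())
```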
While validity is crucial for establishing a robust scientific foundation for test score interpretations, validation studies in healthcare have traditionally emphasized content, criterion, and construct validity, often overlooking the critical aspects of score interpretation and application [34].
The “Standards” framework is considered a valuable, theory-driven approach for evaluating questionnaire validity. Its strength lies in its capacity to analyze validity sources using both qualitative and quantitative methodologies, and its evidence-based nature [35]. However, due to limited awareness or the absence of standardized descriptive protocols, only a small number of instruments have been rigorously reviewed within the “Standards” framework to date [39].
The Need for the Current Study
Despite the P-CAT’s widespread use in professional settings and its numerous validations [25, 27, 28, 29, 30, 31, 40], a comprehensive analysis of its validity within the “Standards” framework is lacking. This means that empirical evidence supporting the P-CAT’s validity has not been synthesized in a manner that facilitates a well-informed judgment based on a holistic view of available data.
Such a review is critically important, particularly given certain unresolved methodological concerns surrounding the P-CAT. For instance, while the original study identified the P-CAT as multidimensional, Bru-Luna et al. [32] recently pointed out that subsequent adaptations [25, 27, 28, 29, 30, 40] often prioritize the total score for interpretation, potentially neglecting its multidimensional nature. This suggests that the multidimensionality observed in the original study may not have been consistently replicated across adaptations. Furthermore, Bru-Luna et al. [32] highlighted that evidence for the internal structure validity of the P-CAT is frequently underreported, because the methods used are often not rigorous enough to establish definitively how its scores should be derived and interpreted.
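To illustrate how the internal structure question could be examined empirically, the sketch below runs an exploratory factor analysis on a placeholder response matrix, assuming the third-party factor_analyzer package is available. The random data, the oblimin rotation, and the minres estimator are illustrative choices; 5-point ordinal items would ideally be analyzed through polychoric correlations, which this simple sketch omits.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

# Placeholder data: replace with real P-CAT ratings (13 items scored 1-5)
# before interpreting any of the output.
rng = np.random.default_rng(0)
responses = pd.DataFrame(
    rng.integers(1, 6, size=(300, 13)),
    columns=[f"item_{i}" for i in range(1, 14)],
)

# Sampling adequacy check before factoring.
_, kmo_total = calculate_kmo(responses)
print(f"KMO = {kmo_total:.2f}")

# Extract three factors, mirroring the structure reported in the original study [21].
efa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="minres")
efa.fit(responses)

loadings = pd.DataFrame(
    efa.loadings_, index=responses.columns, columns=["F1", "F2", "F3"]
)
_, _, cumulative_variance = efa.get_factor_variance()
print(loadings.round(2))
print(f"cumulative variance explained: {cumulative_variance[-1]:.0%}")
```

Comparing such loadings and the variance explained against the original three-factor solution is one way to check whether the published structure replicates in a new sample.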
The validity of the P-CAT, especially its internal structure, therefore remains an open question. Nevertheless, both substantial research and widespread practical application point to the tool’s relevance for assessing PCC. This perception, however common, is inherently subjective and is not sufficient for a comprehensive, synthetic validity assessment based on prior validation studies. A robust assessment requires a conceptual model of validity, followed by a thorough review of existing P-CAT validity studies through the lens of that model.
Therefore, the primary objective of this study was to conduct a systematic review of the evidence provided by P-CAT validation studies, utilizing the “Standards for Educational and Psychological Testing” as a guiding framework.
References:
[1] Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
[2] International Alliance of Patients’ Organizations. What is patient-centred healthcare? A review of definitions and principles. 2nd ed. London, UK: International Alliance of Patients’ Organizations; 2007.
[3] World Health Organization. WHO global strategy on people-centred and integrated health services: interim report. Geneva, Switzerland: World Health Organization; 2015.
[4] Britten N, Ekman I, Naldemirci Ö, Javinger M, Hedman H, Wolf A. Learning from Gothenburg model of person centred healthcare. BMJ. 2020;370:m2738.
[5] Van Diepen C, Fors A, Ekman I, Hensing G. Association between person-centred care and healthcare providers’ job satisfaction and work-related health: a scoping review. BMJ Open. 2020;10:e042658.
[6] Ekman N, Taft C, Moons P, Mäkitalo Å, Boström E, Fors A. A state-of-the-art review of direct observation tools for assessing competency in person-centred care. Int J Nurs Stud. 2020;109:103634.
[7] American Geriatrics Society Expert Panel on Person-Centered Care. Person-centered care: a definition and essential elements. J Am Geriatr Soc. 2016;64:15–8.
[8] McCormack B, McCance TV. Development of a framework for person-centred nursing. J Adv Nurs. 2006;56:472–9.
[9] McCormack B, McCance T. Person-centred practice in nursing and health care: theory and practice. Chichester, England: Wiley; 2016.
[10] Nolan MR, Davies S, Brown J, Keady J, Nolan J. Beyond person-centred care: a new vision for gerontological nursing. J Clin Nurs. 2004;13:45–53.
[11] McCormack B, McCance T. Person-centred nursing: theory, models and methods. Oxford, UK: Wiley-Blackwell; 2010.
[12] Abraha I, Rimland JM, Trotta FM, Dell’Aquila G, Cruz-Jentoft A, Petrovic M, et al. Systematic review of systematic reviews of non-pharmacological interventions to treat behavioural disturbances in older patients with dementia. The SENATOR-OnTop series. BMJ Open. 2017;7:e012759.
[13] Anderson K, Blair A. Why we need to care about the care: a longitudinal study linking the quality of residential dementia care to residents’ quality of life. Arch Gerontol Geriatr. 2020;91:104226.
[14] Bauer M, Fetherstonhaugh D, Haesler E, Beattie E, Hill KD, Poulos CJ. The impact of nurse and care staff education on the functional ability and quality of life of people living with dementia in aged care: a systematic review. Nurse Educ Today. 2018;67:27–45.
[15] Smythe A, Jenkins C, Galant-Miecznikowska M, Dyer J, Downs M, Bentham P, et al. A qualitative study exploring nursing home nurses’ experiences of training in person centred dementia care on burnout. Nurse Educ Pract. 2020;44:102745.
[16] McCormack B, Borg M, Cardiff S, Dewing J, Jacobs G, Janes N, et al. Person-centredness– the ‘state’ of the art. Int Pract Dev J. 2015;5:1–15.
[17] Wilberforce M, Challis D, Davies L, Kelly MP, Roberts C, Loynes N. Person-centredness in the care of older adults: a systematic review of questionnaire-based scales and their measurement properties. BMC Geriatr. 2016;16:63.
[18] Rathert C, Wyrwich MD, Boren SA. Patient-centered care and outcomes: a systematic review of the literature. Med Care Res Rev. 2013;70:351–79.
[19] Sharma T, Bamford M, Dodman D. Person-centred care: an overview of reviews. Contemp Nurse. 2016;51:107–20.
[20] Ahmed S, Djurkovic A, Manalili K, Sahota B, Santana MJ. A qualitative study on measuring patient-centered care: perspectives from clinician-scientists and quality improvement experts. Health Sci Rep. 2019;2:e140.
[21] Edvardsson D, Fetherstonhaugh D, Nay R, Gibson S. Development and initial testing of the Person-centered Care Assessment Tool (P-CAT). Int Psychogeriatr. 2010;22:101–8.
[22] Tamagawa R, Groff S, Anderson J, Champ S, Deiure A, Looyis J, et al. Effects of a provincial-wide implementation of screening for distress on healthcare professionals’ confidence and understanding of person-centered care in oncology. J Natl Compr Canc Netw. 2016;14:1259–66.
[23] Degl’Innocenti A, Wijk H, Kullgren A, Alexiou E. The influence of evidence-based design on staff perceptions of a supportive environment for person-centered care in forensic psychiatry. J Forensic Nurs. 2020;16:E23–30.
[24] Hulin CL. A psychometric theory of evaluations of item and scale translations: fidelity across languages. J Cross Cult Psychol. 1987;18:115–42.
[25] Martínez T, Suárez-Álvarez J, Yanguas J, Muñiz J. Spanish validation of the Person-centered Care Assessment Tool (P-CAT). Aging Ment Health. 2016;20:550–8.
[26] Martínez T, Martínez-Loredo V, Cuesta M, Muñiz J. Assessment of person-centered care in gerontology services: a new tool for healthcare professionals. Int J Clin Health Psychol. 2020;20:62–70.
[27] Rokstad AM, Engedal K, Edvardsson D, Selbaek G. Psychometric evaluation of the Norwegian version of the Person-centred Care Assessment Tool. Int J Nurs Pract. 2012;18:99–105.
[28] Sjögren K, Lindkvist M, Sandman PO, Zingmark K, Edvardsson D. Psychometric evaluation of the Swedish version of the Person-centered Care Assessment Tool (P-CAT). Int Psychogeriatr. 2012;24:406–15.
[29] Zhong XB, Lou VW. Person-centered care in Chinese residential care facilities: a preliminary measure. Aging Ment Health. 2013;17:952–8.
[30] Tak YR, Woo HY, You SY, Kim JH. Validity and reliability of the Person-centered Care Assessment Tool in long-term care facilities in Korea. J Korean Acad Nurs. 2015;45:412–9.
[31] Brugnolli A, Debiasi M, Zenere A, Zanolin ME, Baggia M. The Person-centered Care Assessment Tool in nursing homes: psychometric evaluation of the Italian version. J Nurs Meas. 2020;28:555–63.
[32] Bru-Luna LM, Martí-Vilar M, Merino-Soto C, Livia J. Reliability generalization study of the Person-Centered Care Assessment Tool. Front Psychol. 2021;12:712582.
[33] Edvardsson D, Innes A. Measuring person-centered care: a critical comparative review of published tools. Gerontologist. 2010;50:834–46.
[34] Hawkins M, Elsworth GR, Nolte S, Osborne RH. Validity arguments for patient-reported outcomes: justifying the intended interpretation and use of data. J Patient Rep Outcomes. 2021;5:64.
[35] Sireci SG. On the validity of useless tests. Assess Educ Princ Policy Pract. 2016;23:226–35.
[36] Hawkins M, Elsworth GR, Osborne RH. Questionnaire validation practice: a protocol for a systematic descriptive literature review of health literacy assessments. BMJ Open. 2019;9:e030753.
[37] American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 2014.
[38] Padilla JL, Benítez I. Validity evidence based on response processes. Psicothema. 2014;26:136–44.
[39] Hawkins M, Elsworth GR, Hoban E, Osborne RH. Questionnaire validation practice within a theoretical framework: a systematic descriptive literature review of health literacy assessments. BMJ Open. 2020;10:e035974.
[40] Le C, Ma K, Tang P, Edvardsson D, Behm L, Zhang J, et al. Psychometric evaluation of the Chinese version of the Person-centred Care Assessment Tool. BMJ Open. 2020;10:e031580.