The Person-Centered Care Assessment Tool: A Comprehensive Guide and Validity Review

Quality care for individuals facing chronic illnesses, functional limitations, or a combination of both has become a paramount objective within healthcare and support services. Person-centered care (PCC) stands as a cornerstone not only for achieving this objective but also for delivering superior health maintenance and medical attention [1,2,3]. Beyond upholding fundamental human rights, PCC offers substantial advantages for both care recipients and providers [4, 5]. Furthermore, the implementation of PCC necessitates a specific skill set for healthcare professionals, equipping them to navigate the ongoing challenges within this domain [6]. Key components of PCC encompass [7]: personalized, goal-oriented care plans tailored to individual preferences; continuous plan reviews and goal adjustments; support from multidisciplinary teams; seamless coordination among healthcare and support providers; ongoing education and information exchange for providers; and quality enhancement through feedback from individuals and their caregivers.

The application of PCC is increasingly documented in existing literature. A prominent example is McCormack’s widely recognized mid-range theory [8], an internationally acclaimed framework that guides the practical implementation and research of PCC, particularly within hospital settings. This framework defines PCC as “an approach to practice that is established through the formation and fostering of therapeutic relationships between all care providers, service users, and others significant to them, underpinned by values of respect for persons, [the] individual right to self-determination, mutual respect, and understanding” [9].

Crucially, PCC emphasizes that the “person” at the heart of care extends beyond the recipient to include everyone involved in the care process [10, 11]. Effective PCC requires professionals to be proficient in relevant skills and methodologies, as caregivers significantly influence the quality of life for those needing care [12,13,14]. Moreover, acknowledging the demanding nature of caregiving, caregiver well-being is also a critical consideration. Research indicates that implementing PCC can yield benefits for both care recipients and caregivers in professional settings [15].

Despite extensive literature and frequent integration of PCC into health policy and research [16], several complexities persist. A universal definition of PCC remains elusive [17], leading to challenges in areas such as efficacy assessment [18, 19]. Further complicating matters are the difficulties in quantifying the subjective aspects of PCC and the inconsistent use of standardized measurement tools [20]. These challenges and the recognized need for standardized assessment led to the development of the Person-Centered Care Assessment Tool (P-CAT) [21]. The P-CAT was conceived as a concise, cost-effective, user-friendly, versatile, and comprehensive instrument for reliably and validly measuring PCC in research contexts [21].

Understanding the Person-Centered Care Assessment Tool (P-CAT)

While various tools exist to evaluate PCC from different viewpoints (caregiver or recipient) and across diverse settings (hospitals, nursing homes), the P-CAT stands out as a particularly concise and straightforward option. It encapsulates the core elements of PCC identified in the literature. Initially designed in Australia for assessing long-term residential care for older adults with dementia, its application is expanding to other healthcare domains, including oncology units [22] and psychiatric facilities [23].

The P-CAT’s appeal lies in its brevity, ease of administration, adaptability across various medical and care environments, and its potentially etic qualities (constructs that remain applicable across cultures, with consistent structure and interpretation [24]). This has positioned the P-CAT as a frequently employed instrument by professionals for PCC measurement [25, 26]. Its reach extends across numerous countries with diverse cultural and linguistic backgrounds, including adaptations in Norway [27], Sweden [28], China [29], South Korea [30], Spain [25], and Italy [31].

The P-CAT consists of 13 items, each rated on a 5-point scale ranging from “strongly disagree” to “strongly agree,” where higher scores indicate greater person-centeredness. It encompasses three dimensions: person-centered care (7 items), organizational support (4 items), and environmental accessibility (2 items). The original validation study (n = 220; [21]) demonstrated satisfactory internal consistency for the total scale (α = 0.84) and good test-retest reliability (r = .66) over a one-week period. In that study, factor analysis revealed three factors explaining 56% of the total variance, and content validity was affirmed through expert reviews, literature analysis, and stakeholder input [33]. More recently, a 2021 reliability generalization study [32] analyzing the internal consistency of the P-CAT across 25 meta-analysis samples (including some validation studies from this review) found a mean α of 0.81; the average age of the sample was the only variable significantly correlated with the reliability coefficient.
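The internal-consistency figures above are Cronbach’s alpha, defined for a k-item scale as α = (k / (k − 1)) × (1 − Σσᵢ² / σₜ²), where σᵢ² are the individual item variances and σₜ² is the variance of respondents’ total scores. The following minimal Python sketch illustrates the computation; the ratings matrix is invented for illustration (six respondents, four items) and is not real P-CAT data:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows (equal-length item ratings)."""
    k = len(scores[0])
    # Sample variance of each item across respondents.
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    # Sample variance of each respondent's total score.
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Illustrative ratings only: 6 respondents, 4 items on a 1-5 scale
# (a real P-CAT administration would have 13 items).
ratings = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]
print(round(cronbach_alpha(ratings), 2))
```

When all items are perfectly correlated, α equals exactly 1; values in the 0.81–0.84 range reported for the P-CAT indicate strong, but not redundant, interrelations among items.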

The consistency observed across different P-CAT validation studies might stem from a long-standing validity framework that distinguishes between content, construct, and criterion validity [34, 35]. However, a contemporary re-evaluation of P-CAT validity within a more modern framework, offering a refined definition of validity, remains unaddressed.

Scale Validity: Classical vs. Modern Perspectives

Traditionally, validation has been viewed as a process centered on the psychometric properties of a measurement instrument [36]. Early 20th-century definitions, coinciding with the increased use of standardized tests in education and psychology, described validity in two ways: either as the extent to which a test measures its intended construct or as the correlation of an instrument with another variable [35].

However, validity theory has progressed significantly over the last century. The current understanding emphasizes that validity should be grounded in specific interpretations for a defined purpose. It should not solely rely on empirical psychometric properties but also be informed by the theoretical underpinnings of the measured construct. Distinguishing between classical test theory (CTT) and a modern approach highlights this evolution in the concept of validity. Modern validity concepts are generally characterized by: (a) a unified view of validity and (b) validity judgments based on inferences and interpretations derived from test scores [37, 38]. This advancement led to the development of frameworks for gathering evidence to support the appropriate use and interpretation of instrument scores [39].

The “Standards for Educational and Psychological Testing” (“Standards”), published by the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME) in 2014, serve as such a guiding framework. These standards offer guidelines for evaluating the validity of score interpretations based on their intended application. Two key aspects of this modern perspective are: first, validity as a unitary concept focused on the construct itself; and second, validity defined as “the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests” [37].

The “Standards” outline five sources of validity evidence [37]: test content, response processes, internal structure, relations to other variables, and consequences of testing. According to AERA et al. [37], test content validity refers to the alignment of the administration process, subject matter, item wording, and format with the intended construct. It is primarily assessed using qualitative methods, though quantitative approaches can be incorporated. Response process validity examines the cognitive processes and item interpretation by respondents, relying on qualitative methods. Internal structure validity, assessed quantitatively, focuses on the interrelationships between items and the construct. Validity based on relationships with other variables involves comparing the measured variable with theoretically relevant external variables, using quantitative methods. Finally, validity based on testing consequences analyzes both intended and unintended outcomes that might indicate invalidity, predominantly using qualitative methods.

While validity is crucial for establishing a robust scientific foundation for test score interpretations, health-related validation studies have historically prioritized content, criterion, and construct validity, often neglecting the interpretation and application of scores [34].

The “Standards” framework is considered a valuable, theory-driven approach for evaluating questionnaire validity because of its capacity to analyze validity sources using both qualitative and quantitative methodologies and its evidence-based nature [35]. However, due to limited awareness or the absence of standardized protocols, relatively few instruments have been evaluated using the “Standards” framework to date [39].

Current Study: Addressing the Validity of the P-CAT

Despite the widespread use of the P-CAT and its seven existing validation studies [25, 27, 28, 29, 30, 31, 40], a validity analysis within the “Standards” framework is lacking. Consequently, empirical evidence supporting the P-CAT’s validity in a manner that facilitates informed judgments based on synthesized available data is yet to be established.

Such a review is crucial given unresolved methodological issues surrounding the P-CAT. For instance, while the original study identified the multidimensional nature of the P-CAT, Bru-Luna et al. [32] noted that subsequent adaptations [25, 27, 28, 29, 30, 40] often utilize the total score for interpretation, neglecting its multidimensionality. This suggests that the original multidimensional structure has not been consistently replicated. Bru-Luna et al. [32] also pointed out that the internal structure validity of the P-CAT is frequently underreported due to a lack of robust methodologies to definitively establish score calculation methods.

Despite these unresolved questions regarding the P-CAT’s validity, particularly its internal structure, both substantive research and professional practice indicate its relevance in assessing PCC. However, this perception, being largely judgment-based, may not suffice for a comprehensive and synthesized validity assessment based on prior validation studies. A proper validity evaluation necessitates a model for conceptualizing validity, followed by a review of existing P-CAT validity studies using this model.

Therefore, the primary objective of this study is to conduct a systematic review of the evidence provided by P-CAT validation studies, employing the “Standards” as a guiding framework.
