One Size Does Not Fit All: Choosing Practical Cognitive Screening Tools for Your Practice
Frank J Molnar
Sophiya Benjamin
Stacey A Hawkins
Melanie Briscoe
Sabeen Ehsan

Summary

Millions of patients undergo cognitive testing each year, but many free tools are shifting to fee-based access. This article proposes criteria to guide clinicians in choosing free, validated tools while addressing training, diversity, and methodological gaps.

2020


Keywords: Montreal Cognitive Assessment; cognition; dementia; screening; tests

Abstract

Every year, millions of patients worldwide undergo cognitive testing. Unfortunately, new barriers to the use of free open access cognitive screening tools have arisen over time, making accessibility of tools unstable. This article follows up on an editorial discussing alternative cognitive screening tools for those who cannot afford the costs of the Mini-Mental State Examination and Montreal Cognitive Assessment (see www.dementiascreen.ca). The current article outlines an emerging disruptive "free-to-fee" cycle in which free open access cognitive screening tools are integrated into clinical practice and guidelines, fees are then levied for the use of the tools, and clinicians move on to other tools. This article provides recommendations on means to break this cycle, including the development of tool kits of valid cognitive screening tools whose authors have contracted not to charge for them (i.e., have agreed to keep them free open access). The PRACTICAL.1 Criteria (PRACTIcing Clinician Accessibility and Logistical Criteria Version 1) are introduced to help clinicians select from validated cognitive screening tools, considering barriers and facilitators such as whether the tools are easy to score and free of cost. It is suggested that future systematic reviews embed the PRACTICAL.1 criteria, or refined future versions, as part of the standard of review. Methodological issues, the need for open access training to ensure proper use of cognitive screening tools, and the need to anticipate growing ethnolinguistic diversity by developing tools that are less sensitive to educational, cultural, and linguistic bias are discussed in this opinion piece. J Am Geriatr Soc 68:2207-2213, 2020.

1 BACKGROUND

Every year, millions of patients worldwide undergo cognitive testing for a variety of reasons, one of the most common being to detect signs of dementia, which may affect function and safety. Unfortunately, the strengths and weaknesses of the available cognitive screening tools are often not well understood by the clinicians who apply them. Furthermore, new barriers to the use of free open access cognitive screening tools have arisen over time, making the choice and accessibility of tools unstable. In 2001, the intellectual property rights for the Mini-Mental State Examination (MMSE) were transferred to Psychological Assessment Resources (PAR Inc). When fees for use of the MMSE were levied, many healthcare practitioners switched to the Montreal Cognitive Assessment (MoCA), which then became the standard of care. In 2019, one of the developers of the MoCA announced plans to charge for access and training as of September 2020 (note: not all of the MoCA's developers are involved in this request). Many clinicians may stop using the MoCA, as they did the MMSE. We appear to be entering a "free-to-fee" cycle of instability, in which free open access tests are integrated into practice and guidelines, new fees are levied after widespread uptake, and clinicians then switch to other tests (see Figure 1). This article follows up on a prior editorial (www.dementiascreen.ca) by reviewing relevant methodological issues and the tactics and investments required to break this cycle. It is not a systematic review but an opinion piece, the culmination of 20 years of study, discussions with colleagues, and observation of cognitive screening tools in clinical practice.

Figure 1. Emerging “free-to-fee” cycle to be avoided in the future.

1.1 Screening Tools Are Distinct from Diagnostic Tools

Screening answers the question “is a potential problem present?” Diagnosis answers the question “what is the cause/etiology of the problem if it is present?” These are two different questions.

The result of a screening test alone is insufficient to make the diagnosis of dementia and, conversely, one can make a diagnosis of dementia without a screening test, based on history and clinical examination. This article will focus on screening tools.

1.2 One Size Does Not Fit All: Different Types of Screening Tools May Be Suited to Different Circumstances (e.g., Different Settings and Different Patients)

Authors have described several approaches to screening:

  1. Cognitive tests administered to patients (e.g., MMSE and MoCA),

  2. Questions regarding cognition and function asked of proxy informants (e.g., Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) and AD8),

  3. Functional assessments that use direct observation of test tasks, which are more common in specialty settings where occupational therapy is available,

  4. Screening by a self-administered questionnaire,

  5. Clinician observations of behavior suggesting cognitive impairment.

Each of these approaches has a complementary role.

Cognitive tests administered to patients are useful “since many older patients do not arrive at the doctor's office with someone who has known them well for a long time and is willing to discuss the subject's failing abilities (which may be against cultural or personal norms).”

Proxy informant-based questions provide longitudinal information (change over time) and may be less sensitive to education or culture because they follow patients from their own functional baseline. They do, however, require reliable and accurate informants.

Both proxy informant-based questions and functional assessments may be useful when cognitive tests administered to patients are inaccurate due to ethnolinguistic diversity, which is growing rapidly in many countries.

Theoretically, both proxy informant-based questions and screening by a self-administered questionnaire may be started in the waiting room, thereby saving clinician time.

Clinician observations of behavior suggesting cognitive impairment take little to no time and may trigger some of the previously mentioned screens.

1.3 Myth Busting: Psychometric Properties (e.g., Sensitivity, Specificity, Positive Predictive Value, and Negative Predictive Value) of Cognitive Screening Tools Are Not as Stable or Predictable as We Like to Believe

Factors that alter psychometric properties of cognitive screening tools were outlined in www.dementiascreen.ca and are described in greater detail in Table 1.

Table 1. Factors that Alter Psychometric Properties (e.g., Sensitivity, Specificity, PPV, and NPV) of Cognitive Screening Tools (Based on www.dementiascreen.ca)

1. Spectrum of disease (type(s) and severity of dementia in the population studied).

2. Prevalence of disease (likely lower in primary care than in geriatric clinics, memory clinics, and geriatric day hospitals). This means psychometric properties (e.g., sensitivity, specificity, PPV, and NPV) may differ significantly in such specialty clinics relative to primary care settings.

3. Setting (e.g., community vs primary care vs emergency department vs in hospital vs long-term care vs specialty clinic).

4. Cutoff (cutpoint) adopted. For tests where 0 indicates severe impairment and higher scores reflect better cognition (e.g., MMSE and MoCA), raising the cutoff increases sensitivity (more people with disease fall below the cutoff and are detected) but decreases specificity (more people free of disease also fall below the cutoff and are falsely labeled as impaired). There is a trade-off: as one rises, the other tends to fall.

5. What one is screening for: dementia, MCI, or cognitive impairment in general (due to dementia, MCI, delirium, and/or depression).

6. Language: cognitive tests administered to patients and questionnaires applied to proxy informants are language-based tests that must be validated independently in each language (i.e., it is not valid to merely translate a test and retain the same cutoff scores, as some test developers have done).

7. Culture: screening tools are often developed in specific cultures and may not perform as well in cultures in which they have not been developed and validated.

8. Education: the impact of education on cutoff scores is well known. Some screening tools provide compensatory scoring approaches.

9. Targeting: whether one applies the screen to all comers (the traditional definition of population screening) or only to higher-risk individuals (e.g., those of advanced age (>75- and >85-year cutoffs have been cited), or with vascular risk factors, late-onset depression, family history, or subjective memory complaints).

Abbreviations: MCI, mild cognitive impairment; MMSE, Mini-Mental State Examination; MoCA, Montreal Cognitive Assessment; NPV, negative predictive value; PPV, positive predictive value.
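To make point 2 concrete, consider a hypothetical illustration (the numbers are ours, chosen for arithmetic clarity, not drawn from any validation study). PPV depends on prevalence: PPV = (sensitivity × prevalence) / [sensitivity × prevalence + (1 − specificity) × (1 − prevalence)]. A test with sensitivity 0.90 and specificity 0.85 applied where prevalence is 5% (plausible for primary care) yields a PPV of about 0.24, meaning roughly three of every four positive screens are false positives. The same test applied where prevalence is 50% (plausible for a memory clinic) yields a PPV of about 0.86. Nothing about the test changed; only the population did.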

Point 9 in Table 1, regarding whom to screen (i.e., targeting), can create even more variability and unpredictability in the psychometrics of cognitive screening tools.

Rather than screening all comers, several experts have suggested screening based on risk stratification. Brodaty and Yokomizo and colleagues suggested screening those aged 75 years and older. Ismail et al expanded on this by suggesting that screening be based on both age and comorbidities associated with dementia.

Employing such risk stratification will alter the psychometrics of cognitive screening tools: sensitivity and specificity in a practice applying these criteria will not match values from studies that did not apply the same risk stratification.
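Returning to the hypothetical numbers above: if targeting (e.g., screening only those aged 75 and older) raises the prevalence of disease among those screened from 5% to 20%, the same test (sensitivity 0.90, specificity 0.85) sees its PPV rise from about 0.24 to 0.60, while its NPV slips from about 0.99 to 0.97. Targeting therefore changes the predictive values a clinician will observe even when the tool itself is unchanged.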

The bottom line is that sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) can vary widely from study to study (making studies hard, if not impossible, to compare) and from clinical setting to clinical setting (meaning test accuracy in one's own setting may differ from published psychometrics). These values can vary even more when clinicians apply targeting criteria that differ from those used in the validation studies.

Sensitivity, specificity, PPV, and NPV are not as stable or fixed as we like to believe. Rather, they can vary widely and depend on the factors outlined in Table 1. At most, one can say that a cognitive screening test is reasonable to use if it has demonstrated acceptable sensitivity and specificity in a setting similar to one's own, with similar patients and similar targeting criteria.

Selection of cognitive screening tests for clinical practice therefore cannot be based solely on potentially unstable psychometrics (e.g., sensitivity and specificity). Clinicians require additional criteria to help them select the most appropriate cognitive screening tools for their practice, as described below.

1.4 The PRACTICAL.1 Criteria to Help Clinicians Make Informed Practical Selections

This article follows up on a prior editorial with the PRACTICAL.1 criteria (PRACTIcing Clinician Accessibility and Logistical Criteria Version 1), which represent selected, modernized versions of some of Milne's Practicality, Feasibility, and Range of Applicability criteria.

The PRACTICAL.1 criteria comprise seven filters:

  • Filter 1: Remove tests not validated in multiple studies for the recommended setting.

  • Filter 2: Remove tests that charge for use, training, and/or resources to apply the test.

  • Filter 3: Remove tests with significant challenges in scoring.

  • Filter 4: Remove tests without short-term memory testing.

  • Filter 5: Remove tests without executive function testing.

  • Filter 6: Remove tests that take longer than 10 minutes.

  • Filter 7: Remove tests taking longer than 5 minutes.

1.5 Illustrating How to Apply the PRACTICAL.1 Criteria to Select Tools that Are Most Feasible in Frontline Clinical Practice

The U.S. Preventive Services Task Force (USPSTF) systematic review focused on cognitive screening tools validated in at least two primary care settings. The review selected 16 tools, grouped as very brief (≤5 minutes), brief (6–10 minutes), and longer (>10 minutes). The very brief tools comprised six tests administered to patients (the Clock Drawing Test (CDT), Lawton Instrumental Activities of Daily Living (IADL) Scale, Memory Impairment Screen (MIS), Short Portable Mental Status Questionnaire and Mental Status Questionnaire (SPMSQ/MSQ), Mini-Cog, and Verbal Fluency) and two proxy informant-based tests (the AD8 Dementia Screening Interview and the Functional Activities Questionnaire (FAQ)). The brief tools comprised six additional tests administered to patients (the MMSE, 7-Minute Screen (7MS), Abbreviated Mental Test (AMT), MoCA, St. Louis University Mental Status Examination (SLUMS), and Telephone Interview for Cognitive Status (TICS/modified TICS)). The longer tools were the 26-item IQCODE and the 12-item IQCODE-short.

Starting with a list of 16 USPSTF options meeting filter 1 (validation in multiple studies for recommended setting), if we apply filter 2, removing tests that charge for use, training, and/or resources to apply the test (e.g., MMSE, MoCA, AMT, and TICS), we are left with 12 options: CDT, 7MS, SPMSQ/MSQ, Verbal Fluency, MIS, 26-item IQCODE and the 12-item IQCODE-short, SLUMS, Mini-Cog, Lawton IADL, AD8, and FAQ.

If we then apply filter 3, removing tests that have significant challenges in scoring (e.g., CDT, 7MS, SPMSQ, and MSQ), the list decreases to nine options: Verbal Fluency, MIS, 26-item IQCODE and the 12-item IQCODE-short, SLUMS, Mini-Cog, Lawton IADL, AD8, and FAQ.

Loss of short-term memory is the presenting feature of Alzheimer's dementia. Many clinicians would likely feel screening tests should therefore directly test short-term memory. Applying filter 4, by removing tests without short-term memory testing (Verbal Fluency), reduces the list to eight options: MIS, 26-item IQCODE and the 12-item IQCODE-short, SLUMS, Mini-Cog, Lawton IADL, AD8, and FAQ.

Some clinicians may feel screening tests should assess executive function, both to detect loss of the ability to manage medications and increased driving risk, and to detect dementias with frontal lobe dysfunction (e.g., frontotemporal dementia, Lewy body dementia, and Parkinson's dementia). Applying filter 5, removing tests without executive function testing (MIS), further reduces the list to seven options: the 26-item IQCODE and the 12-item IQCODE-short, SLUMS, Mini-Cog, Lawton IADL, AD8, and FAQ.

Time pressures in primary care are often greater than in specialty clinics. If we apply filter 6, removing tests that take longer than 10 minutes (the 26-item IQCODE and the 12-item IQCODE-short), we are left with five options: SLUMS, Mini-Cog, Lawton IADL, AD8, and FAQ.

Some clinicians may desire very brief tests. Applying filter 7, removing tests taking longer than 5 minutes, eliminates the SLUMS, reducing the list to four options: Mini-Cog, Lawton IADL, AD8, and FAQ.

How one shrinks the field of candidate tools (the sequence of filters to employ) and how far (the number of filters to employ) depends on decisions such as those above. The PRACTICAL.1 criteria empower clinicians to make these filtering decisions when selecting cognitive screening tools for their clinical practice.
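Because the PRACTICAL.1 criteria amount to a sequential filter, the cascade can be expressed in a few lines of code. The following is a minimal sketch in Python; it is ours, not part of the PRACTICAL.1 criteria, and the tool records and their attribute values (scoring ease, durations, and so on) are illustrative assumptions only. In practice, each attribute would be drawn from validation studies and current pricing.

    # Minimal sketch of the PRACTICAL.1 filter cascade (illustrative only).
    # Tool records are hypothetical; attribute values are our assumptions,
    # not data from the article or the USPSTF review.
    tools = [
        {"name": "Mini-Cog", "validated": True, "free": True,
         "easy_scoring": True, "memory": True, "executive": True, "minutes": 3},
        {"name": "MoCA", "validated": True, "free": False,
         "easy_scoring": True, "memory": True, "executive": True, "minutes": 10},
        {"name": "Verbal Fluency", "validated": True, "free": True,
         "easy_scoring": True, "memory": False, "executive": True, "minutes": 2},
        {"name": "SLUMS", "validated": True, "free": True,
         "easy_scoring": True, "memory": True, "executive": True, "minutes": 7},
    ]

    # Filters 1-7 applied in order; a clinician may stop after any filter.
    filters = [
        ("validated for the setting", lambda t: t["validated"]),      # Filter 1
        ("free of charge", lambda t: t["free"]),                      # Filter 2
        ("easy to score", lambda t: t["easy_scoring"]),               # Filter 3
        ("tests short-term memory", lambda t: t["memory"]),           # Filter 4
        ("tests executive function", lambda t: t["executive"]),       # Filter 5
        ("10 minutes or less", lambda t: t["minutes"] <= 10),         # Filter 6
        ("5 minutes or less", lambda t: t["minutes"] <= 5),           # Filter 7
    ]

    candidates = list(tools)
    for label, keep in filters:
        candidates = [t for t in candidates if keep(t)]
        print(f"After '{label}': {[t['name'] for t in candidates]}")

With these illustrative values, the cascade mirrors the walk-through above: the MoCA falls out at filter 2, Verbal Fluency at filter 4, and the SLUMS at filter 7, leaving the Mini-Cog.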

2 DISCUSSION

In addition to the 16 tests selected by the USPSTF, others have recommended the Modified Mini-Mental State Examination (3MS), a four-item version of the MoCA, and the Rowland Universal Dementia Assessment Scale (RUDAS), all of which are addressed below.

The 3MS is a 100-point scale that is likely too long for primary care practice. The Ottawa 3DY (O3DY) is a brief cognitive screening tool derived from the 3MS, composed of four questions that require no equipment, paper, or pencil (the O3DY is purely verbal): Day of the week, Date, DLROW (WORLD spelled backwards), and Year. Its psychometric properties have been reported in emergency department studies, and it has been validated in French. Rather than studying the long 3MS in primary care settings, its shorter derivative, the O3DY, may be better suited to primary care validation research and, if found to be valid, primary care practice.

The four-item version of the MoCA has only just completed the derivation stage and, to our knowledge, has not yet been validated in multiple primary care settings, so it is premature to recommend it for widespread use in primary care. Given the upcoming charges for the MoCA, it is also unclear whether fees will be attached to its shorter versions.

As points 6 to 8 in Table 1 demonstrate, language, education, and culture can have major impacts on the psychometric properties of cognitive screening tools. As the number of seniors with limited English proficiency continues to grow, we will increasingly need cognitive screening tests that are less affected by education, language, and culture. Ismail et al identified the RUDAS as being "relatively unaffected by gender, years of education, and language of preference." Although lengthy, the RUDAS may prove helpful in select situations in primary care or in specialty settings where more time and resources are available.

3 CONCLUSION

In preparing this article, the authors were repeatedly asked “what single test should all clinicians use … what is the best test?” These questions are revealing. Perhaps we have been unwisely overreliant on one test. Searching for the holy grail of the one best test may be folly. The reality is there are different complementary approaches to screening that work in different situations and for different patients, as described above—one size does not fit all.

Research alone will not break the “free-to-fee” cycle depicted in Figure 1. To facilitate change, we recommend the steps outlined in Table 2.

Table 2. Steps Required to Break the “Free-to-Fee” Cycle Depicted in Figure 1


Abbreviations: ASAP, as soon as possible; IADL, Instrumental Activities of Daily Living; MoCA, Montreal Cognitive Assessment; PRACTICAL.1, PRACTIcing Clinician Accessibility and Logistical Criteria Version 1; USPSTF, U.S. Preventive Services Task Force.

To further empower readers, Table 2 includes links to the above-mentioned screening tools, allowing readers to familiarize themselves with the tools and inform their selections.

In addition to providing guidance to clinicians, we are also hopeful this article will serve as a call to action for our national professional organizations as well as granting agencies funding the development and validation of cognitive screening tools.


Citation

Molnar, F. J., Benjamin, S., Hawkins, S. A., Briscoe, M., & Ehsan, S. (2020). One Size Does Not Fit All: Choosing Practical Cognitive Screening Tools for Your Practice. Journal of the American Geriatrics Society (JAGS), 68(10), 2207–2213. https://doi.org/10.1111/jgs.16713
