There is no one-size-fits-all brain model.
Machine learning has helped scientists understand how the brain gives rise to complex human characteristics, revealing patterns of brain activity linked to abilities such as working memory, traits such as impulsivity, and conditions such as depression. Scientists can use these methods to build models of these relationships, which can then be used to make predictions about people’s behavior and health.
However, the models only work if they are representative of everyone, and previous research has shown that they are not: for every model, there are some individuals it fails to fit.
Researchers from Yale University analyzed who these models tend to fail for, why that happens, and what can be done to fix it, in a study recently published in the journal Nature.
According to lead author Abigail Greene, an MD-PhD student at Yale School of Medicine, the models must apply to any given person in order to be most useful.
“If we want to move this kind of work into a clinical application, for example, we need to make sure that the model applies to the patient sitting in front of us,” she said.
Greene and her colleagues are studying two approaches that they believe could help models provide more accurate psychiatric classification. The first is to characterize patient groups more precisely. The diagnosis of schizophrenia, for example, covers a wide range of symptoms and can look very different from person to person. With a better understanding of the neural underpinnings of schizophrenia, including its symptoms and subtypes, researchers could group individuals in more precise ways.
Second, certain traits, such as impulsivity, are shared across a variety of conditions. Understanding the neural basis of impulsivity could help clinicians treat that symptom more effectively, regardless of the diagnosis it accompanies.
“And both advances would have implications for treatment responses,” Greene said. “The better we can understand these subgroups of individuals, who may or may not carry the same diagnoses, the better we can design treatments that are appropriate for them.”
But first, the models have to be generalizable to everyone, she said.
To understand how the models fail, Greene and her colleagues first trained models that used patterns of brain activity to predict how well a person would perform on a variety of cognitive tests. When tested, the models correctly predicted how most individuals would score. But for some people, they were wrong, predicting poor scores for people who actually scored well, and vice versa.
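The shape of that first step can be sketched in code. The study used connectome-based predictive modeling on real fMRI data; the toy version below substitutes synthetic data, ridge regression, and a median split into high and low scorers. Every variable name and parameter here is an illustrative assumption, not taken from the paper.

```python
# Illustrative sketch only: synthetic "connectivity" features stand in
# for real fMRI-derived data, and ridge regression stands in for the
# study's connectome-based predictive modeling.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_edges = 200, 500          # subjects x connectivity features

# Cognitive score driven by a small subset of features plus noise.
X = rng.standard_normal((n_subjects, n_edges))
true_w = np.zeros(n_edges)
true_w[:20] = rng.standard_normal(20)
score = X @ true_w + rng.standard_normal(n_subjects)

# Cross-validated predictions: each subject is scored by a model that
# never saw them during training.
pred = cross_val_predict(Ridge(alpha=10.0), X, score, cv=10)

# Binarize at the median to ask who ends up on the wrong side:
# predicted low but actually high, or vice versa.
actual_high = score > np.median(score)
pred_high = pred > np.median(pred)
misclassified = actual_high != pred_high

print(f"misclassified {misclassified.sum()} of {n_subjects} subjects")
```

The `misclassified` mask is the object of interest: even a model that predicts most people well leaves a residue of individuals it gets qualitatively wrong.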
The research team then looked at whom the models failed to classify correctly.
“We found that there was consistency — the same individuals were misclassified across tasks and across analyses,” Greene said. “And the people who were misclassified in one data set had something in common with those who were misclassified in another data set. So there was really something meaningful about the misclassification.”
Next, they looked to see if these similar misclassifications could be explained by differences in the brains of these individuals. But there were no consistent differences. Instead, they found that misclassification was related to sociodemographic factors such as age and education, and to clinical factors such as symptom severity.
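A hedged sketch of how such checks might look: given boolean misclassification flags from two tasks and a sociodemographic covariate (all simulated here), test whether the same people are flagged on both tasks, and whether being flagged tracks the covariate. The simulation, variable names, and choice of tests are illustrative assumptions, not the paper's actual analysis.

```python
# Simulated check of two findings: (a) misclassification overlaps across
# tasks more than chance, (b) it is associated with a sociodemographic
# variable (here, years of education). Purely illustrative numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 300
education = rng.normal(14, 2, n)        # simulated years of education

# Both tasks' misclassification probabilities depend on (low) education,
# so the two flags share structure beyond chance.
p_mis = 1 / (1 + np.exp(1.5 * (education - 13)))
mis_task_a = rng.random(n) < p_mis
mis_task_b = rng.random(n) < p_mis

# (a) Overlap across tasks, tested against independence (2x2 table).
table = np.array([
    [np.sum(mis_task_a & mis_task_b),  np.sum(mis_task_a & ~mis_task_b)],
    [np.sum(~mis_task_a & mis_task_b), np.sum(~mis_task_a & ~mis_task_b)],
])
odds_ratio, p_overlap = stats.fisher_exact(table)

# (b) Association of the flag with education (two-sample t-test).
t, p_edu = stats.ttest_ind(education[mis_task_a], education[~mis_task_a])

print(odds_ratio, p_overlap, t, p_edu)
```

In this toy setup the odds ratio comes out above 1 (the same people tend to be flagged on both tasks) and the flagged group has lower education, mirroring the structure of the reported finding.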
Ultimately, they concluded that the models do not reflect cognitive ability alone. Instead, Greene explained, they reflect more complex “profiles” – a kind of blend of cognitive ability and various social, demographic, and clinical factors.
The models failed anyone who did not fit that stereotypical profile, she said.
As an example, the models used in the study associated more education with higher scores on cognitive tests. Individuals with less education who scored well did not fit the models’ profile and were therefore often wrongly predicted to be low scorers.
Adding to the complexity of the problem, the models had no access to sociodemographic information.
“Sociodemographic variables are included in the cognitive test score,” Greene explained. Essentially, biases in how cognitive tests are designed, administered, scored, and interpreted can seep into the results. And bias is a problem in other fields, too; research has revealed how bias in input data affects models used in criminal justice and health care, for example.
“So the test scores themselves are composites of cognitive ability and these other factors, and the model predicts the composite,” Greene said. This means that researchers need to think carefully about what a given test actually measures and, in turn, what the model predicts.
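The composite problem can be made concrete in a small simulation. Everything below — the coefficients, the feature model, the simulated "education" variable — is an assumption for illustration, not the study's data: when the observed score mixes true ability with an education-related component, a model trained on that score systematically under-predicts high-ability, low-education individuals.

```python
# Toy simulation of the "composite" problem: the measured score mixes
# true ability with an education-related component, so a model trained
# on it inherits that mixture. Purely illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n, d = 400, 100
ability = rng.standard_normal(n)
education = rng.standard_normal(n)       # centered, in sd units

# Brain features carry signal about both ability and education.
X = (np.outer(ability, rng.standard_normal(d)) * 0.5
     + np.outer(education, rng.standard_normal(d)) * 0.5
     + rng.standard_normal((n, d)))

# The observed score is a composite: ability plus an education bias.
observed = ability + 0.6 * education

pred = cross_val_predict(Ridge(alpha=1.0), X, observed, cv=10)

# Prediction error relative to TRUE ability is systematically negative
# for high-ability, low-education individuals: the model, fit to the
# composite, expects them to score lower than they can.
atypical = (ability > 0.5) & (education < -0.5)
err = pred - ability
print(err[atypical].mean(), err[~atypical].mean())
```

The model is doing exactly what it was trained to do — predicting the composite — which is why the bias cannot be seen by inspecting the model alone.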
The study authors offer several recommendations for mitigating the problem. During the study design phase, they suggest, scientists should use strategies that reduce bias and increase the validity of the measurements they collect. After the data have been collected, researchers can apply statistical methods that correct for the bias that remains.
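One standard form of such a statistical correction — regressing the confound out of the target before modeling — can be sketched as follows. This is a generic confound-regression recipe under simulated data, not necessarily the exact procedure the authors recommend.

```python
# Generic confound regression: fit the covariate(s) to the observed
# score and keep only the residual, i.e. the part of the score the
# covariates cannot explain. Simulated, illustrative data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 400
ability = rng.standard_normal(n)
education = rng.standard_normal(n)
observed = ability + 0.6 * education     # composite score: ability + bias

C = education.reshape(-1, 1)             # confound matrix (here one column)
resid = observed - LinearRegression().fit(C, observed).predict(C)

# The residualized target no longer tracks education...
r_before = np.corrcoef(observed, education)[0, 1]
r_after = np.corrcoef(resid, education)[0, 1]
# ...but still tracks ability.
r_ability = np.corrcoef(resid, ability)[0, 1]
print(r_before, r_after, r_ability)
```

Training on `resid` instead of `observed` would then target the deconfounded score; the same residualization can also be applied to the brain features themselves.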
The researchers say that taking these steps would lead to models that better reflect the cognitive construct under study. But they note that completely eliminating bias is unlikely, so the bias that remains should be acknowledged when interpreting model outputs. In addition, for some measures, more than one model may prove necessary.
“There will be a point where you just need different models for different groups of people,” said Todd Constable, professor of radiology and biomedical imaging at Yale University School of Medicine and senior author of the study. “One model will not fit all.”
Reference: “Brain–phenotype models fail for individuals who defy sample stereotypes” by Abigail S. Greene, Xilin Shen, Stephanie Noble, Corey Horien, C., Daniel S. Barron, Gerard Sanacora, Vinod H. Srihari, Scott W. Woods, Dustin Scheinost, and R. Todd Constable, 24 August 2022, Nature.
DOI: 10.1038/s41586-022-05118-w