Leveraging the best algorithms for precision in precision medicine


There is no one-size-fits-all AI algorithm that drug developers can apply to quickly identify the features they are looking for. Solving this dilemma has major implications for the field of precision medicine.

Currently, researchers take a modular approach, selecting the best algorithms to build custom analytics engines that answer specific questions in an unbiased and repeatable manner, Zeeshan Ahmed, MD, assistant professor of medicine at Rutgers Robert Wood Johnson Medical School, Rutgers Biomedical and Health Sciences, told BioSpace.

So far, however, there has been little effort to organize and make sense of the many computational methods in this field.

Key AI/ML goals for precision medicine

A review published in Briefings in Bioinformatics is among the first to do so. Ahmed and colleagues examined five years of literature on whole genome sequencing and whole exome sequencing to identify 32 of the most widely used AI/machine learning algorithms and approaches used to deliver precision medicine insights.

The team compared the scientific objectives, methodologies, data sources, ethics, and gaps for each of these methods.

For artificial intelligence/machine learning to be more useful to drug developers, several things are required, Ahmed said. Among them:

  • Efficient data collection, quality checking, cleaning and generation of AI/ML-ready data
  • Data modeling that creates valid associations between predictive input variables and expected outcomes
  • Training and validation of models to evaluate predictive performance

“In situations where the data is large, it is important to ensure the right balance between the training and test data sets to avoid overfitting,” Ahmed noted.
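As a concrete illustration of that point, the snippet below sketches the idea: hold out part of the data, train on the rest, and compare performance on both splits. This is a minimal sketch using scikit-learn on synthetic, placeholder data, not the pipeline from Ahmed’s lab; a large gap between the training and held-out scores signals overfitting.

```python
# Minimal sketch: split data, train, and compare training vs held-out
# performance. X and y are synthetic placeholders, not study data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                     # e.g., 40 clinical/genomic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# Hold out a test set so predictive performance is measured on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)

train_auc = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"train AUC={train_auc:.3f}  test AUC={test_auc:.3f}")
# A large gap between the two scores is a red flag for overfitting.
```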

For artificial intelligence/machine learning to be more useful, data must be standardized to enable more accurate searches. Ensuring that the data uses the same terminology to refer to the same items helps ensure that all relevant information can be identified and analyzed.

There has to be a way to correct errors in the data as well. Data entered manually, for example, may contain inaccurate information. Study data should also cover multiple diseases and distinct populations to reflect the broad way in which diseases, conditions, and symptoms manifest.
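One common way to approach that kind of standardization is a synonym map that folds equivalent free-text labels onto a single canonical term. The sketch below is illustrative only; the terms and mappings are hypothetical, and a production pipeline would use a curated ontology rather than a hand-written table.

```python
# Illustrative terminology harmonization: map free-text diagnosis labels to a
# canonical term so equivalent records can be retrieved and analyzed together.
# The synonym table is hypothetical, not a real ontology.
import pandas as pd

SYNONYMS = {
    "t2dm": "type 2 diabetes mellitus",
    "type ii diabetes": "type 2 diabetes mellitus",
    "diabetes mellitus type 2": "type 2 diabetes mellitus",
    "mi": "myocardial infarction",
    "heart attack": "myocardial infarction",
}

def standardize(label: str) -> str:
    """Lowercase, trim whitespace, and map known synonyms to a canonical term."""
    key = label.strip().lower()
    return SYNONYMS.get(key, key)

records = pd.DataFrame({"diagnosis": ["T2DM", "Heart attack", "type II diabetes", "MI "]})
records["diagnosis_std"] = records["diagnosis"].map(standardize)
print(records)
```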

The role of AI/ML in drug repurposing for COVID-19

A recent study in Biomedicine & Pharmacotherapy, conducted by Kyung-Hyun Choi of Jeju National University in Korea and colleagues, notes the value of ML and deep learning in drug repurposing for COVID-19 treatments. These methods helped them distinguish between drug targets and the gene products that affect target activity.

In the paper, Choi explained that each type of analysis has its own set of algorithms. The machine learning analyses included k-nearest neighbors (a supervised learning method), random forest and support vector machines, among others.
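As a rough illustration of how such classical algorithms are weighed against one another, the sketch below cross-validates the three methods named above on a synthetic dataset. The data and scores are placeholders, not results from the Choi study.

```python
# Compare the classical algorithms named above (k-nearest neighbors, random
# forest, SVM) with 5-fold cross-validation on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=30, n_informative=10, random_state=0)

candidates = {
    "kNN": KNeighborsClassifier(n_neighbors=7),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
}

for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name:14s} mean AUC = {scores.mean():.3f} (+/- {scores.std():.3f})")
```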

Deep learning techniques include artificial neural networks, convolutional neural networks and long short-term memory (LSTM) networks. AI algorithms have been used for link prediction, node prediction, graph prediction and other tasks.
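To make the link-prediction idea concrete, here is a toy sketch on a hypothetical drug–target graph: an unseen drug–target pair is scored by how many known targets the drug shares with drugs that already hit that target. The graph, the names and the scoring rule are assumptions for illustration, not the networks used in the study.

```python
# Toy link prediction on a hypothetical drug-target graph: score a candidate
# pair by target overlap with drugs already known to hit that target.
known_links = {
    ("drug_A", "target_1"), ("drug_A", "target_2"),
    ("drug_B", "target_2"), ("drug_B", "target_3"),
    ("drug_C", "target_3"),
}

def targets_of(drug):
    return {t for d, t in known_links if d == drug}

def drugs_of(target):
    return {d for d, t in known_links if t == target}

def link_score(drug, target):
    """Count targets shared between `drug` and the drugs already hitting `target`."""
    neighbour_targets = set().union(*(targets_of(d) for d in drugs_of(target)))
    return len(targets_of(drug) & neighbour_targets)

print(link_score("drug_A", "target_3"))  # drug_A shares target_2 with drug_B -> 1
```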

When applying artificial intelligence/machine learning to research, Choi and colleagues write, “the limitations include … inconsistencies … in biological networks,” as well as challenges associated with different networks that can lead to bias in the outcome. To overcome these issues, they recommended using heterogeneous data from multiple sources to enhance the reliability of the analyses.

Another study, published in Current Drug Targets last year, reviews ML tools used to identify bioactive compounds from among millions of candidates.

It found, among other things, that the SVM algorithm was more effective than other classification models at predicting human intestinal absorption, while a quantitative structure-activity relationship (QSAR) model best predicted the inhibitory effects of flavonoids on specific indicators. Clearly, the choice of algorithm matters.
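For a sense of what a QSAR-style model looks like in code, the sketch below fits a simple ridge regression relating molecular descriptors to an activity value. The descriptors and activity values are random placeholders, not the flavonoid data from the study.

```python
# Minimal QSAR-style sketch: a regularized linear model relating molecular
# descriptors to an activity value. Data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Columns might stand for descriptors such as logP, molecular weight, TPSA, ...
descriptors = rng.normal(size=(200, 5))
activity = 2.0 * descriptors[:, 0] - 1.5 * descriptors[:, 2] + rng.normal(scale=0.3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(descriptors, activity, random_state=1)
qsar = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", round(r2_score(y_te, qsar.predict(X_te)), 3))
```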

Decoding the black box

Until a few years ago, AI was often seen as a black box that ingested data and spat out results without giving researchers the details needed to understand how those results were reached.

“You learn how thousands of inputs connect to hundreds or thousands of outputs,” David Longo, co-founder and CEO of Ordaos, told BioSpace. Machine learning algorithms “learn the intrinsic relationships between – for example – amino acids, motifs, domains … in a non-linear and complex way, so there is still some kind of black box element for AI/ML, depending on how it is created.”

In general, modern AI/ML algorithms allow a degree of insight into how individual algorithms arrive at their conclusions.

For example, Ordaos, which develops small proteins, “provides a background trace of each amino acid that is changed and how that affects the properties that come out of that protein,” Longo said. For researchers, this is a huge benefit.
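Ordaos’ tracing approach is its own; as a generic stand-in, the sketch below uses permutation importance, one common way to estimate how much each input position contributes to a model’s predictions. The model and data are placeholders, assumed for illustration.

```python
# Generic feature attribution (not Ordaos' proprietary method): permutation
# importance estimates how much each input position drives the predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))            # e.g., 10 encoded sequence positions
y = 3 * X[:, 3] - 2 * X[:, 7] + rng.normal(scale=0.2, size=300)

model = GradientBoostingRegressor(random_state=2).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=2)

# Report the positions the model relies on most.
for i in np.argsort(result.importances_mean)[::-1][:3]:
    print(f"position {i}: importance {result.importances_mean[i]:.3f}")
```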

Longo continued that innovation in AI/machine learning today is “not necessarily about creating new individual components, but about putting them together in interesting ways.”

He cited Ordaos’ multitask learning model as an example.

Traditionally, ML models have been developed by training an algorithm on a specific domain: a structure predictor, for example, trains only on structures and, with a few more steps, yields a model. Using that model for a slightly different purpose requires retraining. In contrast, the Ordaos model learns from multiple tasks simultaneously, which somewhat contradicts Ahmed’s view of algorithm specificity.
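The sketch below shows the general shape of such a multitask model, not Ordaos’ actual architecture: a shared encoder feeds two task-specific heads, and both tasks are trained jointly. All dimensions, task names and data are assumptions (PyTorch).

```python
# Generic multitask learning sketch: shared encoder, one head per task,
# trained jointly so the tasks share a representation. Placeholder data.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with one output head per task."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_structure = nn.Linear(hidden, 1)  # e.g., a structure-related score
        self.head_binding = nn.Linear(hidden, 1)    # e.g., a binding-related score

    def forward(self, x):
        h = self.shared(x)
        return self.head_structure(h), self.head_binding(h)

model = MultiTaskNet(in_dim=32)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(128, 32)                            # placeholder inputs
y1, y2 = torch.randn(128, 1), torch.randn(128, 1)   # placeholder task labels

for _ in range(100):
    pred1, pred2 = model(x)
    # Joint loss: both tasks update the shared encoder at once.
    loss = nn.functional.mse_loss(pred1, y1) + nn.functional.mse_loss(pred2, y2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```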

Choosing the right algorithms

AI/machine learning analytic approaches have the potential to help develop an enhanced systems-level understanding of disease mechanisms and treatment implications, and to bring consistency to the heterogeneous mix of existing genetic and statistical methods. Realizing this value, however, requires choosing the right algorithms for the job.

“It is important to measure and avoid algorithmic bias,” Ahmed said. “Categorizing tasks based on the available predictor variables is an essential step to properly address the problem of choosing the appropriate AI/ML algorithm.”
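One simple way to measure such bias, sketched below on assumed, synthetic data, is to compare a model’s performance across population subgroups; large gaps suggest the model treats groups unevenly.

```python
# Minimal bias check: compare model performance across subgroups.
# Groups and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 8))
group = rng.integers(0, 2, size=600)                 # e.g., two ancestry groups
y = (X[:, 0] + 0.3 * group + rng.normal(size=600) > 0).astype(int)

model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)[:, 1]

for g in (0, 1):
    mask = group == g
    print(f"group {g}: AUC={roc_auc_score(y[mask], probs[mask]):.3f}")
# Large gaps between groups suggest the model may encode bias.
```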

Ahmed said, “In my lab, we practice AI/machine learning-based personalized medicine. We produce AI/machine learning-ready datasets based on clinical and multi-omics/genomic profiles and are developing automated pipelines to analyze them and perform predictive analyses.”

“Moreover, we are dealing with ethical issues, which include protecting health information associated with multi-omics/genome data sets,” he continued.

The trend of analytics is shifting from generating big data to predictive analysis, interpretation, and use of that data. For these predictions to be accurate, the underlying assumptions must also be accurate, and this requires choosing appropriate algorithms for the questions.
