<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>deep learning in cancer diagnostics &#8211; Science</title>
	<atom:link href="https://scienmag.com/tag/deep-learning-in-cancer-diagnostics/feed/" rel="self" type="application/rss+xml" />
	<link>https://scienmag.com</link>
	<description></description>
	<lastBuildDate>Wed, 15 Apr 2026 12:02:42 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://scienmag.com/wp-content/uploads/2024/07/cropped-scienmag_ico-32x32.jpg</url>
	<title>deep learning in cancer diagnostics &#8211; Science</title>
	<link>https://scienmag.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">73899611</site>
	<item>
		<title>AI Predicts Early Gastric Cancer Recurrence via Biopsy</title>
		<link>https://scienmag.com/ai-predicts-early-gastric-cancer-recurrence-via-biopsy/</link>
		
		<dc:creator><![CDATA[SCIENMAG]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 12:02:42 +0000</pubDate>
				<category><![CDATA[Medicine]]></category>
		<category><![CDATA[AI for early gastric cancer recurrence prediction]]></category>
		<category><![CDATA[AI-powered cancer treatment planning]]></category>
		<category><![CDATA[artificial intelligence in oncology]]></category>
		<category><![CDATA[automated tumor aggressiveness detection]]></category>
		<category><![CDATA[deep learning in cancer diagnostics]]></category>
		<category><![CDATA[digital biopsy technology]]></category>
		<category><![CDATA[gastric cancer relapse risk assessment]]></category>
		<category><![CDATA[histopathological image analysis]]></category>
		<category><![CDATA[improving gastric cancer survival rates]]></category>
		<category><![CDATA[machine learning for biopsy interpretation]]></category>
		<category><![CDATA[neural networks in pathology]]></category>
		<category><![CDATA[non-invasive cancer prognostication]]></category>
		<guid isPermaLink="false">https://scienmag.com/ai-predicts-early-gastric-cancer-recurrence-via-biopsy/</guid>

					<description><![CDATA[In a groundbreaking advance poised to redefine the clinical landscape of gastric cancer management, a team of researchers has unveiled a pioneering digital biopsy tool powered by deep learning algorithms designed to predict early recurrence in gastric cancer patients. This innovative approach signals a transformative shift in oncological diagnostics, leveraging artificial intelligence to extract nuanced [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>In a groundbreaking advance poised to redefine the clinical landscape of gastric cancer management, a team of researchers has unveiled a pioneering digital biopsy tool powered by deep learning algorithms designed to predict early recurrence in gastric cancer patients. This innovative approach signals a transformative shift in oncological diagnostics, leveraging artificial intelligence to extract nuanced biological insights that remain elusive to conventional pathological assessment.</p>
<p>Gastric cancer remains one of the deadliest malignancies worldwide, largely due to its often-late diagnosis and the high propensity for postoperative recurrence. Early identification of patients at elevated risk of relapse is critical for tailoring adjuvant therapies and improving survival outcomes. Traditional biopsy techniques, while instrumental, are limited by invasiveness and interpretative variability. The advent of a non-invasive digital biopsy method utilizing deep learning heralds a new era in cancer prognostication.</p>
<p>At the heart of this technological leap is a sophisticated neural network architecture trained on vast datasets of histopathological images sourced from gastric cancer patients. By ingesting digitized tissue slides, the model learns to discern complex morphological patterns and subtle features indicative of tumor aggressiveness and recurrence potential. Unlike human observers, these algorithms can integrate multidimensional data, transcending conventional visual analysis to predict biological behavior with unprecedented accuracy.</p>
<p>The digital biopsy does not require additional tissue sampling; instead, it reinterprets existing pathological imaging with computational precision. This paradigm allows for rapid, reproducible assessment without the logistical and ethical challenges of further invasive sampling. Crucially, the model&#8217;s predictive capability is calibrated to identify recurrence risk within a clinically relevant early postoperative window, offering oncologists a vital prognostic tool for therapeutic decision-making.</p>
<p>Validation of the model involved rigorous retrospective and prospective clinical cohorts, demonstrating its robustness and generalizability across diverse patient populations. The deep learning system consistently outperformed traditional staging metrics and established biomarkers, underscoring the potency of artificial intelligence in refining cancer prognosis. This robustness is attributable to the model&#8217;s ability to integrate both spatial tissue heterogeneity and texture-based features that human pathology evaluation often overlooks.</p>
<p>Underlying this success is the model’s architecture, which employs convolutional neural networks (CNNs) optimized for image recognition tasks. CNNs are adept at identifying hierarchical feature representations, capturing low-level edges and textures while contextualizing them within broader morphological frameworks. This hierarchical learning mimics aspects of human visual processing but with the scalability and objectivity of machine computation.</p>
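<p>As a rough illustration of the hierarchical feature learning described above, a convolutional classifier for digitized tissue patches can be sketched as follows. This is a minimal toy example, not the authors' published architecture: the layer widths, the patch size, and the two-class recurrence output are illustrative assumptions.</p>
<pre><code>import torch
import torch.nn as nn

class PatchRecurrenceCNN(nn.Module):
    """Toy CNN: early blocks learn low-level edges and textures,
    deeper blocks compose them into broader morphological patterns."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1),   # low-level edges, textures
            nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1),  # mid-level motifs (nuclei clusters, glands)
            nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), # high-level tissue context
            nn.ReLU(inplace=True), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One 256x256 RGB patch from a digitized slide yields recurrence-risk logits.
model = PatchRecurrenceCNN()
logits = model(torch.randn(1, 3, 256, 256))
</code></pre>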
<p>In addition to predicting early recurrence, the system offers interpretability features enabling clinicians to visualize which tissue regions contribute most significantly to risk predictions. This transparency enhances clinical trust and facilitates integrative decision-making, bridging the gap between black-box AI models and practical oncological application. By highlighting histological hallmarks linked to aggressive phenotypes, the tool also fuels ongoing research into gastric cancer biology.</p>
<p>The integration of this digital biopsy into routine clinical workflows promises multiple advantages. It can streamline patient stratification for adjuvant therapy trials, personalize follow-up protocols, and potentially reduce healthcare costs by focusing resources on patients with the highest need. Moreover, its scalability offers promise for deployment in resource-limited settings where expert pathological review is often scarce.</p>
<p>Researchers employed meticulous preprocessing steps to ensure data quality, including stain normalization and artifact removal, which are critical for model accuracy. These technical refinements guard against biases induced by slide preparation variability and enable the model to generalize across different laboratory conditions, an essential feature for real-world application.</p>
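<p>The study's exact preprocessing pipeline is not detailed here, but stain normalization of the kind described is commonly done by matching a slide's color statistics to those of a reference slide. A minimal Reinhard-style sketch follows, assuming scikit-image is available; the reference statistics are placeholders, not values from the paper.</p>
<pre><code>import numpy as np
from skimage import color

def reinhard_normalize(img_rgb, ref_mean, ref_std):
    """Match the LAB-space mean/std of a slide image to reference
    statistics, damping stain variability between laboratories."""
    lab = color.rgb2lab(img_rgb)
    for c in range(3):
        ch = lab[..., c]
        ch = (ch - ch.mean()) / (ch.std() + 1e-8)    # z-score this slide's channel
        lab[..., c] = ch * ref_std[c] + ref_mean[c]  # rescale to the reference
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)

# Reference statistics would be computed once from a well-stained slide;
# the numbers below are placeholders for illustration only.
ref_mean = np.array([65.0, 15.0, -10.0])
ref_std = np.array([15.0, 8.0, 6.0])
img = np.random.rand(64, 64, 3)              # stand-in for a slide tile
normalized = reinhard_normalize(img, ref_mean, ref_std)
</code></pre>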
<p>Beyond its immediate clinical utility, this study exemplifies the expanding role of digital pathology combined with AI in oncology. The digital biopsy model could serve as a template for similar approaches in other cancer types, where early prediction of recurrence remains a daunting challenge. Extending these methodologies may ultimately facilitate a new generation of personalized oncology care predicated on multi-modal data fusion.</p>
<p>Despite the promise, several challenges remain before widespread adoption. Regulatory approval pathways must adapt to accommodate AI-based diagnostics, and prospective clinical trials are necessary to definitively establish its impact on patient outcomes. Additionally, ethical considerations regarding data privacy, algorithmic fairness, and explainability will be paramount to ensuring equitable and responsible deployment.</p>
<p>The researchers foresee continual improvement of the model through incorporation of multi-omics data, including genomic and transcriptomic profiles, harmonizing molecular and morphological insights for even finer prognostic granularity. Such integrative frameworks could elucidate tumor evolution dynamics, resistance mechanisms, and potential therapeutic targets, further enhancing personalized medicine.</p>
<p>In sum, this deep learning-based digital biopsy represents a landmark convergence of pathology, artificial intelligence, and clinical oncology. By transforming static histological images into dynamic, predictive biomarkers, it promises to increase diagnostic precision, tailor treatments, and ultimately improve survival rates for gastric cancer patients worldwide. This innovation stands as a testament to the power of interdisciplinary collaboration in tackling complex medical challenges.</p>
<p>The publication of these findings in <em>Nature Communications</em> underscores the scientific rigor and transformative potential of the work. As the oncology community embraces this new tool, it sets the stage for a future where AI-driven diagnostics are integral to cancer care, offering hope and enhanced clinical pathways for countless patients facing this formidable disease.</p>
<hr />
<p><strong>Subject of Research</strong>: Deep learning-based digital biopsy for predicting early recurrence in gastric cancer</p>
<p><strong>Article Title</strong>: A deep learning–based digital biopsy for predicting early recurrence in gastric cancer</p>
<p><strong>Article References</strong>:<br />
Ding, P., Chen, S., Guo, H. <em>et al.</em> A deep learning–based digital biopsy for predicting early recurrence in gastric cancer. <em>Nat Commun</em> (2026). <a href="https://doi.org/10.1038/s41467-026-71347-6">https://doi.org/10.1038/s41467-026-71347-6</a></p>
<p><strong>Image Credits</strong>: AI Generated</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">151513</post-id>	</item>
		<item>
		<title>Deep Learning-Powered Virtual Multiplex Immunostaining of Label-Free Tissues Advances Vascular Invasion Assessment</title>
		<link>https://scienmag.com/deep-learning-powered-virtual-multiplex-immunostaining-of-label-free-tissues-advances-vascular-invasion-assessment/</link>
		
		<dc:creator><![CDATA[SCIENMAG]]></dc:creator>
		<pubDate>Fri, 06 Mar 2026 13:45:40 +0000</pubDate>
				<category><![CDATA[Technology and Engineering]]></category>
		<category><![CDATA[AI-powered immunohistochemistry]]></category>
		<category><![CDATA[deep learning for pathology]]></category>
		<category><![CDATA[deep learning in cancer diagnostics]]></category>
		<category><![CDATA[label-free tissue imaging]]></category>
		<category><![CDATA[multiplexed immunostaining without staining]]></category>
		<category><![CDATA[non-destructive tissue analysis methods]]></category>
		<category><![CDATA[overcoming limitations of conventional IHC]]></category>
		<category><![CDATA[rapid multiplexed imaging techniques]]></category>
		<category><![CDATA[scalable AI solutions in clinical pathology]]></category>
		<category><![CDATA[UCLA cancer research innovations]]></category>
		<category><![CDATA[vascular invasion assessment in thyroid cancer]]></category>
		<category><![CDATA[virtual multiplex immunostaining technology]]></category>
		<guid isPermaLink="false">https://scienmag.com/deep-learning-powered-virtual-multiplex-immunostaining-of-label-free-tissues-advances-vascular-invasion-assessment/</guid>

					<description><![CDATA[In a transformative leap for cancer diagnostics, a pioneering study by researchers at the University of California, Los Angeles (UCLA), in collaboration with global partners, has unveiled a cutting-edge deep learning-based method for virtual multiplexed immunostaining (mIHC). This innovative technology offers a rapid, accurate, and non-destructive alternative to conventional staining techniques, which are often labor-intensive [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>In a transformative leap for cancer diagnostics, a pioneering study by researchers at the University of California, Los Angeles (UCLA), in collaboration with global partners, has unveiled a cutting-edge deep learning-based method for virtual multiplexed immunostaining (mIHC). This innovative technology offers a rapid, accurate, and non-destructive alternative to conventional staining techniques, which are often labor-intensive and prone to inconsistencies. Published in the journal BME Frontiers, the breakthrough leverages the power of artificial intelligence to generate multiplexed immunostained images from label-free tissue sections, promising to redefine how pathologists assess vascular invasion in thyroid cancer.</p>
<p>Traditional immunohistochemistry (IHC) remains a cornerstone in oncology for identifying cellular markers critical to diagnosis and treatment planning. However, it requires physically staining separate tissue sections for each marker, such as ERG for endothelial cells or PanCK for epithelial cells, which can lead to increased sample consumption, elevated costs, and potential variability between sections. Even the advanced multiplexed IHC methods, though capable of simultaneous multi-marker staining, demand complex protocols and specialized instrumentation, limiting their accessibility within routine clinical pathology. These challenges have driven the search for more efficient, scalable solutions.</p>
<p>The UCLA-led team, spearheaded by professors Aydogan Ozcan and Nir Pillar, has developed a revolutionary approach that transcends traditional staining limitations by utilizing autofluorescence (AF) microscopy combined with advanced deep learning architectures. Their method captures unstained tissue images under multiple autofluorescence channels (DAPI, FITC, TxRed, and Cy5), providing rich intrinsic biochemical information without the need for exogenous dyes. This label-free imaging serves as the input for a conditional generative adversarial network (cGAN), which holistically synthesizes high-fidelity virtual IHC images encompassing ERG, PanCK, and classic hematoxylin and eosin (H&amp;E) stains from the same tissue section.</p>
<p>At the heart of this system lies a cGAN framework composed of two neural networks working in tandem: a generator tasked with producing realistic virtual stains, and a discriminator that critically evaluates the authenticity of these images to refine the generator’s output iteratively. Enhancing the model’s multiplexing capability, the researchers incorporated a Digital Staining Matrix (DSM), a novel component concatenated with label-free inputs, enabling simultaneous generation of multiple marker images from a single input. This design eliminates the need for repeated physical staining procedures, preserving precious tissue material and expediting diagnostic workflows.</p>
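<p>The published network is more elaborate, but the conditioning idea, concatenating per-pixel Digital Staining Matrix planes with the label-free input so that a single generator can emit any requested stain, can be sketched as follows. The layer widths and the one-hot stain encoding are illustrative assumptions, not the paper's implementation.</p>
<pre><code>import torch
import torch.nn as nn

class DSMConditionedGenerator(nn.Module):
    """Toy pix2pix-style generator: four autofluorescence channels
    (DAPI, FITC, TxRed, Cy5) are concatenated with Digital Staining
    Matrix planes that encode which stain to synthesize."""
    def __init__(self, af_channels=4, num_stains=3):  # e.g. ERG, PanCK, HE
        super().__init__()
        self.num_stains = num_stains
        self.net = nn.Sequential(
            nn.Conv2d(af_channels + num_stains, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),  # virtual RGB stain
        )

    def forward(self, af_img, stain_idx):
        b, _, h, w = af_img.shape
        # Broadcast a one-hot stain code into per-pixel conditioning planes.
        dsm = torch.zeros(b, self.num_stains, h, w, device=af_img.device)
        dsm[:, stain_idx] = 1.0
        return self.net(torch.cat([af_img, dsm], dim=1))

# The same unstained input yields different virtual stains on request;
# during training, a discriminator would judge each output against real stains.
gen = DSMConditionedGenerator()
af = torch.randn(1, 4, 128, 128)
virtual_erg = gen(af, stain_idx=0)
virtual_he = gen(af, stain_idx=2)
</code></pre>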
<p>The team rigorously trained and validated their virtual mIHC model using a comprehensive paired dataset of autofluorescence and histochemically stained images collected from thyroid tissue microarrays. This extensive training allowed the cGAN to learn complex mappings between unstained autofluorescent signals and their stained counterparts, overcoming heterogeneity in tissue architecture and staining intensity. Quantitative assessments revealed that the synthetic images achieved remarkable concordance with traditional IHC slides in terms of cellular morphology, staining patterns, and marker localization.</p>
<p>To establish clinical relevance, blinded evaluations were conducted by board-certified pathologists who verified that the virtual stains faithfully replicated key diagnostic features of ERG and PanCK expression, as well as general tissue morphology via H&amp;E staining. Importantly, the method demonstrated exceptional accuracy in identifying and localizing vascular invasion within thyroid tumor samples — a critical parameter linked to metastatic potential and patient prognosis. The pathologists noted the virtual stains preserved spatial context and cellular detail, essential for nuanced histopathological interpretation.</p>
<p>This virtual multiplexed immunostaining technology carries profound implications for both research and clinical practice. By obviating the need for multiple physical stainings, it mitigates the loss of tissue samples, reduces turnaround times, and lowers procedural costs. The AI-driven approach also circumvents variability inherent to manual staining protocols, enhancing reproducibility and diagnostic confidence. Moreover, its reliance on label-free autofluorescence images suggests easy integration with existing microscopy setups, facilitating deployment even in resource-limited settings.</p>
<p>Beyond thyroid cancer, the research team anticipates extending this framework to a wide array of tissue types and pathological conditions. Future studies are planned to validate performance across diverse multi-institutional cohorts, ensuring robustness and generalizability. The approach heralds a new paradigm for multiplexed histological analysis, where virtual staining powered by deep learning can augment or even replace conventional methods, leading to more personalized and timely patient care.</p>
<p>From a technical perspective, this work exemplifies the synergy between optical imaging and cutting-edge generative models. The cGAN’s capability to learn conditional mappings enables it to disentangle complex fluorescence signals and reconstruct multiple stains with high fidelity. The integration of the DSM further advances multiplexing by providing the network with explicit instructions about the desired stains, a strategy that can be adapted for other marker panels as the field evolves. This flexibility is pivotal for tailoring diagnostics to specific clinical questions.</p>
<p>In sum, the UCLA team’s deep learning-enabled virtual multiplexed immunostaining represents a watershed moment in digital pathology, combining precision, efficiency, and scalability. It opens a new vista for histopathology where AI augments human expertise, optimizes resource use, and sharpens diagnostic accuracy. As this technology matures, it promises to become a mainstay in pathology labs worldwide, catalyzing improved outcomes for patients confronting cancer and other diseases characterized by complex tissue microenvironments.</p>
<hr />
<p><strong>Subject of Research</strong>: Human tissue samples<br />
<strong>Article Title</strong>: Deep Learning-Enabled Virtual Multiplexed Immunostaining of Label-Free Tissue for Vascular Invasion Assessment<br />
<strong>News Publication Date</strong>: 10-Feb-2026<br />
<strong>Web References</strong>: <a href="http://dx.doi.org/10.34133/bmef.0226">10.34133/bmef.0226</a><br />
<strong>Image Credits</strong>: Ozcan Lab@UCLA</p>
<h4><strong>Keywords</strong></h4>
<p>Deep learning, Immunohistochemistry, Artificial intelligence, Multiplexed immunostaining, Autofluorescence microscopy, Digital pathology, Generative adversarial networks, Thyroid cancer, Vascular invasion, Histopathology</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">141673</post-id>	</item>
		<item>
		<title>Deep Learning Predicts Kidney Cancer Stages</title>
		<link>https://scienmag.com/deep-learning-predicts-kidney-cancer-stages/</link>
		
		<dc:creator><![CDATA[SCIENMAG]]></dc:creator>
		<pubDate>Fri, 17 Oct 2025 16:33:58 +0000</pubDate>
				<category><![CDATA[Cancer]]></category>
		<category><![CDATA[3D deep learning models for medical imaging]]></category>
		<category><![CDATA[artificial intelligence in radiology]]></category>
		<category><![CDATA[clear cell renal cell carcinoma research]]></category>
		<category><![CDATA[CT scan analysis for cancer]]></category>
		<category><![CDATA[deep learning in cancer diagnostics]]></category>
		<category><![CDATA[enhancing precision in cancer treatment planning]]></category>
		<category><![CDATA[improving accuracy in cancer diagnostics]]></category>
		<category><![CDATA[kidney cancer staging using AI]]></category>
		<category><![CDATA[multicenter study on ccRCC]]></category>
		<category><![CDATA[preoperative tumor staging methods]]></category>
		<category><![CDATA[retrospective data analysis in healthcare]]></category>
		<category><![CDATA[Transformer-ResNet architecture in medicine]]></category>
		<guid isPermaLink="false">https://scienmag.com/deep-learning-predicts-kidney-cancer-stages/</guid>

					<description><![CDATA[In a groundbreaking advance poised to reshape the landscape of preoperative cancer diagnostics, researchers have unveiled a sophisticated deep learning approach using CT scans to predict tumor staging in clear cell renal cell carcinoma (ccRCC). This multicenter study leverages cutting-edge artificial intelligence to enhance the precision of T and TNM staging, critical elements in planning [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>In a groundbreaking advance poised to reshape the landscape of preoperative cancer diagnostics, researchers have unveiled a sophisticated deep learning approach using CT scans to predict tumor staging in clear cell renal cell carcinoma (ccRCC). This multicenter study leverages cutting-edge artificial intelligence to enhance the precision of T and TNM staging, critical elements in planning effective treatment strategies for this prevalent kidney cancer subtype. By integrating a novel Transformer-ResNet (TR-Net) architecture, the research promises to reduce the traditional reliance on subjective radiological interpretation and the interobserver variability it entails.</p>
<p>The study, encompassing data from over a thousand ccRCC patients collected retrospectively across five distinct medical centers, reflects one of the most comprehensive efforts to date to apply AI in this clinical context. The researchers combined data from two centers for model development and testing, while further data from three additional centers served as external validation sets, ensuring the robustness and generalizability of the models. Such a vast, multicenter dataset is crucial for training AI systems that maintain accuracy across diverse populations and imaging protocols.</p>
<p>Central to this endeavor were two 3D deep learning models designed explicitly for preoperative staging tasks: one targeting tumor size and extent classification (T staging) categorized into T1, T2, and combined T3 + T4 stages, and the other predicting the full TNM stage spectrum from I through IV. These models utilized corticomedullary phase CT scans, a crucial imaging phase highlighting tumor vascularity and structure, which offers rich information pivotal for staging analysis. The incorporation of a Transformer-ResNet backbone enabled the models to capture complex spatial relationships within volumetric imaging data efficiently.</p>
<p>Performance metrics across the validation cohorts underscored the promising capabilities of these AI tools. For T staging, the models achieved micro-average AUC values exceeding 0.93, indicating excellent overall discriminatory power, accompanied by macro-average AUCs close to 0.85 and accuracy rates surpassing 84% on average. Similarly, for TNM staging, micro-AUCs hovered around 0.93 with macro-AUCs between 0.81 and 0.89, reflecting consistent precision across different staging categories. These results suggest that AI-assisted staging could feasibly complement or even surpass traditional radiological evaluation in the preoperative setting.</p>
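<p>For readers unfamiliar with the distinction: micro-averaging pools every class-versus-rest decision into one ROC curve, while macro-averaging computes one AUC per stage and takes their unweighted mean, so each stage counts equally regardless of prevalence. A short scikit-learn sketch with synthetic placeholder labels and scores, not the study's data:</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

# Synthetic placeholder data: 3 T-stage classes (T1, T2, T3+T4).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=200)
y_score = rng.dirichlet(np.ones(3), size=200)   # stand-in for softmax outputs

y_bin = label_binarize(y_true, classes=[0, 1, 2])

# Micro: pool every class-vs-rest decision into a single ROC curve.
micro_auc = roc_auc_score(y_bin, y_score, average="micro")
# Macro: one AUC per stage, then the unweighted mean across stages.
macro_auc = roc_auc_score(y_bin, y_score, average="macro")
print(f"micro-AUC={micro_auc:.3f}  macro-AUC={macro_auc:.3f}")
</code></pre>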
<p>However, the models showed relatively diminished accuracy when identifying advanced tumor subclasses, particularly T3 + T4 tumors and stage III in the TNM classification. AUC values in these categories ranged from approximately 0.67 to 0.80, indicating moderate performance and highlighting ongoing challenges in differentiating more complex tumor presentations. These findings emphasize the nuanced difficulties deep learning models face when confronting heterogeneous and often ambiguous radiographic features typical of higher-stage tumors.</p>
<p>To enhance clinical interpretability, the researchers employed Gradient-weighted Class Activation Mapping (Grad-CAM), visualizing the model’s focal areas within the tumor regions during prediction. These heatmaps not only confirmed that the AI anchored its decisions on clinically relevant tumor characteristics but also offered a transparent mechanism to build clinician trust in the algorithms. Interpretability remains a crucial requirement for regulatory acceptance and real-world deployment of AI in healthcare.</p>
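<p>Grad-CAM itself is model-agnostic: it weights a chosen convolutional layer's activation maps by the spatially pooled gradients of the target class score and keeps the positive part as a heatmap. A compact 2D sketch of the core computation follows (the study applies the idea to 3D CT volumes; the model and target layer here are hypothetical):</p>
<pre><code>import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    """Grad-CAM: weight each activation map of target_layer by the
    spatially pooled gradient of the target class score, then ReLU."""
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove()
    h2.remove()

    a, g = acts[0], grads[0]                   # each shaped (1, C, H, W)
    weights = g.mean(dim=(2, 3), keepdim=True) # pooled gradients per channel
    cam = F.relu((weights * a).sum(dim=1))     # weighted sum over channels
    cam = cam.detach()
    return cam / (cam.max() + 1e-8)            # normalized heatmap in [0, 1]
</code></pre>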
<p>Importantly, the study extended beyond pure algorithmic development by implementing a human-machine collaboration experiment, revealing that radiologists supported by AI predictions achieved higher diagnostic accuracy than either the radiologists or the model operating alone. This synergistic effect suggests deep learning models are not intended to replace medical expertise but rather to augment radiologists’ capabilities, reduce interobserver variability, and foster more standardized staging outcomes essential for personalized therapy.</p>
<p>The technical backbone of the deep learning methodology consisted of a Transformer-ResNet architecture, tailored to integrate the strengths of convolutional neural networks in feature extraction with the sophisticated contextual learning capabilities of transformer units. This hybrid design enables the model to effectively analyze 3D volumetric data, capturing the intricate texture and morphological patterns that inform tumor staging in ccRCC. Such architectural innovations represent a vital advance over previous 2D or less context-aware approaches.</p>
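<p>The paper's TR-Net specifics are not reproduced here, but the hybrid pattern it describes, a convolutional stem feeding spatial tokens into a transformer encoder, can be sketched for 3D input as follows. The depths, widths, and input volume size are illustrative assumptions.</p>
<pre><code>import torch
import torch.nn as nn

class ToyTRNet(nn.Module):
    """Hybrid 3D CNN + transformer: convolutions capture local texture,
    self-attention relates distant regions of the CT volume."""
    def __init__(self, num_classes=3, dim=64):   # T1 / T2 / T3+T4
        super().__init__()
        self.stem = nn.Sequential(               # ResNet-style 3D conv stem
            nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, vol):                      # vol: (B, 1, D, H, W)
        f = self.stem(vol)                       # (B, dim, D', H', W')
        tokens = f.flatten(2).transpose(1, 2)    # one token per spatial location
        ctx = self.encoder(tokens)               # global context via self-attention
        return self.head(ctx.mean(dim=1))        # pooled volume-level logits

model = ToyTRNet()
logits = model(torch.randn(1, 1, 32, 64, 64))    # corticomedullary-phase crop
</code></pre>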
<p>While the current results indicate substantial progress, the authors acknowledge that further refinement is necessary, particularly in enhancing performance for advanced-stage tumors. This may involve incorporating multimodal imaging inputs, finer-grained subclassifications, or supplementary clinical data to enrich model context. Moreover, prospective studies and integration into clinical workflows are essential next steps to validate utility and impact in routine oncology practice.</p>
<p>The translational potential of this research is significant. Accurate preoperative staging guides surgical planning, eligibility for targeted therapies, and prognostication. By providing radiologists with interpretable, AI-driven insights, these models may expedite decision-making, improve patient outcomes, and reduce healthcare costs driven by diagnostic uncertainty or treatment complications. Given the global burden of kidney cancer, accessible AI tools could become invaluable in diverse healthcare settings.</p>
<p>This study also exemplifies the growing trend toward leveraging multi-institutional collaborations in medical AI research, which is critical to overcoming overfitting and ensuring model generalizability across populations and hardware variability. The robust external validations employed here serve as a benchmark for future investigations aiming to translate AI models from experimental settings into dependable clinical assets.</p>
<p>In conclusion, this seminal investigation into CT-based deep learning for ccRCC staging marks a transformative step forward in oncologic imaging. By harnessing advanced neural architectures and extensive multicenter data, the research outlines a path toward more objective, reproducible, and clinically empowering diagnostic tools. While further enhancements remain imperative, the demonstrated interpretability and human-machine synergy provide a compelling blueprint for the future integration of AI into cancer care paradigms.</p>
<p>As the medical community increasingly embraces artificial intelligence, studies such as these underscore the critical balance between technological innovation and clinical applicability. Future directions may explore real-time decision support during surgical and interventional procedures, integration with pathology and genomics, and expansion to other malignancies. Ultimately, AI-powered imaging analyses promise to sharpen the precision of cancer staging and personalize treatment strategies, bringing tangible benefits to patient care worldwide.</p>
<p>These pioneering CT-based 3D TR-Net deep learning models thus represent not only a technical achievement but also a catalyst for evolving multidisciplinary collaboration and a more nuanced understanding of cancer heterogeneity. Their deployment in clinical practice could redefine standards of care in renal oncology and potentially serve as a template for AI applications across myriad disease states.</p>
<hr />
<p><strong>Subject of Research</strong>: Preoperative T and TNM staging in clear cell renal cell carcinoma using CT-based deep learning</p>
<p><strong>Article Title</strong>: Multicenter study of CT-based deep learning for predicting preoperative T staging and TNM staging in clear cell renal cell carcinoma</p>
<p><strong>Article References</strong>:<br />
Li, W., Xi, Y., Lu, M. <em>et al.</em> Multicenter study of CT-based deep learning for predicting preoperative T staging and TNM staging in clear cell renal cell carcinoma. <em>BMC Cancer</em> 25, 1604 (2025). <a href="https://doi.org/10.1186/s12885-025-14836-z">https://doi.org/10.1186/s12885-025-14836-z</a></p>
<p><strong>Image Credits</strong>: Scienmag.com</p>
<p><strong>DOI</strong>: <a href="https://doi.org/10.1186/s12885-025-14836-z">https://doi.org/10.1186/s12885-025-14836-z</a></p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">92994</post-id>	</item>
	</channel>
</rss>
