Inside AI Minds: How Machines Learn Just Like Humans

July 2, 2025
in Mathematics
Image: The connection between human and machine learning (Credit: DTU)
A groundbreaking study published in Nature Communications by researchers at the Technical University of Denmark unveils a remarkable geometric principle connecting human and machine learning paradigms. The study elucidates how a mathematical property known as convexity underpins the formation of conceptual knowledge both in the human brain and in artificial intelligence (AI) systems, promising to transform our understanding of how learning occurs across these disparate domains.

Convexity, at its core, is a geometric concept that has historically been applied in mathematics and cognitive science to describe how ideas cluster together, forming coherent conceptual spaces. For humans, concepts such as “cat” or “wheel” do not exist as isolated points but rather as regions where diverse instances cluster cohesively. This coherence is characterized by convexity: if two points belong to a concept, every point along the shortest path connecting them also lies within that concept’s region. Imagine a rubber band stretched around all the examples of a concept, encapsulating its essential variations without gaps or outliers. This principle facilitates robust abstraction, enabling effortless generalization from limited examples.
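
To make the idea concrete, here is a minimal sketch (not taken from the study) of the Euclidean version of this test: sample points along the straight segment between two members of a concept and check that each stays in the concept. The nearest-centroid rule and all names below are illustrative assumptions.

```python
# Illustrative sketch only: a concept region is Euclidean-convex if points
# sampled on the straight segment between any two of its members are still
# assigned to the same concept. The nearest-centroid rule is a hypothetical
# stand-in for however concepts are read out of an embedding space.
import numpy as np

def segment_stays_in_concept(a, b, assign_concept, n_samples=10):
    """Return True if every interpolant between a and b keeps a's concept."""
    target = assign_concept(a)
    for t in np.linspace(0.0, 1.0, n_samples):
        point = (1.0 - t) * a + t * b        # point on the straight segment
        if assign_concept(point) != target:
            return False
    return True

# Toy usage: two concepts defined by nearest centroid in a 2-D embedding space.
centroids = {"cat": np.array([0.0, 0.0]), "wheel": np.array([5.0, 5.0])}
assign = lambda x: min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
print(segment_stays_in_concept(np.array([0.2, -0.1]), np.array([-0.3, 0.4]), assign))
```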

The study probes whether AI systems, particularly deep neural networks, mimic this inherently human property in their internal representations. Despite AI’s complexity and learning mechanisms that vastly differ from biological brains, models trained on vast datasets develop internal ‘latent spaces’—abstract multidimensional maps—where knowledge is organized. The critical question addressed is whether these latent spaces also exhibit convexity, implying a shared structural principle between human cognition and machine intelligence.

To measure this, researchers introduced innovative metrics to examine two distinct types of convexity within AI latent spaces: Euclidean convexity and graph convexity. Euclidean convexity assesses whether the straight line between any two points in a given conceptual region lies completely within that region—akin to traditional geometric convexity. Graph convexity extends this concept to non-Euclidean, curved spaces common in neural networks where straight lines give way to minimal or geodesic paths across a network of data points. This nuanced approach recognizes that AI’s internal landscapes are often highly complex and nonlinear.
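
As a rough illustration of how a graph-convexity score of this kind could be computed, the sketch below builds a k-nearest-neighbour graph over labelled embeddings, takes shortest paths between same-concept pairs, and counts how often the interior nodes of those paths share the concept label. It is a simplified reconstruction under assumed design choices (the value of k, how pairs are sampled), not the paper's exact metric.

```python
# Rough sketch of a graph-convexity score: geodesic paths between same-concept
# points should pass mostly through points of that same concept.
import numpy as np
import networkx as nx

def graph_convexity(embeddings, labels, k=10, n_pairs=200, seed=0):
    """embeddings: (n, d) array; labels: length-n NumPy array of concept labels."""
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    graph = nx.Graph()
    for i in range(len(embeddings)):
        for j in np.argsort(dist[i])[1:k + 1]:            # k nearest neighbours of i
            graph.add_edge(i, int(j), weight=float(dist[i, j]))
    hits, total = 0, 0
    for _ in range(n_pairs):
        concept = rng.choice(np.unique(labels))
        members = np.flatnonzero(labels == concept)
        i, j = rng.choice(members, size=2, replace=False)
        try:
            path = nx.shortest_path(graph, int(i), int(j), weight="weight")
        except nx.NetworkXNoPath:
            continue
        interior = path[1:-1]                              # nodes strictly between the endpoints
        hits += sum(labels[node] == concept for node in interior)
        total += len(interior)
    return hits / total if total else 1.0                  # 1.0 when sampled paths are single edges
```

A Euclidean variant would replace the graph geodesic with points sampled on the straight segment, as in the earlier sketch.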

The team applied these convexity metrics across a diverse spectrum of AI models handling different data modalities—images, text, audio, human activity patterns, and medical datasets. Remarkably, they discovered that convexity is not a rare artifact but a pervasive property emerging during training, regardless of data type or task. This suggests convexity may be a fundamental and universal organizing principle in deep learning, mirroring its importance in human concept formation.

Moreover, the study delved into how convexity evolves through the AI training pipeline. Deep models commonly undergo two stages: pretraining on broad datasets to learn generalizable features and fine-tuning on specific tasks to refine their abilities. Results indicated that pretraining already establishes convex conceptual regions. Fine-tuning amplifies this property, sharpening the boundaries of classifications and increasing the convexity of the decision regions. This refinement mirrors how humans start with broad categories and progressively specialize with experience and practice.
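
Reusing the illustrative graph_convexity function above, a toy comparison of the two stages might look like the snippet below. The synthetic “fine-tuned” embeddings simply have tighter, better-separated clusters; this mimics the pattern the study reports rather than reproducing its experiments.

```python
# Toy illustration (requires numpy and graph_convexity from the sketch above):
# tighter, better-separated concept clusters yield a higher convexity score.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1, 2], 40)
means = np.array([[0, 0], [4, 0], [0, 4]], dtype=float)
pretrained = means[labels] + rng.normal(scale=2.0, size=(120, 2))   # diffuse clusters
finetuned  = means[labels] + rng.normal(scale=0.5, size=(120, 2))   # sharpened clusters

print("pretrained convexity:", graph_convexity(pretrained, labels))
print("fine-tuned convexity:", graph_convexity(finetuned, labels))
```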

Intriguingly, the researchers identified a predictive relationship between the convexity of pretraining representations and the models’ performance after fine-tuning. Models exhibiting more convex concept regions early on perform better in specialized tasks later—indicating that convexity could serve as a reliable indicator of a model’s learning potential. This insight opens exciting possibilities for AI development by evaluating models based on the geometry of their internal representations before task-specific training.
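
Operationally, such a relationship could be checked with a rank correlation across a pool of models. The sketch below uses made-up per-model numbers purely to show the computation; it is not the study's analysis.

```python
# Hypothetical check of the reported relationship: across several models,
# rank-correlate pretraining convexity with accuracy after fine-tuning.
# The numbers are invented purely to demonstrate the calculation.
from scipy.stats import spearmanr

pretrain_convexity = [0.62, 0.71, 0.55, 0.80, 0.68]   # one (made-up) value per model
finetuned_accuracy = [0.74, 0.81, 0.69, 0.88, 0.79]   # same models, after fine-tuning

rho, p_value = spearmanr(pretrain_convexity, finetuned_accuracy)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A high positive rho would support using pretraining convexity as an early
# indicator of a model's downstream potential.
```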

The implications of these findings extend beyond academic insight. Convexity may offer a new lens for designing AI systems that generalize more efficiently from limited data, a longstanding challenge in machine learning. If AI architectures can be guided to form convex decision regions during training, they could achieve greater accuracy and reliability even when examples are scarce. This would be transformative for real-world applications where data collection is costly or sensitive, such as healthcare diagnostics or personalized education technologies.

Furthermore, the identification of convexity as a shared structural feature bridges the conceptual divide between biological and artificial intelligence. It suggests that despite evolutionary and mechanistic differences, learning systems converge towards similar organizational principles to process and interpret information. This connection enhances the interpretability and explainability of AI systems—key concerns as these technologies increasingly influence critical societal functions.

The study lays crucial groundwork for future interdisciplinary research integrating cognitive science, geometry, and AI development. By formalizing and quantifying convexity in artificial neural representations, it provides tools to explore how machines can be made to ‘think’ more like humans in a rigorous, mathematically grounded sense. This convergence could usher in a new generation of explainable AI systems whose decision-making processes are transparent and intuitively aligned with human conceptual understanding.

As AI continues to permeate diverse sectors, from autonomous vehicles to conversational agents, the ability to reliably quantify and cultivate convexity in latent spaces promises to make these technologies safer, more trustworthy, and easier to collaborate with. The potential for convexity-focused training protocols invites a paradigm shift from purely performance-driven model optimization to geometrically principled design, fostering AI whose internal logic resonates with human thought processes.

While much remains to be explored, including the mechanistic origins of convexity during learning and how it interacts with other geometric and topological features of latent spaces, this pioneering work lays the foundation for demystifying the deep learning “black box.” It points to an elegant and universal principle that not only connects how humans and machines conceptualize the world but also suggests practical pathways for crafting AI systems that are more intelligent, adaptable, and aligned with human values.

This breakthrough is part of the broader “Cognitive Spaces – Next Generation Explainable AI” project funded by the Novo Nordisk Foundation, which aims to develop transparent and user-interpretable AI systems. By shining a light on the geometric secrets of AI’s internal representations, this research is poised to influence both theoretical understanding and applied innovation across artificial intelligence and cognitive neuroscience for years to come.


Subject of Research: Convex decision regions in deep network representations bridging human and machine learning

Article Title: On convex decision regions in deep network representations

News Publication Date: 2-Jul-2025

Web References: https://www.nature.com/articles/s41467-025-60809-y

Image Credits: DTU

Keywords: Convexity, deep learning, latent spaces, human cognition, explainable AI, neural networks, conceptual spaces, machine learning, fine-tuning, pretraining

Tags: abstraction in artificial intelligence, clustering concepts in AI, conceptual knowledge formation, convexity in cognitive science, deep neural networks and human cognition, generalization from limited examples, geometric principles in AI, human and machine learning similarities, internal representations of AI systems, mathematical properties in learning, Nature Communications study on AI, transformative AI research findings