
CIS Seminar (Center for Interdisciplinary Studies Seminar Series) | Prof. Carlo V. Cannistraci: Brain-inspired sparse network science for next generation efficient and sustainable AI

Time: 14:00-15:00, Monday, April 14, 2025
Venue: E10-315, Yungu Campus, Westlake University
Host: Prof. Tailin Wu, Assistant Professor, School of Engineering, Westlake University; jointly affiliated with the Center for Interdisciplinary Studies (CIS)
Audience: All faculty and students
Category: Academics & Research
Lecture Language: English
Speaker:
Prof. Carlo V. Cannistraci
Zhou Yahui Chair Professor
Chief Scientist, Tsinghua Laboratory of Brain and Intelligence (THBI)
Director, Center for Complex Network Intelligence (CCNI) at THBI
Tsinghua University
Carlo Vittorio Cannistraci is a theoretical engineer and computational innovator. He is a Chair Professor in the Tsinghua Laboratory of Brain and Intelligence (THBI) and an adjunct professor in the Department of Computer Science and the School of Biomedical Engineering at Tsinghua University. He directs the Center for Complex Network Intelligence (CCNI) in THBI, which aims to create pioneering algorithms at the interface between information science, the physics of complex systems, complex networks, and machine intelligence, with a focus on brain- and life-inspired computing for efficient artificial intelligence and big data analysis. These computational methods are often applied to precision biomedicine, neuroscience, and the social and economic sciences.
Abstract:
Artificial neural networks (ANNs) are foundational to contemporary artificial intelligence (AI); however, their conventional fully connected architectures are computationally inefficient. Contemporary large language models consume vast amounts of power, at rates more than 100 times that of the human brain. In stark contrast, the brain's inherently sparse connectivity supports exceptional capabilities with minimal expenditure: it learns on just a few watts.
Brain-inspired network science research can play an important role in the design of low-power, efficient deep learning. We need to develop concepts and theories for an ecological and sustainable approach to AI, and some of these new computing paradigms can be inspired by the physics of the brain's network architecture and its complex systems biology.
At the Center for Complex Network Intelligence (CCNI) within the Tsinghua Laboratory of Brain and Intelligence (THBI), our research focuses on three pivotal features of brain networks that contribute to efficient computation:
1. Connectivity Sparsity: Implementing sparse connections to reduce computational overhead while maintaining performance (a minimal code sketch follows this list).
2. Connectivity Morphology: Exploring the spatial patterns of neural connections to optimize information processing.
3. Neuro-Glia Coupling: Investigating the interactions between neurons and glial cells to enhance computational efficiency.
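To make feature 1 concrete, here is a minimal sketch, assuming Python with PyTorch, of a linear layer whose connectivity is restricted by a fixed binary mask so that only about 1% of the possible weights exist. The class name SparseLinear and the random mask are illustrative assumptions for this sketch, not the CCNI implementation; topology-aware rules such as those discussed below would choose the links instead.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLinear(nn.Module):
    # Hypothetical sketch: a linear layer with a fixed binary mask that
    # enforces sparse connectivity at a given density (e.g. ~1%).
    def __init__(self, in_features, out_features, density=0.01):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Random mask for illustration only; a topology-aware rule would
        # place these links instead of sampling them uniformly.
        mask = (torch.rand(out_features, in_features) < density).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Masked-out weights contribute nothing to the output and, because
        # the mask multiplies them, also receive zero gradient.
        return F.linear(x, self.weight * self.mask, self.bias)

# e.g. layer = SparseLinear(512, 256); y = layer(torch.randn(8, 512))

Note that multiplying a dense weight tensor by a mask only simulates sparsity; realizing actual compute and memory savings requires sparse storage and sparse kernels.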
This talk will introduce the Cannistraci-Hebb Training soft rule (CHTs), a brain-inspired network science theory that employs a gradient-free approach, relying solely on network topology to predict sparse connectivity during dynamic sparse training. CHTs have demonstrated the potential to achieve ultra-sparse networks with approximately 1% connectivity that outperform fully connected networks on various tasks.

Additionally, we will discuss our recent study on the relationship between sparse morphological connectivity and spatiotemporal intelligence. This research introduces neuromorphic dendritic network computation with silent synapses, a model that emulates visual motion perception by integrating synaptic organization with a dendritic tree-like morphology. The model exhibits exceptional performance on visual motion perception tasks, underscoring the potential of bio-inspired approaches to enhance the transparency and efficiency of modern AI systems.
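Returning to the CHT idea above, the sketch below is a rough illustration of a gradient-free, topology-only prune-and-regrow step, not the published CHT soft rule itself. The function name toy_dst_step and the prune fraction are invented for the sketch, and the regrowth score is a simple bipartite L3 path count in the spirit of the Cannistraci-Hebb link predictors.

import numpy as np

def toy_dst_step(weights, mask, prune_frac=0.3):
    # One toy prune-and-regrow update; assumes weights are already zero
    # wherever mask == 0 on entry. Copies avoid mutating caller arrays.
    mask = mask.copy()
    n_prune = int(prune_frac * mask.sum())

    # 1) Prune: remove the smallest-magnitude active links.
    mags = np.where(mask == 1, np.abs(weights), np.inf)
    drop = np.argsort(mags, axis=None)[:n_prune]
    mask.flat[drop] = 0

    # 2) Regrow, gradient-free: score each absent link by the number of
    #    length-3 paths between its endpoints in the bipartite graph of
    #    output and input units, then activate the top-scoring links.
    score = mask @ mask.T @ mask                  # (n_out, n_in) L3 path counts
    score = np.where(mask == 1, -1.0, score)      # consider only absent links
    grow = np.argsort(score, axis=None)[::-1][:n_prune]
    mask.flat[grow] = 1

    weights = weights * mask                      # regrown links restart from zero
    return weights, mask

# Example: a 256x512 layer kept at ~1% density
rng = np.random.default_rng(0)
mask = (rng.random((256, 512)) < 0.01).astype(float)
weights = rng.normal(0, 0.01, (256, 512)) * mask
weights, mask = toy_dst_step(weights, mask)

The point of the sketch is that regrowth consults only the topology (the mask), never the gradients; the actual CHT soft rule is considerably more refined (see references 1 and 2 below).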
References
1. Brain-Inspired Sparse Training enables Transformers and LLMs to perform as fully connected. Y Zhang, J Zhao, W Wu, Z Liao, U Michieli, CV Cannistraci. arXiv preprint arXiv:2501.19107, 2025.
2. Epitopological Learning and Cannistraci-Hebb Network Shape Intelligence Brain-Inspired Theory for Ultra-Sparse Advantage in Deep Learning. Y Zhang, J Zhao, W Wu, A Muscoloni, CV Cannistraci. The Twelfth International Conference on Learning Representations (ICLR) 2024.
3. Neuromorphic dendritic network computation with silent synapses for visual motion perception. E Baek, S Song, CK Baek, Z Rong, L Shi, CV Cannistraci. Nature Electronics, 1-12, 2024.
4. Network shape intelligence outperforms AlphaFold2 intelligence in protein interaction prediction. I Abdelhamid, A Muscoloni, ..., CV Cannistraci. bioRxiv 2023.08.10.552825, 2023.
5. From link-prediction in brain connectomes and protein interactomes to the local-community-paradigm in complex networks. CV Cannistraci, G Alanis-Lobato, T Ravasi. Scientific reports 3 (1), 1613, 2013.
Contact:
Center for Interdisciplinary Studies (CIS), Ms. Zilin Zhu, Email: zhuzilin@westlake.edu.cn