Keynotes

Keynote 1
New Generation of Information Technology
Time: 9:00-10:00, Nov. 26, 2022
Prof. Guoliang Chen (Academician of the Chinese Academy of Sciences, Professor at Nanjing University of Posts and Telecommunications)

Biography: Guoliang Chen is an Academician of the Chinese Academy of Sciences and a Professor at Nanjing University of Posts and Telecommunications, where he is a PhD supervisor and Honorary Dean of the School of Computer Science and Technology. He is also Director of the university's Institute of High Performance Computing and Big Data Processing, Director of its Academic Committee, and Deputy Director of the Academic Committee of the Jiangsu Provincial High-Tech Key Laboratory for Wireless Sensor Networks. He was among the first recipients of the National Distinguished Teacher award for higher education and receives a special government allowance. He graduated from Xi'an Jiaotong University in 1961. Professor Chen also holds part-time positions as Dean of the School of Software Science and Technology, University of Science and Technology of China; Dean of the School of Computer Science, Shenzhen University; Director of the National High-Performance Computing Center; Director of the Ministry of Education's Instructional Committee for Basic Computer Courses in Higher Education; Director of International High-Performance Computing (Asia); and a director of the China Computer Federation and of its High Performance Computing Technical Committee. In addition, he serves as Director of the Academic Committee of the State Key Laboratory of Computer Science.
His research interests center on parallel algorithms and high-performance computing and their applications. Professor Chen has undertaken more than 20 research projects under the National 863 Program, the National Climbing Program, the National 973 Program, and the National Natural Science Foundation of China; many of his results have been widely cited in China and abroad and are regarded as internationally advanced. He has published more than 200 papers and more than 10 monographs and textbooks. His awards include the Second Prize of the National Science and Technology Progress Award; the First and Second Prizes of the Ministry of Education Science and Technology Progress Award; the First Prize of the Chinese Academy of Sciences Science and Technology Progress Award; the Second Prize of the National Teaching Achievement Award; the First Prize of the Ministry of Water Resources Science and Technology Progress Award; the Second Prize of the Anhui Province Science and Technology Progress Award; and the 2009 Anhui Provincial Major Science and Technology Achievement Award. Professor Chen also received the Important Contribution Award for individuals on the 15th anniversary of the National 863 Program, the Baosteel Education Fund Outstanding Teacher Special Award, and the title of Model Worker of Anhui Province.
Over many years of teaching and research on parallel algorithms, Professor Chen has developed a complete parallel-algorithm discipline spanning algorithm theory, algorithm design, algorithm implementation, and algorithm application. He proposed the "parallel machine architecture - parallel algorithm - parallel programming" methodology for parallel computing research, established China's first National High Performance Computing Center, built a research and teaching base for parallel algorithms in China, and has trained more than 200 postdoctoral researchers, doctoral students, and postgraduates. He is the academic leader of non-numerical parallel algorithm research in China and is influential in academic and educational circles at home and abroad. After founding the National High Performance Computing Center in 1995, he led the development of the KD-50, KD-60, and KD-90 high-performance computers, built successively in 2007, 2009, 2012, and 2014 on single-core, quad-core, and eight-core versions of Godson (Loongson), China's first domestically developed high-performance general-purpose processor; these systems provide infrastructure for cloud computing, big data processing, and general-purpose high-performance computing in China.

Keynote 2
Adaptive Machine Learning for Data Streams
Time: 10:00-11:00, Nov. 26, 2022
Prof. Albert Bifet (University of Waikato, New Zealand)

Biography: Albert Bifet is Director of the AI Institute at the University of Waikato and a Professor of Big Data at the Institut Polytechnique de Paris. Previously he worked at Huawei Noah's Ark Lab in Hong Kong, Yahoo Labs in Barcelona, and UPC BarcelonaTech. He is co-author of a book on machine learning from data streams, published by MIT Press, and one of the leaders of MOA, scikit-multiflow, and SAMOA, software environments for implementing algorithms and running experiments for online learning from evolving data streams.

Speech abstract: Advanced analysis of big data streams from sensors and devices is bound to become a key area of data mining research as the number of applications requiring such processing increases. Dealing with the evolution over time of such data streams, i.e., with concepts that drift or change completely, is one of the core issues in stream mining. In this talk, I will present an overview of data stream mining, and I will introduce some popular open source tools for data stream mining.
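
As a flavor of what these tools look like in practice, here is a minimal sketch of the prequential (test-then-train) loop with drift detection, written against scikit-multiflow, one of the libraries named above. The particular generator, learner, and detector choices are illustrative, and exact class names and stream setup vary between library versions.

```python
from skmultiflow.data import SEAGenerator, ConceptDriftStream
from skmultiflow.trees import HoeffdingTreeClassifier
from skmultiflow.drift_detection import ADWIN

# Synthetic stream whose concept changes gradually around sample 5000
stream = ConceptDriftStream(stream=SEAGenerator(classification_function=0),
                            drift_stream=SEAGenerator(classification_function=2),
                            position=5000, width=1000, random_state=1)

model = HoeffdingTreeClassifier()
adwin = ADWIN()  # drift detector fed with the 0/1 correctness signal
n_correct = 0

# Prime the model with one sample so the first predict() is well-defined
X, y = stream.next_sample()
model.partial_fit(X, y, classes=[0, 1])

for i in range(10000):
    X, y = stream.next_sample()
    correct = int(model.predict(X)[0] == y[0])  # test first, then train
    n_correct += correct
    adwin.add_element(correct)
    if adwin.detected_change():          # drift detected: drop the stale model
        model = HoeffdingTreeClassifier()
    model.partial_fit(X, y, classes=[0, 1])

print("prequential accuracy: %.3f" % (n_correct / 10000))
```

Resetting the learner when ADWIN flags a change is the simplest adaptation strategy; MOA and scikit-multiflow also provide adaptive learners, such as adaptive Hoeffding trees, that manage this internally.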

Keynote 3
Physics-Informed Deep Learning for Quantum Systems: Moving from Speed to Accuracy
Time: 14:00-14:40, Nov. 26, 2022
Prof. Sam Vinko (University of Oxford, UK)

Biography: Sam Vinko is an Associate Professor of High Energy Density Physics at the University of Oxford, a Royal Society University Research Fellow, and a Fellow of Trinity College Oxford. He pioneered the use of x-ray free-electron lasers to explore extreme states of matter and is engaged in a wide range of experimental, computational, and theoretical investigations into strongly coupled quantum plasmas. He is particularly interested in how novel experimental facilities can be combined with advanced computational tools to further our understanding of matter at high densities and extreme pressures. In 2015 he shared the American Physical Society's John Dawson Award for Excellence in Plasma Physics Research, and in 2016 he won the Young Scientist Prize in Plasma Physics from the International Union of Pure and Applied Physics. He is a co-founder of Machine Discovery Ltd, a university spin-out company developing machine learning tools to enhance and accelerate computational research and development.

Speech abstract: Across the sciences, machine learning is often associated with the construction of surrogate models which, at the most basic level, trade prediction accuracy for execution speed. By replacing physical models with appropriately trained deep neural networks (NNs), faster predictions can be made, allowing for a more complete exploration of parameter space. This paradigm stands in sharp contrast with most recent physics-informed learning approaches, whose main objective is, instead, to improve the accuracy of scientific prediction, often sacrificing speed in the process. This approach forms the basis of recent advances combining density functional theory (DFT) for quantum chemistry calculations with machine learning to construct new chemically accurate exchange-correlation (xc) functionals. In particular, differentiable programming is proving to be a formidable tool for training NN-based xc functionals using a combination of quantum simulation and experimental data. In this talk I will provide a brief outline of recent developments in the field and present the first fully differentiable 3D density functional theory simulator (DQC, Differentiable Quantum Chemistry), in which the exchange-correlation functional can be efficiently represented by a trainable deep neural network [8]. We demonstrate how this approach helps construct highly accurate exchange-correlation functionals from heterogeneous experimental data even for extremely limited datasets: using only eight experimental values on diatomic molecules, the trained exchange-correlation networks improve the prediction accuracy of atomization energies across a collection of 104 molecules containing new bonds, new atoms, and new molecules not present in the training dataset. These advances promise to have multiple applications in the predictive modelling of real systems across chemistry, materials science, and high energy density physics.
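
The talk describes differentiating through a full 3D Kohn-Sham calculation; the deliberately simplified sketch below, with made-up densities and reference energies, a purely local functional, and no self-consistency loop, only illustrates the underlying differentiable-programming pattern: a trainable network sits inside a physics-style energy integral, and gradients from an experimental-data loss flow back through the simulation into the network's weights.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical 1D electron "densities" on a grid; in a real DFT code these
# would come from solving the Kohn-Sham equations self-consistently.
grid = torch.linspace(0.0, 1.0, 64)
densities = torch.stack(
    [torch.exp(-(grid - c) ** 2 / 0.02) for c in (0.3, 0.5, 0.7)])
targets = torch.tensor([-1.0, -1.3, -1.1])  # made-up reference energies

# Trainable local "xc functional": maps density n(r) to an energy density.
xc_net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

def energy(density):
    # E[n] ~ integral of e_xc(n(r)) * n(r) dr, approximated on the grid
    e_xc = xc_net(density.unsqueeze(-1)).squeeze(-1)
    return torch.trapz(e_xc * density, grid)

opt = torch.optim.Adam(xc_net.parameters(), lr=1e-2)
for step in range(500):
    opt.zero_grad()
    preds = torch.stack([energy(n) for n in densities])
    loss = ((preds - targets) ** 2).mean()
    loss.backward()  # gradients flow through the energy integral into xc_net
    opt.step()

print("final training loss:", float(loss))
```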

Keynote 4
Machine-Learning Enhanced Optimization for Task Scheduling in High-Performance Computing
Time: 9:00-10:00, Nov. 27, 2022
Prof. Hong Shen (Sun Yat-sen University, China)

Biography: Hong Shen is a specially appointed Professor at Sun Yat-sen University, China, where he was the founding Director of SYSU's Institute for Advanced Computing. He is also an Adjunct Professor at the University of Adelaide, Australia, where he was a tenured Professor (Chair of Computer Science) for 15 years. He received his BS degree from Beijing University of Science and Technology, his MS degree from the University of Science and Technology of China, and his PhD degree from Åbo Akademi University, Finland. With main research interests in parallel and distributed computing, privacy-preserving computing, and high-performance networks, he has led numerous research centers and projects in different countries. He has published 400+ papers, including over 100 in major international journals such as various IEEE and ACM transactions. Prof. Shen has received many honors and awards and has served in various roles in professional societies, on journal editorial boards, and on conference committees.

Speech abstract: Effectively scheduling application jobs submitted by geographically dispersed users is a core task of today's high-performance computing in large-scale datacenters, and it is especially important for managing shared resources in a cloud computing environment. High-performance computing jobs are large in scale and exhibit complex correlations among their computation tasks. The key to job scheduling is to optimize the scheduling of both the computation tasks within a job and the parallel data transmission flows (coflows) among the tasks according to their data correlations. Because these problems are NP-hard, various greedy strategies, heuristics, and machine-learning-based approaches have been proposed to obtain sub-optimal solutions. In view of the performance bottlenecks of existing scheduling methods, in this talk, as an example of bridging HPC and AI, I will introduce our recent work on combining machine learning and optimization techniques for scheduling the tasks of high-performance computing jobs. I will first give a comparative overview of traditional optimization strategies and machine learning, and outline our approaches to combining them in different settings to achieve performance improvements along different dimensions. Next, for offline task scheduling, I will present our approach of combining search with regression in the Bayesian optimization framework to accommodate jobs with weighted completion time. For online task scheduling, I will present our method of integrating greedy optimization with reinforcement learning to improve the overall performance guarantee. Finally, I will discuss how to promote the application of AI techniques in high-performance computing in general, as well as the integration of HPC and AI computing, an emerging trend of increasing importance to both domains.
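
To make the "search plus regression" idea concrete, the sketch below applies a surrogate-assisted search to a toy single-machine weighted-completion-time instance. It substitutes a random-forest regressor for the talk's Bayesian optimization machinery and is not the authors' method; it only illustrates screening many candidate schedules with a cheap learned model and spending exact evaluations on the most promising ones.

```python
import random
import numpy as np
from sklearn.ensemble import RandomForestRegressor

random.seed(0)
rng = np.random.default_rng(0)

# Toy instance: 12 jobs with (processing_time, weight) on one machine.
jobs = [(rng.uniform(1, 10), rng.uniform(1, 5)) for _ in range(12)]

def weighted_completion_time(order):
    """Exact objective: sum over jobs of weight * completion time."""
    t, total = 0.0, 0.0
    for j in order:
        p, w = jobs[j]
        t += p
        total += w * t
    return total

def features(order):
    # Crude schedule encoding: the position of each job in the permutation.
    pos = np.empty(len(jobs))
    pos[list(order)] = np.arange(len(jobs))
    return pos

# Warm-up: evaluate a few random schedules exactly to train the surrogate.
pool = [random.sample(range(len(jobs)), len(jobs)) for _ in range(30)]
X = np.array([features(o) for o in pool])
y = np.array([weighted_completion_time(o) for o in pool])
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

best = min(pool, key=weighted_completion_time)
for _ in range(20):
    # Screen many candidates with the cheap learned model ...
    cands = [random.sample(range(len(jobs)), len(jobs)) for _ in range(200)]
    scores = surrogate.predict(np.array([features(o) for o in cands]))
    pick = cands[int(np.argmin(scores))]
    # ... and spend an exact evaluation only on the most promising one.
    if weighted_completion_time(pick) < weighted_completion_time(best):
        best = pick
    X = np.vstack([X, features(pick)])
    y = np.append(y, weighted_completion_time(pick))
    surrogate.fit(X, y)

print("best weighted completion time:", round(weighted_completion_time(best), 2))
```

For this single-machine case, Smith's rule (sorting jobs by weight-to-processing-time ratio) is already optimal; surrogate-assisted search becomes interesting precisely in the correlated, multi-resource settings the talk addresses.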

Keynote 5
Data-efficient Graph Learning by Knowledge Transferring and Augmentation
Time: 10:00-11:00, Nov. 27, 2022
Prof. Xiangliang Zhang (University of Notre Dame, USA)

Biography: Dr. Xiangliang Zhang is an Associate Professor in the Department of Computer Science and Engineering at the University of Notre Dame, USA, where she directs the Machine Intelligence and Knowledge Engineering (MINE) Laboratory. She received her Ph.D. degree in computer science from INRIA-University Paris-Sud, France, in July 2010. She has authored or co-authored over 200 refereed papers in various journals and conferences. Her current research interests lie in designing machine learning algorithms for learning from complex, large-scale streaming data and graph data. She was invited to deliver an Early Career Spotlight talk at IJCAI-ECAI 2018. She regularly serves on the program committees of premier conferences such as SIGKDD (Senior PC), AAAI (Area Chair, Senior PC), and IJCAI (Area Chair, Senior PC). She also serves as Editor-in-Chief of SIGKDD Explorations and as an associate editor for IEEE Transactions on Dependable and Secure Computing (TDSC) and Information Sciences.

Speech abstract: Canonical graph learning models have achieved remarkable progress in modeling and inference on graph-structured data. However, they inevitably suffer from data scarcity: labels are scarce because data annotation is expensive and difficult in practice, and edges are incomplete because collecting true facts during graph construction is hard. This talk will focus on data-efficient graph learning, which attempts to address the prevalent data scarcity issue in graph mining problems. The general ideas are to transfer knowledge from related resources to obtain models that generalize to graphs with few annotations, or to alleviate the scarcity through data augmentation using generative models. Application examples in graph classification and graph completion will be presented to showcase these ideas.
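
As one concrete instance of augmentation for data-scarce graph learning, the sketch below shows edge dropping, a simple perturbation commonly used to create extra training views of a graph (for example in graph contrastive learning). The generative-model augmentations discussed in the talk are more sophisticated, so treat this only as an illustration of the general idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(edge_index, p=0.2):
    """Randomly remove a fraction p of edges to create an augmented view.

    edge_index: (2, E) array of [source; target] node indices, the common
    edge-list representation used by GNN libraries.
    """
    keep = rng.random(edge_index.shape[1]) >= p
    return edge_index[:, keep]

# Toy graph: 4 nodes, 5 edges stored as (source, target) pairs.
edges = np.array([[0, 0, 1, 2, 3],
                  [1, 2, 2, 3, 0]])
view1, view2 = drop_edges(edges), drop_edges(edges)
print(view1.shape[1], "and", view2.shape[1], "edges kept in the two views")
```

Training a graph model to produce consistent representations across such perturbed views lets it extract more signal from the same small set of labeled graphs.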

Keynote 6
A Hybrid Deep Learning Processor Architecture for Efficient Training
Time: 11:00-12:00, Nov. 27, 2022
Prof. Yunji Chen (Institute of Computing Technology, Chinese Academy of Sciences, China)

Biography: Yunji Chen was born in Nanchang, China, in 1983. He is a full professor at the Institute of Computing Technology, Chinese Academy of Sciences, where he leads a lab developing neural network processors. Before that, he worked on the Godson/Loongson project for more than ten years and was a chief architect of the Godson-3 microprocessor. Yunji Chen has authored or co-authored one book and over 60 papers at conferences (including ISCA, HPCA, MICRO, ASPLOS, ICSE, ISSCC, Hot Chips, IJCAI, FPGA, and SPAA) and in journals (including IEEE JSSC, IEEE TC, IEEE TPDS, IEEE TIP, IEEE TCAD, ACM TOCS, ACM TIST, ACM TACO, ACM TODAES, ACM Computing Surveys, and IEEE Micro). He received the ASPLOS'14 and MICRO'14 Best Paper Awards for advances in neural network processors. (http://www.ict.ac.cn/english/)

Speech abstract: Deep neural network (DNN) training is notoriously time-consuming, and quantization promises to improve training efficiency through reduced bandwidth/storage requirements and computation costs. However, state-of-the-art quantized algorithms with negligible training accuracy loss, which require on-the-fly statistic-based quantization over a large amount of data (e.g., neurons and weights) and high-precision weight updates, cannot be effectively deployed on existing DNN accelerators. To address this problem, we propose the first customized architecture for efficient quantized training with negligible accuracy loss. The proposed deep learning processor features a hybrid architecture consisting of an ASIC acceleration core and a near-data-processing (NDP) engine. The acceleration core mainly targets improving the efficiency of statistic-based quantization, with specialized computing units for both statistical analysis (e.g., determining the maximum) and data reformatting, while the NDP engine avoids transferring the high-precision weights from off-chip memory to the acceleration core. Experimental results show that on the evaluated benchmarks, our architecture improves the energy efficiency of DNN training by 6.41× and 1.62×, and performance by 4.20× and 1.70×, compared to GPU and TPU respectively, with ⩽ 0.4% accuracy degradation compared with full-precision training.
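
To make "statistic-based quantization" concrete, here is a minimal sketch of the two steps the abstract highlights: statistical analysis (scanning a tensor for its maximum magnitude) and data reformatting (rescaling and casting to int8). It illustrates the numerical operation the accelerator specializes, not the proposed architecture itself.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric statistic-based quantization of a float tensor."""
    max_abs = np.max(np.abs(x))              # on-the-fly statistic over the data
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)  # reformatting
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(4, 4)).astype(np.float32)  # stand-in weight tensor
q, s = quantize_int8(w)
print("max quantization error:", np.max(np.abs(w - dequantize(q, s))))
```

Because the statistic must be gathered over every tensor at every step while weight updates stay in high precision, a software implementation pays heavily in data movement, which is the bottleneck the hybrid acceleration-core/NDP design targets.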