2025 7th International Conference on Artificial Intelligence Technologies and Applications (ICAITA 2025)

Speakers



Prof. Weijia Jia

IEEE Fellow, Dean of the Institute of AI and Future Networks

Beijing Normal-Hong Kong Baptist University, China

BIO: Weijia Jia is Dean of the BNU-UIC Institute of Artificial Intelligence and Future Networks, Beijing Normal University (Zhuhai), and has been the Zhiyuan Chair Professor of Shanghai Jiao Tong University, China. He was Chair Professor and Deputy Director of the State Key Laboratory of Internet of Things for Smart City at the University of Macau. He received his BSc and MSc from Central South University, China, in 1982 and 1984, and his Master of Applied Science and PhD from the Polytechnic Faculty of Mons, Belgium, in 1992 and 1993, respectively, all in computer science. From 1993 to 1995 he was a research fellow at the German National Research Center for Information Technology (GMD) in Bonn (Sankt Augustin). From 1995 to 2013 he worked at the City University of Hong Kong as a professor. His contributions are recognized in optimal network routing and deployment, anycast and QoS routing, sensor networking, AI (knowledge relation extraction, NLP, etc.), and edge computing. He has over 500 publications in prestigious international journals and conferences, as well as research books and book chapters. He received best product awards from the International Science & Technology Expo (Shenzhen) in 2011 and 2012, and the First Prize of the Scientific Research Awards from the Ministry of Education of China in 2017 (list 2). He has served as area editor for various prestigious international journals and as chair, PC member, and keynote speaker for many top international conferences. He is a Fellow of the IEEE and a Distinguished Member of the CCF.


Speech Title: Edge and LLM Computing

Abstract: Edge Computing (EC) is a flexible architecture for supporting distributed domain-specific applications with cloud-like quality of service. However, current EC still lacks effective support mechanisms for the many heterogeneous task requirements with diversified QoS; such mechanisms are critical for industrial internet and smart city applications. Owing to their light weight and ease of deployment, containers have emerged as a promising approach for EC. Before a container can run, its image, composed of several layers, must exist locally. Existing work has conspicuously neglected the fact that scheduling tasks at the granularity of layers rather than whole images can significantly reduce task completion time, which further helps meet the real-time and resource-efficiency requirements of resource-limited EC. Based on these observations, this talk introduces our recent investigations into novel task/container-layer scheduling algorithms for heterogeneous EC environments that work efficiently with LLMs.
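To make the layer-granularity idea concrete, here is a minimal illustrative Python sketch of a layer-aware greedy placer: it estimates each edge node's container startup delay from the bytes of image layers the node has not yet cached and places the task on the node with the smallest estimate. All names and the scheduling rule are hypothetical simplifications for illustration, not the algorithms presented in the talk.

# Illustrative toy only: a layer-aware greedy placer, not the talk's algorithm.

def missing_bytes(image_layers, cached, layer_size):
    """Bytes of image layers that a node still has to pull."""
    return sum(layer_size[l] for l in image_layers if l not in cached)

def place_task(image_layers, nodes, layer_size):
    """Place a task on the node with the smallest estimated pull delay.

    nodes: node_id -> {"cached": set of layer ids, "bw": bytes per second}
    """
    delays = {n: missing_bytes(image_layers, v["cached"], layer_size) / v["bw"]
              for n, v in nodes.items()}
    best = min(delays, key=delays.get)
    nodes[best]["cached"].update(image_layers)  # pulled layers stay cached for reuse
    return best, delays[best]

layer_size = {"os": 80e6, "python": 40e6, "app_a": 5e6, "app_b": 5e6}
nodes = {"edge1": {"cached": {"os", "python"}, "bw": 10e6},
         "edge2": {"cached": set(), "bw": 10e6}}
# Task A's image shares the "os" and "python" layers already cached on edge1,
# so only 5 MB must be pulled there instead of the full 125 MB image.
print(place_task({"os", "python", "app_a"}, nodes, layer_size))  # ('edge1', 0.5)

An image-level scheduler would treat the 125 MB image as indivisible and see no advantage on edge1; reasoning per layer exposes the cache reuse that shortens startup.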






Prof. Yuanyan Tang

IEEE Life Fellow, IAPR Fellow, AAIA Fellow

University of Macau, China

BIO: Yuanyan Tang is a Chair Professor in the Faculty of Science and Technology at the University of Macau and Professor/Adjunct Professor/Honorary Professor at several institutes in China, the USA, Canada, France, and Hong Kong. His current interests include wavelets, pattern recognition, image processing, and artificial intelligence. He has published more than 400 academic papers and is the author or coauthor of over 25 monographs, books, and book chapters. He is the Founder and Editor-in-Chief of the International Journal on Wavelets, Multiresolution, and Information Processing (IJWMIP) and an Associate Editor of several international journals. He is the Founder and Chair of the Pattern Recognition Committee in IEEE SMC. He has served as general chair, program chair, and committee member for many international conferences. Dr. Tang is the Founder and General Chair of the series of International Conferences on Wavelet Analysis and Pattern Recognition (ICWAPR). He is the Founder and Chair of the Macau Branch of the International Association for Pattern Recognition (IAPR). Dr. Y. Y. Tang is a Fellow of the IEEE and a Fellow of the IAPR.


Speech Title: My Experience with Artificial Intelligence

Abstract: Artificial intelligence has developed rapidly in recent years and is now widely applied. This is the result of more than 70 years of hard work by countless scientists and engineers. The author has been engaged in artificial intelligence research for more than 40 years, since 1982, and has experienced several stages of the field's development. In this talk, the author uses accessible language to introduce the basic principles of artificial intelligence, its main subfields, and the artificial intelligence work the author has participated in over more than 40 years. The talk also presents some examples of artificial intelligence applications. Finally, the bottlenecks and future directions of artificial intelligence are discussed.





Prof. De-Shuang Huang

IEEE Fellow, IAPR Fellow, AAIA Fellow & AIIA Fellow

Ningbo Institute of Digital Twin, China

BIO: 

Biological sequence motif mining by deep learning, i.e., mining transcription/translation factor (TF) binding motifs, plays a central role in gene regulation. Knowing the binding specificities of TFs is essential for developing models of the regulatory processes in biological systems and for deciphering the mechanisms of gene expression. In this talk, I will first present the fundamental issues in motif prediction for biological sequences, and then systematically discuss motif prediction in combination with the popular emerging technology of deep neural networks. First, we briefly introduce several classical deep neural network models and the current state of research on biological sequence motif prediction. Second, we give a detailed introduction to two types of models for sequence motif prediction: "sequence-level" and "nucleotide-level" models. Finally, we point out and review some new research problems in this area.
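As a concrete illustration of a "sequence-level" model of the kind described above, the sketch below implements a minimal DeepBind-style convolutional classifier in Python (PyTorch). The architecture and all hyperparameters are illustrative assumptions, not the speaker's models; a "nucleotide-level" model would keep per-position scores instead of pooling them away.

# A minimal DeepBind-style "sequence-level" classifier (illustrative sketch).
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a 4 x L tensor (one row per base)."""
    x = torch.zeros(4, len(seq))
    for i, b in enumerate(seq):
        x[BASES.index(b), i] = 1.0
    return x

class SequenceLevelMotifNet(nn.Module):
    """Convolution filters act as learnable motif scanners; global max
    pooling asks "does the motif occur anywhere in the sequence?", giving
    one binding score per sequence. A nucleotide-level variant would skip
    the pooling and emit one score per position instead."""

    def __init__(self, n_filters=16, motif_len=12):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, kernel_size=motif_len)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),  # max over all positions
            nn.Flatten(),
            nn.Linear(n_filters, 1),  # binding logit
        )

    def forward(self, x):               # x: (batch, 4, L)
        return self.head(self.conv(x))  # logits: (batch, 1)

# Usage on a single 60-bp sequence (untrained, so the score is arbitrary).
x = one_hot("ACGT" * 15).unsqueeze(0)
print(SequenceLevelMotifNet()(x).shape)  # torch.Size([1, 1])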



Speech Title: Preliminary Research on High-Order Nonstandard Tensor Representation Learning Models

Abstract: High-order nonstandard tensors are typical data structures in big data and artificial intelligence applications. Accurate and efficient representation learning of such tensors is a crucial prerequisite for subsequent knowledge discovery and pattern recognition. In this study, we focus on representation learning methods for third-order nonstandard tensors and propose a series of representation learning models based on the tensor CP decomposition principle, preliminarily achieving efficient and accurate representation learning for third-order nonstandard tensors. Related papers have been published in journals such as IEEE T-PAMI, T-KDE, and T-CYB.
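For readers unfamiliar with the underlying principle, the following is a minimal numpy sketch of rank-R CP (PARAFAC) decomposition of a third-order tensor via alternating least squares. It is a textbook baseline for the standard dense case only; the nonstandard-tensor models of the talk are not reproduced here.

# Textbook CP-ALS for a dense 3rd-order tensor (illustrative baseline only).
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product: (m, R) x (n, R) -> (m*n, R)."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def unfold(T, mode):
    """Mode-n unfolding of a 3rd-order tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iters=200, seed=0):
    """Fit T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r] by alternating
    least squares, updating one factor at a time with the others fixed."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
    for _ in range(n_iters):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Sanity check: recover a synthetic noiseless rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
A, B, C = cp_als(T, rank=2)
print(np.linalg.norm(T - np.einsum("ir,jr,kr->ijk", A, B, C)) / np.linalg.norm(T))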






Prof. Pietro S. Oliveto

Southern University of Science and Technology, China

BIO: Pietro Oliveto received the Laurea degree in computer science from the University of Catania, Italy, in 2005 and the PhD degree in computer science from the University of Birmingham, UK, in 2009. He was an EPSRC PhD+ Fellow (2009-2010) and an EPSRC Postdoctoral Fellow (2010-2013) at the University of Birmingham, UK, and a Vice-Chancellor's Fellow (2013-2016) and an EPSRC Early Career Fellow (2015-2020) at the University of Sheffield, UK. Before moving to SUSTech he held the Chair in Algorithms at the Department of Computer Science, University of Sheffield, UK. His main research interest is the performance analysis, in particular the time complexity, of bio-inspired computation techniques, including evolutionary algorithms, genetic programming, artificial immune systems, hyper-heuristics, and algorithm configuration. He is currently building a Theory of Artificial Intelligence Lab at SUSTech.


He has guest-edited special issues of the Journal of Computer Science and Technology, Evolutionary Computation, Theoretical Computer Science, IEEE Transactions on Evolutionary Computation, and Algorithmica. He co-chaired the IEEE Symposium on Foundations of Computational Intelligence (FOCI) from 2015 to 2021, was program co-chair of the ACM Conference on Foundations of Genetic Algorithms (FOGA 2021), and was Theory Track co-chair at GECCO 2022 and GECCO 2023. He serves on the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), was Leader of the Benchmarking Working Group of the COST Action ImAppNIO, is a member of the EPSRC Peer Review College, and is an Associate Editor of IEEE Transactions on Evolutionary Computation.


Speech Title: Computational Complexity Analysis of Sexual Evolution for the Design of Better General Purpose Algorithms for AI

Abstract: Large classes of the general-purpose optimisation algorithms at the heart of modern artificial intelligence and machine learning technologies are inspired by models of Darwinian evolution. In this talk we show how foundational computational complexity analysis of such algorithms leads to an understanding of their behaviour and performance. Such understanding in turn allows informed decisions on how to set their many parameters and how to improve the algorithms so that better solutions are obtained in shorter time. We provide two concrete examples of how such analyses can lead to counterintuitive insights into how to design algorithms inspired by sexual evolution (using populations and recombination), and how to set their parameters so that they considerably outperform their single-trajectory, mutation-only (asexual) counterparts both at hillclimbing unimodal functions and at escaping from local optima. We conclude the talk by presenting experimental results confirming that the superiority of the designed algorithms, proven for benchmark functions with significant structure, carries over to classical combinatorial optimisation problems with practical applications.
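As a flavour of the algorithm classes being analysed, the Python sketch below contrasts a mutation-only (1+1) EA with a toy steady-state GA that adds a population of two and uniform crossover, both run on the unimodal OneMax benchmark. It is a didactic illustration only, not the specific algorithms or parameter settings whose superiority is proven in the talk.

# Didactic contrast of mutation-only vs. crossover-based search on OneMax.
import random

def onemax(x):
    """Unimodal benchmark: number of 1-bits; optimum is the all-ones string."""
    return sum(x)

def mutate(x, rate):
    """Flip each bit independently with the given probability."""
    return [b ^ (random.random() < rate) for b in x]

def one_plus_one_ea(n, budget):
    """Asexual baseline: single trajectory, standard mutation rate 1/n."""
    x = [random.randint(0, 1) for _ in range(n)]
    for t in range(budget):
        if onemax(x) == n:
            return t
        y = mutate(x, 1 / n)
        if onemax(y) >= onemax(x):
            x = y
    return budget

def two_plus_one_ga(n, budget):
    """Toy sexual variant: two parents, uniform crossover, then mutation;
    the child replaces the worse parent if it is at least as good."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(2)]
    for t in range(budget):
        if max(onemax(z) for z in pop) == n:
            return t
        child = mutate([random.choice(pair) for pair in zip(*pop)], 1 / n)
        worst = min(range(2), key=lambda i: onemax(pop[i]))
        if onemax(child) >= onemax(pop[worst]):
            pop[worst] = child
    return budget

n = 100
print("(1+1) EA evaluations:", one_plus_one_ea(n, 10**6))
print("(2+1) GA evaluations:", two_plus_one_ga(n, 10**6))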