The 27th International Conference on Neural Information Processing
(ICONIP2020)

November 18 - November 22, 2020
Online
Bangkok, Thailand



The 27th International Conference on Neural Information Processing (ICONIP2020) aims to provide a leading international forum for researchers, scientists, and industry professionals working in neuroscience, neural networks, deep learning, and related fields to share their new ideas, progress, and achievements.

ICONIP2020 will be held online instead of physically in Bangkok, Thailand, during November 18 - November 22, 2020. It will still be collocated with the online events of CSBio2020 and DLAI4.

ICONIP2020 was originally planned to be held in Bangkok, Thailand, during November 18 - November 22, 2020, and to be collocated with several events, including ACML, iSAI-NLP, AIoT, CSBio, and DLAI4.


The ICONIP2020 Tutorial Session Schedule for 21 Nov. and 22 Nov. 2020 is available here.




The online conference web page is https://iconip2020.innovasive.co.th/

The ICONIP online conference starts at 8:00 am (GMT+7).

Remark: All times shown in the schedule are Thailand local time (GMT+7).




Neural Information Processing


27th International Conference, ICONIP 2020, Bangkok, Thailand, November 18–22, 2020, Proceedings, Part I - V 
Link to the proceedings
Proceedings Part I (LNCS) Proceedings Part II (LNCS) Proceedings Part III (LNCS) Proceedings Part IV (CCIS) Proceedings Part V (CCIS)

PROGRAM



The conference program is now available.


Update: 13 Nov. 2020

The program file can be downloaded here (via Dropbox).

ICONIP2020 Online presentation registration



Dear ICONIP2020 authors and audiences,
The 27th International Conference on Neural Information Processing (ICONIP2020), held on 18-22 Nov., will be a fully online conference. All presenters and audience members need to make some preparations.

The virtual presentations will be arranged via Zoom Meeting (https://zoom.us). A Zoom Meeting account is required to join the conference (a free basic account is fine). Presenters will be invited to their sessions via their Zoom accounts.

Please prepare for your online presence as follows:
      1. Download and register for the Zoom meeting application at https://zoom.us/signup. The Zoom free basic account is fine.
      2. All conference sessions will be organized through the Zoom video conference and Zoom chat application.
      3. The email address linked to your Zoom account must be submitted on the online conference registration page.
      4. The Zoom application will be the primary communication channel during the conference.
      5. Authors will be invited to join the Zoom chat channels related to their presentation session(s) and also the keynote sessions.
      6. Authors and listeners can join the desired conference sessions via the links in the main conference program table. A username and password are required to access the online conference webpage: https://iconip2020.innovasive.co.th
      7. Presenters have to join their presentation sessions ten (10) minutes before the session starts. Send a message via the Zoom chat channel to announce your attendance. Staff will assist all authors in arranging the presentation.
      8. Each author has 5 minutes for their presentation (not including the Q&A session).
      9. A live online presentation and a pre-recorded video are both options for presenters. However, all presenters have to upload a fifteen (15) minute video presentation to the provided webpage (see the instructions below) in case of network problems. (The online presentation registration web page will be available on 27 Oct. 2020.)
      10. Pre-record a 15-minute presentation video; one clip per paper is required. The video has to be uploaded to the online presentation registration website. The "Video preparation instructions" are shown below.
      11. The required information for online conference registration and video uploading is as follows (please prepare this information before going to the online presentation registration page):
          1. Username and password (please contact vajirasak.van@mail.kmutt.ac.th if you did not receive an email with the username and password) *** NOT the Zoom meeting account
          2. Presenter's name
          3. Presenter's photo (will be shown in the conference program)
          4. Presenter's email address linked to the Zoom account (Zoom registration page: https://zoom.us/signup)
          5. A 15-minute video clip (the video preparation instructions are shown below).
      12. The presenter has to register on the online conference page at URL: https://iconip2020.innovasive.co.th


All presentation sessions will be arranged during 18-20 Nov., between 7:30 and 17:00 (GMT+7, Thailand time).

A username and password are required for online conference registration, video uploading, and online presentation. If you have not received an email with the authentication information, please contact vajirasak.van@mail.kmutt.ac.th

Presentation video recording instructions:


This is a unique opportunity to showcase your research via a high-quality video, so please take great care in preparing your presentation. All pre-recorded presentations will be made available to conference participants during the conference period.

The recordings must satisfy the following requirements:
      - The video must be in English.
      - The recommended video resolution is 1280x720 (HD 720p).
      - The video must be no longer than 15 minutes in total.
      - The first 5 minutes should consist of a self-contained, high-level overview of the contribution, similar to a spotlight presentation. (See below for why this is important.)
      - Deadline for submitting your video: 7 Nov. 2020

Submitting your recording by this deadline is a requirement for inclusion in the proceedings and in the conference program.

During the conference, you will be required to be present in Q&A sessions, of which approximately 10 minutes will be dedicated to your paper. At the start of your Q&A sessions, only the 5-minute high-level overview at the start of your pre-recorded presentation will be streamed, to prime the attendees and get the discussion going. Please keep this in mind when preparing your presentation. Please contact vajirasak.van@mail.kmutt.ac.th for any issue related to the online presentation registration.

Registration Online Payment by PayPal


Type       | APNNS Members                            | APNNS Non-Members
Authors    | 200 USD (additional paper 100 USD each)  | 250 USD (additional paper 125 USD each)
Listeners  | 50 USD (General), 0 USD (Student)        | 100 USD (General), 15 USD (Student)

Extra pages: LNCS papers may have up to 2 extra pages with an additional page charge of 100 USD per page; CCIS papers may have a maximum of one extra page with an additional charge of 200 USD.


  1. [APNNS Member] https://www.apnns.org/member/my-page/iconip2020-payment/
  2. [APNNS Non-member] https://iconip2020.apnns.org/payment/
  3. For any problems with respect to PayPal registration, please contact vajirasak.van@mail.kmutt.ac.th

IMPORTANT NOTES

  1. An accepted paper will be published in the proceedings only if the final camera-ready version is accompanied by the payment of at least one regular registration by one of the authors.

  2. No accepted paper will be included in the proceedings without payment of the required registration fee.

  3. The page limit for each paper is specified in your acceptance email. Additional pages are permitted for an additional charge of 100 USD per page for LNCS papers and 200 USD for the extra page for CCIS papers.

  4. To qualify for the student rate, proof of current full-time student status (e.g., a copy of a student card or certification from the university) will be required.

  5. Cancellation Policy for Authors: A refund (less a 75 USD handling charge) will be provided for cancellation requests received by 28 Sept. 2020. Refunds will NOT be provided for cancellations after this date. No refunds for listeners.

  6. APNNS reserves the right to exclude a paper from distribution after the conference if the paper is not presented by one of the authors at the conference. In extenuating circumstances, alternative presentation arrangements can be made.

  7. To join APNNS, please go to the APNNS membership site (https://www.apnns.org/member/). The annual APNNS membership fee is 15 USD (regular) and 10 USD (student). Keep in mind that a few days will be needed to approve your membership.

Topics



ICONIP2020 will deliver keynote speeches, invited talks, full paper presentations, posters, tutorials, workshops, social events, etc. Topics covered include but are not limited to:




Theory and Algorithms

  • Causality and explainable AI
  • Computational intelligence
  • Control and decision theory
  • Constraint and uncertainty theory
  • Machine learning
  • Neurodynamics
  • Neural network models
  • Optimization
  • Pattern recognition
  • Time series analysis

Computational and Cognitive Neurosciences

  • Affective and cognitive learning
  • Biometric systems/interfaces
  • Brain-machine interface
  • Computational psychiatry
  • Decision making and control
  • Neuroeconomics
  • Neural data analysis
  • Reasoning and consciousness
  • Sensory perception
  • Social cognition








Human Centred Computing

  • Bioinformatics
  • Biomedical information
  • Healthcare
  • Human activity recognition
  • Human-centred design
  • Human–computer interaction
  • Neuromorphic hardware
  • Recommender systems
  • Social networks
  • Sports and rehabilitation

Applications

  • Big data analysis
  • Computational finance
  • Image processing and computer vision
  • Data mining
  • Information security
  • Information retrieval
  • Multimedia information processing
  • Natural language processing
  • Robotics and control
  • Web search and mining

Important Dates



Workshop/Special Session Proposal Deadline: May 1, 2020
Tutorial Proposal Deadline: May 1, 2020
Notification of Workshop/Special Session/Tutorial Proposal: May 8, 2020
Paper Submission Deadline: June 28, 2020 (extended from June 1, 2020)
Paper Notification Date: August 31, 2020
Paper Camera Ready Deadline: September 15, 2020
Date of Conference: November 18-22, 2020

Paper Information


Papers should be written in English and follow the Springer LNCS format. Review is single-blind, so author names may be shown in the submission. The submission of a paper implies that the paper is original, is not under review or copyright-protected elsewhere, and will be presented by an author if accepted. All submitted papers will be refereed by experts in the field based on the criteria of originality, significance, quality, and clarity.

The proceedings will be published in Springer's Lecture Notes in Computer Science (LNCS) and Communications in Computer and Information Science (CCIS) series. Selected papers will be published in a special issue of an SCI journal.

Instruction for Camera-ready submissions


  • Important 1: Each accepted paper should follow the LaTeX instructions below to prepare the final manuscript.

  • Important 2: No paper can be published in the proceedings without being accompanied by a Completed Springer Copyright Transfer Form. You must complete and submit this form to have your paper included in the conference proceedings.


Instruction for LNCS papers


  • Important 1: The page limit for accepted papers is 12 pages; up to two additional pages are allowed by paying an extra page charge of 100 USD per page. The file size limit is 8 MB.

  • Important 2: Each final paper submission must include a PDF file and all source files.

Once you are ready to submit it, please upload 3 files to the CMT system:

  • A PDF document with the final compiled camera-ready version of the paper. The PDF file should be named sub_ID.pdf, where ID is your paper ID; e.g., if your paper ID is 789, name your PDF sub_789.pdf.
  • A .zip file containing all the source files of the paper, namely the text of the paper and all the illustrative materials (figures), so that the PDF can be fully re-compiled from the source files. The zip file should be named source_ID.zip, where ID is your paper ID.
  • A signed copyright form: in order for your paper to be published in the conference proceedings, a *signed Springer Copyright Form* must be submitted for each paper. The PDF file should be named LNCS_copyright_ID.pdf, where ID is your paper ID. The form can be downloaded at https://www.apnns.org/ICONIP2020/file/ICONIP2020_Contract_Book_Contributor_Consent_to_Publish_LNCS_SIP.pdf.

Instruction for CCIS papers


  • Important 1: The page limit for accepted papers is 8 pages; one additional page is allowed by paying an extra page charge of 200 USD. The file size limit is 8 MB.

  • Important 2: Each final paper submission must include a PDF file and all source files.

  • Once you are ready to submit it, please upload 3 files to the CMT system:

    • A PDF document with the final compiled camera-ready version of the paper. The PDF file should be named sub_ID.pdf, where ID is your paper ID; e.g., if your paper ID is 789, name your PDF sub_789.pdf.
    • A .zip file containing all the source files of the paper, namely the text of the paper and all the illustrative materials (figures), so that the PDF can be fully re-compiled from the source files. The zip file should be named source_ID.zip, where ID is your paper ID.
    • A signed copyright form: in order for your paper to be published in the conference proceedings, a *signed Springer Copyright Form* must be submitted for each paper. The PDF file should be named CCIS_copyright_ID.pdf, where ID is your paper ID. The form can be downloaded at https://www.apnns.org/ICONIP2020/file/ICONIP2020_Contract_Book_Contributor_Consent_to_Publish_CCIS_SIP.pdf.

    LaTeX Instruction


    • You must use the LaTeX style from LNCS Springer templates provided in Overleaf:
      https://www.overleaf.com/latex/templates/springer-lecture-notes-in-computer-science/kzwwpvhwnvfj#.WuA4JS5uZpi
    • Please use as few LaTeX files as possible and name the main file clearly, e.g., main.tex, so it is clear to other people how the source should be compiled. Ideally, ensure that it is possible to compile your paper using pdflatex main.tex. Please place all references in the main.tex file (NOT in a .bib file).
    • Please mark the corresponding author clearly by putting \Letter in brackets after the author's name in the \author block. Use the package \usepackage[misc]{ifsym}. (See the sketch after this list.)
    • Please use the \tocauthor and \toctitle fields, as these are important for creating the publication's Table of Contents.
    • Please make sure that your paper is properly formatted. DO NOT change the format required by Springer or their style sheets. Do not reduce spaces, fonts, or anything else. Springer will remove all LaTeX instructions that change their format, and the paper length will be judged by whatever remains. Note that the contribution will be compiled from the sources, not from your PDF, and your contribution may look entirely different in the end if you modify the formatting.
    • Other advice for generating your camera-ready document can be found here:
      https://www.apnns.org/ICONIP2020/file/camera_ready_instructions.pdf
    • Once data processing is finished, Springer will contact all corresponding authors and ask them to check their papers. Your quick interaction with Springer will be greatly appreciated.
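
    As a rough illustration of the items above, a minimal preamble might look like the following sketch. It follows the Springer llncs template and the ifsym package named in the instructions; the title, author names, institute, and email are placeholders, not taken from the conference instructions.

        % Minimal sketch based on the Springer llncs template; all names below are placeholders.
        \documentclass{llncs}
        \usepackage[misc]{ifsym}  % provides the \Letter symbol for the corresponding author

        \title{Your Paper Title}
        \toctitle{Your Paper Title}              % used for the Table of Contents
        \author{First Author(\Letter)\inst{1} \and Second Author\inst{2}}
        \tocauthor{First Author, Second Author}  % used for the Table of Contents
        \institute{First University \email{first.author@example.com} \and Second University}

        \begin{document}
        \maketitle
        % ... paper body ...

        % References placed directly in main.tex (not in a .bib file)
        \begin{thebibliography}{1}
        \bibitem{ref1} Author, A.: Title of the Paper. Journal 1(1), 1--10 (2020)
        \end{thebibliography}
        \end{document}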

    Final papers after acceptance will normally be 10 pages with a maximum of 12 pages in length. The page count includes everything, including references and appendices. Please follow Springer’s proceedings LaTeX templates provided in Overleaf (https://www.overleaf.com/latex/templates/springer-lecture-notes-in-computer-science/kzwwpvhwnvfj#.WuA4JS5uZpi).


    Submission site


    https://cmt3.research.microsoft.com/User/Login?ReturnUrl=%2Ficonip2020

Workshop/Special Sessions of ICONIP 2020

Special Sessions


(1) Human-in-the-Loop Interactions in Machine Learning
Dr. Zehong (Jimmy) Cao, University of Tasmania, TAS, Australia (Zehong.Cao@utas.edu.au)
Prof. Chin-Teng Lin, University of Technology Sydney, NSW, Australia
Prof. Dongrui Wu, Huazhong University of Science and Technology, Wuhan, China

Abstract: Extracting information from a massive amount of humans' natural behaviour and cognition patterns has supported machine learning and decision making in many fields, ranging from computer science to engineering. Human-in-the-loop approaches that interact with machine learning are gaining popularity as a better way to train more accurate models, because human feedback in the machine's learning loop can help it improve faster. Recent advances in machine learning are giving momentum to human-in-the-loop approaches that enable complex paradigms operating in connection with human beings. Given the remarkable achievements in processing human physiological signals obtained from neuroimaging modalities and cognitive systems, this has been proposed as a useful and effective framework for modelling and understanding human behaviour and cognition patterns, as well as for enabling a direct communication pathway between human and machine. This paves the way for developing new human-in-the-loop interaction and interfacing techniques in reasoning and machine learning that foster the capabilities for understanding and modelling the training process.

(2) 13th International Workshop on Artificial Intelligence and Cybersecurity (AICS 2020)
Dr. Kitsuchart Pasupa (kitsuchart@it.kmitl.ac.th)
Prof. Kaizhu Huang, Xi'an Jiaotong-Liverpool University

Abstract: The 13th International Workshop on Artificial Intelligence and Cybersecurity (AICS 2020) was previously the International Data Mining and Cybersecurity Workshop (DMC), which has been held for ten consecutive years. The purpose of AICS is to raise awareness of cybersecurity, promote the potential of industrial applications, and give young researchers exposure to the key issues related to the topic and to ongoing work in this area. AICS 2020 will provide a forum for researchers, security experts, engineers, and students to present the latest research, share ideas, and discuss future directions in the fields of data mining, artificial intelligence, and cybersecurity. Website: http://www.csmining.org/

(3) Healthcare Analytics: Improving Healthcare Outcomes Using Big Data Analytics
Dr. Imran Razzak, School of Information Technology, Deakin University Geelong, Australia, imran.razzak@deakin.edu.au
Dr. Peter Eklund, School of Information Technology, Deakin University, Australia
Dr. Ibrahim A Hameed, Norwegian University of Science and Technology, Norway

Abstract: The field of health informatics has revolutionized the face of health care in the past decade. The increasingly aging population, the prevalence of chronic diseases, and rising costs have brought about some unique healthcare challenges for our global society. Informatics-based solutions have not only changed the way in which information is collected and stored but also played a crucial role in the management and delivery of healthcare. Intelligent and automated data processing has never been more important than it is today. In recent years, intelligent systems have emerged as a promising tool for solving problems in various healthcare-related domains. With the advent of various swift data acquisition systems and recent developments in healthcare information technology, huge amounts of data have been amassed in different forms. One of the key challenges in this domain is to build intelligent systems for effectively modelling, organizing, and interpreting the available healthcare data. Healthcare service providers are increasingly acknowledging the strategic importance of data analytics. However, the challenge becomes how to take Big Data and translate it into information that can be used by healthcare professionals for decision making to improve healthcare outcomes and the quality of care.

(4) The Synergy of Software Engineering Automation and Machine Learning (SSEA-ML)
Dr. Sajid Anwar, Center for Excellence in Information Technology, Institute of Management Sciences, Peshawar, Pakistan, sajid.anwar@imsciences.edu.pk
Dr. Abdul Rauf, RISE-Research Institute of Sweden in Vasteras, Sweden
Dr. Imran Razzak, School of Information Technology, Deakin University, Australia

Abstract: Contemporary economies and their GDPs are heavily reliant on up-to-date computer-based systems, both in terms of the operations needed to manufacture a product and in forming it into a viable commodity. The rapid growth of technological development paradigms has led to numerous challenges for software development engineers, ranging from micro-technological challenges to human-machine communication in Industry 4.0. The emergence of Machine Learning (ML) as the epicenter of computational research in the last decade has helped researchers come up with optimal solutions. With the emergence of Industry 4.0, the application of ML has now widened to all phases of the system development life cycle, from requirements to maintenance and from planning to continuous improvement. This widening of scope for ML has led to the extended and improved development and application of intelligent tools for automatic extraction of information from different documents, identification of functional and non-functional requirements, test suites, etc. With the swift progress in ML and artificial intelligence, ML-based techniques and methodologies for software engineering are being introduced and further optimized for greater efficiency of software engineers, processes, and products. From requirements to test cases, and from architecture to documentation, ML artifacts and tools are now being employed.

(5) Advanced Machine Learning Approaches in Cognitive Computing
Dr. Jonathan H. Chan, jonathan@sit.kmutt.ac.th, Computer Science at the School of Information Technology, King Mongkut's University of Technology Thonburi
Dr. Phayung Meesad, phayung.m@it.kmutnb.ac.th
Dr. Kuntpong Woraratpanya, kuntpong@it.kmitl.ac.th
Prof. Yoshimitsu Kuroki, kuroki@kurume-nct.ac.jp

Abstract: Cognitive technology has far-reaching applications in multiple sectors and is transforming global business today. Cognitive computing applications can be used in finance and investment firms to analyze the market in specific ways for their clients and make valuable suggestions. In healthcare and veterinary medicine, physicians can use cognitive computing tools to interact with past patient records and a database of medical information to aid and guide treatment. Cognitive computing applications in the travel industry could aggregate available travel information including flight and resort prices and availability, and combine that with user preference, budget, etc., to help deliver a streamlined, customized travel experience that could save consumers time, money, or both. In the health and wellness domain, data collected from wearable devices like a FitBit or Apple Watch can help personal trainers and individuals get suggestions for how to change their diet or exercise program, or even how to manage their sleep and stress-reducing routines. It is undeniable that one of the key successes of modern applications comes from cognitive computing; therefore, this special issue focuses on the recent and high-quality works in a research domain to promote key advances in cognitive computing technology, covering all theoretical and practical aspects from basic research to development of applications, and providing overviews of the state-of-the-art in emerging domains.

(6) Randomization-Based Deep and Shallow Learning Algorithms
Prof. Ponnuthurai Nagaratnam Suganthan, EPNSugan@ntu.edu.sg, Nanyang Technological University
Dr. M. Tanveer, mtanveer@iiti.ac.in, Indian Institute of Technology

Abstract: Randomization-based learning algorithms have received considerable attention from academics, researchers, and domain workers because randomization-based neural networks can be trained by non-iterative approaches possessing closed-form solutions. Those methods are in general computationally faster than iterative solutions and less sensitive to parameter settings. Even though randomization-based non-iterative methods have attracted much attention in recent years, their deep structures have not been sufficiently developed nor benchmarked. This special session aims to bridge this gap. The first target of this special session is to present recent advances in randomization-based learning methods. Randomization-based neural networks usually offer non-iterative closed-form solutions. Secondly, the focus is on promoting the concepts of non-iterative optimization with respect to counterparts, such as gradient-based methods and derivative-free iterative optimization techniques. Besides the dissemination of the latest research results on randomization-based and/or non-iterative algorithms, it is also expected that this special session will cover some practical applications, present some new ideas, and identify directions for future studies. Selected papers will be invited to an Applied Soft Computing Journal Special Issue.

(7) Graph Neural Networks for Cognition and Development
Dr. Xu Yang, xu.yang@ia.ac.cn, Institute of Automation, Chinese Academy of Sciences
Dr. Shen-Lan Liu liusl@dlut.edu.cn
Prof. Zhi-Yong Liu, zhiyong.liu@ia.ac.cn

Abstract: Cognitive ability and developmental function have widely been considered to be highly related to the essence of intelligence. Structural data, containing both attributes and relations, are of particular importance for cognition and development research, for example as structured features or structured knowledge. Graphs provide a natural way to represent and analyze structures in these data, and graph neural networks (GNNs), as deep learning models on graphs, have demonstrated superior performance in different types of structural data processing tasks, which is attributed to the powerful structural representation and inference ability of GNNs. Among these tasks, this special session focuses on GNN-based methods and their applications to cognition and development tasks of autonomous systems, as many new types of GNNs and typical applications are currently emerging to cater to the needs of processing and understanding structural data in cognition and development research. The objective of the special session is thus to provide an opportunity for researchers and engineers from both academia and industry to publish their latest and original results on the underlying theory, models, optimization algorithms, and applications of GNNs for cognition and development.

(8) NIPBIS2020: First International Workshop on Neural Information Processing for Big Data and IoT in Smart Cities
Prof. Loo Chu Kiong, University of Malaya, Malaysia, ckloo.um@um.edu.my
Prof. Gwanggil Jeon, Incheon National University, South Korea, gjeon@inu.ac.kr
Prof. Marco Anisetti, University of Milan, Italy, marco.anisetti@unimi.it

Abstract: A smart city is an urban area that uses different types of electronic Internet of Things (IoT) sensors to collect data and then uses insights gained from those data to manage assets, resources, and services efficiently. Smart cities can enhance quality of life and knowledge in contemporary society and stand for the next wave of civilization. The main techniques contributing to the accomplishment of smart connected cities include big data, IoT, mobility, smart computing, cyber-physical-social systems, artificial intelligence, data science, machine learning, and cognitive computing. Neural information processing, such as artificial intelligence and machine learning integrated with IoT, has the ability to address key challenges presented by an excessive urban population, including renewable energy, energy crises, transportation, healthcare and finance issues, and disaster management. It can improve the lives of the citizens and businesses that inhabit a smart city.

(9) Uncertainty Estimation: Theories and Applications
Prof. Saeid Nahavandi, Deakin University, saeid.nahavandi@deakin.edu.au
A/Prof. Abbas Khosravi, Deakin University, abbas.khosravi@deakin.edu.au
Prof. Amir F Atiya, Cairo University, Egypt, amir@alumni.caltech.edu

Abstract: How confident is a neural network model about its prediction? How much can one trust predictions of neural networks for new samples? How can one develop neural networks that know when they do not know? Answering these questions is a prerequisite for widespread deployment of neural networks in safety-critical applications. The field of uncertainty quantification of neural networks has received huge attention in recent years from both academia and industry. Several methods and frameworks have been proposed in the literature to generate predictive uncertainty estimates using neural networks. There are currently theoretical gaps and practical issues with proposed frameworks for uncertainty estimation using neural networks. Also, the research on the application of predictive uncertainty estimates for developing uncertainty-aware systems is still rare.

Workshops


(1) Workshop on Cross-Modal Learning for Visual Question Answering
Dr. Zhou Zhao, Zhejiang University, China zhaozhou@zju.edu.cn
Dr. Zhou Yu, Hangzhou Dianzi University, China

Abstract: Visual Question Answering (VQA) is a recent hot topic in multimedia analysis, computer vision, natural language processing, and, from a broader perspective, artificial intelligence, which has attracted a large amount of interest from the deep learning, computer vision, and natural language processing communities. Given an image (or a video clip) and a question in natural language, VQA requires grounding textual concepts to visual elements so as to infer the correct answer. The challenge lies in that, in most cases, it requires reasoning over the connection between visual content and language as well as external knowledge. Towards general applications, besides the understanding of visual content, the potential abilities of VQA largely come from leveraging different kinds of data (e.g., visual, audio, and text) across multiple sources (e.g., social-media sites, surveillance videos, and Wikipedia) for knowledge discovery and QA reasoning, which is recognized as cross-modal analysis in the multimedia scope. From this background, the workshop focuses on new theory and algorithms for visual question answering via cross-media analysis, as well as their applications in practice: human-computer interaction, multimedia search, visual description for the blind, incident reporting for surveillance, seeing chatbots, or even robotic intelligence.

Tutorials

Tutorial 1

Title: Advances in Randomized Learning Techniques for Neural Networks
Author: Dianhui Wang (La Trobe University, Australia)
Email: dh.wang@latrobe.edu.au

Abstract: Randomised learning techniques for training neural networks have received considerable attention in the past decades. The main reason behind this is that this class of learning algorithms can provide a feasible solution with comparable modelling performance. In 1992, Pao and Takefuji proposed random vector functional-link (RVFL) nets, where the hidden layer parameters were randomly generated and then fixed during the learning process. Indeed, a similar idea of such a randomized learning algorithm for the single-layer perceptron model was also proposed by Schmidt et al. in 1992, who suggested assigning the random input weights and biases in [-1, 1] with the uniform distribution. However, these existing randomized algorithms cannot guarantee generating a capable learner model, although some theoretical results on the universal approximation property of randomized neural networks were established by Igelnik and Pao in 1995. Recently, we developed a new randomized learning algorithm and ensured that the resulting models, termed Stochastic Configuration Networks (SCNs), share the universal approximation property. This tutorial aims to clarify the historical developments with milestone results and provide deep insight into randomized learning techniques for constructing deep neural networks.

Bio: Dr Wang was awarded a Ph.D. from Northeastern University, Shenyang, China, in 1995. From 1995 to 2001, he worked as a Postdoctoral Fellow at Nanyang Technological University, Singapore, and a Researcher at The Hong Kong Polytechnic University, Hong Kong, China. He joined La Trobe University in July 2001 and is currently a Reader and Associate Professor with the Department of Computer Science and Information Technology, La Trobe University, Australia. He is also a Professor at The State Key Laboratory of Synthetical Automation of Process Industries, Northeastern University, China. His current research focuses on industrial bigdata-oriented machine learning theory and applications, specifically on Deep Stochastic Configuration Networks (http://www.deepscn.com/) for data analytics in process industries, intelligent sensing systems and power engineering.

Dr Wang is a Senior Member of IEEE and serves as an Associate Editor for IEEE Transactions on Cybernetics, Information Sciences, and WIREs Data Mining and Knowledge Discovery.


Tutorial 2

Title: Transfer Learning for Brain-Computer Interfaces
Author: Dongrui Wu (Huazhong University of Science and Technology, China)
Email: drwu@hust.edu.cn

Abstract: A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. Electroencephalogram (EEG) is the most frequently used input signal in BCIs. However, EEG signals are weak, easily contaminated by interference and noise, non-stationary for the same subject, and varying among different subjects and sessions. So, it is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, in different sessions, for different devices and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time-consuming and user-unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce this calibration effort. This tutorial reviews the basics of EEG-based BCIs and the progress of TL approaches in the last few years, i.e., since 2016.

Bio: Dongrui Wu received a B.E in Automatic Control from the University of Science and Technology of China, Hefei, China, in 2003, an M.Eng in Electrical and Computer Engineering from the National University of Singapore in 2005, and a PhD in Electrical Engineering from the University of Southern California, Los Angeles, CA, in 2009. He was a Lead Researcher at GE Global Research, NY, and a Chief Scientist of several startups. He is now a Professor and Deputy Director of the Key Laboratory of the Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China.

Prof. Wu's research interests include affective computing, brain-computer interface, computational intelligence, and machine learning. He has more than 140 publications (6,300+ Google Scholar citations; h=38), including a book "Perceptual Computing" (with Jerry Mendel, Wiley-IEEE, 2010), and five US patents. He received the IEEE International Conference on Fuzzy Systems Best Student Paper Award in 2005, the IEEE Computational Intelligence Society (CIS) Outstanding PhD Dissertation Award in 2012, the IEEE Transactions on Fuzzy Systems Outstanding Paper Award in 2014, the North American Fuzzy Information Processing Society (NAFIPS) Early Career Award in 2014, the IEEE Systems, Man and Cybernetics (SMC) Society Early Career Award in 2017, and the IEEE SMC Society Best Associate Editor Award in 2018. He was a finalist of the IEEE Transactions on Affective Computing Most Influential Paper Award in 2015, the IEEE Brain Initiative Best Paper Award in 2016, the 24th International Conference on Neural Information Processing Best Student Paper Award in 2017, the Hanxiang Early Career Award in 2018, and the USERN Prize in Formal Sciences in 2019. He was a selected participant of the Heidelberg Laureate Forum in 2013, the US National Academies Keck Futures Initiative (NAKFI) in 2015, and the US National Academy of Engineering German-American Frontiers of Engineering (GAFOE) in 2015. His team won the First Prize of the China Brain-Computer Interface Competition in 2019.

Prof. Wu is/was an Associate Editor of the IEEE Transactions on Fuzzy Systems (2011-2018), the IEEE Transactions on Human-Machine Systems (2014-), the IEEE Computational Intelligence Magazine (2017-), and the IEEE Transactions on Neural Systems and Rehabilitation Engineering (2019-). He was the lead Guest Editor of the IEEE Computational Intelligence Magazine Special Issue on Computational Intelligence and Affective Computing, and the IEEE Transactions on Fuzzy Systems Special Issue on Brain Computer Interface. He is a Senior Member of the IEEE, a Board member and Distinguished Speaker of the NAFIPS, and a member of IEEE Systems, Man and Cybernetics Society Brain-Machine Interface Systems Technical Committee, IEEE CIS Fuzzy Systems Technical Committee, Emergent Technologies Technical Committee, and Intelligent Systems Applications Technical Committee. He has been Chair/Vice Chair of the IEEE CIS Affective Computing Task Force since 2012.


Tutorial 3

Title: Fundamentals of Deep Learning for Computer Vision
Author: Jonathan Chan (King Mongkut’s University of Technology Thonburi)
Email: jonathan@sit.kmutt.ac.th

Abstract: The NVIDIA Deep Learning Institute (DLI) and IC2-DLAB, School of Information Technology, King Mongkut’s University of Technology Thonburi (KMUTT) invite you to attend a hands-on deep learning workshop at ICONIP 2020, exclusively for verifiable academic students, staff, and researchers. This workshop teaches deep learning techniques for a range of computer vision tasks through a series of hands-on exercises. You will work with widely-used deep learning tools, frameworks, and workflows to train and deploy neural network models on a fully-configured, GPU-accelerated workstation in the cloud. After a quick introduction to deep learning, you will advance to: building and deploying deep learning applications for image classification and object detection, modifying your neural networks to improve their accuracy and performance, and implementing the workflow you have learned on a final project. At the end of the workshop, you will have access to additional resources to create new deep learning applications on your own. Upon successful completion of the workshop, participants will receive NVIDIA DLI Certification to recognize subject matter competency.

Bio: Dr. Jonathan H. Chan is an Associate Professor of Computer Science and a co-founder of D-Lab at the School of Information Technology (SIT), King Mongkut's University of Technology Thonburi (KMUTT), Thailand. Currently, he is the Acting Director of IC2-DLab at SIT, KMUTT. Jonathan holds a B.A.Sc., M.A.Sc., and Ph.D. degree from the University of Toronto and was a visiting professor back there on several occasions. He also holds an honorary Visiting Scientist status at The Centre for Applied Genomics at The Hospital for Sick Children (SickKids) in Toronto, Canada. Besides being the Section Editor of Heliyon Computer Science (Cell Press), Dr. Chan is an Action Editor of Neural Networks (Elsevier), and a member of the editorial boards of International Journal of Machine Intelligence and Sensory Signal Processing (Inderscience), International Journal of Swarm Intelligence (Inderscience), and Proceedings in Adaptation, Learning and Optimization (Springer). Also, he is a reviewer for a number of refereed international journals including Information Sciences, Applied Soft Computing, Expert Systems with Applications, and Computers in Biology and Medicine. He has served on the program, technical, organizing and/or advisory committees for numerous major international conferences. Moreover, Dr. Chan is a Past-President of the former Asia Pacific Neural Network Assembly (APNNA) and the VP of Education and a Governing Board member of the current Asia Pacific Neural Network Society (APNNS). In addition, he is a founding member and the current Chair of the IEEE-CIS Thailand Chapter. Dr. Chan is a senior member of IEEE, ACM, and INNS, and a member of the Professional Engineers of Ontario (PEO). Furthermore, he holds an NVIDIA Deep Learning Institute (DLI) University Ambassadorship and is a certified DLI instructor. His research interests include intelligent systems, biomedical informatics, and data science and machine learning in general.


Tutorial 4

Title: Adversarial Attacks on Deep Learning Models in Natural Language Processing
Author: Wei Emma Zhang (The University of Adelaide, Australia)
Email: wei.e.zhang@adelaide.edu.au

Abstract: With the development of high-performance computational devices, deep neural networks (DNNs) have in recent years gained significant popularity in many Artificial Intelligence (AI) applications. However, previous efforts have shown that DNNs are vulnerable to strategically modified samples, called adversarial examples. These samples are generated with imperceptible perturbations but can fool DNNs into giving false predictions. Inspired by the popularity of generating adversarial examples against DNNs in Computer Vision (CV), research efforts on attacking DNNs for Natural Language Processing (NLP) applications have emerged in recent years. However, the intrinsic difference between images (CV) and text (NLP) makes it challenging to directly apply attacking methods from CV to NLP. Various methods have been proposed to address this difference and attack a wide range of NLP applications. In this tutorial, we present a systematic introduction to all related academic works since their first appearance in 2017. We categorize the research efforts by five categorization criteria and discuss the representative and state-of-the-art works, including the very recent BERT-based attack and defence. To make this talk self-contained, we briefly cover preliminary knowledge of NLP and discuss related seminal works in computer vision. We finally discuss open issues to bridge the gap between the existing progress and more robust adversarial attacks on NLP DNNs.

Bio: Dr. Wei Zhang (publishing name Wei Emma Zhang) is a lecturer in the School of Computer Science, The University of Adelaide, and an early-career researcher in information retrieval, natural language processing, and text mining. She received her PhD in 2017 and spent half a year at IBM Research Australia as a full-time intern. She spent two and a half years at Macquarie University as a postdoctoral researcher before joining The University of Adelaide. Dr. Wei Zhang has produced 50+ publications as books, refereed book chapters, journal articles, and conference papers. She published an authored monograph with Springer in August 2018 on managing data and knowledge bases. Her papers have been published in prestigious computer science journals, including ACM Transactions on Internet Technology (CORE A), World Wide Web Journal (CORE A), Communications of the ACM (the flagship publication of ACM), ACM Transactions on Intelligent Systems and Technology, IEEE Transactions on Services Computing, and IEEE Transactions on Big Data. Her research has also appeared in top-tier international conferences, including the Web Conference (WWW), Intl. Conference on Extending Database Technology (EDBT), Intl. Conference on Information and Knowledge Management (CIKM), Intl. Conf. on Service Oriented Computing (ICSOC), and Intl. Conf. on Web Services (ICWS), all CORE A*/A conferences usually with acceptance rates of 10-19%.

Dr Wei Zhang has been working on the proposed topic of this tutorial, namely adversarial attacks on textual data, since 2018 and published a survey paper in 2020. She has also published a conference paper on black-box attacks. She has presented small-scale tutorials within her research group.


Tutorial 5

Title: Deep Learning on Graphs: Methods and Applications
Author: Irwin King, Jiani Zhang, Ziqiao Meng, Xinyu Fu, Yankai Chen, Tianyu Liu, Menglin Yang (The Chinese University of Hong Kong, HKSAR)
Email: king@cse.cuhk.edu.hk

Abstract: Over the past decade, deep learning has achieved tremendous success in various domains. The representation power of deep learning to extract complex patterns layer-by-layer from underlying data is well recognized. However, applying deep learning to ubiquitous graph data is non-trivial because of the non-Euclidean structure, heterogeneity, and diversity of graphs. The main difficulty in analyzing graph data is to find the right way to express and exploit the graph's underlying structural information. The objective of this tutorial is twofold. First, we provide a comprehensive overview of graph neural network (GNN) methods, mainly by following their development history and the ways these methods solve the challenges posed by graphs. GNNs aim to learn a low-dimensional vector representation for every node in a graph, which can be used for downstream ML tasks. To better capture the similarity and hierarchy of entities, we introduce the generalization of GNNs to hyperbolic embedding, which enforces hyperbolicity in hidden layers and conducts efficient Riemannian optimization. Second, we present how to utilize GNNs to solve real-world graph applications. To demonstrate the properties and challenges of learning on heterogeneous graphs, we investigate the applications of recommender systems, program understanding, and logical queries over knowledge graphs. We show how to apply GNNs to embed complex relations and multiple types of nodes into low-dimensional vectors to solve these problems. To demonstrate the effectiveness of GNNs on spatiotemporal and temporal graphs, we choose the applications of traffic forecasting and anomaly detection. The challenge of these two tasks is how to capture the structural relations as well as temporal dependencies.

Bio:
• Prof. Irwin King's research interests include machine learning, social computing, AI, web intelligence, data mining, and multimedia information processing. In these research areas, he has over 300 technical publications in journals and conferences. He is an Associate Editor of the Journal of Neural Networks and ACM Transactions on Knowledge Discovery from Data (ACM TKDD). He is President of the International Neural Network Society (INNS) and an IEEE Fellow, Distinguished Member of ACM, and HKIE Fellow. Moreover, he is the General Co-chair of The WebConf 2020, ICONIP 2020, WSDM 2011, RecSys 2013, ACML 2015, and in various capacities in a number of top conferences such as WWW, NIPS, ICML, IJCAI, AAAI, etc. While he was on leave with AT&T Labs Research, San Francisco, he also taught classes as a Visiting Professor at UC Berkeley. He received his B.Sc. degree in Engineering and Applied Science from California Institute of Technology, Pasadena and his M.Sc. and Ph.D. degree in Computer Science from the University of Southern California, Los Angeles.
• Jiani Zhang is a PhD student in computer science and engineering. Her research topic includes deep learning, graph neural networks, recommendations and learning analytics.
• Ziqiao Meng is a PhD student in computer science and engineering. His research topic includes theoretical understanding in graph neural networks and its applications in computer program understanding.
• Xinyu Fu is a PhD student in computer science and engineering. Her research topic includes deep learning, graph neural networks, and heterogeneous graph embedding.
• Yankai Chen is a PhD student in computer science and engineering. His research interest includes knowledge graph and graph neural networks related problems.
• Tianyu Liu is an MPhil student in computer science and engineering. Her research topic includes hyperbolic graph embedding and hyperbolic graph neural networks.
• Menglin Yang is a PhD student in computer science and engineering. His research interest includes dynamic graph embedding, graph optimization and curvature graph representation.


Tutorial 6

Title: Robust Adversarial Learning: Fundamentals, Theory, and Applications
Author: Kaizhu Huang (Xi’an Jiaotong-Liverpool University, China)
Email: kaizhu.huang@xjtlu.edu.cn

Abstract: Adversarial learning is a hot topic in machine learning, pattern recognition, and security. In particular, adversarial examples, referred to as augmented data points generated by imperceptible perturbation of input samples, have recently drawn much attention in the community. Being difficult to distinguish from real examples, such adversarial examples can change the prediction of many of the best learning models, including state-of-the-art deep learning models. Various attempts have recently been made to build robust models that take adversarial examples into account. However, these methods can either lead to performance drops, or are ad-hoc in nature and lack mathematical motivation. In this tutorial, we will first present the background of adversarial examples, including their history, motivation, properties, and concepts. We will then discuss in theory how a unified framework and various robust learning models can be built against adversarial examples. Visualizations and illustrative examples will be used to aid understanding of the theory. Finally, we will extend the theory of adversarial examples to various applications, including computer vision, pattern recognition, and cybersecurity. A series of experimental investigations will also be presented to illustrate the usefulness of adversarial examples for robust learning.

Bio: Kaizhu Huang is currently a Professor in the Department of Electrical and Electronic Engineering, Xi'an Jiaotong-Liverpool University, China. He is also the founding director of the Suzhou Municipal Key Laboratory of Cognitive Computation and Applied Technology. Prof. Huang has been working in machine learning, adversarial learning and security, neural information processing, and pattern recognition. He was the recipient of the 2011 Asia Pacific Neural Network Society (APNNS) Younger Researcher Award. He also received the Best Book Award in the National Book Competition and best paper (or finalist) awards at international conferences four times. He has published 8 books with Springer and over 180 international research papers, including about 60 SCI-indexed international journal papers, e.g., in journals (JMLR, Neural Computation, IEEE T-PAMI, IEEE T-NNLS, IEEE T-IP, IEEE T-BME, IEEE T-Cybernetics) and conferences (NeurIPS, IJCAI, SIGIR, UAI, CIKM, ICDM, ICML, ECML, CVPR). He serves as an associate editor for four international journals (including three JCR-1 journals) and as a board member of three international book series. He has been invited as a keynote speaker at over 20 international conferences and forums. He has sat on grant evaluation panels for the Hong Kong RGC, Singapore AI programmes, and NSFC, China. He has served as chair and PC member for many international conferences and workshops such as ICONIP, AAAI, ICML, IJCAI, NeurIPS, ICLR, ICDAR, and ACPR. His homepage can be seen at: http://www.premilab.com/KaizhuHUANG.ashx.

Keynote Speakers



Max Welling

Title: PDEs and Schrödinger Equation for Deep Learning

Bio: Max Welling is a computer scientist who works in artificial intelligence (expert systems, machine learning, robotics). He holds a research chair in machine learning at the University of Amsterdam; is co-founder of Scyfer BV, a university spin-off in deep learning; and has held postdoc positions at the California Institute of Technology, University College London and the University of Toronto. Welling received his PhD in 1998 under supervision of Nobel laureate Gerard ‘t Hooft. He has served on the editorial boards of JMLR and JML; was an associate editor for Neurocomputing and JCGS; and has received grants from Google, Facebook, Yahoo, NSF, NIH, NWO and ONR-MUR. Currently, Welling serves on the board of the NIPS foundation and of the Data Science Research Center in Amsterdam; directs the Amsterdam Machine Learning Lab (AMLAB); and co-directs the Qualcomm-UvA deep learning lab (QUVA), the Bosch-UvA Deep Learning lab (DELTA) and the AML4Health Lab.


Stephen Grossberg

Title: Explainable and Reliable AI: Comparing Deep Learning with Adaptive Resonance

Bio: Stephen Grossberg is a cognitive scientist, theoretical and computational psychologist, neuroscientist, mathematician, biomedical engineer, and neuromorphic technologist. He is the Wang Professor of Cognitive and Neural Systems and a Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering at Boston University. Grossberg is a founder of the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. His work focuses upon the design principles and mechanisms that enable the behavior of individuals, or machines, to adapt autonomously in real time to unexpected environmental challenges. This research has included neural models of vision and image processing; object, scene, and event learning, pattern recognition, and search; audition, speech and language; cognitive information processing and planning; reinforcement learning and cognitive-emotional interactions; autonomous navigation; adaptive sensory-motor control and robotics; self-organizing neurodynamics; and mental disorders. Grossberg also collaborates with experimentalists to design experiments that test theoretical predictions and fill in conceptually important gaps in the experimental literature, carries out analyses of the mathematical dynamics of neural systems, and transfers biological neural models to applications in engineering and technology. He has published seventeen books or journal special issues, over 500 research articles, and has seven patents.


Kun Zhang

Title: Learning and using causality-related representations

Bio: Kun Zhang is an associate professor of philosophy and an affiliate faculty in the machine learning department of Carnegie Mellon University, and a senior research scientist at Max Planck Institute for Intelligent Systems, Germany. He obtained his BS degree in automation from University of Science and Technology of China and his Ph.D. degree in computer science from the Chinese University of Hong Kong. He is interested in the connection between causality and machine intelligence, and has been actively developing methods for automated causal discovery from various kinds of data and investigating machine learning problems, including transfer learning, adversarial vulnerability, and deep learning, from a causal view. His work has been widely published in major artificial intelligence and machine learning venues. Dr. Zhang coauthored a best student paper for UAI and a best finalist paper for CVPR, and received the best benchmark award of the causality challenge, and has been frequently serving as a senior area chair, area chair, or senior program committee member for most major conferences in machine learning or artificial intelligence, including NeurIPS, ICML, UAI, IJCAI, and AISTATS.

Invited Speakers



Tien-Tsin Wong

Title: Convolutional Neural Networks in Computational Manga

Bio: Prof. Wong is a professor in the Computer Science & Engineering Department of the Chinese University of Hong Kong. He is a core member of the Virtual Reality, Visualization and Imaging Research Centre at the Chinese University of Hong Kong. He received the IEEE Transactions on Multimedia Prize Paper Award 2005 and the Young Researcher Award 2004. He was a member of the Academic Committee of the Microsoft Digital Cartoon and Animation Laboratory at Beijing Film Academy, a visiting professor in the School of Computer Science and Technology at Tianjin University, and a visiting research professor in the Biomedical Engineering Department of Shanghai Jiaotong University. He has been actively involved (as Program Co-chair, Program Committee member, and Organizing Committee member) in several international conferences, including SIGGRAPH (2019, 2020), SIGGRAPH Asia (2009, 2010, 2012, 2013, 2018), Eurographics (2007, 2008, 2009, 2011, 2019), Pacific Graphics (2000-2005, 2007-2019), i3D (2010-2013), IEEE Virtual Reality (2011), ICCV (2009), Computer Graphics International (2004, 2006, 2012-2020), CAD/Graphics (2003, 2005-2007, 2009, 2011), Chinagraph (2000, 2002, 2004, 2006, 2008, 2010, 2012, 2014, 2016, 2018, 2020), and ACM VRCIA (2006). He has a number of works related to convolutional neural networks in graphics applications.


Tianbao Yang

Title: Large-scale Deep AUC Maximization and Applications in Medical Image Classification

Bio: Prof. Yang is an Associate Professor in the Computer Science Department at the University of Iowa. He was a researcher at NEC Laboratories America, Inc. Before that, he was a Machine Learning Researcher at GE Global Research. He received his Ph.D. degree in Computer Science from Michigan State University in 2012. His interests include non-convex optimization algorithms in machine learning, faster convergent algorithms by leveraging error bound conditions, large-scale stochastic optimization, online optimization, deep learning, randomized algorithms for big data analytics, and distributed optimization for Big Data.

Organizing Committee


Honorary Co-Chairs
  • Jonathan Chan, King Mongkut’s University of Technology Thonburi, Thailand
  • Irwin King, The Chinese University of Hong Kong, Hong Kong
General Co-Chairs
  • Andrew Leung, City University of Hong Kong, Hong Kong
  • James Kwok, The Hong Kong University of Science and Technology, Hong Kong
Program Co-Chairs
  • Haiqin Yang, Ping An Life, China
  • Kitsuchart Pasupa, King Mongkut's Institute of Technology Ladkrabang, Thailand
Local Arrangements Co-Chairs
  • Vithida Chongsuphajaisiddhi, King Mongkut's University of Technology Thonburi, Thailand
Finance Co-Chairs
  • Vajirasak Vanijja, King Mongkut's University of Technology Thonburi, Thailand
  • Seiichi Ozawa, Kobe University, Japan
Special Sessions Co-Chairs
  • Kaizhu Huang, Xi'an Jiaotong Liverpool University, China
  • Raymond Chi-Wing Wong, The Hong Kong University of Science and Technology, Hong Kong
Tutorial Co-Chairs
  • Zenglin Xu, Harbin Institute of Technology, Shenzhen, China
  • Jing Li, The Hong Kong Polytechnic University, Hong Kong
Proceedings Co-Chairs
  • Xinyi Le, Shanghai Jiao Tong University, China
  • Jinchang Ren, University of Strathclyde, United Kingdom
Publicity Co-Chairs
  • Zeng-Guang Hou, Institute of Automation, Chinese Academy of Sciences, China
  • Ricky Ka-Chun Wong, City University of Hong Kong, Hong Kong
Regional Liaison
  • Yiu-ming Cheung, Hong Kong Baptist University, Hong Kong
  • Junqiu Wei, Noah's Ark Lab, Huawei Technologies, Hong Kong
  • Jianke Zhu, Zhejiang University, China
  • Jiefeng Cheng, Ping An Life, China
  • Kun Zhang, Carnegie Mellon University, USA
  • Zhirong Yang, Norwegian University of Science and Technology, Norway
  • Bo Zhang, Philips Research France, France
  • Paul S Pang, Federation University Australia, New Zealand
  • Somnuk Phon-Amnuaisuk, Universiti Teknologi Brunei, Brunei