Invited talks and tutorials
Invited talks and tutorials at LION6 have been confirmed.
Invited talks
Get the invited list: invited-talks.pdf, invited-talks.doc.
(Invited) Title: Surrogate-Assisted Evolutionary Optimisation: Past, Present and Future
Speaker: Yaochu Jin, Nature-Inspired Computing and Engineering Group, Department of Computing, University of Surrey, UK
==> Yaochu Jin - Surrogate-Assisted Evolutionary Optimisation: Past, Present and Future - SLIDES (pdf)
Abstract:
Surrogate-assisted (or meta-model based) evolutionary computation uses efficient computational models, often known as surrogates or meta-models, to approximate the fitness function in evolutionary algorithms. Research on surrogate-assisted evolutionary computation began over a decade ago and has attracted steadily growing interest in recent years. Interestingly, surrogate-assisted evolutionary computation has found successful applications not only in solving computationally expensive single- or multi-objective optimization problems, but also in addressing dynamic, constrained and multi-modal optimization problems. This talk provides an up-to-date overview of the history and recent developments in surrogate-assisted evolutionary computation and suggests a few future trends in this research area.
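As a rough illustration of the idea (a hypothetical sketch, not from the talk): a cheap 1-nearest-neighbour meta-model ranks sampled offspring so that only the most promising few receive the expensive true evaluation. All function names and parameter values below are illustrative assumptions.

```python
import random

def expensive_fitness(x):
    """Stand-in for a costly simulation: the sphere function."""
    return sum(xi * xi for xi in x)

def nn_surrogate(x, archive):
    """1-nearest-neighbour surrogate: predict the fitness of the closest
    point evaluated so far (the simplest possible meta-model)."""
    return min(archive,
               key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

def surrogate_assisted_es(dim=5, generations=30, offspring=20, evaluated=4, seed=1):
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    archive = [(parent, expensive_fitness(parent))]
    f0 = archive[0][1]
    true_evals = 1
    for _ in range(generations):
        # Sample many candidates, pre-screen them with the cheap surrogate...
        candidates = [[xi + rng.gauss(0, 0.5) for xi in parent]
                      for _ in range(offspring)]
        candidates.sort(key=lambda x: nn_surrogate(x, archive))
        # ...and spend expensive evaluations only on the most promising few.
        for x in candidates[:evaluated]:
            archive.append((x, expensive_fitness(x)))
            true_evals += 1
        parent = min(archive, key=lambda p: p[1])[0]
    return min(archive, key=lambda p: p[1])[1], f0, true_evals
```

With this budget, 20 offspring per generation cost only 4 true evaluations each; the surrogate absorbs the rest.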
Yaochu Jin received the B.Sc., M.Sc., and Ph.D. degrees from Zhejiang University, China, in 1988, 1991, and 1996, respectively, and the Dr.-Ing. degree from Ruhr University Bochum, Germany, in 2001. He is a Professor of Computational Intelligence and Head of the Nature Inspired Computing and Engineering (NICE) Group, Department of Computing, University of Surrey, UK. He was a Principal Scientist with the Honda Research Institute Europe in Germany. His research interests include understanding evolution, learning and development in biology and bio-inspired approaches to solving engineering problems. He has (co)authored over 130 peer-reviewed journal and conference papers. He is an Associate Editor of BioSystems, IEEE Transactions on Neural Networks, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on Nanobioscience, and IEEE Computational Intelligence Magazine. He has delivered over ten plenary/keynote speeches at international conferences on multi-objective machine learning, computational modeling of neural development, morphogenetic robotics and evolutionary design optimization. He is the General Chair of the 2012 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology. He presently chairs the Intelligent System Applications Technical Committee of the IEEE Computational Intelligence Society. Professor Jin is a Fellow of BCS and Senior Member of IEEE.
(Invited) Title: Optimization problems and algorithms for the high-level control of dynamic systems
Speaker: Gérard Verfaillie, ONERA, France
==> Gérard Verfaillie - Optimization problems and algorithms for the high-level control of dynamic systems - SLIDES (pdf)
Abstract:
The high-level control of dynamic systems, such as aircraft, airports, air traffic, or spacecraft, consists in deciding, at each control step, which action(s) to perform as a function of the current observations and objectives. Successive decisions must ensure that the dynamics of the controlled system satisfies the user's objectives as well as possible. A usual approach, inspired by Model Predictive Control, consists at each control step in (i) collecting the current observations and objectives, (ii) solving a deterministic planning problem over a given horizon ahead, (iii) extracting the first action from the best plan produced, (iv) applying it, and (v) moving on to the next step. From the optimization point of view, this requires the ability to quickly solve many successive, similar planning problems over a sliding horizon, possibly suboptimally. I will present and illustrate this approach and explain the potential impact of learning techniques.
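The five-step loop above can be sketched on a toy problem (a hypothetical one-dimensional system with unit actions; names and instance data are illustrative, not from the talk):

```python
from itertools import product

def plan(state, target, horizon, actions=(-1, 0, 1)):
    """Step (ii): brute-force the deterministic planning problem over a
    short horizon, minimising the summed distance to the target."""
    def cost(seq):
        s, total = state, 0
        for a in seq:
            s += a
            total += abs(s - target)
        return total
    return min(product(actions, repeat=horizon), key=cost)

def control_loop(state, targets, horizon=3):
    """Receding-horizon control: observe, plan, apply the first action, repeat."""
    trajectory = [state]
    for target in targets:                       # (i) observations and objectives
        best_plan = plan(state, target, horizon) # (ii) plan over a sliding horizon
        action = best_plan[0]                    # (iii) extract the first action
        state += action                          # (iv) apply it
        trajectory.append(state)                 # (v) move to the next step
    return trajectory

# e.g. track a target that jumps from 4 to 2 mid-run
traj = control_loop(0, [4, 4, 4, 4, 2, 2, 2])
```

Each step re-plans from scratch, which is why the optimization point of view stresses solving many similar problems quickly rather than any single one optimally.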
A graduate of École Polytechnique (Paris) in 1971 and of SUPAERO (the French national engineering school in aeronautics and space; computer science specialization, Toulouse) in 1985, Gérard Verfaillie is now a research supervisor at ONERA (the French Aerospace Lab). His research activity concerns models, methods, and tools for combinatorial and constrained optimization, especially for planning and decision-making.
(Invited) Title: Autonomous Search
Speaker: Frédéric Saubion, Université d'Angers, France
Decades of innovations in combinatorial problem solving have produced better and more complex algorithms. These new methods are better since they can solve larger problems and address new application domains. They are also more complex, which means that they are hard to reproduce and often harder to fine-tune to the peculiarities of a given problem. This last point has created a paradox whereby efficient tools are out of reach for practitioners. Autonomous search represents a new research field defined to precisely address the above challenge. Its major strength and originality consist in the fact that problem solvers can now perform self-improvement operations based on analysis of the performance of the solving process -- including short-term reactive reconfiguration and long-term improvement through self-analysis of the performance, offline tuning and online control, and adaptive control and supervised control. Autonomous search "crosses the chasm" and provides engineers and practitioners with systems that are able to autonomously self-tune their performance while effectively solving problems. In this talk, we review existing work and attempt to classify the different paradigms that have been proposed over the past years to build more autonomous solvers. We also draw some perspectives and future directions.
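As a rough illustration of online control (a hypothetical sketch, not from the talk), the following probability-matching scheme lets a local search learn, while solving, which of its move operators to favour, crediting each operator with the improvement it yields. All names and constants are illustrative assumptions.

```python
import random

def adaptive_operator_search(f, x0, operators, steps=200, p_min=0.05, alpha=0.3, seed=0):
    """Minimise f by local search while learning online which operator
    works best: selection probability tracks a running reward estimate,
    floored at p_min so no operator is ever abandoned entirely."""
    rng = random.Random(seed)
    k = len(operators)
    quality = [1.0] * k          # running reward estimate per operator
    usage = [0] * k
    x, fx = x0, f(x0)
    for _ in range(steps):
        total = sum(quality)
        probs = [p_min + (1 - k * p_min) * q / total for q in quality]
        i = rng.choices(range(k), weights=probs)[0]
        usage[i] += 1
        y = operators[i](x, rng)
        fy = f(y)
        reward = max(0.0, fx - fy)                 # credit = improvement
        quality[i] += alpha * (reward - quality[i])
        if fy <= fx:                               # accept non-worsening moves
            x, fx = y, fy
    return x, fx, usage

# Two operators: a cautious local step and a blind restart.
small_step = lambda x, rng: [xi + rng.gauss(0, 0.1) for xi in x]
big_jump = lambda x, rng: [rng.uniform(-10, 10) for _ in x]
sphere = lambda x: sum(xi * xi for xi in x)
```

On the sphere function, the reward signal quickly shifts probability mass toward whichever operator is paying off at the current stage of the search.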
Frédéric Saubion co-heads the Metaheuristics, Optimization and Applications team at the Université d'Angers (France); his research topics include hybrid and adaptive evolutionary algorithms and applications of metaheuristics to various domains such as information retrieval, nonmonotonic reasoning and biology. www.info.univ-angers.fr/pub/saubion
Tutorials
Get the tutorial list: tutorials.pdf, tutorials.doc.
(Tutorial) Title: Addressing Numerical Black-Box Optimization: CMA-ES
Speakers: Anne Auger, and Nikolaus Hansen
==> Anne Auger, and Nikolaus Hansen - Addressing Numerical Black-Box Optimization: CMA-ES - SLIDES (pdf)
Abstract:
Evolution Strategies (ESs) and many continuous-domain Estimation of Distribution Algorithms (EDAs) are stochastic optimization procedures that sample a multivariate normal (Gaussian) distribution in the continuous search space, R^n. Many of them can be formulated in a unified and comparatively simple framework. This introductory tutorial focuses on the most relevant algorithmic question: how should the parameters of the sample distribution be chosen and, in particular, updated in the generation sequence? First, two common approaches for step-size control are reviewed: the one-fifth success rule and path length control. Then, Covariance Matrix Adaptation (CMA) is discussed in depth: the rank-one update, the evolution path, and the rank-mu update. Invariance properties and the interpretation as natural gradient descent are touched upon. At the beginning, general difficulties in solving non-linear, non-convex continuous optimization problems are revealed, for example non-separability, ill-conditioning and ruggedness, and algorithmic design aspects are related to these difficulties. At the end, the performance of the CMA-ES is related to other well-known evolutionary and non-evolutionary optimization algorithms, namely BFGS, DE, and PSO.
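The first ingredient, step-size control by the one-fifth success rule, can be sketched in a (1+1)-ES (a minimal illustration, not the tutorial's code; the constants 1.5 and 0.9 are one common but illustrative choice):

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=0):
    """(1+1)-ES with the one-fifth success rule: increase the step size
    after a successful mutation, decrease it after a failure, so that
    roughly one mutation in five succeeds.  With these factors, one
    success per four failures leaves sigma roughly unchanged, since
    1.5 * 0.9**4 ~= 1."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy < fx:                       # success: accept and expand
            x, fx, sigma = y, fy, sigma * 1.5
        else:                             # failure: reject and contract
            sigma *= 0.9
    return x, fx
```

CMA then generalises this scalar adaptation to a full covariance matrix, which is what makes the method robust to ill-conditioning and non-separability.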
Anne Auger is a permanent researcher at the French National Institute for Research in Computer Science and Control (INRIA). She received her diploma (2001) and PhD (2004) in mathematics from the Paris VI University. Before joining INRIA, she worked for two years (2004-2006) at ETH Zurich. Her main research interest is stochastic continuous optimization, including theoretical aspects and algorithm design. She is a member of the ACM-SIGEVO executive committee and of the editorial board of Evolutionary Computation. She organized the biennial Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010.
Nikolaus Hansen is a researcher at the French National Institute for Research in Computer Science and Control (INRIA). He received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before joining INRIA, he worked in applied artificial intelligence and genomics, and conducted research in evolutionary computation and computational science at the Technical University Berlin and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. He has been a main driving force behind the development of CMA-ES over many years.
(Tutorial) Title: Symmetry in Mathematical Programming
Speaker: Leo Liberti
This tutorial will introduce some basic concepts from group theory and how they apply to mathematical programming. We shall give an overview of the main existing research streams on this subject, and then discuss the latest developments. We shall show how to put together existing computational tools (GAP, AMPL, CPLEX, Couenne, Rose, kept together using shell scripts) in order to automatically detect and exploit symmetry in a given mathematical programming instance.
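On a toy scale, symmetry detection can be illustrated without any of those tools: the hypothetical sketch below brute-forces the variable permutations that leave a tiny 0-1 program's formulation unchanged (practical detection relies on group/graph-automorphism software such as GAP or nauty, as the tutorial's pipeline does). All names and instance data are illustrative.

```python
from itertools import permutations

def formulation_symmetries(c, A):
    """Variable permutations that fix the formulation of
        min c.x  s.t.  A x >= b,  x binary
    i.e. leave the objective unchanged and map the constraint rows onto
    themselves (the right-hand side b is assumed uniform).  Exhaustive
    search: only viable for tiny instances."""
    n = len(c)
    rows = sorted(A)
    syms = []
    for pi in permutations(range(n)):
        if [c[pi[j]] for j in range(n)] != list(c):
            continue  # objective not invariant under this permutation
        permuted = sorted([row[pi[j]] for j in range(n)] for row in A)
        if permuted == rows:
            syms.append(pi)
    return syms

# Vertex cover of a triangle: every permutation of the three variables
# is a formulation symmetry, so the symmetry group is the full S3.
triangle = formulation_symmetries([1, 1, 1], [[1, 1, 0], [0, 1, 1], [1, 0, 1]])
```

Once such a group is known, a solver can exploit it, for instance by adding symmetry-breaking constraints or pruning symmetric branches of the search tree.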
Leo Liberti received his PhD in 2004 from Imperial College, London. He then obtained a postdoctoral fellowship at Politecnico di Milano, and has been at LIX, Ecole Polytechnique since 2006, where he is an associate professor. He co-founded (and currently heads) the System Modelling and Optimization (SYSMO) team, he is co-director of the Optimization and Sustainable Development (OSD) Microsoft-CNRS sponsored chair, and is vice-president of the Computer Science department. He is Editor-in-Chief of 4OR, and holds associate editorships with several international journals (DAM, JOGO, ITOR, EURJCO, CMS). He has published more than 100 papers on mathematical programming and optimization techniques and applications.
(Tutorial) Title: Intelligent Optimization with Submodular Functions
Speaker: Andreas Krause
In recent years, submodularity, a discrete analogue of convexity, has emerged as very useful in a variety of machine learning problems. Similar to convexity, submodularity makes it possible to efficiently find provably (near-) optimal solutions. In this tutorial, I will introduce the notion of submodularity, discuss examples and properties of submodular functions, and review algorithms for submodular optimization. I will also cover recent extensions to the online (no-regret) and adaptive (closed-loop) settings. A particular focus will be on relevant applications such as active learning and optimized information gathering, ranking, and algorithm portfolio optimization.
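The workhorse algorithm behind these guarantees can be sketched for the coverage function, a standard example of a monotone submodular function (the instance data below is made up for illustration):

```python
def greedy_max_coverage(sets, k):
    """Greedy maximisation of the coverage function under a cardinality
    budget k: repeatedly pick the set covering the most still-uncovered
    elements.  For monotone submodular objectives, this greedy solution
    is provably within a factor (1 - 1/e) of optimal."""
    covered, chosen = set(), []
    for _ in range(k):
        gains = {name: len(elems - covered)
                 for name, elems in sets.items() if name not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] == 0:          # nothing left to gain: stop early
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Pick 2 of 4 candidate sets to cover as many elements as possible.
universe_sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
chosen, covered = greedy_max_coverage(universe_sets, 2)
```

Diminishing returns is exactly what makes this myopic strategy safe: each set's marginal gain can only shrink as the covered pool grows.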
Andreas Krause received his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004) and his Ph.D. in Computer Science from Carnegie Mellon University (2008). He joined the California Institute of Technology as an assistant professor of computer science in 2009, and is currently assistant professor in the Department of Computer Science at the Swiss Federal Institute of Technology Zurich. His research is in adaptive systems that actively acquire information, reason and make decisions in large, distributed and uncertain domains, such as sensor networks and the Web. Dr. Krause is a 2010 Kavli Frontiers Fellow, and received an NSF CAREER award, the Okawa Foundation Research Grant recognizing top young researchers in telecommunications, as well as awards at several premier conferences (AAAI, KDD, IPSN, ICML, UAI) and the ASCE Journal of Water Resources Planning and Management.