Abstracts of (Visiting) Lectures



Seminar talk on April 22nd, 2024

Dr. Lie Meng Pang Department of Computer Science and Engineering, Southern University of Science and Technology, China

Fair performance comparison of multi-objective evolutionary algorithms

The field of evolutionary multi-objective optimization (EMO) has been very active in recent years. Various EMO algorithms are proposed every year. The performance of a newly designed algorithm is usually evaluated by computational experiments in comparison with existing algorithms. However, a fair comparison of different EMO algorithms is not easy, since the evaluated performance of each algorithm usually depends on the experimental setting. This is also because solution sets, rather than single solutions, are evaluated. In this talk, we will explain and discuss various difficulties in the fair performance comparison of EMO algorithms related to the following four issues: (i) the termination condition of each algorithm, (ii) the population size of each algorithm, (iii) performance indicators, and (iv) test problems. For each issue, its strong effects on comparison results are clearly demonstrated. Then, we will discuss the handling of each issue for fair comparison. Finally, we will also suggest some promising future research topics related to each issue.
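
One of the difficulties above, the dependence of comparison results on the experimental setting, is easy to reproduce. The following self-contained sketch (our illustration, not material from the talk; the solution sets are invented) shows that even the popular hypervolume indicator can rank two nondominated sets differently depending on the chosen reference point:

```python
# Toy 2-objective hypervolume (minimization): the ranking of two
# nondominated sets flips when the reference point is moved.
def hypervolume_2d(points, ref):
    """Area dominated by `points` with respect to `ref` (both objectives minimized)."""
    pts = sorted(points)                      # ascending f1 => descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # slab between point and ref
        prev_f2 = f2
    return hv

set_a = [(1.0, 9.0), (9.0, 1.0)]              # two extreme solutions
set_b = [(3.9, 4.1), (4.1, 3.9)]              # two knee-region solutions

for ref in [(10.0, 10.0), (100.0, 100.0)]:
    hv_a, hv_b = hypervolume_2d(set_a, ref), hypervolume_2d(set_b, ref)
    print(f"ref={ref}: HV(A)={hv_a:.2f}, HV(B)={hv_b:.2f}, "
          f"better: {'A' if hv_a > hv_b else 'B'}")
```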



Seminar talk on April 22nd, 2024

Prof. Hisao Ishibuchi Department of Computer Science and Engineering, Southern University of Science and Technology, China

New Research Directions in Evolutionary Multi-Objective Optimization

In the field of evolutionary multi-objective optimization (EMO), various EMO algorithms have been proposed in the last three decades under the following widely-accepted implicit assumption: The final population is presented to the decision maker. Thus, the goal of EMO algorithm design is to find a good final population. The performance of EMO algorithms is evaluated using the final population. Recently, two different directions have been actively studied. One is the use of an unbounded external archive, and the other is the Pareto front (or Pareto set) modelling. In the first direction, all the examined solutions are stored in the archive. Then, an arbitrary number of solutions can be selected from the archive for the decision maker. In the second direction, the Pareto front (or Pareto set) is modelled by supervised learning based on examined non-dominated solutions or reinforcement learning based on a performance indicator. Then, an arbitrary number of final solutions can be obtained from the model for the decision maker. These two approaches are much more flexible than the traditional EMO algorithm framework since an arbitrary number of solutions can be presented to the decision maker, which may increase the practical usefulness of EMO-based decision making. In this talk, first I will explain the traditional EMO framework and the two new directions (the use of an unbounded external archive, and the modelling of the Pareto front/set). Then, I will discuss some interesting research challenges related to the two new directions such as subset selection from large candidate sets, use of examined solutions for generating new solutions, and the inverse mapping from the m-dimensional objective space to the n-dimensional decision space where n >> m.
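
The subset selection challenge mentioned at the end can be made concrete with a small sketch (ours, not the speaker's method; a distance-based greedy rule stands in here for the hypervolume- or indicator-based selection studied in this line of work):

```python
# Farthest-first (greedy max-min distance) selection of k representative
# objective vectors from a large archive of examined solutions.
import math, random

def farthest_first(candidates, k):
    selected = [candidates[0]]
    rest = candidates[1:]
    while len(selected) < k and rest:
        best = max(rest, key=lambda c: min(math.dist(c, s) for s in selected))
        selected.append(best)
        rest.remove(best)
    return selected

random.seed(0)
# stand-in for objective vectors stored in an unbounded external archive
archive = [(x := random.random(), 1 - x + 0.05 * random.random())
           for _ in range(200)]
print(farthest_first(archive, 5))
```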



Seminar talk on February 19th, 2024

Prof. Margaret M. Wiecek School of Mathematical and Statistical Sciences, Clemson University, USA

On Two-Stage Multiobjective Linear Programs: Formulations and Solution Methods

Many decision-making problems under uncertainty are resolved in a two-stage decision process. Strategic decisions, being here-and-now decisions, are made in the first stage, while tactical/operational decisions, being wait-and-see decisions, are made in the second stage. For example, in engineering design, a two-stage decision-making structure is common because certain high-level design decisions must be made to allow further development of physical or simulation models from which additional low-level design decisions can be made accordingly. For instance, the selection of a design configuration is conducted in the first stage, whereas the second stage deals with the decisions in terms of product utilization and applicability in various scenarios.

Two-stage multiobjective linear programs (TSMOLPs) model two-stage decision processes under uncertainty having conflicting linear objectives and linear constraints at every stage. Addressing the worst-case uncertainty scenario, the TSMOLP is transformed into the two-stage robust counterpart (TSrMOLP) whose efficient solutions are the robust-efficient solutions for the original problem.

The TSrMOLP can be studied in two ways. One goal is to compute the first-stage feasible solutions that are efficient with respect to the first and second-stage objectives. The other goal is to recognize the first- and second-stage decision makers' preferences and develop an interactive procedure to compute a first-stage feasible solution that is the most preferred efficient solution with respect to the first- and second-stage objectives.

The assumptions on discrete or continuous uncertainty and on the number of second-stage objectives, and the application of weighted-sum scalarization, transform the TSrMOLP into single-objective optimization problems (SOPs) of an increasing level of difficulty. The SOPs' optimal solutions provide exact or approximate efficient solutions to the TSrMOLP. Solution approaches using Benders' decomposition, a Parametric Linear Programming solver, and the Biconvex Approximate Simplex method are given and illustrated with biobjective examples.
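
As a hedged illustration of how weighted-sum scalarization turns a multiobjective problem into SOPs (a deliberately tiny deterministic LP of our own, not the two-stage robust formulation of the talk):

```python
# Weighted-sum scalarization of a toy biobjective LP:
#   min (x1, x2)  subject to  x1 + x2 >= 1,  0 <= x <= 1.
# Each weight vector yields one single-objective LP whose optimum is a
# (supported) efficient solution.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([1.0, 0.0])        # first objective
c2 = np.array([0.0, 1.0])        # second, conflicting objective
A_ub = np.array([[-1.0, -1.0]])  # x1 + x2 >= 1 rewritten as -x1 - x2 <= -1
b_ub = np.array([-1.0])

for w in [0.1, 0.5, 0.9]:
    res = linprog(w * c1 + (1 - w) * c2, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 1), (0, 1)])
    print(f"w={w}: x={res.x}, objectives=({c1 @ res.x:.2f}, {c2 @ res.x:.2f})")
```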



Seminar talk on November 9th, 2023

Prof. Nirupam Chakraborti and Dr. Pavel Bastl Faculty of Mechanical Engineering, Czech Technical University

Robot Calibration

Robot calibration is a process that identifies the real parameters of a robot as a system. The robot itself can be considered an electro-mechanical or mechatronic system consisting of mechanical parts, sensors, drives, a control system, and other components. All these parts have to be controlled in a way that allows the robot's intended application. The application defines required parameters that have to be fulfilled, and these parameters must be consistent with the parameters the robot is capable of achieving. One of the most commonly used parameters in industry is the precision of the end-point position of the robot. For this purpose, the robot is represented by its kinematic model, which involves the robot's design parameters. Since the mechanical parts of the robot are manufactured with finite precision, the calibration process should provide the values of the design parameters that maximize the positional precision achieved by the real robot. In this approach, the position of the robot's end-point is measured precisely by expensive devices (laser trackers), which requires the robot to be removed from its application for a certain amount of time. This widely used approach, however, does not account for other aspects that can significantly influence the real precision of the end-point position: the stiffness of mechanical parts, the precision of the sensors used for robot control, the control loops, and also environmental conditions such as temperature, wear and tear of the mechanical parts, and others. For this reason, calibration should be seen as a more sophisticated process, based on more sophisticated models and on more sources of data that can be acquired from the robot. Some aspects can be modeled by appropriate methods, while others may be very difficult to capture or even unknown. On the other hand, there may be additional sources of data usable in the calibration process: data from precise devices, which are often very expensive and available only offline, but also data from simpler, less accurate sensors that can be obtained online. The data should also be considered a source of information about which aspects affect the real precision of the robot's end-point. Therefore, evolutionary algorithms can provide valuable information not only for the identification of sophisticated surrogate models, but also about which aspects affect, for example, the accuracy of the robot's end-point position.
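
As a minimal, hedged sketch of calibration as parameter identification (our toy example with an invented planar two-link arm, not the speakers' models or data), one can recover link lengths from noisy end-point measurements by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import least_squares

def forward_kinematics(lengths, joints):
    """End-point (x, y) of a planar 2-link arm for given joint angles."""
    l1, l2 = lengths
    t1, t2 = joints[:, 0], joints[:, 1]
    return np.column_stack([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                            l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

rng = np.random.default_rng(0)
true_lengths = [0.52, 0.31]                    # "real" manufactured values
joints = rng.uniform(-np.pi, np.pi, (50, 2))   # recorded joint configurations
measured = forward_kinematics(true_lengths, joints) + rng.normal(0, 1e-3, (50, 2))

residuals = lambda p: (forward_kinematics(p, joints) - measured).ravel()
fit = least_squares(residuals, x0=[0.5, 0.3])  # start from nominal design values
print("identified link lengths:", fit.x)
```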



Seminar talk on September 6th, 2023

Assoc. Prof. Hemant Kumar Singh University of New South Wales, Australia

Recent advancements in evolutionary multi-objective optimization

Multi-objective optimization problems (MOPs) are widely encountered in several disciplines including engineering, operations research, and economics. The theoretical solution to such problems comprises not only one but a set of best trade-off designs in the objective space, known as the Pareto-optimal front (PF). Even though there exists a significant body of literature investigating MOPs, there remain several open and emerging challenges. These include, but are not limited to, developing computationally efficient methods to search a good approximation of the PF, identifying preferred solutions on the PF for practical implementation, and designing benchmarking instances, metrics and practices for quantitative comparisons between solution methods. In this talk, I will provide a brief overview of several methods developed in recent years in our research group to address these issues. Most of the methods presented are developed within the framework of evolutionary algorithms (EAs), which are often a viable choice when the underlying objective and/or constraint functions are highly non-linear or black box in nature.

Biography: Hemant Kumar Singh is an Associate Professor at the School of Engineering and Technology at the University of New South Wales (UNSW), Australia. He completed his PhD from UNSW in 2011 and B.Tech in Mechanical Engineering from Indian Institute of Technology (IIT) Kanpur in 2007. He worked with General Electric Aviation at John F. Welch Technology Centre as a Lead Engineer during 2011-13. His research interests include the development of evolutionary computation methods to deal with various challenges such as multiple objectives, constraints, uncertainties, hierarchical (bi-level) objectives, and decision-making. He has over 125 refereed publications on these topics. He is the recipient of two Discovery Project Grants (2019-22, 2022-24) from the Australian Research Council, the Endeavour Australia Fellowship 2018-19, and the IEEE CEC Best Paper Award 2023, among others. He is an Associate Editor for IEEE Transactions on Evolutionary Computation and has been in the organizing team of several conferences, e.g., IEEE CEC (Program co-chair 2021), SSCI (MCDM co-chair 2020-23), ACM GECCO (RWACMO workshop co-chair 2018-21). More details of his research and professional activities can be found at http://mdolab.net/Hemant/



Seminar talk on June 14th, 2023

Dr. Dmitry Podkopaev Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Value function as a model for multiobjective preferences

During interactive processes of solving multiobjective optimization problems, the Decision Maker (DM) iteratively provides preference information in response to the information obtained from the method. The preference information changes as the DM learns about the problem. We assume that the source of preference information consists of two parts: the personal view of the DM on the advantages and disadvantages of different solutions (the constant part), and the knowledge of the problem that evolves during the solution process. We prove that under general rationality assumptions, the constant part can be represented by a value function, and the knowledge about the problem by the set of reachable Pareto optimal solutions. This model can be used to study human behavior in interactive multiobjective optimization.



Seminar talk on May 17th, 2023

Dr. Otto Pulkkinen Medical Immuno-Oncology Research Group (MIORG), University of Turku

Improving Selectivity of Combinatorial Anti-Cancer Therapies by Preclinical Multiobjective Optimization

Combinations of anti-cancer drugs often show higher response rates, increased efficacy, and a reduced probability of drug resistance in comparison to single-agent therapies. Yet their safety must be carefully assessed: a combination can exhibit even supra-additive toxicity, resulting in a vanishing therapeutic window. In this talk, I will introduce a preclinical, biobjective framework for optimizing drug combination selectivity, and show why even additive combinations without any synergistic interaction may exhibit a therapeutic advantage over monotherapies. The theory is applied to the design of combinations of novel immunotherapies with more traditional cancer therapies to increase the responsiveness of tumors that are otherwise cold to immunotherapies. Extension of the theory to include tumor heterogeneity, and hence to optimization with a higher number of optimization criteria, will be discussed.



Seminar talk on April 19th, 2023

Ana B. Ruiz Senior Lecturer, Department of Applied Economics (Mathematics), University of Málaga, Spain

Different Applications of Multiobjective Optimization Approaches

Multiobjective optimization methods have widely demonstrated their capability for solving real-world problems in different fields of research. In this talk, three applications of these approaches will be described.

Firstly, I will present two new multiobjective optimization models for portfolio selection, where four objective functions are considered in each of them to find efficient portfolios while avoiding the over-diversification problem (which consists of reducing the portfolio's return without significantly reducing its risk by introducing new assets into the portfolio). The first model pursues an equal distribution of the capital across all assets as a strategy to reduce risk, while the second one is aimed at allocating the capital in such a way that the individual contributions of the assets in the portfolio to the total risk are equivalent, i.e., it seeks to spread the risk equally across all assets. Because of the complexity of the models proposed, the evolutionary algorithm GWASF-GA has been applied to generate an approximation of the Pareto optimal fronts in both cases. The results obtained allowed us to conclude that both models generate portfolios that are notably more balanced in terms of proportions and contributions to risk, and better diversified, compared to the portfolios of the classical mean-variance model.
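
To make the two diversification ideas tangible, here is a hedged sketch (our reading, not the authors' objective functions; the covariance matrix and weights are invented) of how one might measure a portfolio's distance from equal capital allocation and from equal risk contributions:

```python
import numpy as np

def risk_contributions(w, cov):
    """Per-asset contribution to portfolio volatility; contributions sum to it."""
    sigma = np.sqrt(w @ cov @ w)
    return w * (cov @ w) / sigma

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = np.array([0.5, 0.3, 0.2])                 # candidate portfolio weights

n = len(w)
capital_gap = np.abs(w - 1 / n).sum()         # distance from equal capital
rc = risk_contributions(w, cov)
risk_parity_gap = np.abs(rc - rc.sum() / n).sum()  # distance from risk parity
print(f"capital gap={capital_gap:.3f}, risk-parity gap={risk_parity_gap:.3f}")
```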

The second application is related to the study of the scientificity level attributed in Spain to several professions. The scientificity of a profession refers to the quality of the scientific procedure followed: the more objective, unbiased, and neutral the procedure and the results obtained, the better the level of scientificity. While professions such as physician, medical doctor, or researcher are highly valued in society, other professions belonging to the social sciences or humanities usually have considerably lower scientific recognition. In this work, we propose statistical models to estimate the scientificity level of a set of professions, and we study how to simultaneously improve their scientific perception by means of multiobjective optimization methodologies. The results indicate some policies to be followed to increase the scientificity of the professions under study.

The last application is focused on the study of the sustainability of European countries by means of both econometrics and multiple criteria decision-making techniques. Using individual sustainable development indicators available in the EUROSTAT database (from 2010 to 2019), we first build composite indicators to assess the economic, social, and environmental dimensions of the European countries. These composite indicators are built using the Multiple Reference Point Weak and Strong Composite Indicators methodology, which is a technique based on the reference point preferential scheme. Secondly, based on the information obtained, we perform an econometric analysis (using panel data) to regress the composite indicators considering only the individual indicators that are somehow controllable by policy makers. The main motivation is to get some insights into the impact that a modification of these controllable individual indicators would have on the overall sustainability situation of the territories. Finally, we focus on Spain, whose sustainability situation can be improved, and we build a multiobjective optimization problem based on the econometric analysis previously performed, which is aimed at identifying the most desired compromise among the three sustainability dimensions as a way to enhance its sustainability situation in the future.



Seminar talk on April 12th, 2023

Prof. Kash Barker School of Industrial and Systems Engineering, University of Oklahoma, USA

Two-Stage Stochastic Program for Environmental Refugee Displacement Planning

Forced displacement is a global problem that requires planning for the relocation and integration of displaced people. Most studies focus on conflict-driven forced displacement, and hence the refugee resettlement problem. These studies generally focus on short-term planning and assume that demand within the fixed time interval is given. However, forced displacement, including environmental displacement as well as conflict-driven displacement, is not a one-time event. On the contrary, it is an ongoing and long-term process with dynamic parameters. We are interested in the long-term displacement problem, especially for climate-driven cases in which people will be forced to leave uninhabitable regions to escape slow-onset climate change impacts such as water stress, crop failure, and sea level rise. To reflect the long-term planning requirements of the climate-driven displacement problem in the parameters and the model, we propose a two-stage stochastic program where demand uncertainty is represented with various demand scenarios, demand and capacity are managed dynamically, and integration outcomes and related costs are optimized.
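
Schematically, and only as our own generic sketch (the authors' model has displacement-specific stages, dynamics, and objectives), a two-stage stochastic program with a finite scenario set S takes the form:

```latex
% Generic two-stage stochastic program with scenarios s and probabilities p_s:
% first-stage decisions x are fixed before the uncertainty resolves; recourse
% decisions y_s adapt to each scenario.
\begin{align*}
\min_{x,\, y_s}\quad & c^{\top}x + \sum_{s \in S} p_s\, q_s^{\top} y_s\\
\text{s.t.}\quad     & Ax = b,\qquad x \ge 0,\\
                     & T_s x + W y_s = h_s,\qquad y_s \ge 0\quad \forall s \in S.
\end{align*}
```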



Seminar talk on December 1st, 2022

Assoc. Prof. Luís Paquete University of Coimbra, Portugal

Hypervolume-based Representation and Scalarization: Results and Challenges

The hypervolume indicator measures the multidimensional volume of the union of axis-parallel boxes, each of which is spanned by a nondominated point and a pre-defined reference point. This indicator has been shown to have interesting properties, and it has gained popularity as a performance assessment method, as a selection criterion, and as an archiving strategy for multiobjective evolutionary algorithms. Moreover, under appropriate assumptions about the location of the reference point in the objective space, the hypervolume indicator takes its maximum value at the nondominated set. This result suggests that optimizing the hypervolume indicator might also be useful in the context of exact approaches for multiobjective optimization problems. In this talk, we consider two possibilities of applying the hypervolume indicator in the context of multiobjective combinatorial optimization: i) to consider the hypervolume indicator from the perspective of representation quality, where the goal is, given a nondominated set, to find a subset of a given cardinality that maximizes the hypervolume indicator, and ii) to use the hypervolume indicator as a scalarization method, leading to search procedures that find the nondominated set or a representation. We discuss applications of these methods to particular biobjective combinatorial optimization problems as well as challenges that arise when considering more than two objectives.
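
Task i) above can be tried out in miniature. The routine below is only our hedged sketch (greedy selection is a common heuristic for this subset problem, not necessarily the algorithm of the talk), in the simplest biobjective minimization setting:

```python
# Greedy hypervolume subset selection in 2D: repeatedly add the point
# whose inclusion increases the dominated area the most.
def hv2d(points, ref):
    pts, hv, prev = sorted(points), 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev - f2)
        prev = f2
    return hv

def greedy_hv_subset(nondominated, k, ref):
    chosen, pool = [], list(nondominated)
    for _ in range(k):
        best = max(pool, key=lambda p: hv2d(chosen + [p], ref))
        chosen.append(best)
        pool.remove(best)
    return chosen

front = [(i / 10, 1 - i / 10) for i in range(11)]   # toy nondominated set
print(greedy_hv_subset(front, 3, ref=(2.0, 2.0)))
```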



Seminar talk on November 16th, 2022

Ofer M. Shir, MIGAL, University of Upper Galilee, Israel

Algorithmically-Guided Scientific Discoveries

The majority of experimental sciences share the common basis of inherent physical observables that constitute objective functions and may undergo optimization. Accordingly, an underlying problem shared by scientists and engineers is to achieve optimal behavior of their systems and arrive at new discoveries while searching over an array of decision variables. This perspective reduces any scientific discovery to solving a Combinatorial Optimization problem, which also translates into a navigation problem in the landscape of possible experiments. The seminar talk will focus on the integration of Computational Intelligence into experimental systems within the Natural Sciences, which may benefit from solving such optimization problems, to form altogether an algorithmically-guided setup with a real-world assay as the objective function evaluation. We present two modern experimental use-cases, from the domains of Biochemistry and Postharvest, in which a simple randomized search heuristic is shown to locate solutions that outperform the best human practices and are adopted by the scientists. With the AI revolution taking place, we will then question whether more decisions in scientific experiments may be driven by the machine. We shall introduce the formulation of scientific hypotheses as the possible next leap-frog of AI, capitalizing on established knowledge representation frameworks (e.g., ontologies, Prolog encoding) and by using machine-driven causal inference.
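
The "simple randomized search heuristic" family mentioned above can be illustrated with a short, hedged sketch (ours; in the talk's use-cases the evaluation would be a real-world assay, and the actual heuristic may differ):

```python
# (1+1) evolutionary algorithm on bit strings: mutate, evaluate, keep the
# better of parent and child. `evaluate` stands in for an experiment.
import random

def one_plus_one_ea(evaluate, n, iters=2000, seed=0):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    best = evaluate(parent)
    for _ in range(iters):
        child = [b ^ (rng.random() < 1 / n) for b in parent]  # flip ~1 bit
        score = evaluate(child)
        if score >= best:                    # accept ties to keep exploring
            parent, best = child, score
    return parent, best

# stand-in objective: OneMax (count of ones); a real setup runs an assay
print(one_plus_one_ea(sum, n=20)[1])
```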



Seminar talk on October 19th, 2022

Dr. Federico Daniel Peralta Samaniego Loyola University, Spain

Can we use water surface vehicles for monitoring multiple Water Quality Parameters? On an interactive approach to this MO problem.

Autonomous Surface Vehicles (ASVs) come in very handy for monitoring Water Quality Parameters of polluted lakes and lagoons. Having the ASVs equipped with water quality sensors (for measuring pH, Temperature, Dissolved Oxygen, etc.) allows for safe and autonomous navigation dedicated to monitoring. When an ASV measures a parameter, a model of that parameter is created/updated, and utility functions are used to select measurement locations that, when measured, will decrease the uncertainty of this model. But each parameter yields a different model, and the utility function will select different measurement locations, so we have multiple objectives to be solved (obtaining the different parameter models). Moreover, we can only start/end the mission from one of the available harbors, the ASVs have a limited battery capacity, and we have a limited time window in which we can measure the parameters. Using the knowledge of a DM and an interactive framework is beneficial in this situation due to the increasing complexity of the problem and the open-ended solutions that can be provided. In this sense, the DESDEO framework can be used to this end and provide optimal solutions.
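
A hedged one-parameter sketch of the acquisition idea (ours, not the speaker's system; the pH field and way-points are invented): fit a Gaussian process to past measurements and head for the point of highest predictive uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
measured_locs = rng.uniform(0, 10, (8, 2))               # visited locations
ph_values = 7 + 0.3 * np.sin(measured_locs.sum(axis=1))  # stand-in pH readings

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-4)
gp.fit(measured_locs, ph_values)

candidates = rng.uniform(0, 10, (500, 2))                # reachable way-points
_, std = gp.predict(candidates, return_std=True)
print("next measurement location:", candidates[np.argmax(std)])
```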



Seminar talk on October 12th, 2022

Prof. Francisco Ruiz de la Rúa University of Málaga, Spain

Multiple Reference Point Composite Indicators. Further Developments and Applications

The development of methodologies for the construction of composite indicators is an area of growing interest for the scientific community, due to the immense amount of data available in all areas of society. The authors have developed several methods based on the reference point scheme, which allows establishing reference levels for the different indicators and expressing the results based on the position of each unit with respect to said levels. In this presentation, the different methods developed are reviewed, paying special attention to the degree of compensation that each of them allows. In addition, various applications of this methodology are briefly presented, in fields such as the evaluation of universities, environmental sustainability, or quality of life. Moreover, assessing performance implies considering all the relevant aspects, and the aim should be to achieve good outputs but also, simultaneously, to make efficient use of the available resources. For this reason, we propose a two-step procedure that combines the Multiple Reference Point methodology for building outcome-oriented composite indicators which, in the second step, are used as outputs in a Data Envelopment Analysis model. This methodology is applied to the evaluation of universities. Different scenarios are considered depending on the compensation between indicators. The information provided allows the decision-makers to detect the strengths and weaknesses of the universities and relate them to input-oriented efficiency.



Seminar talk on May 25th, 2022

Dr. Judit Lienert (Eawag - Swiss Federal Institute of Aquatic Science and Technology)

Learning from complex environmental real-world decisions for improving MCDA methods and advancing behavioral operational research

Most environmental decisions are complex. It is usually difficult to predict the outcomes of interventions to an environmental or engineered system, and we usually require interdisciplinary collaboration. Moreover, many stakeholders are involved in the decision, which requires participatory approaches in transdisciplinary projects. People usually have various interests, and stakeholders have to make trade-offs between achieving environmental, societal, and economic objectives. Uncertainty is usually high, including uncertainty of the predictions (the scientific evidence), the stakeholders’ preferences, the model, and possibly the future world. My work at Eawag, the Swiss Federal Institute of Aquatic Science and Technology, focuses on such “messy” environmental decisions. I have specialized in Multi-Criteria Decision Analysis (MCDA), which can be very useful to integrate the scientific evidence with the stakeholders’ preferences. In this talk, I will guide you through a structured decision-making process, based on Multi-Attribute Value and Utility Theory (MAVT/MAUT). I will highlight challenges and interesting research opportunities. These start with the way we set up the decision using Problem Structuring Methods (PSMs). Recent research indicates that PSMs have been neglected in MCDA, and we need to find rigorous methods that can be easily applied in practice. Moreover, I will introduce Behavioral OR, which is especially important when we elicit the preferences of stakeholders. I will also introduce some ways to tackle uncertainty. I will use examples from urban water management, river rehabilitation, creating a flood forecast and alert system for West Africa, and pesticide governance in Swiss agriculture to illustrate our research. So far, it was always possible to find compromise solutions despite conflicts of interest and high uncertainty. I look forward to a lively discussion with you!



Seminar talk on May 24th, 2022

Eero Lantto (Finnish Institute of Occupational Health)

Safety management and multiobjective optimization

The economic, environmental, and social dimensions are the three constituent parts of sustainability. The social dimension includes occupational health and safety, which is one of the virtues a responsible business may want to pursue. Safety management (SM) is an integral part of companies’ business management. The key aim of SM is to ensure that operations do not involve unacceptable risks. The acceptable risk level is always conditional, though. Traditionally, safety decision-making (SDM) has been based on manual on-site inspections by safety experts, and on statistical tools used by safety managers. Thereby, SDM has been based mostly on descriptive analytics and expert knowledge. Many industrial corporations possess significant amounts of data on occupational accidents and hazards in the form of reports. In the field of safety research, these data have been used in various machine learning studies. The ML experiments have usually aimed at predicting accident counts and accident severity. Topic modelling of safety-related text is another common target of application in the field. Hence, predictive analytics is already a relatively common topic in the field of safety research and safety management, while prescriptive analytics is still in its infancy. Multiobjective optimization has lots of potential as a method to improve SDM, and there are plenty of ways to do it. For example, MOO can be used to make industrial production processes safer by minimizing worker exposure to pollution while maximizing production capacity. Also, MOO can be used to make a specific job task safer, having physical workload as one of the objective functions to be minimized. From the point of view of sustainability, supply chains might be the area with the largest scale of impact to be achieved with MOO. This could be done by applying MOO in production method selection or supplier selection, to mention a few possibilities.



Seminar talk on December 7th, 2021

Tinkle Chugh (University of Exeter, UK)

Multi-objective Bayesian Optimisation over Sets

Bayesian optimisation methods have been widely used to solve problems with computationally expensive objective functions. In the multi-objective case, these methods have been successfully applied to maximise the expected hypervolume improvement of individual solutions. However, the hypervolume, and other unary quality indicators such as the multiplicative epsilon indicator, measure the quality of an approximation set, and the overall goal is to find the set with the best indicator value. Unfortunately, the literature on Bayesian optimisation over sets is scarce. This work uses a recent set-based kernel in Gaussian processes and applies it to maximise the hypervolume and minimise the epsilon indicator in Bayesian optimisation over sets. The results on benchmark problems show that maximising hypervolume using Bayesian optimisation over sets gives similar performance to non-set-based methods. The performance of using the epsilon indicator in Bayesian optimisation over sets needs to be investigated further. The set-based method is computationally more expensive than the non-set-based ones, but the overall time may still be negligible in practice compared to the expensive objective functions.
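
For intuition only, here is one classical way to build a kernel between point sets, the mean of pairwise RBF evaluations (a hedged sketch; the set-based kernel used in the work itself may be different):

```python
import numpy as np

def rbf(x, y, ls=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * ls ** 2))

def mean_pairwise_set_kernel(set_a, set_b, ls=1.0):
    """Average similarity over all cross-pairs of two approximation sets."""
    return float(np.mean([[rbf(a, b, ls) for b in set_b] for a in set_a]))

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # one approximation set
B = np.array([[0.1, 0.9], [0.9, 0.1]])   # a similar set
C = np.array([[5.0, 5.0]])               # a very different set
print(mean_pairwise_set_kernel(A, B), mean_pairwise_set_kernel(A, C))
```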



Seminar talk on November 23rd, 2021

Dr. Jari Toivanen (Accuray Inc, Sunnyvale, CA)

Mathematical and Computational Aspects of Radiation Therapy Planning

Radiation therapy is one of the most common treatment options for cancers, along with surgery and chemotherapy. The general goal is to kill or control malignant tumor cells. To accomplish this, the tumor has to receive a sufficient amount of radiation. On the other hand, the surrounding tissue and organs should receive a low dose to avoid undesired side effects. Commonly, a linear accelerator is used to deliver beams of radiation. Treatment planning aims to find beam directions, beam-on times, and the shapes of these beams. This problem can be formulated as an optimization problem with a computationally expensive objective function. One possible formulation and how it can be solved will be discussed. Also, related imaging, organ segmentation, dose calculation, and motion management problems are briefly described.
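
The core of the planning problem can be caricatured in a few lines (our hedged toy, not the clinical formulation; real planning uses far richer dose-volume objectives and constraints): choose nonnegative beam weights so that the delivered dose matches a prescription.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
A = rng.uniform(0, 1, (30, 6))               # dose per unit beam-on time,
                                             # 30 voxels x 6 candidate beams
d = np.where(np.arange(30) < 10, 1.0, 0.2)   # tumor voxels want dose 1.0,
                                             # healthy tissue much less

weights, residual = nnls(A, d)               # nonnegative least-squares fit
print("beam-on weights:", np.round(weights, 3), "residual:", round(residual, 3))
```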



Seminar talk on May 19th, 2021

Annika Kangas and Reijo Mykkänen (Natural Resources Institute Finland (LUKE))

Discussions about potential collaboration with researchers from the Natural Resources Institute, Finland on forest treatment management with multiobjective optimization and uncertainty handling

In the seminar, researchers from LUKE will tell us about their research challenges, and we will describe some of our methods. They want to hear about new interactive methods, so we could demonstrate DESDEO as well.



Seminar talk on January 13th, 2021

Manuel Berkemeier (Paderborn University, Germany)

Derivative-Free Trust Region Methods for Unconstrained and Convexly Constrained Multiobjective Optimization

Many real-world optimization problems require computationally expensive objective function evaluations. For such problems, it consequently becomes infeasible to approximate derivatives in high-dimensional settings.

In this talk, a flexible trust region framework for unconstrained and convexly constrained multiobjective problems is presented. It is designed to require as few objective function evaluations as possible by using local radial basis function surrogate models. I will briefly revisit the basic concepts of trust region algorithms and why they naturally allow for the incorporation of local surrogates. The construction of suitable fully linear models is explained, with numerical experiments illustrating their efficacy.
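
The surrogate ingredient can be sketched briefly (our hedged illustration using SciPy's generic RBF interpolant, not the speaker's framework or its fully linear models):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

expensive = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2   # stand-in objective

center, radius = np.array([1.0, 0.5]), 0.3
rng = np.random.default_rng(2)
sample = center + radius * rng.uniform(-1, 1, (10, 2))  # points in trust region
model = RBFInterpolator(sample, expensive(sample))      # local RBF surrogate

trial = (center - np.array([radius, 0.0]))[None, :]     # a trial step
print("surrogate:", model(trial)[0], "truth:", expensive(trial)[0])
```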



Seminar talk on October 14th, 2020

Karthik Sindhiya (Finnopt)

ML tools for researchers: Food for thought

ML is rapidly changing the productivity landscape in a wide variety of industries across all verticals and horizontals. However, one may feel that education and research, the birthplace of ML, have not leveraged its benefits. Hence, there is a need for ML-based productivity tools for researchers that not only drastically boost their productivity but also help generate better research articles. In this talk, we discuss some of these topics and some initial results, and chalk out potential collaboration and funding ideas.



Seminar talk on May 6th, 2020 (Cancelled)

Manuel Berkemeier (Paderborn University)

Radial Basis Function Models as Surrogates in Multiobjective Optimization

Multiobjective Optimization is a useful tool to find preferable configurations in engineering applications with several objectives. Unfortunately, many real-world applications require time-consuming simulations in order to evaluate the objectives. Hence, a multiobjective optimization routine using as few of these expensive evaluations as possible is desirable.

In this talk, we present results of using (global) Radial Basis Function models as surrogates for the original objective functions to approximate the Pareto Set and the Pareto Frontier of an industrial problem, which is concerned with simultaneous testing and development of autonomous vehicles.

The problem relies on black box simulations that are parameterized by decision variables as well as additional testing variables (i.e., different driving scenarios the safety of which has to be ensured), and the goal is to find decision vectors that yield optimal compromises with respect to conflicting criteria regarding the performance as well as the safety of the vehicle.

The challenges in building reliable and cheap surrogate models are discussed and a method for online sampling of new training data is presented. The resulting surrogate models can be incorporated into arbitrary established optimization routines.

Finally, first results on locally valid models to find individual Pareto critical points of the original problem are shown. Early findings of the resulting derivative-free trust region algorithm are promising, and critical points are found using only a few function evaluations.



Seminar talk on March 18th, 2020

Dave Sayers

How language technology will break linguistics (and some thoughts about fixing it)

What happens when we’re all cyborgs, permanently hooked up to Augmented Reality (AR)? “Within the next 10 years, many millions of people will … walk around wearing relatively unobtrusive AR devices that offer an immersive and high-resolution view of a visually augmented world” (Perlin 2016: 85). Linguistics must adapt, or risk irrelevance.

I begin this talk by outlining a likely near-future scenario in which handheld mobile phones have been widely replaced by devices integrated with our ears, eyes, and hands – and then, further into the future (but not that far!), directly with our brains. This is not science fiction. It is a simple progression of current prototype hardware, the subject of huge corporate R&D investments. It is coming.

And it will break linguistics.

Let’s say you are having a conversation, and you are both using advanced AR kit to augment your voice in real time into each other's language. Who (what) are the 'interlocutors' here? Let’s say in the same conversation you want to check a fact, so you wiggle your fingers to activate the right search, and that fact instantly appears in your visual field, and goes straight into your conversation without missing a beat. How can we account for that with pragmatic theory, and turn-taking? Your conversation descends into an argument, and soon you realise you want to sue! You call your lawyer… an exceptionally polite robot who instantly scans all case law and predicts your chances in court. What does that mean for legal meaning embedded in language, and for linguistic politeness strategies between judges, lawyers and clients? On the way home, you meet a deaf friend who uses sign language. Your respective hardware does a good job of turning signs into words and vice versa, but a lot of nuances get lost. It’s not the same as the two spoken languages in your last conversation. This friend benefits a bit less from the new technology. On the bus home, you see someone begging for change. They have one of those old mobile phones, displaying an adaptive sign in your language (or anyone else’s) but it’s one-way; they can’t otherwise communicate. You realise that some old layers of inequality have followed this new technology into the world. Back home, you go about your hobby, learning a new language… by interacting in a virtual world of very patient automated chat partners.

All of the above is completely wide open for linguistic research. Put less optimistically, linguistics has no idea what is about to hit it! How can linguistics adapt? I’ll think about ways the discipline could be reoriented towards these technologically fluid scenarios, in the lab, in the field… in the future.



Seminar talk on May 14th, 2019

Otto Pulkkinen (Univ. of Helsinki and Tampere Univ.)

Finding cancer-selective combination therapies by multiobjective optimization

Combination therapies are often required to treat patients with advanced cancers that have already escaped single-drug therapies through rewiring of cellular signaling pathways or other resistance mechanisms. Due to the huge number of potential drug combinations, there is a need for systematic approaches that identify safe and effective combinations for each patient. In this talk, I will present a multiobjective optimization method for finding pairs of drugs and higher-order drug combinations that show a maximal therapeutic effect and, at the same time, minimal adverse effects. The optimization method is widely applicable to various cancer types, and it takes as input only drug sensitivity profiling in patient-derived cell models. The performance of the method is demonstrated for advanced melanoma with a characteristic V600E mutation in the BRAF gene. The optimization method predicts a number of new co-inhibition partner drugs for vemurafenib, which is a selective BRAF-V600E inhibitor and currently the main therapy for advanced melanoma with this mutation.



Seminar talk on May 14th, 2019

Tommi Kärkkäinen (University of Jyväskylä)

Machine learning using Minimal Learning Machine and Extreme Minimal Learning Machine

Classically, machine learning methods and algorithms have been developed separately for unsupervised and supervised problems, i.e., without or with target variables. However, supervised distance-based learning was reborn in 2013 when the Minimal Learning Machine (MLM), utilizing distance regression, was proposed. In 2018, the distance-based kernel from the MLM was integrated with the extreme learning machine, and the novel machine learning variant referred to as the Extreme Minimal Learning Machine (EMLM) was suggested by yours truly. In this talk, I introduce both of these distance-based methods and provide some experimental results.
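
A compressed sketch of the MLM's distance-regression idea (our simplified one-dimensional reading, not the authors' reference implementation) fits a linear map from input-space distances to output-space distances and recovers predictions by multilateration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0])
R, T = X[:10, 0], y[:10]                      # reference inputs and outputs

Dx = np.abs(X[:, 0:1] - R[None, :])           # input-space distance matrix
Dy = np.abs(y[:, None] - T[None, :])          # output-space distance matrix
B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)   # distance regression

def predict(x_new):
    dy_hat = np.abs(x_new - R) @ B            # predicted output distances
    # multilateration: find the y whose distances to T best match dy_hat
    obj = lambda v: np.sum((np.abs(v - T) - dy_hat) ** 2)
    return minimize_scalar(obj, bounds=(-2, 2), method="bounded").x

print("prediction at x=1.0:", predict(1.0), "truth:", np.sin(1.0))
```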



Seminar talk on May 21st, 2019

Prof. Yaochu Jin (University of Surrey, UK)

Data-driven Evolutionary Optimization: Online and Offline Model Management

Data-driven surrogate-assisted evolutionary optimization has attracted increasing attention in the evolutionary computation, engineering design, and machine learning communities. This talk is going to introduce the basic ideas of surrogate-assisted evolutionary optimization, followed by a detailed discussion of different Bayesian approaches to surrogate management. We will then present a few recent advances in offline model management methods. The talk is concluded by examples of surrogate-assisted optimization in solving real-world problems.



Seminar talk on May 14th, 2019

Prof. Kathrin Klamroth (Univ. of Wuppertal, Germany)

Generic Scalarization-Based Algorithms in Multiobjective Optimization

Generic algorithms in multiobjective optimization are often based on the iterative solution of scalarized single-objective subproblems, each of which provides additional knowledge on the nondominated set. Given some (partial) knowledge on the nondominated set of a multiobjective optimization problem, the search region corresponds to that part of the objective space that potentially contains additional nondominated points. We consider a representation of the search region by a set of tight local upper bounds (in the minimization case). While the search region can be easily determined in the bi-objective case, its computation in higher dimensions is considerably more difficult and gives rise to interesting relations to computational geometry and to tropical algebra. We particularly discuss the usefulness of local upper bounds in generic scalarization based solution methods, aiming at concise representations of the nondominated set.
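
In the easy bi-objective case, the construction can be written down directly; the sketch below (ours, following the standard 2D sweep; notation differs from the talk) lists the tight local upper bounds for a set of nondominated points under minimization:

```python
def local_upper_bounds_2d(nondominated, M):
    """Tight local upper bounds for minimization, M = global upper bound.
    The search region is the union of the boxes strictly below these bounds."""
    pts = sorted(nondominated)               # ascending f1 => descending f2
    bounds = [(pts[0][0], M[1])]             # leftmost bound
    for (f1a, f2a), (f1b, _) in zip(pts, pts[1:]):
        bounds.append((f1b, f2a))            # corner between neighbours
    bounds.append((M[0], pts[-1][1]))        # rightmost bound
    return bounds

print(local_upper_bounds_2d([(1, 4), (2, 2), (4, 1)], M=(10, 10)))
# -> [(1, 10), (2, 4), (4, 2), (10, 1)]
```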



Seminar talk on April 25th, 2019

Tinkle Chugh (University of Exeter, UK)

A multiobjective optimization approach in building Gaussian process models

Gaussian processes (GPs) have been widely used in the optimization and machine learning communities. Some of the problems where GPs have gained their popularity are non-linear regression, classification, and Bayesian optimization (both single- and multiobjective). The main advantage of GPs is that they provide a predictive distribution of the data instead of a point prediction. The uncertainty provided by the distribution can further be used in decision making and in optimizing an acquisition function in Bayesian optimization. Despite their wide applicability, little attention has been paid to the problem of selecting hyperparameter values and kernel functions. Several options exist, and it is not straightforward to select particular hyperparameter values and a kernel function. In this talk, I will present a multiobjective optimization approach to building Gaussian process models which addresses the challenges of selecting hyperparameter values and kernels.
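
The selection dilemma is easy to reproduce; the hedged sketch below (ours, not the talk's method) fits GPs with three kernels and reports two quality measures that need not agree, which is exactly where a multiobjective view helps:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
X_tr, y_tr, X_te, y_te = X[:30], y[:30], X[30:], y[30:]

for kernel in [RBF(), Matern(nu=1.5), RationalQuadratic()]:
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X_tr, y_tr)
    lml = gp.log_marginal_likelihood_value_                   # training fit
    rmse = np.sqrt(np.mean((gp.predict(X_te) - y_te) ** 2))   # held-out error
    print(f"{kernel.__class__.__name__}: LML={lml:.2f}, test RMSE={rmse:.3f}")
```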



Seminar talk on April 3rd, 2019

Tinkle Chugh (University of Exeter, UK)

A study on using different scalarizing functions in Bayesian multiobjective optimization

Scalarizing functions have been used for many decades in the Multiple Criteria Decision Making (MCDM) community for converting a multiobjective optimization problem into a single-objective optimization problem. In the last few years, their use in evolutionary multiobjective optimization (EMO) has also increased, especially in decomposition-based algorithms. However, their use in solving expensive multiobjective optimization problems is scarce, and only a few studies exist. In this talk, I will present a review of different scalarizing functions from both the MCDM and EMO communities and their use in Bayesian multiobjective optimization when solving expensive multiobjective optimization problems. The results on different benchmark problems clearly show a correlation between the performance of different functions and the fitness landscape created by them.
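
Two of the classical functions in this family can be compared in a few lines (our hedged sketch with invented weights and vectors, not the talk's experimental setup); note how they can rank the same objective vectors differently, which shapes the fitness landscape an optimizer sees:

```python
import numpy as np

def weighted_sum(f, w):
    return float(np.dot(w, f))

def chebyshev(f, w, z_star):
    """Weighted Chebyshev distance to the ideal point z_star."""
    return float(np.max(w * (np.asarray(f) - z_star)))

w, z_star = np.array([0.5, 0.5]), np.zeros(2)
for f in [(1.0, 1.0), (0.1, 1.8)]:       # two candidate objective vectors
    print(f, "weighted sum:", weighted_sum(f, w),
          "Chebyshev:", chebyshev(f, w, z_star))
# the weighted sum prefers (0.1, 1.8); Chebyshev prefers (1.0, 1.0)
```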



Seminar talk on April 3rd, 2019

Richard Andrášik

Road network vulnerability and recovery process after disasters

Proper quantification of hazard for various types of events (e.g. natural processes and traffic crashes) ranks among the most prominent current topics of transportation network reliability research. Extremal events usually affect many nodes and links, which can even lead, in the worst cases, to the disintegration of a network into several parts. One of the main features of road networks is that the number of links outgoing from a single node is only seldom higher than five. This means that the interruption of a relatively small number of links can have a significant impact on the functioning of the entire network. Thus, there are two main aims which have to be addressed. First, combinations of concurrently interrupted links with the largest negative impact on the network need to be identified. Second, the reconnection of a road network has to be planned rapidly and efficiently after a disaster. Considering large networks, both these tasks are computationally demanding when a deterministic approach is used to solve them. Therefore, we applied stochastic approaches which are able to find a sufficiently good solution in a very short time in comparison to deterministic procedures. In particular, these approaches include simulated annealing and ant colony optimization.
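
The first task lends itself to a compact demonstration. The following is only our hedged toy (an invented six-node graph; the authors work with real road networks and also use ant colony optimization): simulated annealing searches for the pair of links whose interruption disconnects the most origin-destination pairs.

```python
import math, random

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 4), (4, 5), (5, 2)]

def connected_pairs(removed):
    """Node pairs still connected after removing the given links."""
    adj = {}
    for e in edges:
        if e not in removed:
            adj.setdefault(e[0], set()).add(e[1])
            adj.setdefault(e[1], set()).add(e[0])
    nodes = {n for e in edges for n in e}
    seen, pairs = set(), 0
    for n in nodes:
        if n in seen:
            continue
        comp, stack = set(), [n]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj.get(v, ()))
        seen |= comp
        pairs += len(comp) * (len(comp) - 1) // 2
    return pairs

def anneal(k=2, steps=500, temp=2.0, seed=0):
    rng = random.Random(seed)
    state = set(rng.sample(edges, k))
    cost = connected_pairs(state)            # lower = more disruptive
    for i in range(steps):
        cand = set(state)                    # neighbour move: swap one link
        cand.remove(rng.choice(sorted(cand)))
        cand.add(rng.choice([e for e in edges if e not in cand]))
        c = connected_pairs(cand)
        t = temp * (1 - i / steps) + 1e-9
        if c < cost or rng.random() < math.exp((cost - c) / t):
            state, cost = cand, c
    return sorted(state), cost

print(anneal())
```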



Seminar talk on December 19th, 2018

Yury Nikulin (University of Turku)

Modelling efficient routing in a dynamic network

In this work, we present a new bi-objective integer programming model for routing and scheduling in a time-dependent directed network, where edge weights vary with time. The objective is to find an algorithmic solution for the optimal sequence of location/time points which gives the shortest travel distance with the maximum number of visits. A local search heuristic based on time splitting is proposed for computationally intractable instances. The performance of the algorithm on real-scale data sets is evaluated. The results of this research are intended for various logistics applications, specifically maritime management services.



Seminar talk on December 18th, 2018

Jian-Bo Yang (Decision and Cognitive Sciences Research Centre, Alliance Manchester Business School, The University of Manchester)

Perspectives for deterministic methods in multiobjective optimization

In this presentation, we discuss the necessity of using both data and human judgments for scientific inference, intelligent modeling, and evidence-based decision making under uncertainty in business, management, and engineering systems. The focus is on the analysis of uncertainty in data and judgments, and on how to model various types of uncertainty in an integrated framework, including randomness, ambiguity, inaccuracy, and inconsistency. In particular, the paradigm of evidential reasoning (ER) will be introduced as an extension to Bayesian inference and conventional rule-based system modeling. A short overview of the ER developments, as inspired by real-world applications in a wide range of areas, will be provided first: from ER multiple criteria assessment (MCA) and decision making (MCDM) under uncertainty, to the ER rule for information fusion and a new maximum likelihood ER (MAKER) framework for big data analytics, probabilistic inference, and machine learning, and to the belief rule-base inference methodology using the evidential reasoning approach (RIMER), or belief rule-base (BRB) methodology in short, for intelligent modeling and knowledge-based systems. The details of the ER approach for MCA and MCDM, the RIMER/BRB methodology for universal nonlinear system modeling, and the ER rule and MAKER framework for information fusion and probabilistic inference with data and machine learning will be discussed together with real-world applications conducted by the researcher and his collaborators, who have worked and made main contributions in these areas for years. The key software packages with application examples that have been developed by the researcher and his colleagues over many years will be demonstrated.



Seminar talk on December 14th, 2018

Alberto Lovison (University of Padova, Italy)

Perspectives for deterministic methods in multiobjective optimization

While traditionally heuristic methods have been the most diffused strategies employed in multiobjective optimization (MOO), in the literature there are a number of attempts to adapt efficient and appealing deterministic strategies for multiple objectives. In this talk, we propose a faithful multiobjective translation of the DIRECT algorithm, a celebrated global optimization algorithm that uses the ideas of Lipschitz global optimization, but without requiring the knowledge of a global Lipschitz constant for the unknown function. We discuss several critical aspects emerging in the translation of the method for MOO that have not yet been solved satisfactorily in the attempts encountered in the literature.

Another recent proposal consists of dimensionality reduction and problem decomposition, a technique designed for attacking large-dimensional problems with several objectives. The curse of dimensionality is a severe obstacle to producing a satisfactory analysis of such problems; consequently, the solutions found may be extremely costly and still be sub-optimal. Our proposal consists in analyzing the strength of the functional dependence between inputs and outputs by means of a statistical technique called Analysis of Variance (ANOVA). On this basis, it is possible to split the large problem into a collection of subproblems with a subset of the original design variables and objective functions, which offer an approximation of the original undecomposed problem while admitting a thorough exploration of the small-dimensional domain. The tradeoff between approximation accuracy and problem dimensionality is discussed, along with an extension to interactive methods.
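
The premise, that outputs often depend strongly on only a few inputs, can be checked numerically. Below is our hedged stand-in for such an analysis (a crude binned estimate of first-order variance shares on an invented function, not the authors' ANOVA procedure):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, (20000, 3))
y = 4 * X[:, 0] ** 2 + X[:, 1] + 0.01 * X[:, 2]   # x2 barely matters

for j in range(3):
    bins = np.digitize(X[:, j], np.linspace(-1, 1, 21))
    labels = np.unique(bins)
    cond_means = np.array([y[bins == b].mean() for b in labels])
    counts = np.array([(bins == b).sum() for b in labels])
    # variance of the conditional means ~ first-order effect of x_j
    explained = np.average((cond_means - y.mean()) ** 2, weights=counts)
    print(f"x{j}: share of variance ~ {explained / y.var():.2f}")
```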



Seminar talk on November 13th, 2018

Dmitry Podkopaev (Polish Academy of Sciences)

Some approaches to visualization of multidimensional and big data

My presentation is based on the presentation by Prof. Dzemyda at the seminar of IBS PAN in Warsaw. He is an expert in data visualization from Vilnius University / Institute of Data Science and Digital Technologies, and a co-author of the book "Multidimensional Data Visualization: Methods and Applications" (2013).

When a researcher needs to analyse multidimensional data, it is often the case that visualizing the data helps to understand which methods to apply and how to set method parameters. Besides that, some techniques of dimensionality reduction can be applied for machine learning.



Seminar talk on October 24th, 2018

Francisco Ruiz (University of Malaga)

On building synthetic indicators using reference point techniques: approaches, reflections and applications

Synthetic indicators are increasingly recognised as a useful tool in policy analysis and public communication. They provide simple comparisons of units that can be used to illustrate the complexity of our dynamic environment in wide-ranging fields, such as competitiveness, governance, environment, press, development, peacefulness, tourism, economy, universities, etc. Their construction has been dealt with from several angles. Some authors claim that MCDM techniques are highly suitable in multidimensional frameworks when aggregating single indicators into a composite one, since this process involves making choices when combining criteria of different natures, and it requires a number of steps in which decisions must be made.

We propose a methodology based on the multicriteria reference point scheme, named MRP-WES. The decision maker can establish reference levels for each indicator, and the final outcome can be interpreted in terms of the position with respect to these levels. Besides, two different aggregations are proposed: the weak indicator, allowing for full compensation among the single indicators, and the strong indicator, not allowing for any compensation. The joint visualization of both synthetic indicators provides valuable information that may go unnoticed using other existing approaches. Some features of the methodology proposed are discussed, and some comparisons are made with other methods used for building synthetic indicators. Finally, some applications of the methodology proposed are reported.
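
As a hedged numerical sketch of the weak/strong contrast (our own simplified achievement function and data, not the exact MRP-WES formulas):

```python
import numpy as np

def achievement(value, levels):
    """Piecewise-linear position of `value` among increasing reference
    levels, mapped to [0, len(levels) - 1]."""
    return float(np.interp(value, levels, np.arange(len(levels))))

levels = [0.0, 0.25, 0.5, 0.75, 1.0]       # reference levels, all indicators
unit = {"environment": 0.9, "economy": 0.4, "social": 0.6}

scores = np.array([achievement(v, levels) for v in unit.values()])
print("weak (fully compensatory, mean):", scores.mean())
print("strong (non-compensatory, min):", scores.min())
```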



Margaret Wiecek (Clemson University)

On Highly Robust Efficient Solutions to Uncertain Multiobjective Linear Programs

Decision making in the presence of uncertainty and multiple conflicting objectives is a real-life issue, especially in the fields of engineering, public policy making, business management, and many others. The conflicting goals may originate from the variety of ways to assess a system's performance such as cost, safety, and affordability, while uncertainty may result from inaccurate or unknown data, limited knowledge, or future changes in the environment. To address optimization problems that incorporate these two aspects, we focus on the integration of robust and multiobjective optimization.

Although the uncertainty may present itself in many different ways due to a diversity of sources, we address the situation of objective-wise uncertainty only in the coefficients of the objective functions, which is drawn from a finite set of scenarios. Among the numerous concepts of robust solutions that have been proposed and developed, we concentrate on a strict concept referred to as highly robust efficiency in which a feasible solution is highly robust efficient provided that it is efficient with respect to every realization of the uncertain data. We apply this concept to uncertain multiobjective linear programs (UMOLPs).

We develop properties of the highly robust efficient set, provide its characterization using the cone of improving directions associated with the UMOLP, derive several bound sets on the highly robust efficient set, and present a robust counterpart for a class of UMOLPs. As various results rely on the polar and strict polar of the cone of improving directions, as well as the acuteness of this cone, we derive properties and closed-form representations of the (strict) polar and also propose methods to verify the property of acuteness. Moreover, we undertake the computation of highly robust efficient solutions. We provide methods for checking whether or not the highly robust efficient set is empty, computing highly robust efficient points, and determining whether a given solution of interest is highly robust efficient. An application in the area of bank management is included.



Seminar talk on September 3rd, 2018

Majid Soleimani-damaneh (University of Tehran)

On Robustness of Multiobjective Programming

This talk is devoted to presenting some new results about robustness in linear/nonlinear/nonsmooth multi-objective programming. In the linear case, the robustness order of a given efficient solution (with respect to the degree of interiority of the cost vector in the binding cone) and its calculation are discussed. Furthermore, robustness with respect to the eligible angle deviation of the cost vector in the binding cone is investigated. Obtaining the maximum eligible angle deviation and the connection between the two above-mentioned robustness standpoints are highlighted. In the nonlinear case, robustness in nonsmooth nonlinear multi-objective programming is addressed. The norm-based robustness is dealt with, and its relationship with proper efficiency is discussed. Some characterizations of robust solutions, in terms of the tangent cone and the non-ascent directions, are presented. Furthermore, the calculation of the robustness radius, followed by a comparison between different robustness notions, is presented.



Guest Lecture on August 21st, 2018

Manuel López-Ibáñez (University of Manchester)

Intersections between Machine Learning and Optimization

Although machine learning (ML) and mathematical optimization are usually regarded as separate fields, both are increasingly benefiting from advances in the other. Examples of the intersection between ML and optimization are the use of optimization for the tuning of hyper-parameters of machine learning models, or the use of machine learning models to approximate expensive objective functions in black-box optimization algorithms. The combination of these techniques is leading to "automatic machine learning" and "automatic configuration and selection of optimization algorithms" methodologies, which aim to simplify the task of selecting and adapting machine learning and optimization methods when tackling new problems.



Seminar talk on June 11th, 2018

Neill Bartie (Helmholtz-Institute Freiberg for Resource Technology)

Exploring Quantitative Modelling and Assessment Approaches for Circular Economy Systems

Circular Economy (CE) is a promising approach to the operationalisation of efforts to reduce the negative impacts of anthropogenic resource consumption, emissions, and waste generation on the planet, and to ensure its habitability for future generations. It advocates for a transition from linear to circular economies in which wastes, traditionally destined for landfill, are used as resources instead, reducing virgin resource consumption and environmental impact. Numerous approaches and methods have been developed for the quantification of different aspects of CE systems at various levels of detail. However, no single method can answer every question for every stakeholder, necessitating the integration of approaches that often speak different languages and assess conflicting objectives. The elaboration and evaluation of these systems require quantitative and holistic analysis, assessment, and optimisation of physical flows and environmental, social, and economic impacts. Such holistic analysis demands the use of sophisticated tools capable of dynamically balancing and predicting multi-component inputs and outputs. Moreover, these tools must be able to assess impacts related to all three pillars of sustainability (people, planet, profit). Not all processes can be quantified at the process model level of detail. As systems expand from the product or process level to meso and macro levels, the assessment boundary needs to cross over to methods with higher levels of aggregation. Where interacting process systems are physically or conceptually separated (e.g. geographically, economically, politically, across cultures and attitudes toward sustainability, or time), additional computational boundaries have to be crossed (e.g. currency exchange rates, regulation and legislation, pollution abatement technologies, societal attitudes, system dynamics). To quantify and optimise such multi-dimensional systems, methods that facilitate compatibility across these boundaries would be essential. In turn, this would enable the development of reliable CE tools that convey realistic and representative information to support good decision making, from individual purchasing decisions to national and global climate policy.

Seminar talk on September 6th, 2017

Santtu Tikka (University of Jyväskylä, Department of Mathematics and Statistics)

Fusion: A Comprehensive Software Package for Causal Inference

Causal inference relies on graphical models to represent causal relationships and assumptions about real-world phenomena. Fusion is a tool which enables users to effortlessly create and modify graphs through a graphical user interface, and to apply well-known inference algorithms including algorithms for identifiability, recoverability and transportability. Various tools to analyze graphs are readily available in Fusion, such as evaluation of d-separation, paths, admissible sets, instrumental variables and much more. Graphs can be easily shared and exported in publication-quality formats, and a diagnostic trace is provided for the inference algorithms.



Seminar talk on August 9th, 2017

Richard Allmendinger (Manchester Business School, The University of Manchester)

Data-driven optimization: Industrial case studies

In this talk I will discuss several industrial case studies in data-driven optimization, including (i) heuristic allocation of computational resources (joint project with ARM), (ii) online bidding for product advertisement (joint work with Dream Agility), and (iii) optimization of drug manufacturing processes (joint project with Biopharm Services). The main focus of the seminar will be on project (i), which is about deciding which computational projects (scripts) should run on which cluster such that the clusters are evenly utilized, as many projects as possible are completed, and projects belonging to the same group are distributed as evenly as possible across the clusters. After providing a formal problem definition, several heuristics to tackle the problem will be introduced and investigated. Finally, projects (ii) and (iii) will be introduced and their computational challenges outlined, hopefully leading to interesting discussions.



Visiting lectures on February 10th, 2017

Christer Carlsson (Abo Akademi University)

Big Data Analytics and Knowledge Mobilisation

The “big data” era has brought increasing demand for analytics skills, as big corporations have realized that modern information and communications technology (ICT), with some interesting developments in terms of the Internet of Things and the Digital Economy, has brought not only blessings but also a growing set of challenges. ICT has brought the capability to produce large and quickly growing sets of data, which are now both growing and changing in almost real time, demanding planning, problem solving and decision making in almost real time as well, as required in digital business (“if we are late, we are out – somebody else will take the market and the customers”). The big corporations have realized that they need people with the skills (i) to organize and make sense of the data, (ii) to find the logic driving events, actions and outcomes, (iii) to build models that can describe, explain and predict what is happening, and (iv) to communicate the insight over a number of media to groups of people in almost real time. People with these skills have analytics expertise.

I will give an overview of a number of industrial cases that we have been working on over the years, offer some insights on how to use mathematical models to handle some fairly complex problems, explain some surprising insights we have gained in some cases, and show some rather unconventional (or exotic) approaches that, to our surprise, actually work. One of my messages is the old rule of “requisite variety”: the models and the algorithms should be powerful enough to deal with the complexities offered by big data, and should offer platforms to actually work out the essential and crucial relations, and to get those relations right, as we cannot afford to build planning, problem solving and decision making on “perceptions, impressions and experience” in a fast-moving and highly competitive international business environment.



Visiting lectures on December 14th, 2016

Ignacy Kaliszewski (Polish Academy of Sciences)

When solving large-scale multiobjective optimization problems, solvers often hit memory or time limits. In such cases one is left with no information on how far the feasible solutions obtained before the solver stops are from the true Pareto front.

In this work we show how to provide such information when solving multiobjective multidimensional knapsack problems with commercial mixed-integer linear solvers accessible on open-access platforms.

We illustrate the proposed approach on bicriteria multidimensional knapsack problems derived from single-objective multidimensional knapsack problems taken from the Beasley OR Library.
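In the standard 0-1 form (assumed here; the abstract does not spell the model out), a bicriteria multidimensional knapsack problem with profit vectors c^1, c^2 and m capacity constraints reads:

    \max \; \Bigl( \sum_{j=1}^{n} c^1_j x_j, \; \sum_{j=1}^{n} c^2_j x_j \Bigr)
    \quad \text{s.t.} \quad \sum_{j=1}^{n} a_{ij} x_j \le b_i, \; i = 1,\dots,m, \qquad x_j \in \{0,1\}.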



Visiting lectures on November 30th, 2016

Alberto Lovison (Università degli Studi di Padova)

In this talk I will present a couple of problems, arising from industry and biology, that naturally involve optimization in infinite dimensions.

The first problem comes from the arena of motorbike competitions and deals with the management of uncertainty in the distribution of a family of trajectories. The rigorous methods available in the literature lead to an analysis which is mostly qualitative.

The second problem is the attempt to explain the profile of the xylematic channels (the microtubes carrying water from the roots to the leaves of a tree) by means of an optimization principle. Since the solution (a curve) belongs to an infinite-dimensional space, and the functional is a variable convex combination of two scalar functions, the formulation appears as an instance of multiobjective calculus of variations.



Visiting lectures on 14-27 August, 2016

Joshua Knowles (University of Birmingham)

ParEGO is a global optimization algorithm proposed by the presenter in 2005 for expensive multiobjective problems. Although ParEGO has been recognized by the multiobjective optimization community as an efficient method, the original implementation had several limitations. In this talk I will outline the changes we have implemented in an updated release (in early 2016), and the plans for further development while keeping true to the ParEGO framework (a scalarized version of Jones et al.'s EGO method). These improvements mostly concern scalability in the decision parameters, the maximum number of evaluations possible with ParEGO, methods for incorporating prior knowledge, and hacks for greater stability. Improvements in other aspects are also coming in forthcoming work on sParEGO (Purshouse et al.) and ParASEGO* (Hakanen et al.), very briefly outlined here. Finally, there will be a summary of the plans for a benchmarking suite for multiobjective surrogate-assisted methods.



Visiting lectures on 7-14 August, 2016

Handing Wang (University of Surrey)

Most existing work on evolutionary optimization assumes that there are analytic functions for evaluating the objectives and constraints. In the real world, however, the objective or constraint values of many optimization problems can be evaluated solely based on data, and solving such optimization problems is often known as data-driven optimization. In this paper, we divide data-driven optimization problems into two categories, i.e., off-line and on-line data-driven optimization, and discuss the main challenges involved therein. An evolutionary algorithm is then presented to optimize the design of a trauma system, which is a typical off-line data-driven multi-objective optimization problem, where the objectives and constraints can be evaluated using incident data only. As each single function evaluation involves a large amount of patient data, we develop a multi-fidelity surrogate management strategy to reduce the computation time of the evolutionary optimization. The main idea is to adaptively tune the approximation fidelity by clustering the original data into different numbers of clusters, and a regression model is constructed to estimate the required minimum fidelity. Experimental results show that the proposed algorithm is able to save up to 90% of the computation time without much sacrifice of solution quality.
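As a rough illustration of the clustering-based fidelity control described above (a sketch only, not the authors' code; the callback name evaluate and the size-based weighting are assumptions), one low-fidelity evaluation might look like:

    import numpy as np
    from sklearn.cluster import KMeans

    def clustered_objective(evaluate, data, k):
        # Replace the full incident data set by k cluster centres, each
        # weighted by its cluster size; a larger k means higher fidelity
        # (and higher cost), a smaller k a cheaper approximation.
        km = KMeans(n_clusters=k, n_init=10).fit(data)
        weights = np.bincount(km.labels_, minlength=k)
        return evaluate(km.cluster_centers_, weights)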



Visiting lectures on 7-14 August, 2016

Chaoli Sun (University of Surrey)

Surrogate models have been shown to be effective in assisting meta-heuristic algorithms in solving computationally expensive complex optimization problems. The effectiveness of existing surrogate-assisted meta-heuristics, however, has only been verified on low-dimensional optimization problems. In this paper, a surrogate-assisted cooperative swarm optimization algorithm is proposed, in which a surrogate-assisted particle swarm optimization algorithm and a surrogate-assisted social learning based particle swarm optimization algorithm cooperatively search for the global optimum. The cooperation between the particle swarm optimization and the social learning based particle swarm optimization consists of two aspects. First, they share promising solutions evaluated by the real fitness function. Second, the social learning based particle swarm optimization focuses on exploration while the particle swarm optimization concentrates on local search. Empirical studies on six 50-dimensional and six 100-dimensional benchmark problems demonstrate that the proposed algorithm is able to find high-quality solutions for high-dimensional problems with a limited computational budget.



Visiting lectures on May 30th, 2016

Anita Schöbel

Robust (single-objective) optimization has grown into an important field which is both practically relevant and mathematically challenging. In contrast, the literature on robust multi-objective optimization is sparse. This may be due to the fact that the robust counterpart of an optimization problem requires the implicit determination of a worst-case scenario in the objective function, which in multi-objective optimization turns out to be a multi-objective optimization problem again!

This talk will first review classic and more recent robustness concepts for single-objective optimization and then present possible concepts on how robustness for a Pareto solution may be defined. In particular, we will introduce the concepts of flimsily and highly robust solutions, different variants of set-based minmax solutions and also present a generalization of the recent concept of light robustness to multi-objective problems.

Furthermore, some approaches for computing robust efficient solutions will be shown. The concepts are illustrated on two examples: planning of short and secure flight routes, and location problems.



Visiting lectures on March 16th, 2016

Jeff Keisler

Sophisticated quantitative techniques for project management, notably PERT/CPM, attempt to minimize the risk that a project will fail to meet fixed requirements. But requirements themselves often vary, necessitating qualitative techniques to get or keep projects on track. This new work replaces the assumption of fixed requirements with new assumptions that allow for (1) a fully decision analytic treatment of project management decision making under uncertainty that (2) can be easily incorporated into existing project management techniques.



Visiting lectures on September 30th, 2015

Mariano Luque (Department of Applied Economics (Mathematics), University of Málaga)

A great variety of real decision problems involves working with multiple objective functions which are usually in conflict. Evolutionary algorithms have proved very useful and efficient in solving many multiobjective optimization problems, so it is necessary to have appropriate evolutionary algorithms able to solve different types of them. In this presentation I describe three evolutionary algorithms which differ in the preference information requested from the decision maker: a generating method (Global WASF-GA), a method with a priori information (WASF-GA) and an interactive version (Interactive WASF-GA). The three algorithms use an achievement scalarizing function (a Tchebychev metric plus an augmentation term) and a set of weight vectors whose inverse components are as evenly distributed as possible. At each generation, all individuals are classified into different fronts according to the values of the achievement scalarizing function for the different weight vectors and for a reference point, or even two reference points in the case of Global WASF-GA, depending on the algorithm considered. New proposals of these algorithms are also discussed.
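In standard notation (assumed here; q is the reference point, mu a weight vector, rho > 0 a small augmentation coefficient), an achievement scalarizing function of this type reads:

    s(q, x; \mu) = \max_{i=1,\dots,k} \mu_i \bigl( f_i(x) - q_i \bigr) + \rho \sum_{i=1}^{k} \mu_i \bigl( f_i(x) - q_i \bigr)

Individuals achieving smaller values of s for some weight vector are ranked into better fronts.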



Visiting lectures on August 11th, 2015

Anders Forsgren (KTH, Stockholm, Sweden)

Optimization has become an indispensable tool in many application areas. In this talk, I will discuss how research on fundamental method development and research in application areas may benefit from each other. I will discuss quasi-Newton methods and their relation to optimization problems arising in radiation therapy. In addition, I will discuss multiobjective optimization and robust optimization for radiation therapy. Finally, I will cover optimization of metabolic networks in a column-generation framework.

The research on radiation therapy is carried out jointly with RaySearch Laboratories AB. The research on cell metabolism is carried out jointly with the KTH School of Bioengineering.



Visiting lectures on June 24th, 2015

Chaoli Sun (University of Surrey)

Like most Evolutionary Algorithms (EAs), Particle Swarm Optimization (PSO) usually requires a large number of fitness evaluations to obtain a sufficiently good solution. This poses an obstacle to applying PSO to computationally expensive problems. We have proposed two different approximation strategies for fitness evaluation: one utilizes fitness inheritance (called FESPSO) and the other is assisted by surrogate models (called TLSAPSO).

In FESPSO, the fitness of a particle is estimated based on its positional relationship with other particles. More precisely, once the fitness of a particle is known, either estimated or evaluated using the original objective function, the fitness of its closest neighboring particle is estimated by the proposed estimation formula. If the fitness of its closest neighboring particle has not been evaluated using the original objective function, the minimum of all fitness values estimated for this position is adopted. In case more than one particle is located at the same position, the fitness of only one of them needs to be evaluated or estimated.
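The paper's exact estimation formula is not reproduced here; as a minimal stand-in illustrating the idea of inferring fitness from positional relationships, one could interpolate from neighbours whose fitness is already known (all names and the inverse-distance weighting below are assumptions of this sketch, not FESPSO itself):

    import numpy as np

    def estimate_fitness(position, known_positions, known_fitnesses, eps=1e-12):
        # Illustrative stand-in: estimate a particle's fitness from particles
        # whose fitness is already known (evaluated or estimated), weighting
        # each neighbour by the inverse of its distance to the particle.
        d = np.linalg.norm(known_positions - position, axis=1)
        w = 1.0 / (d + eps)
        return float(np.sum(w * known_fitnesses) / np.sum(w))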

In TLSAPSO, a global surrogate model and a number of local surrogate models are employed for fitness approximation. The global surrogate model aims to smooth out the local optima of the original multimodal fitness function and to guide the swarm quickly towards an optimum. In the meantime, a local surrogate model constructed from the data samples near the particle is built to achieve as accurate a fitness estimation as possible.

Finally, further work on fitness approximation will be discussed, focusing on integrating fitness inheritance and surrogate models in order to achieve fast convergence to the global optimum with fewer real fitness evaluations. In addition, data sampling is very important for a good surrogate model, especially for a global surrogate model; therefore, the investigation of sampling strategies will also be part of our future work.

Ran Cheng (University of Surrey)

In evolutionary many-objective optimization, maintaining a good balance between convergence and diversity is particularly crucial to the performance of the evolutionary algorithms, which, however, becomes increasingly challenging to achieve as the number of objectives increases. To tackle this issue, a reference vector guided evolutionary algorithm is proposed. One major advantage of the proposed algorithm is that both convergence and diversity can be fully controlled by a set of predefined reference vectors, which distinguishes it from most existing algorithms. We demonstrate that the proposed reference vector guided selection method is able to successfully address the loss of selection pressure from which most dominance-based mechanisms suffer in dealing with many-objective optimization. Meanwhile, population diversity can also be effectively achieved with the help of reference vectors, which is of particular importance for many-objective optimization. Our experimental results on some benchmark test problems show that the proposed algorithm is highly competitive in comparison with five state-of-the-art evolutionary algorithms for many-objective optimization. In addition, we show that reference vectors are effective and cost-efficient in preference articulation, which is extremely desirable for many-objective optimization, where achieving a representative set of Pareto optimal solutions becomes intractable using a limited population size. Furthermore, a reference vector regeneration strategy is proposed for handling irregular Pareto fronts. Finally, the proposed algorithm is extended to solving constrained many-objective optimization problems.
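As a minimal sketch of the reference-vector mechanism (the angle-based assignment step only, not the paper's full selection criterion; the function and variable names are assumptions), each solution can be associated with the reference vector it subtends the smallest angle with:

    import numpy as np

    def assign_to_reference_vectors(F, V):
        # F: (N, m) objective values; V: (K, m) unit reference vectors.
        # Translate objectives so the ideal point sits at the origin, then
        # assign each solution to the reference vector of smallest angle
        # (largest cosine); each subpopulation is then handled separately.
        Fp = F - F.min(axis=0)
        norms = np.linalg.norm(Fp, axis=1, keepdims=True)
        norms[norms == 0.0] = 1e-12
        cosines = (Fp / norms) @ V.T
        return np.argmax(cosines, axis=1)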



Visiting lectures on April 7th, 2015

Francisco Ruiz (University of Málaga, Spain)

Interactive methods have proved to be extremely useful multiobjective techniques when it comes to solving real, complex decision making problems. Their iterative schemes are especially suitable for the necessary learning process that has to be present in every decision making process. Many different interactive methods exist, and they vary both in the type of information that the decision maker (DM) has to provide at each iteration, and in the way the different solutions are obtained along the process. The information required from the DM can take many different forms (just choosing one solution among a set of possible solutions, giving local tradeoffs, giving reference or target values, classifying the objectives…). But in many cases, the interactive method is chosen without taking into account the cognitive burden that it implies for the DM. In this sense, we have developed hybrid interactive multiobjective systems, where the DM can decide at each step the type of information (s)he prefers to give, and the system internally switches to the most appropriate method. The idea is to adapt the resolution process to the needs of the DM, and not vice versa. We have applied these interactive systems to several real problems, including the budget assignment to the hospitals of our Regional Sanitary System, the determination of the optimal electricity mix of Andalucía, and the calculation of the optimal dimensions of a solar thermal plant.

Thomas Bäck (Natural Computing Group, Leiden Institute of Advanced Computer Science (LIACS), Leiden University, Netherlands)

Industrial optimization problems are often characterized by a number of challenging properties, such as time-consuming function evaluations, high dimensionality, a large number of constraints, and multiple optimization objectives.

Working with Evolutionary Strategies, we have over the past decades adapted them to such optimization problems. Certain variants are particularly effective, and set-oriented selection criteria such as SMS-EMOA are useful for approximating the Pareto front in the case of multiobjective optimization.

In this presentation, we will illustrate these aspects by referring to industrial optimization problems as they occur in the automotive and many other industries. We will show that evolutionary strategies can be very effective even in the case of very small numbers of function evaluations, and that they can approximate Pareto fronts very well.



Visiting lecture on March 11th, 2015

Kerstin Daechert (University of Duisburg-Essen, Germany)

In this talk we present an algorithm that computes the nondominated set, or a subset of it, by solving a sequence of scalarizations whose parameters are varied in an adaptive way. More precisely, the parameters are chosen so that with every scalarization solved, either a new nondominated point is computed or the investigated part of the search region, i.e. the part of the outcome space possibly containing further nondominated points, can be discarded. Besides an appropriate computation of the parameters, the main ingredient of such an adaptive parametric algorithm is a systematic decomposition of the search region.

In the first part of our talk, we present a redundancy-free decomposition which allows us to show that the number of scalarized optimization problems that need to be solved in order to generate the entire nondominated set (finiteness assumed) depends linearly on the number of nondominated points in the tricriteria case. This improves former results which showed a quadratic dependence in the worst case.

The presented adaptive parametric algorithm is not restricted to a special scalarization and can be used, e.g., with the classic epsilon-constraint or the weighted Tchebycheff method. For the augmented variants of these scalarizations, particularly for the augmented weighted Tchebycheff method, we show in the second part of our talk how all parameters, including the one associated with the augmentation term, can be chosen adaptively such that a nondominated point in a selected part of the search region is generated whenever one exists and, at the same time, the augmentation parameter is chosen as large as possible in order to avoid the numerical difficulties reported in the literature.
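For reference, the augmented weighted Tchebycheff scalarization in question has the standard form (notation assumed here: z* a utopian reference point, w > 0 a weight vector, rho > 0 the augmentation parameter whose adaptive choice is discussed above):

    \min_{x \in X} \; \max_{i=1,\dots,p} w_i \bigl( f_i(x) - z^*_i \bigr) + \rho \sum_{i=1}^{p} \bigl( f_i(x) - z^*_i \bigr)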

In the third and concluding part, we validate our theoretical findings for the tricriteria case by numerical tests. Moreover, we demonstrate the flexibility of the presented algorithm by applying it also to continuous multicriteria optimization problems. Finally, we discuss how to generalize the algorithm to any number of criteria.



Visiting lecture on February 9th, 2015

Audrius Varoneckas (postdoctoral researcher, Institute of Mathematics and Informatics, Vilnius University)

The problem of visualizing the optimal set of solutions of a multi-objective optimization problem is considered. We focus on discovering the structure of the multidimensional decision space by visualizing Pareto sets, as well as on the relationship between the objective functions and the design variables in the optimal set of solutions. A multi-objective visualization method is proposed to visualize a set of efficient points, i.e. multidimensional points, as points in two-dimensional space.



Visiting lecture on November 13th, 2014

Montaz Ali (University of Witwatersrand, Johannesburg, South Africa)

It goes without saying that solutions to systems of linear equations are of paramount importance in almost every field of science, engineering and management. We will consider the linear system Ax = b, where A ∈ R^(m×n), x ∈ R^n, b ∈ R^m. In practice, it is common to encounter systems that do not admit a unique solution: either the system is consistent with the number of variables exceeding the number of equations, in which case an infinite number of solutions exist, or the system is inconsistent and no solution exists. In the former case the system is called underdetermined, and in the latter over-determined. When the system is underdetermined we have in general m < n, and practitioners seek to pick the solution that is best suited to their needs. Minimum infinity norm solutions are often chosen for various practical problems.

The problem is then to minimize the maximum absolute value of the elements of the solution vector. Such a solution is chosen when one seeks to minimize the maximum load on any node of a given system. In particular, such solutions are sought in control theory when the limitations of any individual component of a system cannot be breached.
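The problem min ||x||_inf subject to Ax = b admits a standard linear programming reformulation: minimize t subject to Ax = b and -t <= x_i <= t. A minimal sketch using an off-the-shelf LP solver, independent of the specialized algorithms discussed next, might read:

    import numpy as np
    from scipy.optimize import linprog

    def min_inf_norm(A, b):
        # Variables are [x_1, ..., x_n, t]; the objective picks out t.
        m, n = A.shape
        c = np.zeros(n + 1)
        c[-1] = 1.0
        # Equality constraints: [A | 0][x; t] = b.
        A_eq = np.hstack([A, np.zeros((m, 1))])
        # Inequalities x_i - t <= 0 and -x_i - t <= 0 encode |x_i| <= t.
        I = np.eye(n)
        ones = np.ones((n, 1))
        A_ub = np.vstack([np.hstack([I, -ones]), np.hstack([-I, -ones])])
        b_ub = np.zeros(2 * n)
        bounds = [(None, None)] * n + [(0, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
        return res.x[:n], res.x[-1]  # solution and its infinity norm

    # Example: an underdetermined 2x3 system.
    A = np.array([[1.0, 2.0, 1.0], [0.0, 1.0, -1.0]])
    b = np.array([4.0, 1.0])
    x, t = min_inf_norm(A, b)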

The current best algorithms for this problem are given in [1, 2, 3, 4]. The first is the path-following algorithm proposed by Cadzow [1], in which the polyhedral structure of the objective function of the dual is exploited. Abdelmalek [2] some years later proposed a linear programming formulation of the dual problem in which a modified simplex method is applied to a reduced tableau, this being made possible by the strong symmetries present in the constraint matrix. Shim and Yoon [3] almost two decades later proposed a primal method which they claim is conceptually and geometrically clear, at the cost of computational inferiority to both of the methods already mentioned.

We propose a primal path-following method to deal with the problem. Unlike the dual approach suggested in [1], our method is based on the primal formulation, and yet it is conceptually different from the primal method in [3]. In particular, our method employs novel heuristics coupled with ideas that have been exploited for the over-determined system. Results and comparisons with the existing methods will also be shown. An iteration complexity analysis will also be presented.

References

  1. James A. Cadzow. An efficient algorithmic procedure for obtaining a minimum L-infinity norm solution to a system of consistent linear equations. SIAM Journal on Numerical Analysis, 11(6):1151-1165, 1974.
  2. Nabih N. Abdelmalek. Minimum L-infinity solution of underdetermined systems of linear equations. Journal of Approximation Theory, 20(1):57-69, 1977.
  3. Ick-Chan Shim and Yong-San Yoon. Stabilized minimum infinity norm torque solution for redundant manipulators. Robotica, 16(2):193-205, 1998.
  4. Insso Ha and Jihong Lee. Analysis on a minimum infinity norm solution for kinematically redundant manipulators. ICASE, 4(2):130-139, 2002.


Visiting lecture on August 7th, 2014

Kalyanmoy Deb (Koenig Endowed Chair Professor, Michigan State University, East Lansing, USA)

Many practical problem-solving tasks involve multiple hierarchical search processes, requiring one search and optimization task to solve another lower-level search and optimization problem in a nested manner. In their simplest form, these problems are known as "bilevel optimization" problems, in which there are two levels of optimization tasks. Problems in economics and business involving company CEOs and department heads, or governmental decision makers and NGOs, are standard bilevel optimization problems. In engineering, optimal control problems, structural design problems, transportation problems and other hierarchical problems fall into this category. These problems are also known as Stackelberg games in the operations research and computer science communities. Due to the nestedness of and inherent complexities involved in solving bilevel problems, evolutionary methods are increasingly being found to be effective. In this talk, we shall discuss some of the key developments of evolutionary bilevel optimization (EBO) and highlight some promising areas of research in this area.
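In generic form (notation assumed here, not taken from the talk), a bilevel program with upper-level (leader) objective F and lower-level (follower) objective f reads:

    \min_{x_u \in X_U,\; x_l} \; F(x_u, x_l) \quad \text{s.t.} \quad G(x_u, x_l) \le 0, \quad x_l \in \arg\min_{y} \{\, f(x_u, y) : g(x_u, y) \le 0 \,\}

The nestedness mentioned above is visible in the last constraint: every candidate upper-level decision requires the lower-level problem to be solved to optimality.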



Visiting lecture on June 12th, 2014

Ralph L. Keeney (Duke University, USA)

This seminar will discuss the art and science of identifying and structuring values for any decision situation. It will present practical techniques to initially create a useful set of values. It will also include a workshop element where participants will practice identifying objectives for a relevant practical application. Then the concepts necessary to structure a set of values are presented. The seminar will indicate the essential role that values have in making good decisions.



Visiting lecture on May 21st, 2014

Toni Lastusilta (GAMS)

The General Algebraic Modeling System (GAMS) is a high-level modeling system for mathematical programming and optimization. We will start off by covering the basic concepts and design principles of the GAMS system. Then we will highlight some good modeling practices, as well as look at a simple example. Finally, we will consider a large-scale energy model application.



Visiting lecture on August 23, 2013:

Kalyanmoy Deb (Michigan State University)

Evolutionary Many-Objective Optimization

Evolutionary optimization methods have been found apt at solving two- and three-objective optimization problems, particularly in helping to find a representative set of trade-off points before a preferred solution is chosen. Efforts have been underway for the past decade to come up with an efficient methodology for handling many-objective optimization problems involving more than three objectives. In the recently proposed NSGA-III approach, the task of diversity preservation is assisted by specifying a set of reference points, while convergence near the Pareto-optimal front is achieved by algorithmic means. Results on problems having up to 15 objectives will be shown.



Visiting lecture on August 12, 2013:

Gilberto Montibeller (London School of Economics)

Developing Risk Management Support Systems for the Prioritization of Emerging Health Threats

The prioritisation and management of emerging threats to human and animal health pose serious challenges for policy makers. Such challenges have many sources. First, the emerging nature of such threats, coupled with limited impact modelling, means that there is often a lack of reliable evidence about the impacts and probability of an outbreak. Second, the continuing emergence of multiple, often contemporaneous, threats requires regular prioritization informed by the amalgamation of different sources of quantitative and qualitative data, and experts' judgments. Third, policy makers are often concerned with multiple impacts that go beyond health and economic concerns, including issues related to public perception and capability building. In this paper we suggest how decision analysis could address these challenges both from an analytical and, more critically, an organizational perspective. In particular we argue that the development and use of simple and tailored risk management systems, when appropriately embedded into organizational routines, can provide effective support for the assessment of emerging threats and the design of policy recommendations. We illustrate our suggestions with a real-world case study, in which we developed a risk management support system for DEFRA, the UK Government Department for Environment, Food and Rural Affairs, to help with their prioritization of emerging threats to the country's animal health status.



Visiting lecture on April 24, 2013:

David Greiner (ULPGC, Spain)

Solving single- and multi-objective evolutionary design optimization problems in structural and civil engineering

Several optimum design problems in the field of structural and civil engineering are solved using numerical methods and evolutionary algorithms. Concretely, the methodology and the final optimum solutions of different single- and multi-objective applications are described, related to: structural engineering optimum design (constrained weight and reliability), noise barrier optimum design (insertion loss and barrier height), and slope stability optimum design (factor of safety and slope height).



Visiting lecture on October 31st, 2012:

Jouni Lampinen (University of Vaasa)

Global optimization in data classification

A joint research project of the University of Vaasa and Lappeenranta University of Technology is developing a new type of data classification method that exploits global single- and multiobjective optimization. A central constructive goal is that the system being developed adapts automatically to the properties of the data to be classified, for example by automatically selecting the classifier model best suited to those properties and by optimizing all free parameters of the selected model.

The key components of the system are the classifier model and the algorithm used to optimize it. The classifier model is based on the so-called nearest prototype vector method. The model is fitted to the data to be classified by means of global optimization, using the differential evolution algorithm, a method of evolutionary computation.

So far, the classifier model has been optimized as a single-objective problem in which the goal has been to minimize the number of misclassified data points. In the future, applying multiobjective optimization is expected to also allow several criteria of classification accuracy and quality to be taken into account simultaneously.
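As a minimal sketch of this single-objective formulation (the decoding into one prototype per class is an assumption of the sketch; the actual system also selects among classifier models), the fitness function minimized by differential evolution could count misclassified points as follows:

    import numpy as np

    def misclassification_count(decision_vector, X, y, n_classes):
        # Decode the flat decision vector into one prototype per class,
        # label each sample by its nearest prototype, and count errors.
        P = decision_vector.reshape(n_classes, X.shape[1])
        d = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
        predictions = np.argmin(d, axis=1)
        return int(np.sum(predictions != y))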

The goal of the project is to develop a new type of data classification method which, with the help of evolutionary computation and global optimization, is more accurate, more versatile and easier to use than previous methods.



Visiting lecture on August 16th, 2012:

Yaochu Jin (University of Surrey, UK)

A Systems Approach to Evolutionary Optimization of Complex Engineering Problems

Real-world complex engineering optimisation remains a challenging issue in evolutionary optimisation. This talk discusses the major challenges we face in applying evolutionary algorithms (EAs) to complex engineering optimization, including representation, the involvement of time-consuming and multi-disciplinary quality evaluation processes, changing environments, vagueness in criteria formulation, and the involvement of multiple sub-systems. We propose that the successful tackling of all these aspects gives rise to a systems approach to evolutionary design optimization characterized by considerations at four levels, namely, the system property level, the temporal level, the spatial level and the process level. Finally, we suggest a few promising future research topics in evolutionary optimisation that constitute necessary steps towards a life-like design approach, where design principles found in biological systems, such as self-organization, self-repair and scalability, play a central role.



Visiting lecture on May 21st, 2012:

Carlos Coello Coello (CINVESTAV-IPN, Mexico)

Recent Results and Open Problems in Evolutionary Multiobjective Optimization

Evolutionary algorithms (as well as a number of other metaheuristics) have become a popular choice for solving problems having two or more (often conflicting) objectives: the so-called multi-objective optimization problems. This area, known as EMOO (Evolutionary Multi-Objective Optimization), has grown substantially in the last 15 years, and several people (particularly newcomers) get the impression that it is now very difficult to make contributions of sufficient value to justify, for example, a PhD thesis. However, a lot of interesting research is still under way. In this talk, we will review some of the research topics on evolutionary multi-objective optimization that are currently attracting a lot of interest (e.g., handling many objectives, hybridization, indicator-based selection, use of surrogates, etc.) and which represent good opportunities for research. Some of the challenges currently faced by this discipline will also be delineated.


Visiting lecture on September 1st, 2011:

Murat Köksalan (Middle East Technical University, Ankara, Turkey)

Solving Multiobjective Mixed Integer Programs

We develop an algorithm to find the best solution for multiobjective integer programs when the DM's preferences are consistent with a quasiconcave utility function. Based on the convex cones derived from past preferences, we characterize the solution space that excludes inferior regions. We guarantee finding the most preferred solution and our computational results show that the algorithm works effectively.


Visiting lecture on August 17th, 2011:

Anders Forsgren (Optimization and Systems Theory, KTH, Sweden)

Optimization of Radiation Therapy

Optimization has become an indispensable tool for radiation therapy. In this talk, we highlight fundamental aspects of the optimization problems that arise, and also discuss more advanced aspects, such as how to handle conflicting treatment goals and model uncertainty.

We initially discuss how problem structure may be taken into account for computing approximate solutions to the fundamental optimization problem that arises in radiation therapy. We then show how conflicting treatment goals may be handled in a multiobjective formulation by approximation of the Pareto surface. Finally, we discuss how uncertainties in range, setup and organ motion may be handled in a robust optimization framework for optimization of proton therapy.

The talk is based on joint research between KTH and RaySearch Laboratories AB, in particular research carried out by Rasmus Bokrantz, Albin Fredriksson and Fredrik Lofman.


Visiting lecture on August 9th, 2011:

João Pedro Pedroso (University of Porto, Portugal)

Issues on algorithm self-tuning

In this talk I will raise some topics arising in algorithm parameterization, viewed as a problem of noisy, non-linear optimization. I will present data obtained from the parameterization of some metaheuristics for combinatorial optimization, and propose a method for automatically tuning parameters. The main objective is to open the discussion, and to initiate a brainstorming session on what could be done to advance the state of the art on this issue.


Visiting lecture on June 7th, 2011:

Margaret M. Wiecek (Department of Mathematical Sciences and Department of Mechanical Engineering,Clemson University,Clemson, SC, USA)

Battery Thermal Packaging Design

The performance of batteries is critical for the mobility and performance of automotive vehicles. In order to maintain battery life and performance, it is crucial to keep the batteries within the temperature range in which their operating characteristics are optimal. To achieve the voltage and current required for different applications, the cells are packed together in modules which in turn are connected in parallel or in series. To provide a reliable battery, the temperature of the pack should be kept inside this temperature range. The uniformity of the temperature inside the pack depends mainly on the non-uniform heat transfer efficiency among the cells. Due to capacity imbalance, some cells may experience over- or under-charging, which leads to premature battery failure.

The layout of the cells inside the battery pack is optimized while considering thermal aspects. Due to the large number of function evaluations required, computational fluid dynamics models are not suitable, and a simplified lumped-parameter thermal model is integrated with the optimization. The optimization problem is formulated as a constrained multiobjective program in which the objective functions (to be minimized) represent discrepancies between the operating cell temperatures and the target temperature. Such a formulation with physically homogeneous and comparable objectives is addressed in terms of the equitability preference rather than the traditional Pareto preference. Mathematical aspects of the equitability preference are investigated and the applicability of equitable solutions to engineering problems is discussed. Furthermore, the battery location in the vehicle is also optimized to improve vehicle dynamics, component accessibility and passenger survivability while considering geometric constraints including collision and overlap among the components. The overall optimization problem is two-level. A solution method to the overall problem is outlined.


Visiting lecture on November 30th, 2010:

Antanas Zilinskas (Vilnius University, Institute of Mathematics and Informatics, Vilnius, Lithuania)

P-algorithm for black box multiobjective optimization

Statistical models of multimodal objective functions have been successfully applied to the construction of black-box single-objective global optimization algorithms. The P-algorithm is based on a statistical model and is defined via repeated decision making under uncertainty. The axioms of rationality are formulated taking into account the situation of selecting a point for the current computation of the objective function value. The formulated axioms imply the selection of the point with the maximum probability of improving the best known value of the objective function. In the present talk the P-algorithm is generalized to multiobjective optimization. An example illustrating the properties of the newly proposed algorithm is included.
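Under a Gaussian statistical model with conditional mean m_n(x) and conditional standard deviation s_n(x) given n observations, the single-objective selection rule just described can be written as follows (y_{0n} the best objective value found so far, epsilon > 0 an improvement threshold; notation assumed here):

    x_{n+1} = \arg\max_{x} \Pr\bigl( \xi(x) \le y_{0n} - \varepsilon \bigr) = \arg\max_{x} \Phi\!\left( \frac{y_{0n} - \varepsilon - m_n(x)}{s_n(x)} \right)

where \Phi denotes the standard normal distribution function.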


Visiting lecture on September 16th, 2010:

Theodor Stewart (University of Cape Town, South Africa and Manchester Business School, UK)
Simon French & Jesus Rios (Manchester Business School)

Scenario Planning and Multiple Criteria Decision Analysis

Scenario planning in its various forms is a widely used approach to strategic planning. It provides a mechanism for sharing understanding of major sources of risk and uncertainty in decision making. In many instances, however, scenario planning does not make use of formal analytical tools for evaluation of potential courses of action. The field of multiple criteria decision analysis (MCDA), on the other hand, has developed powerful tools and algorithms for the evaluation and choice of alternative strategies in the presence of multiple and conflicting objectives. Many approaches to MCDA, however, do not employ formal methods for dealing with substantial uncertainties in outcomes. We thus discuss some approaches to integration of scenario planning and multiple criteria decision analysis, to capture the power of both. The concepts are applicable to any of the broad schools of MCDA, as well as to both discrete choice and continuous problems.


Visiting lecture on August 25th, 2010:

K. Kufer (Univ. Kaiserslautern, Germany)

Interactive decision support - multicriteria optimization in practice

Optimization problems in the classroom start from a well-defined problem setting: the set of feasible solutions is given, as well as the objective function(s). When it comes to practice, we often find an incomplete description of what is feasible and what is not. It is even more complicated to distinguish between good and bad, or even to characterize optimality.

The talk emphasizes this major problem of not rigorously specified problems and shows how multicriteria optimization and interactive decision support through the visualization of solutions might help in this dilemma. Problems and methods are discussed in the context of industrial problems in which Fraunhofer ITWM is currently involved.


Visiting lecture on May 24th, 2010:

Leonidas Sakalauskas (Institute of Mathematics and Informatics, Vilnius, Lithuania, European Working Group on Continuous Optimization)

Stochastic Programming for Business and Technology

The concept of implementable nonlinear stochastic programming by finite series of Monte-Carlo samples is surveyed, addressing topics related to stochastic differentiation, stopping rules, conditions of convergence, rational setting of algorithm parameters, etc. Our approach is distinguished by treating the accuracy of the solution in a statistical manner, testing the hypothesis of optimality according to statistical criteria, and estimating confidence intervals of the objective and constraint functions. A rule for adjusting the Monte-Carlo sample size is introduced which ensures convergence at a linear rate and enables us to solve the stochastic optimization problem using a reasonable number of Monte-Carlo trials. Issues of implementing the developed approach in financial management, business management and engineering are considered, too.


Visiting lecture on February 10th, 2010:

Ralph E. Steuer (University of Georgia, USA)

An Overview in Graphs of Portfolio-Selection Efficient Frontiers and Surfaces in Finance

Being able to render an efficient frontier quickly is an important attribute of the systems used to support decision making in portfolio selection. In standard portfolio selection, simplifying assumptions generally make this possible. But in the larger problems that are beginning to appear with greater frequency, these assumptions can cause more trouble than they are worth. This is shown along with the computational implications. Also shown are the effects on standard portfolio selection of inserting additional criteria (such as dividends, liquidity, etc.) into the portfolio selection process.

One effect is that the efficient frontier turns into an efficient surface. Another is that the efficient surface tends to be formed by a collection of platelets (like those on the back of a turtle). A third concerns the availability of algorithms capable of computing the platelets. And a fourth is how to search a surface of platelets for one's most preferred portfolio. Computational results are reported where possible to support the graphs presented.


Visiting lecture on January 18th, 2010:

Matthias Ehrgott (The University of Auckland, New Zealand)

1. An approximation algorithm for convex multi-objective programming problems

In multi-objective optimization, several objective functions have to be minimized simultaneously. We propose a method for approximating the nondominated set of a multi-objective nonlinear programming problem where the objective functions are convex and the feasible set is convex. This method is an extension of Benson's outer approximation algorithm for multi-objective linear programming problems. We prove that this method provides a set of epsilon-nondominated points. For the case that the objectives and constraints are differentiable, we describe an efficient way to carry out the main step of the algorithm, the construction of a hyperplane separating an exterior point from the feasible set in objective space. We provide examples showing that this cannot always be done in the same way in the case of non-differentiable objectives or constraints.

2. Finite representation of nondominated sets in multiobjective linear programming

In this paper we address the problem of representing the continuous set of nondominated solutions of a multiobjective linear programme by a finite subset of such points. We prove that a related decision problem is NP-hard. Moreover, we illustrate by examples the drawbacks of the known global shooting, normal boundary intersection and normal constraint methods concerning the coverage of the nondominated set and the uniformity of the representation. We propose a method which combines the global shooting and normal boundary intersection methods. By doing so, we overcome the limitations of these methods. We show that our method computes a set of evenly distributed nondominated points for which the coverage error and the uniformity level can be measured. Finally, we apply this method to an optimization problem in radiation therapy and present illustrative results for some clinical cases.
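The two representation quality measures mentioned at the end are commonly defined as follows (R the finite representation, N the nondominated set, d a metric on objective space; standard definitions from the representation literature, assumed here):

    \text{coverage error:} \quad \varepsilon = \max_{z \in N} \min_{r \in R} d(z, r)
    \qquad
    \text{uniformity level:} \quad \delta = \min_{r_1, r_2 \in R,\; r_1 \ne r_2} d(r_1, r_2)

A good representation makes the coverage error small while keeping the uniformity level large.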


Visiting lecture on December 15th, 2009:

Michael Monz (Fraunhofer-ITWM, Germany)

Interactive Planning of Intensity Modulated Radiation Therapy (IMRT)

IMRT can be used for curative treatment even if the tumor is of complicated shape or close to important risk organs. Since the degrees of freedom vastly exceed the number one can handle manually, the planning is done using so-called inverse planning. Here, each considered plan is the result of a large-scale optimization problem.

The planning problem is naturally a multi-criteria problem: to each relevant organ at risk a function is assigned. Furthermore, the different tumor volumes each get one or two functions. The planning process then tries to find a compromise between the specified goals. Currently, this is most often done by iterative changes to the model or to the weights with which the different functions contribute to the overall objective. We will call this tedious and time-consuming procedure the human iteration loop (HIL).

Explicitly treating the problem as a multi-criteria problem offers the possibility to greatly improve the planning process. Yet the problem under consideration is not easily accessible to multi-criteria optimization: there is no generally acknowledged model for the quality of a plan, and the problem is large-scale and becomes almost intractable if all degrees of freedom are included in the optimization.

The talk will introduce IMRT planning and describe how the problems outlined above were tackled and which open questions are still being actively researched.


Visiting lecture on December 10th, 2009:

Timo Laukkanen (Helsinki University of Technology)

Modelling, simulation and optimization of energy systems at ENY

The modelling, simulation and optimization research in the Energy Engineering and Environmental Protection (ENY) research group has aimed at developing systematic tools and models to analyze and optimize energy systems so that they are cost- and energy-efficient. The focus is mainly on single- and multiobjective models that belong to the class of deterministic Mixed Integer NonLinear Programming (MINLP) problems, where the general challenge is to cope both with combinatorial issues due to integer (binary) variables and with non-convexities due to nonlinearity. Modelling and simulation of energy systems with commercial simulation software has also been an important activity, especially regarding case-specific industrial applications. In this presentation, different methods, models and applications of energy systems currently or previously researched at ENY are presented. The main commercial simulation software used, and GAMS (General Algebraic Modelling System), which is and has been the main modelling platform, are also briefly presented. Finally, future possibilities are briefly discussed.


Visiting lecture on November 10th, 2009:

Georges Fadel (Clemson University, USA)

Optimization and complex systems design - the packaging / layout problem

The presentation will describe some mechanical design problems that are considered complex, namely packaging (compact packing) and layout or configuration design. Complexity in this context will be defined, highlighting the multiple interactions that need to be considered and the geometry and other characteristics that contribute to that complexity. Next, past work on developing an approach to deal with that complexity will be shown as it evolved over the years, leading to its current state. The presentation will describe the development of an archive-based micro genetic algorithm as well as various geometrical considerations that had to be managed efficiently to solve the problem. The talk will conclude with some extensions of that work, describing the design of heterogeneous components.


Lecture on May 25th, 2009:

Timo Aittokoski (University of Jyväskylä, Finland)

Efficient Evolutionary Method to Approximate the Pareto Optimal Set in Multiobjective Optimization

Solving real-life engineering problems often requires multiobjective, global and efficient (in terms of objective function evaluations) treatment. In this study, we consider problems of this type by discussing some drawbacks of the current methods, and then introduce a new population-based multiobjective optimization algorithm which produces a dense (not limited to the population size) approximation of the Pareto optimal set in a computationally effective manner.


Visiting lecture on December 9th, 2008:

Ausra Mackute (Institute of Mathematics and Informatics, Vilnius, Lithuania)

On a few applications of multi-criteria optimization

Applied optimization problems such as process design or optimal control are multi-criteria problems in essence. It is important to construct the feasible solution set, but when these problems involve nonlinear models, the generation of a reliable Pareto front can be difficult. A case study in process design is used to illustrate a multi-step procedure for generating the Pareto front of a two-criteria problem. The basis of this procedure is high-dimensional data analysis and visualization techniques. The results show that the use of data analysis and visualization can help gain insight into the Pareto optimal set. Another case study, a practical optimal control problem from biotechnology, is used for the comparison of several multi-criteria optimization methods. Two criteria are taken into account: the yield of biomass and the natural process duration. Theoretical analysis of the problem is difficult because of the non-linearity of the process model. The problem is reduced to a parametric optimization problem by means of parameterization of the control functions. Several evolutionary multi-criteria optimization algorithms and a scalarization-based direct search algorithm are considered. The methods are compared with respect to precision and solution time.


Visiting lectures on November 17th, 2008:

Ankur Sinha (Helsinki School of Economics/IIT Kanpur, India)

Solving Bilevel Multi-Objective Optimization Problems Using Evolutionary Algorithms

Bilevel optimization problems require every feasible upper-level solution to satisfy optimality of a lower-level optimization problem. These problems commonly appear in many practical problem-solving tasks, including optimal control, process optimization, game-playing strategy development, and transportation problems. In the context of bilevel single-objective problems, a number of theoretical, numerical, and evolutionary optimization results exist. However, there are not many studies in the context of having multiple objectives in each level of a bilevel optimization problem. I shall address bilevel multi-objective optimization issues and discuss a viable algorithm based on evolutionary multi-objective optimization (EMO) principles to handle such problems.

Shyam Prasad Kodali (Helsinki School of Economics/IIT Kanpur, India)

Application of Genetic Algorithms to Tomographic Reconstruction

Tomographic reconstruction is an inverse problem wherein we reconstruct the image of an object using what are called projection data. The aim is to be able to visualize the inside of an object in a non-invasive manner. Tomography is very popular in the field of medical science and is gaining popularity in other areas including non-destructive evaluation and geosciences. The projection data collected depend on the tomographic principle used to acquire them. For example, in ultrasonic tomography this could be the time-of-flight of an ultrasonic wave through a specimen or the attenuation an ultrasound wave undergoes when it passes through the specimen. In this lecture, we briefly present the tomographic principles and the different methods in use for solving the reconstruction problem. We further present some preliminary results of the application of a GA-based algorithm to tomographic reconstruction with and without noise.

Jussi Hakanen (University of Jyväskylä, Finland)

Simulation-based Interactive Multiobjective Optimization in Wastewater Treatment

This paper deals with developing tools for wastewater treatment plant design. The idea is to utilize interactive multiobjective optimization which enables the designer to consider the design with respect to several conflicting evaluation criteria simultaneously. This is especially important because the requirements for wastewater treatment plants are getting more and more strict. By combining a process simulator to simulate wastewater treatment and an interactive multiobjective optimization software to aid the designer during the design process, we obtain a practically useful tool for decision support. The applicability of our methodology is illustrated through a case study related to municipal wastewater treatment where three conflicting evaluation criteria are considered.

Sauli Ruuska (University of Jyväskylä, Finland)

The Effect of Trial Point Generation Schemes on the Efficiency of Population-Based Global Optimization Algorithms

Many practical optimization problems in engineering are solved using population-based direct search methods because of their wide applicability and ease of use. The purpose of this paper is to bring attention to the deficiencies of the trial point generation schemes in some currently used population-based algorithms. We focus on population-based algorithms, such as Controlled Random Search and Differential Evolution, which use the population also as the source of the perturbations used to generate new trial points. The situations in which these algorithms generate a large number of unsuccessful trial points, or fail to generate successful trial points altogether, are illustrated.
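For instance, in the classic DE/rand/1/bin scheme, every trial point is built from differences of population members, which is exactly the coupling the paper examines (a minimal sketch; parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def de_rand_1_bin_trial(pop, i, F=0.8, CR=0.9):
        # The perturbation comes from the population itself: a scaled
        # difference of two members is added to a third (DE/rand/1),
        # followed by binomial crossover with the target vector pop[i].
        N, d = pop.shape
        candidates = [j for j in range(N) if j != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True  # ensure at least one component crosses
        return np.where(cross, mutant, pop[i])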


Visiting lecture on November 12th, 2008:

Francisco Ruiz (University of Malaga, Spain)

A Multiobjective Interactive Approach to Determine the Optimal Electricity Mix of Andalucia

The principles of sustainability imply the joint consideration of economic, social, and environmental criteria in every decision process. Electricity is, of course, a basic need of our modern society, but the production processes in the Spanish region of Andalucia have traditionally been harmful to the environment, due to the high number of plants using fossil fuels (mainly coal and oil) some decades ago and, more recently, combined cycle plants running on natural gas. On the other hand, the alternative renewable sources (wind, solar, hydraulic...), while more respectful of the environment, are usually much more expensive. In such a framework, multiple criteria decision making techniques can be extremely helpful.

This talk reports the use of interactive multiobjective methods to determine the most suitable electricity mix for Andalucia. The study was financed by the Regional Ministry of Environment, and we have considered eight different electricity generation techniques (comprising non-renewable and renewable sources). As for the criteria, we have used the yearly costs and the vulnerability (dependence on imported fuels) as economic-strategic criteria, while the environmental issues have been addressed through twelve impact categories, each assessed using a life cycle analysis scheme. The problem is to decide the percentage of the electricity to be produced by each of the eight generation techniques.
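Schematically (our reading of the abstract, not the exact model of the study), this is a 14-objective problem over the shares x_i of the eight generation techniques:

    \begin{align*}
    \min\ & \bigl( C(x),\ V(x),\ E_1(x), \ldots, E_{12}(x) \bigr) \\
    \text{s.t.}\ & \sum_{i=1}^{8} x_i = 100, \qquad x_i \ge 0,\quad i = 1, \ldots, 8,
    \end{align*}

where x_i is the percentage of electricity produced by technique i, C the yearly cost, V the vulnerability, and E_k the k-th life-cycle impact category; the real model may well carry further constraints (demand coverage, capacity limits).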

This problem has been solved using the interactive multiobjective programming package PROMOIN. This package allows the simultaneous use of different interactive multiobjective techniques, in such a way that the Decision Maker (DM) can change the type of information he wishes to provide at each iteration (local tradeoffs or weights, reference points, choosing a solution among several ones...), and the interactive procedure is changed accordingly. This provides a flexible resolution framework, which was highly appreciated by the DMs. The talk reports both the modeling of the problem and the resolution process that was carried out.


Visiting lecture on October 28th, 2008:

Andrzej P. Wierzbicki (National Institute of Telecommunications, Poland)

1. Delays in Technology Development: Their Impact on the Issues of Determinism, Autonomy and Controllability of Technology

The paper provides a discussion of the diverse delays occurring in the development and utilization of technology products, and an explanation of why, when seen holistically from outside, the process of technology development might appear to be an autonomous, self-determining, uncontrollable process. When seen from inside, however, e.g., from the perspective of software development and evaluation, the process is far from uncontrollable. This paradox is explained by the fact that technology development contains many processes with delays, in total sometimes amounting to fifty years; seen from outside, such a process might appear uncontrollable, even if it is very much controllable when approached internally and in detail. Therefore, the definition and some types of technology creation, as well as some stages of technological processes, are discussed in some detail in this paper. Some aspects of the contemporary informational revolution and some recent results on micro-theories of knowledge and technology creation are also reviewed. It is suggested that one possible way of changing the paradigmatic attitude of the philosophy of technology is to invite its philosophers to participate in the development of the modern tools of the knowledge civilization era, namely software development and evaluation; moreover, inputs from the philosophy of technology might enrich such processes. On the other hand, without participating in software development and evaluation, the philosophy of technology runs the risk of becoming outdated and sterile. The conclusions of the paper stress the need for essentially new approaches to many issues, such as software development and evaluation versus the philosophy of technology, at a time when the informational revolution is resulting in a transition towards a knowledge civilization.

2. Ontology Construction and Its Applications in Local Research Communities

Ontological engineering has been widely used for diverse purposes in different communities, and a number of approaches have been reported for developing ontologies; however, few works address the construction of ontologies specific to local communities, especially when taking into account the specificity of academic knowledge creation. This chapter summarizes efforts undertaken in two cooperating communities in Japan and Poland, including attempts to clarify the concept and the field of knowledge science, to create an ontology characterizing a research program in this field, and then to apply the related results in another field, contemporary telecommunications. The distinctive approach to ontology creation (see Ren et al. 2008) combines bottom-up and top-down approaches with the purpose of joining explicit knowledge with tacit, intuitive, and experiential knowledge when constructing an ontology. Other possible views on constructing ontologies are also presented and discussed, as are lessons from an ongoing application of this approach to a local research community working on contemporary telecommunication issues in Poland. The combination of explicit and tacit, intuitive, and experiential knowledge has led to the development of a software system named adaptive hermeneutic agent (AHA), a toolkit for document gathering, keyword extraction, keyword clustering, and ontology visualization.


Visiting lecture on September 25th, 2008:

Leoneed Kirilov (Bulgarian Academy of Sciences, Sofia, Bulgaria)

The Problem of Metal Building-up by Welding and Multiple Criteria Decision Support

The problem of optimizing the process of metal building-up by welding is investigated. A multiple criteria model with four objectives is suggested. The interactive satisficing trade-off method of Nakayama is used to solve the model.
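In its standard form, one iteration of the satisficing trade-off method solves an augmented min-max scalarization of the four objectives around the decision maker's current aspiration levels (sketched here from the literature, not from the talk itself):

    \min_{x \in S}\ \max_{i=1,\ldots,4} w_i \bigl( f_i(x) - \bar{f}_i \bigr)
        + \rho \sum_{i=1}^{4} w_i f_i(x),
    \qquad w_i = \frac{1}{\bar{f}_i - f_i^{*}},

where \bar{f}_i is the aspiration level of objective i, f_i^{*} its ideal value, and \rho > 0 a small augmentation coefficient; after each solve, the decision maker classifies the objectives into those to improve and those to relax, and the aspiration levels are updated accordingly.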


Visiting lecture on May 9th, 2008:

Eckart Zitzler, Johannes Bader (ETH, Zurich, Switzerland)

Approximating the Pareto Set Using Set Preference Relations: A New Perspective On Evolutionary Multiobjective Optimization

Assuming that evolutionary multiobjective optimization (EMO) mainly deals with set problems, one can identify three core questions in this area of research: (i) how to formalize what type of Pareto set approximation is sought, (ii) how to use this information within an algorithm to search efficiently for a good Pareto set approximation, and (iii) how to compare the Pareto set approximations generated by different optimizers with respect to the formalized optimization goal. A vast number of studies address these issues from different angles, but so far only a few consider all three questions under one roof.

This talk attempts to summarize recent developments in the EMO field within a unifying theory of set-based multiobjective search. It discusses how preference relations on sets can be formally defined, gives examples of selected user preferences, and proposes a general, preference-independent hill climber for multiobjective optimization with theoretical convergence properties. The proposed methodology brings together preference articulation, algorithm design, and performance assessment under one framework, thereby opening up a new perspective on EMO.
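The following sketch shows the general shape of such a set-based hill climber (our simplified illustration, not the authors' algorithm), using the two-dimensional hypervolume indicator as one concrete example of a set preference relation; the mutation scheme, reference point, and toy problem are all assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def hv2d(F, ref):
        """Hypervolume of a set of 2-D objective vectors F (minimization)
        w.r.t. reference point ref; one concrete set preference."""
        pts = sorted((f for f in F if f[0] < ref[0] and f[1] < ref[1]),
                     key=lambda f: f[0])
        hv, prev = 0.0, ref[1]
        for x, y in pts:
            if y < prev:
                hv += (ref[0] - x) * (prev - y)
                prev = y
        return hv

    def set_hill_climber(f, X, steps=2000, ref=(11.0, 11.0)):
        """Hill climber on sets: mutate one member to get a neighbouring
        set, keep it iff it is preferred (here: larger hypervolume)."""
        X = [np.asarray(x, float) for x in X]
        for _ in range(steps):
            Y = [x.copy() for x in X]
            i = rng.integers(len(Y))
            Y[i] = Y[i] + rng.normal(0.0, 0.1, size=Y[i].shape)
            if hv2d([f(y) for y in Y], ref) > hv2d([f(x) for x in X], ref):
                X = Y
        return X

    # toy bi-objective problem: f1 = x^2, f2 = (x - 2)^2 for scalar x
    front = set_hill_climber(lambda x: np.array([x[0]**2, (x[0] - 2)**2]),
                             X=[rng.uniform(-1, 3, 1) for _ in range(5)])

Swapping hv2d for any other set preference relation changes the search goal without changing the algorithm, which mirrors the preference independence described above.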