Special Sessions

Approved Special Sessions

SS01: Fuzzy Interval Analysis: Algebraic Structures of Fuzzy Interval Spaces, Optimization, Decision Making and Differential Equations – Theory, Algorithms, and Applications

We invite talks with theoretical content, algorithms, state-of-the-art surveys, and applications in the area of fuzzy interval analysis, including fuzzy interval spaces, optimization, decision making, and differential equations. Topics such as fuzzy interval number representations and generalized uncertainty theory are within the scope of this special session.

  • Rufian-Lizana (University of Sevilla, Spain)
  • Weldon Lodwick (Federal University of São Paulo, Brazil)
  • Yurilev Chalco-Cano (University of Tarapacá at Arica, Chile)

SS02: Theoretical and Applied Aspects of Imprecise Probabilities

This session is devoted to Imprecise Probability Theory. This theory involves all the mathematical models that can be used as more flexible tools than usual Probability Theory when the available information is scarce, vague or incomplete. It includes lower previsions, n-monotone capacities, belief functions, possibility measures, or non-additive measures, among others.

This special session aims to include papers related to Imprecise Probabilities that either present a significant advance in the foundations or show potential applications in real problems. In addition, papers where the connection between imprecise probability theories and other fields such as fuzzy sets or game theory is emphasized are also welcome.

  • Enrique Miranda (University of Oviedo, Spain)
  • Ignacio Montes (University of Oviedo, Spain)

SS03: Similarities in Artificial Intelligence

The session aims to bring together researchers and practitioners working on all aspects of similarities, in particular, but not limited to:

  • theoretical analysis of existing or new measures of similarities,
  • cognitive aspects of similarity measures,
  • utilization of similarities in machine learning and clustering,
  • measurement theory applied to similarities,
  • other artificial intelligence processes based on similarities such as analogy or transfer learning,
  • application of similarities to domains such as information retrieval, data mining, big data, medicine and bioinformatics, finance, robotics, speech and natural language processing, image processing, multi-media.
  • Bernadette Bouchon-Meunier (National Center for Scientific Research (CNRS), France)
  • Giulianella Coletti (Università degli Studi di Perugia, Italy)

SS04: Belief Function Theory and its Applications

During the past few years, belief function theory, also known as Dempster-Shafer theory or evidence theory, has attracted considerable attention within the Artificial Intelligence community as a promising method of dealing with uncertainty in expert systems. As presented in the literature, Dempster-Shafer theory offers an alternative to traditional probability theory for the mathematical representation of uncertainty. The significant innovation of this framework is that it allows for the allocation of a probability mass to sets or intervals. Dempster-Shafer theory does not require an assumption regarding the probability of the individual constituents of the set or interval. This makes it a potentially valuable tool for the evaluation of risk and reliability in engineering applications when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them.
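The combination of evidence and the modeling of conflict described above are usually carried out with Dempster's rule. A minimal sketch follows; the two-element frame of discernment and the mass values are purely illustrative assumptions, not drawn from any particular application:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with
    Dempster's rule, normalizing away the conflicting mass K."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:  # non-empty intersection: compatible evidence
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:  # mass that would fall on the empty set: conflict
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Two sources reporting on the frame {"a", "b"}
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.3, frozenset({"a", "b"}): 0.7}
fused = dempster_combine(m1, m2)  # conflict K = 0.6 * 0.3 = 0.18
```

Note how mass assigned to the whole frame {"a", "b"} expresses ignorance rather than an equal split between the singletons, which is precisely the point of the framework described above.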

The objective of the special session is to bring together researchers to report and discuss recent developments in belief function theory, its relationship to other theories such as probability theory, possibility theory, rough set theory, and fuzzy set theory, and their applications in artificial intelligence and the management of uncertain data. We invite original submissions in this area. The topics of interest include, but are not limited to:

  • Belief function theory
  • Conflict management
  • Data mining
  • Fuzzy sets
  • Rough sets
  • Temporal information fusion
  • Applications in artificial intelligence, managing environmental data, pattern recognition, etc.
  • Didier Coquin (Université de Savoie Mont-Blanc, France)
  • Reda Boukezzoula (Université de Savoie Mont-Blanc, France)

SS05: Aggregation: theory and practice

Probably due to the accessibility of large databases in the internet era, the interest in and need for carefully chosen aggregation techniques are currently growing in almost all fields of application (e.g., image processing and decision making). However, the mathematical formalization of aggregation processes can be traced back to the early days of the fuzzy set community and has led to many special sessions on the topic at past IPMU conferences. The aim of this IPMU 2020 track on Aggregation is to follow this longstanding tradition and to present a forum for researchers to discuss the most up-to-date research in the field of aggregation theory.

This special session addresses the most canonical aspects of aggregation functions. In particular, recent advances in classical aggregation functions such as weighted means, t-norms, t-conorms, Choquet or Sugeno integrals, copulas, overlap and grouping functions, and ignorance functions are welcome. Examples of the use of aggregation functions in different fields of application, such as decision support and medical image decision problems, are encouraged.
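As one concrete instance of the functions listed above, the following sketch computes a discrete Choquet integral with respect to a capacity; the criteria names and capacity values are illustrative assumptions only:

```python
def choquet(values, mu):
    """Discrete Choquet integral of `values` (dict: criterion -> score)
    with respect to the capacity `mu` (dict: frozenset -> weight)."""
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    for i, (_, v) in enumerate(items):
        upper = frozenset(c for c, _ in items[i:])  # criteria scoring >= v
        total += (v - prev) * mu[upper]
        prev = v
    return total

# Toy capacity on two criteria; the superadditive jump to 1.0 rewards
# candidates that do well on BOTH criteria at once.
mu = {frozenset(): 0.0,
      frozenset({"math"}): 0.4,
      frozenset({"lang"}): 0.4,
      frozenset({"math", "lang"}): 1.0}
score = choquet({"math": 0.9, "lang": 0.6}, mu)  # 0.6*1.0 + 0.3*0.4 = 0.72
```

If mu were additive (mu of the pair equal to 0.8), the integral would collapse to the weighted sum 0.9*0.4 + 0.6*0.4 = 0.6; the non-additive capacity is what models interaction between criteria.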

  • Tomasa Calvo (University of Alcalá, Madrid, Spain)
  • Radko Mesiar (Slovak University of Technology (STU), Bratislava, Slovakia)
  • Andrea Stupnanova (Slovak University of Technology, Bratislava, Slovakia)

SS06: Aggregation: pre-aggregation functions and other generalizations of monotonicity

Probably due to the accessibility of large databases in the internet era, the interest in and need for carefully chosen aggregation techniques are currently growing in almost all fields of application (e.g., image processing and decision making). However, the mathematical formalization of aggregation processes can be traced back to the early days of the fuzzy set community and has led to many special sessions on the topic at past IPMU conferences. The aim of this IPMU 2020 track on Aggregation is to follow this longstanding tradition and to present a forum for researchers to discuss the most up-to-date research in the field of aggregation theory.

This special session covers the study of fusion functions satisfying generalized forms of monotonicity. Since the property of weak monotonicity (which can be understood as a meeting point for two classical properties of aggregation functions: shift-invariance and increasing monotonicity) was introduced, many different generalizations of the property of monotonicity have been studied. Prominent examples are directional monotonicity and ordered directional monotonicity. The study of these properties led to the introduction of pre-aggregation functions in 2016. Since then, new classes of functions related to pre-aggregation functions have attracted increasing attention. The objective of this special session is to present recent developments in this area and to discuss the benefits of using pre-aggregation functions in practical problems.

  • Humberto Bustince (Public University of Navarra, Spain)
  • Graçaliz Dimuro (Centro de Ciências Computacionais of Universidade Federal do Rio Grande – FURG, Brazil)
  • Javier Fernández (Public University of Navarra, Spain)

SS07: Aggregation: aggregation of different data structures

Probably due to the accessibility of large databases in the internet era, the interest in and need for carefully chosen aggregation techniques are currently growing in almost all fields of application (e.g., image processing and decision making). However, the mathematical formalization of aggregation processes can be traced back to the early days of the fuzzy set community and has led to many special sessions on the topic at past IPMU conferences. The aim of this IPMU 2020 track on Aggregation is to follow this longstanding tradition and to present a forum for researchers to discuss the most up-to-date research in the field of aggregation theory.

This special session pays particular attention to the study of structures that have often been disregarded in the field of aggregation theory. In particular, aggregation functions are typically defined on a compact real interval, and many of their properties are based on the preservation of the classical order on the real line. However, many aggregation processes are not defined on a compact real interval. For instance, the aggregation of binary relations is a classical problem in relational algebra and, in the particular cases of equivalence relations and order relations, in cluster analysis and social choice. Other examples are the aggregation of multivariate data in multivariate statistics and the aggregation of strings in computer science. In this session, all contributions dealing with aggregation on different structures, from a theoretical, practical, or even algorithmic point of view, are very welcome.

  • Bernard De Baets (Ghent University, Belgium)
  • Raúl Pérez-Fernández (University of Oviedo, Spain)

SS08: Fuzzy methods in Data Mining and Knowledge Discovery

The objective of the special session is to provide a forum for the discussion of recent advances in the application of Data Mining and Knowledge Discovery technologies to diverse problems, focusing on those involving fuzzy methods, and to offer an opportunity for researchers to identify new and promising research directions.

Data Mining aims at the automatic discovery of underlying non-trivial knowledge from datasets by applying intelligent analysis techniques. Interest in this research area has grown considerably in recent years due to two key factors: (a) knowledge hidden in organizations’ databases can be exploited to improve strategic and managerial decision-making in today’s ultra-competitive markets; (b) the large volume of data managed by organizations makes it impossible to carry out the analysis process manually.

Nowadays, the volume of digitally stored information has increased considerably, not only in database format but also in text format available in open sources such as the Web, including log files registering the use of information and social media content. This has increased the interest in Text and Web Mining techniques. On the one hand, these techniques aim to automate the analysis process by introducing a variety of intelligent techniques to learn, optimize and represent uncertain and imprecise knowledge. On the other hand, these tools make it possible to analyze massive data, offering more efficient algorithms and a suitable selection of the obtained results in terms of their novelty, usefulness and interpretability. Topics of interest include, but are not limited to, the following:

  • Data, text and web mining.
  • Stream data mining, temporal data series.
  • Big data mining.
  • Imprecision, uncertainty and vagueness in data mining.
  • Data pre- and post- processing in data mining.
  • Parallel and distributed data mining algorithms.
  • Information summarization and visualization.
  • Human-machine interaction for data access.
  • Semantic models to represent input data and extracted knowledge in a Data Mining process.

Applications of Data Mining techniques: health, tourism, biological processes, customer profiles, anomaly detection, emergency management, situation recognition, etc.

  • M. Dolores Ruiz (University of Granada, Spain)
  • Karel Gutiérrez Batista (University of Granada, Spain)
  • Carlos J. Fernández-Basso (University of Granada, Spain)

SS09: Computational Intelligence for Logistics and Transportation Problems

Logistics and transportation (L&T) problems are at the core of many challenges that our societies will have to face in the medium and long term. Among them we can cite: intelligent transport, services deployment, health, smart societies, last-mile delivery, tourism, disaster and crisis management, etc. In each of these challenges it is possible to recognize decision and optimization problems that require suitable models (be they mathematical, linguistic, computational…) and solution methods. Uncertainty, vagueness, imprecision, dynamism, etc. are ubiquitous in L&T. These features should not be ignored and should be properly managed. It is here that Computational Intelligence (CI) based methodologies and techniques emerge as proper tools to model and solve L&T problems having those features.

The aim of this special session is to serve as a meeting point and discussion forum for researchers and practitioners on the latest developments in CI for L&T problems. In this context, we welcome both theoretical and more application-oriented contributions addressing:

  • how uncertainty, vagueness, imprecision, dynamism, etc. in L&T problems can be managed with CI tools;
  • description of ongoing efforts to develop CI based solutions for L&T problems;
  • description of already deployed CI-based applications;
  • “Success Stories”.

L&T problems of interest are (but not limited to):

  • Intelligent transportation systems;
  • Routing Problems;
  • Covering and Location Problems;
  • Port Logistics.
  • David A. Pelta (University of Granada, Spain)
  • Belén Melián (University of La Laguna, Spain)

SS10: Fuzzy Implication Functions

For more than a decade now, fuzzy implication functions have been one of the main research lines of the fuzzy logic community. These logical connectives are the generalization of the classical two-valued implication to the infinite-valued setting. In addition to modelling fuzzy conditionals, they are also used to perform backward and forward inferences in different fuzzy rule-based systems. Moreover, they have proved to be useful not only in fuzzy control and approximate reasoning, but also in many other fields such as Multi-Valued Logic, Image Processing, Data Mining, Computing with Words and Rough Sets, among others.

Due to this great variety of applications, fuzzy implication functions have attracted the efforts of many researchers from the points of view of both theory and applications. Indeed, the theoretical perspective focuses on problems whose solutions provide important insights from the point of view of their applications. Therefore, this special session seeks to bring together researchers interested in recent advances in the theory and the applications of fuzzy implication functions, concerning, among others, characterizations, representations, generalizations and their relationships with fuzzy negations, triangular norms, uninorms and other fuzzy logic connectives.

  • Michal Baczynski (University of Silesia in Katowice, Poland)
  • Balasubramaniam Jayaram (Indian Institute of Technology Hyderabad, India)
  • Sebastia Massanet (University of the Balearic Islands, Spain)

SS11: Soft Methods in Statistics and Data Analysis

A rapid flow of diverse data and a wide range of applications reveal the need for more flexible tools for uncertainty modeling. Often the desired methodology should combine various possible types of uncertainty, including randomness, imprecision, vagueness, etc. Also in science and engineering, the need to analyze and model the true uncertainty associated with complex systems still requires a better and more sophisticated representation of ignorance than that provided by uninformative Bayesian priors. Such challenges and needs call for soft modeling and computing that are less rigid than traditional approaches and techniques and hence adapt more easily to the actual nature of information. For example, by integrating fuzzy sets and probability theory one can develop more robust and interpretable models and tools which better capture all kinds of information contained in data.

The aim of this Special Session is to bring together theoreticians and practitioners working on soft methods applied in statistical reasoning and data analysis to exchange ideas and discuss new trends that enrich the traditional statistical and uncertainty modeling toolbox. Topics of interest include but are not limited to:

  • Analysis of censored or missing data
  • Analysis of fuzzy data
  • Clustering and classification
  • Fuzzy random variables
  • Fuzzy regression methods
  • Imprecise probabilities
  • Interval data
  • Machine learning
  • Possibility theory
  • Random sets
  • Robust statistics
  • Soft computing
  • Statistical software for imprecise data.
  • Przemyslaw Grzegorzewski (Warsaw University of Technology, Poland)

SS12: Uncertainty Issues in Brain Computer Interface Systems

Brain Computer Interface (BCI) systems have emerged as one of the most interesting topics of research. Applications range from studying creativity in humans to medical applications, especially enabling individuals with various limitations (e.g., motor limitations) to overcome them. Although grounded mainly in signal processing, BCI systems involve various forms of uncertainty and imprecision. This special session on Uncertainty Issues in Brain Computer Interface Systems aims to address these topics.

  • Anca Ralescu (University of Cincinnati, OH, USA)
  • Javier Andreu (University of Essex, United Kingdom)

SS13: Image Understanding and Explainable AI

Hot on the heels of the major successes of deep learning (often dubbed simply AI) came the concept of explainable AI. This is because, while deep learning excels at recognition tasks, it appears to fail when it comes to explaining and reasoning about the results of the recognition. Yet it can be argued that image understanding, a topic with a rich past, is indeed an instance of explainable AI. This session aims to provide a forum for researchers in image understanding to show how their approaches are related, relevant, or indeed at the root of the concept of explainable AI.

  • Isabelle Bloch (Télécom ParisTech, Paris, France)
  • Atsushi Inoue (Eastern Washington University, Spokane, USA)
  • Hiroharu Kawanaka (Mie University, Japan)
  • Anca Ralescu (University of Cincinnati, OH, USA)

SS14: Fuzzy and Generalized Quantifier Theory

The theory of generalized quantifiers was initiated in logic by Mostowski in 1957 and has since been developed by many authors. A very general setting was provided by Lindström in 1966. In 1983, Zadeh introduced the concept of a fuzzy quantifier. His suggestion generalizes Lindström monadic quantifiers of type <1, 1>. Following Zadeh, several theories of fuzzy quantifiers have been developed by various authors. An important class is that of intermediate quantifiers: special expressions of natural language whose interpretation lies between the interpretations of the classical quantifiers “for all” and “exists”. Typical examples are “most, many, almost all, a lot of, few” and many others.

The aim of this special session is to present recent developments and trends in the theory of fuzzy quantifiers. We invite contributions that are focused on (but not limited to) the following topics:

  • fuzzy quantifiers as generalization of Lindström quantifiers,
  • fuzzy logic theory of intermediate quantifiers,
  • other competing theories of fuzzy quantifiers and their comparison,
  • enhancement of existing applications,
  • presentation of new applications,
  • fundamental philosophical issues of the theory of fuzzy quantifiers in relation to the theory of generalized quantifiers.
  • Vilém Novák (University of Ostrava, Czech Republic)
  • Petra Murinová (University of Ostrava, Czech Republic)

SS15: Mathematical Methods Towards Dealing with Uncertainty in Applied Sciences

The aim of this session is to discuss mathematical methods focused on elaborating various structured spaces characterizing uncertainty in many of its facets. By this, we mean spaces whose structure is determined by fuzzy constraints: relations, partitions, topologies, metrics, etc. In all these structured spaces, we focus on mutual relationships, representations in the form of projections, and estimations of quality.

Another focus is to show that a certain amount of uncertainty is useful in finding weak (robust) solutions to many classical problems in applied fields connected with dynamic processes that are modelled by differential, integral, or stochastic equations and their fuzzy versions. We solicit contributions that show how sophisticated theories contribute non-trivial solutions to problems in applied sciences, including those that are ill-defined or have non-standard solutions.

  • Irina Perfilieva (University of Ostrava, Czech Republic)
  • Michal Holčapek (University of Ostrava, Czech Republic)

SS16: Statistical Image Processing and Analysis, with Applications in Neuroimaging

The scientific study of digitally acquired imaging data has been expanding exponentially over the last half century or so, and all indications are that this trend will continue. This expansion is especially acute in the field of medical imaging where sophisticated imaging modalities such as functional MRI (fMRI), diffusion weighted imaging (DWI), real time cardiac imaging, and electroencephalography (EEG) are giving rise to more complex scientific questions. For example, in the field of neuroimaging new insights from fMRI, DWI and EEG are leading to knowledge of how different regions of the brain communicate with each other (connectivity) and this in turn allows observation of the detrimental effects of brain diseases on these connectivity patterns. 

In order to keep up with the progress of medical imaging, specific attention is needed to develop new image processing and analysis techniques that can improve image quality and interpretability. In addition, new methods are needed to appropriately model the (potentially stochastic) biological processes involved while appropriately quantifying the associated uncertainties. Specifically, quantification and characterization of uncertainty is urgently needed throughout the image processing pipeline, including image acquisition/harmonization, reconstruction/enhancement, and analysis, each of which produces its own unique challenges. These different sources of uncertainty can aggregate and become amplified through the processing pipeline. For example, combining data acquired from different sources/platforms (e.g. scans with different magnet strengths) can be extremely challenging and can lead to some major differences in data preprocessing downstream, specifically in segmentation/tissue classification and biological process modeling. New preprocessing methods for intensity normalization may help to improve supervised learning models and decrease bias in final analyses, thus uncovering new features in different clinical studies that rely on imaging biomarkers.

With improved modeling approaches at all stages in the processing pipeline, medical imaging can be used to understand and battle the critical disease processes of our era (such as Alzheimer’s, cancer, and heart disease) that have long eluded prevention and comprehensive treatment from the medical research community. It is therefore essential that leading quantitative scientists, with appropriate understanding of modeling uncertainty, become more involved in the vast array of exciting open problems in medical image processing and analysis.  

This special session will address the span of challenges involved in quantifying uncertainty in image processing and neuroimaging.

  • John Kornak (University of California, San Francisco, USA)
  • Rajarshi Guhaniyogi (University of California, Santa Cruz, USA)
  • Aaron Wolfe Scheffler (University of California, San Francisco, USA)

SS17: Interval Uncertainty

The main topic of the conference is the management and processing of uncertainty in knowledge-based systems. In general, there are two main sources of information: measurements and expert estimates. Traditionally, uncertainty in measurements is described in probabilistic terms, while uncertainty in expert estimates is usually described by fuzzy techniques. The probabilistic description of measurement uncertainty is indeed widespread; it is based on the fact that in many practical situations, we know the probability distribution of the measurement errors. This distribution is usually obtained when we calibrate the measuring instrument, i.e., compare its results with the results of measuring the same quantity by a much more accurate (“standard”) measuring instrument. However, there are two important situations when such a comparison is not done.

The first is cutting-edge measurements, when we use the best possible measuring instruments. In this case, there is no more accurate measuring instrument to compare against. In such situations, the only information that we have about uncertainty is the upper bound D on the (absolute value of the) measurement error provided by the manufacturer of the measuring instrument. Then, if the measurement result is X, the only information that we have about the actual value of the corresponding quantity is that it lies in the interval [X-D, X+D].

Another important case when interval uncertainty appears is measurements in industry. Here, in principle, it is possible to calibrate all the sensors and all the measuring instruments, but calibration is expensive, so usually, for most instruments, it is not done. As a result, the only information that we have about the resulting uncertainty is interval uncertainty. It is therefore important to take interval uncertainty into account when processing data in knowledge-based systems. Interval uncertainty is also closely related to fuzzy techniques: indeed, if we want to know how the fuzzy uncertainty of the inputs propagates through a data processing algorithm, then Zadeh’s extension principle is equivalent to processing alpha-cuts (intervals) for each level alpha. This relation between interval and fuzzy computations is well known, but fuzzy researchers are often unaware of the latest, most efficient interval techniques and thus use outdated, less efficient methods. One of the objectives of this session is to help the fuzzy community by explaining the latest interval techniques and to help the interval community to better understand the related interval computation problems.
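The alpha-cut view of Zadeh’s extension principle described above can be sketched in a few lines; the function f(x) = x*(1-x) and the symmetric triangular fuzzy input are illustrative assumptions:

```python
def f_range(lo, hi):
    """Tight range of f(x) = x*(1-x) over [lo, hi]: check both endpoints
    plus the interior critical point x = 0.5 when it falls inside."""
    xs = [lo, hi] + ([0.5] if lo < 0.5 < hi else [])
    ys = [x * (1 - x) for x in xs]
    return min(ys), max(ys)

def alpha_cut(center, spread, alpha):
    """Alpha-cut (an interval) of a symmetric triangular fuzzy number."""
    return center - (1 - alpha) * spread, center + (1 - alpha) * spread

# Zadeh's extension principle, computed level by level on alpha-cuts:
# propagate the fuzzy input "about 0.3" through f.
for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(0.3, 0.2, alpha)
    print(alpha, f_range(lo, hi))
```

Naive endpoint-only evaluation would miss the maximum of f whenever the cut straddles 0.5; avoiding exactly this kind of pitfall is what efficient interval techniques are designed for.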

Yet another relation between interval and fuzzy techniques is that traditional fuzzy techniques implicitly assume that experts can describe their degree of certainty in different statements by an exact number. In reality, it is more reasonable to expect experts to provide only a range (interval) of possible values, leading to interval-valued fuzzy techniques that, in effect, combine both types of uncertainty.

  • Martine Ceberio (University of Texas at El Paso, USA)
  • Vladik Kreinovich (University of Texas at El Paso, USA)

SS18: Discrete Models and Computational Intelligence

The objective of the special session is to provide a forum for the discussion of recent advances in discrete models, such as fuzzy graphs, fuzzy state machines, fuzzy cognitive maps, etc. Both applications and theoretical aspects are welcome. The topics include, but are not limited to:

  • Evolutionary search in graphs: In traditional mathematics, many graph search problems are NP-hard, while various meta-heuristics often provide efficient polynomial algorithms delivering results with high accuracy in reasonably short time. Such approaches extended to fuzzy graph search problems are of interest.
  • Fuzzy graphs: Fuzzy edge and fuzzy node graphs in modeling various real life phenomena, such as reliable and limited capacity networks, multiple feature descriptors of data (fuzzy-fuzzy signatures), and others.
  • Fuzzy cognitive maps and their extensions: Fuzzy state machines and cognitive maps describing the dynamic behavior of vague discrete or discretized systems. Fuzzy cognitive networks, granular cognitive maps, etc.
  • László T. Kóczy (Budapest University of Technology and Economics, Hungary)
  • István Á. Harmati (Széchenyi István University, Hungary)

SS19: Current Techniques to Model, Process and Describe Time Series

Today, there is great interest in research on time series, since they arise in many real-life situations, and researchers are keen to extract the relevant information from data that can be modelled as time series. Initially, time series are represented as raw data; usually, they are processed using mathematical methods to obtain information from them, while other approaches develop models to represent the series. Once a time series has been modelled, different techniques can be used to find patterns and/or to study trends in the series. Finally, another relevant line of research concerns the description of series using natural language, where researchers aim to extract information expressed in natural language. For all these reasons, the goal of this special session is to provide an international forum for the presentation of recent results in this field. A non-exhaustive list of topics includes:

  • methods for processing time series represented as raw data;
  • current models to represent time series;
  • efficient modelling of time series;
  • querying time series;
  • linguistic description of time series;
  • current techniques for extracting specific information from time series.
  • Juan Moreno-Garcia (University of Castilla-La Mancha, Spain)
  • Luis Rodriguez-Benitez (University of Castilla-La Mancha, Spain)

SS20: Mathematical Fuzzy Logic and Graded Reasoning Models

The IPMU special session on “Mathematical fuzzy logic and graded reasoning models” is devoted to the most recent developments in logic-based formalisms dealing with graded notions of truth and belief, in particular those related to theoretical advances in mathematical fuzzy logic, in logics for reasoning about uncertainty, preference or similarity and their applications. Potential topics include but are not limited to:

  • Formal systems of fuzzy logic (Mathematical Fuzzy Logic, Logics with Evaluated Syntax, Partial Fuzzy Logic, Higher-Order Fuzzy Logics, Fuzzy Modal Logics, Related Algebraic Structures, Theory)
  • Uncertainty reasoning (Probabilistic Logics, Possibilistic Logic, Non-Monotonic Reasoning, Causal Reasoning)
  • Similarity-based Reasoning (Logical Foundations, Applications)
  • Hybridizations of Fuzziness and Uncertainty (Fuzzy Logics for Uncertainty Reasoning, States and Preferences)
  • Applications of Fuzziness and Uncertainty in: Argumentation, Logic Programming, Description Logics, Formal Concept Analysis.
  • Tommaso Flaminio, (Spanish National Research Council, Spain)
  • Lluis Godo, (Spanish National Research Council, Spain)
  • Vilém Novák, (University of Ostrava, Czech Republic)
  • Amanda Vidal, (Czech Academy of Sciences, Czech Republic)

SS21: Formal Concept Analysis, Rough Sets, general operators and related topics

Formal Concept Analysis (FCA) is a mathematical tool for obtaining information from relational datasets. It has been related to other useful tools for extracting and handling information from databases, such as Rough Set Theory, Possibility Theory, Mathematical Morphology, Fuzzy Relation Equations, fuzzy logic, etc. These tools have been combined to obtain robust and efficient mechanisms that take advantage of the main properties of each, and they have been successfully applied to data mining, information retrieval, knowledge management, data and knowledge engineering, image processing, etc. They have also been developed to make it possible to tackle problems associated with the treatment and management of information with uncertainty. The purpose of this special session is to present new advances and interactions among these important tools and their applications to hot topics and relevant problems. A non-exhaustive list of topics includes:

  • Formal concept analysis
  • Fuzzy sets and fuzzy logic
  • Rough sets
  • Fuzzy rough sets
  • Interval-valued fuzzy sets
  • Operators in relational data analysis
  • Fuzzy relation equations
  • Mathematical morphology
  • Aggregation operators in relational data analysis

  • M. Eugenia Cornejo (University of Cádiz, Spain)
  • Didier Dubois (IRIT, University Paul Sabatier, Toulouse, France)
  • Jesús Medina (University of Cádiz, Spain)
  • Henri Prade (IRIT, University Paul Sabatier, Toulouse, France)
  • Eloísa Ramírez-Poussa (University of Cádiz, Spain)

SS22: Data analytics in Security Intelligence Systems

Data analytics is becoming increasingly important in many application areas, including the focus of this special session: security intelligence systems. Management of uncertainty plays an important role in security intelligence, where information and knowledge derived from various sources, such as open sources (web, news, etc.) and human observation, are inherently uncertain due to imperfections of different kinds: the (un)reliability of the source, missing data, error-prone data, imprecise information, disinformation (including fake news), and errors introduced when extracting information from unstructured data (natural language text, speech, images). In this context, topics include (but are not limited to):

  • AI, Machine Learning and Deep Learning approaches
  • Big data analytics
  • Information extraction and knowledge discovery
  • Data and information fusion
  • Speech understanding
  • HMI issues in end-users’ dealing with uncertainty
  • Anomaly detection
  • Situation recognition for threat detection
  • Situation assessment (threat assessment)
  • Investigative and strategic analysis

  • Henrik Legind Larsen (Legind Technologies, Denmark)
  • Maria Jose Martín-Bautista (University of Granada, Spain)
  • Hassane Essafi (CEA, France)

SS23: Computational Intelligence Methods in Information Modelling, Representation and Processing: on the 50th anniversary of Codd’s relational data model

The ever-growing amount of information collected by modern information systems calls for improved methods of adequate representation and processing, taking into account many important aspects such as data volume, provenance, quality, veracity, etc. Even though many new solutions have recently been proposed and successfully implemented, many challenges are still pending. The topics of this special session revolve around the concept of the database. It appears there is still room for new proposals regarding such a fundamental aspect of database management as data modelling. Despite the fact that 50 years have passed since its invention, that data have profoundly changed in character, and that many alternative data models have been proposed, the relational data model still reigns. This definitely proves that there is a real stroke of genius behind it, but it also means that there is an opportunity and a need to develop new approaches to data modelling on all levels, from conceptual to physical, and to data querying, better suited to the data processed by modern ICT applications.

The topics related to data modelling, management and processing are very relevant and overlap with most of the conference themes. The following non-exhaustive list covers topics of special interest for this session.

  • Data modelling: theory and tools
  • Database management systems: theory and applications
  • Big Data challenges for data representation and processing
  • Distributed databases and distributed data processing, including blockchain technology
  • Graph databases and other NoSQL paradigms
  • Data quality, veracity and value
  • Handling imperfect information in databases
  • Spatio-temporal data modeling and analytics
  • Semantic techniques in data modelling and processing
  • Non-standard queries against databases
  • Computational intelligence based extensions to popular querying formalisms
  • Internet of Things: data collecting, representing and processing
  • Data processing in biomedical applications
  • Data management for decision making support
  • Deductive databases
  • Textual information retrieval

  • Guy De Tré (Ghent University, Belgium)
  • Janusz Kacprzyk (Systems Research Institute Polish Academy of Sciences, Poland)
  • Adnan Yazıcı (Nazarbayev University, Kazakhstan)
  • Slawomir Zadrozny (Systems Research Institute Polish Academy of Sciences, Poland)

Call for Special Session Proposals (now closed)