GCAI-2018: Papers with Abstracts

Papers
Abstract. Although it is quite inexpressive, the description logic (DL) FL0, which provides only conjunction, value restriction and the top concept as concept constructors, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL0 TBoxes is coNP-complete, and becomes even ExpTime-complete when general TBoxes are used. In the present paper, we use automata working on infinite trees to solve both standard and non-standard inferences in FL0 w.r.t. general TBoxes. First, we give an alternative proof of the ExpTime upper bound for subsumption in FL0 w.r.t. general TBoxes, based on the use of looping tree automata. Second, we employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer and the difference of FL0 concepts w.r.t. general TBoxes.
Abstract. QDec-POMDPs are a qualitative alternative to stochastic Dec-POMDPs for goal-oriented planning in cooperative partially observable multi-agent environments. Although QDec-POMDPs share the same worst-case complexity as Dec-POMDPs, previous research has shown an ability to scale up to larger domains while producing high-quality plan trees. A key difficulty in distributed execution is the need to construct a joint plan tree branching on the combinations of observations of all agents. In this work, we suggest an iterative algorithm, IMAP, that plans for one agent at a time, taking into consideration collaboration constraints on the action execution of previous agents and generating new constraints for the next agents. We explain how these constraints are generated and handled, and describe a backtracking mechanism for changing constraints that cannot be met. We provide experimental results on multi-agent planning domains, showing that our method scales to much larger problems with several collaborating agents and huge state spaces.
Abstract. Business Process Diagrams (BPDs) have been used for documenting, analyzing and optimizing business processes. Business Process Model and Notation (BPMN) provides a rich graphical notation which is supported by a formalization that permits users to automate such tasks. Stochastic versions of BPMN allow designers to represent the probability of every possible way a process can develop. Nevertheless, this support is not enough for representing conditional dependencies between events occurring during process development. We show how structural learning on a Bayesian Network obtained from a BPD is used for discovering causal relations between process events. Temporal precedence between events, captured in the BPD, is used for pruning and correcting the model discovered by an Inferred Causation (IC) algorithm. We illustrate our approach by detecting dishonest bidders in an on-line auction scenario.
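A minimal sketch of the temporal-precedence pruning idea described in this abstract, assuming a causal-discovery step has already produced a set of directed candidate edges and the BPD supplies a temporal rank for each event; the names, helper function, and toy data are illustrative assumptions, not taken from the paper.

```python
# Sketch (not the paper's implementation): prune/orient edges from a causal-
# discovery step using the temporal order of events taken from the BPD.
# `temporal_rank` and the candidate edge set are hypothetical inputs.

temporal_rank = {"bid_placed": 0, "price_raised": 1, "auction_closed": 2}

def prune_with_precedence(edges, temporal_rank):
    """Keep only edges that respect temporal precedence; flip those that do not."""
    cleaned = set()
    for cause, effect in edges:
        if temporal_rank[cause] <= temporal_rank[effect]:
            cleaned.add((cause, effect))   # consistent with the process order
        else:
            cleaned.add((effect, cause))   # an effect cannot precede its cause
    return cleaned

candidate_edges = {("price_raised", "bid_placed"), ("bid_placed", "auction_closed")}
print(prune_with_precedence(candidate_edges, temporal_rank))
```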
Abstract. In this paper we study algorithms for computing modules that are minimal w.r.t. set inclusion and that preserve the entailment of all subsumptions over a signature of interest. We follow the black-box approach for finding one or all justifications, replacing the entailment tests with logical difference checks, thereby obtaining modules that preserve not only a given consequence but all entailments over a signature. Such minimal modules can serve to improve our understanding of the internal structure of large and complex ontologies. Additionally, several optimisations for speeding up the computation of minimal modules are investigated. We present an experimental evaluation of an implementation of our algorithms for ELHr-terminologies by applying them to the prominent medical ontologies Snomed CT and NCI Thesaurus.
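The black-box style of module minimisation mentioned above can be pictured as a greedy shrinking loop; the `logical_difference_is_empty` check below is a placeholder for an actual ELHr logical-difference procedure, so this is only an illustration of the general scheme, not the authors' implementation.

```python
# Illustrative sketch of a black-box minimisation loop: try to drop one axiom
# at a time and keep the removal only if no Sigma-entailment is lost.

def minimal_module(ontology, signature, logical_difference_is_empty):
    """Greedily shrink `ontology` while it still preserves all Sigma-entailments."""
    module = list(ontology)
    for axiom in list(module):
        candidate = [a for a in module if a is not axiom]
        # The logical-difference check replaces a plain entailment test.
        if logical_difference_is_empty(ontology, candidate, signature):
            module = candidate
    return module
```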
Abstract. An agent that autonomously learns to act in its environment must acquire a model of the domain dynamics. This can be a challenging task, especially in real-world domains, where observations are high-dimensional and noisy. Although in automated planning the dynamics are typically given, there are action schema learning approaches that learn symbolic rules (e.g. STRIPS or PDDL) to be used by traditional planners. However, these algorithms rely on logical descriptions of environment observations. In contrast, recent methods in deep reinforcement learning for games learn from pixel observations. However, they typically do not acquire an environment model, but a policy for one-step action selection. Even when a model is learned, it cannot generalize to unseen instances of the training domain. Here we propose a neural network-based method that learns from visual observations an approximate, compact, implicit representation of the domain dynamics, which can be used for planning with standard search algorithms and generalizes to novel domain instances. The learned model is composed of submodules, each implicitly representing an action schema in the traditional sense. We evaluate our approach on visual versions of the standard domain Sokoban, and show that, by training on a single instance, it learns a transition model that can be successfully used to solve new levels of the game.
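Purely as an architectural illustration of the submodule idea (one sub-network per action acting as an implicit action schema that maps the current observation to the predicted next observation), the following PyTorch-style sketch uses made-up layer sizes and is not the network described in the paper.

```python
import torch.nn as nn

# Architectural sketch only: one small convolutional submodule per action,
# each predicting the next observation from the current one.

class ActionSchemaNet(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, obs):
        return self.net(obs)                  # predicted next observation

class TransitionModel(nn.Module):
    def __init__(self, num_actions, channels=3):
        super().__init__()
        self.schemas = nn.ModuleList(ActionSchemaNet(channels) for _ in range(num_actions))

    def forward(self, obs, action):
        return self.schemas[action](obs)      # apply the submodule for this action
```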
Abstract. We introduce the Historical Gradient Boosting Machine with the objective of improving the convergence speed of gradient boosting. Our approach is analyzed from the perspective of numerical optimization in function space and considers gradients from previous steps, which have rarely been exploited by traditional methods. To better exploit the guiding effect of historical gradient information, we incorporate both the accumulated previous gradients and the current gradient into the computation of the descent direction in function space. By fitting to the descent direction given by our algorithm, the weak learner enjoys the advantages of historical gradients, which mitigate the greediness of the steepest-descent direction. Experimental results show that our approach improves the convergence speed of gradient boosting without a significant decrease in accuracy.
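The following sketch illustrates the general idea of blending historical and current gradients before fitting the next weak learner; the plain averaging and the mixing weight `beta` are assumptions made for illustration, not the paper's exact update rule.

```python
import numpy as np

# Sketch of the idea only: blend the accumulated historical gradient with the
# current one, and fit the next weak learner to this blended direction instead
# of the raw negative gradient (steepest descent).

def historical_descent_direction(current_grad, history, beta=0.5):
    """Return the target the next weak learner is fitted to."""
    if not history:
        return current_grad
    accumulated = np.mean(history, axis=0)    # accumulated previous gradients
    return (1.0 - beta) * current_grad + beta * accumulated
```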
Abstract. Search and optimization problems are a major arena for the practical application of Artificial Intelligence. However, when supply chain optimization and scheduling are tackled, techniques based on linear or non-linear programming are often used in preference to Evolutionary Computation techniques such as Genetic Algorithms (GAs). It is important to analyse whether GAs are suitable for continuous, real-world supply chain scheduling tasks which need regular updates.
We analysed a practical situation involving iron ore train networks, one of significant economic importance. In addition, iron ore train networks have some interesting and distinctive characteristics, so analysing this situation is an important step toward understanding the performance of GAs in real-world supply chain scheduling. We compared the performance of GAs with nonlinear programming heuristics and existing industry scheduling approaches. The main result is that our comparison of techniques produces an example in which GAs perform well and are a cost-effective approach.
Abstract. CAPTCHAs have established themselves as a standard technology to confidently distinguish humans from bots. Beyond the typical use for security reasons, CAPTCHAs have helped promote AI research in challenge tasks such as image classification and optical character recognition. It is, therefore, natural to consider what other challenge tasks for AI could serve a role in CAPTCHAs. The Winograd Schema Challenge (WSC), a certain form of hard pronoun resolution task, was proposed by Levesque as such a challenge task to promote research in AI. Based on current reports in the literature, the WSC remains a challenging task for bots, and is, therefore, a candidate to serve as a form of CAPTCHA. In this work we investigate whether this a priori appropriateness of the WSC as a form of CAPTCHA can be justified in terms of its acceptability by human users in relation to existing CAPTCHA tasks. Our empirical study involved a total of 329 students, aged between 11 and 15, and showed that the WSC is generally faster and easier to solve than, and as entertaining as, the most typical existing CAPTCHA tasks.
Abstract. The Winograd Schema Challenge (WSC) — the task of resolving pronouns in certain sentences where shallow parsing techniques seem not to be directly applicable — has been proposed as an alternative to the Turing Test. According to Levesque, having access to a large corpus of text would likely not help much in the WSC. Among a number of attempts to tackle this challenge, one particular approach has demonstrated the plausibility of using commonsense knowledge automatically acquired from raw text in English Wikipedia.
Here, we present the results of a large-scale experiment that shows how the performance of that particular automated approach varies with the availability of training material. We compare the results of this experiment with two studies: one from the literature that investigates how adult native speakers tackle the WSC, and one that we design and undertake to investigate how teenager non-native speakers tackle the WSC. We find that the performance of the automated approach correlates positively with the performance of humans, suggesting that the performance of the particular automated approach could be used as a metric of hardness for WSC instances.
Abstract. We assume that service robots will have spare time in between scheduled user requests, which they could use to perform additional unrequested services in order to learn a model of users’ preferences and receive rewards. However, a mobile service robot is constrained by the need to travel through the environment to reach users in order to perform services for them, as well as the need to carry out scheduled user requests. We assume service robots operate in structured environments comprised of hallways and floors, resulting in scenarios where an office can be conveniently added to the robot’s plan at a low cost, which affects the robot’s ability to plan and learn.
We present two algorithms, Planning Thompson Sampling and Planning UCB1, which are based on existing algorithms used in multi-armed bandit problems, but are modified to plan ahead considering the time and location constraints of the problem. We compare them to existing versions of Thompson Sampling and UCB1 in two environments representative of the types of structures a robot will encounter in an office building. We find that our planning algorithms outperform the original naive versions in terms of both reward received and the effectiveness of the model learned in a simulation. The difference in performance is partially due to the fact that the original algorithms frequently miss opportunities to perform services at a low cost for convenient offices along their path, while our planning algorithms do not.
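As an illustration of the plan-ahead flavour of Thompson Sampling described in this abstract, the sketch below samples a value for each office from a Beta posterior, discounts it by the detour cost from the robot's current route, and skips offices whose detour does not fit the available slack; the names, the Beta posterior, and the additive cost model are assumptions for illustration, not the paper's exact algorithm.

```python
import random

# Hypothetical sketch of a "planning" bandit step that accounts for travel
# cost and the time remaining before the next scheduled request.

def planning_thompson_sampling(offices, posteriors, detour_cost, slack, cost_weight=1.0):
    """Pick the office with the best sampled value net of its detour cost."""
    best_office, best_score = None, float("-inf")
    for office in offices:
        alpha, beta = posteriors[office]            # Beta posterior over service acceptance
        sampled_value = random.betavariate(alpha, beta)
        cost = detour_cost(office)
        if cost > slack:                            # cannot fit before the next request
            continue
        score = sampled_value - cost_weight * cost  # trade reward against travel cost
        if score > best_score:
            best_office, best_score = office, score
    return best_office
```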
Abstract. With the current surge of interest in ethics in AI, we present our position with respect to these challenges. Our proposal, responsible technologies, aims to (1) address a number of the ethical challenges put forward in AI, and (2) provide the first building blocks towards the development of ethical AI systems. The current discussion on how to address ethics in AI usually focuses on issues like policies, education, or research culture. There is no computational method yet mature enough to address ethics in AI. We break ground by proposing new methods and tools, underpinned by multidisciplinary research, that can make humans and machines understand their respective dynamic goals while strictly abiding by the values that inspire our societies. This position paper presents our plan of work for the development of responsible technologies that embed values within technology through what we refer to as ethics by construction.
Abstract. Advances in state-of-the-art techniques including convolutional neural networks (CNNs) have led to improved perception in autonomous robots. However, these new techniques make a robot’s decision-making process obscure even for the experts. Our goal is to automatically generate natural language explanations of a robot’s perception-based inferences in order to help people understand what features contribute to these classification predictions. Generating natural language explanations is particularly challenging for perception and other high-dimensional classification tasks because 1) we lack a mapping from features to language and 2) there are a large number of features which could be explained. We present a novel approach to generating explanations, which first finds the important features that most affect the classification prediction and then utilizes a secondary detector, capable of identifying and labeling multiple parts of the features, to label only those important features. Those labels serve as the natural language groundings that we use in our explanations. We demonstrate our explanation algorithm’s ability on the floor identification classifier of our mobile service robot.
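The explanation pipeline described above might be sketched structurally as follows; `classify`, `find_important_region`, and `detect_parts` are placeholders standing in for the classifier, the feature-importance step, and the secondary detector, so this is only an illustration of the flow, not the paper's system.

```python
# Structural sketch: ground the explanation in labels of detected parts that
# overlap the important region of the input.

def explain(image, classify, find_important_region, detect_parts):
    """Return a natural-language explanation grounded in labelled image parts."""
    label = classify(image)
    important = find_important_region(image)        # e.g. a set of pixel coordinates
    # Keep only detected, labelled parts that overlap the important region.
    grounded = [name for name, pixels in detect_parts(image) if pixels & important]
    return f"I predict '{label}' mainly because of: " + ", ".join(grounded)
```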
Abstract. Significant advances in the performance of deep neural networks, such as Convolutional Neural Networks (CNNs) for image classification, have created a drive for understanding how they work. Different techniques have been proposed to determine which features (e.g., image pixels) are most important for a CNN’s classification. However, the important features output by these techniques have typically been judged subjectively by a human to assess whether they capture the features relevant to the classification, and not whether the features were actually important to the classifier itself. We address the need for an objective measure to assess the quality of different feature importance measures. In particular, we propose measuring the ratio of a CNN’s accuracy on the whole image compared to its accuracy on an image containing only the important features. We also consider scaling this ratio by the relative size of the important region in order to measure conciseness. We demonstrate that our measures correlate well with prior subjective comparisons of important features but, importantly, do not require human studies. We also demonstrate that the features which multiple techniques agree are important have a higher impact on accuracy than those features found by only one technique.
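A minimal sketch of the proposed measure under our reading of the abstract: compare accuracy on the full images with accuracy on images keeping only the important region, and optionally scale by the relative size of that region. The helper functions, the 0/1 mask representation, and the exact direction of the scaling are assumptions made for illustration.

```python
import numpy as np

# Sketch only: `accuracy(model, images)` and `keep_only(images, masks)` are
# hypothetical helpers; `masks` is assumed to be a 0/1 array marking the
# important region of each image.

def importance_quality(model, images, masks, accuracy, keep_only):
    full_acc = accuracy(model, images)                       # accuracy on whole images
    masked_acc = accuracy(model, keep_only(images, masks))   # accuracy on important region only
    ratio = masked_acc / full_acc                            # accuracy retained by the region
    region_fraction = float(np.mean(masks))                  # relative size of the region
    conciseness = ratio / region_fraction                    # smaller regions score higher (our reading)
    return ratio, conciseness
```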
Abstract. Service robots are becoming more and more capable but at the same time they are opaque to their users. Once a robot starts executing a task it is hard to tell what it is doing or why. To make robots more transparent to their users we propose to expand the capabilities of robots to not only execute tasks but also answer questions about their experience.
During execution, our CoBot robots record log files. We propose to use these files as a recording of the robot’s experience. Log files record the experience of the robot in terms of its internals. To process information from the logs we define Log Primitive Operations (LPOs) that the robot can autonomously perform. Each LPO is defined in terms of an operation and a set of filters. We frame the problem of understanding questions about the robot’s past experiences as grounding input sentences to LPOs. To do so, we introduce a probabilistic model to ground sentences to these primitives. We evaluate our approach on a corpus of 133 sentences, showing that our method is able to learn the meaning of users’ questions.
Finally, we introduce the concept of checkable answers, enabling the robot to provide answers that better explain the computation performed to achieve the reported result.
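The grounding step described in this abstract could be illustrated with a toy naive-Bayes-style scorer over (operation, filters) pairs; the bag-of-words factorisation, the lexicon, and the smoothing constant are assumptions for illustration, not the paper's probabilistic model.

```python
import math

# Toy sketch of grounding a question to a Log Primitive Operation (LPO).

def ground(sentence, lpos, word_given_lpo, prior):
    """Return the LPO maximising P(lpo) * prod_w P(w | lpo)."""
    words = sentence.lower().split()
    best, best_logp = None, float("-inf")
    for lpo in lpos:
        logp = math.log(prior[lpo])
        for w in words:
            logp += math.log(word_given_lpo[lpo].get(w, 1e-6))   # smoothed word likelihood
        if logp > best_logp:
            best, best_logp = lpo, logp
    return best
```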
Abstract. Recent developments in AI, Machine Learning and Robotics have raised concerns about the ethical consequences of both academic and industrial AI research. Leading academics, businessmen and politicians have voiced an increasing number of questions about the consequences of AI, not only for individual people but also, at a larger scale, for the future of work and employment, its social consequences and the sustainability of the planet. In this work, we analyse the use and the occurrence of ethics-related research in leading AI, machine learning and robotics venues. To do so, we perform long-term, historical, corpus-based analyses of a large number of flagship conferences and journals. Our experiments identify the prominence of ethics-related terms in published papers and present several statistics on related topics. Finally, this research provides quantitative evidence of the pressing ethical concerns of the AI community.
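A toy illustration of the kind of corpus-based term analysis described above; the keyword list and the per-year corpus structure are made-up placeholders, not the terms or data used in the paper.

```python
# Toy sketch: fraction of papers per year that mention an ethics-related term.

ETHICS_TERMS = {"ethics", "ethical", "fairness", "accountability", "privacy"}

def ethics_prominence(papers_by_year):
    """Map each year to the fraction of papers mentioning an ethics-related term."""
    stats = {}
    for year, papers in papers_by_year.items():
        hits = sum(any(t in p.lower() for t in ETHICS_TERMS) for p in papers)
        stats[year] = hits / len(papers) if papers else 0.0
    return stats
```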
Abstract. Three connections between Dynamic Programming (DP) and Constraint Programming (CP) have previously been explored in the literature: DP-based global constraints, DP-like memoisation during tree search to avoid recomputing results, and subsumption of both by bucket elimination. In this paper we propose a new connection: many discrete DP algorithms can be directly modelled and solved as a constraint satisfaction problem (CSP) without backtracking. This has applications including the design of monolithic CP models for bilevel optimisation. We show that constraint filtering can occur between leader and follower variables in such models, and demonstrate the method on network interdiction.
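To illustrate the flavour of the proposed connection, the toy sketch below treats each cell of a knapsack DP table as a CSP variable whose value is fixed by propagating the recurrence constraint in dependency order, so no backtracking is needed; knapsack and all names here are our own illustration, not an example or a CP model from the paper.

```python
# Sketch: each DP cell is a CSP variable, the recurrence is an equality
# constraint, and propagation in dependency order assigns every variable.

def knapsack_as_csp(weights, profits, capacity):
    n = len(weights)
    # V[(i, c)]: best profit using the first i items with remaining capacity c.
    V = {(0, c): 0 for c in range(capacity + 1)}
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            # Constraint V[i,c] = max(V[i-1,c], V[i-1,c-w_i] + p_i); its supports
            # are already fixed, so propagation assigns the variable directly.
            keep = V[(i - 1, c)]
            take = V[(i - 1, c - weights[i - 1])] + profits[i - 1] if weights[i - 1] <= c else -1
            V[(i, c)] = max(keep, take)
    return V[(n, capacity)]

print(knapsack_as_csp([2, 3, 4], [3, 4, 5], 5))  # -> 7
```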
Abstract. Software vulnerabilities in organizational computer networks can be leveraged by an attacker to gain access to sensitive information. As fixing all vulnerabilities requires much effort, it is critical to rank the possible fixes by their importance. Centrality measures over logical attack graphs, or over the network connectivity graph, often provide a scalable method for finding the most critical vulnerabilities.
In this paper we suggest an analysis of the planning graph, originating in classical planning, as an alternative to the logical attack graph for improving the ranking produced by centrality measures. The planning graph also allows us to enumerate the set of possible attack plans, and hence directly count the number of attacks that use a given vulnerability. We evaluate a set of centrality-based ranking measures over the logical attack graph and the planning graph, showing that metrics computed over the planning graph reduce the set of shortest attack plans more rapidly.
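As a generic illustration of centrality-based ranking over a graph of facts and exploits (whether a logical attack graph or a planning graph), the snippet below applies networkx's betweenness centrality to a made-up toy graph; the graph and the choice of centrality are assumptions for illustration only.

```python
import networkx as nx

# Toy directed graph of facts and exploit steps, purely for illustration.
G = nx.DiGraph()
G.add_edges_from([
    ("internet", "exploit_web"), ("exploit_web", "web_server"),
    ("web_server", "exploit_db"), ("exploit_db", "database"),
    ("internet", "phishing"), ("phishing", "workstation"),
    ("workstation", "exploit_db"),
])

centrality = nx.betweenness_centrality(G)
ranking = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
print(ranking[:3])   # nodes ranked most critical under this centrality measure
```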
Abstract. Planning under uncertainty assumes a model of the world that specifies the probabilistic effects of an agent’s actions in terms of changes of the state. Given such a model, planning proceeds to determine a policy that defines, for each state, the choice of action that the agent should follow in order to maximize a reward function. In this work, we observe that the world can be changed in more ways than those possible through the execution of the agent’s repertoire of actions. These additional configurations of the world may allow new policies that let the agent accumulate even more reward than is possible by following the optimal policy of the original world. We introduce and formalize the problem of planning while considering these additional possible worlds. We then present an approach that models feasible changes to the world as modifications to the probability transition function, and show that the problem of computing the configuration of the world that allows the most rewarding optimal policy can be formulated as a constrained optimization problem. Finally, we contribute a gradient-based algorithm for solving this optimization problem. Experimental evaluation shows the effectiveness of our approach in multiple problems of practical interest.
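In our own notation (not necessarily the paper's), the constrained optimization described above can be sketched as searching over modified transition functions for the one whose optimal policy yields the highest value, subject to the modification remaining a valid transition function within a feasible set:

```latex
% Hedged sketch: T is the original transition function, T' a feasible
% modification, and \mathcal{F}(T) the set of feasible world changes.
\begin{align*}
  \max_{T'} \quad & V^{*}_{T'}(s_0) \;=\; \max_{\pi}\;
      \mathbb{E}\Big[\textstyle\sum_{t \ge 0} \gamma^{t} R(s_t, \pi(s_t)) \;\Big|\; s_0, T'\Big] \\
  \text{s.t.} \quad & \sum_{s'} T'(s' \mid s, a) = 1, \qquad T'(s' \mid s, a) \ge 0 \quad \forall s, a, s', \\
                    & T' \in \mathcal{F}(T) \quad \text{(feasible modifications of the original dynamics)}
\end{align*}
```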
Abstract. Replaceability is a form of generalized substitutability whose features make it potentially of great importance for problem simplification. It differs from simple substitutability in that it only requires that a substitutable value exist for every solution containing a given value, without requiring that this value always be the same. This is the most general form of substitutability that allows inferences from local to global versions of this property. Building on earlier work, this study first establishes that algorithms for localized replaceability (consistent neighbourhood replaceability, or CNR, algorithms) based on all-solutions neighbourhood search outperform other replaceability algorithms by several orders of magnitude. It also examines the relative effectiveness of different forms of depth-first CNR algorithms. Secondly, it demonstrates an apparent complexity ridge, which does not occur at the same place in the problem space as the complexity areas for consistency or full search algorithms. Thirdly, it continues the study of methods for inferring replaceability in structured problems in order to improve efficiency. Here, it is shown that some strategies for inferring replaceable values can be extended to disjunctive constraints in scheduling problems.