GCAI 2020: Papers with Abstracts

Papers
Abstract. This paper introduces, philosophically and to a degree formally, the novel concept of learning ex nihilo, intended (obviously) to be analogous to the concept of creation ex nihilo. Learning ex nihilo is an agent’s learning “from nothing”, by the suitable employment of inference schemata for deductive and inductive reasoning. This reasoning must be in machine-verifiable accord with a formal proof/argument theory in a cognitive calculus (i.e., here, roughly, an intensional higher-order multi-operator quantified logic), and this reasoning is applied to percepts received by the agent, in the context of both some prior knowledge, and some prior and current interests. Learning ex nihilo is a challenge to contemporary forms of ML, indeed a severe one, but the challenge is here offered in the spirit of seeking to stimulate attempts, on the part of non-logicist ML researchers and engineers, to collaborate with those in possession of learning-ex-nihilo frameworks, and eventually attempts to integrate directly with such frameworks at the implementation level. Such integration will require, among other things, the symbiotic interoperation of state-of-the-art automated reasoners and high-expressivity planners with statistical/connectionist ML technology.
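As a loose illustration of the learning loop this abstract sketches, the following Python fragment applies a single deductive schema (modus ponens over atomic facts) to percepts plus prior knowledge and keeps only the conclusions matching the agent's interests. The representation and all names here are invented for illustration; a real cognitive calculus is vastly more expressive.

```python
# Hypothetical sketch of a learning-ex-nihilo loop: saturate percepts and
# prior knowledge under one inference schema, then filter by interests.
# Pairs (p, q) stand for conditionals "p implies q"; strings are atoms.
def learn_ex_nihilo(percepts, prior, interests):
    knowledge = set(percepts) | set(prior)
    learned = set()
    changed = True
    while changed:                    # forward-chain to a fixpoint
        changed = False
        for p, q in [k for k in knowledge if isinstance(k, tuple)]:
            if p in knowledge and q not in knowledge:   # modus ponens
                knowledge.add(q)
                learned.add(q)
                changed = True
    return {c for c in learned if c in interests}

percepts = {"wet_ground"}
prior = {("wet_ground", "it_rained"), ("it_rained", "clouds_earlier")}
print(learn_ex_nihilo(percepts, prior, {"clouds_earlier"}))  # {'clouds_earlier'}
```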
Abstract. This paper discusses the tragic accident in which the first pedestrian was killed by an autonomous car: due to several grave errors in its design, it failed to recognize the pedestrian and stop in time to avoid a collision. We start by discussing the accident in some detail, enlightened by the recent publication of a report from the National Transportation Safety Board (NTSB) regarding the accident. We then discuss the shortcomings of current autonomous-car technology, and advocate an approach in which several AI agents generate arguments in support of some action, and an adjudicator AI determines which course of action to take. Input to the agents can come from both symbolic reasoning and connectionist-style inference. Either way, underlying each argument and the adjudication process is a proof/argument in the language of a multi-operator modal calculus, which renders transparent both the mechanisms of the AI and accountability when accidents happen.
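A minimal sketch of the adjudication architecture the abstract advocates, under an assumed toy scoring rule: each agent submits an argument for an action together with the strength of its underlying proof, and the adjudicator selects the action with the strongest combined support. The class, agents, and scores below are illustrative stand-ins, not the paper's multi-operator modal calculus.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    agent: str       # which AI agent produced the argument
    action: str      # proposed course of action
    premises: list   # percepts/formulas the underlying proof rests on
    strength: float  # confidence attached to the proof/argument

def adjudicate(arguments):
    """Pick the action whose supporting arguments are jointly strongest."""
    totals = {}
    for arg in arguments:
        totals[arg.action] = totals.get(arg.action, 0.0) + arg.strength
    return max(totals, key=totals.get)

# A connectionist perception agent weakly argues for continuing, while a
# symbolic safety agent strongly argues for braking; braking wins.
print(adjudicate([
    Argument("vision-net", "continue", ["object unclassified"], 0.35),
    Argument("symbolic-safety", "brake",
             ["object on roadway", "unknown object => assume pedestrian"], 0.90),
]))  # -> brake
```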
Abstract. Proofs are a key feature of modern propositional and first-order theorem provers. Proofs generated by such tools serve as explanations for the unsatisfiability of statements. However, these proofs are not necessarily as concise as possible, which complicates their use as explanations. There is a wide variety of compression techniques for propositional resolution proofs, but far fewer for the first-order resolution proofs generated by automated theorem provers. This paper describes an approach to compressing first-order logic proofs that lifts proof-compression ideas from propositional logic to first-order logic. An empirical evaluation of the approach is included.
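One propositional compression idea that lifts naturally to first order, shown here as a hedged sketch rather than the paper's algorithm, is sharing structurally identical subproofs so that each derived clause is proved only once:

```python
# Toy resolution-proof DAG; clauses are sets of literals (strings).
class Node:
    def __init__(self, clause, premises=()):
        self.clause = frozenset(clause)   # derived or axiom clause
        self.premises = tuple(premises)   # parent proof nodes

def compress(root, seen=None):
    """Return an equivalent proof with duplicate subproofs merged."""
    if seen is None:
        seen = {}
    premises = tuple(compress(p, seen) for p in root.premises)
    key = (root.clause, tuple(id(p) for p in premises))
    if key not in seen:                   # first proof of this clause wins
        seen[key] = Node(root.clause, premises)
    return seen[key]

# Two copies of the same axiom collapse into a single shared node:
a1, a2 = Node({"p", "q"}), Node({"p", "q"})
root = compress(Node({"p"}, (a1, a2)))
print(root.premises[0] is root.premises[1])  # True
```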
Abstract. The Winograd Schema Challenge (WSC), the task of resolving pronouns in certain carefully structured sentences, has received considerable interest in the past few years as an alternative to the Turing Test. In our recent work we demonstrated the plausibility of using commonsense knowledge, automatically acquired from raw text in English Wikipedia, to compute a metric of hardness for a limited number of Winograd Schemas.
In this work we present WinoReg, a new system that computes the hardness of Winograd Schemas by training a Random Forest classifier over a rich set of features identified in relevant WSC works in the literature. Our empirical study shows that this new system is considerably faster and more accurate than the system proposed in our earlier work, making its use as part of other WSC-based systems feasible.
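A hedged sketch of such a pipeline using scikit-learn; the features and toy data below are invented stand-ins for the WSC-derived feature set the abstract refers to:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-schema features, e.g. antecedent/context co-occurrence
# score, sentence length, and a binary connective-type indicator.
X = [[0.82, 14, 1],
     [0.31, 22, 0],
     [0.55, 18, 1],
     [0.12, 25, 0]]
y = [0, 1, 0, 1]   # 0 = easy schema, 1 = hard schema

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[0.40, 20, 0]]))   # predicted hardness of a new schema
```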
Abstract. Remarkable progress has recently been made in face detection, a core task in computer vision. Nevertheless, motion blur still presents substantial challenges. The most recent face-image deblurring methods make oversimplifying assumptions and fail to restore the highly structured face shape/identity information. We therefore propose a data-driven face-image deblurring approach that fosters face detection and identity preservation. The proposed model comprises two sequential data streams: the first is trained, without any supervision, on real unlabeled sharp/blurred data to generate realistic blurred images at inference time. The generated labeled data is then exploited by a second, supervised stream to learn the mapping from the blurred domain to the sharp one. We use the restored data to conduct experiments on the face detection task. The experimental evaluation demonstrates that our results outperform prior methods and supports our system design and training strategy.
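A minimal PyTorch sketch of this two-stream design, with placeholder networks and a placeholder loss that merely stand in for the paper's architecture: a frozen, unsupervisedly trained generator produces realistic blurred versions of sharp faces, and a second stream trains on the generated pairs to invert the blur.

```python
import torch
import torch.nn as nn

# Stand-in networks; the paper's actual architectures are not these.
blur_gen = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))   # stream 1
deblur = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 3, 3, padding=1))     # stream 2
opt = torch.optim.Adam(deblur.parameters(), lr=1e-4)

sharp = torch.rand(8, 3, 64, 64)     # stand-in for real sharp face crops
with torch.no_grad():                # stream 1 only generates labels here
    blurred = blur_gen(sharp)        # synthetic (blurred, sharp) pairs

restored = deblur(blurred)           # stream 2: supervised blur -> sharp
loss = nn.functional.l1_loss(restored, sharp)
opt.zero_grad(); loss.backward(); opt.step()
```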
Abstract. Logical reasoning as performed by human mathematicians involves an intuitive understanding of terms and formulas. This includes properties of formulas themselves as well as relations between multiple formulas. Although vital, this intuition is missing when supplying atomically encoded formulae to (neural) downstream models.
In this paper we construct continuous dense vector representations of first-order logic which preserve syntactic and semantic logical properties. The resulting neural formula embeddings encode six characteristics of logical expressions present in the training set and further generalise to properties they have not explicitly been trained on. To facilitate the training, evaluation, and comparison of embedding models, we extracted and generated data sets based on TPTP’s first-order logic library. Furthermore, we examine the expressiveness of our encodings by conducting toy-task experiments as well as more practical deployment tests.
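As an illustration of the general idea, though not of the paper's model, the following PyTorch fragment embeds a first-order formula by recursively folding child vectors into their parent node; the combiner, dimensionality, and vocabulary are assumptions:

```python
import torch
import torch.nn as nn

dim = 32
symbol_emb = nn.Embedding(100, dim)    # one vector per logical symbol
combine = nn.Linear(2 * dim, dim)      # folds a child into its parent

def embed(tree, vocab):
    """tree = (symbol, [children]); returns a dense formula vector."""
    sym, children = tree
    vec = symbol_emb(torch.tensor(vocab[sym]))
    for child in children:
        vec = torch.tanh(combine(torch.cat([vec, embed(child, vocab)])))
    return vec

# forall X. p(X) -> q(X), written as a nested (symbol, children) tuple
formula = ("forall", [("->", [("p", [("X", [])]), ("q", [("X", [])])])])
vocab = {s: i for i, s in enumerate(["forall", "->", "p", "q", "X"])}
print(embed(formula, vocab).shape)     # torch.Size([32])
```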
Abstract. We extend the epistemic logic S5r for reasoning about knowledge under hypotheses with a distributed knowledge operator. This extension makes it possible to express the distributed knowledge of agents with different background assumptions. The logic is important in computer science since it models the behavior of agents that are already equipped with some knowledge. The extension with distributed knowledge proves especially interesting, since the knowledge of an arbitrary agent whose epistemic capacity corresponds to any system between S4 and S5 can, under some restrictions, be modeled as the distributed knowledge of agents with certain background knowledge. We present an axiomatization of the logic and prove Kripke completeness and decidability results.
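For reference, the standard Kripke clause for the distributed knowledge of a group G evaluates the formula along the intersection of the agents' accessibility relations (the paper's S5r-specific treatment of hypotheses is not reproduced here):

```latex
M, w \models D_G \varphi
  \iff
  \forall v \,\bigl( (w, v) \in \textstyle\bigcap_{i \in G} R_i
  \;\Rightarrow\; M, v \models \varphi \bigr)
```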
Abstract. Domain-oriented knowledge bases (KBs) such as DBpedia and YAGO are largely constructed by applying a set of predefined extraction rules to the semi-structured contents of Wikipedia articles. Although both of these large-scale KBs achieve very high average precision values (above 95% for YAGO3), subtle mistakes in a few of the underlying extraction rules may still introduce a substantial number of systematic extraction mistakes for specific relations. For example, by applying the same regular expressions to extract person names of both Asian and Western nationality, YAGO erroneously swaps most of the family and given names of Asian person entities. For traditional rule-learning approaches based on Inductive Logic Programming (ILP), it is very difficult to detect these systematic extraction mistakes, since they usually occur only in a relatively small subdomain of the relations’ arguments. In this paper, we thus propose a guided form of ILP, coined “GILP”, that iteratively asks for small amounts of user feedback over a given KB to learn a set of data-cleaning rules that (1) best match the feedback and (2) also generalize to a larger portion of facts in the KB. We propose both algorithms and corresponding metrics to automatically assess the quality of the learned rules with respect to the user feedback.
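A hedged sketch of the guided feedback loop, using the abstract's swapped-names example; the fact and rule representations, the oracle, and the scoring below are toy stand-ins for GILP's actual candidate rules and quality metrics:

```python
# Toy KB of (entity, nationality, given_name, family_name) facts, where a
# shared extraction regex has swapped the name fields of Asian persons.
kb = [("Alan_Turing", "British", "Alan", "Turing"),
      ("Akira_Kurosawa", "Japanese", "Kurosawa", "Akira"),
      ("Ang_Lee", "Taiwanese", "Lee", "Ang")]
ASIAN = {"Japanese", "Taiwanese", "Chinese", "Korean"}

def swap(fact):                       # the repair a cleaning rule applies
    e, n, g, f = fact
    return (e, n, f, g)

rules = [("swap-if-asian", lambda f: f[1] in ASIAN, swap),
         ("swap-always", lambda f: True, swap)]   # candidate cleaning rules

def user_says_wrong(fact):            # simulated user-feedback oracle
    return fact[1] in ASIAN

feedback = [(f, user_says_wrong(f)) for f in kb[:2]]   # small sample only

def score(rule):                      # a good rule fires exactly on the
    _, cond, _ = rule                 # facts the user flagged as wrong
    return sum(cond(f) == wrong for f, wrong in feedback)

_, cond, fix = max(rules, key=score)
print([fix(f) if cond(f) else f for f in kb])   # generalized to whole KB
```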
Abstract. Various sub-symbolic approaches for reasoning and learning have been proposed in the literature. Among these approaches, the neural theorem prover (NTP) uses a backward-chaining reasoning mechanism to guide a machine learning architecture to learn vector-embedding representations of predicates and to induce first-order clauses from a given knowledge base. NTP, however, is known not to scale, as the computation trees generated by the backward-chaining process can grow exponentially with the size of the given knowledge base. In this paper we address this limitation by extending the NTP approach with a topic-based method for controlling the induction of first-order clauses. Our proposed approach, called TNTP for Topical NTP, identifies topic-based clusters over a large knowledge base and uses these clusters to control the soft unification of predicates during the learning process, with the effect of reducing the size of the computation tree needed to induce first-order clauses. Our TNTP framework is capable of learning a diverse set of induced rules with improved predictive accuracy, while reducing computational time by several orders of magnitude. We demonstrate this by evaluating our approach on three different datasets (UMLS, Kinship and Nations) and comparing our results with those of the NTP method, chosen here as our baseline.
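An illustrative sketch (not the TNTP implementation) of the topic gate: cluster predicate embeddings, then permit soft unification only within a cluster, which prunes cross-topic branches of the backward-chaining tree. The embeddings and predicate names below are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

predicates = ["treats", "cures", "capital_of", "located_in"]
emb = np.array([[0.9, 0.1], [0.8, 0.2],    # a "medical" topic
                [0.1, 0.9], [0.2, 0.8]])   # a "geography" topic
topic = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)

def soft_unify(i, j):
    """NTP-style RBF similarity, gated by topic membership."""
    if topic[i] != topic[j]:
        return 0.0                          # prune cross-topic branches
    return float(np.exp(-np.linalg.norm(emb[i] - emb[j])))

print(soft_unify(0, 1), soft_unify(0, 2))   # same-topic score vs. pruned 0.0
```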