Dr Chris Fox — Publications

BibTeX file (contains some UTF-8 characters)

Books (4)
[1]
Shalom Lappin and Chris Fox, editors. The Handbook of Contemporary Semantic Theory. Wiley-Blackwell, Oxford and Malden, MA, second edition, 2015. 755 pages.

Description: The second edition of The Handbook of Contemporary Semantic Theory presents a comprehensive introduction to cutting-edge research in contemporary theoretical and computational semantics. It features completely new content relative to the first edition of The Handbook of Contemporary Semantic Theory, with contributions by leading semanticists, who introduce core areas of contemporary semantic research while discussing current research. It is suitable for graduate students taking courses in semantic theory and for advanced researchers as an introduction to current theoretical work.

[2]
Alex Clark, Chris Fox, and Shalom Lappin, editors. Handbook of Computational Linguistics and Natural Language Processing. Wiley-Blackwell, 2010.

Description: This comprehensive reference work provides an overview of the concepts, methodologies, and applications in computational linguistics and natural language processing (NLP). It features contributions by the top researchers in the field, reflecting the work that is driving the discipline forward. It includes an introduction to the major theoretical issues in these fields, as well as the central engineering applications that the work has produced. It presents the major developments in an accessible way, explaining the close connection between scientific understanding of the computational properties of natural language and the creation of effective language technologies. It serves as an invaluable state-of-the-art reference source for computational linguists and software engineers developing NLP applications in industrial research and development labs of software companies.

[3]
Chris Fox and Shalom Lappin. Foundations of Intensional Semantics. Blackwell, 2005.

We present Property Theory with Curry Typing (PTCT), an intensional first-order logic for natural language semantics. PTCT permits fine-grained specifications of meaning. It also supports polymorphic types and separation types. We develop an intensional number theory within PTCT in order to represent proportional generalized quantifiers like “most.” We use the type system and our treatment of generalized quantifiers in natural language to construct a type-theoretic approach to pronominal anaphora that avoids some of the difficulties that undermine previous type-theoretic analyses of this phenomenon.

[4]
Chris Fox. The Ontology of Language: properties, individuals and discourse. CSLI Lecture Notes. The Center for the Study of Language and Information (CSLI), 2000.

This monograph is concerned with exploring various ontological assumptions, and whether they can be eliminated. It examines the basic notions of proposition and property, as adopted by property theory, and then goes on to explore what other ontological assumptions may be necessary for a semantic theory of natural language, covering plurals, mass terms, intensional individuals and discourse representation.

Chapters (11)
[1]
Chris Fox. Philosophy of language, ontology and logic. In Piotr Stalmaszczyk, editor, The Cambridge Handbook of the Philosophy of Language, chapter 5, pages 107–123. Cambridge University Press, December 2021.

This chapter considers the various philosophical and methodological questions that arise in the formal analysis of the semantics of language. Formal semantics aims to provide a systematic account of the meaning of language in a rigorous formal framework. It is typically a rule-based analysis of the relevant data and intuitions. This is a broad and complex problem, given the nuances in the use and meaning of everyday language. In practice, this means that a given analysis will confine itself to some specific aspect of meaning, an appropriate sample of the language, and some constrained context of use.

[2]
C Fox. The semantics of imperatives. In S Lappin and C Fox, editors, The Handbook of Contemporary Semantic Theory, 2nd Edition, pages 314–342. Wiley, Oxford and Malden, MA, 2015.

Some issues in the analysis of imperatives and a sketch of a proof-theoretic analysis.

[3]
Chris Fox. The meaning of formal semantics. In Piotr Stalmaszczyk, editor, Semantics and Beyond. Philosophical and Linguistic Investigations, volume 57 of Philosophische Analyse / Philosophical Analysis, pages 85–108. De Gruyter, Berlin, July 2014.

What is it that semanticists think they are doing when using formalisation? What kind of endeavour is the formal semantics of natural language: scientific; linguistic; philosophical; logical; mathematical? If formal semantics is a scientific endeavour, then there ought to be empirical criteria for determining whether such a theory is correct, or an improvement on an alternative account. The question then arises as to the nature of the evidence that is being accounted for. It could be argued that the empirical questions are little different in kind to other aspects of linguistic analysis, involving questions of performance and competence (Chomsky 1965; Saussure 1916). But there are aspects of formal accounts of meaning that appear to sit outside this scientific realm. One key issue concerns the precise nature of the formalisation that is adopted; what criteria are to be used to decide between accounts that are founded on different formal systems, with different ontological assumptions? Indeed, is it necessary to judge semantic frameworks on such grounds? In other words, are two theoretical accounts to be treated as equivalent for all relevant purposes if they account for exactly the same linguistic data? Broadly speaking, there are two related perspectives on the analysis of propositional statements, one “truth conditional” — reducing sentence meaning to the conditions under which the sentence is judged to be true (e.g. Montague 1973) — the other “proof theoretic” — reducing sentence meanings to patterns of entailments that are supported (e.g. Ranta 1994, Fox & Lappin 2005, Fox 2000). Variations of these perspectives might be required in the case of non-assertoric utterances. We may wonder what criteria might be used to decide between these approaches. This brings us back to the nature of the data itself. If the data is (merely) about which arguments, or truth conditions, subjects agree with, and which they disagree with, then the terms in which the theory is expressed may be irrelevant. But it may also be legitimate to be concerned with either (a) the intuitions that people have when reasoning with language, or (b) some technical or philosophical issues relating to the chosen formalism. The truth-conditional vs proof-theoretic dichotomy might broadly be characterised as model-theoretic vs axiomatic, where the model-theoretic tend to be built around a pre-existing formal theory, and the axiomatic involves formulating rules of behaviour “from scratch”. In some sense, fitting an analysis of a new problem into an existing framework could be described as providing some kind of “explanatory” power, assuming that the existing framework has some salient motivation that is independent of the specific details of the phenomena in question. In contrast, building a new theory that captures the behaviour might then be characterised as “descriptive”, as — superficially at least — it does not show how an existing theory “already” accounts for the data in some sense. Here we observe instead that the argument can be run in the other direction: that a reductive model-theoretic account merely “describes” how some aspects of a problem can be reduced to some formalisation, but may fail to capture a subject’s understanding or intuitions about meaning. It is surely appropriate for the formal theory itself to be at least sympathetic to the ontological concerns and intuitions of its subjects — if not inform them (Dummett 1991).
The alternative amounts to little more than carving out otherwise arbitrary aspects of a system that mimics the required behaviour, without a coherent explanation of why some aspects of a formal theory characterise, or capture, the intended meaning, but not others (cf. Benacerraf 1965). That seems an impoverished approach, weakening any claim to “explain”. Any constraint this imposes on what it is to be an explanatory account of meaning then faces the same problem as naive notions of compositionality (Zadrozny 1994) — that is, what appears to be a meaningful restriction is, in reality, a mirage.

[4]
Chris Fox. Curry-typed semantics in typed predicate logic. In Vit Puncochar, editor, Logica Yearbook 2013. College Publications, June 2014.

Various questions arise in semantic analysis concerning the nature of types. These questions include whether we need types in a semantic theory, and if so, whether some version of simple type theory (STT, Church 1940) is adequate or whether a richer, more flexible theory is required to capture our semantic intuitions. Propositions and propositional attitudes can be represented in an essentially untyped first-order language, provided a sufficiently rich language of terms is adopted. In the absence of rigid typing, care needs to be taken to avoid the paradoxes, for example by constraining what kinds of expressions are to be interpreted as propositions (Turner 1992). But the notion of type is ontologically appealing. In some respects, STT seems overly restrictive for natural language semantics. For this reason it is appropriate to consider a system of types that is more flexible than STT, such as a Curry-style typing (Curry & Feys 1958). Care then has to be taken to avoid the logical paradoxes. Here we show how such an account, based on Property Theory with Curry Typing (PTCT, Fox & Lappin 2005), can be formalised within Typed Predicate Logic (TPL, Turner 2009). This presentation provides a clear distinction between the classes of types that are being used to (i) avoid paradoxes and (ii) allow predicative polymorphic types. TPL itself provides a means of expressing PTCT in a uniform language.

[5]
Chris Fox. Axiomatising questions. In Vit Puncochar and Petr Svarny, editors, Logica Yearbook 2012, pages 23–34. College Publications, May/June 2013.

Accounts of the formal semantics of natural language often adopt a pre-existing framework. Such formalisations rely upon informal narrative to explain the intended interpretation of an expression — an expression that may have different interpretations in different circumstances, and may support patterns of behaviour that exceed what is intended. This ought to make us question the sense in which such formalisations capture our intuitions about semantic behaviour. In the case of theories of questions and answers, a question might be interpreted as a set (of possible propositional answers), or as a function (that yields a proposition given a term that is intended to be interpreted as a phrasal answer), but the formal theory itself provides no means of distinguishing such sets and functions from other cases where they are not intended to represent questions, or their answers. Here we sketch an alternative approach to formalising a theory of questions and answers that aims to be sensitive to such ontological considerations.

[6]
Chris Fox and Raymond Turner. In defense of axiomatic semantics. In Piotr Stalmaszczyk, editor, Philosophical and Formal Approaches to Linguistic Analysis, pages 145–160. Ontos Verlag, 2012. Based on the paper “A Semantic Method” presented at PhiLang 2011.

We may wonder about the status of logical accounts of the meaning of language. When does a particular proposal count as a theory? How do we judge a theory to be correct? What criteria can we use to decide whether one theory is “better” than another? Implicitly, many accounts attribute a foundational status to set theory, and set-theoretic characterisations of possible worlds in particular. The goal of a semantic theory is then to find a translation of the phenomena of interest into a set-theoretic model. Such theories may be deemed to have “explanatory” or “predictive” power if a mapping can be found into expressions of set theory that have the appropriate behaviour by virtue of the rules of set theory (for example Montague 1973; Montague 1974). This can be contrasted with an approach in which we can help ourselves to “new” primitives and ontological categories, and devise logical rules and axioms that capture the appropriate inferential behaviour (as in Turner 1992). In general, this alternative approach can be criticised as being mere “descriptivism”, lacking predictive or explanatory power. Here we will seek to defend the axiomatic approach. Any formal account must assume some normative interpretation, but there is a sense in which such theories can provide a more honest characterisation (cf. Dummett 1991). In contrast, the set-theoretic approach tends to conflate distinct ontological notions. Mapping a pattern of semantic behaviour into some pre-existing set-theoretic behaviour may lead to certain aspects of that behaviour being overlooked, or ignored (Chierchia & Turner 1988; Bealer 1982). Arguments about the explanatory and predictive power of set-theoretic interpretations can also be questioned (see Benacerraf 1965, for example). We aim to provide alternative notions for evaluating the quality of a formalisation, and the role of formal theory. Ultimately, claims about the methodological and conceptual inadequacies of axiomatic accounts compared to set-theoretic reductions must rely on criteria and assumptions that lie outside the domain of formal semantics as such.

[7]
Mahmoud El-Haj, Udo Kruschwitz, and Chris Fox. Experimenting with automatic text summarization for Arabic. In Zygmunt Vetulani, editor, Human Language Technology, number LNAI 6562 in Lecture Notes in Artificial Intelligence. Springer, 2011. Fourth Language and Technology Conference, LTC 2009, PoznaƄ, Poland, November 2009. Revised Selected Papers. (doi:10.1007/978-3-642-20095-3_45)

At the time of writing, summarisation systems for Arabic are not as sophisticated and as reliable as those developed for languages like English. In this paper we discuss two summarisation systems for Arabic and report on a large user study performed on these systems. The first system, the Arabic Query-Based Text Summarisation System (AQBTSS), uses standard retrieval methods to map a query against a document collection and to create a summary. The second system, the Arabic Concept-Based Text Summarisation System (ACBTSS), creates a query-independent document summary. Five groups of users of different ages and educational levels participated in evaluating our systems.

[8]
Chris Fox. Computational semantics. In Alex Clark, Chris Fox, and Shalom Lappin, editors, Handbook of Computational Linguistics and Natural Language Processing. Wiley-Blackwell, 2010.

A brief introduction to Computational Semantics.

[9]
Chris Fox and Shalom Lappin. Expressive completeness and computational efficiency for underspecified representations. In Lars Borin and Staffan Larsson, editors, Festschrift for Robin Cooper. 2007. Celebrating the occasion of Robin Cooper’s 60th birthday.

Cooper (1983) pioneered underspecified scope representation in formal and computational semantics through his introduction of quantifier storage into Montague semantics as an alternative to the syntactic operation of quantifying-in. In this paper we address an important issue in the development of an adequate formal theory of underspecified semantics. The tension between expressive power and computational tractability poses an acute problem for any such theory. Ebert (2005) shows that any reasonable current treatment of underspecified semantic representation either suffers from expressive incompleteness or produces a combinatorial explosion that is equivalent to generating the full set of possible scope readings in the course of disambiguation. In previous work we have presented an account of underspecified scope representations within Property Theory with Curry Typing (PTCT), an intensional first-order theory for natural language semantics. Here we show how filters applied to the underspecified-scope terms of PTCT permit both expressive completeness and the reduction of computational complexity in a significant class of non-worst case scenarios.

[10]
Chris Fox and Shalom Lappin. Polymorphic quantifiers and underspecification in natural language. In S. Artemov, H. Barringer, A. S. d’Avila Garcez, L. C. Lamb, and J. Woods, editors, We Will Show Them: Essays in Honour of Dov Gabbay. College Publications, 2005.

It is reasonably well-understood that natural language displays polymorphic behaviour in both its syntax and semantics, where various constructions can operate on a range of syntactic categories, or semantic types. In mathematics, logic and computer science it is appreciated that there are various ways in which such type-general behaviours can be formulated. It is also known that natural languages are highly ambiguous with respect to scoping artifacts, as evident with quantifiers, negation and certain modifier expressions. To deal with such issues, formal frameworks have been explored in which the polymorphic nature of natural language can be expressed, and theories of underspecified semantics have been proposed which seek to separate the process of pure compositional interpretation from the assignment of scope. To date, however, there has been no work on bringing these two aspects together; there is no semantic treatment of scope ambiguity and underspecification which explicitly takes into account the polymorphic nature of natural language quantifiers. In this paper, we extend an existing treatment of underspecification and scope ambiguity in Property Theory with Curry Typing (PTCT) to deal with arbitrary types of quantification by adopting a form of polymorphism. In this theory of underspecification, all of the expressions in the theory are terms of the logic; there is no “meta-semantic” machinery. For this reason all aspects of the theory must be able to deal with polymorphism appropriately.

[11]
Chris Fox. Plurals and mass terms in Property Theory. In F. Hamm and E. Hinrichs, editors, Plurality and Quantification, Studies in Linguistics and Philosophy (SLAP), pages 113–175. Kluwer Academic Publishers, Dordrecht, 1998.

This chapter is concerned with representing the semantics of natural language plurals and mass terms in property theory: a weak first-order theory of Truth, Propositions and Properties with fine-grained intensionality (Turner 1990, Turner 1992, Aczel 1980). The theory allows apparently coreferring items to corefer without inconsistency. This is achieved by using property modifiers which keep track of the property used to refer to a term, much like Landman’s roles (Landman 1989). We can thus predicate apparently contradictory properties of “the judge” and “the cleaner,” for example, even if they turn out to be the same individual. The same device can also be used to control distribution into mereological terms: when we say “the dirty water is liquid,” we can infer that those parts that are dirty water are liquid without inferring that the dirt is liquid. The theory shows how we can formalise some aspects of natural language semantics without being forced to make certain ontological commitments. This is achieved in part by adopting an axiomatic methodology. Axioms are proposed that are just strong enough to support intuitively acceptable inferences, whilst being weak enough for some ontological choices to be avoided (such as whether or not the extensions of mass terms should be homogeneous or atomic). The axioms are deliberately incomplete, just as in basic PT, where incomplete axioms are used to avoid the logical paradoxes. The axioms presented are deliberately too weak to say much about ‘non-denoting’ definite descriptors. For example, we cannot erroneously prove that they are all equal. Neither can we prove that predication of such definites results in a proposition. This means that we cannot question the truth of sentences such as “the present king of France is bald.”

Articles (21)
[1]
Ayman Alhelbawy, Udo Kruschwitz, Mark Lattimer, Massimo Poesio, and Chris Fox. An NLP-powered Human Rights monitoring platform. Expert Systems with Applications, 153, September 2020. Available online 16 March 2020. (doi:10.1016/j.eswa.2020.113365)

Effective information management has long been a problem in organisations that are not of a scale that they can afford their own department dedicated to this task. Growing information overload has made this problem even more pronounced. On the other hand we have recently witnessed the emergence of intelligent tools, packages and resources that made it possible to rapidly transfer knowledge from the academic community to industry, government and other potential beneficiaries. Here we demonstrate how adopting state-of-the-art natural language processing (NLP) and crowdsourcing methods has resulted in measurable benefits for a human rights organisation by transforming their information and knowledge management using a novel approach that supports human rights monitoring in conflict zones. More specifically, we report on mining and classifying Arabic Twitter in order to identify potential human rights abuse incidents in a continuous stream of social media data within a specified geographical region. Results show deep learning approaches such as LSTM allow us to push the precision close to 85% for this task with an F1-score of 75%. Apart from the scientific insights we also demonstrate the viability of the framework, which has been deployed as the Ceasefire Iraq portal for more than three years and has already collected thousands of witness reports from within Iraq. This work is a case study of how progress in artificial intelligence has disrupted even the operation of relatively small-scale organisations.

[2]
C Fox and G Feis. ‘Ought implies Can’ and the law. Inquiry, 61:370–393, 2018. (doi:10.1080/0020174X.2017.1371873)

In this paper, we investigate the ‘ought implies can’ (OIC) thesis, focusing on explanations and interpretations of OIC, with a view to clarifying its uses and relevance to legal philosophy. We first review various issues concerning the semantics and pragmatics of OIC; then we consider how OIC may be incorporated in Hartian and Kelsenian theories of the law. Along the way we also propose a taxonomy of OIC-related claims.

[3]
Azhar Alhindi, Udo Kruschwitz, Chris Fox, and Dyaa Al Bakour. Profile-based summarisation for web site navigation. ACM Transactions on Information Systems, 33(1), 27th November 2015. Special Issue on Contextual Search and Recommendation. Editors: Paul N. Bennett, Kevyn Collins-Thompson, Diane Kelly, Ryen W. White, Yi Zhang. (doi:10.1145/2699661)

Information systems that utilise contextual information have the potential of helping a user identify relevant information more quickly and more accurately than systems that work the same for all users and contexts. Contextual information comes in a variety of types, often derived from records of past interactions between a user and the information system. It can be individual or group based. We are focusing on the latter, harnessing the search behaviour of cohorts of users, turning it into a domain model that can then be used to assist other users of the same cohort. More specifically, we aim to explore how such a domain model is best utilised for profile-biased summarisation of documents in a navigation scenario in which such summaries can be displayed as hover text as a user moves the mouse over a link. The main motivation is to help a user find relevant documents more quickly. Given the fact that the Web in general has been studied extensively already, we focus our attention on Web sites and similar document collections. Such collections can be notoriously difficult to search or explore. The process of acquiring the domain model is not a research interest here; we simply adopt a biologically inspired method that resembles the idea of ant colony optimisation. This has been shown to work well in a variety of application areas. The model can be built in a continuous learning cycle that exploits search patterns as recorded in typical query log files. Our research explores different summarisation techniques, some of which use the domain model and some that do not. We perform task-based evaluations of these different techniques—thus of the impact of the domain model and profile-biased summarisation—in the context of Web site navigation.

[4]
Mahmoud El-Haj, Udo Kruschwitz, and Chris Fox. Creating language resources for under-resourced languages: methodologies and experiments on Arabic. Language Resources and Evaluation Journal, 49(3):549–580, September 2015. First online: 9 August 2014. Printed September 2015. (doi:10.1007/s10579-014-9274-3)

Language resources are important for those working on computational methods to analyse and study languages. These resources are needed to help advance research in fields such as natural language processing, machine learning, information retrieval and text analysis in general. We describe the creation of useful resources for languages that currently lack them, taking resources for Arabic summarisation as a case study. We illustrate three different paradigms for creating language resources, namely: (1) using crowdsourcing to produce a small resource rapidly and relatively cheaply; (2) translating an existing gold-standard dataset, which is relatively easy but potentially of lower quality; and (3) using manual effort with appropriately skilled human participants to create a resource that is more expensive but of high quality. The last of these was used as a test collection for TAC-2011. An evaluation of the resources is also presented.

[5]
Chris Fox and Shalom Lappin. Type-theoretic logic with an operational account of intensionality. Synthese, 192(3):563–584, March 2015. First online: January 2014.

A reformulation of Curry-Typed Property Theory within Typed Predicate Logic, and some discussion of an operational interpretation of intensional distinctions.

[6]
Chris Fox. Imperatives: a judgemental analysis. Studia Logica, 100(4):879–905, 2012. (doi:10.1007/s11225-012-9424-9)

This paper proposes a framework for formalising intuitions about the behaviour of imperative commands. It seeks to capture notions of satisfaction and coherence. Rules are proposed to express key aspects of the general logical behaviour of imperative constructions. A key objective is for the framework to allow patterns of behaviour to be described while avoiding making any commitments about how commands, and their satisfaction criteria, are to be interpreted. We consider the status of some conundrums of imperative logic in the context of this proposal.

[7]
Chris Fox. Obligations and permissions. Language and Linguistics Compass, 6(9):593–610, 2012. (doi:10.1002/lnc3.352)

Utterances and statements that are concerned with obligations and permissions are known as “deontic” expressions. They can present something of a challenge when it comes to formalising their meaning and behaviour. The content of these expressions can appear to support entailment relations similar to those of classical propositions, but such behaviour can sometimes lead to counter-intuitive outcomes. Historically, much of the descriptive work in this area has been philosophical in outlook, concentrating on questions of morality and jurisprudence. Some additional contributions have come from computer science, in part due to the need to specify normative behaviour. There are a number of formal proposals that seek to account for obligations and permissions, such as “Standard Deontic Logic”. In the literature, there has also been discussion of various conundrums and dilemmas that need to be resolved, such as “the Good Samaritan”, “the Knower”, “the Gentle Murderer”, “Contrary to Duty Obligations”, “Ross’s Paradox”, “JĂžrgensen’s Dilemma”, “Sartre’s Dilemma”, and “Plato’s Dilemma”. Despite all this work, there still appears to be no definite consensus about how these kinds of expressions should be analysed, or how all the deontic dilemmas should be resolved. It is possible that obligations themselves, as opposed to their satisfaction criteria, do not directly support a conventional logical analysis. It is also possible that a linguistically informed analysis of obligations and permissions may help to resolve some of the deontic dilemmas, and clarify intuitions about how best to formulate a logic of deontic expressions.

[8]
Chris Fox and Shalom Lappin. Expressiveness and complexity in underspecified semantics. Linguistic Analysis, 36:385–417, 2010.

In this paper we address an important issue in the development of an adequate formal theory of underspecified semantics. The tension between expressive power and computational tractability poses an acute problem for any such theory. Generating the full set of resolved scope readings from an underspecified representation produces a combinatorial explosion that undermines the efficiency of these representations. Moreover, Ebert (2005) shows that most current theories of underspecified semantic representation suffer from expressive incompleteness. In previous work we present an account of underspecified scope representations within Property Theory with Curry Typing (PTCT), an intensional first-order theory for natural language semantics. We review this account, and we show that filters applied to the underspecified-scope terms of PTCT permit expressive completeness. While they do not solve the general complexity problem, they do significantly reduce the search space for computing the full set of resolved scope readings in non-worst cases. We explore the role of filters in achieving expressive completeness, and their relationship to the complexity involved in producing full interpretations from underspecified representations. This paper is dedicated to Jim Lambek.

[9]
Sebastian Danicic, Mohammed Daoudi, Chris Fox, Mark Harman, Rob Hierons, John Howroyd, Lahcen Ouarbya, and Martin Ward. ConSUS: A light-weight program conditioner. Journal of Systems and Software, special issue on Software Reverse Engineering, 77(3):241–262, 2005. (doi:10.1016/j.jss.2004.03.034)

Program conditioning consists of identifying and removing a set of statements which cannot be executed when a condition of interest holds at some point in a program. It has been applied to problems in maintenance, testing, re-use and re-engineering. Program conditioning relies upon both symbolic execution and reasoning about symbolic predicates. Automation of the process therefore requires some form of automated theorem proving. However, the use of a full-power ‘heavyweight’ theorem prover would impose unrealistic performance constraints. This paper reports on a lightweight approach to theorem proving using the FermaT simplify decision procedure. This is used as a component of ConSUS, a program conditioning system for the Wide Spectrum Language WSL. The paper describes the symbolic execution algorithm used by ConSUS, which prunes as it conditions. The paper also provides empirical evidence that conditioning produces a significant reduction in program size and, although exponential in the worst case, the conditioning system has low degree polynomial behaviour in many cases, thereby making it scalable to unit level applications of program conditioning.

[10]
Sebastian Danicic, Chris Fox, Mark Harman, John Howroyd, and Michael L. Lawrence. Slicing algorithms are minimal for free liberal program schemas. Computer Journal, 48:737–748, 2005. (doi:10.1093/comjnl/bxh121)

Program slicing is an automated source code extraction technique that has been applied to a number of problems including testing, debugging, maintenance, reverse engineering, program comprehension, reuse and program integration. In all these applications the size of the slice is crucial; the smaller the better. It is known that statement minimal slices are not computable, but the question of dataflow minimal slicing has remained open since Weiser posed it in 1979. This paper proves that static slicing algorithms produce dataflow minimal end slices for programs which can be represented as schemas which are free and liberal.

[11]
Rob Hierons, Mark Harman, and Chris Fox. Branch-coverage preserving transformations for unstructured programs. The Computer Journal, 48(4):421–436, 2005. (doi:10.1093/comjnl/bxh093)

Test data generation by hand is a tedious, expensive and error-prone activity, yet testing is a vital part of the development process. Several techniques have been proposed to automate the generation of test data, but all of these are hindered by the presence of unstructured control flow. This paper addresses the problem using testability transformation. Testability transformation does not preserve the traditional meaning of the program, rather it deals with preserving test-adequate sets of input data. This requires new equivalence relations which, in turn, entail novel proof obligations. The paper illustrates this using the branch coverage adequacy criterion and develops a branch adequacy equivalence relation and a testability transformation for restructuring. It then presents a proof that the transformation preserves branch adequacy.

[12]
Chris Fox and Shalom Lappin. Underspecified interpretations in a Curry-typed representation language. Journal of Logic and Computation, 15(2):131–143, April 2005. (doi:10.1093/logcom/exi006)

In previous work we have developed Property Theory with Curry Typing (PTCT), an intensional first-order logic for natural language semantics. PTCT permits fine-grained specifications of meaning. It also supports polymorphic types and separation types. We develop an intensional number theory within PTCT in order to represent proportional generalized quantifiers like “most”, and we suggest a dynamic type-theoretic approach to anaphora and ellipsis resolution. Here we extend the type system to include product types, and use these to define a permutation function that generates underspecified scope representations within PTCT. We indicate how filters can be added to encode constraints on possible scope readings. Our account offers several important advantages over other current theories of underspecification.

[13]
Chris Fox and Shalom Lappin. An expressive first-order logic with flexible typing for natural language semantics. Logic Journal of the Interest Group in Pure and Applied Logics, 12(2):135–168, 2004. (doi:10.1093/jigpal/12.2.135)

We present Property Theory with Curry Typing (PTCT), an intensional first-order logic for natural language semantics. PTCT permits fine-grained specifications of meaning. It also supports polymorphic types and separation types. We develop an intensional number theory within PTCT in order to represent proportional generalized quantifiers like “most.” We use the type system and our treatment of generalized quantifiers in natural language to construct a type-theoretic approach to pronominal anaphora that avoids some of the difficulties that undermine previous type-theoretic analyses of this phenomenon.

[14]
Chris Fox, Sebastian Danicic, Mark Harman, and Rob Hierons. ConSIT: A fully automated conditioned program slicer. Software — Practice and Experience (SPE), 34(1):15–46, 2004. John Wiley & Sons. Also published online on 26th November 2003. (doi:10.1002/spe.556)

Conditioned slicing is a source code extraction technique. The extraction is performed with respect to a slicing criterion which contains a set of variables and conditions of interest. Conditioned slicing removes the parts of the original program which cannot affect the variables at the point of interest, when the conditions are satisfied. This produces a conditioned slice, which preserves the behaviour of the original with respect to the slicing criterion. Conditioned slicing has applications in source code comprehension, reuse, restructuring and testing. Unfortunately, implementation is not straightforward because the full exploitation of conditions requires the combination of symbolic execution, theorem proving and traditional static slicing. Hitherto, this difficulty has hindered development of fully automated conditioned slicing tools. This paper describes the first fully automated conditioned slicing system, ConSIT, detailing the theory that underlies it, its architecture and the way it combines symbolic execution, theorem proving and slicing technologies. The use of ConSIT is illustrated with respect to the applications of testing and comprehension.

[15]
Costin Badica and Chris Fox. Design and implementation of a business process representation module. Advances in Electrical and Computer Engineering, 2(1):38–45, 2002. A version of this paper was presented at the Sixth IEEE International Conference on Development and Application Systems (DAS 2002), Suceava, Romania, May 2002.

This paper reports on work done in the INSPIRE project on developing the Process Representation Module (PRM). The major aim of INSPIRE is the development of a tool for intelligent, human-orientated business process re-engineering. Our task was to develop the PRM, a core module of the INSPIRE tool. The main responsibility of the PRM is to provide an all-encompassing and consistent representation of business processes. The paper describes the architecture and data-models of the system, together with a discussion of business process modelling and the formalisms used in INSPIRE, and the details of the design and implementation of the PRM.

[16]
C. Badica, M. Brezovan, and C. Fox. Business process modeling in INSPIRE using Petri nets. Transactions on Automatic Control and Computer Science, 47(2):41–46, 2002. A version of this paper was also presented at the Fifth International Conference on Technical Informatics, 18th–19th October 2002, Timisoara, Romania.

This paper introduces a notation for business process modeling and shows how it can be formally interpreted in terms of Petri nets. Petri nets have a quite respectable research community, which is some 35 years old. However, they were only recently proposed for business process modeling. This is probably due to the fact that they are often claimed to be “too complex” for this task. Nevertheless, they are quite well understood and the theory behind them is well developed, so we think they have a good potential for business process modeling, but more work needs to be done. In this paper we show that Petri nets can help in formally understanding the business process modeling notation developed in the INSPIRE project. This understanding can act as a basis for future work on formal analysis of business process models developed with the INSPIRE tool. The INSPIRE project (IST-10387-1999) aims to develop an integrated tool-set to support a systematic and more human-oriented approach to business process re-engineering.

[17]
Rob Hierons, Mark Harman, Chris Fox, Lahcen Ouarbya, and Dave (Mohammed) Daoudi. Conditioned slicing supports partition testing. Journal of Software Testing, Verification and Reliability (STVR), 12(1):23–28, 2002. (doi:10.1002/stvr.232)

This paper describes the use of conditioned slicing to assist partition testing, illustrating this with a case study. The paper shows how a conditioned slicing tool can be used to provide confidence in the uniformity hypothesis for correct programs, to aid fault detection in incorrect programs and to highlight special cases.

[18]
Chris Fox. Book review: ‘Linux: The Complete Reference’ (third edition), by Richard Petersen, Osborne/McGraw-Hill. Software Testing, Verification & Reliability (STVR), 11(1):55–58, 2001.

Review of a once popular reference book for the GNU/Linux operating system.

[19]
Chris Fox. Existence presuppositions and category mistakes. Acta Linguistica Hungarica, 42(3/4):325–339, 1994. Published 1996.

This paper discusses a property-theoretic approach to the existence presuppositions that occur with definite descriptors and some examples of anaphoric reference. Property theory can avoid the logical paradoxes (such as the Liar: “this sentence is false.”) by taking them to involve a category mistake, so they do not express felicitous propositions. It is suggested that this approach might be extended to other cases of infelicitous discourse, such as those due to a false presupposition (as in: “The present queen of France is bald.”) or due to a missing antecedent (as in: “Every man walks in. He whistles.”). These examples may be represented by terms that embody category mistakes, so semantically they do not express propositions. Felicity of discourse then corresponds with the propositionhood of the representation.

[20]
A.N. De Roeck, Chris Fox, B. G. T. Lowden, and B. R. Walls. An approach to paraphrasing logical query languages in English. Journal of Database Technology, 4(4):227–233, 1993. Volume dated 1991–92.

This paper describes an extension to a system for parsing English into a relational calculus (SQL), by way of Property-theoretic semantics, that paraphrases the relational query back into English, to ensure the query has been interpreted as intended.

[21]
A.N. De Roeck, Chris Fox, B. G. T. Lowden, and B. R. Walls. Modal reasoning in relational systems. Journal of Database Technology, 4(4):235–244, 1993. Volume dated 1991–92.

This paper describes an extension to a system for parsing English into a relational calculus (SQL), by way of Property-theoretic semantics, that allows such a system to answer modal questions, such as “Can Sally earn 12,000?” The system answered such queries by performing a trial update to see if any of the constraints on the database would then be broken.

Special Issues (2)
[1]
Journal of Logic and Computation, April 2008. Lambda Calculus, Type Theory and Natural Language II. (doi:10.1093/logcom/exm090)

This special issue was a spin-off of the second workshop on Lambda Calculus, Type Theory and Natural Language. The Workshop on Lambda Calculus, Type Theory, and Natural Language (LCTTNL) was held in London in September 2005.

[2]
Journal of Logic and Computation, April 2005. Lambda Calculus, Type Theory and Natural Language. (doi:10.1093/logcom/exi006)

This special issue was a spin-off of the first workshop on Lambda Calculus, Type Theory and Natural Language, which was held in London in December 2003.

Conference Proceedings (52)
[1]
S Zimmerman, A Thorpe, C Fox, and U Kruschwitz. Investigating the interplay between searchers’ privacy concerns and their search behavior. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval — SIGIR’19. ACM Press, 2019. (doi:10.1145/3331184.3331280)

Privacy concerns are becoming a dominant focus in search applications, thus there is a growing need to understand implications of efforts to address these concerns. Our research investigates a search system with privacy warning labels, an approach inspired by decision making research on food nutrition labels. This approach is designed to alert users to potential privacy threats in their search for information as one possible avenue to address privacy concerns. Our primary goal is to understand the extent to which attitudes towards privacy are linked to behaviors that protect privacy. In the present study, participants were given a set of fact-based decision tasks from the domain of health search. Participants were rotated through variations of search engine results pages (SERPs) including a SERP with a privacy warning light system. Lastly, participants completed a survey to capture attitudes towards privacy, behaviors to protect privacy, and other demographic information. In addition to the comparison of interactive search behaviors of a privacy warning SERP with a control SERP, we compared self-report privacy measures with interactive search behaviors. Participants reported strong concerns around privacy of health information while simultaneously placing high importance on the correctness of this information. Analysis of our interactive experiment and self-report privacy measures indicate that 1) choice of privacy-protective browsers has a significant link to privacy attitudes and privacy-protective behaviors in a SERP and 2) there are no significant links between reported concerns towards privacy and recorded behavior in an information retrieval system with warnings that enable users to protect their privacy.

[2]
S Zimmerman, A Thorpe, C Fox, and U Kruschwitz. Privacy nudging in search: Investigating potential impacts. In Proceedings of the 2019 Conference on Human Information Interaction and Retrieval — CHIIR ’19. ACM Press, 2019. (doi:10.1145/3295750.3298952)

From their impacts to potential threats, privacy and misinformation are a recurring top news story. Social media platforms (e.g. Facebook) and information retrieval (IR) systems (e.g. Google), are now in the public spotlight to address these issues. Our research investigates an approach, known as Nudging, applied to the domain of IR, as a potential means to minimize impacts and threats surrounding both matters. We perform our study in the space of health search for two reasons. First, encounters with misinformation in this space have potentially grave outcomes. Second, there are many potential threats to personal privacy as a result of the data collected during a search task. Adopting methods and a corpus from previous work as the foundation, our study asked users to determine the effectiveness of a treatment for 10 medical conditions. Users performed the tasks on 4 variants of a search engine results page (SERP) and a control, with 3 of the SERPs being Nudges (re-ranking, filtering and a visual cue) intended to reduce impacts to privacy with minimal impact to search result quality. The aim of our work is to determine the Nudge that is least impactful to good decision making while simultaneously increasing privacy protection. We find privacy impacts are significantly reduced for the re-ranking and filtering strategies, with no significant impacts on quality of decision making.

[3]
S Zimmerman, C Fox, and U Kruschwitz. Improving hate speech detection with deep learning ensembles. In LREC 2018 - 11th International Conference on Language Resources and Evaluation, pages 2546–2553, Jan 2019.

Hate speech has become a major issue that is currently a hot topic in the domain of social media. Simultaneously, current proposed methods to address the issue raise concerns about censorship. Broadly speaking, our research focus is the area of human rights, including the development of new methods to identify and better address discrimination while protecting freedom of expression. As neural network approaches are becoming state of the art for text classification problems, an ensemble method is adapted for usage with neural networks and is presented to better classify hate speech. Our method utilizes a publicly available embedding model, which is tested against a hate speech corpus from Twitter. To confirm robustness of our results, we additionally test against a popular sentiment dataset. Given our goal, we are pleased that our method has a nearly 5 point improvement in F-measure when compared to original work on a publicly available hate speech evaluation dataset. We also note difficulties encountered with reproducibility of deep learning methods and comparison of findings from other work. Based on our experience, more details are needed in published work reliant on deep learning methods, with additional evaluation information a consideration too. This information is provided to foster discussion within the research community for future work.
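
As an illustrative aside (not code from the paper): one common way to ensemble text classifiers, as described above, is simply to average their predicted class probabilities. A minimal Python sketch, assuming models that expose a scikit-learn-style predict_proba method:

import numpy as np

def ensemble_predict(models, texts):
    # Average the class-probability distributions of several independently
    # trained classifiers, then pick the most probable class for each text.
    # Assuming each model exposes predict_proba() is an illustrative choice;
    # the paper's actual architecture and combination rule may differ.
    probs = np.mean([m.predict_proba(texts) for m in models], axis=0)
    return probs.argmax(axis=1)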

[4]
M Poesio, A Alhelbawy, C Fox, and U Kruschwitz. Exploiting social media to address fundamental human rights issues. In CEUR Workshop Proceedings, volume 1696, pages 6–7, Jan 2016.

This invited talk provided an overview of some of our work on extracting meaningful knowledge from social media feeds to help address human rights issues, highlighting the potential that the rise of ‘big data’ offers in this respect and looking at both sides of the coin regarding big data and human rights: how big data can help human rights work, but also the potential dangers that can originate from the ability to analyse massive amounts of data very quickly. The primary focus of our work is on applying natural language processing methods to turn large-scale unstructured and partially structured data streams into actionable knowledge.

[5]
Fawaz Alarfaj, Udo Kruschwitz, and Chris Fox. Experiments with query expansion for entity finding. In Alexander Gelbukh, editor, Proceedings of CICLing, Part II, volume 9042 of Lecture Notes in Computer Science, pages 417–426. Springer, 2015. (doi:10.1007/978-3-319-18117-2_31)

Query expansion techniques have proved to have an impact on retrieval performance across many retrieval tasks. This paper reports research on query expansion in the entity finding domain. We used a number of methods for query formulation including thesaurus-based, relevance feedback, and exploiting NLP structure. We incorporated the query expansion component as part of our entity finding pipeline and report the results of the aforementioned models on the CERC collection.

[6]
Richard Sutcliffe, Tim Crawford, Chris Fox, Deane L. Root, and Eduard Hovy. The C@merata task at MediaEval 2015: Natural language queries on classical music scores. In Proceedings of MediaEval, Barcelona, 2015.

This was the second year of the C@merata task [16,1] which relates natural language processing to music information retrieval. Participants each build a system which takes as input a query and a music score and produces as output one or more matching passages in the score. This year, questions were more difficult and scores were more complex. Participants were the same as last year and once again CLAS was the best with a Beat F-Score of 0.620.

[7]
Richard Sutcliffe, Tim Crawford, Chris Fox, Deane L. Root, and Eduard Hovy. Relating natural language text to musical passages. In Proceedings of the 16th International Society for Music Information Retrieval Conference (ISMIR), pages 524–530, Malaga, Spain, October 2015.

There is a vast body of musicological literature containing detailed analyses of musical works. These texts make frequent references to musical passages in scores by means of natural language phrases. Our long-term aim is to investigate whether these phrases can be linked automatically to the musical passages to which they refer. As a first step, we have organised for two years running a shared evaluation in which participants must develop software to identify passages in a MusicXML score based on a short noun phrase in English. In this paper, we present the rationale for this work, discuss the kind of references to musical passages which can occur in actual scholarly texts, describe the first two years of the evaluation and finally appraise the results to establish what progress we have made.

[8]
F Alarfaj, U Kruschwitz, and C Fox. Exploring adaptive window sizes for entity retrieval. In M de Rijke, T Kenter, AP de Vries, CX Zhai, F de Jong, K Radinsky, and K Hofmann, editors, Proceedings of the 36th European Conference on Information Retrieval (ECIR–14), volume 8416 of Lecture Notes in Computer Science, pages 573–578. Springer, 2014. Amsterdam.

With the continuous attention of modern search engines to retrieve entities and not just documents for any given query, we introduce a new method for enhancing the entity-ranking task. An entity-ranking task is concerned with retrieving a ranked list of entities as a response to a specific query. Some successful models used the idea of association discovery in a window of text, rather than in the whole document. However, these studies considered only fixed window sizes. This work proposes a way of generating an adaptive window size for each document by utilising some of the document features. These features include document length, average sentence length, number of entities in the document, and the readability index. Experimental results show a positive effect from taking these document features into consideration when determining the window size.
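
Purely to illustrate the idea of an adaptive (rather than fixed) window, here is a hypothetical Python sketch that scales a window size by the document features mentioned in the abstract; the weights are invented for the example and are not the parameterisation evaluated in the paper:

def adaptive_window_size(doc_length, avg_sentence_length, num_entities,
                         readability_index, base=20, lo=5, hi=250):
    # Placeholder weighting: longer documents and longer sentences widen the
    # window, while many entities narrow it. These factors are illustrative
    # assumptions only, not the scheme from the paper.
    size = base
    size *= 1.0 + doc_length / 1000.0
    size *= avg_sentence_length / 20.0
    size /= 1.0 + num_entities / 50.0
    size *= max(readability_index, 1) / 10.0
    return int(min(max(size, lo), hi))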

[9]
Richard Sutcliffe, Tim Crawford, Chris Fox, Deane L. Root, and Eduard Hovy. The C@merata task at MediaEval 2014: Natural language queries on classical music scores. In Proceedings of MediaEval, Barcelona, 2014.

This paper summarises the C@merata task in which participants built systems to answer short natural language queries about classical music scores in MusicXML. The task thus combined natural language processing with music information retrieval. Five groups from four countries submitted eight runs. The best submission scored Beat Precision 0.713 and Beat Recall 0.904.

[10]
Azhar Hasan Alhindi, Udo Kruschwitz, and Chris Fox. A pilot study on using profile-based summarisation for interactive search assistance. In P Serdyukov, P Braslavski, SO Kuznetsov, J Kamps, S RĂŒger, E Agichtein, I Segalovich, and E Yilmaz, editors, Proceedings of the 35th European Conference on Information Retrieval (ECIR–13), volume 7814 of Lecture Notes in Computer Science, pages 672–675, 2013.

Text summarisation is the process of distilling the most important information from a source to produce an abridged version for a particular user or task. This poster investigates the use of profile-based summarisation to provide contextualisation and interactive support for enterprise searches. We employ log analysis to acquire continuously updated profiles to provide profile-based summarisations of search results. These profiles could be capturing an individual’s interests or (as discussed here) those of a group of users. Here we report on a first pilot study.

[11]
F Alarfaj, U Kruschwitz, and C Fox. An adaptive window-size approach for expert-finding. In CEUR Workshop Proceedings, volume 986, pages 76–79, Jan 2013.

The goal of expert-finding is to retrieve a ranked list of people as a response to a user query. Some models that proved to be very successful used the idea of association discovery in a window of text rather than the whole document. So far, all these studies only considered fixed window sizes. We propose an adaptive window-size approach for expert-finding. For this work we use some of the document attributes, such as document length, average sentence length, and number of candidates, to adjust the window size for the document. The experimental results indicate that taking document features into consideration when determining the window size does have an effect on the retrieval outcome. The results show an improvement over a range of baseline approaches.

[12]
A Alhindi, U Kruschwitz, and C Fox. Site search using profile-based document summarisation. In CEUR Workshop Proceedings, volume 986, pages 62–63, Jan 2013.

Text summarisation is the process of distilling the most important information from a source to produce an abridged version for a particular user or task. This demo presents the use of profile-based summarisation to provide contextualisation and interactive support for site search and enterprise search. We employ log analysis to acquire continuously updated profiles to provide profile-based summarisations of search results. These profiles could be capturing an individual’s interests or those of a group of users. Here we look at acquiring profiles for groups of users.

[13]
Mahmoud El-Haj, Udo Kruschwitz, and Chris Fox. Exploring clustering for multi-document Arabic summarisation. In The 7th Asian Information Retrieval Societies Conference (AIRS 2011), volume 7097 of Lecture Notes in Computer Science, pages 550–561, Berlin/Heidelberg, 2011. Springer. (doi:10.1007/978-3-642-25631-8_50)

In this paper we explore clustering for multi-document Arabic summarisation. For our evaluation we use an Arabic version of the DUC-2002 dataset that we previously generated using Google Translate. We explore how clustering (at the sentence level) can be applied to multi-document summarisation as well as for redundancy elimination within this process. We use different parameter settings including the cluster size and the selection model applied in the extractive summarisation process. The automatically generated summaries are evaluated using the ROUGE metric, as well as precision and recall. The results we achieve are compared with the top five systems in the DUC-2002 multi-document summarisation task.
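
For readers unfamiliar with ROUGE, the kind of evaluation mentioned above can be reproduced in miniature with the open-source rouge-score package (a generic illustration, not the exact evaluation pipeline used in the paper):

# pip install rouge-score
from rouge_score import rouge_scorer

# Toy strings standing in for a system summary and a reference summary.
reference = "the committee approved the revised budget on tuesday"
candidate = "the revised budget was approved by the committee"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for name, score in scorer.score(reference, candidate).items():
    print(f"{name}: P={score.precision:.3f} R={score.recall:.3f} F1={score.fmeasure:.3f}")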

[14]
Mahmoud El-Haj, Udo Kruschwitz, and Chris Fox. Multi-document Arabic text summarisation. In Proceedings of the Third Computer Science and Electronic Engineering Conference. IEEE Xplore, 2011.

In this paper we present our generic extractive Arabic and English multi-document summarisers. We also describe the use of machine translation for evaluating the generated Arabic multi-document summaries using English extractive gold standards. In this work we first address the lack of Arabic multi-document corpora for summarisation and the absence of automatic and manual Arabic gold-standard summaries. These are required to evaluate any automatic Arabic summarisers. Second, we demonstrate the use of Google Translate in creating an Arabic version of the DUC-2002 dataset. The parallel Arabic/English dataset is summarised using the Arabic and English summarisation systems. The automatically generated summaries are evaluated using the ROUGE metric, as well as precision and recall. The results we achieve are compared with the top five systems in the DUC-2002 multi-document summarisation task.
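
Both of the summarisation papers above report ROUGE scores alongside precision and recall. For reference, ROUGE-N recall is conventionally defined as the proportion of reference n-grams that also occur in the system summary (this is the standard definition, not a formula quoted from the papers):

\[
\mathrm{ROUGE}\text{-}N \;=\;
\frac{\sum_{S \in \mathit{References}} \sum_{g_n \in S} \mathrm{Count}_{\mathrm{match}}(g_n)}
     {\sum_{S \in \mathit{References}} \sum_{g_n \in S} \mathrm{Count}(g_n)}
\]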

[15]
Mahmoud El-Haj, Udo Kruschwitz, and Chris Fox. University of Essex at the TAC 2011 Multilingual Summarisation Pilot. In Proceedings of the Text Analysis Conference (TAC) 2011, MultiLing Summarisation Pilot, Maryland, USA, 2011.

We present the results of our Arabic and English runs at the TAC 2011 Multilingual summarisation (MultiLing) task. We participated with centroid-based clustering for multi-document summarisation. The automatically generated Arabic and English summaries were evaluated by human participants and by two automatic evaluation metrics, ROUGE and AutoSummENG. The results are compared with the other systems that participated in the same track for both Arabic and English. Our Arabic summariser performed particularly well in the human evaluation.

[16]
Mahmoud El-Haj, Udo Kruschwitz, and Chris Fox. Using Mechanical Turk to create a corpus of Arabic summaries. In Proceedings of the International Conference on Language Resources and Evaluation (LREC), Valletta, Malta, 2010. European Language Resources Association. Semitic languages workshop.

This paper describes the creation of a human-generated corpus of extractive Arabic summaries of a selection of Wikipedia and Arabic newspaper articles using Mechanical Turk—an online workforce. The purpose of this exercise was two-fold. First, it addresses a shortage of relevant data for Arabic natural language processing. Second, it demonstrates the application of Mechanical Turk to the problem of creating natural language resources. The paper also reports on a number of evaluations we have performed to compare the collected summaries against results obtained from a variety of automatic summarisation systems.

[17]
Chris Fox. The good Samaritan and the hygienic cook. In Piotr Stalmaszczyk, editor, Philosophy of Language and Linguistics, volume I: The Formal Turn of Linguistics and Philosophy. Ontos Verlag, 2010. Paper based on a talk at the conference on the Philosophy of Language and Linguistics, Łódź, Poland.

When developing formal theories of the meaning of language, it is appropriate to consider how apparent paradoxes and conundrums of language are best resolved. But if we base our analysis on a small sample of data then we may fail to take into account the influence of other aspects of meaning on our intuitions. Here we consider the so-called Good Samaritan Paradox (Prior, 1958), where we wish to avoid any implication that there is an obligation to rob someone from “You must help a robbed man”. We argue that before settling on a formal analysis of such sentences, we should consider examples of the same form, but with intuitively different entailments—such as “You must use a clean knife”—and also actively seek other examples that exhibit similar contrasts in meaning, even if they do not exemplify the phenomenon that is under investigation. This can refine our intuitions and help us to attribute aspects of interpretation to the various facets of meaning.

[18]
Mahmoud El-Haj, Udo Kruschwitz, and Chris Fox. Experimenting with automatic text summarization for Arabic. In Zygmunt Vetulani, editor, Proceedings of the Fourth Language and Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pages 365–369, Poznań, Poland, 6th–8th November 2009.

The volume of information available on the Web is increasing rapidly. The need for systems that can automatically summarise documents is becoming ever more desirable. For this reason, text summarisation has quickly grown into a major research area as illustrated by the DUC and TAC conference series. Summarisation systems for Arabic are however still not as sophisticated and as reliable as those developed for languages like English. In this paper we discuss two summarisation systems for Arabic and report on a large user study performed on these systems. The first system, the Arabic Query-Based Text Summarisation System (AQBTSS), uses standard retrieval methods to map a query against a document collection and to create a summary. The second system, the Arabic Concept-Based Text Summarisation System (ACBTSS), creates a query-independent document summary. Five groups of users of different ages and educational levels participated in evaluating our systems.

[19]
Chris Fox. Obligations, permissions and transgressions: an alternative approach to deontic reasoning. In Proceedings of the Tenth Symposium on Logic and Language, pages 81–88, Balatonszemes, Hungary, 26th–29th August 2009. Theoretical Linguistics Program, ELTE, Budapest.

This paper proposes a logic of transgressions for obligations and permissions. A key objective of this logic is to allow deontic conflicts (Lemmon, 1962) but without appealing to defeasible or paraconsistent reasoning, or multiple levels of obligation. This logic of transgressions can be viewed as conceptually related to those approaches that formulate obligations in terms of “escaping” from a sanction (Prior, 1958; Nowell-Smith and Lemmon, 1960), and its modal variants (Anderson, 1958; Kanger, 1971), but where the notion of a transgression is more fine-grained than a single “sanction”.

[20]
Arthorn Luangsodsai and Chris Fox. Statechart slicing. In Proceedings of the Sixth Joint Conference on Computer Science and Software Engineering (JCSSE2009), volume 1, pages 411–416, Phuket, Thailand, 13th–15th May 2009.

The paper discusses how to reduce a statechart model by slicing. We start with the discussion of control dependencies and data dependencies in statecharts. The and-or dependence graph is introduced to represent control and data dependencies for statecharts. We show how to slice statecharts by using this dependence graph. Our slicing approach helps systems analysts and system designers in understanding system specifications, maintaining software systems, and reusing parts of systems models.

[21]
Zdeněk Češka and Chris Fox. The influence of text pre-processing on plagiarism detection. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2009), Borovets, Bulgaria, 14th–16th September 2009.

This paper explores the influence of text pre-processing techniques on plagiarism detection. We examine stop-word removal, lemmatization, number replacement, synonymy recognition, and word generalization. We also look into the influence of punctuation and word-order within N-grams. All these techniques are evaluated according to their impact on F_1-measure and speed of execution. Our experiments were performed on a Czech corpus of plagiarized documents about politics. At the end of this paper, we propose what we consider to be the best combination of text pre-processing techniques.
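
Since the evaluation above is reported in terms of the F_1-measure, it may help to recall the textbook definition in terms of precision P and recall R (not a formula specific to this paper):

\[
F_1 \;=\; \frac{2 \cdot P \cdot R}{P + R}
\]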

[22]
Chris Fox and Arthorn Luangsodsai. And-or dependence graphs for slicing statecharts. In David W. Binkley, Mark Harman, and Jens Krinke, editors, Beyond Program Slicing, number 05451 in Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 6th–11th November 2006. Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl, Germany.

The construction of an And-Or dependence graph is illustrated, and its use in slicing statecharts is described. The additional structure allows more precise slices to be constructed when further information is available, such as that provided by static analysis and model checking, or by constraints on the global state and external events.
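
As a generic illustration of slicing over a dependence graph (a simplified sketch, not the authors' And-Or construction), the following Python fragment computes a slice as the set of nodes on which the slicing criterion transitively depends:

    from collections import defaultdict

    def slice_nodes(dependences, criterion):
        """Backward slice: the criterion plus all nodes on which it
        (transitively) depends. An edge (source, target) means that the
        target depends on the source."""
        depends_on = defaultdict(set)
        for source, target in dependences:
            depends_on[target].add(source)
        visited, worklist = set(), [criterion]
        while worklist:
            node = worklist.pop()
            if node not in visited:
                visited.add(node)
                worklist.extend(depends_on[node])
        return visited

    # Toy example: transitions t1..t4 with control/data dependences.
    edges = [("t1", "t2"), ("t2", "t4"), ("t3", "t4")]
    print(sorted(slice_nodes(edges, "t4")))  # -> ['t1', 't2', 't3', 't4']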

[23]
Costin Badica, Maria Teodorescu, Cosmin Spahiu, Amelia Badica, and Chris Fox. Integrating Role Activity Diagrams and hybrid IDEF for business process modeling using MDA. In Proceedings of the Seventh International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, Timisoara, Romania, 25–29th September 2005. IEEE Computer Society. (doi:10.1109/synasc.2005.40)

Business process modeling is an important phase during requirements collection. Usually functional, dynamic and role models are needed. We propose to integrate Role Activity Diagrams and Hybrid IDEF for business process modeling within Model Driven Architecture. Our proposal is demonstrated with a sample implementation.

[24]
Chris Fox and Shalom Lappin. Achieving expressive completeness and computational efficiency with underspecified scope representations. In Proceedings of the Fifteenth Amsterdam Colloquium, 19th–21st December 2005.

Ebert (2005) points out that most current theories of underspecified semantic representation either suffer from expressive incompleteness or do not avoid generating the full set of possible scope readings in the course of disambiguation. In previous work we have presented an account of underspecified scope representations within an intensional first-order property theory enriched with Curry Typing for natural language semantics. Here we show how filters applied to the underspecified scope terms of this theory permit both expressive completeness and the reduction of the search space of possible scope interpretations.

[25]
Costin Badica and Chris Fox. Hybrid IDEF0/IDEF3 modelling of business processes: syntax, semantics and expressiveness. In Computer Aided Verification of Information Systems (CaVIS 2004), pages 20–22, Timisoara, Romania, 27th–28th February 2004.

A description of the process dimension of a notation for business process modelling that integrates aspects from IDEF0 and IDEF3 in a novel way is presented. The features of this notation include black box modelling of activities in the style of IDEF0 and glass box refinements of activities using connectors for specifying process branching in the style of IDEF3. The semantics of the notation is given by a mapping to a place/transition net. The notation is shown to be as expressive as a Standard Workflow Model.

[26]
Costin Badica and Chris Fox. Verification of multiple input/multiple output business processes. In Proceedings of the 2004 IEEE International Conference on Information Reuse and Integration (IEEE IRI-2004), Las Vegas, Nevada, USA, 8th–10th November 2004. Sponsored by IEEE Systems, Man and Cybernetics Society. (doi:10.1109/iri.2004.1431428)

In many business process modelling situations using Petri nets, the resulting model does not have a single input place and a single output place. Therefore, the correctness of the model cannot be assessed within the existing frameworks, which are devised for workflow nets — a particular class of Petri nets with a single input place and a single output place. Moreover, the existing approaches for tackling this problem are rather simplistic and they do not work even for simple examples. This paper shows that, by an appropriate reduction of a multiple input/multiple output Petri net, it is possible to use the existing techniques to check the correctness of the original process. The approach is demonstrated with an appropriate example.

[27]
M. Pérez-Ramírez and C. Fox. Agents interpreting imperative sentences. In Proceedings of the Fifth International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2004), Lecture Notes in Computer Science (LNCS), Seoul, South Korea, 15th–21st February 2004.

The aim of this paper is to present a model for the interpretation of imperative sentences in which reasoning agents play the role of speakers and hearers. A requirement is associated with both the person who makes and the person who receives the order, which prevents the hearer from coming to inappropriate conclusions about the actions s/he has been commanded to do. By relating imperatives with the actions they prescribe, the dynamic aspect of imperatives is captured. Further, by using the idea of ‘encapsulation’, it is possible to distinguish what is demanded by an imperative from the inferential consequences of the imperative. These two ingredients provide agents with the tools to avoid inferential problems in interpretation.

[28]
M. Pérez-Ramírez and C. Fox. The role of imperatives in inference: Agents and actions. In Proceedings of the Mexican International Conference on Artificial Intelligence (MICAI’04), Mexico City, Mexico, 26th–30th April 2004.

The aim of this paper is to present a model for the interpretation of imperative sentences in which reasoning agents play the role of speakers and hearers. A requirement is associated with both the person who makes and the person who receives the order, which prevents the hearer from coming to inappropriate conclusions about the actions s/he has been commanded to do. By relating imperatives with the actions they prescribe, the dynamic aspect of imperatives is captured, and by using the idea of encapsulation, it is possible to distinguish what is demanded from what is not. These two ingredients provide agents with the tools to avoid inferential problems in interpretation.

[29]
Chris Fox and Shalom Lappin. Doing natural language semantics in an expressive first-order logic with flexible typing. In G. Jaeger, P. Monachesi, G. Penn, and S. Wintner, editors, Proceedings of the Eighth Conference on Formal Grammar 2003 (FGVienna), pages 89–102, Vienna, Austria, 16th–17th August 2003.

We present Property Theory with Curry Typing (PTCT), an intensional first-order logic for natural language semantics. PTCT permits fine-grained specifications of meaning. It also supports polymorphic types and separation types. (Separation types are also known as sub-types.) We develop an intensional number theory within PTCT in order to represent proportional generalized quantifiers like most. We use the type system and our treatment of generalized quantifiers in natural language to construct a type-theoretic approach to pronominal anaphora that avoids some of the difficulties that undermine previous type-theoretic analyses of this phenomenon.

[30]
M. Pérez-Ramírez and C. Fox. An axiomatisation of imperatives using Hoare logic. In Harry Bunt, Ielka van der Sluis, and Roser Morante, editors, Proceedings of the Fifth International Workshop on Computational Semantics (IWCS-5), pages 303–320, Tilburg, Netherlands, 15th–17th January 2003.

This paper presents an axiomatisation of imperatives using Hoare logic. It accounts for some inferential pragmatic aspects of imperatives. Unlike the proposals of Jorgensen, Ross, and Chellas, imperatives are not assigned truth-values; instead, they are evaluated as a relation between the state demanded and the state or circumstances in which the imperative is uttered.

[31]
M. Pérez-Ramírez and C. Fox. Imperatives as obligatory and permitted actions. In Alexander F. Gelbukh, editor, Proceedings of the Fourth International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2003), volume 2588 of Lecture Notes in Computer Science (LNCS), pages 52–64, Mexico City, Mexico, 16th–22nd February 2003. Springer.

We present a dynamic deontic model for the interpretation of imperative sentences in terms of Obligation (O) and Permission (P). Under the view that imperatives prescribe actions, and unlike the so-called “standard solution” (Huntley, 1984), these operators act over actions rather than over statements. Then, by distinguishing obligatory from non-obligatory actions, we tackle the paradox of Free Choice Permission (FCP).

[32]
Chris Fox and Shalom Lappin. Type-theoretic approach to anaphora and ellipsis. In Proceedings of Recent Advances in Natural Language Processing (RANLP 2003), Borovets, Bulgaria, September 2003.

We present an approach to anaphora and ellipsis resolution in which pronouns and elided structures are interpreted by the dynamic identification in discourse of type constraints on their semantic representations. The content of these conditions is recovered in context from an antecedent expression. The constraints define separation types (sub-types) in Property Theory with Curry Typing (PTCT), an expressive first-order logic with Curry typing that we have proposed as a formal framework for natural language semantics.

[33]
Dave/Mohammed Daoudi, Sebastian Danicic, John Howroyd, Mark Harman, Chris Fox, and Martin Ward. Consus: A scalable approach to conditional slicing. In Proceedings of the 9th IEEE Working Conference on Reverse Engineering (WCRE2002), pages 181–189, Richmond, Virginia, USA, 28th October–1st November 2002. IEEE Comput. Soc. (doi:10.1109/wcre.2002.1173069)

Conditioned slicing can be applied to reverse engineering problems which involve the extraction of executable fragments of code in the context of some criteria of interest. This paper introduces ConSUS, a conditioner for the Wide Spectrum Language, WSL. The symbolic executor of ConSUS prunes the symbolic execution paths, and its predicate reasoning system uses the FermaT simplify transformation in place of a more conventional theorem prover. We show that this combination of pruning and of using simplification as the reasoner leads to a more scalable approach to conditioning.

[34]
Chris Fox, Shalom Lappin, and Carl Pollard. First-order Curry-typed semantics for natural language. In S. Wintner, editor, Proceedings of the Seventh International Workshop on Natural Language Understanding and Logic Programming (NLULP 2002), pages 175–192, Copenhagen, Denmark, 28th July 2002. Also in Datalogiske Skrifter, Volume 92, pages 87–102, 28th July 2002 (Federated Logic Conference 2002 Omnibus).

This paper presents Property Theory with Curry Typing (PTCT), where the language of terms and well-formed formulæ are joined by a language of types. In addition to supporting fine-grained intensionality, the basic theory is essentially first-order, so that implementations using the theory can apply standard first-order theorem proving techniques. The paper sketches a system of tableau rules that implement the theory. Some extensions to the type theory are discussed, including type polymorphism, which provides a useful analysis of conjunctive terms. Such terms can be given a single polymorphic type that expresses the fact that they can conjoin phrases of any one type, yielding an expression of the same type.

[35]
Mark Harman, Chris Fox, Rob Hierons, Lin Hu, Sebastian Danicic, and Joachim Wegener. Vada: A transformation-based system for variable dependence analysis. In Proceedings of the Second IEEE International Workshop on Source Code Analysis and Manipulation (SCAM 2002), pages 55–64, Montreal, Canada, 1st October 2002. IEEE Comput. Soc. (doi:10.1109/scam.2002.1134105)

Variable dependence is an analysis problem in which we seek to determine the set of input variables which can affect the values stored in a chosen set of intermediate program variables. Traditionally the problem is studied as a dataflow analysis problem, and the answers are computed in terms of solutions to data and control flow relations. This paper shows the relationship between the variable dependence analysis problem and slicing and describes a system, VADA, which implements variable dependence analysis for C. In order to cover the full range of C constructs and features, a transformation to a core language is employed. Thus, the full analysis is only required for the core language, which is relatively simple. This reduces the overall effort required. The transformations used need only preserve the variable dependence relation, and therefore need not be meaning preserving in the traditional sense. We show how this relaxed notion of meaning preservation further simplifies the transformation phase of the approach. Finally, we present the results of an empirical study into the performance of the system.

[36]
Mark Harman, Lin Hu, Rob Hierons, Chris Fox, Sebastian Danicic, Andre Baresel, Harmen Sthamer, and Joachim Wegener. Evolutionary testing supported by slicing and transformation. In Proceedings of the 18th IEEE International Conference on Software Maintenance (ICSM02), page 285, Montreal, Canada, 3rd–6th October 2002. IEEE Comput. Soc. Industrial Applications Track. (doi:10.1109/icsm.2002.1167781)

Evolutionary testing is a search based approach to the automated generation of systematic test data, in which the search is guided by the test data adequacy criterion. Two problems for evolutionary testing are the large size of the search space and structural impediments in the implementation of the program which inhibit the formulation of a suitable fitness function to guide the search. In this paper we claim that slicing can be used to narrow the search space and transformation can be applied to the problem of structural impediments. The talk presents examples of how these two techniques have been successfully employed to make evolutionary testing both more efficient and more effective.

[37]
Lahcen Ouarbya, Sebastian Danicic, David/Mohammed Daoudi, Mark Harman, and Chris Fox. A denotational interprocedural program slicer. In Proceedings of the 9th IEEE Working Conference on Reverse Engineering (WCRE2002), pages 109–118, Richmond, Virginia, USA, 28th October–1st November 2002. IEEE Comput. Soc. (doi:10.1109/wcre.2002.1173076)

This paper extends a previously developed intraprocedural denotational program slicer to handle procedures. Using the denotational approach, slices can be defined in terms of the abstract syntax of the object language without the need of a control flow graph or similar intermediate structure. The algorithm presented here is capable of correctly handling the interplay between function and procedure calls, side-effects, and short-circuit expression evaluation. The ability to deal with these features is required in reverse engineering of legacy systems, where code often contains side-effects.

[38]
Chris Fox, Shalom Lappin, and Carl Pollard. A higher-order fine-grained logic for intensional semantics. In G. Alberti, K. Balough, and P. Dekker, editors, Proceedings of the Seventh International Symposium on Logic and Language (LoLa7), pages 37–46, Pécs, Hungary, August 2002.

This paper describes a higher-order logic with fine-grained intensionality (FIL). Unlike traditional Montagovian type theory, intensionality is treated as basic rather than derived through possible worlds. This allows for fine-grained intensionality without impossible worlds. Possible worlds and modalities are defined algebraically. The proof theory for FIL is given as a set of tableau rules, and an algebraic model theory is specified. The proof theory is shown to be sound relative to this model theory. FIL avoids many of the problems created by classical coarse-grained intensional logics that have been used in formal and computational semantics.

[39]
Chris Fox, Shalom Lappin, and Carl Pollard. Intensional first-order logic with types. In G. Alberti, K. Balough, and P. Dekker, editors, Proceedings of the Seventh International Symposium on Logic and Language (LoLa7), pages 47–56, Pécs, Hungary, August 2002.

This paper presents Property Theory with Curry Typing (PTCT), where the language of terms and well-formed formulæ are joined by a language of types. In addition to supporting fine-grained intensionality, the basic theory is essentially first-order, so that implementations using the theory can apply standard first-order theorem proving techniques. Some extensions to the type theory are discussed, including type polymorphism and enriching the system with sufficient number theory to account for quantifiers of number, such as “most.”

[40]
Costin Badica and Chris Fox. Modelling and verification of business processes. In Applied Simulation and Modelling (ASM 2002), Crete, Greece, June 2002.

This paper introduces a notation for business process modelling based on flownomial expressions, and shows how it can be used for static verification of business processes, under the assumption of single-instance executions, by evaluating them over boolean relations. Its main advantage is simplicity, but it is also more restrictive than other approaches because it can only indicate those input patterns that can cause a process to enter an infinite loop, or to suffer resource starvation. Nevertheless, this is useful because it can help isolate problems at an early stage, prior to running any dynamic simulations.

[41]
Sebastian Danicic, Chris Fox, Mark Harman, and Rob Hierons. Backward conditioning: a new program specialisation technique and its application to program comprehension. In IEEE Proceedings of the 9th International Workshop on Program Comprehension (IWPC2001), pages 89–97, Toronto, Canada, 12th–13th May 2001. IEEE Comput. Soc. (doi:10.1109/wpc.2001.921717)

This paper introduces backward conditioning. Like forward conditioning (used in conditioned slicing), backward conditioning consists of specialising a program with respect to a condition inserted into the program. However, whereas forward conditioning deletes statements which are not executed when the initial state satisfies the condition, backward conditioning deletes statements which cannot cause execution to enter a state which satisfies the condition. The relationship between backward and forward conditioning is reminiscent of the relationship between backward and forward slicing. Forward conditioning addresses program comprehension questions of the form ‘what happens if the program starts in a state satisfying condition c?’, whereas backward conditioning addresses questions of the form ‘what parts of the program could potentially lead to the program arriving in a state satisfying condition c?’. The paper illustrates the use of backward conditioning as a program comprehension assistant and presents an algorithm for constructing backward conditioned programs.

[42]
Chris Fox and Shalom Lappin. A framework for the hyperintensional semantics of natural language with two implementations. In P. de Groote, G. Morrill, and C. Retore, editors, Proceedings of the Fourth International Conference on Logical Aspects of Computational Linguistics (LACL2001), Lecture Notes in Computer Science (LNCS), pages 175–192, Le Croisic, France, 2001. Springer, Berlin and New York.

In this paper we present a framework for constructing hyperintensional semantics for natural language. On this approach, the axiom of extensionality is discarded from the axiom base of a logic. Weaker conditions are specified for the connection between equivalence and identity which prevent the reduction of the former relation to the latter. In addition, by axiomatising an intensional number theory we can provide an internal account of proportional cardinality quantifiers, like most. We use a (pre-)lattice defined in terms of a (pre-)order that models the entailment relation. Possible worlds/situations/indices are then prime filters of propositions in the (pre-)lattice. Truth in a world/situation is then reducible to membership of a prime filter. We show how this approach can be implemented within (i) an intensional higher-order type theory, and (ii) first-order property theory.

[43]
Mark Harman, Rob Hierons, Sebastian Danicic, Mike Laurence, John Howroyd, and Chris Fox. Node coarsening calculi for program slicing. In Proceedings of the Eighth IEEE Working Conference on Reverse Engineering (WCRE2001), pages 25–34, Stuttgart, Germany, 2nd–5th October 2001. IEEE Comput. Soc. (doi:10.1109/wcre.2001.957807)

Slicing has been shown to be a useful program abstraction technique, with applications at many points in the software development life-cycle, particularly as a tool to assist software evolution. Unfortunately, slicing algorithms scale up rather poorly, diminishing the applicability of slicing in practice. In applications where many slices are required from a largely unchanging system, incremental approaches to the construction of dependence information can be used, ensuring that slices are constructed speedily. However, for some applications, the only way to compute slices within effective time constraints will be to trade precision for speed. This approach has been successfully applied to a number of other computationally expensive source code analysis techniques, most notably points-to analysis. This paper introduces a theory for trading precision for speed in slicing based upon ‘blobbing together’, or ‘coarsening’, several individual Control Flow Graph nodes. The theory defines the properties which should be possessed by a logical calculus for ‘coarsening’ (coalescing several nodes in a region into a single representative of the region). The theory is illustrated with a case study which presents a calculus for R-coarsening, and a consistent and complete set of inference rules which compromise precision for speed.

[44]
Mark Harman, Rob Hierons, Chris Fox, Sebastian Danicic, and John Howroyd. Pre/post conditioned slicing. In Proceedings of the 17th IEEE International Conference on Software Maintenance (ICSM2001), pages 138–147, Florence, Italy, 6th–10th November 2001. IEEE Comput. Soc. (doi:10.1109/icsm.2001.972724)

This paper shows how analysis of programs in terms of pre- and post-conditions can be improved using a generalisation of conditioned program slicing called pre/post conditioned slicing. Such conditions play an important role in program comprehension, reuse, verification and re-engineering. Fully automated analysis is impossible because of the inherent undecidability of pre- and post-conditions. The method presented here reformulates the problem to circumvent this. The reformulation is constructed so that programs which respect the pre- and post-conditions applied to them have empty slices. For those which do not respect the conditions, the slice contains statements which could potentially break the conditions. This separates the automatable part of the analysis from the human analysis.

[45]
Sebastian Danicic, Chris Fox, Mark Harman, and Rob Hierons. ConSIT: A conditioned program slicer. In IEEE Proceedings of the International Conference on Software Maintenance (ICSM2000), pages 216–226, San Jose, California, USA, 11th–14th October 2000. (doi:10.1109/icsm.2000.883049)

Conditioned slicing is a powerful generalisation of static and dynamic slicing which has applications to many problems in software maintenance and evolution, including re-use, re-engineering and program comprehension. However, there has been relatively little work on the implementation of conditioned slicing. Algorithms for implementing conditioned slicing necessarily involve reasoning about the values of program predicates in certain sets of states derived from the conditioned slicing criterion, making implementation particularly demanding. This paper introduces ConSIT, a conditioned slicing system which is based upon conventional static slicing, symbolic execution and theorem proving. ConSIT is the first fully automated implementation of conditioned slicing.

[46]
Mark Harman, Chris Fox, Rob Hierons, David Binkley, and Sebastian Danicic. Program simplification as a means of approximating undecidable propositions. In IEEE Proceedings of the Seventh International Workshop on Program Comprehension 1999 (IWPC-99), pages 208–217, Pittsburgh, Pennsylvania, USA, 5th–7th May 1999. (doi:10.1109/wpc.1999.777760)

In this paper, an approach is described which mixes testing, slicing, transformation and program verification to investigate speculative hypotheses concerning a program formulated during program comprehension activity. Our philosophy is that such hypotheses (which are typically undecidable) can, in some sense, be ‘answered’ by a partly automated system which returns neither ‘true’ nor ‘false’, but a program (the ‘test program’) which computes the answer. The motivation for this philosophy is the way in which, as we demonstrate, static analysis and manipulation technology can be applied to ensure that the resulting program is significantly simpler than the original program, thereby simplifying the process of investigating the original hypothesis.

[47]
Chris Fox. Plural anaphora in a Property-theoretic discourse representation theory. In The Second International Workshop on Computational Semantics, Tilburg, 1997. (10 pages)

It is possible to use a combination of classical logic and dependent types to represent natural language discourse and singular anaphora (Fox 1994b). In this paper, these ideas are extended to account for some cases of plural anaphora. In the theory described, universal quantification and conditionals give rise to a context in which singular referents within their scope are transformed into plurals. These ideas are implemented in axiomatic Property Theory (Turner 1992) extended with plurals (Fox 1993), giving a treatment of some examples of singular and plural anaphora in a highly intensional, weakly typed, classical, first-order logic.

[48]
Chris Fox. Discourse Representation, Type Theory and Property Theory. In H. Bunt, R. Muskens, and G. Rentier, editors, Proceedings of the International Workshop on Computational Semantics, pages 71–80, Institute for Language Technology and Artificial Intelligence (ITK), Tilburg, 1994.

Since Aristotle, it has been accepted that the appropriate interpretation of sentences is as propositions, and that general terms should be interpreted as properties, distinct from propositions. Recent proposals for natural language semantics have used constructive type theories such as Martin-Löf’s Type Theory MLTT (Martin-Löf 1982, 1984) which treat anaphora and ‘donkey’ sentences using dependent types (Sundholm 1989, Ranta 1991, Davila 1994). However, MLTT conflates the notions of proposition and property. This work shows how, within Property Theory, dependent-type expressions representing natural language discourse can be mapped systematically into first-order expressions with a classical notion of propositionhood, distinct from that of properties.

[49]
Chris Fox. Individuals and their guises: a Property-theoretic analysis. In P. Dekker and M. Stokhof, editors, Proceedings of the Ninth Amsterdam Colloquium, volume II, pages 301–312, 1993.

This paper reappraises Landman’s formal theory of intensional individuals—individuals under roles, or guises (Landman 1989)—within property theory (PT) (Turner 1992). As many of Landman’s axioms exist to overcome the strong typing of his representation, casting his ideas in weakly typed PT produces a simpler theory. However, there is the possibility of an even greater simplification: if roles, or guises, are represented with property modifiers then there is no need for Landman’s intensional individuals. Landman’s argument against the use of property modifiers is re-examined, and shown to be mistaken.

[50]
Chris Fox (with A. N. De Roeck, B. G. T. Lowden, R. Turner, and B. R. Walls). A natural language system based on formal semantics. In Proceedings of the International Conference on Current Issues in Computational Linguistics, pages 221–234, Penang, Malaysia, 1991.

This paper describes a system for parsing English into Property-theoretic semantics, using an attribute-value grammar and a bi-directional chart parser, which then translates this representation into a relational calculus (SQL) query which can be presented to a database (INGRES) for evaluation. The Property Theory used is a highly intensional first-order theory, which avoids some of the problems of higher-order intensional logics.

[51]
Chris Fox and (with R. A. J. Ball and E. K. Brown and A. N. De Roeck and M. Groefsema and N. Obeid and R. Turner). Helpful answers to modal and hypothetical questions. In Proceedings of the European Association for Computational Linguistics (EACL), pages 257–262, 1991.

The paper describes a system in which a chart parser translates a question in English into a propositional representation in a non-monotonic logic. A “context machine” uses this representation to extract salient statements from a knowledge base. A tableau theorem prover then takes these statements and attempts to prove the proposition associated with the original question. If the proof fails, the reason for failure can be used to provide a relevant helpful answer.

[52]
A. N. De Roeck, Chris Fox, B. G. T. Lowden, R. Turner, and B. R. Walls. A formal approach to translating English into SQL. In M. S. Jackson and A. E. Robinson, editors, Aspects of Databases, Proceedings of the Ninth British National Conference on Databases (BNCOD 9), pages 110–125. Butterworth-Heinemann, July 1991.

This paper describes a system for parsing English into Property-theoretic semantics, using an attribute-value grammar and a bi-directional chart parser, which then translates this representation into a relational calculus (SQL) query which can be presented to a database (INGRES) for evaluation. The query was optimised using techniques from resolution theorem proving.

Selected Talks (19)
[1]
Chris Fox. ‘Big data’ and contemporary concerns about consent. Invited presentation, The Law Society, Chancery Lane, London, July 2018.
[2]
Chris Fox. What on earth are we talking about? Invited presentation, Cognitive science seminar, New College of Humanities, March 2018.
[3]
Chris Fox. Possible worlds considered harmful. Invited symposium presentation, Ninth European Conference on Analytic Philosophy, Munich, 2017.
[4]
Chris Fox. Automatic classification of social-media data. Invited contribution, Experts’ Meeting, OHCHR (Office of the United Nations High Commissioner for Human Rights), Geneva, December 2017.
[5]
Chris Fox. Existence and freedom. Invited contribution, In memoriam: Sebastian Danicic, Goldsmiths College, London, September 2016.
[6]
Chris Fox. Against ontological reduction. Invited presentation, Language and Cognition Seminar, Dept. of Philosophy, King’s College London, March 2015.
[7]
Chris Fox. The meaning of formal semantics. Plenary speech, Philosophy of Language and Linguistics, Łódź, Poland, 2013.
[8]
Chris Fox. Axiomatising questions. Presentation, Logica 2012, Hejnice, Czech Republic, June 2012.

Conference presentation. The paper of the same name is based on this talk.

[9]
Chris Fox. Ought ought to imply can. Opening presentation, Ought and Can special philosophy workshop, Essex, April 2012.

Conference slides: some thoughts on the Ought Implies Can puzzle, and a meta-level proposal.

[10]
Chris Fox and Raymond Turner. A semantic method. Presentation, Second Conference on the Philosophy of Language and Linguistics (PhiLang), May 2011. This talk is the basis of the paper “In Defense of Axiomatic Semantics”, published in Philosophical and Formal Approaches to Linguistic Analysis, Ontos Verlag, 2012.

Conference talk on which “In Defense of Axiomatic Semantics” is based.

[11]
Chris Fox. The good Samaritan and the hygienic cook: a cautionary tale about linguistic data. Presentation, Conference on the Philosophy of Language and Linguistics, Łódź, Poland, 14th–16th May 2009.

When developing formal theories of the meaning of language, it is appropriate to consider how apparent paradoxes and conundrums of language are best resolved. But if we base our analysis on a small sample of data then we may fail to take into account the influence of other aspects of meaning on our intuitions. Here we consider the so-called Good Samaritan Paradox (Prior, 1958), where we wish to avoid any implication that there is an obligation to rob someone from “You must help a robbed man”. We argue that before settling on a formal analysis of such sentences, we should consider examples of the same form, but with intuitively different entailments—such as “You must use a clean knife”—and also actively seek other examples that exhibit similar contrasts in meaning, even if they do not exemplify the phenomenon that is under investigation. This can refine our intuitions and help us to attribute aspects of interpretation to the various facets of meaning.

[12]
Chris Fox. Axiomatic imperatives. Invited paper presented at the Linguistics Association of Great Britain (LAGB) workshop “Issues in Dynamic Semantics,” at King’s College London, 29th August 2007.

In the case of indicative sentences, broadly speaking there is some consensus about how to approximate a range of phenomena by appeal to truth conditional semantics and various forms of predicate logic (modulo differences in theoretical framework and philosophical taste). Even when these approximations fall short, they can help provide a background against which the behaviour of more recalcitrant data can be better understood. In the case of imperatives, there have been various proposals for their formal semantics. Unfortunately, these theories are often presented without all the relevant formal details, and may rely on complex and problematic notions, such as actions, cause and effect. This can make it difficult to compare and evaluate such theories, or discern any general consensus about how to address a given phenomenon. The current proposal seeks to capture the formal logical behaviour of core imperatives by way of inference rules over propositional satisfaction criteria. One objective is to find a level of abstraction which avoids troublesome notions such as actions and causality, but with sufficient expressive power to capture key intuitions about the meaning of imperatives. In addition to giving an informative analysis, the hope is that this will provide a baseline of clearly formulated and generally accepted patterns of behaviour that can be used to evaluate other proposals, and help us understand more recalcitrant data.

[13]
Chris Fox. Program slicing and conditioning. Invited talk, Theoretical Computer Science Seminar, University of Kent, February 2005.

An introduction to the notions of program slicing and program conditioning.

[14]
Chris Fox. Generating underspecified interpretations as terms of the representation language. Invited talk, Eighth International Symposium on Logic and Language (LoLa8), Debrecen, Hungary, 26th–28th August 2004. (Joint work with Shalom Lappin).

In previous work we have developed Property Theory with Curry Typing (PTCT), an intensional first-order logic for natural language semantics. PTCT permits fine-grained specifications of meaning. It also supports polymorphic types and separation types. We develop an intensional number theory within PTCT in order to represent proportional generalized quantifiers like most, and we suggest a dynamic type-theoretic approach to anaphora and ellipsis resolution. Here we extend the type system to include product types, and use these to define a permutation function that generates underspecified scope representations within PTCT. We indicate how filters can be added to encode constraints on possible scope readings. Our account offers several important advantages over other current theories of underspecification.

[15]
Chris Fox and Shalom Lappin. Generalized quantifiers with underspecified scope relations in a first-order representation language. Presentation, Strategies of Quantification, York, UK, 15th–17th July 2004.

In this paper we show that by adding Curry typing to a first-order property theory it is possible to represent the full range of generalized quantifiers (GQs) corresponding to natural language determiners. We characterize GQs as property terms that specify cardinality relations between properties (or separation types). We also generate underspecified quantifier scope representations within the representation language, rather than through meta-language devices, as in most current treatments of underspecification (Reyle, 1993; Bos, 1995; Blackburn & Bos, 2003; Copestake, Flickinger, & Sag, 1997).
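
As a familiar illustration of treating a determiner as a cardinality relation between properties (the standard generalized-quantifier characterisation, not a formula quoted from the presentation), “most” can be rendered as:

\[
\mathrm{most}(A, B) \iff |A \cap B| > |A \setminus B|
\]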

[16]
Chris Fox. Natural language semantics in a flexibly typed intensional logic. Invited talk, The ITRI Seminar series, University of Brighton, September 2004. (Joint work with Shalom Lappin).

In this talk I shall present Property Theory with Curry Typing (PTCT), an intensional first-order theory for natural language semantics developed by myself and Shalom Lappin. PTCT permits fine-grained specifications of meaning. It also supports polymorphic types and separation types. We have developed an intensional number theory within PTCT in order to represent proportional generalized quantifiers like "most". We use the type system and our treatment of generalized quantifiers in natural language to construct a type-theoretic approach to pronominal anaphora and ellipsis. We have also developed a theory of underspecification that is expressed within the term language of the theory. The talk will focus on the basics of PTCT itself, and outline the treatment of anaphora and ellipsis. If there is time, a sketch of our treatment of underspecification may also be given.

[17]
Chris Fox. A type-theoretic approach to anaphora and ellipsis resolution. Invited talk, The Human Communications Research Centre Colloquium, Edinburgh, February 2004. (Joint work with Shalom Lappin).

We present an approach to anaphora and ellipsis resolution in which pronouns and elided structures are interpreted by the dynamic identification in discourse of type constraints on their semantic representations. The content of these conditions is recovered in context from an antecedent expression. The constraints define separation types (sub-types) in Property Theory with Curry Typing (PTCT), an expressive first-order logic with Curry typing that we have proposed as a formal framework for natural language semantics.

[18]
Chris Fox. A fine-grained intensional first-order logic with flexible Curry typing. Invited talk, The Fields Institute Workshop of Mathematical Linguistics, Ottawa, June 2003. (Joint work with Shalom Lappin).

A highly intensional first-order logic will be presented which incorporates an expressive type system, including general function spaces, separation types and type polymorphism. Although first-order in power, the logic is sufficiently expressive to capture aspects of natural language semantics that are often characterised as requiring a higher-order analysis. Aspects of the model theory for this logic will also be discussed.

[19]
Chris Fox. Property Theory with Curry typing: An intensional logic for natural language semantics. Invited contribution, Foundations of Computational Linguistics (Workshop at the IEEE Symposium on Logic in Computer Science (LICS)), Ottawa, Ontario, Canada, June 2003. (Joint work with Shalom Lappin).

We present Property Theory with Curry Typing (PTCT), an intensional first-order logic for natural language semantics. PTCT permits fine-grained specifications of meaning. It also supports polymorphic, separation, and dependent types. We develop an intensional number theory within PTCT in order to represent proportional generalized quantifiers like most. We use the type system and our treatment of generalized quantifiers in natural language to construct a type-theoretic approach to pronominal anaphora that avoids some of the difficulties that undermine previous type-theoretic analyses of this phenomenon.

Selected Reports & Whitepapers (13)
[1]
Professor Lorna McGregor, Professor Pete Fussey, Dr Daragh Murray, Dr Chris Fox, Dr Ayman Alhelbawy, Professor Klaus McDonald-Maier, Dr Ahmed Shaheed, and Professor Geoff Gilbert. COV0090 — The Government’s response to COVID–19: human rights implications. Technical report, May 2020. Written Parliamentary witness statement, cited in House of Commons & House of Lords Joint Committee on Human Rights, “Human Rights and the Government’s Response to Covid-19: Digital Contact Tracing”, Third Report of Session 2019–21. (HC 343, HL Paper 59). (PDF)
[2]
HM Bal, S Dubberley, and C Fox. Technology in support of humanitarian work: An overview of opportunities and challenges in project design. Technical report, Human Rights Centre, Essex, 2018.
[3]
V Ng and C Fox. Big data: Definitions and reflections. Technical report, Human Rights Centre, Essex, 2018.
[4]
R Cooper, R Crouch, JV Eijck, C Fox, JV Genabith, J Jaspars, H Kamp, M Pinkal, D Milward, M Poesio, and S Pulman. Building the framework, FraCaS: A framework for computational semantics, FraCaS deliverable D15. Technical report, University of Edinburgh, 1996. 408 pages. Additional contributions from Nicholas Asher, Paul Dekker, Karsten Konrad, Emiel Krahmer, Holger Maier and Peter Ruhrberg.
[5]
R Cooper, R Crouch, JV Eijck, C Fox, JV Genabith, J Jaspars, H Kamp, M Pinkal, D Milward, M Poesio, and S Pulman. Evaluation of previous work, FraCaS: A framework for computational semantics, FraCaS deliverable D13. Technical report, University of Edinburgh, 1996. 78 pages.
[6]
R Cooper, R Crouch, JV Eijck, C Fox, JV Genabith, J Jaspars, H Kamp, M Pinkal, D Milward, M Poesio, and S Pulman. A strategy for building a framework, FraCaS: A framework for computational semantics, FraCaS deliverable D14. Technical report, University of Edinburgh, 1996. 25 pages.
[7]
R Cooper, R Crouch, JV Eijck, C Fox, JV Genabith, J Jaspars, H Kamp, M Pinkal, D Milward, M Poesio, and S Pulman. Using the framework, FraCaS: A framework for computational semantics, FraCaS deliverable D16. Technical report, University of Edinburgh, 1996. 136 pages. Additional contributions from Ted Briscoe, Holger Maier and Karsten Konrad.
[8]
R Cooper, R Crouch, JV Eijck, C Fox, JV Genabith, J Jaspars, H Kamp, M Pinkal, D Milward, M Poesio, and S Pulman. The bluffer’s guide to computational semantics, FraCaS: A framework for computational semantics. Technical report, University of Edinburgh, 1995. 48 pages.
[9]
R Cooper, R Crouch, JV Eijck, C Fox, JV Genabith, J Jaspars, H Kamp, M Pinkal, D Milward, M Poesio, and S Pulman. Describing the approaches, FraCaS: A framework for computational semantics, FraCaS deliverable D8. Technical report, University of Edinburgh, 1995. 231 pages.
[10]
R Cooper, R Crouch, JV Eijck, C Fox, JV Genabith, J Jaspars, H Kamp, M Pinkal, D Milward, M Poesio, and S Pulman. Evaluating the state of the art, FraCaS: A framework for computational semantics, FraCaS deliverable D10. Technical report, University of Edinburgh, 1995. 152 pages.
[11]
R Cooper, R Crouch, JV Eijck, C Fox, JV Genabith, J Jaspars, H Kamp, M Pinkal, D Milward, M Poesio, and S Pulman. Harmonizing the approaches, FraCaS: A framework for computational semantics, FraCaS deliverable D7. Technical report, University of Edinburgh, 1995. 107 pages.
[12]
R Cooper, R Crouch, JV Eijck, C Fox, JV Genabith, J Jaspars, H Kamp, M Pinkal, D Milward, M Poesio, and S Pulman. The state of the art in computational semantics: Evaluating the descriptive capabilities of semantic theories, FraCaS: A framework for computational semantics, FraCaS deliverable D9. Technical report, University of Edinburgh, 1995. 262 pages.
[13]
Chris Fox. Episodes, characterising sentences and causes. Technical report (manuscript), Universität des Saarlandes, 1994. (10 pages)

Episodic Logic (EL), as described in Chung Hee Hwang and Lenhart K. Schubert’s paper “Episodic Logic: a Situational Logic for Natural Language Processing” (Hwang & Schubert 1993), is a formal theory of natural language semantics which has an extensive coverage of phenomena. The theory has been applied effectively in various software implementations of natural language systems. This paper is not intended to undermine this theoretical and applied work. It aims merely to illustrate some problems with the informal intuitions that purport to explain and justify the formal theory of EL. In particular, this paper criticises the view that we should think of events as situations (episodes) which can be completely characterised by natural language sentences. I argue that: (1) there are no genuine natural language examples which require it; (2) it results in a loss of expressiveness; and (3) it leads to problems when giving the logical form of causal statements. I suggest that the motivating example can be dealt with adequately using a (neo-)Davidsonian approach. That these arguments do not undermine the formal theory of EL and its application in various systems can be seen from the fact (discussed at the end of Section II) that the formal theory appears to make no use of the problematic notions; they only appear in its informal motivation. In effect, EL can be seen to provide a neo-Davidsonian theory. This paper is structured as follows: Section I introduces those aspects of EL relevant for the discussion; Section II presents detailed criticisms; Section III re-appraises the (neo-)Davidsonian approach to events, and shows how it can cope with Hwang and Schubert’s motivating example; and Section IV makes some concluding remarks.

Distance Learning (4)
[1]
Chris Fox. Mathematics for computing. University of London. Stand-alone “computer aided learning” resources, based upon an existing printed volume, 2007.

[2]
Chris Fox. Subject guide LaTeX classfile. For automating the production of distance learning materials for the University of London in their house style, 2005. (First version 2003; final version 2005).

Subject guide LaTeX classfile (software) for automating the production of distance learning materials for the University of London in their house style.

[3]
Chris Fox. Introduction to computing. A Subject Guide for the University of London’s undergraduate programme for external students, 2000.

[4]
Chris Fox. Artificial intelligence. A Subject Guide for the University of London’s undergraduate programme for external students, 1997.

PhD Thesis (1)
[1]
Chris Fox. Mass Terms and Plurals in Property Theory. PhD thesis, University of Essex, 1993.

The thesis Mass Terms and Plurals in Property Theory is concerned with extending a weak axiomatic theory of Truth, Propositions, and Properties with fine-grained intensionality (PT), to represent the semantics of natural language (NL) sentences involving plurals and mass terms. The use of PT as a semantic theory for NL eases the problem of modelling the behaviours of these phenomena by removing the artificial burdens imposed by strongly typed, model-theoretic semantic theories. By deliberately using incomplete axioms, following the example set by basic PT, it is possible to remain uncommitted about: the existence of atomic mass terms; the existence of a ‘bottom element’ (a part of all terms) as a denotation of NL nominals; and the propositionhood (and hence truth) of sentences such as “the present King of France is bald.” Landman’s theory concerning the representation of individuals under roles, or guises, is reappraised in terms of property modifiers. This is used to offer a solution to the problem of distinguishing predication of equal fusions of distinct properties, and the control of distribution into mereological terms, when used to represent NL mass nominals, without assuming a homogeneous extension. The final theory provides a uniform framework for representing sentences involving both mass and count nominals.

Music (7)
[1]
Chris Fox, Dan Fox, Sandra Moog, and Owen Robinson. Attenuation. Line Break: Rehearsal recordings. Soundcloud, May 2022. Original music by Owen Robinson, arranged and produced by Line Break.
[2]
Shalom Lappin and Chris Fox. The Good Intensions. Online, 2021. Recordings of covers, and original songs by Shalom Lappin.
[3]
Chris Fox, Dan Fox, Sandra Moog, Peter Patrick, and Owen Robinson. The Earth moves around the Sun. The Emergency Room: Live at the White House. YouTube, September 2021. Original music by Peter Patrick, arranged and produced by The Emergency Room.
[4]
Wagner Antunes, Chris Fox, Sandra Moog, Peter Patrick, and Owen Robinson. Workin’ on the swing shift. EP distributed by BandCamp, December 2019. Original music by Peter Patrick, arranged and produced by The Emergency Room.
[5]
Chris Fox and WivBeat. WivBeat Xmas performance. Public performance, December 2013. West African rhythms, arranged by Chris Fox, performed by WivBeat. Video and sound recording copyright Chris Fox 2021.
[6]
Alice Fox, Camilla Fox, Chris Fox, Helen Fox, and Hassan Lotfi. Agaya. Street performance, August 2013. Traditional West African rhythm, arranged and performed by Drumz Kool.
[7]
Alice Fox, Camilla Fox, Chris Fox, Helen Fox, Hassan Lotfi, Dariush, and Joe. Drumz Kool at Salifest 2012. Public performance, September 2012. West African rhythms, arranged and performed by Drumz Kool. Video and sound recording copyright Chris Fox 2021.

Author: Dr Chris Fox · Round Peg

Created: 2021-11-19 Fri 08:17