Experiments
Lexical Ambiguity
As part of the SAsSy project, we often collaborate with experts from non-computer-science disciplines. In the work linked here, I collaborated with psychologists in Aberdeen who had run an experiment on lexical ambiguity and wanted to build mixed-effects statistical models of their results, which I helped with. The R code for these analyses, with documentation in PDF, is available at: bitbucket.org/matt_green/lcp2014ambiguity
Aggregation
This work discusses the idea, implicit in some treatments of aggregation in Natural Language Generation (NLG), that less redundant linguistic structures must always be preferable to more redundant structures that express the same information. We demonstrate experimentally that this view is mistaken and argue for a non-directional approach to aggregation that is able to add or remove redundancies depending on a range of factors. We argue that aggregation, understood in this way, is relevant not only to NLG, but also to Summarisation, Text Simplification, and Machine Translation, and to human language production decisions as well. We carried out two experiments to address effects of aggregation. The first used very simple aggregation of the form “Load the van and the truck and the lorry”, compared with unaggregated controls like “Load the van. Load the truck. Load the lorry”, and found that the aggregated forms took longer to read than the controls. The second experiment replicated this finding even when more naturalistic aggregation was used, e.g., “Load the van, the truck, and the lorry”. Data sets and scripts for analysis in R are provided at the links below:
homepages.abdn.ac.uk/mjgreen/pages/experiments/aggregation1
homepages.abdn.ac.uk/mjgreen/pages/experiments/aggregation2
Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an Empirical Evaluation
It has been claimed that computational models of argumentation provide support for complex decision-making activities, in part due to the close alignment between their semantics and human intuition. In this paper we assess this claim by means of an experiment: people’s evaluation of formal arguments — presented in plain English — is compared to the conclusions obtained from argumentation semantics. Our results show a correspondence between the acceptability of arguments to human subjects and the justification status prescribed by the formal theory in the majority of cases. However, post-hoc analyses show some significant deviations, which appear to arise from implicit knowledge regarding the domains in which the evaluation took place. We argue that in order to create argumentation systems, designers must take implicit domain-specific knowledge into account. Data sets are provided at the links below:
ECAIdataAnon.csv
variables