School of ECM, University of Surrey, Guildford, Surrey GU2 5XH, UK | Tel: +44 (0)1483 259823 | Fax: +44 (0)1483 876051
Students' Ph.D. Abstracts from the University of Surrey
Tracey Bale, B.Sc., Ph.D.
"Modular Connectionist Architectures and the Learning of Quantification
Skills"
(Department of Computing, January
1999)
Abstract: Modular connectionist systems comprise autonomous,
communicating modules, achieving a behaviour more complex than that of a single
neural network. The component modules, possibly of different topologies, may
operate under various learning algorithms. Some modular connectionist systems
are constrained at the representational level, in that the connectivity of the
modules is hard-wired by the modeller; others are constrained at an
architectural level, in that the modeller explicitly allocates each module to a
specific subtask. Our approach aims to minimise these constraints, thus reducing
the bias possibly introduced by the modeller. This is achieved, in the first
case, through the introduction of adaptive connection weights and, in the
second, by the automatic allocation of modules to subtasks as part of the
learning process. The efficacy of a minimally constrained system, with respect
to representation and architecture, is demonstrated by a simulation of numerical
development amongst children.
The modular connectionist system MASCOT
(Modular Architecture for Subitising and Counting Over Time) is a dual-route
model simulating the quantification abilities of subitising and counting. A
gating network learns to integrate the outputs of the two routes in determining
the final output of the system. MASCOT simulates subitising through a numerosity
detection system comprising modules with adaptive weights that self-organise
over time. The effectiveness of MASCOT is demonstrated by the fact that the distance
effect and Fechner's law for numbers emerge as consequences of this
learning process. The automatic allocation of modules to subtasks is illustrated
in a simulation of learning to count. Introducing feedback into one of two
competing expert networks enables a mixture-of-experts model to perform
decomposition of a task into static and temporal subtasks, and to allocate
appropriate expert networks to those subtasks. MASCOT successfully performs
decomposition of the counting task with a two-gated mixture-of-experts model and
exhibits childlike counting errors.
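The gating mechanism described above can be illustrated in a few lines of code. The following Python sketch shows a generic two-expert mixture in which a gating network blends the experts' outputs; the linear experts, array shapes and all names are illustrative assumptions, not MASCOT's actual implementation.

```python
# A minimal, generic mixture-of-experts forward pass, assuming two linear
# expert "routes" and a softmax gating network. This illustrates the general
# technique, not the MASCOT implementation.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class Expert:
    """One route (e.g. a subitising or counting module), here a linear map."""
    def __init__(self, n_in, n_out):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))

    def forward(self, x):
        return self.W @ x

class GatedMixture:
    """A gating network decides how much to trust each route per input."""
    def __init__(self, n_in, n_out, n_experts=2):
        self.experts = [Expert(n_in, n_out) for _ in range(n_experts)]
        self.Wg = rng.normal(scale=0.1, size=(n_experts, n_in))

    def forward(self, x):
        g = softmax(self.Wg @ x)                        # mixing coefficients
        outs = np.stack([e.forward(x) for e in self.experts])
        return g @ outs                                 # weighted blend of routes

x = rng.normal(size=8)                 # a toy input pattern
print(GatedMixture(8, 5).forward(x))   # the system's blended estimate
```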
Stephen Griffin, B.Sc., M.Phil.
"Exploiting Linguistic and Societal Metaphors for Knowledge
Acquisition"
(Department of Mathematical and Computing Sciences, September
1996)
Abstract: Our interdisciplinary research examines new
approaches to knowledge acquisition through the exploitation of linguistic and
societal metaphors. We argue that conventional knowledge acquisition relies too
heavily on a psychological metaphor, and that this is insufficient in broad
domains, where geographical and political issues make the expertise more
socially situated: a purely psychological approach lacks input from the society in which the
knowledge exists. We attempt to provide a methodology which captures this input
by introducing a Domain Interface Group to support the knowledge engineer in
his/her tasks. This changes the role of the knowledge engineer to
primarily that of a group facilitator, and we suggest guidelines for
brainstorming sessions to facilitate consensus decision making. We advocate the
continued use of expert interviews, but suggest ways to improve their
productivity. In particular, we attempt to alleviate reductive bias through the
use and understanding of domain-specific terminology and lexical semantics,
during all domain communication and particularly during knowledge acquisition
from text. We situate our work in the constructivist modelling paradigm and
describe mediating representations which emphasise the importance of human
comprehension of the model, for the knowledge engineer, the expert and the end
user, above programming considerations. We have undertaken an evaluation of our
methodology and an audit of a resulting paper knowledge base, and present the
results as evidence for the efficiency, effectiveness and accuracy of our
approach.
Mohamed Benbrahim, Ingénieur d'Etat, Ph.D.
"Automatic Text Summarisation through Lexical Cohesion
Analysis"
(Department of Mathematical and Computing Sciences, April
1996)
Abstract: A methodology for automatically summarising
scientific texts is presented using the patterns of lexical cohesion found in
such texts. Lexical cohesion is a form of cohesion whereby certain lexical
features connect the sentences of a text with one another. An analysis
of lexical cohesion in text, primarily by counting repetitions, synonyms and
paraphrase, leads to the establishment of a network of sentences, some tightly
bonded through lexical cohesion relations, some others having weak bonds or no
bonds at all. The strength of connections in this cohesion network is used to
identify key sentences in a text. Some sentences open key topics, some close
topics, whilst others consolidate a given topic. Topic-opening, topic-closing and
consolidating (or central) sentences have different strengths and different
connectivity patterns. A selection of these sentences can be construed as a
summary of a given text. TELE-PATTAN (TExt and LExical cohesion PATTerns
ANalysis), a system for summarising text automatically, extracts patterns of
lexical cohesion in a text, categorises its sentences and subsequently produces
summaries of the text on the basis of these patterns. Experiments were conducted
with human subjects to evaluate the summaries. The results of this preliminary
evaluation are encouraging.
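As a hedged illustration of the general approach, the Python sketch below bonds sentences that repeat content words and keeps the most connected ones. The tokenisation, bond threshold and scoring are simplifying assumptions, not TELE-PATTAN's actual algorithm, which also counts synonyms and paraphrase.

```python
# Summarisation by lexical cohesion, sketched: sentences that share enough
# content words are "bonded", and the most-bonded sentences form the summary.
# The threshold, stop list and plain word repetition are assumptions here.
import re
from itertools import combinations

STOP = {"the", "a", "an", "of", "in", "and", "to", "is", "are", "on", "that"}

def content_words(sentence):
    return set(re.findall(r"[a-z]+", sentence.lower())) - STOP

def summarise(sentences, bond_threshold=2, k=2):
    toks = [content_words(s) for s in sentences]
    score = [0] * len(sentences)
    for i, j in combinations(range(len(sentences)), 2):
        if len(toks[i] & toks[j]) >= bond_threshold:   # a lexical bond
            score[i] += 1
            score[j] += 1
    top = sorted(range(len(sentences)), key=lambda i: -score[i])[:k]
    return [sentences[i] for i in sorted(top)]         # keep original order

# e.g. summarise(text.split(". ")) returns the two most-bonded sentences
```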
John Wright, B.Sc., M.Sc., Ph.D.
"Connectionist Architectures for Language Disorder
Simulation"
(Department of Mathematical and Computing Sciences, December
1995)
Abstract: Our interdisciplinary research focuses on the
application of connectionist modelling techniques to the study of language
disorders. In recent years, artificial neural network models of aphasia have
enabled cognitive neuropsychologists to explore contemporary theories of
language processing. Such work may, in the future, lead to the development of
innovative strategies for the rehabilitation of brain-damaged patients. The aim
of our work has been to analyse the modelling techniques employed in existing
connectionist accounts of language disorders and, on the basis of our findings,
to propose novel and computationally well-grounded architectures which may be
used to explore cognitive neuropsychological theories.
The majority of connectionist language disorder models reported in the
literature may be categorised as network-level models, consisting of a single
homogeneous structure built from identical processing elements. We believe that
in order to simulate more fully the complexity of human language processing, it
may be necessary to move away from this approach, in favour of nervous
system-level models, in which a number of network-level models are
interconnected to form a modular connectionist architecture. The suitability of
these architectures for language disorder simulation has been assessed through
the construction of LISA: a Language Impairment Simulation Architecture. LISA
comprises a number of linked connectionist networks which have been collectively
trained to simulate object naming and word repetition. By lesioning one or more
components of our modular system, it is possible to simulate the impaired
language production of an aphasic patient. We present our attempts to simulate
an acquired disorder of repetition (deep dysphasia) and a progressive disorder
(semantic dementia) using LISA. The results of our experiments are encouraging,
and lead us to conclude that the cognitive neuropsychology community may indeed
benefit from the use of modular connectionist architectures in the simulation of
both progressive and acquired language disorders.
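To make the lesioning idea concrete, here is a small Python sketch in which a random fraction of one module's weights is silenced and the system's output compared before and after. The two-module pipeline and all parameters are assumptions for exposition, not LISA's architecture.

```python
# "Lesioning" a module of a modular connectionist system, sketched: zero a
# random fraction of one component's weights and observe the degraded output.
# The toy two-module pipeline below stands in for, but is not, LISA.
import numpy as np

rng = np.random.default_rng(1)

semantics = rng.normal(scale=0.5, size=(6, 10))   # stand-in semantic module
phonology = rng.normal(scale=0.5, size=(4, 6))    # stand-in output module

def produce(x, sem, phon):
    """Pass an input pattern through the two linked modules."""
    return np.tanh(phon @ np.tanh(sem @ x))

def lesion(W, severity=0.4):
    """Silence a random fraction of a module's connection weights."""
    return W * (rng.random(W.shape) >= severity)

x = rng.normal(size=10)
print("intact:  ", produce(x, semantics, phonology))
print("lesioned:", produce(x, lesion(semantics), phonology))
```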
Paul Holmes-Higgin, B.Sc., Ph.D.
"Text Knowledge: The Quirk
Experiments"
(Department of Mathematical and Computing Sciences, March
1995)
Abstract: Our research examines text knowledge: the knowledge
encoded in text and the knowledge about a text. We approach text knowledge from
different perspectives, describing the theories and techniques that have been
applied to extracting, representing and deploying this knowledge, and propose
some novel techniques that may enhance the understanding of text knowledge.
These techniques include the concept of virtual corpus hierarchies, hybrid
symbolic and connectionist representation and reasoning, text analysis and
self-organising corpora. We present these techniques in a framework that
embraces the different facets of text knowledge as a whole, be it corpus
organisation and text identification, text analysis or knowledge representation
and reasoning. This framework comprises three phases: the organisation,
analysis and evaluation of text, where a single text might be a complete work, a
technical term, or even a single letter. The techniques proposed are
demonstrated by implementations of computer systems and some experiments based
on these implementations: the Quirk Experiments. Through these experiments we
show how the highly interconnected nature of text knowledge can be reduced or
abstracted for specific purposes, using a range of techniques based on explicit
symbolic representations and self-organising connectionist schemes.
Syed Sibte Raza Abidi, B.Eng., M.S., Ph.D.
"A Connectionist Simulation: Towards a Model of Child Language
Development"
(Department of Mathematical and Computing Sciences, September
1994)
Abstract: Our research focuses on the connectionist
simulation of child language development within the age group 9–24 months. We
present a hybrid connectionist model, ACCLAIM (A Connectionist Child Language
development and Imitation Model), comprising 'supervised' and 'unsupervised'
learning connectionist networks that take into account the diverse nature of
inputs to and outputs from a child learning his or her first language. The model
is used to simulate the child's development of concepts, acquisition of words,
ostensive naming of concepts, understanding of conceptual and semantic relations
and the learning of word-order. The simulation produces child-like one-word and
two-word sentences. The simulation of aspects of child language development is
'language informed', in that the data used in the simulation were taken from
extant child language corpora. Theoretical underpinnings of our simulation were
based on Jean Piaget's notions of cognitive development. The efficacy of hybrid
connectionist models is demonstrated through the operationalisation of real
child language data. The simulations indicate that connectionist networks can
simulate developmental behaviour, and both connectionist and developmental
psychology communities can benefit from such a contribution.
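The division of labour between 'unsupervised' and 'supervised' components can be sketched as follows: an unsupervised layer organises percepts into concept clusters, and words are then attached to the winning clusters. The prototype-based clustering, the toy data and every name below are illustrative assumptions, not ACCLAIM's design.

```python
# A toy hybrid of unsupervised concept formation and supervised naming:
# prototypes self-organise around percepts, then words are attached to the
# winning prototypes ("ostensive naming"). All details are assumptions made
# for illustration; ACCLAIM's networks are considerably richer.
import numpy as np

rng = np.random.default_rng(2)

def nearest(prototypes, x):
    return int(np.argmin(((prototypes - x) ** 2).sum(axis=1)))

def organise(percepts, n_concepts=3, lr=0.2, epochs=20):
    """Unsupervised: drag the winning prototype towards each percept."""
    protos = rng.normal(size=(n_concepts, percepts.shape[1]))
    for _ in range(epochs):
        for x in percepts:
            w = nearest(protos, x)
            protos[w] += lr * (x - protos[w])
    return protos

def name_concepts(protos, named_percepts, words):
    """Supervised: attach a word to the concept each named percept wins."""
    return {nearest(protos, x): w for x, w in zip(named_percepts, words)}

# Three noisy clusters of percepts, then one labelled example per cluster.
percepts = rng.normal(size=(30, 4)) + np.repeat(np.eye(4)[:3] * 3, 10, axis=0)
protos = organise(percepts)
print(name_concepts(protos, percepts[::10], ["ball", "cup", "dog"]))
```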
Abu Turab Alam, B.Sc., M.Sc., Ph.D.
"The Elicitation of Software Requirements: The Role of Natural Language
Processing"
(Department of Mathematical and Computing Sciences, February
1991)
Abstract: The engineering of a software system depends
crucially upon the requirements specification of the system. The specification
of requirements is a complex and interactive process involving an analyst and a
client in a requirements definition activity. The principal medium for this
activity is natural language, and we observe that special terms or jargon are
used to abbreviate the communication between an analyst and the client. The
information available to an analyst during this communication is inherently
ambiguous and incomplete and often defined by the client without context.
We emphasise the all-pervasive use of natural language during the
requirements definition activity. Natural language is used from the very start
of a project and is used throughout requirements acquisition, expression and
analysis for software specification. Furthermore, a substantial amount of
relevant information about the client's system is also available in natural
language.
An analyst performs various tasks to elicit and understand software
requirements. We identify a number of techniques to expedite these tasks for an
analyst. These techniques have their origins in different fields:
knowledge engineering (for understanding the user's domain directly from its text)
and natural language studies (schemata for formalising the user's domain
knowledge).
The main advantage of our framework is that it does not constrain the
thinking processes of an analyst with arbitrary method constructs.
Instead, our framework emphasises the functional behaviour of natural language
in a specific domain and allows the analyst to elicit and understand the
requirements themselves in natural language.
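One common natural-language technique for surfacing the "special terms or jargon" of a client's domain is to compare word frequencies in the client's documents against general language. The Python sketch below, including its ratio test and corpora, is a generic illustration introduced here, not a technique claimed by the thesis.

```python
# A generic sketch of surfacing domain jargon: words markedly more frequent
# in the client's documents than in general language are flagged as candidate
# special terms. The ratio test and corpora are assumptions for illustration.
import re
from collections import Counter

def rel_freqs(text):
    toks = re.findall(r"[a-z]+", text.lower())
    n = len(toks) or 1
    return {w: c / n for w, c in Counter(toks).items()}, n

def candidate_terms(domain_text, general_text, ratio=5.0):
    dom, _ = rel_freqs(domain_text)
    gen, n_gen = rel_freqs(general_text)
    floor = 1.0 / (n_gen + 1)          # estimate for words unseen in general text
    return sorted(w for w, f in dom.items() if f / gen.get(w, floor) >= ratio)

# e.g. candidate_terms(client_docs, newswire_sample) lists the client's jargon
```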
Ian Gerald Wells, B.Sc., M.Sc., Ph.D.
"Knowledge Representation in Clinical Biochemistry"
(Department of
Mathematical and Computing Sciences/Department of Biochemistry, March
1990)
Abstract: Clinical decision-making concerns the direct
medical observation and treatment of patients, and includes both complex
decisions, often made with inadequate information, and routine
judgements which require little rigorous analysis. An examination of the
clinical decision-making process suggests that for complex problems the tendency
is to reason backwards from a small number of hypotheses, but reasoning forwards
from the available evidence can be more effective in other cases. Clinical
errors are generally caused by defective knowledge rather than poor reasoning,
and typically centre on the incorrect interpretation of a small number of significant
clinical cues.
A computer-based development environment called PROSE has been designed and
implemented in order to demonstrate a methodical approach to the analysis of a
domain and to provide a tool for constructing knowledge-based systems which
emulate clinical decision-making. PROSE has been developed within a
computational framework where the domain primitives have been encoded as objects
and relationships, and reasoning is effected by novel control features which
allow alternative solutions to be explored. PROSE has been implemented in the
logic programming language Prolog, and its performance evaluated using five
selected problems in clinical biochemistry. PROSE has also proved to be of
wider relevance, through its successful application to a range of practical
tasks in engineering and computer science.
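The contrast between reasoning backwards from hypotheses and forwards from evidence can be made concrete with a toy rule base. The Python below and its invented clinical rules are stand-ins for exposition, not PROSE itself, which was written in Prolog.

```python
# Forward chaining fires rules from the evidence until nothing new follows;
# backward chaining asks whether a hypothesis can be derived from evidence.
# The rules here are fabricated toy examples, not clinical knowledge.
RULES = [({"high_glucose", "glycosuria"}, "suspect_diabetes"),
         ({"suspect_diabetes", "raised_hba1c"}, "diabetes")]

def forward(facts):
    """Reason forwards: apply rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def backward(goal, facts, depth=5):
    """Reason backwards: can the goal be derived from the evidence?"""
    if goal in facts:
        return True
    if depth == 0:
        return False
    return any(all(backward(b, facts, depth - 1) for b in body)
               for body, head in RULES if head == goal)

evidence = {"high_glucose", "glycosuria", "raised_hba1c"}
print(forward(evidence))                # data-driven conclusions
print(backward("diabetes", evidence))   # hypothesis-driven check
```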
Copyright © Dept. of
Computing, 1996, 1997, 1998, 1999. All rights reserved.
Direct comments or questions to: c.jones
Last modified by Gemma Stevens, 15 July 1999.