Project Proposals for William Flynn Scholarship:
Project Number: 11
Project Title: Navigation and interaction in three-dimensional Virtual
Environments using Natural Language Processing
Project Supervisor: Professor Paul Mc Kevitt, Dr. Michael Mc Neill,
Ms Heather Sayers
A virtual environment (VE) provides a computer-based interface
representing a three-dimensional physical environment or abstract
space. A large and growing number of applications are using VE technology
in a variety of areas including manufacturing, business, entertainment,
medicine and education. Users of these three-dimensional information
spaces require intuitive tools to enable effective navigation and
interaction. Navigation is the process of moving (usually sequentially)
around an environment, deciding at each step where to go. Users
require the ability to move, controlling orientation, direction
of movement and speed, in order to get to desired positions within
a VE. VE applications often place a high demand on navigation skills,
which means that a high level of navigational support is required
from the interface. It is important, therefore, that interfaces
are designed which provide users with the tools to enable them to
exploit the new possibilities offered.
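The navigation requirements described above (control of orientation, direction of movement and speed) can be illustrated with a minimal sketch. The class and method names below are purely illustrative, not part of any existing VE system:

```python
import math

class Navigator:
    """Minimal first-person navigation state for a 3D virtual environment.

    Hypothetical sketch: tracks position, heading (yaw) and speed, the
    three quantities a VE user must control while navigating.
    """

    def __init__(self):
        self.x, self.y, self.z = 0.0, 0.0, 0.0  # position within the VE
        self.heading = 0.0                       # yaw in degrees
        self.speed = 0.0                         # units moved per step

    def turn(self, degrees):
        """Control orientation by rotating the heading."""
        self.heading = (self.heading + degrees) % 360.0

    def set_speed(self, speed):
        """Control velocity (a noted source of user frustration)."""
        self.speed = max(0.0, speed)

    def step(self):
        """Move one step in the current direction of movement."""
        rad = math.radians(self.heading)
        self.x += self.speed * math.cos(rad)
        self.z += self.speed * math.sin(rad)

nav = Navigator()
nav.set_speed(1.0)
nav.turn(90.0)
nav.step()  # one unit of movement along the new heading
```

An NLP front end, as proposed here, would drive exactly these controls by voice rather than by visible on-screen tools.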
Research has shown that current interfaces which supply users with
visible tools for navigation within VEs cause user frustration in
a variety of ways. Problems encountered relate to support for velocity,
getting lost or becoming disoriented in the environment, direction
of movement, navigational modes (e.g. walking, flying), the provision
of landmarks in the environment itself, and automatic navigation
to predefined locations. This research will investigate the use
of Natural Language Processing (NLP) as an additional/alternative
means of navigation and interaction in these environments.
Although there has been much success in developing theories, models
and systems in the areas of Natural Language Processing (NLP) and
Visual Processing (VP), there has been little progress in integrating
these two areas. Originally, the general aim of the field was to
build integrated language and vision systems, but few such systems
were developed, and the two areas quickly diverged into separate
subfields.
Intelligent MultiMedia (IntelliMedia) focuses on the computer processing
and understanding of signal and symbol input from at least speech,
text and visual images in terms of semantic representations. There
has been very little work on integrating spoken dialogue systems
with VEs. Our focus here is to link spoken dialogue processing into
VEs so that users can ask questions about entities and objects in
the environments and also about navigation within them. For example,
users may ask questions about a 3D VR presentation of a building
space and how to get to certain offices, about how to reach a
destination on a VR map display, or about patient medical data.
There are a number of research questions in respect of how the semantics
of visual and spoken dialogue information can be integrated and
how visual data can be mapped into and out of those semantics.
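One simple way to picture the mapping from spoken dialogue to navigation actions is a set of utterance patterns that yield semantic frames. This is a hypothetical sketch only; the pattern table and function names (`ACTIONS`, `parse_command`) are illustrative assumptions, not the project's actual dialogue architecture, which would use full spoken-dialogue processing rather than pattern matching:

```python
import re

# Illustrative mapping from utterance patterns to VE navigation and
# query actions; named groups capture the slots of each semantic frame.
ACTIONS = {
    r"\b(go|walk|move) to (the )?(?P<target>[\w\s]+)": "NAVIGATE_TO",
    r"\bturn (?P<dir>left|right)\b": "TURN",
    r"\bwhat is (this|that)\b": "QUERY_OBJECT",
    r"\b(stop|halt)\b": "STOP",
}

def parse_command(utterance):
    """Return an (action, slots) semantic frame for a recognised utterance."""
    text = utterance.lower().strip()
    for pattern, action in ACTIONS.items():
        m = re.search(pattern, text)
        if m:
            return action, m.groupdict()
    return "UNKNOWN", {}

action, slots = parse_command("Go to the main office")
# action is "NAVIGATE_TO"; slots carry the destination for the VE to resolve
```

The hard research questions the proposal raises sit precisely in the gap this sketch skips over: grounding a slot such as `target` in the entities of the visual scene, and mapping visual data into and out of the shared semantic representation.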
If you are interested in being considered for a studentship please
contact
the Group Director, Professor T.M. McGinnity by email:
tm.mcginnity@ulst.ac.uk
or telephone: +44-(0)28-71375417.
See the current research section of this website
for details on research projects pursued by existing PhD students.