
Research

PI - Dimitra Gkatzia

RF - Carl Strathearn

R - Yanchao Yu

This project aims to develop a framework for common-sense- and visually-enhanced Natural Language Generation (NLG) that enables natural, real-time communication between humans and artificial agents such as robots, supporting effective human-robot collaboration. Human-Robot Interaction poses additional challenges for NLG because of the uncertainty arising from dynamic environments and the non-deterministic nature of interaction. For instance, a situated robot's viewpoint, and therefore its representation of the world, changes as the robot moves, causing current state-of-the-art methods, which cannot adapt to changing environments, to fail. The project will investigate methods for linking multiple modalities while taking their dynamic nature into account. To achieve natural, efficient and intuitive communication, agents will also need human-like abilities in synthesising knowledge and expression. The conditions under which external knowledge bases (such as Wikipedia) can enhance natural language generation, and whether existing knowledge bases are useful for generation at all, remain to be explored.
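As an illustration only, the Python sketch below shows the basic idea of conditioning a generated utterance on both a visual observation and an external knowledge-base entry; the Observation class, the KNOWLEDGE_BASE dictionary and the template are hypothetical stand-ins, not the project's actual framework.

# Minimal sketch (hypothetical names throughout): conditioning a template-based
# generator on a visual observation plus an external knowledge-base gloss.
from dataclasses import dataclass

@dataclass
class Observation:
    """A single detected object from the robot's current viewpoint."""
    label: str
    colour: str
    relation: str  # spatial relation to the listener, e.g. "to the left of you"

# Stand-in for an external knowledge base such as Wikipedia abstracts.
KNOWLEDGE_BASE = {
    "mug": "a cylindrical container used for drinking hot beverages",
    "spanner": "a hand tool used to tighten or loosen nuts and bolts",
}

def generate_reference(obs: Observation) -> str:
    """Combine visual attributes with common-sense knowledge in one utterance."""
    utterance = f"The {obs.colour} {obs.label} is {obs.relation}."
    gloss = KNOWLEDGE_BASE.get(obs.label)
    if gloss:
        utterance += f" A {obs.label} is {gloss}."
    return utterance

if __name__ == "__main__":
    print(generate_reference(Observation("mug", "red", "to the left of you")))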

PI - Carl Strathearn

Supervisor - Minhua Ma


A significant ongoing issue with realistic humanoid robots (RHRs) is inaccurate speech-to-mouth synchronisation. Even the most advanced robotic systems cannot authentically emulate the natural movements of the human jaw, lips and tongue during verbal communication. These visual and functional irregularities can propagate the Uncanny Valley Effect (UVE) and reduce speech understanding in human-robot interaction (HRI). This paper outlines the development and testing of a novel Computer-Aided Design (CAD) robotic mouth prototype with buccinator actuators for emulating the fluid movements of the human mouth. The robotic mouth system incorporates a custom Machine Learning (ML) application that measures the acoustic qualities of speech synthesis (SS) and translates this data into servomotor triangulation for triggering jaw, lip and tongue positions. The objective of this study is to improve current robotic mouth design and provide engineers with a framework for increasing the authenticity, accuracy and communication capabilities of RHRs in HRI. The primary contributions of this study are the engineering of the robotic mouth prototype and the programming of a speech-processing application that achieved 79.4% syllable accuracy, 86.7% lip synchronisation accuracy and a 0.1 s speech-to-mouth articulation differential.
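The acoustic-to-servomotor mapping can be pictured with a small Python sketch like the one below; the servo angles, scaling and envelope values are illustrative assumptions, not the values used in the prototype.

# Illustrative sketch only (angles and thresholds are assumed): mapping a
# per-frame speech amplitude envelope to a jaw-opening servo angle so mouth
# movement tracks the acoustic envelope of synthesised speech.

JAW_CLOSED_DEG = 0.0   # assumed servo angle for a closed jaw
JAW_OPEN_DEG = 28.0    # assumed maximum jaw opening

def amplitude_to_jaw_angle(amplitude: float, peak: float = 1.0) -> float:
    """Scale a normalised speech amplitude (0..peak) to a jaw servo angle."""
    level = max(0.0, min(amplitude / peak, 1.0))
    return JAW_CLOSED_DEG + level * (JAW_OPEN_DEG - JAW_CLOSED_DEG)

def frames_to_angles(envelope: list[float]) -> list[float]:
    """Convert an amplitude envelope (one value per frame) to servo commands."""
    return [round(amplitude_to_jaw_angle(a), 1) for a in envelope]

if __name__ == "__main__":
    # Rising then falling amplitude over one syllable.
    print(frames_to_angles([0.05, 0.4, 0.9, 0.6, 0.1]))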

PI - Carl Strathearn

Supervisor - Minhua Ma


In humanoid robotics, eyes are commonly made from glass or acrylic, making them appear cold and lifeless. The novel robotic eyes presented in this study are the first to have pupils that respond to both light and emotion, using machine learning and an artificial muscle made from graphene. The artificial muscle is coated in a colourised 3D-printed gelatine iris to simulate the materiality and appearance of the human eye. The robotic eyes were trained on pupillometry data taken from human test subjects observing positive and negative video stimuli under high and low light. The results show that the robotic eyes can operate within the natural range of the human pupil in response to light and emotion.
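A toy illustration of light- and emotion-driven pupil behaviour is sketched below in Python; the coefficients and the 2-8 mm range are illustrative assumptions rather than the fitted model from this study.

# Hedged sketch (coefficients are illustrative, not the study's fitted values):
# a simple model of pupil diameter driven by ambient light and emotional
# valence, clipped to a typical human range of roughly 2-8 mm.

MIN_DIAMETER_MM = 2.0
MAX_DIAMETER_MM = 8.0

def pupil_diameter(light_level: float, valence: float) -> float:
    """
    light_level: 0.0 (dark) .. 1.0 (bright)
    valence:    -1.0 (negative stimulus) .. 1.0 (positive stimulus)
    """
    # Brighter light constricts the pupil; emotional arousal dilates it slightly.
    base = MAX_DIAMETER_MM - light_level * (MAX_DIAMETER_MM - MIN_DIAMETER_MM)
    arousal_shift = 0.5 * abs(valence)
    return max(MIN_DIAMETER_MM, min(base + arousal_shift, MAX_DIAMETER_MM))

if __name__ == "__main__":
    print(pupil_diameter(light_level=0.2, valence=0.8))   # dim light, positive stimulus
    print(pupil_diameter(light_level=0.9, valence=-0.3))  # bright light, mildly negative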

PI - Ramy Hammady

Supervisor - Minhua Ma

RA - Carl Strathearn


Many public services and entertainment industries utilise Mixed Reality (MR) devices to develop highly immersive and interactive applications. Recent advancements in MR processing have also prompted the tourism and events industries to invest in and develop commercial applications. The museum environment provides an accessible platform for MR guidance systems by taking advantage of the ergonomic freedom of spatial holographic Head-Mounted Displays (HMDs). MR systems in museums can enhance the typical visitor experience by presenting interactive historical visualisations alongside related physical artefacts and displays. Current approaches in MR guidance research primarily focus on visitor engagement with specific content.

PI - Carl Strathearn

Supervisor - Spencer Roberts


I created ALDOUS during my MRes research project. The robot is life-size and uses a novel control system that transitions between autonomous, pre-scripted actions and human control (Wizard-of-Oz, WoZ) using telemetric and tactile sensor inputs. The objective of the control system is to emulate the spontaneous movements and interactions that are integral to improvisation. The robot uses skeleton tracking to follow users in a chosen environment, which allows it to direct its face towards the user.
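A minimal Python sketch of the mode-switching idea is given below; the sensor signals and the switching rule are assumptions for illustration, not the actual ALDOUS controller.

# Sketch of mode switching between pre-scripted autonomy and WoZ operator
# control (signal names and the rule itself are assumed for illustration).
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()    # pre-scripted action sequences
    WIZARD_OF_OZ = auto()  # direct human operator control

def select_mode(tactile_pressed: bool, operator_active: bool) -> Mode:
    """Hand control to the operator whenever they intervene; otherwise
    keep running the robot's pre-scripted behaviours."""
    return Mode.WIZARD_OF_OZ if (tactile_pressed or operator_active) else Mode.AUTONOMOUS

if __name__ == "__main__":
    # (tactile sensor pressed, operator sending commands) at four time steps
    for tactile, operator in [(False, False), (True, False), (False, True), (False, False)]:
        print(select_mode(tactile, operator).name)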


PI - Carl Strathearn

Supervisor - Spencer Roberts


EGOR is the robot I created during my first degree to showcase my research in language and vision. The robot has automatic speech recognition (ASR) and uses an adapted version of the ELIZA chatbot for spoken communication. For computer vision, it employs a Kinect camera to detect people in a room and orient its head towards them during human-robot interaction.
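As a rough illustration of the head-direction behaviour, the Python sketch below converts a detected person's horizontal position in the camera frame into a head pan angle; the frame width and field of view are assumed values, not EGOR's calibration.

# Minimal sketch (frame width and field of view are assumptions): turn a
# detected person's horizontal pixel position into a head pan angle so the
# robot faces that person during interaction.

FRAME_WIDTH_PX = 640        # assumed colour frame width
HORIZONTAL_FOV_DEG = 57.0   # assumed horizontal field of view

def pan_angle_for_person(person_x_px: float) -> float:
    """Angle (degrees) to rotate the head; negative = left, positive = right."""
    offset = (person_x_px - FRAME_WIDTH_PX / 2) / (FRAME_WIDTH_PX / 2)
    return offset * (HORIZONTAL_FOV_DEG / 2)

if __name__ == "__main__":
    print(pan_angle_for_person(160))   # person in the left third of the frame
    print(pan_angle_for_person(320))   # person centred: no rotation needed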

