In the lead-up to the Oxfordshire Creative Industries Showcase on 27 June, we will be featuring a number of blogs from partner organisations taking part in the event at Oxford Brookes University.
The University of Oxford is active in the campaign to grow the Creative Industries in the UK. The IT, Software & Games sectors contribute 38% of the £92bn Creative Industries market[1]. The Mathematical, Physical & Life Sciences Division (MPLS) will showcase the University's strengths and cutting-edge research achievements in this domain through demonstrations by research groups at the MPLS stand at the Oxfordshire Creative Industries Showcase.
Please visit our stand to meet our creative researchers and academics, watch the demos, and join the conversation. We will showcase ideas and experiments in software engineering, enabling hardware and cyber security design. These can be utilised, for example, in game design and coding, or in user experience and security enhancements.
We are very keen to share our results and thoughts while also learning about your ideas and interests. At this showcase we would like to introduce you to the leading laboratories that are nurturing unique talent for the future IT, Software & Games industries.
The demonstrators are from the VR and AR Oxford Hub, Impact Engineering Laboratory, Oxford Robotics Institute, Cyber Physical Systems labs, Torr Vision Group, Visual Geometry Group, Martinovic Lab, and Computational Biology Group.
The University of Oxford's strengths are:
- Utilisation of high-performance scientific computing for visualisation, virtualisation, modelling, and collaborative design and experimentation. Cyber security and privacy considerations contribute to safer, higher-quality gaming hardware, software design and wireless networking solutions.
- Multidisciplinary research: collaborations spanning computer science, engineering science, the biological sciences and the medical sciences have already proven to nurture successful teams, drawing on skills from multiple departments and divisions.
- State-of-the-art abstraction, correlation and method-testing techniques in video games, modelling and visualisation.
The following demonstrations will welcome visitors at our stand:
Deep Learning Open-source Tools for Searching and Annotating Images and Videos
Presenters: Ernesto Coto and Abhishek Dutta
The Visual Geometry Group is a world-class research group leading the Seebibyte Project, a research effort developing next-generation Computer Vision methods. We will demonstrate open-source software to browse, classify, match and annotate large collections of images using Deep Learning and Computer Vision technology.
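As a toy illustration of the matching idea (not the Seebibyte software itself), the Python sketch below ranks a database of feature embeddings against a query by cosine similarity; random vectors stand in for deep image features, and all sizes are invented for the example:

```python
import numpy as np

# Toy image matching: nearest-neighbour search over feature embeddings.
# Random vectors stand in for CNN features of real images.
rng = np.random.default_rng(1)
database = rng.normal(size=(10_000, 128))               # 10k "image" embeddings
database /= np.linalg.norm(database, axis=1, keepdims=True)

query = database[42] + rng.normal(scale=0.05, size=128)  # near-duplicate of image 42
query /= np.linalg.norm(query)

scores = database @ query                                # cosine similarities
top5 = np.argsort(scores)[::-1][:5]
print(top5)                                              # image 42 should rank first
```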
AR-enhanced research poster
Presenters: Mattia Montanari, Maria Lissner, Richard Smith
Every researcher goes to conferences and uses posters to engage with the scientific community, but only compelling posters attract attention, so the “WOW” factor is crucial. At the University of Oxford, the VR and AR Oxford Hub helps researchers enhance their posters using Augmented Reality (AR). AR apps allow traditional posters to tell a story through embedded links, videos and interactive 3D models. This demo leverages UX and is part of the training course delivered each term at IT Services.
VR app for improving public speech
Presenters: Mattia Montanari, Maria Lissner, Richard Smith
Public speaking can be intimidating, and students from the Immersive Technologies Summer School 2018 used Virtual Reality (VR) to do something otherwise not possible: practise in front of hundreds of people in a controlled environment. This immersive experience tracks your movements and extracts metrics on your eye-contact performance while you present your slides in a large lecture room.
VR app for public engagement currently showcased at the Ashmolean Museum
Presenters: Mattia Montanari, Maria Lissner, Richard Smith
“Dimensions” was an exhibition on the mathematics of symmetry and space presented at the Ashmolean Museum in 2019. This demo is a Virtual Reality (VR) experience that gives visitors a chance to explore shapes in one, two, three and more dimensions. What does an object look like in higher dimensions? See for yourself, and reach out and touch it with your bare hands in VR.
Next Best Design – Through Improving our Understanding of Robotic Fundamentals
Presenter: Jonathan Gammell
Estimation is the problem of measuring the position and orientation of a robot and its surroundings. It is challenging because these variables often cannot be measured directly and instead must be estimated from noisy sensors such as cameras and lidar. Getting accurate estimates from this data allows robots to operate in complex worlds.
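To give a flavour of what estimation means in practice, here is a minimal, purely illustrative Python sketch of a one-dimensional Kalman filter recovering a robot's position from noisy range readings; the motion model and noise values are invented for the example:

```python
import random

# Minimal 1-D Kalman filter: estimate a robot's position from noisy
# readings. Real systems estimate full 3D pose from cameras and lidar.
def kalman_step(estimate, variance, measurement, meas_var, motion, motion_var):
    # Predict: move the estimate forward and grow the uncertainty.
    estimate += motion
    variance += motion_var
    # Update: blend prediction and measurement, weighted by certainty.
    gain = variance / (variance + meas_var)
    estimate += gain * (measurement - estimate)
    variance *= (1 - gain)
    return estimate, variance

true_pos, estimate, variance = 0.0, 0.0, 1.0
for _ in range(20):
    true_pos += 1.0                             # robot moves 1 m per step
    reading = true_pos + random.gauss(0, 0.5)   # noisy sensor reading
    estimate, variance = kalman_step(estimate, variance, reading,
                                     meas_var=0.25, motion=1.0, motion_var=0.01)

print(f"true position: {true_pos:.2f} m, estimate: {estimate:.2f} m")
```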
Search or scheduling is the problem of finding a sequence of actions that allows a robot to achieve its goal. It is challenging because the individual actions may be poorly defined, mutually exclusive, and have uncertain results. Finding efficient and robust action sequences allows robots to achieve their objectives even in the presence of resource constraints and uncertainty.
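As a toy illustration of search, this Python sketch finds the shortest sequence of actions taking a robot along a one-dimensional track past a blocked cell; the actions and the world are invented for the example:

```python
from collections import deque

# Toy action-sequence search (breadth-first) over a 1-D track, cells 0-10.
ACTIONS = {"left": -1, "right": +1, "jump": +2}
BLOCKED = {3}  # the robot may not land on this cell

def find_plan(start, goal):
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for name, delta in ACTIONS.items():
            nxt = state + delta
            if 0 <= nxt <= 10 and nxt not in BLOCKED and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None  # no sequence of actions reaches the goal

print(find_plan(0, 5))  # -> ['jump', 'jump', 'right'], hopping over cell 3
```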
Path planning is the problem of moving a robot between specified positions while avoiding obstacles. It is challenging because finding safe paths through complex environments is often computationally expensive. Planning these paths quickly and reliably is a necessary component of any autonomous robotic system operating in dynamic or unknown worlds.
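And as a minimal illustration of path planning, this Python sketch runs the classic A* algorithm over a small, made-up occupancy grid; real planners work in continuous, high-dimensional configuration spaces:

```python
import heapq

# Minimal A* planner on a 4x4 occupancy grid (1 = obstacle).
GRID = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]

def a_star(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    frontier = [(0, start, [start])]      # (priority, cell, path so far)
    best_cost = {start: 0}
    while frontier:
        _, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and GRID[nr][nc] == 0:
                cost = best_cost[cell] + 1
                if cost < best_cost.get((nr, nc), float("inf")):
                    best_cost[(nr, nc)] = cost
                    heuristic = abs(goal[0] - nr) + abs(goal[1] - nc)  # Manhattan
                    heapq.heappush(frontier,
                                   (cost + heuristic, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

print(a_star((0, 0), (3, 3)))  # collision-free path around the obstacles
```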
AutoTune: Robust human identification systems across cyber-physical spaces
Presenters: Chris Xiaxuan Lu and Changhao Chen
Can we let a machine autonomously adapt to recognise faces or voices in a new environment from ambient WiFi? The ability to recognise humans is a fundamental prerequisite for a wide range of applications, from robotics to smart buildings. However, current identification methods rely heavily on human guidance (i.e., labels) and lack robustness in the face of environmental dynamics. In this work, we present AutoTune, a novel framework that automatically labels biometrics from digital identities harvested in the wild. We will demonstrate how such an auto-tuned recognition system can secure a smart workplace through staff tracking and intruder detection.
mID: Privacy-preserving human sensing system with millimeter wave
Presenters: Chris Xiaxuan Lu and Changhao Chen
Understanding who-is-where is a key enabler for a wide range of smart-home applications, such as personalised services, security management and energy control. Today, this still relies on cameras, which are privacy-intrusive and easily occluded by intruders or evaders. In this work, we propose mID, a novel human sensing system using a millimeter wave radar. mID observes objects as sparse point clouds, which are considerably more privacy-preserving than visual traits. Moreover, as millimeter wave signals can penetrate certain materials, the device can be installed inside walls, where it is hard to discover and occlude. We will show live at the demo spot how such a system tracks multiple people.
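As a toy illustration of the sensing idea (not the mID pipeline itself), the Python sketch below groups a sparse synthetic point cloud into person candidates with a naive flood-fill clustering; all thresholds and data are invented for the example:

```python
import numpy as np

# Toy person detection from a sparse 3D point cloud: group nearby points
# and treat sufficiently large clusters as person candidates.
def cluster_points(points, radius=0.5, min_size=5):
    clusters, unassigned = [], list(range(len(points)))
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unassigned
                    if np.linalg.norm(points[i] - points[j]) < radius]
            for j in near:
                unassigned.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(cluster)
    return [c for c in clusters if len(c) >= min_size]

# Two synthetic "people": sparse points around (0,0,1) and (3,1,1).
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal((0, 0, 1), 0.1, (20, 3)),
                   rng.normal((3, 1, 1), 0.1, (20, 3))])
print(f"{len(cluster_points(cloud))} person candidates detected")  # -> 2
```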
SafeFuse: Selective Sensor Fusion towards Reliable Self-driving Cars
Presenters: Chris Xiaxuan Lu and Changhao Chen
Humans are able to perceive their self-motion through space via multimodal perception. In the fields of computer vision and robotics, integrating visual and inertial information to locate and navigate robots and autonomous cars is a well-researched topic, as it enables ubiquitous mobility for mobile agents by providing robust and accurate pose information. Recent deep learning approaches to visual-inertial motion estimation have proven successful, but they rarely incorporate robust fusion strategies for dealing with imperfect input sensory data. We propose a novel end-to-end selective sensor fusion framework for monocular visual-inertial odometry (VIO), which fuses monocular images and inertial measurements in order to estimate the trajectory whilst improving robustness to real-life issues such as missing or corrupted data and bad sensor synchronisation.
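As a rough sketch of the selective-fusion idea (not the authors' architecture), this PyTorch snippet applies a learned per-channel gate to concatenated visual and inertial features before regressing a pose, so that unreliable channels can be down-weighted; the feature sizes and layers are invented:

```python
import torch
import torch.nn as nn

class SoftFusion(nn.Module):
    """Learned 'soft' gating over concatenated visual and inertial features."""
    def __init__(self, visual_dim=512, inertial_dim=128):
        super().__init__()
        fused = visual_dim + inertial_dim
        # Gating network: maps fused features to per-channel weights in (0, 1).
        self.gate = nn.Sequential(nn.Linear(fused, fused), nn.Sigmoid())
        # Pose regressor: predicts a 6-DoF relative pose from masked features.
        self.pose = nn.Linear(fused, 6)

    def forward(self, visual_feat, inertial_feat):
        combined = torch.cat([visual_feat, inertial_feat], dim=-1)
        mask = self.gate(combined)          # learned per-channel reliability
        return self.pose(combined * mask)   # masked features -> pose estimate

model = SoftFusion()
pose = model(torch.randn(1, 512), torch.randn(1, 128))
print(pose.shape)  # torch.Size([1, 6])
```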
CSynth – 3D structure of the genome in VR and non-VR
Presenter: Stephen Taylor
CSynth is a physics-based interactive platform for visualising the 3D structure of biological molecules. It is primarily designed to provide an engaging way to explore and understand the complex structure of the genome in 3D by integrating data from next-generation sequencing (Hi-C) with modelling. CSynth performs its calculations on the GPU, so it is much faster than existing modelling software at inferring and visualising chromatin structure, allowing real-time interaction with the modelling parameters. In addition, we include an option to view and manipulate these complicated structures in Virtual Reality (VR), so scientists can immerse themselves in the models for deeper understanding. The VR component has also proven to be a valuable teaching and public engagement tool.
BabelVR – 3D Visualisation and analysis of medical and research images
Presenter: Stephen Taylor
BabelVR is a free software tool for the HTC Vive that allows the import of 3D images from a variety of medical devices - spanning 3D image modalities including microscopes, ultrasound, Computed Axial Tomography (CAT) and MRI scanners - so they can be viewed and interacted with in a virtual reality (VR) environment.
The images produced in clinical diagnosis are often complex and difficult to work with using existing 2D packages. We are developing software tools using Virtual Reality (VR) that we believe offer greater interaction and insight than standard software packages.
(https://www.cbrg.ox.ac.uk/cbrg/babelVR.html)
Sarolta Mohaine Palfi
Industrial Research Partnerships Manager – ICT & Creative Industries, AI
Mathematical, Physical & Life Sciences Division
University of Oxford
Book your showcase tickets now
[1] Source: Creative Industries Council and DCMS Sector Estimates (2017)