Labs Introduction

AIRIS Lab

Professor

Sungyoung Kim

Number of Students

3 Ph.D / 2 MS / 4 Intern

Lab Introduction

The Applied and Innovative Research for Immersive Sound (AIRIS) laboratory aims to explore the full range of sound- and audio-related science and technology. In particular, the laboratory focuses on better understanding listeners’ perceptual and cognitive processing of reproduced sound fields, and on integrating spatial sound for a holistic, realistic experience in virtual spaces such as the metaverse.

Several Research Topics

Real-time spatial audio rendering platform for Auditory Augmented Reality: ARAE

AR Technical Ear Training Game for the Hearing Impaired

Aural Heritage

Immersive sound with AI

Augmented reality Room Acoustic Estimator (ARAE)

AR Technical Ear Training Game

HUMAN Lab

Professor

Yong-Hwa Park

Number of Students

17 Ph.D / 10 MS / 1 Intern

Lab Introduction

Acoustic recognition is difficult in complex, noisy real-world environments. Just as humans can understand sounds at a noisy cocktail party, we aim to distinguish sounds, localize them, and separate them using only two ears.
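Two-eared localization of this kind rests on interaural cues. As a toy illustration only (not the lab's actual deep-learning methods), the sketch below estimates the interaural time difference (ITD) from a cross-correlation peak and maps it to an azimuth with a simple far-field sine-law head model; the ear-distance and speed-of-sound values are assumptions.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (seconds) as the lag of
    the cross-correlation peak between the two ear signals."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # lag in samples
    return lag / fs

def itd_to_azimuth(itd, ear_distance=0.18, c=343.0):
    """Map an ITD to an azimuth (radians) via the far-field sine law
    itd ~= d * sin(theta) / c; d and c here are assumed constants."""
    return np.arcsin(np.clip(itd * c / ear_distance, -1.0, 1.0))

# Toy check: white noise reaching the left ear 8 samples late.
rng = np.random.default_rng(0)
sig = rng.standard_normal(2048)
left = np.concatenate([np.zeros(8), sig])
right = np.concatenate([sig, np.zeros(8)])
itd = estimate_itd(left, right, fs=16000)  # 8 samples -> 0.5 ms
```

A positive ITD (sound arriving at the right ear first) then maps to a source on the listener's right.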

Several Research Topics

Deep Learning Based Frequency Dependent Sound Event Detection

Continuous Head Related Transfer Function

Binaural Sound Event Detection and Localization

Speech Verification

Human Auditory System Inspired Acoustic Recognition

Cough Detection Camera

Acoustic Signal Based Health Monitoring

HRTF measurement

Smart Sound Systems Lab

Professor

Jung-Woo Choi

Number of Students

6 Ph.D / 5 MS / 5 Intern

Lab Introduction

The Smart Sound Systems laboratory aims to analyze and understand acoustic environments. Specifically, the laboratory focuses on enhancing, separating, and extracting target speech, and on rendering binaural audio for metaverse spaces.
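As a minimal, classical illustration of multichannel enhancement (a hypothetical sketch, not the lab's deep-learning approach), the code below implements a delay-and-sum beamformer: each microphone signal is advanced by its known steering delay so the target adds coherently while uncorrelated noise averages down.

```python
import numpy as np

def delay_and_sum(mics, delays):
    """Delay-and-sum beamformer with integer sample delays.
    np.roll wraps at the edges, which is acceptable for this toy
    sketch but would need zero-padding in a real implementation."""
    out = np.zeros_like(mics[0])
    for sig, d in zip(mics, delays):
        out += np.roll(sig, -d)  # advance by the propagation delay
    return out / len(mics)

# Toy check: one source arriving at three mics with different delays.
rng = np.random.default_rng(1)
src = rng.standard_normal(1024)
delays = [0, 3, 7]
mics = [np.roll(src, d) for d in delays]
enhanced = delay_and_sum(mics, delays)  # realigned to the source
```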

Several Research Topics

Room Impulse Response Synthesis via Latent Diffusion Model

Sound Event Localization and Detection

Sound Anomaly Detection

Monaural Speech Separation

Multichannel Speech Enhancement & Separation

Room Geometry Inference

Deep Learning-based Multichannel Target Sound Extraction with Multiple Clues

Demo Video

Advanced Acoustic Information Systems Laboratory

Professor

Shuichi Sakamoto

Number of Students

1 Ph.D / 8 MS / 3 Intern

Lab Introduction

We aim to understand human auditory information processing and to develop comfortable communication systems based on human hearing properties.

Several Research Topics

Auditory spatial attention

People can extract target sounds from among distributed auditory distractors, a phenomenon well known as the cocktail-party effect. Although many researchers have investigated its mechanism, the whole process is still unclear. We focus on auditory selective attention and investigate how auditory spatial attention affects this phenomenon from both psychological and physiological points of view, using a loudspeaker array installed in our anechoic chamber.

Spherical speaker array in an anechoic chamber

Auditory spatial perception as multimodal perceptual information processing

Auditory space perception is known to involve multimodal perceptual information processing. Various researchers have reported that head rotation improves the accuracy of the perceived sound space. We are investigating the relationship between auditory space perception and the listener's rotatory and linear motions.

Linear rail system

Binaural synthesis using spherical microphone arrays

We have developed various multichannel spherical microphone arrays and published head-related transfer function datasets on our web page. We are developing binaural synthesis methods for advanced virtual reality systems using these core components.

SENZI
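In outline, binaural synthesis convolves a source signal with the head-related impulse responses (HRIRs) of its direction. The sketch below shows only that core operation, with toy HRIRs (a pure delay and attenuation) standing in for the measured HRTF datasets and spherical-array components such as SENZI that the lab's actual systems build on.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono source with left/right HRIRs to obtain the
    two-channel binaural signal for one source direction."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

# Toy HRIRs for a source on the left: right ear delayed and attenuated.
hrir_l = np.array([1.0])
hrir_r = np.array([0.0, 0.0, 0.0, 0.6])
mono = np.array([1.0, 0.0, 0.0])
out_l, out_r = binaural_render(mono, hrir_l, hrir_r)
```

Summing such renderings over many sources (or convolving with full room HRIRs) yields the binaural scene delivered over headphones.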