-
Lab. Research Contents (研究内容)
-
- The research goal of our lab is to create new value based on human sensing.
- Sensing targets include biosignals, human motion, eye movements, and the outputs of human-to-human or human-to-device interactions.
-
-
Keywords (キーワード)
- Body motion measurement;
- Eye-tracking;
- Empirical mode decomposition;
- Deep learning;
- Driver characteristics analysis;
- Human behavior analysis;
- Human-computer interaction (HCI);
- Agricultural work support;
- Vehicle driving;
- User-centered design.
-
Research Topics (研究例)
-
- Analysis of the Falling Risk of Elderly Workers When Mowing on a Slope
-
Mowing is one of the most dangerous tasks in agricultural production and can cause accidents such as falls or accidental cuts. In the mountainous areas of Japan, however, mowing still has to be done with manually operated machines. In this research, we focus on mowing workers' personal factors and analyze their effect on falling risk using a high-precision motion capture device (Xsens MVN) and an eye tracker (Tobii Pro Glasses 3).
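As a concrete illustration of the kind of indicator such motion data supports, the sketch below computes a simple trunk-sway angle from joint positions; the array layout and the sway metric are illustrative assumptions, not the lab's actual pipeline.

```python
# Minimal sketch of one possible fall-risk indicator: trunk sway during mowing.
# Assumes 3D joint positions exported from a motion capture suit (e.g., Xsens MVN);
# shapes and the metric below are illustrative, not the lab's actual pipeline.
import numpy as np

def trunk_sway_angle(pelvis: np.ndarray, sternum: np.ndarray) -> np.ndarray:
    """Angle (deg) between the pelvis->sternum vector and vertical, per frame.

    pelvis, sternum: (n_frames, 3) arrays of x, y, z positions in meters.
    """
    trunk = sternum - pelvis                       # trunk vector per frame
    vertical = np.array([0.0, 0.0, 1.0])           # assume z points up
    cos_a = trunk @ vertical / np.linalg.norm(trunk, axis=1)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Toy data: 100 frames of a slightly swaying trunk.
rng = np.random.default_rng(0)
pelvis = np.zeros((100, 3))
sternum = np.tile([0.0, 0.0, 0.5], (100, 1)) + rng.normal(0, 0.02, (100, 3))

sway = trunk_sway_angle(pelvis, sternum)
print(f"mean sway: {sway.mean():.1f} deg, peak sway: {sway.max():.1f} deg")
```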
-
-
- Multi-Sensor-based Driver Behavior Analysis and Modeling
-
About 90% of traffic accidents are attributed to human error, and human factors may affect a driver's braking behavior and thus driving safety. To determine the effect of different human factors on drivers' pre-braking behavior, this study focused on analyzing drivers' local joint movements with a motion capture device. A Hilbert-Huang Transform (HHT)-based local body movement analysis method was used to decompose realistic, complex pre-braking actions.
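A minimal sketch of this kind of HHT decomposition, assuming the PyEMD package (pip install EMD-signal) and a synthetic signal standing in for a captured joint trajectory:

```python
# Minimal sketch of HHT-style decomposition of a joint-movement signal.
# Assumes the PyEMD package; the synthetic signal is a placeholder for
# a motion-captured joint trajectory recorded during pre-braking.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 5, 1 / fs)
# Toy "pre-braking" signal: slow postural drift plus a faster leg movement.
signal = 0.5 * np.sin(2 * np.pi * 0.4 * t) + 0.2 * np.sin(2 * np.pi * 3.0 * t)

imfs = EMD().emd(signal)                     # empirical mode decomposition -> IMFs

for i, imf in enumerate(imfs):
    analytic = hilbert(imf)                  # Hilbert transform of each IMF
    inst_phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
    print(f"IMF {i}: mean instantaneous frequency = {inst_freq.mean():.2f} Hz")
```

Each IMF isolates one oscillatory mode of the movement, so slower postural adjustments and faster pedal-directed motions can be inspected separately.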
-
Student Research
-
-
- A Walk-through Type Authentication System Design via Gaze Detection and Color Recognition
-
Using a glasses-type eye-tracking device, this study focused on detecting the user's eye movements and proposed a walk-through authentication system based on color recognition. Through a set of preparatory experiments, we tested the usability of the designed system and found that lighting conditions affect its accuracy.
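One way such a system could map gaze onto a color passcode is sketched below, assuming OpenCV; the HSV hue ranges and the gaze coordinates are illustrative placeholders, and a real system would read both from the eye tracker's scene camera.

```python
# Minimal sketch of the gaze-plus-color idea: classify the color under the
# user's gaze point and append it to a passcode sequence. Hue ranges and the
# toy frame below are illustrative assumptions.
import cv2
import numpy as np

# Hypothetical coarse HSV hue ranges (OpenCV hue is 0-179).
COLOR_RANGES = {"red": (0, 10), "green": (40, 80), "blue": (100, 130)}

def color_at_gaze(frame_bgr, gaze_xy):
    """Return the color label of a small patch around the gaze point, if any."""
    x, y = gaze_xy
    patch = frame_bgr[max(y - 5, 0): y + 5, max(x - 5, 0): x + 5]
    hue = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)[:, :, 0].mean()
    for label, (lo, hi) in COLOR_RANGES.items():
        if lo <= hue <= hi:
            return label
    return None

# Toy scene: a frame with a solid green panel, gaze resting on it.
frame = np.zeros((480, 640, 3), np.uint8)
frame[:, 200:400] = (0, 200, 0)                      # green region (BGR)
entered = [color_at_gaze(frame, (300, 240))]         # gaze sample(s)
print("entered sequence:", entered, "accepted:", entered == ["green"])
```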
-
-
- Mask Recognition via AR Smart Glasses
-
-
-
- A Smart Glasses-based Gesture Recognition and Translation System for Sign Languages
-
-
-
- Analyzing the Effects of Driving Experience on Backing Maneuvers Based on Data Collected by Eye-Tracking Devices
-
This study analyzes the impact of driving experience on backing maneuvers using data collected from eye-tracking devices. Real-time gaze data is recorded while drivers back up, and a comparative analysis between novice and experienced drivers investigates differences in their gaze patterns and fixation positions.
The findings reveal distinct disparities between the two groups: novice drivers show a more scattered gaze, tend to focus more on the right door mirror, and switch their eyes back and forth between the two areas of interest.
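A sketch of the underlying area-of-interest (AOI) bookkeeping, with hypothetical AOI rectangles and a toy gaze trace standing in for the eye tracker's export:

```python
# Minimal sketch of the AOI analysis used to compare novice and experienced
# drivers. AOI rectangles, names, and gaze samples are illustrative placeholders.
import numpy as np

AOIS = {  # hypothetical AOI rectangles in scene-camera pixels: (x0, y0, x1, y1)
    "right_mirror": (500, 100, 640, 220),
    "rear_window": (150, 50, 450, 200),
}

def label_sample(x, y):
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

# Toy gaze trace: alternating glances between the two AOIs.
gaze = [(560, 150), (565, 160), (300, 120), (310, 130), (558, 155), (305, 125)]
labels = [label_sample(x, y) for x, y in gaze]

dwell = {name: labels.count(name) for name in list(AOIS) + ["other"]}
switches = sum(a != b for a, b in zip(labels, labels[1:]))
print("samples per AOI:", dwell)
print("AOI-to-AOI switches:", switches)
```

Per-AOI dwell counts and switch rates like these are what get compared between the novice and experienced groups.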
-
-
- A Comparative Study of Sketch Drawing Processes across Different Painting Experience Levels via Eye Movement Analysis
-
By designing and running a set of experiments, we analyze subjects' eye movement data while they sketch imagined object shapes and compare the differences between experienced painters and novices. Specifically, we invited 16 subjects to sketch objects (e.g., a watch) on a canvas while their eye movements were recorded with a glasses-type eye tracker.
Analysis with the Mann-Whitney U test showed that novices' gaze was skewed down and to the left overall, was scattered, and did not remain focused on the center of the picture. In contrast, experienced painters placed the sketch content at the center of the picture, attracting the viewer's attention and effectively conveying the picture's key information.
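The group comparison itself reduces to a standard two-sample test; a minimal sketch with SciPy, using synthetic per-subject statistics in place of the study's data:

```python
# Minimal sketch of the group comparison described above: a Mann-Whitney U test
# on a per-subject gaze statistic (here, mean horizontal fixation position).
# The numbers are synthetic placeholders, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical mean fixation x-positions (0 = left edge, 1 = right edge of canvas).
novices = rng.normal(0.40, 0.05, size=8)        # skewed toward the left
experienced = rng.normal(0.50, 0.05, size=8)    # centered on the canvas

stat, p = mannwhitneyu(novices, experienced, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```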
-
-
- A Smart Glasses-based Real-time Micro-expression Recognition System via Deep Neural Network
-
This research aims to capture micro-expressions in real time through smart glasses and uses deep learning to develop a real-time emotion recognition system based on RGB color values, with the goal of improving interpersonal communication. We trained a multi-layer fully connected deep neural network (DNN) on the CASME2 dataset to learn the association between facial expressions and emotions. In model-level experiments, the system's emotion classification accuracy reached 95% on several test samples. The system is adapted to run on smart glasses: recognition results are immediately fed back to the glasses' screen and displayed to the user as an emotion label. System-level experiments show that the system achieves accurate emotion recognition, helps users better understand the psychological state of their communication partner, and improves the communication environment and quality.
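A minimal sketch of such a fully connected classifier in TensorFlow/Keras; the feature length, layer widths, and five-class label set are assumptions, and the CASME2 preprocessing (face cropping, RGB feature extraction) is omitted:

```python
# Minimal sketch of a fully connected DNN emotion classifier like the one
# described. Input size, layer widths, and the class count are assumptions.
import numpy as np
import tensorflow as tf

NUM_FEATURES = 128   # assumed length of the per-frame RGB feature vector
NUM_CLASSES = 5      # assumed number of emotion labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy stand-in for preprocessed CASME2 features and labels.
x = np.random.rand(200, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=200)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print("predicted label:", int(model.predict(x[:1], verbose=0).argmax()))
```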
-
-
- A Cloud-based Sign Language Translation System via CNN with Smart Glasses
-
This study develops a new cloud-based sign language translation system for smart devices built on a Browser/Server architecture: when a hearing-impaired person signs in front of a user wearing a smart device (e.g., smart glasses), the device's screen displays subtitles for the signed content. We use MediaPipe to recognize and collect sign language motion data from the WLASL dataset and feed it to a TensorFlow 1D-CNN deep learning model for training, realizing the sign language translation function. In the test phase, five experimenters each tested the system ten times, and the final average accuracy was 72%.
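A minimal sketch of this pipeline, pairing MediaPipe hand landmarks with a TensorFlow 1D-CNN over time; the clip length, vocabulary size, and toy input are placeholders for real WLASL data:

```python
# Minimal sketch: MediaPipe hand landmarks as features, fed to a 1D-CNN.
# SEQ_LEN, NUM_CLASSES, and the blank toy frames are illustrative placeholders.
import numpy as np
import mediapipe as mp
import tensorflow as tf

SEQ_LEN, NUM_CLASSES = 30, 20      # assumed frames per clip / sign vocabulary
FEAT = 21 * 3                      # 21 hand landmarks x (x, y, z)

def frame_features(rgb_frame, hands):
    """Flatten the first detected hand's landmarks; zeros if no hand is found."""
    res = hands.process(rgb_frame)
    if not res.multi_hand_landmarks:
        return np.zeros(FEAT, np.float32)
    lm = res.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm], np.float32).ravel()

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, FEAT)),
    tf.keras.layers.Conv1D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy clip: SEQ_LEN blank frames run through the landmark extractor.
with mp.solutions.hands.Hands(static_image_mode=True) as hands:
    clip = np.stack([frame_features(np.zeros((256, 256, 3), np.uint8), hands)
                     for _ in range(SEQ_LEN)])
print("class scores shape:", model.predict(clip[None], verbose=0).shape)
```

In the described system, the landmark extraction would run client-side (Browser) while training and inference live on the server side of the architecture.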
-
-
- A Real-time Gait Recognition Framework for Personal Authentication via Image-Based Neural Network: Accelerated by Feature Reduction in the Time and Frequency Domains
-
In this work, we proposed a real-time MediaPipe-based gait analysis framework and a new Composite Filter Feature Selection (CFFS) method built on the calculation of key body nodes, joint angles, and segment lengths. Based on the proposed method, we extracted the target features into a new dataset and verified it with a 1D-CNN neural network. We also applied the Hilbert-Huang transform to investigate the extracted gait features in the frequency domain, improving the framework's performance so that it runs in real time with higher recognition accuracy. Experimental results show that this gait recognition framework and its data processing pipeline reduce the volume of gait feature data and speed up recognition while maintaining the original recognition accuracy.
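As an illustration of one feature class the framework computes, the sketch below derives a joint angle from pose key points; the MediaPipe Pose landmark indices are noted in the comments, and the coordinates are toy values:

```python
# Minimal sketch of one gait feature: a joint angle computed from pose key
# points. MediaPipe Pose indexes the left hip, knee, and ankle as landmarks
# 23, 25, and 27; the coordinates below are toy values for one frame.
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (deg) formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos_t = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# Toy normalized (x, y) coordinates for left hip, knee, ankle in one frame.
hip, knee, ankle = (0.50, 0.50), (0.52, 0.70), (0.50, 0.90)
print(f"left knee angle: {joint_angle(hip, knee, ankle):.1f} deg")
```

Per-frame angles and lengths like this form the time series that the CFFS method filters and the 1D-CNN consumes.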