Deep Learning

Associate director of the Bayesian learning team

Bohyung Han (POSTECH)

Research areas: 

(1) Deep learning for semantic segmentation

(2) Deep learning for visual tracking

Participating professors of the Bayesian learning team

Chang D. Yoo (KAIST)

Research areas: 

(1) Feed-forward neural network for non-sequential data modeling and learning

(2) Recurrent neural network for sequential data modeling and reinforcement learning

Hwanjo Yu (POSTECH)

Research areas: 

(1) Recommendation system with deep learning

Sungroh Yoon (SNU)

Research areas: 

(1) Scalable deep learning with Apache Spark

(2) Dynamic scheduler for scaling up deep learning

Our goal in this project is to achieve state-of-the-art performance in semantic segmentation based on deep learning. Specifically, we learn a deconvolution network for semantic segmentation, in which abstract information about objects and scenes is decoded to generate a segmentation map. In addition, we are interested in weakly-supervised semantic segmentation; our focus is to train a deep network with a small number of training examples and/or training data with weak annotations.
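As a minimal sketch of the decoding idea (not our actual model; the framework choice and all layer sizes are illustrative assumptions), the following PyTorch snippet pairs a convolutional encoder with a transposed-convolution (deconvolution) decoder that restores the input resolution to produce a per-pixel score map:

    # Sketch only: a convolutional encoder abstracts the image, and a
    # deconvolution decoder upsamples back to a segmentation score map.
    import torch
    import torch.nn as nn

    class DeconvSegNet(nn.Module):
        def __init__(self, num_classes=21):  # assumed number of classes
            super().__init__()
            # Encoder: abstracts the image into low-resolution features.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Decoder: deconvolutions restore the spatial resolution.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, num_classes, 4, stride=2, padding=1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))  # (N, classes, H, W)

    x = torch.randn(1, 3, 224, 224)
    print(DeconvSegNet()(x).shape)  # torch.Size([1, 21, 224, 224])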
We plan to develop a novel tracking algorithm based on convolutional neural networks (CNNs). One idea is to exploit a pre-trained CNN for both image representation and target localization through transfer learning. Another approach is to learn a completely new CNN customized for visual tracking from video data and to update the network online for adaptive appearance modeling. We have confirmed that both methods work very well in practice and are currently refining them to achieve the best performance.
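The snippet below illustrates the transfer-learning idea under stated assumptions: features from a CNN pre-trained on ImageNet (torchvision's VGG-16 here, as a hypothetical stand-in) are transferred unchanged, while a small target-versus-background head is updated online as new samples are collected.

    # Sketch only: frozen pre-trained features + an online-updated head.
    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.vgg16(pretrained=True).features  # transferred, frozen
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False

    head = nn.Linear(512 * 7 * 7, 2)  # target vs. background, updated online
    opt = torch.optim.SGD(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def online_update(patches, labels):
        """One adaptive appearance-model update on newly collected samples.

        patches: (N, 3, 224, 224) crops around candidate locations (assumed).
        labels:  (N,) with 1 for target, 0 for background.
        """
        feats = backbone(patches).flatten(1)
        loss = loss_fn(head(feats), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()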
In this research center, we study feed-forward neural networks (e.g., convolutional neural networks) for non-sequential data modeling. A convolutional neural network consists of several hidden layers, as shown in the figure, and learns its parameters by backpropagating the errors of an objective function chosen for the task at hand (e.g., classification, regression, or structured output). By selecting the appropriate objective function at the top layer, the feed-forward neural network can be applied to a wide variety of tasks.
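To make this concrete, the sketch below (layer sizes are assumptions) builds one shared feed-forward body and attaches interchangeable top-layer objectives: cross-entropy for classification and squared error for regression.

    # Sketch only: the same body supports different tasks by swapping the
    # top-layer objective; errors are backpropagated through the body.
    import torch
    import torch.nn as nn

    body = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(16 * 14 * 14, 64), nn.ReLU(),
    )
    cls_head = nn.Linear(64, 10)  # classification: cross-entropy objective
    reg_head = nn.Linear(64, 1)   # regression: squared-error objective

    x = torch.randn(8, 1, 28, 28)
    h = body(x)
    cls_loss = nn.CrossEntropyLoss()(cls_head(h), torch.randint(0, 10, (8,)))
    reg_loss = nn.MSELoss()(reg_head(h), torch.randn(8, 1))
    cls_loss.backward()  # errors from the chosen objective train the body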
In this research center, we study recurrent neural networks (RNNs) for modeling sequential data (e.g., speech and handwritten text). Most conventional approaches to sequential data modeling adopt the hidden Markov model (HMM); however, the HMM imposes restricted forms on its transition and emission distributions. To overcome this limitation, we adopt the RNN, which unrolls its hidden layers along the time axis. Given a sequence of length at most T, it can classify or regress the ground truth at each time step t.
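A minimal sketch, with assumed input and hidden sizes, of an RNN unrolled along the time axis that emits a class score at every time step:

    # Sketch only: per-time-step prediction from an unrolled RNN.
    import torch
    import torch.nn as nn

    T, batch, in_dim, hid, classes = 50, 4, 13, 64, 10
    rnn = nn.RNN(in_dim, hid, num_layers=2)  # hidden layers along time
    readout = nn.Linear(hid, classes)

    x = torch.randn(T, batch, in_dim)        # e.g. speech feature frames
    h, _ = rnn(x)                            # (T, batch, hid)
    logits = readout(h)                      # class scores at every step t
    print(logits.shape)                      # torch.Size([50, 4, 10])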
In this research center, we aim to develop a novel recommendation system that combines collaborative filtering with a topic model constructed by deep learning. Traditional collaborative filtering techniques build latent models of users and items from rating data; however, they cannot provide accurate recommendations for new users or items. Other approaches address this cold-start problem by analyzing item description documents via LDA, but they have trouble effectively utilizing the contextual information in those documents. We propose a novel recommendation method that utilizes both collaborative and contextual information by integrating a convolutional neural network into probabilistic matrix factorization. It can make accurate recommendations for both cold and hot items.
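The following is a simplified sketch of this integration (not our exact formulation; the sizes and the document encoder are assumptions): item latent vectors are regularized toward a CNN encoding of each item's description document, so cold items without ratings still obtain meaningful vectors.

    # Sketch only: matrix factorization with a CNN prior on item vectors.
    import torch
    import torch.nn as nn

    n_users, n_items, k, vocab, doc_len = 100, 50, 8, 1000, 30
    U = nn.Parameter(0.1 * torch.randn(n_users, k))  # user latent model
    V = nn.Parameter(0.1 * torch.randn(n_items, k))  # item latent model

    class DocCNN(nn.Module):                         # hypothetical encoder
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(vocab, 16)
            self.conv = nn.Conv1d(16, k, kernel_size=3, padding=1)

        def forward(self, docs):                     # docs: (n_items, doc_len)
            e = self.emb(docs).transpose(1, 2)       # (n_items, 16, doc_len)
            return self.conv(e).max(dim=2).values    # one k-dim vector per item

    doc_cnn = DocCNN()

    def loss(ratings, mask, docs, lam=1.0):
        pred = U @ V.t()                             # predicted rating u_i . v_j
        fit = ((ratings - pred)[mask] ** 2).sum()    # fit observed ratings only
        tie = lam * ((V - doc_cnn(docs)) ** 2).sum() # pull v_j toward CNN(doc_j)
        return fit + tie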
We develop a distributed platform for scaling out deep neural network (DNN) training. DNNs have recently achieved state-of-the-art performance in a variety of domains, including visual recognition, speech recognition, and many others. The performance of deep learning can be drastically improved by increasing the scale of network training, which inherently leads to a large number of parameters to learn. In this research, we exploit data and model parallelism on a massively parallel cluster of commodity computers to address the computational challenge of training deep neural networks. In particular, Apache Spark provides fault tolerance and a powerful data abstraction, which help guarantee the reliability and scalability of the training platform.
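As an illustration of the data-parallel idea only (a toy linear model stands in for a DNN; sizes and rates are assumptions), the PySpark sketch below lets each partition compute a gradient over its data shard and averages the shards with treeAggregate before the driver applies the update:

    # Sketch only: synchronous data-parallel gradient descent on Spark.
    import numpy as np
    from pyspark import SparkContext

    sc = SparkContext(appName="data-parallel-sgd-sketch")
    w = np.zeros(10)                          # model parameters on the driver

    data = sc.parallelize(
        [(np.random.randn(10), float(np.random.randn()))
         for _ in range(10000)],
        8,                                    # shards processed in parallel
    ).cache()
    n = data.count()

    for step in range(100):
        w_b = sc.broadcast(w)                 # ship current parameters out
        grad = data.map(
            lambda xy: (xy[0].dot(w_b.value) - xy[1]) * xy[0]
        ).treeAggregate(np.zeros(10), lambda a, g: a + g, lambda a, b: a + b)
        w = w - 0.01 * grad / n               # averaged gradient step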
In addition to the distributed DNN training framework described above, we develop a dynamic scheduler that fully utilizes the computing resources (i.e., multi-core CPUs, GPGPUs, and Xeon Phi units) available in each node of the cluster. Using this variety of system resources contributes to scaling up deep learning algorithms and their applications. We plan to profile widely used DNN algorithms, identifying commonly and frequently used submodules and thoroughly analyzing their parallelism. Based on this analysis, we will accelerate each submodule by parallelizing it or mapping it to the optimal computing unit(s).
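The scheduling idea can be sketched as follows; every device name and cost below is hypothetical, and the greedy least-loaded policy is only one possible choice, not our final design:

    # Sketch only: profiled submodules are greedily mapped to whichever
    # computing unit currently has the least accumulated load.
    import heapq

    devices = ["cpu-cores", "gpgpu-0", "gpgpu-1", "xeon-phi-0"]
    load = [(0.0, d) for d in devices]        # (accumulated cost, device)
    heapq.heapify(load)

    profiled = [("conv-fwd", 4.2), ("fc-fwd", 1.1), ("conv-bwd", 6.3),
                ("softmax", 0.4)]             # (submodule, profiled cost)

    schedule = {}
    for name, cost in sorted(profiled, key=lambda t: -t[1]):
        busy, dev = heapq.heappop(load)       # least-loaded unit so far
        schedule[name] = dev
        heapq.heappush(load, (busy + cost, dev))
    print(schedule)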