This is the home for learning materials developed for the NSF-funded project OHPLNet.
This doctoral dissertation research will develop a highly scalable ordinal classification method that can be applied to both structured and unstructured (e.g., image and text) ordinal data. A core component of the method is a loss function called the Ordinal Hyperplane Loss (OHPL). OHPL is designed specifically for data with ordinal classes and enables deep learning techniques to be applied to ordinal classification problems. By minimizing OHPL, a deep neural network learns to map data to an optimal space in which the distance between each point and its class-centroid hyperplane is minimized while a nontrivial ordinal relationship among the classes is maintained. Preliminary experimental results indicate that a deep neural network optimized with OHPL significantly outperforms state-of-the-art alternatives in classification accuracy across multiple datasets. This research will examine strategies for scaling OHPL-based learning to big ordinal data. The investigators will apply OHPL-based learning to critical real-life applications, such as determining the severity or stage of a disease. They will also develop a ready-to-use open-source package implementing the OHPL deep learning strategy and make it publicly available.
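The two objectives described above (pulling points toward their class-centroid hyperplane while keeping class centroids in ordinal order) can be illustrated with a minimal sketch. This is not the project's actual OHPL implementation; the function name `ohpl_style_loss`, the use of a 1-D projection, the squared-distance term, and the hinge-style ordering penalty with a `margin` parameter are all simplifying assumptions for illustration.

```python
import numpy as np

def ohpl_style_loss(scores, labels, margin=1.0):
    """Illustrative OHPL-style ordinal loss (not the official implementation).

    scores: (n,) 1-D projections of samples, e.g. a network's scalar outputs.
    labels: (n,) integer ordinal class labels 0..K-1 (every class present).
    Combines (a) the distance of each point to its own class centroid and
    (b) a hinge penalty when adjacent class centroids violate the ordinal
    ordering by less than `margin`.
    """
    classes = np.unique(labels)
    centroids = np.array([scores[labels == c].mean() for c in classes])

    # (a) pull each point toward the centroid of its own class
    idx = np.searchsorted(classes, labels)
    point_term = np.mean((scores - centroids[idx]) ** 2)

    # (b) keep centroids ordinally ordered: each gap should be >= margin
    gaps = centroids[1:] - centroids[:-1]
    order_term = np.maximum(0.0, margin - gaps).sum()

    return point_term + order_term
```

With well-separated, correctly ordered class scores the loss is near zero; reversing the class order activates the hinge penalty, which is the signal a network trained on such a loss would use to restore the ordinal structure.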
Submissions from 2021
Learning Module on OHPL Text Ordinal Classification, Ying Xie
Submissions from 2020
OHPL Tutorial, Bob Vanderheyden and Ying Xie
Submissions from 2019
Ordinal Hyperplane Loss, Bob Vanderheyden