ERIC Number: ED659011
Record Type: Non-Journal
Publication Date: 2023
Pages: 202
Abstractor: As Provided
ISBN: 979-8-3828-3405-4
ISSN: N/A
EISSN: N/A
Learning Effective Features with Self-Supervision
Zhiyuan Li
ProQuest LLC, Ph.D. Dissertation, University of Cincinnati
Deep learning techniques are increasingly used for decision support in a wide range of applications. However, training robust deep learning models remains challenging because labeled data is often scarce and time-consuming and labor-intensive to obtain. Self-supervised learning is a feature representation learning paradigm for learning robust features from sparsely annotated datasets. It involves two stages: a pretext task and a downstream task. The model is first pre-trained on the pretext task in an unsupervised manner, where the data itself provides the supervision; it is then fine-tuned on a downstream supervised task. Although self-supervised learning can effectively learn robust latent feature representations and reduce human annotation effort, it relies heavily on well-designed pretext tasks. Studying effective pretext tasks is therefore desirable for learning more effective features and further improving model prediction performance for decision support. Within self-supervised learning, pretext tasks based on deep metric and contrastive learning have received growing attention, as the learned distance representations capture similarity relationships among samples and improve the performance of various supervised and unsupervised learning tasks. In this dissertation, we survey recent state-of-the-art self-supervised learning methods and propose several new deep metric and contrastive learning strategies for learning effective features. First, we propose a new deep metric learning method for image recognition that learns an effective distance metric in both geometric and probabilistic space. Second, we develop a novel contrastive learning method based on the Bregman divergence, extending the contrastive loss function to a more general divergence form and improving the quality of the self-supervised feature representations. Third, we present a new collaborative self-supervised learning method for real radiology applications; it collaboratively learns robust latent feature representations from radiomic data in a self-supervised manner, reducing human annotation effort and benefiting disease diagnosis. Fourth, we propose a joint self-supervised and supervised contrastive learning method that learns an enhanced multimodal feature representation by amalgamating complementary information across modalities and capturing information shared among similar subjects. Finally, we discuss future research directions centered on novel self-supervised learning approaches for domain adaptation and large language modeling. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone (1-800-521-0600) or on the Web: http://bibliotheek.ehb.be:2222/en-US/products/dissertations/individuals.shtml.]
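The abstract names but does not specify its methods; as an illustration of the contrastive pretext-task style it describes, the following is a minimal PyTorch sketch of a standard SimCLR-style NT-Xent (InfoNCE) loss, in which two augmented views of each sample form the positive pair and all other samples in the batch serve as negatives. The function name and temperature value are illustrative assumptions, not the dissertation's.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.5):
        # z1, z2: (N, d) embeddings of two augmented views of the same N samples.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)              # (2N, d)
        sim = z @ z.t() / temperature               # pairwise cosine similarities
        n = z1.size(0)
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(mask, float('-inf'))  # exclude self-similarity
        # The positive for view i is the other view of the same sample (index i +/- N).
        targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
        return F.cross_entropy(sim, targets)

A pretext stage minimizes this loss over unlabeled data; the resulting encoder is then fine-tuned on the labeled downstream task.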
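The Bregman-divergence extension is likewise only named in the abstract. As a hedged sketch of the general idea: the Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y> reduces to the squared Euclidean distance when phi(v) = ||v||^2, so replacing the similarity term of a contrastive loss with a negative Bregman divergence generalizes it to a whole family of divergences. The choice of phi below is one example for illustration, not the dissertation's formulation.

    import torch
    import torch.nn.functional as F

    def bregman_divergence(x, y, phi, grad_phi):
        # Pairwise D_phi(x_i, y_j) = phi(x_i) - phi(y_j) - <grad_phi(y_j), x_i - y_j>.
        # x: (N, d), y: (M, d) -> (N, M) matrix of divergences.
        px = phi(x).unsqueeze(1)   # (N, 1)
        py = phi(y).unsqueeze(0)   # (1, M)
        inner = ((x.unsqueeze(1) - y.unsqueeze(0)) * grad_phi(y).unsqueeze(0)).sum(-1)
        return px - py - inner

    # phi(v) = ||v||^2 recovers the squared Euclidean distance; other convex
    # choices of phi yield other divergences.
    phi = lambda v: (v ** 2).sum(-1)
    grad_phi = lambda v: 2 * v

    def bregman_contrastive_loss(z1, z2, temperature=0.5):
        # Similarity = negative divergence; z1[i] and z2[i] form the positive pair.
        logits = -bregman_divergence(z1, z2, phi, grad_phi) / temperature
        targets = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, targets)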
Descriptors: Self Management, Learning Strategies, Supervision, Task Analysis, Cooperative Learning, Learning Processes, Models
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site: http://bibliotheek.ehb.be:2222/en-US/products/dissertations/individuals.shtml
Publication Type: Dissertations/Theses - Doctoral Dissertations
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A