University of Pittsburgh

Multi-Domain Learning by Meta-Learning: Taking Optimal Steps in Multi-Domain Loss Landscapes by Inner-Loop Maximum a Posteriori Estimation

Student
Date: Friday, April 3, 2020, 12:30pm - 1:00pm

Abstract: We consider a model-agnostic solution to the problem of Multi-Domain Learning (MDL). While a number of solutions to MDL exist, they are primarily model-dependent methods that designate separate sets of shared and domain-specific parameters and make architectural modifications to accommodate them. While some of these methods are effective, they are difficult to apply in problem spaces where a particular standard architecture is well established, e.g., the U-Net architecture in semantic segmentation. We therefore consider a weighted loss function (perhaps the simplest solution to MDL) and extend it into an effective procedure by employing techniques from the recently active area of learning-to-learn (meta-learning). Specifically, we take inner-loop gradient steps toward maximum a posteriori (MAP) estimates of the hyper-parameters of our loss function. The immediate result is a method that requires no additional model parameters and no architectural changes; only a relatively efficient algorithmic modification is needed to improve performance on MDL. We demonstrate our solution on a fitting problem in medical imaging: automatic segmentation of white matter hyperintensities (WMH), where we take our domains to be two distinct imaging modalities (T1-MR and FLAIR) with a significant difference in underlying distribution and a large information imbalance.
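To make the procedure concrete, below is a minimal sketch in PyTorch of one way such an inner-loop MAP step could look: per-domain loss weights are parameterized by a softmax over logits, the logits are updated by one gradient step on a negative log-posterior (the weighted losses plus a Gaussian prior that keeps the weights from collapsing onto the easiest domain), and the model is then updated with the re-weighted loss. All names (InnerLoopMAPWeights, outer_step) and the choice of prior are illustrative assumptions, not details from the talk.

import torch

class InnerLoopMAPWeights:
    """Per-domain loss weights w = softmax(z), where the logits z are
    updated by one gradient step on a negative log-posterior:
        sum_d w_d * L_d                     (data term)
      + 0.5 * prior_scale * ||z||^2         (Gaussian prior on logits)
    The prior regularizes the inner step; without it, minimizing the
    data term alone would push all weight onto the smallest loss."""
    def __init__(self, num_domains, inner_lr=0.1, prior_scale=1.0):
        self.z = torch.zeros(num_domains, requires_grad=True)
        self.inner_lr = inner_lr
        self.prior_scale = prior_scale

    def step(self, domain_losses):
        # domain_losses: one scalar loss tensor per domain, detached so
        # the inner loop adapts only the weights, not the model.
        losses = torch.stack([l.detach() for l in domain_losses])
        w = torch.softmax(self.z, dim=0)
        neg_log_post = (w * losses).sum() \
            + 0.5 * self.prior_scale * (self.z ** 2).sum()
        grad, = torch.autograd.grad(neg_log_post, self.z)
        with torch.no_grad():
            self.z -= self.inner_lr * grad   # one inner-loop MAP step
        return torch.softmax(self.z, dim=0).detach()

def outer_step(model, optimizer, batches, criterion, weighter):
    # batches: one (inputs, targets) pair per domain.
    domain_losses = [criterion(model(x), y) for x, y in batches]
    w = weighter.step(domain_losses)         # refresh weights via MAP step
    total = sum(wi * li for wi, li in zip(w, domain_losses))
    optimizer.zero_grad()
    total.backward()                          # update model with weighted loss
    optimizer.step()
    return total.item()

In this sketch the prior_scale hyper-parameter controls how far the weights may drift from uniform between outer steps; the actual prior and update rule used in the talk may differ.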
