NeurIPS 2020 | Must-Read Recent [Meta-Learning] Paper Picks

The AMiner platform, developed by the Department of Computer Science at Tsinghua University, is fully independent Chinese intellectual property. It hosts a scientific knowledge graph of over 230 million academic papers/patents and 136 million researchers, and provides professional research-intelligence services such as scholar evaluation, expert finding, intelligent reviewer assignment, and academic maps. Online since 2006, the system has attracted more than 10 million unique IP visitors from 220 countries/regions, with 2.3 million data downloads and over 11 million annual visits, making it an important data and experimental platform for academic search and social network mining research.

AMiner平台:https://www.aminer.cn/

Introduction: NeurIPS, the Conference on Neural Information Processing Systems, is a premier international conference in machine learning, held every December. NeurIPS 2020 set a new record with 9,454 submissions, of which 1,900 were accepted; the acceptance rate of 20.09% is lower than last year's.

Meta-learning is not only one of the hotter topics at NeurIPS in recent years, but also an exciting research trend in machine learning at large. Meta-learning, also known as "learning to learn," aims to learn how to learn: it uses knowledge and experience from previous tasks to guide the learning of new tasks, endowing a model with the ability to learn.
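To make the "learning to learn" idea concrete, here is a minimal first-order MAML-style sketch on hypothetical 1-D linear regression tasks. The toy setup, function names, and hyperparameters are illustrative assumptions, not taken from any of the papers below:

```python
import numpy as np

# Toy meta-learning sketch: tasks are 1-D regressions y = a * x, each with a
# different slope a. The meta-parameter theta is adapted per task with one
# inner gradient step; post-adaptation gradients drive the meta-update
# (first-order approximation, i.e. second derivatives are ignored).

def task_grad(theta, a, xs):
    # Gradient of mean((theta*x - a*x)^2) w.r.t. theta:
    # 2 * mean(x^2) * (theta - a)
    return 2.0 * np.mean(xs ** 2) * (theta - a)

def meta_train(tasks, theta=0.0, inner_lr=0.1, meta_lr=0.05, steps=200):
    rng = np.random.default_rng(0)
    xs = rng.uniform(-1, 1, size=32)
    for _ in range(steps):
        meta_grad = 0.0
        for a in tasks:
            # Inner loop: one adaptation step on the task.
            adapted = theta - inner_lr * task_grad(theta, a, xs)
            # Outer loop: accumulate the post-adaptation gradient.
            meta_grad += task_grad(adapted, a, xs)
        theta -= meta_lr * meta_grad / len(tasks)
    return theta
```

With tasks of slope 1.0 and 3.0, the meta-parameter converges toward 2.0, the initialization from which one gradient step adapts best to either task.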
Today we share seven NeurIPS 2020 meta-learning papers, for a look at the frontier research directions of this top conference!

1. Title: Graph Meta Learning via Local Subgraphs
Authors: Huang Kexin, Zitnik Marinka
Link: https://www.aminer.cn/pub/5ee8986891e011e66831c452?conf=neurips2020
Summary:
The authors introduce G-META, a scalable and inductive meta-learning method for graphs.
G-META excels at difficult few-shot graph learning tasks and can tackle a variety of graph meta-learning problems.
The core principle in G-META is to use local subgraphs to identify and transfer useful information across tasks.
The local subgraph approach is fundamentally different from prior work using entire graphs, which captures only broad structure at the expense of finer topological detail.
G-META outperforms nine baselines across seven datasets, including a novel dataset of 1,840 graphs.
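The local-subgraph idea can be illustrated with a small sketch: represent each node by its h-hop neighborhood rather than by the whole graph. The adjacency-list format and the `khop_subgraph` helper below are illustrative assumptions, not the authors' implementation:

```python
from collections import deque

# Extract the h-hop local subgraph around a center node via breadth-first
# search. `adj` maps each node to its list of neighbors (undirected graph).

def khop_subgraph(adj, center, h):
    """Return the node set and edge list of the h-hop subgraph around center."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        u = queue.popleft()
        if dist[u] == h:
            continue  # do not expand beyond h hops
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    nodes = set(dist)
    # Keep only edges with both endpoints inside the subgraph (deduplicated).
    edges = [(u, v) for u in nodes for v in adj[u] if v in nodes and u < v]
    return nodes, edges
```

In a G-META-style pipeline, a GNN would then embed each such subgraph, and meta-learning transfers information between these local structures rather than between entire graphs.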

2. Title: Robust Meta-learning for Mixed Linear Regression with Small Batches
Authors: Weihao Kong, Raghav Somani, Sham Kakade, Sewoong Oh
Link: https://www.aminer.cn/pub/5eede0b091e0116a23aaface?conf=neurips2020
Summary:
By exploiting similarities among a collection of related but distinct tasks, meta-learning predicts a newly arriving task with far greater accuracy than can be achieved in isolation.
Applying the proposed Algorithm 2, it is possible to generalize Proposition 2.6 to this Gaussian setting and achieve an optimal upper bound.
The authors leave this as a future research direction, and provide a sketch of how to adapt the proof of the algorithm to the exponential-tail setting in Section D.

3. Title: Continuous Meta-Learning without Tasks
Authors: Harrison James, Sharma Apoorva, Finn Chelsea, Pavone Marco
Link: https://www.aminer.cn/pub/5dfc9de13a55acedae95f41a?conf=neurips2020
Summary:
The authors investigate the performance of MOCA in three problem settings: one in regression and two in classification.
The authors’ primary goal is to characterize the impact on performance of using MOCA to move from the standard task-segmented meta-learning setting to the task-unsegmented case.
To this end, the authors investigate the performance of MOCA versus an “oracle” model that uses the same base meta-learning algorithm, but has access to exact task segmentation at train and test time.
For Rainbow MNIST and miniImageNet, 95% confidence intervals are computed over five different models.

4. Title: Submodular Meta-Learning
Authors: Arman Adibi, Aryan Mokhtari, Hamed Hassani
Link: https://www.aminer.cn/pub/5f0d802391e011047aff97c6?conf=neurips2020
Summary:
The authors provide two experimental setups to evaluate the performance of the proposed algorithms and compare with other baselines.
Each setup involves a different set of tasks which are represented as submodular maximization problems subject to the k-cardinality constraint.
The authors briefly explain the data and tasks and refer the reader to the supplementary materials for more details.
The authors formalize and solve a facility location problem.
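For context, each task above is a monotone submodular maximization problem under a k-cardinality constraint. The standard greedy algorithm below, shown on a toy facility-location objective (all names and the payoff matrix are illustrative, not the authors' two-stage meta-learning algorithms), gives the classic (1 - 1/e)-approximation that such methods build on:

```python
# Toy facility-location objective: gains[c][f] is the benefit client c gets
# from facility f; each client is served by its best chosen facility.

def facility_location_value(chosen, gains):
    if not chosen:
        return 0.0
    return sum(max(row[f] for f in chosen) for row in gains)

def greedy_max(gains, k):
    """Greedy maximization under a k-cardinality constraint: repeatedly add
    the facility with the largest marginal gain."""
    chosen, candidates = [], set(range(len(gains[0])))
    for _ in range(k):
        best = max(candidates,
                   key=lambda f: facility_location_value(chosen + [f], gains))
        chosen.append(best)
        candidates.remove(best)
    return chosen
```

For a monotone submodular objective such as this one, the greedy solution is guaranteed to achieve at least a (1 - 1/e) fraction of the optimal value.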

5. Title: Modular Meta-Learning with Shrinkage
Authors: Yutian Chen, Abram Friesen, Feryal Behbahani, Arnaud Doucet, David Budden, Matthew Hoffman, Nando de Freitas
Link: https://www.aminer.cn/pub/5f7fdd328de39f0828398034?conf=neurips2020
Summary:
The authors answer all three experimental questions in the affirmative.
In both image classification and text-to-speech, the learned shrinkage priors correspond to meaningful and interesting task-specific modules.
This paper presents a general meta-learning technique to automatically identify task-specific modules in a model for few-shot machine learning problems
It reduces the need for domain knowledge to hand-design task-specific architectures, and can have a positive societal impact by democratizing machine learning techniques.
An example application is to adapt a multilingual text-to-speech model to a low-resource language or dialect for minority ethnic groups

6. Title: Look-ahead Meta Learning for Continual Learning
Authors: Gunshi Gupta, Karmesh Yadav, Liam Paull
Link: https://www.aminer.cn/pub/5f7fdd328de39f0828398093?conf=neurips2020
Summary:
The authors introduced La-MAML, an efficient meta-learning algorithm that leverages replay to avoid forgetting and favor positive backward transfer by learning the weights and LRs in an asynchronous manner.
More work on analysing and producing good optimizers for CL is needed, since many of the standard go-to optimizers like Adam [13] are primarily aimed at ensuring faster convergence in stationary supervised learning setups
Another interesting direction is to explore how the connections to meta-descent can lead to more stable training procedures for meta-learning that can automatically adjust hyper-parameters on-the-fly based on training dynamics
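The asynchronous weight-and-learning-rate update at the heart of La-MAML can be sketched as follows. The gradients here are placeholders and the function is an illustrative assumption about the update order only; the real method meta-differentiates through the inner loop with replay:

```python
import numpy as np

# Sketch of La-MAML's key idea: learn a per-parameter learning rate (LR)
# alongside the weights. The LRs are updated first with their own
# meta-gradient, and the freshly updated LRs (clipped at zero) are then used
# for the weight update, so a negative learned LR effectively freezes a
# parameter and protects it from forgetting.

def la_maml_step(weights, lrs, meta_grad_w, meta_grad_lr, eta_lr=0.01):
    # 1) Update the learnable per-parameter LRs.
    lrs = lrs - eta_lr * meta_grad_lr
    # 2) Update the weights using only non-negative LRs ("asynchronous"
    #    order: LRs first, then weights).
    weights = weights - np.clip(lrs, 0.0, None) * meta_grad_w
    return weights, lrs
```

In the second coordinate of the test below, the learned LR is negative, so the corresponding weight is left untouched, illustrating how the mechanism can halt updates to parameters that would suffer negative transfer.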

7. Title: Meta-Learning Requires Meta-Augmentation
Authors: Janarthanan Rajendran, Alex Irpan, Eric Jang
Link: https://www.aminer.cn/pub/5f0d75d591e011047aff9697?conf=neurips2020
Summary:
Pascal3D pose regression: the authors show that for the regression problem introduced by Yin et al. [37], it is still possible to reduce overfitting via meta-augmentation.
Each task is to take a 128x128 grayscale image of an object from the Pascal 3D dataset and predict its angular orientation yq about the Z-axis, with respect to some unobserved canonical pose specific to each object.
The set of tasks is non-mutually-exclusive because each object is visually distinct, allowing the model to overfit to the poses of training objects and neglect the base learner.
Unregularized models have poor task performance at test time, because the model does not know the canonical poses of novel objects
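The meta-augmentation remedy for such non-mutually-exclusive tasks can be sketched as injecting a task-wide random label offset that the learner can only recover from the support set, never memorize. The angular-pose setup and helper name below are illustrative assumptions:

```python
import random

# Meta-augmentation sketch: shift every angular label in a task by one shared
# random rotation. Because the offset is task-wide and random, the model can
# no longer memorize per-object canonical poses; it must infer the offset
# from the support set, restoring the role of the base learner.

def meta_augment_task(support, query, rng=random):
    """Apply one shared random rotation offset to all (image, angle) pairs."""
    offset = rng.uniform(0, 360)
    aug = lambda pairs: [(x, (y + offset) % 360) for x, y in pairs]
    return aug(support), aug(query)
```

The key invariant is that support and query labels receive the same offset, so the relative angles within a task are preserved while absolute labels become unpredictable.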

To browse more papers by topic, visit the NeurIPS 2020 conference page: https://www.aminer.cn/conf/neurips2020
The latest research directions and the most comprehensive paper data await you.
Scan the QR code to learn more about NeurIPS 2020.

Add "Xiaomai" (小脉) on WeChat and send the message "NeurIPS" to join the NeurIPS discussion group and exchange ideas with more paper authors!