The WikiText-103 Dataset
WikiText-103 is a collection of over 100 million tokens extracted from Wikipedia's verified Good and Featured articles. It is widely used for language modeling, including the pretrained models that the fastai library and the ULMFiT algorithm rely on; a short sketch of how that pretraining is used in practice follows below.
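The example below is a minimal sketch, assuming fastai v2 is installed: it loads the AWD-LSTM weights pretrained on WikiText-103 and fine-tunes them on the small IMDB sample shipped with the library. The dataset choice and the single epoch are placeholders for illustration only.

```python
from fastai.text.all import *
import pandas as pd

# Grab a small sample corpus to fine-tune on (a stand-in for your own text data).
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')

# Build language-model dataloaders: is_lm=True makes the target the next token.
dls = TextDataLoaders.from_df(df, path=path, text_col='text', is_lm=True)

# pretrained=True loads AWD-LSTM weights that were trained on WikiText-103.
learn = language_model_learner(dls, AWD_LSTM, pretrained=True,
                               metrics=[accuracy, Perplexity()])

# One quick fine-tuning epoch on the new corpus (the ULMFiT fine-tuning stage).
learn.fine_tune(1)
```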
The abstract of the paper that introduced the corpus, Pointer Sentinel Mixture Models (Merity et al., 2016), reads:
Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. Our pointer sentinel-LSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard softmax LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and larger corpora we also introduce the freely available WikiText corpus.
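Since the result above is reported in perplexity, here is a quick sketch of what that number means: perplexity is the exponential of the average per-token negative log-likelihood, so a perplexity of 70.9 says the model is on average about as uncertain as a uniform choice among roughly 71 words. The probabilities below are made up purely for illustration.

```python
import math

def perplexity(token_log_probs):
    """Perplexity from the natural-log probabilities a model assigned to each target token."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Toy example with made-up per-token probabilities 0.1, 0.02 and 0.05:
print(perplexity([math.log(0.1), math.log(0.02), math.log(0.05)]))  # ~21.5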
You can download the dataset from the official website; I have also shared a copy on Baidu Netdisk. Follow my WeChat public account and reply "2020082101" to get the download link.
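Once downloaded, a quick sanity check is worthwhile. The sketch below assumes the standard wikitext-103-v1 layout, i.e. an extracted wikitext-103/ directory containing wiki.train.tokens, wiki.valid.tokens and wiki.test.tokens; adjust the path if your copy is organized differently.

```python
from pathlib import Path

data_dir = Path('wikitext-103')  # assumed extraction directory; change to your own path

for split in ('train', 'valid', 'test'):
    text = (data_dir / f'wiki.{split}.tokens').read_text(encoding='utf-8')
    tokens = text.split()  # the files ship already whitespace-tokenized
    print(f'{split}: {len(tokens):,} tokens')
```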
Whenever I have time, I try to write articles to share and exchange ideas with everyone.
My WeChat public account:
**** blog: https://blog.****.net/ispeasant