Deeplearning4j LSTM Example

Problem description:

I want to understand the LSTM in Deeplearning4j. I'm reading through the example's source code, but I can't understand this part:

    //Allocate space:
    //Note the order here:
    // dimension 0 = number of examples in minibatch
    // dimension 1 = size of each vector (i.e., number of characters)
    // dimension 2 = length of each time series/example
    INDArray input = Nd4j.zeros(currMinibatchSize, validCharacters.length, exampleLength);
    INDArray labels = Nd4j.zeros(currMinibatchSize, validCharacters.length, exampleLength);

Why do we store a 3D array here, and what does it mean?

What is the name of the example file you took this code from? –

https://github.com/deeplearning4j/dl4j-0.4-examples/blob/master/src/main/java/org/deeplearning4j/examples/recurrent/character/CharacterIterator.java Take a look at the `next` method. – Nueral

Nueral - please join the Deeplearning4j community on Gitter; they will answer your questions: https://gitter.im/deeplearning4j/deeplearning4j – tremstat

Good question. But this has less to do with how the LSTM operates than with the task itself. The task here is to predict the next character. Predicting the next character has two aspects: classification and approximation. If we were dealing with approximation alone, a one-dimensional array would suffice. But since we are dealing with both approximation and classification, we cannot simply feed the network a normalized ASCII value for each character. We need to convert each character into an array.

For example, 'a' (lowercase, not capital) would be represented this way:

    1, 0, 0, 0, ..., 0    (a 1 at index 0, zeros everywhere else)

'b' (also lowercase) would be represented as:

    0, 1, 0, 0, ..., 0

'c' would be represented as:

    0, 0, 1, 0, ..., 0

And 'Z' (capital Z!) would be represented as:

    0, 0, 0, 0, ..., 1    (a 1 at the last index)
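The vectors above are one-hot encodings. A minimal sketch in plain Java of the idea, assuming a hypothetical 52-character alphabet (lowercase a-z followed by uppercase A-Z) and a made-up `oneHot` helper; the real example's `validCharacters` set is larger:

```java
public class OneHotDemo {
    // Hypothetical alphabet: a-z then A-Z (the real CharacterIterator set differs)
    static final char[] VALID_CHARACTERS = new char[52];
    static {
        for (int i = 0; i < 26; i++) VALID_CHARACTERS[i] = (char) ('a' + i);
        for (int i = 0; i < 26; i++) VALID_CHARACTERS[26 + i] = (char) ('A' + i);
    }

    // One-hot encode: 1.0 at the character's index in the alphabet, 0.0 elsewhere
    static double[] oneHot(char c) {
        double[] v = new double[VALID_CHARACTERS.length];
        for (int i = 0; i < VALID_CHARACTERS.length; i++) {
            if (VALID_CHARACTERS[i] == c) {
                v[i] = 1.0;
                return v;
            }
        }
        throw new IllegalArgumentException("Character not in alphabet: " + c);
    }

    public static void main(String[] args) {
        double[] a = oneHot('a');  // 1.0 at index 0
        double[] z = oneHot('Z');  // 1.0 at the last index (51)
        System.out.println(a[0] + " " + z[51]);
    }
}
```

With this encoding, classifying "which character comes next" becomes predicting a probability for each index, which is exactly what a softmax output layer produces.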

So each character gives us a one-dimensional array, each example (a sequence of characters) gives a two-dimensional array, and a minibatch of examples gives a three-dimensional one. How are these dimensions arranged? The code comment explains:

    // dimension 0 = number of examples in minibatch
    // dimension 1 = size of each vector (i.e., number of characters)
    // dimension 2 = length of each time series/example
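As a sketch with plain Java arrays rather than ND4J (the sizes here — 2 examples, 52 characters, 10 time steps — are made up for illustration), the shape produced by `Nd4j.zeros(currMinibatchSize, validCharacters.length, exampleLength)` looks like this:

```java
public class MinibatchShapeDemo {
    public static void main(String[] args) {
        int minibatchSize = 2;   // dimension 0: number of examples in the minibatch
        int numCharacters = 52;  // dimension 1: size of each one-hot vector
        int exampleLength = 10;  // dimension 2: time steps per example

        // Same shape as Nd4j.zeros(minibatchSize, numCharacters, exampleLength),
        // initialized to all zeros
        double[][][] input = new double[minibatchSize][numCharacters][exampleLength];

        // "Character index 0 occurs at time step 3 of example 1" — the
        // plain-array analogue of input.putScalar(new int[]{1, 0, 3}, 1.0)
        input[1][0][3] = 1.0;

        System.out.println(input.length + " x " + input[0].length
                + " x " + input[0][0].length);  // 2 x 52 x 10
    }
}
```

Reading down dimension 1 at a fixed example and time step recovers one one-hot vector, i.e. one character.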

I sincerely commend your effort to understand how an LSTM works, but the code you pointed to applies to all kinds of neural networks: it shows how to prepare text data for a neural network, not how the LSTM itself works. You need to look at a different part of the source code.

@Nueral Does that make sense? –