Example 1 with FirstIterationFunction

Use of org.deeplearning4j.spark.models.embeddings.word2vec.FirstIterationFunction in the deeplearning4j project.

From the class TextPipelineTest, method testSyn0AfterFirstIteration:

@Test
public void testSyn0AfterFirstIteration() throws Exception {
    JavaSparkContext sc = getContext();
    JavaRDD<String> corpusRDD = getCorpusRDD(sc);
    //  word2vec.setRemoveStop(false);
    Broadcast<Map<String, Object>> broadcastTokenizerVarMap = sc.broadcast(word2vec.getTokenizerVarMap());
    // Tokenize the corpus, build the vocabulary cache and the per-sentence VocabWord lists
    TextPipeline pipeline = new TextPipeline(corpusRDD, broadcastTokenizerVarMap);
    pipeline.buildVocabCache();
    pipeline.buildVocabWordListRDD();
    VocabCache<VocabWord> vocabCache = pipeline.getVocabCache();
    // Assign Huffman codes and points to the vocabulary (used for hierarchical softmax)
    Huffman huffman = new Huffman(vocabCache.vocabWords());
    huffman.build();
    // Get total word count and put into word2vec variable map
    Map<String, Object> word2vecVarMap = word2vec.getWord2vecVarMap();
    word2vecVarMap.put("totalWordCount", pipeline.getTotalWordCount());
    double[] expTable = word2vec.getExpTable();
    JavaRDD<AtomicLong> sentenceCountRDD = pipeline.getSentenceCountRDD();
    JavaRDD<List<VocabWord>> vocabWordListRDD = pipeline.getVocabWordListRDD();
    // Pair each sentence's word list with the cumulative word count up to that sentence
    CountCumSum countCumSum = new CountCumSum(sentenceCountRDD);
    JavaRDD<Long> sentenceCountCumSumRDD = countCumSum.buildCumSum();
    JavaPairRDD<List<VocabWord>, Long> vocabWordListSentenceCumSumRDD = vocabWordListRDD.zip(sentenceCountCumSumRDD);
    // Broadcast the training configuration and the precomputed exp table to the workers
    Broadcast<Map<String, Object>> word2vecVarMapBroadcast = sc.broadcast(word2vecVarMap);
    Broadcast<double[]> expTableBroadcast = sc.broadcast(expTable);
    // Run the first training iteration per partition and map each result to a (VocabWord, syn0 row) pair
    FirstIterationFunction firstIterationFunction = new FirstIterationFunction(word2vecVarMapBroadcast, expTableBroadcast, pipeline.getBroadCastVocabCache());
    JavaRDD<Pair<VocabWord, INDArray>> pointSyn0Vec = vocabWordListSentenceCumSumRDD.mapPartitions(firstIterationFunction).map(new MapToPairFunction());
}
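
The snippet stops after building pointSyn0Vec without consuming it. As a minimal, illustrative sketch (not part of the original test), the pairs could be pulled back to the driver and inspected at the end of the method above; the sample size of 5 and the printing are assumptions for illustration:

    // Illustrative only: collect a few (VocabWord, syn0 row) pairs to the driver and print them
    List<Pair<VocabWord, INDArray>> sample = pointSyn0Vec.take(5);
    for (Pair<VocabWord, INDArray> pair : sample) {
        System.out.println(pair.getFirst().getWord() + " -> " + pair.getSecond());
    }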
Also used :
VocabWord (org.deeplearning4j.models.word2vec.VocabWord)
MapToPairFunction (org.deeplearning4j.spark.models.embeddings.word2vec.MapToPairFunction)
FirstIterationFunction (org.deeplearning4j.spark.models.embeddings.word2vec.FirstIterationFunction)
CountCumSum (org.deeplearning4j.spark.text.functions.CountCumSum)
JavaSparkContext (org.apache.spark.api.java.JavaSparkContext)
Pair (org.deeplearning4j.berkeley.Pair)
TextPipeline (org.deeplearning4j.spark.text.functions.TextPipeline)
AtomicLong (java.util.concurrent.atomic.AtomicLong)
Huffman (org.deeplearning4j.models.word2vec.Huffman)
Test (org.junit.Test)

Aggregations

AtomicLong (java.util.concurrent.atomic.AtomicLong) 1
JavaSparkContext (org.apache.spark.api.java.JavaSparkContext) 1
Pair (org.deeplearning4j.berkeley.Pair) 1
Huffman (org.deeplearning4j.models.word2vec.Huffman) 1
VocabWord (org.deeplearning4j.models.word2vec.VocabWord) 1
FirstIterationFunction (org.deeplearning4j.spark.models.embeddings.word2vec.FirstIterationFunction) 1
MapToPairFunction (org.deeplearning4j.spark.models.embeddings.word2vec.MapToPairFunction) 1
CountCumSum (org.deeplearning4j.spark.text.functions.CountCumSum) 1
TextPipeline (org.deeplearning4j.spark.text.functions.TextPipeline) 1
Test (org.junit.Test) 1