Example 6 with DocumentFactory

Use of org.icij.extract.document.DocumentFactory in project datashare by ICIJ.

Class ElasticsearchSpewerTest, method test_long_content_length.

@Test
public void test_long_content_length() throws Exception {
    final TikaDocument document = new DocumentFactory().withIdentifier(new PathIdentifier()).create(get("t-file.txt"));
    final ParsingReader reader = new ParsingReader(new ByteArrayInputStream("test".getBytes()));
    document.setReader(reader);
    document.getMetadata().set("Content-Length", "7862117376");
    spewer.write(document);
    GetResponse documentFields = es.client.get(new GetRequest(TEST_INDEX, document.getId()), RequestOptions.DEFAULT);
    assertThat(documentFields.getSourceAsMap()).includes(entry("contentLength", 7862117376L));
}
Also used: DocumentFactory (org.icij.extract.document.DocumentFactory), ParsingReader (org.apache.tika.parser.ParsingReader), ByteArrayInputStream (java.io.ByteArrayInputStream), GetRequest (org.elasticsearch.action.get.GetRequest), PathIdentifier (org.icij.extract.document.PathIdentifier), TikaDocument (org.icij.extract.document.TikaDocument), GetResponse (org.elasticsearch.action.get.GetResponse), Test (org.junit.Test)
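
The snippet relies on a spewer field and an es fixture that ElasticsearchSpewerTest sets up outside of what is shown on this page. A minimal sketch of how that field is plausibly initialized, assuming it mirrors the inline ElasticsearchSpewer construction in Examples 7 and 8 (the publisher mock must be a field too, since Example 9 verifies it):

Publisher publisher = Mockito.mock(Publisher.class);
// TEST_INDEX appears to be "test-datashare", judging from Example 10
ElasticsearchSpewer spewer = new ElasticsearchSpewer(es.client, l -> Language.ENGLISH,
        new FieldNames(), publisher, new PropertiesProvider())
        .withRefresh(IMMEDIATE).withIndex(TEST_INDEX);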

Example 7 with DocumentFactory

Use of org.icij.extract.document.DocumentFactory in project datashare by ICIJ.

Class SourceExtractorTest, method test_get_source_for_embedded_doc.

@Test
public void test_get_source_for_embedded_doc() throws Exception {
    DocumentFactory tikaFactory = new DocumentFactory().configure(Options.from(new HashMap<String, String>() {

        {
            put("idDigestMethod", Document.HASHER.toString());
        }
    }));
    Path path = get(getClass().getResource("/docs/embedded_doc.eml").getPath());
    Extractor extractor = new Extractor(tikaFactory);
    extractor.setDigester(new UpdatableDigester(TEST_INDEX, Document.HASHER.toString()));
    final TikaDocument document = extractor.extract(path);
    ElasticsearchSpewer spewer = new ElasticsearchSpewer(es.client, l -> Language.ENGLISH, new FieldNames(), Mockito.mock(Publisher.class), new PropertiesProvider()).withRefresh(IMMEDIATE).withIndex(TEST_INDEX);
    spewer.write(document);
    Document attachedPdf = new ElasticsearchIndexer(es.client, new PropertiesProvider()).get(TEST_INDEX, "1bf2b6aa27dd8b45c7db58875004b8cb27a78ced5200b4976b63e351ebbae5ececb86076d90e156a7cdea06cde9573ca", "f4078910c3e73a192e3a82d205f3c0bdb749c4e7b23c1d05a622db0f07d7f0ededb335abdb62aef41ace5d3cdb9298bc");
    assertThat(attachedPdf).isNotNull();
    assertThat(attachedPdf.getContentType()).isEqualTo("application/pdf");
    InputStream source = new SourceExtractor().getSource(project(TEST_INDEX), attachedPdf);
    assertThat(source).isNotNull();
    assertThat(getBytes(source)).hasSize(49779);
}
Also used: Path (java.nio.file.Path), HashMap (java.util.HashMap), InputStream (java.io.InputStream), TikaDocument (org.icij.extract.document.TikaDocument), Publisher (org.icij.datashare.com.Publisher), Document (org.icij.datashare.text.Document), PropertiesProvider (org.icij.datashare.PropertiesProvider), DocumentFactory (org.icij.extract.document.DocumentFactory), UpdatableDigester (org.icij.extract.extractor.UpdatableDigester), FieldNames (org.icij.spewer.FieldNames), Extractor (org.icij.extract.extractor.Extractor), Test (org.junit.Test)
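
The options above are built with the double-brace HashMap idiom, which creates an anonymous HashMap subclass at every call site. Since Options.from is already being fed a plain HashMap here, an equivalent and slightly clearer sketch of the same configuration (variable names are ours, not the project's):

HashMap<String, String> digestOptions = new HashMap<>();
digestOptions.put("idDigestMethod", Document.HASHER.toString());
DocumentFactory tikaFactory = new DocumentFactory().configure(Options.from(digestOptions));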

Example 8 with DocumentFactory

Use of org.icij.extract.document.DocumentFactory in project datashare by ICIJ.

Class SourceExtractorTest, method test_get_source_for_embedded_doc_without_metadata.

@Test
public void test_get_source_for_embedded_doc_without_metadata() throws Exception {
    DocumentFactory tikaFactory = new DocumentFactory().configure(Options.from(new HashMap<String, String>() {

        {
            put("idDigestMethod", Document.HASHER.toString());
        }
    }));
    Path path = get(getClass().getResource("/docs/embedded_doc.eml").getPath());
    Extractor extractor = new Extractor(tikaFactory);
    extractor.setDigester(new UpdatableDigester(TEST_INDEX, Document.HASHER.toString()));
    final TikaDocument document = extractor.extract(path);
    ElasticsearchSpewer spewer = new ElasticsearchSpewer(es.client, l -> Language.ENGLISH, new FieldNames(), Mockito.mock(Publisher.class), new PropertiesProvider()).withRefresh(IMMEDIATE).withIndex(TEST_INDEX);
    spewer.write(document);
    Document attachedPdf = new ElasticsearchIndexer(es.client, new PropertiesProvider()).get(TEST_INDEX, "1bf2b6aa27dd8b45c7db58875004b8cb27a78ced5200b4976b63e351ebbae5ececb86076d90e156a7cdea06cde9573ca", "f4078910c3e73a192e3a82d205f3c0bdb749c4e7b23c1d05a622db0f07d7f0ededb335abdb62aef41ace5d3cdb9298bc");
    InputStream source = new SourceExtractor(true).getSource(project(TEST_INDEX), attachedPdf);
    assertThat(source).isNotNull();
    assertThat(getBytes(source).length).isNotEqualTo(49779);
}
Also used: Path (java.nio.file.Path), HashMap (java.util.HashMap), InputStream (java.io.InputStream), TikaDocument (org.icij.extract.document.TikaDocument), Publisher (org.icij.datashare.com.Publisher), Document (org.icij.datashare.text.Document), PropertiesProvider (org.icij.datashare.PropertiesProvider), DocumentFactory (org.icij.extract.document.DocumentFactory), UpdatableDigester (org.icij.extract.extractor.UpdatableDigester), FieldNames (org.icij.spewer.FieldNames), Extractor (org.icij.extract.extractor.Extractor), Test (org.junit.Test)
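
Compared with Example 7, the only difference is the boolean passed to SourceExtractor, which presumably asks it to filter metadata out of the returned source; that would explain why the byte count no longer matches the 49779-byte attachment fetched in Example 7. A side-by-side sketch (the flag's meaning is an inference from the test name, not stated on this page):

// Example 7: raw embedded source, byte-for-byte identical to the 49779-byte attachment
InputStream raw = new SourceExtractor().getSource(project(TEST_INDEX), attachedPdf);
// Example 8: with the flag set (assumed to mean "filter metadata"), the payload size differs
InputStream filtered = new SourceExtractor(true).getSource(project(TEST_INDEX), attachedPdf);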

Example 9 with DocumentFactory

Use of org.icij.extract.document.DocumentFactory in project datashare by ICIJ.

Class ElasticsearchSpewerTest, method test_simple_write.

@Test
public void test_simple_write() throws Exception {
    final TikaDocument document = new DocumentFactory().withIdentifier(new PathIdentifier()).create(get("test-file.txt"));
    final ParsingReader reader = new ParsingReader(new ByteArrayInputStream("test".getBytes()));
    document.setReader(reader);
    spewer.write(document);
    GetResponse documentFields = es.client.get(new GetRequest(TEST_INDEX, document.getId()), RequestOptions.DEFAULT);
    assertThat(documentFields.isExists()).isTrue();
    assertThat(documentFields.getId()).isEqualTo(document.getId());
    assertEquals(new HashMap<String, String>() {

        {
            put("name", "Document");
        }
    }, documentFields.getSourceAsMap().get("join"));
    ArgumentCaptor<Message> argument = ArgumentCaptor.forClass(Message.class);
    verify(publisher).publish(eq(Channel.NLP), argument.capture());
    assertThat(argument.getValue().content).includes(entry(Field.DOC_ID, document.getId()));
}
Also used: DocumentFactory (org.icij.extract.document.DocumentFactory), Message (org.icij.datashare.com.Message), ParsingReader (org.apache.tika.parser.ParsingReader), ByteArrayInputStream (java.io.ByteArrayInputStream), GetRequest (org.elasticsearch.action.get.GetRequest), PathIdentifier (org.icij.extract.document.PathIdentifier), TikaDocument (org.icij.extract.document.TikaDocument), GetResponse (org.elasticsearch.action.get.GetResponse), Test (org.junit.Test)
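
The expected join value is another double-brace HashMap. Because the assertion only depends on Map.equals, the same check can be written with java.util.Collections.singletonMap; an equivalent sketch, not the project's code:

assertEquals(Collections.singletonMap("name", "Document"), documentFields.getSourceAsMap().get("join"));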

Example 10 with DocumentFactory

Use of org.icij.extract.document.DocumentFactory in project datashare by ICIJ.

Class ElasticsearchSpewerTest, method test_truncated_content_if_document_is_smaller_than_limit.

@Test
public void test_truncated_content_if_document_is_smaller_than_limit() throws Exception {
    ElasticsearchSpewer limitedContentSpewer = new ElasticsearchSpewer(es.client, text -> Language.ENGLISH, new FieldNames(), publisher, new PropertiesProvider(new HashMap<String, String>() {

        {
            put("maxContentLength", "20");
        }
    })).withRefresh(IMMEDIATE).withIndex("test-datashare");
    final TikaDocument document = new DocumentFactory().withIdentifier(new PathIdentifier()).create(get("ok-file.txt"));
    final ParsingReader reader = new ParsingReader(new ByteArrayInputStream("this content is ok".getBytes()));
    document.setReader(reader);
    limitedContentSpewer.write(document);
    GetResponse documentFields = es.client.get(new GetRequest(TEST_INDEX, document.getId()), RequestOptions.DEFAULT);
    assertThat(documentFields.getSourceAsMap()).includes(entry("content", "this content is ok"));
}
Also used: PropertiesProvider (org.icij.datashare.PropertiesProvider), DocumentFactory (org.icij.extract.document.DocumentFactory), FieldNames (org.icij.spewer.FieldNames), HashMap (java.util.HashMap), ParsingReader (org.apache.tika.parser.ParsingReader), ByteArrayInputStream (java.io.ByteArrayInputStream), GetRequest (org.elasticsearch.action.get.GetRequest), PathIdentifier (org.icij.extract.document.PathIdentifier), TikaDocument (org.icij.extract.document.TikaDocument), GetResponse (org.elasticsearch.action.get.GetResponse), Test (org.junit.Test)
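
This test covers the case where the content (18 characters) fits under the 20-character maxContentLength, so it is stored untouched. A hedged sketch of the complementary case, assuming the spewer truncates oversized content rather than rejecting the write; the exact truncated value is not shown on this page, so the assertion only bounds the length (the file name and text below are placeholders):

final TikaDocument longDocument = new DocumentFactory().withIdentifier(new PathIdentifier()).create(get("too-long-file.txt"));
longDocument.setReader(new ParsingReader(new ByteArrayInputStream("this content is way over the configured limit".getBytes())));
limitedContentSpewer.write(longDocument);
GetResponse fields = es.client.get(new GetRequest(TEST_INDEX, longDocument.getId()), RequestOptions.DEFAULT);
// assumption: content beyond maxContentLength is cut down, so at most 20 characters are stored
assertThat(((String) fields.getSourceAsMap().get("content")).length()).isLessThanOrEqualTo(20);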

Aggregations

DocumentFactory (org.icij.extract.document.DocumentFactory): 10 usages
TikaDocument (org.icij.extract.document.TikaDocument): 9 usages
Test (org.junit.Test): 8 usages
HashMap (java.util.HashMap): 6 usages
Extractor (org.icij.extract.extractor.Extractor): 6 usages
UpdatableDigester (org.icij.extract.extractor.UpdatableDigester): 6 usages
GetRequest (org.elasticsearch.action.get.GetRequest): 5 usages
GetResponse (org.elasticsearch.action.get.GetResponse): 5 usages
PropertiesProvider (org.icij.datashare.PropertiesProvider): 5 usages
FieldNames (org.icij.spewer.FieldNames): 5 usages
ByteArrayInputStream (java.io.ByteArrayInputStream): 4 usages
ParsingReader (org.apache.tika.parser.ParsingReader): 4 usages
PathIdentifier (org.icij.extract.document.PathIdentifier): 4 usages
Path (java.nio.file.Path): 3 usages
Publisher (org.icij.datashare.com.Publisher): 3 usages
Document (org.icij.datashare.text.Document): 3 usages
InputStream (java.io.InputStream): 2 usages
DigestIdentifier (org.icij.extract.document.DigestIdentifier): 2 usages
Message (org.icij.datashare.com.Message): 1 usage
Duplicate (org.icij.datashare.text.Duplicate): 1 usage
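
Read together, the examples follow one pipeline: configure a DocumentFactory (with a PathIdentifier or a digest-based identifier), run the file through an Extractor, and hand the resulting TikaDocument to an ElasticsearchSpewer. A condensed sketch assembled from the calls shown above; the path and index are placeholders, and es.client comes from the test fixture:

HashMap<String, String> options = new HashMap<>();
options.put("idDigestMethod", Document.HASHER.toString());
DocumentFactory factory = new DocumentFactory().configure(Options.from(options));
Extractor extractor = new Extractor(factory);
extractor.setDigester(new UpdatableDigester(TEST_INDEX, Document.HASHER.toString()));
TikaDocument document = extractor.extract(get("/docs/embedded_doc.eml")); // placeholder path
ElasticsearchSpewer spewer = new ElasticsearchSpewer(es.client, l -> Language.ENGLISH,
        new FieldNames(), Mockito.mock(Publisher.class), new PropertiesProvider())
        .withRefresh(IMMEDIATE).withIndex(TEST_INDEX);
spewer.write(document);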