Example 6 with AnalysisOutputFile

Use of ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile in project irida by phac-nml, from the class AnalysisController, method getNewickForAnalysis.

/**
 * Get a newick file associated with a specific {@link AnalysisSubmission}.
 *
 * @param submissionId {@link Long} id for an {@link AnalysisSubmission}
 * @return {@link Map} containing the newick file contents.
 * @throws IOException if the newick file could not be read
 */
@RequestMapping("/ajax/{submissionId}/newick")
@ResponseBody
public Map<String, Object> getNewickForAnalysis(@PathVariable Long submissionId) throws IOException {
    final String treeFileKey = "tree";
    AnalysisSubmission submission = analysisSubmissionService.read(submissionId);
    Analysis analysis = submission.getAnalysis();
    AnalysisOutputFile file = analysis.getAnalysisOutputFile(treeFileKey);
    List<String> lines = Files.readAllLines(file.getFile());
    return ImmutableMap.of("newick", lines.get(0));
}
Also used : Analysis(ca.corefacility.bioinformatics.irida.model.workflow.analysis.Analysis) AnalysisSubmission(ca.corefacility.bioinformatics.irida.model.workflow.submission.AnalysisSubmission) AnalysisOutputFile(ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile)
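The controller above returns only the first line of the tree file (newick trees are conventionally written as a single line terminated by `;`). Stripped of the IRIDA types, the file-reading step can be sketched with the standard library alone; `NewickReader` and the temp-file path are hypothetical stand-ins for `AnalysisOutputFile#getFile`:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;

public class NewickReader {

    // Mirrors the controller's approach: read all lines, keep the first.
    public static Map<String, Object> readNewick(Path treeFile) throws IOException {
        List<String> lines = Files.readAllLines(treeFile);
        return Map.of("newick", lines.get(0));
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("tree", ".newick");
        Files.writeString(tmp, "(A:0.1,B:0.2,(C:0.3,D:0.4):0.5);\n");
        System.out.println(readNewick(tmp).get("newick"));
    }
}
```

Note that `lines.get(0)` throws `IndexOutOfBoundsException` on an empty file; the controller relies on the pipeline always producing a non-empty tree file.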

Example 7 with AnalysisOutputFile

Use of ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile in project irida by phac-nml, from the class SISTRSampleUpdaterTest, method testUpdaterPassed.

@SuppressWarnings("unchecked")
@Test
public void testUpdaterPassed() throws PostProcessingException, AnalysisAlreadySetException {
    ImmutableMap<String, String> expectedResults = ImmutableMap.of("SISTR serovar", "Enteritidis", "SISTR cgMLST Subspecies", "enterica", "SISTR QC Status", "PASS");
    Path outputPath = Paths.get("src/test/resources/files/sistr-predictions-pass.json");
    AnalysisOutputFile outputFile = new AnalysisOutputFile(outputPath, null, null, null);
    Analysis analysis = new Analysis(null, ImmutableMap.of("sistr-predictions", outputFile), null, null);
    AnalysisSubmission submission = AnalysisSubmission.builder(UUID.randomUUID()).inputFiles(ImmutableSet.of(new SingleEndSequenceFile(null))).build();
    submission.setAnalysis(analysis);
    Sample sample = new Sample();
    sample.setId(1L);
    ImmutableMap<MetadataTemplateField, MetadataEntry> metadataMap = ImmutableMap.of(new MetadataTemplateField("SISTR Field", "text"), new MetadataEntry("Value1", "text"));
    when(metadataTemplateService.getMetadataMap(any(Map.class))).thenReturn(metadataMap);
    updater.update(Lists.newArrayList(sample), submission);
    ArgumentCaptor<Map> mapCaptor = ArgumentCaptor.forClass(Map.class);
    // this is the important bit.  Ensures the correct values got pulled from the file
    verify(metadataTemplateService).getMetadataMap(mapCaptor.capture());
    Map<String, MetadataEntry> metadata = mapCaptor.getValue();
    int found = 0;
    for (Map.Entry<String, MetadataEntry> e : metadata.entrySet()) {
        if (expectedResults.containsKey(e.getKey())) {
            String expected = expectedResults.get(e.getKey());
            MetadataEntry value = e.getValue();
            assertEquals("metadata values should match", expected, value.getValue());
            found++;
        }
    }
    assertEquals("should have found the same number of results", expectedResults.keySet().size(), found);
    // this bit just ensures the merged data got saved
    verify(sampleService).updateFields(eq(sample.getId()), mapCaptor.capture());
    Map<MetadataTemplateField, MetadataEntry> value = (Map<MetadataTemplateField, MetadataEntry>) mapCaptor.getValue().get("metadata");
    assertEquals(metadataMap.keySet().iterator().next(), value.keySet().iterator().next());
}
Also used : Path(java.nio.file.Path) Sample(ca.corefacility.bioinformatics.irida.model.sample.Sample) AnalysisSubmission(ca.corefacility.bioinformatics.irida.model.workflow.submission.AnalysisSubmission) SingleEndSequenceFile(ca.corefacility.bioinformatics.irida.model.sequenceFile.SingleEndSequenceFile) Analysis(ca.corefacility.bioinformatics.irida.model.workflow.analysis.Analysis) MetadataEntry(ca.corefacility.bioinformatics.irida.model.sample.metadata.MetadataEntry) MetadataTemplateField(ca.corefacility.bioinformatics.irida.model.sample.MetadataTemplateField) AnalysisOutputFile(ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile) Map(java.util.Map) ImmutableMap(com.google.common.collect.ImmutableMap) Test(org.junit.Test)
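The counting loop in the middle of the test is a generic pattern: walk a captured map, count the entries whose key appears in an expected map with a matching value, then assert the count equals the expected map's size. A standalone sketch of that check, with plain String values standing in for MetadataEntry:

```java
import java.util.Map;

public class MapMatcher {

    // Returns how many entries of `captured` match an entry in `expected`,
    // mirroring the test's `found` counter.
    public static int countMatches(Map<String, String> expected, Map<String, String> captured) {
        int found = 0;
        for (Map.Entry<String, String> e : captured.entrySet()) {
            if (expected.containsKey(e.getKey()) && expected.get(e.getKey()).equals(e.getValue())) {
                found++;
            }
        }
        return found;
    }

    public static void main(String[] args) {
        Map<String, String> expected = Map.of("SISTR serovar", "Enteritidis", "SISTR QC Status", "PASS");
        Map<String, String> captured = Map.of("SISTR serovar", "Enteritidis", "SISTR QC Status", "PASS", "extra", "x");
        System.out.println(countMatches(expected, captured)); // prints 2
    }
}
```

Checking `found` against `expectedResults.keySet().size()` (rather than just asserting each lookup) is what guarantees no expected key was silently absent from the captured map.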

Example 8 with AnalysisOutputFile

Use of ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile in project irida by phac-nml, from the class SISTRSampleUpdaterTest, method testUpdaterBadFile.

@Test(expected = PostProcessingException.class)
public void testUpdaterBadFile() throws PostProcessingException, AnalysisAlreadySetException {
    ImmutableMap<String, String> expectedResults = ImmutableMap.of("SISTR serovar", "Enteritidis", "SISTR cgMLST Subspecies", "enterica", "SISTR QC Status", "PASS");
    Path outputPath = Paths.get("src/test/resources/files/snp_tree.tree");
    AnalysisOutputFile outputFile = new AnalysisOutputFile(outputPath, null, null, null);
    Analysis analysis = new Analysis(null, ImmutableMap.of("sistr-predictions", outputFile), null, null);
    AnalysisSubmission submission = AnalysisSubmission.builder(UUID.randomUUID()).inputFiles(ImmutableSet.of(new SingleEndSequenceFile(null))).build();
    submission.setAnalysis(analysis);
    Sample sample = new Sample();
    sample.setId(1L);
    updater.update(Lists.newArrayList(sample), submission);
}
Also used : Path(java.nio.file.Path) Analysis(ca.corefacility.bioinformatics.irida.model.workflow.analysis.Analysis) Sample(ca.corefacility.bioinformatics.irida.model.sample.Sample) AnalysisSubmission(ca.corefacility.bioinformatics.irida.model.workflow.submission.AnalysisSubmission) AnalysisOutputFile(ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile) SingleEndSequenceFile(ca.corefacility.bioinformatics.irida.model.sequenceFile.SingleEndSequenceFile) Test(org.junit.Test)

Example 9 with AnalysisOutputFile

Use of ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile in project irida by phac-nml, from the class SISTRSampleUpdaterTest, method testUpdaterNoFile.

@Test(expected = PostProcessingException.class)
public void testUpdaterNoFile() throws PostProcessingException, AnalysisAlreadySetException {
    ImmutableMap<String, String> expectedResults = ImmutableMap.of("SISTR serovar", "Enteritidis", "SISTR cgMLST Subspecies", "enterica", "SISTR QC Status", "PASS");
    Path outputPath = Paths.get("src/test/resources/files/not_really_a_file.txt");
    AnalysisOutputFile outputFile = new AnalysisOutputFile(outputPath, null, null, null);
    Analysis analysis = new Analysis(null, ImmutableMap.of("sistr-predictions", outputFile), null, null);
    AnalysisSubmission submission = AnalysisSubmission.builder(UUID.randomUUID()).inputFiles(ImmutableSet.of(new SingleEndSequenceFile(null))).build();
    submission.setAnalysis(analysis);
    Sample sample = new Sample();
    sample.setId(1L);
    updater.update(Lists.newArrayList(sample), submission);
}
Also used : Path(java.nio.file.Path) Analysis(ca.corefacility.bioinformatics.irida.model.workflow.analysis.Analysis) Sample(ca.corefacility.bioinformatics.irida.model.sample.Sample) AnalysisSubmission(ca.corefacility.bioinformatics.irida.model.workflow.submission.AnalysisSubmission) AnalysisOutputFile(ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile) SingleEndSequenceFile(ca.corefacility.bioinformatics.irida.model.sequenceFile.SingleEndSequenceFile) Test(org.junit.Test)
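Both failure tests (a newick file where JSON was expected, and a path that does not exist) rely on the updater converting I/O and parse problems into a PostProcessingException. A minimal sketch of that wrapping pattern, using a hypothetical `ProcessingException` and `readJsonOutput` in place of IRIDA's actual exception and parsing code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OutputFileValidator {

    // Hypothetical stand-in for IRIDA's PostProcessingException.
    public static class ProcessingException extends Exception {
        public ProcessingException(String message) {
            super(message);
        }
        public ProcessingException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    // Reads the output file and fails fast if it is missing or clearly not JSON,
    // mirroring the behaviour the two tests above expect from the updater.
    public static String readJsonOutput(Path path) throws ProcessingException {
        if (!Files.exists(path)) {
            throw new ProcessingException("output file does not exist: " + path);
        }
        String content;
        try {
            content = Files.readString(path).trim();
        } catch (IOException e) {
            throw new ProcessingException("could not read output file: " + path, e);
        }
        if (!(content.startsWith("{") || content.startsWith("["))) {
            throw new ProcessingException("output file is not JSON: " + path);
        }
        return content;
    }
}
```

Wrapping the low-level failures in one checked exception type is what lets both tests assert the same `expected = PostProcessingException.class` regardless of which underlying problem occurred.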

Example 10 with AnalysisOutputFile

Use of ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile in project irida by phac-nml, from the class AnalysisExecutionServiceGalaxyIT, method testTransferAnalysisResultsSuccessPhylogenomicsPairedNoParameters.

/**
 * Tests out getting analysis results successfully for phylogenomics
 * pipeline (paired test version with no parameters, using defaults).
 *
 * @throws Exception if an error occurs while preparing, executing, or transferring the analysis
 */
@Test
@WithMockUser(username = "aaron", roles = "ADMIN")
public void testTransferAnalysisResultsSuccessPhylogenomicsPairedNoParameters() throws Exception {
    String validCoverageFromProvenance = "\"10\"";
    String validMidCoverageFromProvenance = "10";
    // Verify parameters were set correctly by checking the output file
    // (where the parameters were printed).
    String validTreeFile = "10 10 10";
    AnalysisSubmission analysisSubmission = analysisExecutionGalaxyITService.setupPairSubmissionInDatabase(1L, pairedPaths1, pairedPaths2, referenceFilePath, iridaPhylogenomicsPairedParametersWorkflowId, false);
    Future<AnalysisSubmission> analysisSubmittedFuture = analysisExecutionService.prepareSubmission(analysisSubmission);
    AnalysisSubmission analysisSubmitted = analysisSubmittedFuture.get();
    Future<AnalysisSubmission> analysisExecutionFuture = analysisExecutionService.executeAnalysis(analysisSubmitted);
    AnalysisSubmission analysisExecuted = analysisExecutionFuture.get();
    analysisExecutionGalaxyITService.waitUntilSubmissionComplete(analysisExecuted);
    analysisExecuted.setAnalysisState(AnalysisState.FINISHED_RUNNING);
    Future<AnalysisSubmission> analysisSubmissionCompletedFuture = analysisExecutionService.transferAnalysisResults(analysisExecuted);
    analysisSubmissionCompletedFuture.get();
    AnalysisSubmission analysisSubmissionCompletedDatabase = analysisSubmissionService.read(analysisSubmission.getId());
    assertEquals("analysis state is not completed", AnalysisState.COMPLETED, analysisSubmissionCompletedDatabase.getAnalysisState());
    Analysis analysisResults = analysisSubmissionCompletedDatabase.getAnalysis();
    assertEquals("analysis results is an invalid class", AnalysisType.PHYLOGENOMICS, analysisResults.getAnalysisType());
    assertEquals("invalid number of output files", 3, analysisResults.getAnalysisOutputFiles().size());
    AnalysisOutputFile phylogeneticTree = analysisResults.getAnalysisOutputFile(TREE_KEY);
    AnalysisOutputFile snpMatrix = analysisResults.getAnalysisOutputFile(MATRIX_KEY);
    AnalysisOutputFile snpTable = analysisResults.getAnalysisOutputFile(TABLE_KEY);
    // verify parameters were set properly by checking contents of file
    String treeContent;
    try (Scanner treeScanner = new Scanner(phylogeneticTree.getFile().toFile())) {
        treeContent = treeScanner.useDelimiter("\\Z").next();
    }
    assertEquals("phylogenetic trees containing the parameters should be equal", validTreeFile, treeContent);
    // phy tree
    final ToolExecution phyTreeCoreInputs = phylogeneticTree.getCreatedByTool();
    assertEquals("The first tool execution should be by core_pipeline_outputs_paired_with_parameters v0.1.0", "core_pipeline_outputs_paired_with_parameters", phyTreeCoreInputs.getToolName());
    assertEquals("The first tool execution should be by core_pipeline_outputs_paired_with_parameters v0.1.0", "0.1.0", phyTreeCoreInputs.getToolVersion());
    Map<String, String> phyTreeCoreParameters = phyTreeCoreInputs.getExecutionTimeParameters();
    assertEquals("incorrect number of non-file parameters", 4, phyTreeCoreParameters.size());
    assertEquals("parameter coverageMin set incorrectly", validCoverageFromProvenance, phyTreeCoreParameters.get("coverageMin"));
    assertEquals("parameter coverageMid set incorrectly", validMidCoverageFromProvenance, phyTreeCoreParameters.get("conditional.coverageMid"));
    assertEquals("parameter coverageMax set incorrectly", validCoverageFromProvenance, phyTreeCoreParameters.get("coverageMax"));
    assertEquals("parameter conditional_select set incorrectly", "all", phyTreeCoreParameters.get("conditional.conditional_select"));
    Set<ToolExecution> phyTreeCorePreviousSteps = phyTreeCoreInputs.getPreviousSteps();
    assertEquals("there should exist 2 previous steps", 2, phyTreeCorePreviousSteps.size());
    Set<String> uploadedFileTypesPhy = Sets.newHashSet();
    for (ToolExecution previousStep : phyTreeCorePreviousSteps) {
        assertTrue("previous steps should be input tools.", previousStep.isInputTool());
        uploadedFileTypesPhy.add(previousStep.getExecutionTimeParameters().get("file_type"));
    }
    assertEquals("uploaded files should have correct types", Sets.newHashSet("\"fastqsanger\"", "\"fasta\""), uploadedFileTypesPhy);
    // snp matrix
    final ToolExecution matrixCoreInputs = snpMatrix.getCreatedByTool();
    assertEquals("The first tool execution should be by core_pipeline_outputs_paired_with_parameters v0.1.0", "core_pipeline_outputs_paired_with_parameters", matrixCoreInputs.getToolName());
    assertEquals("The first tool execution should be by core_pipeline_outputs_paired_with_parameters v0.1.0", "0.1.0", matrixCoreInputs.getToolVersion());
    Map<String, String> matrixCoreParameters = matrixCoreInputs.getExecutionTimeParameters();
    assertEquals("incorrect number of non-file parameters", 4, matrixCoreParameters.size());
    assertEquals("parameter coverageMin set incorrectly", validCoverageFromProvenance, matrixCoreParameters.get("coverageMin"));
    assertEquals("parameter coverageMid set incorrectly", validMidCoverageFromProvenance, matrixCoreParameters.get("conditional.coverageMid"));
    assertEquals("parameter coverageMax set incorrectly", validCoverageFromProvenance, matrixCoreParameters.get("coverageMax"));
    assertEquals("parameter conditional_select set incorrectly", "all", matrixCoreParameters.get("conditional.conditional_select"));
    Set<ToolExecution> matrixCorePreviousSteps = matrixCoreInputs.getPreviousSteps();
    assertEquals("there should exist 2 previous steps", 2, matrixCorePreviousSteps.size());
    Set<String> uploadedFileTypesMatrix = Sets.newHashSet();
    for (ToolExecution previousStep : matrixCorePreviousSteps) {
        assertTrue("previous steps should be input tools.", previousStep.isInputTool());
        uploadedFileTypesMatrix.add(previousStep.getExecutionTimeParameters().get("file_type"));
    }
    assertEquals("uploaded files should have correct types", Sets.newHashSet("\"fastqsanger\"", "\"fasta\""), uploadedFileTypesMatrix);
    // snp table
    final ToolExecution tableCoreInputs = snpTable.getCreatedByTool();
    assertEquals("The first tool execution should be by core_pipeline_outputs_paired_with_parameters v0.1.0", "core_pipeline_outputs_paired_with_parameters", tableCoreInputs.getToolName());
    assertEquals("The first tool execution should be by core_pipeline_outputs_paired_with_parameters v0.1.0", "0.1.0", tableCoreInputs.getToolVersion());
    Map<String, String> tableCoreParameters = tableCoreInputs.getExecutionTimeParameters();
    assertEquals("incorrect number of non-file parameters", 4, tableCoreParameters.size());
    assertEquals("parameter coverageMin set incorrectly", validCoverageFromProvenance, tableCoreParameters.get("coverageMin"));
    assertEquals("parameter coverageMid set incorrectly", validMidCoverageFromProvenance, tableCoreParameters.get("conditional.coverageMid"));
    assertEquals("parameter coverageMax set incorrectly", validCoverageFromProvenance, tableCoreParameters.get("coverageMax"));
    assertEquals("parameter conditional_select set incorrectly", "all", tableCoreParameters.get("conditional.conditional_select"));
    Set<ToolExecution> tablePreviousSteps = tableCoreInputs.getPreviousSteps();
    assertEquals("there should exist 2 previous steps", 2, tablePreviousSteps.size());
    Set<String> uploadedFileTypesTable = Sets.newHashSet();
    for (ToolExecution previousStep : tablePreviousSteps) {
        assertTrue("previous steps should be input tools.", previousStep.isInputTool());
        uploadedFileTypesTable.add(previousStep.getExecutionTimeParameters().get("file_type"));
    }
    assertEquals("uploaded files should have correct types", Sets.newHashSet("\"fastqsanger\"", "\"fasta\""), uploadedFileTypesTable);
}
Also used : Scanner(java.util.Scanner) ToolExecution(ca.corefacility.bioinformatics.irida.model.workflow.analysis.ToolExecution) Analysis(ca.corefacility.bioinformatics.irida.model.workflow.analysis.Analysis) AnalysisSubmission(ca.corefacility.bioinformatics.irida.model.workflow.submission.AnalysisSubmission) AnalysisOutputFile(ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile) WithMockUser(org.springframework.security.test.context.support.WithMockUser) Test(org.junit.Test)
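The integration test reads the whole tree file into a single string with the `Scanner.useDelimiter("\\Z")` idiom: `\Z` matches only at the end of input, so `next()` returns the entire file as one token. Since Java 11 the same read is a one-liner; a sketch of both approaches (nearly equivalent, except that `Files.readString` keeps a trailing newline that the `\Z` delimiter drops):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Scanner;

public class WholeFileRead {

    // Scanner idiom used in the test, wrapped in try-with-resources
    // so the underlying stream is closed.
    public static String readWithScanner(Path file) throws IOException {
        try (Scanner scanner = new Scanner(file.toFile())) {
            return scanner.useDelimiter("\\Z").next();
        }
    }

    // One-line equivalent available since Java 11.
    public static String readWithFiles(Path file) throws IOException {
        return Files.readString(file);
    }
}
```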

Aggregations

AnalysisOutputFile (ca.corefacility.bioinformatics.irida.model.workflow.analysis.AnalysisOutputFile): 30
Analysis (ca.corefacility.bioinformatics.irida.model.workflow.analysis.Analysis): 20
AnalysisSubmission (ca.corefacility.bioinformatics.irida.model.workflow.submission.AnalysisSubmission): 17
Test (org.junit.Test): 14
Path (java.nio.file.Path): 13
ToolExecution (ca.corefacility.bioinformatics.irida.model.workflow.analysis.ToolExecution): 11
WithMockUser (org.springframework.security.test.context.support.WithMockUser): 7
Sample (ca.corefacility.bioinformatics.irida.model.sample.Sample): 5
EntityNotFoundException (ca.corefacility.bioinformatics.irida.exceptions.EntityNotFoundException): 4
MetadataTemplateField (ca.corefacility.bioinformatics.irida.model.sample.MetadataTemplateField): 4
MetadataEntry (ca.corefacility.bioinformatics.irida.model.sample.metadata.MetadataEntry): 4
ImmutableMap (com.google.common.collect.ImmutableMap): 4
AnalysisType (ca.corefacility.bioinformatics.irida.model.enums.AnalysisType): 3
SingleEndSequenceFile (ca.corefacility.bioinformatics.irida.model.sequenceFile.SingleEndSequenceFile): 3
IridaWorkflow (ca.corefacility.bioinformatics.irida.model.workflow.IridaWorkflow): 3
AnalysisOutputFileInfo (ca.corefacility.bioinformatics.irida.ria.web.analysis.dto.AnalysisOutputFileInfo): 3
Scanner (java.util.Scanner): 3
ExecutionManagerException (ca.corefacility.bioinformatics.irida.exceptions.ExecutionManagerException): 2
IridaWorkflowNotFoundException (ca.corefacility.bioinformatics.irida.exceptions.IridaWorkflowNotFoundException): 2
AnalysisState (ca.corefacility.bioinformatics.irida.model.enums.AnalysisState): 2