
Example 1 with ContextDefinitions

Use of com.ibm.cohort.cql.spark.aggregation.ContextDefinitions in project quality-measure-and-cohort-service by Alvearie.

From class SparkCqlEvaluatorTest, method testReadContextDefinitions:

@Test
public void testReadContextDefinitions() throws Exception {
    evaluator.hadoopConfiguration = new SerializableConfiguration(SparkHadoopUtil.get().conf());
    ContextDefinitions contextDefinitions = evaluator.readContextDefinitions("src/test/resources/alltypes/metadata/context-definitions.json");
    assertNotNull(contextDefinitions);
    assertEquals(5, contextDefinitions.getContextDefinitions().size());
    assertEquals(3, contextDefinitions.getContextDefinitions().get(0).getRelationships().size());
}
Also used : ContextDefinitions(com.ibm.cohort.cql.spark.aggregation.ContextDefinitions) SerializableConfiguration(org.apache.spark.util.SerializableConfiguration) Test(org.junit.Test)
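The context-definitions file read by the test is not shown on this page. As a rough illustration only, a file satisfying the assertions above (five context definitions, three relationships on the first) could look like the abbreviated sketch below; every field name here is an assumption for illustration, not the project's confirmed schema, and the remaining four definitions are omitted.

```json
{
  "contextDefinitions": [
    {
      "name": "Patient",
      "primaryDataType": "Patient",
      "primaryKeyColumn": "patient_id",
      "relationships": [
        { "relatedDataType": "Condition",   "relatedKeyColumn": "patient_id" },
        { "relatedDataType": "Observation", "relatedKeyColumn": "patient_id" },
        { "relatedDataType": "Encounter",   "relatedKeyColumn": "patient_id" }
      ]
    }
  ]
}
```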

Example 2 with ContextDefinitions

Use of com.ibm.cohort.cql.spark.aggregation.ContextDefinitions in project quality-measure-and-cohort-service by Alvearie.

From class SparkSchemaCreatorTest, method makeContextDefinitions:

private ContextDefinitions makeContextDefinitions(List<ContextDefinition> definitionList) {
    ContextDefinitions definitions = new ContextDefinitions();
    definitions.setContextDefinitions(definitionList);
    return definitions;
}
Also used : ContextDefinitions(com.ibm.cohort.cql.spark.aggregation.ContextDefinitions)
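The helper above only wraps a list in the container object. A minimal stand-in showing the same pattern can be sketched as follows; these stripped-down classes are illustrations only, not the real ones in com.ibm.cohort.cql.spark.aggregation, which carry more fields.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for illustration only; the real classes live in
// com.ibm.cohort.cql.spark.aggregation and carry more fields.
class ContextDefinition {
    private final String name;
    ContextDefinition(String name) { this.name = name; }
    String getName() { return name; }
}

class ContextDefinitions {
    private List<ContextDefinition> contextDefinitions = new ArrayList<>();
    void setContextDefinitions(List<ContextDefinition> defs) { this.contextDefinitions = defs; }
    List<ContextDefinition> getContextDefinitions() { return contextDefinitions; }
}

public class MakeContextDefinitionsSketch {
    // Mirrors the test helper: wrap a list of definitions in the container.
    static ContextDefinitions makeContextDefinitions(List<ContextDefinition> definitionList) {
        ContextDefinitions definitions = new ContextDefinitions();
        definitions.setContextDefinitions(definitionList);
        return definitions;
    }

    public static void main(String[] args) {
        ContextDefinitions defs = makeContextDefinitions(
                List.of(new ContextDefinition("Patient"), new ContextDefinition("Encounter")));
        // Two definitions in, two definitions out.
        if (defs.getContextDefinitions().size() != 2) throw new AssertionError();
        System.out.println(defs.getContextDefinitions().get(0).getName()); // prints Patient
    }
}
```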

Example 3 with ContextDefinitions

Use of com.ibm.cohort.cql.spark.aggregation.ContextDefinitions in project quality-measure-and-cohort-service by Alvearie.

From class SparkSchemaCreatorTest, method testInvalidKeyColumn:

@Test(expected = IllegalArgumentException.class)
public void testInvalidKeyColumn() throws Exception {
    ContextDefinitions contextDefinitions = makeContextDefinitions(Arrays.asList(makeContextDefinition("Context1Id", "Type1", "other")));
    CqlEvaluationRequests cqlEvaluationRequests = makeEvaluationRequests(Arrays.asList(
            makeEvaluationRequest(
                    new CqlLibraryDescriptor().setLibraryId("Context1Id").setVersion("1.0.0"),
                    new HashSet<>(Collections.singletonList("define_boolean")),
                    "Context1Id")));
    SparkSchemaCreator schemaCreator = new SparkSchemaCreator(cqlLibraryProvider, cqlEvaluationRequests, contextDefinitions, outputColumnNameFactory, cqlTranslator);
    schemaCreator.calculateSchemasForContexts(Arrays.asList("Context1Id"));
}
Also used : ContextDefinitions(com.ibm.cohort.cql.spark.aggregation.ContextDefinitions) CqlEvaluationRequests(com.ibm.cohort.cql.evaluation.CqlEvaluationRequests) CqlLibraryDescriptor(com.ibm.cohort.cql.library.CqlLibraryDescriptor) Test(org.junit.Test)
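The test leans on JUnit 4's expected-exception attribute: it passes only if calculateSchemasForContexts throws IllegalArgumentException for the bogus key column "other". Outside JUnit, the same guard can be sketched with a try/catch; validateKeyColumn below is a hypothetical stand-in for whatever validation SparkSchemaCreator performs, not its actual code.

```java
import java.util.Set;

public class KeyColumnValidationSketch {
    // Hypothetical stand-in for the key-column check: the context definition
    // in the test names key column "other", which matches no known column.
    static void validateKeyColumn(Set<String> knownColumns, String keyColumn) {
        if (!knownColumns.contains(keyColumn)) {
            throw new IllegalArgumentException("Unknown key column: " + keyColumn);
        }
    }

    public static void main(String[] args) {
        boolean thrown = false;
        try {
            validateKeyColumn(Set.of("id", "code", "value"), "other");
        } catch (IllegalArgumentException e) {
            thrown = true; // equivalent of @Test(expected = IllegalArgumentException.class)
        }
        if (!thrown) throw new AssertionError("expected IllegalArgumentException");
        System.out.println("validation rejected the unknown key column");
    }
}
```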

Example 4 with ContextDefinitions

Use of com.ibm.cohort.cql.spark.aggregation.ContextDefinitions in project quality-measure-and-cohort-service by Alvearie.

From class SparkSchemaCreatorTest, method singleContextSupportedDefineTypes:

@Test
public void singleContextSupportedDefineTypes() throws Exception {
    ContextDefinitions contextDefinitions = makeContextDefinitions(Collections.singletonList(makeContextDefinition("A", "Type1", "id")));
    CqlEvaluationRequests cqlEvaluationRequests = makeEvaluationRequests(Arrays.asList(
            makeEvaluationRequest(
                    new CqlLibraryDescriptor().setLibraryId("Context1Id").setVersion("1.0.0"),
                    new HashSet<>(Arrays.asList("define_integer", "define_boolean", "define_string", "define_decimal")),
                    "A"),
            makeEvaluationRequest(
                    new CqlLibraryDescriptor().setLibraryId("Context2Id").setVersion("1.0.0"),
                    new HashSet<>(Arrays.asList("define_date", "define_datetime")),
                    "A")));
    SparkSchemaCreator schemaCreator = new SparkSchemaCreator(cqlLibraryProvider, cqlEvaluationRequests, contextDefinitions, outputColumnNameFactory, cqlTranslator);
    StructType actualSchema = schemaCreator.calculateSchemasForContexts(Arrays.asList("A")).get("A");
    StructType expectedSchema = new StructType()
            .add("id", DataTypes.IntegerType, false)
            .add("parameters", DataTypes.StringType, false)
            .add("Context1Id.define_integer", DataTypes.IntegerType, true)
            .add("Context1Id.define_boolean", DataTypes.BooleanType, true)
            .add("Context1Id.define_string", DataTypes.StringType, true)
            .add("Context1Id.define_decimal", DataTypes.createDecimalType(28, 8), true)
            .add("Context2Id.define_date", DataTypes.DateType, true)
            .add("Context2Id.define_datetime", DataTypes.TimestampType, true);
    validateSchemas(expectedSchema, actualSchema, "id");
}
Also used : ContextDefinitions(com.ibm.cohort.cql.spark.aggregation.ContextDefinitions) StructType(org.apache.spark.sql.types.StructType) CqlEvaluationRequests(com.ibm.cohort.cql.evaluation.CqlEvaluationRequests) CqlLibraryDescriptor(com.ibm.cohort.cql.library.CqlLibraryDescriptor) Test(org.junit.Test)
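The expected schema encodes two things: output columns are named libraryId.defineName, and each CQL define result type maps to a Spark SQL type. Read off this one test's assertions (so a partial view, not the project's full SparkTypeConverter), the type mapping can be tabulated as:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CqlToSparkTypeSketch {
    // Mapping implied by the expectedSchema assertions in the test above.
    // Partial: read off a single test, not the full SparkTypeConverter.
    static Map<String, String> cqlToSparkType() {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("Integer", "IntegerType");
        m.put("Boolean", "BooleanType");
        m.put("String", "StringType");
        m.put("Decimal", "DecimalType(28,8)");
        m.put("Date", "DateType");
        m.put("DateTime", "TimestampType");
        return m;
    }

    public static void main(String[] args) {
        cqlToSparkType().forEach((cql, spark) ->
                System.out.println(cql + " -> " + spark));
    }
}
```

Note the fixed precision and scale on Decimal: the test pins CQL decimals to DecimalType(28, 8) rather than an inferred width.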

Example 5 with ContextDefinitions

Use of com.ibm.cohort.cql.spark.aggregation.ContextDefinitions in project quality-measure-and-cohort-service by Alvearie.

From class SparkSchemaCreatorTest, method testMultipleContextDefinitionsForContext:

@Test(expected = IllegalArgumentException.class)
public void testMultipleContextDefinitionsForContext() throws Exception {
    ContextDefinitions contextDefinitions = makeContextDefinitions(Arrays.asList(
            makeContextDefinition("Context1Id", "Type1", "id"),
            makeContextDefinition("Context1Id", "Type1", "id")));
    CqlEvaluationRequests cqlEvaluationRequests = makeEvaluationRequests(Arrays.asList(
            makeEvaluationRequest(
                    new CqlLibraryDescriptor().setLibraryId("Context1Id").setVersion("1.0.0"),
                    new HashSet<>(Collections.singletonList("define_boolean")),
                    "Context1Id")));
    SparkSchemaCreator schemaCreator = new SparkSchemaCreator(cqlLibraryProvider, cqlEvaluationRequests, contextDefinitions, outputColumnNameFactory, cqlTranslator);
    schemaCreator.calculateSchemasForContexts(Arrays.asList("Context1Id"));
}
Also used : ContextDefinitions(com.ibm.cohort.cql.spark.aggregation.ContextDefinitions) CqlEvaluationRequests(com.ibm.cohort.cql.evaluation.CqlEvaluationRequests) CqlLibraryDescriptor(com.ibm.cohort.cql.library.CqlLibraryDescriptor) Test(org.junit.Test)
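The test feeds in two context definitions with the same name and expects IllegalArgumentException, so context names must be unique. A uniqueness check of that kind can be sketched as below; requireUniqueNames is a hypothetical stand-in for the behavior the test implies, not the actual SparkSchemaCreator code.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicateContextCheckSketch {
    // Hypothetical stand-in for the duplicate check implied by the test:
    // two definitions sharing the name "Context1Id" must be rejected.
    static void requireUniqueNames(List<String> contextNames) {
        Map<String, Integer> counts = new HashMap<>();
        for (String name : contextNames) {
            counts.merge(name, 1, Integer::sum);
        }
        counts.forEach((name, count) -> {
            if (count > 1) {
                throw new IllegalArgumentException(
                        "Multiple context definitions found for: " + name);
            }
        });
    }

    public static void main(String[] args) {
        boolean thrown = false;
        try {
            requireUniqueNames(Arrays.asList("Context1Id", "Context1Id"));
        } catch (IllegalArgumentException e) {
            thrown = true;
        }
        if (!thrown) throw new AssertionError("expected IllegalArgumentException");
        requireUniqueNames(Arrays.asList("A", "B")); // distinct names pass
        System.out.println("duplicate context definition rejected");
    }
}
```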

Aggregations

ContextDefinitions (com.ibm.cohort.cql.spark.aggregation.ContextDefinitions): 12
Test (org.junit.Test): 10
CqlEvaluationRequests (com.ibm.cohort.cql.evaluation.CqlEvaluationRequests): 9
CqlLibraryDescriptor (com.ibm.cohort.cql.library.CqlLibraryDescriptor): 9
StructType (org.apache.spark.sql.types.StructType): 3
File (java.io.File): 2
SerializableConfiguration (org.apache.spark.util.SerializableConfiguration): 2
ColumnRuleCreator (com.ibm.cohort.cql.spark.aggregation.ColumnRuleCreator): 1
ContextDefinition (com.ibm.cohort.cql.spark.aggregation.ContextDefinition): 1
ContextRetriever (com.ibm.cohort.cql.spark.aggregation.ContextRetriever): 1
DefaultDatasetRetriever (com.ibm.cohort.cql.spark.data.DefaultDatasetRetriever): 1
SparkDataRow (com.ibm.cohort.cql.spark.data.SparkDataRow): 1
SparkOutputColumnEncoder (com.ibm.cohort.cql.spark.data.SparkOutputColumnEncoder): 1
SparkTypeConverter (com.ibm.cohort.cql.spark.data.SparkTypeConverter): 1
EvaluationError (com.ibm.cohort.cql.spark.errors.EvaluationError): 1
EvaluationSummary (com.ibm.cohort.cql.spark.metadata.EvaluationSummary): 1
HadoopPathOutputMetadataWriter (com.ibm.cohort.cql.spark.metadata.HadoopPathOutputMetadataWriter): 1
OutputMetadataWriter (com.ibm.cohort.cql.spark.metadata.OutputMetadataWriter): 1
CqlToElmTranslator (com.ibm.cohort.cql.translation.CqlToElmTranslator): 1
DataRow (com.ibm.cohort.datarow.model.DataRow): 1