Use of com.cerner.bunsen.spark.SparkRowConverter in project bunsen by cerner.
The class Functions, method toBundle.
/**
 * Returns a bundle containing all resources in the dataset. This should be used
 * with caution for large datasets, since the returned bundle will include all data.
 *
 * @param dataset a dataset of FHIR resources
 * @param resourceTypeUrl the FHIR resource type
 * @return a bundle containing those resources.
 */
public static Bundle toBundle(Dataset<Row> dataset, String resourceTypeUrl) {
  List<Row> resources = dataset.collectAsList();
  SparkRowConverter converter = SparkRowConverter.forResource(CONTEXT, resourceTypeUrl);

  Bundle bundle = new Bundle();
  for (Row row : resources) {
    IBaseResource resource = converter.rowToResource(row);
    bundle.addEntry().setResource((Resource) resource);
  }
  return bundle;
}
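A minimal usage sketch follows. It assumes the STU3 HAPI model, that the Functions class above is imported from its Bunsen package (the package is not shown in the snippet), and the method name observationsToJson is purely illustrative.

import ca.uhn.fhir.context.FhirContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.hl7.fhir.dstu3.model.Bundle;

public class ToBundleUsageSketch {

  /**
   * Collects a (small) dataset of Observation rows into a single Bundle and
   * returns it as pretty-printed JSON for inspection.
   */
  public static String observationsToJson(Dataset<Row> observations) {
    // toBundle pulls every row to the driver, so this is only suitable
    // for datasets known to be small.
    Bundle bundle = Functions.toBundle(observations, "Observation");

    return FhirContext.forDstu3()
        .newJsonParser()
        .setPrettyPrint(true)
        .encodeResourceToString(bundle);
  }
}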
Use of com.cerner.bunsen.spark.SparkRowConverter in project bunsen by cerner.
The class BundlesTest, method testGetResourcesAndContainedResourcesByClass.
@Test
public void testGetResourcesAndContainedResourcesByClass() {
  List<Class> containedClasses = ImmutableList.of(Provenance.class);
  Dataset<Row> observations = bundles.extractEntry(spark, bundlesWithContainedRdd,
      Observation.class, containedClasses);

  // Build a converter that is aware of the contained Provenance type.
  SparkRowConverter rowConverter = SparkRowConverter.forResource(fhirContext,
      Observation.class.getSimpleName(),
      containedClasses.stream().map(c -> c.getSimpleName()).collect(Collectors.toList()));

  Observation observation = (Observation) rowConverter.rowToResource(observations.head());

  Assert.assertEquals(1, observations.count());
  Assert.assertEquals(ResourceType.Provenance,
      observation.getContained().get(0).getResourceType());

  // internal references prefixed with #
  String expectedId = "#" + "11000100-4";
  Assert.assertEquals(expectedId, observation.getContained().get(0).getId());
}
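For orientation outside a Spark job, the contained-resource handling might be exercised as in the sketch below. It assumes the STU3 HAPI model and that SparkRowConverter exposes a resourceToRow counterpart to the rowToResource call used above; both assumptions should be checked against the Bunsen version in use, and the resource ids are illustrative.

import ca.uhn.fhir.context.FhirContext;
import com.cerner.bunsen.spark.SparkRowConverter;
import com.google.common.collect.ImmutableList;
import org.apache.spark.sql.Row;
import org.hl7.fhir.dstu3.model.Observation;
import org.hl7.fhir.dstu3.model.Provenance;

public class ContainedRoundTripSketch {

  public static void main(String[] args) {
    FhirContext fhirContext = FhirContext.forDstu3();

    // Converter that also encodes contained Provenance resources,
    // mirroring the forResource call in the test above.
    SparkRowConverter converter = SparkRowConverter.forResource(
        fhirContext, "Observation", ImmutableList.of("Provenance"));

    // Build an Observation with a contained Provenance (ids are illustrative).
    Provenance provenance = new Provenance();
    provenance.setId("#11000100-4");

    Observation observation = new Observation();
    observation.setId("obs-1");
    observation.getContained().add(provenance);

    // Assumed API: encode to a Spark Row, then decode back.
    Row row = converter.resourceToRow(observation);
    Observation decoded = (Observation) converter.rowToResource(row);

    // The contained resource keeps its internal "#"-prefixed id.
    System.out.println(decoded.getContained().get(0).getId());
  }
}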