
Example 1 with ImpalaComponent

Use of org.talend.hadoop.distribution.component.ImpalaComponent in project tbd-studio-se by Talend.

From the class AbstractDynamicDistributionTemplate, method buildModuleGroupsTemplateMap:

protected Map<ComponentType, IDynamicModuleGroupTemplate> buildModuleGroupsTemplateMap() {
    Map<ComponentType, IDynamicModuleGroupTemplate> moduleGroupsTemplateMap = new HashMap<>();
    DynamicPluginAdapter pluginAdapter = getPluginAdapter();
    if (this instanceof HDFSComponent) {
        moduleGroupsTemplateMap.put(ComponentType.HDFS, new DynamicHDFSModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof HBaseComponent) {
        moduleGroupsTemplateMap.put(ComponentType.HBASE, new DynamicHBaseModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof HCatalogComponent) {
        moduleGroupsTemplateMap.put(ComponentType.HCATALOG, new DynamicHCatalogModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof HiveComponent) {
        moduleGroupsTemplateMap.put(ComponentType.HIVE, new DynamicHiveModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof HiveOnSparkComponent) {
        moduleGroupsTemplateMap.put(ComponentType.HIVEONSPARK, new DynamicHiveOnSparkModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof MapRDBComponent) {
        moduleGroupsTemplateMap.put(ComponentType.MAPRDB, new DynamicMapRDBModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof MRComponent) {
        moduleGroupsTemplateMap.put(ComponentType.MAPREDUCE, new DynamicMapReduceModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof SparkBatchComponent) {
        moduleGroupsTemplateMap.put(ComponentType.SPARKBATCH, new DynamicSparkBatchModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof SparkStreamingComponent) {
        moduleGroupsTemplateMap.put(ComponentType.SPARKSTREAMING, new DynamicSparkStreamingModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof SqoopComponent) {
        moduleGroupsTemplateMap.put(ComponentType.SQOOP, new DynamicSqoopModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof ImpalaComponent) {
        moduleGroupsTemplateMap.put(ComponentType.IMPALA, new DynamicImpalaModuleGroupTemplate(pluginAdapter));
    }
    return moduleGroupsTemplateMap;
}
Also used: java.util.HashMap; org.talend.hadoop.distribution.ComponentType; org.talend.hadoop.distribution.dynamic.adapter.DynamicPluginAdapter; and the component interfaces HDFSComponent, HBaseComponent, HCatalogComponent, HiveComponent, HiveOnSparkComponent, ImpalaComponent, MapRDBComponent, MRComponent, SparkBatchComponent, SparkStreamingComponent, SqoopComponent (all in org.talend.hadoop.distribution.component).
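The method above registers one module-group template per capability interface the concrete distribution implements. A minimal, self-contained sketch of that instanceof-based registration pattern (the interface and class names here are illustrative stand-ins, not the real Talend API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical marker interfaces standing in for the Talend component contracts.
interface HdfsCapable {}
interface SparkBatchCapable {}

// Simplified stand-in for IDynamicModuleGroupTemplate.
interface ModuleGroupTemplate {
    String name();
}

// A distribution registers one template per capability interface it
// implements, mirroring the instanceof checks in buildModuleGroupsTemplateMap.
abstract class DistributionTemplate {
    protected Map<String, ModuleGroupTemplate> buildModuleGroupsTemplateMap() {
        Map<String, ModuleGroupTemplate> map = new HashMap<>();
        if (this instanceof HdfsCapable) {
            map.put("HDFS", () -> "hdfs-modules");
        }
        if (this instanceof SparkBatchCapable) {
            map.put("SPARKBATCH", () -> "spark-batch-modules");
        }
        return map;
    }
}

public class CapabilityMapSketch extends DistributionTemplate implements HdfsCapable {
    public static void main(String[] args) {
        Map<String, ModuleGroupTemplate> map =
                new CapabilityMapSketch().buildModuleGroupsTemplateMap();
        System.out.println(map.keySet()); // only HDFS is registered
    }
}
```

The design lets each concrete distribution opt into features simply by implementing the corresponding marker interface; the base class never needs to know which subclasses exist.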

Example 2 with ImpalaComponent

Use of org.talend.hadoop.distribution.component.ImpalaComponent in project tbd-studio-se by Talend.

From the class AbstractDynamicCDHDistributionTemplate, method buildModuleGroupsTemplateMap:

@Override
protected Map<ComponentType, IDynamicModuleGroupTemplate> buildModuleGroupsTemplateMap() {
    Map<ComponentType, IDynamicModuleGroupTemplate> groupTemplateMap = super.buildModuleGroupsTemplateMap();
    DynamicPluginAdapter pluginAdapter = getPluginAdapter();
    if (this instanceof HiveOnSparkComponent) {
        groupTemplateMap.put(ComponentType.HIVEONSPARK, new DynamicCDHHiveOnSparkModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof MRComponent) {
        groupTemplateMap.put(ComponentType.MAPREDUCE, new DynamicCDHMapReduceModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof SparkBatchComponent) {
        groupTemplateMap.put(ComponentType.SPARKBATCH, new DynamicCDHSparkBatchModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof SparkStreamingComponent) {
        groupTemplateMap.put(ComponentType.SPARKSTREAMING, new DynamicCDHSparkStreamingModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof SqoopComponent) {
        groupTemplateMap.put(ComponentType.SQOOP, new DynamicCDHSqoopModuleGroupTemplate(pluginAdapter));
    }
    if (this instanceof ImpalaComponent) {
        groupTemplateMap.put(ComponentType.IMPALA, new DynamicImpalaModuleGroupTemplate(pluginAdapter));
    }
    return groupTemplateMap;
}
Also used: org.talend.hadoop.distribution.ComponentType; org.talend.hadoop.distribution.dynamic.adapter.DynamicPluginAdapter; org.talend.hadoop.distribution.dynamic.template.DynamicImpalaModuleGroupTemplate and IDynamicModuleGroupTemplate; and the component interfaces HiveOnSparkComponent, ImpalaComponent, MRComponent, SparkBatchComponent, SparkStreamingComponent, SqoopComponent (all in org.talend.hadoop.distribution.component).
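The CDH variant starts from the generic map built by `super.buildModuleGroupsTemplateMap()` and overwrites selected entries with distribution-specific templates. A self-contained sketch of that override pattern (all names hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for IDynamicModuleGroupTemplate.
interface ModuleGroup {
    String id();
}

// Base template registers generic module groups.
class BaseTemplate {
    protected Map<String, ModuleGroup> buildModuleGroupsTemplateMap() {
        Map<String, ModuleGroup> map = new HashMap<>();
        map.put("SQOOP", () -> "generic-sqoop");
        map.put("IMPALA", () -> "generic-impala");
        return map;
    }
}

// The CDH-style variant keeps the generic map and replaces only the
// entries that have a distribution-specific implementation.
class CdhTemplate extends BaseTemplate {
    @Override
    protected Map<String, ModuleGroup> buildModuleGroupsTemplateMap() {
        Map<String, ModuleGroup> map = super.buildModuleGroupsTemplateMap();
        map.put("SQOOP", () -> "cdh-sqoop"); // override with CDH-specific group
        return map;
    }
}

public class OverrideSketch {
    public static void main(String[] args) {
        Map<String, ModuleGroup> map = new CdhTemplate().buildModuleGroupsTemplateMap();
        System.out.println(map.get("SQOOP").id());  // cdh-sqoop
        System.out.println(map.get("IMPALA").id()); // generic-impala
    }
}
```

Note that ImpalaComponent in the real Example 2 still maps to the shared DynamicImpalaModuleGroupTemplate, i.e. only some entries are specialized.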

Example 3 with ImpalaComponent

Use of org.talend.hadoop.distribution.component.ImpalaComponent in project tbd-studio-se by Talend.

From the class EMR5290DistributionTest, method testEMR5290Distribution:

@Test
public void testEMR5290Distribution() throws Exception {
    HadoopComponent distribution = new EMR5290Distribution();
    assertNotNull(distribution.getDistributionName());
    assertNotNull(distribution.getVersionName(null));
    assertTrue(distribution.doSupportS3());
    assertEquals(EMR5290Distribution.DISTRIBUTION_NAME, distribution.getDistribution());
    assertEquals(EMR5290Distribution.VERSION, distribution.getVersion());
    assertEquals(EHadoopVersion.HADOOP_2, distribution.getHadoopVersion());
    assertTrue(distribution.doSupportKerberos());
    assertTrue(distribution.doSupportUseDatanodeHostname());
    assertFalse(distribution.doSupportGroup());
    assertFalse(distribution.doSupportOldImportMode());
    assertTrue(((HDFSComponent) distribution).doSupportSequenceFileShortType());
    assertFalse(((MRComponent) distribution).isExecutedThroughWebHCat());
    assertTrue(((MRComponent) distribution).doSupportCrossPlatformSubmission());
    assertTrue(((MRComponent) distribution).doSupportImpersonation());
    assertEquals(((MRComponent) distribution).getYarnApplicationClasspath(), DEFAULT_YARN_APPLICATION_CLASSPATH);
    assertTrue(distribution instanceof HBaseComponent);
    assertTrue(distribution instanceof SqoopComponent);
    assertFalse(((HiveComponent) distribution).doSupportEmbeddedMode());
    assertTrue(((HiveComponent) distribution).doSupportStandaloneMode());
    assertFalse(((HiveComponent) distribution).doSupportHive1());
    assertTrue(((HiveComponent) distribution).doSupportHive2());
    assertFalse(((HiveComponent) distribution).doSupportTezForHive());
    assertFalse(((HiveComponent) distribution).doSupportHBaseForHive());
    assertTrue(((HiveComponent) distribution).doSupportSSL());
    assertTrue(((HiveComponent) distribution).doSupportORCFormat());
    assertTrue(((HiveComponent) distribution).doSupportAvroFormat());
    assertTrue(((HiveComponent) distribution).doSupportParquetFormat());
    assertTrue(((HiveComponent) distribution).doSupportStoreAsParquet());
    assertFalse(((HiveComponent) distribution).doSupportClouderaNavigator());
    assertTrue(distribution instanceof HCatalogComponent);
    assertFalse(distribution instanceof ImpalaComponent);
    assertTrue(((SqoopComponent) distribution).doJavaAPISqoopImportAllTablesSupportExcludeTable());
    assertTrue(((SqoopComponent) distribution).doJavaAPISqoopImportSupportDeleteTargetDir());
    assertTrue(((SqoopComponent) distribution).doJavaAPISupportStorePasswordInFile());
    assertTrue(((HBaseComponent) distribution).doSupportNewHBaseAPI());
    assertFalse(distribution.doSupportAzureDataLakeStorage());
    assertTrue(distribution.doSupportWebHDFS());
}
Also used: org.junit.Test; org.talend.hadoop.distribution.emr5290.EMR5290Distribution; and the component interfaces HadoopComponent, HBaseComponent, HCatalogComponent, ImpalaComponent, SqoopComponent (all in org.talend.hadoop.distribution.component).
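The test above exercises the same capability-interface idea from the other direction: it downcasts the distribution to an interface such as MRComponent or HiveComponent and asserts on its feature flags, while `instanceof` assertions verify which capabilities are declared at all. A minimal sketch of that probing pattern (all names illustrative, not the real Talend API):

```java
// Hypothetical stand-ins for the distribution and capability contracts.
interface HadoopDistro {
    String distroName();
}

interface KerberosCapable {
    boolean doSupportKerberos();
}

interface S3Capable {
    boolean doSupportS3();
}

class FlagDemoDistribution implements HadoopDistro, KerberosCapable {
    public String distroName() { return "demo"; }
    public boolean doSupportKerberos() { return true; }
}

public class FeatureFlagProbe {
    public static void main(String[] args) {
        HadoopDistro d = new FlagDemoDistribution();
        // instanceof decides which capability-specific assertions apply,
        // exactly like the instanceof checks in the EMR test.
        System.out.println(d instanceof KerberosCapable); // true
        System.out.println(d instanceof S3Capable);       // false
        if (d instanceof KerberosCapable) {
            System.out.println(((KerberosCapable) d).doSupportKerberos()); // true
        }
    }
}
```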

Example 4 with ImpalaComponent

Use of org.talend.hadoop.distribution.component.ImpalaComponent in project tbd-studio-se by Talend.

From the class CustomDistributionTest, method testCustomDistribution:

@Test
public void testCustomDistribution() throws Exception {
    HadoopComponent distribution = new CustomDistribution();
    assertNotNull(distribution.getDistributionName());
    assertNull(distribution.getVersionName(null));
    assertTrue(distribution.doSupportS3());
    assertEquals(CustomDistribution.DISTRIBUTION_NAME, distribution.getDistribution());
    assertNull(distribution.getVersion());
    assertNull(distribution.getHadoopVersion());
    assertTrue(distribution.doSupportKerberos());
    assertTrue(distribution.doSupportUseDatanodeHostname());
    assertFalse(distribution.doSupportGroup());
    assertTrue(distribution.doSupportOldImportMode());
    assertTrue(((HDFSComponent) distribution).doSupportSequenceFileShortType());
    assertFalse(((MRComponent) distribution).isExecutedThroughWebHCat());
    assertFalse(((MRComponent) distribution).doSupportCrossPlatformSubmission());
    assertTrue(((MRComponent) distribution).doSupportImpersonation());
    assertEquals(DEFAULT_YARN_APPLICATION_CLASSPATH, ((MRComponent) distribution).getYarnApplicationClasspath());
    assertFalse(((HBaseComponent) distribution).doSupportNewHBaseAPI());
    assertTrue(((SqoopComponent) distribution).doJavaAPISupportStorePasswordInFile());
    assertFalse(((SqoopComponent) distribution).doJavaAPISqoopImportSupportDeleteTargetDir());
    assertTrue(((SqoopComponent) distribution).doJavaAPISqoopImportAllTablesSupportExcludeTable());
    assertTrue(((HiveComponent) distribution).doSupportEmbeddedMode());
    assertTrue(((HiveComponent) distribution).doSupportStandaloneMode());
    assertTrue(((HiveComponent) distribution).doSupportHive1());
    assertTrue(((HiveComponent) distribution).doSupportHive2());
    assertTrue(((HiveComponent) distribution).doSupportTezForHive());
    assertTrue(((HiveComponent) distribution).doSupportHBaseForHive());
    assertTrue(((HiveComponent) distribution).doSupportSSL());
    assertTrue(((HiveComponent) distribution).doSupportORCFormat());
    assertTrue(((HiveComponent) distribution).doSupportAvroFormat());
    assertTrue(((HiveComponent) distribution).doSupportParquetFormat());
    assertFalse(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_2_0));
    assertFalse(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_6));
    assertFalse(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_5));
    assertFalse(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_4));
    assertTrue(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_3));
    assertTrue(((SparkBatchComponent) distribution).doSupportDynamicMemoryAllocation());
    assertFalse(((SparkBatchComponent) distribution).isExecutedThroughSparkJobServer());
    assertTrue(((SparkBatchComponent) distribution).doSupportSparkStandaloneMode());
    assertTrue(((SparkBatchComponent) distribution).doSupportSparkYarnClientMode());
    assertFalse(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_2_0));
    assertFalse(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_6));
    assertFalse(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_5));
    assertFalse(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_4));
    assertTrue(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_3));
    assertTrue(((SparkStreamingComponent) distribution).doSupportDynamicMemoryAllocation());
    assertFalse(((SparkStreamingComponent) distribution).isExecutedThroughSparkJobServer());
    assertTrue(((SparkStreamingComponent) distribution).doSupportCheckpointing());
    assertTrue(((SparkStreamingComponent) distribution).doSupportSparkStandaloneMode());
    assertTrue(((SparkStreamingComponent) distribution).doSupportSparkYarnClientMode());
    assertTrue(((SparkStreamingComponent) distribution).doSupportBackpressure());
    assertFalse(((HiveComponent) distribution).doSupportStoreAsParquet());
    assertFalse(((HiveComponent) distribution).doSupportClouderaNavigator());
    assertTrue(distribution instanceof HCatalogComponent);
    assertTrue(distribution instanceof ImpalaComponent);
    assertTrue(distribution.doSupportCreateServiceConnection());
    assertTrue((distribution.getNecessaryServiceName() == null ? 0 : distribution.getNecessaryServiceName().size()) == 0);
    assertFalse(distribution.doSupportAzureDataLakeStorage());
    assertFalse(distribution.doSupportWebHDFS());
}
Also used: org.junit.Test; org.talend.hadoop.distribution.custom.CustomDistribution; org.talend.hadoop.distribution.test.AbstractDistributionTest; and the component interfaces HadoopComponent, HCatalogComponent, ImpalaComponent, SparkBatchComponent, SparkStreamingComponent (all in org.talend.hadoop.distribution.component).
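The `getNecessaryServiceName() == null ? 0 : ...size()` ternary in the test above treats a null list and an empty list identically. That null-safe check in isolation looks like this (helper name hypothetical):

```java
import java.util.Collections;
import java.util.List;

public class NullSafeEmptyCheck {
    // Treats null and empty the same way, mirroring the ternary in the test.
    static boolean isNullOrEmpty(List<String> names) {
        return names == null || names.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isNullOrEmpty(null));                    // true
        System.out.println(isNullOrEmpty(Collections.emptyList())); // true
        System.out.println(isNullOrEmpty(List.of("HIVE")));         // false
    }
}
```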

Example 5 with ImpalaComponent

Use of org.talend.hadoop.distribution.component.ImpalaComponent in project tbd-studio-se by Talend.

From the class HDInsight40DistributionTest, method testHDInsight40Distribution:

@Test
public void testHDInsight40Distribution() throws Exception {
    HadoopComponent distribution = new HDInsight40Distribution();
    assertNotNull(distribution.getDistributionName());
    assertNotNull(distribution.getVersionName(null));
    assertFalse(distribution.doSupportS3());
    assertEquals(HDInsight40Distribution.DISTRIBUTION_NAME, distribution.getDistribution());
    assertEquals(HDInsight40Distribution.VERSION, distribution.getVersion());
    assertEquals(EHadoopVersion.HADOOP_3, distribution.getHadoopVersion());
    assertFalse(distribution.doSupportKerberos());
    assertFalse(distribution.doSupportUseDatanodeHostname());
    assertFalse(distribution.doSupportGroup());
    assertFalse(distribution.doSupportOldImportMode());
    assertFalse(distribution instanceof HDFSComponent);
    assertEquals(DEFAULT_YARN_APPLICATION_CLASSPATH, ((MRComponent) distribution).getYarnApplicationClasspath());
    assertFalse(distribution instanceof HBaseComponent);
    assertFalse(distribution instanceof SqoopComponent);
    // Spark Batch
    assertTrue(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_2_4_X));
    assertTrue(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_2_3_X));
    assertFalse(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_2_1));
    assertFalse(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_2_0));
    assertFalse(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_6));
    assertFalse(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_5));
    assertFalse(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_4));
    assertFalse(((SparkBatchComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_3));
    assertFalse(((SparkBatchComponent) distribution).doSupportDynamicMemoryAllocation());
    assertFalse(((SparkBatchComponent) distribution).isExecutedThroughSparkJobServer());
    assertTrue(((SparkBatchComponent) distribution).isExecutedThroughLivy());
    assertFalse(((SparkBatchComponent) distribution).doSupportSparkStandaloneMode());
    assertFalse(((SparkBatchComponent) distribution).doSupportSparkYarnClientMode());
    assertTrue(((SparkBatchComponent) distribution).doSupportSparkYarnClusterMode());
    // Spark Streaming
    assertTrue(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_2_4_X));
    assertFalse(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_2_1));
    assertTrue(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_2_3_X));
    assertFalse(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_2_0));
    assertFalse(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_6));
    assertFalse(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_5));
    assertFalse(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_4));
    assertFalse(((SparkStreamingComponent) distribution).getSparkVersions().contains(ESparkVersion.SPARK_1_3));
    assertFalse(((SparkStreamingComponent) distribution).doSupportDynamicMemoryAllocation());
    assertFalse(((SparkStreamingComponent) distribution).isExecutedThroughSparkJobServer());
    assertTrue(((SparkStreamingComponent) distribution).isExecutedThroughLivy());
    assertFalse(((SparkStreamingComponent) distribution).doSupportCheckpointing());
    assertFalse(((SparkStreamingComponent) distribution).doSupportSparkStandaloneMode());
    assertFalse(((SparkStreamingComponent) distribution).doSupportSparkYarnClientMode());
    assertTrue(((SparkStreamingComponent) distribution).doSupportSparkYarnClusterMode());
    assertFalse(((SparkStreamingComponent) distribution).doSupportBackpressure());
    // Hive
    assertFalse(((HiveComponent) distribution).doSupportHive1());
    assertTrue(((HiveComponent) distribution).doSupportHive2());
    assertTrue(((HiveComponent) distribution).doSupportTezForHive());
    assertFalse(((HiveComponent) distribution).doSupportHBaseForHive());
    assertFalse(((HiveComponent) distribution).doSupportSSL());
    assertTrue(((HiveComponent) distribution).doSupportORCFormat());
    assertTrue(((HiveComponent) distribution).doSupportAvroFormat());
    assertTrue(((HiveComponent) distribution).doSupportParquetFormat());
    assertFalse(((HiveComponent) distribution).doSupportStoreAsParquet());
    assertFalse(distribution instanceof HCatalogComponent);
    assertFalse(distribution instanceof ImpalaComponent);
    assertTrue(distribution.doSupportHDFSEncryption());
    assertTrue(distribution.doSupportCreateServiceConnection());
    assertTrue((distribution.getNecessaryServiceName() == null ? 0 : distribution.getNecessaryServiceName().size()) == 0);
}
Also used: org.junit.Test; org.talend.hadoop.distribution.hdinsight400.HDInsight40Distribution; org.talend.hadoop.distribution.test.AbstractDistributionTest; and the component interfaces HadoopComponent, HBaseComponent, HCatalogComponent, HDFSComponent, ImpalaComponent, SparkBatchComponent, SparkStreamingComponent, SqoopComponent (all in org.talend.hadoop.distribution.component).
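The long runs of `getSparkVersions().contains(...)` assertions above probe a distribution's supported Spark versions as a set. A self-contained sketch of that set-based version check (the enum and class names are illustrative, not Talend's ESparkVersion):

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical subset of Spark versions for illustration.
enum SparkVer { SPARK_1_6, SPARK_2_3_X, SPARK_2_4_X }

class SparkDemoDistribution {
    Set<SparkVer> getSparkVersions() {
        // An HDInsight-4.0-style support set: only Spark 2.3.x and 2.4.x.
        return EnumSet.of(SparkVer.SPARK_2_3_X, SparkVer.SPARK_2_4_X);
    }
}

public class SparkVersionProbe {
    public static void main(String[] args) {
        Set<SparkVer> versions = new SparkDemoDistribution().getSparkVersions();
        System.out.println(versions.contains(SparkVer.SPARK_2_4_X)); // true
        System.out.println(versions.contains(SparkVer.SPARK_1_6));   // false
    }
}
```

Exposing support as a set keeps the tests declarative: adding a new version means adding one enum constant and one `contains` assertion, not a new boolean flag.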

Aggregations

ImpalaComponent (org.talend.hadoop.distribution.component.ImpalaComponent): 7 uses
SqoopComponent (org.talend.hadoop.distribution.component.SqoopComponent): 6 uses
HCatalogComponent (org.talend.hadoop.distribution.component.HCatalogComponent): 5 uses
SparkBatchComponent (org.talend.hadoop.distribution.component.SparkBatchComponent): 5 uses
SparkStreamingComponent (org.talend.hadoop.distribution.component.SparkStreamingComponent): 5 uses
Test (org.junit.Test): 4 uses
HBaseComponent (org.talend.hadoop.distribution.component.HBaseComponent): 4 uses
HadoopComponent (org.talend.hadoop.distribution.component.HadoopComponent): 4 uses
ComponentType (org.talend.hadoop.distribution.ComponentType): 3 uses
HiveOnSparkComponent (org.talend.hadoop.distribution.component.HiveOnSparkComponent): 3 uses
DynamicPluginAdapter (org.talend.hadoop.distribution.dynamic.adapter.DynamicPluginAdapter): 3 uses
HDFSComponent (org.talend.hadoop.distribution.component.HDFSComponent): 2 uses
MRComponent (org.talend.hadoop.distribution.component.MRComponent): 2 uses
DynamicImpalaModuleGroupTemplate (org.talend.hadoop.distribution.dynamic.template.DynamicImpalaModuleGroupTemplate): 2 uses
IDynamicModuleGroupTemplate (org.talend.hadoop.distribution.dynamic.template.IDynamicModuleGroupTemplate): 2 uses
AbstractDistributionTest (org.talend.hadoop.distribution.test.AbstractDistributionTest): 2 uses
HashMap (java.util.HashMap): 1 use
HiveComponent (org.talend.hadoop.distribution.component.HiveComponent): 1 use
MapRDBComponent (org.talend.hadoop.distribution.component.MapRDBComponent): 1 use
CustomDistribution (org.talend.hadoop.distribution.custom.CustomDistribution): 1 use