Example 1 with SmartConf

use of org.smartdata.conf.SmartConf in project SSM by Intel-bigdata.

the class SmartShell method main.

/**
   * Runs the shell through ToolRunner.
   * @param argv the command and its arguments
   * @throws Exception upon error
   */
public static void main(String[] argv) throws Exception {
    SmartShell shell = newShellInstance();
    Configuration conf = new SmartConf();
    conf.setQuietMode(false);
    shell.setConf(conf);
    int res;
    try {
        res = ToolRunner.run(shell, argv);
    } finally {
        shell.close();
    }
    // System.exit(res);
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), SmartConf (org.smartdata.conf.SmartConf)
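The `main()` above follows Hadoop's Tool/ToolRunner pattern: the shell implements `Tool`, and `ToolRunner.run` handles option parsing before invoking it. A minimal self-contained sketch of the pattern, using simplified stand-ins rather than the real `org.apache.hadoop` classes:

```java
// Simplified stand-in for org.apache.hadoop.util.Tool.
interface Tool {
    int run(String[] args) throws Exception;
}

// Simplified stand-in for org.apache.hadoop.util.ToolRunner; the real
// one also parses generic Hadoop options (-conf, -D, -fs, ...) first.
class ToolRunnerSketch {
    static int run(Tool tool, String[] args) throws Exception {
        return tool.run(args);
    }
}

public class ShellSketch implements Tool {
    @Override
    public int run(String[] args) {
        // A real shell would dispatch on the command name here.
        return args.length == 0 ? 1 : 0;
    }

    public static void main(String[] argv) throws Exception {
        int res = ToolRunnerSketch.run(new ShellSketch(), argv);
        System.exit(res);
    }
}
```

Returning the tool's exit code through `System.exit` is what makes the shell scriptable, which is why the commented-out exit in the snippet above is worth noting.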

Example 2 with SmartConf

use of org.smartdata.conf.SmartConf in project SSM by Intel-bigdata.

the class SmartServer method main.

public static void main(String[] args) {
    SmartConf conf = new SmartConf();
    // if SSM exit normally then the errorCode is 0
    int errorCode = 0;
    try {
        SmartServer ssm = createSSM(args, conf);
        if (ssm != null) {
            // TODO: block for now, to be refined
            while (true) {
                Thread.sleep(1000);
            }
        } else {
            errorCode = 1;
        }
    } catch (Exception e) {
        System.out.println("\n");
        e.printStackTrace();
        System.exit(1);
    } finally {
        System.exit(errorCode);
    }
}
Also used: SmartConf (org.smartdata.conf.SmartConf), URISyntaxException (java.net.URISyntaxException), AlreadyBeingCreatedException (org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException), IOException (java.io.IOException), RemoteException (org.apache.hadoop.ipc.RemoteException), ParseException (org.apache.commons.cli.ParseException)
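The `while (true) { Thread.sleep(1000); }` loop flagged by the TODO above keeps the server process alive by polling. One common refinement is to block on a latch that a stop request releases; a hypothetical sketch, not code from the SSM project:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class BlockUntilStopped {
    private final CountDownLatch stopped = new CountDownLatch(1);

    // Called from a shutdown hook or an admin "stop" command.
    public void stop() {
        stopped.countDown();
    }

    // Blocks the caller without polling, unlike the sleep loop above;
    // returns true if stop() was called before the timeout elapsed.
    public boolean awaitStop(long timeoutMillis) throws InterruptedException {
        return stopped.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        BlockUntilStopped server = new BlockUntilStopped();
        new Thread(() -> {
            try {
                Thread.sleep(100);  // simulate an external stop request
            } catch (InterruptedException ignored) {
            }
            server.stop();
        }).start();
        System.out.println(server.awaitStop(5000));  // prints "true"
    }
}
```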

Example 3 with SmartConf

use of org.smartdata.conf.SmartConf in project SSM by Intel-bigdata.

the class ActionMiniCluster method init.

@Before
public void init() throws Exception {
    SmartConf conf = new SmartConf();
    initConf(conf);
    cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(5)
            .storagesPerDatanode(3)
            .storageTypes(new StorageType[] { StorageType.DISK, StorageType.ARCHIVE, StorageType.SSD })
            .build();
    cluster.waitActive();
    dfs = cluster.getFileSystem();
    dfsClient = dfs.getClient();
    smartContext = new SmartContext(conf);
}
Also used: MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster), SmartContext (org.smartdata.SmartContext), SmartConf (org.smartdata.conf.SmartConf), Before (org.junit.Before)

Example 4 with SmartConf

use of org.smartdata.conf.SmartConf in project SSM by Intel-bigdata.

the class TestSubmitRule method setUp.

@Before
public void setUp() throws Exception {
    conf = new SmartConf();
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    Collection<URI> namenodes = DFSUtil.getInternalNsRpcUris(conf);
    List<URI> uriList = new ArrayList<>(namenodes);
    conf.set(DFS_NAMENODE_HTTP_ADDRESS_KEY, uriList.get(0).toString());
    conf.set(SmartConfKeys.DFS_SSM_NAMENODE_RPCSERVER_KEY, uriList.get(0).toString());
    // Set db used
    String dbFile = TestDBUtil.getUniqueEmptySqliteDBFile();
    String dbUrl = Util.SQLITE_URL_PREFIX + dbFile;
    conf.set(SmartConfKeys.DFS_SSM_DEFAULT_DB_URL_KEY, dbUrl);
    // rpcServer start in SmartServer
    ssm = SmartServer.createSSM(null, conf);
}
Also used: MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster), ArrayList (java.util.ArrayList), SmartConf (org.smartdata.conf.SmartConf), URI (java.net.URI), Before (org.junit.Before)
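The `setUp` above points SSM at a throwaway SQLite database by concatenating a URL prefix with the temp file path. A minimal sketch of that step; the `jdbc:sqlite:` value is an assumption about what `Util.SQLITE_URL_PREFIX` holds, not taken from the SSM source:

```java
public class SqliteUrlSketch {
    // Hypothetical stand-in for Util.SQLITE_URL_PREFIX in the SSM codebase.
    static final String SQLITE_URL_PREFIX = "jdbc:sqlite:";

    // Builds a JDBC URL for a SQLite database file.
    public static String toDbUrl(String dbFile) {
        return SQLITE_URL_PREFIX + dbFile;
    }

    public static void main(String[] args) {
        System.out.println(toDbUrl("/tmp/test.db"));  // prints "jdbc:sqlite:/tmp/test.db"
    }
}
```

Using a unique empty database file per test (as `TestDBUtil.getUniqueEmptySqliteDBFile()` does above) keeps tests isolated from one another.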

Example 5 with SmartConf

use of org.smartdata.conf.SmartConf in project SSM by Intel-bigdata.

the class TestNamespaceFetcher method testNamespaceFetcher.

@Test
public void testNamespaceFetcher() throws IOException, InterruptedException, MissingEventsException, SQLException {
    final Configuration conf = new SmartConf();
    final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
    final DistributedFileSystem dfs = cluster.getFileSystem();
    dfs.mkdir(new Path("/user"), new FsPermission("777"));
    dfs.create(new Path("/user/user1"));
    dfs.create(new Path("/user/user2"));
    dfs.mkdir(new Path("/tmp"), new FsPermission("777"));
    DFSClient client = dfs.getClient();
    DBAdapter adapter = mock(DBAdapter.class);
    NamespaceFetcher fetcher = new NamespaceFetcher(client, adapter, 100);
    fetcher.startFetch();
    List<String> expected = Arrays.asList("/", "/user", "/user/user1", "/user/user2", "/tmp");
    Thread.sleep(1000);
    verify(adapter).insertFiles(argThat(new FileStatusArgMatcher(expected)));
    fetcher.stop();
    cluster.shutdown();
}
Also used: Path (org.apache.hadoop.fs.Path), DFSClient (org.apache.hadoop.hdfs.DFSClient), MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster), Configuration (org.apache.hadoop.conf.Configuration), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), DBAdapter (org.smartdata.server.metastore.DBAdapter), SmartConf (org.smartdata.conf.SmartConf), FsPermission (org.apache.hadoop.fs.permission.FsPermission), Test (org.junit.Test)
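The test above sleeps a fixed 1000 ms before verifying the fetcher's writes, which is timing-sensitive. A less flaky alternative is to poll the condition with a deadline; a hypothetical helper along those lines, not part of the SSM codebase:

```java
import java.util.function.BooleanSupplier;

public class AwaitSketch {
    // Polls the condition until it holds or the deadline passes.
    // Replaces a fixed Thread.sleep(1000) with a bounded wait that
    // returns as soon as the condition becomes true.
    public static boolean await(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(50);
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after roughly 200 ms.
        boolean ok = await(() -> System.currentTimeMillis() - start > 200, 2000);
        System.out.println(ok);  // prints "true"
    }
}
```

In the test above, the condition would be "the mocked `DBAdapter` has received the expected `insertFiles` call", checked inside the loop instead of after a fixed sleep.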

Aggregations

SmartConf (org.smartdata.conf.SmartConf): 9
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 6
URI (java.net.URI): 4
ArrayList (java.util.ArrayList): 4
Configuration (org.apache.hadoop.conf.Configuration): 3
Before (org.junit.Before): 3
Test (org.junit.Test): 3
IOException (java.io.IOException): 2
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem): 2
SmartAdmin (org.smartdata.admin.SmartAdmin): 2
URISyntaxException (java.net.URISyntaxException): 1
ParseException (org.apache.commons.cli.ParseException): 1
Path (org.apache.hadoop.fs.Path): 1
FsPermission (org.apache.hadoop.fs.permission.FsPermission): 1
DFSClient (org.apache.hadoop.hdfs.DFSClient): 1
AlreadyBeingCreatedException (org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): 1
RemoteException (org.apache.hadoop.ipc.RemoteException): 1
SmartContext (org.smartdata.SmartContext): 1
RuleInfo (org.smartdata.common.rule.RuleInfo): 1
DBAdapter (org.smartdata.server.metastore.DBAdapter): 1