
Example 16 with ScannerOpts

Use of org.apache.accumulo.core.cli.ScannerOpts in project accumulo by apache, from the class BinaryIT, method runTest:

public static void runTest(Connector c, String tableName) throws Exception {
    BatchWriterOpts bwOpts = new BatchWriterOpts();
    ScannerOpts scanOpts = new ScannerOpts();
    TestBinaryRows.Opts opts = new TestBinaryRows.Opts();
    opts.setTableName(tableName);
    opts.start = 0;
    opts.num = 100000;
    opts.mode = "ingest";
    TestBinaryRows.runTest(c, opts, bwOpts, scanOpts);
    opts.mode = "verify";
    TestBinaryRows.runTest(c, opts, bwOpts, scanOpts);
    opts.start = 25000;
    opts.num = 50000;
    opts.mode = "delete";
    TestBinaryRows.runTest(c, opts, bwOpts, scanOpts);
    opts.start = 0;
    opts.num = 25000;
    opts.mode = "verify";
    TestBinaryRows.runTest(c, opts, bwOpts, scanOpts);
    opts.start = 75000;
    opts.num = 25000;
    opts.mode = "randomLookups";
    TestBinaryRows.runTest(c, opts, bwOpts, scanOpts);
    opts.start = 25000;
    opts.num = 50000;
    opts.mode = "verifyDeleted";
    TestBinaryRows.runTest(c, opts, bwOpts, scanOpts);
}
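The sequence above ingests rows 0 through 99999, deletes the middle band starting at 25000, then verifies the two outer bands and finally confirms the deleted band is gone. A minimal plain-Java sketch of the expected surviving row set (the class and method names here are illustrative, not part of Accumulo):

```java
import java.util.BitSet;

// Illustrative model of which rows the BinaryIT ingest/delete sequence leaves behind.
public class SurvivingRowsSketch {

    // "ingest" writes rows [0, 100000); "delete" removes rows [25000, 75000).
    public static BitSet survivingRows() {
        BitSet rows = new BitSet();
        rows.set(0, 100000);      // ingest: 100000 rows starting at 0
        rows.clear(25000, 75000); // delete: 50000 rows starting at 25000
        return rows;
    }

    public static void main(String[] args) {
        BitSet rows = survivingRows();
        // The two "verify" passes cover [0, 25000) and [75000, 100000),
        // and "verifyDeleted" covers the cleared middle band.
        System.out.println(rows.cardinality());
    }
}
```

The two "verify" passes in the test correspond exactly to the set bits, and "verifyDeleted" to the cleared range.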
Also used : TestBinaryRows(org.apache.accumulo.test.TestBinaryRows) ScannerOpts(org.apache.accumulo.core.cli.ScannerOpts) BatchWriterOpts(org.apache.accumulo.core.cli.BatchWriterOpts)

Example 17 with ScannerOpts

Use of org.apache.accumulo.core.cli.ScannerOpts in project accumulo by apache, from the class BulkSplitOptimizationIT, method testBulkSplitOptimization:

@Test
public void testBulkSplitOptimization() throws Exception {
    final Connector c = getConnector();
    final String tableName = getUniqueNames(1)[0];
    c.tableOperations().create(tableName);
    c.tableOperations().setProperty(tableName, Property.TABLE_MAJC_RATIO.getKey(), "1000");
    c.tableOperations().setProperty(tableName, Property.TABLE_FILE_MAX.getKey(), "1000");
    c.tableOperations().setProperty(tableName, Property.TABLE_SPLIT_THRESHOLD.getKey(), "1G");
    FileSystem fs = cluster.getFileSystem();
    Path testDir = new Path(getUsableDir(), "testmf");
    FunctionalTestUtils.createRFiles(c, fs, testDir.toString(), ROWS, SPLITS, 8);
    FileStatus[] stats = fs.listStatus(testDir);
    System.out.println("Number of generated files: " + stats.length);
    FunctionalTestUtils.bulkImport(c, fs, tableName, testDir.toString());
    FunctionalTestUtils.checkSplits(c, tableName, 0, 0);
    FunctionalTestUtils.checkRFiles(c, tableName, 1, 1, 100, 100);
    // initiate splits
    getConnector().tableOperations().setProperty(tableName, Property.TABLE_SPLIT_THRESHOLD.getKey(), "100K");
    sleepUninterruptibly(2, TimeUnit.SECONDS);
    // wait until over split threshold -- should be 78 splits
    while (getConnector().tableOperations().listSplits(tableName).size() < 75) {
        sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
    }
    FunctionalTestUtils.checkSplits(c, tableName, 50, 100);
    VerifyIngest.Opts opts = new VerifyIngest.Opts();
    opts.timestamp = 1;
    opts.dataSize = 50;
    opts.random = 56;
    opts.rows = 100000;
    opts.startRow = 0;
    opts.cols = 1;
    opts.setTableName(tableName);
    AuthenticationToken adminToken = getAdminToken();
    if (adminToken instanceof PasswordToken) {
        PasswordToken token = (PasswordToken) adminToken;
        opts.setPassword(new Password(new String(token.getPassword(), UTF_8)));
        opts.setPrincipal(getAdminPrincipal());
    } else if (adminToken instanceof KerberosToken) {
        ClientConfiguration clientConf = cluster.getClientConfig();
        opts.updateKerberosCredentials(clientConf);
    } else {
        Assert.fail("Unknown token type");
    }
    VerifyIngest.verifyIngest(c, opts, new ScannerOpts());
    // ensure each tablet does not have all map files, should be ~2.5 files per tablet
    FunctionalTestUtils.checkRFiles(c, tableName, 50, 100, 1, 4);
}
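The busy-wait loop above polls listSplits until the tablet count crosses the threshold. The same pattern can be factored into a small helper; the names below (SplitWaitSketch, waitForCount) are hypothetical, not an Accumulo API:

```java
import java.util.function.IntSupplier;

// Hypothetical helper mirroring the wait-for-splits polling loop in the test above.
public class SplitWaitSketch {

    // Polls the counter until it reports at least `threshold`, sleeping between probes.
    public static int waitForCount(IntSupplier counter, int threshold, long sleepMillis)
            throws InterruptedException {
        int current = counter.getAsInt();
        while (current < threshold) {
            Thread.sleep(sleepMillis);
            current = counter.getAsInt();
        }
        return current;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated split count that grows on each probe, standing in for
        // tableOperations().listSplits(tableName).size().
        int[] splits = {0};
        int seen = waitForCount(() -> splits[0] += 25, 75, 1);
        System.out.println(seen);
    }
}
```

In the real test the supplier would call listSplits on each probe, and a production version would also want an overall timeout rather than polling forever.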
Also used : Path(org.apache.hadoop.fs.Path) Connector(org.apache.accumulo.core.client.Connector) FileStatus(org.apache.hadoop.fs.FileStatus) AuthenticationToken(org.apache.accumulo.core.client.security.tokens.AuthenticationToken) ScannerOpts(org.apache.accumulo.core.cli.ScannerOpts) KerberosToken(org.apache.accumulo.core.client.security.tokens.KerberosToken) PasswordToken(org.apache.accumulo.core.client.security.tokens.PasswordToken) VerifyIngest(org.apache.accumulo.test.VerifyIngest) FileSystem(org.apache.hadoop.fs.FileSystem) ClientConfiguration(org.apache.accumulo.core.client.ClientConfiguration) Password(org.apache.accumulo.core.cli.ClientOpts.Password) Test(org.junit.Test)

Example 18 with ScannerOpts

Use of org.apache.accumulo.core.cli.ScannerOpts in project accumulo by apache, from the class VerifyIngest, method main:

public static void main(String[] args) throws Exception {
    Opts opts = new Opts();
    ScannerOpts scanOpts = new ScannerOpts();
    opts.parseArgs(VerifyIngest.class.getName(), args, scanOpts);
    try {
        if (opts.trace) {
            String name = VerifyIngest.class.getSimpleName();
            DistributedTrace.enable();
            Trace.on(name);
            Trace.data("cmdLine", Arrays.asList(args).toString());
        }
        verifyIngest(opts.getConnector(), opts, scanOpts);
    } finally {
        Trace.off();
        DistributedTrace.disable();
    }
}
Also used : ScannerOpts(org.apache.accumulo.core.cli.ScannerOpts)

Example 19 with ScannerOpts

Use of org.apache.accumulo.core.cli.ScannerOpts in project accumulo by apache, from the class CompactionIT, method test:

@Test
public void test() throws Exception {
    final Connector c = getConnector();
    final String tableName = getUniqueNames(1)[0];
    c.tableOperations().create(tableName);
    c.tableOperations().setProperty(tableName, Property.TABLE_MAJC_RATIO.getKey(), "1.0");
    FileSystem fs = getFileSystem();
    Path root = new Path(cluster.getTemporaryPath(), getClass().getName());
    Path testrf = new Path(root, "testrf");
    FunctionalTestUtils.createRFiles(c, fs, testrf.toString(), 500000, 59, 4);
    FunctionalTestUtils.bulkImport(c, fs, tableName, testrf.toString());
    int beforeCount = countFiles(c);
    final AtomicBoolean fail = new AtomicBoolean(false);
    final ClientConfiguration clientConf = cluster.getClientConfig();
    final int THREADS = 5;
    for (int count = 0; count < THREADS; count++) {
        ExecutorService executor = Executors.newFixedThreadPool(THREADS);
        final int span = 500000 / 59;
        for (int i = 0; i < 500000; i += span) {
            final int finalI = i;
            Runnable r = new Runnable() {

                @Override
                public void run() {
                    try {
                        VerifyIngest.Opts opts = new VerifyIngest.Opts();
                        opts.startRow = finalI;
                        opts.rows = span;
                        opts.random = 56;
                        opts.dataSize = 50;
                        opts.cols = 1;
                        opts.setTableName(tableName);
                        if (clientConf.hasSasl()) {
                            opts.updateKerberosCredentials(clientConf);
                        } else {
                            opts.setPrincipal(getAdminPrincipal());
                            PasswordToken passwordToken = (PasswordToken) getAdminToken();
                            opts.setPassword(new Password(new String(passwordToken.getPassword(), UTF_8)));
                        }
                        VerifyIngest.verifyIngest(c, opts, new ScannerOpts());
                    } catch (Exception ex) {
                        log.warn("Got exception verifying data", ex);
                        fail.set(true);
                    }
                }
            };
            executor.execute(r);
        }
        executor.shutdown();
        executor.awaitTermination(defaultTimeoutSeconds(), TimeUnit.SECONDS);
        assertFalse("Failed to successfully run all threads, Check the test output for error", fail.get());
    }
    int finalCount = countFiles(c);
    assertTrue(finalCount < beforeCount);
    try {
        getClusterControl().adminStopAll();
    } finally {
        // Make sure the internal state in the cluster is reset (e.g. processes in MAC)
        getCluster().stop();
        if (ClusterType.STANDALONE == getClusterType()) {
            // Then restart things for the next test if it's a standalone
            getCluster().start();
        }
    }
}
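Each verification task above covers a span of 500000 / 59 rows. Because integer division leaves a remainder (59 * 8474 = 499966, slightly less than 500000), the inner loop actually starts one more task than the divisor suggests. A small arithmetic sketch of that partitioning (the class name is illustrative):

```java
// Illustrative arithmetic for the range partitioning used by CompactionIT above.
public class SpanSketch {

    // Mirrors: final int span = 500000 / 59; for (int i = 0; i < 500000; i += span)
    public static int countTasks(int totalRows, int divisor) {
        int span = totalRows / divisor; // integer division: 8474 for 500000 / 59
        int tasks = 0;
        for (int i = 0; i < totalRows; i += span) {
            tasks++;
        }
        return tasks;
    }

    public static void main(String[] args) {
        // The remainder means the loop runs one extra iteration beyond `divisor`.
        System.out.println(countTasks(500000, 59)); // 60
    }
}
```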
Also used : Path(org.apache.hadoop.fs.Path) Connector(org.apache.accumulo.core.client.Connector) ScannerOpts(org.apache.accumulo.core.cli.ScannerOpts) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) PasswordToken(org.apache.accumulo.core.client.security.tokens.PasswordToken) VerifyIngest(org.apache.accumulo.test.VerifyIngest) FileSystem(org.apache.hadoop.fs.FileSystem) RawLocalFileSystem(org.apache.hadoop.fs.RawLocalFileSystem) ExecutorService(java.util.concurrent.ExecutorService) ClientConfiguration(org.apache.accumulo.core.client.ClientConfiguration) Password(org.apache.accumulo.core.cli.ClientOpts.Password) Test(org.junit.Test)

Example 20 with ScannerOpts

Use of org.apache.accumulo.core.cli.ScannerOpts in project accumulo by apache, from the class TestMultiTableIngest, method main:

public static void main(String[] args) throws Exception {
    ArrayList<String> tableNames = new ArrayList<>();
    Opts opts = new Opts();
    ScannerOpts scanOpts = new ScannerOpts();
    BatchWriterOpts bwOpts = new BatchWriterOpts();
    opts.parseArgs(TestMultiTableIngest.class.getName(), args, scanOpts, bwOpts);
    // create the test tables within accumulo
    Connector connector;
    try {
        connector = opts.getConnector();
    } catch (AccumuloException | AccumuloSecurityException e) {
        throw new RuntimeException(e);
    }
    for (int i = 0; i < opts.tables; i++) {
        tableNames.add(String.format(opts.prefix + "%04d", i));
    }
    if (!opts.readonly) {
        for (String table : tableNames) connector.tableOperations().create(table);
        MultiTableBatchWriter b;
        try {
            b = connector.createMultiTableBatchWriter(bwOpts.getBatchWriterConfig());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        // populate
        for (int i = 0; i < opts.count; i++) {
            Mutation m = new Mutation(new Text(String.format("%06d", i)));
            m.put(new Text("col" + Integer.toString((i % 3) + 1)), new Text("qual"), new Value("junk".getBytes(UTF_8)));
            b.getBatchWriter(tableNames.get(i % tableNames.size())).addMutation(m);
        }
        try {
            b.close();
        } catch (MutationsRejectedException e) {
            throw new RuntimeException(e);
        }
    }
    try {
        readBack(opts, scanOpts, connector, tableNames);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
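The table names and row keys above are zero-padded with String.format ("%04d" and "%06d"), and mutations are spread round-robin across tables via i % tableNames.size(). A self-contained sketch of that naming and distribution logic (the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the naming and round-robin logic in TestMultiTableIngest.
public class MultiTableNamingSketch {

    // Mirrors: tableNames.add(String.format(opts.prefix + "%04d", i))
    public static List<String> tableNames(String prefix, int tables) {
        List<String> names = new ArrayList<>();
        for (int i = 0; i < tables; i++) {
            names.add(String.format(prefix + "%04d", i));
        }
        return names;
    }

    // Mirrors: tableNames.get(i % tableNames.size()) when routing mutation i.
    public static String tableForRow(List<String> names, int i) {
        return names.get(i % names.size());
    }

    public static void main(String[] args) {
        List<String> names = tableNames("test_", 3);
        System.out.println(names);                 // [test_0000, test_0001, test_0002]
        System.out.println(tableForRow(names, 4)); // test_0001
    }
}
```

Zero-padding keeps names and row keys in lexicographic order, which matters because Accumulo sorts keys as byte sequences.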
Also used : Connector(org.apache.accumulo.core.client.Connector) AccumuloException(org.apache.accumulo.core.client.AccumuloException) MultiTableBatchWriter(org.apache.accumulo.core.client.MultiTableBatchWriter) ClientOpts(org.apache.accumulo.core.cli.ClientOpts) BatchWriterOpts(org.apache.accumulo.core.cli.BatchWriterOpts) ScannerOpts(org.apache.accumulo.core.cli.ScannerOpts) ArrayList(java.util.ArrayList) Text(org.apache.hadoop.io.Text) MutationsRejectedException(org.apache.accumulo.core.client.MutationsRejectedException) AccumuloSecurityException(org.apache.accumulo.core.client.AccumuloSecurityException) Value(org.apache.accumulo.core.data.Value) Mutation(org.apache.accumulo.core.data.Mutation)

Aggregations

ScannerOpts (org.apache.accumulo.core.cli.ScannerOpts): 25
BatchWriterOpts (org.apache.accumulo.core.cli.BatchWriterOpts): 16
Connector (org.apache.accumulo.core.client.Connector): 14
VerifyIngest (org.apache.accumulo.test.VerifyIngest): 14
Test (org.junit.Test): 13
ClientConfiguration (org.apache.accumulo.core.client.ClientConfiguration): 11
TestIngest (org.apache.accumulo.test.TestIngest): 11
ArrayList (java.util.ArrayList): 4
ExecutorService (java.util.concurrent.ExecutorService): 3
Password (org.apache.accumulo.core.cli.ClientOpts.Password): 3
Scanner (org.apache.accumulo.core.client.Scanner): 3
ClientContext (org.apache.accumulo.core.client.impl.ClientContext): 3
Credentials (org.apache.accumulo.core.client.impl.Credentials): 3
Table (org.apache.accumulo.core.client.impl.Table): 3
PasswordToken (org.apache.accumulo.core.client.security.tokens.PasswordToken): 3
Value (org.apache.accumulo.core.data.Value): 3
KeyExtent (org.apache.accumulo.core.data.impl.KeyExtent): 3
FileSystem (org.apache.hadoop.fs.FileSystem): 3
Path (org.apache.hadoop.fs.Path): 3
AccumuloException (org.apache.accumulo.core.client.AccumuloException): 2