Example 11 with Session

Use of com.datastax.driver.core.Session in project cassandra by apache.

From the class CqlInputFormat, method getSplits().

public List<org.apache.hadoop.mapreduce.InputSplit> getSplits(JobContext context) throws IOException {
    Configuration conf = HadoopCompat.getConfiguration(context);
    validateConfiguration(conf);
    keyspace = ConfigHelper.getInputKeyspace(conf);
    cfName = ConfigHelper.getInputColumnFamily(conf);
    partitioner = ConfigHelper.getInputPartitioner(conf);
    logger.trace("partitioner is {}", partitioner);
    // canonical ranges, split into pieces, fetching the splits in parallel
    ExecutorService executor = new ThreadPoolExecutor(0, 128, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
    List<org.apache.hadoop.mapreduce.InputSplit> splits = new ArrayList<>();
    try (Cluster cluster = CqlConfigHelper.getInputCluster(ConfigHelper.getInputInitialAddress(conf).split(","), conf);
        Session session = cluster.connect()) {
        List<Future<List<org.apache.hadoop.mapreduce.InputSplit>>> splitfutures = new ArrayList<>();
        Pair<String, String> jobKeyRange = ConfigHelper.getInputKeyRange(conf);
        Range<Token> jobRange = null;
        if (jobKeyRange != null) {
            jobRange = new Range<>(partitioner.getTokenFactory().fromString(jobKeyRange.left), partitioner.getTokenFactory().fromString(jobKeyRange.right));
        }
        Metadata metadata = cluster.getMetadata();
        // canonical ranges and nodes holding replicas
        Map<TokenRange, Set<Host>> masterRangeNodes = getRangeMap(keyspace, metadata);
        for (TokenRange range : masterRangeNodes.keySet()) {
            if (jobRange == null) {
                // for each tokenRange, pick a live owner and ask it to compute bite-sized splits
                splitfutures.add(executor.submit(new SplitCallable(range, masterRangeNodes.get(range), conf, session)));
            } else {
                TokenRange jobTokenRange = rangeToTokenRange(metadata, jobRange);
                if (range.intersects(jobTokenRange)) {
                    for (TokenRange intersection : range.intersectWith(jobTokenRange)) {
                        // for each tokenRange, pick a live owner and ask it to compute bite-sized splits
                        splitfutures.add(executor.submit(new SplitCallable(intersection, masterRangeNodes.get(range), conf, session)));
                    }
                }
            }
        }
        // wait until we have all the results back
        for (Future<List<org.apache.hadoop.mapreduce.InputSplit>> futureInputSplits : splitfutures) {
            try {
                splits.addAll(futureInputSplits.get());
            } catch (Exception e) {
                throw new IOException("Could not get input splits", e);
            }
        }
    } finally {
        executor.shutdownNow();
    }
    assert splits.size() > 0;
    Collections.shuffle(splits, new Random(System.nanoTime()));
    return splits;
}
Also used : ResultSet(com.datastax.driver.core.ResultSet) Configuration(org.apache.hadoop.conf.Configuration) Metadata(com.datastax.driver.core.Metadata) org.apache.cassandra.hadoop(org.apache.cassandra.hadoop) InputSplit(org.apache.hadoop.mapreduce.InputSplit) Cluster(com.datastax.driver.core.Cluster) IOException(java.io.IOException) TokenRange(com.datastax.driver.core.TokenRange) Session(com.datastax.driver.core.Session)
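The fan-out/gather shape of getSplits() (one task per token range submitted to an executor, futures collected, then the splits shuffled so mappers don't all hit the same replicas first) can be sketched without the driver or Hadoop types. This is a minimal sketch: SplitGather and computeSplits are hypothetical stand-ins for the real SplitCallable, which asks a live replica for its sub-splits.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SplitGather {

    // Stand-in for SplitCallable: pretends each token range yields two sub-splits.
    static List<String> computeSplits(String range) {
        List<String> out = new ArrayList<>();
        out.add(range + "-a");
        out.add(range + "-b");
        return out;
    }

    public static List<String> gather(List<String> ranges) {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        try {
            // Fan out: one task per canonical range.
            List<Future<List<String>>> futures = new ArrayList<>();
            for (String range : ranges)
                futures.add(executor.submit(() -> computeSplits(range)));
            // Gather: block on each future, wrapping any per-range failure.
            List<String> splits = new ArrayList<>();
            for (Future<List<String>> f : futures) {
                try {
                    splits.addAll(f.get());
                } catch (Exception e) {
                    throw new RuntimeException("Could not get input splits", e);
                }
            }
            // Shuffle so work is spread across replicas, as the original does.
            Collections.shuffle(splits, new Random(System.nanoTime()));
            return splits;
        } finally {
            executor.shutdownNow();
        }
    }
}
```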

Example 12 with Session

Use of com.datastax.driver.core.Session in project cassandra by apache.

From the class DeleteTest, method lostDeletesTest().

@Test
public void lostDeletesTest() {
    Session session = sessionNet();
    for (int i = 0; i < 500; i++) {
        session.execute(pstmtI.bind(1, 1, "inhB", "valB"));
        ResultSetFuture[] futures = load();
        Assert.assertTrue(futures[0].getUninterruptibly().isExhausted());
        Assert.assertTrue(futures[1].getUninterruptibly().isExhausted());
        Assert.assertNotNull(futures[2].getUninterruptibly().one());
        Assert.assertTrue(futures[3].getUninterruptibly().isExhausted());
        Assert.assertTrue(futures[4].getUninterruptibly().isExhausted());
        session.execute(pstmtU.bind("inhBu", "valBu", 1, 1));
        futures = load();
        Assert.assertTrue(futures[0].getUninterruptibly().isExhausted());
        Assert.assertTrue(futures[1].getUninterruptibly().isExhausted());
        Assert.assertNotNull(futures[2].getUninterruptibly().one());
        Assert.assertTrue(futures[3].getUninterruptibly().isExhausted());
        Assert.assertTrue(futures[4].getUninterruptibly().isExhausted());
        session.execute(pstmtD.bind(1, 1));
        futures = load();
        Assert.assertTrue(futures[0].getUninterruptibly().isExhausted());
        Assert.assertTrue(futures[1].getUninterruptibly().isExhausted());
        Assert.assertTrue(futures[2].getUninterruptibly().isExhausted());
        Assert.assertTrue(futures[3].getUninterruptibly().isExhausted());
        Assert.assertTrue(futures[4].getUninterruptibly().isExhausted());
    }
}
Also used : ResultSetFuture(com.datastax.driver.core.ResultSetFuture) Session(com.datastax.driver.core.Session) Test(org.junit.Test)
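The invariant the loop checks 500 times is simple: after an insert or update the row must be readable, after a delete it must be gone. A sketch of that cycle with a plain Map standing in for the table (DeleteCycle is a hypothetical name; the real test issues the five reads as asynchronous prepared statements against the cluster):

```java
import java.util.HashMap;
import java.util.Map;

public class DeleteCycle {
    static final Map<Integer, String> table = new HashMap<>();

    // Insert or update, then verify read-your-writes visibility.
    static boolean visibleAfterWrite(int key, String value) {
        table.put(key, value);
        return table.get(key) != null;
    }

    // Delete, then verify the row is really gone (no "lost delete").
    static boolean goneAfterDelete(int key) {
        table.remove(key);
        return table.get(key) == null;
    }
}
```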

Example 13 with Session

Use of com.datastax.driver.core.Session in project cassandra by apache.

From the class IndexQueryPagingTest, method executePagingQuery().

private void executePagingQuery(String cql, int rowCount) {
    // Execute an index query which should return all rows,
    // setting the fetch size smaller than the row count.
    // Assert that all rows are returned, so we know that
    // paging of the results was involved.
    Session session = sessionNet();
    Statement stmt = new SimpleStatement(String.format(cql, KEYSPACE + '.' + currentTable()));
    stmt.setFetchSize(rowCount - 1);
    assertEquals(rowCount, session.execute(stmt).all().size());
}
Also used : SimpleStatement(com.datastax.driver.core.SimpleStatement) Statement(com.datastax.driver.core.Statement) SimpleStatement(com.datastax.driver.core.SimpleStatement) Session(com.datastax.driver.core.Session)
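The test's trick is purely arithmetic: with the fetch size set to rowCount - 1, the driver cannot return everything in one page, so receiving all rowCount rows proves paging occurred. A minimal sketch of that arithmetic and of page-by-page accumulation (Paginator is a hypothetical stand-in, not driver API):

```java
import java.util.ArrayList;
import java.util.List;

public class Paginator {
    // How many round trips a result of rowCount rows needs at a given fetch size.
    static int pagesNeeded(int rowCount, int fetchSize) {
        return (rowCount + fetchSize - 1) / fetchSize;  // ceiling division
    }

    // Accumulate rows page by page, as ResultSet.all() does under the hood.
    static List<Integer> fetchAll(List<Integer> rows, int fetchSize) {
        List<Integer> all = new ArrayList<>();
        for (int start = 0; start < rows.size(); start += fetchSize)
            all.addAll(rows.subList(start, Math.min(start + fetchSize, rows.size())));
        return all;
    }
}
```

With fetchSize = rowCount - 1 the first page holds rowCount - 1 rows and the second holds the last one, so at least two pages are always involved.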

Example 14 with Session

Use of com.datastax.driver.core.Session in project cassandra by apache.

From the class TraceCqlTest, method testCqlStatementTracing().

@Test
public void testCqlStatementTracing() throws Throwable {
    requireNetwork();
    createTable("CREATE TABLE %s (id int primary key, v1 text, v2 text)");
    execute("INSERT INTO %s (id, v1, v2) VALUES (?, ?, ?)", 1, "Apache", "Cassandra");
    execute("INSERT INTO %s (id, v1, v2) VALUES (?, ?, ?)", 2, "trace", "test");
    try (Session session = sessionNet()) {
        String cql = "SELECT id, v1, v2 FROM " + KEYSPACE + '.' + currentTable() + " WHERE id = ?";
        PreparedStatement pstmt = session.prepare(cql).enableTracing();
        QueryTrace trace = session.execute(pstmt.bind(1)).getExecutionInfo().getQueryTrace();
        assertEquals(cql, trace.getParameters().get("query"));
        assertEquals("1", trace.getParameters().get("bound_var_0_id"));
        String cql2 = "SELECT id, v1, v2 FROM " + KEYSPACE + '.' + currentTable() + " WHERE id IN (?, ?, ?)";
        pstmt = session.prepare(cql2).enableTracing();
        trace = session.execute(pstmt.bind(19, 15, 16)).getExecutionInfo().getQueryTrace();
        assertEquals(cql2, trace.getParameters().get("query"));
        assertEquals("19", trace.getParameters().get("bound_var_0_id"));
        assertEquals("15", trace.getParameters().get("bound_var_1_id"));
        assertEquals("16", trace.getParameters().get("bound_var_2_id"));
        // Some more complex tests for tables with map and tuple data types, and long bound values
        createTable("CREATE TABLE %s (id int primary key, v1 text, v2 tuple<int, text, float>, v3 map<int, text>)");
        execute("INSERT INTO %s (id, v1, v2, v3) values (?, ?, ?, ?)", 12, "mahdix", tuple(3, "bar", 2.1f), map(1290, "birthday", 39, "anniversary"));
        execute("INSERT INTO %s (id, v1, v2, v3) values (?, ?, ?, ?)", 274, "CassandraRocks", tuple(9, "foo", 3.14f), map(9181, "statement", 716, "public speech"));
        cql = "SELECT id, v1, v2, v3 FROM " + KEYSPACE + '.' + currentTable() + " WHERE v2 = ? ALLOW FILTERING";
        pstmt = session.prepare(cql).enableTracing();
        TupleType tt = TupleType.of(ProtocolVersion.NEWEST_SUPPORTED, CodecRegistry.DEFAULT_INSTANCE, DataType.cint(), DataType.text(), DataType.cfloat());
        TupleValue value = tt.newValue();
        value.setInt(0, 3);
        value.setString(1, "bar");
        value.setFloat(2, 2.1f);
        trace = session.execute(pstmt.bind(value)).getExecutionInfo().getQueryTrace();
        assertEquals(cql, trace.getParameters().get("query"));
        assertEquals("(3, 'bar', 2.1)", trace.getParameters().get("bound_var_0_v2"));
        cql2 = "SELECT id, v1, v2, v3 FROM " + KEYSPACE + '.' + currentTable() + " WHERE v3 CONTAINS KEY ? ALLOW FILTERING";
        pstmt = session.prepare(cql2).enableTracing();
        trace = session.execute(pstmt.bind(9181)).getExecutionInfo().getQueryTrace();
        assertEquals(cql2, trace.getParameters().get("query"));
        assertEquals("9181", trace.getParameters().get("bound_var_0_key(v3)"));
        String boundValue = "Indulgence announcing uncommonly met she continuing two unpleasing terminated. Now " + "busy say down the shed eyes roof paid her. Of shameless collected suspicion existence " + "in. Share walls stuff think but the arise guest. Course suffer to do he sussex it " + "window advice. Yet matter enable misery end extent common men should. Her indulgence " + "but assistance favourable cultivated everything collecting." + "On projection apartments unsatiable so if he entreaties appearance. Rose you wife " + "how set lady half wish. Hard sing an in true felt. Welcomed stronger if steepest " + "ecstatic an suitable finished of oh. Entered at excited at forming between so " + "produce. Chicken unknown besides attacks gay compact out you. Continuing no " + "simplicity no favourable on reasonably melancholy estimating. Own hence views two " + "ask right whole ten seems. What near kept met call old west dine. Our announcing " + "sufficient why pianoforte. Full age foo set feel her told. Tastes giving in passed" + "direct me valley as supply. End great stood boy noisy often way taken short. Rent the " + "size our more door. Years no place abode in no child my. Man pianoforte too " + "solicitude friendship devonshire ten ask. Course sooner its silent but formal she " + "led. Extensive he assurance extremity at breakfast. Dear sure ye sold fine sell on. " + "Projection at up connection literature insensible motionless projecting." + "Nor hence hoped her after other known defer his. For county now sister engage had " + "season better had waited. Occasional mrs interested far expression acceptance. Day " + "either mrs talent pulled men rather regret admire but. Life ye sake it shed. Five " + "lady he cold in meet up. Service get met adapted matters offence for. Principles man " + "any insipidity age you simplicity understood. Do offering pleasure no ecstatic " + "whatever on mr directly. ";
        String cql3 = "SELECT id, v1, v2, v3 FROM " + KEYSPACE + '.' + currentTable() + " WHERE v3 CONTAINS ? ALLOW FILTERING";
        pstmt = session.prepare(cql3).enableTracing();
        trace = session.execute(pstmt.bind(boundValue)).getExecutionInfo().getQueryTrace();
        assertEquals(cql3, trace.getParameters().get("query"));
        // When tracing is done, this boundValue is surrounded by single quotes and truncated
        // to its first 1000 characters. Account for the quotes in the expected output.
        assertEquals("'" + boundValue.substring(0, 999) + "...'", trace.getParameters().get("bound_var_0_value(v3)"));
    }
}
Also used : TupleType(com.datastax.driver.core.TupleType) PreparedStatement(com.datastax.driver.core.PreparedStatement) QueryTrace(com.datastax.driver.core.QueryTrace) TupleValue(com.datastax.driver.core.TupleValue) Session(com.datastax.driver.core.Session) Test(org.junit.Test)
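The last assertion encodes how long bound values show up in a query trace: wrapped in single quotes, and cut to a prefix with a trailing "..." when they are too long (the test expects substring(0, 999) to survive). A sketch of that formatting; TraceFormat is a hypothetical helper written to match the assertion, not driver API:

```java
public class TraceFormat {
    static final int MAX = 999;  // prefix length the test's assertion expects

    // Short values appear quoted in full; long ones are truncated with "...".
    static String traceValue(String bound) {
        if (bound.length() <= MAX)
            return "'" + bound + "'";
        return "'" + bound.substring(0, MAX) + "...'";
    }
}
```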

Example 15 with Session

Use of com.datastax.driver.core.Session in project flink by apache.

From the class CassandraTupleWriteAheadSinkTest, method testAckLoopExitOnException().

@Test(timeout = 20000)
public void testAckLoopExitOnException() throws Exception {
    final AtomicReference<Runnable> runnableFuture = new AtomicReference<>();
    final ClusterBuilder clusterBuilder = new ClusterBuilder() {

        private static final long serialVersionUID = 4624400760492936756L;

        @Override
        protected Cluster buildCluster(Cluster.Builder builder) {
            try {
                BoundStatement boundStatement = mock(BoundStatement.class);
                when(boundStatement.setDefaultTimestamp(any(long.class))).thenReturn(boundStatement);
                PreparedStatement preparedStatement = mock(PreparedStatement.class);
                when(preparedStatement.bind(Matchers.anyVararg())).thenReturn(boundStatement);
                ResultSetFuture future = mock(ResultSetFuture.class);
                when(future.get()).thenThrow(new RuntimeException("Expected exception."));
                doAnswer(new Answer<Void>() {

                    @Override
                    public Void answer(InvocationOnMock invocationOnMock) throws Throwable {
                        synchronized (runnableFuture) {
                            runnableFuture.set((((Runnable) invocationOnMock.getArguments()[0])));
                            runnableFuture.notifyAll();
                        }
                        return null;
                    }
                }).when(future).addListener(any(Runnable.class), any(Executor.class));
                Session session = mock(Session.class);
                when(session.prepare(anyString())).thenReturn(preparedStatement);
                when(session.executeAsync(any(BoundStatement.class))).thenReturn(future);
                Cluster cluster = mock(Cluster.class);
                when(cluster.connect()).thenReturn(session);
                return cluster;
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    };
    // Our asynchronous executor thread
    new Thread(new Runnable() {

        @Override
        public void run() {
            synchronized (runnableFuture) {
                while (runnableFuture.get() == null) {
                    try {
                        runnableFuture.wait();
                    } catch (InterruptedException e) {
                        // ignore interrupts
                    }
                }
            }
            runnableFuture.get().run();
        }
    }).start();
    CheckpointCommitter cc = mock(CheckpointCommitter.class);
    final CassandraTupleWriteAheadSink<Tuple0> sink = new CassandraTupleWriteAheadSink<>("abc", TupleTypeInfo.of(Tuple0.class).createSerializer(new ExecutionConfig()), clusterBuilder, cc);
    OneInputStreamOperatorTestHarness<Tuple0, Tuple0> harness = new OneInputStreamOperatorTestHarness<>(sink);
    harness.getEnvironment().getTaskConfiguration().setBoolean("checkpointing", true);
    harness.setup();
    sink.open();
    // we should leave the loop and return false since we've seen an exception
    assertFalse(sink.sendValues(Collections.singleton(new Tuple0()), 0L));
    sink.close();
}
Also used : ResultSetFuture(com.datastax.driver.core.ResultSetFuture) ExecutionConfig(org.apache.flink.api.common.ExecutionConfig) CheckpointCommitter(org.apache.flink.streaming.runtime.operators.CheckpointCommitter) Executor(java.util.concurrent.Executor) Cluster(com.datastax.driver.core.Cluster) AtomicReference(java.util.concurrent.atomic.AtomicReference) PreparedStatement(com.datastax.driver.core.PreparedStatement) OneInputStreamOperatorTestHarness(org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness) Tuple0(org.apache.flink.api.java.tuple.Tuple0) InvocationOnMock(org.mockito.invocation.InvocationOnMock) BoundStatement(com.datastax.driver.core.BoundStatement) Session(com.datastax.driver.core.Session) Test(org.junit.Test)
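The handoff between the mocked addListener callback and the "asynchronous executor thread" is plain wait/notify over an AtomicReference: one side parks the Runnable and notifies, the other waits until the slot is filled and then runs it. Sketched in isolation (Handoff is a hypothetical name for the pattern, not Flink or driver API):

```java
import java.util.concurrent.atomic.AtomicReference;

public class Handoff {
    static final AtomicReference<Runnable> slot = new AtomicReference<>();

    // Called by the (mocked) future's addListener: park the listener and wake waiters.
    static void publish(Runnable r) {
        synchronized (slot) {
            slot.set(r);
            slot.notifyAll();
        }
    }

    // Called by the worker thread: wait until a Runnable is parked, then run it.
    static void awaitAndRun() throws InterruptedException {
        synchronized (slot) {
            while (slot.get() == null)
                slot.wait();  // guarded wait; loop protects against spurious wakeups
        }
        slot.get().run();
    }
}
```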

Aggregations

Session (com.datastax.driver.core.Session): 30
Cluster (com.datastax.driver.core.Cluster): 16
Test (org.junit.Test): 16
ResultSet (com.datastax.driver.core.ResultSet): 11
Row (com.datastax.driver.core.Row): 10
BoundStatement (com.datastax.driver.core.BoundStatement): 3
PreparedStatement (com.datastax.driver.core.PreparedStatement): 3
ResultSetFuture (com.datastax.driver.core.ResultSetFuture): 3
Update (com.datastax.driver.core.querybuilder.Update): 3
ReconnectionPolicy (com.datastax.driver.core.policies.ReconnectionPolicy): 2
IOException (java.io.IOException): 2
IntegrationTest (tech.sirwellington.alchemy.annotations.testing.IntegrationTest): 2
ColumnMetadata (com.datastax.driver.core.ColumnMetadata): 1
DataType (com.datastax.driver.core.DataType): 1
Host (com.datastax.driver.core.Host): 1
IndexMetadata (com.datastax.driver.core.IndexMetadata): 1
KeyspaceMetadata (com.datastax.driver.core.KeyspaceMetadata): 1
Metadata (com.datastax.driver.core.Metadata): 1
QueryTrace (com.datastax.driver.core.QueryTrace): 1
SimpleStatement (com.datastax.driver.core.SimpleStatement): 1