Example 46 with PreparedStatement

Use of com.datastax.driver.core.PreparedStatement in project cassandra by apache.

From the class CQLMetricsTableTest, method testUsingPrepareStmts:

@Test
public void testUsingPrepareStmts() throws Throwable {
    CQLMetricsTable table = new CQLMetricsTable(KS_NAME);
    VirtualKeyspaceRegistry.instance.register(new VirtualKeyspace(KS_NAME, ImmutableList.of(table)));
    String ks = createKeyspace("CREATE KEYSPACE %s WITH replication={ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }");
    String tbl = createTable(ks, "CREATE TABLE %s (id int PRIMARY KEY, cid int, val text)");
    Session session = sessionNet();
    String insertCQL = "INSERT INTO " + ks + "." + tbl + " (id, cid, val) VALUES (?, ?, ?)";
    PreparedStatement preparedInsert = session.prepare(insertCQL);
    String selectCQL = "SELECT * FROM " + ks + "." + tbl + " WHERE id = ?";
    PreparedStatement preparedSelect = session.prepare(selectCQL);
    for (int i = 0; i < 10; i++) {
        session.execute(preparedInsert.bind(i, i, "value" + i));
        session.execute(preparedSelect.bind(i));
    }
    queryAndValidateMetrics(QueryProcessor.metrics);
}
Also used : PreparedStatement(com.datastax.driver.core.PreparedStatement) Session(com.datastax.driver.core.Session) Test(org.junit.Test)
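
The pattern worth noting here is prepare-once, bind-per-iteration: both prepares happen outside the loop, and only cheap bind() calls happen inside it, which is what drives the prepared-statement counters in QueryProcessor.metrics. As an aside, the 3.x driver can also bind by variable name rather than position; a minimal sketch, reusing the preparedInsert from above:

// A sketch, not part of the test: named setters on the bound statement,
// equivalent to preparedInsert.bind(1, 1, "value1").
BoundStatement bound = preparedInsert.bind()
        .setInt("id", 1)
        .setInt("cid", 1)
        .setString("val", "value1");
session.execute(bound);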

Example 47 with PreparedStatement

Use of com.datastax.driver.core.PreparedStatement in project flink by apache.

From the class CassandraTupleWriteAheadSinkTest, method testAckLoopExitOnException:

@Test(timeout = 20000)
public void testAckLoopExitOnException() throws Exception {
    final AtomicReference<Runnable> runnableFuture = new AtomicReference<>();
    final ClusterBuilder clusterBuilder = new ClusterBuilder() {

        private static final long serialVersionUID = 4624400760492936756L;

        @Override
        protected Cluster buildCluster(Cluster.Builder builder) {
            try {
                BoundStatement boundStatement = mock(BoundStatement.class);
                when(boundStatement.setDefaultTimestamp(any(long.class))).thenReturn(boundStatement);
                PreparedStatement preparedStatement = mock(PreparedStatement.class);
                when(preparedStatement.bind(Matchers.anyVararg())).thenReturn(boundStatement);
                ResultSetFuture future = mock(ResultSetFuture.class);
                when(future.get()).thenThrow(new RuntimeException("Expected exception."));
                doAnswer(new Answer<Void>() {

                    @Override
                    public Void answer(InvocationOnMock invocationOnMock) throws Throwable {
                        synchronized (runnableFuture) {
                            runnableFuture.set((Runnable) invocationOnMock.getArguments()[0]);
                            runnableFuture.notifyAll();
                        }
                        return null;
                    }
                }).when(future).addListener(any(Runnable.class), any(Executor.class));
                Session session = mock(Session.class);
                when(session.prepare(anyString())).thenReturn(preparedStatement);
                when(session.executeAsync(any(BoundStatement.class))).thenReturn(future);
                Cluster cluster = mock(Cluster.class);
                when(cluster.connect()).thenReturn(session);
                return cluster;
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    };
    // Our asynchronous executor thread
    new Thread(new Runnable() {

        @Override
        public void run() {
            synchronized (runnableFuture) {
                while (runnableFuture.get() == null) {
                    try {
                        runnableFuture.wait();
                    } catch (InterruptedException e) {
                        // ignore interrupts
                    }
                }
            }
            runnableFuture.get().run();
        }
    }).start();
    CheckpointCommitter cc = mock(CheckpointCommitter.class);
    final CassandraTupleWriteAheadSink<Tuple0> sink = new CassandraTupleWriteAheadSink<>("abc", TupleTypeInfo.of(Tuple0.class).createSerializer(new ExecutionConfig()), clusterBuilder, cc);
    OneInputStreamOperatorTestHarness<Tuple0, Tuple0> harness = new OneInputStreamOperatorTestHarness<>(sink);
    harness.getEnvironment().getTaskConfiguration().setBoolean("checkpointing", true);
    harness.setup();
    sink.open();
    // we should leave the loop and return false since we've seen an exception
    assertFalse(sink.sendValues(Collections.singleton(new Tuple0()), 1L, 0L));
    sink.close();
}
Also used : ResultSetFuture(com.datastax.driver.core.ResultSetFuture) ExecutionConfig(org.apache.flink.api.common.ExecutionConfig) CheckpointCommitter(org.apache.flink.streaming.runtime.operators.CheckpointCommitter) Executor(java.util.concurrent.Executor) Cluster(com.datastax.driver.core.Cluster) AtomicReference(java.util.concurrent.atomic.AtomicReference) PreparedStatement(com.datastax.driver.core.PreparedStatement) OneInputStreamOperatorTestHarness(org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness) Tuple0(org.apache.flink.api.java.tuple.Tuple0) InvocationOnMock(org.mockito.invocation.InvocationOnMock) BoundStatement(com.datastax.driver.core.BoundStatement) Session(com.datastax.driver.core.Session) Test(org.junit.Test)
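
What lets this test run without a Cassandra instance is the mock wiring inside buildCluster: every driver object is a Mockito mock, and the only behavior that matters is the future whose get() throws. Stripped of the listener plumbing, the stubbing chain reduces to this sketch (same Mockito style as the test):

// Core stubbing chain: prepare -> bind -> executeAsync returns a future
// whose get() throws, forcing the ack loop to exit with false.
Session session = mock(Session.class);
PreparedStatement preparedStatement = mock(PreparedStatement.class);
BoundStatement boundStatement = mock(BoundStatement.class);
ResultSetFuture future = mock(ResultSetFuture.class);
when(session.prepare(anyString())).thenReturn(preparedStatement);
when(preparedStatement.bind(Matchers.anyVararg())).thenReturn(boundStatement);
when(session.executeAsync(any(BoundStatement.class))).thenReturn(future);
when(future.get()).thenThrow(new RuntimeException("Expected exception."));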

Example 48 with PreparedStatement

Use of com.datastax.driver.core.PreparedStatement in project YCSB by brianfrankcooper.

From the class CassandraCQLClient, method scan:

/**
 * Perform a range scan for a set of records in the database. Each field/value
 * pair from the result will be stored in a HashMap.
 *
 * Cassandra CQL uses the "token" method for range scans, which does not
 * always yield intuitive results.
 *
 * @param table
 *          The name of the table
 * @param startkey
 *          The record key of the first record to read.
 * @param recordcount
 *          The number of records to read
 * @param fields
 *          The list of fields to read, or null for all of them
 * @param result
 *          A Vector of HashMaps, where each HashMap is a set of field/value
 *          pairs for one record
 * @return Zero on success, a non-zero error code on error
 */
@Override
public Status scan(String table, String startkey, int recordcount, Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {
    try {
        PreparedStatement stmt = (fields == null) ? scanAllStmt.get() : scanStmts.get(fields);
        // Prepare statement on demand
        if (stmt == null) {
            Select.Builder selectBuilder;
            if (fields == null) {
                selectBuilder = QueryBuilder.select().all();
            } else {
                selectBuilder = QueryBuilder.select();
                for (String col : fields) {
                    ((Select.Selection) selectBuilder).column(col);
                }
            }
            Select selectStmt = selectBuilder.from(table);
            // The statement builder is not set up right for tokens,
            // so we need to build it manually.
            String initialStmt = selectStmt.toString();
            StringBuilder scanStmt = new StringBuilder();
            scanStmt.append(initialStmt.substring(0, initialStmt.length() - 1));
            scanStmt.append(" WHERE ");
            scanStmt.append(QueryBuilder.token(YCSB_KEY));
            scanStmt.append(" >= ");
            scanStmt.append("token(");
            scanStmt.append(QueryBuilder.bindMarker());
            scanStmt.append(")");
            scanStmt.append(" LIMIT ");
            scanStmt.append(QueryBuilder.bindMarker());
            stmt = session.prepare(scanStmt.toString());
            stmt.setConsistencyLevel(readConsistencyLevel);
            if (trace) {
                stmt.enableTracing();
            }
            PreparedStatement prevStmt = (fields == null) ? scanAllStmt.getAndSet(stmt) : scanStmts.putIfAbsent(new HashSet<>(fields), stmt);
            if (prevStmt != null) {
                stmt = prevStmt;
            }
        }
        logger.debug(stmt.getQueryString());
        logger.debug("startKey = {}, recordcount = {}", startkey, recordcount);
        ResultSet rs = session.execute(stmt.bind(startkey, Integer.valueOf(recordcount)));
        HashMap<String, ByteIterator> tuple;
        while (!rs.isExhausted()) {
            Row row = rs.one();
            tuple = new HashMap<String, ByteIterator>();
            ColumnDefinitions cd = row.getColumnDefinitions();
            for (ColumnDefinitions.Definition def : cd) {
                ByteBuffer val = row.getBytesUnsafe(def.getName());
                if (val != null) {
                    tuple.put(def.getName(), new ByteArrayByteIterator(val.array()));
                } else {
                    tuple.put(def.getName(), null);
                }
            }
            result.add(tuple);
        }
        return Status.OK;
    } catch (Exception e) {
        logger.error(MessageFormatter.format("Error scanning with startkey: {}", startkey).getMessage(), e);
        return Status.ERROR;
    }
}
Also used : ColumnDefinitions(com.datastax.driver.core.ColumnDefinitions) PreparedStatement(com.datastax.driver.core.PreparedStatement) ByteBuffer(java.nio.ByteBuffer) DBException(site.ycsb.DBException) ByteArrayByteIterator(site.ycsb.ByteArrayByteIterator) ByteIterator(site.ycsb.ByteIterator) Select(com.datastax.driver.core.querybuilder.Select) ResultSet(com.datastax.driver.core.ResultSet) Row(com.datastax.driver.core.Row) HashSet(java.util.HashSet)
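
The StringBuilder surgery exists because the 3.x QueryBuilder has no fluent API for a token(...) >= token(?) predicate, so the WHERE clause is appended to the generated SELECT by hand. Assuming YCSB's default table and key column names (usertable and y_id, both assumptions here, not taken from this excerpt), preparing the finished string directly would be equivalent:

// A sketch of the statement the builder code above ends up preparing.
String cql = "SELECT * FROM usertable WHERE token(y_id) >= token(?) LIMIT ?;";
PreparedStatement scan = session.prepare(cql);
ResultSet rs = session.execute(scan.bind("user1000", Integer.valueOf(100)));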

Example 49 with PreparedStatement

Use of com.datastax.driver.core.PreparedStatement in project YCSB by brianfrankcooper.

From the class CassandraCQLClient, method update:

/**
 * Update a record in the database. Any field/value pairs in the specified
 * values HashMap will be written into the record with the specified record
 * key, overwriting any existing values with the same field name.
 *
 * @param table
 *          The name of the table
 * @param key
 *          The record key of the record to write.
 * @param values
 *          A HashMap of field/value pairs to update in the record
 * @return Zero on success, a non-zero error code on error
 */
@Override
public Status update(String table, String key, Map<String, ByteIterator> values) {
    try {
        Set<String> fields = values.keySet();
        PreparedStatement stmt = updateStmts.get(fields);
        // Prepare statement on demand
        if (stmt == null) {
            Update updateStmt = QueryBuilder.update(table);
            // Add fields
            for (String field : fields) {
                updateStmt.with(QueryBuilder.set(field, QueryBuilder.bindMarker()));
            }
            // Add key
            updateStmt.where(QueryBuilder.eq(YCSB_KEY, QueryBuilder.bindMarker()));
            stmt = session.prepare(updateStmt);
            stmt.setConsistencyLevel(writeConsistencyLevel);
            if (trace) {
                stmt.enableTracing();
            }
            PreparedStatement prevStmt = updateStmts.putIfAbsent(new HashSet<>(fields), stmt);
            if (prevStmt != null) {
                stmt = prevStmt;
            }
        }
        if (logger.isDebugEnabled()) {
            logger.debug(stmt.getQueryString());
            logger.debug("key = {}", key);
            for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {
                logger.debug("{} = {}", entry.getKey(), entry.getValue());
            }
        }
        // Add fields
        ColumnDefinitions vars = stmt.getVariables();
        BoundStatement boundStmt = stmt.bind();
        for (int i = 0; i < vars.size() - 1; i++) {
            boundStmt.setString(i, values.get(vars.getName(i)).toString());
        }
        // Add key
        boundStmt.setString(vars.size() - 1, key);
        session.execute(boundStmt);
        return Status.OK;
    } catch (Exception e) {
        logger.error(MessageFormatter.format("Error updating key: {}", key).getMessage(), e);
    }
    return Status.ERROR;
}
Also used : ColumnDefinitions(com.datastax.driver.core.ColumnDefinitions) ByteIterator(site.ycsb.ByteIterator) ByteArrayByteIterator(site.ycsb.ByteArrayByteIterator) PreparedStatement(com.datastax.driver.core.PreparedStatement) Update(com.datastax.driver.core.querybuilder.Update) HashMap(java.util.HashMap) ConcurrentMap(java.util.concurrent.ConcurrentMap) Map(java.util.Map) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) BoundStatement(com.datastax.driver.core.BoundStatement) DBException(site.ycsb.DBException) HashSet(java.util.HashSet)
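
Binding is positional here: the loop walks the prepared statement's own variable metadata, so bind order always matches the order in which fields were added to the builder. The 3.x driver also supports binding by name, which removes the index arithmetic; a minimal sketch of the equivalent, using the same stmt, values, and key:

// Sketch, not the YCSB code: bind the same UPDATE by variable name.
BoundStatement boundStmt = stmt.bind();
for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {
    boundStmt.setString(entry.getKey(), entry.getValue().toString());
}
boundStmt.setString(YCSB_KEY, key);
session.execute(boundStmt);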

Example 50 with PreparedStatement

Use of com.datastax.driver.core.PreparedStatement in project newts by OpenNMS.

From the class CassandraIndexerTest, method insertStatementsAreDeduplicatedWhenIndexingManySamples:

@Test
public void insertStatementsAreDeduplicatedWhenIndexingManySamples() {
    CassandraSession session = mock(CassandraSession.class);
    ArgumentCaptor<Statement> statementCaptor = ArgumentCaptor.forClass(Statement.class);
    when(session.executeAsync(statementCaptor.capture())).thenReturn(mock(ResultSetFuture.class));
    PreparedStatement statement = mock(PreparedStatement.class);
    BoundStatement boundStatement = mock(BoundStatement.class);
    when(session.prepare(any(RegularStatement.class))).thenReturn(statement);
    when(statement.bind()).thenReturn(boundStatement);
    when(boundStatement.setString(any(String.class), any(String.class))).thenReturn(boundStatement);
    CassandraIndexingOptions options = new CassandraIndexingOptions.Builder().withHierarchicalIndexing(true).withMaxBatchSize(1).build();
    MetricRegistry registry = new MetricRegistry();
    GuavaResourceMetadataCache cache = new GuavaResourceMetadataCache(2048, registry);
    CassandraIndexer indexer = new CassandraIndexer(session, 0, cache, registry, options, new EscapableResourceIdSplitter(), new ContextConfigurations());
    Resource r = new Resource("snmp:1589:vmware5Cpu:2:vmware5Cpu");
    List<Sample> samples = Lists.newArrayList();
    samples.add(new Sample(Timestamp.now(), r, "CpuCostopSum", MetricType.GAUGE, new Gauge(0)));
    samples.add(new Sample(Timestamp.now(), r, "CpuIdleSum", MetricType.GAUGE, new Gauge(19299.0)));
    samples.add(new Sample(Timestamp.now(), r, "CpuMaxLdSum", MetricType.GAUGE, new Gauge(0)));
    samples.add(new Sample(Timestamp.now(), r, "CpuOverlapSum", MetricType.GAUGE, new Gauge(5.0)));
    samples.add(new Sample(Timestamp.now(), r, "CpuRdySum", MetricType.GAUGE, new Gauge(41.0)));
    samples.add(new Sample(Timestamp.now(), r, "CpuRunSum", MetricType.GAUGE, new Gauge(619.0)));
    samples.add(new Sample(Timestamp.now(), r, "CpuSpwaitSum", MetricType.GAUGE, new Gauge(0)));
    samples.add(new Sample(Timestamp.now(), r, "CpuSystemSum", MetricType.GAUGE, new Gauge(0)));
    samples.add(new Sample(Timestamp.now(), r, "CpuUsagemhzAvg", MetricType.GAUGE, new Gauge(32.0)));
    samples.add(new Sample(Timestamp.now(), r, "CpuUsedSum", MetricType.GAUGE, new Gauge(299.0)));
    samples.add(new Sample(Timestamp.now(), r, "CpuWaitSum", MetricType.GAUGE, new Gauge(19343)));
    // Index the collection of samples
    indexer.update(samples);
    // Verify the number of executeAsync calls
    verify(session, times(20)).executeAsync(any(Statement.class));
}
Also used : ResultSetFuture(com.datastax.driver.core.ResultSetFuture) RegularStatement(com.datastax.driver.core.RegularStatement) PreparedStatement(com.datastax.driver.core.PreparedStatement) BoundStatement(com.datastax.driver.core.BoundStatement) Statement(com.datastax.driver.core.Statement) Sample(org.opennms.newts.api.Sample) MetricRegistry(com.codahale.metrics.MetricRegistry) Resource(org.opennms.newts.api.Resource) CassandraSession(org.opennms.newts.cassandra.CassandraSession) Gauge(org.opennms.newts.api.Gauge) ContextConfigurations(org.opennms.newts.cassandra.ContextConfigurations) Test(org.junit.Test)
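
The fixed call count asserted at the end is the point of the test: all eleven samples share one resource, so the duplicate index rows they would each generate are written only once. In general, that kind of deduplication can be as simple as a seen-set keyed on the bound values; a hypothetical sketch of the technique, not the actual newts implementation:

// Hypothetical dedup sketch: collapse identical index inserts before
// executing them. insertIndexRow is an assumed PreparedStatement.
Set<String> seen = new HashSet<>();
for (Sample sample : samples) {
    String rowKey = sample.getResource().getId() + "|" + sample.getName();
    if (seen.add(rowKey)) {
        session.executeAsync(insertIndexRow.bind(sample.getResource().getId(), sample.getName()));
    }
}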

Aggregations

PreparedStatement (com.datastax.driver.core.PreparedStatement): 113
ResultSet (com.datastax.driver.core.ResultSet): 60
BoundStatement (com.datastax.driver.core.BoundStatement): 59
Session (com.datastax.driver.core.Session): 39
Test (org.junit.Test): 30
Row (com.datastax.driver.core.Row): 27
InvalidQueryException (com.datastax.driver.core.exceptions.InvalidQueryException): 27
XMLStreamException (javolution.xml.stream.XMLStreamException): 25
PersistenceException (org.mobicents.smsc.cassandra.PersistenceException): 15
Cluster (com.datastax.driver.core.Cluster): 9
Date (java.util.Date): 9
IInvokableInstance (org.apache.cassandra.distributed.api.IInvokableInstance): 8
ArrayList (java.util.ArrayList): 7
List (java.util.List): 7
Map (java.util.Map): 7
QueryProcessor (org.apache.cassandra.cql3.QueryProcessor): 7
GOSSIP (org.apache.cassandra.distributed.api.Feature.GOSSIP): 7
NATIVE_PROTOCOL (org.apache.cassandra.distributed.api.Feature.NATIVE_PROTOCOL): 7
NETWORK (org.apache.cassandra.distributed.api.Feature.NETWORK): 7
ICluster (org.apache.cassandra.distributed.api.ICluster): 7