Example 1 with Bind

Use of org.skife.jdbi.v2.sqlobject.Bind in project dropwizard by dropwizard.

From class JDBITest, method setUp.

@Before
public void setUp() throws Exception {
    when(environment.healthChecks()).thenReturn(healthChecks);
    when(environment.lifecycle()).thenReturn(lifecycleEnvironment);
    when(environment.metrics()).thenReturn(metricRegistry);
    when(environment.getHealthCheckExecutorService()).thenReturn(Executors.newSingleThreadExecutor());
    this.dbi = factory.build(environment, hsqlConfig, "hsql");
    final ArgumentCaptor<Managed> managedCaptor = ArgumentCaptor.forClass(Managed.class);
    verify(lifecycleEnvironment).manage(managedCaptor.capture());
    managed.addAll(managedCaptor.getAllValues());
    for (Managed obj : managed) {
        obj.start();
    }
    try (Handle handle = dbi.open()) {
        handle.createCall("DROP TABLE people IF EXISTS").invoke();
        handle.createCall("CREATE TABLE people (name varchar(100) primary key, email varchar(100), age int, created_at timestamp)").invoke();
        // Seed the table; positional bind(index, value) matches the ? placeholders in order.
        handle.createStatement("INSERT INTO people VALUES (?, ?, ?, ?)")
              .bind(0, "Coda Hale")
              .bind(1, "chale@yammer-inc.com")
              .bind(2, 30)
              .bind(3, new Timestamp(1365465078000L))
              .execute();
        handle.createStatement("INSERT INTO people VALUES (?, ?, ?, ?)")
              .bind(0, "Kris Gale")
              .bind(1, "kgale@yammer-inc.com")
              .bind(2, 32)
              .bind(3, new Timestamp(1365465078000L))
              .execute();
        // bindNull(index, sqlType) binds a typed SQL NULL.
        handle.createStatement("INSERT INTO people VALUES (?, ?, ?, ?)")
              .bind(0, "Old Guy")
              .bindNull(1, Types.VARCHAR)
              .bind(2, 99)
              .bind(3, new Timestamp(1365465078000L))
              .execute();
        handle.createStatement("INSERT INTO people VALUES (?, ?, ?, ?)")
              .bind(0, "Alice Example")
              .bind(1, "alice@example.org")
              .bind(2, 99)
              .bindNull(3, Types.TIMESTAMP)
              .execute();
    }
}
Also used: Timestamp (java.sql.Timestamp), Managed (io.dropwizard.lifecycle.Managed), Handle (org.skife.jdbi.v2.Handle), Before (org.junit.Before)
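
The fluent bind(index, value) calls above are positional. The @Bind annotation this page indexes does the same job declaratively in JDBI v2's SQL Object API. A minimal sketch against the same people table (the PersonDAO interface and its method are hypothetical, not part of the Dropwizard test):

import java.sql.Timestamp;
import org.skife.jdbi.v2.sqlobject.Bind;
import org.skife.jdbi.v2.sqlobject.SqlUpdate;

// Hypothetical DAO for the people table created in setUp() above.
public interface PersonDAO {
    // @Bind maps each argument onto the named parameter of the same name in the SQL.
    @SqlUpdate("INSERT INTO people (name, email, age, created_at) VALUES (:name, :email, :age, :createdAt)")
    void insert(@Bind("name") String name,
                @Bind("email") String email,
                @Bind("age") int age,
                @Bind("createdAt") Timestamp createdAt);
}

An instance could then be obtained with dbi.onDemand(PersonDAO.class), replacing the hand-written binding code.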

Example 2 with Bind

Use of org.skife.jdbi.v2.sqlobject.Bind in project druid by druid-io.

From class IndexerSQLMetadataStorageCoordinator, method announceHistoricalSegment.

/**
   * Attempts to insert a single segment into the database. If the segment already exists, this method does
   * nothing; however, the existence check is imperfect, so callers must be prepared to retry their entire
   * transaction on exceptions.
   *
   * @return true if the segment was added, false if it already existed
   */
private boolean announceHistoricalSegment(final Handle handle, final DataSegment segment, final boolean used) throws IOException {
    try {
        if (segmentExists(handle, segment)) {
            log.info("Found [%s] in DB, not updating DB", segment.getIdentifier());
            return false;
        }
        // SELECT -> INSERT can fail due to races; callers must be prepared to retry.
        // Avoiding ON DUPLICATE KEY since it's not portable.
        // Avoiding try/catch since it may cause inadvertent transaction-splitting.
        handle.createStatement(
                String.format(
                    "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload) "
                    + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)",
                    dbTables.getSegmentsTable(), connector.getQuoteString()))
              .bind("id", segment.getIdentifier())
              .bind("dataSource", segment.getDataSource())
              .bind("created_date", new DateTime().toString())
              .bind("start", segment.getInterval().getStart().toString())
              .bind("end", segment.getInterval().getEnd().toString())
              .bind("partitioned", !(segment.getShardSpec() instanceof NoneShardSpec))
              .bind("version", segment.getVersion())
              .bind("used", used)
              .bind("payload", jsonMapper.writeValueAsBytes(segment))
              .execute();
        log.info("Published segment [%s] to DB", segment.getIdentifier());
    } catch (Exception e) {
        log.error(e, "Exception inserting segment [%s] into DB", segment.getIdentifier());
        throw e;
    }
    return true;
}
Also used: NoneShardSpec (io.druid.timeline.partition.NoneShardSpec), DateTime (org.joda.time.DateTime), SQLException (java.sql.SQLException), IOException (java.io.IOException), CallbackFailedException (org.skife.jdbi.v2.exceptions.CallbackFailedException)
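
Because the SELECT-then-INSERT above can race, the Javadoc asks callers to retry the whole transaction. A minimal sketch of such a caller, assuming JDBI v2's inTransaction and a bounded retry count (insertWithRetry and maxAttempts are hypothetical, not Druid's actual caller):

// Hypothetical retry wrapper: the transaction is re-run from scratch on failure.
private boolean insertWithRetry(final DataSegment segment, final boolean used) throws IOException {
    final int maxAttempts = 3; // assumed bound, not taken from the Druid source
    for (int attempt = 1; ; attempt++) {
        try {
            return connector.getDBI().inTransaction(
                (handle, status) -> announceHistoricalSegment(handle, segment, used));
        } catch (Exception e) {
            if (attempt >= maxAttempts) {
                throw new IOException("Giving up after " + attempt + " attempts", e);
            }
            log.warn("Retrying segment insert for [%s]", segment.getIdentifier());
        }
    }
}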

Example 3 with Bind

Use of org.skife.jdbi.v2.sqlobject.Bind in project druid by druid-io.

From class SQLMetadataSegmentManager, method removeSegment.

@Override
public boolean removeSegment(String ds, final String segmentID) {
    try {
        connector.getDBI().withHandle(new HandleCallback<Void>() {

            @Override
            public Void withHandle(Handle handle) throws Exception {
                handle.createStatement(String.format("UPDATE %s SET used=false WHERE id = :segmentID", getSegmentsTable()))
                      .bind("segmentID", segmentID)
                      .execute();
                return null;
            }
        });
        ConcurrentHashMap<String, DruidDataSource> dataSourceMap = dataSources.get();
        if (!dataSourceMap.containsKey(ds)) {
            log.warn("Cannot find datasource %s", ds);
            return false;
        }
        DruidDataSource dataSource = dataSourceMap.get(ds);
        dataSource.removePartition(segmentID);
        if (dataSource.isEmpty()) {
            dataSourceMap.remove(ds);
        }
    } catch (Exception e) {
        log.error(e, e.toString());
        return false;
    }
    return true;
}
Also used: DruidDataSource (io.druid.client.DruidDataSource), SQLException (java.sql.SQLException), IOException (java.io.IOException), Handle (org.skife.jdbi.v2.Handle)
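
HandleCallback<T> has a single abstract method, so under Java 8 the anonymous class above can collapse to a lambda. A behavior-preserving sketch of the same update:

// Same UPDATE as above, with HandleCallback written as a lambda.
connector.getDBI().withHandle(handle -> {
    handle.createStatement(String.format("UPDATE %s SET used=false WHERE id = :segmentID", getSegmentsTable()))
          .bind("segmentID", segmentID)
          .execute();
    return null;
});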

Example 4 with Bind

Use of org.skife.jdbi.v2.sqlobject.Bind in project druid by druid-io.

From class SQLMetadataSegmentManager, method enableDatasource.

@Override
public boolean enableDatasource(final String ds) {
    try {
        final IDBI dbi = connector.getDBI();
        VersionedIntervalTimeline<String, DataSegment> segmentTimeline = connector.inReadOnlyTransaction(new TransactionCallback<VersionedIntervalTimeline<String, DataSegment>>() {

            @Override
            public VersionedIntervalTimeline<String, DataSegment> inTransaction(Handle handle, TransactionStatus status) throws Exception {
                return handle.createQuery(String.format("SELECT payload FROM %s WHERE dataSource = :dataSource", getSegmentsTable()))
                             .setFetchSize(connector.getStreamingFetchSize())
                             .bind("dataSource", ds)
                             .map(ByteArrayMapper.FIRST)
                             .fold(new VersionedIntervalTimeline<String, DataSegment>(Ordering.natural()), new Folder3<VersionedIntervalTimeline<String, DataSegment>, byte[]>() {

                    @Override
                    public VersionedIntervalTimeline<String, DataSegment> fold(VersionedIntervalTimeline<String, DataSegment> timeline, byte[] payload, FoldController foldController, StatementContext statementContext) throws SQLException {
                        try {
                            final DataSegment segment = DATA_SEGMENT_INTERNER.intern(jsonMapper.readValue(payload, DataSegment.class));
                            timeline.add(segment.getInterval(), segment.getVersion(), segment.getShardSpec().createChunk(segment));
                            return timeline;
                        } catch (Exception e) {
                            throw new SQLException(e.toString());
                        }
                    }
                });
            }
        });
        final List<DataSegment> segments = Lists.newArrayList();
        for (TimelineObjectHolder<String, DataSegment> objectHolder : segmentTimeline.lookup(new Interval("0000-01-01/3000-01-01"))) {
            for (PartitionChunk<DataSegment> partitionChunk : objectHolder.getObject()) {
                segments.add(partitionChunk.getObject());
            }
        }
        if (segments.isEmpty()) {
            log.warn("No segments found in the database!");
            return false;
        }
        dbi.withHandle(new HandleCallback<Void>() {

            @Override
            public Void withHandle(Handle handle) throws Exception {
                Batch batch = handle.createBatch();
                for (DataSegment segment : segments) {
                    batch.add(String.format("UPDATE %s SET used=true WHERE id = '%s'", getSegmentsTable(), segment.getIdentifier()));
                }
                batch.execute();
                return null;
            }
        });
    } catch (Exception e) {
        log.error(e, "Exception enabling datasource %s", ds);
        return false;
    }
    return true;
}
Also used: IDBI (org.skife.jdbi.v2.IDBI), SQLException (java.sql.SQLException), TransactionStatus (org.skife.jdbi.v2.TransactionStatus), DataSegment (io.druid.timeline.DataSegment), IOException (java.io.IOException), Handle (org.skife.jdbi.v2.Handle), StatementContext (org.skife.jdbi.v2.StatementContext), FoldController (org.skife.jdbi.v2.FoldController), Batch (org.skife.jdbi.v2.Batch), VersionedIntervalTimeline (io.druid.timeline.VersionedIntervalTimeline), Folder3 (org.skife.jdbi.v2.Folder3), Interval (org.joda.time.Interval)
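
The Batch in enableDatasource interpolates each segment identifier directly into the SQL string. JDBI v2 also offers PreparedBatch, which keeps the identifiers in bound parameters; a sketch of that alternative (assumed equivalent in effect, not taken from the Druid source):

// Hypothetical alternative using PreparedBatch so the ids are bound rather than inlined.
PreparedBatch batch = handle.prepareBatch(
    String.format("UPDATE %s SET used=true WHERE id = :id", getSegmentsTable()));
for (DataSegment segment : segments) {
    batch.add().bind("id", segment.getIdentifier());
}
batch.execute();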

Example 5 with Bind

Use of org.skife.jdbi.v2.sqlobject.Bind in project druid by druid-io.

From class SQLMetadataSegmentPublisher, method publishSegment.

@VisibleForTesting
void publishSegment(final String identifier, final String dataSource, final String createdDate, final String start, final String end, final boolean partitioned, final String version, final boolean used, final byte[] payload) {
    try {
        final DBI dbi = connector.getDBI();
        List<Map<String, Object>> exists = dbi.withHandle(new HandleCallback<List<Map<String, Object>>>() {

            @Override
            public List<Map<String, Object>> withHandle(Handle handle) throws Exception {
                return handle.createQuery(String.format("SELECT id FROM %s WHERE id=:id", config.getSegmentsTable()))
                             .bind("id", identifier)
                             .list();
            }
        });
        if (!exists.isEmpty()) {
            log.info("Found [%s] in DB, not updating DB", identifier);
            return;
        }
        dbi.withHandle(new HandleCallback<Void>() {

            @Override
            public Void withHandle(Handle handle) throws Exception {
                handle.createStatement(statement)
                      .bind("id", identifier)
                      .bind("dataSource", dataSource)
                      .bind("created_date", createdDate)
                      .bind("start", start)
                      .bind("end", end)
                      .bind("partitioned", partitioned)
                      .bind("version", version)
                      .bind("used", used)
                      .bind("payload", payload)
                      .execute();
                return null;
            }
        });
    } catch (Exception e) {
        log.error(e, "Exception inserting into DB");
        throw new RuntimeException(e);
    }
}
Also used: DBI (org.skife.jdbi.v2.DBI), IOException (java.io.IOException), Handle (org.skife.jdbi.v2.Handle), List (java.util.List), Map (java.util.Map), VisibleForTesting (com.google.common.annotations.VisibleForTesting)
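
The statement field bound above is defined elsewhere in SQLMetadataSegmentPublisher. Judging from the identical column list in Example 2, it is presumably the parameterized INSERT built once in the constructor, along these lines (a sketch, not the verbatim Druid source):

// Presumed construction of the parameterized INSERT (mirrors Example 2).
this.statement = String.format(
    "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload) "
    + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)",
    config.getSegmentsTable(), connector.getQuoteString());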

Aggregations

Handle (org.skife.jdbi.v2.Handle): 23
IOException (java.io.IOException): 12
SQLException (java.sql.SQLException): 7
ArrayList (java.util.ArrayList): 7
Map (java.util.Map): 7
List (java.util.List): 6
DataSegment (org.apache.druid.timeline.DataSegment): 5
CallbackFailedException (org.skife.jdbi.v2.exceptions.CallbackFailedException): 5
Test (org.junit.Test): 4
JsonProcessingException (com.fasterxml.jackson.core.JsonProcessingException): 3
ImmutableList (com.google.common.collect.ImmutableList): 3
ResultSet (java.sql.ResultSet): 3
Interval (org.joda.time.Interval): 3
DBI (org.skife.jdbi.v2.DBI): 3
IDBI (org.skife.jdbi.v2.IDBI): 3
Query (org.skife.jdbi.v2.Query): 3
StatementContext (org.skife.jdbi.v2.StatementContext): 3
ObjectMapper (com.fasterxml.jackson.databind.ObjectMapper): 2
VisibleForTesting (com.google.common.annotations.VisibleForTesting): 2
AbstractModule (com.google.inject.AbstractModule): 2