
Example 26 with DateTime

Use of org.joda.time.DateTime in project druid by druid-io.

In class IncrementalIndexTest, method testDuplicateDimensionsFirstOccurrence.

@Test(expected = ISE.class)
public void testDuplicateDimensionsFirstOccurrence() throws IndexSizeExceededException {
    IncrementalIndex index = closer.closeLater(indexCreator.createIndex());
    index.add(
        new MapBasedInputRow(
            new DateTime().minus(1).getMillis(),
            Lists.newArrayList("billy", "joe", "joe"),
            ImmutableMap.<String, Object>of("billy", "A", "joe", "B")
        )
    );
}
Also used: MapBasedInputRow (io.druid.data.input.MapBasedInputRow), DateTime (org.joda.time.DateTime), Test (org.junit.Test)

Example 27 with DateTime

Use of org.joda.time.DateTime in project druid by druid-io.

In class IncrementalIndexTest, method testDuplicateDimensions.

@Test(expected = ISE.class)
public void testDuplicateDimensions() throws IndexSizeExceededException {
    IncrementalIndex index = closer.closeLater(indexCreator.createIndex());
    index.add(
        new MapBasedInputRow(
            new DateTime().minus(1).getMillis(),
            Lists.newArrayList("billy", "joe"),
            ImmutableMap.<String, Object>of("billy", "A", "joe", "B")
        )
    );
    index.add(
        new MapBasedInputRow(
            new DateTime().minus(1).getMillis(),
            Lists.newArrayList("billy", "joe", "joe"),
            ImmutableMap.<String, Object>of("billy", "A", "joe", "B")
        )
    );
}
Also used: MapBasedInputRow (io.druid.data.input.MapBasedInputRow), DateTime (org.joda.time.DateTime), Test (org.junit.Test)
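One detail worth noting in these tests: Joda-Time's DateTime.minus(long) interprets its argument as a duration in milliseconds, so new DateTime().minus(1) is one millisecond before now, not one day or one second. A minimal sketch of the same arithmetic, using java.time as a stand-in in case Joda-Time is not on the classpath:

```java
import java.time.Instant;

public class MinusMillisDemo {
    public static void main(String[] args) {
        // Joda's DateTime.minus(1) subtracts one millisecond;
        // Instant.minusMillis(1) is the java.time analogue.
        Instant now = Instant.ofEpochMilli(1_000_000L);
        Instant oneMsEarlier = now.minusMillis(1);
        System.out.println(oneMsEarlier.toEpochMilli()); // 999999
    }
}
```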

Example 28 with DateTime

Use of org.joda.time.DateTime in project druid by druid-io.

In class IncrementalIndexTest, method testNullDimensionTransform.

@Test
public void testNullDimensionTransform() throws IndexSizeExceededException {
    IncrementalIndex<?> index = closer.closeLater(indexCreator.createIndex());
    index.add(
        new MapBasedInputRow(
            new DateTime().minus(1).getMillis(),
            Lists.newArrayList("string", "float", "long"),
            ImmutableMap.<String, Object>of(
                "string", Arrays.asList("A", null, ""),
                "float", Arrays.asList(Float.MAX_VALUE, null, ""),
                "long", Arrays.asList(Long.MIN_VALUE, null, "")
            )
        )
    );
    Row row = index.iterator().next();
    Assert.assertEquals(Arrays.asList("", "", "A"), row.getRaw("string"));
    Assert.assertEquals(Arrays.asList("", "", String.valueOf(Float.MAX_VALUE)), row.getRaw("float"));
    Assert.assertEquals(Arrays.asList("", "", String.valueOf(Long.MIN_VALUE)), row.getRaw("long"));
}
Also used: MapBasedInputRow (io.druid.data.input.MapBasedInputRow), Row (io.druid.data.input.Row), DateTime (org.joda.time.DateTime), Test (org.junit.Test)
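The assertions above capture the transform the incremental index applies to multi-value dimensions: nulls become empty strings, non-string values are stringified, and values come back in sorted order. A rough stdlib-only illustration of that observed behavior (normalize is a hypothetical helper for this sketch, not Druid's actual implementation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class DimensionTransformSketch {
    // Hypothetical helper mirroring what the test asserts:
    // null -> "", everything else stringified, result sorted lexicographically.
    static List<String> normalize(List<?> values) {
        List<String> out = new ArrayList<>();
        for (Object v : values) {
            out.add(v == null ? "" : String.valueOf(v));
        }
        Collections.sort(out);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(normalize(Arrays.asList("A", null, ""))); // [, , A]
    }
}
```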

Example 29 with DateTime

Use of org.joda.time.DateTime in project druid by druid-io.

In class IndexerSQLMetadataStorageCoordinator, method announceHistoricalSegment.

/**
   * Attempts to insert a single segment into the database. If the segment already exists, this does nothing.
   * The existence check is imperfect, however, so callers must be prepared to retry their entire transaction
   * on exceptions.
   *
   * @return true if the segment was added, false if it already existed
   */
private boolean announceHistoricalSegment(final Handle handle, final DataSegment segment, final boolean used) throws IOException {
    try {
        if (segmentExists(handle, segment)) {
            log.info("Found [%s] in DB, not updating DB", segment.getIdentifier());
            return false;
        }
        // SELECT -> INSERT can fail due to races; callers must be prepared to retry.
        // Avoiding ON DUPLICATE KEY since it's not portable.
        // Avoiding try/catch since it may cause inadvertent transaction-splitting.
        handle.createStatement(
            String.format(
                "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload) "
                    + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)",
                dbTables.getSegmentsTable(),
                connector.getQuoteString()
            )
        )
            .bind("id", segment.getIdentifier())
            .bind("dataSource", segment.getDataSource())
            .bind("created_date", new DateTime().toString())
            .bind("start", segment.getInterval().getStart().toString())
            .bind("end", segment.getInterval().getEnd().toString())
            .bind("partitioned", !(segment.getShardSpec() instanceof NoneShardSpec))
            .bind("version", segment.getVersion())
            .bind("used", used)
            .bind("payload", jsonMapper.writeValueAsBytes(segment))
            .execute();
        log.info("Published segment [%s] to DB", segment.getIdentifier());
    } catch (Exception e) {
        log.error(e, "Exception inserting segment [%s] into DB", segment.getIdentifier());
        throw e;
    }
    return true;
}
Also used: NoneShardSpec (io.druid.timeline.partition.NoneShardSpec), DateTime (org.joda.time.DateTime), SQLException (java.sql.SQLException), IOException (java.io.IOException), CallbackFailedException (org.skife.jdbi.v2.exceptions.CallbackFailedException)
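The comments in this method call out a real race: the SELECT in segmentExists and the subsequent INSERT are not atomic, and since ON DUPLICATE KEY is not portable across databases, the fix is to push retries onto the caller. A sketch of what such a caller-side retry loop might look like; runWithRetries is a hypothetical helper, not a Druid API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

public class TransactionRetrySketch {
    // Hypothetical caller-side retry: re-run the whole transaction on any
    // exception (e.g. a duplicate-key error from a concurrent INSERT),
    // up to maxAttempts times, with a little jittered backoff between tries.
    static <T> T runWithRetries(Callable<T> transaction, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return transaction.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(ThreadLocalRandom.current().nextInt(1, 10));
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails on the first attempt, succeeds on the second.
        boolean added = runWithRetries(() -> {
            if (calls[0]++ == 0) {
                throw new RuntimeException("simulated duplicate-key race");
            }
            return true;
        }, 3);
        System.out.println(added + " after " + calls[0] + " attempts"); // true after 2 attempts
    }
}
```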

Example 30 with DateTime

Use of org.joda.time.DateTime in project druid by druid-io.

In class IndexerSQLMetadataStorageCoordinator, method updateDataSourceMetadataWithHandle.

/**
   * Compare-and-swap dataSource metadata in a transaction. This will only modify dataSource metadata if it equals
   * oldCommitMetadata when this function is called (based on T.equals). This method is idempotent in that if
   * the metadata already equals newCommitMetadata, it will still report success.
   *
   * @param handle        database handle
   * @param dataSource    druid dataSource
   * @param startMetadata dataSource metadata pre-insert must match this startMetadata according to
   *                      {@link DataSourceMetadata#matches(DataSourceMetadata)}
   * @param endMetadata   dataSource metadata post-insert will have this endMetadata merged in with
   *                      {@link DataSourceMetadata#plus(DataSourceMetadata)}
   *
   * @return SUCCESS if dataSource metadata was updated from matching startMetadata to matching endMetadata;
   *         FAILURE if the existing state did not match startMetadata; TRY_AGAIN on a lost compare-and-swap race
   */
protected DataSourceMetadataUpdateResult updateDataSourceMetadataWithHandle(final Handle handle, final String dataSource, final DataSourceMetadata startMetadata, final DataSourceMetadata endMetadata) throws IOException {
    Preconditions.checkNotNull(dataSource, "dataSource");
    Preconditions.checkNotNull(startMetadata, "startMetadata");
    Preconditions.checkNotNull(endMetadata, "endMetadata");
    final byte[] oldCommitMetadataBytesFromDb = getDataSourceMetadataWithHandleAsBytes(handle, dataSource);
    final String oldCommitMetadataSha1FromDb;
    final DataSourceMetadata oldCommitMetadataFromDb;
    if (oldCommitMetadataBytesFromDb == null) {
        oldCommitMetadataSha1FromDb = null;
        oldCommitMetadataFromDb = null;
    } else {
        oldCommitMetadataSha1FromDb = BaseEncoding.base16().encode(Hashing.sha1().hashBytes(oldCommitMetadataBytesFromDb).asBytes());
        oldCommitMetadataFromDb = jsonMapper.readValue(oldCommitMetadataBytesFromDb, DataSourceMetadata.class);
    }
    final boolean startMetadataMatchesExisting = oldCommitMetadataFromDb == null ? startMetadata.isValidStart() : startMetadata.matches(oldCommitMetadataFromDb);
    if (!startMetadataMatchesExisting) {
        // Not in the desired start state.
        log.info("Not updating metadata, existing state is not the expected start state.");
        return DataSourceMetadataUpdateResult.FAILURE;
    }
    final DataSourceMetadata newCommitMetadata = oldCommitMetadataFromDb == null ? endMetadata : oldCommitMetadataFromDb.plus(endMetadata);
    final byte[] newCommitMetadataBytes = jsonMapper.writeValueAsBytes(newCommitMetadata);
    final String newCommitMetadataSha1 = BaseEncoding.base16().encode(Hashing.sha1().hashBytes(newCommitMetadataBytes).asBytes());
    final DataSourceMetadataUpdateResult retVal;
    if (oldCommitMetadataBytesFromDb == null) {
        // SELECT -> INSERT can fail due to races; callers must be prepared to retry.
        final int numRows = handle.createStatement(
            String.format(
                "INSERT INTO %s (dataSource, created_date, commit_metadata_payload, commit_metadata_sha1) "
                    + "VALUES (:dataSource, :created_date, :commit_metadata_payload, :commit_metadata_sha1)",
                dbTables.getDataSourceTable()
            )
        )
            .bind("dataSource", dataSource)
            .bind("created_date", new DateTime().toString())
            .bind("commit_metadata_payload", newCommitMetadataBytes)
            .bind("commit_metadata_sha1", newCommitMetadataSha1)
            .execute();
        retVal = numRows == 1 ? DataSourceMetadataUpdateResult.SUCCESS : DataSourceMetadataUpdateResult.TRY_AGAIN;
    } else {
        // Expecting a particular old metadata; use its SHA-1 in a compare-and-swap UPDATE.
        final int numRows = handle.createStatement(
            String.format(
                "UPDATE %s SET "
                    + "commit_metadata_payload = :new_commit_metadata_payload, "
                    + "commit_metadata_sha1 = :new_commit_metadata_sha1 "
                    + "WHERE dataSource = :dataSource AND commit_metadata_sha1 = :old_commit_metadata_sha1",
                dbTables.getDataSourceTable()
            )
        )
            .bind("dataSource", dataSource)
            .bind("old_commit_metadata_sha1", oldCommitMetadataSha1FromDb)
            .bind("new_commit_metadata_payload", newCommitMetadataBytes)
            .bind("new_commit_metadata_sha1", newCommitMetadataSha1)
            .execute();
        retVal = numRows == 1 ? DataSourceMetadataUpdateResult.SUCCESS : DataSourceMetadataUpdateResult.TRY_AGAIN;
    }
    if (retVal == DataSourceMetadataUpdateResult.SUCCESS) {
        log.info("Updated metadata from[%s] to[%s].", oldCommitMetadataFromDb, newCommitMetadata);
    } else {
        log.info("Not updating metadata, compare-and-swap failure.");
    }
    return retVal;
}
Also used: DataSourceMetadata (io.druid.indexing.overlord.DataSourceMetadata), DateTime (org.joda.time.DateTime)
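The compare-and-swap here piggybacks on SQL: the UPDATE's WHERE clause matches the previously stored SHA-1, so a concurrent writer that changed the row leaves numRows at 0 and the method returns TRY_AGAIN. The hash itself is just an uppercase hex SHA-1 of the payload bytes (Guava's BaseEncoding.base16() encodes uppercase). A standalone sketch of that digest step using only the JDK; sha1Hex is a hypothetical helper:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class CommitMetadataSha1Sketch {
    // Uppercase hex SHA-1 of the payload bytes, equivalent to
    // BaseEncoding.base16().encode(Hashing.sha1().hashBytes(bytes).asBytes()).
    static String sha1Hex(byte[] bytes) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(bytes);
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = "abc".getBytes(StandardCharsets.UTF_8);
        // "abc" is the classic SHA-1 test vector.
        System.out.println(sha1Hex(payload)); // A9993E364706816ABA3E25717850C26C9CD0D89D
    }
}
```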

Aggregations

DateTime (org.joda.time.DateTime): 3381
Test (org.junit.Test): 1000
Test (org.testng.annotations.Test): 499
DateTimeRfc1123 (com.microsoft.rest.DateTimeRfc1123): 349
ResponseBody (okhttp3.ResponseBody): 332
ArrayList (java.util.ArrayList): 299
LocalDate (org.joda.time.LocalDate): 256
Date (java.util.Date): 239
Interval (org.joda.time.Interval): 200
Result (io.druid.query.Result): 153
ServiceCall (com.microsoft.rest.ServiceCall): 148
HashMap (java.util.HashMap): 144
BigDecimal (java.math.BigDecimal): 132
List (java.util.List): 131
DateTimeZone (org.joda.time.DateTimeZone): 127
LocalDateTime (org.joda.time.LocalDateTime): 98
UUID (java.util.UUID): 93
DateTimeFormatter (org.joda.time.format.DateTimeFormatter): 88
IOException (java.io.IOException): 85
Map (java.util.Map): 85