
Example 1 with DataChangeRecord

use of org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord in project beam by apache.

the class QueryChangeStreamAction method run.

/**
 * This method dispatches a change stream query for the given partition, delegates the
 * processing of the returned records to the corresponding registered action classes, and
 * keeps the state of the partition up to date in the Connector's metadata table.
 *
 * <p>The algorithm is as follows:
 *
 * <ol>
 *   <li>A change stream query for the partition is performed.
 *   <li>For each record, we check the type of the record and dispatch the processing to one of
 *       the actions registered.
 *   <li>If an {@link Optional} with a {@link ProcessContinuation#stop()} is returned from the
 *       actions, we stop processing and return.
 *   <li>Before returning we register a bundle finalizer callback to update the watermark of the
 *       partition in the metadata tables to the latest processed timestamp.
 *   <li>When a change stream query finishes successfully (no more records) we update the
 *       partition state to FINISHED.
 * </ol>
 *
 * There might be cases where, due to a split at the exact end timestamp of a partition's change
 * stream query, this function processes a residual with an invalid timestamp. In this case,
 * the error is ignored and no work is done for the residual.
 *
 * @param partition the current partition being processed
 * @param tracker the restriction tracker of the {@link
 *     org.apache.beam.sdk.io.gcp.spanner.changestreams.dofn.ReadChangeStreamPartitionDoFn} SDF
 * @param receiver the output receiver of the {@link
 *     org.apache.beam.sdk.io.gcp.spanner.changestreams.dofn.ReadChangeStreamPartitionDoFn} SDF
 * @param watermarkEstimator the watermark estimator of the {@link
 *     org.apache.beam.sdk.io.gcp.spanner.changestreams.dofn.ReadChangeStreamPartitionDoFn} SDF
 * @param bundleFinalizer the bundle finalizer for {@link
 *     org.apache.beam.sdk.io.gcp.spanner.changestreams.dofn.ReadChangeStreamPartitionDoFn} SDF
 *     bundles
 * @return a {@link ProcessContinuation#stop()} if a record timestamp could not be claimed or if
 *     the partition processing has finished
 */
@SuppressWarnings("nullness")
@VisibleForTesting
public ProcessContinuation run(
        PartitionMetadata partition,
        RestrictionTracker<OffsetRange, Long> tracker,
        OutputReceiver<DataChangeRecord> receiver,
        ManualWatermarkEstimator<Instant> watermarkEstimator,
        BundleFinalizer bundleFinalizer) {
    final String token = partition.getPartitionToken();
    final Timestamp endTimestamp = partition.getEndTimestamp();
    /*
     * FIXME(b/202802422): Workaround until the backend is fixed.
     * The change stream API returns invalid argument if we try to use a child partition start
     * timestamp for a previously returned query. If we split at that exact time, we won't be able
     * to obtain the child partition on the residual restriction, since it will start at the child
     * partition start time.
     * To circumvent this, we always start querying one microsecond before the restriction start
     * time, and ignore any records that are before the restriction start time. This way the child
     * partition should be returned within the query.
     */
    final Timestamp restrictionStartTimestamp = Timestamp.ofTimeMicroseconds(tracker.currentRestriction().getFrom());
    final Timestamp previousStartTimestamp = Timestamp.ofTimeMicroseconds(TimestampConverter.timestampToMicros(restrictionStartTimestamp) - 1);
    final boolean isFirstRun = restrictionStartTimestamp.compareTo(partition.getStartTimestamp()) == 0;
    final Timestamp startTimestamp = isFirstRun ? restrictionStartTimestamp : previousStartTimestamp;
    try (Scope scope = TRACER.spanBuilder("QueryChangeStreamAction").setRecordEvents(true).startScopedSpan()) {
        TRACER.getCurrentSpan().putAttribute(PARTITION_ID_ATTRIBUTE_LABEL, AttributeValue.stringAttributeValue(token));
        // TODO: Potentially we can avoid this fetch, by enriching the runningAt timestamp when the
        // ReadChangeStreamPartitionDoFn#processElement is called
        final PartitionMetadata updatedPartition = Optional.ofNullable(partitionMetadataDao.getPartition(token)).map(partitionMetadataMapper::from).orElseThrow(() -> new IllegalStateException("Partition " + token + " not found in metadata table"));
        try (ChangeStreamResultSet resultSet = changeStreamDao.changeStreamQuery(token, startTimestamp, endTimestamp, partition.getHeartbeatMillis())) {
            while (resultSet.next()) {
                final List<ChangeStreamRecord> records = changeStreamRecordMapper.toChangeStreamRecords(updatedPartition, resultSet.getCurrentRowAsStruct(), resultSet.getMetadata());
                Optional<ProcessContinuation> maybeContinuation;
                for (final ChangeStreamRecord record : records) {
                    if (record.getRecordTimestamp().compareTo(restrictionStartTimestamp) < 0) {
                        continue;
                    }
                    if (record instanceof DataChangeRecord) {
                        maybeContinuation = dataChangeRecordAction.run(updatedPartition, (DataChangeRecord) record, tracker, receiver, watermarkEstimator);
                    } else if (record instanceof HeartbeatRecord) {
                        maybeContinuation = heartbeatRecordAction.run(updatedPartition, (HeartbeatRecord) record, tracker, watermarkEstimator);
                    } else if (record instanceof ChildPartitionsRecord) {
                        maybeContinuation = childPartitionsRecordAction.run(updatedPartition, (ChildPartitionsRecord) record, tracker, watermarkEstimator);
                    } else {
                        LOG.error("[" + token + "] Unknown record type " + record.getClass());
                        throw new IllegalArgumentException("Unknown record type " + record.getClass());
                    }
                    if (maybeContinuation.isPresent()) {
                        LOG.debug("[" + token + "] Continuation present, returning " + maybeContinuation);
                        bundleFinalizer.afterBundleCommit(Instant.now().plus(BUNDLE_FINALIZER_TIMEOUT), updateWatermarkCallback(token, watermarkEstimator));
                        return maybeContinuation.get();
                    }
                }
            }
            bundleFinalizer.afterBundleCommit(Instant.now().plus(BUNDLE_FINALIZER_TIMEOUT), updateWatermarkCallback(token, watermarkEstimator));
        } catch (SpannerException e) {
            if (isTimestampOutOfRange(e)) {
                LOG.debug("[" + token + "] query change stream is out of range for " + startTimestamp + " to " + endTimestamp + ", finishing stream");
            } else {
                throw e;
            }
        }
    }
    final long endMicros = TimestampConverter.timestampToMicros(endTimestamp);
    LOG.debug("[" + token + "] change stream completed successfully");
    if (tracker.tryClaim(endMicros)) {
        LOG.debug("[" + token + "] Finishing partition");
        partitionMetadataDao.updateToFinished(token);
        LOG.info("[" + token + "] Partition finished");
    }
    return ProcessContinuation.stop();
}
Also used : DataChangeRecord(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord) HeartbeatRecord(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.HeartbeatRecord) Timestamp(com.google.cloud.Timestamp) ProcessContinuation(org.apache.beam.sdk.transforms.DoFn.ProcessContinuation) ChangeStreamResultSet(org.apache.beam.sdk.io.gcp.spanner.changestreams.dao.ChangeStreamResultSet) Scope(io.opencensus.common.Scope) PartitionMetadata(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.PartitionMetadata) ChildPartitionsRecord(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.ChildPartitionsRecord) SpannerException(com.google.cloud.spanner.SpannerException) ChangeStreamRecord(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.ChangeStreamRecord) VisibleForTesting(org.apache.beam.vendor.guava.v26_0_jre.com.google.common.annotations.VisibleForTesting)
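
The one-microsecond workaround described in the FIXME comment above is the subtle part of this method. The snippet below is a minimal standalone sketch of that timestamp arithmetic; the class and helper names are hypothetical, and the micros conversion stands in for Beam's TimestampConverter.timestampToMicros.

import com.google.cloud.Timestamp;

// Standalone sketch of the restriction start-timestamp workaround (hypothetical names).
public class StartTimestampWorkaroundSketch {

    // Stand-in for TimestampConverter.timestampToMicros: microseconds since epoch.
    static long toMicros(Timestamp ts) {
        return ts.getSeconds() * 1_000_000L + ts.getNanos() / 1_000L;
    }

    // First run: query from the restriction start as-is. Resumed (residual) run: back off
    // by one microsecond so a child partition record at the split point is still returned.
    static Timestamp queryStartTimestamp(Timestamp restrictionStart, Timestamp partitionStart) {
        final boolean isFirstRun = restrictionStart.compareTo(partitionStart) == 0;
        return isFirstRun
                ? restrictionStart
                : Timestamp.ofTimeMicroseconds(toMicros(restrictionStart) - 1);
    }

    // Records returned by the widened query that predate the restriction start are skipped,
    // mirroring the `continue` in the record loop above.
    static boolean shouldSkip(Timestamp recordTimestamp, Timestamp restrictionStart) {
        return recordTimestamp.compareTo(restrictionStart) < 0;
    }
}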

Example 2 with DataChangeRecord

use of org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord in project beam by apache.

the class ChangeStreamRecordMapperTest method testMappingInsertStructRowToDataChangeRecord.

@Test
public void testMappingInsertStructRowToDataChangeRecord() {
    final DataChangeRecord dataChangeRecord =
            new DataChangeRecord(
                    "partitionToken",
                    Timestamp.ofTimeSecondsAndNanos(10L, 20),
                    "transactionId",
                    false,
                    "1",
                    "tableName",
                    Arrays.asList(
                            new ColumnType("column1", new TypeCode("type1"), true, 1L),
                            new ColumnType("column2", new TypeCode("type2"), false, 2L)),
                    Collections.singletonList(
                            new Mod("{\"column1\": \"value1\"}", null, "{\"column2\": \"newValue2\"}")),
                    ModType.INSERT,
                    ValueCaptureType.OLD_AND_NEW_VALUES,
                    10L,
                    2L,
                    null);
    final Struct stringFieldsStruct = recordsToStructWithStrings(dataChangeRecord);
    final Struct jsonFieldsStruct = recordsToStructWithJson(dataChangeRecord);
    assertEquals(Collections.singletonList(dataChangeRecord), mapper.toChangeStreamRecords(partition, stringFieldsStruct, resultSetMetadata));
    assertEquals(Collections.singletonList(dataChangeRecord), mapper.toChangeStreamRecords(partition, jsonFieldsStruct, resultSetMetadata));
}
Also used : ColumnType(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.ColumnType) Mod(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.Mod) TypeCode(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.TypeCode) DataChangeRecord(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord) Struct(com.google.cloud.spanner.Struct) Test(org.junit.Test)

Example 3 with DataChangeRecord

use of org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord in project beam by apache.

the class ChangeStreamRecordMapperTest method testMappingDeleteStructRowToDataChangeRecord.

@Test
public void testMappingDeleteStructRowToDataChangeRecord() {
    final DataChangeRecord dataChangeRecord =
            new DataChangeRecord(
                    "partitionToken",
                    Timestamp.ofTimeSecondsAndNanos(10L, 20),
                    "transactionId",
                    false,
                    "1",
                    "tableName",
                    Arrays.asList(
                            new ColumnType("column1", new TypeCode("type1"), true, 1L),
                            new ColumnType("column2", new TypeCode("type2"), false, 2L)),
                    Collections.singletonList(
                            new Mod("{\"column1\": \"value1\"}", "{\"column2\": \"oldValue2\"}", null)),
                    ModType.DELETE,
                    ValueCaptureType.OLD_AND_NEW_VALUES,
                    10L,
                    2L,
                    null);
    final Struct stringFieldsStruct = recordsToStructWithStrings(dataChangeRecord);
    final Struct jsonFieldsStruct = recordsToStructWithJson(dataChangeRecord);
    assertEquals(Collections.singletonList(dataChangeRecord), mapper.toChangeStreamRecords(partition, stringFieldsStruct, resultSetMetadata));
    assertEquals(Collections.singletonList(dataChangeRecord), mapper.toChangeStreamRecords(partition, jsonFieldsStruct, resultSetMetadata));
}
Also used : ColumnType(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.ColumnType) Mod(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.Mod) TypeCode(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.TypeCode) DataChangeRecord(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord) Struct(com.google.cloud.spanner.Struct) Test(org.junit.Test)

Example 4 with DataChangeRecord

use of org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord in project beam by apache.

the class ChangeStreamRecordMapperTest method testMappingUpdateStructRowToDataChangeRecord.

@Test
public void testMappingUpdateStructRowToDataChangeRecord() {
    final DataChangeRecord dataChangeRecord =
            new DataChangeRecord(
                    "partitionToken",
                    Timestamp.ofTimeSecondsAndNanos(10L, 20),
                    "serverTransactionId",
                    true,
                    "1",
                    "tableName",
                    Arrays.asList(
                            new ColumnType("column1", new TypeCode("type1"), true, 1L),
                            new ColumnType("column2", new TypeCode("type2"), false, 2L)),
                    Collections.singletonList(
                            new Mod(
                                    "{\"column1\": \"value1\"}",
                                    "{\"column2\": \"oldValue2\"}",
                                    "{\"column2\": \"newValue2\"}")),
                    ModType.UPDATE,
                    ValueCaptureType.OLD_AND_NEW_VALUES,
                    10L,
                    2L,
                    null);
    final Struct stringFieldsStruct = recordsToStructWithStrings(dataChangeRecord);
    final Struct jsonFieldsStruct = recordsToStructWithJson(dataChangeRecord);
    assertEquals(Collections.singletonList(dataChangeRecord), mapper.toChangeStreamRecords(partition, stringFieldsStruct, resultSetMetadata));
    assertEquals(Collections.singletonList(dataChangeRecord), mapper.toChangeStreamRecords(partition, jsonFieldsStruct, resultSetMetadata));
}
Also used : ColumnType(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.ColumnType) Mod(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.Mod) TypeCode(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.TypeCode) DataChangeRecord(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord) Struct(com.google.cloud.spanner.Struct) Test(org.junit.Test)
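
Taken together, Examples 2 through 4 show how the Mod payload varies with the ModType: INSERT carries only new values, DELETE only old values, and UPDATE both. The sketch below merely restates that pattern using the same constructors as the tests above; it is illustrative, not part of the mapper.

import java.util.Collections;
import java.util.List;
import org.apache.beam.sdk.io.gcp.spanner.changestreams.model.Mod;

// Illustrative summary of the Mod shapes used in the mapper tests above.
// Mod takes (keysJson, oldValuesJson, newValuesJson); the unused side is null.
public class ModShapes {

    // INSERT: there is no previous row image, so old values are null.
    static List<Mod> insertMods() {
        return Collections.singletonList(
                new Mod("{\"column1\": \"value1\"}", null, "{\"column2\": \"newValue2\"}"));
    }

    // DELETE: there is no new row image, so new values are null.
    static List<Mod> deleteMods() {
        return Collections.singletonList(
                new Mod("{\"column1\": \"value1\"}", "{\"column2\": \"oldValue2\"}", null));
    }

    // UPDATE: both the old and the new values are present.
    static List<Mod> updateMods() {
        return Collections.singletonList(
                new Mod(
                        "{\"column1\": \"value1\"}",
                        "{\"column2\": \"oldValue2\"}",
                        "{\"column2\": \"newValue2\"}"));
    }
}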

Example 5 with DataChangeRecord

use of org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord in project beam by apache.

the class DataChangeRecordActionTest method testRestrictionNotClaimed.

@Test
public void testRestrictionNotClaimed() {
    final String partitionToken = "partitionToken";
    final Timestamp timestamp = Timestamp.ofTimeMicroseconds(10L);
    final DataChangeRecord record = mock(DataChangeRecord.class);
    when(record.getCommitTimestamp()).thenReturn(timestamp);
    when(tracker.tryClaim(10L)).thenReturn(false);
    when(partition.getPartitionToken()).thenReturn(partitionToken);
    final Optional<ProcessContinuation> maybeContinuation = action.run(partition, record, tracker, outputReceiver, watermarkEstimator);
    assertEquals(Optional.of(ProcessContinuation.stop()), maybeContinuation);
    verify(outputReceiver, never()).outputWithTimestamp(any(), any());
    verify(watermarkEstimator, never()).setWatermark(any());
}
Also used : DataChangeRecord(org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord) Timestamp(com.google.cloud.Timestamp) ProcessContinuation(org.apache.beam.sdk.transforms.DoFn.ProcessContinuation) Test(org.junit.Test)
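
For contrast with the not-claimed case above, the happy path would let tryClaim succeed, emit the record, and advance the watermark. The test below is a sketch of that counterpart, assuming the same mocked fields (action, partition, tracker, outputReceiver, watermarkEstimator) as the test above; it is not copied from the Beam test suite.

@Test
public void testRestrictionClaimedSketch() {
    // Hypothetical counterpart to testRestrictionNotClaimed, assuming the same mock setup.
    final String partitionToken = "partitionToken";
    final Timestamp timestamp = Timestamp.ofTimeMicroseconds(10L);
    final Instant instant = new Instant(timestamp.toSqlTimestamp().getTime());
    final DataChangeRecord record = mock(DataChangeRecord.class);
    when(record.getCommitTimestamp()).thenReturn(timestamp);
    when(tracker.tryClaim(10L)).thenReturn(true);
    when(partition.getPartitionToken()).thenReturn(partitionToken);
    final Optional<ProcessContinuation> maybeContinuation =
            action.run(partition, record, tracker, outputReceiver, watermarkEstimator);
    // With the claim accepted, the action outputs the record at its commit timestamp,
    // advances the watermark, and signals that processing should continue.
    assertEquals(Optional.empty(), maybeContinuation);
    verify(outputReceiver).outputWithTimestamp(record, instant);
    verify(watermarkEstimator).setWatermark(instant);
}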

Aggregations

DataChangeRecord (org.apache.beam.sdk.io.gcp.spanner.changestreams.model.DataChangeRecord): 7 usages
Test (org.junit.Test): 6 usages
Struct (com.google.cloud.spanner.Struct): 4 usages
ProcessContinuation (org.apache.beam.sdk.transforms.DoFn.ProcessContinuation): 4 usages
Timestamp (com.google.cloud.Timestamp): 3 usages
ColumnType (org.apache.beam.sdk.io.gcp.spanner.changestreams.model.ColumnType): 3 usages
Mod (org.apache.beam.sdk.io.gcp.spanner.changestreams.model.Mod): 3 usages
TypeCode (org.apache.beam.sdk.io.gcp.spanner.changestreams.model.TypeCode): 3 usages
ChangeStreamResultSet (org.apache.beam.sdk.io.gcp.spanner.changestreams.dao.ChangeStreamResultSet): 2 usages
SpannerException (com.google.cloud.spanner.SpannerException): 1 usage
Scope (io.opencensus.common.Scope): 1 usage
ChangeStreamResultSetMetadata (org.apache.beam.sdk.io.gcp.spanner.changestreams.dao.ChangeStreamResultSetMetadata): 1 usage
ChangeStreamRecord (org.apache.beam.sdk.io.gcp.spanner.changestreams.model.ChangeStreamRecord): 1 usage
ChildPartitionsRecord (org.apache.beam.sdk.io.gcp.spanner.changestreams.model.ChildPartitionsRecord): 1 usage
HeartbeatRecord (org.apache.beam.sdk.io.gcp.spanner.changestreams.model.HeartbeatRecord): 1 usage
PartitionMetadata (org.apache.beam.sdk.io.gcp.spanner.changestreams.model.PartitionMetadata): 1 usage
VisibleForTesting (org.apache.beam.vendor.guava.v26_0_jre.com.google.common.annotations.VisibleForTesting): 1 usage
Instant (org.joda.time.Instant): 1 usage