
Example 1 with DropSourceCommand

Use of io.confluent.ksql.execution.ddl.commands.DropSourceCommand in project ksql by confluentinc.

From the class DdlCommandExecTest, the method shouldDropStreamIfConstraintExistsAndRestoreIsInProgress:

@Test
public void shouldDropStreamIfConstraintExistsAndRestoreIsInProgress() {
    // Given:
    final CreateStreamCommand stream1 = buildCreateStream(SourceName.of("s1"), SCHEMA, false, false);
    final CreateStreamCommand stream2 = buildCreateStream(SourceName.of("s2"), SCHEMA, false, false);
    final CreateStreamCommand stream3 = buildCreateStream(SourceName.of("s3"), SCHEMA, false, false);
    cmdExec.execute(SQL_TEXT, stream1, true, Collections.emptySet());
    cmdExec.execute(SQL_TEXT, stream2, true, Collections.singleton(SourceName.of("s1")));
    cmdExec.execute(SQL_TEXT, stream3, true, Collections.singleton(SourceName.of("s1")));
    // When:
    final DropSourceCommand dropStream = buildDropSourceCommand(SourceName.of("s1"));
    final DdlCommandResult result = cmdExec.execute(SQL_TEXT, dropStream, false, Collections.emptySet(), true);
    // Then:
    assertThat(result.isSuccess(), is(true));
    assertThat(result.getMessage(), equalTo(String.format("Source %s (topic: %s) was dropped.", STREAM_NAME, TOPIC_NAME)));
}
Also used : DdlCommandResult(io.confluent.ksql.execution.ddl.commands.DdlCommandResult) CreateStreamCommand(io.confluent.ksql.execution.ddl.commands.CreateStreamCommand) DropSourceCommand(io.confluent.ksql.execution.ddl.commands.DropSourceCommand) Test(org.junit.Test)

Example 2 with DropSourceCommand

Use of io.confluent.ksql.execution.ddl.commands.DropSourceCommand in project ksql by confluentinc.

From the class DdlCommandExecTest, the method shouldThrowOnDropTableWhenConstraintExist:

@Test
public void shouldThrowOnDropTableWhenConstraintExist() {
    // Given:
    final CreateTableCommand table1 = buildCreateTable(SourceName.of("t1"), false, false);
    final CreateTableCommand table2 = buildCreateTable(SourceName.of("t2"), false, false);
    final CreateTableCommand table3 = buildCreateTable(SourceName.of("t3"), false, false);
    cmdExec.execute(SQL_TEXT, table1, true, Collections.emptySet());
    cmdExec.execute(SQL_TEXT, table2, true, Collections.singleton(SourceName.of("t1")));
    cmdExec.execute(SQL_TEXT, table3, true, Collections.singleton(SourceName.of("t1")));
    // When:
    final DropSourceCommand dropTable = buildDropSourceCommand(SourceName.of("t1"));
    final Exception e = assertThrows(KsqlReferentialIntegrityException.class, () -> cmdExec.execute(SQL_TEXT, dropTable, false, Collections.emptySet()));
    // Then:
    assertThat(e.getMessage(), containsString("Cannot drop t1."));
    assertThat(e.getMessage(), containsString("The following streams and/or tables read from this source: [t2, t3]."));
    assertThat(e.getMessage(), containsString("You need to drop them before dropping t1."));
}
Also used : DropSourceCommand(io.confluent.ksql.execution.ddl.commands.DropSourceCommand) CreateTableCommand(io.confluent.ksql.execution.ddl.commands.CreateTableCommand) KsqlReferentialIntegrityException(io.confluent.ksql.util.KsqlReferentialIntegrityException) KsqlException(io.confluent.ksql.util.KsqlException) Test(org.junit.Test)
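The two DdlCommandExecTest examples above exercise the same referential-integrity check from both sides: a drop succeeds despite dependents when a restore is in progress, and fails with a message naming the dependents otherwise. A minimal, self-contained sketch of that check (the class and method names here are our own, not ksql's):

```java
import java.util.*;

// Hypothetical simplification of the drop-time constraint check: a registry
// records which sources read from which, and refuses a drop while dependents
// exist -- unless restoreInProgress is set, as in the first test above.
class SourceRegistry {
    private final Map<String, Set<String>> dependents = new HashMap<>();

    void create(String name, Set<String> sourceConstraints) {
        dependents.putIfAbsent(name, new LinkedHashSet<>());
        // Register this source as a dependent of every source it reads from.
        for (String upstream : sourceConstraints) {
            dependents.computeIfAbsent(upstream, k -> new LinkedHashSet<>()).add(name);
        }
    }

    String drop(String name, boolean restoreInProgress) {
        final Set<String> readers = dependents.getOrDefault(name, Collections.emptySet());
        if (!readers.isEmpty() && !restoreInProgress) {
            throw new IllegalStateException(String.format(
                "Cannot drop %s. The following streams and/or tables read from this source: %s. "
                    + "You need to drop them before dropping %s.", name, readers, name));
        }
        dependents.remove(name);
        return String.format("Source %s was dropped.", name);
    }
}
```

With sources t1, t2, t3 registered as in the second test, `drop("t1", false)` throws with `[t2, t3]` in the message, while `drop("t1", true)` succeeds.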

Example 3 with DropSourceCommand

Use of io.confluent.ksql.execution.ddl.commands.DropSourceCommand in project ksql by confluentinc.

From the class RecoveryTest, the method shouldRecoverWhenDropWithSourceConstraintsFoundOnMetastore:

@Test
public void shouldRecoverWhenDropWithSourceConstraintsFoundOnMetastore() {
    // Verify that an upgrade will not be affected if DROP commands are not in order.
    server1.submitCommands("CREATE STREAM A (COLUMN STRING) WITH (KAFKA_TOPIC='A', VALUE_FORMAT='JSON');", "CREATE STREAM B AS SELECT * FROM A;", "INSERT INTO B SELECT * FROM A;");
    // ksqlDB does not allow DROP STREAM A because 'A' is used by 'B'.
    // After a ksqlDB upgrade, however, the commands can appear in this order.
    final Command dropACommand = new Command("DROP STREAM A;", Optional.of(ImmutableMap.of()), Optional.of(ImmutableMap.of()), Optional.of(KsqlPlan.ddlPlanCurrent("DROP STREAM A;", new DropSourceCommand(SourceName.of("A")))), Optional.of(Command.VERSION));
    // Append the DROP STREAM A command manually, so server1 does not fail as it would if the statement went through submitCommands()
    commands.add(new QueuedCommand(InternalTopicSerdes.serializer().serialize("", new CommandId(CommandId.Type.STREAM, "`A`", CommandId.Action.DROP)), InternalTopicSerdes.serializer().serialize("", dropACommand), Optional.empty(), (long) commands.size()));
    final KsqlServer recovered = new KsqlServer(commands);
    recovered.recover();
    // The original server still has streams 'A' and 'B' because the DROP statement was
    // never executed on it; it was only appended to the command topic above
    assertThat(server1.ksqlEngine.getMetaStore().getAllDataSources().size(), is(2));
    assertThat(server1.ksqlEngine.getMetaStore().getAllDataSources(), hasKey(SourceName.of("A")));
    assertThat(server1.ksqlEngine.getMetaStore().getAllDataSources(), hasKey(SourceName.of("B")));
    assertThat(recovered.ksqlEngine.getAllLiveQueries().size(), is(2));
    // The recovered server has only stream 'B': it replayed the CREATE and DROP statements in order
    assertThat(recovered.ksqlEngine.getMetaStore().getAllDataSources().size(), is(1));
    assertThat(recovered.ksqlEngine.getMetaStore().getAllDataSources(), hasKey(SourceName.of("B")));
    assertThat(recovered.ksqlEngine.getAllLiveQueries().size(), is(2));
}
Also used : CreateStreamCommand(io.confluent.ksql.execution.ddl.commands.CreateStreamCommand) DropSourceCommand(io.confluent.ksql.execution.ddl.commands.DropSourceCommand) CommandId(io.confluent.ksql.rest.entity.CommandId) Test(org.junit.Test)

Example 4 with DropSourceCommand

Use of io.confluent.ksql.execution.ddl.commands.DropSourceCommand in project ksql by confluentinc.

From the class RecoveryTest, the method shouldRecoverWhenDropWithSourceConstraintsAndCreateSourceAgainFoundOnMetastore:

@Test
public void shouldRecoverWhenDropWithSourceConstraintsAndCreateSourceAgainFoundOnMetastore() {
    // Verify that an upgrade will not be affected if DROP commands are not in order.
    server1.submitCommands("CREATE STREAM A (COLUMN STRING) WITH (KAFKA_TOPIC='A', VALUE_FORMAT='JSON');", "CREATE STREAM B AS SELECT * FROM A;");
    // ksqlDB does not allow DROP STREAM A because 'A' is used by 'B'.
    // After a ksqlDB upgrade, however, the commands can appear in this order.
    final Command dropACommand = new Command("DROP STREAM A;", Optional.of(ImmutableMap.of()), Optional.of(ImmutableMap.of()), Optional.of(KsqlPlan.ddlPlanCurrent("DROP STREAM A;", new DropSourceCommand(SourceName.of("A")))), Optional.of(Command.VERSION));
    // Append the DROP STREAM A command manually, so server1 does not fail as it would if the statement went through submitCommands()
    commands.add(new QueuedCommand(InternalTopicSerdes.serializer().serialize("", new CommandId(CommandId.Type.STREAM, "`A`", CommandId.Action.DROP)), InternalTopicSerdes.serializer().serialize("", dropACommand), Optional.empty(), (long) commands.size()));
    // Add CREATE STREAM after the DROP again
    final Command createACommand = new Command("CREATE STREAM A (COLUMN STRING) WITH (KAFKA_TOPIC='A', VALUE_FORMAT='JSON');", Optional.of(ImmutableMap.of()), Optional.of(ImmutableMap.of()), Optional.of(KsqlPlan.ddlPlanCurrent("CREATE STREAM A (COLUMN STRING) WITH (KAFKA_TOPIC='A', VALUE_FORMAT='JSON');", new CreateStreamCommand(SourceName.of("A"), LogicalSchema.builder().valueColumn(ColumnName.of("COLUMN"), SqlTypes.STRING).build(), Optional.empty(), "A", Formats.of(KeyFormat.nonWindowed(FormatInfo.of(FormatFactory.KAFKA.name()), SerdeFeatures.of()).getFormatInfo(), ValueFormat.of(FormatInfo.of(FormatFactory.JSON.name()), SerdeFeatures.of()).getFormatInfo(), SerdeFeatures.of(), SerdeFeatures.of()), Optional.empty(), Optional.of(false), Optional.of(false)))), Optional.of(Command.VERSION));
    // Append the CREATE STREAM A command manually, so server1 does not fail as it would if the statement went through submitCommands()
    commands.add(new QueuedCommand(InternalTopicSerdes.serializer().serialize("", new CommandId(CommandId.Type.STREAM, "`A`", CommandId.Action.CREATE)), InternalTopicSerdes.serializer().serialize("", createACommand), Optional.empty(), (long) commands.size()));
    final KsqlServer recovered = new KsqlServer(commands);
    recovered.recover();
    // Original server has both streams
    assertThat(server1.ksqlEngine.getMetaStore().getAllDataSources().size(), is(2));
    assertThat(server1.ksqlEngine.getMetaStore().getAllDataSources(), hasKey(SourceName.of("A")));
    assertThat(server1.ksqlEngine.getMetaStore().getAllDataSources(), hasKey(SourceName.of("B")));
    // The recovered server has both streams: 'A' was re-created after the DROP
    assertThat(recovered.ksqlEngine.getMetaStore().getAllDataSources().size(), is(2));
    assertThat(recovered.ksqlEngine.getMetaStore().getAllDataSources(), hasKey(SourceName.of("A")));
    assertThat(recovered.ksqlEngine.getMetaStore().getAllDataSources(), hasKey(SourceName.of("B")));
}
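Both recovery tests hinge on the command topic being replayed strictly in offset order: a DROP appended after its dependents' CREATEs is applied anyway, and a later re-CREATE brings the source back. A toy replay loop illustrating both outcomes (our own simplified model, not the ksql recovery code):

```java
import java.util.*;

// Toy model of command-topic replay: commands apply in log order, so a DROP
// that referential-integrity checks would reject at submission time still
// takes effect on recovery, and a later CREATE restores the source.
class CommandLogReplay {
    static Set<String> recover(List<String[]> log) {
        final Set<String> metastore = new LinkedHashSet<>();
        for (String[] cmd : log) {               // cmd = {action, sourceName}
            if ("CREATE".equals(cmd[0])) {
                metastore.add(cmd[1]);
            } else if ("DROP".equals(cmd[0])) {
                metastore.remove(cmd[1]);
            }
        }
        return metastore;
    }
}
```

Replaying CREATE A, CREATE B, DROP A leaves only 'B' (Example 3); appending another CREATE A restores both sources (Example 4).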
Also used : CreateStreamCommand(io.confluent.ksql.execution.ddl.commands.CreateStreamCommand) DropSourceCommand(io.confluent.ksql.execution.ddl.commands.DropSourceCommand) CommandId(io.confluent.ksql.rest.entity.CommandId) Test(org.junit.Test)

Example 5 with DropSourceCommand

Use of io.confluent.ksql.execution.ddl.commands.DropSourceCommand in project ksql by confluentinc.

From the class DropSourceFactoryTest, the method shouldCreateDropSourceOnMissingSourceWithIfExistsForStream:

@Test
public void shouldCreateDropSourceOnMissingSourceWithIfExistsForStream() {
    // Given:
    final DropStream dropStream = new DropStream(SOME_NAME, true, true);
    when(metaStore.getSource(SOME_NAME)).thenReturn(null);
    // When:
    final DropSourceCommand cmd = dropSourceFactory.create(dropStream);
    // Then:
    assertThat(cmd.getSourceName(), equalTo(SourceName.of("bob")));
}
Also used : DropSourceCommand(io.confluent.ksql.execution.ddl.commands.DropSourceCommand) DropStream(io.confluent.ksql.parser.tree.DropStream) Test(org.junit.Test)
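The IF EXISTS behavior this last test exercises can be sketched as a simple guard (a hypothetical simplification; the real DropSourceFactory takes a DropStream node and consults a MetaStore):

```java
import java.util.*;

// Hypothetical sketch of the IF EXISTS guard; names and signatures are ours,
// not ksql's. A drop command only carries the source name, so emitting one
// for a missing source is harmless when ifExists is set: executing it later
// is simply a no-op.
class DropSourceFactorySketch {
    static String create(String sourceName, boolean ifExists, Set<String> metaStore) {
        if (!metaStore.contains(sourceName) && !ifExists) {
            throw new IllegalArgumentException("Source " + sourceName + " does not exist.");
        }
        return sourceName;  // stands in for building the drop command
    }
}
```

With an empty metastore, `create("bob", true, ...)` still yields a command for "bob", while `create("bob", false, ...)` fails.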

Aggregations

DropSourceCommand (io.confluent.ksql.execution.ddl.commands.DropSourceCommand): 6
Test (org.junit.Test): 6
CreateStreamCommand (io.confluent.ksql.execution.ddl.commands.CreateStreamCommand): 4
CommandId (io.confluent.ksql.rest.entity.CommandId): 2
KsqlException (io.confluent.ksql.util.KsqlException): 2
KsqlReferentialIntegrityException (io.confluent.ksql.util.KsqlReferentialIntegrityException): 2
CreateTableCommand (io.confluent.ksql.execution.ddl.commands.CreateTableCommand): 1
DdlCommandResult (io.confluent.ksql.execution.ddl.commands.DdlCommandResult): 1
DropStream (io.confluent.ksql.parser.tree.DropStream): 1