
Example 1 with BaseNessieClientServerException

Use of org.projectnessie.error.BaseNessieClientServerException in project iceberg by apache.

From the class NessieCatalog, method renameTable:

@Override
public void renameTable(TableIdentifier from, TableIdentifier toOriginal) {
    reference.checkMutable();
    TableIdentifier to = NessieUtil.removeCatalogName(toOriginal, name());
    IcebergTable existingFromTable = table(from);
    if (existingFromTable == null) {
        throw new NoSuchTableException("table %s doesn't exist", from.name());
    }
    IcebergTable existingToTable = table(to);
    if (existingToTable != null) {
        throw new AlreadyExistsException("table %s already exists", to.name());
    }
    CommitMultipleOperationsBuilder operations =
        api.commitMultipleOperations()
            .commitMeta(
                NessieUtil.buildCommitMetadata(
                    String.format("Iceberg rename table from '%s' to '%s'", from, to),
                    catalogOptions))
            .operation(Operation.Put.of(NessieUtil.toKey(to), existingFromTable, existingFromTable))
            .operation(Operation.Delete.of(NessieUtil.toKey(from)));
    try {
        Tasks.foreach(operations)
            .retry(5)
            .stopRetryOn(NessieNotFoundException.class)
            .throwFailureWhenFinished()
            .onFailure((o, exception) -> refresh())
            .run(ops -> {
                Branch branch = ops.branch(reference.getAsBranch()).commit();
                reference.updateReference(branch);
            }, BaseNessieClientServerException.class);
    } catch (NessieNotFoundException e) {
        // The ref is no longer valid: it may have been reassigned or removed by another client.
        throw new RuntimeException("Failed to rename table as ref is no longer valid.", e);
    } catch (BaseNessieClientServerException e) {
        throw new CommitFailedException(e, "Failed to rename table: the current reference is not up to date.");
    } catch (HttpClientException ex) {
        // Intentionally just "throw through" Nessie's HttpClientException here and do not
        // "special case" just the "timeout" variant, to propagate all kinds of network errors
        // (e.g. connection reset). Network code implementation details and all kinds of network
        // devices can induce unexpected behavior. So better be safe than sorry.
        throw new CommitStateUnknownException(ex);
    }
}
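The retry shape above — `Tasks.foreach(...).retry(5)...onFailure((o, e) -> refresh())` — boils down to "attempt the commit against an expected hash; on conflict, refresh the local view of the reference and try again". A minimal, dependency-free sketch of that idea; `ConflictException`, `RetrySketch`, and the `AtomicInteger` "hashes" are hypothetical stand-ins, not Nessie API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for Nessie's conflict exception (expected-hash mismatch).
class ConflictException extends RuntimeException {}

public class RetrySketch {
    public static void main(String[] args) {
        AtomicInteger serverState = new AtomicInteger(0); // the branch's current "hash" on the server
        AtomicInteger clientView = new AtomicInteger(-1); // a stale local reference

        int maxAttempts = 5;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                commit(serverState, clientView);
                System.out.println("committed on attempt " + attempt);
                break;
            } catch (ConflictException e) {
                // onFailure(...) -> refresh(): re-read the reference, then retry
                clientView.set(serverState.get());
            }
        }
    }

    // Commit succeeds only if the client's expected state matches the server's.
    static void commit(AtomicInteger server, AtomicInteger client) {
        if (client.get() != server.get()) {
            throw new ConflictException();
        }
        server.incrementAndGet(); // a successful commit advances the branch
    }
}
```

In the stale-start scenario above, attempt 1 conflicts, the refresh synchronizes the views, and attempt 2 succeeds.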
Also used : TableIdentifier(org.apache.iceberg.catalog.TableIdentifier) AlreadyExistsException(org.apache.iceberg.exceptions.AlreadyExistsException) HttpClientBuilder(org.projectnessie.client.http.HttpClientBuilder) CatalogUtil(org.apache.iceberg.CatalogUtil) ImmutableMap(org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap) CommitStateUnknownException(org.apache.iceberg.exceptions.CommitStateUnknownException) LoggerFactory(org.slf4j.LoggerFactory) HadoopFileIO(org.apache.iceberg.hadoop.HadoopFileIO) Function(java.util.function.Function) Reference(org.projectnessie.model.Reference) NessieClientBuilder(org.projectnessie.client.NessieClientBuilder) HttpClientException(org.projectnessie.client.http.HttpClientException) NessieConflictException(org.projectnessie.error.NessieConflictException) CatalogProperties(org.apache.iceberg.CatalogProperties) TableOperations(org.apache.iceberg.TableOperations) NoSuchNamespaceException(org.apache.iceberg.exceptions.NoSuchNamespaceException) Map(java.util.Map) Configuration(org.apache.hadoop.conf.Configuration) BaseMetastoreCatalog(org.apache.iceberg.BaseMetastoreCatalog) NoSuchTableException(org.apache.iceberg.exceptions.NoSuchTableException) Namespace(org.apache.iceberg.catalog.Namespace) Content(org.projectnessie.model.Content) Configurable(org.apache.hadoop.conf.Configurable) SupportsNamespaces(org.apache.iceberg.catalog.SupportsNamespaces) CommitFailedException(org.apache.iceberg.exceptions.CommitFailedException) Operation(org.projectnessie.model.Operation) Logger(org.slf4j.Logger) TableIdentifier(org.apache.iceberg.catalog.TableIdentifier) Branch(org.projectnessie.model.Branch) Set(java.util.Set) Collectors(java.util.stream.Collectors) Joiner(org.apache.iceberg.relocated.com.google.common.base.Joiner) NessieApiV1(org.projectnessie.client.api.NessieApiV1) List(java.util.List) Stream(java.util.stream.Stream) NessieConfigConstants(org.projectnessie.client.NessieConfigConstants) 
IcebergTable(org.projectnessie.model.IcebergTable) Tasks(org.apache.iceberg.util.Tasks) DynMethods(org.apache.iceberg.common.DynMethods) Preconditions(org.apache.iceberg.relocated.com.google.common.base.Preconditions) Tag(org.projectnessie.model.Tag) BaseNessieClientServerException(org.projectnessie.error.BaseNessieClientServerException) ContentKey(org.projectnessie.model.ContentKey) FileIO(org.apache.iceberg.io.FileIO) CommitMultipleOperationsBuilder(org.projectnessie.client.api.CommitMultipleOperationsBuilder) NessieNotFoundException(org.projectnessie.error.NessieNotFoundException) VisibleForTesting(org.apache.iceberg.relocated.com.google.common.annotations.VisibleForTesting) TableReference(org.projectnessie.model.TableReference)

Example 2 with BaseNessieClientServerException

Use of org.projectnessie.error.BaseNessieClientServerException in project iceberg by apache.

From the class NessieCatalog, method dropTable:

@Override
public boolean dropTable(TableIdentifier identifier, boolean purge) {
    reference.checkMutable();
    IcebergTable existingTable = table(identifier);
    if (existingTable == null) {
        return false;
    }
    if (purge) {
        LOG.info("Purging data for table {} was set to true but is ignored", identifier.toString());
    }
    CommitMultipleOperationsBuilder commitBuilderBase =
        api.commitMultipleOperations()
            .commitMeta(
                NessieUtil.buildCommitMetadata(
                    String.format("Iceberg delete table %s", identifier), catalogOptions))
            .operation(Operation.Delete.of(NessieUtil.toKey(identifier)));
    // We try to drop the table. Simple retry after ref update.
    boolean threw = true;
    try {
        Tasks.foreach(commitBuilderBase)
            .retry(5)
            .stopRetryOn(NessieNotFoundException.class)
            .throwFailureWhenFinished()
            .onFailure((o, exception) -> refresh())
            .run(commitBuilder -> {
                Branch branch = commitBuilder.branch(reference.getAsBranch()).commit();
                reference.updateReference(branch);
            }, BaseNessieClientServerException.class);
        threw = false;
    } catch (NessieConflictException e) {
        LOG.error("Cannot drop table: failed after retry (update ref and retry)", e);
    } catch (NessieNotFoundException e) {
        LOG.error("Cannot drop table: ref is no longer valid.", e);
    } catch (BaseNessieClientServerException e) {
        LOG.error("Cannot drop table: unknown error", e);
    }
    return !threw;
}
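Unlike renameTable, dropTable reports failure through its boolean return value: the `threw` flag is cleared only after the commit fully succeeds, and known failures are logged rather than propagated. A dependency-free sketch of that contract (the `Commit` interface, `tryDrop`, and `DropSketch` are all hypothetical names):

```java
public class DropSketch {
    // Hypothetical stand-in for the commit action; may throw to simulate failure modes.
    interface Commit { void run() throws Exception; }

    // Mirrors dropTable's contract: true only if the commit fully succeeded;
    // known failures would be logged and are reported as false, not rethrown.
    static boolean tryDrop(Commit commit) {
        boolean threw = true;
        try {
            commit.run();
            threw = false; // reached only if no exception escaped the commit
        } catch (Exception e) {
            // LOG.error("Cannot drop table: ...", e);
        }
        return !threw;
    }

    public static void main(String[] args) {
        System.out.println(tryDrop(() -> {}));
        System.out.println(tryDrop(() -> { throw new IllegalStateException("conflict"); }));
    }
}
```

The flag-based form matters because it distinguishes "commit ran and succeeded" from every exceptional exit path with a single assignment.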
Also used : AlreadyExistsException(org.apache.iceberg.exceptions.AlreadyExistsException) HttpClientBuilder(org.projectnessie.client.http.HttpClientBuilder) CatalogUtil(org.apache.iceberg.CatalogUtil) ImmutableMap(org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap) CommitStateUnknownException(org.apache.iceberg.exceptions.CommitStateUnknownException) LoggerFactory(org.slf4j.LoggerFactory) HadoopFileIO(org.apache.iceberg.hadoop.HadoopFileIO) Function(java.util.function.Function) Reference(org.projectnessie.model.Reference) NessieClientBuilder(org.projectnessie.client.NessieClientBuilder) HttpClientException(org.projectnessie.client.http.HttpClientException) NessieConflictException(org.projectnessie.error.NessieConflictException) CatalogProperties(org.apache.iceberg.CatalogProperties) TableOperations(org.apache.iceberg.TableOperations) NoSuchNamespaceException(org.apache.iceberg.exceptions.NoSuchNamespaceException) Map(java.util.Map) Configuration(org.apache.hadoop.conf.Configuration) BaseMetastoreCatalog(org.apache.iceberg.BaseMetastoreCatalog) NoSuchTableException(org.apache.iceberg.exceptions.NoSuchTableException) Namespace(org.apache.iceberg.catalog.Namespace) Content(org.projectnessie.model.Content) Configurable(org.apache.hadoop.conf.Configurable) SupportsNamespaces(org.apache.iceberg.catalog.SupportsNamespaces) CommitFailedException(org.apache.iceberg.exceptions.CommitFailedException) Operation(org.projectnessie.model.Operation) Logger(org.slf4j.Logger) TableIdentifier(org.apache.iceberg.catalog.TableIdentifier) Branch(org.projectnessie.model.Branch) Set(java.util.Set) Collectors(java.util.stream.Collectors) Joiner(org.apache.iceberg.relocated.com.google.common.base.Joiner) NessieApiV1(org.projectnessie.client.api.NessieApiV1) List(java.util.List) Stream(java.util.stream.Stream) NessieConfigConstants(org.projectnessie.client.NessieConfigConstants) IcebergTable(org.projectnessie.model.IcebergTable) Tasks(org.apache.iceberg.util.Tasks) 
DynMethods(org.apache.iceberg.common.DynMethods) Preconditions(org.apache.iceberg.relocated.com.google.common.base.Preconditions) Tag(org.projectnessie.model.Tag) BaseNessieClientServerException(org.projectnessie.error.BaseNessieClientServerException) ContentKey(org.projectnessie.model.ContentKey) FileIO(org.apache.iceberg.io.FileIO) CommitMultipleOperationsBuilder(org.projectnessie.client.api.CommitMultipleOperationsBuilder) NessieNotFoundException(org.projectnessie.error.NessieNotFoundException) VisibleForTesting(org.apache.iceberg.relocated.com.google.common.annotations.VisibleForTesting) TableReference(org.projectnessie.model.TableReference)

Example 3 with BaseNessieClientServerException

Use of org.projectnessie.error.BaseNessieClientServerException in project nessie by projectnessie.

From the class GenerateContent, method execute:

@Override
public void execute() throws BaseNessieClientServerException {
    if (runtimeDuration != null) {
        if (runtimeDuration.isZero() || runtimeDuration.isNegative()) {
            throw new ParameterException(spec.commandLine(), "Duration must be absent or greater than zero.");
        }
    }
    Duration perCommitDuration = Optional.ofNullable(runtimeDuration).orElse(Duration.ZERO).dividedBy(numCommits);
    ThreadLocalRandom random = ThreadLocalRandom.current();
    String runStartTime = DateTimeFormatter.ofPattern("yyyy-MM-dd-HH-mm-ss").format(LocalDateTime.now());
    List<ContentKey> tableNames =
        IntStream.range(0, numTables)
            .mapToObj(i -> ContentKey.of(
                String.format("create-contents-%s", runStartTime), "contents", Integer.toString(i)))
            .collect(Collectors.toList());
    try (NessieApiV1 api = createNessieApiInstance()) {
        Branch defaultBranch;
        if (defaultBranchName == null) {
            // Use the server's default branch.
            defaultBranch = api.getDefaultBranch();
        } else {
            // Use the specified default branch.
            try {
                defaultBranch = (Branch) api.getReference().refName(defaultBranchName).get();
            } catch (NessieReferenceNotFoundException e) {
                // Create branch if it does not exist.
                defaultBranch = api.getDefaultBranch();
                defaultBranch = (Branch) api.createReference().reference(Branch.of(defaultBranchName, defaultBranch.getHash())).sourceRefName(defaultBranch.getName()).create();
            }
        }
        List<String> branches = new ArrayList<>();
        branches.add(defaultBranch.getName());
        while (branches.size() < branchCount) {
            // Create a new branch
            String newBranchName = "branch-" + runStartTime + "_" + (branches.size() - 1);
            Branch branch = Branch.of(newBranchName, defaultBranch.getHash());
            spec.commandLine().getOut().printf("Creating branch '%s' from '%s' at %s%n", branch.getName(), defaultBranch.getName(), branch.getHash());
            api.createReference().reference(branch).sourceRefName(defaultBranch.getName()).create();
            branches.add(newBranchName);
        }
        spec.commandLine().getOut().printf("Starting contents generation, %d commits...%n", numCommits);
        for (int i = 0; i < numCommits; i++) {
            // Choose a random branch to commit to
            String branchName = branches.get(random.nextInt(branches.size()));
            Branch commitToBranch = (Branch) api.getReference().refName(branchName).get();
            ContentKey tableName = tableNames.get(random.nextInt(tableNames.size()));
            Content tableContents = api.getContent().refName(branchName).key(tableName).get().get(tableName);
            Content newContents = createContents(tableContents, random);
            spec.commandLine().getOut().printf("Committing content-key '%s' to branch '%s' at %s%n", tableName, commitToBranch.getName(), commitToBranch.getHash());
            CommitMultipleOperationsBuilder commit =
                api.commitMultipleOperations()
                    .branch(commitToBranch)
                    .commitMeta(
                        CommitMeta.builder()
                            .message(String.format(
                                "%s table %s on %s, commit #%d of %d",
                                tableContents != null ? "Update" : "Create",
                                tableName, branchName, i, numCommits))
                            .author(System.getProperty("user.name"))
                            .authorTime(Instant.now())
                            .build());
            if (newContents instanceof IcebergTable || newContents instanceof IcebergView) {
                commit.operation(Put.of(tableName, newContents, tableContents));
            } else {
                commit.operation(Put.of(tableName, newContents));
            }
            Branch newHead = commit.commit();
            if (random.nextDouble() < newTagProbability) {
                Tag tag = Tag.of("new-tag-" + random.nextLong(), newHead.getHash());
                spec.commandLine().getOut().printf("Creating tag '%s' from '%s' at %s%n", tag.getName(), branchName, tag.getHash());
                api.createReference().reference(tag).sourceRefName(branchName).create();
            }
            try {
                TimeUnit.NANOSECONDS.sleep(perCommitDuration.toNanos());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
    }
    spec.commandLine().getOut().printf("Done creating contents.%n");
}
Also used : IntStream(java.util.stream.IntStream) ImmutableIcebergView(org.projectnessie.model.ImmutableIcebergView) Put(org.projectnessie.model.Operation.Put) LocalDateTime(java.time.LocalDateTime) NessieReferenceNotFoundException(org.projectnessie.error.NessieReferenceNotFoundException) ParameterException(picocli.CommandLine.ParameterException) Spec(picocli.CommandLine.Spec) ArrayList(java.util.ArrayList) Duration(java.time.Duration) ThreadLocalRandom(java.util.concurrent.ThreadLocalRandom) Content(org.projectnessie.model.Content) CommitMeta(org.projectnessie.model.CommitMeta) Command(picocli.CommandLine.Command) Branch(org.projectnessie.model.Branch) ImmutableDeltaLakeTable(org.projectnessie.model.ImmutableDeltaLakeTable) Min(javax.validation.constraints.Min) Instant(java.time.Instant) Collectors(java.util.stream.Collectors) NessieApiV1(org.projectnessie.client.api.NessieApiV1) TimeUnit(java.util.concurrent.TimeUnit) List(java.util.List) ImmutableIcebergTable(org.projectnessie.model.ImmutableIcebergTable) Option(picocli.CommandLine.Option) IcebergView(org.projectnessie.model.IcebergView) IcebergTable(org.projectnessie.model.IcebergTable) DateTimeFormatter(java.time.format.DateTimeFormatter) Optional(java.util.Optional) Tag(org.projectnessie.model.Tag) BaseNessieClientServerException(org.projectnessie.error.BaseNessieClientServerException) ContentKey(org.projectnessie.model.ContentKey) CommandSpec(picocli.CommandLine.Model.CommandSpec) CommitMultipleOperationsBuilder(org.projectnessie.client.api.CommitMultipleOperationsBuilder)

Example 4 with BaseNessieClientServerException

Use of org.projectnessie.error.BaseNessieClientServerException in project nessie by projectnessie.

From the class AbstractRestRefLog, method testReflog:

@Test
public void testReflog() throws BaseNessieClientServerException {
    String tagName = "tag1_test_reflog";
    String branch1 = "branch1_test_reflog";
    String branch2 = "branch2_test_reflog";
    String branch3 = "branch3_test_reflog";
    String root = "ref_name_test_reflog";
    List<Tuple> expectedEntries = new ArrayList<>(12);
    // reflog 1: creating the default branch0
    Branch branch0 = createBranch(root);
    expectedEntries.add(Tuple.tuple(root, "CREATE_REFERENCE"));
    // reflog 2: create tag1
    Reference createdTag = getApi().createReference().sourceRefName(branch0.getName()).reference(Tag.of(tagName, branch0.getHash())).create();
    expectedEntries.add(Tuple.tuple(tagName, "CREATE_REFERENCE"));
    // reflog 3: create branch1
    Reference createdBranch1 = getApi().createReference().sourceRefName(branch0.getName()).reference(Branch.of(branch1, branch0.getHash())).create();
    expectedEntries.add(Tuple.tuple(branch1, "CREATE_REFERENCE"));
    // reflog 4: create branch2
    Reference createdBranch2 = getApi().createReference().sourceRefName(branch0.getName()).reference(Branch.of(branch2, branch0.getHash())).create();
    expectedEntries.add(Tuple.tuple(branch2, "CREATE_REFERENCE"));
    // reflog 5: create branch3
    Branch createdBranch3 = (Branch) getApi().createReference().sourceRefName(branch0.getName()).reference(Branch.of(branch3, branch0.getHash())).create();
    expectedEntries.add(Tuple.tuple(branch3, "CREATE_REFERENCE"));
    // reflog 6: commit on default branch0
    IcebergTable meta = IcebergTable.of("meep", 42, 42, 42, 42);
    branch0 = getApi().commitMultipleOperations().branchName(branch0.getName()).hash(branch0.getHash()).commitMeta(CommitMeta.builder().message("dummy commit log").properties(ImmutableMap.of("prop1", "val1", "prop2", "val2")).build()).operation(Operation.Put.of(ContentKey.of("meep"), meta)).commit();
    expectedEntries.add(Tuple.tuple(root, "COMMIT"));
    // reflog 7: assign tag
    getApi().assignTag().tagName(tagName).hash(createdTag.getHash()).assignTo(branch0).assign();
    expectedEntries.add(Tuple.tuple(tagName, "ASSIGN_REFERENCE"));
    // reflog 8: assign ref
    getApi().assignBranch().branchName(branch1).hash(createdBranch1.getHash()).assignTo(branch0).assign();
    expectedEntries.add(Tuple.tuple(branch1, "ASSIGN_REFERENCE"));
    // reflog 9: merge
    getApi().mergeRefIntoBranch().branchName(branch2).hash(createdBranch2.getHash()).fromRefName(branch1).fromHash(branch0.getHash()).merge();
    expectedEntries.add(Tuple.tuple(branch2, "MERGE"));
    // reflog 10: transplant
    getApi().transplantCommitsIntoBranch().hashesToTransplant(ImmutableList.of(Objects.requireNonNull(branch0.getHash()))).fromRefName(branch1).branch(createdBranch3).transplant();
    expectedEntries.add(Tuple.tuple(branch3, "TRANSPLANT"));
    // reflog 11: delete branch
    getApi().deleteBranch().branchName(branch1).hash(branch0.getHash()).delete();
    expectedEntries.add(Tuple.tuple(branch1, "DELETE_REFERENCE"));
    // reflog 12: delete tag
    getApi().deleteTag().tagName(tagName).hash(branch0.getHash()).delete();
    expectedEntries.add(Tuple.tuple(tagName, "DELETE_REFERENCE"));
    // In the reflog output new entry will be the head. Hence, reverse the expected list
    Collections.reverse(expectedEntries);
    RefLogResponse refLogResponse = getApi().getRefLog().get();
    // verify reflog entries
    assertThat(refLogResponse.getLogEntries().subList(0, 12)).extracting(RefLogResponse.RefLogResponseEntry::getRefName, RefLogResponse.RefLogResponseEntry::getOperation).isEqualTo(expectedEntries);
    // verify pagination (limit and token)
    RefLogResponse refLogResponse1 = getApi().getRefLog().maxRecords(2).get();
    assertThat(refLogResponse1.getLogEntries()).isEqualTo(refLogResponse.getLogEntries().subList(0, 2));
    assertThat(refLogResponse1.isHasMore()).isTrue();
    RefLogResponse refLogResponse2 = getApi().getRefLog().pageToken(refLogResponse1.getToken()).get();
    // should start from the token.
    assertThat(refLogResponse2.getLogEntries().get(0).getRefLogId()).isEqualTo(refLogResponse1.getToken());
    assertThat(refLogResponse2.getLogEntries().subList(0, 10)).isEqualTo(refLogResponse.getLogEntries().subList(2, 12));
    // verify startHash and endHash
    RefLogResponse refLogResponse3 = getApi().getRefLog().fromHash(refLogResponse.getLogEntries().get(10).getRefLogId()).get();
    assertThat(refLogResponse3.getLogEntries().subList(0, 2)).isEqualTo(refLogResponse.getLogEntries().subList(10, 12));
    RefLogResponse refLogResponse4 = getApi().getRefLog().fromHash(refLogResponse.getLogEntries().get(3).getRefLogId()).untilHash(refLogResponse.getLogEntries().get(5).getRefLogId()).get();
    assertThat(refLogResponse4.getLogEntries()).isEqualTo(refLogResponse.getLogEntries().subList(3, 6));
    // use invalid reflog id f1234d75178d892a133a410355a5a990cf75d2f33eba25d575943d4df632f3a4
    // computed using Hash.of(
    // UnsafeByteOperations.unsafeWrap(newHasher().putString("invalid",
    // StandardCharsets.UTF_8).hash().asBytes()));
    assertThatThrownBy(() -> getApi().getRefLog().fromHash("f1234d75178d892a133a410355a5a990cf75d2f33eba25d575943d4df632f3a4").get()).isInstanceOf(NessieRefLogNotFoundException.class).hasMessageContaining("RefLog entry for 'f1234d75178d892a133a410355a5a990cf75d2f33eba25d575943d4df632f3a4' does not exist");
    // verify source hashes for assign reference
    assertThat(refLogResponse.getLogEntries().get(4).getSourceHashes()).isEqualTo(Collections.singletonList(createdBranch1.getHash()));
    // verify source hashes for merge
    assertThat(refLogResponse.getLogEntries().get(3).getSourceHashes()).isEqualTo(Collections.singletonList(branch0.getHash()));
    // verify source hashes for transplant
    assertThat(refLogResponse.getLogEntries().get(2).getSourceHashes()).isEqualTo(Collections.singletonList(branch0.getHash()));
    // test filter with stream
    List<RefLogResponse.RefLogResponseEntry> filteredResult = StreamingUtil.getReflogStream(getApi(), builder -> builder.filter("reflog.operation == 'ASSIGN_REFERENCE' " + "&& reflog.refName == 'tag1_test_reflog'"), OptionalInt.empty()).collect(Collectors.toList());
    assertThat(filteredResult.size()).isEqualTo(1);
    assertThat(filteredResult.get(0)).extracting(RefLogResponse.RefLogResponseEntry::getRefName, RefLogResponse.RefLogResponseEntry::getOperation).isEqualTo(expectedEntries.get(5).toList());
}
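The pagination checks above follow the usual token loop: request a page with maxRecords, and while hasMore is set, pass the returned token back to get the next page. A dependency-free sketch of that loop; `Page`, `fetch`, and `PagingSketch` are hypothetical, and the token is modeled as a plain start index here, whereas a real server's token is opaque:

```java
import java.util.ArrayList;
import java.util.List;

public class PagingSketch {
    // Hypothetical page holder mirroring RefLogResponse's entries/token/hasMore trio.
    record Page(List<String> entries, String token, boolean hasMore) {}

    static final List<String> LOG = List.of("e1", "e2", "e3", "e4", "e5");

    // Simulates getRefLog().maxRecords(n).pageToken(t). The "token" is simply the
    // next start index; real servers return an opaque continuation token instead.
    static Page fetch(String token, int maxRecords) {
        int start = token == null ? 0 : Integer.parseInt(token);
        int end = Math.min(start + maxRecords, LOG.size());
        return new Page(LOG.subList(start, end), String.valueOf(end), end < LOG.size());
    }

    public static void main(String[] args) {
        List<String> all = new ArrayList<>();
        Page page = fetch(null, 2); // first page, no token
        all.addAll(page.entries());
        while (page.hasMore()) {
            page = fetch(page.token(), 2); // resume from the continuation token
            all.addAll(page.entries());
        }
        System.out.println(all);
    }
}
```

The test above asserts the same invariant the loop relies on: concatenating all pages reproduces the unpaginated listing, in order and without gaps.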
Also used : Operation(org.projectnessie.model.Operation) Tuple(org.assertj.core.groups.Tuple) ImmutableMap(com.google.common.collect.ImmutableMap) Assertions.assertThat(org.assertj.core.api.Assertions.assertThat) Branch(org.projectnessie.model.Branch) OptionalInt(java.util.OptionalInt) Collectors(java.util.stream.Collectors) Reference(org.projectnessie.model.Reference) NessieRefLogNotFoundException(org.projectnessie.error.NessieRefLogNotFoundException) ArrayList(java.util.ArrayList) Objects(java.util.Objects) Test(org.junit.jupiter.api.Test) List(java.util.List) Assertions.assertThatThrownBy(org.assertj.core.api.Assertions.assertThatThrownBy) ImmutableList(com.google.common.collect.ImmutableList) StreamingUtil(org.projectnessie.client.StreamingUtil) IcebergTable(org.projectnessie.model.IcebergTable) Tag(org.projectnessie.model.Tag) BaseNessieClientServerException(org.projectnessie.error.BaseNessieClientServerException) ContentKey(org.projectnessie.model.ContentKey) CommitMeta(org.projectnessie.model.CommitMeta) Collections(java.util.Collections) RefLogResponse(org.projectnessie.model.RefLogResponse)

Example 5 with BaseNessieClientServerException

Use of org.projectnessie.error.BaseNessieClientServerException in project nessie by projectnessie.

From the class AbstractRestDiff, method testDiff:

@ParameterizedTest
@MethodSource("diffRefModes")
public void testDiff(ReferenceMode refModeFrom, ReferenceMode refModeTo) throws BaseNessieClientServerException {
    int commitsPerBranch = 10;
    Reference fromRef = getApi().createReference().reference(Branch.of("testDiffFromRef", null)).create();
    Reference toRef = getApi().createReference().reference(Branch.of("testDiffToRef", null)).create();
    String toRefHash = createCommits(toRef, 1, commitsPerBranch, toRef.getHash());
    toRef = Branch.of(toRef.getName(), toRefHash);
    List<DiffEntry> diffOnRefHeadResponse = getApi().getDiff().fromRef(refModeFrom.transform(fromRef)).toRef(refModeTo.transform(toRef)).get().getDiffs();
    // we only committed to toRef, the "from" diff should be null
    assertThat(diffOnRefHeadResponse).hasSize(commitsPerBranch).allSatisfy(diff -> {
        assertThat(diff.getKey()).isNotNull();
        assertThat(diff.getFrom()).isNull();
        assertThat(diff.getTo()).isNotNull();
    });
    // Some combinations with explicit fromHashOnRef/toHashOnRef
    assertThat(getApi().getDiff().fromRefName(fromRef.getName()).fromHashOnRef(fromRef.getHash()).toRefName(toRef.getName()).toHashOnRef(toRef.getHash()).get().getDiffs()).isEqualTo(diffOnRefHeadResponse);
    // diffing fromRef against toRef positioned at fromRef's hash should yield an empty result
    if (refModeTo != ReferenceMode.NAME_ONLY) {
        Branch toRefAtFrom = Branch.of(toRef.getName(), fromRef.getHash());
        assertThat(getApi().getDiff().fromRef(refModeFrom.transform(fromRef)).toRef(refModeTo.transform(toRefAtFrom)).get().getDiffs()).isEmpty();
    }
    // after committing to fromRef, "from/to" diffs should both have data
    fromRef = Branch.of(fromRef.getName(), createCommits(fromRef, 1, commitsPerBranch, fromRef.getHash()));
    assertThat(getApi().getDiff().fromRef(refModeFrom.transform(fromRef)).toRef(refModeTo.transform(toRef)).get().getDiffs()).hasSize(commitsPerBranch).allSatisfy(diff -> {
        assertThat(diff.getKey()).isNotNull();
        assertThat(diff.getFrom()).isNotNull();
        assertThat(diff.getTo()).isNotNull();
        // we only have a diff on the ID
        assertThat(diff.getFrom().getId()).isNotEqualTo(diff.getTo().getId());
        Optional<IcebergTable> fromTable = diff.getFrom().unwrap(IcebergTable.class);
        assertThat(fromTable).isPresent();
        Optional<IcebergTable> toTable = diff.getTo().unwrap(IcebergTable.class);
        assertThat(toTable).isPresent();
        assertThat(fromTable.get().getMetadataLocation()).isEqualTo(toTable.get().getMetadataLocation());
        assertThat(fromTable.get().getSchemaId()).isEqualTo(toTable.get().getSchemaId());
        assertThat(fromTable.get().getSnapshotId()).isEqualTo(toTable.get().getSnapshotId());
        assertThat(fromTable.get().getSortOrderId()).isEqualTo(toTable.get().getSortOrderId());
        assertThat(fromTable.get().getSpecId()).isEqualTo(toTable.get().getSpecId());
    });
    List<ContentKey> keys = IntStream.rangeClosed(0, commitsPerBranch).mapToObj(i -> ContentKey.of("table" + i)).collect(Collectors.toList());
    // request all keys and delete the tables for them on toRef
    Map<ContentKey, Content> map = getApi().getContent().refName(toRef.getName()).keys(keys).get();
    for (Map.Entry<ContentKey, Content> entry : map.entrySet()) {
        toRef = getApi().commitMultipleOperations().branchName(toRef.getName()).hash(toRefHash).commitMeta(CommitMeta.fromMessage("delete")).operation(Delete.of(entry.getKey())).commit();
    }
    // now that we deleted all tables on toRef, the diff for "to" should be null
    assertThat(getApi().getDiff().fromRef(refModeFrom.transform(fromRef)).toRef(refModeTo.transform(toRef)).get().getDiffs()).hasSize(commitsPerBranch).allSatisfy(diff -> {
        assertThat(diff.getKey()).isNotNull();
        assertThat(diff.getFrom()).isNotNull();
        assertThat(diff.getTo()).isNull();
    });
}
Also used : IntStream(java.util.stream.IntStream) DiffEntry(org.projectnessie.model.DiffResponse.DiffEntry) Arrays(java.util.Arrays) Assertions.assertThat(org.assertj.core.api.Assertions.assertThat) Branch(org.projectnessie.model.Branch) Collectors(java.util.stream.Collectors) Reference(org.projectnessie.model.Reference) List(java.util.List) ParameterizedTest(org.junit.jupiter.params.ParameterizedTest) Stream(java.util.stream.Stream) Delete(org.projectnessie.model.Operation.Delete) IcebergTable(org.projectnessie.model.IcebergTable) Map(java.util.Map) Optional(java.util.Optional) BaseNessieClientServerException(org.projectnessie.error.BaseNessieClientServerException) Content(org.projectnessie.model.Content) ContentKey(org.projectnessie.model.ContentKey) CommitMeta(org.projectnessie.model.CommitMeta) MethodSource(org.junit.jupiter.params.provider.MethodSource)

Aggregations

BaseNessieClientServerException (org.projectnessie.error.BaseNessieClientServerException)12 Branch (org.projectnessie.model.Branch)11 ContentKey (org.projectnessie.model.ContentKey)11 List (java.util.List)10 Collectors (java.util.stream.Collectors)10 IcebergTable (org.projectnessie.model.IcebergTable)10 CommitMeta (org.projectnessie.model.CommitMeta)8 Map (java.util.Map)7 Stream (java.util.stream.Stream)7 Operation (org.projectnessie.model.Operation)7 Reference (org.projectnessie.model.Reference)7 CommitMultipleOperationsBuilder (org.projectnessie.client.api.CommitMultipleOperationsBuilder)6 Content (org.projectnessie.model.Content)6 Assertions.assertThat (org.assertj.core.api.Assertions.assertThat)5 IcebergView (org.projectnessie.model.IcebergView)5 Put (org.projectnessie.model.Operation.Put)5 Optional (java.util.Optional)4 Function (java.util.function.Function)4 Assertions.assertThatThrownBy (org.assertj.core.api.Assertions.assertThatThrownBy)4 Test (org.junit.jupiter.api.Test)4