Example 1 with BlobContainer

Use of org.opensearch.common.blobstore.BlobContainer in project OpenSearch by opensearch-project.

The class URLBlobStoreTests, method testNoBlobFound.

public void testNoBlobFound() throws IOException {
    BlobContainer container = urlBlobStore.blobContainer(BlobPath.cleanPath().add("indices"));
    String incorrectBlobName = "incorrect_" + blobName;
    try (InputStream ignored = container.readBlob(incorrectBlobName)) {
        // Read from the stream so a lazily-opened blob actually triggers the lookup.
        ignored.read();
        fail("Should have thrown NoSuchFileException exception");
    } catch (NoSuchFileException e) {
        assertEquals(String.format("[%s] blob not found", incorrectBlobName), e.getMessage());
    }
}
Also used: InputStream (java.io.InputStream), BlobContainer (org.opensearch.common.blobstore.BlobContainer), NoSuchFileException (java.nio.file.NoSuchFileException)
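
For contrast with the missing-blob case above, here is a minimal happy-path sketch (not taken from the OpenSearch test suite) that writes a small blob and reads it back through the same BlobContainer API. The blobStore parameter, container path, and blob name are illustrative assumptions; beyond the imports listed above it would also need java.io.ByteArrayInputStream, java.nio.charset.StandardCharsets, BlobStore, and BlobPath.

public void writeThenReadBlob(BlobStore blobStore) throws IOException {
    // Any BlobStore implementation (fs, URL, HDFS, Azure, ...) can hand out containers.
    BlobContainer container = blobStore.blobContainer(BlobPath.cleanPath().add("indices"));
    byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
    // failIfAlreadyExists = true: the call fails if a blob with this name already exists.
    container.writeBlob("greeting", new ByteArrayInputStream(data), data.length, true);
    try (InputStream in = container.readBlob("greeting")) {
        assertArrayEquals(data, in.readAllBytes());
    }
}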

Example 2 with BlobContainer

Use of org.opensearch.common.blobstore.BlobContainer in project OpenSearch by opensearch-project.

The class HdfsBlobContainer, method children.

@Override
public Map<String, BlobContainer> children() throws IOException {
    FileStatus[] files = store.execute(fileContext -> fileContext.util().listStatus(path));
    Map<String, BlobContainer> map = new LinkedHashMap<>();
    for (FileStatus file : files) {
        if (file.isDirectory()) {
            final String name = file.getPath().getName();
            map.put(name, new HdfsBlobContainer(path().add(name), store, new Path(path, name), bufferSize, securityContext));
        }
    }
    return Collections.unmodifiableMap(map);
}
Also used: Path (org.apache.hadoop.fs.Path), BlobPath (org.opensearch.common.blobstore.BlobPath), FileStatus (org.apache.hadoop.fs.FileStatus), BlobContainer (org.opensearch.common.blobstore.BlobContainer), FsBlobContainer (org.opensearch.common.blobstore.fs.FsBlobContainer), AbstractBlobContainer (org.opensearch.common.blobstore.support.AbstractBlobContainer), LinkedHashMap (java.util.LinkedHashMap)
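
Since children() returns only nested containers, it composes naturally into a recursive walk. The following is a hedged sketch, not part of HdfsBlobContainer: the method name is hypothetical, and only children() and path() (with BlobPath.buildAsString()) from the generic BlobContainer interface are assumed.

static void printContainerTree(BlobContainer container) throws IOException {
    // path() reports where this container sits in the blob store hierarchy.
    System.out.println(container.path().buildAsString());
    // children() returns nested containers only, never blobs,
    // so the recursion terminates at the leaves of the tree.
    for (BlobContainer child : container.children().values()) {
        printContainerTree(child);
    }
}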

Example 3 with BlobContainer

Use of org.opensearch.common.blobstore.BlobContainer in project OpenSearch by opensearch-project.

The class HdfsBlobStoreContainerTests, method testReadOnly.

public void testReadOnly() throws Exception {
    FileContext fileContext = createTestContext();
    // Constructor will not create dir if read only
    HdfsBlobStore hdfsBlobStore = new HdfsBlobStore(fileContext, "dir", 1024, true);
    FileContext.Util util = fileContext.util();
    Path root = fileContext.makeQualified(new Path("dir"));
    assertFalse(util.exists(root));
    BlobPath blobPath = BlobPath.cleanPath().add("path");
    // blobContainer() will not create path if read only
    hdfsBlobStore.blobContainer(blobPath);
    Path hdfsPath = root;
    for (String p : blobPath) {
        hdfsPath = new Path(hdfsPath, p);
    }
    assertFalse(util.exists(hdfsPath));
    // if not read only, directory will be created
    hdfsBlobStore = new HdfsBlobStore(fileContext, "dir", 1024, false);
    assertTrue(util.exists(root));
    BlobContainer container = hdfsBlobStore.blobContainer(blobPath);
    assertTrue(util.exists(hdfsPath));
    byte[] data = randomBytes(randomIntBetween(10, scaledRandomIntBetween(1024, 1 << 16)));
    writeBlob(container, "foo", new BytesArray(data), randomBoolean());
    assertArrayEquals(readBlobFully(container, "foo", data.length), data);
    assertTrue(container.blobExists("foo"));
}
Also used: BlobPath (org.opensearch.common.blobstore.BlobPath), Path (org.apache.hadoop.fs.Path), BytesArray (org.opensearch.common.bytes.BytesArray), BlobContainer (org.opensearch.common.blobstore.BlobContainer), FileContext (org.apache.hadoop.fs.FileContext)
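
The writeBlob and readBlobFully calls above are helpers from OpenSearch's blob-store test infrastructure. A rough sketch of what such helpers might look like (the real ones may differ in detail):

static void writeBlob(BlobContainer container, String name, BytesArray bytes, boolean failIfAlreadyExists) throws IOException {
    // Stream the BytesArray payload into the container.
    try (InputStream stream = bytes.streamInput()) {
        container.writeBlob(name, stream, bytes.length(), failIfAlreadyExists);
    }
}

static byte[] readBlobFully(BlobContainer container, String name, int length) throws IOException {
    byte[] data = new byte[length];
    try (InputStream in = container.readBlob(name)) {
        // Read exactly `length` bytes and verify the blob has no trailing data.
        assertEquals(length, in.readNBytes(data, 0, length));
        assertEquals(-1, in.read());
    }
    return data;
}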

Example 4 with BlobContainer

Use of org.opensearch.common.blobstore.BlobContainer in project OpenSearch by opensearch-project.

The class AzureBlobContainerRetriesTests, method testRetryUntilFail.

public void testRetryUntilFail() throws IOException {
    final AtomicBoolean requestReceived = new AtomicBoolean(false);
    httpServer.createContext("/container/write_blob_max_retries", exchange -> {
        try {
            if (requestReceived.compareAndSet(false, true)) {
                // First request: acknowledge the upload.
                exchange.sendResponseHeaders(RestStatus.CREATED.getStatus(), -1);
            } else {
                throw new AssertionError("Should not receive two requests");
            }
        } finally {
            exchange.close();
        }
    });
    final BlobContainer blobContainer = createBlobContainer(randomIntBetween(2, 5));
    try (InputStream stream = new InputStream() {

        @Override
        public int read() throws IOException {
            throw new IOException("foo");
        }

        @Override
        public boolean markSupported() {
            return true;
        }

        @Override
        public void reset() {
            throw new AssertionError("should not be called");
        }
    }) {
        final IOException ioe = expectThrows(IOException.class, () -> blobContainer.writeBlob("write_blob_max_retries", stream, randomIntBetween(1, 128), randomBoolean()));
        assertThat(ioe.getMessage(), is("foo"));
    }
}
Also used: AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean), InputStream (java.io.InputStream), BlobContainer (org.opensearch.common.blobstore.BlobContainer), IOException (java.io.IOException)
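
The anonymous stream above reports markSupported() as true so the Azure client believes a retry is possible, while reset() asserts that no retry actually happens. As a generic, hypothetical illustration of that mark/reset contract (not an OpenSearch API), a bounded payload can be made replayable before handing it to writeBlob:

static void writeWithReplayableStream(BlobContainer container, String name, InputStream source, long length) throws IOException {
    // A retry after a failed upload needs mark/reset support on the payload stream;
    // buffer the stream if it cannot be rewound on its own.
    InputStream replayable = source.markSupported() ? source : new BufferedInputStream(source);
    // Mark the beginning so the client could rewind the full payload on a retry.
    replayable.mark(Math.toIntExact(length));
    container.writeBlob(name, replayable, length, false);
}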

Example 5 with BlobContainer

Use of org.opensearch.common.blobstore.BlobContainer in project OpenSearch by opensearch-project.

The class AzureBlobContainerRetriesTests, method testReadRangeBlobWithRetries.

public void testReadRangeBlobWithRetries() throws Exception {
    // The request retry policy counts the first attempt as a retry, so we need to
    // account for that and increase the max retry count by one.
    final int maxRetries = randomIntBetween(2, 6);
    final CountDown countDownGet = new CountDown(maxRetries - 1);
    final byte[] bytes = randomBlobContent();
    httpServer.createContext("/container/read_range_blob_max_retries", exchange -> {
        try {
            Streams.readFully(exchange.getRequestBody());
            if ("HEAD".equals(exchange.getRequestMethod())) {
                exchange.getResponseHeaders().add("Content-Type", "application/octet-stream");
                exchange.getResponseHeaders().add("Content-Length", String.valueOf(bytes.length));
                exchange.getResponseHeaders().add("x-ms-blob-type", "blockblob");
                exchange.sendResponseHeaders(RestStatus.OK.getStatus(), -1);
                return;
            } else if ("GET".equals(exchange.getRequestMethod())) {
                if (countDownGet.countDown()) {
                    final int rangeStart = getRangeStart(exchange);
                    assertThat(rangeStart, lessThan(bytes.length));
                    final Optional<Integer> rangeEnd = getRangeEnd(exchange);
                    assertThat(rangeEnd.isPresent(), is(true));
                    assertThat(rangeEnd.get(), greaterThanOrEqualTo(rangeStart));
                    final int length = (rangeEnd.get() - rangeStart) + 1;
                    assertThat(length, lessThanOrEqualTo(bytes.length - rangeStart));
                    exchange.getResponseHeaders().add("Content-Type", "application/octet-stream");
                    exchange.getResponseHeaders().add("Content-Length", String.valueOf(length));
                    exchange.getResponseHeaders().add("x-ms-blob-type", "blockblob");
                    exchange.sendResponseHeaders(RestStatus.OK.getStatus(), length);
                    exchange.getResponseBody().write(bytes, rangeStart, length);
                    return;
                }
            }
            if (randomBoolean()) {
                AzureHttpHandler.sendError(exchange, randomFrom(RestStatus.INTERNAL_SERVER_ERROR, RestStatus.SERVICE_UNAVAILABLE));
            }
        } finally {
            exchange.close();
        }
    });
    final BlobContainer blobContainer = createBlobContainer(maxRetries);
    final int position = randomIntBetween(0, bytes.length - 1);
    final int length = randomIntBetween(1, bytes.length - position);
    try (InputStream inputStream = blobContainer.readBlob("read_range_blob_max_retries", position, length)) {
        final byte[] bytesRead = BytesReference.toBytes(Streams.readFully(inputStream));
        assertArrayEquals(Arrays.copyOfRange(bytes, position, Math.min(bytes.length, position + length)), bytesRead);
        assertThat(countDownGet.isCountedDown(), is(true));
    }
}
Also used: Optional (java.util.Optional), InputStream (java.io.InputStream), BlobContainer (org.opensearch.common.blobstore.BlobContainer), CountDown (org.opensearch.common.util.concurrent.CountDown)
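
A minimal usage sketch of the range overload exercised by this test; the helper name is an assumption, and only readBlob(name, position, length) comes from the BlobContainer interface:

static byte[] readRange(BlobContainer container, String name, long position, long length) throws IOException {
    // The three-argument readBlob returns a stream over the bytes at [position, position + length).
    try (InputStream in = container.readBlob(name, position, length)) {
        return in.readAllBytes();
    }
}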

Aggregations

Types most often used together with BlobContainer across the project, with usage counts:

BlobContainer (org.opensearch.common.blobstore.BlobContainer): 60
InputStream (java.io.InputStream): 35
IOException (java.io.IOException): 26
BlobPath (org.opensearch.common.blobstore.BlobPath): 25
NoSuchFileException (java.nio.file.NoSuchFileException): 22
BlobStore (org.opensearch.common.blobstore.BlobStore): 22
TimeValue (org.opensearch.common.unit.TimeValue): 21
BytesArray (org.opensearch.common.bytes.BytesArray): 18
RepositoryMetadata (org.opensearch.cluster.metadata.RepositoryMetadata): 17
InputStreamIndexInput (org.opensearch.common.lucene.store.InputStreamIndexInput): 17
Map (java.util.Map): 16
FsBlobContainer (org.opensearch.common.blobstore.fs.FsBlobContainer): 15
BytesReference (org.opensearch.common.bytes.BytesReference): 15
Strings (org.opensearch.common.Strings): 14
FilterInputStream (java.io.FilterInputStream): 13
Optional (java.util.Optional): 13
AtomicReference (java.util.concurrent.atomic.AtomicReference): 13
Executor (java.util.concurrent.Executor): 12
AtomicLong (java.util.concurrent.atomic.AtomicLong): 12
CorruptIndexException (org.apache.lucene.index.CorruptIndexException): 12