
Example 1 with ColumnList

use of com.netflix.astyanax.model.ColumnList in project janusgraph by JanusGraph.

the class AstyanaxKeyColumnValueStore method getNamesSlice.

public Map<StaticBuffer, EntryList> getNamesSlice(List<StaticBuffer> keys, SliceQuery query, StoreTransaction txh) throws BackendException {
    /*
         * RowQuery<K,C> should be parametrized as
         * RowQuery<ByteBuffer,ByteBuffer>. However, this causes the following
         * compilation error when attempting to call withColumnRange on a
         * RowQuery<ByteBuffer,ByteBuffer> instance:
         *
         * java.lang.Error: Unresolved compilation problem: The method
         * withColumnRange(ByteBuffer, ByteBuffer, boolean, int) is ambiguous
         * for the type RowQuery<ByteBuffer,ByteBuffer>
         *
         * The compiler substitutes ByteBuffer=C for both startColumn and
         * endColumn, compares it to its identical twin with that type
         * hard-coded, and dies.
         *
         */
    // Add one for last column potentially removed in CassandraHelper.makeEntryList
    final int queryLimit = query.getLimit() + (query.hasLimit() ? 1 : 0);
    final int pageLimit = Math.min(this.readPageSize, queryLimit);
    ByteBuffer sliceStart = query.getSliceStart().asByteBuffer();
    final ByteBuffer sliceEnd = query.getSliceEnd().asByteBuffer();
    final RowSliceQuery rq = keyspace.prepareQuery(columnFamily).setConsistencyLevel(getTx(txh).getReadConsistencyLevel().getAstyanax()).withRetryPolicy(retryPolicy.duplicate()).getKeySlice(CassandraHelper.convert(keys));
    // Don't directly chain due to ambiguity resolution; see top comment
    rq.withColumnRange(sliceStart, sliceEnd, false, pageLimit);
    final OperationResult<Rows<ByteBuffer, ByteBuffer>> r;
    try {
        r = (OperationResult<Rows<ByteBuffer, ByteBuffer>>) rq.execute();
    } catch (ConnectionException e) {
        throw new TemporaryBackendException(e);
    }
    final Rows<ByteBuffer, ByteBuffer> rows = r.getResult();
    final Map<StaticBuffer, EntryList> result = new HashMap<>(rows.size());
    for (Row<ByteBuffer, ByteBuffer> row : rows) {
        assert !result.containsKey(row.getKey());
        final ByteBuffer key = row.getKey();
        ColumnList<ByteBuffer> pageColumns = row.getColumns();
        final List<Column<ByteBuffer>> queryColumns = new ArrayList<>();
        Iterables.addAll(queryColumns, pageColumns);
        while (pageColumns.size() == pageLimit && queryColumns.size() < queryLimit) {
            final Column<ByteBuffer> lastColumn = queryColumns.get(queryColumns.size() - 1);
            sliceStart = lastColumn.getName();
            // No possibility of two values at the same column name, so start the
            // next slice one byte-increment past the last column found by the
            // previous query. Column names compare lexicographically (most
            // significant byte first), so bump the last byte that can still be
            // incremented.
            Integer position = null;
            for (int i = sliceStart.array().length - 1; i >= 0; i--) {
                if (sliceStart.array()[i] < Byte.MAX_VALUE) {
                    position = i;
                    sliceStart.array()[i]++;
                    break;
                }
            }
            if (null == position) {
                throw new PermanentBackendException("Column was not incrementable");
            }
            final RowQuery pageQuery = keyspace.prepareQuery(columnFamily).setConsistencyLevel(getTx(txh).getReadConsistencyLevel().getAstyanax()).withRetryPolicy(retryPolicy.duplicate()).getKey(row.getKey());
            // Don't directly chain due to ambiguity resolution; see top comment
            pageQuery.withColumnRange(sliceStart, sliceEnd, false, pageLimit);
            final OperationResult<ColumnList<ByteBuffer>> pageResult;
            try {
                pageResult = (OperationResult<ColumnList<ByteBuffer>>) pageQuery.execute();
            } catch (ConnectionException e) {
                throw new TemporaryBackendException(e);
            }
            if (Thread.interrupted()) {
                throw new TraversalInterruptedException();
            }
            // Reset the incremented byte to avoid leaking the mutation up the
            // stack to callers: sliceStart.array() aliases a column name that
            // will later be read to deserialize an edge (since we assigned it
            // by dereferencing a column from the previous query).
            sliceStart.array()[position]--;
            pageColumns = pageResult.getResult();
            Iterables.addAll(queryColumns, pageColumns);
        }
        result.put(StaticArrayBuffer.of(key), CassandraHelper.makeEntryList(queryColumns, entryGetter, query.getSliceEnd(), query.getLimit()));
    }
    return result;
}
Also used: TraversalInterruptedException (org.apache.tinkerpop.gremlin.process.traversal.util.TraversalInterruptedException), PermanentBackendException (org.janusgraph.diskstorage.PermanentBackendException), EntryList (org.janusgraph.diskstorage.EntryList), ByteBuffer (java.nio.ByteBuffer), TemporaryBackendException (org.janusgraph.diskstorage.TemporaryBackendException), Column (com.netflix.astyanax.model.Column), RowSliceQuery (com.netflix.astyanax.query.RowSliceQuery), StaticBuffer (org.janusgraph.diskstorage.StaticBuffer), ColumnList (com.netflix.astyanax.model.ColumnList), ConnectionException (com.netflix.astyanax.connectionpool.exceptions.ConnectionException), RowQuery (com.netflix.astyanax.query.RowQuery), Rows (com.netflix.astyanax.model.Rows)
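
The paging loop above works by computing a column name strictly greater than the last one returned, querying again from that point, and then undoing the mutation. The increment-and-restore pattern can be shown in isolation; the following is a minimal sketch (the helper class and method names are illustrative, not part of JanusGraph):

import java.nio.ByteBuffer;

// Hypothetical helper isolating the increment/restore pattern from getNamesSlice.
final class SliceStartHelper {

    // Increments, in place, the last byte still below Byte.MAX_VALUE and
    // returns its index so the caller can undo the mutation later.
    // Returns -1 if no byte could be incremented.
    static int incrementInPlace(ByteBuffer name) {
        final byte[] bytes = name.array();
        for (int i = bytes.length - 1; i >= 0; i--) {
            if (bytes[i] < Byte.MAX_VALUE) {
                bytes[i]++;
                return i;
            }
        }
        return -1;
    }

    // Undoes incrementInPlace so the shared backing array is unchanged for callers.
    static void restore(ByteBuffer name, int position) {
        name.array()[position]--;
    }
}

The restore step matters because the ByteBuffer aliases a column name taken from the previous page; leaving it incremented would corrupt the edge later deserialized from that name.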

Example 2 with ColumnList

use of com.netflix.astyanax.model.ColumnList in project coprhd-controller by CoprHD.

the class RowMutationTest method testTimeUUID.

@Test
public void testTimeUUID() throws Exception {
    Volume volume = new Volume();
    URI id1 = URIUtil.createId(Volume.class);
    URI pool1 = URIUtil.createId(StoragePool.class);
    volume.setId(id1);
    volume.setLabel("volume1");
    volume.setPool(pool1);
    volume.setInactive(false);
    volume.setAllocatedCapacity(1000L);
    volume.setProvisionedCapacity(2000L);
    volume.setAssociatedSourceVolume(URI.create("test"));
    volume.setVolumeGroupIds(new StringSet(Sets.newHashSet("v1", "v2")));
    getDbClient().updateObject(volume);
    DataObjectType doType = TypeMap.getDoType(Volume.class);
    OperationResult<ColumnList<CompositeColumnName>> result = ((DbClientImpl) getDbClient()).getLocalContext().getKeyspace().prepareQuery(doType.getCF()).getKey(volume.getId().toString()).execute();
    List<Long> columnTimeUUIDStamps = new ArrayList<Long>();
    for (Column<CompositeColumnName> column : result.getResult()) {
        if (column.getName().getTimeUUID() != null) {
            columnTimeUUIDStamps.add(TimeUUIDUtils.getMicrosTimeFromUUID(column.getName().getTimeUUID()));
        }
    }
    Collections.sort(columnTimeUUIDStamps);
    for (int i = 1; i < columnTimeUUIDStamps.size(); i++) {
        Assert.assertEquals(1, columnTimeUUIDStamps.get(i) - columnTimeUUIDStamps.get(i - 1));
    }
}
Also used: CompositeColumnName (com.emc.storageos.db.client.impl.CompositeColumnName), ArrayList (java.util.ArrayList), URI (java.net.URI), Volume (com.emc.storageos.db.client.model.Volume), DbClientImpl (com.emc.storageos.db.client.impl.DbClientImpl), StringSet (com.emc.storageos.db.client.model.StringSet), ColumnList (com.netflix.astyanax.model.ColumnList), DataObjectType (com.emc.storageos.db.client.impl.DataObjectType), Test (org.junit.Test)
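
The test asserts that consecutive columns written by RowMutation carry TimeUUIDs whose microsecond timestamps differ by exactly one. For a standard version-1 UUID the microsecond value can be derived with the JDK alone; a minimal sketch, independent of Astyanax's TimeUUIDUtils (which may normalize to a different epoch):

import java.util.UUID;

final class TimeUuidMicros {

    // Version-1 UUIDs embed a timestamp in 100-nanosecond units counted from
    // 1582-10-15 (the Gregorian epoch); dividing by 10 yields microseconds on
    // that same epoch.
    static long microsFromTimeUUID(UUID uuid) {
        if (uuid.version() != 1) {
            throw new IllegalArgumentException("not a time-based (version 1) UUID");
        }
        return uuid.timestamp() / 10;
    }
}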

Example 3 with ColumnList

use of com.netflix.astyanax.model.ColumnList in project coprhd-controller by CoprHD.

the class RebuildIndexDuplicatedCFNameMigrationTest method testHandleDataObjectClass.

@Test
public void testHandleDataObjectClass() throws Exception {
    DataObjectType doType = TypeMap.getDoType(FileShare.class);
    for (int i = 0; i < 5; i++) {
        FileShare testData = new FileShare();
        testData.setId(URIUtil.createId(FileShare.class));
        testData.setPath("duplicated_value" + i);
        testData.setMountPath("duplicated_value" + i);
        getDbClient().updateObject(testData);
    }
    // Create data objects whose indices need to be rebuilt
    resetRowMutatorTimeStampOffSet(0);
    FileShare[] testDataArray = new FileShare[10];
    for (int i = 0; i < 10; i++) {
        FileShare testData = new FileShare();
        testData.setId(URIUtil.createId(FileShare.class));
        testData.setPath("duplicated_value" + i);
        testData.setMountPath("duplicated_value" + i);
        testDataArray[i] = testData;
        getDbClient().updateObject(testData);
    }
    resetRowMutatorTimeStampOffSet(1);
    target = new RebuildIndexDuplicatedCFNameMigration();
    target.setDbClient(getDbClient());
    target.process();
    assertEquals(testDataArray.length, target.getTotalProcessedIndexCount());
    for (FileShare testData : testDataArray) {
        FileShare targetData = (FileShare) getDbClient().queryObject(testData.getId());
        assertEquals(testData.getPath(), targetData.getPath());
        assertEquals(testData.getMountPath(), targetData.getMountPath());
        OperationResult<ColumnList<CompositeColumnName>> result = ((DbClientImpl) getDbClient()).getLocalContext().getKeyspace().prepareQuery(doType.getCF()).getKey(testData.getId().toString()).execute();
        long pathTime = 0;
        long mountPathTime = 0;
        for (Column<CompositeColumnName> column : result.getResult()) {
            if (column.getName().getOne().equals("path")) {
                pathTime = TimeUUIDUtils.getMicrosTimeFromUUID(column.getName().getTimeUUID());
            } else if (column.getName().getOne().equals("mountPath")) {
                mountPathTime = TimeUUIDUtils.getMicrosTimeFromUUID(column.getName().getTimeUUID());
            }
        }
        assertEquals(1, Math.abs(pathTime - mountPathTime));
    }
}
Also used: CompositeColumnName (com.emc.storageos.db.client.impl.CompositeColumnName), DbClientImpl (com.emc.storageos.db.client.impl.DbClientImpl), ColumnList (com.netflix.astyanax.model.ColumnList), RebuildIndexDuplicatedCFNameMigration (com.emc.storageos.db.client.upgrade.callbacks.RebuildIndexDuplicatedCFNameMigration), DataObjectType (com.emc.storageos.db.client.impl.DataObjectType), FileShare (com.emc.storageos.db.client.model.FileShare), Test (org.junit.Test)
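
The per-field timestamp extraction in the loop above generalizes to a small helper. A sketch assuming only the CompositeColumnName accessors already used in this example (the helper class itself is illustrative):

import java.util.HashMap;
import java.util.Map;

import com.emc.storageos.db.client.impl.CompositeColumnName;
import com.netflix.astyanax.model.Column;
import com.netflix.astyanax.model.ColumnList;
import com.netflix.astyanax.util.TimeUUIDUtils;

final class ColumnTimestamps {

    // Maps each field name (the first composite component) to the microsecond
    // timestamp of its TimeUUID, skipping columns that carry no TimeUUID.
    static Map<String, Long> byFieldName(ColumnList<CompositeColumnName> columns) {
        final Map<String, Long> micros = new HashMap<>();
        for (Column<CompositeColumnName> column : columns) {
            if (column.getName().getTimeUUID() != null) {
                micros.put(column.getName().getOne(),
                        TimeUUIDUtils.getMicrosTimeFromUUID(column.getName().getTimeUUID()));
            }
        }
        return micros;
    }
}

With it, the body of the for loop reduces to two map lookups for "path" and "mountPath".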

Example 4 with ColumnList

use of com.netflix.astyanax.model.ColumnList in project coprhd-controller by CoprHD.

the class DbConsistencyCheckerHelper method checkIndexingCF.

/**
 * Scan all index records and their related data object records to find
 * index entries whose corresponding data object record is missing.
 * Corrupted rows found are accumulated in the supplied CheckResult.
 *
 * @throws ConnectionException
 */
public void checkIndexingCF(IndexAndCf indexAndCf, boolean toConsole, CheckResult checkResult, boolean isParallel) throws ConnectionException {
    initSchemaVersions();
    String indexCFName = indexAndCf.cf.getName();
    Map<String, ColumnFamily<String, CompositeColumnName>> objCfs = getDataObjectCFs();
    _log.info("Start checking the index CF {} with double confirmed option: {}", indexCFName, doubleConfirmed);
    Map<ColumnFamily<String, CompositeColumnName>, Map<String, List<IndexEntry>>> objsToCheck = new HashMap<>();
    ColumnFamilyQuery<String, IndexColumnName> query = indexAndCf.keyspace.prepareQuery(indexAndCf.cf);
    OperationResult<Rows<String, IndexColumnName>> result = query.getAllRows().setRowLimit(dbClient.DEFAULT_PAGE_SIZE).withColumnRange(new RangeBuilder().setLimit(0).build()).execute();
    int scannedRows = 0;
    long beginTime = System.currentTimeMillis();
    for (Row<String, IndexColumnName> row : result.getResult()) {
        RowQuery<String, IndexColumnName> rowQuery = indexAndCf.keyspace.prepareQuery(indexAndCf.cf).getKey(row.getKey()).autoPaginate(true).withColumnRange(new RangeBuilder().setLimit(dbClient.DEFAULT_PAGE_SIZE).build());
        ColumnList<IndexColumnName> columns;
        while (!(columns = rowQuery.execute().getResult()).isEmpty()) {
            for (Column<IndexColumnName> column : columns) {
                scannedRows++;
                ObjectEntry objEntry = extractObjectEntryFromIndex(row.getKey(), column.getName(), indexAndCf.indexType, toConsole);
                if (objEntry == null) {
                    continue;
                }
                ColumnFamily<String, CompositeColumnName> objCf = objCfs.get(objEntry.getClassName());
                if (objCf == null) {
                    logMessage(String.format("DataObject does not exist for %s", row.getKey()), true, toConsole);
                    continue;
                }
                if (skipCheckCFs.contains(objCf.getName())) {
                    _log.debug("Skip checking CF {} for index CF {}", objCf.getName(), indexAndCf.cf.getName());
                    continue;
                }
                Map<String, List<IndexEntry>> objKeysIdxEntryMap = objsToCheck.get(objCf);
                if (objKeysIdxEntryMap == null) {
                    objKeysIdxEntryMap = new HashMap<>();
                    objsToCheck.put(objCf, objKeysIdxEntryMap);
                }
                List<IndexEntry> idxEntries = objKeysIdxEntryMap.get(objEntry.getObjectId());
                if (idxEntries == null) {
                    idxEntries = new ArrayList<>();
                    objKeysIdxEntryMap.put(objEntry.getObjectId(), idxEntries);
                }
                idxEntries.add(new IndexEntry(row.getKey(), column.getName()));
            }
            int size = getObjsSize(objsToCheck);
            if (size >= INDEX_OBJECTS_BATCH_SIZE) {
                if (isParallel) {
                    processBatchIndexObjectsWithMultipleThreads(indexAndCf, toConsole, objsToCheck, checkResult);
                } else {
                    processBatchIndexObjects(indexAndCf, toConsole, objsToCheck, checkResult);
                }
                objsToCheck = new HashMap<>();
            }
            if (scannedRows >= THRESHHOLD_FOR_OUTPUT_DEBUG) {
                _log.info("{} data objects have been checked in {}", scannedRows, DurationFormatUtils.formatDurationHMS(System.currentTimeMillis() - beginTime));
                scannedRows = 0;
                beginTime = System.currentTimeMillis();
            }
        }
    }
    // Detect whether the DataObject CFs have the records
    if (isParallel) {
        processBatchIndexObjectsWithMultipleThreads(indexAndCf, toConsole, objsToCheck, checkResult);
    } else {
        processBatchIndexObjects(indexAndCf, toConsole, objsToCheck, checkResult);
    }
}
Also used: HashMap (java.util.HashMap), RangeBuilder (com.netflix.astyanax.util.RangeBuilder), ColumnFamily (com.netflix.astyanax.model.ColumnFamily), ColumnList (com.netflix.astyanax.model.ColumnList), List (java.util.List), ArrayList (java.util.ArrayList), Rows (com.netflix.astyanax.model.Rows), Map (java.util.Map), TreeMap (java.util.TreeMap)
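
The inner while loop is Astyanax's standard row-paging idiom: autoPaginate(true) plus a column-range limit makes each execute() return the next page, and an empty ColumnList terminates the loop. A condensed sketch of just that idiom (the surrounding class and method are illustrative):

import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.exceptions.ConnectionException;
import com.netflix.astyanax.model.ColumnFamily;
import com.netflix.astyanax.model.ColumnList;
import com.netflix.astyanax.query.RowQuery;
import com.netflix.astyanax.util.RangeBuilder;

final class PagingSketch {

    // Reads one row page by page; each execute() on an auto-paginating query
    // returns the next page, and an empty result ends the loop.
    static <C> int countColumns(Keyspace keyspace, ColumnFamily<String, C> cf,
                                String rowKey, int pageSize) throws ConnectionException {
        final RowQuery<String, C> query = keyspace.prepareQuery(cf)
                .getKey(rowKey)
                .autoPaginate(true)
                .withColumnRange(new RangeBuilder().setLimit(pageSize).build());
        int count = 0;
        ColumnList<C> page;
        while (!(page = query.execute().getResult()).isEmpty()) {
            count += page.size();
        }
        return count;
    }
}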

Aggregations

ColumnList (com.netflix.astyanax.model.ColumnList): 4
CompositeColumnName (com.emc.storageos.db.client.impl.CompositeColumnName): 2
DataObjectType (com.emc.storageos.db.client.impl.DataObjectType): 2
DbClientImpl (com.emc.storageos.db.client.impl.DbClientImpl): 2
Rows (com.netflix.astyanax.model.Rows): 2
ArrayList (java.util.ArrayList): 2
Test (org.junit.Test): 2
FileShare (com.emc.storageos.db.client.model.FileShare): 1
StringSet (com.emc.storageos.db.client.model.StringSet): 1
Volume (com.emc.storageos.db.client.model.Volume): 1
RebuildIndexDuplicatedCFNameMigration (com.emc.storageos.db.client.upgrade.callbacks.RebuildIndexDuplicatedCFNameMigration): 1
ConnectionException (com.netflix.astyanax.connectionpool.exceptions.ConnectionException): 1
Column (com.netflix.astyanax.model.Column): 1
ColumnFamily (com.netflix.astyanax.model.ColumnFamily): 1
RowQuery (com.netflix.astyanax.query.RowQuery): 1
RowSliceQuery (com.netflix.astyanax.query.RowSliceQuery): 1
RangeBuilder (com.netflix.astyanax.util.RangeBuilder): 1
URI (java.net.URI): 1
ByteBuffer (java.nio.ByteBuffer): 1
HashMap (java.util.HashMap): 1