
Example 1 with VariableByteOutputStream

Use of org.exist.storage.io.VariableByteOutputStream in the eXist-db exist project, from the class UnixStylePermissionTest, method writeRead_roundtrip:

@Test
public void writeRead_roundtrip() throws IOException {
    final SecurityManager mockSecurityManager = EasyMock.createMock(SecurityManager.class);
    final int ownerId = new Random().nextInt();
    final int mode = 0700;
    final int ownerGroupId = new Random().nextInt();
    final VariableByteOutputStream mockOstream = EasyMock.createMock(VariableByteOutputStream.class);
    final VariableByteInput mockIstream = EasyMock.createMock(VariableByteInput.class);
    final TestableUnixStylePermission permission = new TestableUnixStylePermission(mockSecurityManager, ownerId, ownerGroupId, mode);
    final long permissionVector = permission.getVector_testable();
    // expectations
    mockOstream.writeLong(permissionVector);
    expect(mockIstream.readLong()).andReturn(permissionVector);
    replay(mockSecurityManager, mockOstream, mockIstream);
    permission.write(mockOstream);
    permission.read(mockIstream);
    verify(mockSecurityManager, mockOstream, mockIstream);
    assertEquals(permissionVector, permission.getVector_testable());
}
Also used: VariableByteInput (org.exist.storage.io.VariableByteInput), Random (java.util.Random), VariableByteOutputStream (org.exist.storage.io.VariableByteOutputStream), Test (org.junit.Test)
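The round-trip above exercises the stream's variable-byte integer encoding. As a rough, self-contained illustration of how such an encoding typically works (a minimal sketch of a common VLQ scheme with 7 payload bits per byte and a continuation bit; the class name `Vlq` is hypothetical, and eXist's exact byte layout may differ):

```java
import java.io.ByteArrayOutputStream;

// Minimal sketch of variable-byte (VLQ) integer encoding: each byte carries
// 7 payload bits, and the high bit marks that more bytes follow. Small values
// take one byte; larger values grow as needed.
public class Vlq {

    static byte[] encode(int v) {
        final ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((v & ~0x7F) != 0) {
            // low 7 bits, with the continuation bit set
            out.write((v & 0x7F) | 0x80);
            v >>>= 7;
        }
        // final byte: continuation bit clear
        out.write(v);
        return out.toByteArray();
    }

    static int decode(final byte[] data) {
        int v = 0, shift = 0, i = 0;
        int b;
        do {
            b = data[i++] & 0xFF;
            v |= (b & 0x7F) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        return v;
    }
}
```

Values below 128 fit in a single byte, which is why this encoding is attractive for index data dominated by small integers.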

Example 2 with VariableByteOutputStream

Use of org.exist.storage.io.VariableByteOutputStream in the eXist-db exist project, from the class NativeBroker, method saveCollection:

@Override
public void saveCollection(final Txn transaction, final Collection collection) throws IOException {
    if (collection == null) {
        LOG.error("NativeBroker.saveCollection called with collection == null! Aborting.");
        return;
    }
    if (isReadOnly()) {
        throw new IOException(DATABASE_IS_READ_ONLY);
    }
    final CollectionCache collectionsCache = pool.getCollectionsCache();
    collectionsCache.put(collection);
    try (final ManagedLock<ReentrantLock> collectionsDbLock = lockManager.acquireBtreeWriteLock(collectionsDb.getLockName())) {
        final Value name = new CollectionStore.CollectionKey(collection.getURI().toString());
        try (final VariableByteOutputStream os = new VariableByteOutputStream(256)) {
            collection.serialize(os);
            final long address = collectionsDb.put(transaction, name, os.data(), true);
            if (address == BFile.UNKNOWN_ADDRESS) {
                throw new IOException("Could not store collection data for '" + collection.getURI() + "', address=BFile.UNKNOWN_ADDRESS");
            }
        }
    } catch (final LockException e) {
        throw new IOException(e);
    }
}
Also used: ReentrantLock (java.util.concurrent.locks.ReentrantLock), VariableByteOutputStream (org.exist.storage.io.VariableByteOutputStream)
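Note the try-with-resources on a ManagedLock in the example above: the lock is treated as an AutoCloseable resource, so it is released automatically even if serialization throws. A minimal, self-contained sketch of that idiom (the classes and method names here are hypothetical stand-ins, not eXist's actual ManagedLock API):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the managed-lock idiom: wrap a ReentrantLock in an AutoCloseable
// so that try-with-resources guarantees the unlock on every exit path.
public class ManagedLockDemo {

    static final class Managed implements AutoCloseable {
        private final ReentrantLock lock;

        Managed(final ReentrantLock lock) {
            this.lock = lock;
            lock.lock();  // acquired on construction
        }

        @Override
        public void close() {
            lock.unlock();  // released when the try block exits
        }
    }

    // Runs a task while holding the lock; returns whether the lock
    // was held by this thread inside the critical section.
    static boolean runLocked(final ReentrantLock lock, final Runnable task) {
        try (Managed m = new Managed(lock)) {
            task.run();
            return lock.isHeldByCurrentThread();
        }
    }
}
```

The payoff is visible in storeXMLResource below, where a single try-with-resources header acquires both the output stream and the B-tree lock, and both are released in reverse order.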

Example 3 with VariableByteOutputStream

Use of org.exist.storage.io.VariableByteOutputStream in the eXist-db exist project, from the class NativeBroker, method storeXMLResource:

/**
 * Store a Document entry into its collection.
 */
@Override
public void storeXMLResource(final Txn transaction, final DocumentImpl doc) {
    try (final VariableByteOutputStream os = new VariableByteOutputStream(256);
        final ManagedLock<ReentrantLock> collectionsDbLock = lockManager.acquireBtreeWriteLock(collectionsDb.getLockName())) {
        doc.write(os);
        final Value key = new CollectionStore.DocumentKey(doc.getCollection().getId(), doc.getResourceType(), doc.getDocId());
        collectionsDb.put(transaction, key, os.data(), true);
    // } catch (ReadOnlyException e) {
    // LOG.warn(DATABASE_IS_READ_ONLY);
    } catch (final LockException e) {
        LOG.error("Failed to acquire lock on {}", FileUtils.fileName(collectionsDb.getFile()));
    } catch (final IOException e) {
        LOG.error("IOException while writing document data: {}", doc.getURI(), e);
    }
}
Also used: ReentrantLock (java.util.concurrent.locks.ReentrantLock), VariableByteOutputStream (org.exist.storage.io.VariableByteOutputStream)

Example 4 with VariableByteOutputStream

Use of org.exist.storage.io.VariableByteOutputStream in the eXist-db exist project, from the class NativeValueIndex, method remove:

private <T> void remove(final PendingChanges<T> pending, final FunctionE<T, Value, EXistException> dbKeyFn) {
    final VariableByteOutputStream nodeIdOs = new VariableByteOutputStream();
    for (final Map.Entry<T, List<NodeId>> entry : pending.changes.entrySet()) {
        final T key = entry.getKey();
        final List<NodeId> storedGIDList = entry.getValue();
        final List<NodeId> newGIDList = new ArrayList<>();
        os.clear();
        try (final ManagedLock<ReentrantLock> bfileLock = lockManager.acquireBtreeWriteLock(dbValues.getLockName())) {
            // Compute a key for the value
            final Value searchKey = dbKeyFn.apply(key);
            final Value value = dbValues.get(searchKey);
            // Does the value already have data in the index?
            if (value != null) {
                // Add its data to the new list
                final VariableByteArrayInput is = new VariableByteArrayInput(value.getData());
                while (is.available() > 0) {
                    final int storedDocId = is.readInt();
                    final int gidsCount = is.readInt();
                    final int size = is.readFixedInt();
                    if (storedDocId != this.doc.getDocId()) {
                        // data are related to another document:
                        // append them to any existing data
                        os.writeInt(storedDocId);
                        os.writeInt(gidsCount);
                        os.writeFixedInt(size);
                        is.copyRaw(os, size);
                    } else {
                        // data are related to our document:
                        // feed the new list with the GIDs
                        NodeId previous = null;
                        for (int j = 0; j < gidsCount; j++) {
                            final NodeId nodeId = broker.getBrokerPool().getNodeFactory().createFromStream(previous, is);
                            previous = nodeId;
                            // keep the node only if it is not
                            // in the list of removed nodes
                            if (!containsNode(storedGIDList, nodeId)) {
                                newGIDList.add(nodeId);
                            }
                        }
                    }
                }
                // append the data from the new list
                if (newGIDList.size() > 0) {
                    final int gidsCount = newGIDList.size();
                    // the GID list must be sorted before it is written
                    FastQSort.sort(newGIDList, 0, gidsCount - 1);
                    os.writeInt(this.doc.getDocId());
                    os.writeInt(gidsCount);
                    // Write the new GID list
                    try {
                        NodeId previous = null;
                        for (final NodeId nodeId : newGIDList) {
                            previous = nodeId.write(previous, nodeIdOs);
                        }
                        final byte[] nodeIdsData = nodeIdOs.toByteArray();
                        // clear the buf for the next iteration
                        nodeIdOs.clear();
                        // Write length of node IDs (bytes)
                        os.writeFixedInt(nodeIdsData.length);
                        // write the node IDs
                        os.write(nodeIdsData);
                    } catch (final IOException e) {
                        LOG.warn("IO error while writing range index: {}", e.getMessage(), e);
                    // TODO : throw exception?
                    }
                }
                // dbValues.remove(value);
                if (dbValues.update(value.getAddress(), searchKey, os.data()) == BFile.UNKNOWN_ADDRESS) {
                    LOG.error("Could not update index data for value '{}'", searchKey);
                // TODO: throw exception ?
                }
            } else {
                if (dbValues.put(searchKey, os.data()) == BFile.UNKNOWN_ADDRESS) {
                    LOG.error("Could not put index data for value '{}'", searchKey);
                // TODO : throw exception ?
                }
            }
        } catch (final EXistException | IOException e) {
            LOG.error(e.getMessage(), e);
        } catch (final LockException e) {
            LOG.warn("Failed to acquire lock for '{}'", FileUtils.fileName(dbValues.getFile()), e);
        // TODO : return ?
        } finally {
            os.clear();
        }
    }
    pending.changes.clear();
}
Also used: ReentrantLock (java.util.concurrent.locks.ReentrantLock), IOException (java.io.IOException), EXistException (org.exist.EXistException), VariableByteArrayInput (org.exist.storage.io.VariableByteArrayInput), VariableByteOutputStream (org.exist.storage.io.VariableByteOutputStream), NodeId (org.exist.numbering.NodeId), AtomicValue (org.exist.xquery.value.AtomicValue), StringValue (org.exist.xquery.value.StringValue), Value (org.exist.storage.btree.Value)
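The loop above follows a filter-and-rewrite pattern: entries belonging to other documents are copied through untouched, while the current document's entries are filtered out or rewritten. A self-contained sketch of that pattern, using plain DataInput/DataOutput over byte arrays instead of eXist's variable-byte streams (all names here are hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the filter-and-rewrite pattern used by the index code above.
// Each record is: int docId, int payload length, then the payload bytes.
// The length prefix lets a reader copy or skip a payload without decoding it.
public class IndexFilter {

    // Build one record for a document id and an opaque payload.
    static byte[] entry(final int docId, final byte[] payload) throws IOException {
        final ByteArrayOutputStream bos = new ByteArrayOutputStream();
        final DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(docId);
        out.writeInt(payload.length);
        out.write(payload);
        return bos.toByteArray();
    }

    // Rewrite the index, dropping every record that belongs to docId.
    static byte[] removeDoc(final byte[] index, final int docId) throws IOException {
        final DataInputStream in = new DataInputStream(new ByteArrayInputStream(index));
        final ByteArrayOutputStream bos = new ByteArrayOutputStream();
        final DataOutputStream out = new DataOutputStream(bos);
        while (in.available() > 0) {
            final int storedDocId = in.readInt();
            final int length = in.readInt();
            final byte[] payload = new byte[length];
            in.readFully(payload);
            if (storedDocId != docId) {
                // record belongs to another document: copy it through
                out.writeInt(storedDocId);
                out.writeInt(length);
                out.write(payload);
            }
            // else: record belongs to the removed document: skip it
        }
        return bos.toByteArray();
    }
}
```

The real code adds one refinement on top of this: instead of dropping the current document's record entirely, it rebuilds it from the GIDs that were not in the removal list, which is why it buffers the new node ids separately before writing the fixed-width length.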

Example 5 with VariableByteOutputStream

Use of org.exist.storage.io.VariableByteOutputStream in the eXist-db exist project, from the class NGramIndexWorker, method dropIndex:

private void dropIndex(final ReindexMode mode) {
    if (ngrams.isEmpty()) {
        return;
    }
    final VariableByteOutputStream buf = new VariableByteOutputStream();
    for (final Map.Entry<QNameTerm, OccurrenceList> entry : ngrams.entrySet()) {
        final QNameTerm key = entry.getKey();
        final OccurrenceList occurencesList = entry.getValue();
        occurencesList.sort();
        os.clear();
        try (final ManagedLock<ReentrantLock> dbLock = lockManager.acquireBtreeWriteLock(index.db.getLockName())) {
            final NGramQNameKey value = new NGramQNameKey(currentDoc.getCollection().getId(), key.qname, index.getBrokerPool().getSymbols(), key.term);
            boolean changed = false;
            os.clear();
            final VariableByteInput is = index.db.getAsStream(value);
            if (is == null) {
                continue;
            }
            while (is.available() > 0) {
                final int storedDocId = is.readInt();
                final byte nameType = is.readByte();
                final int occurrences = is.readInt();
                // Read (variable) length of node IDs + frequency + offsets
                final int length = is.readFixedInt();
                if (storedDocId != currentDoc.getDocId()) {
                    // data are related to another document:
                    // copy them to any existing data
                    os.writeInt(storedDocId);
                    os.writeByte(nameType);
                    os.writeInt(occurrences);
                    os.writeFixedInt(length);
                    is.copyRaw(os, length);
                } else {
                    // data are related to our document:
                    if (mode == ReindexMode.REMOVE_ALL_NODES) {
                        // skip them
                        is.skipBytes(length);
                    } else {
                        // removing nodes: need to filter out the node ids to be removed
                        // feed the new list with the GIDs
                        final OccurrenceList newOccurrences = new OccurrenceList();
                        NodeId previous = null;
                        for (int m = 0; m < occurrences; m++) {
                            final NodeId nodeId = index.getBrokerPool().getNodeFactory().createFromStream(previous, is);
                            previous = nodeId;
                            final int freq = is.readInt();
                            // keep the node's occurrences only if it is not
                            // in the list of removed nodes
                            if (!occurencesList.contains(nodeId)) {
                                for (int n = 0; n < freq; n++) {
                                    newOccurrences.add(nodeId, is.readInt());
                                }
                            } else {
                                is.skip(freq);
                            }
                        }
                        // append the data from the new list
                        if (newOccurrences.getSize() > 0) {
                            // the occurrence list must be sorted before it is written
                            newOccurrences.sort();
                            os.writeInt(currentDoc.getDocId());
                            os.writeByte(nameType);
                            os.writeInt(newOccurrences.getTermCount());
                            // write node ids, frequencies, and offsets to a temporary buffer
                            previous = null;
                            for (int m = 0; m < newOccurrences.getSize(); ) {
                                previous = newOccurrences.getNode(m).write(previous, buf);
                                final int freq = newOccurrences.getOccurrences(m);
                                buf.writeInt(freq);
                                for (int n = 0; n < freq; n++) {
                                    buf.writeInt(newOccurrences.getOffset(m + n));
                                }
                                m += freq;
                            }
                            final byte[] bufData = buf.toByteArray();
                            // clear the buf for the next iteration
                            buf.clear();
                            // Write length of node IDs + frequency + offsets (bytes)
                            os.writeFixedInt(bufData.length);
                            // Write the node IDs + frequency + offset
                            os.write(bufData);
                        }
                    }
                    changed = true;
                }
            }
            // Store new data, if relevant
            if (changed) {
                // nothing to store: remove the existing data
                if (os.data().size() == 0) {
                    index.db.remove(value);
                } else {
                    if (index.db.put(value, os.data()) == BFile.UNKNOWN_ADDRESS) {
                        LOG.error("Could not put index data for token '{}' in '{}'", key.term, FileUtils.fileName(index.db.getFile()));
                    }
                }
            }
        } catch (final LockException e) {
            LOG.warn("Failed to acquire lock for file {}", FileUtils.fileName(index.db.getFile()), e);
        } catch (final IOException e) {
            LOG.warn("IO error for file {}", FileUtils.fileName(index.db.getFile()), e);
        } finally {
            os.clear();
        }
    }
    ngrams.clear();
}
Also used: ReentrantLock (java.util.concurrent.locks.ReentrantLock), OccurrenceList (org.exist.storage.OccurrenceList), IOException (java.io.IOException), VariableByteInput (org.exist.storage.io.VariableByteInput), VariableByteOutputStream (org.exist.storage.io.VariableByteOutputStream), NodeId (org.exist.numbering.NodeId)

Aggregations

VariableByteOutputStream (org.exist.storage.io.VariableByteOutputStream): 10
ReentrantLock (java.util.concurrent.locks.ReentrantLock): 6
IOException (java.io.IOException): 4
NodeId (org.exist.numbering.NodeId): 4
EXistException (org.exist.EXistException): 3
VariableByteInput (org.exist.storage.io.VariableByteInput): 3
Test (org.junit.Test): 3
OccurrenceList (org.exist.storage.OccurrenceList): 2
Value (org.exist.storage.btree.Value): 2
AtomicValue (org.exist.xquery.value.AtomicValue): 2
StringValue (org.exist.xquery.value.StringValue): 2
Random (java.util.Random): 1
UnsynchronizedByteArrayInputStream (org.apache.commons.io.input.UnsynchronizedByteArrayInputStream): 1
Database (org.exist.Database): 1
DBBroker (org.exist.storage.DBBroker): 1
VariableByteArrayInput (org.exist.storage.io.VariableByteArrayInput): 1
VariableByteInputStream (org.exist.storage.io.VariableByteInputStream): 1