Example 41 with MutationsRejectedException

Use of org.apache.accumulo.core.client.MutationsRejectedException in project accumulo by apache.

From the class MetaDataStateStore, method suspend:

@Override
public void suspend(Collection<TabletLocationState> tablets, Map<TServerInstance, List<Path>> logsForDeadServers, long suspensionTimestamp) throws DistributedStoreException {
    BatchWriter writer = createBatchWriter();
    try {
        for (TabletLocationState tls : tablets) {
            Mutation m = new Mutation(tls.extent.getMetadataEntry());
            if (tls.current != null) {
                tls.current.clearLocation(m);
                if (logsForDeadServers != null) {
                    List<Path> logs = logsForDeadServers.get(tls.current);
                    if (logs != null) {
                        for (Path log : logs) {
                            LogEntry entry = new LogEntry(tls.extent, 0, tls.current.hostPort(), log.toString());
                            m.put(entry.getColumnFamily(), entry.getColumnQualifier(), entry.getValue());
                        }
                    }
                }
                if (suspensionTimestamp >= 0) {
                    SuspendingTServer suspender = new SuspendingTServer(tls.current.getLocation(), suspensionTimestamp);
                    suspender.setSuspension(m);
                }
            }
            if (tls.suspend != null && suspensionTimestamp < 0) {
                SuspendingTServer.clearSuspension(m);
            }
            if (tls.future != null) {
                tls.future.clearFutureLocation(m);
            }
            writer.addMutation(m);
        }
    } catch (Exception ex) {
        throw new DistributedStoreException(ex);
    } finally {
        try {
            writer.close();
        } catch (MutationsRejectedException e) {
            throw new DistributedStoreException(e);
        }
    }
}
Also used : Path(org.apache.hadoop.fs.Path) BatchWriter(org.apache.accumulo.core.client.BatchWriter) Mutation(org.apache.accumulo.core.data.Mutation) LogEntry(org.apache.accumulo.core.tabletserver.log.LogEntry) MutationsRejectedException(org.apache.accumulo.core.client.MutationsRejectedException)

Example 42 with MutationsRejectedException

Use of org.apache.accumulo.core.client.MutationsRejectedException in project accumulo by apache.

From the class MetaDataStateStore, method unsuspend:

@Override
public void unsuspend(Collection<TabletLocationState> tablets) throws DistributedStoreException {
    BatchWriter writer = createBatchWriter();
    try {
        for (TabletLocationState tls : tablets) {
            if (tls.suspend != null) {
                continue;
            }
            Mutation m = new Mutation(tls.extent.getMetadataEntry());
            SuspendingTServer.clearSuspension(m);
            writer.addMutation(m);
        }
    } catch (Exception ex) {
        throw new DistributedStoreException(ex);
    } finally {
        try {
            writer.close();
        } catch (MutationsRejectedException e) {
            throw new DistributedStoreException(e);
        }
    }
}
Also used : BatchWriter(org.apache.accumulo.core.client.BatchWriter) Mutation(org.apache.accumulo.core.data.Mutation) MutationsRejectedException(org.apache.accumulo.core.client.MutationsRejectedException)

Example 43 with MutationsRejectedException

Use of org.apache.accumulo.core.client.MutationsRejectedException in project accumulo by apache.

From the class MetaDataStateStore, method setFutureLocations:

@Override
public void setFutureLocations(Collection<Assignment> assignments) throws DistributedStoreException {
    BatchWriter writer = createBatchWriter();
    try {
        for (Assignment assignment : assignments) {
            Mutation m = new Mutation(assignment.tablet.getMetadataEntry());
            SuspendingTServer.clearSuspension(m);
            assignment.server.putFutureLocation(m);
            writer.addMutation(m);
        }
    } catch (Exception ex) {
        throw new DistributedStoreException(ex);
    } finally {
        try {
            writer.close();
        } catch (MutationsRejectedException e) {
            throw new DistributedStoreException(e);
        }
    }
}
Also used : BatchWriter(org.apache.accumulo.core.client.BatchWriter) Mutation(org.apache.accumulo.core.data.Mutation) MutationsRejectedException(org.apache.accumulo.core.client.MutationsRejectedException)
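
Examples 41, 42, and 43 share one skeleton: build mutations, hand them to the BatchWriter returned by createBatchWriter(), wrap any write failure in DistributedStoreException, and close the writer in a finally block because close() flushes buffered mutations and can itself throw MutationsRejectedException. As a hedged illustration only, that repeated boilerplate could be factored out roughly as below; the MutationBody interface and the writeMutations helper are assumptions for this sketch, not part of the accumulo code.

// Hypothetical refactoring sketch, imagined as living inside MetaDataStateStore.
// MutationBody and writeMutations are illustrative names, not project API.
@FunctionalInterface
interface MutationBody {
    void write(BatchWriter writer) throws Exception;
}

private void writeMutations(MutationBody body) throws DistributedStoreException {
    // Same private factory used by suspend(), unsuspend(), and setFutureLocations().
    BatchWriter writer = createBatchWriter();
    try {
        body.write(writer);
    } catch (Exception ex) {
        // Any failure while building or adding mutations is wrapped once.
        throw new DistributedStoreException(ex);
    } finally {
        try {
            // close() can also be rejected while flushing the remaining mutations.
            writer.close();
        } catch (MutationsRejectedException e) {
            throw new DistributedStoreException(e);
        }
    }
}

With such a helper, unsuspend() would reduce to a single writeMutations(...) call whose body loops over the tablets and adds one mutation per tablet; the exception handling stays identical to the examples above.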

Example 44 with MutationsRejectedException

Use of org.apache.accumulo.core.client.MutationsRejectedException in project accumulo by apache.

From the class FinishedWorkUpdater, method run:

@Override
public void run() {
    log.debug("Looking for finished replication work");
    if (!ReplicationTable.isOnline(conn)) {
        log.debug("Replication table is not yet online, will retry");
        return;
    }
    BatchScanner bs;
    BatchWriter replBw;
    try {
        bs = ReplicationTable.getBatchScanner(conn, 4);
        replBw = ReplicationTable.getBatchWriter(conn);
    } catch (ReplicationTableOfflineException e) {
        log.debug("Table is no longer online, will retry");
        return;
    }
    IteratorSetting cfg = new IteratorSetting(50, WholeRowIterator.class);
    bs.addScanIterator(cfg);
    WorkSection.limit(bs);
    bs.setRanges(Collections.singleton(new Range()));
    try {
        for (Entry<Key, Value> serializedRow : bs) {
            SortedMap<Key, Value> wholeRow;
            try {
                wholeRow = WholeRowIterator.decodeRow(serializedRow.getKey(), serializedRow.getValue());
            } catch (IOException e) {
                log.warn("Could not deserialize whole row with key {}", serializedRow.getKey().toStringNoTruncate(), e);
                continue;
            }
            log.debug("Processing work progress for {} with {} columns", serializedRow.getKey().getRow(), wholeRow.size());
            Map<Table.ID, Long> tableIdToProgress = new HashMap<>();
            boolean error = false;
            Text buffer = new Text();
            // Determine the minimum point to which all Work entries have replicated
            for (Entry<Key, Value> entry : wholeRow.entrySet()) {
                Status status;
                try {
                    status = Status.parseFrom(entry.getValue().get());
                } catch (InvalidProtocolBufferException e) {
                    log.warn("Could not deserialize protobuf for {}", entry.getKey(), e);
                    error = true;
                    break;
                }
                // Get the replication target for the work record
                entry.getKey().getColumnQualifier(buffer);
                ReplicationTarget target = ReplicationTarget.from(buffer);
                // Initialize the value in the map if we don't have one
                if (!tableIdToProgress.containsKey(target.getSourceTableId())) {
                    tableIdToProgress.put(target.getSourceTableId(), Long.MAX_VALUE);
                }
                // Find the minimum value for begin (everyone has replicated up to this offset in the file)
                tableIdToProgress.put(target.getSourceTableId(), Math.min(tableIdToProgress.get(target.getSourceTableId()), status.getBegin()));
            }
            if (error) {
                continue;
            }
            // Update the replication table for each source table we found work records for
            for (Entry<Table.ID, Long> entry : tableIdToProgress.entrySet()) {
                // If the progress is 0, then no one has replicated anything, and we don't need to update anything
                if (0 == entry.getValue()) {
                    continue;
                }
                serializedRow.getKey().getRow(buffer);
                log.debug("For {}, source table ID {} has replicated through {}", serializedRow.getKey().getRow(), entry.getKey(), entry.getValue());
                Mutation replMutation = new Mutation(buffer);
                // Set that we replicated at least this much data, ignoring the other fields
                Status updatedStatus = StatusUtil.replicated(entry.getValue());
                Value serializedUpdatedStatus = ProtobufUtil.toValue(updatedStatus);
                // Pull the sourceTableId into a Text
                Table.ID srcTableId = entry.getKey();
                // Make the mutation
                StatusSection.add(replMutation, srcTableId, serializedUpdatedStatus);
                log.debug("Updating replication status entry for {} with {}", serializedRow.getKey().getRow(), ProtobufUtil.toString(updatedStatus));
                try {
                    replBw.addMutation(replMutation);
                } catch (MutationsRejectedException e) {
                    log.error("Error writing mutations to update replication Status messages in StatusSection, will retry", e);
                    return;
                }
            }
        }
    } finally {
        log.debug("Finished updating files with completed replication work");
        bs.close();
        try {
            replBw.close();
        } catch (MutationsRejectedException e) {
            log.error("Error writing mutations to update replication Status messages in StatusSection, will retry", e);
        }
    }
}
Also used : Status(org.apache.accumulo.server.replication.proto.Replication.Status) Table(org.apache.accumulo.core.client.impl.Table) ReplicationTable(org.apache.accumulo.core.replication.ReplicationTable) HashMap(java.util.HashMap) BatchScanner(org.apache.accumulo.core.client.BatchScanner) InvalidProtocolBufferException(com.google.protobuf.InvalidProtocolBufferException) Text(org.apache.hadoop.io.Text) IOException(java.io.IOException) Range(org.apache.accumulo.core.data.Range) IteratorSetting(org.apache.accumulo.core.client.IteratorSetting) ReplicationTarget(org.apache.accumulo.core.replication.ReplicationTarget) Value(org.apache.accumulo.core.data.Value) BatchWriter(org.apache.accumulo.core.client.BatchWriter) ReplicationTableOfflineException(org.apache.accumulo.core.replication.ReplicationTableOfflineException) Mutation(org.apache.accumulo.core.data.Mutation) Key(org.apache.accumulo.core.data.Key) MutationsRejectedException(org.apache.accumulo.core.client.MutationsRejectedException)

Example 45 with MutationsRejectedException

Use of org.apache.accumulo.core.client.MutationsRejectedException in project accumulo by apache.

From the class StatusMaker, method addStatusRecord:

/**
 * Create a status record in the replication table
 */
protected boolean addStatusRecord(Text file, Table.ID tableId, Value v) {
    try {
        Mutation m = new Mutation(file);
        m.put(StatusSection.NAME, new Text(tableId.getUtf8()), v);
        try {
            replicationWriter.addMutation(m);
        } catch (MutationsRejectedException e) {
            log.warn("Failed to write work mutations for replication, will retry", e);
            return false;
        }
    } finally {
        try {
            replicationWriter.flush();
        } catch (MutationsRejectedException e) {
            log.warn("Failed to write work mutations for replication, will retry", e);
            return false;
        }
    }
    return true;
}
Also used : Text(org.apache.hadoop.io.Text) Mutation(org.apache.accumulo.core.data.Mutation) MutationsRejectedException(org.apache.accumulo.core.client.MutationsRejectedException)

Aggregations

MutationsRejectedException (org.apache.accumulo.core.client.MutationsRejectedException): 68 usages
Mutation (org.apache.accumulo.core.data.Mutation): 48 usages
BatchWriter (org.apache.accumulo.core.client.BatchWriter): 40 usages
BatchWriterConfig (org.apache.accumulo.core.client.BatchWriterConfig): 23 usages
Value (org.apache.accumulo.core.data.Value): 23 usages
TableNotFoundException (org.apache.accumulo.core.client.TableNotFoundException): 21 usages
Text (org.apache.hadoop.io.Text): 20 usages
Key (org.apache.accumulo.core.data.Key): 13 usages
IOException (java.io.IOException): 12 usages
AccumuloSecurityException (org.apache.accumulo.core.client.AccumuloSecurityException): 12 usages
AccumuloException (org.apache.accumulo.core.client.AccumuloException): 11 usages
HashMap (java.util.HashMap): 10 usages
ColumnVisibility (org.apache.accumulo.core.security.ColumnVisibility): 9 usages
ArrayList (java.util.ArrayList): 8 usages
Test (org.junit.Test): 8 usages
Entry (java.util.Map.Entry): 6 usages
TableExistsException (org.apache.accumulo.core.client.TableExistsException): 6 usages
ConditionalMutation (org.apache.accumulo.core.data.ConditionalMutation): 6 usages
ConstraintViolationSummary (org.apache.accumulo.core.data.ConstraintViolationSummary): 6 usages
PrestoException (com.facebook.presto.spi.PrestoException): 5 usages
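
The aggregation above reflects the pattern that recurs across these examples: a BatchWriterConfig configures a BatchWriter, Mutation objects carry the updates, and MutationsRejectedException is handled around both addMutation() and close(), optionally inspecting the ConstraintViolationSummary list the exception carries. The following minimal, self-contained sketch is not taken from any of the projects above; the class name, method name, table name, row, and column values are placeholders.

import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.MutationsRejectedException;
import org.apache.accumulo.core.client.TableNotFoundException;
import org.apache.accumulo.core.data.ConstraintViolationSummary;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.Text;

public class RejectedMutationExample {

    // Writes one placeholder row and reports whether the mutations were accepted.
    public static boolean writeOneRow(Connector conn, String table) throws TableNotFoundException {
        BatchWriterConfig cfg = new BatchWriterConfig()
            .setMaxMemory(10 * 1024 * 1024)
            .setMaxLatency(30, TimeUnit.SECONDS)
            .setMaxWriteThreads(4);
        BatchWriter writer = conn.createBatchWriter(table, cfg);
        try {
            Mutation m = new Mutation(new Text("row1"));
            m.put(new Text("cf"), new Text("cq"), new Value("value".getBytes(StandardCharsets.UTF_8)));
            writer.addMutation(m);
            return true;
        } catch (MutationsRejectedException e) {
            // Constraint violations explain why the tablet servers refused the write.
            for (ConstraintViolationSummary cvs : e.getConstraintViolationSummaries()) {
                System.err.println("Rejected by constraint: " + cvs);
            }
            return false;
        } finally {
            try {
                // close() flushes buffered mutations, so it can also be rejected.
                writer.close();
            } catch (MutationsRejectedException e) {
                System.err.println("Mutations rejected while closing the writer: " + e.getMessage());
            }
        }
    }
}

A caller could retry when this returns false, which is essentially what StatusMaker.addStatusRecord() in Example 45 signals with its boolean result.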