
Example 1 with MergeInfo

Use of org.apache.accumulo.server.manager.state.MergeInfo in the Apache Accumulo project.

From the class Manager, method getMergeInfo:

public MergeInfo getMergeInfo(TableId tableId) {
    ServerContext context = getContext();
    synchronized (mergeLock) {
        try {
            String path = getZooKeeperRoot() + Constants.ZTABLES + "/" + tableId + "/merge";
            if (!context.getZooReaderWriter().exists(path)) {
                return new MergeInfo();
            }
            byte[] data = context.getZooReaderWriter().getData(path);
            DataInputBuffer in = new DataInputBuffer();
            in.reset(data, data.length);
            MergeInfo info = new MergeInfo();
            info.readFields(in);
            return info;
        } catch (KeeperException.NoNodeException ex) {
            log.info("Error reading merge state, it probably just finished");
            return new MergeInfo();
        } catch (Exception ex) {
            log.warn("Unexpected error reading merge state", ex);
            return new MergeInfo();
        }
    }
}
Also used : MergeInfo(org.apache.accumulo.server.manager.state.MergeInfo) DataInputBuffer(org.apache.hadoop.io.DataInputBuffer) ServerContext(org.apache.accumulo.server.ServerContext) KeeperException(org.apache.zookeeper.KeeperException) TableNotFoundException(org.apache.accumulo.core.client.TableNotFoundException) NoAuthException(org.apache.zookeeper.KeeperException.NoAuthException) TException(org.apache.thrift.TException) IOException(java.io.IOException) UnknownHostException(java.net.UnknownHostException) ExecutionException(java.util.concurrent.ExecutionException) TTransportException(org.apache.thrift.transport.TTransportException) ThriftTableOperationException(org.apache.accumulo.core.clientImpl.thrift.ThriftTableOperationException)
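The core move in getMergeInfo is the Writable-style round trip: MergeInfo is stored as raw bytes in a ZooKeeper znode, and readFields(DataInput) rehydrates it. A minimal self-contained sketch of that pattern, using only java.io (ToyMergeInfo and its fields are hypothetical stand-ins, not the real MergeInfo API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class WritableRoundTrip {

    // Hypothetical stand-in for a Writable-style class such as MergeInfo:
    // state is emitted with write(DataOutput) and restored with readFields(DataInput).
    static final class ToyMergeInfo {
        int state;     // stands in for the MergeState ordinal
        String extent; // stands in for the serialized KeyExtent

        void write(DataOutputStream out) throws IOException {
            out.writeInt(state);
            out.writeUTF(extent);
        }

        void readFields(DataInputStream in) throws IOException {
            state = in.readInt();
            extent = in.readUTF();
        }
    }

    public static void main(String[] args) throws IOException {
        ToyMergeInfo original = new ToyMergeInfo();
        original.state = 2;
        original.extent = "1x;row5;row1";

        // Serialize to a byte[] -- in Accumulo this array is what sits under
        // .../tables/<id>/merge in ZooKeeper.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buffer));
        byte[] data = buffer.toByteArray();

        // Deserialize, as getMergeInfo does via DataInputBuffer.reset(data, data.length).
        ToyMergeInfo copy = new ToyMergeInfo();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(data)));

        System.out.println(copy.state + " " + copy.extent);
    }
}
```

Hadoop's DataInputBuffer plays the role of the ByteArrayInputStream/DataInputStream pair here; reset(data, data.length) points the reusable buffer at the znode bytes before readFields consumes them.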

Example 2 with MergeInfo

Use of org.apache.accumulo.server.manager.state.MergeInfo in the Apache Accumulo project.

From the class MergeStats, method main:

public static void main(String[] args) throws Exception {
    ServerUtilOpts opts = new ServerUtilOpts();
    opts.parseArgs(MergeStats.class.getName(), args);
    Span span = TraceUtil.startSpan(MergeStats.class, "main");
    try (Scope scope = span.makeCurrent()) {
        try (AccumuloClient client = Accumulo.newClient().from(opts.getClientProps()).build()) {
            Map<String, String> tableIdMap = client.tableOperations().tableIdMap();
            ZooReaderWriter zooReaderWriter = opts.getServerContext().getZooReaderWriter();
            for (Entry<String, String> entry : tableIdMap.entrySet()) {
                final String table = entry.getKey(), tableId = entry.getValue();
                String path = ZooUtil.getRoot(client.instanceOperations().getInstanceId()) + Constants.ZTABLES + "/" + tableId + "/merge";
                MergeInfo info = new MergeInfo();
                if (zooReaderWriter.exists(path)) {
                    byte[] data = zooReaderWriter.getData(path);
                    DataInputBuffer in = new DataInputBuffer();
                    in.reset(data, data.length);
                    info.readFields(in);
                }
                System.out.printf("%25s  %10s %10s %s%n", table, info.getState(), info.getOperation(), info.getExtent());
            }
        }
    } finally {
        span.end();
    }
}
Also used : AccumuloClient(org.apache.accumulo.core.client.AccumuloClient) MergeInfo(org.apache.accumulo.server.manager.state.MergeInfo) DataInputBuffer(org.apache.hadoop.io.DataInputBuffer) Scope(io.opentelemetry.context.Scope) ZooReaderWriter(org.apache.accumulo.fate.zookeeper.ZooReaderWriter) ServerUtilOpts(org.apache.accumulo.server.cli.ServerUtilOpts) Span(io.opentelemetry.api.trace.Span)

Example 3 with MergeInfo

Use of org.apache.accumulo.server.manager.state.MergeInfo in the Apache Accumulo project.

From the class TableRangeOp, method undo:

@Override
public void undo(long tid, Manager env) throws Exception {
    // Not sure this is a good thing to do. The Manager state engine should be the one to remove it.
    MergeInfo mergeInfo = env.getMergeInfo(tableId);
    if (mergeInfo.getState() != MergeState.NONE)
        log.info("removing merge information {}", mergeInfo);
    env.clearMergeState(tableId);
    Utils.unreserveNamespace(env, namespaceId, tid, false);
    Utils.unreserveTable(env, tableId, tid, true);
}
Also used : MergeInfo(org.apache.accumulo.server.manager.state.MergeInfo)

Example 4 with MergeInfo

Use of org.apache.accumulo.server.manager.state.MergeInfo in the Apache Accumulo project.

From the class TableRangeOpWait, method call:

@Override
public Repo<Manager> call(long tid, Manager manager) throws Exception {
    MergeInfo mergeInfo = manager.getMergeInfo(tableId);
    log.info("removing merge information " + mergeInfo);
    manager.clearMergeState(tableId);
    Utils.unreserveTable(manager, tableId, tid, true);
    Utils.unreserveNamespace(manager, namespaceId, tid, false);
    return null;
}
Also used : MergeInfo(org.apache.accumulo.server.manager.state.MergeInfo)

Example 5 with MergeInfo

Use of org.apache.accumulo.server.manager.state.MergeInfo in the Apache Accumulo project.

From the class TabletStateChangeIteratorIT, method test:

@Test
public void test() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException {
    try (AccumuloClient client = Accumulo.newClient().from(getClientProps()).build()) {
        String[] tables = getUniqueNames(6);
        final String t1 = tables[0];
        final String t2 = tables[1];
        final String t3 = tables[2];
        final String metaCopy1 = tables[3];
        final String metaCopy2 = tables[4];
        final String metaCopy3 = tables[5];
        // create some metadata
        createTable(client, t1, true);
        createTable(client, t2, false);
        createTable(client, t3, true);
        // examine a clone of the metadata table, so we can manipulate it
        copyTable(client, MetadataTable.NAME, metaCopy1);
        State state = new State(client);
        int tabletsInFlux = findTabletsNeedingAttention(client, metaCopy1, state);
        while (tabletsInFlux > 0) {
            log.debug("Waiting for {} tablets for {}", tabletsInFlux, metaCopy1);
            UtilWaitThread.sleep(500);
            copyTable(client, MetadataTable.NAME, metaCopy1);
            tabletsInFlux = findTabletsNeedingAttention(client, metaCopy1, state);
        }
        assertEquals("No tables should need attention", 0, findTabletsNeedingAttention(client, metaCopy1, state));
        // The metadata table stabilized and metaCopy1 contains a copy suitable for testing. Before
        // metaCopy1 is modified, copy it for subsequent test.
        copyTable(client, metaCopy1, metaCopy2);
        copyTable(client, metaCopy1, metaCopy3);
        // test the assigned case (no location)
        removeLocation(client, metaCopy1, t3);
        assertEquals("Should have two tablets without a loc", 2, findTabletsNeedingAttention(client, metaCopy1, state));
        // test the cases where the assignment is to a dead tserver
        reassignLocation(client, metaCopy2, t3);
        assertEquals("Should have one tablet that needs to be unassigned", 1, findTabletsNeedingAttention(client, metaCopy2, state));
        // test the case where there is an ongoing merge
        state = new State(client) {

            @Override
            public Collection<MergeInfo> merges() {
                TableId tableIdToModify = TableId.of(client.tableOperations().tableIdMap().get(t3));
                return Collections.singletonList(new MergeInfo(new KeyExtent(tableIdToModify, null, null), MergeInfo.Operation.MERGE));
            }
        };
        assertEquals("Should have 1 tablet that needs to be chopped or unassigned", 1, findTabletsNeedingAttention(client, metaCopy2, state));
        // test the bad tablet location state case (inconsistent metadata)
        state = new State(client);
        addDuplicateLocation(client, metaCopy3, t3);
        assertEquals("Should have 1 tablet that needs a metadata repair", 1, findTabletsNeedingAttention(client, metaCopy3, state));
        // clean up
        dropTables(client, t1, t2, t3, metaCopy1, metaCopy2, metaCopy3);
    }
}
Also used : AccumuloClient(org.apache.accumulo.core.client.AccumuloClient) TableId(org.apache.accumulo.core.data.TableId) MergeInfo(org.apache.accumulo.server.manager.state.MergeInfo) TableState(org.apache.accumulo.core.manager.state.tables.TableState) ManagerState(org.apache.accumulo.core.manager.thrift.ManagerState) CurrentState(org.apache.accumulo.server.manager.state.CurrentState) Collection(java.util.Collection) KeyExtent(org.apache.accumulo.core.dataImpl.KeyExtent) Test(org.junit.Test)
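Example 5 injects a fake in-progress merge by overriding merges() in an anonymous subclass of State, so the iterator under test sees a merge without any real merge running. A minimal sketch of that override-a-seam pattern (FakeState and its method are hypothetical stand-ins, not the real CurrentState API):

```java
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class OverrideSeam {

    // Hypothetical stand-in for the test's State/CurrentState: the code under
    // test asks merges() which merges are in progress.
    static class FakeState {
        Collection<String> merges() {
            return Collections.emptyList(); // default: nothing is merging
        }
    }

    public static void main(String[] args) {
        // Anonymous subclass, as in the test, pins merges() to one fake merge
        // so downstream logic exercises its merge-handling branch.
        FakeState state = new FakeState() {
            @Override
            public Collection<String> merges() {
                return List.of("table3:MERGE");
            }
        };
        System.out.println(state.merges());
    }
}
```

Because the production code only depends on the merges() seam, the test can drive the merge branch deterministically and then drop back to a plain new State(client) for the remaining assertions.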

Aggregations

MergeInfo (org.apache.accumulo.server.manager.state.MergeInfo): 8
AccumuloClient (org.apache.accumulo.core.client.AccumuloClient): 3
TableId (org.apache.accumulo.core.data.TableId): 3
KeyExtent (org.apache.accumulo.core.dataImpl.KeyExtent): 3
IOException (java.io.IOException): 2
Collection (java.util.Collection): 2
TableNotFoundException (org.apache.accumulo.core.client.TableNotFoundException): 2
ManagerState (org.apache.accumulo.core.manager.thrift.ManagerState): 2
TabletLocationState (org.apache.accumulo.core.metadata.TabletLocationState): 2
MergeStats (org.apache.accumulo.manager.state.MergeStats): 2
ServerContext (org.apache.accumulo.server.ServerContext): 2
Assignment (org.apache.accumulo.server.manager.state.Assignment): 2
DataInputBuffer (org.apache.hadoop.io.DataInputBuffer): 2
Text (org.apache.hadoop.io.Text): 2
TException (org.apache.thrift.TException): 2
Test (org.junit.Test): 2
Span (io.opentelemetry.api.trace.Span): 1
Scope (io.opentelemetry.context.Scope): 1
UnknownHostException (java.net.UnknownHostException): 1
HashMap (java.util.HashMap): 1