Example 61 with KeyExtent

Use of org.apache.accumulo.core.data.impl.KeyExtent in project accumulo by apache.

From the class Tablet, method getCompactionID.

public Pair<Long, UserCompactionConfig> getCompactionID() throws NoNodeException {
    try {
        String zTablePath = Constants.ZROOT + "/" + tabletServer.getInstance().getInstanceID() + Constants.ZTABLES + "/" + extent.getTableId() + Constants.ZTABLE_COMPACT_ID;
        String[] tokens = new String(ZooReaderWriter.getInstance().getData(zTablePath, null), UTF_8).split(",");
        long compactID = Long.parseLong(tokens[0]);
        UserCompactionConfig compactionConfig = new UserCompactionConfig();
        if (tokens.length > 1) {
            Hex hex = new Hex();
            ByteArrayInputStream bais = new ByteArrayInputStream(hex.decode(tokens[1].split("=")[1].getBytes(UTF_8)));
            DataInputStream dis = new DataInputStream(bais);
            try {
                compactionConfig.readFields(dis);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
            KeyExtent ke = new KeyExtent(extent.getTableId(), compactionConfig.getEndRow(), compactionConfig.getStartRow());
            if (!ke.overlaps(extent)) {
                // the user compaction config (and its iterators) only applies when the compaction range overlaps this tablet; reset to an empty config otherwise
                compactionConfig = new UserCompactionConfig();
            }
        }
        return new Pair<>(compactID, compactionConfig);
    } catch (InterruptedException | DecoderException | NumberFormatException e) {
        throw new RuntimeException(e);
    } catch (KeeperException ke) {
        if (ke instanceof NoNodeException) {
            throw (NoNodeException) ke;
        } else {
            throw new RuntimeException(ke);
        }
    }
}
Also used: NoNodeException (org.apache.zookeeper.KeeperException.NoNodeException), UserCompactionConfig (org.apache.accumulo.server.master.tableOps.UserCompactionConfig), IOException (java.io.IOException), DataInputStream (java.io.DataInputStream), KeyExtent (org.apache.accumulo.core.data.impl.KeyExtent), IterationInterruptedException (org.apache.accumulo.core.iterators.IterationInterruptedException), DecoderException (org.apache.commons.codec.DecoderException), ByteArrayInputStream (java.io.ByteArrayInputStream), Hex (org.apache.commons.codec.binary.Hex), KeeperException (org.apache.zookeeper.KeeperException), Pair (org.apache.accumulo.core.util.Pair)
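
For context, the code above implies the ZooKeeper compaction node holds a value of the form <id>[,<name>=<hexPayload>], where the optional payload is a hex-encoded serialized UserCompactionConfig. A minimal standalone sketch of that parsing, using a made-up payload string (the node layout is inferred from the code above, not confirmed from Accumulo docs):

import static java.nio.charset.StandardCharsets.UTF_8;

import org.apache.commons.codec.DecoderException;
import org.apache.commons.codec.binary.Hex;

public class CompactIdParseSketch {
    public static void main(String[] args) throws DecoderException {
        // "demo" stands in for a serialized UserCompactionConfig payload
        String node = "7,userCompactionConfig=" + Hex.encodeHexString("demo".getBytes(UTF_8));
        String[] tokens = node.split(",");
        long compactId = Long.parseLong(tokens[0]); // the numeric id is always first
        System.out.println("compaction id: " + compactId);
        if (tokens.length > 1) {
            byte[] payload = Hex.decodeHex(tokens[1].split("=")[1].toCharArray());
            System.out.println("payload: " + new String(payload, UTF_8));
        }
    }
}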

Example 62 with KeyExtent

Use of org.apache.accumulo.core.data.impl.KeyExtent in project accumulo by apache.

From the class Tablet, method split.

public TreeMap<KeyExtent, TabletData> split(byte[] sp) throws IOException {
    if (sp != null && extent.getEndRow() != null && extent.getEndRow().equals(new Text(sp))) {
        throw new IllegalArgumentException();
    }
    if (sp != null && sp.length > tableConfiguration.getAsBytes(Property.TABLE_MAX_END_ROW_SIZE)) {
        String msg = "Cannot split tablet " + extent + ", selected split point too long.  Length :  " + sp.length;
        log.warn(msg);
        throw new IOException(msg);
    }
    if (extent.isRootTablet()) {
        String msg = "Cannot split root tablet";
        log.warn(msg);
        throw new RuntimeException(msg);
    }
    try {
        initiateClose(true, false, false);
    } catch (IllegalStateException ise) {
        log.debug("File {} not splitting : {}", extent, ise.getMessage());
        return null;
    }
    // obtain this info outside of synch block since it will involve opening
    // the map files... it is ok if the set of map files changes, because
    // this info is used for optimization... it is ok if map files are missing
    // from the set... can still query and insert into the tablet while this
    // map file operation is happening
    Map<FileRef, FileUtil.FileInfo> firstAndLastRows = FileUtil.tryToGetFirstAndLastRows(getTabletServer().getFileSystem(), getTabletServer().getConfiguration(), getDatafileManager().getFiles());
    synchronized (this) {
        // java needs tuples ...
        TreeMap<KeyExtent, TabletData> newTablets = new TreeMap<>();
        long t1 = System.currentTimeMillis();
        // choose a split point
        SplitRowSpec splitPoint;
        if (sp == null) {
            splitPoint = findSplitRow(getDatafileManager().getFiles());
        } else {
            Text tsp = new Text(sp);
            splitPoint = new SplitRowSpec(FileUtil.estimatePercentageLTE(getTabletServer().getFileSystem(), tabletDirectory, getTabletServer().getConfiguration(), extent.getPrevEndRow(), extent.getEndRow(), FileUtil.toPathStrings(getDatafileManager().getFiles()), tsp), tsp);
        }
        if (splitPoint == null || splitPoint.row == null) {
            log.info("had to abort split because splitRow was null");
            closeState = CloseState.OPEN;
            return null;
        }
        closeState = CloseState.CLOSING;
        completeClose(true, false);
        Text midRow = splitPoint.row;
        double splitRatio = splitPoint.splitRatio;
        KeyExtent low = new KeyExtent(extent.getTableId(), midRow, extent.getPrevEndRow());
        KeyExtent high = new KeyExtent(extent.getTableId(), extent.getEndRow(), midRow);
        String lowDirectory = createTabletDirectory(getTabletServer().getFileSystem(), extent.getTableId(), midRow);
        // write new tablet information to MetadataTable
        SortedMap<FileRef, DataFileValue> lowDatafileSizes = new TreeMap<>();
        SortedMap<FileRef, DataFileValue> highDatafileSizes = new TreeMap<>();
        List<FileRef> highDatafilesToRemove = new ArrayList<>();
        MetadataTableUtil.splitDatafiles(midRow, splitRatio, firstAndLastRows, getDatafileManager().getDatafileSizes(), lowDatafileSizes, highDatafileSizes, highDatafilesToRemove);
        log.debug("Files for low split {} {}", low, lowDatafileSizes.keySet());
        log.debug("Files for high split {} {}", high, highDatafileSizes.keySet());
        String time = tabletTime.getMetadataValue();
        MetadataTableUtil.splitTablet(high, extent.getPrevEndRow(), splitRatio, getTabletServer(), getTabletServer().getLock());
        MasterMetadataUtil.addNewTablet(getTabletServer(), low, lowDirectory, getTabletServer().getTabletSession(), lowDatafileSizes, getBulkIngestedFiles(), time, lastFlushID, lastCompactID, getTabletServer().getLock());
        MetadataTableUtil.finishSplit(high, highDatafileSizes, highDatafilesToRemove, getTabletServer(), getTabletServer().getLock());
        log.debug("TABLET_HIST {} split {} {}", extent, low, high);
        newTablets.put(high, new TabletData(tabletDirectory, highDatafileSizes, time, lastFlushID, lastCompactID, lastLocation, getBulkIngestedFiles()));
        newTablets.put(low, new TabletData(lowDirectory, lowDatafileSizes, time, lastFlushID, lastCompactID, lastLocation, getBulkIngestedFiles()));
        long t2 = System.currentTimeMillis();
        log.debug(String.format("offline split time : %6.2f secs", (t2 - t1) / 1000.0));
        closeState = CloseState.COMPLETE;
        return newTablets;
    }
}
Also used: DataFileValue (org.apache.accumulo.core.metadata.schema.DataFileValue), CopyOnWriteArrayList (java.util.concurrent.CopyOnWriteArrayList), ArrayList (java.util.ArrayList), Text (org.apache.hadoop.io.Text), IOException (java.io.IOException), TreeMap (java.util.TreeMap), KeyExtent (org.apache.accumulo.core.data.impl.KeyExtent), MapFileInfo (org.apache.accumulo.core.data.thrift.MapFileInfo), FileRef (org.apache.accumulo.server.fs.FileRef)
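
As a reading aid for the split above: a tablet covers the half-open row range (prevEndRow, endRow], and splitting at midRow produces low = (prevEndRow, midRow] and high = (midRow, endRow]. A minimal sketch using only the KeyExtent constructor seen in the method (argument order is tableId, endRow, prevEndRow):

import org.apache.accumulo.core.client.impl.Table;
import org.apache.accumulo.core.data.impl.KeyExtent;
import org.apache.hadoop.io.Text;

public class SplitGeometrySketch {
    public static void main(String[] args) {
        // parent tablet covers (null, "m"], i.e. all rows up to and including "m"
        KeyExtent parent = new KeyExtent(Table.ID.of("1"), new Text("m"), null);
        Text midRow = new Text("g");
        KeyExtent low = new KeyExtent(parent.getTableId(), midRow, parent.getPrevEndRow());
        KeyExtent high = new KeyExtent(parent.getTableId(), parent.getEndRow(), midRow);
        // low covers (null, "g"], high covers ("g", "m"]
        System.out.println(low + " and " + high);
    }
}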

Example 63 with KeyExtent

Use of org.apache.accumulo.core.data.impl.KeyExtent in project accumulo by apache.

From the class LogFileTest, method testReadFields.

@Test
public void testReadFields() throws IOException {
    LogFileKey key = new LogFileKey();
    LogFileValue value = new LogFileValue();
    key.tserverSession = "";
    readWrite(OPEN, -1, -1, null, null, null, key, value);
    assertEquals(key.event, OPEN);
    readWrite(COMPACTION_FINISH, 1, 2, null, null, null, key, value);
    assertEquals(key.event, COMPACTION_FINISH);
    assertEquals(key.seq, 1);
    assertEquals(key.tid, 2);
    readWrite(COMPACTION_START, 3, 4, "some file", null, null, key, value);
    assertEquals(key.event, COMPACTION_START);
    assertEquals(key.seq, 3);
    assertEquals(key.tid, 4);
    assertEquals(key.filename, "some file");
    KeyExtent tablet = new KeyExtent(Table.ID.of("table"), new Text("bbbb"), new Text("aaaa"));
    readWrite(DEFINE_TABLET, 5, 6, null, tablet, null, key, value);
    assertEquals(key.event, DEFINE_TABLET);
    assertEquals(key.seq, 5);
    assertEquals(key.tid, 6);
    assertEquals(key.tablet, tablet);
    Mutation m = new ServerMutation(new Text("row"));
    m.put(new Text("cf"), new Text("cq"), new Value("value".getBytes()));
    readWrite(MUTATION, 7, 8, null, null, new Mutation[] { m }, key, value);
    assertEquals(key.event, MUTATION);
    assertEquals(key.seq, 7);
    assertEquals(key.tid, 8);
    assertEquals(value.mutations, Arrays.asList(m));
    m = new ServerMutation(new Text("row"));
    m.put(new Text("cf"), new Text("cq"), new ColumnVisibility("vis"), 12345, new Value("value".getBytes()));
    m.put(new Text("cf"), new Text("cq"), new ColumnVisibility("vis2"), new Value("value".getBytes()));
    m.putDelete(new Text("cf"), new Text("cq"), new ColumnVisibility("vis2"));
    readWrite(MUTATION, 8, 9, null, null, new Mutation[] { m }, key, value);
    assertEquals(key.event, MUTATION);
    assertEquals(key.seq, 8);
    assertEquals(key.tid, 9);
    assertEquals(value.mutations, Arrays.asList(m));
    readWrite(MANY_MUTATIONS, 9, 10, null, null, new Mutation[] { m, m }, key, value);
    assertEquals(key.event, MANY_MUTATIONS);
    assertEquals(key.seq, 9);
    assertEquals(key.tid, 10);
    assertEquals(value.mutations, Arrays.asList(m, m));
}
Also used: Value (org.apache.accumulo.core.data.Value), ServerMutation (org.apache.accumulo.server.data.ServerMutation), Text (org.apache.hadoop.io.Text), Mutation (org.apache.accumulo.core.data.Mutation), ColumnVisibility (org.apache.accumulo.core.security.ColumnVisibility), KeyExtent (org.apache.accumulo.core.data.impl.KeyExtent), Test (org.junit.Test)
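
The readWrite(...) helper is defined elsewhere in LogFileTest; presumably it writes the key and value to a stream and reads them back before the assertions run. Since LogFileKey is a Hadoop Writable, a minimal round trip looks like the following sketch (an illustration, not the test's actual helper):

import static org.apache.accumulo.tserver.logger.LogEvents.DEFINE_TABLET;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.accumulo.core.client.impl.Table;
import org.apache.accumulo.core.data.impl.KeyExtent;
import org.apache.accumulo.tserver.logger.LogFileKey;
import org.apache.hadoop.io.Text;

public class LogFileKeyRoundTripSketch {
    public static void main(String[] args) throws IOException {
        LogFileKey out = new LogFileKey();
        out.event = DEFINE_TABLET;
        out.seq = 5;
        out.tid = 6;
        out.tablet = new KeyExtent(Table.ID.of("table"), new Text("bbbb"), new Text("aaaa"));

        // serialize, then deserialize into a fresh key
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        out.write(new DataOutputStream(baos));
        LogFileKey in = new LogFileKey();
        in.readFields(new DataInputStream(new ByteArrayInputStream(baos.toByteArray())));

        System.out.println(in.event + " seq=" + in.seq + " tablet=" + in.tablet);
    }
}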

Example 64 with KeyExtent

Use of org.apache.accumulo.core.data.impl.KeyExtent in project accumulo by apache.

From the class AccumuloReplicaSystemTest, method restartInFileKnowsAboutPreviousTableDefines.

@Test
public void restartInFileKnowsAboutPreviousTableDefines() throws Exception {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream dos = new DataOutputStream(baos);
    LogFileKey key = new LogFileKey();
    LogFileValue value = new LogFileValue();
    // What is seq used for?
    key.seq = 1L;
    /*
     * Disclaimer: the following series of LogFileKey and LogFileValue pairs bears *no* resemblance to what these entries would actually look like in a WAL.
     * They are solely for testing that each LogEvents type is handled; order is not important.
     */
    key.event = LogEvents.DEFINE_TABLET;
    key.tablet = new KeyExtent(Table.ID.of("1"), null, null);
    key.tid = 1;
    key.write(dos);
    value.write(dos);
    key.tablet = null;
    key.event = LogEvents.MUTATION;
    key.filename = "/accumulo/wals/tserver+port/" + UUID.randomUUID();
    value.mutations = Arrays.asList(new ServerMutation(new Text("row")));
    key.write(dos);
    value.write(dos);
    key.tablet = null;
    key.event = LogEvents.MUTATION;
    key.tid = 1;
    key.filename = "/accumulo/wals/tserver+port/" + UUID.randomUUID();
    value.mutations = Arrays.asList(new ServerMutation(new Text("row")));
    key.write(dos);
    value.write(dos);
    dos.close();
    Map<String, String> confMap = new HashMap<>();
    confMap.put(Property.REPLICATION_NAME.getKey(), "source");
    AccumuloConfiguration conf = new ConfigurationCopy(confMap);
    AccumuloReplicaSystem ars = new AccumuloReplicaSystem();
    ars.setConf(conf);
    Status status = Status.newBuilder().setBegin(0).setEnd(0).setInfiniteEnd(true).setClosed(false).build();
    DataInputStream dis = new DataInputStream(new ByteArrayInputStream(baos.toByteArray()));
    HashSet<Integer> tids = new HashSet<>();
    // Only consume the first mutation, not the second
    WalReplication repl = ars.getWalEdits(new ReplicationTarget("peer", "1", Table.ID.of("1")), dis, new Path("/accumulo/wals/tserver+port/wal"), status, 1L, tids);
    // We stopped because we got to the end of the file
    Assert.assertEquals(2, repl.entriesConsumed);
    Assert.assertEquals(1, repl.walEdits.getEditsSize());
    Assert.assertEquals(1, repl.sizeInRecords);
    Assert.assertNotEquals(0, repl.sizeInBytes);
    status = Status.newBuilder(status).setBegin(2).build();
    // Consume the rest of the mutations
    repl = ars.getWalEdits(new ReplicationTarget("peer", "1", Table.ID.of("1")), dis, new Path("/accumulo/wals/tserver+port/wal"), status, 1L, tids);
    // We stopped because we got to the end of the file
    Assert.assertEquals(1, repl.entriesConsumed);
    Assert.assertEquals(1, repl.walEdits.getEditsSize());
    Assert.assertEquals(1, repl.sizeInRecords);
    Assert.assertNotEquals(0, repl.sizeInBytes);
}
Also used: Status (org.apache.accumulo.server.replication.proto.Replication.Status), Path (org.apache.hadoop.fs.Path), ConfigurationCopy (org.apache.accumulo.core.conf.ConfigurationCopy), HashMap (java.util.HashMap), DataOutputStream (java.io.DataOutputStream), WalReplication (org.apache.accumulo.tserver.replication.AccumuloReplicaSystem.WalReplication), ServerMutation (org.apache.accumulo.server.data.ServerMutation), Text (org.apache.hadoop.io.Text), ByteArrayOutputStream (java.io.ByteArrayOutputStream), LogFileKey (org.apache.accumulo.tserver.logger.LogFileKey), DataInputStream (java.io.DataInputStream), KeyExtent (org.apache.accumulo.core.data.impl.KeyExtent), ReplicationTarget (org.apache.accumulo.core.replication.ReplicationTarget), ByteArrayInputStream (java.io.ByteArrayInputStream), LogFileValue (org.apache.accumulo.tserver.logger.LogFileValue), AccumuloConfiguration (org.apache.accumulo.core.conf.AccumuloConfiguration), HashSet (java.util.HashSet), Test (org.junit.Test)
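
The point of the test is tid bookkeeping across calls: both getWalEdits invocations receive the same tids set, so the DEFINE_TABLET consumed in the first pass still qualifies the later MUTATION carrying the same tid. A simplified sketch of that bookkeeping (an illustration, not the AccumuloReplicaSystem implementation):

import java.util.HashSet;
import java.util.Set;

import org.apache.accumulo.core.client.impl.Table;
import org.apache.accumulo.tserver.logger.LogEvents;
import org.apache.accumulo.tserver.logger.LogFileKey;

public class TidBookkeepingSketch {
    private final Set<Integer> tids = new HashSet<>();

    // On a DEFINE_TABLET for the table being replicated, remember its tid.
    void observe(LogFileKey key, Table.ID target) {
        if (key.event == LogEvents.DEFINE_TABLET && key.tablet.getTableId().equals(target)) {
            tids.add(key.tid);
        }
    }

    // Replicate mutations only if their tid was previously defined for the target
    // table; because the caller passes the same set into each call, this survives
    // resuming partway through a file.
    boolean shouldReplicate(LogFileKey key) {
        return (key.event == LogEvents.MUTATION || key.event == LogEvents.MANY_MUTATIONS)
                && tids.contains(key.tid);
    }
}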

Example 65 with KeyExtent

Use of org.apache.accumulo.core.data.impl.KeyExtent in project accumulo by apache.

From the class AssignmentWatcherTest, method testAssignmentWarning.

@Test
public void testAssignmentWarning() {
    ActiveAssignmentRunnable task = EasyMock.createMock(ActiveAssignmentRunnable.class);
    RunnableStartedAt run = new RunnableStartedAt(task, System.currentTimeMillis());
    EasyMock.expect(conf.getTimeInMillis(Property.TSERV_ASSIGNMENT_DURATION_WARNING)).andReturn(0L);
    assignments.put(new KeyExtent(Table.ID.of("1"), null, null), run);
    EasyMock.expect(task.getException()).andReturn(new Exception("Assignment warning happened"));
    EasyMock.expect(timer.schedule(watcher, 5000L)).andReturn(null);
    EasyMock.replay(timer, conf, task);
    watcher.run();
    EasyMock.verify(timer, conf, task);
}
Also used: KeyExtent (org.apache.accumulo.core.data.impl.KeyExtent), Test (org.junit.Test)
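
One detail worth noting: the extent registered above has a null end row and a null previous end row, which denotes the single tablet spanning the whole table. A one-class illustration:

import org.apache.accumulo.core.client.impl.Table;
import org.apache.accumulo.core.data.impl.KeyExtent;

public class WholeTableExtentSketch {
    public static void main(String[] args) {
        // null endRow and null prevEndRow mean the extent covers the entire table
        KeyExtent whole = new KeyExtent(Table.ID.of("1"), null, null);
        System.out.println(whole); // "1<<" in Accumulo's extent notation
    }
}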

Aggregations

KeyExtent (org.apache.accumulo.core.data.impl.KeyExtent): 219
Test (org.junit.Test): 84
Text (org.apache.hadoop.io.Text): 82
Value (org.apache.accumulo.core.data.Value): 67
ArrayList (java.util.ArrayList): 63
Key (org.apache.accumulo.core.data.Key): 59
HashMap (java.util.HashMap): 50
Mutation (org.apache.accumulo.core.data.Mutation): 40
Scanner (org.apache.accumulo.core.client.Scanner): 39
Range (org.apache.accumulo.core.data.Range): 39
TreeMap (java.util.TreeMap): 37
TServerInstance (org.apache.accumulo.server.master.state.TServerInstance): 36
Table (org.apache.accumulo.core.client.impl.Table): 34
HashSet (java.util.HashSet): 30
List (java.util.List): 29
TKeyExtent (org.apache.accumulo.core.data.thrift.TKeyExtent): 29
Connector (org.apache.accumulo.core.client.Connector): 28
IOException (java.io.IOException): 27
MetadataTable (org.apache.accumulo.core.metadata.MetadataTable): 25
DataFileValue (org.apache.accumulo.core.metadata.schema.DataFileValue): 25