
Example 6 with Pauser

Use of net.openhft.chronicle.threads.Pauser in project Chronicle-Queue by OpenHFT.

From the class SingleTableBuilder, method build():

// *************************************************************************
// 
// *************************************************************************
@NotNull
public TableStore<T> build() {
    if (readOnly) {
        if (!file.exists())
            throw new IORuntimeException("Metadata file not found in readOnly mode");
        // Wait a short time for the file to be initialized
        TimingPauser pauser = Pauser.balanced();
        try {
            while (file.length() < OS.mapAlignment()) {
                pauser.pause(1, TimeUnit.SECONDS);
            }
        } catch (TimeoutException e) {
            throw new IORuntimeException("Metadata file found in readOnly mode, but not initialized yet");
        }
    }
    MappedBytes bytes = null;
    try {
        if (!readOnly && file.createNewFile() && !file.canWrite()) {
            throw new IllegalStateException("Cannot write to tablestore file " + file);
        }
        bytes = MappedBytes.mappedBytes(file, OS.SAFE_PAGE_SIZE, OS.SAFE_PAGE_SIZE, readOnly);
        // these MappedBytes are shared, but the assumption is they shouldn't grow. Supports 2K entries.
        bytes.disableThreadSafetyCheck(true);
        // eagerly initialize backing MappedFile page - otherwise wire.writeFirstHeader() will try to lock the file
        // to allocate the first byte store and that will cause lock overlap
        bytes.readVolatileInt(0);
        Wire wire = wireType.apply(bytes);
        if (readOnly)
            return SingleTableStore.doWithSharedLock(file, v -> {
                try {
                    return readTableStore(wire);
                } catch (IOException ex) {
                    throw Jvm.rethrow(ex);
                }
            }, () -> null);
        else {
            MappedBytes finalBytes = bytes;
            return SingleTableStore.doWithExclusiveLock(file, v -> {
                try {
                    if (wire.writeFirstHeader()) {
                        return writeTableStore(finalBytes, wire);
                    } else {
                        return readTableStore(wire);
                    }
                } catch (IOException ex) {
                    throw Jvm.rethrow(ex);
                }
            }, () -> null);
        }
    } catch (IOException e) {
        throw new IORuntimeException("file=" + file.getAbsolutePath(), e);
    } finally {
        if (bytes != null)
            bytes.clearUsedByThread();
    }
}
Also used: IORuntimeException (net.openhft.chronicle.core.io.IORuntimeException), MappedBytes (net.openhft.chronicle.bytes.MappedBytes), StreamCorruptedException (java.io.StreamCorruptedException), TimingPauser (net.openhft.chronicle.threads.TimingPauser), TimeoutException (java.util.concurrent.TimeoutException), IOException (java.io.IOException), Builder (net.openhft.chronicle.core.util.Builder), Wire (net.openhft.chronicle.wire.Wire), WireType (net.openhft.chronicle.wire.WireType), Jvm (net.openhft.chronicle.core.Jvm), File (java.io.File), Objects (java.util.Objects), TimeUnit (java.util.concurrent.TimeUnit), ValueIn (net.openhft.chronicle.wire.ValueIn), StringUtils (net.openhft.chronicle.core.util.StringUtils), CLASS_ALIASES (net.openhft.chronicle.core.pool.ClassAliasPool.CLASS_ALIASES), OS (net.openhft.chronicle.core.OS), NotNull (org.jetbrains.annotations.NotNull), MetaDataKeys (net.openhft.chronicle.queue.impl.single.MetaDataKeys), Path (java.nio.file.Path), Pauser (net.openhft.chronicle.threads.Pauser), Wires (net.openhft.chronicle.wire.Wires), TableStore (net.openhft.chronicle.queue.impl.TableStore)
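
The readOnly branch above shows the TimingPauser idiom: spin on a condition and let pause(timeout, unit) turn a wait that never completes into a TimeoutException. Below is a minimal sketch of that idiom in isolation; the PauserWaitSketch class, its waitFor helper and the condition in main are hypothetical and not part of Chronicle-Queue.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

import net.openhft.chronicle.threads.Pauser;
import net.openhft.chronicle.threads.TimingPauser;

public final class PauserWaitSketch {

    /**
     * Spins on the supplied condition, backing off via Pauser.balanced(),
     * and gives up once the pauser has waited longer than the timeout.
     */
    static boolean waitFor(BooleanSupplier condition, long timeout, TimeUnit unit) {
        TimingPauser pauser = Pauser.balanced();
        try {
            while (!condition.getAsBoolean()) {
                // throws TimeoutException once the accumulated pause time exceeds the limit
                pauser.pause(timeout, unit);
            }
            return true;
        } catch (TimeoutException e) {
            return false;
        } finally {
            pauser.reset();
        }
    }

    public static void main(String[] args) {
        long deadline = System.currentTimeMillis() + 50;
        boolean ok = waitFor(() -> System.currentTimeMillis() >= deadline, 1, TimeUnit.SECONDS);
        System.out.println("condition met: " + ok);
    }
}

Pauser.balanced() backs off progressively (busy-spin, then yield, then short sleeps), which is why SingleTableBuilder can afford to poll file.length() in a tight loop while waiting for the metadata file to be initialized.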

Example 7 with Pauser

Use of net.openhft.chronicle.threads.Pauser in project cassandra by apache.

From the class AuditLogViewer, method dump():

static void dump(List<String> pathList, String rollCycle, boolean follow, boolean ignoreUnsupported, Consumer<String> displayFun) {
    // Backoff strategy for spinning on the queue, not aggressive at all as this doesn't need to be low latency
    Pauser pauser = Pauser.millis(100);
    List<ExcerptTailer> tailers = pathList.stream()
            .distinct()
            .map(path -> SingleChronicleQueueBuilder.single(new File(path).toJavaIOFile())
                    .readOnly(true)
                    .rollCycle(RollCycles.valueOf(rollCycle))
                    .build())
            .map(SingleChronicleQueue::createTailer)
            .collect(Collectors.toList());
    boolean hadWork = true;
    while (hadWork) {
        hadWork = false;
        for (ExcerptTailer tailer : tailers) {
            while (tailer.readDocument(new DisplayRecord(ignoreUnsupported, displayFun))) {
                hadWork = true;
            }
        }
        if (follow) {
            if (!hadWork) {
                // Chronicle queue doesn't support blocking so use this backoff strategy
                pauser.pause();
            }
            // Don't terminate the loop even if there wasn't work
            hadWork = true;
        }
    }
}
Also used: Pauser (net.openhft.chronicle.threads.Pauser), File (org.apache.cassandra.io.util.File), ExcerptTailer (net.openhft.chronicle.queue.ExcerptTailer)
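
The follow loop above is a generic poll-and-backoff pattern: drain every tailer, and only sleep via the Pauser when a full pass over the queues found nothing. A minimal sketch of just that loop, assuming a hypothetical PollLoopSketch class with a BooleanSupplier standing in for the tailer reads:

import java.util.function.BooleanSupplier;

import net.openhft.chronicle.threads.Pauser;

public final class PollLoopSketch {

    /**
     * Drains work from a non-blocking source; pollOnce returns true when it made progress.
     * With follow == false the loop ends after the first empty pass; with follow == true
     * it keeps tailing and backs off between empty passes.
     */
    static void drain(BooleanSupplier pollOnce, boolean follow) {
        // fixed 100 ms back-off, as in the Cassandra tools
        Pauser pauser = Pauser.millis(100);
        boolean hadWork = true;
        while (hadWork) {
            hadWork = false;
            while (pollOnce.getAsBoolean()) {
                hadWork = true;
            }
            if (follow) {
                if (!hadWork)
                    pauser.pause();
                // never leave the loop while following
                hadWork = true;
            }
        }
    }

    public static void main(String[] args) {
        int[] remaining = { 3 };
        drain(() -> remaining[0]-- > 0, false);
        System.out.println("drained");
    }
}

Pauser.millis(100) pauses a fixed 100 ms each time, so an idle follower wakes at most ten times a second; that is the "not aggressive at all" backoff the comment refers to.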

Example 8 with Pauser

Use of net.openhft.chronicle.threads.Pauser in project cassandra by apache.

From the class Dump, method dump():

public static void dump(List<String> arguments, String rollCycle, boolean follow) {
    StringBuilder sb = new StringBuilder();
    ReadMarshallable reader = wireIn -> {
        sb.setLength(0);
        int version = wireIn.read(BinLog.VERSION).int16();
        if (version > FullQueryLogger.CURRENT_VERSION) {
            throw new IORuntimeException("Unsupported record version [" + version + "] - highest supported version is [" + FullQueryLogger.CURRENT_VERSION + ']');
        }
        String type = wireIn.read(BinLog.TYPE).text();
        if (!FullQueryLogger.SINGLE_QUERY.equals((type)) && !FullQueryLogger.BATCH.equals((type))) {
            throw new IORuntimeException("Unsupported record type field [" + type + "] - supported record types are [" + FullQueryLogger.SINGLE_QUERY + ", " + FullQueryLogger.BATCH + ']');
        }
        sb.append("Type: ").append(type).append(System.lineSeparator());
        long queryStartTime = wireIn.read(FullQueryLogger.QUERY_START_TIME).int64();
        sb.append("Query start time: ").append(queryStartTime).append(System.lineSeparator());
        int protocolVersion = wireIn.read(FullQueryLogger.PROTOCOL_VERSION).int32();
        sb.append("Protocol version: ").append(protocolVersion).append(System.lineSeparator());
        QueryOptions options = QueryOptions.codec.decode(Unpooled.wrappedBuffer(wireIn.read(FullQueryLogger.QUERY_OPTIONS).bytes()), ProtocolVersion.decode(protocolVersion, true));
        long generatedTimestamp = wireIn.read(FullQueryLogger.GENERATED_TIMESTAMP).int64();
        sb.append("Generated timestamp:").append(generatedTimestamp).append(System.lineSeparator());
        int generatedNowInSeconds = wireIn.read(FullQueryLogger.GENERATED_NOW_IN_SECONDS).int32();
        sb.append("Generated nowInSeconds:").append(generatedNowInSeconds).append(System.lineSeparator());
        switch(type) {
            case (FullQueryLogger.SINGLE_QUERY):
                dumpQuery(options, wireIn, sb);
                break;
            case (FullQueryLogger.BATCH):
                dumpBatch(options, wireIn, sb);
                break;
            default:
                throw new IORuntimeException("Log entry of unsupported type " + type);
        }
        System.out.print(sb.toString());
        System.out.flush();
    };
    // Backoff strategy for spinning on the queue, not aggressive at all as this doesn't need to be low latency
    Pauser pauser = Pauser.millis(100);
    List<ChronicleQueue> queues = arguments.stream()
            .distinct()
            .map(path -> SingleChronicleQueueBuilder.single(new File(path))
                    .readOnly(true)
                    .rollCycle(RollCycles.valueOf(rollCycle))
                    .build())
            .collect(Collectors.toList());
    List<ExcerptTailer> tailers = queues.stream().map(ChronicleQueue::createTailer).collect(Collectors.toList());
    boolean hadWork = true;
    while (hadWork) {
        hadWork = false;
        for (ExcerptTailer tailer : tailers) {
            while (tailer.readDocument(reader)) {
                hadWork = true;
            }
        }
        if (follow) {
            if (!hadWork) {
                // Chronicle queue doesn't support blocking so use this backoff strategy
                pauser.pause();
            }
            // Don't terminate the loop even if there wasn't work
            hadWork = true;
        }
    }
}
Also used: IORuntimeException (net.openhft.chronicle.core.io.IORuntimeException), ByteBuffer (java.nio.ByteBuffer), ArrayList (java.util.ArrayList), Unpooled (io.netty.buffer.Unpooled), Bytes (net.openhft.chronicle.bytes.Bytes), Option (io.airlift.airline.Option), ReadMarshallable (net.openhft.chronicle.wire.ReadMarshallable), ProtocolVersion (org.apache.cassandra.transport.ProtocolVersion), Pauser (net.openhft.chronicle.threads.Pauser), ExcerptTailer (net.openhft.chronicle.queue.ExcerptTailer), BinLog (org.apache.cassandra.utils.binlog.BinLog), ChronicleQueue (net.openhft.chronicle.queue.ChronicleQueue), Collectors (java.util.stream.Collectors), File (java.io.File), BufferUnderflowException (java.nio.BufferUnderflowException), WireIn (net.openhft.chronicle.wire.WireIn), Command (io.airlift.airline.Command), SingleChronicleQueueBuilder (net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder), List (java.util.List), ValueIn (net.openhft.chronicle.wire.ValueIn), FullQueryLogger (org.apache.cassandra.fql.FullQueryLogger), RollCycles (net.openhft.chronicle.queue.RollCycles), Arguments (io.airlift.airline.Arguments), Collections (java.util.Collections), QueryOptions (org.apache.cassandra.cql3.QueryOptions)
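
The reader above is a ReadMarshallable lambda that pulls named fields off the WireIn, validates them, and appends a formatted line per record. A stripped-down sketch of the same shape, with hypothetical field names and a hypothetical queue path standing in for the BinLog/FullQueryLogger constants and the CLI arguments:

import java.io.File;

import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptTailer;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;
import net.openhft.chronicle.wire.ReadMarshallable;

public final class RecordReaderSketch {

    // illustrative field names, not the real constants
    static final String VERSION = "version";
    static final String TYPE = "type";

    static ReadMarshallable reader(int highestSupportedVersion) {
        return wireIn -> {
            // read named fields and validate before printing
            int version = wireIn.read(VERSION).int16();
            if (version > highestSupportedVersion)
                throw new IllegalStateException("Unsupported record version " + version);
            String type = wireIn.read(TYPE).text();
            System.out.println("type=" + type + ", version=" + version);
        };
    }

    public static void main(String[] args) {
        // hypothetical queue path; the reader plugs into the same
        // tailer.readDocument(reader) loop used by Dump.dump above
        try (ChronicleQueue queue = SingleChronicleQueueBuilder.single(new File("/path/to/queue"))
                .readOnly(true)
                .build()) {
            ExcerptTailer tailer = queue.createTailer();
            while (tailer.readDocument(reader(1))) {
                // each record is printed by the lambda
            }
        }
    }
}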

Aggregations

Pauser (net.openhft.chronicle.threads.Pauser): 8 usages
TimeoutException (java.util.concurrent.TimeoutException): 4 usages
TimingPauser (net.openhft.chronicle.threads.TimingPauser): 3 usages
File (java.io.File): 2 usages
IORuntimeException (net.openhft.chronicle.core.io.IORuntimeException): 2 usages
InterruptedRuntimeException (net.openhft.chronicle.core.threads.InterruptedRuntimeException): 2 usages
ExcerptTailer (net.openhft.chronicle.queue.ExcerptTailer): 2 usages
UnrecoverableTimeoutException (net.openhft.chronicle.wire.UnrecoverableTimeoutException): 2 usages
ValueIn (net.openhft.chronicle.wire.ValueIn): 2 usages
Arguments (io.airlift.airline.Arguments): 1 usage
Command (io.airlift.airline.Command): 1 usage
Option (io.airlift.airline.Option): 1 usage
Unpooled (io.netty.buffer.Unpooled): 1 usage
IOException (java.io.IOException): 1 usage
StreamCorruptedException (java.io.StreamCorruptedException): 1 usage
BufferUnderflowException (java.nio.BufferUnderflowException): 1 usage
ByteBuffer (java.nio.ByteBuffer): 1 usage
Path (java.nio.file.Path): 1 usage
ArrayList (java.util.ArrayList): 1 usage
Collections (java.util.Collections): 1 usage