Example 1 with RunFileWriter

Use of org.apache.hyracks.dataflow.common.io.RunFileWriter in project asterixdb by Apache.

The class MaterializerTaskState, method open:

public void open(IHyracksTaskContext ctx) throws HyracksDataException {
    // Create a managed workspace file for the materialized frames and open a run writer on it.
    FileReference file = ctx.getJobletContext().createManagedWorkspaceFile(MaterializerTaskState.class.getSimpleName());
    out = new RunFileWriter(file, ctx.getIoManager());
    out.open();
}
Also used: FileReference (org.apache.hyracks.api.io.FileReference), RunFileWriter (org.apache.hyracks.dataflow.common.io.RunFileWriter)
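
A run materialized through this state is typically read back later and streamed to a downstream operator. The sketch below is illustrative rather than part of the snippet above: the method name forwardMaterializedFrames and the downstream writer are made up, and it assumes the run writer 'out' has already been closed and exposes createReader(); RunFileReader and VSizeFrame are the Hyracks frame I/O types listed under Aggregations below.

// A hedged read-back sketch (illustrative, not taken verbatim from the class above).
// Assumes 'out' has been closed and that RunFileWriter provides createReader().
public void forwardMaterializedFrames(IHyracksTaskContext ctx, IFrameWriter writer)
        throws HyracksDataException {
    RunFileReader in = out.createReader();
    VSizeFrame frame = new VSizeFrame(ctx);
    writer.open();
    try {
        in.open();
        try {
            // Stream every materialized frame to the downstream writer.
            while (in.nextFrame(frame)) {
                writer.nextFrame(frame.getBuffer());
            }
        } finally {
            in.close();
        }
    } finally {
        writer.close();
    }
}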

Example 2 with RunFileWriter

Use of org.apache.hyracks.dataflow.common.io.RunFileWriter in project asterixdb by Apache.

The class AbstractSortRunGenerator, method flushFramesToRun:

protected void flushFramesToRun() throws HyracksDataException {
    // Sort the in-memory frames, then flush them to a new run file.
    getSorter().sort();
    RunFileWriter runWriter = getRunFileWriter();
    IFrameWriter flushWriter = getFlushableFrameWriter(runWriter);
    flushWriter.open();
    try {
        getSorter().flush(flushWriter);
    } finally {
        flushWriter.close();
    }
    // Keep a reader for the later merge phase; the run file is removed once that reader is closed.
    generatedRunFileReaders.add(runWriter.createDeleteOnCloseReader());
    getSorter().reset();
}
Also used: IFrameWriter (org.apache.hyracks.api.comm.IFrameWriter), RunFileWriter (org.apache.hyracks.dataflow.common.io.RunFileWriter)

Example 3 with RunFileWriter

Use of org.apache.hyracks.dataflow.common.io.RunFileWriter in project asterixdb by Apache.

The class OptimizedHybridHashJoin, method spillPartition:

private void spillPartition(int pid) throws HyracksDataException {
    // Flush the partition's buffered frames to its run file, release the memory,
    // and mark the partition as spilled.
    RunFileWriter writer = getSpillWriterOrCreateNewOneIfNotExist(pid, SIDE.BUILD);
    bufferManager.flushPartition(pid, writer);
    bufferManager.clearPartition(pid);
    spilledStatus.set(pid);
}
Also used: RunFileWriter (org.apache.hyracks.dataflow.common.io.RunFileWriter)

Example 4 with RunFileWriter

Use of org.apache.hyracks.dataflow.common.io.RunFileWriter in project asterixdb by Apache.

The class OptimizedHybridHashJoin, method getSpillWriterOrCreateNewOneIfNotExist:

private RunFileWriter getSpillWriterOrCreateNewOneIfNotExist(int pid, SIDE whichSide) throws HyracksDataException {
    RunFileWriter[] runFileWriters = null;
    String refName = null;
    switch (whichSide) {
        case BUILD:
            runFileWriters = buildRFWriters;
            refName = buildRelName;
            break;
        case PROBE:
            refName = probeRelName;
            runFileWriters = probeRFWriters;
            break;
    }
    // Lazily create and open the partition's run writer on its first spill.
    RunFileWriter writer = runFileWriters[pid];
    if (writer == null) {
        FileReference file = ctx.getJobletContext().createManagedWorkspaceFile(refName);
        writer = new RunFileWriter(file, ctx.getIoManager());
        writer.open();
        runFileWriters[pid] = writer;
    }
    return writer;
}
Also used: FileReference (org.apache.hyracks.api.io.FileReference), RunFileWriter (org.apache.hyracks.dataflow.common.io.RunFileWriter)

Example 5 with RunFileWriter

Use of org.apache.hyracks.dataflow.common.io.RunFileWriter in project asterixdb by Apache.

The class OptimizedHybridHashJoin, method probe:

public void probe(ByteBuffer buffer, IFrameWriter writer) throws HyracksDataException {
    accessorProbe.reset(buffer);
    int tupleCount = accessorProbe.getTupleCount();
    if (isBuildRelAllInMemory()) {
        // The entire build relation is resident in memory; join the whole frame directly.
        inMemJoiner.join(buffer, writer);
        return;
    }
    inMemJoiner.resetAccessorProbe(accessorProbe);
    for (int i = 0; i < tupleCount; ++i) {
        int pid = probeHpc.partition(accessorProbe, i, numOfPartitions);
        if (buildPSizeInTups[pid] > 0 || isLeftOuter) {
            // The tuple has a potential match from the build phase.
            if (spilledStatus.get(pid)) {
                // pid is spilled: buffer the probe tuple, spilling a victim partition if memory is full.
                while (!bufferManager.insertTuple(pid, accessorProbe, i, tempPtr)) {
                    int victim = pid;
                    if (bufferManager.getNumTuples(pid) == 0) {
                        // The current partition is empty; choose the partition using the most memory.
                        victim = spillPolicy.findSpilledPartitionWithMaxMemoryUsage();
                    }
                    if (victim < 0) {
                        // The tuple is too big for all the free space; flush it to disk on its own.
                        flushBigProbeObjectToDisk(pid, accessorProbe, i);
                        break;
                    }
                    RunFileWriter runFileWriter = getSpillWriterOrCreateNewOneIfNotExist(victim, SIDE.PROBE);
                    bufferManager.flushPartition(victim, runFileWriter);
                    bufferManager.clearPartition(victim);
                }
            } else {
                // pid is resident: join the tuple against the in-memory build partition.
                inMemJoiner.join(i, writer);
            }
            probePSizeInTups[pid]++;
        }
    }
}
Also used: RunFileWriter (org.apache.hyracks.dataflow.common.io.RunFileWriter)
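
The build side of OptimizedHybridHashJoin follows the same spill-on-overflow pattern as the probe loop in Example 5: when a build tuple does not fit in memory, a victim partition is flushed to its run file through spillPartition from Example 3. The sketch below is illustrative rather than the project's verbatim code; the method name insertBuildTupleOrSpill is made up, and it assumes the same fields (bufferManager, spillPolicy, tempPtr) that appear in the examples above.

// Hedged sketch of a build-side insert path (not taken verbatim from the project).
// Reuses only calls shown in Examples 3-5 and assumes the same class fields.
private void insertBuildTupleOrSpill(IFrameTupleAccessor accessorBuild, int tid, int pid)
        throws HyracksDataException {
    while (!bufferManager.insertTuple(pid, accessorBuild, tid, tempPtr)) {
        int victim = pid;
        if (bufferManager.getNumTuples(pid) == 0) {
            // The failing partition holds no frames; spill the partition using the most memory instead.
            victim = spillPolicy.findSpilledPartitionWithMaxMemoryUsage();
        }
        if (victim < 0) {
            // No partition can release enough memory for this tuple in this sketch.
            throw new HyracksDataException("Cannot make room for a build tuple");
        }
        spillPartition(victim);
    }
}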

Aggregations

RunFileWriter (org.apache.hyracks.dataflow.common.io.RunFileWriter): 13
HyracksDataException (org.apache.hyracks.api.exceptions.HyracksDataException): 4
ISpillableTable (org.apache.hyracks.dataflow.std.group.ISpillableTable): 3
IFrameWriter (org.apache.hyracks.api.comm.IFrameWriter): 2
VSizeFrame (org.apache.hyracks.api.comm.VSizeFrame): 2
FileReference (org.apache.hyracks.api.io.FileReference): 2
ArrayList (java.util.ArrayList): 1
FrameTupleAppender (org.apache.hyracks.dataflow.common.comm.io.FrameTupleAppender): 1
GeneratedRunFileReader (org.apache.hyracks.dataflow.common.io.GeneratedRunFileReader): 1
RunFileReader (org.apache.hyracks.dataflow.common.io.RunFileReader): 1
GroupVSizeFrame (org.apache.hyracks.dataflow.std.sort.util.GroupVSizeFrame): 1