Use of mondrian.server.Locus in project mondrian by pentaho.
The class BatchLoader, method loadAggregations.
/**
* Resolves any pending cell reads using the cache. After calling this
* method, all cells requested in a given batch are loaded into this
* statement's local cache.
*
* <p>The method is implemented by making an asynchronous call to the cache
* manager. The result is a list of segments that satisfies every cell
* request.</p>
*
* <p>The client should put the resulting segments into its "query local"
* cache, to ensure that future cells in that segment can be answered
* without a call to the cache manager. (That is probably 1000x faster.)</p>
*
 * <p>The cache manager does not inform the client where each segment
* came from. There are several possibilities:</p>
*
* <ul>
* <li>Segment was already in cache (header and body)</li>
* <li>Segment is in the process of being loaded by executing a SQL
* statement (probably due to a request from another client)</li>
* <li>Segment is in an external cache (that is, header is in the cache,
* body is not yet)</li>
* <li>Segment can be created by rolling up one or more cache segments.
* (And of course each of these segments might be "paged out".)</li>
* <li>By executing a SQL {@code GROUP BY} statement</li>
* </ul>
*
* <p>Furthermore, segments in external cache may take some time to retrieve
* (a LAN round trip, say 1 millisecond, is a reasonable guess); and the
* request may fail. (It depends on the cache, but caches are at liberty
* to 'forget' segments.) So, any strategy that relies on cache segments
 * should be able to fall back. Even if there are fallbacks, only one call
* needs to be made to the cache manager.</p>
*
* @return Whether any aggregations were loaded.
*/
boolean loadAggregations() {
    if (!isDirty()) {
        return false;
    }
    // List of futures yielding segments populated by SQL statements. If
    // loading requires several iterations, we just append to the list. We
    // don't mind if it takes a while for SQL statements to return.
    final List<Future<Map<Segment, SegmentWithData>>> sqlSegmentMapFutures =
        new ArrayList<Future<Map<Segment, SegmentWithData>>>();
    final List<CellRequest> cellRequests1 =
        new ArrayList<CellRequest>(cellRequests);
    preloadColumnCardinality(cellRequests1);
    for (int iteration = 0; ; ++iteration) {
        final BatchLoader.LoadBatchResponse response =
            cacheMgr.execute(
                new BatchLoader.LoadBatchCommand(
                    Locus.peek(),
                    cacheMgr,
                    getDialect(),
                    cube,
                    Collections.unmodifiableList(cellRequests1)));
        int failureCount = 0;
        // Segments that have been retrieved from cache this cycle. Allows
        // us to reduce calls to the external cache.
        Map<SegmentHeader, SegmentBody> headerBodies =
            new HashMap<SegmentHeader, SegmentBody>();
        // Load each suggested segment from the cache and register it with
        // the local star. This step cannot be delegated to the
        // cacheMgr -- it's our cache.
        for (SegmentHeader header : response.cacheSegments) {
            final SegmentBody body = cacheMgr.compositeCache.get(header);
            if (body == null) {
                // The segment is gone (caches are at liberty to forget
                // segments). Remove the header from the index so that we
                // do not ask for it on the next iteration.
                if (cube.getStar() != null) {
                    cacheMgr.remove(cube.getStar(), header);
                }
                ++failureCount;
                continue;
            }
            headerBodies.put(header, body);
            final SegmentWithData segmentWithData =
                response.convert(header, body);
            segmentWithData.getStar().register(segmentWithData);
        }
        // Perform each suggested rollup.
        //
        // TODO this could be improved.
        // See http://jira.pentaho.com/browse/MONDRIAN-1195
        //
        // Rollups that succeeded. Will tell cache mgr to put the headers
        // into the index and the header/bodies in cache.
        final Map<SegmentHeader, SegmentBody> succeededRollups =
            new HashMap<SegmentHeader, SegmentBody>();
        for (final BatchLoader.RollupInfo rollup : response.rollups) {
            // Gather the required segments.
            Map<SegmentHeader, SegmentBody> map =
                findResidentRollupCandidate(headerBodies, rollup);
            if (map == null) {
                // None of the candidate sets of segments for this rollup
                // was all present in the cache.
                continue;
            }
            final Set<String> keepColumns = new HashSet<String>();
            for (RolapStar.Column column : rollup.constrainedColumns) {
                keepColumns.add(
                    column.getExpression().getGenericExpression());
            }
            Pair<SegmentHeader, SegmentBody> rollupHeaderBody =
                SegmentBuilder.rollup(
                    map,
                    keepColumns,
                    rollup.constrainedColumnsBitKey,
                    rollup.measure.getAggregator().getRollup(),
                    rollup.measure.getDatatype());
            final SegmentHeader header = rollupHeaderBody.left;
            final SegmentBody body = rollupHeaderBody.right;
            if (headerBodies.containsKey(header)) {
                // We had already created this segment, somehow.
                continue;
            }
            headerBodies.put(header, body);
            succeededRollups.put(header, body);
            final SegmentWithData segmentWithData =
                response.convert(header, body);
            // Register this segment with the local star.
            segmentWithData.getStar().register(segmentWithData);
            // Tell the cache manager about the new segment. This must
            // happen on the Actor thread to ensure thread safety.
            if (!MondrianProperties.instance().DisableCaching.get()) {
                final Locus locus = Locus.peek();
                cacheMgr.execute(
                    new SegmentCacheManager.Command<Void>() {
                        public Void call() throws Exception {
                            SegmentCacheIndex index =
                                cacheMgr.getIndexRegistry()
                                    .getIndex(segmentWithData.getStar());
                            index.add(
                                segmentWithData.getHeader(),
                                response.converterMap.get(
                                    SegmentCacheIndexImpl.makeConverterKey(
                                        segmentWithData.getHeader())),
                                true);
                            index.loadSucceeded(
                                segmentWithData.getHeader(), body);
                            return null;
                        }

                        public Locus getLocus() {
                            return locus;
                        }
                    });
            }
        }
        // Wait for SQL statements to end -- but only if there are no
        // failures.
        //
        // If there are failures, and it's the first iteration, it's more
        // urgent that we create and execute a follow-up request. We will
        // wait for the pending SQL statements at the end of that.
        //
        // If there are failures on later iterations, wait for SQL
        // statements to end. The cache might be porous. SQL might be the
        // only way to make progress.
        sqlSegmentMapFutures.addAll(response.sqlSegmentMapFutures);
        if (failureCount == 0 || iteration > 0) {
            // Wait on segments being loaded by someone else.
            for (Map.Entry<SegmentHeader, Future<SegmentBody>> entry
                : response.futures.entrySet())
            {
                final SegmentHeader header = entry.getKey();
                final Future<SegmentBody> bodyFuture = entry.getValue();
                final SegmentBody body =
                    Util.safeGet(
                        bodyFuture,
                        "Waiting for someone else's segment to load via SQL");
                final SegmentWithData segmentWithData =
                    response.convert(header, body);
                segmentWithData.getStar().register(segmentWithData);
            }
            // Wait on segments being loaded by SQL statements we asked for.
            for (Future<Map<Segment, SegmentWithData>> sqlSegmentMapFuture
                : sqlSegmentMapFutures)
            {
                final Map<Segment, SegmentWithData> segmentMap =
                    Util.safeGet(
                        sqlSegmentMapFuture,
                        "Waiting for segment to load via SQL");
                for (SegmentWithData segmentWithData : segmentMap.values()) {
                    segmentWithData.getStar().register(segmentWithData);
                }
                // TODO: also pass back SegmentHeader and SegmentBody,
                // and add these to headerBodies. Might help?
            }
        }
        if (failureCount == 0) {
            break;
        }
        // Figure out which cell requests are not satisfied by any of the
        // segments retrieved.
        @SuppressWarnings("unchecked")
        List<CellRequest> old = new ArrayList<CellRequest>(cellRequests1);
        cellRequests1.clear();
        for (CellRequest cellRequest : old) {
            if (cellRequest.getMeasure().getStar()
                .getCellFromCache(cellRequest, null) == null)
            {
                cellRequests1.add(cellRequest);
            }
        }
        if (cellRequests1.isEmpty()) {
            break;
        }
        if (cellRequests1.size() >= old.size() && iteration > 10) {
            throw Util.newError(
                "Cache round-trip did not resolve any cell requests. "
                + "Iteration #" + iteration
                + "; request count " + cellRequests1.size()
                + "; requested headers: " + response.cacheSegments.size()
                + "; requested rollups: " + response.rollups.size()
                + "; requested SQL: "
                + response.sqlSegmentMapFutures.size());
        }
        // Continue loop; form and execute a new request with the smaller
        // set of cell requests.
    }
    dirty = false;
    cellRequests.clear();
    return true;
}
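The anonymous SegmentCacheManager.Command in the rollup step above is the recurring Locus pattern on this page: capture the caller's Locus with Locus.peek() before handing work to the cache manager's actor thread, and return it from getLocus() so the actor can attribute the work to the originating statement. A minimal sketch of just that handshake (the empty command body is illustrative, not from the Mondrian source):

// Sketch only: preserve the caller's Locus across the hop to the
// cache manager's actor thread.
final Locus locus = Locus.peek(); // capture on the calling thread
cacheMgr.execute(
    new SegmentCacheManager.Command<Void>() {
        public Void call() throws Exception {
            // Runs on the actor thread; 'locus' still identifies the
            // originating statement and execution.
            return null;
        }

        public Locus getLocus() {
            return locus;
        }
    });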
Use of mondrian.server.Locus in project mondrian by pentaho.
The class SqlMemberSource, method getMemberCount.
private int getMemberCount(RolapLevel level, DataSource dataSource) {
    boolean[] mustCount = new boolean[1];
    String sql = makeLevelMemberCountSql(level, dataSource, mustCount);
    final SqlStatement stmt =
        RolapUtil.executeQuery(
            dataSource,
            sql,
            new Locus(
                Locus.peek().execution,
                "SqlMemberSource.getLevelMemberCount",
                "while counting members of level '" + level + "'"));
    try {
        ResultSet resultSet = stmt.getResultSet();
        int count;
        if (!mustCount[0]) {
            Util.assertTrue(resultSet.next());
            ++stmt.rowCount;
            count = resultSet.getInt(1);
        } else {
            // count distinct "manually"
            ResultSetMetaData rmd = resultSet.getMetaData();
            int nColumns = rmd.getColumnCount();
            String[] colStrings = new String[nColumns];
            count = 0;
            while (resultSet.next()) {
                ++stmt.rowCount;
                boolean isEqual = true;
                for (int i = 0; i < nColumns; i++) {
                    String colStr = resultSet.getString(i + 1);
                    if (!Util.equals(colStr, colStrings[i])) {
                        isEqual = false;
                    }
                    colStrings[i] = colStr;
                }
                if (!isEqual) {
                    count++;
                }
            }
        }
        return count;
    } catch (SQLException e) {
        throw stmt.handle(e);
    } finally {
        stmt.close();
    }
}
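The else branch counts distinct rows by hand: it works only because the SQL generated by makeLevelMemberCountSql orders the result so that duplicate rows are adjacent, letting a single pass compare each row to its predecessor. Here is the same technique in isolation as a self-contained sketch (class name and sample data are illustrative, not from the Mondrian source):

import java.util.Arrays;
import java.util.List;

public class DistinctRowCounter {
    /**
     * Counts distinct rows in a list sorted so that equal rows are
     * adjacent, mirroring the "count distinct manually" loop above.
     */
    static int countDistinct(List<String[]> sortedRows) {
        int count = 0;
        String[] prev = null;
        for (String[] row : sortedRows) {
            // Count the first row, and any row that differs from its
            // predecessor in at least one column.
            if (prev == null || !Arrays.equals(row, prev)) {
                count++;
            }
            prev = row;
        }
        return count;
    }

    public static void main(String[] args) {
        List<String[]> rows = Arrays.asList(
            new String[] {"CA", "1997"},
            new String[] {"CA", "1997"}, // duplicate of the row above
            new String[] {"OR", "1997"},
            new String[] {"WA", "1998"});
        System.out.println(countDistinct(rows)); // prints 3
    }
}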
Use of mondrian.server.Locus in project mondrian by pentaho.
The class SqlStatisticsProvider, method getColumnCardinality.
public long getColumnCardinality(
    Dialect dialect,
    DataSource dataSource,
    String catalog,
    String schema,
    String table,
    String column,
    Execution execution)
{
    final String sql =
        generateColumnCardinalitySql(dialect, schema, table, column);
    if (sql == null) {
        return -1;
    }
    SqlStatement stmt =
        RolapUtil.executeQuery(
            dataSource,
            sql,
            new Locus(
                execution,
                "SqlStatisticsProvider.getColumnCardinality",
                "Reading cardinality for column "
                + Arrays.asList(catalog, schema, table, column)));
    try {
        ResultSet resultSet = stmt.getResultSet();
        if (resultSet.next()) {
            ++stmt.rowCount;
            return resultSet.getInt(1);
        }
        // Query unexpectedly returned no rows; report "unknown".
        return -1;
    } catch (SQLException e) {
        throw stmt.handle(e);
    } finally {
        stmt.close();
    }
}
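generateColumnCardinalitySql is dialect-specific and may return null when no efficient query is available, which is why the method can report -1 before ever touching the database. A hypothetical ANSI-flavored version of the SQL it might produce (sketch only; the real method delegates quoting and capability checks to the Dialect):

// Hypothetical sketch, not the Mondrian implementation.
static String columnCardinalitySql(
    String schema, String table, String column)
{
    // e.g. select count(distinct "unit_sales") from "foodmart"."sales_fact_1997"
    return "select count(distinct \"" + column + "\") from \""
        + schema + "\".\"" + table + "\"";
}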
Use of mondrian.server.Locus in project mondrian by pentaho.
The class SqlStatisticsProvider, method getTableCardinality.
public long getTableCardinality(
    Dialect dialect,
    DataSource dataSource,
    String catalog,
    String schema,
    String table,
    Execution execution)
{
    StringBuilder buf = new StringBuilder("select count(*) from ");
    dialect.quoteIdentifier(buf, catalog, schema, table);
    final String sql = buf.toString();
    SqlStatement stmt =
        RolapUtil.executeQuery(
            dataSource,
            sql,
            new Locus(
                execution,
                "SqlStatisticsProvider.getTableCardinality",
                "Reading row count from table "
                + Arrays.asList(catalog, schema, table)));
    try {
        ResultSet resultSet = stmt.getResultSet();
        if (resultSet.next()) {
            ++stmt.rowCount;
            return resultSet.getInt(1);
        }
        // Query unexpectedly returned no rows; report "unknown".
        return -1;
    } catch (SQLException e) {
        throw stmt.handle(e);
    } finally {
        stmt.close();
    }
}
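Dialect.quoteIdentifier appends the dot-separated, quoted name to the buffer, skipping null components, so the same call works whether or not a catalog and schema are configured. Assuming an ANSI-quoting dialect and the FoodMart table names (shown purely for illustration):

// Illustrative: catalog = null, schema = "foodmart",
// table = "sales_fact_1997", under an ANSI-quoting dialect.
StringBuilder buf = new StringBuilder("select count(*) from ");
dialect.quoteIdentifier(buf, null, "foodmart", "sales_fact_1997");
// buf now reads: select count(*) from "foodmart"."sales_fact_1997"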