Use of org.apache.cassandra.db.EchoedRow in project eiger by wlloyd.
From the class CompactionController, method getCompactedRow.
/**
* @return an AbstractCompactedRow implementation to write the merged rows in question.
*
* If there is a single source row, the data is from a current-version sstable, we don't
* need to purge, and we aren't forcing deserialization for scrub, then write it unchanged.
* Otherwise, we deserialize, purge tombstones, and reserialize in the latest version.
*/
public AbstractCompactedRow getCompactedRow(List<SSTableIdentityIterator> rows) {
    long rowSize = 0;
    for (SSTableIdentityIterator row : rows)
        rowSize += row.dataSize;

    // Echo the lone source row back unchanged when no format upgrade is forced
    // and it doesn't need purging; if the row exceeds the in-memory limit or
    // key-existence checks are cheap, evaluating shouldPurge up front
    // is going to be less expensive than simply de/serializing the row again.
    if (rows.size() == 1
        && !needDeserialize()
        && (rowSize > DatabaseDescriptor.getInMemoryCompactionLimit() || !keyExistenceIsExpensive)
        && !shouldPurge(rows.get(0).getKey())) {
        return new EchoedRow(rows.get(0));
    }

    // Rows too large to fit in memory are merged incrementally.
    if (rowSize > DatabaseDescriptor.getInMemoryCompactionLimit()) {
        String keyString = cfs.metadata.getKeyValidator().getString(rows.get(0).getKey().key);
        logger.info(String.format("Compacting large row %s/%s:%s (%d bytes) incrementally",
                                  cfs.table.name, cfs.columnFamily, keyString, rowSize));
        return new LazilyCompactedRow(this, rows);
    }

    // Everything else is small enough to deserialize, merge, and purge in memory.
    return new PrecompactedRow(this, rows);
}
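
For context, a minimal sketch of how a compaction pass might drive getCompactedRow. This is not the actual eiger call site (that lives in the compaction task's merge loop); the names groupedByKey, controller, and writer are hypothetical stand-ins, and the sketch assumes AbstractCompactedRow.isEmpty() and SSTableWriter.append(AbstractCompactedRow), which this codebase appears to provide.

// Hypothetical caller sketch, not the eiger code: compact each group of
// rows that share a decorated key and append the result to the new sstable.
for (List<SSTableIdentityIterator> group : groupedByKey) { // groupedByKey is assumed
    AbstractCompactedRow compactedRow = controller.getCompactedRow(group);
    if (compactedRow.isEmpty())
        continue; // every column was purged, so the row produces no output
    writer.append(compactedRow); // the writer treats all three implementations uniformly
}

Whichever implementation comes back, the writer handles it the same way: EchoedRow streams the serialized bytes straight through, PrecompactedRow serializes the merged row it holds in memory, and LazilyCompactedRow re-reads the source rows to write incrementally.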