Use of org.hsqldb_voltpatches.persist.PersistentStore in project voltdb by VoltDB.
The class DatabaseInformationMain, method SYSTEM_BESTROWIDENTIFIER.
/**
* Retrieves a <code>Table</code> object describing the optimal
* set of visible columns that uniquely identifies a row
* for each accessible table defined within this database. <p>
*
* Each row describes a single column of the best row identifier column
* set for a particular table. Each row has the following
* columns: <p>
*
* <pre class="SqlCodeExample">
* SCOPE SMALLINT scope of applicability
* COLUMN_NAME VARCHAR simple name of the column
* DATA_TYPE SMALLINT SQL data type from Types
* TYPE_NAME VARCHAR canonical type name
* COLUMN_SIZE INTEGER precision
* BUFFER_LENGTH INTEGER transfer size in bytes, if definitely known
* DECIMAL_DIGITS SMALLINT scale - fixed # of decimal digits
* PSEUDO_COLUMN SMALLINT is this a pseudo column like an Oracle ROWID?
* TABLE_CAT VARCHAR table catalog
* TABLE_SCHEM VARCHAR simple name of table schema
* TABLE_NAME VARCHAR simple table name
* NULLABLE SMALLINT is column nullable?
* IN_KEY BOOLEAN column belongs to a primary or alternate key?
* </pre> <p>
*
* <b>Notes:</b><p>
*
* <code>JDBCDatabaseMetaData.getBestRowIdentifier</code> uses its
* nullable parameter to filter the rows of this table in the following
* manner: <p>
*
* If the nullable parameter is <code>false</code>, then rows are reported
* only if, in addition to satisfying the other specified filter values,
* the IN_KEY column value is TRUE. If the nullable parameter is
* <code>true</code>, then the IN_KEY column value is ignored. <p>
*
* There is not yet infrastructure in place to make some of the ranking
* decisions described below, and it is anticipated that the mechanisms
* upon which cost decisions could be based will change significantly over
* the next few releases. Hence, in the interest of simplicity and of not
* creating an overly complex dependency on features that will almost certainly
* change significantly in the near future, the current implementation,
* while perfectly adequate for all but the most demanding or exacting
* purposes, is actually sub-optimal in the strictest sense. <p>
*
* A description of the current implementation follows: <p>
*
* <b>DEFINITIONS:</b> <p>
*
* <b>Alternate key</b> <p>
*
* <UL>
* <LI> An attribute of a table that, by virtue of its having a set of
* columns that are both the full set of columns participating in a
* unique constraint or index and are all not null, yields the same
* selectability characteristic that would be obtained by declaring a
* primary key on those same columns.
* </UL> <p>
*
* <b>Column set performance ranking</b> <p>
*
* <UL>
* <LI> The ranking of the expected average performance w.r.t. a subset of
* a table's columns used to select and/or compare rows, as taken in
* relation to all other distinct candidate subsets under
* consideration. This can be estimated by comparing each candidate
* subset in terms of total column count, relative performance of
* comparisons amongst the domains of the columns and differences
* in other costs involved in the execution plans generated using
* each subset under consideration for row selection/comparison.
* </UL> <p>
*
*
* <b>Rules:</b> <p>
*
* Given the above definitions, the rules currently in effect for reporting
* best row identifier are as follows, in order of precedence: <p>
*
* <OL>
* <LI> if the table under consideration has a primary key constraint, then
* the columns of the primary key are reported, with no consideration
* given to the column set performance ranking over the set of
* candidate keys. Each row has its IN_KEY column set to TRUE.
*
* <LI> if 1.) does not hold, then if there exist one or more alternate
* keys, the columns of the alternate key with the lowest column
* count are reported, with no consideration given to the column set
* performance ranking over the set of candidate keys. If there
* exists a tie for lowest column count, then the columns of the
* first such key encountered are reported.
* Each row has its IN_KEY column set to TRUE.
*
* <LI> if both 1.) and 2.) do not hold, then, if possible, a unique
* constraint/index is selected from the set of unique
* constraints/indices containing at least one column having
* a not null constraint, with no consideration given to the
* column set performance ranking over the set of all such
* candidate column sets. If there exists a tie for lowest non-zero
* count of columns having a not null constraint, then the columns
* of the first such encountered candidate set are reported. Each
* row has its IN_KEY column set to FALSE. <p>
*
* <LI> Finally, if the set of candidate column sets in 3.) is empty,
* then no column set is reported for the table under consideration.
* </OL> <p>
*
* The scope reported for a best row identifier column set is determined
* thus: <p>
*
* <OL>
* <LI> if the database containing the table under consideration is in
* read-only mode or the table under consideration is GLOBAL TEMPORARY
* (a TEMP or TEMP TEXT table, in HSQLDB parlance), then the scope
* is reported as
* <code>java.sql.DatabaseMetaData.bestRowSession</code>.
*
* <LI> if 1.) does not hold, then the scope is reported as
* <code>java.sql.DatabaseMetaData.bestRowTemporary</code>.
* </OL> <p>
*
* @return a <code>Table</code> object describing the optimal
* set of visible columns that uniquely identifies a row
* for each accessible table defined within this database
*/
final Table SYSTEM_BESTROWIDENTIFIER() {
    Table t = sysTables[SYSTEM_BESTROWIDENTIFIER];
    if (t == null) {
        t = createBlankTable(sysTableHsqlNames[SYSTEM_BESTROWIDENTIFIER]);
        // not null
        addColumn(t, "SCOPE", Type.SQL_SMALLINT);
        // not null
        addColumn(t, "COLUMN_NAME", SQL_IDENTIFIER);
        // not null
        addColumn(t, "DATA_TYPE", Type.SQL_SMALLINT);
        // not null
        addColumn(t, "TYPE_NAME", SQL_IDENTIFIER);
        addColumn(t, "COLUMN_SIZE", Type.SQL_INTEGER);
        addColumn(t, "BUFFER_LENGTH", Type.SQL_INTEGER);
        addColumn(t, "DECIMAL_DIGITS", Type.SQL_SMALLINT);
        // not null
        addColumn(t, "PSEUDO_COLUMN", Type.SQL_SMALLINT);
        addColumn(t, "TABLE_CAT", SQL_IDENTIFIER);
        addColumn(t, "TABLE_SCHEM", SQL_IDENTIFIER);
        // not null
        addColumn(t, "TABLE_NAME", SQL_IDENTIFIER);
        // not null
        addColumn(t, "NULLABLE", Type.SQL_SMALLINT);
        // not null
        addColumn(t, "IN_KEY", Type.SQL_BOOLEAN);
        // order: SCOPE
        // for unique: TABLE_CAT, TABLE_SCHEM, TABLE_NAME, COLUMN_NAME
        // false PK, as TABLE_CAT and/or TABLE_SCHEM may be null
        HsqlName name = HsqlNameManager.newInfoSchemaObjectName(
            sysTableHsqlNames[SYSTEM_BESTROWIDENTIFIER].name, false,
            SchemaObject.INDEX);
        t.createPrimaryKey(name, new int[]{ 0, 8, 9, 10, 1 }, false);
        return t;
    }
    PersistentStore store = database.persistentStoreCollection.getStore(t);
    // calculated column values
    // { temp, transaction, session }
    Integer scope;
    Integer pseudo;
    //-------------------------------------------
    // required for restriction of results via
    // DatabaseMetaData filter parameters, but
    // not actually required to be included in
    // DatabaseMetaData.getBestRowIdentifier()
    // result set
    //-------------------------------------------
    // table catalog
    String tableCatalog;
    // table schema
    String tableSchema;
    // table name
    String tableName;
    // column participates in PK or AK?
    Boolean inKey;
    //-------------------------------------------
    /**
     * @todo - Maybe include: - backing index (constraint) name?
     *         - column sequence in index (constraint)?
     */
    //-------------------------------------------
    // Intermediate holders
    Iterator tables;
    Table table;
    DITableInfo ti;
    int[] cols;
    Object[] row;
    HsqlProperties p;
    // Column number mappings
    final int iscope          = 0;
    final int icolumn_name    = 1;
    final int idata_type      = 2;
    final int itype_name      = 3;
    final int icolumn_size    = 4;
    final int ibuffer_length  = 5;
    final int idecimal_digits = 6;
    final int ipseudo_column  = 7;
    final int itable_cat      = 8;
    final int itable_schem    = 9;
    final int itable_name     = 10;
    final int inullable       = 11;
    final int iinKey          = 12;
    // Initialization
    ti     = new DITableInfo();
    tables = database.schemaManager.databaseObjectIterator(SchemaObject.TABLE);
    // Do it.
    while (tables.hasNext()) {
        table = (Table) tables.next();
        /** @todo - requires access to the actual columns */
        if (table.isView() || !isAccessibleTable(table)) {
            continue;
        }
        cols = table.getBestRowIdentifiers();
        if (cols == null) {
            continue;
        }
        ti.setTable(table);
        inKey = ValuePool.getBoolean(table.isBestRowIdentifiersStrict());
        tableCatalog = table.getCatalogName().name;
        tableSchema  = table.getSchemaName().name;
        tableName    = table.getName().name;
        Type[] types = table.getColumnTypes();
        scope  = ti.getBRIScope();
        pseudo = ti.getBRIPseudo();
        for (int i = 0; i < cols.length; i++) {
            // index through cols[i]: cols holds the column positions of the
            // best row identifier set, not a 0..n-1 range
            int          colIndex = cols[i];
            ColumnSchema column   = table.getColumn(colIndex);
            row                  = t.getEmptyRowData();
            row[iscope]          = scope;
            row[icolumn_name]    = column.getName().name;
            row[idata_type]      = ValuePool.getInt(types[colIndex].getJDBCTypeCode());
            row[itype_name]      = types[colIndex].getNameString();
            row[icolumn_size]    = types[colIndex].getJDBCPrecision();
            row[ibuffer_length]  = null;
            row[idecimal_digits] = types[colIndex].getJDBCScale();
            row[ipseudo_column]  = pseudo;
            row[itable_cat]      = tableCatalog;
            row[itable_schem]    = tableSchema;
            row[itable_name]     = tableName;
            row[inullable]       = column.getNullability();
            row[iinKey]          = inKey;
            t.insertSys(store, row);
        }
    }
    return t;
}
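The precedence rules documented in the Javadoc above can be sketched in plain Java. Everything here (`KeySet`, `chooseBestRowIdentifier`) is illustrative and not part of the HSQLDB/VoltDB API; it is a minimal model of rules 1 through 4, assuming ties are broken by first encounter, as the Javadoc states.

```java
import java.util.List;

// Illustrative model only: KeySet and chooseBestRowIdentifier are not HSQLDB API.
public class BestRowIdentifierSketch {

    /** A candidate unique column set: its column positions and how many are NOT NULL. */
    public record KeySet(int[] columns, int notNullColumns) {}

    public static KeySet chooseBestRowIdentifier(KeySet primaryKey,
                                                 List<KeySet> uniqueSets) {
        // Rule 1: a primary key wins outright; performance ranking is not consulted.
        if (primaryKey != null) {
            return primaryKey;
        }
        // Rule 2: shortest alternate key (unique, all columns NOT NULL); first wins ties.
        KeySet best = null;
        for (KeySet k : uniqueSets) {
            boolean alternateKey = k.notNullColumns() == k.columns().length;
            if (alternateKey && (best == null
                    || k.columns().length < best.columns().length)) {
                best = k;
            }
        }
        if (best != null) {
            return best;
        }
        // Rule 3: unique set with the lowest non-zero NOT NULL count; first wins ties.
        for (KeySet k : uniqueSets) {
            if (k.notNullColumns() > 0 && (best == null
                    || k.notNullColumns() < best.notNullColumns())) {
                best = k;
            }
        }
        // Rule 4: nothing qualifies, so no column set is reported (null).
        return best;
    }
}
```

In the real implementation, rule 1/2 matches additionally set IN_KEY to TRUE and rule 3 matches set it to FALSE; the sketch only models which column set is chosen.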
Use of org.hsqldb_voltpatches.persist.PersistentStore in project voltdb by VoltDB.
The class DatabaseInformationFull, method CHECK_CONSTRAINTS.
/**
* The CHECK_CONSTRAINTS view has one row for each domain
* constraint, table check constraint, and assertion. <p>
*
* <b>Definition:</b><p>
*
* <pre class="SqlCodeExample">
* CONSTRAINT_CATALOG VARCHAR NULL,
* CONSTRAINT_SCHEMA VARCHAR NULL,
* CONSTRAINT_NAME VARCHAR NOT NULL,
* CHECK_CLAUSE VARCHAR NOT NULL,
* </pre>
*
* <b>Description:</b><p>
*
* <ol>
* <li> A constraint is shown in this view if the authorization for the
* schema that contains the constraint is the current user or is a role
* assigned to the current user. <p>
*
* <li> The values of CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA and
* CONSTRAINT_NAME are the catalog name, unqualified schema name,
* and qualified identifier, respectively, of the constraint being
* described. <p>
*
* <li> Case: <p>
*
* <table>
* <tr>
* <td valign="top" halign="left">a)</td>
* <td> If the character representation of the
* &lt;search condition&gt; contained in the
* &lt;check constraint definition&gt;,
* &lt;domain constraint definition&gt;, or
* &lt;assertion definition&gt; that defined
* the check constraint being described can be
* represented without truncation, then the
* value of CHECK_CLAUSE is that character
* representation. </td>
* </tr>
* <tr>
* <td valign="top" halign="left">b)</td>
* <td>Otherwise, the value of CHECK_CLAUSE is the
* null value.</td>
* </tr>
* </table>
* </ol>
*
* @return Table
*/
Table CHECK_CONSTRAINTS() {
    Table t = sysTables[CHECK_CONSTRAINTS];
    if (t == null) {
        t = createBlankTable(sysTableHsqlNames[CHECK_CONSTRAINTS]);
        addColumn(t, "CONSTRAINT_CATALOG", SQL_IDENTIFIER);
        addColumn(t, "CONSTRAINT_SCHEMA", SQL_IDENTIFIER);
        // not null
        addColumn(t, "CONSTRAINT_NAME", SQL_IDENTIFIER);
        // not null
        addColumn(t, "CHECK_CLAUSE", CHARACTER_DATA);
        HsqlName name = HsqlNameManager.newInfoSchemaObjectName(
            sysTableHsqlNames[CHECK_CONSTRAINTS].name, false,
            SchemaObject.INDEX);
        t.createPrimaryKey(name, new int[]{ 2, 1, 0 }, false);
        return t;
    }
    // column number mappings
    final int constraint_catalog = 0;
    final int constraint_schema  = 1;
    final int constraint_name    = 2;
    final int check_clause       = 3;
    //
    PersistentStore store = database.persistentStoreCollection.getStore(t);
    // calculated column values
    // Intermediate holders
    Iterator tables;
    Table table;
    Constraint[] tableConstraints;
    int constraintCount;
    Constraint constraint;
    Object[] row;
    //
    tables = database.schemaManager.databaseObjectIterator(SchemaObject.TABLE);
    while (tables.hasNext()) {
        table = (Table) tables.next();
        if (table.isView()
                || !session.getGrantee().isFullyAccessibleByRole(table)) {
            continue;
        }
        tableConstraints = table.getConstraints();
        constraintCount  = tableConstraints.length;
        for (int i = 0; i < constraintCount; i++) {
            constraint = tableConstraints[i];
            if (constraint.getConstraintType() != Constraint.CHECK) {
                continue;
            }
            row                     = t.getEmptyRowData();
            row[constraint_catalog] = database.getCatalogName().name;
            row[constraint_schema]  = table.getSchemaName().name;
            row[constraint_name]    = constraint.getName().name;
            try {
                row[check_clause] = constraint.getCheckSQL();
            } catch (Exception e) {
                // CHECK_CLAUSE stays null when the clause cannot be rendered
            }
            t.insertSys(store, row);
        }
    }
    Iterator it = database.schemaManager.databaseObjectIterator(SchemaObject.DOMAIN);
    while (it.hasNext()) {
        Type domain = (Type) it.next();
        if (!domain.isDomainType()) {
            continue;
        }
        if (!session.getGrantee().isFullyAccessibleByRole(domain)) {
            continue;
        }
        tableConstraints = domain.userTypeModifier.getConstraints();
        constraintCount  = tableConstraints.length;
        for (int i = 0; i < constraintCount; i++) {
            constraint              = tableConstraints[i];
            row                     = t.getEmptyRowData();
            row[constraint_catalog] = database.getCatalogName().name;
            row[constraint_schema]  = domain.getSchemaName().name;
            row[constraint_name]    = constraint.getName().name;
            try {
                row[check_clause] = constraint.getCheckSQL();
            } catch (Exception e) {
                // CHECK_CLAUSE stays null when the clause cannot be rendered
            }
            t.insertSys(store, row);
        }
    }
    return t;
}
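Both scan loops above share one shape: visit each schema object the current grantee can fully access, keep only CHECK constraints, and emit one row per constraint with a possibly-null CHECK_CLAUSE. A self-contained sketch of that filtering step, where `SimpleConstraint` and `ConstraintType` are hypothetical stand-ins rather than HSQLDB's real `Constraint` API:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in types for illustration; HSQLDB's real Constraint class differs.
public class CheckConstraintFilterSketch {

    public enum ConstraintType { PRIMARY_KEY, UNIQUE, FOREIGN_KEY, CHECK }

    public record SimpleConstraint(String name, ConstraintType type,
                                   String checkSql) {}

    /** Mirrors the table loop: skip everything that is not a CHECK constraint. */
    public static List<SimpleConstraint> checkConstraintsOnly(
            List<SimpleConstraint> all) {
        List<SimpleConstraint> rows = new ArrayList<>();
        for (SimpleConstraint c : all) {
            if (c.type() != ConstraintType.CHECK) {
                continue;
            }
            rows.add(c);
        }
        return rows;
    }
}
```

The domain loop needs no such filter because `userTypeModifier.getConstraints()` already yields only the domain's check constraints.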
Use of org.hsqldb_voltpatches.persist.PersistentStore in project voltdb by VoltDB.
The class StatementDML, method executeMergeStatement.
/**
* Executes a MERGE statement. It is assumed that the argument
* is of the correct type.
*
* @return Result object
*/
Result executeMergeStatement(Session session) {
    Result resultOut = null;
    RowSetNavigator generatedNavigator = null;
    PersistentStore store = session.sessionData.getRowStore(baseTable);
    if (generatedIndexes != null) {
        resultOut = Result.newUpdateCountResult(generatedResultMetaData, 0);
        generatedNavigator = resultOut.getChainedResult().getNavigator();
    }
    int count = 0;
    // data generated for non-matching rows
    RowSetNavigatorClient newData = new RowSetNavigatorClient(8);
    // rowset for update operation
    HashMappedList updateRowSet = new HashMappedList();
    RangeVariable[] joinRangeIterators = targetRangeVariables;
    // populate insert and update lists
    RangeIterator[] rangeIterators =
        new RangeIterator[joinRangeIterators.length];
    for (int i = 0; i < joinRangeIterators.length; i++) {
        rangeIterators[i] = joinRangeIterators[i].getIterator(session);
    }
    for (int currentIndex = 0; 0 <= currentIndex; ) {
        RangeIterator it = rangeIterators[currentIndex];
        boolean beforeFirst = it.isBeforeFirst();
        if (it.next()) {
            if (currentIndex < joinRangeIterators.length - 1) {
                currentIndex++;
                continue;
            }
        } else {
            if (currentIndex == 1 && beforeFirst) {
                Object[] data = getMergeInsertData(session);
                if (data != null) {
                    newData.add(data);
                }
            }
            it.reset();
            currentIndex--;
            continue;
        }
        // row matches!
        if (updateExpressions != null) {
            // this is always the second iterator
            Row row = it.getCurrentRow();
            Object[] data = getUpdatedData(session, baseTable, updateColumnMap,
                                           updateExpressions,
                                           baseTable.getColumnTypes(),
                                           row.getData());
            updateRowSet.add(row, data);
        }
    }
    // update any matched rows
    if (updateRowSet.size() > 0) {
        count = update(session, baseTable, updateRowSet);
    }
    // insert any non-matched rows
    newData.beforeFirst();
    while (newData.hasNext()) {
        Object[] data = newData.getNext();
        baseTable.insertRow(session, store, data);
        if (generatedNavigator != null) {
            Object[] generatedValues = getGeneratedColumns(data);
            generatedNavigator.add(generatedValues);
        }
    }
    baseTable.fireAfterTriggers(session, Trigger.INSERT_AFTER, newData);
    count += newData.getSize();
    if (resultOut == null) {
        return Result.getUpdateCountResult(count);
    } else {
        resultOut.setUpdateCount(count);
        return resultOut;
    }
}
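Stripped of HSQLDB's range iterators, the control flow above is: for each source row, a match with the target adds to an update set, a miss adds to an insert list; matched rows are updated first, then non-matched rows are inserted, and the update count is the sum. A minimal sketch of that partitioning, assuming a keyed map stands in for the target table:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the MERGE control flow; not the StatementDML API.
public class MergeSketch {

    public static int merge(Map<Integer, String> target,
                            Map<Integer, String> source) {
        Map<Integer, String> updates = new LinkedHashMap<>(); // like updateRowSet
        List<Map.Entry<Integer, String>> inserts = new ArrayList<>(); // like newData
        // populate insert and update lists
        for (Map.Entry<Integer, String> e : source.entrySet()) {
            if (target.containsKey(e.getKey())) {
                updates.put(e.getKey(), e.getValue());
            } else {
                inserts.add(e);
            }
        }
        // update any matched rows
        target.putAll(updates);
        // insert any non-matched rows
        for (Map.Entry<Integer, String> e : inserts) {
            target.put(e.getKey(), e.getValue());
        }
        // combined update count, as the method's return value reports
        return updates.size() + inserts.size();
    }
}
```

The real statement additionally routes inserted rows through generated-column capture and AFTER INSERT triggers, which the sketch omits.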
Use of org.hsqldb_voltpatches.persist.PersistentStore in project voltdb by VoltDB.
The class SubQuery, method materialise.
/**
* Fills the table with a result set
*/
public void materialise(Session session) {
    PersistentStore store;
    // table constructors
    if (isDataExpression) {
        store = session.sessionData.getSubqueryRowStore(table);
        dataExpression.insertValuesIntoSubqueryTable(session, store);
        return;
    }
    Result result = queryExpression.getResult(session,
                                              isExistsPredicate ? 1 : 0);
    RowSetNavigatorData navigator =
        (RowSetNavigatorData) result.getNavigator();
    if (uniqueRows) {
        navigator.removeDuplicates();
    }
    store = session.sessionData.getSubqueryRowStore(table);
    table.insertResult(store, result);
    result.getNavigator().close();
}
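When uniqueRows is set, removeDuplicates() collapses equal rows before the result is inserted into the subquery table. A rough, order-preserving equivalent keyed on row contents, assuming rows are plain Object[] (this is not the RowSetNavigatorData implementation, which works on its internal row storage):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of navigator.removeDuplicates(); not HSQLDB's implementation.
public class SubqueryDedupSketch {

    /** Keeps the first occurrence of each distinct row, preserving order. */
    public static List<Object[]> removeDuplicates(List<Object[]> rows) {
        Set<List<Object>> seen = new LinkedHashSet<>();
        List<Object[]> unique = new ArrayList<>();
        for (Object[] row : rows) {
            // Arrays.asList gives value-based equals/hashCode for the row key;
            // a bare Object[] would compare by identity and never collide
            if (seen.add(Arrays.asList(row))) {
                unique.add(row);
            }
        }
        return unique;
    }
}
```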
Use of org.hsqldb_voltpatches.persist.PersistentStore in project voltdb by VoltDB.
The class Table, method moveData.
/**
* Moves the data from table to table.
* The colindex argument is the index of the column that was
* added or removed. The adjust argument is {-1 | 0 | +1}
*/
void moveData(Session session, Table from, int colindex, int adjust) {
    Object colvalue = null;
    ColumnSchema column = null;
    if (adjust >= 0 && colindex != -1) {
        column   = getColumn(colindex);
        colvalue = column.getDefaultValue(session);
    }
    PersistentStore store = session.sessionData.getRowStore(this);
    try {
        RowIterator it = from.rowIterator(session);
        while (it.hasNext()) {
            Row row = it.getNextRow();
            Object[] o = row.getData();
            Object[] data = getEmptyRowData();
            if (adjust == 0 && colindex != -1) {
                colvalue = column.getDataType().convertToType(
                    session, o[colindex],
                    from.getColumn(colindex).getDataType());
            }
            ArrayUtil.copyAdjustArray(o, data, colvalue, colindex, adjust);
            systemSetIdentityColumn(session, data);
            enforceRowConstraints(session, data);
            // get object without RowAction
            Row newrow = (Row) store.getNewCachedObject(null, data);
            if (row.rowAction != null) {
                newrow.rowAction = row.rowAction.duplicate(newrow.getPos());
            }
            store.indexRow(null, newrow);
        }
    } catch (Throwable t) {
        store.release();
        if (t instanceof HsqlException) {
            throw (HsqlException) t;
        }
        throw new HsqlException(t, "", 0);
    }
}
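The {-1 | 0 | +1} contract that moveData relies on can be illustrated standalone. This is a hypothetical re-implementation of the behavior the surrounding code expects from ArrayUtil.copyAdjustArray (insert a value at colindex, drop that column, or overwrite it in place), not HSQLDB's actual ArrayUtil; it returns a fresh array where the original copies into a caller-supplied one.

```java
// Hypothetical model of ArrayUtil.copyAdjustArray's {-1 | 0 | +1} contract;
// not the HSQLDB implementation.
public class CopyAdjustSketch {

    public static Object[] copyAdjust(Object[] source, Object colvalue,
                                      int colindex, int adjust) {
        Object[] target = new Object[source.length + adjust];
        if (colindex < 0) {
            // no column involved: plain copy
            System.arraycopy(source, 0, target, 0, source.length);
            return target;
        }
        if (adjust > 0) {
            // column added: shift the tail right and insert the new value
            System.arraycopy(source, 0, target, 0, colindex);
            target[colindex] = colvalue;
            System.arraycopy(source, colindex, target, colindex + 1,
                             source.length - colindex);
        } else if (adjust < 0) {
            // column removed: copy around the dropped slot
            System.arraycopy(source, 0, target, 0, colindex);
            System.arraycopy(source, colindex + 1, target, colindex,
                             source.length - colindex - 1);
        } else {
            // same shape: overwrite the retyped/converted column
            System.arraycopy(source, 0, target, 0, source.length);
            target[colindex] = colvalue;
        }
        return target;
    }
}
```

This matches how moveData prepares colvalue: a column default when adding (adjust == +1), a converted value when retyping in place (adjust == 0), and an ignored value when dropping (adjust == -1).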