
Example 16 with HashMappedList

use of org.hsqldb_voltpatches.lib.HashMappedList in project voltdb by VoltDB.

the class HSQLInterface method getXMLFromCatalog.

/**
     * Get a serialized XML representation of the current schema/catalog.
     *
     * @return The XML representing the catalog.
     * @throws HSQLParseException
     */
public VoltXMLElement getXMLFromCatalog() throws HSQLParseException {
    VoltXMLElement xml = emptySchema.duplicate();
    // load all the tables
    HashMappedList hsqlTables = getHSQLTables();
    for (int i = 0; i < hsqlTables.size(); i++) {
        Table table = (Table) hsqlTables.get(i);
        VoltXMLElement vxmle = table.voltGetTableXML(sessionProxy);
        assert (vxmle != null);
        xml.children.add(vxmle);
    }
    return xml;
}
Also used : HashMappedList(org.hsqldb_voltpatches.lib.HashMappedList)
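
The loop above relies on HashMappedList keeping its entries in a stable insertion order and exposing them positionally through get(int), so the catalog XML lists tables in the order they were registered. A minimal stand-alone sketch of that access pattern, using plain strings in place of Table objects (an assumption purely for illustration):

import org.hsqldb_voltpatches.lib.HashMappedList;

public class HashMappedListIndexAccessSketch {
    public static void main(String[] args) {
        HashMappedList tables = new HashMappedList();
        // add(key, value) preserves insertion order, so positional reads are stable
        tables.add("CUSTOMERS", "customers-table-placeholder");
        tables.add("ORDERS", "orders-table-placeholder");
        // iterate by index, as getXMLFromCatalog does with its Table objects
        for (int i = 0; i < tables.size(); i++) {
            Object value = tables.get(i);
            System.out.println(i + " -> " + value);
        }
    }
}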

Example 17 with HashMappedList

use of org.hsqldb_voltpatches.lib.HashMappedList in project voltdb by VoltDB.

the class StatementDML method executeMergeStatement.

/**
     * Executes a MERGE statement.  It is assumed that the argument
     * is of the correct type.
     *
     * @return Result object
     */
Result executeMergeStatement(Session session) {
    Result resultOut = null;
    RowSetNavigator generatedNavigator = null;
    PersistentStore store = session.sessionData.getRowStore(baseTable);
    if (generatedIndexes != null) {
        resultOut = Result.newUpdateCountResult(generatedResultMetaData, 0);
        generatedNavigator = resultOut.getChainedResult().getNavigator();
    }
    int count = 0;
    // data generated for non-matching rows
    RowSetNavigatorClient newData = new RowSetNavigatorClient(8);
    // rowset for update operation
    HashMappedList updateRowSet = new HashMappedList();
    RangeVariable[] joinRangeIterators = targetRangeVariables;
    // populate insert and update lists
    RangeIterator[] rangeIterators = new RangeIterator[joinRangeIterators.length];
    for (int i = 0; i < joinRangeIterators.length; i++) {
        rangeIterators[i] = joinRangeIterators[i].getIterator(session);
    }
    for (int currentIndex = 0; 0 <= currentIndex; ) {
        RangeIterator it = rangeIterators[currentIndex];
        boolean beforeFirst = it.isBeforeFirst();
        if (it.next()) {
            if (currentIndex < joinRangeIterators.length - 1) {
                currentIndex++;
                continue;
            }
        } else {
            if (currentIndex == 1 && beforeFirst) {
                Object[] data = getMergeInsertData(session);
                if (data != null) {
                    newData.add(data);
                }
            }
            it.reset();
            currentIndex--;
            continue;
        }
        // row matches!
        if (updateExpressions != null) {
            // this is always the second iterator
            Row row = it.getCurrentRow();
            Object[] data = getUpdatedData(session, baseTable, updateColumnMap, updateExpressions, baseTable.getColumnTypes(), row.getData());
            updateRowSet.add(row, data);
        }
    }
    // update any matched rows
    if (updateRowSet.size() > 0) {
        count = update(session, baseTable, updateRowSet);
    }
    // insert any non-matched rows
    newData.beforeFirst();
    while (newData.hasNext()) {
        Object[] data = newData.getNext();
        baseTable.insertRow(session, store, data);
        if (generatedNavigator != null) {
            Object[] generatedValues = getGeneratedColumns(data);
            generatedNavigator.add(generatedValues);
        }
    }
    baseTable.fireAfterTriggers(session, Trigger.INSERT_AFTER, newData);
    count += newData.getSize();
    if (resultOut == null) {
        return Result.getUpdateCountResult(count);
    } else {
        resultOut.setUpdateCount(count);
        return resultOut;
    }
}
Also used : HashMappedList(org.hsqldb_voltpatches.lib.HashMappedList) RangeIterator(org.hsqldb_voltpatches.navigator.RangeIterator) RowSetNavigator(org.hsqldb_voltpatches.navigator.RowSetNavigator) PersistentStore(org.hsqldb_voltpatches.persist.PersistentStore) Result(org.hsqldb_voltpatches.result.Result) RowSetNavigatorClient(org.hsqldb_voltpatches.navigator.RowSetNavigatorClient)
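
The updateRowSet above is keyed by the matched Row, with the freshly computed column values as the payload, so the later update(session, baseTable, updateRowSet) call receives each target row paired with its new data. A minimal sketch of building such a keyed update set, with strings standing in for Row objects and small arrays for the row data (illustrative assumptions only):

import org.hsqldb_voltpatches.lib.HashMappedList;

public class MergeUpdateSetSketch {
    public static void main(String[] args) {
        HashMappedList updateRowSet = new HashMappedList();
        // key is the matched row, value is its updated column data
        updateRowSet.add("matched-row-42", new Object[] { "new-name", Integer.valueOf(7) });
        updateRowSet.add("matched-row-43", new Object[] { "other-name", Integer.valueOf(9) });
        // the batch is applied only if at least one source row matched
        if (updateRowSet.size() > 0) {
            System.out.println("rows queued for update: " + updateRowSet.size());
        }
    }
}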

Example 18 with HashMappedList

use of org.hsqldb_voltpatches.lib.HashMappedList in project voltdb by VoltDB.

the class StatementDML method checkCascadeUpdate.

/**
     * Check or perform an update cascade (delete/insert) operation on a
     * single row. The method takes a pair of rows (new data, old data) and
     * checks whether the Constraints permit the update operation. A boolean
     * argument determines if the operation should really take place or if we
     * just have to check for constraint violations. fredt - cyclic conditions
     * are now avoided by checking for a second visit to each constraint. The
     * set of update lists for all tables is passed in and filled during
     * recursive calls.
     *
     * @param session current database session
     * @param table table to check
     * @param tableUpdateLists lists of updates
     * @param orow old row data to be deleted.
     * @param nrow new row data to be inserted.
     * @param cols indices of the columns actually changed.
     * @param ref This should be initialized to null when the method is called
     *   from the 'outside'. During recursion this will be the current table
     *   (i.e. this) to indicate from where we came. Foreign keys to this table
     *   do not have to be checked since they have triggered the update and are
     *   valid by definition.
     * @param path HashSet
     */
static void checkCascadeUpdate(Session session, Table table, HashMappedList tableUpdateLists, Row orow, Object[] nrow, int[] cols, Table ref, HashSet path) {
    // --
    for (int i = 0, size = table.fkConstraints.length; i < size; i++) {
        // -- (1) If it is a foreign key constraint we have to check if the
        // --     main table still holds a record which allows the new values
        // --     to be set in the updated columns. This test, however, will be
        // --     skipped if the reference table is the main table since changes
        // --     in the reference table triggered the update and therefore
        // --     the referential integrity is guaranteed to be valid.
        // --
        Constraint c = table.fkConstraints[i];
        if (ref == null || c.getMain() != ref) {
            // -- common indexes of the changed columns and the main/ref constraint
            if (ArrayUtil.countCommonElements(cols, c.getRefColumns()) == 0) {
                // -- Table::checkCascadeUpdate -- NO common cols; reiterating
                continue;
            }
            c.checkHasMainRef(session, nrow);
        }
    }
    for (int i = 0, size = table.fkMainConstraints.length; i < size; i++) {
        Constraint c = table.fkMainConstraints[i];
        // -- (2) If it happens to be a main constraint we check if the slave
        // --     table holds any records referring to the old contents. If so,
        // --     the constraint has to support an 'on update' action or we
        // --     throw an exception (all via a call to Constraint.findFkRef).
        // --
        // -- If there are no common columns between the reference constraint
        // -- and the changed columns, we reiterate.
        int[] common = ArrayUtil.commonElements(cols, c.getMainColumns());
        if (common == null) {
            // -- NO common cols between; reiterating
            continue;
        }
        int[] m_columns = c.getMainColumns();
        int[] r_columns = c.getRefColumns();
        // fredt - find out if the FK columns have actually changed
        boolean nochange = true;
        for (int j = 0; j < m_columns.length; j++) {
            // identity test is enough
            if (orow.getData()[m_columns[j]] != nrow[m_columns[j]]) {
                nochange = false;
                break;
            }
        }
        if (nochange) {
            continue;
        }
        // there must be no record in the 'slave' table
        // sebastian@scienion -- dependent on forDelete | forUpdate
        RowIterator refiterator = c.findFkRef(session, orow.getData(), false);
        if (refiterator.hasNext()) {
            if (c.core.updateAction == Constraint.NO_ACTION || c.core.updateAction == Constraint.RESTRICT) {
                int errorCode = c.core.deleteAction == Constraint.NO_ACTION ? ErrorCode.X_23501 : ErrorCode.X_23001;
                String[] info = new String[] { c.core.refName.name, c.core.refTable.getName().name };
                throw Error.error(errorCode, ErrorCode.CONSTRAINT, info);
            }
        } else {
            // no referencing row found
            continue;
        }
        Table reftable = c.getRef();
        // -- unused shortcut when update table has no imported constraint
        boolean hasref = reftable.getNextConstraintIndex(0, Constraint.MAIN) != -1;
        Index refindex = c.getRefIndex();
        // -- walk the index for all the nodes that reference update node
        HashMappedList rowSet = (HashMappedList) tableUpdateLists.get(reftable);
        if (rowSet == null) {
            rowSet = new HashMappedList();
            tableUpdateLists.add(reftable, rowSet);
        }
        for (Row refrow = refiterator.getNextRow(); ; refrow = refiterator.getNextRow()) {
            if (refrow == null || refindex.compareRowNonUnique(orow.getData(), m_columns, refrow.getData()) != 0) {
                break;
            }
            Object[] rnd = reftable.getEmptyRowData();
            System.arraycopy(refrow.getData(), 0, rnd, 0, rnd.length);
            // -- And handle the insertion procedure differently.
            if (c.getUpdateAction() == Constraint.SET_NULL) {
                // -- since we are setting <code>null</code> values
                for (int j = 0; j < r_columns.length; j++) {
                    rnd[r_columns[j]] = null;
                }
            } else if (c.getUpdateAction() == Constraint.SET_DEFAULT) {
                // -- the values and referential integrity is no longer guaranteed to be valid
                for (int j = 0; j < r_columns.length; j++) {
                    ColumnSchema col = reftable.getColumn(r_columns[j]);
                    rnd[r_columns[j]] = col.getDefaultValue(session);
                }
                if (path.add(c)) {
                    checkCascadeUpdate(session, reftable, tableUpdateLists, refrow, rnd, r_columns, null, path);
                    path.remove(c);
                }
            } else {
                // -- table therefore we set ref==this.
                for (int j = 0; j < m_columns.length; j++) {
                    rnd[r_columns[j]] = nrow[m_columns[j]];
                }
                if (path.add(c)) {
                    checkCascadeUpdate(session, reftable, tableUpdateLists, refrow, rnd, common, table, path);
                    path.remove(c);
                }
            }
            mergeUpdate(rowSet, refrow, rnd, r_columns);
        }
    }
}
Also used : HashMappedList(org.hsqldb_voltpatches.lib.HashMappedList) Index(org.hsqldb_voltpatches.index.Index) RowIterator(org.hsqldb_voltpatches.navigator.RowIterator)
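
checkCascadeUpdate keeps one HashMappedList of pending row updates per referencing table, nested inside the outer tableUpdateLists map keyed by the Table itself; the lookup-or-create step is the central idiom. A minimal stand-alone sketch of that nesting, with strings in place of the Table and Row objects (purely an illustrative assumption):

import org.hsqldb_voltpatches.lib.HashMappedList;

public class NestedUpdateListsSketch {
    public static void main(String[] args) {
        HashMappedList tableUpdateLists = new HashMappedList();
        String reftable = "REFERENCING_TABLE";
        // look up the per-table row set, creating it on first use
        HashMappedList rowSet = (HashMappedList) tableUpdateLists.get(reftable);
        if (rowSet == null) {
            rowSet = new HashMappedList();
            tableUpdateLists.add(reftable, rowSet);
        }
        // queue the cascaded new values for one referencing row
        rowSet.add("ref-row-7", new Object[] { null, "default-value" });
        System.out.println("tables with pending updates: " + tableUpdateLists.size());
    }
}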

Example 19 with HashMappedList

use of org.hsqldb_voltpatches.lib.HashMappedList in project voltdb by VoltDB.

the class SchemaObjectSet method getSQL.

String[] getSQL(OrderedHashSet resolved, OrderedHashSet unresolved) {
    HsqlArrayList list = new HsqlArrayList();
    if (!(map instanceof HashMappedList)) {
        return null;
    }
    if (map.isEmpty()) {
        return ValuePool.emptyStringArray;
    }
    Iterator it = map.values().iterator();
    if (type == SchemaObject.FUNCTION || type == SchemaObject.PROCEDURE) {
        OrderedHashSet set = new OrderedHashSet();
        while (it.hasNext()) {
            RoutineSchema routine = (RoutineSchema) it.next();
            for (int i = 0; i < routine.routines.length; i++) {
                set.add(routine.routines[i]);
            }
        }
        it = set.iterator();
    }
    while (it.hasNext()) {
        SchemaObject object = (SchemaObject) it.next();
        OrderedHashSet references = object.getReferences();
        if (references != null) {
            boolean isResolved = true;
            for (int j = 0; j < references.size(); j++) {
                HsqlName name = (HsqlName) references.get(j);
                if (SqlInvariants.isSchemaNameSystem(name)) {
                    continue;
                }
                if (name.type == SchemaObject.COLUMN) {
                    name = name.parent;
                }
                if (name.type == SchemaObject.CHARSET) {
                    // some built-in character sets have no schema
                    if (name.schema == null) {
                        continue;
                    }
                }
                if (!resolved.contains(name)) {
                    isResolved = false;
                    break;
                }
            }
            if (!isResolved) {
                unresolved.add(object);
                continue;
            }
        }
        resolved.add(object.getName());
        if (object.getType() == SchemaObject.TABLE) {
            list.addAll(((Table) object).getSQL(resolved, unresolved));
        } else {
            list.add(object.getSQL());
        }
    }
    String[] array = new String[list.size()];
    list.toArray(array);
    return array;
}
Also used : HashMappedList(org.hsqldb_voltpatches.lib.HashMappedList) HsqlArrayList(org.hsqldb_voltpatches.lib.HsqlArrayList) Iterator(org.hsqldb_voltpatches.lib.Iterator) OrderedHashSet(org.hsqldb_voltpatches.lib.OrderedHashSet) HsqlName(org.hsqldb_voltpatches.HsqlNameManager.HsqlName)
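
getSQL walks the schema objects in the order the backing HashMappedList stores them and defers any object whose references are not yet resolved. A minimal sketch of that ordered walk, assuming values().iterator() behaves on a plain HashMappedList as it does on the map above, and with SQL strings standing in for SchemaObject instances:

import org.hsqldb_voltpatches.lib.HashMappedList;
import org.hsqldb_voltpatches.lib.Iterator;

public class OrderedSchemaWalkSketch {
    public static void main(String[] args) {
        HashMappedList map = new HashMappedList();
        map.add("T_CUSTOMERS", "CREATE TABLE T_CUSTOMERS ...");
        map.add("T_ORDERS", "CREATE TABLE T_ORDERS ...");
        // insertion order is preserved, so dependents can be emitted after their parents
        Iterator it = map.values().iterator();
        while (it.hasNext()) {
            System.out.println(it.next());
        }
    }
}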

Example 20 with HashMappedList

use of org.hsqldb_voltpatches.lib.HashMappedList in project voltdb by VoltDB.

the class StatementDML method checkCascadeDelete.

// fredt@users 20020225 - patch 1.7.0 - CASCADING DELETES
/**
     *  Method is called recursively on a tree of tables from the current one
     *  until no referring foreign-key table is left. In the process, if a
     *  non-cascading foreign-key referring table contains data, an exception
     *  is thrown. Parameter delete indicates whether to delete referring rows.
     *  The method is called first to check if the row can be deleted, then to
     *  delete the row and all the referring rows.<p>
     *
     *  Support added for SET NULL and SET DEFAULT by kloska@users involves
     *  switching to checkCascadeUpdate(,,,,) when these rules are encountered
     *  in the constraint. (fredt@users)
     *
     * @param session current session
     * @param  table table to delete from
     * @param  tableUpdateList list of update lists
     * @param  row row to delete
     * @param  delete if true, delete the referring rows; if false, only check
     * @param  path constraint path
     * @throws  HsqlException
     */
static void checkCascadeDelete(Session session, Table table, HashMappedList tableUpdateList, Row row, boolean delete, HashSet path) {
    for (int i = 0, size = table.fkMainConstraints.length; i < size; i++) {
        Constraint c = table.fkMainConstraints[i];
        RowIterator refiterator = c.findFkRef(session, row.getData(), delete);
        if (!refiterator.hasNext()) {
            continue;
        }
        try {
            if (c.core.deleteAction == Constraint.NO_ACTION || c.core.deleteAction == Constraint.RESTRICT) {
                if (c.core.mainTable == c.core.refTable) {
                    Row refrow = refiterator.getNextRow();
                    // with self-referencing FK's deletes
                    if (row.equals(refrow)) {
                        continue;
                    }
                }
                int errorCode = c.core.deleteAction == Constraint.NO_ACTION ? ErrorCode.X_23501 : ErrorCode.X_23001;
                String[] info = new String[] { c.core.refName.name, c.core.refTable.getName().name };
                throw Error.error(errorCode, ErrorCode.CONSTRAINT, info);
            }
            Table reftable = c.getRef();
            // shortcut when deltable has no imported constraint
            boolean hasref = reftable.fkMainConstraints.length > 0;
            // if (reftable == this) we don't need to go further and can return ??
            if (!delete && !hasref) {
                continue;
            }
            Index refindex = c.getRefIndex();
            int[] m_columns = c.getMainColumns();
            int[] r_columns = c.getRefColumns();
            Object[] mdata = row.getData();
            boolean isUpdate = c.getDeleteAction() == Constraint.SET_NULL || c.getDeleteAction() == Constraint.SET_DEFAULT;
            // -- list for records to be inserted if this is
            // -- a 'ON DELETE SET [NULL|DEFAULT]' constraint
            HashMappedList rowSet = null;
            if (isUpdate) {
                rowSet = (HashMappedList) tableUpdateList.get(reftable);
                if (rowSet == null) {
                    rowSet = new HashMappedList();
                    tableUpdateList.add(reftable, rowSet);
                }
            }
            // walk the index for all the nodes that reference delnode
            for (; ; ) {
                Row refrow = refiterator.getNextRow();
                if (refrow == null || refrow.isDeleted(session) || refindex.compareRowNonUnique(mdata, m_columns, refrow.getData()) != 0) {
                    break;
                }
                // -- switch over to the 'checkCascadeUpdate' method below this level
                if (isUpdate) {
                    Object[] rnd = reftable.getEmptyRowData();
                    System.arraycopy(refrow.getData(), 0, rnd, 0, rnd.length);
                    if (c.getDeleteAction() == Constraint.SET_NULL) {
                        for (int j = 0; j < r_columns.length; j++) {
                            rnd[r_columns[j]] = null;
                        }
                    } else {
                        for (int j = 0; j < r_columns.length; j++) {
                            ColumnSchema col = reftable.getColumn(r_columns[j]);
                            rnd[r_columns[j]] = col.getDefaultValue(session);
                        }
                    }
                    if (hasref && path.add(c)) {
                        // fredt - avoid infinite recursion on circular references
                        // these can be rings of two or more mutually dependent tables
                        // so only one visit per constraint is allowed
                        checkCascadeUpdate(session, reftable, null, refrow, rnd, r_columns, null, path);
                        path.remove(c);
                    }
                    if (delete) {
                        //  foreign key referencing own table - do not update the row to be deleted
                        if (reftable != table || !refrow.equals(row)) {
                            mergeUpdate(rowSet, refrow, rnd, r_columns);
                        }
                    }
                } else if (hasref) {
                    if (reftable != table) {
                        if (path.add(c)) {
                            checkCascadeDelete(session, reftable, tableUpdateList, refrow, delete, path);
                            path.remove(c);
                        }
                    } else {
                        // but chained rows can result in very deep recursion and StackOverflowError
                        if (refrow != row) {
                            checkCascadeDelete(session, reftable, tableUpdateList, refrow, delete, path);
                        }
                    }
                }
                if (delete && !isUpdate && !refrow.isDeleted(session)) {
                    reftable.deleteRowAsTriggeredAction(session, refrow);
                }
            }
        } finally {
            refiterator.release();
        }
    }
}
Also used : HashMappedList(org.hsqldb_voltpatches.lib.HashMappedList) Index(org.hsqldb_voltpatches.index.Index) RowIterator(org.hsqldb_voltpatches.navigator.RowIterator)
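
Both cascade methods guard against cycles with the path HashSet: a constraint is recursed into only when path.add(c) reports it is not already on the current visit path, and it is removed again on the way back up. A minimal sketch of that visit-once guard, with a string standing in for the Constraint (an illustrative assumption):

import org.hsqldb_voltpatches.lib.HashSet;

public class CascadeCycleGuardSketch {
    static void visit(String constraint, HashSet path) {
        // add(...) returns false when the constraint is already being visited,
        // which stops rings of mutually referencing tables from recursing forever
        if (path.add(constraint)) {
            System.out.println("descending into " + constraint);
            // ... recursive cascade work would go here ...
            path.remove(constraint);
        } else {
            System.out.println("already visiting " + constraint + ", skipping");
        }
    }

    public static void main(String[] args) {
        HashSet path = new HashSet();
        visit("FK_ORDERS_CUSTOMERS", path);
    }
}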

Aggregations

HashMappedList (org.hsqldb_voltpatches.lib.HashMappedList): 23
HsqlName (org.hsqldb_voltpatches.HsqlNameManager.HsqlName): 4
OrderedHashSet (org.hsqldb_voltpatches.lib.OrderedHashSet): 4
HashSet (org.hsqldb_voltpatches.lib.HashSet): 3
HsqlArrayList (org.hsqldb_voltpatches.lib.HsqlArrayList): 3
Index (org.hsqldb_voltpatches.index.Index): 2
Iterator (org.hsqldb_voltpatches.lib.Iterator): 2
RowIterator (org.hsqldb_voltpatches.navigator.RowIterator): 2
RowSetNavigator (org.hsqldb_voltpatches.navigator.RowSetNavigator): 2
PersistentStore (org.hsqldb_voltpatches.persist.PersistentStore): 2
Result (org.hsqldb_voltpatches.result.Result): 2
EOFException (java.io.EOFException): 1
IOException (java.io.IOException): 1
InputStream (java.io.InputStream): 1
InputStreamReader (java.io.InputStreamReader): 1
LineNumberReader (java.io.LineNumberReader): 1
HashSet (java.util.HashSet): 1
HsqlException (org.hsqldb_voltpatches.HsqlException): 1
SimpleName (org.hsqldb_voltpatches.HsqlNameManager.SimpleName): 1
RangeIteratorBase (org.hsqldb_voltpatches.RangeVariable.RangeIteratorBase): 1