
Example 11 with ServerRpcController

Use of org.apache.hadoop.hbase.ipc.ServerRpcController in the Apache Phoenix project.

From the class ConnectionQueryServicesImpl, method checkClientServerCompatibility.

private void checkClientServerCompatibility(byte[] metaTable) throws SQLException {
    StringBuilder buf = new StringBuilder("The following servers require an updated " + QueryConstants.DEFAULT_COPROCESS_PATH + " to be put in the classpath of HBase: ");
    boolean isIncompatible = false;
    int minHBaseVersion = Integer.MAX_VALUE;
    boolean isTableNamespaceMappingEnabled = false;
    HTableInterface ht = null;
    try {
        List<HRegionLocation> locations = this.getAllTableRegions(metaTable);
        Set<HRegionLocation> serverMap = Sets.newHashSetWithExpectedSize(locations.size());
        TreeMap<byte[], HRegionLocation> regionMap = Maps.newTreeMap(Bytes.BYTES_COMPARATOR);
        List<byte[]> regionKeys = Lists.newArrayListWithExpectedSize(locations.size());
        for (HRegionLocation entry : locations) {
            if (!serverMap.contains(entry)) {
                regionKeys.add(entry.getRegionInfo().getStartKey());
                regionMap.put(entry.getRegionInfo().getRegionName(), entry);
                serverMap.add(entry);
            }
        }
        ht = this.getTable(metaTable);
        final Map<byte[], Long> results = ht.coprocessorService(MetaDataService.class, null, null, new Batch.Call<MetaDataService, Long>() {

            @Override
            public Long call(MetaDataService instance) throws IOException {
                ServerRpcController controller = new ServerRpcController();
                BlockingRpcCallback<GetVersionResponse> rpcCallback = new BlockingRpcCallback<GetVersionResponse>();
                GetVersionRequest.Builder builder = GetVersionRequest.newBuilder();
                builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
                instance.getVersion(controller, builder.build(), rpcCallback);
                if (controller.getFailedOn() != null) {
                    throw controller.getFailedOn();
                }
                return rpcCallback.get().getVersion();
            }
        });
        for (Map.Entry<byte[], Long> result : results.entrySet()) {
        // This is the case where "phoenix.jar" is in place, but the server version is out of sync with the client.
            long version = result.getValue();
            isTableNamespaceMappingEnabled |= MetaDataUtil.decodeTableNamespaceMappingEnabled(version);
            if (!isCompatible(result.getValue())) {
                isIncompatible = true;
                HRegionLocation name = regionMap.get(result.getKey());
                buf.append(name);
                buf.append(';');
            }
        hasIndexWALCodec &= hasIndexWALCodec(result.getValue()); // hasIndexWALCodec is a field of ConnectionQueryServicesImpl, not a local variable
            if (minHBaseVersion > MetaDataUtil.decodeHBaseVersion(result.getValue())) {
                minHBaseVersion = MetaDataUtil.decodeHBaseVersion(result.getValue());
            }
        }
        if (isTableNamespaceMappingEnabled != SchemaUtil.isNamespaceMappingEnabled(PTableType.TABLE, getProps())) {
        throw new SQLExceptionInfo.Builder(SQLExceptionCode.INCONSISTENET_NAMESPACE_MAPPING_PROPERTIES).setMessage("Ensure that config " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is consistent on client and server.").build().buildException();
        }
        lowestClusterHBaseVersion = minHBaseVersion;
    } catch (SQLException e) {
        throw e;
    } catch (Throwable t) {
        // This is the case if the "phoenix.jar" is not on the classpath of HBase on the region server
        throw new SQLExceptionInfo.Builder(SQLExceptionCode.INCOMPATIBLE_CLIENT_SERVER_JAR).setRootCause(t).setMessage("Ensure that " + QueryConstants.DEFAULT_COPROCESS_PATH + " is put on the classpath of HBase in every region server: " + t.getMessage()).build().buildException();
    } finally {
        if (ht != null) {
            try {
                ht.close();
            } catch (IOException e) {
                logger.warn("Could not close HTable", e);
            }
        }
    }
    if (isIncompatible) {
        buf.setLength(buf.length() - 1);
        throw new SQLExceptionInfo.Builder(SQLExceptionCode.OUTDATED_JARS).setMessage(buf.toString()).build().buildException();
    }
}
Also used : SQLException(java.sql.SQLException) KeyValueBuilder(org.apache.phoenix.hbase.index.util.KeyValueBuilder) NonTxIndexBuilder(org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder) ThreadFactoryBuilder(com.google.common.util.concurrent.ThreadFactoryBuilder) PhoenixIndexBuilder(org.apache.phoenix.index.PhoenixIndexBuilder) HTableInterface(org.apache.hadoop.hbase.client.HTableInterface) ServerRpcController(org.apache.hadoop.hbase.ipc.ServerRpcController) HRegionLocation(org.apache.hadoop.hbase.HRegionLocation) MetaDataService(org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataService) Batch(org.apache.hadoop.hbase.client.coprocessor.Batch) BlockingRpcCallback(org.apache.hadoop.hbase.ipc.BlockingRpcCallback) SQLExceptionInfo(org.apache.phoenix.exception.SQLExceptionInfo) IOException(java.io.IOException) PhoenixIOException(org.apache.phoenix.exception.PhoenixIOException) PTinyint(org.apache.phoenix.schema.types.PTinyint) PUnsignedTinyint(org.apache.phoenix.schema.types.PUnsignedTinyint) MultiRowMutationEndpoint(org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint) GetVersionResponse(org.apache.phoenix.coprocessor.generated.MetaDataProtos.GetVersionResponse) PLong(org.apache.phoenix.schema.types.PLong) Map(java.util.Map) TreeMap(java.util.TreeMap) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) ConcurrentMap(java.util.concurrent.ConcurrentMap) ImmutableMap(com.google.common.collect.ImmutableMap) HashMap(java.util.HashMap)
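
Every example here repeats the same four steps inside its Batch.Call: create a ServerRpcController, create a BlockingRpcCallback, invoke the generated stub, then check controller.getFailedOn() before reading the callback. That boilerplate can be factored into a small helper. The sketch below is illustrative, not Phoenix code; the names CoprocessorCalls and StubCall are invented here:

import java.io.IOException;

import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;
import org.apache.hadoop.hbase.ipc.ServerRpcController;

public final class CoprocessorCalls {

    /** One coprocessor stub invocation, e.g. instance.getVersion(controller, request, callback). */
    public interface StubCall<R> {
        void invoke(ServerRpcController controller, BlockingRpcCallback<R> callback) throws IOException;
    }

    /**
     * Runs a stub call with a fresh controller and blocking callback. Any failure the
     * server recorded on the controller is rethrown on the client; otherwise get()
     * blocks until the response callback has fired and returns the response.
     */
    public static <R> R callBlocking(StubCall<R> call) throws IOException {
        ServerRpcController controller = new ServerRpcController();
        BlockingRpcCallback<R> rpcCallback = new BlockingRpcCallback<R>();
        call.invoke(controller, rpcCallback);
        if (controller.getFailedOn() != null) {
            throw controller.getFailedOn();
        }
        return rpcCallback.get();
    }

    private CoprocessorCalls() {
    }
}

A hypothetical refactor of Example 15 using this helper appears after that example.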

Example 12 with ServerRpcController

Use of org.apache.hadoop.hbase.ipc.ServerRpcController in the Apache Phoenix project.

From the class ConnectionQueryServicesImpl, method clearCache.

/**
     * Clears the Phoenix meta data cache on each region server
     * @throws SQLException
     */
@Override
public long clearCache() throws SQLException {
    try {
        SQLException sqlE = null;
        HTableInterface htable = this.getTable(SchemaUtil.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, this.getProps()).getName());
        try {
            tableStatsCache.invalidateAll();
            final Map<byte[], Long> results = htable.coprocessorService(MetaDataService.class, HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW, new Batch.Call<MetaDataService, Long>() {

                @Override
                public Long call(MetaDataService instance) throws IOException {
                    ServerRpcController controller = new ServerRpcController();
                    BlockingRpcCallback<ClearCacheResponse> rpcCallback = new BlockingRpcCallback<ClearCacheResponse>();
                    ClearCacheRequest.Builder builder = ClearCacheRequest.newBuilder();
                    builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
                    instance.clearCache(controller, builder.build(), rpcCallback);
                    if (controller.getFailedOn() != null) {
                        throw controller.getFailedOn();
                    }
                    return rpcCallback.get().getUnfreedBytes();
                }
            });
            long unfreedBytes = 0;
            for (Map.Entry<byte[], Long> result : results.entrySet()) {
                if (result.getValue() != null) {
                    unfreedBytes += result.getValue();
                }
            }
            return unfreedBytes;
        } catch (IOException e) {
            throw ServerUtil.parseServerException(e);
        } catch (Throwable e) {
            sqlE = new SQLException(e);
        } finally {
            try {
                tableStatsCache.invalidateAll();
                htable.close();
            } catch (IOException e) {
                if (sqlE == null) {
                    sqlE = ServerUtil.parseServerException(e);
                } else {
                    sqlE.setNextException(ServerUtil.parseServerException(e));
                }
            } finally {
                if (sqlE != null) {
                    throw sqlE;
                }
            }
        }
    } catch (Exception e) {
        throw new SQLException(ServerUtil.parseServerException(e));
    }
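    // Unreachable in practice: the try above either returned or the finally threw,
    // but the compiler cannot prove that, so an explicit return is required.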
    return 0;
}
Also used : SQLException(java.sql.SQLException) KeyValueBuilder(org.apache.phoenix.hbase.index.util.KeyValueBuilder) NonTxIndexBuilder(org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder) ThreadFactoryBuilder(com.google.common.util.concurrent.ThreadFactoryBuilder) PhoenixIndexBuilder(org.apache.phoenix.index.PhoenixIndexBuilder) ClearCacheResponse(org.apache.phoenix.coprocessor.generated.MetaDataProtos.ClearCacheResponse) IOException(java.io.IOException) PhoenixIOException(org.apache.phoenix.exception.PhoenixIOException) HTableInterface(org.apache.hadoop.hbase.client.HTableInterface) ServerRpcController(org.apache.hadoop.hbase.ipc.ServerRpcController) TableAlreadyExistsException(org.apache.phoenix.schema.TableAlreadyExistsException) UpgradeInProgressException(org.apache.phoenix.exception.UpgradeInProgressException) AccessDeniedException(org.apache.hadoop.hbase.security.AccessDeniedException) RetriableUpgradeException(org.apache.phoenix.exception.RetriableUpgradeException) IOException(java.io.IOException) ExecutionException(java.util.concurrent.ExecutionException) UpgradeNotRequiredException(org.apache.phoenix.exception.UpgradeNotRequiredException) NewerTableAlreadyExistsException(org.apache.phoenix.schema.NewerTableAlreadyExistsException) NewerSchemaAlreadyExistsException(org.apache.phoenix.schema.NewerSchemaAlreadyExistsException) PhoenixIOException(org.apache.phoenix.exception.PhoenixIOException) TableExistsException(org.apache.hadoop.hbase.TableExistsException) ColumnFamilyNotFoundException(org.apache.phoenix.schema.ColumnFamilyNotFoundException) TableNotFoundException(org.apache.phoenix.schema.TableNotFoundException) SQLException(java.sql.SQLException) ColumnAlreadyExistsException(org.apache.phoenix.schema.ColumnAlreadyExistsException) TimeoutException(java.util.concurrent.TimeoutException) FunctionNotFoundException(org.apache.phoenix.schema.FunctionNotFoundException) ReadOnlyTableException(org.apache.phoenix.schema.ReadOnlyTableException) EmptySequenceCacheException(org.apache.phoenix.schema.EmptySequenceCacheException) MetaDataService(org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataService) Batch(org.apache.hadoop.hbase.client.coprocessor.Batch) PLong(org.apache.phoenix.schema.types.PLong) BlockingRpcCallback(org.apache.hadoop.hbase.ipc.BlockingRpcCallback) Map(java.util.Map) TreeMap(java.util.TreeMap) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) ConcurrentMap(java.util.concurrent.ConcurrentMap) ImmutableMap(com.google.common.collect.ImmutableMap) HashMap(java.util.HashMap)
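
One detail of clearCache worth calling out: a failure from htable.close() in the finally block must not mask the original error, so the snippet keeps the first SQLException and chains any close() failure behind it with setNextException. The sketch below isolates that pattern; CleanupChaining, WorkFn, and runAndClose are invented names, and the ServerUtil.parseServerException translation from the original is omitted:

import java.io.IOException;
import java.sql.SQLException;

import org.apache.hadoop.hbase.client.HTableInterface;

final class CleanupChaining {

    /** Stand-in for the real work, e.g. the coprocessorService call in clearCache. */
    interface WorkFn {
        long apply(HTableInterface table) throws IOException;
    }

    /** Runs work against a table, chaining any close() failure behind the primary error. */
    static long runAndClose(HTableInterface table, WorkFn work) throws SQLException {
        SQLException sqlE = null;
        try {
            return work.apply(table);
        } catch (Throwable t) {
            // Remember the primary failure; do not throw until cleanup has run.
            sqlE = new SQLException(t);
        } finally {
            try {
                table.close();
            } catch (IOException e) {
                if (sqlE == null) {
                    // close() was the only thing that failed.
                    sqlE = new SQLException(e);
                } else {
                    // Keep the primary error first; attach the close() failure behind it.
                    sqlE.setNextException(new SQLException(e));
                }
            } finally {
                if (sqlE != null) {
                    throw sqlE;
                }
            }
        }
        // Unreachable in practice (the try returned or the finally threw), but the
        // compiler cannot prove that, so an explicit return is required.
        return 0L;
    }
}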

Example 13 with ServerRpcController

Use of org.apache.hadoop.hbase.ipc.ServerRpcController in the Apache Phoenix project.

From the class ConnectionQueryServicesImpl, method dropSchema.

@Override
public MetaDataMutationResult dropSchema(final List<Mutation> schemaMetaData, final String schemaName) throws SQLException {
    final MetaDataMutationResult result = metaDataCoprocessorExec(SchemaUtil.getSchemaKey(schemaName), new Batch.Call<MetaDataService, MetaDataResponse>() {

        @Override
        public MetaDataResponse call(MetaDataService instance) throws IOException {
            ServerRpcController controller = new ServerRpcController();
            BlockingRpcCallback<MetaDataResponse> rpcCallback = new BlockingRpcCallback<MetaDataResponse>();
            DropSchemaRequest.Builder builder = DropSchemaRequest.newBuilder();
            for (Mutation m : schemaMetaData) {
                MutationProto mp = ProtobufUtil.toProto(m);
                builder.addSchemaMetadataMutations(mp.toByteString());
            }
            builder.setSchemaName(schemaName);
            builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
            instance.dropSchema(controller, builder.build(), rpcCallback);
            if (controller.getFailedOn() != null) {
                throw controller.getFailedOn();
            }
            return rpcCallback.get();
        }
    });
    final MutationCode code = result.getMutationCode();
    switch(code) {
        case SCHEMA_ALREADY_EXISTS:
            ReadOnlyProps props = this.getProps();
            boolean dropMetadata = props.getBoolean(DROP_METADATA_ATTRIB, DEFAULT_DROP_METADATA);
            if (dropMetadata) {
                ensureNamespaceDropped(schemaName, result.getMutationTime());
            }
            break;
        default:
            break;
    }
    return result;
}
Also used : MetaDataResponse(org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataResponse) KeyValueBuilder(org.apache.phoenix.hbase.index.util.KeyValueBuilder) NonTxIndexBuilder(org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder) ThreadFactoryBuilder(com.google.common.util.concurrent.ThreadFactoryBuilder) PhoenixIndexBuilder(org.apache.phoenix.index.PhoenixIndexBuilder) IOException(java.io.IOException) PhoenixIOException(org.apache.phoenix.exception.PhoenixIOException) ServerRpcController(org.apache.hadoop.hbase.ipc.ServerRpcController) MutationProto(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto) MutationCode(org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode) ReadOnlyProps(org.apache.phoenix.util.ReadOnlyProps) MetaDataService(org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataService) Batch(org.apache.hadoop.hbase.client.coprocessor.Batch) BlockingRpcCallback(org.apache.hadoop.hbase.ipc.BlockingRpcCallback) Mutation(org.apache.hadoop.hbase.client.Mutation) MetaDataMutationResult(org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult)
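
Every request in these snippets stamps the client version with VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER) so the server-side coprocessor can reject incompatible clients. The underlying idea is byte packing: one byte of an int per version component. The demo below illustrates that style of packing; the exact bit layout is an assumption made for this sketch, so check VersionUtil in your Phoenix version before relying on it:

// Illustration only: byte-packed version numbers in the style of Phoenix's VersionUtil.
// Assumed layout (not verified against any particular Phoenix release):
// bits 16-23 = major, bits 8-15 = minor, bits 0-7 = patch.
public class VersionEncodingDemo {

    static int encodeVersion(int major, int minor, int patch) {
        return (major << 16) | (minor << 8) | patch;
    }

    static int major(int encoded) { return (encoded >> 16) & 0xFF; }
    static int minor(int encoded) { return (encoded >> 8) & 0xFF; }
    static int patch(int encoded) { return encoded & 0xFF; }

    public static void main(String[] args) {
        int v = encodeVersion(4, 13, 1); // e.g. a 4.13.1 client
        // Prints: encoded=265473 decoded=4.13.1
        System.out.println("encoded=" + v + " decoded="
                + major(v) + "." + minor(v) + "." + patch(v));
    }
}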

Example 14 with ServerRpcController

Use of org.apache.hadoop.hbase.ipc.ServerRpcController in the Apache Phoenix project.

From the class ConnectionQueryServicesImpl, method createFunction.

// TODO the mutations should be added to System functions table.
@Override
public MetaDataMutationResult createFunction(final List<Mutation> functionData, final PFunction function, final boolean temporary) throws SQLException {
    byte[][] rowKeyMetadata = new byte[2][];
    Mutation m = MetaDataUtil.getPutOnlyTableHeaderRow(functionData);
    byte[] key = m.getRow();
    SchemaUtil.getVarChars(key, rowKeyMetadata);
    byte[] tenantIdBytes = rowKeyMetadata[PhoenixDatabaseMetaData.TENANT_ID_INDEX];
    byte[] functionBytes = rowKeyMetadata[PhoenixDatabaseMetaData.FUNTION_NAME_INDEX];
    byte[] functionKey = SchemaUtil.getFunctionKey(tenantIdBytes, functionBytes);
    MetaDataMutationResult result = metaDataCoprocessorExec(functionKey, new Batch.Call<MetaDataService, MetaDataResponse>() {

        @Override
        public MetaDataResponse call(MetaDataService instance) throws IOException {
            ServerRpcController controller = new ServerRpcController();
            BlockingRpcCallback<MetaDataResponse> rpcCallback = new BlockingRpcCallback<MetaDataResponse>();
            CreateFunctionRequest.Builder builder = CreateFunctionRequest.newBuilder();
            for (Mutation m : functionData) {
                MutationProto mp = ProtobufUtil.toProto(m);
                builder.addTableMetadataMutations(mp.toByteString());
            }
            builder.setTemporary(temporary);
            builder.setReplace(function.isReplace());
            builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
            instance.createFunction(controller, builder.build(), rpcCallback);
            if (controller.getFailedOn() != null) {
                throw controller.getFailedOn();
            }
            return rpcCallback.get();
        }
    }, PhoenixDatabaseMetaData.SYSTEM_FUNCTION_NAME_BYTES);
    return result;
}
Also used : MetaDataResponse(org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataResponse) KeyValueBuilder(org.apache.phoenix.hbase.index.util.KeyValueBuilder) NonTxIndexBuilder(org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder) ThreadFactoryBuilder(com.google.common.util.concurrent.ThreadFactoryBuilder) PhoenixIndexBuilder(org.apache.phoenix.index.PhoenixIndexBuilder) IOException(java.io.IOException) PhoenixIOException(org.apache.phoenix.exception.PhoenixIOException) ServerRpcController(org.apache.hadoop.hbase.ipc.ServerRpcController) MutationProto(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto) MetaDataService(org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataService) Batch(org.apache.hadoop.hbase.client.coprocessor.Batch) BlockingRpcCallback(org.apache.hadoop.hbase.ipc.BlockingRpcCallback) Mutation(org.apache.hadoop.hbase.client.Mutation) MetaDataMutationResult(org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult)

Example 15 with ServerRpcController

Use of org.apache.hadoop.hbase.ipc.ServerRpcController in the Apache Phoenix project.

From the class ConnectionQueryServicesImpl, method dropColumn.

@Override
public MetaDataMutationResult dropColumn(final List<Mutation> tableMetaData, PTableType tableType) throws SQLException {
    byte[][] rowKeyMetadata = new byte[3][];
    SchemaUtil.getVarChars(tableMetaData.get(0).getRow(), rowKeyMetadata);
    byte[] tenantIdBytes = rowKeyMetadata[PhoenixDatabaseMetaData.TENANT_ID_INDEX];
    byte[] schemaBytes = rowKeyMetadata[PhoenixDatabaseMetaData.SCHEMA_NAME_INDEX];
    byte[] tableBytes = rowKeyMetadata[PhoenixDatabaseMetaData.TABLE_NAME_INDEX];
    byte[] tableKey = SchemaUtil.getTableKey(tenantIdBytes, schemaBytes, tableBytes);
    MetaDataMutationResult result = metaDataCoprocessorExec(tableKey, new Batch.Call<MetaDataService, MetaDataResponse>() {

        @Override
        public MetaDataResponse call(MetaDataService instance) throws IOException {
            ServerRpcController controller = new ServerRpcController();
            BlockingRpcCallback<MetaDataResponse> rpcCallback = new BlockingRpcCallback<MetaDataResponse>();
            DropColumnRequest.Builder builder = DropColumnRequest.newBuilder();
            for (Mutation m : tableMetaData) {
                MutationProto mp = ProtobufUtil.toProto(m);
                builder.addTableMetadataMutations(mp.toByteString());
            }
            builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
            instance.dropColumn(controller, builder.build(), rpcCallback);
            if (controller.getFailedOn() != null) {
                throw controller.getFailedOn();
            }
            return rpcCallback.get();
        }
    });
    final MutationCode code = result.getMutationCode();
    switch(code) {
        case TABLE_ALREADY_EXISTS:
            final ReadOnlyProps props = this.getProps();
            final boolean dropMetadata = props.getBoolean(DROP_METADATA_ATTRIB, DEFAULT_DROP_METADATA);
            if (dropMetadata) {
                dropTables(result.getTableNamesToDelete());
            } else {
                invalidateTableStats(result.getTableNamesToDelete());
            }
            break;
        default:
            break;
    }
    return result;
}
Also used : MetaDataResponse(org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataResponse) KeyValueBuilder(org.apache.phoenix.hbase.index.util.KeyValueBuilder) NonTxIndexBuilder(org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder) ThreadFactoryBuilder(com.google.common.util.concurrent.ThreadFactoryBuilder) PhoenixIndexBuilder(org.apache.phoenix.index.PhoenixIndexBuilder) IOException(java.io.IOException) PhoenixIOException(org.apache.phoenix.exception.PhoenixIOException) ServerRpcController(org.apache.hadoop.hbase.ipc.ServerRpcController) MutationProto(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto) MutationCode(org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode) ReadOnlyProps(org.apache.phoenix.util.ReadOnlyProps) MetaDataService(org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataService) Batch(org.apache.hadoop.hbase.client.coprocessor.Batch) BlockingRpcCallback(org.apache.hadoop.hbase.ipc.BlockingRpcCallback) Mutation(org.apache.hadoop.hbase.client.Mutation) MetaDataMutationResult(org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult)
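
For comparison, here is how the anonymous Batch.Call in dropColumn would collapse with the callBlocking helper sketched after Example 11. This is a hypothetical refactor, not Phoenix code, and it is shown as it would sit inside dropColumn, reusing that method's tableMetaData parameter:

new Batch.Call<MetaDataService, MetaDataResponse>() {

    @Override
    public MetaDataResponse call(final MetaDataService instance) throws IOException {
        final DropColumnRequest.Builder builder = DropColumnRequest.newBuilder();
        for (Mutation m : tableMetaData) {
            builder.addTableMetadataMutations(ProtobufUtil.toProto(m).toByteString());
        }
        builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
        // One call replaces the controller/callback/getFailedOn boilerplate.
        return CoprocessorCalls.callBlocking(new CoprocessorCalls.StubCall<MetaDataResponse>() {
            @Override
            public void invoke(ServerRpcController controller, BlockingRpcCallback<MetaDataResponse> callback) {
                instance.dropColumn(controller, builder.build(), callback);
            }
        });
    }
}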

Aggregations

ServerRpcController (org.apache.hadoop.hbase.ipc.ServerRpcController): 31 usages
IOException (java.io.IOException): 24 usages
Batch (org.apache.hadoop.hbase.client.coprocessor.Batch): 21 usages
BlockingRpcCallback (org.apache.hadoop.hbase.ipc.BlockingRpcCallback): 17 usages
MetaDataService (org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataService): 15 usages
KeyValueBuilder (org.apache.phoenix.hbase.index.util.KeyValueBuilder): 13 usages
ThreadFactoryBuilder (com.google.common.util.concurrent.ThreadFactoryBuilder): 12 usages
CoprocessorRpcUtils (org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils): 12 usages
PhoenixIOException (org.apache.phoenix.exception.PhoenixIOException): 12 usages
NonTxIndexBuilder (org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder): 12 usages
PhoenixIndexBuilder (org.apache.phoenix.index.PhoenixIndexBuilder): 12 usages
MetaDataResponse (org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataResponse): 11 usages
Mutation (org.apache.hadoop.hbase.client.Mutation): 10 usages
MutationProto (org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto): 10 usages
MetaDataMutationResult (org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult): 8 usages
Table (org.apache.hadoop.hbase.client.Table): 6 usages
SQLException (java.sql.SQLException): 5 usages
Map (java.util.Map): 5 usages
TreeMap (java.util.TreeMap): 5 usages
HTableInterface (org.apache.hadoop.hbase.client.HTableInterface): 5 usages