
Example 1 with SiteConfiguration

Use of org.apache.accumulo.core.conf.SiteConfiguration in project accumulo by apache.

From the class ServerConfigurationFactoryTest, method testGetSiteConfiguration.

@Test
public void testGetSiteConfiguration() {
    ready();
    SiteConfiguration c = scf.getSiteConfiguration();
    assertNotNull(c);
}
Also used: SiteConfiguration (org.apache.accumulo.core.conf.SiteConfiguration), Test (org.junit.Test)
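
This test only asserts that ServerConfigurationFactory hands back a non-null SiteConfiguration. As a minimal usage sketch, not taken from the Accumulo sources, the same factory can also be used to read a site property; it assumes a configured server where HdfsZooInstance.getInstance() resolves (as in Example 5 below):

import org.apache.accumulo.core.client.Instance;
import org.apache.accumulo.core.conf.Property;
import org.apache.accumulo.core.conf.SiteConfiguration;
import org.apache.accumulo.server.client.HdfsZooInstance;
import org.apache.accumulo.server.conf.ServerConfigurationFactory;

public class SiteConfigurationSketch {

    public static void main(String[] args) {
        // Assumption: accumulo-site.xml is on the classpath and ZooKeeper is reachable.
        Instance instance = HdfsZooInstance.getInstance();
        ServerConfigurationFactory factory = new ServerConfigurationFactory(instance);
        SiteConfiguration siteConf = factory.getSiteConfiguration();
        // get(Property) falls back to the property's default when accumulo-site.xml does not set it
        String zkHosts = siteConf.get(Property.INSTANCE_ZK_HOST);
        System.out.println("instance.zookeeper.host = " + zkHosts);
    }
}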

Example 2 with SiteConfiguration

Use of org.apache.accumulo.core.conf.SiteConfiguration in project accumulo by apache.

From the class CloseWriteAheadLogReferencesIT, method setupEasyMockStuff.

@Before
public void setupEasyMockStuff() {
    Instance mockInst = createMock(Instance.class);
    SiteConfiguration siteConfig = EasyMock.createMock(SiteConfiguration.class);
    expect(mockInst.getInstanceID()).andReturn(testName.getMethodName()).anyTimes();
    expect(mockInst.getZooKeepers()).andReturn("localhost").anyTimes();
    expect(mockInst.getZooKeepersSessionTimeOut()).andReturn(30000).anyTimes();
    final AccumuloConfiguration systemConf = new ConfigurationCopy(new HashMap<>());
    ServerConfigurationFactory factory = createMock(ServerConfigurationFactory.class);
    expect(factory.getSystemConfiguration()).andReturn(systemConf).anyTimes();
    expect(factory.getSiteConfiguration()).andReturn(siteConfig).anyTimes();
    // Just make the SiteConfiguration delegate to our AccumuloConfiguration
    // Presently, we only need get(Property), getBoolean(Property), and iterator().
    EasyMock.expect(siteConfig.get(EasyMock.anyObject(Property.class))).andAnswer(new IAnswer<String>() {

        @Override
        public String answer() {
            Object[] args = EasyMock.getCurrentArguments();
            return systemConf.get((Property) args[0]);
        }
    }).anyTimes();
    EasyMock.expect(siteConfig.getBoolean(EasyMock.anyObject(Property.class))).andAnswer(new IAnswer<Boolean>() {

        @Override
        public Boolean answer() {
            Object[] args = EasyMock.getCurrentArguments();
            return systemConf.getBoolean((Property) args[0]);
        }
    }).anyTimes();
    EasyMock.expect(siteConfig.iterator()).andAnswer(new IAnswer<Iterator<Entry<String, String>>>() {

        @Override
        public Iterator<Entry<String, String>> answer() {
            return systemConf.iterator();
        }
    }).anyTimes();
    replay(mockInst, factory, siteConfig);
    refs = new WrappedCloseWriteAheadLogReferences(new AccumuloServerContext(mockInst, factory));
}
Also used: IAnswer (org.easymock.IAnswer), Entry (java.util.Map.Entry), ConfigurationCopy (org.apache.accumulo.core.conf.ConfigurationCopy), AccumuloServerContext (org.apache.accumulo.server.AccumuloServerContext), Instance (org.apache.accumulo.core.client.Instance), SiteConfiguration (org.apache.accumulo.core.conf.SiteConfiguration), ServerConfigurationFactory (org.apache.accumulo.server.conf.ServerConfigurationFactory), AccumuloConfiguration (org.apache.accumulo.core.conf.AccumuloConfiguration), Before (org.junit.Before)

Example 3 with SiteConfiguration

Use of org.apache.accumulo.core.conf.SiteConfiguration in project accumulo by apache.

From the class SimpleGarbageCollectorTest, method setUp.

@Before
public void setUp() {
    volMgr = createMock(VolumeManager.class);
    instance = createMock(Instance.class);
    SiteConfiguration siteConfig = EasyMock.createMock(SiteConfiguration.class);
    expect(instance.getInstanceID()).andReturn("mock").anyTimes();
    expect(instance.getZooKeepers()).andReturn("localhost").anyTimes();
    expect(instance.getZooKeepersSessionTimeOut()).andReturn(30000).anyTimes();
    opts = new Opts();
    systemConfig = createSystemConfig();
    ServerConfigurationFactory factory = createMock(ServerConfigurationFactory.class);
    expect(factory.getSystemConfiguration()).andReturn(systemConfig).anyTimes();
    expect(factory.getSiteConfiguration()).andReturn(siteConfig).anyTimes();
    // Just make the SiteConfiguration delegate to our AccumuloConfiguration
    // Presently, we only need get(Property), getBoolean(Property), and iterator().
    EasyMock.expect(siteConfig.get(EasyMock.anyObject(Property.class))).andAnswer(new IAnswer<String>() {

        @Override
        public String answer() {
            Object[] args = EasyMock.getCurrentArguments();
            return systemConfig.get((Property) args[0]);
        }
    }).anyTimes();
    EasyMock.expect(siteConfig.getBoolean(EasyMock.anyObject(Property.class))).andAnswer(new IAnswer<Boolean>() {

        @Override
        public Boolean answer() {
            Object[] args = EasyMock.getCurrentArguments();
            return systemConfig.getBoolean((Property) args[0]);
        }
    }).anyTimes();
    EasyMock.expect(siteConfig.iterator()).andAnswer(new IAnswer<Iterator<Entry<String, String>>>() {

        @Override
        public Iterator<Entry<String, String>> answer() {
            return systemConfig.iterator();
        }
    }).anyTimes();
    replay(instance, factory, siteConfig);
    credentials = SystemCredentials.get(instance);
    gc = new SimpleGarbageCollector(opts, instance, volMgr, factory);
}
Also used: VolumeManager (org.apache.accumulo.server.fs.VolumeManager), IAnswer (org.easymock.IAnswer), Entry (java.util.Map.Entry), Instance (org.apache.accumulo.core.client.Instance), Opts (org.apache.accumulo.gc.SimpleGarbageCollector.Opts), SiteConfiguration (org.apache.accumulo.core.conf.SiteConfiguration), ServerConfigurationFactory (org.apache.accumulo.server.conf.ServerConfigurationFactory), Before (org.junit.Before)
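
Examples 2 and 3 repeat the same EasyMock boilerplate for delegating a mocked SiteConfiguration to a backing AccumuloConfiguration. As a hedged sketch, that pattern could be factored into one reusable helper; this assumes EasyMock 3.x and Java 8, and the class and method names are illustrative rather than part of the Accumulo test code:

import org.apache.accumulo.core.conf.AccumuloConfiguration;
import org.apache.accumulo.core.conf.Property;
import org.apache.accumulo.core.conf.SiteConfiguration;
import org.easymock.EasyMock;

public class MockSiteConfiguration {

    // Returns an unreplayed mock; callers add any further expectations and then call EasyMock.replay().
    static SiteConfiguration delegatingTo(final AccumuloConfiguration conf) {
        SiteConfiguration siteConfig = EasyMock.createMock(SiteConfiguration.class);
        // get(Property) and getBoolean(Property) are answered by the backing configuration
        EasyMock.expect(siteConfig.get(EasyMock.anyObject(Property.class)))
                .andAnswer(() -> conf.get((Property) EasyMock.getCurrentArguments()[0])).anyTimes();
        EasyMock.expect(siteConfig.getBoolean(EasyMock.anyObject(Property.class)))
                .andAnswer(() -> conf.getBoolean((Property) EasyMock.getCurrentArguments()[0])).anyTimes();
        // iterator() simply walks the backing configuration's entries
        EasyMock.expect(siteConfig.iterator()).andAnswer(conf::iterator).anyTimes();
        return siteConfig;
    }
}

With a helper like this, the setup in Example 2 would reduce to something like siteConfig = MockSiteConfiguration.delegatingTo(systemConf), followed by the existing replay call.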

Example 4 with SiteConfiguration

Use of org.apache.accumulo.core.conf.SiteConfiguration in project accumulo by apache.

From the class LargestFirstMemoryManagerTest, method test.

@Test
public void test() throws Exception {
    LargestFirstMemoryManagerUnderTest mgr = new LargestFirstMemoryManagerUnderTest();
    ServerConfiguration config = new ServerConfiguration() {

        ServerConfigurationFactory delegate = new ServerConfigurationFactory(inst);

        @Override
        public AccumuloConfiguration getSystemConfiguration() {
            SiteConfiguration conf = SiteConfiguration.getInstance();
            conf.set(Property.TSERV_MAXMEM, "1g");
            return conf;
        }

        @Override
        public TableConfiguration getTableConfiguration(Table.ID tableId) {
            return delegate.getTableConfiguration(tableId);
        }

        @Override
        public NamespaceConfiguration getNamespaceConfiguration(Namespace.ID namespaceId) {
            return delegate.getNamespaceConfiguration(namespaceId);
        }
    };
    mgr.init(config);
    MemoryManagementActions result;
    // nothing to do
    result = mgr.getMemoryManagementActions(tablets(t(k("x"), ZERO, 1000, 0), t(k("y"), ZERO, 2000, 0)));
    assertEquals(0, result.tabletsToMinorCompact.size());
    // one tablet is really big
    result = mgr.getMemoryManagementActions(tablets(t(k("x"), ZERO, ONE_GIG, 0), t(k("y"), ZERO, 2000, 0)));
    assertEquals(1, result.tabletsToMinorCompact.size());
    assertEquals(k("x"), result.tabletsToMinorCompact.get(0));
    // one tablet is idle
    mgr.currentTime = LATER;
    result = mgr.getMemoryManagementActions(tablets(t(k("x"), ZERO, 1001, 0), t(k("y"), LATER, 2000, 0)));
    assertEquals(1, result.tabletsToMinorCompact.size());
    assertEquals(k("x"), result.tabletsToMinorCompact.get(0));
    // one tablet is idle, but one is really big
    result = mgr.getMemoryManagementActions(tablets(t(k("x"), ZERO, 1001, 0), t(k("y"), LATER, ONE_GIG, 0)));
    assertEquals(1, result.tabletsToMinorCompact.size());
    assertEquals(k("y"), result.tabletsToMinorCompact.get(0));
    // lots of work to do
    mgr = new LargestFirstMemoryManagerUnderTest();
    mgr.init(config);
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, HALF_GIG, 0), t(k("b"), ZERO, HALF_GIG + 1, 0), t(k("c"), ZERO, HALF_GIG + 2, 0), t(k("d"), ZERO, HALF_GIG + 3, 0), t(k("e"), ZERO, HALF_GIG + 4, 0), t(k("f"), ZERO, HALF_GIG + 5, 0), t(k("g"), ZERO, HALF_GIG + 6, 0), t(k("h"), ZERO, HALF_GIG + 7, 0), t(k("i"), ZERO, HALF_GIG + 8, 0)));
    assertEquals(2, result.tabletsToMinorCompact.size());
    assertEquals(k("i"), result.tabletsToMinorCompact.get(0));
    assertEquals(k("h"), result.tabletsToMinorCompact.get(1));
    // one finished, one in progress, one filled up
    mgr = new LargestFirstMemoryManagerUnderTest();
    mgr.init(config);
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, HALF_GIG, 0), t(k("b"), ZERO, HALF_GIG + 1, 0), t(k("c"), ZERO, HALF_GIG + 2, 0), t(k("d"), ZERO, HALF_GIG + 3, 0), t(k("e"), ZERO, HALF_GIG + 4, 0), t(k("f"), ZERO, HALF_GIG + 5, 0), t(k("g"), ZERO, ONE_GIG, 0), t(k("h"), ZERO, 0, HALF_GIG + 7), t(k("i"), ZERO, 0, 0)));
    assertEquals(1, result.tabletsToMinorCompact.size());
    assertEquals(k("g"), result.tabletsToMinorCompact.get(0));
    // memory is very full, lots of candidates
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, HALF_GIG, 0), t(k("b"), ZERO, ONE_GIG + 1, 0), t(k("c"), ZERO, ONE_GIG + 2, 0), t(k("d"), ZERO, ONE_GIG + 3, 0), t(k("e"), ZERO, ONE_GIG + 4, 0), t(k("f"), ZERO, ONE_GIG + 5, 0), t(k("g"), ZERO, ONE_GIG + 6, 0), t(k("h"), ZERO, 0, 0), t(k("i"), ZERO, 0, 0)));
    assertEquals(2, result.tabletsToMinorCompact.size());
    assertEquals(k("g"), result.tabletsToMinorCompact.get(0));
    assertEquals(k("f"), result.tabletsToMinorCompact.get(1));
    // only have two compactors, still busy
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, HALF_GIG, 0), t(k("b"), ZERO, ONE_GIG + 1, 0), t(k("c"), ZERO, ONE_GIG + 2, 0), t(k("d"), ZERO, ONE_GIG + 3, 0), t(k("e"), ZERO, ONE_GIG + 4, 0), t(k("f"), ZERO, ONE_GIG, ONE_GIG + 5), t(k("g"), ZERO, ONE_GIG, ONE_GIG + 6), t(k("h"), ZERO, 0, 0), t(k("i"), ZERO, 0, 0)));
    assertEquals(0, result.tabletsToMinorCompact.size());
    // finished one
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, HALF_GIG, 0), t(k("b"), ZERO, ONE_GIG + 1, 0), t(k("c"), ZERO, ONE_GIG + 2, 0), t(k("d"), ZERO, ONE_GIG + 3, 0), t(k("e"), ZERO, ONE_GIG + 4, 0), t(k("f"), ZERO, ONE_GIG, ONE_GIG + 5), t(k("g"), ZERO, ONE_GIG, 0), t(k("h"), ZERO, 0, 0), t(k("i"), ZERO, 0, 0)));
    assertEquals(1, result.tabletsToMinorCompact.size());
    assertEquals(k("e"), result.tabletsToMinorCompact.get(0));
    // many are running: do nothing
    mgr = new LargestFirstMemoryManagerUnderTest();
    mgr.init(config);
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, HALF_GIG, 0), t(k("b"), ZERO, HALF_GIG + 1, 0), t(k("c"), ZERO, HALF_GIG + 2, 0), t(k("d"), ZERO, 0, HALF_GIG), t(k("e"), ZERO, 0, HALF_GIG), t(k("f"), ZERO, 0, HALF_GIG), t(k("g"), ZERO, 0, HALF_GIG), t(k("i"), ZERO, 0, HALF_GIG), t(k("j"), ZERO, 0, HALF_GIG), t(k("k"), ZERO, 0, HALF_GIG), t(k("l"), ZERO, 0, HALF_GIG), t(k("m"), ZERO, 0, HALF_GIG)));
    assertEquals(0, result.tabletsToMinorCompact.size());
    // observe adjustment:
    mgr = new LargestFirstMemoryManagerUnderTest();
    mgr.init(config);
    // compact the largest
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, QGIG, 0), t(k("b"), ZERO, QGIG + 1, 0), t(k("c"), ZERO, QGIG + 2, 0)));
    assertEquals(1, result.tabletsToMinorCompact.size());
    assertEquals(k("c"), result.tabletsToMinorCompact.get(0));
    // show that it is compacting... do nothing
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, QGIG, 0), t(k("b"), ZERO, QGIG + 1, 0), t(k("c"), ZERO, 0, QGIG + 2)));
    assertEquals(0, result.tabletsToMinorCompact.size());
    // not going to bother compacting any more
    mgr.currentTime += ONE_MINUTE;
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, QGIG, 0), t(k("b"), ZERO, QGIG + 1, 0), t(k("c"), ZERO, 0, QGIG + 2)));
    assertEquals(0, result.tabletsToMinorCompact.size());
    // now do nothing
    mgr.currentTime += ONE_MINUTE;
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, QGIG, 0), t(k("b"), ZERO, 0, 0), t(k("c"), ZERO, 0, 0)));
    assertEquals(0, result.tabletsToMinorCompact.size());
    // oh no! more data; this time we compact because we've adjusted
    mgr.currentTime += ONE_MINUTE;
    result = mgr.getMemoryManagementActions(tablets(t(k("a"), ZERO, QGIG, 0), t(k("b"), ZERO, QGIG + 1, 0), t(k("c"), ZERO, 0, 0)));
    assertEquals(1, result.tabletsToMinorCompact.size());
    assertEquals(k("b"), result.tabletsToMinorCompact.get(0));
}
Also used: MemoryManagementActions (org.apache.accumulo.server.tabletserver.MemoryManagementActions), ServerConfiguration (org.apache.accumulo.server.conf.ServerConfiguration), ServerConfigurationFactory (org.apache.accumulo.server.conf.ServerConfigurationFactory), SiteConfiguration (org.apache.accumulo.core.conf.SiteConfiguration), Test (org.junit.Test)
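
The anonymous ServerConfiguration in Example 4 sets Property.TSERV_MAXMEM directly on the global SiteConfiguration singleton, so the override leaks into anything else that reads that singleton. A hedged alternative sketch for the same override, assuming the memory manager only reads properties from the returned configuration (this is not how the Accumulo test itself is written):

// Drop-in replacement for the getSystemConfiguration() override in Example 4.
// Requires org.apache.accumulo.core.conf.{AccumuloConfiguration, ConfigurationCopy, DefaultConfiguration, Property}.
@Override
public AccumuloConfiguration getSystemConfiguration() {
    // A ConfigurationCopy seeded from the defaults keeps the "1g" override isolated to this test
    ConfigurationCopy conf = new ConfigurationCopy(DefaultConfiguration.getInstance());
    conf.set(Property.TSERV_MAXMEM, "1g");
    return conf;
}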

Example 5 with SiteConfiguration

Use of org.apache.accumulo.core.conf.SiteConfiguration in project accumulo by apache.

From the class Initialize, method initialize.

private boolean initialize(Opts opts, String instanceNamePath, VolumeManager fs, String rootUser) {
    UUID uuid = UUID.randomUUID();
    // the actual disk locations of the root table and tablets
    String[] configuredVolumes = VolumeConfiguration.getVolumeUris(SiteConfiguration.getInstance());
    VolumeChooserEnvironment chooserEnv = new VolumeChooserEnvironment(ChooserScope.INIT);
    final String rootTabletDir = new Path(fs.choose(chooserEnv, configuredVolumes) + Path.SEPARATOR + ServerConstants.TABLE_DIR + Path.SEPARATOR + RootTable.ID + RootTable.ROOT_TABLET_LOCATION).toString();
    try {
        initZooKeeper(opts, uuid.toString(), instanceNamePath, rootTabletDir);
    } catch (Exception e) {
        log.error("FATAL: Failed to initialize zookeeper", e);
        return false;
    }
    try {
        initFileSystem(opts, fs, uuid, rootTabletDir);
    } catch (Exception e) {
        log.error("FATAL Failed to initialize filesystem", e);
        if (SiteConfiguration.getInstance().get(Property.INSTANCE_VOLUMES).trim().equals("")) {
            Configuration fsConf = CachedConfiguration.getInstance();
            final String defaultFsUri = "file:///";
            String fsDefaultName = fsConf.get("fs.default.name", defaultFsUri), fsDefaultFS = fsConf.get("fs.defaultFS", defaultFsUri);
            // Try to determine when we couldn't find an appropriate core-site.xml on the classpath
            if (defaultFsUri.equals(fsDefaultName) && defaultFsUri.equals(fsDefaultFS)) {
                log.error("FATAL: Default filesystem value ('fs.defaultFS' or 'fs.default.name') of '{}' was found in the Hadoop configuration", defaultFsUri);
                log.error("FATAL: Please ensure that the Hadoop core-site.xml is on the classpath using 'general.classpaths' in accumulo-site.xml");
            }
        }
        return false;
    }
    final Instance instance = HdfsZooInstance.getInstance();
    final ServerConfigurationFactory confFactory = new ServerConfigurationFactory(instance);
    // When Kerberos (SASL) is enabled, initialization needs valid credentials to talk to HDFS.
    // If the user did not log in themselves, fall back to the credentials present in accumulo-site.xml that the servers will use themselves.
    try {
        final SiteConfiguration siteConf = confFactory.getSiteConfiguration();
        if (siteConf.getBoolean(Property.INSTANCE_RPC_SASL_ENABLED)) {
            final UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
            // We don't have any valid creds to talk to HDFS
            if (!ugi.hasKerberosCredentials()) {
                final String accumuloKeytab = siteConf.get(Property.GENERAL_KERBEROS_KEYTAB), accumuloPrincipal = siteConf.get(Property.GENERAL_KERBEROS_PRINCIPAL);
                // Fail if the site configuration doesn't contain appropriate credentials to login as servers
                if (StringUtils.isBlank(accumuloKeytab) || StringUtils.isBlank(accumuloPrincipal)) {
                    log.error("FATAL: No Kerberos credentials provided, and Accumulo is not properly configured for server login");
                    return false;
                }
                log.info("Logging in as {} with {}", accumuloPrincipal, accumuloKeytab);
                // Login using the keytab as the 'accumulo' user
                UserGroupInformation.loginUserFromKeytab(accumuloPrincipal, accumuloKeytab);
            }
        }
    } catch (IOException e) {
        log.error("FATAL: Failed to get the Kerberos user", e);
        return false;
    }
    try {
        AccumuloServerContext context = new AccumuloServerContext(instance, confFactory);
        initSecurity(context, opts, uuid.toString(), rootUser);
    } catch (Exception e) {
        log.error("FATAL: Failed to initialize security", e);
        return false;
    }
    if (opts.uploadAccumuloSite) {
        try {
            log.info("Uploading properties in accumulo-site.xml to Zookeeper. Properties that cannot be set in Zookeeper will be skipped:");
            Map<String, String> entries = new TreeMap<>();
            SiteConfiguration.getInstance().getProperties(entries, x -> true, false);
            for (Map.Entry<String, String> entry : entries.entrySet()) {
                String key = entry.getKey();
                String value = entry.getValue();
                if (Property.isValidZooPropertyKey(key)) {
                    SystemPropUtil.setSystemProperty(key, value);
                    log.info("Uploaded - {} = {}", key, Property.isSensitive(key) ? "<hidden>" : value);
                } else {
                    log.info("Skipped - {} = {}", key, Property.isSensitive(key) ? "<hidden>" : value);
                }
            }
        } catch (Exception e) {
            log.error("FATAL: Failed to upload accumulo-site.xml to Zookeeper", e);
            return false;
        }
    }
    return true;
}
Also used: Path (org.apache.hadoop.fs.Path), AccumuloServerContext (org.apache.accumulo.server.AccumuloServerContext), Configuration (org.apache.hadoop.conf.Configuration), VolumeConfiguration (org.apache.accumulo.core.volume.VolumeConfiguration), AccumuloConfiguration (org.apache.accumulo.core.conf.AccumuloConfiguration), SiteConfiguration (org.apache.accumulo.core.conf.SiteConfiguration), CachedConfiguration (org.apache.accumulo.core.util.CachedConfiguration), DefaultConfiguration (org.apache.accumulo.core.conf.DefaultConfiguration), Instance (org.apache.accumulo.core.client.Instance), HdfsZooInstance (org.apache.accumulo.server.client.HdfsZooInstance), ServerConfigurationFactory (org.apache.accumulo.server.conf.ServerConfigurationFactory), IOException (java.io.IOException), TreeMap (java.util.TreeMap), ThriftSecurityException (org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException), FileNotFoundException (java.io.FileNotFoundException), AccumuloSecurityException (org.apache.accumulo.core.client.AccumuloSecurityException), KeeperException (org.apache.zookeeper.KeeperException), VolumeChooserEnvironment (org.apache.accumulo.server.fs.VolumeChooserEnvironment), UUID (java.util.UUID), Map (java.util.Map), HashMap (java.util.HashMap), UserGroupInformation (org.apache.hadoop.security.UserGroupInformation)
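
The uploadAccumuloSite branch in Example 5 walks every site property and decides per entry whether it can be stored in ZooKeeper. A small hedged variant of that loop, reusing the same SiteConfiguration.getProperties(Map, Predicate, boolean) overload and Property.isValidZooPropertyKey shown above; filtering inside getProperties is a stylistic suggestion, not the Accumulo implementation:

import java.util.Map;
import java.util.TreeMap;
import org.apache.accumulo.core.conf.Property;
import org.apache.accumulo.core.conf.SiteConfiguration;

public class ZooUploadableProps {

    public static void main(String[] args) {
        // Collect only the properties that may be stored in ZooKeeper by filtering up front
        Map<String, String> zooProps = new TreeMap<>();
        SiteConfiguration.getInstance().getProperties(zooProps, Property::isValidZooPropertyKey, false);
        zooProps.forEach((key, value) ->
                System.out.println(key + " = " + (Property.isSensitive(key) ? "<hidden>" : value)));
    }
}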

Aggregations

SiteConfiguration (org.apache.accumulo.core.conf.SiteConfiguration): 6 usages
ServerConfigurationFactory (org.apache.accumulo.server.conf.ServerConfigurationFactory): 4 usages
Instance (org.apache.accumulo.core.client.Instance): 3 usages
Test (org.junit.Test): 3 usages
Entry (java.util.Map.Entry): 2 usages
AccumuloConfiguration (org.apache.accumulo.core.conf.AccumuloConfiguration): 2 usages
AccumuloServerContext (org.apache.accumulo.server.AccumuloServerContext): 2 usages
IAnswer (org.easymock.IAnswer): 2 usages
Before (org.junit.Before): 2 usages
FileNotFoundException (java.io.FileNotFoundException): 1 usage
IOException (java.io.IOException): 1 usage
HashMap (java.util.HashMap): 1 usage
Map (java.util.Map): 1 usage
TreeMap (java.util.TreeMap): 1 usage
UUID (java.util.UUID): 1 usage
AccumuloSecurityException (org.apache.accumulo.core.client.AccumuloSecurityException): 1 usage
ThriftSecurityException (org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException): 1 usage
ConfigurationCopy (org.apache.accumulo.core.conf.ConfigurationCopy): 1 usage
DefaultConfiguration (org.apache.accumulo.core.conf.DefaultConfiguration): 1 usage
CachedConfiguration (org.apache.accumulo.core.util.CachedConfiguration): 1 usage