
Example 6 with UnsupportedFileSystemException

Use of org.apache.hadoop.fs.UnsupportedFileSystemException in the elastic/elasticsearch project.

From the class HdfsBlobStoreContainerTests, method createContext:

@SuppressForbidden(reason = "lesser of two evils (the other being a bunch of JNI/classloader nightmares)")
private FileContext createContext(URI uri) {
    // mirrors HdfsRepository.java behaviour
    Configuration cfg = new Configuration(true);
    cfg.setClassLoader(HdfsRepository.class.getClassLoader());
    cfg.reloadConfiguration();
    Constructor<?> ctor;
    Subject subject;
    try {
        Class<?> clazz = Class.forName("org.apache.hadoop.security.User");
        ctor = clazz.getConstructor(String.class);
        ctor.setAccessible(true);
    } catch (ClassNotFoundException | NoSuchMethodException e) {
        throw new RuntimeException(e);
    }
    try {
        Principal principal = (Principal) ctor.newInstance(System.getProperty("user.name"));
        subject = new Subject(false, Collections.singleton(principal), Collections.emptySet(), Collections.emptySet());
    } catch (InstantiationException | IllegalAccessException | InvocationTargetException e) {
        throw new RuntimeException(e);
    }
    // disable file system cache
    cfg.setBoolean("fs.hdfs.impl.disable.cache", true);
    // set file system to TestingFs to avoid a bunch of security
    // checks, similar to what is done in HdfsTests.java
    cfg.set("fs.AbstractFileSystem." + uri.getScheme() + ".impl", TestingFs.class.getName());
    // create the FileContext with our user
    return Subject.doAs(subject, (PrivilegedAction<FileContext>) () -> {
        try {
            TestingFs fs = (TestingFs) AbstractFileSystem.get(uri, cfg);
            return FileContext.getFileContext(fs, cfg);
        } catch (UnsupportedFileSystemException e) {
            throw new RuntimeException(e);
        }
    });
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), Subject (javax.security.auth.Subject), InvocationTargetException (java.lang.reflect.InvocationTargetException), UnsupportedFileSystemException (org.apache.hadoop.fs.UnsupportedFileSystemException), Principal (java.security.Principal), FileContext (org.apache.hadoop.fs.FileContext), SuppressForbidden (org.elasticsearch.common.SuppressForbidden)

Example 7 with UnsupportedFileSystemException

Use of org.apache.hadoop.fs.UnsupportedFileSystemException in the elastic/elasticsearch project.

From the class HdfsRepository, method createContext:

// create hadoop filecontext
@SuppressForbidden(reason = "lesser of two evils (the other being a bunch of JNI/classloader nightmares)")
private static FileContext createContext(URI uri, Settings repositorySettings) {
    Configuration cfg = new Configuration(repositorySettings.getAsBoolean("load_defaults", true));
    cfg.setClassLoader(HdfsRepository.class.getClassLoader());
    cfg.reloadConfiguration();
    Map<String, String> map = repositorySettings.getByPrefix("conf.").getAsMap();
    for (Entry<String, String> entry : map.entrySet()) {
        cfg.set(entry.getKey(), entry.getValue());
    }
    // create a hadoop user. if we want some auth, it must be done differently anyway, and tested.
    Subject subject;
    try {
        Class<?> clazz = Class.forName("org.apache.hadoop.security.User");
        Constructor<?> ctor = clazz.getConstructor(String.class);
        ctor.setAccessible(true);
        Principal principal = (Principal) ctor.newInstance(System.getProperty("user.name"));
        subject = new Subject(false, Collections.singleton(principal), Collections.emptySet(), Collections.emptySet());
    } catch (ReflectiveOperationException e) {
        throw new RuntimeException(e);
    }
    // disable FS cache
    cfg.setBoolean("fs.hdfs.impl.disable.cache", true);
    // create the filecontext with our user
    return Subject.doAs(subject, (PrivilegedAction<FileContext>) () -> {
        try {
            AbstractFileSystem fs = AbstractFileSystem.get(uri, cfg);
            return FileContext.getFileContext(fs, cfg);
        } catch (UnsupportedFileSystemException e) {
            throw new RuntimeException(e);
        }
    });
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), Subject (javax.security.auth.Subject), AbstractFileSystem (org.apache.hadoop.fs.AbstractFileSystem), UnsupportedFileSystemException (org.apache.hadoop.fs.UnsupportedFileSystemException), Principal (java.security.Principal), FileContext (org.apache.hadoop.fs.FileContext), SuppressForbidden (org.elasticsearch.common.SuppressForbidden)
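Stripped of the Hadoop specifics, the reflective construction used in Examples 6 and 7 is a plain JDK pattern: look up a class by name, fetch its one-argument constructor, and invoke it, collapsing all reflection failures into an unchecked exception. A minimal, self-contained sketch of that pattern (java.lang.StringBuilder is used here as a stand-in for org.apache.hadoop.security.User, which is not on a plain classpath):

```java
import java.lang.reflect.Constructor;

public class ReflectiveCtorSketch {
    // Mirrors the ctor lookup in createContext: Class.forName + getConstructor,
    // with any reflective failure rethrown as a RuntimeException.
    static Object construct(String className, String arg) {
        try {
            Class<?> clazz = Class.forName(className);
            Constructor<?> ctor = clazz.getConstructor(String.class);
            ctor.setAccessible(true);
            return ctor.newInstance(arg);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Object obj = construct("java.lang.StringBuilder", "hello");
        System.out.println(obj.getClass().getName() + ": " + obj);
    }
}
```

Note that Example 6 catches the individual ReflectiveOperationException subclasses while Example 7 catches the common superclass; the sketch follows Example 7's simpler form.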

Example 8 with UnsupportedFileSystemException

Use of org.apache.hadoop.fs.UnsupportedFileSystemException in the apache/hadoop project.

From the class ViewFileSystemUtil, method getStatus:

/**
   * Get FsStatus for all ViewFsMountPoints matching path for the given
   * ViewFileSystem.
   *
   * Say ViewFileSystem has following mount points configured
   *  (1) hdfs://NN0_host:port/sales mounted on /dept/sales
   *  (2) hdfs://NN1_host:port/marketing mounted on /dept/marketing
   *  (3) hdfs://NN2_host:port/eng_usa mounted on /dept/eng/usa
   *  (4) hdfs://NN3_host:port/eng_asia mounted on /dept/eng/asia
   *
   * For the above config, here is a sample list of paths and their matching
   * mount points while getting FsStatus
   *
   *  Path                  Description                      Matching MountPoint
   *
   *  "/"                   Root ViewFileSystem lists all    (1), (2), (3), (4)
   *                         mount points.
   *
   *  "/dept"               Not a mount point, but a valid   (1), (2), (3), (4)
   *                         internal dir in the mount tree
   *                         and resolved down to "/" path.
   *
   *  "/dept/sales"         Matches a mount point            (1)
   *
   *  "/dept/sales/india"   Path is over a valid mount point (1)
   *                         and resolved down to
   *                         "/dept/sales"
   *
   *  "/dept/eng"           Not a mount point, but a valid   (1), (2), (3), (4)
   *                         internal dir in the mount tree
   *                         and resolved down to "/" path.
   *
   *  "/erp"                Does not match, lead to, or go    None
   *                         over any valid mount point
   *
   *
   * @param fileSystem - ViewFileSystem on which mount point exists
   * @param path - URI for which FsStatus is requested
   * @return Map of ViewFsMountPoint and FsStatus
   */
public static Map<MountPoint, FsStatus> getStatus(FileSystem fileSystem, Path path) throws IOException {
    if (!isViewFileSystem(fileSystem)) {
        throw new UnsupportedFileSystemException("FileSystem '" + fileSystem.getUri() + "' is not a ViewFileSystem.");
    }
    ViewFileSystem viewFileSystem = (ViewFileSystem) fileSystem;
    String viewFsUriPath = viewFileSystem.getUriPath(path);
    boolean isPathOverMountPoint = false;
    boolean isPathLeadingToMountPoint = false;
    boolean isPathIncludesAllMountPoint = false;
    Map<MountPoint, FsStatus> mountPointMap = new HashMap<>();
    for (MountPoint mountPoint : viewFileSystem.getMountPoints()) {
        String[] mountPointPathComponents = InodeTree.breakIntoPathComponents(mountPoint.getMountedOnPath().toString());
        String[] incomingPathComponents = InodeTree.breakIntoPathComponents(viewFsUriPath);
        int pathCompIndex;
        for (pathCompIndex = 0; pathCompIndex < mountPointPathComponents.length && pathCompIndex < incomingPathComponents.length; pathCompIndex++) {
            if (!mountPointPathComponents[pathCompIndex].equals(incomingPathComponents[pathCompIndex])) {
                break;
            }
        }
        if (pathCompIndex >= mountPointPathComponents.length) {
            // Path matches or is over a valid mount point
            isPathOverMountPoint = true;
            mountPointMap.clear();
            updateMountPointFsStatus(viewFileSystem, mountPointMap, mountPoint, new Path(viewFsUriPath));
            break;
        } else {
            if (pathCompIndex > 1) {
                // Path is in the mount tree
                isPathLeadingToMountPoint = true;
            } else if (incomingPathComponents.length <= 1) {
                // Special case of "/" path
                isPathIncludesAllMountPoint = true;
            }
            updateMountPointFsStatus(viewFileSystem, mountPointMap, mountPoint, mountPoint.getMountedOnPath());
        }
    }
    if (!isPathOverMountPoint && !isPathLeadingToMountPoint && !isPathIncludesAllMountPoint) {
        throw new NotInMountpointException(path, "getStatus");
    }
    return mountPointMap;
}
Also used: MountPoint (org.apache.hadoop.fs.viewfs.ViewFileSystem.MountPoint), Path (org.apache.hadoop.fs.Path), HashMap (java.util.HashMap), UnsupportedFileSystemException (org.apache.hadoop.fs.UnsupportedFileSystemException), FsStatus (org.apache.hadoop.fs.FsStatus)
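The heart of getStatus is the component-by-component prefix comparison between the mount point path and the incoming path. The following self-contained sketch isolates that loop; splitPath here is a hypothetical stand-in for InodeTree.breakIntoPathComponents, whose exact splitting rules may differ, so treat this as an illustration of the matching logic rather than a drop-in replacement:

```java
public class MountMatchSketch {
    // Hypothetical stand-in for InodeTree.breakIntoPathComponents:
    // "/dept/sales" -> ["", "dept", "sales"] (empty leading component for the root).
    static String[] splitPath(String path) {
        return path.equals("/") ? new String[] {""} : path.split("/", -1);
    }

    // Number of leading components shared by the mount point and the path,
    // mirroring the pathCompIndex loop in getStatus.
    static int commonPrefixLength(String mountPoint, String path) {
        String[] mp = splitPath(mountPoint);
        String[] in = splitPath(path);
        int i;
        for (i = 0; i < mp.length && i < in.length; i++) {
            if (!mp[i].equals(in[i])) {
                break;
            }
        }
        return i;
    }

    // The path is at or below the mount point iff every mount-point
    // component matched, i.e. pathCompIndex ran off the end of the array.
    static boolean isOverMountPoint(String mountPoint, String path) {
        return commonPrefixLength(mountPoint, path) >= splitPath(mountPoint).length;
    }

    public static void main(String[] args) {
        System.out.println(isOverMountPoint("/dept/sales", "/dept/sales/india")); // true
        System.out.println(isOverMountPoint("/dept/sales", "/dept/eng"));         // false
    }
}
```

With the mount table from the javadoc above, "/dept/sales/india" matches mount point (1) because all three components of "/dept/sales" are consumed, while "/dept/eng" only shares two components and falls into the "valid internal dir" branch.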

Example 9 with UnsupportedFileSystemException

Use of org.apache.hadoop.fs.UnsupportedFileSystemException in the apache/hadoop project.

From the class ViewFileSystemBaseTest, method testConfLinkSlash:

@Test
public void testConfLinkSlash() throws Exception {
    String clusterName = "ClusterX";
    URI viewFsUri = new URI(FsConstants.VIEWFS_SCHEME, clusterName, "/", null, null);
    Configuration newConf = new Configuration();
    ConfigUtil.addLink(newConf, clusterName, "/", new Path(targetTestRoot, "/").toUri());
    String mtPrefix = Constants.CONFIG_VIEWFS_PREFIX + "." + clusterName + ".";
    try {
        FileSystem.get(viewFsUri, newConf);
        fail("ViewFileSystem should error out on mount table entry: " + mtPrefix + Constants.CONFIG_VIEWFS_LINK + "." + "/");
    } catch (Exception e) {
        if (e instanceof UnsupportedFileSystemException) {
            String msg = Constants.CONFIG_VIEWFS_LINK_MERGE_SLASH + " is not supported yet.";
            assertThat(e.getMessage(), containsString(msg));
        } else {
            fail("Unexpected exception: " + e.getMessage());
        }
    }
}
Also used: Path (org.apache.hadoop.fs.Path), Configuration (org.apache.hadoop.conf.Configuration), UnsupportedFileSystemException (org.apache.hadoop.fs.UnsupportedFileSystemException), CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString), URI (java.net.URI), IOException (java.io.IOException), FileNotFoundException (java.io.FileNotFoundException), AccessControlException (org.apache.hadoop.security.AccessControlException), Test (org.junit.Test)

Example 10 with UnsupportedFileSystemException

Use of org.apache.hadoop.fs.UnsupportedFileSystemException in the apache/hadoop project.

From the class JobHistoryUtils, method getDefaultFileContext:

/**
   * Get the default file system URI for the cluster (used to ensure
   * consistency of history done/staging locations) across different contexts.
   *
   * @return Default file context
   */
private static FileContext getDefaultFileContext() {
    // If FS_DEFAULT_NAME_KEY was set solely by core-default.xml then we
    // ignore it. This prevents defaulting history paths to a file system
    // specified by core-default.xml, which would not make sense in any case.
    // For a test case to exploit this functionality it should create core-site.xml
    FileContext fc = null;
    Configuration defaultConf = new Configuration();
    String[] sources;
    sources = defaultConf.getPropertySources(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY);
    if (sources != null && (!Arrays.asList(sources).contains("core-default.xml") || sources.length > 1)) {
        try {
            fc = FileContext.getFileContext(defaultConf);
            LOG.info("Default file system [" + fc.getDefaultFileSystem().getUri() + "]");
        } catch (UnsupportedFileSystemException e) {
            LOG.error("Unable to create default file context [" + defaultConf.get(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY) + "]", e);
        }
    } else {
        LOG.info("Default file system is set solely by core-default.xml, therefore ignoring");
    }
    return fc;
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), UnsupportedFileSystemException (org.apache.hadoop.fs.UnsupportedFileSystemException), FileContext (org.apache.hadoop.fs.FileContext)
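The guard in getDefaultFileContext reduces to a small predicate over the property's sources: honor fs.defaultFS only if at least one source other than core-default.xml set it. A self-contained sketch of just that check (a plain String[] stands in for the result of Configuration.getPropertySources):

```java
import java.util.Arrays;

public class DefaultFsGuardSketch {
    // True iff FS_DEFAULT_NAME_KEY came from somewhere beyond core-default.xml,
    // mirroring the condition in getDefaultFileContext. A null sources array
    // means the key was never set at all.
    static boolean shouldUseDefaultFs(String[] sources) {
        return sources != null
            && (!Arrays.asList(sources).contains("core-default.xml")
                || sources.length > 1);
    }

    public static void main(String[] args) {
        // Set only by core-default.xml: ignored.
        System.out.println(shouldUseDefaultFs(new String[] {"core-default.xml"}));
        // Also overridden by core-site.xml: honored.
        System.out.println(shouldUseDefaultFs(new String[] {"core-default.xml", "core-site.xml"}));
    }
}
```

Note the `sources.length > 1` disjunct: even when core-default.xml appears in the list, a second source (such as core-site.xml) is enough to accept the value, because later sources override earlier ones.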

Aggregations

UnsupportedFileSystemException (org.apache.hadoop.fs.UnsupportedFileSystemException): 11
IOException (java.io.IOException): 6
Configuration (org.apache.hadoop.conf.Configuration): 6
Path (org.apache.hadoop.fs.Path): 6
URI (java.net.URI): 3
ArrayList (java.util.ArrayList): 3
HashMap (java.util.HashMap): 3
FileContext (org.apache.hadoop.fs.FileContext): 3
FileSystem (org.apache.hadoop.fs.FileSystem): 3
FileNotFoundException (java.io.FileNotFoundException): 2
Principal (java.security.Principal): 2
Subject (javax.security.auth.Subject): 2
AbstractFileSystem (org.apache.hadoop.fs.AbstractFileSystem): 2
FileStatus (org.apache.hadoop.fs.FileStatus): 2
FsPermission (org.apache.hadoop.fs.permission.FsPermission): 2
AccessControlException (org.apache.hadoop.security.AccessControlException): 2
Test (org.junit.Test): 2
File (java.io.File): 1
InvocationTargetException (java.lang.reflect.InvocationTargetException): 1
URISyntaxException (java.net.URISyntaxException): 1