Implementing Hive Proxy (Part 4): Fixing the Scratch Directory Permission Problem
Hive places its intermediate job files in HDFS under a directory derived from the currently logged-in user, defaulting to /tmp/hive-${user.name}. This causes permission problems on those temporary files once the proxy feature is implemented. For example, with proxying in place, when the superuser hdfs proxies to an ordinary user user, the temporary files in HDFS go under /tmp/hive-user, but that directory is owned by hdfs; when user later runs a job directly, it hits permission errors on the directory. Below is the proxy implementation and the fix for the permission problem.
1. Implementing the proxy feature
Modify the constructor of the org.apache.hadoop.hive.ql.Context class:
public Context(Configuration conf, String executionId) {
  this.conf = conf;
  this.executionId = executionId;
  if (HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_USE_CUSTOM_PROXY)) {
    String proxyUser = HiveConf.getVar(conf, HiveConf.ConfVars.HIVE_CUSTOM_PROXY_USER);
    LOG.warn("use custom proxy, gen scratch path, proxy user is " + proxyUser);
    // fall back to the default scratch dirs when no proxy user is set or it is the superuser
    if (proxyUser == null || ("").equals(proxyUser) || ("hdfs").equals(proxyUser)) {
      nonLocalScratchPath = new Path(HiveConf.getVar(conf, HiveConf.ConfVars.SCRATCHDIR), executionId);
      localScratchDir = new Path(HiveConf.getVar(conf, HiveConf.ConfVars.LOCALSCRATCHDIR), executionId).toUri().getPath();
    } else {
      // otherwise derive both scratch dirs from the proxy user instead of the login user
      localScratchDir = new Path(System.getProperty("java.io.tmpdir") + File.separator + proxyUser, executionId).toUri().getPath();
      nonLocalScratchPath = new Path("/tmp/hive-" + proxyUser, executionId);
    }
  } else {
    nonLocalScratchPath = new Path(HiveConf.getVar(conf, HiveConf.ConfVars.SCRATCHDIR), executionId);
    localScratchDir = new Path(HiveConf.getVar(conf, HiveConf.ConfVars.LOCALSCRATCHDIR), executionId).toUri().getPath();
  }
  LOG.warn("in Context init function nonLocalScratchPath is " + nonLocalScratchPath);
  LOG.warn("in Context init function localScratchPath is " + localScratchDir);
  scratchDirPermission = HiveConf.getVar(conf, HiveConf.ConfVars.SCRATCHDIRPERMISSION);
}
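The two ConfVars referenced above, HIVE_USE_CUSTOM_PROXY and HIVE_CUSTOM_PROXY_USER, are not part of stock Hive and their declarations are not shown in this post. A minimal sketch of how they might be registered in the HiveConf.ConfVars enum (the property-name strings here are an assumption; only the enum constant names come from the code above):

// added inside the HiveConf.ConfVars enum; the property names are hypothetical
HIVE_USE_CUSTOM_PROXY("hive.use.custom.proxy", false),
HIVE_CUSTOM_PROXY_USER("hive.custom.proxy.user", ""),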
2. Fixing the permission problem
In the code above you can see scratchDirPermission being read. It is the permission applied to the directories Hive creates, and it defaults to 700. Since these directories only hold intermediate files, we can loosen the permission, e.g. to 777. Yet after setting 777, the created directories turn out to have permission 755.
From the stack trace of the error we can see the call chain inside Context:
getExternalTmpPath -> getExternalScratchDir -> getScratchDir -> Utilities.createDirsWithPermission
(When the target directory does not exist, the HDFS tmp directory is created with the permission configured by HiveConf.ConfVars.SCRATCHDIRPERMISSION.)
Let's look at the getScratchDir method:
private final Map<String, Path> fsScratchDirs = new HashMap<String, Path>();
.....
private Path getScratchDir(String scheme, String authority, boolean mkdir, String scratchDir) {
  // mkdir is false for EXPLAIN statements
  String fileSystem = scheme + ":" + authority;
  Path dir = fsScratchDirs.get(fileSystem + "-" + TaskRunner.getTaskRunnerID());
  if (dir == null) {
    Path dirPath = new Path(scheme, authority, scratchDir + "-" + TaskRunner.getTaskRunnerID());
    if (mkdir) {
      try {
        FileSystem fs = dirPath.getFileSystem(conf);
        dirPath = new Path(fs.makeQualified(dirPath).toString());
        // the directory permission comes from HiveConf.ConfVars.SCRATCHDIRPERMISSION
        FsPermission fsPermission = new FsPermission(Short.parseShort(scratchDirPermission.trim(), 8));
        if (!Utilities.createDirsWithPermission(conf, dirPath, fsPermission)) {
          throw new RuntimeException("Cannot make directory: " + dirPath.toString());
        }
        if (isHDFSCleanup) {
          fs.deleteOnExit(dirPath);
        }
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
    dir = dirPath;
    fsScratchDirs.put(fileSystem + "-" + TaskRunner.getTaskRunnerID(), dir);
  }
  return dir;
}
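Note that the permission string is parsed as base-8 before being wrapped in an FsPermission ("700" becomes the octal value 0700, decimal 448). A standalone demo of that conversion:

import org.apache.hadoop.fs.permission.FsPermission;

public class PermParseDemo {
  public static void main(String[] args) {
    // "700" parsed base-8 is decimal 448, i.e. octal 0700
    FsPermission p = new FsPermission(Short.parseShort("700".trim(), 8));
    System.out.println(p); // prints rwx------
  }
}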
When Utilities.createDirsWithPermission is called, the permission passed in (taken from HiveConf.ConfVars.SCRATCHDIRPERMISSION) defaults to 700.
The createDirsWithPermission methods of the org.apache.hadoop.hive.ql.exec.Utilities class look like this:
public static boolean createDirsWithPermission(Configuration conf, Path mkdir,
    FsPermission fsPermission) throws IOException {
  boolean recursive = false;
  if (SessionState.get() != null) {
    // for requests coming from HiveServer2 with doAs enabled, recursive is true:
    // the permission is forced to 777 and the umask will be overridden with 000
    recursive = SessionState.get().isHiveServerQuery() &&
        conf.getBoolean(HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS.varname,
            HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS.defaultBoolVal);
    fsPermission = new FsPermission((short) 00777);
  }
  // if we made it so far without exception we are good!
  return createDirsWithPermission(conf, mkdir, fsPermission, recursive); // recursive defaults to false
}
.....
public static boolean createDirsWithPermission(Configuration conf, Path mkdirPath,
    FsPermission fsPermission, boolean recursive) throws IOException {
  String origUmask = null;
  LOG.warn("Create dirs " + mkdirPath + " with permission " + fsPermission + " recursive " + recursive);
  if (recursive) {
    // only when recursive is true is the original umask saved and overridden with 000;
    // otherwise origUmask stays null and the umask is left untouched
    origUmask = conf.get("fs.permissions.umask-mode");
    // this umask is required because by default the hdfs mask is 022 resulting in
    // all parents getting the fsPermission & !(022) permission instead of fsPermission
    conf.set("fs.permissions.umask-mode", "000");
  }
  FileSystem fs = ShimLoader.getHadoopShims().getNonCachedFileSystem(mkdirPath.toUri(), conf);
  LOG.warn("fs.permissions.umask-mode is " + conf.get("fs.permissions.umask-mode")); // defaults to 022
  boolean retval = false;
  try {
    retval = fs.mkdirs(mkdirPath, fsPermission);
    // because recursive is false here, fs.permissions.umask-mode is never overridden,
    // i.e. it stays 022, so even with fsPermission set to 777 the created
    // directory ends up with permission 755
    resetConfAndCloseFS(conf, recursive, origUmask, fs);
  } catch (IOException ioe) {
    try {
      resetConfAndCloseFS(conf, recursive, origUmask, fs);
    } catch (IOException e) {
      // do nothing - double failure
    }
  }
  return retval;
}
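resetConfAndCloseFS is not shown in the post. Consistent with how it is called above, it needs to restore the saved umask only when recursive was set, and then close the non-cached FileSystem. A sketch along those lines:

private static void resetConfAndCloseFS(Configuration conf, boolean unsetUmask,
    String origUmask, FileSystem fs) throws IOException {
  if (unsetUmask) {
    if (origUmask != null) {
      conf.set("fs.permissions.umask-mode", origUmask); // restore the caller's umask
    } else {
      conf.unset("fs.permissions.umask-mode");          // no previous value: remove the key
    }
  }
  fs.close(); // safe to close: getNonCachedFileSystem returned a private instance
}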
The HDFS setting fs.permissions.umask-mode defaults to 022:
public static final String FS_PERMISSIONS_UMASK_KEY = "fs.permissions.umask-mode";
public static final int FS_PERMISSIONS_UMASK_DEFAULT = 0022;
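This umask is exactly what turns the requested 777 into 755: mkdirs grants requested & ~umask. A quick arithmetic check:

public class UmaskDemo {
  public static void main(String[] args) {
    int requested = Integer.parseInt("777", 8); // 0777
    int umask     = Integer.parseInt("022", 8); // 0022
    int effective = requested & ~umask;         // mkdirs applies requested & ~umask
    System.out.println(Integer.toOctalString(effective)); // prints 755
  }
}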
To make it possible to create scratch directories with permission 777, change createDirsWithPermission as follows:
public static boolean createDirsWithPermission(Configuration conf, Path mkdir,
    FsPermission fsPermission) throws IOException {
  boolean recursive = false;
  if (SessionState.get() != null) {
    // also take the recursive (umask 000) path when the custom proxy is enabled
    recursive = (SessionState.get().isHiveServerQuery() &&
        conf.getBoolean(HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS.varname,
            HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS.defaultBoolVal))
        || HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_USE_CUSTOM_PROXY);
    fsPermission = new FsPermission((short) 00777);
  }
  // if we made it so far without exception we are good!
  return createDirsWithPermission(conf, mkdir, fsPermission, recursive);
}
With this change, HDFS scratch directories with permission 777 can be created.
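A minimal way to verify the patch, assuming a reachable HDFS, the patched jars on the classpath, and the hypothetical HIVE_USE_CUSTOM_PROXY ConfVar from section 1 (the test path is made up):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.exec.Utilities;
import org.apache.hadoop.hive.ql.session.SessionState;

public class ScratchPermCheck {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    conf.setBoolVar(HiveConf.ConfVars.HIVE_USE_CUSTOM_PROXY, true); // custom ConfVar from section 1
    SessionState.start(conf); // the patched branch only runs when a SessionState exists
    Path dir = new Path("/tmp/hive-user/scratch-perm-check"); // hypothetical test path
    // request 700; the patched method should force 777 with umask 000
    Utilities.createDirsWithPermission(conf, dir, new FsPermission((short) 00700));
    System.out.println(dir.getFileSystem(conf).getFileStatus(dir).getPermission()); // expect rwxrwxrwx
  }
}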
This article comes from the blog "菜光光的博客"; please keep this attribution: http://caiguangguang.blog.51cto.com/1652935/1589879