
Exception When Copying Local Files to HDFS in Local-Mode Testing

A project required copying local files to HDFS. Being on the lazy side, I reached for familiar Java and implemented it with Hadoop's FileSystem.copyFromLocalFile method. Running locally (on Windows 7) in local mode, however, raised the following exception:

An exception or error caused a run to abort: org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Ljava/lang/String;JJJI)Ljava/io/FileDescriptor;
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Ljava/lang/String;JJJI)Ljava/io/FileDescriptor;
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileOutputStreamWithMode(NativeIO.java:559)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:219)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:295)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:388)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:451)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:430)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:920)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:901)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:368)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
    at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1882)
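For reference, the copy logic that triggers this code path amounts to little more than the sketch below. This is a minimal illustration, not the project's actual code; the file paths are hypothetical placeholders, and with no fs.defaultFS override the default configuration resolves to the local file system, matching the LocalFileSystem frames in the stack trace above.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyToHdfs {
    public static void main(String[] args) throws Exception {
        // Default configuration: local mode resolves to file:///
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            // Source and destination paths are hypothetical
            fs.copyFromLocalFile(new Path("D:/data/input.txt"),
                                 new Path("D:/out/input.txt"));
        }
    }
}
```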

Reading the stack trace, the failing call is org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0, which is declared as follows:
    /** Wrapper around CreateFile() with security descriptor on Windows */
    private static native FileDescriptor createFileWithMode0(String path,
        long desiredAccess, long shareMode, long creationDisposition, int mode)
        throws NativeIOException;
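The error type itself is informative: a native method compiles and links fine on the Java side, but invoking it when no loaded library provides an implementation fails at call time. A minimal JDK-only sketch (the class and method names here are hypothetical, mirroring the situation with createFileWithMode0):

```java
public class MissingNativeDemo {
    // Hypothetical native method with no backing native library
    private static native void missingNative();

    public static void main(String[] args) {
        try {
            missingNative();
        } catch (UnsatisfiedLinkError e) {
            // Thrown at invocation time, exactly as in the stack trace above
            System.out.println("caught UnsatisfiedLinkError");
        }
    }
}
```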

As the declaration shows, createFileWithMode0 is a native method: Java only declares it, and the implementation must be supplied by Hadoop's native library (hadoop.dll on Windows). An UnsatisfiedLinkError therefore means the method was invoked but no loaded library provided it. Why was it invoked at all? Tracing further up the stack, the constructor org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init> is what reaches into the NativeIO$Windows class:

    private LocalFSFileOutputStream(Path f, boolean append,
        FsPermission permission) throws IOException {
      File file = pathToFile(f);
      if (permission == null) {
        this.fos = new FileOutputStream(file, append);
      } else {
        if (Shell.WINDOWS && NativeIO.isAvailable()) {
          this.fos = NativeIO.Windows.createFileOutputStreamWithMode(file,
              append, permission.toShort());
        } else {
          this.fos = new FileOutputStream(file, append);
          boolean success = false;
          try {
            setPermission(f, permission);
            success = true;
          } finally {
            if (!success) {
              IOUtils.cleanup(LOG, this.fos);
            }
          }
        }
      }
    }

The call stack shows that the NativeIO.Windows.createFileOutputStreamWithMode branch above was taken, so the condition Shell.WINDOWS && NativeIO.isAvailable() must have evaluated to true. NativeIO.isAvailable looks like this:

  /**
   * Return true if the JNI-based native IO extensions are available.
   */
  public static boolean isAvailable() {
    return NativeCodeLoader.isNativeCodeLoaded() && nativeLoaded;
  }

isAvailable mainly delegates to NativeCodeLoader.isNativeCodeLoaded, which is backed by the following static initializer:

  static {
    // Try to load native hadoop library and set fallback flag appropriately
    if (LOG.isDebugEnabled()) {
      LOG.debug("Trying to load the custom-built native-hadoop library...");
    }
    try {
      System.loadLibrary("hadoop");
      LOG.debug("Loaded the native-hadoop library");
      nativeCodeLoaded = true;
    } catch (Throwable t) {
      // Ignore failure to load
      if (LOG.isDebugEnabled()) {
        LOG.debug("Failed to load native-hadoop with error: " + t);
        LOG.debug("java.library.path=" +
            System.getProperty("java.library.path"));
      }
    }

    if (!nativeCodeLoaded) {
      LOG.warn("Unable to load native-hadoop library for your platform... " +
               "using builtin-java classes where applicable");
    }
  }

  /**
   * Check if native-hadoop code is loaded for this platform.
   *
   * @return <code>true</code> if native-hadoop is loaded,
   *         else <code>false</code>
   */
  public static boolean isNativeCodeLoaded() {
    return nativeCodeLoaded;
  }

As the code shows, isNativeCodeLoaded merely returns a flag. So where is the flag set, and where does the problem creep in?

The flag is set in NativeCodeLoader's static initializer by the call System.loadLibrary("hadoop"). Could that call be the culprit? Debugging on a colleague's machine, System.loadLibrary("hadoop") throws, the catch block runs, and nativeCodeLoaded stays false; on my machine the call succeeds and execution simply continues. What does System.loadLibrary do? It searches the directories listed in java.library.path for the named library, and on Windows that path is built from the system and user environment variables. The load succeeded on my machine because a hadoop.dll was present, either in C:\Windows\System32 or in a %HADOOP_HOME%\bin directory configured on the Path environment variable.
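This behavior is easy to reproduce with the JDK alone. The probe below mirrors NativeCodeLoader's static-initializer pattern; on a machine with no Hadoop native library on java.library.path the load fails silently and the flag stays false, while on my machine the stray hadoop.dll made it flip to true:

```java
public class NativeProbe {
    private static boolean nativeLoaded = false;

    static {
        try {
            // Same call NativeCodeLoader makes; succeeds only if a
            // "hadoop" library is found on java.library.path
            System.loadLibrary("hadoop");
            nativeLoaded = true;
        } catch (Throwable t) {
            // Swallow the failure, as NativeCodeLoader does
        }
    }

    public static void main(String[] args) {
        System.out.println("java.library.path=" +
            System.getProperty("java.library.path"));
        System.out.println("native loaded: " + nativeLoaded);
    }
}
```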

In short: a hadoop.dll sitting in some directory on the Path environment variable let the native loader succeed, so Hadoop assumed full native support was available; yet that DLL did not match the Hadoop version in use and does not export createFileWithMode0, hence the UnsatisfiedLinkError. The fix is just as simple: check every directory on Path and make sure none of them contains a hadoop.dll.
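To verify the fix, one can scan every Path entry for the offending DLL instead of checking directories by hand. A small sketch (it merely looks for a file named hadoop.dll in each Path directory):

```java
import java.io.File;

public class HadoopDllScanner {
    public static void main(String[] args) {
        String path = System.getenv("PATH");
        if (path == null) {
            path = "";
        }
        // Path entries are separated by ';' on Windows, ':' elsewhere
        for (String dir : path.split(File.pathSeparator)) {
            File dll = new File(dir, "hadoop.dll");
            if (dll.isFile()) {
                // Any hit here can be picked up by System.loadLibrary
                System.out.println("found: " + dll.getAbsolutePath());
            }
        }
        System.out.println("scan complete");
    }
}
```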

Note that if you remove a directory from the Path variable, you must restart IntelliJ IDEA before the change takes effect, because the JVM captures java.library.path once at startup into the usr_paths/sys_paths fields of ClassLoader.

 
