Mahout Deployment in Practice
Part 1: Download and unpack Mahout
unzip mahout-distribution-0.9-src.zip
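If you still need to fetch the release, a sketch follows; the exact URL is an assumption, so verify it against the Apache archive listing first. Note that the troubleshooting section below recommends the binary package (without the -src suffix):

```shell
# Fetch and unpack the binary distribution (URL assumed; verify before use).
wget https://archive.apache.org/dist/mahout/0.9/mahout-distribution-0.9.tar.gz
tar -xzf mahout-distribution-0.9.tar.gz
```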
Part 2: Set environment variables
1. What the variables mean
JAVA_HOME: the JDK directory Mahout runs against; required.
MAHOUT_JAVA_HOME: if set, overrides the JAVA_HOME value.
HADOOP_HOME: if set, Mahout runs on the distributed Hadoop platform; otherwise it runs standalone.
HADOOP_CONF_DIR: the directory holding the Hadoop configuration files.
MAHOUT_LOCAL: if set to any non-empty value, Mahout runs standalone even when HADOOP_HOME is set.
MAHOUT_CONF_DIR: path to Mahout's configuration files; defaults to $MAHOUT_HOME/src/conf.
MAHOUT_HEAPSIZE: the maximum heap size Mahout may use at runtime.
2. The actual steps
hadoop@namenode:~/mahout-distribution-0.9$ sudo vim /etc/profile
Append the following to the end of the file:
export JAVA_HOME=/usr/programs/jdk1.7.0_65
export HADOOP_HOME=/home/hadoop/hadoop-1.2.1
export HADOOP_CONF_DIR=/home/hadoop/hadoop-1.2.1/conf
export MAHOUT_HOME=/home/hadoop/mahout-distribution-0.9
export MAHOUT_CONF_DIR=/home/hadoop/mahout-distribution-0.9/conf
export PATH=$MAHOUT_CONF_DIR:$MAHOUT_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
Then reload the file: source /etc/profile
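A quick way to confirm the exports took effect after sourcing: loop over the variables and flag any that are missing. This helper is a small sketch of our own, not part of Mahout:

```shell
# check_env: print each variable the setup above relies on,
# flagging any that are unset. Helper name is ours, not Mahout's.
check_env() {
  for v in JAVA_HOME HADOOP_HOME HADOOP_CONF_DIR MAHOUT_HOME MAHOUT_CONF_DIR; do
    eval "val=\$$v"
    if [ -z "$val" ]; then
      echo "WARNING: $v is not set"
    else
      echo "$v=$val"
    fi
  done
}
check_env
```

Any WARNING line means the corresponding export did not take effect; re-check /etc/profile and source it again.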
3. A common problem
If you hit the following error:
Could not find mahout-examples-*.job in /home/hadoop/mahout-distribution-0.9 or /home/hadoop/mahout-distribution-0.9/examples/target, please run 'mvn install' to create the .job file
The cause is downloading the wrong package: the source (-src) distribution does not ship the prebuilt .job file. Try the binary distribution (the one without source code) instead.
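Alternatively, if you want to keep the source distribution, the error message itself names the fix: build the example job jar with Maven. A sketch, assuming Maven is installed and can reach the Apache repositories (the full build is long; skipping tests helps):

```shell
cd $MAHOUT_HOME
# Produces examples/target/mahout-examples-0.9-job.jar,
# the file the bin/mahout driver script was looking for.
mvn -DskipTests clean install
```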
Part 3: Verify the installation
hadoop@namenode:~$ mahout
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Warning: $HADOOP_HOME is deprecated.
Running on hadoop, using /home/hadoop/hadoop-1.2.1/bin/hadoop and HADOOP_CONF_DIR=/home/hadoop/hadoop-1.2.1/conf
MAHOUT-JOB: /home/hadoop/mahout-distribution-0.9/mahout-examples-0.9-job.jar
Warning: $HADOOP_HOME is deprecated.
An example program must be given as the first argument.
Valid program names are:
arff.vector: : Generate Vectors from an ARFF file or directory
baumwelch: : Baum-Welch algorithm for unsupervised HMM training
canopy: : Canopy clustering
cat: : Print a file or resource as the logistic regression models would see it
cleansvd: : Cleanup and verification of SVD output
clusterdump: : Dump cluster output to text
clusterpp: : Groups Clustering Output In Clusters
cmdump: : Dump confusion matrix in HTML or text formats
concatmatrices: : Concatenates 2 matrices of same cardinality into a single matrix
cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
fkmeans: : Fuzzy K-means clustering
hmmpredict: : Generate random sequence of observations by given HMM
itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
kmeans: : K-means clustering
lucene.vector: : Generate Vectors from a Lucene index
lucene2seq: : Generate Text SequenceFiles from a Lucene index
matrixdump: : Dump matrix in CSV format
matrixmult: : Take the product of two matrices
parallelALS: : ALS-WR factorization of a rating matrix
qualcluster: : Runs clustering experiments and summarizes results in a CSV
recommendfactorized: : Compute recommendations using the factorization of a rating matrix
recommenditembased: : Compute recommendations using item-based collaborative filtering
regexconverter: : Convert text files on a per line basis based on regular expressions
resplit: : Splits a set of SequenceFiles into a number of equal splits
rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
runlogistic: : Run a logistic regression model against CSV data
seq2encoded: : Encoded Sparse Vector generation from Text sequence files
seq2sparse: : Sparse Vector generation from Text sequence files
seqdirectory: : Generate sequence files (of Text) from a directory
seqdumper: : Generic Sequence File dumper
seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
seqwiki: : Wikipedia xml dump to sequence file
spectralkmeans: : Spectral k-means clustering
split: : Split Input data into test and train sets
splitDataset: : split a rating dataset into training and probe parts
ssvd: : Stochastic SVD
streamingkmeans: : Streaming k-means clustering
svd: : Lanczos Singular Value Decomposition
testnb: : Test the Vector-based Bayes classifier
trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
trainlogistic: : Train a logistic regression using stochastic gradient descent
trainnb: : Train the Vector-based Bayes classifier
transpose: : Take the transpose of a matrix
validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
vectordump: : Dump vectors from a sequence file to text
viterbi: : Viterbi decoding of hidden states from given output states sequence
Output like the above means the installation succeeded.
Part 4: Test the k-means algorithm
1. Download the test data
hadoop@namenode:~$ wget http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
--2014-11-08 06:40:16-- http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
Resolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.1.87
Connecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.1.87|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 288374 (282K) [text/plain]
Saving to: `synthetic_control.data'
100%[=======================================================================>] 288,374 79.5K/s in 3.5s
2014-11-08 06:40:20 (79.5 KB/s) - `synthetic_control.data' saved [288374/288374]
2. Put the test data into HDFS
hadoop@namenode:~$ hadoop fs -mkdir ./testdata
Warning: $HADOOP_HOME is deprecated.
hadoop@namenode:~$ hadoop fs -ls
Warning: $HADOOP_HOME is deprecated.
Found 4 items
drwxr-xr-x - hadoop supergroup 0 2014-11-06 07:48 /user/hadoop/input
drwxr-xr-x - hadoop supergroup 0 2014-11-06 07:49 /user/hadoop/output
drwxr-xr-x - Administrator supergroup 0 2014-11-06 08:01 /user/hadoop/output1
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:41 /user/hadoop/testdata
hadoop@namenode:~$ hadoop fs -put synthetic_control.data ./testdata
Warning: $HADOOP_HOME is deprecated.
hadoop@namenode:~$ hadoop fs -ls
Warning: $HADOOP_HOME is deprecated.
Found 4 items
drwxr-xr-x - hadoop supergroup 0 2014-11-06 07:48 /user/hadoop/input
drwxr-xr-x - hadoop supergroup 0 2014-11-06 07:49 /user/hadoop/output
drwxr-xr-x - Administrator supergroup 0 2014-11-06 08:01 /user/hadoop/output1
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:42 /user/hadoop/testdata
hadoop@namenode:~$ hadoop fs -ls ./testdata
Warning: $HADOOP_HOME is deprecated.
Found 1 items
-rw-r--r-- 1 hadoop supergroup 288374 2014-11-08 06:42 /user/hadoop/testdata/synthetic_control.data
3. Run the test
hadoop@namenode:~$ mahout org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Warning: $HADOOP_HOME is deprecated.
Running on hadoop, using /home/hadoop/hadoop-1.2.1/bin/hadoop and HADOOP_CONF_DIR=/home/hadoop/hadoop-1.2.1/conf
MAHOUT-JOB: /home/hadoop/mahout-distribution-0.9/mahout-examples-0.9-job.jar
Warning: $HADOOP_HOME is deprecated.
14/11/08 06:47:25 WARN driver.MahoutDriver: No org.apache.mahout.clustering.syntheticcontrol.kmeans.Job.props found on classpath, will use command-line arguments only
14/11/08 06:47:25 INFO kmeans.Job: Running with default arguments
14/11/08 06:47:27 INFO kmeans.Job: Preparing Input
14/11/08 06:47:27 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/11/08 06:47:30 INFO input.FileInputFormat: Total input paths to process : 1
14/11/08 06:47:30 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/11/08 06:47:30 WARN snappy.LoadSnappy: Snappy native library not loaded
14/11/08 06:47:31 INFO mapred.JobClient: Running job: job_201411080632_0002
14/11/08 06:47:32 INFO mapred.JobClient: map 0% reduce 0%
14/11/08 06:48:18 INFO mapred.JobClient: map 100% reduce 0%
14/11/08 06:48:21 INFO mapred.JobClient: Job complete: job_201411080632_0002
14/11/08 06:48:21 INFO mapred.JobClient: Counters: 19
    (per-job counter details elided)
14/11/08 06:48:21 INFO kmeans.Job: Running random seed to get initial clusters
14/11/08 06:48:22 INFO kmeans.RandomSeedGenerator: Wrote 6 Klusters to output/random-seeds/part-randomSeed
14/11/08 06:48:22 INFO kmeans.Job: Running KMeans with k = 6
14/11/08 06:48:22 INFO kmeans.KMeansDriver: Input: output/data Clusters In: output/random-seeds/part-randomSeed Out: output
14/11/08 06:48:22 INFO kmeans.KMeansDriver: convergence: 0.5 max Iterations: 10
14/11/08 06:48:25 INFO mapred.JobClient: Running job: job_201411080632_0003
14/11/08 06:48:26 INFO mapred.JobClient: map 0% reduce 0%
14/11/08 06:48:56 INFO mapred.JobClient: map 100% reduce 0%
14/11/08 06:49:09 INFO mapred.JobClient: map 100% reduce 100%
14/11/08 06:49:12 INFO mapred.JobClient: Job complete: job_201411080632_0003
14/11/08 06:49:15 INFO mapred.JobClient: Running job: job_201411080632_0004
14/11/08 06:50:19 INFO mapred.JobClient: Job complete: job_201411080632_0004
14/11/08 06:50:21 INFO mapred.JobClient: Running job: job_201411080632_0005
14/11/08 06:51:05 INFO mapred.JobClient: Job complete: job_201411080632_0005
    (one MapReduce job per k-means iteration; remaining iteration logs and counters elided)
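One caveat before re-running the example: the driver writes to the fixed output directory, and Hadoop's output format refuses to overwrite an existing directory, so a second run may fail until you clear it. This is general Hadoop behavior, not something shown in the log above:

```shell
# Remove the previous run's results from HDFS.
# Hadoop 1.x syntax; on Hadoop 2+ use 'hadoop fs -rm -r output' instead.
hadoop fs -rmr output
```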
4. Inspect the output
hadoop@namenode:~$ hadoop fs -ls ./output
Warning: $HADOOP_HOME is deprecated.
Found 15 items
-rw-r--r-- 1 hadoop supergroup 194 2014-11-08 06:56 /user/hadoop/output/_policy
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:57 /user/hadoop/output/clusteredPoints
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:48 /user/hadoop/output/clusters-0
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:49 /user/hadoop/output/clusters-1
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:56 /user/hadoop/output/clusters-10-final
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:50 /user/hadoop/output/clusters-2
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:51 /user/hadoop/output/clusters-3
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:51 /user/hadoop/output/clusters-4
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:52 /user/hadoop/output/clusters-5
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:53 /user/hadoop/output/clusters-6
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:54 /user/hadoop/output/clusters-7
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:54 /user/hadoop/output/clusters-8
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:55 /user/hadoop/output/clusters-9
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:48 /user/hadoop/output/data
drwxr-xr-x - hadoop supergroup 0 2014-11-08 06:48 /user/hadoop/output/random-seeds
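The listing shows raw SequenceFile output, which isn't directly readable. Mahout ships a clusterdump utility (it appears in the program list in Part 3) that renders the final clusters as text; a sketch, with the local output path chosen by us:

```shell
# Dump the final centroids plus the points assigned to each cluster
# into a local, human-readable file.
mahout clusterdump \
    -i output/clusters-10-final \
    -p output/clusteredPoints \
    -o /home/hadoop/clusteranalyze.txt
cat /home/hadoop/clusteranalyze.txt
```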