MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
[hxsyl@CentOSMaster hadoop-2.6.4]$ mahout
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using HADOOP_HOME=/home/hxsyl/Spark_Relvant/hadoop-2.6.4
HADOOP_CONF_DIR=/home/hxsyl/Spark_Relvant/hadoop-2.6.4/etc/hadoop
MAHOUT-JOB: /home/hxsyl/Spark_Relvant/mahout-distribution-0.6/mahout-examples-0.6-job.jar
An example program must be given as the first argument.
Valid program names are:
  arff.vector: : Generate Vectors from an ARFF file or directory
  baumwelch: : Baum-Welch algorithm for unsupervised HMM training
  canopy: : Canopy clustering
  cat: : Print a file or resource as the logistic regression models would see it
  cleansvd: : Cleanup and verification of SVD output
  clusterdump: : Dump cluster output to text
  clusterpp: : Groups Clustering Output In Clusters
  cmdump: : Dump confusion matrix in HTML or text formats
  cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
  cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
  dirichlet: : Dirichlet Clustering
  eigencuts: : Eigencuts spectral clustering
  evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
  fkmeans: : Fuzzy K-means clustering
  fpg: : Frequent Pattern Growth
  hmmpredict: : Generate random sequence of observations by given HMM
  itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
  kmeans: : K-means clustering
  lda: : Latent Dirchlet Allocation
  ldatopics: : LDA Print Topics
  lucene.vector: : Generate Vectors from a Lucene index
  matrixdump: : Dump matrix in CSV format
  matrixmult: : Take the product of two matrices
  meanshift: : Mean Shift clustering
  minhash: : Run Minhash clustering
  pagerank: : compute the PageRank of a graph
  parallelALS: : ALS-WR factorization of a rating matrix
  prepare20newsgroups: : Reformat 20 newsgroups data
  randomwalkwithrestart: : compute all other vertices' proximity to a source vertex in a graph
  recommendfactorized: : Compute recommendations using the factorization of a rating matrix
  recommenditembased: : Compute recommendations using item-based collaborative filtering
  regexconverter: : Convert text files on a per line basis based on regular expressions
  rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
  rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
  runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
  runlogistic: : Run a logistic regression model against CSV data
  seq2encoded: : Encoded Sparse Vector generation from Text sequence files
  seq2sparse: : Sparse Vector generation from Text sequence files
  seqdirectory: : Generate sequence files (of Text) from a directory
  seqdumper: : Generic Sequence File dumper
  seqwiki: : Wikipedia xml dump to sequence file
  spectralkmeans: : Spectral k-means clustering
  split: : Split Input data into test and train sets
  splitDataset: : split a rating dataset into training and probe parts
  ssvd: : Stochastic SVD
  svd: : Lanczos Singular Value Decomposition
  testclassifier: : Test the text based Bayes Classifier
  testnb: : Test the Vector-based Bayes classifier
  trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
  trainclassifier: : Train the text based Bayes Classifier
  trainlogistic: : Train a logistic regression using stochastic gradient descent
  trainnb: : Train the Vector-based Bayes classifier
  transpose: : Take the transpose of a matrix
  validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
  vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
  vectordump: : Dump vectors from a sequence file to text
  viterbi: : Viterbi decoding of hidden states from given output states sequence
  wikipediaDataSetCreator: : Splits data set of wikipedia wrt feature like country
  wikipediaXMLSplitter: : Reads wikipedia data and creates ch
At first I took this message for an error, but it turns out this is the expected behavior: when MAHOUT_LOCAL is not set, Mahout submits its jobs to Hadoop; when it is set, Mahout runs in local (standalone) mode instead.
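A minimal sketch of switching between the two modes from the shell. The paths are the ones from the log above; the choice of seqdumper is just an illustrative job, any program from the list works the same way:

  # Local (standalone) mode: MAHOUT_LOCAL only needs to be non-empty.
  export MAHOUT_LOCAL=true
  mahout seqdumper --help

  # Cluster mode: leave MAHOUT_LOCAL unset so bin/mahout adds HADOOP_CONF_DIR
  # to the classpath and runs on Hadoop.
  unset MAHOUT_LOCAL
  export HADOOP_HOME=/home/hxsyl/Spark_Relvant/hadoop-2.6.4
  export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  mahout seqdumper --help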
One thing worth noting: /etc/profile must be edited as root. As the hxsyl user, even forcing the write with :wq! in vi several times did not save the file. After I ran source /etc/profile as root, mahout printed a message like the one above (but for the root user); yet after switching back to hxsyl, it kept complaining that HADOOP_CONF_DIR was not set, even though I had clearly set it. Running source /etc/profile again as hxsyl fixed it. The reason is that source only affects the shell it is run in, so sourcing the file as root does nothing for hxsyl's shell; each user (in fact each open shell) has to re-source it, or log out and back in.
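A sketch of the sequence that finally worked, assuming the same users (root and hxsyl) and paths as above:

  # 1. Edit the system-wide profile as root; a regular user cannot write
  #    /etc/profile, which is why :wq! in vi kept failing under hxsyl.
  su -
  vi /etc/profile        # append the exports, e.g.:
                         #   export HADOOP_HOME=/home/hxsyl/Spark_Relvant/hadoop-2.6.4
                         #   export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  exit                   # back to the hxsyl shell

  # 2. source only modifies the shell it runs in; root sourcing the file does
  #    nothing for hxsyl's shell, so re-source it here (or log out and back in).
  source /etc/profile
  echo "$HADOOP_CONF_DIR"   # sanity check: should print the hadoop conf dir
  mahout                    # should now print the program list shown above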