The GraphX Property Graph
package main.scala

import org.apache.spark.graphx.{Edge, Graph, VertexId}
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object graph_test {

  // Point Hadoop at a local winutils installation (needed on Windows)
  System.setProperty("hadoop.home.dir", "E:/zhuangji/winutil/")

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("graph_test")
    val sc = new SparkContext(conf)

    // Build the graph from a VertexRDD and an EdgeRDD
    val users: RDD[(VertexId, (String, String))] =
      sc.parallelize(Array((3L, ("rxin", "student")), (7L, ("jgonzal", "postdoc")),
        (5L, ("franklin", "prof")), (2L, ("istoica", "prof"))))
    val relationships: RDD[Edge[String]] =
      sc.parallelize(Array(Edge(3L, 7L, "collab"), Edge(5L, 3L, "advisor"),
        Edge(2L, 5L, "colleague"), Edge(5L, 7L, "pi")))
    // Attribute assigned to any vertex referenced by an edge but absent from `users`
    val defaultUser = ("John Doe", "Missing")
    val graph = Graph(users, relationships, defaultUser)

    // Query the graph through graph.vertices and graph.edges
    println(graph.vertices.filter { case (id, (name, pos)) => pos == "prof" }.count)
    println(graph.edges.filter { case Edge(s, d, r) => s < d }.count)
    println(graph.edges.filter(e => e.srcId < e.dstId).count) // equivalent to the previous line

    // The triplet view graph.triplets can also be used to query a graph
    val facts: RDD[String] =
      graph.triplets.map(triplet =>
        triplet.srcAttr._1 + " is the " + triplet.attr + " of " + triplet.dstAttr._1)
    facts.collect.foreach(println(_))
  }
}
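As a hedged follow-on sketch (not part of the original listing): beyond filtering vertices and edges, GraphX can aggregate information along edges with `aggregateMessages`. Assuming the `graph` value built above is still in scope, the snippet below counts each vertex's in-degree by having every edge send a `1` to its destination and summing the messages per vertex.

```scala
import org.apache.spark.graphx.VertexRDD

// Each edge sends the message 1 to its destination vertex;
// messages arriving at the same vertex are merged by addition.
val inDeg: VertexRDD[Int] = graph.aggregateMessages[Int](
  ctx => ctx.sendToDst(1), // sendMsg: runs once per edge triplet
  _ + _                    // mergeMsg: combines messages per vertex
)
inDeg.collect.foreach { case (id, d) => println(s"vertex $id has in-degree $d") }
```

Vertices that receive no messages (here, vertices with no incoming edges, such as 2L) simply do not appear in the result, which is why `aggregateMessages` returns a `VertexRDD` rather than a value for every vertex.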