Default Ports in the Hadoop Ecosystem


1. HDFS Ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| NameNode WebUI | Master Nodes (NameNode and any back-up NameNodes) | 50070 | http | Web UI to look at the current status of HDFS and explore the file system | Yes (typically admins, Dev/Support teams) | dfs.http.address |
| | | 50470 | https | Secure http service | | dfs.https.address |
| NameNode metadata service | Master Nodes (NameNode and any back-up NameNodes) | 8020/9000 | IPC | File system metadata operations | Yes (all clients that directly interact with HDFS) | Embedded in the URI specified by fs.default.name |
| DataNode | All Slave Nodes | 50075 | http | DataNode WebUI to access status, logs, etc. | Yes (typically admins, Dev/Support teams) | dfs.datanode.http.address |
| | | 50475 | https | Secure http service | | dfs.datanode.https.address |
| | | 50010 | | Data transfer | | dfs.datanode.address |
| | | 50020 | IPC | Metadata operations | No | dfs.datanode.ipc.address |
| Secondary NameNode | Secondary NameNode and any backup Secondary NameNode | 50090 | http | Checkpoint for NameNode metadata | No | dfs.secondary.http.address |
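As a minimal sketch, the HDFS parameters above would typically be set in `hdfs-site.xml`; the host names and bind addresses below are placeholders, not values from this article:

```xml
<!-- hdfs-site.xml (sketch; "namenode-host" and "0.0.0.0" are placeholders) -->
<configuration>
  <property>
    <name>dfs.http.address</name>
    <value>namenode-host:50070</value>  <!-- NameNode WebUI -->
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50010</value>        <!-- DataNode data transfer -->
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50020</value>        <!-- DataNode metadata IPC -->
  </property>
</configuration>
```

Note that the NameNode IPC port (8020/9000) is not configured directly: it is embedded in the URI given by fs.default.name (conventionally in `core-site.xml`), e.g. `hdfs://namenode-host:8020`.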

2. MapReduce Ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| JobTracker WebUI | Master Nodes (JobTracker node and any back-up JobTracker node) | 50030 | http | Web UI for JobTracker | Yes | mapred.job.tracker.http.address |
| JobTracker | Master Nodes (JobTracker node) | 8021 | IPC | For job submissions | Yes (all clients that submit MapReduce jobs, including Hive, Hive server, Pig) | Embedded in the URI specified by mapred.job.tracker |
| TaskTracker Web UI and Shuffle | All Slave Nodes | 50060 | http | TaskTracker Web UI to access status, logs, etc. | Yes (typically admins, Dev/Support teams) | mapred.task.tracker.http.address |
| History Server WebUI | | 51111 | http | Web UI for Job History | Yes | mapreduce.history.server.http.address |
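A corresponding sketch for `mapred-site.xml`, again with a placeholder host name:

```xml
<!-- mapred-site.xml (sketch; "jobtracker-host" is a placeholder) -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker-host:8021</value>  <!-- job submission (IPC) -->
  </property>
  <property>
    <name>mapred.job.tracker.http.address</name>
    <value>jobtracker-host:50030</value> <!-- JobTracker WebUI -->
  </property>
</configuration>
```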


3. Hive Ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| Hive Server2 | Hive Server machine (usually a utility machine) | 10000 | thrift | Service for programmatically (Thrift/JDBC) connecting to Hive | Yes (clients that connect to Hive programmatically or through UI SQL tools that use JDBC) | ENV variable HIVE_PORT |
| Hive Metastore | | 9083 | thrift | | Yes (clients that run Hive, Pig and potentially M/R jobs that use HCatalog) | hive.metastore.uris |
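The metastore endpoint would typically be pointed at in `hive-site.xml`; the host name here is a placeholder:

```xml
<!-- hive-site.xml (sketch; "metastore-host" is a placeholder) -->
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value> <!-- remote metastore -->
  </property>
</configuration>
```

The Hive server port itself, per the table, is taken from the HIVE_PORT environment variable rather than a config file.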

4. HBase Ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| HMaster | Master Nodes (HBase Master node and any back-up HBase Master node) | 60000 | | | Yes | hbase.master.port |
| HMaster Info Web UI | Master Nodes (HBase Master node and back-up HBase Master node, if any) | 60010 | http | The port for the HBase Master web UI. Set to -1 if you do not want the info server to run. | Yes | hbase.master.info.port |
| Region Server | All Slave Nodes | 60020 | | | Yes (typically admins, dev/support teams) | hbase.regionserver.port |
| Region Server | All Slave Nodes | 60030 | http | | Yes (typically admins, dev/support teams) | hbase.regionserver.info.port |
| | All ZooKeeper Nodes | 2888 | | Port used by ZooKeeper peers to talk to each other. | No | hbase.zookeeper.peerport |
| | All ZooKeeper Nodes | 3888 | | Port used by ZooKeeper peers for leader election. | | hbase.zookeeper.leaderport |
| | | 2181 | | Property from ZooKeeper's config zoo.cfg: the port at which clients connect. | | hbase.zookeeper.property.clientPort |
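A sketch of how the HBase ports above map onto `hbase-site.xml` (values shown are the defaults from the table):

```xml
<!-- hbase-site.xml (sketch; these are the default values, shown explicitly) -->
<configuration>
  <property>
    <name>hbase.master.port</name>
    <value>60000</value>  <!-- HMaster RPC -->
  </property>
  <property>
    <name>hbase.regionserver.port</name>
    <value>60020</value>  <!-- Region Server RPC -->
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>   <!-- forwarded to ZooKeeper's clientPort in zoo.cfg -->
  </property>
</configuration>
```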

5. WebHCat Ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| WebHCat Server | Any utility machine | 50111 | http | Web API on top of HCatalog and other Hadoop services | Yes | templeton.port |
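As a sketch, templeton.port would usually be set in WebHCat's own config file (assumed here to be `webhcat-site.xml`):

```xml
<!-- webhcat-site.xml (sketch; file name is an assumption) -->
<configuration>
  <property>
    <name>templeton.port</name>
    <value>50111</value>  <!-- WebHCat HTTP API -->
  </property>
</configuration>
```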
6. Ganglia Ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| | Ganglia server | 8660/61/62/63 | | For gmond collectors | | |
| | All Slave Nodes | 8660 | | For gmond agents | | |
| | Ganglia server | 8651 | | For ganglia gmetad | | |
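To verify that any of the ports listed in the tables above is actually reachable from a client machine, a minimal Python sketch (the host name in the example is a placeholder, not a real machine):

```python
# Minimal sketch: probe whether a service port is reachable over TCP.
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # e.g. probe the NameNode WebUI port on a hypothetical host
    print(is_port_open("namenode-host", 50070))
```

This only confirms that something is listening on the port; it does not verify which service is behind it.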