
Deploying CloudFoundry v183 on OpenStack (IceHouse + Neutron)

    Previously, without physical servers, I could only tinker with CF inside virtual machines. Now that physical servers are available and I have a good grasp of OpenStack's features, it is finally time for a real deployment. This article deploys CloudFoundry v183 on OpenStack IceHouse using Neutron networking; very few write-ups online cover a Neutron-based setup. This deployment runs two HMs, removing the health manager single point of failure. NATS is still a single instance, but according to the official documentation NATS is very stable, and if the NATS VM goes down, BOSH can restore it.

Environment Preparation

1. A completed OpenStack IceHouse deployment. Network mode: neutron + OVS + VLAN; volume storage: Ceph. To deploy CloudFoundry on OpenStack, OpenStack must provide volume storage.

 2. After OpenStack is installed, carry out the following preparation:

1. Configure the default security group rules
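A minimal CLI sketch of the rules (the values are assumptions; adjust to your own policy):

    # assumed rules: allow ICMP plus all TCP/UDP ingress on the default group
    neutron security-group-rule-create --direction ingress --protocol icmp default
    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 1 --port-range-max 65535 default
    neutron security-group-rule-create --direction ingress --protocol udp --port-range-min 1 --port-range-max 65535 default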

       


2. Create a key pair

Create a key pair named cfkey and download it for later use. The name is arbitrary, but it is referenced in the configuration below.
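From the command line this can be done as follows (a sketch; creating the key pair in the dashboard works just as well):

    nova keypair-add cfkey > cfkey.pem   # prints the private key; save it
    chmod 600 cfkey.pem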

3. Add or modify flavors

Add three flavors (or modify existing ones), with requirements as sketched below:
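The flavor names match those referenced in the manifests later in this article (cf.small, cf.medium, cf.big); the sizes here are assumptions:

    # nova flavor-create <name> <id> <ram MB> <disk GB> <vcpus>
    nova flavor-create cf.small  auto 2048 20 1
    nova flavor-create cf.medium auto 4096 40 2
    nova flavor-create cf.big    auto 8192 80 4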



4. Adjust the OpenStack project quota limits
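A hedged CLI example of raising the quotas (the numbers are assumptions; tenant_id comes from `keystone tenant-list`):

    nova quota-update --instances 100 --cores 200 --ram 409600 tenant_id
    neutron quota-update --tenant-id tenant_id --port 200 --floatingip 50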

 

5. Create the internal network used by CloudFoundry

 

 

   Note: CloudFoundry requires a DNS server, so configure the internal DNS server address on the subnet when creating this network. The internal DNS server used in this setup is 10.106.1.36 (see the DNS section below).
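A sketch of creating the network from the CLI, reusing the subnet range that appears later in bosh-openstack.yml (the names cf-net and cf-subnet are hypothetical):

    neutron net-create cf-net
    neutron subnet-create cf-net 171.71.71.0/24 --name cf-subnet --gateway 171.71.71.1 --dns-nameserver 10.106.1.36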

               

6. Create a router and use it to connect the external network with net04
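For example (the router name and the external network name ext-net are hypothetical):

    neutron router-create cf-router
    neutron router-gateway-set cf-router ext-net
    neutron router-interface-add cf-router cf-subnet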

   

7. Adjust the Cinder quota from the command line

  cinder quota-update tenant_id --volumes 500   # tenant_id is the ID of the admin tenant (get it with `keystone tenant-list`)

  Restart the Cinder services:

   cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done

 

Deploying the BOSH CLI

   

    Create a virtual machine in OpenStack; this article uses an Ubuntu 12.04 64-bit system.

  

  1. Install the Ruby runtime

     root@boshcli:~# curl -L https://get.rvm.io | bash -s stable 

After RVM finishes installing, open a new shell so the RVM environment is loaded, then install Ruby. Version 1.9.3 or later is required; this environment uses ruby-1.9.3-p484.

     root@boshcli:~# rvm install 1.9.3

     To cut installation time, switch the gem source to the Taobao mirror:

    root@boshcli:~#gem sources --remove https://rubygems.org/

    root@boshcli:~#gem sources -a https://ruby.taobao.org/

    root@boshcli:~#gem sources -l

 

   2. Install the required packages

     root@boshcli:~# apt-get install git libxslt-dev libxml2-dev libmysql-ruby libmysqlclient-dev libpq-dev 

   3. Install the BOSH CLI

     gem install bosh_cli_plugin_micro --pre

  This downloads a batch of gems and takes some time. Once it completes, verify the BOSH CLI version:

root@boshcli:~# bosh --version

          BOSH 1.2710.0

  4. Install the fog component and verify the OpenStack environment

  4.1 Create a .fog file in root's home directory with the following content:

:openstack:

   :openstack_auth_url:  http://10.110.13.32:5000/v2.0/tokens  # Keystone auth URL

   :openstack_api_key:   123456a?  # OpenStack admin user's password

   :openstack_username:  admin     # OpenStack admin user

   :openstack_tenant:    admin     # tenant name

 4.2 Install fog

     root@boshcli:~# gem install fog 

4.3 Load fog's OpenStack mode

root@boshcli:~# fog openstack

Test whether it works:

Compute[:openstack].servers   # run at the fog console prompt; it should return the server list

Deploying Micro BOSH

1. Download the Micro BOSH stemcell

root@boshcli:~# mkdir -p ~/bosh-workspace/stemcells  

root@boshcli:~# cd ~/bosh-workspace/stemcells  

root@boshcli:~# wget http://bosh-jenkins-artifacts.s3.amazonaws.com/bosh-stemcell/openstack/bosh-stemcell-2917.2-openstack-kvm-ubuntu.tgz  

(You can also download bosh-stemcell-2917.2-openstack-kvm-ubuntu.tgz with another tool and place it in this directory.)

2. Create the manifest file for deploying Micro BOSH

root@boshcli:~# mkdir -p ~/bosh-workspace/deployments/microbosh-openstack  

root@boshcli:~# cd ~/bosh-workspace/deployments/microbosh-openstack  

root@boshcli:~# vi micro_bosh.yml  

Manifest contents:

--- 
name: microbosh-openstack 
   
logging: 
  level: DEBUG 
   
network: 
  type: dynamic 
  vip: 10.110.13.32 # Floating IP 
  cloud_properties:
     net_id: 0bb4ff64-4413-41a9-9c3b-b93d7b6f6db1
resources: 
  persistent_disk: 16384 
  cloud_properties: 
    instance_type: cf.medium 
   
cloud: 
  plugin: openstack 
  properties: 
    openstack: 
      auth_url: http://10.110.13.2:5000/v2.0 
      username: cloudfoundry     # openstack username 
      api_key: 123456a?       # openstack api_key 
      tenant: cloudfoundry    # openstack tenant 
      default_security_groups: ["default"] # use the default security group 
      default_key_name: vkey  # name of the key pair created earlier 
      private_key: ~/vkey.pem # the matching private key file 
   
apply_spec: 
  properties: 
    director: 
      max_threads: 3 
    hm: 
      resurrector_enabled: true 
    ntp: 
      - 0.north-america.pool.ntp.org 
      - 1.north-america.pool.ntp.org

3. Deploy Micro BOSH

Set the Micro BOSH deployment file:

root@boshcli:~# cd ~/bosh-workspace/deployments  

root@boshcli:~/bosh-workspace/deployments# bosh micro deployment microbosh-openstack    

 Deploy Micro BOSH using the stemcell downloaded above:

root@boshcli:~/bosh-workspace/deployments# bosh micro deploy ~/bosh-workspace/stemcells/bosh-stemcell-2917.2-openstack-kvm-ubuntu.tgz  

On success, the output shows the bosh target to switch to.

  4. Log in to the Micro BOSH director and create an account

Target the Micro BOSH director:

root@boshcli:~/bosh-workspace/deployments# bosh target https://10.110.13.32:25555

The initial credentials are admin/admin:

root@boshcli:~/bosh-workspace/deployments# bosh login  

Your username: admin  

Enter password: *****  

Logged in as `admin'

Check the BOSH status:

root@boshcli:~# bosh status
Config
             /root/.bosh_config


Director
  Name       microbosh-openstack
  URL        https://10.110.13.32:25555
  Version    1.2719.1.0 (00000000)
  User       admin
  UUID       b9c17bd2-2e53-452f-a8e2-a6bfe391aca5
  CPI        openstack
  dns        enabled (domain_name: microbosh)
  compiled_package_cache disabled
  snapshots  disabled


Deployment
  Manifest   /root/bosh-workspace/deployments/cf-183.yml

Deploying BOSH with Micro BOSH

Resource planning: deploying BOSH requires eight VMs, one per BOSH component. (During deployment we found that each component must be deployed on its own VM, so prepare eight internal IPs and two floating IPs; the floating IPs can be pre-allocated as shown below.)
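For example (ext-net is a hypothetical external network name):

    neutron floatingip-create ext-net
    neutron floatingip-create ext-net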

 

1. Upload the BOSH stemcell to Micro BOSH

 root@boshcli:~/bosh-workspace/stemcells# bosh target https://10.110.13.32:25555

 root@boshcli:~/bosh-workspace/stemcells#  bosh login

 root@boshcli:~/bosh-workspace/stemcells# bosh upload stemcell bosh-stemcell-2917.2-openstack-kvm-ubuntu.tgz 

2. Download the BOSH release source and build it

  root@boshcli:~# cd ~/bosh-workspace

  root@boshcli:~/bosh-workspace# git clone https://github.com/cloudfoundry/bosh.git

  root@boshcli:~/bosh-workspace# cd bosh

  root@boshcli:~/bosh-workspace/bosh# bundle install --local   # if this fails, run plain `bundle install` (skip this step when using an existing bosh release)

  root@boshcli:~/bosh-workspace/bosh# bundle exec rake release:create_dev_release

3. Upload the BOSH release to Micro BOSH

 root@boshcli:~/bosh-workspace# bosh upload release ~/bosh-workspace/bosh/release/dev_releases/bosh/bosh-105+dev.1.yml

  (To use an existing BOSH release instead: root@boshcli:~# bosh upload release ~/bosh-workspace/bosh/release/releases/bosh-99.yml )

4. Confirm the uploaded stemcell and release:

   root@boshcli:~# bosh stemcells

   root@boshcli:~# bosh releases

5. Create the manifest for deploying BOSH

root@boshcli:~# mkdir -p ~/bosh-workspace/deployments/bosh-openstack

root@boshcli:~# cd ~/bosh-workspace/deployments/bosh-openstack 

root@bosh-cli:~# cp ~/bosh-workspace/bosh/release/examples/bosh-openstack-manual.yml bosh-openstack.yml 

6. Edit bosh-openstack.yml

---
name: bosh-openstack
director_uuid: b9c17bd2-2e53-452f-a8e2-a6bfe391aca5 # CHANGE


release:
  name: bosh
  version: 99


compilation:
  workers: 2
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: cf.small # CHANGE


update:
  canaries: 1
  canary_watch_time: 3000-120000
  update_watch_time: 3000-120000
  max_in_flight: 4


networks:
  - name: floating
    type: vip
    cloud_properties: {}
  - name: default
    type: manual
    subnets:
      - name: private
        range: 171.71.71.0/24 # CHANGE
        gateway: 171.71.71.1 # CHANGE
        reserved:
          - 171.71.71.2 - 171.71.71.60 # CHANGE
        static:
          - 171.71.71.61 - 171.71.71.100 # CHANGE
        cloud_properties:
          net_id: 0bb4ff64-4413-41a9-9c3b-b93d7b6f6db1 # CHANGE


resource_pools:
  - name: common
    network: default
    size: 8
    stemcell:
      name: bosh-openstack-kvm-ubuntu-trusty-go_agent
      version: 2719.2
    cloud_properties:
      instance_type: cf.small # CHANGE


jobs:
  - name: nats
    template: nats
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]
        static_ips:
          - 171.71.71.62 # CHANGE


  - name: redis
    template: redis
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]
        static_ips:
          - 171.71.71.63 # CHANGE


  - name: postgres
    template: postgres
    instances: 1
    resource_pool: common
    persistent_disk: 16384
    networks:
      - name: default
        default: [dns, gateway]
        static_ips:
          - 171.71.71.68 # CHANGE


  - name: powerdns
    template: powerdns
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]
        static_ips:
          - 171.71.71.64 # CHANGE


  - name: blobstore
    template: blobstore
    instances: 1
    resource_pool: common
    persistent_disk: 51200
    networks:
      - name: default
        default: [dns, gateway]
        static_ips:
          - 171.71.71.65 # CHANGE


  - name: director
    template: director
    instances: 1
    resource_pool: common
    persistent_disk: 16384
    networks:
      - name: default
        default: [dns, gateway]
        static_ips:
          - 171.71.71.69 # CHANGE
      - name: floating
        static_ips:
          - 10.110.13.35 # CHANGE


  - name: registry
    template: registry
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]
        static_ips:
          - 171.71.71.66 # CHANGE


  - name: health_monitor
    template: health_monitor
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]
        static_ips:
          - 171.71.71.67 # CHANGE


properties:
  nats:
    address: 171.71.71.62 # CHANGE
    user: nats
    password: c1oudc0w


  redis:
    address: 171.71.71.63 # CHANGE
    password: redis


  postgres: &bosh_db
    host: 171.71.71.68 # CHANGE
    user: postgres
    password: postgres
    database: bosh


  dns:
    address: 171.71.71.64 # CHANGE
    db: *bosh_db
    recursor: 10.110.13.36 # CHANGE


  blobstore:
    address: 171.71.71.65 # CHANGE
    agent:
      user: agent
      password: agent
    director:
      user: director
      password: director


  director:
    name: bosh
    address: 171.71.71.69 # CHANGE
    db: *bosh_db


  registry:
    address: 171.71.71.66 # CHANGE
    db: *bosh_db
    http:
      user: registry
      password: registry


  hm:
    http:
      user: hm
      password: hm
    director_account:
      user: admin
      password: admin
    resurrector_enabled: true


  ntp:
    - 0.north-america.pool.ntp.org
    - 1.north-america.pool.ntp.org


  openstack:
    auth_url: http://10.110.13.2:5000/v2.0 # CHANGE
    username: cloudfoundry # CHANGE
    api_key: 123456a? # CHANGE
    tenant: cloudfoundry # CHANGE
    default_security_groups: ["default"] # CHANGE
    default_key_name: vkey # CHANGE

7. Deploy BOSH

     root@boshcli:~/bosh-workspace/deployments# bosh deployment ~/bosh-workspace/deployments/bosh-openstack/bosh-openstack.yml

     root@boshcli:~/bosh-workspace/deployments# bosh deploy 

Deploying CloudFoundry with BOSH

 1. Run the following commands in sequence to fetch and update the code from GitHub:

    root@boshcli:~# mkdir -p ~/src/cloudfoundry

    root@boshcli:~# cd ~/src/cloudfoundry

    root@boshcli:~/src/cloudfoundry# git clone -b release-candidate https://github.com/cloudfoundry/cf-release.git

    root@boshcli:~/src/cloudfoundry# cd cf-release

root@boshcli:~/src/cloudfoundry/cf-release# ./update

 2. Build the CloudFoundry release package:

    root@boshcli:~/src/cloudfoundry/cf-release# bosh create release --force

  This builds a release from the latest CloudFoundry source. To use an already-tested published release instead:

    root@boshcli:~/src/cloudfoundry/cf-release# bosh create release releases/cf-183.yml

 This produces a .tgz archive in the releases directory, e.g. cf-183.tgz.

Note: the steps above are very time-consuming and prone to response timeouts. Alternatively, use the release package provided by the GitHub community; the download URL for cf-183 is:

https://community-shared-boshreleases.s3.amazonaws.com/boshrelease-cf-183.tgz
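For example, fetch it straight into the releases directory so the upload step below works unchanged (a sketch):

    root@boshcli:~/src/cloudfoundry/cf-release# wget https://community-shared-boshreleases.s3.amazonaws.com/boshrelease-cf-183.tgz -O releases/cf-183.tgz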

  3. Connect to the director, then upload the CloudFoundry BOSH release and the stemcell

  root@boshcli:~/src/cloudfoundry/cf-release# bosh target https://10.110.13.32:25555

  root@boshcli:~/src/cloudfoundry/cf-release# bosh login

  root@boshcli:~/src/cloudfoundry/cf-release# bosh upload stemcell ~/bosh-workspace/stemcells/bosh-stemcell-2917.2-openstack-kvm-ubuntu.tgz

  root@boshcli:~/src/cloudfoundry/cf-release# bosh upload release releases/cf-183.tgz

 Check that the uploads succeeded:

  root@boshcli:~# bosh releases

  root@boshcli:~# bosh stemcells 

  The login username/password for this stemcell are root/c1oudc0w.

 4. Create the manifest file needed to deploy CloudFoundry

    root@boshcli:~/src/cloudfoundry/cf-release# cd ~/bosh-workspace/deployments/

    root@boshcli:~/bosh-workspace/deployments# vi cf-183.yml

 

5. Deploy:

     root@boshcli:~/bosh-workspace/deployments# bosh deployment cf-183.yml

     root@boshcli:~/bosh-workspace/deployments# bosh deploy

6. Deployment completes successfully.


The cf-183.yml used in this environment:

<%
director_uuid = 'b9c17bd2-2e53-452f-a8e2-a6bfe391aca5'
root_domain = "inspurapp.com"
deployment_name = 'cf'
cf_release = '183'
protocol = 'http'
common_password = 'c1oudc0w'
%>
---
name: <%= deployment_name %>
director_uuid: <%= director_uuid %>


releases:
 - name: cf
   version: <%= cf_release %>


compilation:
  workers: 2
  network: shared
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: cf.small


update:
  canaries: 0
  canary_watch_time: 30000-600000
  update_watch_time: 30000-600000
  max_in_flight: 32
  serial: false


networks:
  - name: shared
    type: dynamic
    cloud_properties:
      net_id: 0bb4ff64-4413-41a9-9c3b-b93d7b6f6db1
      security_groups:
        - default
  - name: floating
    type: vip
    cloud_properties: {}


resource_pools:
  - name: common
    network: shared
    size: 13
    stemcell:
      name: bosh-openstack-kvm-ubuntu-trusty-go_agent
      version: 2719.2
    cloud_properties:
      instance_type: cf.small
  
  - name: medium
    network: shared
    size: 2
    stemcell:
      name: bosh-openstack-kvm-ubuntu-trusty-go_agent
      version: 2719.2
    cloud_properties:
      instance_type: cf.medium
      
  - name: large
    network: shared
    size: 2
    stemcell:
      name: bosh-openstack-kvm-ubuntu-trusty-go_agent
      version: 2719.2
    cloud_properties:
      instance_type: cf.big


jobs:
  - name: nats
    templates:
      - name: nats
      - name: nats_stream_forwarder
    instances: 1
    resource_pool: common
    networks:
      - name: shared
        default: [dns, gateway]


  - name: health_manager
    templates:
      - name: hm9000
    instances: 2
    resource_pool: common
    networks:
      - name: shared
        default: [dns, gateway]
        
  - name: etcd
    templates:
      - name: etcd
    instances: 1
    resource_pool: common
    networks:
      - name: shared
        default: [dns, gateway]
          
  - name: syslog_aggregator
    templates:
      - name: syslog_aggregator
    instances: 1
    resource_pool: common
    persistent_disk: 40960
    networks:
      - name: shared
        default: [dns, gateway]


  - name: nfs_server
    templates:
      - name: debian_nfs_server
    instances: 1
    resource_pool: common
    persistent_disk: 40960
    networks:
      - name: shared
        default: [dns, gateway]


  - name: postgres
    templates:
      - name: postgres
    instances: 1
    resource_pool: common
    persistent_disk: 40960
    networks:
      - name: shared
        default: [dns, gateway]
    properties:
      db: databases


  - name: loggregator
    templates:
      - name: loggregator
    instances: 1
    resource_pool: common
    networks:
      - name: shared
        default: [dns, gateway]


  - name: trafficcontroller
    templates:
      - name: loggregator_trafficcontroller
    instances: 1
    resource_pool: common
    networks:
      - name: shared
        default: [dns, gateway]


  - name: cloud_controller
    templates:
      - name: cloud_controller_ng
    instances: 2
    resource_pool: common
    networks:
      - name: shared
        default: [dns, gateway]
    properties:
      db: ccdb
      
  - name: uaa
    templates:
      - name: uaa
    instances: 2
    resource_pool: common
    networks:
      - name: shared
        default: [dns, gateway]
        
  - name: dea
    templates:
      - name: dea_logging_agent
      - name: dea_next
    instances: 2
    resource_pool: large
    networks:
      - name: shared
        default: [dns, gateway]


  - name: router
    templates:
      - name: gorouter
    instances: 2
    resource_pool: medium
    networks:
      - name: shared
        default: [dns, gateway]
    properties:
      metron_agent:
        zone: nova
        
properties:
  domain: <%= root_domain %>
  system_domain: <%= root_domain %>
  system_domain_organization: 'admin'
  app_domains:
    - <%= root_domain %>


  haproxy: {}


  networks: 
    apps: shared 


  nats:
    user: nats
    password: <%= common_password %>
    address: 0.nats.shared.<%= deployment_name %>.microbosh
    port: 4222
    machines:
      - 0.nats.shared.<%= deployment_name %>.microbosh


  syslog_aggregator:
    address: 0.syslog-aggregator.shared.<%= deployment_name %>.microbosh
    port: 54321


  nfs_server:
    address: 0.nfs-server.shared.<%= deployment_name %>.microbosh
    network: "*.<%= deployment_name %>.microbosh"
    allow_from_entries:
      - 171.71.71.0/24


  debian_nfs_server:
    no_root_squash: true


  loggregator_endpoint:
    shared_secret: <%= common_password %>
    host: 0.trafficcontroller.shared.<%= deployment_name %>.microbosh


  loggregator:
    zone: nova
    servers:
      zone:
        -  0.loggregator.shared.<%= deployment_name %>.microbosh


  traffic_controller:
    zone: 'nova'


  logger_endpoint:
    use_ssl: <%= protocol == 'https' %>
    port: 80


  ssl:
    skip_cert_verify: true


  router:
    endpoint_timeout: 60
    status:
      port: 8080
      user: gorouter
      password: <%= common_password %>
    servers:
      z1:
        - 0.router.shared.<%= deployment_name %>.microbosh
      z2:
        - 1.router.shared.<%= deployment_name %>.microbosh


  etcd:
    machines:
      - 0.etcd.shared.<%= deployment_name %>.microbosh


  dea: &dea
    disk_mb: 40960
    disk_overcommit_factor: 2
    memory_mb: 8192
    memory_overcommit_factor: 1    
    directory_server_protocol: <%= protocol %>
    mtu: 1460
    deny_networks:
      - 169.254.0.0/16 # Google Metadata endpoint


  dea_next: *dea
  
  metron_agent: 
    zone: nova
  metron_endpoint:
    zone: nova
    shared_secret: <%= common_password %>
  
  disk_quota_enabled: true


  dea_logging_agent:
    status:
      user: admin
      password: <%= common_password %>


  databases: &databases
    db_scheme: postgres
    address: 0.postgres.shared.<%= deployment_name %>.microbosh
    port: 5524
    roles:
      - tag: admin
        name: ccadmin
        password: <%= common_password %>
      - tag: admin
        name: uaaadmin
        password: <%= common_password %>
    databases:
      - tag: cc
        name: ccdb
        citext: true
      - tag: uaa
        name: uaadb
        citext: true


  ccdb: &ccdb
    db_scheme: postgres
    address: 0.postgres.shared.<%= deployment_name %>.microbosh
    port: 5524
    roles:
      - tag: admin
        name: ccadmin
        password: <%= common_password %>
    databases:
      - tag: cc
        name: ccdb
        citext: true


  ccdb_ng: *ccdb


  uaadb:
    db_scheme: postgresql
    address: 0.postgres.shared.<%= deployment_name %>.microbosh
    port: 5524
    roles:
      - tag: admin
        name: uaaadmin
        password: <%= common_password %>
    databases:
      - tag: uaa
        name: uaadb
        citext: true


  cc: &cc
    security_group_definitions : []
    default_running_security_groups : []
    default_staging_security_groups : []
    srv_api_uri: <%= protocol %>://api.<%= root_domain %>    
    jobs:
      local:
        number_of_workers: 2
      generic:
        number_of_workers: 2
      global:
        timeout_in_seconds: 14400
      app_bits_packer:
        timeout_in_seconds: null
      app_events_cleanup:
        timeout_in_seconds: null
      app_usage_events_cleanup:
        timeout_in_seconds: null
      blobstore_delete:
        timeout_in_seconds: null
      blobstore_upload:
        timeout_in_seconds: null
      droplet_deletion:
        timeout_in_seconds: null
      droplet_upload:
        timeout_in_seconds: null
      model_deletion:
        timeout_in_seconds: null
    bulk_api_password: <%= common_password %>
    staging_upload_user: upload
    staging_upload_password: <%= common_password %>
    quota_definitions:
      default:
        memory_limit: 10240
        total_services: 100
        non_basic_services_allowed: true
        total_routes: 1000
        trial_db_allowed: true
    resource_pool:
      resource_directory_key: cloudfoundry-resources
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    packages:
      app_package_directory_key: cloudfoundry-packages
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    droplets:
      droplet_directory_key: cloudfoundry-droplets
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    buildpacks:
      buildpack_directory_key: cloudfoundry-buildpacks
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared        
    install_buildpacks:
      - name: java_buildpack
        package: buildpack_java_offline
      - name: ruby_buildpack
        package: buildpack_ruby
      - name: nodejs_buildpack
        package: buildpack_nodejs
      - name: go_buildpack
        package: buildpack_go
      - name: php_buildpack
        package: buildpack_php
      - name: python_buildpack
        package: buildpack_python
    db_encryption_key: <%= common_password %>
    hm9000_noop: false
    diego: false
    newrelic:
      license_key: null
      environment_name: <%= deployment_name %>       


  ccng: *cc


  login:
    enabled: false


  uaa:
    url: <%= protocol %>://uaa.<%= root_domain %>
    no_ssl: <%= protocol == 'http' %>
    cc:
      client_secret: <%= common_password %>
    admin:
      client_secret: <%= common_password %>
    batch:
      username: batch
      password: <%= common_password %>
    clients:
      cf:
        override: true
        authorized-grant-types: password,implicit,refresh_token
        authorities: uaa.none
        scope: cloud_controller.read,cloud_controller.write,openid,password.write,cloud_controller.admin,scim.read,scim.write
        access-token-validity: 7200
        refresh-token-validity: 1209600
      admin:
        secret: <%= common_password %>
        authorized-grant-types: client_credentials
        authorities: clients.read,clients.write,clients.secret,password.write,scim.read,uaa.admin
    scim:
      users:
      - admin|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin,uaa.admin,password.write
      - services|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin
    jwt:
      signing_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1
        JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6
        0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB
        AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA
        Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0
        KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J
        duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE
        xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8
        +5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek
        lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h
        jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh
        HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+
        4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=
        -----END RSA PRIVATE KEY-----
      verification_key: |
        -----BEGIN PUBLIC KEY-----
        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d
        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX
        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug
        spULZVNRxq7veq/fzwIDAQAB
        -----END PUBLIC KEY-----

 

Configuring DNS

  Create a new virtual machine to act as the DNS server (this article uses an existing DNS server at 10.106.1.36).

1. Install the BIND9 package

  root@dns:~# sudo apt-get install bind9

 

2. Two files need to be edited and two new files added, as follows:

Edit /etc/bind/named.conf.options and uncomment the forwarders block. The IPs there are the DNS servers provided by your network operator; here we use Google's DNS:

forwarders {  

       8.8.8.8;  

       8.8.4.4;  

};  

Edit /etc/bind/named.conf.local and append the forward and reverse zone definitions:

zone  "iae.me" {  

     type master;  

     file "/etc/bind/db.iae.me";  

};  

  

zone  "26.68.10.in-addr.arpa" {  

     type master;  

     file "/etc/bind/db.26.68.10";  

};  

Note: 26.68.10 is the first three octets of the target IP 10.68.26.91 (the HAProxy address) in reverse order, as in-addr.arpa notation requires; it designates the address block.

Create the forward zone file /etc/bind/db.iae.me for the domain iae.me, with the following content:

;  

; BIND data file for dev sites  

;  

$TTL    604800  

@       IN      SOA     mycloud.com. root.mycloud.com. (  

                              1         ; Serial  

                         604800         ; Refresh  

                          86400         ; Retry  

                        2419200         ; Expire  

                         604800 )       ; Negative Cache TTL  

;  

@       IN      NS      mycloud.com.  

@       IN      A       10.68.26.91 

*.iae.me.  14400   IN      A       10.68.26.91  

Create the reverse zone file /etc/bind/db.26.68.10 (the name must match the file referenced in named.conf.local), with the following content:

;  

; BIND reverse data file for dev domains  

;  

$TTL    604800  

@       IN      SOA     dev. root.dev. (  

                              1         ; Serial  

                         604800         ; Refresh  

                          86400         ; Retry  

                        2419200         ; Expire  

                         604800 )       ; Negative Cache TTL  

;  

@        IN      NS      iae.me.  

91      IN      PTR     iae.me.  


3. Restart the BIND9 service

   root@dns:~# service bind9 restart
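To verify the wildcard zone, any name under iae.me should resolve to the HAProxy address (the expected answer follows from the zone file above):

   root@dns:~# dig @10.106.1.36 test.iae.me +short
   10.68.26.91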

 

 Note: DNS should be set up during system planning, and its address configured when creating the OpenStack networks, so that every VM's default DNS servers include it.

Configuring HAProxy

This article builds and installs HAProxy from source:

1. root@haproxy: tar xvf haproxy-1.5.0.tar.gz

2. root@haproxy: cd haproxy-1.5.0

3. root@haproxy: make TARGET=ubuntu34

4. root@haproxy: make install PREFIX=/usr/local/haproxy

5. root@haproxy: cd /etc/

    mkdir haproxy

    cd haproxy/

    vi haproxy.cfg   # add the following configuration:

   global

    daemon

    maxconn 300000

    spread-checks 4

    nbproc 8

    log 127.0.0.1 local0 info

defaults

    log global

    #log 10.41.2.86:5140 syslog

    #log 10.106.1.34:5140 syslog

    option httplog

     mode http

   # log  127.0.0.1   local0 info

    timeout connect 30000ms

    timeout client 300000ms

    timeout server 300000ms

    # maxconn 320000

   # option http-pretend-keepalive

    option dontlognull

    option forwardfor

    option redispatch

    option abortonclose

listen admin_stats

       bind 0.0.0.0:1080               # listening port

        mode http                       # HTTP layer-7 mode

        option httplog                  # use the HTTP log format

        maxconn 10

        stats refresh 30s               # stats page auto-refresh interval

        stats uri /stats                # stats page URL

        stats realm XingCloud\ Haproxy  # realm text shown in the stats auth prompt

        stats auth admin:admin          # stats page username and password

        stats hide-version              # hide the HAProxy version on the stats page

frontend http-in

    mode http

    bind *:80

    log-format ^%ci:%cp^[%t]^%ft^%b/%s^%hr^%r^%ST^%B^%Tr^%Ts

    capture request header Host len 32

    reqadd X-Forwarded-Proto:\ http

    default_backend http-routers

 

backend tcp-routers

    mode tcp

    balance source

      #   server node1 10.106.1.46:80 weight 10

      #   server node2 10.106.1.57:80 weight 10

          server node1 192.168.136.148:80  weight 10 cookie app1inst1 check inter 2000 rise 2 fall 5 maxconn 10000

          server node2 192.168.136.155:80  weight 10 cookie app1inst2 check inter 2000 rise 2 fall 5 maxconn 10000

      #    server node3 10.106.1.27:80  weight 10 cookie app1inst2 check inter 2000 rise 2 fall 5 maxconn 10000

backend http-routers

    mode http

    balance source

         #server node1 10.106.1.46:80  weight 50 cookie app1inst1 check inter 2000 rise 2 fall 5

       # server node2 10.106.1.57:80  weight 3

          

       server node1 192.168.136.148:80  weight 50 cookie app1inst1 check inter 2000 rise 2 fall 5 maxconn 10000

       server node2 192.168.136.155:80  weight 50 cookie app1inst2 check inter 2000 rise 2 fall 5 maxconn 10000    

       #server node3 10.106.1.27:80  weight 50 cookie app1inst3 check inter 2000 rise 2 fall 5 maxconn 10000

 6. Start HAProxy

     /usr/local/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg

     To stop HAProxy, use kill -9.

 7. To get the best performance out of HAProxy, some tuning is needed. The settings currently applied on the HAProxy server:

    Load the connection-tracking module: modprobe ip_conntrack

     Edit /etc/sysctl.conf:

net.ipv4.ip_forward = 0

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 0

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.shmmax = 68719476736

kernel.shmall = 4294967296

net.ipv4.tcp_max_tw_buckets = 6000

net.ipv4.tcp_sack = 1

net.ipv4.tcp_window_scaling = 1

net.ipv4.tcp_rmem = 4096 87380 4194304

net.ipv4.tcp_wmem = 4096 16384 4194304

net.core.wmem_default = 8388608

net.core.rmem_default = 8388608

net.core.rmem_max = 16777216

net.core.wmem_max = 16777216

net.core.netdev_max_backlog = 262144

net.core.somaxconn = 262144

net.ipv4.tcp_max_orphans = 3276800

net.ipv4.tcp_max_syn_backlog = 262144

net.ipv4.tcp_timestamps = 0

net.ipv4.tcp_synack_retries = 1

net.ipv4.tcp_syn_retries = 1

net.ipv4.tcp_tw_recycle = 1

net.ipv4.tcp_tw_reuse = 1

net.ipv4.tcp_mem = 94500000 915000000 927000000

net.ipv4.tcp_fin_timeout = 1

net.ipv4.tcp_keepalive_time = 30

net.ipv4.ip_local_port_range = 1024 65000

net.nf_conntrack_max = 1024000
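Apply the settings without rebooting:

    root@haproxy:~# sysctl -p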

Testing

Install CF's command-line client (cli); download from: https://github.com/cloudfoundry/cli
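A quick smoke test once the CLI is installed (a sketch, assuming the system domain inspurapp.com and the admin credentials from cf-183.yml above):

    cf api http://api.inspurapp.com
    cf login -u admin -p c1oudc0w -o admin
    cf push myapp   # push an app from its source directory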

 

Scaling CloudFoundry Out and In

 Scaling out and in is done by editing the deployment manifest, for example as sketched below.
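A sketch of scaling the DEAs from 2 to 4 instances in cf-183.yml (note that the resource pool `size` must cover the total number of instances using it):

  - name: large
    size: 4            # was 2
    ...

  - name: dea
    ...
    instances: 4       # was 2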

 

 

Run the following commands to apply the change:

     root@boshcli:~/bosh-workspace/deployments# bosh deployment cf-183.yml

     root@boshcli:~/bosh-workspace/deployments# bosh deploy 

Rolling Upgrade of CloudFoundry

     Not yet verified.

Building an Offline Buildpack

1. Download the official java-buildpack

   git clone https://github.com/cloudfoundry/java-buildpack.git

2. Install the project dependencies

     cd java-buildpack  

     bundle install

3. Build the offline buildpack

   bundle exec rake package OFFLINE=true

4. Upload the buildpack

    cf create-buildpack java-buildpack-offline build/java-buildpack-offline-c19642c.zip 5 --enable
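An app can then be pushed against the offline buildpack explicitly, e.g.:

    cf push myapp -b java-buildpack-offline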

 

Environment Summary

  10.68.26.91 HAProxy

  10.68.26.87 BOSH CLI client

  10.68.26.92 cf CLI client

  BOSH director: 10.68.26.

Common Problems:

 1. The OpenStack tenant runs out of volume quota; increase it from the command line:

       cinder quota-update tenant_id --volumes 500   # tenant_id is the ID of the admin tenant (get it with `keystone tenant-list`)

  Restart the Cinder services:

       cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done

 

2. Micro BOSH's DNS does not take effect.

    Solution: re-plan the network so that the CloudFoundry subnet is configured with the internal DNS server and the whole subnet uses it; i.e. create a dedicated subnet for CF with its DNS set to 10.106.1.36.

      Note that changing DNS settings in OpenStack requires restarting the virtual machines.

 

3. Use a dedicated cloudfoundry tenant (credentials: cloudfoundry / 123456a?).

4. CF's default Java buildpack works online; an offline buildpack must be built, as described in the section above.

5. Adjust the Java buildpack's Tomcat parameters (with the default settings, application startup takes very long).

6. How to configure multiple HMs and multiple NATS instances.

7. If OpenStack is configured for access by hostname, add hosts entries on the BOSH CLI client machine and also on the BOSH director VM: add  *.*.*.*  controller  (substituting the controller's actual IP).

