HyperLedger Fabric in Action, Part 6: Fabric Kafka Cluster Deployment

A total of 13 machines are required:

  • 3 zookeeper nodes
  • 3 orderer nodes
  • 4 kafka brokers
  • 3 peer nodes

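For reference, the compose files below assume the following host-to-IP mapping (taken from their extra_hosts entries); adjust it to your own environment:

  • zookeeper1 ~ zookeeper3: 192.168.24.201 ~ 203
  • kafka1 ~ kafka4: 192.168.24.204 ~ 207
  • orderer0 ~ orderer2.example.com: 192.168.24.208 ~ 210
  • peer0.org1.example.com: 192.168.24.211
  • foo27.org2.example.com: 192.168.24.212
  • bar.org2.example.com: 192.168.24.213 (inferred from its CouchDB address)
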
1. Customize the crypto-config.yaml configuration

OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer0
      - Hostname: orderer1
      - Hostname: orderer2

PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 2
    Users:
      Count: 1
    Specs:
      - Hostname: foo
        CommonName: foo27.org2.example.com
      - Hostname: bar
      - Hostname: baz

  - Name: Org3
    Domain: org3.example.com
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org4
    Domain: org4.example.com
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org5
    Domain: org5.example.com
    Template:
      Count: 2
    Users:
      Count: 1

Besides the Template section, Org2 also uses custom node definitions under Specs; these will be verified later on.

2. Generate the node configuration files

cd ~/fabric/aberic/
./bin/cryptogen generate --config=./crypto-config.yaml

The generated organization files are placed under the ~/fabric/aberic/crypto-config/peerOrganizations directory (and under ordererOrganizations for the orderer organization).
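
As an optional sanity check, list a few of the generated directories. With the configuration above, Org2 should contain peer folders for peer0/peer1 (from the Template) plus foo27, bar and baz (from the Specs), with foo named foo27.org2.example.com because of its CommonName override:

ls ./crypto-config/ordererOrganizations/example.com/orderers
ls ./crypto-config/peerOrganizations/org2.example.com/peers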

3. configtx configuration

The configtx.yaml file must be kept consistent with crypto-config.yaml (organization names, domains and MSP directories).

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

---
################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:

    - &OrdererOrg
        Name: OrdererMSP
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/example.com/msp

    - &Org1
        Name: Org1MSP
        ID: Org1MSP

        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp

        AnchorPeers:
            - Host: peer0.org1.example.com
              Port: 7051

    - &Org2
        Name: Org2MSP
        ID: Org2MSP

        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp

        AnchorPeers:
            - Host: peer0.org2.example.com
              Port: 7051

    - &Org3
        Name: Org3MSP
        ID: Org3MSP

        MSPDir: crypto-config/peerOrganizations/org3.example.com/msp

        AnchorPeers:
            - Host: peer0.org3.example.com
              Port: 7051

    - &Org4
        Name: Org4MSP
        ID: Org4MSP

        MSPDir: crypto-config/peerOrganizations/org4.example.com/msp

        AnchorPeers:
            - Host: peer0.org4.example.com
              Port: 7051

    - &Org5
        Name: Org5MSP
        ID: Org5MSP

        MSPDir: crypto-config/peerOrganizations/org5.example.com/msp

        AnchorPeers:
            - Host: peer0.org5.example.com
              Port: 7051

################################################################################
#
#   SECTION: Orderer
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults

    OrdererType: kafka

    Addresses:
        - orderer0.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050

    BatchTimeout: 2s

    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 98 MB
        PreferredMaxBytes: 512 KB

    Kafka:
        Brokers:
            - 192.168.24.204:9092
            - 192.168.24.205:9092
            - 192.168.24.206:9092
            - 192.168.24.207:9092

    Organizations:

################################################################################
#
#   SECTION: Application
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults

    Organizations:

Capabilities:
    Global: &ChannelCapabilities
        V1_1: true

    Orderer: &OrdererCapabilities
        V1_1: true

    Application: &ApplicationCapabilities
        V1_1: true

################################################################################
#
#   Profile
#
#   - Different configuration profiles may be encoded here to be specified
#   as parameters to the configtxgen tool
#
################################################################################
Profiles:

    TwoOrgsOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
                    - *Org3
                    - *Org4
                    - *Org5
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2
                - *Org3
                - *Org4
                - *Org5

4. Generate the genesis block and the channel configuration file

./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block

./bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel
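
To confirm that the profiles were encoded as expected, configtxgen can also dump the generated artifacts as JSON (optional):

./bin/configtxgen -inspectBlock ./channel-artifacts/genesis.block
./bin/configtxgen -inspectChannelCreateTx ./channel-artifacts/mychannel.tx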

5. ZooKeeper configuration

docker-zookeeper1.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# ZooKeeper's basic operating flow:
# 1. Elect a leader.
# 2. Synchronize data.
# 3. There are many leader-election algorithms, but the criteria a leader must satisfy are the same.
# 4. The leader must hold the highest execution ID, similar to root authority.
# 5. A majority of the machines in the cluster must respond to and follow the elected leader.
#

version: '2'

services:

  zookeeper1:
    container_name: zookeeper1
    hostname: zookeeper1
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://zookeeper.apache.org/doc/r3.4.9/zookeeperAdmin.html#sc_configuration
      # ========================================================================
      #
      # myid
      # The ID must be unique within the ensemble and should have a value
      # between 1 and 255.
      - ZOO_MY_ID=1
      #
      # server.x=[hostname]:nnnnn[:nnnnn]
      # The list of servers that make up the ZK ensemble. The list that is used
      # by the clients must match the list of ZooKeeper servers that each ZK
      # server has. There are two port numbers `nnnnn`. The first is what
      # followers use to connect to the leader, while the second is for leader
      # election.
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207

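docker-zookeeper2.yaml and docker-zookeeper3.yaml below are identical to the file above except for the service/container/host name and ZOO_MY_ID (2 and 3 respectively); each runs on its own server.
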
docker-zookeeper2.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# ZooKeeper's basic operating flow:
# 1. Elect a leader.
# 2. Synchronize data.
# 3. There are many leader-election algorithms, but the criteria a leader must satisfy are the same.
# 4. The leader must hold the highest execution ID, similar to root authority.
# 5. A majority of the machines in the cluster must respond to and follow the elected leader.
#

version: '2'

services:

  zookeeper2:
    container_name: zookeeper2
    hostname: zookeeper2
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://zookeeper.apache.org/doc/r3.4.9/zookeeperAdmin.html#sc_configuration
      # ========================================================================
      #
      # myid
      # The ID must be unique within the ensemble and should have a value
      # between 1 and 255.
      - ZOO_MY_ID=2
      #
      # server.x=[hostname]:nnnnn[:nnnnn]
      # The list of servers that make up the ZK ensemble. The list that is used
      # by the clients must match the list of ZooKeeper servers that each ZK
      # server has. There are two port numbers `nnnnn`. The first is what
      # followers use to connect to the leader, while the second is for leader
      # election.
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207

docker-zookeeper3.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# ZooKeeper's basic operating flow:
# 1. Elect a leader.
# 2. Synchronize data.
# 3. There are many leader-election algorithms, but the criteria a leader must satisfy are the same.
# 4. The leader must hold the highest execution ID, similar to root authority.
# 5. A majority of the machines in the cluster must respond to and follow the elected leader.
#

version: '2'

services:

  zookeeper3:
    container_name: zookeeper3
    hostname: zookeeper3
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://zookeeper.apache.org/doc/r3.4.9/zookeeperAdmin.html#sc_configuration
      # ========================================================================
      #
      # myid
      # The ID must be unique within the ensemble and should have a value
      # between 1 and 255.
      - ZOO_MY_ID=3
      #
      # server.x=[hostname]:nnnnn[:nnnnn]
      # The list of servers that make up the ZK ensemble. The list that is used
      # by the clients must match the list of ZooKeeper servers that each ZK
      # server has. There are two port numbers `nnnnn`. The first is what
      # followers use to connect to the leader, while the second is for leader
      # election.
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207

6. Kafka configuration

docker-kafka1.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# Let K and Z denote the number of nodes in the Kafka cluster and in the ZooKeeper ensemble respectively.
#
# 1) The minimum value of K should be 4 (as explained in step 4, this is the minimum node count for crash
#    fault tolerance: with 4 brokers, the failure of one broker can be tolerated, channels can still be
#    read and written, and new channels can still be created).
# 2) Z may be 3, 5 or 7. It must be an odd number to avoid split-brain, and larger than 1 to avoid a
#    single point of failure. More than 7 ZooKeeper servers is considered overkill.
#

version: '2'

services:

  kafka1:
    container_name: kafka1
    hostname: kafka1
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://kafka.apache.org/documentation/#configuration
      # ========================================================================
      #
      # broker.id
      - KAFKA_BROKER_ID=1
      #
      # min.insync.replicas
      # Let the value of this setting be M. Data is considered committed when
      # it is written to at least M replicas (which are then considered in-sync
      # and belong to the in-sync replica set, or ISR). In any other case, the
      # write operation returns an error. Then:
      # 1. If up to M-N replicas -- out of the N (see default.replication.factor
      # below) that the channel data is written to -- become unavailable,
      # operations proceed normally.
      # 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
      # of M, so it stops accepting writes. Reads work without issues. The
      # channel becomes writeable again when M replicas get in-sync.
      - KAFKA_MIN_INSYNC_REPLICAS=2
      #
      # default.replication.factor
      # Let the value of this setting be N. A replication factor of N means that
      # each channel will have its data replicated to N brokers. These are the
      # candidates for the ISR set of a channel. As we noted in the
      # min.insync.replicas section above, not all of these brokers have to be
      # available all the time. In this sample configuration we choose a
      # default.replication.factor of K-1 (where K is the total number of brokers in
      # our Kafka cluster) so as to have the largest possible candidate set for
      # a channel's ISR. We explicitly avoid setting N equal to K because
      # channel creations cannot go forward if less than N brokers are up. If N
      # were set equal to K, a single broker going down would mean that we would
      # not be able to create new channels, i.e. the crash fault tolerance of
      # the ordering service would be non-existent.
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      #
      # zookeeper.connect
      # Point to the set of Zookeeper nodes comprising a ZK ensemble.
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      #
      # zookeeper.connection.timeout.ms
      # The max time that the client waits to establish a connection to
      # Zookeeper. If not set, the value in zookeeper.session.timeout.ms (below)
      # is used.
      #- KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS = 6000
      #
      # zookeeper.session.timeout.ms
      #- KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS = 6000
      #
      # socket.request.max.bytes
      # The maximum number of bytes in a socket request. ATTN: If you set this
      # env var, make sure to update `brokerConfig.Producer.MaxMessageBytes` in
      # `newBrokerConfig()` in `fabric/orderer/kafka/config.go` accordingly.
      #- KAFKA_SOCKET_REQUEST_MAX_BYTES=104857600 # 100 * 1024 * 1024 B
      #
      # message.max.bytes
      # The maximum size of envelope that the broker can receive.
      # 
      # The maximum block size is set in configtx.yaml (see the Orderer.AbsoluteMaxBytes parameter there).
      # Each block will have at most Orderer.AbsoluteMaxBytes bytes, not counting headers; call that value A.
      # message.max.bytes and replica.fetch.max.bytes should be set to a value greater than A, with some
      # buffer added for the headers -- 1 MB is more than enough. The settings must satisfy:
      # Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes
      # (For completeness, message.max.bytes should be strictly smaller than socket.request.max.bytes,
      # which defaults to 100 MB. If you need blocks larger than 100 MB you would have to edit the
      # hard-coded value brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and
      # rebuild the binary, which is not recommended.)
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # replica.fetch.max.bytes
      # The number of bytes of messages to attempt to fetch for each channel.
      # This is not an absolute maximum, if the fetched envelope is larger than
      # this value, the envelope will still be returned to ensure that progress
      # can be made. The maximum message size accepted by the broker is defined
      # via message.max.bytes above.
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # unclean.leader.election.enable
      # Data consistency is key in a blockchain environment. We cannot have a
      # leader chosen outside of the in-sync replica set, or we run the risk of
      # overwriting the offsets that the previous leader produced, and --as a
      # result-- rewriting the blockchain that the orderers produce.
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      #
      # log.retention.ms
      # Until the ordering service in Fabric adds support for pruning of the
      # Kafka logs, time-based retention should be disabled so as to prevent
      # segments from expiring. (Size-based retention -- see
      # log.retention.bytes -- is disabled by default so there is no need to set
      # it explicitly.)
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207

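docker-kafka2.yaml through docker-kafka4.yaml below are identical to the file above except for the service/container/host name and KAFKA_BROKER_ID (2, 3 and 4 respectively); each broker runs on its own server.
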
docker-kafka2.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# Let K and Z denote the number of nodes in the Kafka cluster and in the ZooKeeper ensemble respectively.
#
# 1) The minimum value of K should be 4 (as explained in step 4, this is the minimum node count for crash
#    fault tolerance: with 4 brokers, the failure of one broker can be tolerated, channels can still be
#    read and written, and new channels can still be created).
# 2) Z may be 3, 5 or 7. It must be an odd number to avoid split-brain, and larger than 1 to avoid a
#    single point of failure. More than 7 ZooKeeper servers is considered overkill.
#

version: '2'

services:

  kafka2:
    container_name: kafka2
    hostname: kafka2
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://kafka.apache.org/documentation/#configuration
      # ========================================================================
      #
      # broker.id
      - KAFKA_BROKER_ID=2
      #
      # min.insync.replicas
      # Let the value of this setting be M. Data is considered committed when
      # it is written to at least M replicas (which are then considered in-sync
      # and belong to the in-sync replica set, or ISR). In any other case, the
      # write operation returns an error. Then:
      # 1. If up to M-N replicas -- out of the N (see default.replication.factor
      # below) that the channel data is written to -- become unavailable,
      # operations proceed normally.
      # 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
      # of M, so it stops accepting writes. Reads work without issues. The
      # channel becomes writeable again when M replicas get in-sync.
      - KAFKA_MIN_INSYNC_REPLICAS=2
      #
      # default.replication.factor
      # Let the value of this setting be N. A replication factor of N means that
      # each channel will have its data replicated to N brokers. These are the
      # candidates for the ISR set of a channel. As we noted in the
      # min.insync.replicas section above, not all of these brokers have to be
      # available all the time. In this sample configuration we choose a
      # default.replication.factor of K-1 (where K is the total number of brokers in
      # our Kafka cluster) so as to have the largest possible candidate set for
      # a channel's ISR. We explicitly avoid setting N equal to K because
      # channel creations cannot go forward if less than N brokers are up. If N
      # were set equal to K, a single broker going down would mean that we would
      # not be able to create new channels, i.e. the crash fault tolerance of
      # the ordering service would be non-existent.
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      #
      # zookeeper.connect
      # Point to the set of Zookeeper nodes comprising a ZK ensemble.
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      #
      # zookeeper.connection.timeout.ms
      # The max time that the client waits to establish a connection to
      # Zookeeper. If not set, the value in zookeeper.session.timeout.ms (below)
      # is used.
      #- KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS = 6000
      #
      # zookeeper.session.timeout.ms
      #- KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS = 6000
      #
      # socket.request.max.bytes
      # The maximum number of bytes in a socket request. ATTN: If you set this
      # env var, make sure to update `brokerConfig.Producer.MaxMessageBytes` in
      # `newBrokerConfig()` in `fabric/orderer/kafka/config.go` accordingly.
      #- KAFKA_SOCKET_REQUEST_MAX_BYTES=104857600 # 100 * 1024 * 1024 B
      #
      # message.max.bytes
      # The maximum size of envelope that the broker can receive.
      # 
      # The maximum block size is set in configtx.yaml (see the Orderer.AbsoluteMaxBytes parameter there).
      # Each block will have at most Orderer.AbsoluteMaxBytes bytes, not counting headers; call that value A.
      # message.max.bytes and replica.fetch.max.bytes should be set to a value greater than A, with some
      # buffer added for the headers -- 1 MB is more than enough. The settings must satisfy:
      # Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes
      # (For completeness, message.max.bytes should be strictly smaller than socket.request.max.bytes,
      # which defaults to 100 MB. If you need blocks larger than 100 MB you would have to edit the
      # hard-coded value brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and
      # rebuild the binary, which is not recommended.)
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # replica.fetch.max.bytes
      # The number of bytes of messages to attempt to fetch for each channel.
      # This is not an absolute maximum, if the fetched envelope is larger than
      # this value, the envelope will still be returned to ensure that progress
      # can be made. The maximum message size accepted by the broker is defined
      # via message.max.bytes above.
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # unclean.leader.election.enable
      # Data consistency is key in a blockchain environment. We cannot have a
      # leader chosen outside of the in-sync replica set, or we run the risk of
      # overwriting the offsets that the previous leader produced, and --as a
      # result-- rewriting the blockchain that the orderers produce.
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      #
      # log.retention.ms
      # Until the ordering service in Fabric adds support for pruning of the
      # Kafka logs, time-based retention should be disabled so as to prevent
      # segments from expiring. (Size-based retention -- see
      # log.retention.bytes -- is disabled by default so there is no need to set
      # it explicitly.)
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207

docker-kafka3.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# Let K and Z denote the number of nodes in the Kafka cluster and in the ZooKeeper ensemble respectively.
#
# 1) The minimum value of K should be 4 (as explained in step 4, this is the minimum node count for crash
#    fault tolerance: with 4 brokers, the failure of one broker can be tolerated, channels can still be
#    read and written, and new channels can still be created).
# 2) Z may be 3, 5 or 7. It must be an odd number to avoid split-brain, and larger than 1 to avoid a
#    single point of failure. More than 7 ZooKeeper servers is considered overkill.
#

version: '2'

services:

  kafka3:
    container_name: kafka3
    hostname: kafka3
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://kafka.apache.org/documentation/#configuration
      # ========================================================================
      #
      # broker.id
      - KAFKA_BROKER_ID=3
      #
      # min.insync.replicas
      # Let the value of this setting be M. Data is considered committed when
      # it is written to at least M replicas (which are then considered in-sync
      # and belong to the in-sync replica set, or ISR). In any other case, the
      # write operation returns an error. Then:
      # 1. If up to M-N replicas -- out of the N (see default.replication.factor
      # below) that the channel data is written to -- become unavailable,
      # operations proceed normally.
      # 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
      # of M, so it stops accepting writes. Reads work without issues. The
      # channel becomes writeable again when M replicas get in-sync.
      - KAFKA_MIN_INSYNC_REPLICAS=2
      #
      # default.replication.factor
      # Let the value of this setting be N. A replication factor of N means that
      # each channel will have its data replicated to N brokers. These are the
      # candidates for the ISR set of a channel. As we noted in the
      # min.insync.replicas section above, not all of these brokers have to be
      # available all the time. In this sample configuration we choose a
      # default.replication.factor of K-1 (where K is the total number of brokers in
      # our Kafka cluster) so as to have the largest possible candidate set for
      # a channel's ISR. We explicitly avoid setting N equal to K because
      # channel creations cannot go forward if less than N brokers are up. If N
      # were set equal to K, a single broker going down would mean that we would
      # not be able to create new channels, i.e. the crash fault tolerance of
      # the ordering service would be non-existent.
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      #
      # zookeeper.connect
      # Point to the set of Zookeeper nodes comprising a ZK ensemble.
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      #
      # zookeeper.connection.timeout.ms
      # The max time that the client waits to establish a connection to
      # Zookeeper. If not set, the value in zookeeper.session.timeout.ms (below)
      # is used.
      #- KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS = 6000
      #
      # zookeeper.session.timeout.ms
      #- KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS = 6000
      #
      # socket.request.max.bytes
      # The maximum number of bytes in a socket request. ATTN: If you set this
      # env var, make sure to update `brokerConfig.Producer.MaxMessageBytes` in
      # `newBrokerConfig()` in `fabric/orderer/kafka/config.go` accordingly.
      #- KAFKA_SOCKET_REQUEST_MAX_BYTES=104857600 # 100 * 1024 * 1024 B
      #
      # message.max.bytes
      # The maximum size of envelope that the broker can receive.
      # 
      # The maximum block size is set in configtx.yaml (see the Orderer.AbsoluteMaxBytes parameter there).
      # Each block will have at most Orderer.AbsoluteMaxBytes bytes, not counting headers; call that value A.
      # message.max.bytes and replica.fetch.max.bytes should be set to a value greater than A, with some
      # buffer added for the headers -- 1 MB is more than enough. The settings must satisfy:
      # Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes
      # (For completeness, message.max.bytes should be strictly smaller than socket.request.max.bytes,
      # which defaults to 100 MB. If you need blocks larger than 100 MB you would have to edit the
      # hard-coded value brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and
      # rebuild the binary, which is not recommended.)
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # replica.fetch.max.bytes
      # The number of bytes of messages to attempt to fetch for each channel.
      # This is not an absolute maximum, if the fetched envelope is larger than
      # this value, the envelope will still be returned to ensure that progress
      # can be made. The maximum message size accepted by the broker is defined
      # via message.max.bytes above.
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # unclean.leader.election.enable
      # Data consistency is key in a blockchain environment. We cannot have a
      # leader chosen outside of the in-sync replica set, or we run the risk of
      # overwriting the offsets that the previous leader produced, and --as a
      # result-- rewriting the blockchain that the orderers produce.
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      #
      # log.retention.ms
      # Until the ordering service in Fabric adds support for pruning of the
      # Kafka logs, time-based retention should be disabled so as to prevent
      # segments from expiring. (Size-based retention -- see
      # log.retention.bytes -- is disabled by default so there is no need to set
      # it explicitly.)
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207

docker-kafka4.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# Let K and Z denote the number of nodes in the Kafka cluster and in the ZooKeeper ensemble respectively.
#
# 1) The minimum value of K should be 4 (as explained in step 4, this is the minimum node count for crash
#    fault tolerance: with 4 brokers, the failure of one broker can be tolerated, channels can still be
#    read and written, and new channels can still be created).
# 2) Z may be 3, 5 or 7. It must be an odd number to avoid split-brain, and larger than 1 to avoid a
#    single point of failure. More than 7 ZooKeeper servers is considered overkill.
#

version: '2'

services:

  kafka4:
    container_name: kafka4
    hostname: kafka4
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://kafka.apache.org/documentation/#configuration
      # ========================================================================
      #
      # broker.id
      - KAFKA_BROKER_ID=4
      #
      # min.insync.replicas
      # Let the value of this setting be M. Data is considered committed when
      # it is written to at least M replicas (which are then considered in-sync
      # and belong to the in-sync replica set, or ISR). In any other case, the
      # write operation returns an error. Then:
      # 1. If up to M-N replicas -- out of the N (see default.replication.factor
      # below) that the channel data is written to -- become unavailable,
      # operations proceed normally.
      # 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
      # of M, so it stops accepting writes. Reads work without issues. The
      # channel becomes writeable again when M replicas get in-sync.
      - KAFKA_MIN_INSYNC_REPLICAS=2
      #
      # default.replication.factor
      # Let the value of this setting be N. A replication factor of N means that
      # each channel will have its data replicated to N brokers. These are the
      # candidates for the ISR set of a channel. As we noted in the
      # min.insync.replicas section above, not all of these brokers have to be
      # available all the time. In this sample configuration we choose a
      # default.replication.factor of K-1 (where K is the total number of brokers in
      # our Kafka cluster) so as to have the largest possible candidate set for
      # a channel's ISR. We explicitly avoid setting N equal to K because
      # channel creations cannot go forward if less than N brokers are up. If N
      # were set equal to K, a single broker going down would mean that we would
      # not be able to create new channels, i.e. the crash fault tolerance of
      # the ordering service would be non-existent.
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      #
      # zookeeper.connect
      # Point to the set of Zookeeper nodes comprising a ZK ensemble.
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      #
      # zookeeper.connection.timeout.ms
      # The max time that the client waits to establish a connection to
      # Zookeeper. If not set, the value in zookeeper.session.timeout.ms (below)
      # is used.
      #- KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS = 6000
      #
      # zookeeper.session.timeout.ms
      #- KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS = 6000
      #
      # socket.request.max.bytes
      # The maximum number of bytes in a socket request. ATTN: If you set this
      # env var, make sure to update `brokerConfig.Producer.MaxMessageBytes` in
      # `newBrokerConfig()` in `fabric/orderer/kafka/config.go` accordingly.
      #- KAFKA_SOCKET_REQUEST_MAX_BYTES=104857600 # 100 * 1024 * 1024 B
      #
      # message.max.bytes
      # The maximum size of envelope that the broker can receive.
      # 
      # The maximum block size is set in configtx.yaml (see the Orderer.AbsoluteMaxBytes parameter there).
      # Each block will have at most Orderer.AbsoluteMaxBytes bytes, not counting headers; call that value A.
      # message.max.bytes and replica.fetch.max.bytes should be set to a value greater than A, with some
      # buffer added for the headers -- 1 MB is more than enough. The settings must satisfy:
      # Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes
      # (For completeness, message.max.bytes should be strictly smaller than socket.request.max.bytes,
      # which defaults to 100 MB. If you need blocks larger than 100 MB you would have to edit the
      # hard-coded value brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and
      # rebuild the binary, which is not recommended.)
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # replica.fetch.max.bytes
      # The number of bytes of messages to attempt to fetch for each channel.
      # This is not an absolute maximum, if the fetched envelope is larger than
      # this value, the envelope will still be returned to ensure that progress
      # can be made. The maximum message size accepted by the broker is defined
      # via message.max.bytes above.
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # unclean.leader.election.enable
      # Data consistency is key in a blockchain environment. We cannot have a
      # leader chosen outside of the in-sync replica set, or we run the risk of
      # overwriting the offsets that the previous leader produced, and --as a
      # result-- rewriting the blockchain that the orderers produce.
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      #
      # log.retention.ms
      # Until the ordering service in Fabric adds support for pruning of the
      # Kafka logs, time-based retention should be disabled so as to prevent
      # segments from expiring. (Size-based retention -- see
      # log.retention.bytes -- is disabled by default so there is no need to set
      # it explicitly.)
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207

The Kafka broker count should be at least 4, the minimum needed for crash fault tolerance: with 4 brokers, the failure of one broker can be tolerated, i.e. after one broker goes down the channel can still be read and written, and new channels can still be created.

7. Orderer configuration

docker-orderer0.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s 
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s 
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - aberic
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207

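docker-orderer1.yaml and docker-orderer2.yaml below differ from the file above only in the service/container name and in the MSP and TLS directories mounted from crypto-config; each orderer mounts the material generated for its own hostname.
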
docker-orderer1.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  orderer1.example.com:
    container_name: orderer1.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s 
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s 
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - aberic
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207

docker-orderer2.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  orderer2.example.com:
    container_name: orderer2.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s 
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s 
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - aberic
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207

8. Start the ZooKeeper cluster

docker-compose -f docker-zookeeper1.yaml up
docker-compose -f docker-zookeeper2.yaml up
docker-compose -f docker-zookeeper3.yaml up

Running without the -d flag prints the ZooKeeper startup logs directly to the console. To check each node's role, exec into one of the containers and query its status:

docker exec -it zookeeper2 bash
./bin/zkServer.sh status
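
If the ensemble formed correctly, one of the three nodes reports itself as leader and the other two as followers; the output looks roughly like this (assuming the stock ZooKeeper layout of the fabric-zookeeper image):

ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower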

9. Start the Kafka cluster

docker-compose -f docker-kafka1.yaml up
docker-compose -f docker-kafka2.yaml up
docker-compose -f docker-kafka3.yaml up
docker-compose -f docker-kafka4.yaml up

The "-d" flag is omitted here so that the Kafka startup logs can be watched directly.

If the startup above fails due to insufficient memory and this is only a test environment, the following parameter can be added to the environment section of each Kafka compose file (the files above already include it):

- KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
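
As an optional sanity check (this assumes the zkCli.sh client shipped inside the ZooKeeper image), you can verify that all four brokers have registered themselves with ZooKeeper; the ls command should print the broker IDs [1, 2, 3, 4]:

docker exec -it zookeeper1 bash
./bin/zkCli.sh -server localhost:2181 ls /brokers/ids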

10. Start the Orderer cluster

As with the ZooKeeper and Kafka nodes:

  1) upload docker-orderer0 ~ 2.yaml to the fabric/aberic directory on each orderer server;

  2) upload the genesis.block file generated earlier to each server's fabric/aberic/channel-artifacts directory;

  3) upload the node files generated from crypto-config.yaml, i.e. the ordererOrganizations directory under crypto-config, to the fabric/aberic/crypto-config directory on each server.

Inside ordererOrganizations there is an orderers directory containing three sub-folders; they do not all need to be uploaded to every orderer server: each server only needs the folder whose name matches its own orderer.

Then run the following on the three servers respectively:

docker-compose -f docker-orderer0.yaml up
docker-compose -f docker-orderer1.yaml up -d
docker-compose -f docker-orderer2.yaml up -d
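
Before moving on, it is worth tailing the log of one of the detached orderers and checking that it connects to the Kafka brokers without endlessly retrying, for example:

docker logs -f orderer1.example.com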

11. Testing the cluster environment

Because a peer's interaction stops at the ordering service (a peer does not care whether the ordering layer is Solo or Kafka), the peer configuration is similar to the earlier chapters.

Three compose files follow; they test not only the cluster features used so far but also the custom node definitions from crypto-config.yaml:

docker-peer0org1.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
    # for example map it to utilize Fauxton User Interface in dev environments.
    ports:
      - 5984:5984

  ca:
    container_name: ca
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/4867fff01ded23e5212f4d4b8ff02dad3c8c41e316fc3ddcb287eff9000d3dee_sk
    ports:
      - 7054:7054
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/4867fff01ded23e5212f4d4b8ff02dad3c8c41e316fc3ddcb287eff9000d3dee_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=192.168.24.211:5984

      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_NETWORKID=aberic
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP

      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - CORE_VM_DOCKER_TLS_ENABLED=false
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=false
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    depends_on:
      - couchdb
    networks:
      default:
        aliases:
          - aberic
    extra_hosts:
      - orderer0.example.com:192.168.24.208
      - orderer1.example.com:192.168.24.209
      - orderer2.example.com:192.168.24.210

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer0.org1.example.com
    extra_hosts:
      - orderer0.example.com:192.168.24.208
      - orderer1.example.com:192.168.24.209
      - orderer2.example.com:192.168.24.210
      - peer0.org1.example.com:192.168.24.211

Then:

  1) Upload docker-peer0org1.yaml to the aberic/ directory on the peer server;

  2) Upload the mychannel.tx channel file to the aberic/channel-artifacts directory;

  3) Upload peerOrganizations to the aberic/crypto-config directory, copying only the org1-related material.

Log in to the peer server and start the peer node service:

docker-compose -f docker-peer0org1.yaml up -d

Enter the cli container (docker exec -it cli bash) and create the channel:

peer channel create -o orderer0.example.com:7050 -c mychannel -t 50s -f ./channel-artifacts/mychannel.tx
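
The create command writes the resulting mychannel.block file into the current working directory of the cli container; the join command below reads it from there.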

Join the channel:

peer channel join -b mychannel.block

Install the chaincode:

peer chaincode install -n mychannel -p github.com/hyperledger/fabric/chaincode/go/chaincode_example02 -v 1.0

Instantiate the chaincode:

peer chaincode instantiate -o orderer0.example.com:7050 -C mychannel -n mychannel -c '{"Args":["init","A","10","B","10"]}' -P "OR ('Org1MSP.member','Org2MSP.member')" -v 1.0
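
Note that the chaincode name (-n mychannel) here just happens to match the channel name (-C mychannel); the two are independent identifiers.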

Query the balance of account A:

peer chaincode query -C mychannel -n mychannel -c '{"Args":["query","A"]}'
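
With the instantiation arguments above ("A","10"), the query should return 10.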

Back up the channel block file from peer0.org1:

docker cp cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/mychannel.block ./channel-artifacts/

Test the custom-named node foo27.org2 with docker-foo27org2.yaml:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    environment:
      - COUCHDB_USER=admin
      - COUCHDB_PASSWORD=123456
    # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
    # for example map it to utilize Fauxton User Interface in dev environments.
    ports:
      - 5984:5984

  ca:
    container_name: ca
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/43fc8bf11c4cadf28bbe5d19ccf3e9ae617b744dc18cb117ef50ef901ddad630_sk
    ports:
      - 7054:7054
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/43fc8bf11c4cadf28bbe5d19ccf3e9ae617b744dc18cb117ef50ef901ddad630_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config

  foo27.org2.example.com:
    container_name: foo27.org2.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=123456

      - CORE_PEER_ID=foo27.org2.example.com
      - CORE_PEER_NETWORKID=aberic
      - CORE_PEER_ADDRESS=foo27.org2.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=foo27.org2.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=foo27.org2.example.com:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=foo27.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP

      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - CORE_VM_DOCKER_TLS_ENABLED=false
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=false
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
      - ./crypto-config/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls:/etc/hyperledger/fabric/tls
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    depends_on:
      - couchdb
    networks:
      default:
        aliases:
          - aberic
    extra_hosts:
      - orderer0.example.com:192.168.24.208
      - orderer1.example.com:192.168.24.209
      - orderer2.example.com:192.168.24.210

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=foo27.org2.example.com:7051
      - CORE_PEER_CHAINCODELISTENADDRESS=foo27.org2.example.com:7052
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - foo27.org2.example.com
    extra_hosts:
      - orderer0.example.com:192.168.24.208
      - orderer1.example.com:192.168.24.209
      - orderer2.example.com:192.168.24.210
      - foo27.org2.example.com:192.168.24.212

Upload the docker-foo27org2.yaml file to the fabric/aberic directory on the target peer server (192.168.24.212, where foo27.org2.example.com runs as configured in extra_hosts), then bring up the services:

docker-compose -f docker-foo27org2.yaml up -d
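
As a quick sanity check (assuming Docker is available on this host), the containers defined above should all be running before continuing:

docker ps
# expect foo27.org2.example.com, cli, ca and couchdb containers in the "Up" state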

Copy the mychannel.block file to the fabric/aberic/channel-artifacts directory on this server, then use docker cp to copy it into the peer working directory of the cli container.
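
A possible command sequence for this step is sketched below; the target path matches the cli container's working_dir from the compose file above, but adjust the source path to wherever mychannel.block actually resides on this host:

# copy the channel genesis block from the host into the cli container's working directory
docker cp ./channel-artifacts/mychannel.block cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/
# enter the cli container to run the subsequent peer commands
docker exec -it cli bash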
Join the mychannel channel:

peer channel join -b mychannel.block
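
To confirm the join succeeded, the channel list subcommand can be run inside the cli container; its output should include mychannel:

peer channel list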

Install the chaincode:

peer chaincode install -n mychannel -p github.com/hyperledger/fabric/chaincode/go/chaincode_example02 -v 1.0
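
Optionally, the installation can be verified before querying; on Fabric 1.1 or later the following should list the chaincode_example02 package installed under the name mychannel:

peer chaincode list --installed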

Query account A's balance:

peer chaincode query -C mychannel -n mychannel -c '{"Args":["query","A"]}'

The query returns the same value as on peer0.org1, which shows that this fully customized node (foo27) started correctly and is serving requests.
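
Beyond the read-only query, a transfer invocation can optionally be issued from this node to confirm that write transactions also reach the ordering service. A sketch using the standard chaincode_example02 arguments (the orderer address assumes orderer0 as listed in extra_hosts; TLS is disabled in this setup, so no --tls/--cafile flags are needed):

peer chaincode invoke -o orderer0.example.com:7050 -C mychannel -n mychannel -c '{"Args":["invoke","A","B","10"]}'
# query again; A should now be 10 lower than before
peer chaincode query -C mychannel -n mychannel -c '{"Args":["query","A"]}'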

Next, test the semi-customized node bar.org2 (declared in crypto-config.yaml with only a Hostname and no custom CommonName) using docker-barorg2.yaml:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
    # for example map it to utilize Fauxton User Interface in dev environments.
    ports:
      - 5984:5984

  ca:
    container_name: ca
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/43fc8bf11c4cadf28bbe5d19ccf3e9ae617b744dc18cb117ef50ef901ddad630_sk
    ports:
      - 7054:7054
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/43fc8bf11c4cadf28bbe5d19ccf3e9ae617b744dc18cb117ef50ef901ddad630_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config

  bar.org2.example.com:
    container_name: bar.org2.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=192.168.24.213:5984

      - CORE_PEER_ID=bar.org2.example.com
      - CORE_PEER_NETWORKID=aberic
      - CORE_PEER_ADDRESS=bar.org2.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=bar.org2.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=bar.org2.example.com:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=bar.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP

      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - CORE_VM_DOCKER_TLS_ENABLED=false
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=false
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
      - ./crypto-config/peerOrganizations/org2.example.com/peers/bar.org2.example.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/org2.example.com/peers/bar.org2.example.com/tls:/etc/hyperledger/fabric/tls
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    depends_on:
      - couchdb
    networks:
      default:
        aliases:
          - aberic
    extra_hosts:
      - orderer0.example.com:192.168.24.208
      - orderer1.example.com:192.168.24.209
      - orderer2.example.com:192.168.24.210

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=bar.org2.example.com:7051
      - CORE_PEER_CHAINCODELISTENADDRESS=bar.org2.example.com:7052
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/bar.org2.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/bar.org2.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/bar.org2.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - bar.org2.example.com
    extra_hosts:
      - orderer0.example.com:192.168.24.208
      - orderer1.example.com:192.168.24.209
      - orderer2.example.com:192.168.24.210
      - bar.org2.example.com:192.168.24.213

The test procedure is the same as for the foo27.org2 node; bar.org2 likewise joins the channel, installs the chaincode, and answers queries successfully.
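
As a summary, the test sequence on the bar.org2 host (192.168.24.213) mirrors the foo27 setup under the same assumptions about file locations:

docker-compose -f docker-barorg2.yaml up -d
# copy the channel block into the cli container and open a shell in it
docker cp ./channel-artifacts/mychannel.block cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/
docker exec -it cli bash
# inside the cli container: join the channel, install the chaincode, and query account A
peer channel join -b mychannel.block
peer chaincode install -n mychannel -p github.com/hyperledger/fabric/chaincode/go/chaincode_example02 -v 1.0
peer chaincode query -C mychannel -n mychannel -c '{"Args":["query","A"]}'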
