Deploying a web ÐApp with Quorum+Angular+Python+Flask on a VPS running Ubuntu 16.04
Node 2
Configuring node 2 largely replicates the commands used before.
The node 2 directory is created along with its configuration, and the wallet to be associated with node 2 (0x513f15ec9fc190cbc2ac25c6d6acdb58253f80d7 in this case) is copied over.
The configuration with node 2's ports (RPC 22001 and p2p 21001) is generated in the file config_2.toml.
osboxes@osboxes:~/Desktop/giveliback$ geth --datadir=qdata/node2 --rpc --rpcaddr 0.0.0.0 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum --permissioned --nodiscover --rpcport 22001 --port 21001 dumpconfig > config_2.toml
WARN [07-05|19:20:49] No etherbase set and no accounts found as default
osboxes@osboxes:~/Desktop/giveliback$
osboxes@osboxes:~/Desktop/giveliback$ cp qdata/node1/keystore/UTC--2018-07-05T18-47-06.750007367Z--513f15ec9fc190cbc2ac25c6d6acdb58253f80d7 qdata/node2/keystore
osboxes@osboxes:~/Desktop/giveliback$ geth --datadir qdata/node2 init genesis.json 2>>qdata/logs/node2.log
Node 2's genesis block is generated. With the migrated account now in its keystore, that account is assigned as the node's etherbase.
Etherbase = "0x513f15ec9fc190cbc2ac25c6d6acdb58253f80d7"
DiscoveryV5Addr = ":21002"
To instantiate node 2's constellation, the keys must be generated, the permissions on the generated files restricted, the password file copied, and finally the service started.
osboxes@osboxes:~/Desktop/giveliback$ cd qdata/node2
osboxes@osboxes:~/Desktop/giveliback/qdata/node2$ constellation-node --generatekeys=node
Lock key pair node with password [none]: ********
osboxes@osboxes:~/Desktop/giveliback/qdata/node2$ chmod 600 node.key
osboxes@osboxes:~/Desktop/giveliback/qdata/node2$ cd ../..
osboxes@osboxes:~/Desktop/giveliback$ cp password.txt qdata/node2
osboxes@osboxes:~/Desktop/giveliback$ constellation-node --url=https://127.0.0.1:9002/ --port=9002 --workdir=qdata/node2 --socket=constellation.ipc --publickeys=node.pub --privatekeys=node.key --othernodes=https://127.0.0.1:9001/ --password=password.txt 2>>qdata/logs/constellation2.log
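Before pointing geth at it, it is worth confirming that the constellation service actually came up. A quick check (a sketch, assuming the paths used above, where --workdir=qdata/node2 places the socket inside that directory) is to verify that the IPC socket exists and to tail the log:
osboxes@osboxes:~/Desktop/giveliback$ ls -l qdata/node2/constellation.ipc   # socket created by constellation-node
osboxes@osboxes:~/Desktop/giveliback$ tail qdata/logs/constellation2.log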
In constellation1.log, the new node shows up and is registered.
19:21:45 [WARN] tls-known-clients (ca-or-tofu trust mode): Adding new fingerprint "00:08:88:6C:BE:43:20:34:0D:28:16:E8:4C:5E:20:E7:19:19:AE:E7:B9:AE:CE:94:C6:8E:2F:80:77:98:6E:36:00:1D:B1:54:29:FE:01:86:34:73:3B:49:80:C1:AB:28:FF:6F:DF:81:95:1B:3D:B9:87:7C:AE:F7:96:AE:13:33" for host 127.0.0.1
19:21:51 [WARN] tls-known-servers (ca-or-tofu trust mode): Adding new fingerprint "D0:2A:D9:74:2C:39:2A:52:16:D2:54:9E:96:6E:E5:31:B0:41:58:F8:91:68:01:85:50:42:33:BE:EE:2C:DC:15:C9:43:8D:B4:CD:5B:D9:32:41:CB:D6:41:E7:B7:40:56:27:DB:54:BD:36:88:83:C3:43:6D:CD:E2:1B:6F:5D:7E" for host 127.0.0.1
constellation2.log shows the same behaviour from the other side.
19:21:21 [INFO] Log level is LevelWarn
19:21:41 [WARN] No TLS certificate or key found; generating tls-client-cert.pem/tls-client-key.pem
19:21:42 [WARN] No TLS certificate or key found; generating tls-server-cert.pem/tls-server-key.pem
19:21:45 [WARN] tls-known-servers (ca-or-tofu trust mode): Adding new fingerprint "25:73:EF:68:77:14:4E:75:E5:AC:1E:B9:BA:A5:0D:8D:17:9F:85:BC:24:E3:A3:9F:23:06:27:D0:AA:D8:45:83:32:2C:55:B5:78:AA:CA:FA:4F:BB:5C:46:03:7C:EE:1B:69:4B:7F:2C:27:EB:6C:71:37:DD:DC:2E:9C:2D:2A:89" for host 127.0.0.1
19:21:51 [WARN] tls-known-clients (ca-or-tofu trust mode): Adding new fingerprint "25:9C:FE:7F:FA:37:65:33:EA:F6:3D:4A:92:32:34:B6:A6:56:4B:66:F8:D8:78:D7:91:18:25:A8:12:E4:D2:D2:9B:67:09:D6:EB:23:38:A3:61:73:60:FD:59:FF:DA:15:F3:FF:60:8E:75:59:8A:F8:EF:9C:82:1F:ED:47:04:7F" for host 127.0.0.1
The generated files should be protected in the same way as on node 1.
osboxes@osboxes:~/Desktop/giveliback$ chmod 600 qdata/node2/*.pem
To join the second node to the RAFT consensus, one option is to edit the permissioned-nodes and static-nodes files and deploy them to every node's directory so that the cluster accepts node 2.
There is currently no trivial way to find a node's enode hash by inspecting configuration files, logs, or even the genesis block without actually starting the node. As a workaround, the node is started without static-nodes.json or permissioned-nodes.json; the resulting error message reveals the second node's enode hash, which can then be added to both files.
osboxes@osboxes:~/Desktop/giveliback$ geth --datadir=qdata/node2 --raft --emitcheckpoints --raftport 50402 --unlock 0 --password password.txt --config config_2.toml 2>>qdata/logs/node2.log
Fatal: Raft-based consensus requires either (1) an initial peers list (in static-nodes.json) including this enode hash (99f8c274e8d7600d2fe687e6aeafff9ae96cea0c2b9d8b8c3f6a1b5356f4256c607ef3ee0a511b7fe6105b13461844dbc1f0519667a34b4d7249f65436fea767), or (2) the flag --raftjoinexisting RAFT_ID, where RAFT_ID has been issued by an existing cluster member calling `raft.addPeer(ENODE_ID)` with an enode ID containing this node's enode hash.
The permissioned-nodes.json and static-nodes.json files would then contain an array with both nodes. When starting the nodes, the --permissioned flag must be added so that geth uses this list to decide which nodes may connect.
[
"enode://97df36859bcce4b644464f748b29dc433fa3bc142d2e00fe29834758d82576db6933ec1af1f92bbb5dcee5a5891a3124a7f32faf2657f62630ddf75bf9194ddf@0.0.0.0:21000?discport=0&raftport=50401",
"enode://99f8c274e8d7600d2fe687e6aeafff9ae96cea0c2b9d8b8c3f6a1b5356f4256c607ef3ee0a511b7fe6105b13461844dbc1f0519667a34b4d7249f65436fea767@0.0.0.0:21001?discport=0&raftport=50402"
]
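If this first option is followed, the same file is distributed to every node's data directory under both names. A minimal sketch, assuming the array above has been saved as static-nodes.json in the working directory and that geth reads both files from the root of each data directory:
osboxes@osboxes:~/Desktop/giveliback$ cp static-nodes.json qdata/node1/static-nodes.json
osboxes@osboxes:~/Desktop/giveliback$ cp static-nodes.json qdata/node1/permissioned-nodes.json
osboxes@osboxes:~/Desktop/giveliback$ cp static-nodes.json qdata/node2/static-nodes.json
osboxes@osboxes:~/Desktop/giveliback$ cp static-nodes.json qdata/node2/permissioned-nodes.json
With those files in place, both nodes can be started without --raftjoinexisting, since the initial peer list already contains each node's enode hash.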
There is another, more elegant and versatile option: from one of the nodes already in the network (node 1 in this case), node 2 is added as a peer from the node 1 console, and node 2 is then started with --raftjoinexisting 2 (node 2's ID in the network).
osboxes@osboxes:~/Desktop/giveliback$ geth attach ipc:/home/osboxes/Desktop/giveliback/qdata/node1/geth.ipc
Welcome to the Geth JavaScript console!
instance: Geth/v1.7.2-stable-f3d13152/linux-amd64/go1.10
coinbase: 0x3b6927fe4a4a4d44c3445292d375542cf299661c
at block: 0 (Thu, 01 Jan 1970 01:00:00 CET)
datadir: /home/osboxes/Desktop/giveliback/qdata/node1
modules: admin:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 raft:1.0 rpc:1.0 txpool:1.0 web3:1.0
> raft.addPeer("enode://0325ca376f24c195bf02cc5360124d887f5c0cf0372f4bccf53c824ff44c126bba1cc18ced369e7bf03c760d9ab66b28b7c27b159779e0cc06106e0a618045cf@0.0.0.0:21001?discport=0&raftport=50402")
2
Node 2 is brought up, passing the newly assigned ID=2 as a parameter.
osboxes@osboxes:~/Desktop/giveliback$ geth --datadir=qdata/node2 --raft --raftjoinexisting 2 --emitcheckpoints --raftport 50402 --unlock 0 --password password.txt --config config_2.toml 2>>qdata/logs/node2.log
Node 1's log shows the new peer being added and, later, coming online.
INFO [07-05|19:38:02] adding peer due to ConfChangeAddNode raft id=2
2018-07-05 19:38:02.099788 I | rafthttp: starting peer 2...
2018-07-05 19:38:02.100602 I | rafthttp: started HTTP pipelining with peer 2
2018-07-05 19:38:02.108672 I | rafthttp: started streaming with peer 2 (writer)
2018-07-05 19:38:02.119002 I | rafthttp: started peer 2
2018-07-05 19:38:02.119507 I | rafthttp: added peer 2
INFO [07-05|19:38:02] start snapshot applied index=2 last snapshot index=1
INFO [07-05|19:38:02] compacted log index=2
INFO [07-05|19:38:02] persisted the latest applied index index=3
2018-07-05 19:38:02.127878 I | rafthttp: started streaming with peer 2 (writer)
2018-07-05 19:38:02.128677 I | rafthttp: started streaming with peer 2 (stream MsgApp v2 reader)
2018-07-05 19:38:02.130745 I | rafthttp: started streaming with peer 2 (stream Message reader)
INFO [07-05|19:38:02] peer is currently unreachable peer id=2
...
INFO [07-05|19:38:46] peer is currently unreachable peer id=2
2018-07-05 19:38:46.153732 I | rafthttp: peer 2 became active
2018-07-05 19:38:46.155193 E | rafthttp: failed to dial 2 on stream Message (peer 2 failed to find local node 1)
2018-07-05 19:38:46.155208 I | rafthttp: peer 2 became inactive
2018-07-05 19:38:46.218670 I | rafthttp: peer 2 became active
2018-07-05 19:38:46.280411 E | rafthttp: failed to dial 2 on stream Message (peer 2 failed to find local node 1)
2018-07-05 19:38:46.280443 I | rafthttp: peer 2 became inactive
2018-07-05 19:38:46.301582 I | rafthttp: peer 2 became active
INFO [07-05|19:38:46] finished sending snapshot raft peer=2
2018-07-05 19:38:46.411213 I | rafthttp: established a TCP streaming connection with peer 2 (stream Message writer)
2018-07-05 19:38:46.419697 I | rafthttp: established a TCP streaming connection with peer 2 (stream MsgApp v2 writer)
2018-07-05 19:38:46.419970 I | rafthttp: established a TCP streaming connection with peer 2 (stream MsgApp v2 reader)
2018-07-05 19:38:46.424318 I | rafthttp: established a TCP streaming connection with peer 2 (stream Message reader)
INFO [07-05|19:39:28] Regenerated local transaction journal transactions=0 accounts=0
Node 2's log shows it identifying itself as a VERIFIER and connecting to the other node.
INFO [07-05|19:38:45] Starting peer-to-peer node instance=Geth/v1.7.2-stable-df4267a2/linux-amd64/go1.10
INFO [07-05|19:38:45] Allocated cache and file handles database=/home/osboxes/Desktop/giveliback/qdata/node2/geth/chaindata cache=128 handles=1024
WARN [07-05|19:38:46] Upgrading database to use lookup entries
INFO [07-05|19:38:46] Initialised chain configuration config="{ChainID: 10 Homestead: DAO: DAOSupport: false EIP150: 1 EIP155: 0 EIP158: 1 Byzantium: 1 IsQuorum: true Engine: unknown}"
INFO [07-05|19:38:46] Disk storage enabled for ethash caches dir=/home/osboxes/Desktop/giveliback/qdata/node2/geth/ethash count=3
INFO [07-05|19:38:46] Disk storage enabled for ethash DAGs dir=/home/osboxes/.ethash count=2
INFO [07-05|19:38:46] Initialising Ethereum protocol versions="[63 62]" network=1
INFO [07-05|19:38:46] Loaded most recent local header number=0 hash=64e3f2…8aeb0e td=0
INFO [07-05|19:38:46] Loaded most recent local full block number=0 hash=64e3f2…8aeb0e td=0
INFO [07-05|19:38:46] Loaded most recent local fast block number=0 hash=64e3f2…8aeb0e td=0
INFO [07-05|19:38:46] Regenerated local transaction journal transactions=0 accounts=0
INFO [07-05|19:38:46] Database deduplication successful deduped=0
INFO [07-05|19:38:46] Starting P2P networking
INFO [07-05|19:38:46] starting raft protocol handler
INFO [07-05|19:38:46] RLPx listener up self="enode://99f8c274e8d7600d2fe687e6aeafff9ae96cea0c2b9d8b8c3f6a1b5356f4256c607ef3ee0a511b7fe6105b13461844dbc1f0519667a34b4d7249f65436fea767@[::]:21001?discport=0"
INFO [07-05|19:38:46] loaded the latest applied index lastAppliedIndex=0
INFO [07-05|19:38:46] replaying WAL
INFO [07-05|19:38:46] loading WAL term=0 index=0
INFO [07-05|19:38:46] startRaft raft ID=2
INFO [07-05|19:38:46] newly joining an existing cluster; waiting for connections.
raft2018/07/05 19:38:46 INFO: 2 became follower at term 0
raft2018/07/05 19:38:46 INFO: newRaft 2 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2018/07/05 19:38:46 INFO: 2 became follower at term 1
INFO [07-05|19:38:46] HTTP endpoint opened: http://0.0.0.0:22001
INFO [07-05|19:38:46] IPC endpoint opened: /home/osboxes/Desktop/giveliback/qdata/node2/geth.ipc
raft2018/07/05 19:38:46 INFO: 2 [term: 1] received a MsgHeartbeat message with higher term from 1 [term: 2]
raft2018/07/05 19:38:46 INFO: 2 became follower at term 2
raft2018/07/05 19:38:46 INFO: raft.node: 2 elected leader 1 at term 2
2018-07-05 19:38:46.154922 I | rafthttp: started HTTP pipelining with peer 1
2018-07-05 19:38:46.154972 E | rafthttp: failed to find member 1 in cluster 1000
2018-07-05 19:38:46.155306 E | rafthttp: failed to find member 1 in cluster 1000
INFO [07-05|19:38:46] QUORUM-CHECKPOINT name=BECAME-VERIFIER
2018-07-05 19:38:46.279462 E | rafthttp: failed to find member 1 in cluster 1000
2018-07-05 19:38:46.281517 E | rafthttp: failed to find member 1 in cluster 1000
2018-07-05 19:38:46.299226 I | rafthttp: peer 1 became active
raft2018/07/05 19:38:46 INFO: 2 [commit: 0, lastindex: 0, lastterm: 0] starts to restore snapshot [index: 3, term: 2]
raft2018/07/05 19:38:46 INFO: log [committed=0, applied=0, unstable.offset=1, len(unstable.Entries)=0] starts to restore snapshot [index: 3, term: 2]
raft2018/07/05 19:38:46 INFO: 2 restored progress of 1 [next = 4, match = 0, state = ProgressStateProbe, waiting = false, pendingSnapshot = 0]
raft2018/07/05 19:38:46 INFO: 2 restored progress of 2 [next = 4, match = 3, state = ProgressStateProbe, waiting = false, pendingSnapshot = 0]
raft2018/07/05 19:38:46 INFO: 2 [commit: 3] restored snapshot [index: 3, term: 2]
INFO [07-05|19:38:46] applying snapshot to raft storage
INFO [07-05|19:38:46] updating cluster membership per raft snapshot
INFO [07-05|19:38:46] adding new raft peer raft id=1
2018-07-05 19:38:46.343774 I | rafthttp: starting peer 1...
2018-07-05 19:38:46.343800 I | rafthttp: started HTTP pipelining with peer 1
2018-07-05 19:38:46.348731 I | rafthttp: started peer 1
2018-07-05 19:38:46.349384 I | rafthttp: added peer 1
INFO [07-05|19:38:46] updated cluster membership
INFO [07-05|19:38:46] blockchain is caught up; no need to synchronize
INFO [07-05|19:38:46] persisted the latest applied index index=3
2018-07-05 19:38:46.370742 I | rafthttp: started streaming with peer 1 (writer)
2018-07-05 19:38:46.371952 I | rafthttp: started streaming with peer 1 (writer)
2018-07-05 19:38:46.372963 I | rafthttp: started streaming with peer 1 (stream MsgApp v2 reader)
2018-07-05 19:38:46.374011 I | rafthttp: started streaming with peer 1 (stream Message reader)
2018-07-05 19:38:46.391210 I | rafthttp: peer 1 became active
2018-07-05 19:38:46.414950 I | rafthttp: established a TCP streaming connection with peer 1 (stream Message writer)
2018-07-05 19:38:46.417773 I | rafthttp: established a TCP streaming connection with peer 1 (stream MsgApp v2 writer)
2018-07-05 19:38:46.443306 I | rafthttp: established a TCP streaming connection with peer 1 (stream MsgApp v2 reader)
2018-07-05 19:38:46.445711 I | rafthttp: established a TCP streaming connection with peer 1 (stream Message reader)
INFO [07-05|19:38:47] Unlocked account address=0x513F15Ec9fc190cBc2Ac25C6D6aCDB58253F80d7
Attaching to node 1, node 2 appears as a connected peer.
osboxes@osboxes:~/Desktop/giveliback$ geth attach ipc:/home/osboxes/Desktop/giveliback/qdata/node1/geth.ipc
Welcome to the Geth JavaScript console!
instance: Geth/v1.7.2-stable-f3d13152/linux-amd64/go1.10
coinbase: 0x15163b14667e705591f78b67a72cf3357b4b3d0d
at block: 0 (Thu, 01 Jan 1970 01:00:00 CET)
datadir: /home/osboxes/Desktop/giveliback/qdata/node1
modules: admin:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 raft:1.0 rpc:1.0 txpool:1.0 web3:1.0
> admin.peers
[{
caps: ["eth/63"],
id: "99f8c274e8d7600d2fe687e6aeafff9ae96cea0c2b9d8b8c3f6a1b5356f4256c607ef3ee0a511b7fe6105b13461844dbc1f0519667a34b4d7249f65436fea767",
name: "Geth/v1.7.2-stable-df4267a2/linux-amd64/go1.10",
network: {
localAddress: "127.0.0.1:21000",
remoteAddress: "127.0.0.1:57834"
},
protocols: {
eth: {
difficulty: 0,
head: "0x64e3f2590db3fc0100e0b4841ccb97e3bc166d644e84245e613b7557448aeb0e",
version: 63
}
}
}]
Likewise, attaching to node 2 shows node 1 as a connected peer.
osboxes@osboxes:~/Desktop/giveliback$ geth attach ipc:/home/osboxes/Desktop/giveliback/qdata/node2/geth.ipc
Welcome to the Geth JavaScript console!
instance: Geth/v1.7.2-stable-df4267a2/linux-amd64/go1.10
coinbase: 0x513f15ec9fc190cbc2ac25c6d6acdb58253f80d7
at block: 0 (Thu, 01 Jan 1970 01:00:00 CET)
datadir: /home/osboxes/Desktop/giveliback/qdata/node2
modules: admin:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 raft:1.0 rpc:1.0 txpool:1.0 web3:1.0
> admin.peers
[{
caps: ["eth/63"],
id: "97df36859bcce4b644464f748b29dc433fa3bc142d2e00fe29834758d82576db6933ec1af1f92bbb5dcee5a5891a3124a7f32faf2657f62630ddf75bf9194ddf",
name: "Geth/v1.7.2-stable-df4267a2/linux-amd64/go1.10",
network: {
localAddress: "127.0.0.1:57834",
remoteAddress: "127.0.0.1:21000"
},
protocols: {
eth: {
difficulty: 0,
head: "0x64e3f2590db3fc0100e0b4841ccb97e3bc166d644e84245e613b7557448aeb0e",
version: 63
}
}
}]
>
From this point on, the information about the nodes participating in the cluster is persisted. Even so, if the added nodes are to become permanent, it is recommended to include them in static-nodes.json and permissioned-nodes.json and restart the nodes. This is also the method used when sharing the repository.
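The current membership can be inspected at any time from a console. A sketch, assuming this Quorum build exposes raft.cluster alongside the raft.addPeer call used above:
osboxes@osboxes:~/Desktop/giveliback$ geth attach ipc:/home/osboxes/Desktop/giveliback/qdata/node1/geth.ipc
> raft.cluster
This should list both peers with their raft IDs and enode details, matching the entries that would go into static-nodes.json.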
As a test, a transaction is submitted from node 2.
> eth.sendTransaction({from:eth.accounts[0]})
"0xc7a740952de31312abf06d217bf9da85019801c386524877d7cdbdb6e52f2af8"
Node 2's log records the transaction submission.
INFO [07-05|19:42:05] Submitted contract creation fullhash=0xc7a740952de31312abf06d217bf9da85019801c386524877d7cdbdb6e52f2af8 to=0x584Df819e9C1C5ce6bD95Fffa67Ef0CEAD7E3d7c
INFO [07-05|19:42:05] QUORUM-CHECKPOINT name=TX-CREATED tx=0xc7a740952de31312abf06d217bf9da85019801c386524877d7cdbdb6e52f2af8 to=0x584Df819e9C1C5ce6bD95Fffa67Ef0CEAD7E3d7c
INFO [07-05|19:42:05] QUORUM-CHECKPOINT name=TX-ACCEPTED tx=0xc7a740952de31312abf06d217bf9da85019801c386524877d7cdbdb6e52f2af8
INFO [07-05|19:42:08] Imported new chain segment blocks=1 txs=1 mgas=0.021 elapsed=2.985s mgasps=0.007 number=1 hash=201e8a…e85183
INFO [07-05|19:42:08] QUORUM-CHECKPOINT name=BLOCK-CREATED block=201e8a4a2d311bf1a1524ce18b965b284a66b38cdf8c6783815733c3b5e85183
INFO [07-05|19:42:08] persisted the latest applied index index=4
INFO [07-05|19:42:11] Generating ethash verification cache epoch=1 percentage=83 elapsed=3.025s
INFO [07-05|19:42:12] Generated ethash verification cache epoch=1 elapsed=3.531s
Node 1's log shows that node 1 is the one that executed the transaction, minting it into a block.
INFO [07-05|19:42:05] Generated next block block num=1 num txes=1
INFO [07-05|19:42:05] 🔨 Mined block number=1 hash=201e8a4a elapsed=2.818467ms
INFO [07-05|19:42:05] QUORUM-CHECKPOINT name=TX-ACCEPTED tx=0xc7a740952de31312abf06d217bf9da85019801c386524877d7cdbdb6e52f2af8
INFO [07-05|19:42:08] Imported new chain segment blocks=1 txs=1 mgas=0.021 elapsed=3.029s mgasps=0.007 number=1 hash=201e8a…e85183
INFO [07-05|19:42:08] QUORUM-CHECKPOINT name=BLOCK-CREATED block=201e8a4a2d311bf1a1524ce18b965b284a66b38cdf8c6783815733c3b5e85183
INFO [07-05|19:42:08] persisted the latest applied index index=4
INFO [07-05|19:42:08] Not minting a new block since there are no pending transactions
INFO [07-05|19:42:11] Generating ethash verification cache epoch=1 percentage=84 elapsed=3.024s
INFO [07-05|19:42:12] Generated ethash verification cache epoch=1 elapsed=3.510s
At this point there is a two-node network: node 1 (MINTER) and node 2 (VERIFIER).
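As a final sanity check, both RPC endpoints can be queried over HTTP and should report the same head block. A sketch using the standard eth_blockNumber JSON-RPC call, assuming node 1's RPC was configured on port 22000 in the previous section, just as node 2 uses 22001 here:
osboxes@osboxes:~/Desktop/giveliback$ curl -s -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://127.0.0.1:22000   # node 1
osboxes@osboxes:~/Desktop/giveliback$ curl -s -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://127.0.0.1:22001   # node 2
After the test transaction above, both should return "0x1".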
Published on ibón's blog: https://ibón.es/2018/07/06/despliegue-de-web-dapp-con-quorumangularpythonflask-en-un-vps-con-ubuntu-16-04/