This isn't exactly a `tinman` issue; it's more a discussion of the specific deployment practice that should be used for a particular testnet. But it's related, and there's no better place to put this, so I'm putting it here.
A testnet may have nodes with the following functions:
(a) A fastgen node to handle state initialization by producing blocks with timestamps in the past.
(b) One or more witness nodes to produce blocks.
(c) One or more seed nodes which are the default connection points for new p2p peers.
(d) If API's are desired, one or more API nodes with the relevant plugin(s) enabled.
(e) If the testnet is to be long-lived, it should have several nodes for reliability and memory [1].
(f) If transactions are to be created over time while the testnet is live, a node with the private keys of the relevant testnet account(s), which runs a script that generates transactions, signs them with the private key, and submits them to a testnet API endpoint (price feeds, #13).
(g) If transactions are to be "ported" over time from the mainnet while the testnet is live, a node with the private keys of the relevant testnet account(s), which runs a script that queries the mainnet API, generates transactions based on the results of those queries, signs them with the private key, and submits them to a testnet API endpoint (#2; a rough sketch of such a loop appears after this list).
(h) The parts of Steemit's stack built on the blockchain: `condenser`, `jussi`, `yo`, `hivemind`, etc.
(i) Third-party services, such as steemd.com, busy, etc.
In particular testnets, some of these may be omitted, or multiple roles may be combined on a single machine.
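For concreteness, here is a minimal sketch of the main loop a type (f) or (g) node's script might run. Everything in it is a hypothetical stand-in: the endpoint URLs, the key, and the `sign_transaction` placeholder are made up for illustration, the JSON-RPC method names follow `steemd`'s `condenser_api` but vary by version, and the real logic for this lives in `tinman` (#2, #13).

```python
# Illustrative sketch of a type (f)/(g) node's main loop. Endpoints, key,
# and signing helper are hypothetical stand-ins, not tinman's actual code.
import json
import time

import requests  # any JSON-RPC client would do

MAINNET_API = "https://api.example.com"           # hypothetical mainnet endpoint
TESTNET_API = "http://testnet.example.com:8090"   # hypothetical testnet API node
WIF = "5J..."                                     # testnet account's private key

def rpc(url, method, params):
    """Make one JSON-RPC 2.0 call and return its result field."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    resp = requests.post(url, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["result"]

def sign_transaction(tx, wif):
    """Placeholder: real code would shell out to steem's sign_transaction
    utility or use a signing library; signing is out of scope here."""
    raise NotImplementedError

while True:
    # (g): poll the mainnet for new data -- here, just the head block number.
    props = rpc(MAINNET_API, "condenser_api.get_dynamic_global_properties", [])
    head = props["head_block_number"]

    # Build a testnet transaction from the query results (details elided),
    # sign it with the testnet account's key, and submit it.
    tx = {"operations": [], "extensions": []}  # real ops built from mainnet data
    signed = sign_transaction(tx, WIF)
    rpc(TESTNET_API, "condenser_api.broadcast_transaction", [signed])

    time.sleep(3)  # one block interval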
For the currently existing testnet (`pythagoras`) and the previous testnet (`halloween`), I manually did the following steps:

- Create 6 AWS instances using the dev AWS account
- Compile the `develop` branch of `steemd` and `make install` somewhere in `/opt` on one node
- `tar | xz` the resulting installation
- `nc` the tarball to the other nodes
- Run `sha256sum` on each of the 6 instances and manually compare the output values to ensure the tarball transferred correctly
- Untar the tarball
- Use `nano` to edit the config file for each node; each node's config file is slightly different (see the sketch after these steps):
  - All 6 nodes have the IP addresses of the 5 other nodes listed as seed nodes
  - Each witness node's config file has 1/3 of the witnesses enabled, with the corresponding private keys
  - Each seed node does double duty as an API node, so the relevant plugins and APIs are enabled on seed nodes
- Connect my dev machine as a temporary fastgen node with `debug_node_plugin` enabled, and run `tinman submit` as specified in `README.md`
- Once all the actions occur, the witness nodes should spontaneously start producing blocks
- Once this happens, Ctrl+C to quit the fastgen node
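Since each node's config differs in only a few stanzas, a sketch of those differences may help. The option names below are from memory of `steemd`'s config format and differ between versions, and all addresses, witness names, and keys are made up; treat this as the shape of each file, not a working config:

```
# Common to all 6 nodes: the other 5 nodes as seeds (addresses hypothetical)
p2p-seed-node = 10.0.0.2:2001
p2p-seed-node = 10.0.0.3:2001
# ... and so on for the remaining peers

# Witness nodes only: 1/3 of the witnesses, with their signing keys
plugin = witness
witness = "tnman-0"
private-key = 5Jxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Seed/API nodes only: expose the plugins and APIs needed by clients
plugin = webserver json_rpc database_api network_broadcast_api
webserver-http-endpoint = 0.0.0.0:8090
```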
This handles (a)-(e), although I'm sure the AWS / Docker experts reading this are cringing at how simple and old-fashioned my sysadmin techniques are. I didn't address (f)-(i). A node of type (f) will need to be created for price feeds (#13), and a node of type (g) will be needed for porting transactions (#2).
Note that (f) and (g) don't need to run `steemd` at all; they just run Python scripts and talk to the API, so the smallest possible instance size will probably be fine for these nodes. @mvandeberg is of the opinion that every one of (a)-(i) should be crammed into a single machine; I say the 6 nodes I created (3 witness nodes and 3 seed / API nodes) are the minimum we should have for a long-lived, publicly accessible, type (b) testnet.
I have only the haziest idea of what's required for (h) and (i).
[1] Currently `steemd` does not save blocks after `last_irreversible_block` to disk. So, for example, if you have a single witness node and you shut it down, upgrade it, and restart it, it may start producing a fork! This can be avoided by splitting witness duty among at least 3 nodes, at most one of which is down at any time: a `--required-participation=60` flag then forbids a restarted node from producing on a minority fork before it has caught up to the present, since the two remaining nodes keep participation near 2/3 (above the threshold), while a lone node on its own fork sees far less.
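To make the arithmetic in [1] concrete, here is a toy model of the participation check, under the assumption that `steemd`'s witness plugin measures participation over roughly the last 128 block slots (the window size is an assumption, not a verified constant):

```python
# Toy model of the required-participation check described in [1].
WINDOW = 128  # assumed number of recent block slots tracked

def may_produce(filled_slots, required_participation=60):
    """Return True if enough recent slots were filled to allow production."""
    return 100 * filled_slots // WINDOW >= required_participation

# Witness duty split across 3 nodes, one node down: the other two still
# fill about 2/3 of slots, e.g. 85/128 ~= 66% >= 60%, so they keep producing.
print(may_produce(85))   # True

# A lone restarted node on its own fork sees almost no recent blocks from
# the others, so participation is far below 60% and it refuses to produce.
print(may_produce(10))   # False
```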