Threshold signer (TSS)
Motivation
Standard services that process cross-chain transfers rely on a single oracle to sign transactions. This creates a risk of oracle mistakes and of funds being stolen. A single oracle is also a single point of failure: if anything happens to it, the bridge stops until the oracle recovers or is replaced. The threshold service is developed as a decentralized solution for validating cross-chain transfers; by using tss-lib it removes the single point of failure and makes cross-chain transfers more secure.
Overview
Threshold signature service provides the processing and signing of deposited token transfers to other chains based on the threshold signature scheme (TSS). It works as a decentralized solution connected to the Cosmos Bridge Core module to process and accumulate cross-chain transfers.
Service description
Threshold signature service provides the processing and signing of deposited token transfers to other chains.
To perform cross-chain transfers, all operations should be signed with an ECDSA secp256k1 threshold (t-of-n) signature. This signature is generated by the TSS network, which consists of several parties (validators) running this service.
Validators communicate with each other to:
- generate the party secret shares for "obtaining" the general system private key;
- validate incoming transfers (deposit requests);
- form the pool of the withdrawal transactions and data to be signed on the current signing "session";
- sign the formed data required to perform the cross-chain transfer;
- optionally, broadcast the signed withdrawal transactions (if any) to the network.
The TSS network requires several parties launched by different validators. The minimum number of active parties required to sign the data is defined by the threshold value T.
Protocol
The system protocol is designed to provide secure and reliable cross-chain transfers based on the TSS signing process. It uses the tss-lib library for threshold signature generation. The protocol implements a secure messaging transport layer to provide communication between parties according to the library's requirements and recommendations.
Key Generation
Before starting the TSS signing process, parties should generate the general system private key. It will be used to sign the transactions or data required to perform a cross-chain transfer. In fact, the private key is never generated directly. By communicating with each other, the parties generate secret shares of the private key that, when combined (and when the number of shares is bigger than the threshold), can sign the provided data just as the private key itself would. As a result, each party holds its own secret share of the private key, which it must keep secure and secret. Check the keygener documentation for more details on the key generation process.
TSS Signing
TSS signing process is performed as a series of signing sessions, which are responsible for signing the data required to perform the specific transfer. See the session documentation for more session details.
Signing session process consists of the following steps:
Acceptance
To start signing the data, parties should first define and accept the transfer to be signed. The Consensus module is responsible for choosing the data to be signed and the list of current session signers. Check it for more details on how the data is chosen and how the signers are defined.
Signing
After the data is accepted, parties should start the process of signing it and communicate with each other to produce the final transfer signature. This process is handled by the Signer module, check it for more details.
Finalization
After the data is signed by the required number of parties, the final signature is produced and sent to the Cosmos Bridge Core. Additionally, the withdrawal transaction can be broadcast to the network (varies depending on the network). The finalization process is described in the Finalizer module. After the session signing process is finished, parties should start the new session to sign the next transfer.
Synchronization
To prevent system failures, reach consensus, and ensure a correct signing process, parties should be synchronized with each other. The synchronization process is based on using timestamps for each session duration and its steps. The time bounds are strictly defined for each session stage and type, so the resulting session duration is also constant (although there can be some exceptions). See Session boundaries for more details on time bounds.
Key Resharing
To ensure system scalability and security, parties can join or leave the TSS network. This means that the secret shares of the general system private key should be redistributed among the old/new parties. A change in the number of parties can change the threshold value for the number of signers required to sign the data. In that case, the key resharing process cannot be executed by means of the tss-lib library. Moreover, a change of the private key shares triggers additional processes of funds migration and ecosystem reconfiguration. Thus, the key resharing process is not performed automatically and should be handled manually by the system administrators in cooperation with the network parties.
Performing deposit
EVM networks
To initiate a transfer from an EVM network, the user should execute either the depositERC20 or the depositNative function.
depositERC20 function:
function depositERC20(
address token_, // token address that should be transferred
uint256 amount_, // amount of tokens to transfer
string calldata receiver_, // receiver address on the target network
string calldata network_, // destination network identifier
bool isWrapped_ // if the token is wrapped or not
)
Note:
- before executing the depositERC20 function, the user should approve the contract to spend the amount of tokens to be transferred;
- to obtain information about the available tokens to transfer, their addresses, chain identifiers and more, query the Cosmos Bridge Core bridge module.
depositNative function:
function depositNative(
string calldata receiver_, // receiver address on the target network
string calldata network_ // destination network identifier
) payable
After the transaction execution, the corresponding event will be emitted, either DepositedERC20 or DepositedNative.
To initiate the transfer processing, the user should provide any of the available parties with the deposit operation data:
- transaction hash - the hash of the transaction that contains the deposit operation;
- transaction nonce - the emitted event index, containing the information about the deposit operation and transfer memo;
- source chain id - the identifier of the source chain where the deposit operation was executed.
Bitcoin
To initiate a transfer from the Bitcoin network, the user should construct a transaction that meets the following requirements:
- the deposit transaction should contain an output VOUT-X (X is the index of the output) paying to the TSS network account address. The output amount will be tracked as the deposit amount and must not be below the dust threshold (1000 sats);
- the transaction should contain the memo with the required information about transfer parameters (destination address, chain id etc.) to be processed by the TSS network.
It should be included as VOUT-(X+1) output using the OP_RETURN script.
As the OP_RETURN script is limited to 80 bytes, the memo should be abbreviated and contain only the required information.
- For EVM networks, the memo should contain the destination address and the destination network identifier. Example: 0x0000..000-35443, where 0x0000..000 is the destination address and 35443 is the destination network identifier.
- For the Zano network, the memo should contain the Base58-decoded destination address (as in the default format it exceeds the 80-byte memo limit) and the destination network identifier. Example: addr..-35443, where addr.. is the Base58-decoded destination address and 35443 is the destination network identifier.
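The memo format above can be sketched as a small helper that joins the destination address and network identifier and enforces the 80-byte OP_RETURN payload limit. The helper name and example address are illustrative:

```go
package main

import "fmt"

// buildMemo joins the destination address and chain id in the
// "<address>-<chainID>" form described above and enforces the
// 80-byte OP_RETURN payload limit.
func buildMemo(dstAddr, chainID string) (string, error) {
	memo := dstAddr + "-" + chainID
	if len(memo) > 80 {
		return "", fmt.Errorf("memo is %d bytes, exceeds 80-byte OP_RETURN limit", len(memo))
	}
	return memo, nil
}

func main() {
	// Hypothetical EVM destination address and the 35443 chain id from the example.
	memo, err := buildMemo("0x28f0d8b88a1571dcaa3ec0dac2f0072a9cf32e3b", "35443")
	fmt.Println(memo, err)
}
```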
After the transaction is broadcast, the user should provide the TSS network with the deposit operation data:
- transaction hash - the hash of the transaction that contains the deposit operation, prepended with the 0x prefix (if not present);
- transaction nonce - the number X of the output that contains the deposit amount. The transaction memo can then be found by checking the next (VOUT-(X+1)) output;
- source chain id - the identifier of the source chain where the deposit operation was executed.
Zano
To initiate a transfer from the Zano network, the user should construct a transaction that meets the following requirements:
- the transaction type should be a burn_asset transaction;
- the amount of burned asset and its identifier will be tracked as the deposit amount and token;
- the transaction should contain the memo (located in the service_entries array) with the required information about transfer parameters (destination address, chain id, etc.) to be processed by the TSS network. It should be present in the Base64-decoded string format of the following structure:
type DestinationData struct {
Address string `json:"dst_add"`
ChainId string `json:"dst_net_id"`
}
- the transaction should be pointed to the TSS network account address using the point_tx_to_address transaction field. In this case, the burning transaction will be visible and processable by the TSS network.
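The memo handling for the structure above can be sketched as a round trip between the JSON form and its Base64 representation. The helper names and the example destination address are illustrative:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// DestinationData mirrors the structure from the text above.
type DestinationData struct {
	Address string `json:"dst_add"`
	ChainId string `json:"dst_net_id"`
}

// encodeMemo serializes the destination data to JSON and Base64-encodes it.
func encodeMemo(d DestinationData) string {
	raw, _ := json.Marshal(d)
	return base64.StdEncoding.EncodeToString(raw)
}

// decodeMemo reverses the transformation, as a processing party would.
func decodeMemo(s string) (DestinationData, error) {
	raw, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return DestinationData{}, err
	}
	var d DestinationData
	err = json.Unmarshal(raw, &d)
	return d, err
}

func main() {
	// Hypothetical EVM destination address; 35443 is the example chain id.
	dst := DestinationData{Address: "0x28f0d8b88a1571dcaa3ec0dac2f0072a9cf32e3b", ChainId: "35443"}
	memo := encodeMemo(dst)
	fmt.Println(memo)

	got, _ := decodeMemo(memo)
	fmt.Println(got.Address, got.ChainId)
}
```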
After the transaction is broadcast, the user should provide the TSS network with the deposit operation data:
- transaction hash - the hash of the transaction that contains the deposit operation, prepended with the 0x prefix (if not present);
- transaction nonce - the index of the service_entries array item with the transfer destination information;
- source chain id - the identifier of the source chain where the deposit operation was executed.
Bridging Parameters
To find the required information about the supported tokens and chains, the user should query the Cosmos Bridge Core bridge module, which contains the information about the available tokens, their addresses, chain identifiers and more.
Bridge Module
Description
Module for interacting with different blockchain networks and application bridge accounts/contracts. Implements the bridge logic for the application.
Contains:
- RPC client connection configuration for different blockchain networks;
- Token bridging logic: deposit validation, withdrawal forming and sending;
Components
- /chain: module for configuring a specific blockchain network connection and additional bridging params;
- /clients: module for interacting with the blockchain networks and bridges:
  - /clients/evm: module for interacting with EVM-based networks;
  - /clients/bitcoin: module for interacting with the Bitcoin network;
  - /clients/zano: module for interacting with the Zano network.
Supported Networks
Bridge module currently supports:
- EVM-based networks (Ethereum, Binance Smart Chain, etc.);
- Bitcoin;
- Zano.
Withdrawal Constructor
Withdrawal constructor is responsible for:
- forming withdrawal signing data or unsigned transactions based on provided deposit data;
- validating the data to sign that corresponds to the provided deposit data;
Withdrawal constructor is different for each supported network type, as each network has its own unique withdrawal algorithms.
1. EVM networks
- Signing data construction: according to the provided deposit data, the constructor forms the ERC20/native token withdrawal operation data and hashes it using the EIP-191 Signed Data Standard; the resulting hash is ready-to-sign data.
- Signing data validation: using the provided deposit data and the signing data, the constructor forms the withdrawal operation as in the previous step and compares the resulting hash with the provided one.
2. Bitcoin network
Signing data construction: according to the provided deposit data, the fundrawtransaction wallet RPC method is called to form a withdrawal transaction.
This will form an unsigned transaction with:
- selected inputs for funding the withdrawal;
- outputs for the receiver and change;
- properly calculated fee.
The fundrawtransaction method will be called with the following parameters:
- includeWatching - true, to include watch-only addresses (TSS pubkey) in the transaction;
- changeAddress - the address to send the change to, set to the TSS pubkey hash;
- changePosition - the position of the change output, set to the first index (second position);
- feeRate - the fee rate in BTC/kvB, set to the default value (0.00001000 BTC per kB).
NOTE: Bitcoin wallet should be configured to track only UTXOs available to be spent by the TSS private key (TSS pubkey watch-only mode).
As the scriptPubKey of each input is not known in advance (it is needed by the TSS parties to form the signing data), the constructor has to execute the listunspent RPC method, filter the used UTXOs, and get their scriptPubKeys.
Then, the constructor forms a signature hash for each input using the SIGHASH_ALL flag.
The resulting array of signature hashes is a ready-to-sign data.
Signing data validation: using the provided deposit data and the signing data, the constructor validates the transaction with the following steps:
- the listunspent wallet RPC method is called to get the list of all available UTXOs;
- for each input in the withdrawal transaction, the constructor checks:
  - that the UTXO is present in the list of available UTXOs;
  - that the UTXO is not used twice in the transaction;
  - that the constructed signature hash is equal to the one provided by the proposer.
- Check if the first output contains valid receiver PubKey script and withdrawal amount;
- Check if the second output contains valid change PubKey script (TSS pubkey hash);
- Ensure that no other outputs are present in the transaction.
- Check that transaction fees are calculated correctly:
- calculate the actual fee by subtracting the sum of the outputs from the sum of the inputs;
- get the expected transaction size by firstly mocking signature scripts with fake signatures;
- calculate the fee rate by dividing the actual fee by the transaction size;
- compare the calculated fee rate with the default one: if the tolerance (10% of the default fee rate) is exceeded, the transaction is considered invalid.
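The fee validation steps above can be sketched as a single check. Function and parameter names are illustrative; amounts are in BTC and the size is the mocked-signature transaction size in vbytes:

```go
package main

import "fmt"

const defaultFeeRate = 0.00001 // BTC/kvB, the default rate from above

// feeRateValid applies the checklist above: derive the actual fee as
// inputs minus outputs, divide by the transaction size in kvB, and accept
// only if the result stays within 10% of the default rate.
func feeRateValid(sumInputs, sumOutputs float64, vsizeBytes int) bool {
	actualFee := sumInputs - sumOutputs
	rate := actualFee / (float64(vsizeBytes) / 1000.0) // BTC per kvB
	tolerance := defaultFeeRate * 0.10
	diff := rate - defaultFeeRate
	if diff < 0 {
		diff = -diff
	}
	return diff <= tolerance
}

func main() {
	// 0.5 BTC in, 0.49999774 BTC out -> 0.00000226 BTC fee over 226 vbytes,
	// i.e. exactly the default 0.00001 BTC/kvB rate.
	fmt.Println(feeRateValid(0.5, 0.49999774, 226)) // true
}
```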
3. Zano network
- Signing data construction: according to the provided deposit data, the emit_asset request is sent to the Zano wallet RPC server, and the resulting VerifiedTxID field is a ready-to-sign data.
- Signing data validation: using the provided deposit data and provided additional data from the emit_asset response by the proposer, the constructor has the ability to decrypt transaction details using decrypt_tx_details method. Constructor validates:
- if the provided VerifiedTxID matches the decrypt_tx_details response;
- if the amount of tokens to be minted is correct;
- if the token receiver address is correct;
- if no additional outputs are present in the transaction (except the change).
Deposit fetcher
Description
Deposit fetcher submodule is designed to fetch deposit data and check deposit existence on the core using the provided network RPC connection and the Bridge Core connector.
Fetching data
Fetching data is performed through next steps:
- configuring source chain client with chain id from given deposit identifier
- validating deposit identifier data using source client
- fetching deposit data using rpc from client
- receiving source token info
- receiving token pair for deposit token, using source chain id, source token address and core connector
- receiving destination token info
- forming withdrawal amount using tokens info
At the end of fetching, the user receives a ready-to-insert deposit with all the necessary data.
Core Module
Description
Module for interacting with the Cosmos Bridge Core, especially with its bridge module.
Components
Connector
Core connector module is designed to query and save bridging data to the Cosmos Bridge Core.
Catch-Upper
Catch-upper module is designed to catch up with the processed transfers saved on the Cosmos Bridge Core. When a TSS party goes down, to start signing the pending transfers again, the catch-upper should sync the processed transfers from the Cosmos Bridge Core to prevent double-spending and party malfunction.
Subscriber
Subscriber module is designed to listen to the Cosmos Bridge Core events, especially the newly processed transfers. As the party may not be included in the current session signers set, the subscriber should listen for processed transfers and notify the party to update its internal state.
P2P Module
Description
Peer-to-peer (P2P) module that contains the core logic for the peer-to-peer communication between the signing TSS nodes (parties) in the network.
Broadcaster
Description
P2P broadcaster is responsible for broadcasting messages to all connected peers. It receives a list of peers to begin broadcasting messages to. It also can be used to broadcast messages to a specific set of peers.
Connection manager
Description
P2P connection manager is responsible for managing the peer-to-peer connections and their states. It holds grpc-connections for each peer and monitors their states. Different parts of the system can request a list of successfully-connected peers. A successful connection is a connection that has been established by checking the peer public key and a service mode match. As the party server can be run in TLS enabled/disabled mode, the connection manager should be able to handle both cases. In case of a TLS-enabled party server, the connection manager should configure clients with the TLS certificates. Otherwise, no additional configuration is required.
Inputs
Manager accepts:
- the list of peers to connect to;
- current service mode to identify ready-to-serve peers;
- client TLS certificate to identify itself to other peers (optional, in case of TLS-enabled mode).
Outputs
Manager provides:
- a list of successfully-connected peers;
- a grpc-connection to a specific peer by its public key;
- an option to subscribe to the parties' connection state changes.
Party server
Description
P2P party server is responsible for handling incoming connections from other peers.
TLS enabled/disabled modes
As the TSS protocol requires a secure connection between the parties, the server should be able to handle both TLS-enabled and disabled modes. In case of a TLS-disabled party server, the server should be able to accept incoming connections without any additional configuration. To identify the peer, server will use the public key from the peer's request.
NOTE: do not use the TLS-disabled mode in production environments, as anyone can use someone else's public key to connect to the party.
In case of a TLS-enabled party server, the server should be configured with the TLS certificates. It includes:
- server certificate;
- server private key;
- pool of CA certificates to verify the party certificates;
- parties' public keys to identify the peers.
TSS Module
Description
TSS module is responsible for the threshold signature scheme (TSS) that is used for signing the data required to perform the cross-chain transfer.
Keygener
Keygener is a submodule for generating the secret shares for the parties in the TSS network and the party private key. It is responsible for generating the secret shares and distributing them to the parties in the network. The generated party private key is used for signing the withdrawal data in the TSS network by all active parties. The secret shares are generated using the third-party tss-lib library.
Note:
- The main keygen process is performed only once when the network is initialized;
- The keygen process is performed by all parties in the network (they should be active and with the appropriate service mode);
- The keygen process can be reused in case of resharing the secret shares or adding new parties to the network to generate new party private key.
Inputs
To start the keygen process, the following inputs are required:
- list of active and ready parties to collaborate with;
- generated party pre-parameters (should be generated before starting the keygen process, see pre-params generation).
Outputs
Keygener provides the out channel where the parties should send messages.
After the keygen process is completed, the output for the local party is the secret share that is used with other parties to sign the data with the system private party key.
Distributor
Description
Distributor is a submodule for validating and distributing the incoming transfer deposits to the parties in the TSS network.
As every party in the TSS network is able to receive users' transfer requests, it should be able to distribute the incoming transfer deposits to other parties in the network. This is made to:
- accelerate the TSS signing process through background deposit validation before the signing process starts;
- prevent the situation when only a small group of parties receives the vast majority of the transfer deposits. This can lead to a situation where the proposer party (see Consensus module) has nothing to propose for signing and the session is stuck for a while.
Invalid deposits should be rejected and not distributed to the parties in the network.
Inputs
To start the deposit distribution process, the following inputs are required:
- list of active and ready parties to collaborate with;
- healthy database connection;
- incoming deposit identifiers.
Outputs
Distributor provides the out channel where incoming deposits should be sent.
Session
Description
Session is a submodule for managing the TSS session lifecycle.
Signing session
A TSS signing session is a set of operations that are performed by the parties in the network to process the withdrawal request. The main goal of the session is to define the current signers set, the data to be signed, sign the data, and finalize the transfer process.
Session consists of the following steps:
- Acceptance - reaching an agreement between the parties in the TSS network on the data to be signed next. Uses the Consensus submodule;
- Signing - signing the provided data by communicating with other parties in the TSS network. Uses the Signer submodule;
- Finalization - finalizing the signing process by saving data/executing other finalization steps. Uses the Finalizer submodule.
There are as many active sessions as the total number of supported chains in the system. Each session is responsible for processing the withdrawal requests on the specific chain. This is done to:
- prevent the mixing of withdrawal requests for the same chain (e.g. trying to sign the same data twice, using the same UTXO in different transactions, etc.);
- speed up the signing process by parallelizing the non-conflicting withdrawal requests processing.
Each session has its own lifecycle and an identifier that changes with each new session. A new session in this context is an old, finished session with a new (incremented) session identifier that is ready to process new withdrawal requests (for the same chain as the previous session) and waits for its start.
Keygen session
Keygen session is a special session that is used to generate the secret shares for the parties in the TSS network. It is performed only once when the network is initialized and the parties are ready to start the TSS signing process.
Session boundaries
To control the session duration, the session should be bounded by the time limits. Those limits are different for each step of the session (acceptance, signing, finalization) and session chain type. Also, each active session changes to the new one once in a constant time interval.
Here is the list of the signing session time bounds:
- EVM session:
- acceptance: 10 seconds;
- signing: 10 seconds;
- finalization step: 10 seconds;
- new session period: 30 seconds.
- Zano session:
- acceptance: 10 seconds;
- signing: 10 seconds;
- finalization step: 10 seconds;
- new session period: 30 seconds.
- Bitcoin session:
- acceptance: 10 seconds;
- signing: 10 seconds * number N of UTXOs to be signed in the transaction;
- finalization step: 10 seconds;
- new session period: 60 seconds.
In case of the session step timeout, the session should be finished and the new session should be initialized and wait for its start.
Keygen session deadline is 1 minute.
Session manager
Session manager is responsible for managing the set of sessions. It is responsible for:
- providing the specific session with other parties session messages;
- providing the requestor with the specific session information;
Catchup
For the initial sessions start, the parties are required to have the same session start time and initial session identifier.
In case when some party lost the connection and misses current session data, it should request the session information from other parties. Session information can include:
- current session identifier;
- session start time;
- session deadline;
Using this information, the party can calculate the current session identifier and session time bounds and catch up with the other parties by waiting for the current session deadline.
Consensus
Description
Consensus is a submodule for reaching an agreement between the parties in the TSS network on the data to be signed next. It is responsible for forming the withdrawal transaction and data to be signed on the current signing "session".
Mechanism
The consensus mechanism is based on proposer selection and data sharing between the parties in the network. There are two possible roles for a party in the consensus process:
- proposer - the party that selects the data to be signed and shares it with all parties in the network;
- signer - the party that validates and signs the data shared by the proposer.
Only one proposer is selected for the current signing session, while all parties can be signers. The proposer acts as a signer as well.
The consensus process is performed by the following steps:
- All parties in the network should deterministically choose the proposer for the current signing session. The proposer is selected by a deterministic function based on the ChaCha8 pseudo-random number generator.
- The proposer selects the unsigned withdrawal request based on the session context (e.g. a deposit on the specific chain). Using the provided signing data constructor function (different for each chain), the proposer constructs the data to be signed, plus other metadata if needed, and shares them with all parties in the network. If there is no data to be signed, the proposer waits for the next session / broadcasts the no-signing-data message.
- Parties that received proposer request (signers) should:
- check if request provider matches the current session proposer;
- check if deposit is valid and unsigned yet;
- try to construct the same or validate existing data to sign, optionally using metadata (different for each chain);
- reply with acknowledgement status:
- ACK if everything is fine;
- NACK if something isn't valid (already signed proposal, non-existent deposit etc).
- While signers are ACKing or NACKing the proposer request, the proposer collects all ACKed responses. It checks that the number of ACKs N is equal to or greater than the signing threshold value T:
- if true, the proposer deterministically selects T signers from the N signers that ACKed the signing request. They will be the signers of the current session and are notified by the proposer about the current session signer set;
- if false, the proposer waits for the next session / broadcasts the not-enough-signers message.
- Notified signers receive the current session signing set and can additionally validate that all parties forming the signers set are valid and active. Signers that are not included in the current signers set can wait until the consensus session deadline and conclude that they are not part of the current signers set. Optionally, they can be notified by the proposer that they won't take part in the current signing process.
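The first step above, deterministic proposer selection, can be sketched with Go's math/rand/v2 ChaCha8 generator. The seed derivation shown here is an assumption; the source only states that a deterministic function over ChaCha8 is used:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/rand/v2"
)

// selectProposer picks the session proposer deterministically: every party
// seeds a ChaCha8 generator with the same session-derived bytes, so all
// parties independently arrive at the same index. The seed derivation is
// illustrative only.
func selectProposer(parties []string, chainID string, sessionID int64) string {
	seed := sha256.Sum256([]byte(fmt.Sprintf("%s-%d", chainID, sessionID)))
	rng := rand.New(rand.NewChaCha8(seed))
	return parties[rng.IntN(len(parties))]
}

func main() {
	parties := []string{"party-a", "party-b", "party-c", "party-d"}
	// Same inputs -> same proposer on every party.
	fmt.Println(selectProposer(parties, "35443", 17))
	fmt.Println(selectProposer(parties, "35443", 17))
}
```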
Inputs
To start the consensus process, the following inputs are required:
- list of active and ready parties to collaborate with;
- session context:
- current session and processing chain identifier;
- unsigned transfer selector (based on the chain identifier);
- signing data constructor function (different for each chain);
- signing data validator function (different for each chain);
Outputs
After the consensus process is completed, the output is the data to be signed and the list of parties that will sign the data. If the party is not included in the signers list, the signing data will be empty, and it should wait for the next session.
Signer
Description
Signer is a submodule for signing the provided data by communicating with other parties in the TSS network.
P2P communication is provided by the P2P module, while the TSS signing is provided by the third-party tss-lib library.
Note:
- No signing data validation is performed in this module;
- All parties should start the signing process at the same time;
- All parties should sign exactly the same data;
- There are enough parties to reach the threshold.
It is assumed that the data is validated before being passed to the signer and all parties agreed on the data to be signed.
Inputs
To start the signing process, the following inputs are required:
- data to be signed;
- list of parties to collaborate with (or broadcaster to send the data to all parties);
- signing threshold;
- local party secret share.
Outputs
After the signing process is completed, the output is the signature of the data and the error if any (timeout, not enough parties, signing error, etc.).
Finalizer
Description
Finalizer is a submodule for finalizing the signing process. It is responsible for saving the signed transfers to the Bridge Core. Additionally, it can be used to broadcast the signed transfers to the network or do other finalization steps (different for each chain).
Finalization process
EVM networks
For EVM networks, the finalization process is performed only by saving the signed withdrawal data to the Bridge Core. Then it can be used by anyone to construct and broadcast the withdrawal transaction to the destination network.
Note: the TSS network does not broadcast the signed EVM transactions to the network; the user should do it manually and pay the gas fee.
Bitcoin network
For the Bitcoin network, the finalization process, in addition to saving the signed withdrawal data to the Bridge Core, also broadcasts the signed transaction to the Bitcoin network.
Zano network
For the Zano network, the finalization process, in addition to saving the signed withdrawal data to the Bridge Core, also broadcasts the signed transaction to the Zano network.
Note: currently, the finalization process should be performed by the session proposer, see Consensus for more details;
Inputs
To start the finalization process, the following inputs are required:
- signed transfer data;
- Bridge Core connection;
- optional data for finalization (different for each chain).
Outputs
Finalizer does not provide any outputs, except for the finalization error if any.
Secrets module
Description
Module for managing application confidential and crucial secrets.
Currently, supports only HashiCorp Vault as a secret store.
Configuration
To connect to the Vault, the following environment variables should be set:
- VAULT_PATH - the path to the Vault;
- VAULT_TOKEN - the token to access the Vault;
- MOUNT_PATH - the mount path where the application secrets are stored. Note: use the kv v2 secrets engine.
Next secrets should be set in the Vault key-value storage under the MOUNT_PATH for proper service configuration:
- for keygen mode:
  - keygen_preparams - TSS pre-parameters for the threshold signature key generation;
  - cosmos_account - Cosmos SDK account private key (in hex format).
- for signing mode:
  - all the secrets from the keygen mode;
  - tss_share - TSS key share for the local party threshold signature signing.
API module
Description
API module is designed to handle users' HTTP requests, provide WS connections, and host a nested gRPC server.
API server has 2 HTTP endpoints:
- submit - where the user can submit a transfer with its identifier;
- check - used to check the withdrawal status of a submitted deposit.
The WS connection provides the user with updates when the deposit status changes.
It is available via the ws/check endpoint.
Submit request body example:
{
"tx_hash":"0x161075d666fb77421e19362f6c94b1efe9f1a0991499f10be094a5e2f60c147d",
"chain_id": "35442",
"tx_nonce": 0
}
Handlers
Two handler functions, CheckTx and SubmitTx, process user requests.
CheckTx
The CheckTx handler accepts a deposit identifier and checks the deposit status.
Checking steps include:
- identifier struct validation;
- checking whether a deposit with the provided identifier exists;
- forming a CheckWithdrawalResponse to send to the user.
Withdrawal status response example:
{
"depositIdentifier": {
"txHash": "0x161075d666fb77421e19362f6c94b1efe9f1a0991499f10be094a5e2f60c147d",
"txNonce": "0",
"chainId": "35442"
},
"transferData": {
"sender": "0xbeefd475a76ec312502ba7b566a9b4cea91ab030",
"receiver": "0xbeefd475a76ec312502ba7b566a9b4cea91ab030",
"depositAmount": "1212",
"withdrawalAmount": "1212",
"depositAsset": "0x0000000000000000000000000000000000000000",
"withdrawalAsset": "0x10b0eebd5758c814eb333fc23a229efa8f5432ba",
"isWrappedAsset": "false",
"depositBlock": "15658600"
},
"withdrawalStatus": "WITHDRAWAL_STATUS_PENDING"
}
SubmitTx
SubmitTx is responsible for getting the deposit data with the provided identifier from the network and the Bridge Core, and saving it to the local database to process it later.
A transaction is submitted through several steps:
- identifier struct validation;
- checking whether a deposit with the provided identifier exists; if it is already in the database, the user gets an error response;
- passing the formed database identifier to the Deposit processor submodule to fetch all deposit data;
- if the deposit is valid and the data is fetched successfully, the deposit data is inserted into the database with the pending status; if something goes wrong:
  - if the provided deposit data was invalid, the deposit data is inserted into the database with the invalid status;
  - if the service failed to fetch the deposit data, it is inserted into the database with the failed status.
Deposit processor submodule
Description
The processor submodule is designed to fetch deposit data using the provided network RPC connection and the Bridge Core connector.
Fetching data
Fetching data is performed through the following steps:
- configuring source chain client with chain id from given deposit identifier
- validating deposit identifier data using source client
- fetching deposit data using rpc from client
- receiving source token info
- receiving token pair for deposit token, using source chain id, source token address and core connector
- receiving destination token info
- forming withdrawal amount using tokens info
At the end of fetching, the user receives a ready-to-insert deposit with all the necessary data.
Service configuration
To provide the service with the required settings, you need to:
- create the configuration file;
- run the Vault server with configured application secrets;
Configuration file
The configuration file is based on the YAML format and should be provided to the service during the launch or commands execution. It stores the service settings, network settings, and other required parameters.
Example of configuration file:
log:
level: debug
disable_sentry: true
db:
url:
listener-grpc:
addr: :0000
parties:
list:
- core_address: bridge1...
connection: conn
pubkey: pub
- core_address: bridge1...
connection: conn
pubkey: pub
- core_address: bridge1...
connection: conn
pubkey: pub
tss:
keygen:
start_time: "2025-01-08 00:21:20"
session_id: abcd
signing:
start_time: "2025-01-08 00:21:20"
session_id: abcd
threshold: 1
Vault configuration
HashiCorp Vault is used to store the most sensitive data like keys, private TSS key shares etc.
Configuration
See the Secrets module for more details on how to configure the Vault secrets.
Environment variables
To configure the Vault credentials, the following environment variables should be set:
VAULT_PATH={path} -- the path to the Vault
VAULT_TOKEN={token} -- the token to access the Vault
MOUNT_PATH={mount_path} -- the mount path where the application secrets are stored
Example configuration:
export VAULT_PATH=http://localhost:8200
export VAULT_TOKEN=root
export MOUNT_PATH=secret
CLI
Description
Contains the command-line interface (CLI) for the project.
Commands
Some of the commands require the mandatory or optional flags to be passed. See the Flags section for more details about specific flag definition and usage.
Database Migrations
Before starting the server, the user has to migrate the database up to be able to process deposits correctly. Commands:
- tss-svc service migrate up: Migrates the database schema to the latest version
- tss-svc service migrate down: Rolls back the database schema to the previous version
Required flags:
--config (can be omitted if the default config file path is used)
Run server
The service can be run in two modes: keygen and signing.
- Signing mode allows the user to take part in signing sessions and process incoming deposits.
- Keygen mode is designed to generate the user's shares used in the signing process. For more details see Running the service.
Commands:
- tss-svc service run keygen: Runs the TSS service in the keygen mode
- tss-svc service run signing: Runs the TSS service in the signing mode
Required flags:
--config (can be omitted if the default config file path is used)
Optional flags:
--output
Sign single message
Commands:
tss-svc service sign [msg]: Signs a given message using the TSS service
Required flags:
--config (can be omitted if the default config file path is used)
Optional flags:
--output
Generation
- tss-svc helpers generate preparams: Generates a new set of pre-parameters for the TSS service. Optional flags: --output, --config
- tss-svc helpers generate cosmos-account: Generates a new Cosmos SDK private key and the corresponding account address. Optional flags: --output, --config
- tss-svc helpers generate transaction: Generates a new transaction based on the given data. It is used for resharing purposes. Should be investigated further.
Parsing
Commands:
- tss-svc helpers parse address-btc [x-cord] [y-cord]: Parses a BTC address from the given point
- tss-svc helpers parse address-eth [x-cord] [y-cord]: Parses an ETH address from the given point
- tss-svc helpers parse pubkey [x-cord] [y-cord]: Parses a public key from the given point
Optional flags:
--network (network type (mainnet/testnet); mainnet is used by default)
Flags
- --config (-c): Specifies the path to the configuration file. By default, the config file path is set to config.yaml. See Configuration for more details.
- --output (-o): Specifies the data output type for the command. Use the flag with a parameter to change the desired output:
  - console: stdout, default output;
  - file: write the output to a JSON file; use the --path flag to specify the file path, default is cosmos-account.json;
  - vault: write the output to a HashiCorp Vault (requires a running Vault server and configured environment variables; used alongside the --config flag. See Configuration for more details).
Running The Service
Service can be run in two main modes: keygen and signing. Also, the service can execute additional commands like database migrations, message signing, etc.
Check the available commands and flags in the CLI documentation.
Keygen mode
Before starting the service in the keygen mode:
- set up the secrets store. See Configuring Vault for more details.
- make sure the configuration file is set up correctly. See Configuration file for more details.
- make sure the keygen session start_time and session_id are the same for all parties.
To run the service in the keygen mode, execute the following command:
tss-svc service run keygen -c /path/to/config.yaml -o console|file|vault
For example, to run the service in the keygen mode with the ./configs/config.yaml and output the result (local party private share) to the Vault, run the following command:
tss-svc service run keygen -c ./configs/config.yaml -o vault
Signing mode
Before starting the service in the signing mode:
- set up the secrets store. See Configuring Vault for more details.
- make sure the configuration file is set up correctly. See Configuration file for more details.
- make sure the signing session start_time and session_id are the same for all parties.
To run the service in the signing mode, execute the following command:
tss-svc service run sign -c /path/to/config.yaml -o console|file|vault
For example, to run the service in the signing mode with the ./configs/config.yaml and output the result to the console, run the following command:
tss-svc service run sign -c ./configs/config.yaml -o console