Bitcoin Wiki - New pages [en], retrieved 2020-01-22 (MediaWiki 1.30.0)
https://en.bitcoin.it/wiki/BIP_0342 (BIP 342, 2020-01-19)<p>934: Update BIP text with latest version from https://github.com/sipa/bips/blob/a92523e57c252ca9/bip-0342.mediawiki</p>
<hr />
<div>{{bip}}<br />
{{BipMoved|bip-0342.mediawiki}}<br />
<br />
<pre><br />
BIP: 342<br />
Layer: Consensus (soft fork)<br />
Title: Validation of Taproot Scripts<br />
Author: Pieter Wuille <pieter.wuille@gmail.com><br />
Jonas Nick <jonasd.nick@gmail.com><br />
Anthony Towns <aj@erisian.com.au><br />
Comments-Summary: No comments yet.<br />
Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0342<br />
Status: Draft<br />
Type: Standards Track<br />
Created: 2020-01-19<br />
License: BSD-3-Clause<br />
Requires: 340, 341<br />
Post-History: 2019-05-06: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html [bitcoin-dev] Taproot proposal<br />
</pre><br />
<br />
==Introduction==<br />
<br />
===Abstract===<br />
<br />
This document specifies the semantics of the initial scripting system under [bip-0341.mediawiki BIP341].<br />
<br />
===Copyright===<br />
<br />
This document is licensed under the 3-clause BSD license.<br />
<br />
===Motivation===<br />
<br />
[bip-0341.mediawiki BIP341] proposes improvements to just the script structure, but some of its goals are incompatible with the semantics of certain opcodes within the scripting language itself.<br />
While it is possible to deal with these in separate optional improvements, their impact is not guaranteed unless they are addressed simultaneously with [bip-0341.mediawiki BIP341] itself.<br />
<br />
Specifically, the goal is making '''Schnorr signatures''', '''batch validation''', and '''signature hash''' improvements available to spends that use the script system as well.<br />
<br />
==Design==<br />
<br />
In order to achieve these goals, signature opcodes <code>OP_CHECKSIG</code> and <code>OP_CHECKSIGVERIFY</code> are modified to verify Schnorr signatures as specified in [bip-0340.mediawiki BIP340] and to use a signature message algorithm based on the common message calculation in [bip-0341.mediawiki BIP341].<br />
The tapscript signature message also simplifies <code>OP_CODESEPARATOR</code> handling and makes it more efficient.<br />
<br />
The inefficient <code>OP_CHECKMULTISIG</code> and <code>OP_CHECKMULTISIGVERIFY</code> opcodes are disabled.<br />
Instead, a new opcode <code>OP_CHECKSIGADD</code> is introduced to allow creating the same multisignature policies in a batch-verifiable way.<br />
Tapscript uses a new, simpler signature opcode limit, fixing complicated interactions with transaction weight.<br />
Furthermore, a potential malleability vector is eliminated by requiring MINIMALIF.<br />
<br />
Tapscript can be upgraded through soft forks by defining unknown key types, for example to add new <code>hash_types</code> or signature algorithms.<br />
Additionally, the new tapscript <code>OP_SUCCESS</code> opcodes allow introducing new opcodes more cleanly than through <code>OP_NOP</code>.<br />
<br />
==Specification==<br />
<br />
The rules below only apply when validating a transaction input for which all of the conditions below are true:<br />
* The transaction input is a '''segregated witness spend''' (i.e., the scriptPubKey contains a witness program as defined in [bip-0141.mediawiki BIP141]).<br />
* It is a '''taproot spend''' as defined in [bip-0341.mediawiki BIP341] (i.e., the witness version is 1, the witness program is 32 bytes, and it is not P2SH wrapped).<br />
* It is a '''script path spend''' as defined in [bip-0341.mediawiki BIP341] (i.e., after removing the optional annex from the witness stack, two or more stack elements remain).<br />
* The leaf version is ''0xc0'' (i.e. the first byte of the last witness element after removing the optional annex is ''0xc0'' or ''0xc1''), marking it as a '''tapscript spend'''.<br />
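The four conditions above can be sketched as a single predicate (an illustrative Python sketch, not part of the BIP; the helper name is hypothetical and annex/P2SH handling is simplified):<br />

```python
LEAF_VERSION_TAPSCRIPT = 0xc0
ANNEX_TAG = 0x50

def is_tapscript_spend(witness_version, witness_program, witness_stack):
    """Sketch of the four conditions under which the BIP342 rules apply."""
    # Taproot spend: segwit version 1 with a 32-byte witness program.
    # (P2SH-wrapped spends are excluded earlier in real validation.)
    if witness_version != 1 or len(witness_program) != 32:
        return False
    stack = list(witness_stack)
    # Remove the optional annex: present when there are at least two
    # elements and the last one starts with 0x50.
    if len(stack) >= 2 and stack[-1][:1] == bytes([ANNEX_TAG]):
        stack.pop()
    # Script path spend: two or more stack elements must remain.
    if len(stack) < 2:
        return False
    # Tapscript spend: leaf version 0xc0, i.e. the control block's first
    # byte is 0xc0 or 0xc1 (the low bit encodes the output key's parity).
    return len(stack[-1]) > 0 and (stack[-1][0] & 0xfe) == LEAF_VERSION_TAPSCRIPT
```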
<br />
Validation of such inputs must be equivalent to performing the following steps in the specified order.<br />
# If the input is invalid due to BIP141 or BIP341, fail.<br />
# The script as defined in BIP341 (i.e., the penultimate witness stack element after removing the optional annex) is called the '''tapscript''' and is decoded into opcodes, one by one:<br />
## If any opcode numbered ''80, 98, 126-129, 131-134, 137-138, 141-142, 149-153, 187-254'' is encountered, validation succeeds (none of the rules below apply). This is true even if later bytes in the tapscript would fail to decode otherwise. These opcodes are renamed to <code>OP_SUCCESS80</code>, ..., <code>OP_SUCCESS254</code>, and collectively known as <code>OP_SUCCESSx</code><ref>'''<code>OP_SUCCESSx</code>''' <code>OP_SUCCESSx</code> is a mechanism to upgrade the Script system. Using an <code>OP_SUCCESSx</code> before its meaning is defined by a softfork is insecure and leads to fund loss. The inclusion of an <code>OP_SUCCESSx</code> in a script will pass it unconditionally. It precedes any script execution rules to avoid the difficulties in specifying various edge cases, for example: <code>OP_SUCCESSx</code> in a script with an input stack larger than 1000 elements, <code>OP_SUCCESSx</code> after too many signature opcodes, or even scripts with conditionals lacking <code>OP_ENDIF</code>. The mere existence of an <code>OP_SUCCESSx</code> anywhere in the script guarantees a pass for all such cases. The <code>OP_SUCCESSx</code> opcodes are similar to <code>OP_RETURN</code> in very early bitcoin versions (v0.1 up to and including v0.3.5). The original <code>OP_RETURN</code> terminated script execution immediately, returning pass or fail based on the top stack element at the moment of termination. This was one of the major design flaws in the original bitcoin protocol, as it permitted unconditional third-party theft by placing an <code>OP_RETURN</code> in <code>scriptSig</code>. This is not a concern in the present proposal, since it is not possible for a third party to inject an <code>OP_SUCCESSx</code> into the validation process: the <code>OP_SUCCESSx</code> is part of the script (and thus committed to by the taproot output), implying the consent of the coin owner. <code>OP_SUCCESSx</code> can be used for a variety of upgrade possibilities:<br />
* An <code>OP_SUCCESSx</code> could be turned into a functional opcode through a softfork. Unlike <code>OP_NOPx</code>-derived opcodes which only have read-only access to the stack, <code>OP_SUCCESSx</code> may also write to the stack. Any rule changes to an <code>OP_SUCCESSx</code>-containing script may only turn a valid script into an invalid one, and this is always achievable with softforks.<br />
* Since <code>OP_SUCCESSx</code> precedes the size checks of the initial stack and of push opcodes, an <code>OP_SUCCESSx</code>-derived opcode requiring stack elements bigger than 520 bytes may uplift the limit in a softfork.<br />
* <code>OP_SUCCESSx</code> may also redefine the behavior of existing opcodes so they could work together with the new opcode. For example, if an <code>OP_SUCCESSx</code>-derived opcode works with 64-bit integers, it may also allow the existing arithmetic opcodes in the ''same script'' to do the same.<br />
* Given that <code>OP_SUCCESSx</code> even causes potentially unparseable scripts to pass, it can be used to introduce multi-byte opcodes, or even a completely new scripting language when prefixed with a specific <code>OP_SUCCESSx</code> opcode.</ref>.<br />
## If any push opcode fails to decode because it would extend past the end of the tapscript, fail.<br />
# If the '''initial stack''' as defined in BIP341 (i.e., the witness stack after removing both the optional annex and the two last stack elements after that) violates any resource limits (stack size, and size of the elements in the stack; see "Resource Limits" below), fail. Note that this check can be bypassed using <code>OP_SUCCESSx</code>.<br />
# The tapscript is executed according to the rules in the following section, with the initial stack as input.<br />
## If execution fails for any reason, fail.<br />
## If the execution results in anything but exactly one element on the stack which evaluates to true with <code>CastToBool()</code>, fail.<br />
# If this step is reached without encountering a failure, validation succeeds.<br />
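The decoding pass in step 2 can be sketched as follows (illustrative Python, not part of the BIP; opcode numbers follow the standard Script encoding, where 1-75 are direct pushes and 76-78 are <code>OP_PUSHDATA1/2/4</code>):<br />

```python
# Opcode numbers that are OP_SUCCESSx in tapscript.
OP_SUCCESS = set([80, 98, *range(126, 130), *range(131, 135),
                  *range(137, 139), *range(141, 143),
                  *range(149, 154), *range(187, 255)])

def scan_tapscript(script):
    """Decode opcodes one by one.  Returns 'success' if an OP_SUCCESSx is
    encountered (later bytes need not decode), 'fail' if a push opcode
    would extend past the end of the tapscript, and 'execute' otherwise."""
    i = 0
    while i < len(script):
        op = script[i]
        i += 1
        if op in OP_SUCCESS:
            return "success"
        if op <= 75:                  # OP_0 / direct push of `op` bytes
            n = op
        elif op == 76:                # OP_PUSHDATA1: 1-byte length
            if i + 1 > len(script):
                return "fail"
            n = script[i]; i += 1
        elif op == 77:                # OP_PUSHDATA2: 2-byte length
            if i + 2 > len(script):
                return "fail"
            n = int.from_bytes(script[i:i+2], "little"); i += 2
        elif op == 78:                # OP_PUSHDATA4: 4-byte length
            if i + 4 > len(script):
                return "fail"
            n = int.from_bytes(script[i:i+4], "little"); i += 4
        else:
            n = 0                     # non-push opcode: no payload
        if i + n > len(script):
            return "fail"             # push extends past the end
        i += n
    return "execute"
```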
<br />
===Script execution===<br />
<br />
The execution rules for tapscript are based on those for P2WSH according to BIP141, including the <code>OP_CHECKLOCKTIMEVERIFY</code> and <code>OP_CHECKSEQUENCEVERIFY</code> opcodes defined in [bip-0065.mediawiki BIP65] and [bip-0112.mediawiki BIP112], but with the following modifications:<br />
* '''Disabled script opcodes''' The following script opcodes are disabled in tapscript: <code>OP_CHECKMULTISIG</code> and <code>OP_CHECKMULTISIGVERIFY</code><ref>'''Why are <code>OP_CHECKMULTISIG</code> and <code>OP_CHECKMULTISIGVERIFY</code> disabled, and not turned into OP_SUCCESSx?''' This is a precaution to make sure people who accidentally keep using <code>OP_CHECKMULTISIG</code> in Tapscript notice a problem immediately. It also avoids the complication of script disassemblers needing to become context-dependent.</ref>. The disabled opcodes behave in the same way as <code>OP_RETURN</code>, by failing and terminating the script immediately when executed, and being ignored when found in an unexecuted branch of the script.<br />
* '''Consensus-enforced MINIMALIF''' The MINIMALIF rules, which are only a standardness rule in P2WSH, are consensus enforced in tapscript. This means that the input argument to the <code>OP_IF</code> and <code>OP_NOTIF</code> opcodes must be either exactly 0 (the empty vector) or exactly 1 (the one-byte vector with value 1)<ref>'''Why make MINIMALIF consensus?''' This makes it considerably easier to write non-malleable scripts that take branch information from the stack.</ref>.<br />
* '''OP_SUCCESSx opcodes''' As listed above, some opcodes are renamed to <code>OP_SUCCESSx</code>, and make the script unconditionally valid.<br />
* '''Signature opcodes'''. The <code>OP_CHECKSIG</code> and <code>OP_CHECKSIGVERIFY</code> opcodes are modified to operate on Schnorr public keys and signatures (see [bip-0340.mediawiki BIP340]) instead of ECDSA, and a new opcode <code>OP_CHECKSIGADD</code> is added.<br />
** The opcode 186 (<code>0xba</code>) is named as <code>OP_CHECKSIGADD</code>. <ref>'''<code>OP_CHECKSIGADD</code>''' This opcode is added to compensate for the loss of <code>OP_CHECKMULTISIG</code>-like opcodes, which are incompatible with batch verification. <code>OP_CHECKSIGADD</code> is functionally equivalent to <code>OP_ROT OP_SWAP OP_CHECKSIG OP_ADD</code>, but only takes 1 byte. All <code>CScriptNum</code>-related behaviours of <code>OP_ADD</code> are also applicable to <code>OP_CHECKSIGADD</code>.</ref><ref>'''Alternatives to <code>CHECKMULTISIG</code>''' There are multiple ways of implementing a threshold ''k''-of-''n'' policy using Taproot and Tapscript:<br />
* '''Using a single <code>OP_CHECKSIGADD</code>-based script''' A <code>CHECKMULTISIG</code> script <code>m <pubkey_1> ... <pubkey_n> n CHECKMULTISIG</code> with witness <code>0 <signature_1> ... <signature_m></code> can be rewritten as script <code><pubkey_1> CHECKSIG ... <pubkey_n> CHECKSIGADD m NUMEQUAL</code> with witness <code><w_1> ... <w_n></code>. Every witness element <code>w_i</code> is either a signature corresponding to <code>pubkey_i</code> or an empty vector. A similar <code>CHECKMULTISIGVERIFY</code> script can be translated to BIP342 by replacing <code>NUMEQUAL</code> with <code>NUMEQUALVERIFY</code>. This approach has very similar characteristics to the existing <code>OP_CHECKMULTISIG</code>-based scripts.<br />
* '''Using a ''k''-of-''k'' script for every combination''' A ''k''-of-''n'' policy can be implemented by splitting the script into several leaves of the Merkle tree, each implementing a ''k''-of-''k'' policy using <code><pubkey_1> CHECKSIGVERIFY ... <pubkey_(n-1)> CHECKSIGVERIFY <pubkey_n> CHECKSIG</code>. This may be preferable for privacy reasons over the previous approach, as it only exposes the participating public keys, but it is only more cost effective for small values of ''k'' (1-of-''n'' for any ''n'', 2-of-''n'' for ''n &ge; 6'', 3-of-''n'' for ''n &ge; 9'', ...). Furthermore, the signatures here commit to the branch used, which means signers need to be aware of which other signers will be participating, or produce signatures for each of the tree leaves.<br />
* '''Using an aggregated public key for every combination''' Instead of building a tree where every leaf consists of ''k'' public keys, it is possible to instead build a tree where every leaf contains a single ''aggregate'' of those ''k'' keys using [https://eprint.iacr.org/2018/068 MuSig]. This approach is far more efficient, but does require a 3-round interactive signing protocol to jointly produce the (single) signature.<br />
* '''Native Schnorr threshold signatures''' Multisig policies can also be realized with [http://cacr.uwaterloo.ca/techreports/2001/corr2001-13.ps threshold signatures] using verifiable secret sharing. This results in outputs and inputs that are indistinguishable from single-key payments, but at the cost of needing an interactive protocol (and associated backup procedures) before determining the address to send to.</ref><br />
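As a concrete illustration of the first alternative, the <code>OP_CHECKSIGADD</code> rewrite of a ''k''-of-''n'' policy can be generated mechanically (a hypothetical helper producing a human-readable ASM string, not part of the BIP):<br />

```python
def checksigadd_multisig_asm(pubkeys, k):
    """Build the k-of-n tapscript described above as an ASM string:
    <pk1> CHECKSIG <pk2> CHECKSIGADD ... <pkn> CHECKSIGADD k NUMEQUAL.
    The witness is then one element per key: a signature or an empty vector."""
    parts = []
    for i, pk in enumerate(pubkeys):
        parts.append(pk)
        # The first key uses CHECKSIG; every later key adds to the count.
        parts.append("OP_CHECKSIG" if i == 0 else "OP_CHECKSIGADD")
    parts += [str(k), "OP_NUMEQUAL"]
    return " ".join(parts)
```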
<br />
===Rules for signature opcodes===<br />
<br />
The following rules apply to <code>OP_CHECKSIG</code>, <code>OP_CHECKSIGVERIFY</code>, and <code>OP_CHECKSIGADD</code>.<br />
<br />
* For <code>OP_CHECKSIGVERIFY</code> and <code>OP_CHECKSIG</code>, the public key (top element) and a signature (second to top element) are popped from the stack.<br />
** If fewer than 2 elements are on the stack, the script MUST fail and terminate immediately.<br />
* For <code>OP_CHECKSIGADD</code>, the public key (top element), a <code>CScriptNum</code> <code>n</code> (second to top element), and a signature (third to top element) are popped from the stack.<br />
** If fewer than 3 elements are on the stack, the script MUST fail and terminate immediately.<br />
** If <code>n</code> is larger than 4 bytes, the script MUST fail and terminate immediately.<br />
* If the public key size is zero, the script MUST fail and terminate immediately.<br />
* If the public key size is 32 bytes, it is considered to be a public key as described in BIP340:<br />
** If the signature is not the empty vector, the signature is validated against the public key (see the next subsection). Validation failure in this case immediately terminates script execution with failure.<br />
* If the public key size is not zero and not 32 bytes, the public key is of an ''unknown public key type''<ref>'''Unknown public key types''' allow adding new signature validation rules through softforks. A softfork could add actual signature validation which either passes or makes the script fail and terminate immediately. This way, new <code>SIGHASH</code> modes can be added, as well as [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016549.html NOINPUT-tagged public keys] and a public key constant which is replaced by the taproot internal key for signature validation.</ref> and no actual signature verification is applied. During execution of the signature opcodes, such keys behave exactly like known public key types, except that signature validation is considered to be successful.<br />
* If the script did not fail and terminate before this step, regardless of the public key type:<br />
** If the signature is the empty vector:<br />
*** For <code>OP_CHECKSIGVERIFY</code>, the script MUST fail and terminate immediately.<br />
*** For <code>OP_CHECKSIG</code>, an empty vector is pushed onto the stack, and execution continues with the next opcode.<br />
*** For <code>OP_CHECKSIGADD</code>, a <code>CScriptNum</code> with value <code>n</code> is pushed onto the stack, and execution continues with the next opcode.<br />
** If the signature is not the empty vector, the opcode is counted towards the sigops budget (see further).<br />
*** For <code>OP_CHECKSIGVERIFY</code>, execution continues without any further changes to the stack.<br />
*** For <code>OP_CHECKSIG</code>, a 1-byte value <code>0x01</code> is pushed onto the stack.<br />
*** For <code>OP_CHECKSIGADD</code>, a <code>CScriptNum</code> with value of <code>n + 1</code> is pushed onto the stack.<br />
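The stack rules above can be sketched in Python (an illustration, not a consensus implementation): <code>verify_sig</code> stands in for the BIP340 validation of the next subsection, and <code>cscriptnum</code> is a simplified minimal encoding covering only small non-negative numbers.<br />

```python
def cscriptnum(n):
    """Minimal CScriptNum encoding, small non-negative values only (illustration)."""
    if n == 0:
        return b""
    out = bytearray()
    while n:
        out.append(n & 0xff)
        n >>= 8
    if out[-1] & 0x80:                # avoid setting the sign bit
        out.append(0x00)
    return bytes(out)

def exec_sig_opcode(opcode, stack, verify_sig):
    """Sketch of the rules above; returns False when the script must fail."""
    need = 3 if opcode == "OP_CHECKSIGADD" else 2
    if len(stack) < need:
        return False
    pubkey = stack.pop()
    n = None
    if opcode == "OP_CHECKSIGADD":
        num = stack.pop()
        if len(num) > 4:              # n larger than 4 bytes: fail
            return False
        n = int.from_bytes(num, "little")   # non-negative case only
    sig = stack.pop()
    if len(pubkey) == 0:
        return False
    # 32-byte keys are BIP340 keys; other nonzero sizes are unknown key
    # types, for which validation is considered successful.
    if sig and len(pubkey) == 32 and not verify_sig(sig, pubkey):
        return False
    if not sig:                       # empty signature
        if opcode == "OP_CHECKSIGVERIFY":
            return False
        stack.append(b"" if opcode == "OP_CHECKSIG" else cscriptnum(n))
    else:                             # non-empty: also counts toward the budget
        if opcode == "OP_CHECKSIG":
            stack.append(b"\x01")
        elif opcode == "OP_CHECKSIGADD":
            stack.append(cscriptnum(n + 1))
    return True
```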
<br />
===Signature validation===<br />
<br />
To validate a signature ''sig'' with public key ''p'':<br />
* Compute the tapscript message extension ''ext'', consisting of the concatenation of:<br />
** ''tapleaf_hash'' (32): the tapleaf hash as defined in [bip-0341.mediawiki BIP341]<br />
** ''key_version'' (1): a constant value ''0x00'' representing the current version of public keys in the tapscript signature opcode execution.<br />
** ''codesep_pos'' (4): the opcode position of the last executed <code>OP_CODESEPARATOR</code> before the currently executed signature opcode, with the value in little endian (or ''0xffffffff'' if none executed). The first opcode in a script has a position of 0. A multi-byte push opcode is counted as one opcode, regardless of the size of data being pushed.<br />
* If the ''sig'' is 64 bytes long, return ''Verify(p, hash<sub>TapSighash</sub>(0x00 || SigMsg(0x00, 1) || ext), sig)'', where ''Verify'' is defined in [bip-0340.mediawiki BIP340].<br />
* If the ''sig'' is 65 bytes long, return ''sig[64] &ne; 0x00 and Verify(p, hash<sub>TapSighash</sub>(0x00 || SigMsg(sig[64], 1) || ext), sig[0:64])''.<br />
* Otherwise, fail.<br />
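The message extension and the length dispatch above can be sketched as follows (illustrative Python, not part of the BIP; <code>sig_msg(hash_type)</code> stands in for ''SigMsg(hash_type, 1)'' from BIP341, and actual BIP340 verification is omitted):<br />

```python
import hashlib

def tagged_hash(tag, msg):
    """hash_tag(msg) = SHA256(SHA256(tag) || SHA256(tag) || msg)."""
    th = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(th + th + msg).digest()

def tapscript_ext(tapleaf_hash, codesep_pos=0xFFFFFFFF):
    """Message extension: tapleaf_hash (32) || key_version (1, 0x00) ||
    codesep_pos (4, little endian; 0xffffffff if no OP_CODESEPARATOR ran)."""
    return tapleaf_hash + b"\x00" + codesep_pos.to_bytes(4, "little")

def tapscript_sighash(sig, sig_msg, ext):
    """Dispatch on signature length.  Returns (message, 64-byte signature)
    for BIP340 Verify, or None when validation must fail."""
    if len(sig) == 64:
        # Implicit hash_type 0x00 (the default sighash).
        return tagged_hash("TapSighash", b"\x00" + sig_msg(0x00) + ext), sig
    if len(sig) == 65:
        hash_type = sig[64]
        if hash_type == 0x00:         # explicit 0x00 byte is invalid
            return None
        return tagged_hash("TapSighash", b"\x00" + sig_msg(hash_type) + ext), sig[:64]
    return None                       # any other length fails
```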
<br />
In summary, the semantics of signature validation are identical to BIP340, except for the following:<br />
# The signature message includes the tapscript-specific data ''key_version''.<ref>'''Why does the signature message commit to the ''key_version''?''' This is for future extensions that define unknown public key types, making sure signatures can't be moved from one key type to another.</ref><br />
# The signature message commits to the executed script through the ''tapleaf_hash'' which includes the leaf version and script instead of ''scriptCode''. This implies that this commitment is unaffected by <code>OP_CODESEPARATOR</code>.<br />
# The signature message includes the opcode position of the last executed <code>OP_CODESEPARATOR</code>.<ref>'''Why does the signature message include the position of the last executed <code>OP_CODESEPARATOR</code>?''' This allows continuing to use <code>OP_CODESEPARATOR</code> to sign the executed path of the script. Because the <code>codeseparator_position</code> is the last input to the hash, the SHA256 midstate can be efficiently cached for multiple <code>OP_CODESEPARATOR</code>s in a single script. In contrast, the BIP143 handling of <code>OP_CODESEPARATOR</code> is to commit to the executed script only from the last executed <code>OP_CODESEPARATOR</code> onwards which requires unnecessary rehashing of the script. It should be noted that the one known <code>OP_CODESEPARATOR</code> use case of saving a second public key push in a script by sharing the first one between two code branches can be most likely expressed even cheaper by moving each branch into a separate taproot leaf.</ref><br />
<br />
===Resource limits===<br />
<br />
In addition to changing the semantics of a number of opcodes, there are also some changes to the resource limitations:<br />
* '''Script size limit''' The maximum script size of 10000 bytes does not apply; tapscript size is only implicitly bounded by the block weight limit.<ref>'''Why is a limit on script size no longer needed?''' Since there is no <code>scriptCode</code> directly included in the signature hash (only indirectly through a precomputable tapleaf hash), the CPU time spent on a signature check is no longer proportional to the size of the script being executed.</ref><br />
* '''Non-push opcodes limit''' The maximum non-push opcodes limit of 201 per script does not apply.<ref>'''Why is a limit on the number of opcodes no longer needed?''' An opcode limit only helps to the extent that it can prevent data structures from growing unboundedly during execution (both because of memory usage, and because of time that may grow in proportion to the size of those structures). The size of stack and altstack is already independently limited. By using O(1) logic for <code>OP_IF</code>, <code>OP_NOTIF</code>, <code>OP_ELSE</code>, and <code>OP_ENDIF</code> as suggested [https://bitslog.com/2017/04/17/new-quadratic-delays-in-bitcoin-scripts/ here] and implemented [https://github.com/bitcoin/bitcoin/pull/16902 here], the only other instance can be avoided as well.</ref><br />
* '''Sigops limit''' The sigops in tapscripts do not count towards the block-wide limit of 80000 (weighted). Instead, there is a per-script sigops ''budget''. The budget equals 50 + the total serialized size in bytes of the transaction input's witness (including the <code>CCompactSize</code> prefix). Executing a signature opcode (<code>OP_CHECKSIG</code>, <code>OP_CHECKSIGVERIFY</code>, or <code>OP_CHECKSIGADD</code>) with a non-empty signature decrements the budget by 50. If that brings the budget below zero, the script fails immediately. Signature opcodes with unknown public key type and non-empty signature are also counted.<ref>'''The tapscript sigop limit''' The signature opcode limit protects against scripts which are slow to verify due to excessively many signature operations. In tapscript the number of signature opcodes does not count towards the BIP141 or legacy sigop limit. The old sigop limit makes transaction selection in block construction unnecessarily difficult because it is a second constraint in addition to weight. Instead, the number of tapscript signature opcodes is limited by witness weight. Additionally, the limit applies to the transaction input instead of the block and only actually executed signature opcodes are counted. Tapscript execution allows one signature opcode per 50 witness weight units plus one free signature opcode.</ref><ref>'''Parameter choice of the sigop limit''' Regular witnesses are unaffected by the limit as their weight is composed of public key and (<code>SIGHASH_ALL</code>) signature pairs with ''33 + 65'' weight units each (which includes a 1 weight unit <code>CCompactSize</code> tag). This is also the case if public keys are reused in the script because a signature's weight alone is 65 or 66 weight units. However, the limit increases the fees of abnormal scripts with duplicate signatures (and public keys) by requiring additional weight. 
The weight per sigop factor 50 corresponds to the ratio of BIP141 block limits: 4 mega weight units divided by 80,000 sigops. The "free" signature opcode permitted by the limit exists to account for the weight of the non-witness parts of the transaction input.</ref><ref>'''Why are only signature opcodes counted toward the budget, and not for example hashing opcodes or other expensive operations?''' It turns out that the CPU cost per witness byte for verification of a script consisting of the maximum density of signature checking opcodes (taking the 50 WU/sigop limit into account) is already very close to that of scripts packed with other opcodes, including hashing opcodes (taking the 520 byte stack element limit into account) and <code>OP_ROLL</code> (taking the 1000 stack element limit into account). That said, the construction is very flexible, and allows adding new signature opcodes like <code>CHECKSIGFROMSTACK</code> to count towards the limit through a soft fork. Even if in the future new opcodes are introduced which change normal script cost there is no need to stuff the witness with meaningless data. Instead, the taproot annex can be used to add weight to the witness without increasing the actual witness size.</ref>.<br />
* '''Stack + altstack element count limit''' The existing limit of 1000 elements in the stack and altstack together after every executed opcode remains. It is extended to also apply to the size of the initial stack.<br />
* '''Stack element size limit''' The existing limit of maximum 520 bytes per stack element remains, both in the initial stack and in push opcodes.<br />
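The sigops budget rule can be stated in a few lines (illustrative Python, not a consensus implementation):<br />

```python
def sigops_budget(witness_size):
    """Per-input budget: 50 plus the input witness's total serialized size
    in bytes (including the CCompactSize prefix)."""
    return 50 + witness_size

def spend_sigop(budget):
    """Each executed signature opcode with a non-empty signature costs 50.
    Returns the remaining budget, or None if the script must fail."""
    budget -= 50
    return budget if budget >= 0 else None
```

This matches the footnote's characterization: one signature opcode per 50 witness weight units, plus one "free" opcode from the constant 50.<br />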
<br />
==Rationale==<br />
<br />
<references /><br />
<br />
==Examples==<br />
<br />
==Acknowledgements==<br />
<br />
This document is the result of many discussions and contains contributions by a number of people. The authors wish to thank all those who provided valuable feedback and reviews, including the participants of the [https://github.com/ajtowns/taproot-review structured reviews].</div>
https://en.bitcoin.it/wiki/BIP_0341 (BIP 341, 2020-01-19)<p>934: Update BIP text with latest version from https://github.com/sipa/bips/blob/a92523e57c252ca9/bip-0341.mediawiki</p>
<hr />
<div>{{bip}}<br />
{{BipMoved|bip-0341.mediawiki}}<br />
<br />
<pre><br />
BIP: 341<br />
Layer: Consensus (soft fork)<br />
Title: Taproot: SegWit version 1 spending rules<br />
Author: Pieter Wuille <pieter.wuille@gmail.com><br />
Jonas Nick <jonasd.nick@gmail.com><br />
Anthony Towns <aj@erisian.com.au><br />
Comments-Summary: No comments yet.<br />
Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0341<br />
Status: Draft<br />
Type: Standards Track<br />
Created: 2020-01-19<br />
License: BSD-3-Clause<br />
Requires: 340<br />
Post-History: 2019-05-06: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html [bitcoin-dev] Taproot proposal<br />
</pre><br />
<br />
==Introduction==<br />
<br />
===Abstract===<br />
<br />
This document proposes a new SegWit version 1 output type, with spending rules based on Taproot, Schnorr signatures, and Merkle branches.<br />
<br />
===Copyright===<br />
<br />
This document is licensed under the 3-clause BSD license.<br />
<br />
===Motivation===<br />
<br />
This proposal aims to improve privacy, efficiency, and flexibility of Bitcoin's scripting capabilities without adding new security assumptions<ref>'''What does not adding security assumptions mean?''' Unforgeability of signatures is a necessary requirement to prevent theft. At least when treating script execution as a digital signature scheme itself, unforgeability can be [https://github.com/apoelstra/taproot proven] in the Random Oracle Model assuming the Discrete Logarithm problem is hard. A [https://nbn-resolving.de/urn:nbn:de:hbz:294-60803 proof] for unforgeability of ECDSA in the current script system needs non-standard assumptions on top of that. Note that it is hard in general to model exactly what security for script means, as it depends on the policies and protocols used by wallet software.</ref>. Specifically, it seeks to minimize how much information about the spendability conditions of a transaction output is revealed on chain at creation or spending time and to add a number of upgrade mechanisms, while fixing a few minor but long-standing issues.<br />
<br />
==Design==<br />
<br />
A number of related ideas for improving Bitcoin's scripting capabilities have been previously proposed: Schnorr signatures ([https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki BIP340]), Merkle branches ("MAST", [bip-0114.mediawiki BIP114], [bip-0117.mediawiki BIP117]), new sighash modes ([bip-0118.mediawiki BIP118]), new opcodes like CHECKSIGFROMSTACK, [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html Taproot], [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html Graftroot], [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016249.html G'root], and [https://bitcointalk.org/index.php?topic=1377298.0 cross-input aggregation].<br />
<br />
Combining all these ideas in a single proposal would be an extensive change, be hard to review, and likely miss new discoveries that otherwise could have been made along the way. Moreover, not all of these ideas are equally mature. For example, cross-input aggregation [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-March/015838.html interacts] in complex ways with upgrade mechanisms, and solutions to that are still [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-October/016461.html in flux]. On the other hand, separating them all into independent upgrades would reduce the efficiency and privacy gains to be had, and wallet and service providers may not be inclined to go through many incremental updates. Therefore, we're faced with a tradeoff between functionality and scope creep. In this design we strike a balance by focusing on the structural script improvements offered by Taproot and Merkle branches, as well as changes necessary to make them usable and efficient. For things like sighashes and opcodes we include fixes for known problems, but exclude new features that can be added independently with no downsides.<br />
<br />
As a result we choose this combination of technologies:<br />
* '''Merkle branches''' let us only reveal the actually executed part of the script to the blockchain, as opposed to all possible ways a script can be executed. Among the various known mechanisms for implementing this, one where the Merkle tree becomes part of the script's structure directly maximizes the space savings, so that approach is chosen.<br />
* '''Taproot''' on top of that lets us merge the traditionally separate pay-to-pubkey and pay-to-scripthash policies, making all outputs spendable by either a key or (optionally) a script, and indistinguishable from each other. As long as the key-based spending path is used for spending, it is not revealed whether a script path was permitted as well, resulting in space savings and an increase in scripting privacy at spending time.<br />
* Taproot's advantages become apparent under the assumption that most applications involve outputs that could be spent by all parties agreeing. That's where '''Schnorr''' signatures come in, as they permit [https://eprint.iacr.org/2018/068 key aggregation]: a public key can be constructed from multiple participant public keys, such that signing for it requires cooperation between all participants. Such multi-party public keys and signatures are indistinguishable from their single-party equivalents. This means that with taproot most applications can use the key-based spending path, which is both efficient and private. This can be generalized to arbitrary M-of-N policies, as Schnorr signatures support threshold signing, at the cost of more complex setup protocols.<br />
* As Schnorr signatures also permit '''batch validation''', allowing multiple signatures to be validated together more efficiently than validating each one independently, we make sure all parts of the design are compatible with this.<br />
* Where unused bits appear as a result of the above changes, they are reserved for mechanisms for '''future extensions'''. As a result, every script in the Merkle tree has an associated version such that new script versions can be introduced with a soft fork while remaining compatible with BIP 341. Additionally, future soft forks can make use of the currently unused <code>annex</code> in the witness (see [bip-0341.mediawiki#Rationale BIP341]).<br />
* While the core semantics of the '''signature hashing algorithm''' are not changed, a number of improvements are included in this proposal. The new signature hashing algorithm fixes the verification capabilities of offline signing devices by including amount and scriptPubKey in the signature message, avoids unnecessary hashing, uses '''tagged hashes''' and defines a default sighash byte.<br />
* The '''public key is directly included in the output''' in contrast to typical earlier constructions which store a hash of the public key or script in the output. This has the same cost for senders and is more space efficient overall if the key-based spending path is taken. <ref>'''Why is the public key directly included in the output?''' While typical earlier constructions store a hash of a script or a public key in the output, this is rather wasteful when a public key is always involved. To guarantee batch verifiability, the public key must be known to every verifier, and thus only revealing its hash as an output would imply adding an additional 32 bytes to the witness. Furthermore, to maintain [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-January/012198.html 128-bit collision security] for outputs, a 256-bit hash would be required anyway, which is comparable in size (and thus in cost for senders) to revealing the public key directly. While the usage of public key hashes is often said to protect against ECDLP breaks or quantum computers, this protection is very weak at best: transactions are not protected while being confirmed, and a very [https://twitter.com/pwuille/status/1108097835365339136 large portion] of the currency's supply is not under such protection regardless. Actual resistance to such systems can be introduced by relying on different cryptographic assumptions, but this proposal focuses on improvements that do not change the security model.</ref><br />
<br />
Informally, the resulting design is as follows: a new witness version is added (version 1), whose programs consist of 32-byte encodings of points ''Q''. ''Q'' is computed as ''P + hash(P||m)G'' for a public key ''P'', and the root ''m'' of a Merkle tree whose leaves consist of a version number and a script. These outputs can be spent directly by providing a signature for ''Q'', or indirectly by revealing ''P'', the script and leaf version, inputs that satisfy the script, and a Merkle path that proves ''Q'' committed to that leaf. All hashes in this construction (the hash for computing ''Q'' from ''P'', the hashes inside the Merkle tree's inner nodes, and the signature hashes used) are tagged to guarantee domain separation.<br />
<br />
== Specification ==<br />
<br />
This section specifies the Taproot consensus rules. Validity is defined by exclusion: a block or transaction is valid if no condition exists that marks it as failed.<br />
<br />
The notation below follows that of [bip-0340.mediawiki#design BIP340]. This includes the ''hash<sub>tag</sub>(x)'' notation to refer to ''SHA256(SHA256(tag) || SHA256(tag) || x)''. To the best of the authors' knowledge, no existing use of SHA256 in Bitcoin feeds it a message that starts with two single SHA256 outputs, making collisions between ''hash<sub>tag</sub>'' and other hashes extremely unlikely.<br />
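For illustration, this tagged-hash construction can be written as follows (a non-normative sketch mirroring the helper in the BIP340 reference code; the function name is an assumption):<br />

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # hash_tag(x) = SHA256(SHA256(tag) || SHA256(tag) || x)
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + msg).digest()
```

Because the two copies of ''SHA256(tag)'' fill exactly one 64-byte SHA256 block, implementations can precompute the midstate for each tag and reuse it across calls.<br />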
<br />
=== Script validation rules ===<br />
<br />
A Taproot output is a native SegWit output (see [bip-0141.mediawiki BIP141]) with version number 1, and a 32-byte witness program.<br />
The following rules only apply when such an output is being spent. Any other outputs, including version 1 outputs with lengths other than 32 bytes, or P2SH-wrapped version 1 outputs<ref>'''Why is P2SH-wrapping not supported?''' Using P2SH-wrapped outputs only provides 80-bit collision security due to the use of a 160-bit hash. This is considered low, and becomes a security risk whenever the output includes data from more than a single party (public keys, hashes, ...).</ref>, remain unencumbered.<br />
<br />
* Let ''q'' be the 32-byte array containing the witness program (the second push in the scriptPubKey) which represents a public key according to [bip-0340.mediawiki#design BIP340].<br />
* Fail if the witness stack has 0 elements.<br />
* If there are at least two witness elements, and the first byte of the last element is 0x50<ref>'''Why is the first byte of the annex <code>0x50</code>?''' The <code>0x50</code> is chosen as it could not be confused with a valid P2WPKH or P2WSH spending. As the control block's initial byte's lowest bit is used to indicate the public key's Y squareness, each leaf version needs an even byte value and the immediately following odd byte value that are both not yet used in P2WPKH or P2WSH spending. To indicate the annex, only an "unpaired" available byte is necessary like <code>0x50</code>. This choice maximizes the available options for future script versions.</ref>, this last element is called ''annex'' ''a''<ref>'''What is the purpose of the annex?''' The annex is a reserved space for future extensions, such as indicating the validation costs of computationally expensive new opcodes in a way that is recognizable without knowing the scriptPubKey of the output being spent. Until the meaning of this field is defined by another softfork, users SHOULD NOT include <code>annex</code> in transactions, or it may lead to PERMANENT FUND LOSS.</ref> and is removed from the witness stack. The annex (or the lack thereof) is always covered by the signature and contributes to transaction weight, but is otherwise ignored during taproot validation.<br />
* If there is exactly one element left in the witness stack, key path spending is used:<br />
** The single witness stack element is interpreted as the signature and must be valid (see the next section) for the public key ''q''.<br />
* If there are at least two witness elements left, script path spending is used:<br />
** Call the second-to-last stack element ''s'', the script.<br />
** The last stack element is called the control block ''c'', and must have length ''33 + 32m'', for a value of ''m'' that is an integer between 0 and 128<ref>'''Why is the Merkle path length limited to 128?''' The optimally space-efficient Merkle tree can be constructed based on the probabilities of the scripts in the leaves, using the Huffman algorithm. This algorithm will construct branches with lengths approximately equal to ''log<sub>2</sub>(1/probability)'', but to have branches longer than 128 you would need to have scripts with an execution chance below 1 in ''2<sup>128</sup>''. As that is our security bound, scripts that truly have such a low chance can probably be removed entirely.</ref>, inclusive. Fail if it does not have such a length.<br />
** Let ''p = c[1:33]'' and let ''P = point(p)'' where ''point'' and ''[:]'' are defined as in [bip-0340.mediawiki#design BIP340]. Fail if this point is not on the curve.<br />
** Let ''v = c[0] & 0xfe'' and call it the ''leaf version''<ref>'''What constraints are there on the leaf version?''' First, the leaf version cannot be odd as ''c[0] & 0xfe'' will always be even, and cannot be ''0x50'' as that would result in ambiguity with the annex. In addition, in order to support some forms of static analysis that rely on being able to identify script spends without access to the output being spent, it is recommended to avoid using any leaf versions that would conflict with a valid first byte of either a valid P2WPKH pubkey or a valid P2WSH script (that is, both ''v'' and ''v | 1'' should be an undefined, invalid or disabled opcode or an opcode that is not valid as the first opcode). The values that comply with this rule are the 32 even values between ''0xc0'' and ''0xfe'' and also ''0x66'', ''0x7e'', ''0x80'', ''0x84'', ''0x96'', ''0x98'', ''0xba'', ''0xbc'', ''0xbe''. Note also that this constraint implies that leaf versions should be shared amongst different witness versions, as knowing the witness version requires access to the output being spent.</ref>.<br />
** Let ''k<sub>0</sub> = hash<sub>TapLeaf</sub>(v || compact_size(size of s) || s)''; also call it the ''tapleaf hash''.<br />
** For ''j'' in ''[0,1,...,m-1]'':<br />
*** Let ''e<sub>j</sub> = c[33+32j:65+32j]''.<br />
*** Let ''k<sub>j+1</sub>'' depend on whether ''k<sub>j</sub> < e<sub>j</sub>'' (lexicographically)<ref>'''Why are child elements sorted before hashing in the Merkle tree?''' By doing so, it is not necessary to reveal the left/right directions along with the hashes in revealed Merkle branches. This is possible because we do not actually care about the position of specific scripts in the tree; only that they are actually committed to.</ref>:<br />
**** If ''k<sub>j</sub> < e<sub>j</sub>'': ''k<sub>j+1</sub> = hash<sub>TapBranch</sub>(k<sub>j</sub> || e<sub>j</sub>)''<ref>'''Why not use a more efficient hash construction for inner Merkle nodes?''' The chosen construction does require two invocations of the SHA256 compression functions, one of which can be avoided in theory (see [bip-0098.mediawiki BIP98]). However, it seems preferable to stick to constructions that can be implemented using standard cryptographic primitives, both for implementation simplicity and analyzability. If necessary, a significant part of the second compression function can be optimized out by [https://github.com/bitcoin/bitcoin/pull/13191 specialization] for 64-byte inputs.</ref>.<br />
**** If ''k<sub>j</sub> &ge; e<sub>j</sub>'': ''k<sub>j+1</sub> = hash<sub>TapBranch</sub>(e<sub>j</sub> || k<sub>j</sub>)''.<br />
** Let ''t = hash<sub>TapTweak</sub>(p || k<sub>m</sub>)''.<br />
** If ''t &ge; 0xFFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFE BAAEDCE6 AF48A03B BFD25E8C D0364141'' (order of secp256k1), fail.<br />
** Let ''Q = point(q) if (c[0] & 1) = 1 and -point(q) otherwise''<ref>'''Why is it necessary to reveal a bit to indicate if the point represented by the output public key is negated in a script path spend?''' The ''point'' function (defined in [bip-0340.mediawiki#design BIP340]) always constructs a point with a square Y coordinate, but because ''Q'' is constructed by adding the taproot tweak to the internal public key ''P'', it cannot easily be guaranteed that ''Q'' in fact has such a Y coordinate. Therefore, before verifying the taproot tweak the original point is restored by negating if necessary. We can not ignore the Y coordinate because it would prevent batch verification. Trying out multiple internal keys until there's such a ''Q'' is possible but undesirable and unnecessary since this information about the Y coordinate only consumes an unused bit.</ref>. Fail if this point is not on the curve.<br />
** If ''Q &ne; P + int(t)G'', fail.<br />
** Execute the script, according to the applicable script rules<ref>'''What are the applicable script rules in script path spends?''' [bip-0342.mediawiki#design BIP342] specifies validity rules that apply for leaf version 0xc0, but future proposals can introduce rules for other leaf versions.</ref>, using the witness stack elements excluding the script ''s'', the control block ''c'', and the annex ''a'' if present, as initial stack.<br />
<br />
''q'' is referred to as ''taproot output key'' and ''p'' as ''taproot internal key''.<br />
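As a non-normative illustration, the Merkle-path portion of the rules above can be sketched in Python. The function names are assumptions, and ''compact_size'' handling is simplified to scripts shorter than 253 bytes:<br />

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + msg).digest()

def merkle_root_from_control(control: bytes, script: bytes) -> bytes:
    # Recompute k_m from a control block c and a script s per the rules above.
    assert len(control) >= 33 and (len(control) - 33) % 32 == 0
    m = (len(control) - 33) // 32
    assert m <= 128
    leaf_version = control[0] & 0xfe
    assert len(script) < 253  # simplified compact_size for short scripts
    k = tagged_hash("TapLeaf", bytes([leaf_version, len(script)]) + script)
    for j in range(m):
        e = control[33 + 32 * j : 65 + 32 * j]
        # Children are sorted lexicographically before hashing.
        k = tagged_hash("TapBranch", min(k, e) + max(k, e))
    return k
```

A full verifier would additionally check the tweak ''t'' against ''Q'' and ''P''; this sketch stops at the Merkle root.<br />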
<br />
=== Signature validation rules ===<br />
<br />
We first define a reusable common signature message calculation function, followed by the actual signature validation as it's used in key path spending.<br />
<br />
==== Common signature message ====<br />
<br />
The function ''SigMsg(hash_type, ext_flag)'' computes the message being signed as a byte array. It is implicitly also a function of the spending transaction and the outputs it spends, but these are not listed to keep notation simple. <br />
<br />
The parameter ''hash_type'' is an 8-bit unsigned value. The <code>SIGHASH</code> encodings from the legacy script system are reused, including <code>SIGHASH_ALL</code>, <code>SIGHASH_NONE</code>, <code>SIGHASH_SINGLE</code>, and <code>SIGHASH_ANYONECANPAY</code>, plus the default ''hash_type'' value ''0x00'' which results in signing over the whole transaction just as for <code>SIGHASH_ALL</code>. The following restrictions apply, which cause validation failure if violated:<br />
* Using any undefined ''hash_type'' (not ''0x00'', ''0x01'', ''0x02'', ''0x03'', ''0x81'', ''0x82'', or ''0x83''<ref>'''Why reject unknown ''hash_type'' values?''' By doing so, it is easier to reason about the worst case amount of signature hashing an implementation with adequate caching must perform.</ref>).<br />
* Using <code>SIGHASH_SINGLE</code> without a "corresponding output" (an output with the same index as the input being verified).<br />
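These two restrictions can be expressed as a small validity check (a non-normative sketch; the function name and the set of constants as written are illustrative):<br />

```python
# The seven defined hash_type values listed above.
VALID_HASH_TYPES = {0x00, 0x01, 0x02, 0x03, 0x81, 0x82, 0x83}

def check_hash_type(hash_type: int, input_index: int, num_outputs: int) -> bool:
    # Restriction 1: reject any undefined hash_type.
    if hash_type not in VALID_HASH_TYPES:
        return False
    # Restriction 2: SIGHASH_SINGLE (low bits = 3) needs a corresponding output.
    if (hash_type & 3) == 3 and input_index >= num_outputs:
        return False
    return True
```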
<br />
The parameter ''ext_flag'' is an integer in range 0-127, and is used for indicating (in the message) that extensions are added at the end of the message<ref>'''What extensions use the ''ext_flag'' mechanism?''' [bip-0342.mediawiki#design BIP342] reuses the same common signature message algorithm, but adds BIP342-specific data at the end, which is indicated using ''ext_flag = 1''.</ref>.<br />
<br />
If the parameters take acceptable values, the message is the concatenation of the following data, in order (with the byte size of each item listed in parentheses). Numerical values of 2, 4, or 8 bytes are encoded in little-endian.<br />
<br />
* Control:<br />
** ''hash_type'' (1).<br />
* Transaction data:<br />
** ''nVersion'' (4): the ''nVersion'' of the transaction.<br />
** ''nLockTime'' (4): the ''nLockTime'' of the transaction.<br />
** If ''hash_type & 0x80'' does not equal <code>SIGHASH_ANYONECANPAY</code>:<br />
*** ''sha_prevouts'' (32): the SHA256 of the serialization of all input outpoints.<br />
*** ''sha_amounts'' (32): the SHA256 of the serialization of all input amounts.<br />
*** ''sha_sequences'' (32): the SHA256 of the serialization of all input ''nSequence''.<br />
** If ''hash_type & 3'' does not equal <code>SIGHASH_NONE</code> or <code>SIGHASH_SINGLE</code>:<br />
*** ''sha_outputs'' (32): the SHA256 of the serialization of all outputs in <code>CTxOut</code> format.<br />
* Data about this input:<br />
** ''spend_type'' (1): equal to ''(ext_flag * 2) + annex_present'', where ''annex_present'' is 0 if no annex is present, or 1 otherwise (the original witness stack has two or more witness elements, and the first byte of the last element is ''0x50'').<br />
** ''scriptPubKey'' (35): ''scriptPubKey'' of the previous output spent by this input, serialized as script inside <code>CTxOut</code>. Its size is always 35 bytes.<br />
** If ''hash_type & 0x80'' equals <code>SIGHASH_ANYONECANPAY</code>:<br />
*** ''outpoint'' (36): the <code>COutPoint</code> of this input (32-byte hash + 4-byte little-endian).<br />
*** ''amount'' (8): value of the previous output spent by this input.<br />
*** ''nSequence'' (4): ''nSequence'' of this input.<br />
** If ''hash_type & 0x80'' does not equal <code>SIGHASH_ANYONECANPAY</code>:<br />
*** ''input_index'' (4): index of this input in the transaction input vector. Index of the first input is 0.<br />
** If an annex is present (the lowest bit of ''spend_type'' is set):<br />
*** ''sha_annex'' (32): the SHA256 of ''(compact_size(size of annex) || annex)'', where ''annex'' includes the mandatory ''0x50'' prefix.<br />
* Data about this output:<br />
** If ''hash_type & 3'' equals <code>SIGHASH_SINGLE</code>:<br />
*** ''sha_single_output'' (32): the SHA256 of the corresponding output in <code>CTxOut</code> format.<br />
<br />
The total length of ''SigMsg()'' is at most ''209'' bytes<ref>'''What is the output length of ''SigMsg()''?''' The total length of ''SigMsg()'' can be computed using the following formula: ''177 - is_anyonecanpay * 52 - is_none * 32 + has_annex * 32''.</ref>. Note that this does not include the size of sub-hashes such as ''sha_prevouts'', which may be cached across signatures of the same transaction.<br />
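The length formula from the footnote can be expressed directly (a sketch; the function name is an assumption):<br />

```python
def sigmsg_length(hash_type: int, has_annex: bool) -> int:
    # 177 - is_anyonecanpay * 52 - is_none * 32 + has_annex * 32
    is_anyonecanpay = (hash_type & 0x80) == 0x80
    is_none = (hash_type & 3) == 0x02  # SIGHASH_NONE commits to no outputs
    return 177 - 52 * is_anyonecanpay - 32 * is_none + 32 * has_annex
```

The maximum of 209 bytes is reached with the default sighash and an annex present; note that <code>SIGHASH_SINGLE</code> does not shrink the message, since ''sha_single_output'' replaces ''sha_outputs'' byte for byte.<br />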
<br />
In summary, the semantics of the [bip-0143.mediawiki BIP143] sighash types remain unchanged, except the following:<br />
# The way and order of serialization is changed.<ref>'''Why is the serialization in the signature message changed?''' Hashes that go into the signature message and the message itself are now computed with a single SHA256 invocation instead of double SHA256. There is no expected security improvement by doubling SHA256 because this only protects against length-extension attacks against SHA256 which are not a concern for signature messages because there is no secret data. Therefore doubling SHA256 is a waste of resources. The message computation now follows a logical order with transaction level data first, then input data and output data. This allows to efficiently cache the transaction part of the message across different inputs using the SHA256 midstate. Additionally, sub-hashes can be skipped when calculating the message (for example `sha_prevouts` if <code>SIGHASH_ANYONECANPAY</code> is set) instead of setting them to zero and then hashing them as in BIP143. Despite that, collisions are made impossible by committing to the length of the data (implicit in ''hash_type'' and ''spend_type'') before the variable length data.</ref><br />
# The signature message commits to the ''scriptPubKey''<ref>'''Why does the signature message commit to the ''scriptPubKey''?''' This prevents lying to offline signing devices about output being spent, even when the actually executed script (''scriptCode'' in BIP143) is correct. This means it's possible to compactly prove to a hardware wallet what (unused) execution paths existed.</ref>.<br />
# If the <code>SIGHASH_ANYONECANPAY</code> flag is not set, the message commits to the amounts of ''all'' transaction inputs.<ref>'''Why does the signature message commit to the amounts of all transaction inputs?''' This eliminates the possibility to lie to offline signing devices about the fee of a transaction.</ref><br />
# The signature message commits to all input ''nSequence'' if <code>SIGHASH_NONE</code> or <code>SIGHASH_SINGLE</code> are set (unless <code>SIGHASH_ANYONECANPAY</code> is set as well).<ref>'''Why does the signature message commit to all input ''nSequence'' if <code>SIGHASH_SINGLE</code> or <code>SIGHASH_NONE</code> are set?''' Because setting them already makes the message commit to the <code>prevouts</code> part of all transaction inputs, it is not useful to treat the ''nSequence'' any different. Moreover, this change makes ''nSequence'' consistent with the view that <code>SIGHASH_SINGLE</code> and <code>SIGHASH_NONE</code> only modify the signature message with respect to transaction outputs and not inputs.</ref><br />
# The signature message includes commitments to the taproot-specific data ''spend_type'' and ''annex'' (if present).<br />
<br />
==== Taproot key path spending signature validation ====<br />
<br />
To validate a signature ''sig'' with public key ''q'':<br />
* If the ''sig'' is 64 bytes long, return ''Verify(q, hash<sub>TapSighash</sub>(0x00 || SigMsg(0x00, 0)), sig)''<ref>'''Why is the input to ''hash<sub>TapSighash</sub>'' prefixed with 0x00?''' This prefix is called the sighash epoch, and allows reusing the ''hash<sub>TapSighash</sub>'' tagged hash in future signature algorithms that make invasive changes to how hashing is performed (as opposed to the ''ext_flag'' mechanism that is used for incremental extensions). An alternative is having them use a different tag, but supporting a growing number of tags may become undesirable.</ref>, where ''Verify'' is defined in [bip-0340.mediawiki#design BIP340].<br />
* If the ''sig'' is 65 bytes long, return ''sig[64] &ne; 0x00<ref>'''Why can the <code>hash_type</code> not be <code>0x00</code> in 65-byte signatures?''' Permitting that would enable malleating (by third parties, including miners) 64-byte signatures into 65-byte ones, resulting in a different <code>wtxid</code> and a different fee rate than the creator intended.</ref> and Verify(q, hash<sub>TapSighash</sub>(0x00 || SigMsg(sig[64], 0)), sig[0:64])''.<br />
* Otherwise, fail<ref>'''Why permit two signature lengths?''' By making the most common type of <code>hash_type</code> implicit, a byte can often be saved.</ref>.<br />
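The two accepted signature lengths can be sketched as a parsing helper (non-normative; the function name is an assumption):<br />

```python
def split_key_path_signature(sig: bytes):
    # Returns (schnorr_sig, hash_type) or raises on invalid encodings.
    if len(sig) == 64:
        return sig, 0x00  # the default hash_type is implicit
    if len(sig) == 65:
        if sig[64] == 0x00:
            raise ValueError("explicit hash_type 0x00 must use the 64-byte form")
        return sig[:64], sig[64]
    raise ValueError("signature must be 64 or 65 bytes")
```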
<br />
== Constructing and spending Taproot outputs ==<br />
<br />
This section discusses how to construct and spend Taproot outputs. It only affects wallet software that chooses to implement receiving and spending,<br />
and is not consensus critical in any way.<br />
<br />
Conceptually, every Taproot output corresponds to a combination of a single public key condition (the internal key), and zero or more general conditions encoded in scripts organized in a tree.<br />
Satisfying any of these conditions is sufficient to spend the output.<br />
<br />
'''Initial steps''' The first step is determining what the internal key and the organization of the rest of the scripts should be. The specifics are likely application dependent, but here are some general guidelines:<br />
* When deciding between scripts with conditionals (<code>OP_IF</code> etc.) and splitting them up into multiple scripts (each corresponding to one execution path through the original script), it is generally preferable to pick the latter.<br />
* When a single condition requires signatures with multiple keys, key aggregation techniques like MuSig can be used to combine them into a single key. The details are out of scope for this document, but note that this may complicate the signing procedure.<br />
* If one or more of the spending conditions consist of just a single key (after aggregation), the most likely one should be made the internal key. If no such condition exists, it may be worthwhile adding one that consists of an aggregation of all keys participating in all scripts combined; effectively adding an "everyone agrees" branch. If that is unacceptable, pick as internal key a point with unknown discrete logarithm. One example of such a point is ''H = point(0x0250929b74c1a04954b78b4b6035e97a5e078a5a0f28ec96d547bfee9ace803ac0)'' which is [https://github.com/ElementsProject/secp256k1-zkp/blob/11af7015de624b010424273be3d91f117f172c82/src/modules/rangeproof/main_impl.h#L16 constructed] by taking the hash of the standard uncompressed encoding of the [https://www.secg.org/sec2-v2.pdf secp256k1] base point ''G'' as X coordinate. In order to avoid leaking the information that key path spending is not possible it is recommended to pick a fresh integer ''r'' in the range ''0...n-1'' uniformly at random and use ''H + rG'' as internal key. It is possible to prove that this internal key does not have a known discrete logarithm with respect to ''G'' by revealing ''r'' to a verifier who can then reconstruct how the internal key was created.<br />
* If the spending conditions do not require a script path, the output key should commit to an unspendable script path instead of having no script path. This can be achieved by computing the output key point as ''Q = P + int(hash<sub>TapTweak</sub>(bytes(P)))G''. <ref>'''Why should the output key always have a taproot commitment, even if there is no script path?'''<br />
If the taproot output key is an aggregate of keys, there is the possibility for a malicious party to add a script path without being noticed by the other parties.<br />
This allows to bypass the multiparty policy and to steal the coin.<br />
MuSig key aggregation does not have this issue because it already causes the internal key to be randomized.<br />
<br />
The attack works as follows: Assume Alice and Mallory want to aggregate their keys into a taproot output key without a script path.<br />
In order to prevent key cancellation and related attacks they use [https://eprint.iacr.org/2018/483.pdf MSDL-pop] instead of MuSig.<br />
The MSDL-pop protocol requires all parties to provide a proof of possession of their corresponding secret key and the aggregated key is just the sum of the individual keys.<br />
After Mallory receives Alice's key ''A'', Mallory creates ''M = M<sub>0</sub> + int(t)G'' where ''M<sub>0</sub>'' is Mallory's original key and ''t'' allows a script path spend with internal key ''P = A + M<sub>0</sub>'' and a script that only contains Mallory's key.<br />
Mallory sends a proof of possession of ''M'' to Alice and both parties compute output key ''Q = A + M = P + int(t)G''.<br />
Alice will not be able to notice the script path, but Mallory can unilaterally spend any coin with output key ''Q''.<br />
</ref><br />
* The remaining scripts should be organized into the leaves of a binary tree. This can be a balanced tree if each of the conditions these scripts correspond to are equally likely. If probabilities for each condition are known, consider constructing the tree as a Huffman tree.<br />
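As a quick check of the guideline above, hashing the standard uncompressed SEC1 encoding of the base point ''G'' reproduces the quoted X coordinate of the unknown-discrete-logarithm point ''H'':<br />

```python
import hashlib

# Standard uncompressed SEC1 encoding of the secp256k1 base point G
# (0x04 || G.x || G.y), per SEC 2.
G_UNCOMPRESSED = bytes.fromhex(
    "0479be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"
    "483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8"
)

# X coordinate of H as derived in the text: SHA256 of the encoding of G.
H_X = hashlib.sha256(G_UNCOMPRESSED).hexdigest()
```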
<br />
'''Computing the output script''' Once the spending conditions are split into an internal key <code>internal_pubkey</code> and a binary tree whose leaves are (leaf_version, script) tuples, the output script can be computed using the Python3 algorithms below. These algorithms take advantage of helper functions from the [bip-0340/reference.py BIP340 reference code] for integer conversion, point multiplication, and tagged hashes.<br />
<br />
First, we define <code>taproot_tweak_pubkey</code> for 32-byte [bip-0340.mediawiki#design BIP340] public key arrays.<br />
In addition to the tweaked public key byte array, the function returns a boolean indicating whether the public key represents the tweaked point or its negation.<br />
This will be required for spending the output with a script path.<br />
In order to allow spending with the key path, we define <code>taproot_tweak_seckey</code> to compute the secret key for a tweaked public key.<br />
For any byte string <code>h</code> it holds that <code>taproot_tweak_pubkey(pubkey_gen(seckey), h)[0] == pubkey_gen(taproot_tweak_seckey(seckey, h))</code>.<br />
<br />
<source lang="python"><br />
def taproot_tweak_pubkey(pubkey, h):<br />
    t = int_from_bytes(tagged_hash("TapTweak", pubkey + h))<br />
    if t >= SECP256K1_ORDER:<br />
        raise ValueError<br />
    Q = point_add(point(pubkey), point_mul(G, t))<br />
    is_negated = not has_square_y(Q)<br />
    return bytes_from_int(x(Q)), is_negated<br />
<br />
def taproot_tweak_seckey(seckey0, h):<br />
    seckey0 = int_from_bytes(seckey0)<br />
    P = point_mul(G, seckey0)<br />
    seckey = seckey0 if has_square_y(P) else SECP256K1_ORDER - seckey0<br />
    t = int_from_bytes(tagged_hash("TapTweak", bytes_from_int(x(P)) + h))<br />
    if t >= SECP256K1_ORDER:<br />
        raise ValueError<br />
    return (seckey + t) % SECP256K1_ORDER<br />
</source><br />
<br />
The following function, <code>taproot_output_script</code>, returns a byte array with the scriptPubKey (see BIP141).<br />
<code>ser_script</code> refers to a function that prefixes its input with a CompactSize-encoded length.<br />
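A non-normative sketch of <code>ser_script</code> and the CompactSize prefix it relies on (function names assumed):<br />

```python
def ser_compact_size(n: int) -> bytes:
    # Bitcoin's CompactSize: 1, 3, 5, or 9 bytes depending on magnitude.
    if n < 253:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

def ser_script(script: bytes) -> bytes:
    # Prefix a script with its CompactSize-encoded length.
    return ser_compact_size(len(script)) + script
```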
<br />
<source lang="python"><br />
def taproot_tree_helper(script_tree):<br />
    if isinstance(script_tree, tuple):<br />
        leaf_version, script = script_tree<br />
        h = tagged_hash("TapLeaf", bytes([leaf_version]) + ser_script(script))<br />
        return ([((leaf_version, script), bytes())], h)<br />
    left, left_h = taproot_tree_helper(script_tree[0])<br />
    right, right_h = taproot_tree_helper(script_tree[1])<br />
    ret = [(l, c + right_h) for l, c in left] + [(l, c + left_h) for l, c in right]<br />
    if right_h < left_h:<br />
        left_h, right_h = right_h, left_h<br />
    return (ret, tagged_hash("TapBranch", left_h + right_h))<br />
<br />
def taproot_output_script(internal_pubkey, script_tree):<br />
    """Given an internal public key and a tree of scripts, compute the output script.<br />
    script_tree is either:<br />
     - a (leaf_version, script) tuple (leaf_version is 0xc0 for [bip-0342.mediawiki#design BIP342] scripts)<br />
     - a list of two elements, each with the same structure as script_tree itself<br />
     - None<br />
    """<br />
    if script_tree is None:<br />
        h = bytes()<br />
    else:<br />
        _, h = taproot_tree_helper(script_tree)<br />
    output_pubkey, _ = taproot_tweak_pubkey(internal_pubkey, h)<br />
    return bytes([0x51, 0x20]) + output_pubkey<br />
</source><br />
<br />
[[File:bip-0342/tree.png|frame|This diagram shows the hashing structure to obtain the tweak from an internal key ''P'' and a Merkle tree consisting of 5 script leaves. ''A'', ''B'', ''C'' and ''E'' are ''TapLeaf'' hashes similar to ''D'' and ''AB'' is a ''TapBranch'' hash. Note that when ''CDE'' is computed ''E'' is hashed first because ''E'' is less than ''CD''.]]<br />
<br />
To spend this output using script ''D'', the control block would contain the following data in this order:<br />
<br />
<control byte with leaf version and negation bit> <internal key p> <C> <E> <AB><br />
<br />
The TapTweak would then be computed as described in [[#script-validation-rules]] like so:<br />
<br />
<source><br />
D = tagged_hash("TapLeaf", bytes([leaf_version]) + ser_script(script))<br />
CD = tagged_hash("TapBranch", C + D)<br />
CDE = tagged_hash("TapBranch", E + CD)<br />
ABCDE = tagged_hash("TapBranch", AB + CDE)<br />
TapTweak = tagged_hash("TapTweak", p + ABCDE)<br />
</source><br />
<br />
'''Spending using the key path''' A Taproot output can be spent with the secret key corresponding to the <code>internal_pubkey</code>. To do so, the witness stack must consist of a single element: a [bip-0340.mediawiki BIP340] signature on the signature hash as defined above, with the secret key tweaked by the same <code>h</code> as in the above snippet. See the code below:<br />
<br />
<source lang="python"><br />
def taproot_sign_key(script_tree, internal_seckey, hash_type):<br />
    _, h = taproot_tree_helper(script_tree)<br />
    output_seckey = taproot_tweak_seckey(internal_seckey, h)<br />
    sig = schnorr_sign(sighash(hash_type), output_seckey)<br />
    if hash_type != 0:<br />
        sig += bytes([hash_type])<br />
    return [sig]<br />
</source><br />
<br />
This function returns the necessary witness stack. It assumes a <code>sighash</code> function that computes the signature hash as defined above (for simplicity, the snippet above ignores passing information like the transaction, the input position, ... to the sighashing code).<br />
<br />
'''Spending using one of the scripts''' A Taproot output can be spent by satisfying any of the scripts used in its construction. To do so, a witness stack consisting of the script's inputs, plus the script itself and the control block are necessary. See the code below:<br />
<br />
<source lang="python"><br />
def taproot_sign_script(internal_pubkey, script_tree, script_num, inputs):<br />
    info, h = taproot_tree_helper(script_tree)<br />
    (leaf_version, script), path = info[script_num]<br />
    _, is_negated = taproot_tweak_pubkey(internal_pubkey, h)<br />
    output_pubkey_tag = 0 if is_negated else 1<br />
    pubkey_data = bytes([output_pubkey_tag + leaf_version]) + internal_pubkey<br />
    return inputs + [script, pubkey_data + path]<br />
</source><br />
<br />
== Security ==<br />
<br />
Taproot improves the privacy of Bitcoin because instead of revealing all possible conditions for spending an output, only the satisfied spending condition has to be published.<br />
Ideally, outputs are spent using the key path which prevents observers from learning the spending conditions of a coin.<br />
A key path spend could be a "normal" payment from a single- or multi-signature wallet or the cooperative settlement of a hidden multiparty contract.<br />
<br />
A script path spend leaks that there is a script path and that the key path was not applicable - for example because the involved parties failed to reach agreement.<br />
Moreover, the depth of a script in the Merkle tree leaks information, including the minimum depth of the tree, which suggests specific wallet software that created the output and helps clustering.<br />
Therefore, the privacy of script spends can be improved by deviating from the optimal tree determined by the probability distribution over the leaves.<br />
<br />
Just like other existing output types, taproot outputs should never reuse keys, for privacy reasons.<br />
This does not only apply to the particular leaf that was used to spend an output but to all leaves committed to in the output.<br />
If leaves were reused, it could happen that spending a different output would reuse the same Merkle branches in the Merkle proof.<br />
Using fresh keys implies that taproot output construction does not need to take special measures to randomize leaf positions because they are already randomized due to the branch-sorting Merkle tree construction used in taproot.<br />
This does not avoid leaking information through the leaf depth and therefore only applies to balanced (sub-) trees.<br />
In addition, every leaf should have a set of keys distinct from every other leaf.<br />
The reason for this is to increase leaf entropy and prevent an observer from learning an undisclosed script using brute-force search.<br />
<br />
== Test vectors ==<br />
<br />
Examples with creation transaction and spending transaction pairs, valid and invalid.<br />
<br />
Examples of preimage for sighashing for each of the sighash modes.<br />
<br />
== Rationale ==<br />
<br />
<references /><br />
<br />
== Deployment ==<br />
<br />
TODO<br />
<br />
== Backwards compatibility ==<br />
As a soft fork, older software will continue to operate without modification.<br />
Non-upgraded nodes, however, will consider all SegWit version 1 witness programs as anyone-can-spend scripts.<br />
They are strongly encouraged to upgrade in order to fully validate the new programs.<br />
<br />
Non-upgraded wallets can receive and send bitcoin from non-upgraded and upgraded wallets using SegWit version 0 programs, traditional pay-to-pubkey-hash, etc.<br />
Depending on the implementation, non-upgraded wallets may be able to send to SegWit version 1 programs if they support sending to [https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki BIP173] Bech32 addresses.<br />
<br />
== Acknowledgements ==<br />
<br />
This document is the result of discussions around script and signature improvements with many people, and had direct contributions from Greg Maxwell and others. It further builds on top of earlier published proposals such as Taproot by Greg Maxwell, and Merkle branch constructions by Russell O'Connor, Johnson Lau, and Mark Friedenbach. <br />
<br />
The authors wish to thank Arik Sosman for suggesting to sort Merkle node children before hashes, removing the need to transfer the position in the tree, as well as all those who provided valuable feedback and reviews, including the participants of the [https://github.com/ajtowns/taproot-review structured reviews].</div>934https://en.bitcoin.it/wiki/BIP_0340BIP 03402020-01-19T23:49:18Z<p>934: Update BIP text with latest version from https://github.com/sipa/bips/blob/a92523e57c252ca9/bip-0340.mediawiki</p>
<hr />
<div>{{bip}}<br />
{{BipMoved|bip-0340.mediawiki}}<br />
<br />
<pre><br />
BIP: 340<br />
Title: Schnorr Signatures for secp256k1<br />
Author: Pieter Wuille <pieter.wuille@gmail.com><br />
Jonas Nick <jonasd.nick@gmail.com><br />
Tim Ruffing <crypto@timruffing.de><br />
Comments-Summary: No comments yet.<br />
Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0340<br />
Status: Draft<br />
Type: Standards Track<br />
License: BSD-2-Clause<br />
Created: 2020-01-19<br />
Post-History: 2018-07-06: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016203.html [bitcoin-dev] Schnorr signatures BIP<br />
</pre><br />
<br />
== Introduction ==<br />
<br />
=== Abstract ===<br />
<br />
This document proposes a standard for 64-byte Schnorr signatures over the elliptic curve ''secp256k1''.<br />
<br />
=== Copyright ===<br />
<br />
This document is licensed under the 2-clause BSD license.<br />
<br />
=== Motivation ===<br />
<br />
Bitcoin has traditionally used<br />
[https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm ECDSA] signatures over the [https://www.secg.org/sec2-v2.pdf secp256k1 curve] with [https://en.wikipedia.org/wiki/SHA-2 SHA256] hashes for authenticating<br />
transactions. These are [https://www.secg.org/sec1-v2.pdf standardized], but have a number of downsides<br />
compared to [http://publikationen.ub.uni-frankfurt.de/opus4/files/4280/schnorr.pdf Schnorr signatures] over the same curve:<br />
<br />
* '''Provable security''': Schnorr signatures are provably secure. In more detail, they are ''strongly unforgeable under chosen message attack (SUF-CMA)''<ref>Informally, this means that without knowledge of the secret key but given valid signatures of arbitrary messages, it is not possible to come up with further valid signatures.</ref> [https://www.di.ens.fr/~pointche/Documents/Papers/2000_joc.pdf in the random oracle model assuming the hardness of the elliptic curve discrete logarithm problem (ECDLP)] and [http://www.neven.org/papers/schnorr.pdf in the generic group model assuming variants of preimage and second preimage resistance of the used hash function]<ref>A detailed security proof in the random oracle model, which essentially restates [https://www.di.ens.fr/~pointche/Documents/Papers/2000_joc.pdf the original security proof by Pointcheval and Stern] more explicitly, can be found in [https://eprint.iacr.org/2016/191 a paper by Kiltz, Masny and Pan]. All these security proofs assume a variant of Schnorr signatures that use ''(e,s)'' instead of ''(R,s)'' (see Design above). Since we use a unique encoding of ''R'', there is an efficiently computable bijection that maps ''(R,s)'' to ''(e,s)'', which allows to convert a successful SUF-CMA attacker for the ''(e,s)'' variant to a successful SUF-CMA attacker for the ''(R,s)'' variant (and vice-versa). Furthermore, the proofs consider a variant of Schnorr signatures without key prefixing (see Design above), but it can be verified that the proofs are also correct for the variant with key prefixing. As a result, all the aforementioned security proofs apply to the variant of Schnorr signatures proposed in this document.</ref>. In contrast, the [https://nbn-resolving.de/urn:nbn:de:hbz:294-60803 best known results for the provable security of ECDSA] rely on stronger assumptions.<br />
* '''Non-malleability''': The SUF-CMA security of Schnorr signatures implies that they are non-malleable. On the other hand, ECDSA signatures are inherently malleable<ref>If ''(r,s)'' is a valid ECDSA signature for a given message and key, then ''(r,n-s)'' is also valid for the same message and key. If ECDSA is restricted to only permit one of the two variants (as Bitcoin does through a policy rule on the network), it can be [https://nbn-resolving.de/urn:nbn:de:hbz:294-60803 proven] non-malleable under stronger than usual assumptions.</ref>; a third party without access to the secret key can alter an existing valid signature for a given public key and message into another signature that is valid for the same key and message. This issue is discussed in [bip-0062.mediawiki BIP62] and [bip-0146.mediawiki BIP146].<br />
* '''Linearity''': Schnorr signatures provide a simple and efficient method that enables multiple collaborating parties to produce a signature that is valid for the sum of their public keys. This is the building block for various higher-level constructions that improve efficiency and privacy, such as multisignatures and others (see Applications below).<br />
<br />
For all these advantages, there are virtually no disadvantages, apart<br />
from not being standardized. This document seeks to change that. As we<br />
propose a new standard, a number of improvements not specific to Schnorr signatures can be<br />
made:<br />
<br />
* '''Signature encoding''': Instead of using [https://en.wikipedia.org/wiki/X.690#DER_encoding DER]-encoding for signatures (which are variable size, and up to 72 bytes), we can use a simple fixed 64-byte format.<br />
* '''Public key encoding''': Instead of using ''compressed'' 33-byte encodings of elliptic curve points which are common in Bitcoin today, public keys in this proposal are encoded as 32 bytes.<br />
* '''Batch verification''': The specific formulation of ECDSA signatures that is standardized cannot be verified more efficiently in batch compared to individually, unless additional witness data is added. Changing the signature scheme offers an opportunity to address this.<br />
* '''Completely specified''': To be safe for usage in consensus systems, the verification algorithm must be completely specified at the byte level. This guarantees that nobody can construct a signature that is valid to some verifiers but not all. This is traditionally not a requirement for digital signature schemes, and the lack of exact specification for the DER parsing of ECDSA signatures has caused problems for Bitcoin [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009697.html in the past], needing [https://github.com/bitcoin/bips/blob/master/bip-0066.mediawiki BIP66] to address it. In this document we aim to meet this property by design. For batch verification, which is inherently non-deterministic as the verifier can choose their batches, this property implies that the outcome of verification may only differ from individual verifications with negligible probability, even to an attacker who intentionally tries to make batch- and non-batch verification differ.<br />
<br />
By reusing the same curve and hash function as Bitcoin uses for ECDSA, we are able to retain existing mechanisms for choosing secret and public keys, and we avoid introducing new assumptions about the security of elliptic curves and hash functions.<br />
<br />
== Description ==<br />
<br />
We first build up the algebraic formulation of the signature scheme by<br />
going through the design choices. Afterwards, we specify the exact<br />
encodings and operations.<br />
<br />
=== Design ===<br />
<br />
'''Schnorr signature variant''' Elliptic Curve Schnorr signatures for message ''m'' and public key ''P'' generally involve a point ''R'', integers ''e'' and ''s'' picked by the signer, and the base point ''G'' which satisfy ''e = hash(R || m)'' and ''s⋅G = R + e⋅P''. Two formulations exist, depending on whether the signer reveals ''e'' or ''R'':<br />
# Signatures are pairs ''(e, s)'' that satisfy ''e = hash(s⋅G - e⋅P || m)''. This variant avoids minor complexity introduced by the encoding of the point ''R'' in the signature (see paragraphs "Encoding R and public key point P" and "Implicit Y coordinates" further below in this subsection). Moreover, revealing ''e'' instead of ''R'' allows for potentially shorter signatures: Whereas an encoding of ''R'' inherently needs about 32 bytes, the hash ''e'' can be tuned to be shorter than 32 bytes, and [http://www.neven.org/papers/schnorr.pdf a short hash of only 16 bytes suffices to provide SUF-CMA security at the target security level of 128 bits]. However, a major drawback of this optimization is that finding collisions in a short hash function is easy. This complicates the implementation of secure signing protocols in scenarios in which a group of mutually distrusting signers work together to produce a single joint signature (see Applications below). In these scenarios, which are not captured by the SUF-CMA model due to its assumption of a single honest signer, a promising attack strategy for malicious co-signers is to find a collision in the hash function in order to obtain a valid signature on a message that an honest co-signer did not intend to sign.<br />
# Signatures are pairs ''(R, s)'' that satisfy ''s⋅G = R + hash(R || m)⋅P''. This supports batch verification, as there are no elliptic curve operations inside the hashes. Batch verification enables significant speedups.<br />
<br />
[[File:bip-0340/speedup-batch.png|center|frame|This graph shows the ratio between the time it takes to verify ''n'' signatures individually and to verify a batch of ''n'' signatures. This ratio goes up logarithmically with the number of signatures, or in other words: the total time to verify ''n'' signatures grows with ''O(n / log n)''.]]<br />
<br />
Since we would like to avoid the fragility that comes with short hashes, the ''e'' variant does not provide significant advantages. We choose the ''R''-option, which supports batch verification. <br />
<br />
'''Key prefixing''' Using the verification rule above directly makes Schnorr signatures vulnerable to "related-key attacks" in which a third party can convert a signature ''(R, s)'' for public key ''P'' into a signature ''(R, s + a⋅hash(R || m))'' for public key ''P + a⋅G'' and the same message ''m'', for any given additive tweak ''a'' to the signing key. This would render signatures insecure when keys are generated using [bip-0032.mediawiki BIP32's unhardened derivation] and other methods that rely on additive tweaks to existing keys such as Taproot.<br />
<br />
To protect against these attacks, we choose ''key prefixed''<ref>A limitation of committing to the public key (rather than to a short hash of it, or not at all) is that it removes the ability for public key recovery or verifying signatures against a short public key hash. These constructions are generally incompatible with batch verification.</ref> Schnorr signatures; changing the equation to ''s⋅G = R + hash(R || P || m)⋅P''. [https://eprint.iacr.org/2015/1135.pdf It can be shown] that key prefixing protects against related-key attacks with additive tweaks. In general, key prefixing increases robustness in multi-user settings, e.g., it seems to be a requirement for proving the MuSig multisignature scheme secure (see Applications below).<br />
<br />
We note that key prefixing is not strictly necessary for transaction signatures as used in Bitcoin currently, because signed transactions indirectly commit to the public keys already, i.e., ''m'' contains a commitment to ''pk''. However, this indirect commitment should not be relied upon because it may change with proposals such as SIGHASH_NOINPUT ([./bip-0118.mediawiki BIP118]), and would render the signature scheme unsuitable for other purposes than signing transactions, e.g., [https://bitcoin.org/en/developer-reference#signmessage signing ordinary messages].<br />
<br />
'''Encoding R and public key point P''' There exist several possibilities for encoding elliptic curve points:<br />
# Encoding the full X and Y coordinates of ''P'' and ''R'', resulting in a 64-byte public key and a 96-byte signature.<br />
# Encoding the full X coordinate and one bit of the Y coordinate to determine one of the two possible Y coordinates. This would result in 33-byte public keys and 65-byte signatures.<br />
# Encoding only the X coordinate, resulting in 32-byte public keys and 64-byte signatures.<br />
<br />
Using the first option would be slightly more efficient for verification (around 10%), but we prioritize compactness, and therefore choose option 3.<br />
<br />
'''Implicit Y coordinates''' In order to support efficient verification and batch verification, the Y coordinate of ''P'' and of ''R'' cannot be ambiguous (every valid X coordinate has two possible Y coordinates). We have a choice between several options for symmetry breaking:<br />
# Implicitly choosing the Y coordinate that is in the lower half.<br />
# Implicitly choosing the Y coordinate that is even<ref>Since ''p'' is odd, negation modulo ''p'' will map even numbers to odd numbers and the other way around. This means that for a valid X coordinate, one of the corresponding Y coordinates will be even, and the other will be odd.</ref>.<br />
# Implicitly choosing the Y coordinate that is a quadratic residue (has a square root modulo the field size, or "is a square" for short)<ref>A product of two numbers is a square when either both or none of the factors are squares. As ''-1'' is not a square modulo secp256k1's field size ''p'', and the two Y coordinates corresponding to a given X coordinate are each other's negation, this means exactly one of the two must be a square.</ref>.<br />
<br />
In the case of ''R'' the third option is slower at signing time but a bit faster to verify, as it is possible to directly compute whether the Y coordinate is a square when the points are represented in<br />
[https://en.wikibooks.org/wiki/Cryptography/Prime_Curve/Jacobian_Coordinates Jacobian coordinates] (a common optimization to avoid modular inverses<br />
for elliptic curve operations). The two other options require a possibly<br />
expensive conversion to affine coordinates first. This would even be the case if the sign or oddness were explicitly coded (option 2 in the list above). We therefore choose option 3.<br />
<br />
For ''P'' the speed of signing and verification does not significantly differ between any of the three options because affine coordinates of the point have to be computed anyway. For consistency reasons we choose the same option as for ''R''. The signing algorithm ensures that the signature is valid under the correct public key by negating the secret key if necessary.<br />
<br />
Implicit Y coordinates are not a reduction in security when expressed as the number of elliptic curve operations an attacker is expected to perform to compute the secret key. An attacker can normalize any given public key to a point whose Y coordinate is a square by negating the point if necessary. This is just a subtraction of field elements and not an elliptic curve operation<ref>This can be formalized by a simple reduction that reduces an attack on Schnorr signatures with implicit Y coordinates to an attack to Schnorr signatures with explicit Y coordinates. The reduction works by reencoding public keys and negating the result of the hash function, which is modeled as random oracle, whenever the challenge public key has an explicit Y coordinate that is not a square. A proof sketch can be found [https://medium.com/blockstream/reducing-bitcoin-transaction-sizes-with-x-only-pubkeys-f86476af05d7 here].</ref>.<br />
<br />
'''Tagged Hashes''' Cryptographic hash functions are used for multiple purposes in the specification below and in Bitcoin in general. To make sure hashes used in one context can't be reinterpreted in another one, hash functions can be tweaked with a context-dependent tag name, in such a way that collisions across contexts can be assumed to be infeasible. Such collisions obviously cannot be ruled out completely, but can be assumed infeasible only for schemes that use tagging with a unique name; for other schemes, collisions are at least less likely with tagging than without.<br />
<br />
For example, without tagged hashing a BIP340 signature could also be valid for a signature scheme where the only difference is that the arguments to the hash function are reordered. Worse, if the BIP340 nonce derivation function was copied or independently created, then the nonce could be accidentally reused in the other scheme leaking the secret key.<br />
<br />
This proposal suggests to include the tag by prefixing the hashed data with ''SHA256(tag) || SHA256(tag)''. Because this is a 64-byte long context-specific constant and the ''SHA256'' block size is also 64 bytes, optimized implementations are possible (identical to SHA256 itself, but with a modified initial state). Using SHA256 of the tag name itself is reasonably simple and efficient for implementations that don't choose to use the optimization.<br />
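The prefixing scheme above can be sketched in a few lines of Python; the function name ''tagged_hash'' is illustrative (not defined by this document at this point), but the construction is exactly the one described:

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    """SHA256(SHA256(tag) || SHA256(tag) || msg), as described above."""
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + msg).digest()

# The 64-byte prefix fills exactly one SHA256 block, so an optimized
# implementation can precompute the midstate for a given tag.
digest = tagged_hash("BIPSchnorr", b"example data")
assert len(digest) == 32
```

Because the two tag hashes occupy one full compression-function block, the optimization amounts to running plain SHA256 with a modified initial state.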
<br />
'''Final scheme''' As a result, our final scheme ends up using public key ''pk'' which is the X coordinate of a point ''P'' on the curve whose Y coordinate is a square and signatures ''(r,s)'' where ''r'' is the X coordinate of a point ''R'' whose Y coordinate is a square. The signature satisfies ''s⋅G = R + tagged_hash(r || pk || m)⋅P''.<br />
<br />
=== Specification ===<br />
<br />
The following conventions are used, with constants as defined for [https://www.secg.org/sec2-v2.pdf secp256k1]:<br />
* Lowercase variables represent integers or byte arrays.<br />
** The constant ''p'' refers to the field size, ''0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F''.<br />
** The constant ''n'' refers to the curve order, ''0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141''.<br />
* Uppercase variables refer to points on the curve with equation ''y<sup>2</sup> = x<sup>3</sup> + 7'' over the integers modulo ''p''.<br />
** ''is_infinite(P)'' returns whether or not ''P'' is the point at infinity.<br />
** ''x(P)'' and ''y(P)'' are integers in the range ''0..p-1'' and refer to the X and Y coordinates of a point ''P'' (assuming it is not infinity).<br />
** The constant ''G'' refers to the base point, for which ''x(G) = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798'' and ''y(G) = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8''.<br />
** Addition of points refers to the usual [https://en.wikipedia.org/wiki/Elliptic_curve#The_group_law elliptic curve group operation].<br />
** [https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplication Multiplication (⋅) of an integer and a point] refers to the repeated application of the group operation.<br />
* Functions and operations:<br />
** ''||'' refers to byte array concatenation.<br />
** The function ''x[i:j]'', where ''x'' is a byte array, returns a ''(j - i)''-byte array with a copy of the ''i''-th byte (inclusive) to the ''j''-th byte (exclusive) of ''x''.<br />
** The function ''bytes(x)'', where ''x'' is an integer, returns the 32-byte encoding of ''x'', most significant byte first.<br />
** The function ''bytes(P)'', where ''P'' is a point, returns ''bytes(x(P))''.<br />
** The function ''int(x)'', where ''x'' is a 32-byte array, returns the 256-bit unsigned integer whose most significant byte first encoding is ''x''.<br />
** The function ''is_square(x)'', where ''x'' is an integer, returns whether or not ''x'' is a quadratic residue modulo ''p''. Since ''p'' is prime, it is equivalent to the [https://en.wikipedia.org/wiki/Legendre_symbol Legendre symbol] ''(x / p) = x<sup>(p-1)/2</sup> mod p'' being equal to ''1''<ref>For points ''P'' on the secp256k1 curve it holds that ''y(P)<sup>(p-1)/2</sup> &ne; 0 mod p''.</ref>.<br />
** The function ''has_square_y(P)'', where ''P'' is a point, is defined as ''not is_infinite(P) and is_square(y(P))''<ref>For points ''P'' on the secp256k1 curve it holds that ''has_square_y(P) = not has_square_y(-P)''.</ref>.<br />
** The function ''lift_x(x)'', where ''x'' is an integer in range ''0..p-1'', returns the point ''P'' for which ''x(P) = x''<ref><br />
Given a candidate X coordinate ''x'' in the range ''0..p-1'', there exist either exactly two or exactly zero valid Y coordinates. If no valid Y coordinate exists, then ''x'' is not a valid X coordinate either, i.e., no point ''P'' exists for which ''x(P) = x''. The valid Y coordinates for a given candidate ''x'' are the square roots of ''c = x<sup>3</sup> + 7 mod p'' and they can be computed as ''y = &plusmn;c<sup>(p+1)/4</sup> mod p'' (see [https://en.wikipedia.org/wiki/Quadratic_residue#Prime_or_prime_power_modulus Quadratic residue]) if they exist, which can be checked by squaring and comparing with ''c''.<br />
</ref> and ''has_square_y(P)''<ref><br />
If ''P := lift_x(x)'' does not fail, then ''y := y(P) = c<sup>(p+1)/4</sup> mod p'' is square. Proof: If ''lift_x'' does not fail, ''y'' is a square root of ''c'' and therefore the [https://en.wikipedia.org/wiki/Legendre_symbol Legendre symbol] ''(c / p)'' is ''c<sup>(p-1)/2</sup> = 1 mod p''. Because the Legendre symbol ''(y / p)'' is ''y<sup>(p-1)/2</sup> mod p = c<sup>((p+1)/4)((p-1)/2)</sup> mod p = 1<sup>((p+1)/4)</sup> mod p = 1 mod p'', ''y'' is square.<br />
</ref>, or fails if no such point exists. The function ''lift_x(x)'' is equivalent to the following pseudocode:<br />
*** Let ''c = x<sup>3</sup> + 7 mod p''.<br />
*** Let ''y = c<sup>(p+1)/4</sup> mod p''.<br />
*** Fail if ''c &ne; y<sup>2</sup> mod p''.<br />
*** Return the unique point ''P'' such that ''x(P) = x'' and ''y(P) = y'', or fail if no such point exists.<br />
** The function ''point(x)'', where ''x'' is a 32-byte array, returns the point ''P = lift_x(int(x))''.<br />
** The function ''hash<sub>tag</sub>(x)'' where ''tag'' is a UTF-8 encoded tag name and ''x'' is a byte array returns the 32-byte hash ''SHA256(SHA256(tag) || SHA256(tag) || x)''.<br />
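The ''lift_x'' pseudocode above translates directly to Python, since ''p &equiv; 3 mod 4'' makes square roots computable with a single exponentiation (a sketch; the return convention of a coordinate pair is an assumption of this example):

```python
# Field size of secp256k1, as defined in the conventions above.
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F

def lift_x(x: int):
    """Return the point (x, y) with square y, or None if no such point exists."""
    if not (0 <= x < p):
        return None
    c = (pow(x, 3, p) + 7) % p
    # Candidate square root; valid because p = 3 mod 4.
    y = pow(c, (p + 1) // 4, p)
    if pow(y, 2, p) != c:
        return None  # x is not a valid X coordinate on the curve
    # y = c^((p+1)/4) is itself a square (see the footnote above).
    return (x, y)
```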
<br />
==== Public Key Generation ====<br />
<br />
Input:<br />
* The secret key ''sk'': a 32-byte array, generated uniformly at random<br />
<br />
The algorithm ''PubKey(sk)'' is defined as:<br />
* Let ''d = int(sk)''.<br />
* Fail if ''d = 0'' or ''d &ge; n''.<br />
* Return ''bytes(d⋅G)''.<br />
<br />
Note that we use a very different public key format (32 bytes) than the ones used by existing systems (which typically use elliptic curve points as public keys, or 33-byte or 65-byte encodings of them). A side effect is that ''PubKey(sk) = PubKey(bytes(n - int(sk)))'', so every public key has two corresponding secret keys.<br />
<br />
As an alternative to generating keys randomly, it is also possible and safe to repurpose existing key generation algorithms for ECDSA in a compatible way. The secret keys constructed by such an algorithm can be used as ''sk'' directly. The public keys constructed by such an algorithm (assuming they use the 33-byte compressed encoding) need to be converted by dropping the first byte. Specifically, [https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki BIP32] and schemes built on top of it remain usable.<br />
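A minimal Python sketch of ''PubKey'' using naive affine point arithmetic (illustrative only, not constant-time; the helper names are this example's, not the document's):

```python
# Curve constants from the Specification section.
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P1, P2):
    # Affine group law; None represents the point at infinity.
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    if P1[0] == P2[0] and P1[1] != P2[1]:
        return None
    if P1 == P2:
        lam = 3 * P1[0] * P1[0] * pow(2 * P1[1], p - 2, p) % p
    else:
        lam = (P2[1] - P1[1]) * pow(P2[0] - P1[0], p - 2, p) % p
    x3 = (lam * lam - P1[0] - P2[0]) % p
    return (x3, (lam * (P1[0] - x3) - P1[1]) % p)

def point_mul(P, k):
    # Double-and-add scalar multiplication (not constant-time).
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def pubkey(sk: bytes) -> bytes:
    d = int.from_bytes(sk, "big")
    if d == 0 or d >= n:
        raise ValueError("secret key out of range")
    return point_mul(G, d)[0].to_bytes(32, "big")  # bytes(d*G): X coordinate only
```

Note that `pubkey` of `n - d` equals `pubkey` of `d`, which demonstrates the two-secret-keys-per-public-key side effect mentioned above.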
<br />
==== Default Signing ====<br />
<br />
Input:<br />
* The secret key ''sk'': a 32-byte array<br />
* The message ''m'': a 32-byte array<br />
<br />
The algorithm ''Sign(sk, m)'' is defined as:<br />
* Let ''d' = int(sk)''<br />
* Fail if ''d' = 0'' or ''d' &ge; n''<br />
* Let ''P = d'⋅G''<br />
* Let ''d = d' '' if ''has_square_y(P)'', otherwise let ''d = n - d' ''.<br />
* Let ''rand = hash<sub>BIPSchnorrDerive</sub>(bytes(d) || m)''.<br />
* Let ''k' = int(rand) mod n''<ref>Note that in general, taking a uniformly random 256-bit integer modulo the curve order will produce an unacceptably biased result. However, for the secp256k1 curve, the order is sufficiently close to ''2<sup>256</sup>'' that this bias is not observable (''1 - n / 2<sup>256</sup>'' is around ''1.27 * 2<sup>-128</sup>'').</ref>.<br />
* Fail if ''k' = 0''.<br />
* Let ''R = k'⋅G''.<br />
* Let ''k = k' '' if ''has_square_y(R)'', otherwise let ''k = n - k' ''.<br />
* Let ''e = int(hash<sub>BIPSchnorr</sub>(bytes(R) || bytes(P) || m)) mod n''.<br />
* Return the signature ''bytes(R) || bytes((k + ed) mod n)''.<br />
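The steps above can be sketched as self-contained Python (an illustrative reference, not a production signer: the point arithmetic is naive and not constant-time, and the helper names are this example's own):

```python
import hashlib

# Curve constants from the Specification section.
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P1, P2):
    # Affine group law; None represents the point at infinity.
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    if P1[0] == P2[0] and P1[1] != P2[1]:
        return None
    if P1 == P2:
        lam = 3 * P1[0] * P1[0] * pow(2 * P1[1], p - 2, p) % p
    else:
        lam = (P2[1] - P1[1]) * pow(P2[0] - P1[0], p - 2, p) % p
    x3 = (lam * lam - P1[0] - P2[0]) % p
    return (x3, (lam * (P1[0] - x3) - P1[1]) % p)

def point_mul(P, k):
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def tagged_hash(tag: str, msg: bytes) -> bytes:
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def has_square_y(P) -> bool:
    return P is not None and pow(P[1], (p - 1) // 2, p) == 1

def sign(sk: bytes, m: bytes) -> bytes:
    d0 = int.from_bytes(sk, "big")
    if not (1 <= d0 < n) or len(m) != 32:
        raise ValueError("invalid input")
    P = point_mul(G, d0)
    d = d0 if has_square_y(P) else n - d0   # negate if y(P) is not a square
    rand = tagged_hash("BIPSchnorrDerive", d.to_bytes(32, "big") + m)
    k0 = int.from_bytes(rand, "big") % n
    if k0 == 0:
        raise RuntimeError("failure (negligible probability)")
    R = point_mul(G, k0)
    k = k0 if has_square_y(R) else n - k0   # negate if y(R) is not a square
    e = int.from_bytes(tagged_hash("BIPSchnorr",
        R[0].to_bytes(32, "big") + P[0].to_bytes(32, "big") + m), "big") % n
    return R[0].to_bytes(32, "big") + ((k + e * d) % n).to_bytes(32, "big")
```

As the algorithm specifies, signing is fully deterministic: the same `sk` and `m` always yield the same 64-byte signature.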
<br />
==== Alternative Signing ====<br />
<br />
It should be noted that various alternative signing algorithms can be used to produce equally valid signatures. The algorithm in the previous section is deterministic, i.e., it will always produce the same signature for a given message and secret key. This method does not need a random number generator (RNG) at signing time and is thus trivially robust against failures of RNGs. Alternatively the 32-byte ''rand'' value may be generated in other ways, producing a different but still valid signature (in other words, this is not a ''unique'' signature scheme). '''No matter which method is used to generate the ''rand'' value, the value must be a fresh uniformly random 32-byte string which is not even partially predictable for the attacker.'''<br />
<br />
'''Synthetic nonces''' For instance, when an RNG is available, 32 bytes of RNG output can be appended to the input to ''hash<sub>BIPSchnorrDerive</sub>''. This will change the corresponding line in the signing algorithm to ''rand = hash<sub>BIPSchnorrDerive</sub>(bytes(d) || m || get_32_bytes_from_rng())'', where ''get_32_bytes_from_rng()'' is the call to the RNG. Adding RNG output may improve protection against [https://moderncrypto.org/mail-archive/curves/2017/000925.html fault injection attacks and side-channel attacks], and it is safe to add the output of a low-entropy RNG.<br />
<br />
'''Nonce exfiltration protection''' It is possible to strengthen the nonce generation algorithm using a second device. In this case, the second device contributes randomness which the actual signer provably incorporates into its nonce. This prevents certain attacks where the signer device is compromised and intentionally tries to leak the secret key through its nonce selection.<br />
<br />
'''Multisignatures''' This signature scheme is compatible with various types of multisignature and threshold schemes such as [https://eprint.iacr.org/2018/068 MuSig], where a single public key requires holders of multiple secret keys to participate in signing (see Applications below).<br />
'''It is important to note that multisignature signing schemes in general are insecure with the ''rand'' generation from the default signing algorithm above (or any other deterministic method).'''<br />
<br />
==== Verification ====<br />
<br />
Input:<br />
* The public key ''pk'': a 32-byte array<br />
* The message ''m'': a 32-byte array<br />
* A signature ''sig'': a 64-byte array<br />
<br />
The algorithm ''Verify(pk, m, sig)'' is defined as:<br />
* Let ''P = point(pk)''; fail if ''point(pk)'' fails.<br />
* Let ''r = int(sig[0:32])''; fail if ''r &ge; p''.<br />
* Let ''s = int(sig[32:64])''; fail if ''s &ge; n''.<br />
* Let ''e = int(hash<sub>BIPSchnorr</sub>(bytes(r) || bytes(P) || m)) mod n''.<br />
* Let ''R = s⋅G - e⋅P''.<br />
* Fail if ''not has_square_y(R)'' or ''x(R) &ne; r''.<br />
* Return success iff no failure occurred before reaching this point.<br />
<br />
For every valid secret key ''sk'' and message ''m'', ''Verify(PubKey(sk),m,Sign(sk,m))'' will succeed.<br />
<br />
Note that the correctness of verification relies on the fact that ''point(pk)'' always returns a point with a square Y coordinate. A hypothetical verification algorithm that treats points as public keys, and takes the point ''P'' directly as input would fail any time a point with non-square Y is used. While it is possible to correct for this by negating points with non-square Y coordinate before further processing, this would result in a scheme where every (message, signature) pair is valid for two public keys (a type of malleability that exists for ECDSA as well, but we don't wish to retain). We avoid these problems by treating just the X coordinate as public key.<br />
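The verification algorithm can likewise be sketched in self-contained Python (illustrative, not optimized; helper names are this example's own):

```python
import hashlib

# Curve constants from the Specification section.
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P1, P2):
    # Affine group law; None represents the point at infinity.
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    if P1[0] == P2[0] and P1[1] != P2[1]:
        return None
    if P1 == P2:
        lam = 3 * P1[0] * P1[0] * pow(2 * P1[1], p - 2, p) % p
    else:
        lam = (P2[1] - P1[1]) * pow(P2[0] - P1[0], p - 2, p) % p
    x3 = (lam * lam - P1[0] - P2[0]) % p
    return (x3, (lam * (P1[0] - x3) - P1[1]) % p)

def point_mul(P, k):
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def tagged_hash(tag: str, msg: bytes) -> bytes:
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def lift_x(x: int):
    # Point with X coordinate x and square Y coordinate, or None.
    if not (0 <= x < p):
        return None
    c = (pow(x, 3, p) + 7) % p
    y = pow(c, (p + 1) // 4, p)
    return (x, y) if pow(y, 2, p) == c else None

def verify(pk: bytes, m: bytes, sig: bytes) -> bool:
    P = lift_x(int.from_bytes(pk, "big"))
    if P is None:
        return False
    r = int.from_bytes(sig[0:32], "big")
    s = int.from_bytes(sig[32:64], "big")
    if r >= p or s >= n:
        return False
    e = int.from_bytes(tagged_hash("BIPSchnorr", sig[0:32] + pk + m), "big") % n
    # R = s*G - e*P, subtracting via the negation (x, p - y) of P.
    R = point_add(point_mul(G, s), point_mul((P[0], p - P[1]), e))
    return R is not None and pow(R[1], (p - 1) // 2, p) == 1 and R[0] == r
```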
<br />
==== Batch Verification ====<br />
<br />
Input:<br />
* The number ''u'' of signatures<br />
* The public keys ''pk<sub>1..u</sub>'': ''u'' 32-byte arrays<br />
* The messages ''m<sub>1..u</sub>'': ''u'' 32-byte arrays<br />
* The signatures ''sig<sub>1..u</sub>'': ''u'' 64-byte arrays<br />
<br />
The algorithm ''BatchVerify(pk<sub>1..u</sub>, m<sub>1..u</sub>, sig<sub>1..u</sub>)'' is defined as:<br />
* Generate ''u-1'' random integers ''a<sub>2...u</sub>'' in the range ''1...n-1''. They are generated deterministically using a [https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator CSPRNG] seeded by a cryptographic hash of all inputs of the algorithm, i.e. ''seed = seed_hash(pk<sub>1</sub>..pk<sub>u</sub> || m<sub>1</sub>..m<sub>u</sub> || sig<sub>1</sub>..sig<sub>u</sub> )''. A safe choice is to instantiate ''seed_hash'' with SHA256 and use [https://tools.ietf.org/html/rfc8439 ChaCha20] with key ''seed'' as a CSPRNG to generate 256-bit integers, skipping integers not in the range ''1...n-1''.<br />
* For ''i = 1 .. u'':<br />
** Let ''P<sub>i</sub> = point(pk<sub>i</sub>)''; fail if ''point(pk<sub>i</sub>)'' fails.<br />
** Let ''r<sub>i</sub> = int(sig<sub>i</sub>[0:32])''; fail if ''r<sub>i</sub> &ge; p''.<br />
** Let ''s<sub>i</sub> = int(sig<sub>i</sub>[32:64])''; fail if ''s<sub>i</sub> &ge; n''.<br />
** Let ''e<sub>i</sub> = int(hash<sub>BIPSchnorr</sub>(bytes(r<sub>i</sub>) || bytes(P<sub>i</sub>) || m<sub>i</sub>)) mod n''.<br />
** Let ''R<sub>i</sub> = lift_x(r<sub>i</sub>)''; fail if ''lift_x(r<sub>i</sub>)'' fails.<br />
* Fail if ''(s<sub>1</sub> + a<sub>2</sub>s<sub>2</sub> + ... + a<sub>u</sub>s<sub>u</sub>)⋅G &ne; R<sub>1</sub> + a<sub>2</sub>⋅R<sub>2</sub> + ... + a<sub>u</sub>⋅R<sub>u</sub> + e<sub>1</sub>⋅P<sub>1</sub> + (a<sub>2</sub>e<sub>2</sub>)⋅P<sub>2</sub> + ... + (a<sub>u</sub>e<sub>u</sub>)⋅P<sub>u</sub>''.<br />
* Return success iff no failure occurred before reaching this point.<br />
<br />
If all individual signatures are valid (i.e., ''Verify'' would return success for them), ''BatchVerify'' will always return success. If at least one signature is invalid, ''BatchVerify'' will return success with at most a negligible probability.<br />
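A Python sketch of batch verification follows. One stated simplification: the random coefficients ''a<sub>2..u</sub>'' are drawn from the operating-system RNG via `secrets` rather than from the seeded ChaCha20 CSPRNG the algorithm above describes; all other helper names are this example's own.

```python
import hashlib
import secrets

# Curve constants from the Specification section.
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P1, P2):
    # Affine group law; None represents the point at infinity.
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    if P1[0] == P2[0] and P1[1] != P2[1]:
        return None
    if P1 == P2:
        lam = 3 * P1[0] * P1[0] * pow(2 * P1[1], p - 2, p) % p
    else:
        lam = (P2[1] - P1[1]) * pow(P2[0] - P1[0], p - 2, p) % p
    x3 = (lam * lam - P1[0] - P2[0]) % p
    return (x3, (lam * (P1[0] - x3) - P1[1]) % p)

def point_mul(P, k):
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def tagged_hash(tag: str, msg: bytes) -> bytes:
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def lift_x(x: int):
    if not (0 <= x < p):
        return None
    c = (pow(x, 3, p) + 7) % p
    y = pow(c, (p + 1) // 4, p)
    return (x, y) if pow(y, 2, p) == c else None

def batch_verify(pks, ms, sigs) -> bool:
    lhs = 0      # accumulates s_1 + a_2*s_2 + ... + a_u*s_u mod n
    rhs = None   # accumulates R_1 + a_2*R_2 + ... + e_1*P_1 + (a_2*e_2)*P_2 + ...
    for i, (pk, m, sig) in enumerate(zip(pks, ms, sigs)):
        P = lift_x(int.from_bytes(pk, "big"))
        if P is None:
            return False
        r = int.from_bytes(sig[0:32], "big")
        s = int.from_bytes(sig[32:64], "big")
        if r >= p or s >= n:
            return False
        e = int.from_bytes(tagged_hash("BIPSchnorr", sig[0:32] + pk + m), "big") % n
        R = lift_x(r)
        if R is None:
            return False
        # a_1 = 1; later coefficients random in 1..n-1 (OS RNG here, simplified).
        a = 1 if i == 0 else 1 + secrets.randbelow(n - 1)
        lhs = (lhs + a * s) % n
        rhs = point_add(rhs, point_mul(R, a))
        rhs = point_add(rhs, point_mul(P, a * e % n))
    return point_mul(G, lhs) == rhs
```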
<br />
=== Optimizations ===<br />
<br />
Many techniques are known for optimizing elliptic curve implementations. Several of them apply here, but are out of scope for this document. Two are listed below however, as they are relevant to the design decisions:<br />
<br />
'''Squareness testing''' The function ''is_square(x)'' is defined as above, but can be computed more efficiently using an [https://en.wikipedia.org/wiki/Jacobi_symbol#Calculating_the_Jacobi_symbol extended GCD algorithm].<br />
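As a sketch of this optimization, the GCD-like Jacobi symbol algorithm below computes ''is_square(x)'' without a full modular exponentiation (since ''p'' is prime, the Jacobi symbol equals the Legendre symbol; the function names are this example's own):

```python
# Field size of secp256k1.
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F

def jacobi(a: int, m: int) -> int:
    """Jacobi symbol (a/m) for odd m > 0, via a GCD-like iteration."""
    a %= m
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if m % 8 in (3, 5):
                result = -result
        a, m = m, a
        if a % 4 == 3 and m % 4 == 3:
            result = -result
        a %= m
    return result if m == 1 else 0

def is_square(x: int) -> bool:
    # For prime p the Jacobi symbol coincides with the Legendre symbol,
    # so jacobi(x, p) == 1 exactly when x is a nonzero quadratic residue.
    return jacobi(x % p, p) == 1
```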
<br />
'''Jacobian coordinates''' Elliptic Curve operations can be implemented more efficiently by using [https://en.wikibooks.org/wiki/Cryptography/Prime_Curve/Jacobian_Coordinates Jacobian coordinates]. Elliptic Curve operations implemented this way avoid many intermediate modular inverses (which are computationally expensive), and the scheme proposed in this document is in fact designed to not need any inversions at all for verification. When operating on a point ''P'' with Jacobian coordinates ''(x,y,z)'' which is not the point at infinity and for which ''x(P)'' is defined as ''x / z<sup>2</sup>'' and ''y(P)'' is defined as ''y / z<sup>3</sup>'':<br />
* ''has_square_y(P)'' can be implemented as ''is_square(yz mod p)''.<br />
* ''x(P) &ne; r'' can be implemented as ''(0 &le; r < p) and (x &ne; z<sup>2</sup>r mod p)''.<br />
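These two identities can be checked numerically; the sketch below builds a Jacobian triple for a known affine point and confirms that squareness of ''y(P)'' and the ''x(P) = r'' comparison can be decided without any modular inversion (helper names are this example's own):

```python
# Field size and generator coordinates of secp256k1.
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
GX = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
GY = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def is_square(x: int) -> bool:
    return pow(x % p, (p - 1) // 2, p) == 1

def to_jacobian(X, Y, z):
    # Jacobian triple (x, y, z) with x(P) = x / z^2 and y(P) = y / z^3.
    return (X * z * z % p, Y * z * z * z % p, z)

# has_square_y without converting back to affine coordinates:
# y * z = y(P) * z^4, and z^4 is always a square, so
# is_square(y * z mod p) == is_square(y(P)).
x, y, z = to_jacobian(GX, GY, 123456789)
assert is_square(y * z % p) == is_square(GY)

# x(P) == r without an inversion: compare x with z^2 * r.
assert x == z * z * GX % p
```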
<br />
== Applications ==<br />
<br />
There are several interesting applications beyond simple signatures.<br />
While recent academic papers claim that they are also possible with ECDSA, consensus support for Schnorr signature verification would significantly simplify the constructions.<br />
<br />
=== Multisignatures and Threshold Signatures ===<br />
<br />
By means of an interactive scheme such as [https://eprint.iacr.org/2018/068 MuSig], participants can aggregate their public keys into a single public key which they can jointly sign for. This allows ''n''-of-''n'' multisignatures which, from a verifier's perspective, are no different from ordinary signatures, giving improved privacy and efficiency versus ''CHECKMULTISIG'' or other means.<br />
<br />
Moreover, Schnorr signatures are compatible with [https://web.archive.org/web/20031003232851/http://www.research.ibm.com/security/dkg.ps distributed key generation], which enables interactive threshold signature schemes, e.g., the schemes described by [http://cacr.uwaterloo.ca/techreports/2001/corr2001-13.ps Stinson and Strobl (2001)] or [https://web.archive.org/web/20060911151529/http://theory.lcs.mit.edu/~stasio/Papers/gjkr03.pdf Gennaro, Jarecki and Krawczyk (2003)]. These protocols make it possible to realize ''k''-of-''n'' threshold signatures, which ensure that any subset of size ''k'' of the set of ''n'' signers can sign but no subset of size less than ''k'' can produce a valid Schnorr signature. However, the practicality of the existing schemes is limited: most schemes in the literature have been proven secure only for the case ''k-1 < n/2'', are not secure when used concurrently in multiple sessions, or require a reliable broadcast mechanism to be secure. Further research is necessary to improve this situation.<br />
<br />
=== Adaptor Signatures ===<br />
<br />
[https://download.wpsoftware.net/bitcoin/wizardry/mw-slides/2018-05-18-l2/slides.pdf Adaptor signatures] can be produced by a signer by offsetting his public nonce with a known point ''T = t⋅G'', but not offsetting his secret nonce.<br />
A correct signature (or partial signature, as individual signers' contributions to a multisignature are called) on the same message with the same nonce will then be equal to the adaptor signature offset by ''t'', meaning that learning ''t'' is equivalent to learning a correct signature.<br />
This can be used to enable atomic swaps or even [https://eprint.iacr.org/2018/472 general payment channels] in which the atomicity of disjoint transactions is ensured using the signatures themselves, rather than Bitcoin script support. The resulting transactions will appear to verifiers to be no different from ordinary single-signer transactions, except perhaps for the inclusion of locktime refund logic.<br />
<br />
Adaptor signatures, beyond the efficiency and privacy benefits of encoding script semantics into constant-sized signatures, have additional benefits over traditional hash-based payment channels. Specifically, the secret values ''t'' may be reblinded between hops, allowing long chains of transactions to be made atomic while even the participants cannot identify which transactions are part of the chain. Also, because the secret values are chosen at signing time, rather than key generation time, existing outputs may be repurposed for different applications without recourse to the blockchain, even multiple times.<br />
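The offset relation described above can be illustrated with scalar arithmetic alone. This is a sketch: curve points and the actual challenge computation are elided, and all numeric values are illustrative.<br />

```python
# Scalar-only sketch of the adaptor-signature relation, modulo the
# secp256k1 group order n. Curve points are elided; values illustrative.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d, k, e, t = 11, 222, 3333, 44444   # key, nonce, challenge, adaptor secret

# Adaptor signature: the public nonce is offset to R + T (T = t*G), but
# the secret nonce k is NOT offset, so the signer emits:
s_adaptor = (k + e * d) % n

# A completed (valid) signature uses secret nonce k + t, matching R + T:
s_complete = ((k + t) + e * d) % n

# Hence learning t is equivalent to learning the completed signature:
assert (s_complete - s_adaptor) % n == t
```

Either party who sees both the adaptor signature and the completed signature can subtract them to recover ''t'', which is what makes the swap atomic.<br />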
<br />
=== Blind Signatures ===<br />
<br />
A blind signature protocol is an interactive protocol that enables a signer to sign a message at the behest of another party without learning any information about the signed message or the signature. Schnorr signatures admit a very [https://www.math.uni-frankfurt.de/~dmst/research/papers/schnorr.blind_sigs_attack.2001.pdf simple blind signature scheme] which is however insecure because it's vulnerable to [https://www.iacr.org/archive/crypto2002/24420288/24420288.pdf Wagner's attack]. A known mitigation is to let the signer abort a signing session with a certain probability, and the resulting scheme can be [https://eprint.iacr.org/2019/877 proven secure under non-standard cryptographic assumptions].<br />
<br />
Blind Schnorr signatures could for example be used in [https://github.com/ElementsProject/scriptless-scripts/blob/master/md/partially-blind-swap.md Partially Blind Atomic Swaps], a construction to enable the transfer of coins, mediated by an untrusted escrow agent, without connecting the transactors in the public blockchain transaction graph.<br />
<br />
== Test Vectors and Reference Code ==<br />
<br />
For development and testing purposes, we provide a [[bip-0340/test-vectors.csv|collection of test vectors in CSV format]] and a naive, highly inefficient, and non-constant time [[bip-0340/reference.py|pure Python 3.7 reference implementation of the signing and verification algorithm]].<br />
The reference implementation is for demonstration purposes only and not to be used in production environments.<br />
<br />
== Footnotes ==<br />
<br />
<references /><br />
<br />
== Acknowledgements ==<br />
<br />
This document is the result of many discussions around Schnorr based signatures over the years, and had input from Johnson Lau, Greg Maxwell, Andrew Poelstra, Rusty Russell, and Anthony Towns. The authors further wish to thank all those who provided valuable feedback and reviews, including the participants of the [https://github.com/ajtowns/taproot-review structured reviews].</div>934https://en.bitcoin.it/wiki/BlockstreamBlockstream2020-01-17T09:06:09Z<p>FaceFace: another typo</p>
<hr />
<div>{{infobox company|name=Blockstream<br />
|image=<br />
|industry=<br />
|founder=<br />
|foundation=<br />
|location=<br />
|assets=<br />
|twitter=<br />
|facebook=<br />
|website=https://blockstream.info/<br />
}}</div>FaceFacehttps://en.bitcoin.it/wiki/Lightning_walletLightning wallet2019-12-19T11:24:07Z<p>Flix: /* See Also */</p>
<hr />
<div>A Lightning Network wallet is a Bitcoin wallet that also connects to the Lightning Network, allowing users to open channels and transact on this Bitcoin layer 2, which enables instant and cheap payments.<br />
<br />
==Mobile wallets==<br />
<br />
*Eclair<br />
*Bluewallet<br />
*Wallet of Satoshi<br />
*Bitcoin Lightning Wallet<br />
*Sats App<br />
*LightningPeach<br />
*Breez<br />
*Lightning Labs' wallet<br />
*Zap<br />
*Bottle Pay<br />
<br />
==Desktop wallets==<br />
*Zap<br />
*Electrum<br />
<br />
==Web wallets==<br />
*Tippin.me<br />
<br />
==See Also==<br />
*[[Lightning Exchanges]]<br />
<br />
==External Links==<br />
*[https://lightningnetworkstores.com/wallets List of Lightning Network wallets]</div>Flixhttps://en.bitcoin.it/wiki/CoinLoanCoinLoan2019-12-17T11:37:29Z<p>CoinLoan: CoinLoan is a crypto-lending platform, where anyone can earn interest in idle assets, issue and get a loan backed by crypto-collateral.</p>
<hr />
<div>CoinLoan is a crypto-lending platform where anyone can issue and obtain a loan<br />
backed by crypto-collateral. The platform is a meeting point for borrowers who seek to leverage their crypto-assets without selling them and lenders who wish to earn competitive interest with minimal risk.<br />
<br />
<br><br />
== Company ==<br />
CoinLoan is headquartered in Tallinn, Estonia, and for good reason: Estonia is a European start-up and digitization hub, and one of the world's most crypto-friendly environments.<br />
<br><br />
With the key members living in Tallinn, CoinLoan's international team includes professionals from other countries in Northern and Central Europe.<br />
<br><br />
CoinLoan has the status of a licensed financial institution <ref>[https://coinloan.io/licenses/, CoinLoan Licenses & Certificates],</ref> and serves customers worldwide. <br />
<br />
== History ==<br />
In 2014, an event occurred that later became the incentive for the company's creation: Alex Faliushin was forced to sell his bitcoins at about $400. At that time there was no way to keep crypto-assets as a long-term investment while also getting money for urgent needs. By the beginning of 2017, BTC had tripled in price. <br />
<br />
<br><br />
Faliushin and his business partner, Max Sapelov, came up with the idea of using crypto as loan collateral. They evaluated the market and development potential; there was no competition yet.<br />
<br />
<br><br />
In the first half of 2017, Faliushin, as CEO, and Sapelov, as CTO, started to build a team and develop the crypto-to-fiat lending platform called CoinLoan. SALT Lending emerged at around the same time, implementing a similar business model (with an opportunity to borrow only).<br />
<br />
<br><br />
In July 2018, the CoinLoan platform went live <ref>[https://finance.yahoo.com/news/coinloan-opens-platform-bridge-gap-190000636.html CoinLoan Opens Platform to Bridge Gap Between Lenders and Borrowers]</ref>. It was a bearish year for bitcoin and the whole crypto-market, which revealed a lack of services for crypto-enthusiasts trying to hold their assets. In response to this market demand, many crypto-backed lending companies emerged.<br />
<br />
<br><br />
CoinLoan continued to develop and improve the platform progressively:<br />
* '''December 2018''' — a crypto-exchange service appeared.<ref>[https://blog.coinloan.io/coinloan-crypto-exchange-is-now-open/ CoinLoan Crypto Exchange Is Now Open]</ref> CoinLoan users became able to convert crypto to fiat and vice versa directly on the platform.<br />
* '''March 2019''' — a dynamic collateral monitoring system was created.<ref>[https://blog.coinloan.io/coinloan-beats-volatility-and-increases-ltv-limit-to-70/ CoinLoan Beats Volatility and Increases LTV Limit to 70%]</ref> It allowed CoinLoan to lift the LTV limit to 70% and move the liquidation threshold beyond 90%.<br />
* '''May 2019''' — CoinLoan enabled Visa and MasterCard payments.<ref>[https://www.newswire.com/news/coinloan-crypto-to-fiat-lending-company-enables-credit-card-deposits-20900972 CoinLoan Crypto-to-Fiat Lending Company Enables Credit Card Deposits] </ref><br />
* '''June 2019''' — to supplement existing crypto-to-fiat loans, crypto-to-crypto lending options were added.<ref>[https://blog.coinloan.io/coinloan-enters-the-crypto-to-crypto-lending-market/ CoinLoan Enters the Crypto-to-Crypto Lending Market]</ref><br />
* '''November 2019''' — an interest account service was added.<ref>[https://blog.coinloan.io/earn-8-on-your-stablecoins-with-coinloan-interest-account/ Earn 8% on Your Stablecoins With CoinLoan Interest Account]</ref> It works like a bank deposit, enabling users to earn a fixed return for depositing crypto and fiat funds.<br />
<br />
== Main Functions ==<br />
==== Interest Account ====<br />
Offers a fixed rate on crypto and fiat assets for parking them on CoinLoan.<br />
<br />
==== Lending Platform ====<br />
Allows users to find an appropriate loan offer/request or create a custom one with a desired interest rate, loan term, and amount.<br />
<br />
==== Crypto Exchange ====<br />
Enables users to buy or sell crypto quickly.<br />
<br />
==== Affiliate Program ====<br />
Rewards users for bringing newcomers to the platform.<br />
== Fees Schedule ==<br />
==== Borrowing ====<br />
The borrowing fee is 1% of the overall loan principal amount. The CoinLoan platform provides two options for paying the borrowing fee:<br />
* In loan currency;<br />
* In CoinLoan tokens (CLT) – 50% discount.<br />
==== Lending ====<br />
Lending is free.<br />
==== Deposit ====<br />
Deposits in crypto and fiat are free.<br />
Deposits by Visa / MasterCard: 2 EUR + 4.2% (from deposit amount).<br />
<br />
==== Withdrawal ====<br />
Withdrawals in cryptocurrency and euro are free. Withdrawal fees for other fiat currencies are fixed, as shown on [https://coinloan.io/fees/ the official site].<br />
<br />
==== Liquidation ====<br />
The borrower is charged a liquidation fee in the amount of 5% of the liquidated loan collateral.<br />
== Assets Security ==<br />
CoinLoan focuses on developing its own security technologies. The company prefers to maintain cryptocurrency custody in-house rather than outsource security to a third party.<ref>[https://blog.coinloan.io/do-we-store-your-crypto-assets-securely/ Do We Store Your Crypto Assets Securely?]</ref><br />
<br><br />
* All crypto-assets are stored in offline, cold, multi-signature wallets.<br />
* Transaction signing only happens offline on separate devices that have never been connected to the network, and this process involves several people.<br />
* The multi-signature process involves several keys (N) with a required quorum of any (M) keys. Thus, no single individual can sign a transaction. This system also ensures that even if one of the multi-signature keys is lost, the user will never lose control over the assets.<br />
* Encrypted parts of the keys are stored in a geographically-distributed manner in the banks’ safe deposit boxes to prevent potential loss of the keys due to natural disasters, including floods, earthquakes, fires, etc.<br />
<br />
== External Links == <br />
CoinLoan [https://coinloan.io/ website]<br />
<br />
== References ==</div>CoinLoanhttps://en.bitcoin.it/wiki/BIP_0179BIP 01792019-12-13T17:28:23Z<p>934: Update BIP text with latest version from https://github.com/bitcoin/bips/blob/0a388fac4619e440/bip-0179.mediawiki</p>
<hr />
<div>{{bip}}<br />
{{BipMoved|bip-0179.mediawiki}}<br />
<br />
<pre><br />
BIP: 179<br />
Title: Name for payment recipient identifiers<br />
Author: Emil Engler <me@emilengler.com><br />
MarcoFalke <falke.marco@gmail.com><br />
Luke Dashjr <luke+bip@dashjr.org><br />
Comments-Summary: No comments yet.<br />
Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0179<br />
Status: Draft<br />
Type: Informational<br />
Created: 2019-10-17<br />
License: CC0-1.0<br />
</pre><br />
<br />
==Abstract==<br />
This BIP proposes a new term for 'address'.<br />
<br />
==Specification==<br />
The new term is:<br />
''Bitcoin'' '''Invoice''' ''Address''<br />
<br />
The ''Bitcoin'' and ''Address'' parts are optional.<br />
The address suffix should only be used as a transitional step.<br />
<br />
A ''Bitcoin'' Invoice ''Address'' is a string of characters that can be used to indicate the intended recipient and purpose of a transaction.<br />
<br />
==Motivation==<br />
Bitcoin addresses are intended to be used only '''once''', and a new one should be generated for every incoming payment.<br />
The term 'address', however, suggests permanence, because nearly everything on the internet or in the offline world called an 'address'<br />
is something that rarely or never changes (postal addresses, email addresses, IP addresses (depending heavily on the provider), etc.).<br />
The motivation for this BIP is to change the term 'address' to something that indicates that an address is connected to a single transaction.<br />
<br />
==Rationale==<br />
The reason why we use ''Bitcoin Invoice Address'' or just ''Invoice'' is to emphasize that it is single-use.<br />
The terms ''Bitcoin'' and ''Address'' are optional for the following reasons:<br />
For ''Bitcoin'':<br />
* Useful for multicoin wallets to indicate that it belongs to Bitcoin<br />
* Indicates a difference between a lightning and an on-chain invoice<br />
For ''Address'':<br />
* To not confuse users with a completely new term<br />
* To show that it is where you send something to<br />
* To not break backwards compatibility<br />
<br />
This gives us the four following possibilities:<br />
* Bitcoin Invoice Address<br />
* Bitcoin Invoice<br />
* Invoice Address<br />
* Invoice<br />
<br />
==Backwards Compatibility==<br />
To avoid issues, the 'Address' suffix is permitted, but not recommended.<br />
The suffix 'Address' is retained so that users can immediately recognize it until the new term is widely known.<br />
<br />
==Acknowledgements==<br />
Thanks to Chris Belcher for the suggestion of the term 'Bitcoin Invoice Address'.<br />
<br />
==Copyright==<br />
This BIP is released under CC0-1.0 and therefore Public Domain.</div>934https://en.bitcoin.it/wiki/AgoraDeskAgoraDesk2019-12-04T11:09:30Z<p>AgoraDesk: add ref for first paragraph</p>
<hr />
<div>[[File:AgoraDesk_avatar.png|270px|thumb|right]]<br />
'''[https://agoradesk.com/ AgoraDesk]''' is a peer-to-peer over-the-counter (P2P OTC) [[cryptocurrency]] exchange. It was created by the same team behind [https://localmonero.co LocalMonero], Monero's equivalent of [[LocalBitcoins]].<ref name=launchpost>[https://www.reddit.com/r/AgoraDesk/comments/dbaxvr/from_the_creators_of_localmonero_agoradesk_an/ From the creators of LocalMonero: AgoraDesk!]</ref> <br />
<br />
==Overview==<br />
Much like LocalBitcoins, AgoraDesk allows people from any country to use any payment method and any currency to trade Bitcoin. Every online trade has escrow protection, where the Bitcoin seller's funds will be escrowed until the end of the trade, which mitigates counterparty risk.<br />
<br />
There are no KYC, AML, or identity verification requirements to use the service, and traders can choose any medium of exchange, including cash. AgoraDesk places a large emphasis on user privacy. An e-mail address is not required during registration. IP addresses are not associated with user accounts. AgoraDesk doesn't require the use of JavaScript to operate, completely mitigating the overwhelming majority of web vulnerabilities, which depend on JavaScript. AgoraDesk also has a [[Tor]] portal and an I2P portal.<br />
<br />
==P2P OTC Options==<br />
AgoraDesk is the first platform to offer peer-to-peer over-the-counter derivatives trading, namely option contracts. <ref name=options>[https://cryptobriefing.com/monero-bitcoin-options-agoraesk/ Agoradesk Introduces Options Trading For Monero And Bitcoin]</ref> Options are a form of derivative contract, in which traders agree to exchange an asset at a preset price at a later date. <br />
<br />
Users can execute call and put options for risk hedging and leveraged trading, with physical settlement of underlying BTC. A similar setup was observed on the [[bitcoin-otc]] IRC channel with an inferior escrow setup, requiring both sides of the contract to be escrowed by a third party. On AgoraDesk, despite only the underlying asset being escrowed, counterparty risk is completely eliminated.<ref name=counterpartyrisk>[https://agoradesk.com/eliminating-counterparty-risk How AgoraDesk Eliminates Counterparty Risk in Option Trades]</ref><br />
<br />
==External Links==<br />
* [http://agoradesk.com AgoraDesk]<br />
* [https://twitter.com/AgoraDesk Twitter]<br />
* [https://t.me/AgoraDesk Telegram]<br />
<br />
==See also==<br />
* [[List of Exchanges]]<br />
* [[Bech32 adoption]]<br />
<br />
==References==<br />
<references /><br />
[[Category:Exchanges]]<br />
[[Category:Local]]<br />
[[Category:Directories]]<br />
[[Category:eWallets]]</div>AgoraDeskhttps://en.bitcoin.it/wiki/Backup_and_Storage_MethodsBackup and Storage Methods2019-11-25T06:55:13Z<p>Belcher: Belcher moved page Backup and Storage Methods to Links to Storage Methods: Page doesn't actually have content to backup/storage methods, see discussion</p>
<hr />
<div>This page reviews published methods for backing up and storing bitcoin wallets. <br />
<br />
== Cold Storage Methods ==<br />
<br />
=== Glacier protocol ===<br />
<br />
https://glacierprotocol.org/<br />
<br />
The Glacier protocol is a cold storage scheme. It teaches how to use multiple computers made by different manufacturers, which helps resist attacks such as malicious firmware. The multiple computers are given the same entropy, and the user checks that they produce the same bitcoin addresses and private keys. Users are advised to avoid side channels such as audio, power, magnetic and radio emissions.<br />
<br />
The tutorial teaches users to deal with raw private keys and write them down on paper. [[Deterministic wallet|Deterministic wallets]] are not used, nor are [[full node]]s. Users are instructed to look up their balances on a blockchain explorer website, which damages the user's privacy and makes them trust the website to verify the rules of bitcoin.<br />
<br />
=== SmartCustody's Simple Self-Custody Cold Storage ===<br />
<br />
[https://github.com/BlockchainCommons/SmartCustodyWhitePapers/blob/master/%23SmartCustody-_Simple_Self-Custody_Cold_Storage_Scenario.md github.com/BlockchainCommons/SmartCustodyWhitePapers]<br />
<br />
This guide shows how to store coins in cold storage with the ability for heirs to recover your funds if you die. The guide is a bit hard to read, with many optional steps, and the "basic scenario" uses two hardware wallets with the same seed for some reason. The information it recommends putting in a safe deposit box is enough to steal funds, so you're placing a lot of trust in the safe deposit box. There are alternate scenarios, but they are not clearly explained.<br />
<br />
=== [[Electrum]]'s cold storage guide ===<br />
<br />
https://electrum.readthedocs.io/en/latest/coldstorage.html<br />
<br />
The wallet features [[seed phrase]]s, [[Deterministic wallet|deterministic wallets]] and offline signing. Unsigned transactions can be transferred with QR codes or by saving to a file (which can be put on a USB flash drive or moved by any other transfer method). The wallet can be backed by a [[full node]] if the user connects [[Electrum#Server software|to their own server]], but this is optional and does not happen by default.<br />
<br />
The tutorial does not aim to discuss anything about creating a secure offline computer.<br />
<br />
=== Rusty Russell's "Remarkably Unreliable Guide To Bitcoin Storage" ===<br />
<br />
https://github.com/rustyrussell/bitcoin-storage-guide<br />
<br />
The tutorial teaches how to use a laptop as the secure offline computer. It uses Ubuntu as the OS and Bitcoin Core as the bitcoin wallet. The private key material is stored in raw private key format rather than seed phrases (which Bitcoin Core doesn't support), so the guide does not benefit from [[Deterministic wallet|deterministic wallets]]. QR codes are used to transfer transactions between the offline and online computers. As the tutorial uses Bitcoin Core, it enjoys the benefits of a [[full node]] wallet. <br />
<br />
However, it recommends naively splitting keys (without using a secure key-splitting algorithm like [[Shamir's secret sharing|Shamir's secret sharing algorithm]]), and so is insecure and certainly not well vetted.<br />
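For contrast with naive splitting, Shamir's secret sharing divides a secret so that any ''k'' of ''m'' shares reconstruct it, while fewer than ''k'' shares reveal nothing about it. A minimal pure-Python sketch of the idea (illustrative only — the prime and values are arbitrary, and real deployments should use a vetted implementation, not this sketch):<br />

```python
# Minimal Shamir's secret sharing over a prime field (demo only).
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def split(secret, k, m):
    """Split `secret` into m shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        # Evaluate the degree-(k-1) polynomial at x.
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, m + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, k=2, m=3)
assert reconstruct(shares[:2]) == 123456789   # any 2 of 3 shares suffice
assert reconstruct(shares[1:]) == 123456789
```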
<br />
=== Alexandr Nellson's Scheme ===<br />
<br />
[https://medium.com/@nellsonx/how-to-properly-store-bitcoins-and-other-cryptocurrencies-14e0db1910d medium.com/@nellsonx/how-to-properly-store-bitcoins]<br />
<br />
This method is relatively basic, glossing over important steps like how to properly airgap a machine, how to create and handle a strong passphrase, and how to back up your seed. It uses USB drives to boot the machine and transfer transaction information, which is a significant attack vector. It also isn't open source and is definitely not well vetted. <br />
<br />
=== Cold Wasabi ===<br />
<br />
https://docs.wasabiwallet.io/using-wasabi/ColdWasabi.html<br />
<br />
This is a pretty basic guide that focuses on using the Wasabi wallet to mix your coins before sending them to a hardware wallet. There is supplementary information about how to setup a hardware wallet and backup your seed, but this doesn't make for a complete or easy-to-use guide. It is open source, and so might be somewhat vetted.<br />
<br />
== Other Storage Methods == <br />
<br />
=== Bitgoldwallet's Storage Methods ===<br />
<br />
https://www.bitgoldwallet.com/how-to-store-bitcoin.html<br />
<br />
This site has a number of different storage methods of both the hot and cold variety. The methods are detailed and complex, and somewhat hard to read. It seems to have some odd recommendations, like using password protected PDF files and Zorin OS. ''More review required.''<br />
<br />
== Storage Method Components ==<br />
<br />
The items in this section are methods that do not outline a complete backup or storage mechanism, and thus must be combined with other techniques in order to create a security backup or storage mechanism. <br />
<br />
* A small paper on bitcoin storage best practices - [https://github.com/devrandom/btc-papers/blob/master/storage-best-practices.md github.com/devrandom/../storage-best-practices.md]<br />
* Pamela Morgan's guides to bitcoin inheritance planning - [https://medium.com/@pamelawjd medium.com/@pamelawjd]<br />
** https://medium.com/@pamelawjd/inheritance-planning-for-cryptocurrencies-3-steps-in-3-minutes-83ebb3e916a2<br />
** https://www.youtube.com/watch?v=ddwWNWg8YSQ&feature=youtu.be<br />
** https://empoweredlaw.com/</div>Fresheneeszhttps://en.bitcoin.it/wiki/UtorgUtorg2019-11-20T16:56:59Z<p>Muxacin: </p>
<hr />
<div>{{infobox company|name=Utorg Crypto Exchange|image=[[File:Utorg-main.png|275px]]<br />
|trading_name=Utorg.io<br />
|industry=[[Exchange|Bitcoin exchange]]<br />
|foundation=2018<br />
|currencies=BTC, ETH, USDT, USD, RUB, UAH<br />
|website=https://utorg.io<br />
}}<br />
<br />
==UTORG.IO==<br />
'''Utorg''' is a European cryptocurrency exchange where users can easily buy, sell and trade Bitcoin, Ethereum and many other cryptocurrencies. It provides services for customers from all over the world, including the CIS countries. The company operates within the European legal field but focuses mostly on Russian, Ukrainian and Belarusian customers, as well as residents of European states. It therefore supports cryptocurrency pairs with fiat currencies such as RUB, UAH and USD, and provides a wide range of fiat payment methods.<br />
<br />
==Deposit/Withdraw Funds==<br />
<br />
===Fiat currencies===<br />
* USD<br />
* RUB<br />
* UAH<br />
<br />
===Crypto currencies===<br />
* BTC<br />
* ETH<br />
* USDT (ERC20)<br />
<br />
=== [https://utorg.io/fees Full list of fees and limits on Utorg] <ref>[https://utorg.io/fees Utorg.io Fees and Limits]</ref> ===<br />
<br />
<br />
==Services==<br />
<br />
The exchange launched publicly in August 2019, offering the following trading pairs: RUB/BTC, RUB/ETH, UAH/BTC, UAH/ETH<ref>[https://bitcointalk.org/index.php?topic=5171499.0 Utorg.io - Exchange Now Open with Live Trading BTC, ETH, RUB, USD]</ref><ref>[https://www.bitcoinminershashrate.com/utorg-exchange-started-beta-testing/ Utorg Exchange Started Beta Testing]</ref>.<br />
<br />
===Instant Change===<br />
Secure and fast conversion of cryptocurrencies to fiat currencies and back in 1 click. <br />
<br />
===Exchange Platform===<br />
Trading terminal to trade crypto currencies. [https://utorg.io/trade Utorg Exchange]<br />
<br />
===IOU DESK===<br />
IOU Desk is a digital marketplace where users can buy and sell project tokens before these projects go public. The tokens were acquired at various financing stages, from seed rounds through ICOs, and take a number of forms: IOUs (I owe you) – debt documents, agreements, SAFTs, and others. The first IOU offering was conducted for the PERLIN project<ref>[https://medium.com/@QB_community/on-the-utorg-exchange-iou-trading-on-perlin-begins-29fec7ff7499 On the Utorg exchange, IOU trading on Perlin begins]</ref><br />
<br />
===UTORG PRO===<br />
A merchant SaaS solution for exchangers, eCommerce and other payment platforms.<br />
<br />
===API===<br />
A full-fledged API<ref>[https://static.utorg.io/docs/index.html#general-info Utorg Exchange API Documentation]</ref>, allowing third-party services to connect to Utorg and use data from Utorg Exchange.<br />
<br />
==Referral Program==<br />
<br />
Within the [https://utorg.io/referral Utorg Exchange Referral Program]<ref>[https://utorg.io/referral Utorg Referral Program]</ref>, users get 20% of the fees from all exchange transactions of their referred users. UTORG.IO offers promotional tools, statistics and instant withdrawal of referral rewards. The minimum amount of rewards to withdraw is $10. <br />
<br />
==Utorg advantages==<br />
* Convenient tools for miners and business<ref>[https://blockchainjournal.news/utorg-cryptocurrency-exchange-supporting-fiat-currencies-targets-miners/ Utorg Targets Miners]</ref>;<br />
* Wide selection of payment instruments;<br />
* IOU - trade future tokens;<br />
* Ease of registration;<br />
* Ease of verification;<br />
* Simple and intuitive interface;<br />
* High security;<br />
* 2FA option;<br />
* Round-the-clock support;<br />
* Multisig Wallets;<br />
* SegWit support<br />
<br />
==See Also==<br />
<br />
* [[Exchanges]]<br />
<br />
==External Links==<br />
<br />
* [http://utorg.io Utorg.io] website<br />
<br />
==References==<br />
<References /><br />
<br />
[[Category:Exchanges]]<br />
[[Category:Local]]<br />
[[Category:Mining_contractors]]</div>Muxacinhttps://en.bitcoin.it/wiki/BIP_0325BIP 03252019-11-06T16:31:57Z<p>934: Update BIP text with latest version from https://github.com/bitcoin/bips/blob/580e7192211f0ff5/bip-0325.mediawiki</p>
<hr />
<div>{{bip}}<br />
{{BipMoved|bip-0325.mediawiki}}<br />
<br />
<pre><br />
BIP: 325<br />
Layer: Applications<br />
Title: Signet<br />
Author: Karl-Johan Alm <karljohan-alm@garage.co.jp><br />
Comments-Summary: No comments yet.<br />
Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0325<br />
Status: Draft<br />
Type: Standards Track<br />
Created: 2019-03-20<br />
License: CC0-1.0<br />
</pre><br />
<br />
== Abstract ==<br />
<br />
A new type of test network where signatures are used in addition to proof of work for block progress, enabling much better coordination and robustness (be reliably unreliable), for persistent, longer-term testing scenarios involving multiple independent parties.<br />
<br />
== Motivation ==<br />
<br />
Testnet is a great place to try out new things without risking real money, but it is notoriously unreliable. Huge block reorgs, long gaps in between blocks being mined or sudden bursts of blocks in rapid succession mean that realistic testing of software, especially involving multiple independent parties running software over an extended period of time, becomes infeasible in practice.<br />
<br />
A new type of test network would be more suitable for integration testing by organizations such as exchanges, or testing of next generation Layer-2 protocols like Eltoo or sidechain pegs. The goal is not to be perfectly reliable but rather to have a predictable amount of unreliability. You want a test network to behave like mainnet (i.e. no thousands of block reorgs) while also making it easier to trigger expected but rare events like a 6-block reorg. Regtest is not suitable for longer-term scenarios involving multiple independent parties because creating blocks costs nothing, so any party can completely control the test network.<br />
<br />
<br />
== Specification ==<br />
<br />
A new type of network ("signet") is introduced, which takes an additional consensus parameter called the challenge (scriptPubKey). The challenge can be a simple pubkey (P2PKH style), or a k-of-n multisig, or any other script you would want.<br />
<br />
The witness commitment of the coinbase transaction is extended to include a secondary commitment (the signature/solution):<br />
<br />
1-4 bytes - Push the following (x + 4) bytes<br />
4 bytes - Signet header (0xecc7daa2)<br />
x bytes - Solution (sigScript)<br />
<br />
Any push operations that do not start with the 4 byte signet header are ignored. Multiple push operations with the 4 byte signet header are ignored except for the first entry.<br />
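The commitment layout and scanning rules above can be sketched in Python. This is an illustrative encoder/scanner, not Bitcoin Core's implementation: the solution bytes are placeholders, and the header is serialized as the four bytes ''ec c7 da a2'' as written in the layout.<br />

```python
# Sketch of encoding and scanning the signet extension push.
SIGNET_HEADER = bytes.fromhex("ecc7daa2")  # 4-byte signet header from the spec

def push_data(data):
    """Minimal script data-push for payloads up to 0xffff bytes."""
    if len(data) < 0x4c:
        return bytes([len(data)]) + data                      # direct push
    if len(data) <= 0xff:
        return b"\x4c" + bytes([len(data)]) + data            # OP_PUSHDATA1
    return b"\x4d" + len(data).to_bytes(2, "little") + data   # OP_PUSHDATA2

def extract_solution(script):
    """Return the solution from the FIRST push starting with the signet
    header; pushes without the header are ignored, per the rules above."""
    i = 0
    while i < len(script):
        op = script[i]; i += 1
        if op < 0x4c:
            size = op
        elif op == 0x4c:
            size = script[i]; i += 1
        elif op == 0x4d:
            size = int.from_bytes(script[i:i + 2], "little"); i += 2
        else:
            break  # non-push opcode; stop scanning in this sketch
        data, i = script[i:i + size], i + size
        if data[:4] == SIGNET_HEADER:
            return data[4:]  # first matching entry wins
    return None

solution = b"\x00" * 8  # placeholder sigScript bytes
script = push_data(b"other") + push_data(SIGNET_HEADER + solution)
assert extract_solution(script) == solution
```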
<br />
Any signature operations contained within the challenge use SHA256d(modifiedBlockHash), i.e. the double-SHA256 digest of the following data as the sighash:<br />
<br />
{|class="wikitable" style="text-align: center;"<br />
|-<br />
!Type<br />
!Size<br />
!Name<br />
|-<br />
|Int32||4||nVersion<br />
|-<br />
|Uint256||32||hashPrevBlock<br />
|-<br />
|Uint256||32||modifiedMerkleRoot<br />
|-<br />
|Uint32||4||nTime<br />
|-<br />
|Uint32||4||nBits<br />
|}<br />
<br />
The <code>modifiedMerkleRoot</code> hash is obtained by generating the merkle root of the block transactions with the coinbase witness commitment as is, i.e. without the signet extension. This means the merkle root of the block is different from the merkle root in the signet commitment. This is needed because the signature can never be included in the very message (in this case, a block) that is being signed. Apart from the signature, to facilitate block generation (mining), the block nonce value is the only other component of the block that the signet signature does not commit to. When grinding proof of work, the extended nonce cannot be used, as it would invalidate the signature. Instead, simply re-signing the same (or an updated) block will give a new search space.<br />
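The 76-byte serialization from the table can be hashed as follows. This is a minimal sketch; the function name is hypothetical, field names follow the table, and the two 32-byte hashes are assumed to be supplied in internal byte order.<br />

```python
import hashlib
import struct

def signet_sighash(n_version: int, hash_prev_block: bytes,
                   modified_merkle_root: bytes, n_time: int, n_bits: int) -> bytes:
    """SHA256d over the modified block header laid out in the table above."""
    assert len(hash_prev_block) == 32 and len(modified_merkle_root) == 32
    data = (struct.pack("<i", n_version) + hash_prev_block +
            modified_merkle_root + struct.pack("<II", n_time, n_bits))
    assert len(data) == 76  # 4 + 32 + 32 + 4 + 4 bytes
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()
```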
<br />
A block is considered fully validated if the above commitment is found, and its solution is valid. It is recommended that this verification is done directly before or after the witness commitment verification, as the data required to do both is approximately the same.<br />
<br />
== Compatibility ==<br />
<br />
This specification is backwards compatible in the sense that existing software can use Signet out of the box.<br />
<br />
Simply by adding the network parameters for signet (magic number, etc), a client can connect to and use any signet network without further modifications. The block headers have valid proof of work, so clients can trivially check that blocks are "probably" valid.<br />
<br />
However, anyone can mine blocks that are accepted by the client for any given signet network. These blocks do not contain the required signatures, however, so any fully validating node will promptly reject them. As such, clients need to either validate the block signature inside the coinbase transaction, or connect to trusted peers.<br />
<br />
Other software need not add block signature validation code that it will not use in production. This is adequate for non-production test purposes where the goal is to have a network behave as much like mainnet as possible.<br />
<br />
== Reference implementation ==<br />
<br />
Pull request at https://github.com/bitcoin/bitcoin/pull/16411<br />
<br />
== Acknowledgements ==<br />
<br />
TODO<br />
<br />
== References ==<br />
<br />
# Original mailing list thread: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html<br />
# Bitcoin Wiki entry: https://en.bitcoin.it/wiki/Signet<br />
<br />
== Copyright ==<br />
<br />
This document is licensed under the Creative Commons CC0 1.0 Universal license.</div>934https://en.bitcoin.it/wiki/BIP_0330BIP 03302019-11-05T23:56:58Z<p>934: Update BIP text with latest version from https://github.com/bitcoin/bips/blob/96382166b0d10885/bip-0330.mediawiki</p>
<hr />
<div>{{bip}}<br />
{{BipMoved|bip-0330.mediawiki}}<br />
<br />
<pre><br />
BIP: 330<br />
Layer: Peer Services<br />
Title: Transaction announcements reconciliation<br />
Author: Gleb Naumenko <naumenko.gs@gmail.com><br />
Pieter Wuille <pieter.wuille@gmail.com><br />
Comments-Summary: No comments yet.<br />
Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0330<br />
Status: Draft<br />
Type: Standards Track<br />
Created: 2019-09-25<br />
License: CC0-1.0<br />
License-Code: MIT<br />
</pre><br />
<br />
==Abstract==<br />
<br />
This document specifies a P2P protocol extension for reconciliation of transaction announcements <b>between 2 nodes</b>, which is a building block for efficient transaction relay protocols (e.g., [https://arxiv.org/pdf/1905.10518.pdf Erlay]). This is a step towards increasing the connectivity of the network for almost no bandwidth cost.<br />
<br />
==Motivation==<br />
<br />
Currently in the Bitcoin network, every 32-byte transaction ID is announced in at least one direction between every pair of connected peers via INV messages. This results in a high cost of announcing transactions: ''O(nodes * connections_per_node)''.<br />
<br />
A <b>reconciliation-based protocol</b> which uses the technique suggested in this document can have better scaling properties than INV-based flooding.<br />
<br />
Increasing the connectivity of the network makes the network more robust to partitioning attacks; thus, improving the bandwidth scaling of transaction relay to ''O(nodes)'' (and without a high constant overhead) would allow us to improve the security of the network by increasing connectivity. It would also reduce the bandwidth required to run a Bitcoin node and potentially enable more users to run full nodes.<br />
<br />
===Erlay===<br />
<br />
[https://arxiv.org/pdf/1905.10518.pdf Erlay] is an example of a high-level transaction relay protocol which employs set reconciliation for bandwidth efficiency.<br />
<br />
Erlay uses both flooding (announcing using INV messages to all peers) and reconciliation to announce transactions.<br />
Flooding is expensive, so Erlay seeks to use it sparingly and in strategic locations - only well-connected publicly reachable nodes flood transactions to other publicly reachable nodes via outbound connections.<br />
Since every unreachable node is directly connected to several reachable nodes, this policy ensures that a transaction is quickly propagated to be within one hop from most of the nodes in the network.<br />
<br />
All transactions not propagated through flooding are propagated through efficient set reconciliation.<br />
To do this, every node keeps a reconciliation set for each peer, into which it places the transactions that would have been announced using INV messages absent this protocol. Every 2 seconds every node chooses a peer from its outbound connections, in a predetermined order, to reconcile with, resulting in both sides learning the transactions known to the other side. After every reconciliation round, the corresponding reconciliation set is cleared.<br />
A more detailed description of a set reconciliation round and other implementation details can be found in the paper.<br />
<br />
Erlay allows us to:<br />
* save 40% of the bandwidth consumed by a node, given typical network connectivity as of July 2019;<br />
* achieve similar latency;<br />
* increase network connectivity for almost no bandwidth or latency cost;<br />
* improve privacy as a side-effect.<br />
<br />
This document proposes a P2P-layer extension which is required to enable efficient reconciliation-based protocols (like Erlay) for transaction relay.<br />
<br />
==Specification==<br />
<br />
===New data structures===<br />
<br />
Several new data structures are introduced to the P2P protocol first, to aid with efficient transaction relay.<br />
<br />
====32-bit short transaction IDs====<br />
<br />
During reconciliation, significantly abbreviated transaction IDs of just 32 bits in size are used. To prevent attackers from constructing sets of transactions that cause network-wide collisions, the short ID computation is salted on a per-link basis using 64 bits of entropy contributed by both communication partners.<br />
<br />
Short IDs are computed as follows:<br />
* Let ''salt<sub>1</sub>'' and ''salt<sub>2</sub>'' be the entropy contributed by the two sides; see the "sendrecon" message below for details on how they are exchanged.<br />
* Sort the two salts such that ''salt<sub>1</sub> &le; salt<sub>2</sub>'' (which side sent what doesn't matter).<br />
* Compute ''h = SHA256("Tx Relay Salting" || salt<sub>1</sub> || salt<sub>2</sub>)'', where the two salts are encoded in 64-bit little-endian byte order.<br />
* Let ''k<sub>0</sub>'' be the 64-bit integer obtained by interpreting the first 8 bytes of ''h'' in little-endian byte order.<br />
* Let ''k<sub>1</sub>'' be the 64-bit integer obtained by interpreting the second 8 bytes of ''h'' in little-endian byte order.<br />
* Let ''s = SipHash-2-4((k<sub>0</sub>,k<sub>1</sub>),wtxid)'', where ''wtxid'' is the transaction hash including witness data as defined by BIP141.<br />
* The short ID is equal to ''1 + (s mod 0xFFFFFFFF)''.<br />
<br />
This results in approximately uniformly distributed IDs in the range ''[1..0xFFFFFFFF]'', which is a requirement for using them as elements in 32-bit sketches. See the next paragraph for details.<br />
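The derivation above can be written out directly. The SipHash-2-4 routine below is a from-scratch sketch included only to make the example self-contained; production code would use a vetted implementation.<br />

```python
import hashlib

MASK64 = 0xFFFFFFFFFFFFFFFF

def _rotl(x, b):
    return ((x << b) | (x >> (64 - b))) & MASK64

def siphash24(k0, k1, data):
    """Plain SipHash-2-4 returning a 64-bit integer (for illustration only)."""
    v0 = 0x736F6D6570736575 ^ k0
    v1 = 0x646F72616E646F6D ^ k1
    v2 = 0x6C7967656E657261 ^ k0
    v3 = 0x7465646279746573 ^ k1

    def rounds(n):
        nonlocal v0, v1, v2, v3
        for _ in range(n):
            v0 = (v0 + v1) & MASK64; v1 = _rotl(v1, 13) ^ v0; v0 = _rotl(v0, 32)
            v2 = (v2 + v3) & MASK64; v3 = _rotl(v3, 16) ^ v2
            v0 = (v0 + v3) & MASK64; v3 = _rotl(v3, 21) ^ v0
            v2 = (v2 + v1) & MASK64; v1 = _rotl(v1, 17) ^ v2; v2 = _rotl(v2, 32)

    # full 8-byte words, then the tail block carrying the length byte
    for off in range(0, len(data) - 7, 8):
        m = int.from_bytes(data[off:off + 8], "little")
        v3 ^= m; rounds(2); v0 ^= m
    tail = data[len(data) - (len(data) % 8):]
    b = ((len(data) & 0xFF) << 56) | int.from_bytes(tail, "little")
    v3 ^= b; rounds(2); v0 ^= b
    v2 ^= 0xFF; rounds(4)
    return (v0 ^ v1 ^ v2 ^ v3) & MASK64

def short_txid(salt_a, salt_b, wtxid):
    """32-bit short ID per the steps above; salt order does not matter."""
    salt1, salt2 = sorted((salt_a, salt_b))
    h = hashlib.sha256(b"Tx Relay Salting" +
                       salt1.to_bytes(8, "little") +
                       salt2.to_bytes(8, "little")).digest()
    k0 = int.from_bytes(h[:8], "little")
    k1 = int.from_bytes(h[8:16], "little")
    return 1 + siphash24(k0, k1, wtxid) % 0xFFFFFFFF
```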
<br />
====Short transaction ID sketches====<br />
<br />
Reconciliation-based relay uses [https://www.cs.bu.edu/~reyzin/code/fuzzy.html PinSketch] BCH-based secure sketches as introduced by the [https://www.cs.bu.edu/~reyzin/fuzzy.html Fuzzy Extractors paper]. They are a form of set checksums with the following properties:<br />
* Sketches have a predetermined capacity, and when the number of elements in the set does not exceed the capacity, it is always possible to recover the entire set from the sketch by decoding the sketch. A sketch of nonzero b-bit elements with capacity c can be stored in bc bits.<br />
* A sketch of the [https://en.wikipedia.org/wiki/Symmetric_difference symmetric difference] between the two sets (i.e., all elements that occur in one but not both input sets), can be obtained by combining the sketches of those sets.<br />
<br />
The sketches used here consist of elements of the [https://en.wikipedia.org/wiki/Finite_field finite field] ''GF(2<sup>32</sup>)''. Specifically, we represent finite field elements as polynomials in ''x'' over ''GF(2)'' modulo ''x<sup>32</sup> + x<sup>7</sup> + x<sup>3</sup> + x<sup>2</sup> + 1''. To map integers to finite field elements, simply treat each bit ''i'' (with value ''2<sup>i</sup>'') in the integer as the coefficient of ''x<sup>i</sup>'' in the polynomial representation. For example, the integer ''101 = 2<sup>6</sup> + 2<sup>5</sup> + 2<sup>2</sup> + 1'' is mapped to the field element ''x<sup>6</sup> + x<sup>5</sup> + x<sup>2</sup> + 1''. These field elements can be added and multiplied together, but the specifics of that are out of scope for this document.<br />
<br />
A short ID sketch with capacity ''c'' consists of a sequence of ''c'' field elements. The first is the sum of all short IDs in the set, the second is the sum of the 3rd powers of all short IDs, the third is the sum of the 5th powers, etc., up to the last element, which is the sum of the ''(2c-1)''th powers. These elements are then encoded as 32-bit integers in little-endian byte order, resulting in a ''4c''-byte serialization.<br />
<br />
The following Python 3.2+ code implements the creation of sketches: <pre><br />
FIELD_BITS = 32<br />
FIELD_MODULUS = (1 << FIELD_BITS) + 0b10001101<br />
<br />
def mul2(x):<br />
    """Compute 2*x in GF(2^FIELD_BITS)"""<br />
    return (x << 1) ^ (FIELD_MODULUS if x.bit_length() >= FIELD_BITS else 0)<br />
<br />
def mul(x, y):<br />
    """Compute x*y in GF(2^FIELD_BITS)"""<br />
    ret = 0<br />
    for bit in [(x >> i) & 1 for i in range(x.bit_length())]:<br />
        ret, y = ret ^ bit * y, mul2(y)<br />
    return ret<br />
<br />
def create_sketch(shortids, capacity):<br />
    """Compute the bytes of a sketch for given shortids and given capacity."""<br />
    odd_sums = [0 for _ in range(capacity)]<br />
    for shortid in shortids:<br />
        squared = mul(shortid, shortid)<br />
        for i in range(capacity):<br />
            odd_sums[i] ^= shortid<br />
            shortid = mul(shortid, squared)<br />
    return b''.join(elem.to_bytes(4, 'little') for elem in odd_sums)<br />
</pre><br />
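Because addition in ''GF(2<sup>32</sup>)'' is XOR and elements present in both sets cancel out, combining two equal-capacity sketches (the second property listed above) reduces to a byte-wise XOR of their serializations. A minimal sketch of that operation (this is not the minisketch API):<br />

```python
def combine_sketches(sketch_a: bytes, sketch_b: bytes) -> bytes:
    """XOR two equal-capacity serialized sketches. Since field addition is XOR
    and shared elements cancel in characteristic 2, the result is the sketch
    of the symmetric difference of the two underlying sets."""
    assert len(sketch_a) == len(sketch_b)
    return bytes(x ^ y for x, y in zip(sketch_a, sketch_b))
```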
<br />
The [https://github.com/sipa/minisketch/ minisketch] library implements the construction, merging, and decoding of these sketches efficiently.<br />
<br />
====Truncated transaction IDs====<br />
<br />
For announcing and relaying transactions outside of reconciliation, we need an unambiguous, unsalted way to refer to transactions in order to deduplicate transaction requests. As we're introducing a new scheme anyway, this is a good opportunity to switch to wtxid-based requests rather than txid-based ones. While using full 256-bit wtxids is possible, it is overkill, as they would contribute significantly to the total bandwidth as well. Instead, we truncate the wtxid to just its first 128 bits. These are referred to as truncated IDs.<br />
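For concreteness, truncation is a plain 16-byte prefix of the wtxid (the helper name is hypothetical):<br />

```python
def truncated_id(wtxid: bytes) -> bytes:
    """First 128 bits (16 bytes) of the BIP141 wtxid."""
    assert len(wtxid) == 32
    return wtxid[:16]
```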
<br />
===Intended Protocol Flow===<br />
<br />
Set reconciliation primarily consists of the transmission and decoding of a reconciliation set sketch upon request.<br />
<br />
[[File:bip-0330/recon_scheme_merged.png|framed|center|Set reconciliation protocol flow]]<br />
<br />
====Bisection====<br />
<br />
If a node is unable to reconstruct the set difference from the received sketch, the node then makes an additional reconciliation request, similar to the initial one, but this request is applied to only a fraction of possible transactions (e.g., in the range 0x0–0x8). Because of the linearity of sketches, a sketch of a subset of transactions would allow the node to compute a sketch for the remainder, which saves bandwidth.<br />
<br />
[[File:bip-0330/bisection.png|framed|300px|center|Bisection]]<br />
<br />
===New messages===<br />
Several new protocol messages are added: sendrecon, reqreconcil, sketch, reqbisec, reconcildiff, invtx, gettx. This section describes their serialization, contents, and semantics.<br />
<br />
In what follows, all integers are serialized in little-endian byte order. Boolean values are encoded as a single byte that must be 0 or 1 exactly. Arrays are serialized with the CompactSize prefix that encodes their length, as is common in other P2P messages.<br />
<br />
====sendrecon====<br />
The sendrecon message announces support for the reconciliation protocol. It is expected to be sent only once, and it is ignored by nodes that don't support the protocol.<br />
<br />
Its payload consists of:<br />
{|class="wikitable"<br />
! Data type !! Name !! Description<br />
|-<br />
| bool || sender || Indicates whether the sender will send "reqreconcil" messages.<br />
|-<br />
| bool || responder || Indicates whether the sender will respond to "reqreconcil" messages.<br />
|-<br />
| uint32 || version || The sender must currently set this to 1; otherwise, the receiver should ignore the message.<br />
|-<br />
| uint64 || salt || The salt used in the short transaction ID computation.<br />
|}<br />
<br />
"reqreconcil" messages can only be sent if the sender has sent a "sendrecon" message with sender=true, and the receiver has sent a "sendrecon" message with responder=true.<br />
<br />
====reqreconcil====<br />
The reqreconcil message initiates a reconciliation round.<br />
<br />
{|class="wikitable"<br />
! Data type !! Name !! Description<br />
|-<br />
| uint16 || set_size || Size of the sender's reconciliation set, used to estimate set difference.<br />
|-<br />
| uint8 || q || Coefficient used to estimate the set difference. The sender multiplies it by PRECISION=2^6 and rounds up; the receiver divides it by PRECISION.<br />
|}<br />
<br />
Upon receipt of a "reqreconcil" message, the receiver:<br />
* Constructs and sends a "sketch" message (see below), with a sketch of capacity computed as ''|set_size - local_set_size| + q * (set_size + local_set_size) + c'', where ''local_set_size'' represents the size of the receiver's reconciliation set.<br />
* Makes a snapshot of their current reconciliation set, and clears the set itself. The snapshot is kept until a "reconcildiff" message is received by the node.<br />
<br />
It is suggested to use ''c=1'' to avoid sending empty sketches and reduce the overhead caused by under-estimations.<br />
<br />
Intuitively, ''q'' represents the discrepancy in sets: the closer the sets are, the lower optimal ''q'' is.<br />
As suggested by Erlay, ''q'' should be derived as an optimal ''q'' value for the previous reconciliation with a given peer, once the actual set sizes and set difference are known. Alternatively, ''q=0.1'' should be used as a default value.<br />
For example, if in the previous round ''set_size=30'' and ''local_set_size=20'', and the *actual* difference was ''4'', then a node should compute ''q'' as follows:<br />
''q=(|30-20| - 1) / (30+20)=0.18''<br />
The derivation of ''q'' can be changed according to the version of the protocol.<br />
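The capacity estimate from the "reqreconcil" parameters can be sketched as follows, assuming the suggested ''c=1'' and ceiling rounding (the exact rounding of the ''q'' term is an implementation detail not fixed above; function names are hypothetical):<br />

```python
import math

PRECISION = 2 ** 6  # q travels on the wire as a uint8: ceil(q * PRECISION)

def encode_q(q: float) -> int:
    """Sender-side encoding of the q coefficient."""
    return math.ceil(q * PRECISION)

def estimate_capacity(set_size: int, local_set_size: int,
                      q_encoded: int, c: int = 1) -> int:
    """Capacity = |set_size - local_set_size| + q*(set_size + local_set_size) + c."""
    q = q_encoded / PRECISION
    return abs(set_size - local_set_size) + math.ceil(q * (set_size + local_set_size)) + c
```

With the default ''q=0.1'' and the example sizes above, this yields a capacity of 17, comfortably above the actual difference of 4; over-estimation costs bandwidth, while under-estimation forces bisection or a failed round.<br />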
<br />
No new "reqreconcil" message can be sent until a "reconcildiff" message is sent.<br />
<br />
====sketch====<br />
The sketch message is used to communicate a sketch required to perform set reconciliation.<br />
<br />
{|class="wikitable"<br />
! Data type !! Name !! Description<br />
|-<br />
| byte[] || skdata || The sketch of the sender's reconciliation snapshot<br />
|}<br />
<br />
Upon receipt of a "sketch" message, a node computes the set difference by combining the received sketch with a sketch computed locally for the corresponding reconciliation set. If this is the second "sketch" message received in this round, the bisection approach is used: by combining the new sketch with the previous one, two difference sketches are obtained, one for the first half and one for the second half of the short ID range. The receiving node then tries to decode this sketch (or sketches), and based on the result:<br />
* If decoding fails, a "reconcildiff" message is sent with the failure flag set (success=false). If this was the first "sketch" in the round, a "reqbisec" message may be sent instead.<br />
* If decoding succeeds, a "reconcildiff" message is sent with the truncated IDs of all locally known transactions that appear in the decode result, and the short IDs of the unrecognized ones.<br />
<br />
The receiver also makes a snapshot of its current reconciliation set, and clears the set itself. The snapshot is kept until a "reconcildiff" message is sent by the node.<br />
<br />
====reqbisec====<br />
The reqbisec message is used to signal that set reconciliation has failed and an extra sketch is needed to find the set difference.<br />
<br />
It has an empty payload.<br />
<br />
Upon receipt of a "reqbisec" message, a node responds with a "sketch" message containing a sketch of a subset of the corresponding reconciliation set <b>snapshot</b> (stored when the "reqreconcil" message for the current round was processed), namely the elements with short IDs in the range ''[0..(2^31)]''.<br />
<br />
====reconcildiff====<br />
The reconcildiff message is used to announce transactions which are found to be missing during set reconciliation on the sender's side.<br />
<br />
{|class="wikitable"<br />
! Data type !! Name !! Description<br />
|-<br />
| uint8 || success || Indicates whether sender of the message succeeded at set difference decoding.<br />
|-<br />
| uint32[] || ask_shortids || The short IDs that the sender did not have.<br />
|}<br />
<br />
Upon receipt of a "reconcildiff" message with ''success=1'', a node sends an "invtx" message containing the 128-bit truncated IDs of the transactions requested via the 32-bit short IDs in ''ask_shortids'' (with parent transactions occurring before their dependents), and may request announced transactions it does not have via a "gettx" message.<br />
Otherwise, if ''success=0'', the receiver should request bisection via "reqbisec" (if the failure happened for the first time).<br />
If the failure happened for the second time, the receiver should announce the transactions from the reconciliation set via an "invtx" message, excluding the transactions announced by the sender.<br />
<br />
The <b>snapshot</b> of the corresponding reconciliation set is cleared by the sender and the receiver of the message.<br />
<br />
The sender should also send their own "invtx" message along with the reconcildiff message to announce transactions which are missing on the receiver's side.<br />
<br />
====invtx====<br />
The invtx message is used to announce transactions (both along with reconcildiff message and as a response to the reconcildiff message). It is the truncated ID analogue of "inv" (which cannot be used because it has 256-bit elements).<br />
<br />
{|class="wikitable"<br />
! Data type !! Name !! Description<br />
|-<br />
| uint128[] || inv_truncids || The truncated IDs of transactions the sender believes the receiver does not have.<br />
|}<br />
<br />
Upon receipt of an "invtx" message, a node requests the announced transactions it does not have.<br />
The <b>snapshot</b> of the corresponding reconciliation set is cleared by the sender of the message.<br />
<br />
====gettx====<br />
The gettx message is used to request transactions by 128-bit truncated IDs. It is the truncated ID analogue of "getdata".<br />
<br />
{|class="wikitable"<br />
! Data type !! Name !! Description<br />
|-<br />
| uint128[] || ask_truncids || The truncated IDs of transactions the sender wants the full transaction data for.<br />
|}<br />
<br />
Upon receipt of a "gettx" message, a node sends "tx" messages for the requested transactions.<br />
<br />
==Local state==<br />
<br />
This BIP proposes a stateful protocol: to operate properly, every node must store several pieces of state.<br />
<br />
====Reconciliation sets====<br />
Every node stores a set of 128-bit truncated IDs for every peer which supports transaction reconciliation, representing the transactions which would have been sent according to the regular flooding protocol.<br />
Incoming transactions are added to sets when those transactions are received (if they satisfy the policies such as minimum fee set by a peer).<br />
A reconciliation set is moved to the corresponding set snapshot after the transmission of the initial sketch.<br />
<br />
====Reconciliation set snapshot====<br />
After the transmission of the initial sketch (whether sent or received), every node should store a snapshot of its current reconciliation set, and clear the set itself.<br />
This is important to make bisection more stable during the reconciliation round (bisection should be applied to the snapshot).<br />
The snapshot is also used to efficiently lookup the transactions requested by short ID.<br />
The snapshot is cleared after the end of the reconciliation round (sending or receiving of the reconcildiff message).<br />
<br />
====q-coefficient====<br />
The q value should be stored to make difference estimation efficient. It is shared across peers and updated after every reconciliation.<br />
The q-coefficient represents the discrepancy between the sets: the closer the sets are, the lower the optimal ''q'' is.<br />
In future implementations, q could vary across different peers or become static.<br />
<br />
<br />
==Backward compatibility==<br />
<br />
Older clients remain fully compatible and interoperable after this change.<br />
<br />
Clients which do not implement this protocol remain fully compatible after this change using existing protocols, because transaction announcement reconciliation is used only for peers that negotiate support for it.<br />
<br />
==Rationale==<br />
<br />
====Why use PinSketch for set reconciliation?====<br />
<br />
PinSketch is more bandwidth efficient than IBLT, especially for the small differences in sets we expect to operate over.<br />
PinSketch is as bandwidth efficient as CPISync, but PinSketch has quadratic decoding complexity while CPISync has cubic decoding complexity. This makes PinSketch significantly faster.<br />
<br />
====Why use 32-bit short transaction IDs?====<br />
<br />
To use Minisketch in practice, transaction IDs should be shortened (ideally, to not more than 64 bits per element).<br />
A small number of bits per transaction also saves extra bandwidth and makes operations over sketches faster.<br />
According to our estimates, 32 bits provides a low collision rate in a non-adversarial model (which is made possible by using independent per-link salts).<br />
<br />
====Why use 128-bit truncated IDs?====<br />
<br />
To avoid problems caused by delays in the network, our protocol requires an extra round of announcing unsalted transaction IDs. The [https://arxiv.org/pdf/1905.10518.pdf Erlay] protocol on top of this work also requires announcing unsalted transaction IDs for flooding.<br />
Both of these measures allow transaction announcements to be deduplicated across peers.<br />
However, using full 256-bit IDs to uniquely identify transactions seems to be overkill.<br />
128 bits is the smallest power of 2 which provides good enough collision resistance in an adversarial model, and it trivially saves a significant portion of the bandwidth related to these announcements.<br />
<br />
====Why using bisection instead of extending the sketch?====<br />
<br />
Unlike extended sketches, bisection does not require operating over sketches of higher order.<br />
This avoids the high computational cost caused by quadratic decoding complexity.<br />
<br />
==Implementation==<br />
<br />
TODO<br />
<br />
==Acknowledgments==<br />
<br />
A large fraction of this proposal was developed during the design of Erlay with Gregory Maxwell, Sasha Fedorova and Ivan Beschastnikh.<br />
We would like to thank Suhas Daftuar for contributions to the design and BIP structure.<br />
We would like to thank Ben Woosley for contributions to the high-level description of the idea.<br />
<br />
==Copyright==<br />
<br />
This document is licensed under the Creative Commons CC0 1.0 Universal license.</div>934