User:Gmaxwell/alt ideas

Here are a few of the ideas which I think would be most interesting to see '''<span style="font-size:125%;">in an altcoin</span>'''. A few of these things may be possible as hardforking changes in Bitcoin too but '''<span style="font-size:125%;">some represent different security and economic tradeoffs and I don't think those could be ethically imposed on Bitcoin even if a simple majority of users wanted them</span>''' (as they'd be forced onto the people who don't want them).


(Some of these ideas are mine, some are from other people, some are old and obvious)
* Replace transaction merkle tree with a Merkle-sum-tree
** e.g. If a regular merkle tree uses merge(X,Y) = H(X||Y), an MST would use merge(X,Y) = sum(X[0],Y[0]), H(sum(X[0],Y[0])||X[1]||Y[1]), with (fee, hash) pairs taking the place of txids in the tree. This allows SPV nodes to stochastically validate the subsidy in blocks by fetching a random leaf and then fetching its txins (see the sketch after this list). See also: [https://bitcointalk.org/index.php?topic=137933.0]
* Rule violation announcements. When any rule violation is found, a proof or pointer to it gets flooded. (though no more than one violation per block or its parents) See also: [http://sourceforge.net/mailarchive/message.php?msg_id=29450793]
* Transaction block commitments.  Each transaction (or signature?) should contain a block index and 32 (?) least significant bits of the block hash. The transaction's fees are only valid (or only their full value?) if they are mined in a chain they agree with. This would let people making bitcoin transactions 'vote with their wallets' on the identity of the chain they consider important. This isn't a viable POW replacement, but would greatly reduce the economic benefit of mining moderate depth forks, esp as fees begin to dominate the block reward.  "You don't get (all) of my fee payment if you are mining a chain I don't like!"
** Nodes would typically checkpoint a few blocks in the past from their current height to avoid overstating their opinion unnecessarily.
** Deep checkpoints could be automatically triggered by observing a critical mass of coin-days-destroyed confirming them— creating a PoS-ish system, though this is subject to the 'nothing at stake' problem of PoS, and is probably very dangerous. (e.g. isolation risk for newly bootstrapping nodes)
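A minimal sketch of the sum-tree merge rule from the first bullet, assuming SHA-256, an 8-byte big-endian fee serialization, and Bitcoin-style duplication of the last node on odd levels; all of those details are illustrative choices, not part of the idea itself:
<pre>
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def merge(x, y):
    """merge(X, Y) = (fee sum, H(sum || X.hash || Y.hash)), per the bullet above."""
    s = x[0] + y[0]
    return (s, H(s.to_bytes(8, 'big') + x[1] + y[1]))

def sum_tree_root(leaves):
    """Fold (fee, txid) leaves into the (total_fee, root_hash) commitment."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:               # odd level: duplicate the last node
            level.append(level[-1])
        level = [merge(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The committed root carries the total fees, so an SPV node can check the
# coinbase (subsidy + total_fee) against the header and stochastically audit
# individual fee claims by fetching one random leaf and its txins.
leaves = [(fee, H(bytes([i]))) for i, fee in enumerate([100, 250, 0, 30])]
total_fee, root_hash = sum_tree_root(leaves)
assert total_fee == 380
</pre>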


* POW which involves queries against the UTXO set (set of spendable coins)
** Basically a special kind of memory hard POW that proves that the miner has a complete copy of the UTXO set and that the miner is good at querying it
** Can still be combined with merged mining.
** (This is entirely Amiller's idea, but I like it a fair bit)
** One exciting enhancement to this idea I have is making the POW H(header||nonce||H(utxo_lookup(H(header||nonce)))). This way if you have a stream of utxo queries coming in, you can make the work of answering them mine for you. Validation, then, is mining. If you don't have enough queries coming in you just make some up at random. (A sketch follows this list.)
* POW which turns the distributed computation into ticking for timelock encryption
** An infinite sequence of nothing-up-my-sleeve numbers are taken as an infinite sequence of ECC public keys. Searching the POW involves finding distinguished points along a Pollard's rho DLP solution trying to crack the key. When the key is cracked the problem is advanced to the next key.
** People can then encrypt messages with all of the keys between now and sometime in the future and the network will crack them, achieving a timelock.
** Probably incompatible with merged mining and other POW schemes.
** Making the difficulty adaptive either makes far-in-the-future messages impossible (because the problem size wouldn't be known long in advance), or requires increasingly big headers, as the difficulty would require working on multiple problems concurrently.
** The obvious constructions using ECDLP as the asymmetric problem are not progress free.
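A toy sketch of the query-driven POW variant above, with an in-memory dict standing in for the committed UTXO structure; utxo_lookup, the key derivation, and the target are all illustrative assumptions:
<pre>
import hashlib, os

def H(b):
    return hashlib.sha256(b).digest()

def utxo_lookup(utxo_set, key):
    # Map the probe hash onto some deterministic entry of the UTXO set; a real
    # scheme would index the committed set structure directly.
    keys = sorted(utxo_set)
    chosen = keys[int.from_bytes(key, 'big') % len(keys)]
    return utxo_set[chosen]

def pow_hash(header, nonce, utxo_set):
    probe = H(header + nonce)                       # forces a UTXO query per attempt
    return H(header + nonce + H(utxo_lookup(utxo_set, probe)))

def mine(header, utxo_set, target):
    while True:
        nonce = os.urandom(8)
        h = pow_hash(header, nonce, utxo_set)
        if int.from_bytes(h, 'big') < target:
            return nonce, h

utxos = {H(bytes([i])): b'serialized txout %d' % i for i in range(1000)}
nonce, h = mine(b'example header', utxos, 2**245)
</pre>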


* UTXO aging
** Abandoned UTXO should be forgotten and become unspendable.
** Demurrage is one possible construction for this, but it's economically and educationally complicated. Simpler would be just having a long but finite maximum age. Unspendable coins could vanish forever, or be returned to mining— but returning the coins to mining is economically distorting and risks creating weird incentives ("I won't mine your txn because I want to collect its inputs as fees once you've been successfully denied"). A toy sketch of the aging rule follows this list.
** ATTENTION MORONS: THIS CANNOT BE DONE WITH BITCOIN. SEE THE LARGE BOLD TEXT AT THE TOP.
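A toy sketch of the finite-maximum rule, with a made-up MAX_AGE constant and a plain dict standing in for the UTXO set:
<pre>
MAX_AGE = 210_000  # blocks; hypothetical maximum age

def prune_aged(utxo_set, height):
    """utxo_set maps outpoint -> (creation_height, txout); drop anything too old."""
    return {op: (h, txo) for op, (h, txo) in utxo_set.items()
            if height - h <= MAX_AGE}

def is_spendable(utxo_set, outpoint, height):
    entry = utxo_set.get(outpoint)
    return entry is not None and height - entry[0] <= MAX_AGE

utxo_set = {b'outpoint': (100_000, b'txout')}
assert not is_spendable(utxo_set, b'outpoint', 100_000 + MAX_AGE + 1)
</pre>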


* Make all transactions P2SH
** Simplicity and some space savings
* Ultimate P2SH:
** Represent the script as a merklized abstract syntax tree. The P2SH address is the root. When spending, the spender may provide only the branch they are executing, plus hashes for the unexecuted branches. This increases privacy and can compress long scripts on spend.
* Transaction cost prepayment: One problem is that it's possible to create UTXO that are unprofitable to redeem.
** Instead make every output specify a max_size, which is the maximum marginal increase in size from redeeming this txout in a new transaction. (would need to be coded into addresses)
** max_size would be serialized as an unsigned variable length int, minus whatever the smallest credible max_size is (e.g. something like 40 for Bitcoin).
*** This makes sure people aren't incentivized to write unspendable txn; perhaps a larger minimum max_size should be used, e.g. the size of the smallest secure TX_IN.
** The 'effective_size' of a transaction is then effective_size = MAX(real_size - sum_inputs(max_size), minimum_viable_txn_size) + sum_outputs(max_size). (A worked sketch follows this list.)
** In order to economically align costs, the blocksize limit should be based on the effective_size rather than the real_size.
** Unused effective_size is paid to the miner (e.g. in the coinbase txn), otherwise they have an incentive to create extra stupid outputs just to gather up this resource.
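A worked sketch of the effective_size accounting; only the formula is taken from the text above, the floor constant and the example numbers are placeholders:
<pre>
MINIMUM_VIABLE_TXN_SIZE = 60   # hypothetical floor, in bytes

def effective_size(real_size, input_max_sizes, output_max_sizes):
    """effective_size = MAX(real_size - sum_inputs(max_size),
                            minimum_viable_txn_size) + sum_outputs(max_size)"""
    return (max(real_size - sum(input_max_sizes), MINIMUM_VIABLE_TXN_SIZE)
            + sum(output_max_sizes))

# Spending one output whose creator prepaid max_size=120, with a 150-byte
# transaction that creates a single new output promising max_size=80:
assert effective_size(150, [120], [80]) == max(150 - 120, 60) + 80   # == 140
</pre>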


* Pruned history
** Structure transactions so that the parts needed for validation (txins, scriptsigs) are separate from the output data (scriptpubkey, output and fee values) and put them in separate hash trees. All nodes fully prune all data more than a few thousand blocks back.
** Massive space savings and improvements in syncup speed.
** Massive security loss— an attacker that can construct a large reorg can steal all the transacted coin beyond a certain depth.


* Normative and committed merklized UTXO data structure
** allows full validation of current blocks by storageless nodes with SPV security (see the sketch after this list)
** Can be complemented by proof-of-misbehavior messages that show a block is invalid by packing up the tree fragments that provide the data needed to see its invalidity
* ZKP Validated checkpoints— Is it possible to use computational integrity to create compact (constant size) checkpoint proofs that show that a checkpoint was the result of a faithful validation of the blockchain? This could be used to give pruned history the same security as full Bitcoin up to the limitations of the integrity proofs. This requires a CI system whose proof size/complexity doesn't depend on the 'secret' input to the code being run.
* Trusted computing reorg-doomsday preventers. Allow people running trusted computing with remote attestation to sign buried blocks with a program that will never sign a conflicting fork that branches off more than N (e.g. 1000) blocks back from a valid chain they've observed. Further ensure their faithful operation with fidelity bonds on another cryptocurrency.
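A minimal sketch of committing to the UTXO set and checking a membership branch against that commitment, as a storageless node would; the leaf encoding, the sorted ordering, and SHA-256 are assumptions, not a normative structure:
<pre>
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def utxo_root(utxos):
    """utxos maps serialized outpoint -> serialized txout; returns the committed root."""
    level = [H(op + txo) for op, txo in sorted(utxos.items())]
    if not level:
        return H(b'')
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_branch(leaf_hash, branch, index, root):
    """Check a Merkle branch; 'branch' lists sibling hashes bottom-up."""
    h = leaf_hash
    for sibling in branch:
        h = H(h + sibling) if index % 2 == 0 else H(sibling + h)
        index //= 2
    return h == root

utxos = {b'outpoint %d' % i: b'txout %d' % i for i in range(5)}
print(utxo_root(utxos).hex())
</pre>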


* Chain folding
** If nodes don't actually need to validate old chain data (because of committed UTXO and pruned history), it would be possible to 'fold up' the historic chain: whenever— by chance— a header is found with an apparent difficulty greater than a summary target (some large multiple of the current difficulty) then the next block can have a summary header which has a PREV back to the prior summary as well as the prior blocks. Nodes which are validating just to gauge difficulty can skip the intermediate blocks. This can be applied recursively. If the backpointers are randomized and every block is a candidate summary you end up making the chain a merklized skiplist.
 
* Petertodd's MMR TXO: Alternatively, do not store a UTXO set. Instead encode the transaction outputs in the blockchain in a merkle mountain range (an insertion ordered fully populated binary tree, set up to make appends cheap) over the whole chain. Transactions are required to provide the update proofs that show their inputs in the tree (and thus also allow you to null them out). This means that fully validating nodes and miners can be basically storageless, but wallets must take on the cost of remembering their own coins. (A small append sketch follows.)
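A small append sketch in the style of a merkle mountain range: keep one perfect subtree ("peak") per set bit of the element count and merge peaks like binary carries, so appends stay cheap. The peak-bagging order and hashing details below are assumptions, not petertodd's exact construction:
<pre>
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

class TXOTree:
    """Insertion-ordered accumulator with cheap appends (MMR-style peaks)."""
    def __init__(self):
        self.peaks = []     # list of (height, hash); older peaks first
        self.count = 0

    def append(self, txout):
        node = (0, H(txout))
        # Merge equal-height peaks, like carrying in binary addition,
        # so an append touches only O(log n) nodes.
        while self.peaks and self.peaks[-1][0] == node[0]:
            height, left = self.peaks.pop()
            node = (height + 1, H(left + node[1]))
        self.peaks.append(node)
        self.count += 1

    def root(self):
        # "Bag" the peaks right-to-left into one commitment for the header.
        if not self.peaks:
            return H(b'')
        acc = self.peaks[-1][1]
        for _, peak_hash in reversed(self.peaks[:-1]):
            acc = H(peak_hash + acc)
        return acc

t = TXOTree()
for i in range(5):
    t.append(b'serialized txout %d' % i)
print(t.root().hex())
</pre>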


* Adaptive block speed
** If nodes can merge in orphans for the prior (few?) blocks then they can show evidence of the amount of forking which is happening. This could be used to achieve closed loop control of the block target speed (a toy controller sketch follows this list). Thought would need to be given on what the right incentives would be to make sure that all the merging is done (the obvious thing to do would be to add the difficulty in for the best chain preference).
** Merges are motivated by being counted in the sum-difficulty— your chain is longer if you merge.
** Merges wouldn't merge transactions, just difficulty.
** Merges would bloat blocks, and very fast blocks would result in a lot of header data but these are less significant with folding (esp if summary blocks never included merges).
** If blocks are fast that might incentivize not mining transactions. Aligning POW with the work of validation may help.
** (I think the merging for difficulty stuff is more or less one of Amiller's ideas)
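A toy proportional controller for the closed-loop idea above, using the observed merged-orphan rate as feedback; the target rate, gain, and clamps are made-up illustrative numbers:
<pre>
TARGET_ORPHAN_RATE = 0.02   # desired fraction of blocks arriving as merged orphans
GAIN = 0.5                  # proportional controller gain (hypothetical)

def retarget_interval(current_interval, merged_orphans, blocks):
    observed = merged_orphans / blocks
    # More orphans than desired means propagation is too tight, so slow blocks down.
    error = observed - TARGET_ORPHAN_RATE
    new_interval = current_interval * (1.0 + GAIN * error / TARGET_ORPHAN_RATE)
    # Clamp adjustments, as difficulty retargeting does, to limit manipulation.
    return min(max(new_interval, current_interval * 0.75), current_interval * 1.25)

# e.g. 600s blocks, 5 merged orphans seen over the last 100 blocks:
print(retarget_interval(600.0, 5, 100))
</pre>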


* Support for ed25519 signatures and lamport signatures
** Both are significantly faster than Bitcoin's ECDSA; Lamport is QC-hard but would result in enormous scriptsigs (less of an issue with pruned history). A toy Lamport sketch follows this list.
** With libsecp256k1 ed25519 is <s>only 2x</s>not faster. Also it has a non-prime order— cofactor of 8. yuck.
* Alternatively, [http://repo.thehackademy.net/depot_madchat/crypto/papers/fawkes.pdf Guy Fawkes signatures]
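A compact Lamport one-time signature over SHA-256, mainly to illustrate the size cost: each signature below is 256 values of 32 bytes, i.e. 8192 bytes before any encoding overhead:
<pre>
import hashlib, os

def H(b):
    return hashlib.sha256(b).digest()

def keygen():
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one preimage per message bit; the key must never be reused.
    return [sk[i][bit] for i, bit in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][bit] for i, (s, bit) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
sig = sign(sk, b'example tx digest')          # 8192-byte signature
assert verify(pk, b'example tx digest', sig)
</pre>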


* Direct support for validating unblinded chaum tokens of some kind
** Makes off-chain chaum banks easier to integrate (e.g. directly redeeming your offchain tokens onto the chain)
** E.g. you do a blinded chaum-chaum exchange but the new token is instead a blockchain redemption certificate. Because it's blinded the bank can't tell they're signing a redemption cert and so can't prevent you from redeeming your coins against the chain without denying all service. (A toy blind-signature sketch follows this list.)
* or integrate the 'zerocoin' anonymous accumulator[http://spar.isi.jhu.edu/~mgreen/ZerocoinOakland.pdf].
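A textbook RSA blind-signature sketch of the redemption trick: the bank signs a blinded value, so it cannot tell whether it is signing an ordinary token or an on-chain redemption certificate. The tiny hard-coded key is purely illustrative and completely insecure:
<pre>
p, q = 61, 53
n, e, d = p * q, 17, 2753           # classic textbook RSA parameters

def bank_sign(blinded):             # bank's private operation
    return pow(blinded, d, n)

def blind(m, r):                    # user blinds the message m with factor r
    return (m * pow(r, e, n)) % n

def unblind(s_blinded, r):          # user strips the blinding factor
    return (s_blinded * pow(r, -1, n)) % n

def verify(m, s):
    return pow(s, e, n) == m % n

m, r = 1234, 7                      # m encodes the redemption certificate
signature = unblind(bank_sign(blind(m, r)), r)
assert verify(m, signature)
</pre>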


* Switch to a value encoding which is arbitrary precision
** Removes divisibility concerns forever
** Perhaps allow coinbase votes to increase the permitted precision. (May also allow more compact transactions for typical transactions with a finite number of significant digits; a sketch of such an encoding follows this list.)
* adam3us proposes preventing mining policy blockading by blinding inputs. A transaction is mined but it isn't clear which inputs it's spending. Fees are paid by unblinded inputs to prevent DOS attacks. Blinding is done in such a way that double spends are still obvious. [https://bitcointalk.org/index.php?topic=205533.msg2149057#msg2149057]
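A sketch of one possible compact arbitrary-precision amount encoding: a one-byte decimal exponent plus a base-128 varint mantissa, so typical round amounts stay small while precision can grow. The exact encoding is an assumption, not something specified above:
<pre>
def encode_amount(mantissa, exponent):
    out = bytes([exponent & 0xff])           # 1-byte exponent (illustrative)
    while True:                              # little-endian base-128 varint mantissa
        byte = mantissa & 0x7f
        mantissa >>= 7
        out += bytes([byte | (0x80 if mantissa else 0)])
        if not mantissa:
            return out

def decode_amount(data):
    exponent = data[0]
    mantissa, shift = 0, 0
    for b in data[1:]:
        mantissa |= (b & 0x7f) << shift
        shift += 7
        if not b & 0x80:
            break
    return mantissa, exponent

assert decode_amount(encode_amount(21_000_000, 8)) == (21_000_000, 8)
</pre>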
==Coin of the moonmen==
"Bytecoin" (apologies to the kind fellow on the forum with that name). The idea behind bytecoin is that there is a fixed supply of 21 trillion bytes. Mining bytecoin gives you access to these bytes through a geometrically delaying initial "cosmic inflation" (subsidy :P). Every txout has a size value plus an excess bytecoins value. A txout can not be bigger than its total bytecoin value. A bytecoin confers the right to store one byte in the system's utxo set for some maximum lifetime. The excess value of a bytecoin txout is decreased over time at some multiple of the whole txout size and when it reaches zero the whole txout is pruned (recovered bytecoins are added back to the slow geometric distribution pool). The marginal redemption cost of a bytecoin output is prepaid (see above), but the reclaimable prepayment starts at zero and matures at the same rate as the excess value decay to rate-limit transactions, prematurely reclaiming bytecoin sends the immature prepayment to the geometric distribution. Because the above stuff moves all the storage costs into the utxo set, the system has a fixed maximum storage operating cost, and is free of negative storage externality. It would be not as good as a currency as Bitcoin is because the usefulness of the storage is economically distorting, but it would be much better as a broadcast storage medium.
(A non-inflationary version, where the excess value doesn't decrease and miners are paid just from transaction fees and the recovery of forgotten/lost storage, is possible too, but there would be little incentive not to maximally use all the storage at all times in that case, and if the system had too many long-term users who never lost their keys it might be impossible to pay for adequate security.)
(A version where the byte supply isn't constant but is instead based on a miner proof of throughput that increases the available storage based on some function of the miner's POW query speed might also be interesting.)
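A small arithmetic sketch of the decay rule: the excess drains at some multiple of the txout's size per block and the output is pruned when it reaches zero; the decay multiple is a made-up constant:
<pre>
DECAY_MULTIPLE = 1 / 1000.0      # excess bytecoins burned per stored byte, per block (hypothetical)

def blocks_until_pruned(txout_size, excess_bytecoins):
    per_block = DECAY_MULTIPLE * txout_size
    return int(excess_bytecoins / per_block)

# A 100-byte output funded with 400 excess bytecoins lives 4000 blocks:
assert blocks_until_pruned(100, 400.0) == 4000
</pre>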


==See also==
*[[Hardfork Wishlist]]
