User:Gmaxwell/alt ideas: Difference between revisions

From Bitcoin Wiki

Revision as of 06:49, 10 March 2013

Here are the ideas which I think would be most interesting to see in an altcoin. A few of these things may be possible as hardforking changes in Bitcoin too but some represent different security and economic tradeoffs and I don't think those could be ethically imposed on Bitcoin even if a simple majority of users wanted them (as they'd be forced onto the people who don't want them).

(Some of these ideas are mine, some are from other people, some are old and obvious)

  • Replace transaction merkle tree with a Merkle-sum-tree
    • e.g. if a regular merkle tree uses merge(X,Y) = H(X||Y), an MST would use merge(X,Y) = (sum(X[0],Y[0]), H(sum(X[0],Y[0])||X[1]||Y[1])), with (fees, hash) pairs taking the place of txids in the tree. This allows SPV nodes to stochastically validate the subsidy in blocks by fetching a random leaf and then fetching its txins. see also: [1]
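The merge rule above can be sketched as follows (a Python illustration, not part of the original proposal; the 8-byte little-endian fee encoding and double-SHA256 are assumptions in the spirit of Bitcoin's conventions):

```python
import hashlib

def H(b: bytes) -> bytes:
    """Double-SHA256, as used for Bitcoin's merkle trees (assumed here)."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merge(x, y):
    """Merge two (fee_sum, hash) nodes into their parent.

    The parent hash commits to the combined fee, so any root-to-leaf
    path lets a verifier check the claimed fee totals along the way."""
    s = x[0] + y[0]
    return (s, H(s.to_bytes(8, 'little') + x[1] + y[1]))

# Leaves are (fee, txid) pairs; the root's first element is the total
# fee, which an SPV node can check against subsidy + claimed fees.
leaves = [(50, H(b'tx0')), (25, H(b'tx1')), (10, H(b'tx2')), (15, H(b'tx3'))]
level = leaves
while len(level) > 1:
    level = [merge(level[i], level[i + 1]) for i in range(0, len(level), 2)]
root = level[0]
```

Spot-checking a random leaf then amounts to verifying that the fee sums on its merkle path add up consistently to the root's committed total.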
  • Rule violation announcements. When any rule violation is found, a proof or pointer to it gets flooded. (though no more than one violation per block or its parents) See also: [2]
  • Transaction checkpoints. Each transaction (or signature?) should contain a block index and 32 (?) least significant bits of the block hash. The transaction's fees are only valid (or only their full value?) if they are mined in a chain they agree with. This would let people making bitcoin transactions 'vote with their wallets' on the identity of the chain they consider important. This isn't a viable POW replacement, but would greatly reduce the economic benefit of mining moderate depth forks, esp as fees begin to dominate the block reward. "You don't get (all) of my fee payment if you are mining a chain I don't like!"
    • Nodes would typically checkpoint a few blocks in the past from their current height to avoid overstating their opinion unnecessarily.
    • Deep checkpoints could be automatically triggered by observing a critical mass of coin-days-destroyed confirming them— creating a PoS-ish system, though this is subject to the 'nothing at stake' problem of PoS, and is probably very dangerous. (e.g. isolation risk for newly bootstrapping nodes)
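The checkpoint check a miner (or validator) would apply can be sketched like this (a hedged Python illustration; the choice of the hash's first four little-endian bytes as the "32 least significant bits" is an assumption, as is the `checkpoint_ok` name):

```python
def checkpoint_ok(tx_checkpoint, chain):
    """Decide whether a transaction's fee is (fully) valid in a chain.

    tx_checkpoint -- (height, low32): the block index the signer refers
                     to and 32 low-order bits of that block's hash.
    chain         -- mapping of height -> 32-byte block hash for the
                     chain the miner is extending.
    """
    height, low32 = tx_checkpoint
    if height not in chain:
        return False  # chain too short to contain the checkpointed block
    block_hash = chain[height]
    # Interpret the first 4 bytes as the 32 least significant bits
    # (assumed convention; Bitcoin hashes are conventionally shown
    # byte-reversed, so a real deployment would pin this down).
    return int.from_bytes(block_hash[:4], 'little') == low32
```

A miner on a fork the signer disagrees with would fail this check and forfeit (some or all of) the fee, which is the intended disincentive.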
  • POW which involves queries against the UTXO set (set of spendable coins)
    • Basically a special kind of memory hard POW that proves that the miner has a complete copy of the UTXO set and that the miner is good at querying it
    • Can still be combined with merged mining.
    • (This is entirely Amiller's idea, but I like it a fair bit)
  • POW which turns the distributed computation into ticking for timelock encryption
    • An infinite sequence of nothing-up-my-sleeve numbers are taken as an infinite sequence of ECC public keys. Searching the POW involves finding distinguished points along a Pollard's rho DLP solution trying to crack the key. When the key is cracked the problem is advanced to the next key.
    • People can then encrypt messages to these keys and sometime in the future the network will crack them, achieving a timelock.
    • Probably incompatible with merged mining and other POW schemes.
    • Making the difficulty adaptive either makes far-in-the-future messages impossible (because the problem size wouldn't be known long in advance), or requires increasingly big headers, since adapting difficulty would require working on multiple problems concurrently.
  • UTXO aging
    • Abandoned UTXO should be forgotten and become unspendable.
    • Demurrage is one possible construction for this, but it's economically and educationally complicated. Simpler would be just having a long but finite maximum lifetime. Unspendable coins could vanish forever, or be returned to mining— but returning the coins to mining is economically distorting and risks creating weird incentives ("I won't mine your txn because I want to collect its inputs as fees once you've been successfully denied")
  • Make all transactions P2SH
    • Simplicity and some space savings
  • Ultimate P2SH:
    • Represent the script as a merklized abstract syntax tree. The P2SH address is the root. When spending, the spender need provide only the branch they are executing, and hashes for the unexecuted branches. This increases privacy and can compress long scripts on spend.
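The verification side of this can be sketched as recomputing the root from the one revealed branch plus the sibling hashes (a Python illustration; the `mast_root` name, the double-SHA256 hash, and the index-based sibling ordering are assumptions, not a specified construction):

```python
import hashlib

def H(b: bytes) -> bytes:
    """Double-SHA256 (assumed hash for the tree)."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def mast_root(leaf: bytes, path, index: int) -> bytes:
    """Recompute the MAST root from one revealed branch.

    leaf  -- the serialized script branch actually being executed
    path  -- hashes of the unexecuted sibling branches, leaf to root
    index -- position of the leaf, used to order siblings at each level
    """
    h = H(leaf)
    for sibling in path:
        if index & 1:
            h = H(sibling + h)
        else:
            h = H(h + sibling)
        index >>= 1
    return h

# The spender reveals only `leaf` and `path`; the network checks that
# the recomputed root equals the hash committed in the P2SH output.
```

The unexecuted branches stay hidden behind their hashes, which is where both the privacy gain and the size savings come from.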
  • Transaction cost prepayment: One problem is that it's possible to create UTXO that are unprofitable to redeem.
    • Instead make every output specify a max_size, which is the maximum size of a TX_IN that will be permitted to redeem it.
    • max_size is serialized as an unsigned variable-length int, minus whatever the smallest credible max_size is (e.g. something like 40 for Bitcoin)
    • Then for the 'cost' of a transaction use cost = MAX(size-sum_inputs(max_size),minimum_viable_txn_size) + sum_outputs(max_size)
    • To align costs economically, the blocksize limit should be based on cost rather than raw size.
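The cost formula above translates directly into code (a Python sketch; the default value of minimum_viable_txn_size is an assumed placeholder, not a figure from the proposal):

```python
def tx_cost(size, input_max_sizes, output_max_sizes,
            minimum_viable_txn_size=60):
    """cost = MAX(size - sum_inputs(max_size), minimum_viable_txn_size)
              + sum_outputs(max_size)

    Each spent output has already pre-paid for up to max_size bytes of
    the input that redeems it, so that allowance is subtracted from the
    transaction's size; each new output pre-pays for its own future
    redemption by adding its max_size to the cost now.
    minimum_viable_txn_size (assumed constant) stops the subtraction
    from driving cost below a sane floor.
    """
    return (max(size - sum(input_max_sizes), minimum_viable_txn_size)
            + sum(output_max_sizes))
```

With a block limit denominated in this cost, creating outputs that are expensive to redeem is paid for up front, removing the incentive to create UTXO that are unprofitable to sweep.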
  • Pruned history
    • Structure transactions so that the parts needed for validation (txins, scriptsigs) are separate from the output data (scriptpubkey, output and fee values) and put them in separate hash trees. All nodes fully prune all data more than a few thousand blocks back.
    • Massive space savings and improvements in syncup speed.
    • Massive security loss— an attacker that can construct a large reorg can steal all the transacted coin beyond a certain depth.
  • Normative and committed merklized UTXO data structure
    • allows full validation of current blocks by storageless nodes with SPV security
    • Can be complimented by proof-of-misbehavior messages that show a block is invalid by packing up the tree fragments that provide the data needed to see its invalidity
  • Chain folding
    • If nodes don't actually need to validate old chain data (because of committed UTXO and pruned history), it would be possible to 'fold up' the historic chain: whenever— by chance— a header is found with an apparent difficulty greater than a summary target (some large multiple of the current difficulty) then the next block can have a summary header which has a PREV back to the prior summary as well as the prior blocks. Nodes which are validating just to gauge difficulty can skip the intermediate blocks. This can be applied recursively. If the backpointers are randomized and every block is a candidate summary you end up making the chain a merklized skiplist.
  • Adaptive block speed
    • If nodes can merge in orphans for the prior (few?) blocks then they can show evidence of the amount of forking which is happening. This could be used to achieve closed-loop control of the block target speed. Thought would need to be given to what the right incentives would be to make sure that all the merging is done (the obvious thing to do would be to add the difficulty in for the best-chain preference).
    • Merges are motivated by being counted in the sum-difficulty— your chain is longer if you merge.
    • Merges wouldn't merge transactions, just difficulty.
    • Merges would bloat blocks, and very fast blocks would result in a lot of header data but these are less significant with folding (esp if summary blocks never included merges).
    • If blocks are fast that might incentivize not mining transactions. Aligning POW with the work of validation may help.
    • (I think the merging for difficulty stuff is more or less one of Amiller's ideas)
  • Support for ed25519 signatures and lamport signatures
    • Both are significantly faster than Bitcoin's ECDSA; Lamport signatures are QC-hard (resistant to quantum computers) but would result in enormous scriptsigs (less of an issue with pruned history).
  • Direct support for validating unblinded chaum tokens of some kind
    • Makes off-chain chaum banks easier to integrate (e.g. directly redeeming your offchain tokens onto the chain)
    • E.g. you do a blinded chaum-chaum exchange but the new token is instead a blockchain redemption certificate. Because it's blinded, the bank can't tell they're signing a redemption cert and so can't prevent you from redeeming your coins against the chain without denying all service.
  • Switch to a value encoding which is arbitrary precision
    • Removes divisibility concerns forever
    • Perhaps allow coinbase votes to increase the permitted precision. (May also allow more compact transactions for typical transactions with a finite number of significant digits)

See also