Proof of burn
Proof of burn is a method for bootstrapping one cryptocurrency off of another. The idea is that miners should show proof that they burned some coins - that is, sent them to a verifiably unspendable address. This is expensive from their individual point of view, just like proof of work; but it consumes no resources other than the burned underlying asset. To date, all proof of burn cryptocurrencies work by burning proof-of-work-mined cryptocurrencies, so the ultimate source of scarcity remains the proof-of-work-mined "fuel".
There are likely many possible variants of proof of burn. This page currently describes Iain Stewart's version. Other people can add variant versions that still belong to the broad proof of burn idea.
- 1 Iain Stewart's version of proof of burn
- 2 Earlier work suggesting a role for the burning of coins
- 3 See also
Iain Stewart's version of proof of burn
(Note: Iain Stewart's version of proof of burn is an attempt at a protocol which could be used within one cryptocurrency for ongoing generation of its blockchain (i.e. mining). For the much simpler task of burning one currency to create another, any reasonable algorithm for creating units of the second currency upon detection of fresh burns of the first will suffice. The subtleties of this version - in particular, the simulation of mining rigs, and the reliance on low-bit-rate external randomness - will not be necessary.)
Introduction and motivation
The key idea of proof-of-burn (this would also apply to proof-of-stake, by the way) is that when choosing the thing which is to qualify as a "difficulty", i.e. to require miners to exhibit proof that they've "done something that's tough to do", all that matters is that an individual miner finds the task expensive. (Well... it also matters that everyone else should find it cheap to verify that it has been done.) It doesn't need to be the case that real resources are consumed in the real economy.
With proof-of-work, it so happens that real resources are indeed consumed - mining rigs are produced, with human labour and materials as input, electricity is used, and all these things have to be bid away from their real-economy best alternative uses. (Or, if they're produced in addition to what would have been produced, the total of leisure time is less than it could have been. Something real is grabbed as input.) And while a cryptocurrency is being set up (i.e. [the fast early phase of] its initial distribution) - or, more precisely, while the first cryptocurrency is being set up; more on this distinction later! - no good alternative has been proposed. (And I'm not proposing one.) But once a cryptocurrency is up and running, with its initial distribution close to completed, new possibilities arise, for tasks to "feel expensive" to a miner but not actually "be expensive" from a god-like whole-economy perspective.
Proof-of-stake (of the "Cunicula variety", I mean) is in fact arguably already an example of such a task. It feels awfully expensive, to a miner, to save up a lot of bitcoins and become a big stakeholder; but from a whole-economy viewpoint, this is a swapping of assets' ownership labels around, it's not a burning of electricity or the like. However, I thought it would be interesting to invent a task that is absolutely, nakedly, unambiguously an example of the contrast between the two viewpoints. And yes, there is one: burning the currency!
By "burning" a tranche of bitcoins I just mean sending them to an address which is unspendable. The precise technical details of this will vary from cryptocurrency to cryptocurrency. With Bitcoin, any address which is [the RIPEMD160/SHA256 hash of] a script that evaluates to false will do. So, the script should do a "deliberately silly" thing - instead of things like "check such-and-such signature, and put the validity result on the stack", it should do something like "add 2 and 2, and now check if what's on top of the stack is equal to 5". (Or just "push 4, and check if it's equal to 5". Anything of that sort.) There are thus an unbounded number of such scripts, with entropy saturating RIPEMD160 since you can choose big numbers to taste. So, bitcoins sent to such a txout can never be redeemed on a future txin. (Barring the cracking of RIPEMD160 and the finding of an alternative matching script, that is. If that happens, the cryptocurrency is in big trouble anyway!)
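The "deliberately silly" script idea can be sketched in a few lines. This is an illustrative sketch, not real Bitcoin serialization: the opcode values are Bitcoin's standard ones, but the address commitment (RIPEMD160 of SHA256, plus Base58Check encoding) is replaced by plain SHA-256 for portability, and `silly_script` is a hypothetical helper name showing the unbounded family of variants.

```python
import hashlib

# Standard Bitcoin opcode values for the "push 4, push 5, check equality"
# script described above -- it always evaluates to false, so coins sent to
# its hash can never be redeemed.
OP_4, OP_5, OP_EQUAL, OP_DROP = 0x54, 0x55, 0x87, 0x75
unspendable_script = bytes([OP_4, OP_5, OP_EQUAL])

# Hash committing to the script. (Real Bitcoin addresses commit to
# RIPEMD160(SHA256(script)); plain SHA-256 is used here for portability.)
script_hash = hashlib.sha256(unspendable_script).hexdigest()

def silly_script(nonce: int) -> bytes:
    """The unbounded family of such scripts: push an arbitrary big number
    ("to taste"), drop it, then run the always-false core."""
    data = nonce.to_bytes((nonce.bit_length() + 7) // 8 or 1, "little")
    return bytes([len(data)]) + data + bytes([OP_DROP, OP_4, OP_5, OP_EQUAL])
```

Every distinct nonce gives a distinct script, hence a distinct unspendable address - which is why burning is not obvious to blockchain-watchers until the script is later exhibited.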
With this definition of burning, it's not obvious to blockchain-watchers that some bitcoins have been burnt, at the time of burning. They've been sent to an address which doesn't stand out from any other. It's only later, when a miner who burned them earlier now wants to exhibit proof that "yes, these coins are burnt", that blockchain-watchers get their proof. (Which basically consists of exhibiting the script that manifestly always evaluates to false, and hashes to the address.) If it's thought desirable that the act of burning should be obvious right away, rather than later, then this can be achieved: burning merely needs to be defined as sending to some fixed unspendable address, with no variation - e.g. we could settle on the hash of "push 4, and check if it's equal to 5".
So, miners are creating candidate winning blocks by saying to the listening world, not "Look! I've done this many trillion hashes! [or struck lucky with fewer: you, the listening world, wouldn't know the difference... but this doesn't matter...]", but rather "Look! Two months ago I burned this many bitcoins!". In both cases, "this many" means an adjustable difficulty parameter, which the network adjusts from time to time (fortnightly, in today's Bitcoin) to squeeze out marginal miners (and keep more-efficient-than-marginal ones in profit) to just the extent needed to regulate block creation to a preferred pace (one per 10 minutes, in today's Bitcoin).
Why that phrase "Two months ago"? The broad principle is as follows. A miner mustn't be able to just burn some bitcoins right now and say "OK, I've burned them! Now let me have all those latest juicy transaction fees that have arrived in the past few minutes! Thanks!" That extremely recent act of burning could be undone in a block chain reorganisation; and then the same miner would be able to "re-burn" those same coins in an attempt to grab a block afresh, post-reorganisation. That would constitute a breakdown in the analogy of burning with proof-of-work hashing. A trillion proof-of-work hashes on a pre-reorg block are of no value on the post-reorg chain. A proof-of-work miner must simply shrug and say "Oh well, that's those expenses [electricity, mining rig rental / imputed rental,...] lost and gone... time to try again!" And that's the way things should be, for security - it should not be as cheap to extend the height of two or more competing chains as it is to focus on one. (And having decided to focus on one, a miner should incur a risk of lost expense if their choice turns out to be "the wrong one" in network consensus terms.)
The above point makes it clear why the act of burning should be a decent interval earlier than the act of exhibiting proof. Two months may be overdoing it, but the protocol should require it to be sufficiently far back that there's no practical possibility of it being undone. There are in fact some further issues, to do with making sure it's not cheap for a miner to re-exhibit their proof (of having performed a suitably substantial burn a suitably long time ago) on multiple competing chains. Details to follow.
Now then! How much burning will actually happen, under this protocol? The answer is straightforward enough, though its implications are quite broad and in some ways surprising. Miners will burn bitcoins at an average rate very close to the average rate that ordinary users are sending them fees (and any coin-minting still going on too of course), minus the miners' true real-resource costs (i.e. the hardware and electricity and the like for handling transactions and blocks and burn proofs - these costs will be far lower than the hashing costs incurred under proof-of-work, but of course still non-zero). This follows by the same sort of "approach to equilibrium" reasoning that tells us that miners will expend real resources on proof-of-work to roughly that extent - if they didn't, mining would be supra-normally profitable, and new entrants would be attracted into the trade. If burning coins, rather than buying a lot of kit from a mining rig supplier, is the expense incurred by a miner to compete for the revenue stream, the same economic principles apply.
Technical sketch of proof of burn: "Burnt coins are mining rigs!"
Iain Stewart writes: In this subsection I give a provisional technical sketch of the operational details of the proof-of-burn protocol I've currently settled on. It can be summed up in the following pithy slogan:
"Burnt coins are mining rigs!"
What that slogan means will become clear as I go on. Basically, proof-of-work is so elegant, in so many different ways, excepting its high real-resource cost, that I decided my attempt at an alternative to it, avoiding its real-resource cost, should mimic it as faithfully as possible in every other aspect. Well, only readers can judge whether I've succeeded!
The key is to use a stream of true randomness - see below for where that comes from! - to simulate the generating of random hashes that a real proof-of-work mining rig would produce. Now, obviously we don't want to "simulate" every actual hash! A "simulation" of proof-of-work at that level of detail would just be proof-of-work! Rather, we want to mimic the statistical properties of a mining rig's stream of hashes, to just the level of detail required to give the same sort of pattern of meeting / not meeting a moving difficulty target, and all the rest of it, that we get with the real thing.
So: first of all, what exactly is my "stream of true randomness"? Chop up time into units considerably shorter than the intended inter-block time, but with no need to go much finer than general network latency. (Seconds will do, I think.) For each second, t, we need a uniform random number between 0 and 1 assigned to it, RAND(t). This sounds as if we need some awful dependency on a fragile central source - some high-powered laser at NASA pouring out quantum noise every second, or something - with all the trust and failure issues that would imply. Fortunately, for simulating mining rigs, we don't need anything like that. All that matters is that, to someone "buying a simulated mining rig" (burning some bitcoins, that is!) at time t, they can't predict the value of RAND(any t' 2 months or more to the future of t). [Where "2 months" means the dead time a burnt coin must wait before it becomes a simulated active mining rig. See introductory motivating section above. It's basically just a generous waiting period to make sure a burnt coin is truly definitely burnt, and won't have any chance of being "unburnt" in a chain reorg, by the time it comes into use in mining.] We do not need fresh statistical independence of RAND(t) w.r.t. all of RAND(t-1), RAND(t-2), RAND(t-3), etc. (And we don't mind if the stream is known a "short" time into the future - e.g. a week or two; at any rate a small portion of the waiting period.)
Such a lesser goal can, I believe, be achieved with just a few tens of bits of true randomness per week. Quality is what matters, not quantity! I suggest tapping into the world's most highly-audited source of low-bit-rate true randomness: lotteries. These (the big reputable ones anyway) are already subject to elaborate inspection of the machinery that tosses the balls around and draws some of them out. And the results are publicised so widely, in so many newspapers, TV channels, websites etc, as to make it impossible for anyone to lie about them.
Roughly weekly, a config-file (lottery-results.dat or whatever) should get "the latest lottery results" appended to it. There is no hurry about this, it doesn't need to be exactly every week, or even the same lottery every time, it just needs several tens of bits of fresh lottery data added roughly weekly. I believe there would be no trouble propagating this to all nodes, by out-of-band means if necessary. The format should be utterly simple and transparent, a 1-line plain text description of the results and the timestamp (t in RAND(t)) from which they are to be paid attention to, onwards. Like this:
. . .
for use from 2013-06-24 00:00:00 onwards: UK national lottery 2013-06-12: 4 9 20 22 37 42, bonus ball 19
for use from 2013-07-01 00:00:00 onwards: UK national lottery 2013-06-19: 7 12 14 19 33 41, bonus ball 28
(Obviously the meta-level words "for use from... onwards" aren't really needed - I just put them in for clarity.) Each line is added in a leisurely, unhurried fashion, at some time (it doesn't matter when) between the draw and the intended start-paying-attention-to-it date. (Some time between 2013-06-19 and 2013-07-01 00:00:00, in the case of the example last line above.) This gives plenty of time for people to add it themselves, from their favourite news source, and check by out-of-band means that they've added what everybody else has added, right down to spelling and punctuation. (Which in practice probably means copying it from somewhere. The point is, the "somewhere" doesn't need to be trusted - a lie, or an unexpected variation in format or spelling or punctuation, would be called out well within the leisurely timescale.)
RAND(t) is then HASH(config-file [excluding any lines that are "for use from time later than t onwards" of course], plus t itself [in some standard format, e.g. 2013-07-02 23:14:05, or just the Unix numeric time] as a final line). - Or if you like, HASH(HASH(config-file) ++ t), for efficiency. (Where "++" means "append together in some standard fashion".) "HASH()" is your favourite hash function, e.g. SHA256(SHA256()). Thus RAND(t) is a 256-bit integer, which we regard conceptually as a real number between 0 and 1 by putting a binary point in front.
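The definition just given fits in a few lines of code. A sketch only: `rand_t` and `sha256d` are hypothetical names, and the exact "++" concatenation format is an assumption, since the article only requires that some standard be settled on.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """The article's HASH(): SHA256(SHA256())."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def rand_t(config_file: str, t: str) -> float:
    """RAND(t) = HASH(HASH(config-file) ++ t), read as a real in [0, 1).

    `config_file` must already exclude any lines that are only in force
    later than t; '++' is plain byte concatenation in this sketch.
    """
    digest = sha256d(sha256d(config_file.encode()) + t.encode())
    return int.from_bytes(digest, "big") / 2**256
```

The same inputs always give the same value, so every node computes an identical RAND(t); changing the timestamp by a single second gives an unrelated value.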
(I'm aware that people on the forums are coming up with randomness protocols for proof-of-stake, proof-of-activity and the like which don't involve external true randomness like lotteries - they just hash the last hundred blocks' hashes together, or something like that. I don't think this is good enough. [For their application or for mine!] Someone producing the latest block, given the previous 99, can privately produce billions of cheap variations on it, by varying the order the transactions are listed etc, until they find, and publish, the one that "games" the randomness in their favour. However, if I'm wrong about this, and hashing the last hundred blocks is in fact fine, then good! We can drop the lottery rigmarole! Anyway, for the rest of this description, I'll simply assume that RAND(t) becomes available for all t, but remains unknown until a week or two before t, and in particular, RAND(2 months or more from now) is "massively unknown" right now - unknown with many tens to hundreds of bits of unknowable future entropy. That's all that matters for turning burnt coins into simulated mining rigs.)
Right then! What do we do with this RAND(t) stream? We simulate the capricious behaviour of a true proof-of-work mining rig! Let us assume that, in the real proof-of-work world, you can buy an h hashes/second mining rig for b=c*h BTC. (c varies with technical progress, BTC price fluctuations, etc; but in the simulated world it can just be left constant.) Or, inverting this, if you spend b BTC, you acquire a mining rig of h=b/c hashes/s power. Now, what does it actually mean for your rig to perform h hashes during 1 second? It means you're producing h uniform random numbers between 0 and 1. (That binary point again!) But you don't really care what they all are individually - how well you did during that 1 second is defined as "what was the lowest hash value you produced during that 1 second?". This is then inspected for whether it beats [is lower than] the network's current target; or, perhaps, whether it beats the lesser [i.e. numerically greater] target of a pooled mining operation. If it's good enough, that precious lowest hash is published to the network (or mining pool), and the others are just thrown away (not published). If it's not good enough, even the [not-so-]precious lowest hash isn't published - and certainly not the others. So, in the simulation, we only need to produce, for each second, a simulated "lowest hash for that 1 second". The "others" don't have to exist at all!
For reproducing (statistically) the pattern of hits and misses w.r.t. targets tough enough to not be achieving several hits within 1 second, and for h>>1, we can take the simulated lowest hash for time t to be (1/h) * HASH(signatures-of-the-act-of-burning-the-b-BTC ++ RAND(t)). [In fact this even works for h<1, despite the rather surreal appearance of "lowest hashes" >1 every so often in that case - i.e. values outside the range that real lowest (or any other!) hashes can ever actually be: it turns out this surreal behaviour is just what's needed to degrade the probability of beating the target by just the right amount. Values >1 in effect represent "I didn't generate any hashes at all during this particular 1-second window".]
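The statistical claim can be checked directly: a genuine rig's lowest-of-h-uniforms beats a small target T with probability 1-(1-T)^h, while the simulated value U/h (U uniform on [0,1)) beats T with probability exactly min(hT, 1); for small hT the two agree to first order in T. A quick numerical check:

```python
# A real rig's best-of-h hashes vs the simulated single draw U/h,
# compared on the probability of beating a small target T.
h, T = 1_000_000, 1e-8

p_real_rig = 1 - (1 - T) ** h   # P(min of h uniforms < T)
p_simulated = min(h * T, 1.0)   # P(U/h < T) for U uniform on [0, 1)

# For small h*T the two probabilities agree to within about half a percent.
print(p_real_rig, p_simulated)
```

With these illustrative numbers, hT = 0.01 and the exact probability is 1 - e^(-0.01) ≈ 0.00995, so the simulation slightly flatters the rig at high difficulty - exactly the first-order agreement the formula relies on.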
Two further subtleties. First of all, it turns out to be desirable to include the block number (chain height @ 1 per block) in the formula - just to keep the owners of simulated mining rigs "on their toes" and not be able to tell a week or so in advance when they'll be lucky. (This encourages them to run a continuous full node. Maybe that's not in fact that important.) So we take simulated-lowest-hash = (1/h) * HASH(signatures-of-the-act-of-burning-the-b-BTC ++ block-number-of-would-be-block ++ RAND(t)). (We should not include finer details of the block, to avoid "gaming" a la the hundred blocks business I mentioned earlier. - Or if I'm wrong about that, then OK, by all means mix in finer details of the block!)
Secondly, it turns out that to keep the burning process going forever, rather than a pulse of initial burning that no-one ever again wants to contribute further burning to, we should simulate one more property of real-world mining rigs: they break down! That is, we should demurrage away the strength of a simulated mining rig. (A plea to the reader: Don't be alarmed by the word "demurrage". This is burnt coins I'm talking about - they should be treated "harshly", in whatever style mimics real-world mining rigs to the required fidelity. Ordinary unburnt coins are not being demurraged! [Unless that's independently desirable as a policy matter, e.g. if it's felt that network strength can only be achieved via more revenue for miners than fees alone will ever provide.]) A real mining rig likely works at full tilt for a few years and then conks out. We could demurrage each burnt coin in that style - it abruptly expires E years after its creation - but I think a smooth exponential demurrage is nicer, i.e. its burnt value after y years is b'=b*exp(-y/E). Thus h = b'/c = (b/c)*exp(-y/E). [And 1/h = c/b' = (c/b)*exp(y/E).]
Taking these two further subtleties into account, we get the following formula:
simulated-lowest-hash(t) = (c/b)*exp([t - t_burning]/E) * HASH(signatures-of-the-act-of-burning-the-b-BTC ++ block-number-of-would-be-block ++ RAND(t)).
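The full formula translates into code as follows. This is a sketch under stated assumptions: the parameter names and units (t and t_burning in seconds, E in years, c held constant as the article suggests) are illustrative choices, and HASH() is the article's SHA256(SHA256()).

```python
import hashlib, math

YEAR_SECONDS = 365.25 * 24 * 3600

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def simulated_lowest_hash(burn_sigs: bytes, b: float, c: float, E: float,
                          t: float, t_burning: float,
                          block_number: int, rand_t: bytes) -> float:
    """(c/b) * exp((t - t_burning)/E) * HASH(burn-sigs ++ block-number ++ RAND(t)).

    b: BTC burned; c: BTC per hash/s; E: rig "lifetime" in years;
    t, t_burning: times in seconds; rand_t: the RAND(t) digest bytes.
    """
    y = (t - t_burning) / YEAR_SECONDS                     # rig age in years
    payload = burn_sigs + str(block_number).encode() + rand_t
    u = int.from_bytes(sha256d(payload), "big") / 2**256   # uniform on [0, 1)
    return (c / b) * math.exp(y / E) * u                   # may exceed 1 for weak/old rigs
```

A miner wins the block at time t if this value beats the network's current target; the exponential factor is the demurrage, steadily weakening an old burn just as a real rig wears out.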
So there you have it! With this formula, life as a miner is spookily similar to the real proof-of-work case. You "buy a mining rig" - you burn coins, and that hits you in exactly the way sending off money to a chip supplier would have hit you, even though over the whole economy, no real resources have been expended - and you then hope that, by submitting lucky hashes to the network in the form of blocks, you can make more back in fees over time than you spent initially. If you don't keep connected to the network, you won't know what transactions are eligible for including in your next would-be block, and your next lucky hash will run to waste. Meanwhile, other people are "buying mining rigs" (burning coins) too, either freshly or to make up for the "wearing out" of their existing ones; and the network is adjusting its target hash value [reciprocal difficulty] to regulate the rate all this mining effort is producing blocks at, to some preferred average rate. All spookily normal, in other words!
Now, I'm being a little bit disingenuous to say that everything is normal. We need protection against certain things (use of a lucky hash on two or more competing chains; timestamp-falsification abuse) which either do not exist at all under true proof-of-work - the former - or exist but with the consequences and mitigation strategy being different in detail - the latter. I believe I have a way of standing up to the various forms of malice we need to worry about of those kinds. More to follow soon hopefully!
The key insight is that verifiably, publicly burning some coins of a known-total-stock-issued currency is the same as "remurrage" (opposite of "demurrage" - it may not be a correct word, but it's a nice back-formation) on the remainder. That is, if there are verifiably 21 million issued-and-not-burned coins, and then you go to sleep and wake up later and there are now only 20 million issued-and-not-burned coins, that's the same as if some magic genie multiplied all wake-up-time nominal bitcoin figures by 21/20. Another way to see the identity of real effect would be to redefine "burning n bitcoins" to mean, not "sending n to an unspendable address", but rather "scattering" the n bitcoins, i.e. sending them to every existing [non-zero-balance] address, in proportion to the balance [sum of unspent txouts] already stored in each. This would be a horribly gigantic transaction to actually do explicitly, but the point is, burning can be thought of "as if" done that way. (Slogan: Quantity-deflation is remurrage in disguise, in exactly the same way that quantity-inflation is demurrage in disguise.)
So, what that means is, if while you're sleeping you (a non-miner) hold 1 bitcoin purely passively, i.e. it just sits undisturbed on the blockchain, well, when you wake up, it's as if you've received a "dividend" (of 5% in the particular numeric example above) on top of the general economy-tracking price we expect of a constant-quantity currency. In a world with actual, visibly performed remurrage, this is made explicit - your balance is now 21/20, with the nominal circulation [issued-and-not-burned, and remurraged for good measure] constant at 21 million. You can go and spend the 1/20 "dividend" on some treat, and still have the same fraction (1 out of 21 million) of the money stock as when you went to sleep.
In a world without any attempt at explicit remurrage, the real facts of the situation are (of course!) the same, but their nominal expression is not perhaps so instantly obvious. Your nominal holding is unchanged at 1; but this is now 1 part in 20 million of the whole money stock, not the 1 part in 21 million it was before. So, you can go and spend 1/21 of it on some treat, taking your nominal balance down to 20/21, and that nominal balance is the same (1 out of 21 million) fraction of the money stock as when you went to sleep.
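The two bookkeeping viewpoints can be checked against each other with the article's own numbers (a sketch; the 21-million and 20-million figures are from the example above):

```python
# The stock falls from 21M to 20M issued-and-not-burned coins while you
# passively hold 1 coin.
old_stock, new_stock = 21_000_000.0, 20_000_000.0
holding = 1.0

# Explicit-remurrage world: every balance is scaled up by 21/20.
remurraged = holding * old_stock / new_stock      # 1.05
dividend = remurraged - holding                   # the 1/20 "treat"

# No-remurrage world: spend 1/21 of the nominal coin instead.
spent = holding / 21
fraction_after = (holding - spent) / new_stock

# Both leave the same 1-in-21-million share as before sleeping.
assert abs(fraction_after - holding / old_stock) < 1e-18
assert abs(dividend - 1 / 20) < 1e-12
```

Either way of doing the sums leaves the holder's share of the money stock unchanged, which is the "quantity-deflation is remurrage in disguise" slogan in miniature.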
So, basically, if you're holding bitcoins and trying to hold an "economy-tracking amount", no more and no less, you find you can go out into the market and use the fraction of your holdings that counts as "dividend above and beyond economy-tracking" on some treat or other. (Indeed, "you can go out..." should read "you must go out...", if you're really determined to pursue precisely that tracking strategy.)
Who's selling you the real resources embodied in the "treat"? And what's their motive? Well, transaction fee payers presumably like to re-stock their bitcoin real balances to roughly the same [economy-tracking] level as before, on average - they're paying for the transaction processing as a service. These fees are then burned by miners. (Well, not literally the fees themselves - the fees themselves are collected by miners, but the way they achieve this is to burn an approximately equal amount, as explained earlier.) So, ordinary Bitcoin users, to achieve their desired re-stocking, have to either produce slightly more, or consume slightly less, or a proper or improper mixture thereof, than they would have needed to in a hypothetical (presumably impracticable) alternative world where they pay no fees for their everyday transactions (and some magic mining-god just altruistically and reliably creates a blockchain out of all the transactions, without charging anybody anything). [This follows directly from their re-stocking desire, and has nothing to do with proof-of-burn specifically. That is, they have to do this regardless of whether the protocol is proof-of-work, proof-of-stake, proof-of-burn or whatever.] It's that extra gap between production and consumption - "extra" on top of whatever gap people are already choosing as a real saving[/dis-saving] strategy - that goes on to the market, for you and all other bitcoin holders to bid off the market by spending on your "treat".
This whole stable pattern of spending habits is, perhaps surprisingly, the same pattern of real resource allocation as would have happened if Cunicula's proof of stake (with close to 100% stake / 0% work admixture) had been in operation, and all bitcoin holders, large and small, had enthusiastically thrown their bitcoins (that is, their stream of bitcoin days destroyed) into the stake-claiming process. The fraction of fees they'd collect if they did that would be just like the "dividend" as I called it above - it would be like explicit remurrage, except instead of being automatic, it would require each holder's active participation (i.e. they couldn't just go to sleep, unless they left some sort of "trusted bot" running and using their private keys to sign their stream of bitcoin days destroyed). So, if you like, proof-of-burn is like automated, 100%-stakeholder-participation Cunicula-style proof-of-stake!
Incidentally, it's also fascinating to consider what happens if the community does decide a demurraging of [ordinary, unburnt] coins, the revenue being added as a coin-[re-]minting stream to the flow of fees, is necessary to continue with forever, for the sake of network strength. The amazing answer, as far as I can honestly work out, is that in long-run equilibrium, the burn rate is just such as to make hardly any of this demurrage real demurrage at all! [The implicit remurrage of burning almost exactly cancels the explicit demurrage of network-strengthening coin-[re-]minting! (Or if you like, the implicit demurrage of inflationary fresh-coin-minting.) Either way, we seem to get the possibility of amazing network strength "for free"!] This opens up the shocking possibility of turning up the demurrage/inflation rate to startlingly high levels - 5% of money stock / year, 10% of money stock / year... levels that would bring the whole cryptocurrency into disrepute in the true proof-of-work case, since coin-holders really would suffer that actual amount of explicit or implicit demurrage - and yet not hitting coin-holders with more than a frictional residual fraction of that in real terms at all, because of the way burning simulates remurrage! Extraordinary stuff! I plan to say more about this soon - but this quick teaser description should already be food for thought.
Coin-burning as a tool for transition between cryptocurrencies
Proof of burn may also be of interest as a tool for managing an orderly transition from one cryptocurrency ("oldcoin", let's call it) to another ("newcoin"). If the developers of newcoin are looking for a way of avoiding proof-of-work's real resource consumption even in newcoin's initial distribution phase, they can't use proof of newcoin-burn: newcoins don't exist yet. But they can use proof of oldcoin-burn! (Assuming their reason for creating newcoin is not a doubting of oldcoin's security model, anyway. - Or at least, not a doubting severe enough to affect sufficiently deeply buried oldcoins, these being the candidates for burning.)
The newcoin blockchain would thus start with (at least a hash referring to) a complete catalogue of all the [sufficiently deeply buried] unspent txouts of oldcoin. Miners would then exhibit burning events within oldcoin up to a certain date; after which, the protocol would switch to burning of newcoin itself (and the dependency on oldcoin could even be thrown away entirely, if a checkpoint of that transition moment was promulgated and accepted by the newcoin community).
This has the nice consequence that, if people throughout the broader economy are gradually deserting oldcoin (as newcoin catches on), its value need not collapse! Instead, oldcoin gets burnt in the transition process, neatly reducing its nominal supply in just such a way as to roughly keep pace with its declining real demand. Meanwhile, those same acts of burning are minting fresh newcoins, at just the pace required to keep up with newcoin's growing real demand. (At least, that's the case if miners anticipate the transition speed correctly, and enter / exit the coins' respective mining trades at a pace that competes away supra-normal profits. We also have to assume that the total real demand for both[/all...] cryptocurrencies is roughly stable in "economy-tracking" terms; or at least, that miners anticipate the time path of the size of total real demand, and of its currency-by-currency composition, correctly, or near enough correctly.)
To sum up: proof of burn could, just maybe, qualify as a new tool to greatly assist overall (multi-cryptocurrency) economic robustness and stability!
Earlier work suggesting a role for the burning of coins
Forum member ripper234 points out an earlier work by forum member dacoinminster suggesting coins could be burnt as one component of a broader protocol. The earlier work is discussed on StackExchange. It revolves around a centralised "trusted entity" system, and so is not directly comparable to decentralised proof-of-burn mining; but it may be of interest to some readers.