
Daily analysis of cryptocurrencies 20191002 (Market index 39 — Fear state)


David Marcus: WSJ’s Report Is Untrue, First Wave Of Libra Association To Be Formalized. As reported earlier, Visa (V.N), Mastercard Inc (MA.N) and other key financial partners may reconsider their involvement in Facebook Inc’s cryptocurrency, Libra. David Marcus, the Facebook blockchain lead who co-created Libra, responded on Twitter. He stated that the first wave of Libra Association members will be formalized in the weeks to come, that change of this magnitude is hard and requires courage, and that it will be a long journey. “For Libra to succeed, it needs committed members, and while I have no knowledge of specific organizations’ plans to not step up, commitment to the mission is more important than anything else,” he said. He added that the part of the WSJ article suggesting they weren’t on top of, or didn’t share detailed information about, how to secure Libra and protect the network against illegal activity is categorically untrue: “The tone of some of this reporting suggests angst, etc… I can tell you that we’re very calmly, and confidently working through the legitimate concerns that Libra has raised by bringing conversations about the value of digital currencies to the forefront.”
Russian Ministry Of Finance Proposes 3 Legal Categories For Cryptocurrencies. Russia’s Ministry of Finance is considering the possibility of regulating cryptocurrencies under three distinct legal categories, a move that would allow the country to regulate the industry more easily. According to Deputy Finance Minister Alexei Moiseev, the proposed categories are: technical tokens, virtual assets, and digital financial assets.
India’s Airpay Partners With InstaReM To Develop International Payments Solutions. While India’s entry into the crypto landscape seems sluggish at best, Mumbai-based Airpay Payment Services, India’s only omni-channel payments platform, has set out to create a merchant-centered international payments solution. As per the official announcement, Airpay has partnered with InstaReM, one of the world’s fastest-growing digital cross-border payments companies, to launch “FX Payment,” an international payments solution for its merchants.
Former BlackRock And Goldman Sachs Senior Executive Joins Blockchain As General Counsel. Cryptocurrency wallet provider Blockchain.com has brought on a BlackRock and Goldman Sachs veteran, Howard Surloff, as its general counsel to lead the company’s legal and compliance teams. According to Surloff’s LinkedIn, the Wall Street veteran spent 12 years at BlackRock, first serving as deputy general counsel and then leading the investment management firm’s iShares and Index division as global chief operating officer.

Encrypted project calendar(October 02, 2019)

BNB/Binance Coin: The 2019 DELTA Summit, Malta’s official blockchain and digital innovation event, will be held in Malta from October 2nd to 4th.
BTC/Bitcoin: The B.Tokyo 2019 conference will be held in Tokyo from October 2nd to 3rd.
CAPP/Cappasity: Cappasity (CAPP) will present its project at the London Science and Technology Festival, held from October 2nd to 3rd.

Encrypted project calendar(October 03, 2019)

ETC/Ethereum Classic: The 2019 Ethereum Classic (ETC) Summit will be held in Vancouver on October 3–4.
ANT/Aragon: Aragon’s (ANT) new mandatory community review period for AGPs has a deadline of October 3.

Encrypted project calendar(October 04, 2019)

KNC/Kyber Network: Kyber Network (KNC) will update the maxGasPrice parameter in the Kyber Network contract from 100 gwei to 50 gwei within 2 weeks after October 4.

Encrypted project calendar(October 05, 2019)

ONT/Ontology: Ony Ji will attend the blockchain event in Japan on October 5th and explain practical applications based on the Ontology network.
BNB/Binance Coin: The Binance Coin (BNB) Oasis Game Hackathon will be held on October 5th in Bangalore, India, hosted by Binance Labs, Matic Network, Cocos-BCX, Celer Network, and Marlin Protocol.

Encrypted project calendar(October 06, 2019)

SPND/Spendcoin: Spendcoin (SPND) will go online on October 6th.

Encrypted project calendar(October 07, 2019)

GNO/Gnosis: A Gnosis (GNO) event on the topic “Decentralized Trading Protocols Based on Ethereum” will be held in Osaka, Japan on October 7th. Kyber, Uniswap, Gnosis, and Loopring will attend and give speeches.

Encrypted project calendar(October 08, 2019)

BTC/Bitcoin: The 2nd Global Digital Mining Summit will be held in Frankfurt, Germany from October 8th to 10th.

Encrypted project calendar(October 09, 2019)

CENNZ/Centrality: Centrality (CENNZ) will take part in InsurTechNZ Connect — Insurance and Blockchain on October 9th in Auckland.

Encrypted project calendar(October 10, 2019)

INB/Insight Chain: The Insight Chain (INB) public blockchain mainnet will be launched on October 10.
VET/VeChain: VeChain (VET) will attend the BLOCKWALKS Blockchain Europe Conference on October 10.
CAPP/Cappasity: Cappasity (CAPP) will be present at the Osaka Global Innovation Forum in Osaka (October 10–11).

Encrypted project calendar(October 11, 2019)

OKB/OKB: The OKEx series of talks will be held in Istanbul on October 11th to discuss “the rise of the Turkish blockchain.”

Encrypted project calendar(October 12, 2019)

BTC/Bitcoin: The 2019 Global Mining Leaders Summit will be held in Chengdu, China from October 12th to 14th.

Encrypted project calendar(October 14, 2019)

BCH/Bitcoin Cash: The ChainPoint 19 conference will be held in Armenia from October 14th to 15th.

Encrypted project calendar(October 15, 2019)

RUFF/Ruff Token: Ruff will end its three-month early bird program on October 15th.
KAT/Kambria: Kambria (KAT) is exchanging ERC20 KAT for BEP2 KAT-7BB with a 10% bonus; the token swap reward ends on October 15.
BTC/Bitcoin: The Blockchain Technology Investment Summit (CIS) will be held in Los Angeles from October 15th to 16th.

Encrypted project calendar(October 16, 2019)

BTC/Bitcoin: The 2019 Blockchain Life Summit will be held in Moscow, Russia from October 16th to 17th.
MIOTA/IOTA: IOTA (MIOTA) will host a community event on the theme of “Technology Problem Solving and Testing IoT Devices” at the University of Southern California in Los Angeles on October 16.
ETH/Ethereum: Ethereum is scheduled to launch the Istanbul mainnet upgrade, which involves 6 code upgrades.
QTUM/Qtum: The Qtum (QTUM) mainnet hard fork is scheduled for October 16.

Encrypted project calendar(October 18, 2019)

BTC/Bitcoin: The SEC is due to make a final decision on the VanEck/SolidX Bitcoin ETF by October 18th.
HB/HeartBout: HeartBout (HB) will officially release the Android version of the HeartBout app on October 18.

Encrypted project calendar(October 19, 2019)

PI/PCHAIN Network: The PCHAIN (PI) mainnet (Phase 5, 82 nodes, 164,023,802 PI, 7 candidates) will begin on October 19.
LINK/ChainLink: Diffusion 2019 will be held in Berlin, Germany from October 19th to 20th.

Encrypted project calendar(October 21, 2019)

KNC/Kyber Network: The official online hackathon of the Kyber Network (KNC) project will end on October 21st, with more than $42,000 in prize money.

Yesterday, there was a nice upside correction in bitcoin above the $8,200 resistance against the US Dollar. The BTC/USD even settled above $8,200 and the 100 hourly simple moving average. Moreover, the price surpassed the $8,400 resistance level. However, the bulls struggled near the key $8,500 resistance area. As a result, the price topped near the $8,540 level and recently started a downward move.
It broke the $8,400 support area, plus the 23.6% Fib retracement level of the last leg from the $7,653 low to the $8,539 high. Moreover, there was a break below an ascending channel with support near $8,275 on the hourly chart of the BTC/USD pair. The pair tested the $8,200 support area and the 100 hourly simple moving average. If the downside continues, the price could test the $8,100 support area.
Additionally, the 50% Fib retracement level of the same leg is also near the $8,100 level. Any further losses may drag the price towards the $8,000 handle. The 61.8% Fib retracement level of that leg is near $7,992. If the price settles below $8,000, there could be additional losses towards the $7,500 support.
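For anyone who wants to check those levels, here is a minimal sketch (a quick Haskell one-off of mine, not part of the original analysis) of the arithmetic behind Fibonacci retracements; it reproduces the ~$8,100 (50%) and ~$7,992 (61.8%) figures cited above.

```haskell
-- Fibonacci retracement levels for the leg from the $7,653 low
-- to the $8,539 high cited in the analysis above. For an upward
-- leg, each level sits below the high by that fraction of the range.
low, high :: Double
low  = 7653
high = 8539

retracement :: Double -> Double
retracement r = high - (high - low) * r

main :: IO ()
main = mapM_ report [("23.6%", 0.236), ("50%", 0.5), ("61.8%", 0.618)]
  where
    report (label, r) =
      putStrLn (label ++ " level: " ++ show (retracement r))
-- Prints approximately: 23.6% -> 8329.9, 50% -> 8096.0, 61.8% -> 7991.5
```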
Review previous articles: https://medium.com/@to.liuwen

Telegram: https://t.me/Lay126
Twitter: https://twitter.com/mianhuai8
Facebook: https://www.facebook.com/profile.php?id=100022246432745
Reddit: https://www.reddit.com/user/liuidaxmn
LinkedIn: https://www.linkedin.com/in/liu-wei-294a12176/
submitted by liuidaxmn to u/liuidaxmn

Why Turing-complete smart contracts are doomed: "Kurt Gödel and Alan Turing both posed the same question: 'Can we find a universal procedure to prove that a mathematical theory is true or false?' They each, in their own way, answered 'NO': there exist some mathematical truths that cannot be proven."

Summary:
(1) Turing-complete languages are fundamentally inappropriate for writing "smart contracts" - because such languages are inherently undecidable, which makes it impossible to know what a "smart contract" will do before running it.
(2) We should learn from Wall Street's existing DSLs (domain-specific languages) for financial products and smart contracts, based on declarative and functional languages such as Ocaml and Haskell - instead of doing what the Web 2.0 "brogrammers" behind Solidity did, and what Peter Todd is also apparently embarking upon: ie, ignoring the lessons that Wall Street has already learned, and "reinventing the wheel", using less-suitable languages such as C++ and JavaScript-like languages (Solidity), simply because they seem "easier" for the "masses" to use.
(3) We should also consider using specification languages (to say what a contract does) along with implementation languages (saying how it should do it) - because specifications are higher-level and easier for people to read than implementations which are lower-level meant for machines to run - and also because ecosystems of specification/implementation language pairs (such as Coq/Ocaml) support formal reasoning and verification tools which could be used to mathematically prove that a smart contract's implementation is "correct" (ie, it satisfies its specification) before even running it.
Details:
http://www.otherpress.com/features/alan-turing-kurt-godel-two-asymptotic-destinies/
https://duckduckgo.com/?q=turing+g%C3%B6del+undecidability&ia=web
https://duckduckgo.com/?q=turing+g%C3%B6del+incompleteness&ia=web
https://duckduckgo.com/?q=%22turing+complete%22+%22g%C3%B6del+incompleteness%22&ia=web
When I have more time later, I will hopefully be able to write up a more gentle introduction on all this stuff, providing more explanations, motivations, and examples for laypersons who are interested in getting a feel for the deep subtle mathematical implications at play here in these emerging "language design wars" around recent proposals to add "smart contracts" to cryptocurrencies.
Right now I'm just providing this quick heads-up / reminder / warning, alluded to in the title of the OP, with some more pointers to the literature in the links above.
People who already do have a deep understanding of mathematics and its history will get the message right away - by recalling the crisis in the foundations of mathematics which occurred in the early 1900s, involving concepts like Russell's paradox, Gödel's incompleteness theorem, undecidability, Turing completeness, etc.
Turing-complete languages lead to "undecidable" programs (ie, you cannot figure out what they will do until after you run them)
One hint: recall that Gödel's incompleteness theorem proved that any mathematical system which is (Turing)-complete must also be incomplete [hat tip] - that is, in any such system, it must be possible to formulate propositions which are undecidable within that system.
This is related to things like the Halting Problem.
And by the way, Ethereum's concept of "gas" is not a real solution to the Halting Problem: Yes, running out of "gas" means that the machine will "stop" eventually, but this naïve approach does not overcome the more fundamental problems regarding undecidability of programs written using a Turing-complete language.
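To make that concrete, here is a tiny illustrative sketch (my example, in Haskell - not anything from Ethereum): an innocent-looking recursive function whose termination on all inputs is a famous open problem (the Collatz conjecture). A gas limit will certainly make it stop, but it cannot tell you, by inspecting the code, what the function was going to do.

```haskell
-- Whether this recursion terminates for every positive n is the
-- Collatz conjecture - an open problem in mathematics. A "gas"
-- limit merely aborts the computation; it does not tell you, in
-- advance, what the program would have computed.
collatzSteps :: Integer -> Integer
collatzSteps 1 = 0
collatzSteps n
  | even n    = 1 + collatzSteps (n `div` 2)
  | otherwise = 1 + collatzSteps (3 * n + 1)

main :: IO ()
main = print (collatzSteps 27)  -- this one happens to terminate: 111
```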
The take-away is that:
When using any Turing-complete language, it will always be possible for someone (eg, the DAO hacker, or some crook like Bernie Madoff, or some well-meaning but clueless dev from slock.it) to formulate a "smart contract" whose meaning cannot be determined in advance by merely inspecting the code: ie, it will always be possible to write a smart contract whose meaning can only be determined after running the code.
Take a moment to contemplate the full, deep (and horrifying) implications of all this.
Some of the greatest mathematicians and computer scientists of the 20th century already discovered and definitively proved (much to the consternation of most of their less-sophisticated (naïve) colleagues - who nevertheless eventually were forced to come around and begrudgingly agree with them) that any sufficiently powerful formal system contains propositions - and hence programs - which are undecidable within that system.
The horrifying conclusion is that any "smart contract" written in a Turing-complete language inherits this undecidability: you cannot always determine what it will do without running it.
This all is based on a very, very deep result of mathematics (Gödel's Incompleteness Theorem, as referenced in some of the links above) - which even many mathematicians themselves had a hard time understanding and accepting.
And it is also very, very common for programmers to not understand or accept this deep mathematical result.
Most programmers do not understand the implications of Gödel's incompleteness theorem on Turing-complete languages
As a mathematician first, and a programmer second, I can confirm from my own experience that most programmers do not understand this important mathematical history at all, nor its implications - it is simply too subtle or too foreign for them to grasp.
Their understanding of computing is childish, naïve, and simplistic.
They simply view a computer as a marvelous machine which can execute a sequence of instructions in some language (and please note that, for them, that language usually happens to simply "come with" the machine, so they unquestioningly accept whatever language that happens to be - ie, they almost never dive deeper into the subtle concepts of "language design" itself - a specialized area of theoretical computer science which few of them ever think about).
Paradigms lost
As we've seen, time after time, this failure of most programmers to contemplate the deeper implications of "language design" has led to the familiar litany of disasters and "learning experiences" where programmers have slowly abandoned one "programming paradigm" and moved on to the next, after learning (through bitter experience) certain hard facts and unpleasant, non-intuitive realities which initially escaped their attention when they were simply enjoying the naïve thrill of programming.
Today, in cryptocurrencies, we are seeing this sad history repeat itself, with plenty of examples of programmers who don't understand these subtle concepts involving the foundations of mathematics - specifically, the mathematical fact (Gödel's Incompleteness Theorem) that any logical system or language which is "powerful" enough to be "Turing complete" must also be incomplete - ie, it must admit undecidable propositions.
The naïve Ethereum people think they've cleverly sidestepped this with the notion of "gas" but actually all they're doing is cheating with this messy kludge: because simply saying "we'll arbitrarily make the program stop running at some point" does not make "smart contracts" written in Ethereum "decidable" - as we've seen, these contracts can still blow up / go wrong in other ways before they run out of gas.
Peter Todd might also be an example of this confusion (given his history, my hunch is that he probably is - but I haven't had time to do a thorough investigation yet) - with his recent post proposing smart contracts in Bitcoin based on the lambda calculus.
Basically, the only way to avoid falling into the "Turing tar-pit" of confusing and misleading semantics / behavior and undecidability will be to use slightly more restricted languages which are carefully designed / selected to not be Turing-complete.
There are plenty of non-Turing-complete languages available to learn from.
One possibility would be to consider languages which are based on intuitionistic logic / constructivism / Martin-Löf's Type theory / Heyting Logic - which is similar to classical Boolean logic except that Heyting Logic rejects the Law of the Excluded Middle.
What all these "schools of mathematics" have in common is a more restricted and more concrete notion of "proof", supporting a safer mode of computation, where something is considered "proven" or "true" only if you can provide concrete evidence.
By the way, the word "witness" in "Segregated Witness" - meaning a proof that has been constructed, to "witness" the truth of a proposition, or the validity of a block - comes from the realm of constructivism in mathematics.
These languages are somewhat more restricted than Turing-complete languages, but they are still quite expressive and efficient enough to specify nearly any sort of financial rules or "smart contracts" which we might desire.
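As a hedged sketch of what "restricted but still expressive" can look like, here is a toy payment-expression language in Haskell (my illustration, not an existing smart-contract language): the evaluator uses only structural recursion over a finite expression tree, so every program in the language provably terminates, and its behavior is decidable by construction.

```haskell
-- A toy non-Turing-complete contract language: no loops and no
-- general recursion, only structural recursion over a finite tree,
-- so evaluating any term is guaranteed to terminate.
data Expr
  = Amount Rational            -- a constant payment amount
  | Add Expr Expr              -- combine two payments
  | Scale Rational Expr        -- scale a payment by a constant
  | IfPositive Expr Expr Expr  -- bounded branching, no iteration
  deriving Show

eval :: Expr -> Rational
eval (Amount x)         = x
eval (Add a b)          = eval a + eval b
eval (Scale k e)        = k * eval e
eval (IfPositive c t f) = if eval c > 0 then eval t else eval f

-- Example: pay 100, plus a 5% bonus when an observed value is positive.
example :: Expr
example = Add (Amount 100)
              (IfPositive (Amount 1) (Scale 0.05 (Amount 100)) (Amount 0))

main :: IO ()
main = print (eval example)  -- 105 % 1
```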
In fact, the notion of "smart contracts" is actually not new at all, and a lot of related work has already been done in this area - and, interestingly, it is based mostly on the kinds of "functional languages" which most of the developers at Core/Blockstream, and at slock.it, are not familiar with (since they are trapped in the imperative paradigm of less-safe procedural languages such as C++ and JavaScript):
Wall Street is already writing DSLs for "smart contracts" - mostly using functional languages
Check out the many, many languages for smart contracts already being used by major financial firms, and notice how most of them are functional (based on Ocaml and Haskell), and not procedural (like C++ and JavaScript):
http://dslfin.org/resources.html
https://stackoverflow.com/questions/23448/dsl-in-finance
The lesson to learn here is simple: Just because we are storing our data on a blockchain and running our code on a permissionless distributed network, does not mean that we should ignore the rich, successful history of existing related work on designing financial products and "smart contracts" which has already been happening on Wall Street using functional languages.
In fact, if we want to run "smart contracts" on a permissionless distributed concurrent parallel network (rather than on a centralized system), then it actually becomes even more important to use functional and declarative paradigms and immutable data structures supported by languages like Ocaml and Haskell, and avoid the imperative and procedural paradigms involving mutable data structures, which are almost impossible to get right in a distributed concurrent parallel architecture. (See the video "The Future is Parallel, and the Future of Parallel is Declarative" for an excellent 1-hour explanation of this).
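To give a feel for the style (a hedged sketch only, loosely inspired by the well-known "Composing Contracts" line of work in Haskell - the combinator names and the toy valuation below are mine): contracts are immutable values built from a handful of combinators, and their meaning lives in a separate, pure interpretation function - exactly the declarative, functional approach described above.

```haskell
-- A miniature contract-combinator DSL in the functional style.
data Currency = USD | EUR deriving Show

data Contract
  = Zero                    -- the empty contract
  | One Currency            -- receive one unit of a currency
  | Give Contract           -- reverse rights and obligations
  | And Contract Contract   -- hold both contracts
  | ScaleBy Double Contract -- scale a contract by a constant
  deriving Show

-- One possible "meaning": a trivial valuation (illustrative only;
-- a real DSL would plug a market/observable model in here).
value :: Contract -> Double
value Zero          = 0
value (One _)       = 1
value (Give c)      = negate (value c)
value (And a b)     = value a + value b
value (ScaleBy k c) = k * value c

-- A zero-coupon-bond-like contract: receive 100 USD.
zcb :: Contract
zcb = ScaleBy 100 (One USD)

main :: IO ()
main = print (value (And zcb (Give (ScaleBy 40 (One EUR)))))  -- 60.0
```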
Only non-Turing-complete languages support formal reasoning and verification
Basically, a language which is not Turing complete, but is instead based on the slightly more restricted "Intuitionistic Logic" or "Constructivism", satisfies an important property where it is possible to do "formal reasoning and verification" about any program written in that language.
This is what we need when dealing with financial products and smart contracts: we need to be able to know in advance "what" the program does (ie, before running it) - which can be done using tools such as formal reasoning and verification and "correctness proofs" (which are not applicable to Turing-complete languages).
Turing-complete languages for "smart contracts" are needlessly dangerous because you can't figure out in advance what they do
As the "language design wars" around cryptocurrencies and "smart contracts" begin to heat up, we must always insist on using only non-Turing-complete languages which enable us to use the tools of formal reasoning and verification to mathematically prove in advance that a "smart contract" program actually does "what" it is supposed to do.
Separating specification from implementation is essential for proving correctness
A specification stating "what the smart contract does" should ideally be spelled out separately from the implementation stating "how" it should do it.
In other words, a high-level, more compact and human-readable specification language can be used to mathematically (and in many cases (semi-)automatically) derive - and formally verify, ie, provide a mathematical correctness proof for - the low-level, hard-to-read, machine-runnable program in an implementation language, which tells the machine "how" the smart contract does what it does.
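Here is a hedged miniature of that split, in Haskell (the payout example and all names are mine): the property is the human-readable "what", the function is the machine-level "how", and QuickCheck merely tests the property on random inputs - whereas a Coq/Ocaml-style pairing would let us prove it once and for all.

```haskell
import Test.QuickCheck (quickCheck)

-- Implementation ("how"): one concrete way to split an amount
-- between two parties, clamping the share into [0,1].
payout :: Double -> Double -> (Double, Double)
payout amt share = (amt * s, amt * (1 - s))
  where s = max 0 (min 1 share)

-- Specification ("what"), stated separately as a property:
-- whatever the share, the two payouts must add back up to the
-- original amount (up to floating-point rounding).
prop_conserves :: Double -> Double -> Bool
prop_conserves amt share =
  let (a, b) = payout amt share
  in abs ((a + b) - amt) <= 1e-9 * max 1 (abs amt)

main :: IO ()
main = quickCheck prop_conserves  -- testing, not yet proving
```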
A simple list of "language design" requirements for smart contracts
So, to ensure the safety of smart contracts, the requirements for their languages should include:
(1) Our language should be non-Turing-complete - ie, it should be based instead on "Intuitionistic Logic" / "Constructivism";
(2) We should favor declarative languages (and also things like immutable data structures) - because these are the easiest to run on parallel architectures.
(3) Our toolbox should support formal reasoning and verification, allowing us to mathematically prove that a low-level machine-runnable implementation satisfies its high-level, human-readable specification before we actually run it.
Some YouTube videos for further study
There's a video discussing how declarative languages with immutable data structures (such as Haskell, which is pure functional) are a nice "fit" for parallel programming:
The Future is Parallel, and the Future of Parallel is Declarative
https://www.youtube.com/watch?v=hlyQjK1qjw8
There are also some videos about how Jane Street Capital has been successfully using the language OCaml (which includes functional, object-oriented, and imperative paradigms) to develop financial products:
Why OCaml
https://www.youtube.com/watch?v=v1CmGbOGb2I
Caml Trading
https://www.youtube.com/watch?v=hKcOkWzj0_s
Lessons from history
When I see Peter Todd writing a blog post where he embarks on informally specifying a new language for "smart contracts for Bitcoin" based on lambda calculus, it makes me shudder and recollect Greenspun's Tenth Rule, which states:
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Only now, it looks like Peter Todd is going to try to single-handedly re-implement languages like Ocaml and Haskell, and then try to build the same financial DSLs (domain-specific languages) which Wall Street already built on them.
I think a much better approach would be to show a bit more humility, and a little less of the "NIH" (not invented here) syndrome, and see what we can learn from the vast amount of existing work in this area - specifically, the DSLs (domain-specific languages) which Wall Street is already using with great success for automating financial products and smart contracts:
http://dslfin.org/resources.html
https://stackoverflow.com/questions/23448/dsl-in-finance
And remember, most of that existing work involving DSLs for financial products and smart contracts was done on top of functional languages like Ocaml and Haskell - it was not done on top of imperative languages like C++ and JavaScript (and Solidity, which is "JavaScript-like" in many ways).
There are reasons for this - and any so-called dev who ignores that vast body of existing, related work is simply a victim of too much ego and too little awareness of the fin-tech giants who have gone before him.
I'm sure Peter Todd is having a wonderful time being geek with all this - and the hordes of suck-ups and wanna-be's who slavishly worship the C++ pinheads at Core/Blockstream will be duly impressed by all his pseudo-mathematical mumbo-jumbo - but this is mere mental masturbation, if it ignores the major amount of related work that's already been done in this area.
Smart contracts for cryptocurrencies should use Wall Street's existing DSLs for financial contracts written in Ocaml and Haskell as a starting point. Eventually maybe we could also even use a language like Coq for writing specifications, and proving that the implementations satisfy the specifications. Any so-called "dev" who fails to acknowledge this previous work is simply not serious.
Ignorance is bliss, and cock-sure Peter Todd is probably merely embarking on a futile mission of hubris by trying to create all this stuff from scratch based on his limited experience as a coder coming from the procedural / imperative paradigm, apparently unaware of the decades of related work which have shown that doing provably correct parallel programming is a gargantuan, arduous challenge which may very well turn out to be insurmountable at this time.
Lord help us if this immature, ignorant vandal who wants Bitcoin to fail takes the ignorant followers of r/bitcoin and Core down the path of these so-called "smart contracts" - reinventing decades of work already done on Wall Street and in academia using Haskell and Ocaml, as they screw around with "easier" languages based on C++ and JavaScript.
Further reading
For more discussion about the emerging "language design wars" around the idea of possibly adding "smart contracts" to cryptocurrencies, here are some recent links from Reddit:
The bug which the "DAO hacker" exploited was not "merely in the DAO itself" (ie, separate from Ethereum). The bug was in Ethereum's language design itself (Solidity / EVM - Ethereum Virtual Machine) - shown by the "recursive call bug discovery" divulged (and dismissed) on slock.it last week.
https://np.reddit.com/btc/comments/4op2es/the_bug_which_the_dao_hacker_exploited_was_not/
https://np.reddit.com/ethereum/comments/4opjov/the_bug_which_the_dao_hacker_exploited_was_not/
Can we please never again put 100m in a contract without formal correctness proofs?
https://np.reddit.com/ethereum/comments/4oimok/can_we_please_never_again_put_100m_in_a_contract/
Would the smart formal methods people here mind chiming in with some calm advice on this thread?
https://np.reddit.com/haskell/comments/4ois15/would_the_smart_formal_methods_people_here_mind/
submitted by ydtm to btc

Is anyone else freaked out by this whole blocksize debate? Does anyone else find themself often agreeing with *both* sides - depending on whichever argument you happen to be reading at the moment? And do we need some better algorithms and data structures?

Why do both sides of the debate seem “right” to me?
I know, I know, a healthy debate is healthy and all - and maybe I'm just not used to the tumult and jostling which would be inevitable in a real live open major debate about something as vital as Bitcoin.
And I really do agree with the starry-eyed idealists who say Bitcoin is vital. Imperfect as it may be, it certainly does seem to represent the first real chance we've had in the past few hundred years to try to steer our civilization and our planet away from the dead-ends and disasters which our government-issued debt-based currencies keep dragging us into.
But this particular debate, about the blocksize, doesn't seem to be getting resolved at all.
Pretty much every time I read one of the long-form major arguments contributed by Bitcoin "thinkers" who I've come to respect over the past few years, this weird thing happens: I usually end up finding myself nodding my head and agreeing with whatever particular piece I'm reading!
But that should be impossible - because a lot of these people vehemently disagree!
So how can both sides sound so convincing to me, simply depending on whichever piece I currently happen to be reading?
Does anyone else feel this way? Or am I just a gullible idiot?
Just Do It?
When you first look at it or hear about it, increasing the size seems almost like a no-brainer: The "big-block" supporters say just increase the blocksize to 20 MB or 8 MB, or do some kind of scheduled or calculated regular increment which tries to take into account the capabilities of the infrastructure and the needs of the users. We do have the bandwidth and the memory to at least increase the blocksize now, they say - and we're probably gonna continue to have more bandwidth and memory in order to be able to keep increasing the blocksize for another couple decades - pretty much like everything else computer-based we've seen over the years (some of this stuff is called by names such as "Moore's Law").
On the other hand, whenever the "small-block" supporters warn about the utter catastrophe that a failed hard-fork would mean, I get totally freaked by their possible doomsday scenarios, which seem totally plausible and terrifying - so I end up feeling that the only way I'd want to go with a hard-fork would be if there was some pre-agreed "triggering" mechanism where the fork itself would only actually "switch on" and take effect provided that some "supermajority" of the network (of who? the miners? the full nodes?) had signaled (presumably via some kind of totally reliable p2p trustless software-based voting system?) that they do indeed "pre-agree" to actually adopt the pre-scheduled fork (and thereby avoid any possibility whatsoever of the precious blockchain somehow tragically splitting into two and pretty much killing this cryptocurrency off in its infancy).
So in this "conservative" scenario, I'm talking about wanting at least 95% pre-adoption agreement - not the mere 75% which I recall some proposals call for, which seems like it could easily lead to a 75/25 blockchain split.
But this time, with this long drawn-out blocksize debate, the core devs, and several other important voices who have become prominent opinion shapers over the past few years, can't seem to come to any real agreement on this.
Weird split among the devs
As far as I can see, there's this weird split: Gavin and Mike seem to be the only people among the devs who really want a major blocksize increase - and all the other devs seem to be vehemently against them.
But then on the other hand, the users seem to be overwhelmingly in favor of a major increase.
And there are meta-questions about governance, about why this didn't come out as a BIP, and about what the availability of Bitcoin XT means.
And today or yesterday there was this really cool big-blockian exponential graph based on doubling the blocksize every two years for twenty years, reminding us of the pure mathematical fact that 2^10 is indeed about 1000 - but not really addressing any of the game-theoretic points raised by the small-blockians. So a lot of the users seem to like it, but when so few devs say anything positive about it, I worry: is this just yet more exponential chart porn?
On the one hand, Gavin's and Mike's blocksize increase proposal initially seemed like a no-brainer to me.
And on the other hand, all the other devs seem to be against them. Which is weird - not what I'd initially expected at all (but maybe I'm just a fool who's seduced by exponential chart porn?).
Look, I don't mean to be rude to any of the core devs, and I don't want to come off like someone wearing a tinfoil hat - but it has to cross people's minds that the powers that be (the Fed and the other central banks and the governments that use their debt-issued money to run this world into a ditch) could very well be much more scared shitless than they're letting on. If we assume that the powers that be are using their usual playbook and tactics, then it could be worth looking at the book "Confessions of an Economic Hitman" by John Perkins, to get an idea of how they might try to attack Bitcoin. So, what I'm saying is, they do have a track record of sending in "experts" to try to derail projects and keep everyone enslaved to the Creature from Jekyll Island. I'm just saying. So, without getting ad hominem - let's just make sure that our ideas can really stand scrutiny on their own - as Nick Szabo says, we need to make sure there is "more computer science, less noise" in this debate.
When Gavin Andresen first came out with the 20 MB thing - I sat back and tried to imagine if I could download 20 MB in 10 minutes (which seems to be one of the basic mathematical and technological constraints here - right?)
I figured, "Yeah, I could download that" - even with my crappy internet connection.
And I guess the telecoms might be nice enough to continue to double our bandwidth every two years for the next couple decades – if we ask them politely?
On the other hand - I think we should be careful about entrusting the financial freedom of the world into the greedy hands of the telecoms companies - given all their shady shenanigans over the past few years in many countries. After decades of the MPAA and the FBI trying to chip away at BitTorrent, lately PirateBay has been hard to access. I would say it's quite likely that certain persons at institutions like JPMorgan and Goldman Sachs and the Fed might be very, very motivated to see Bitcoin fail - so we shouldn't be too sure about scaling plans which depend on the willingness of companies like Verizon and AT&T to double our bandwidth every two years.
Maybe the real important hardware buildout challenge for a company like 21 (and its allies such as Qualcomm) to take on now would not be "a miner in every toaster" but rather "Google Fiber Download and Upload Speeds in every Country, including China".
I think I've read all the major stuff on the blocksize debate from Gavin Andresen, Mike Hearn, Greg Maxwell, Peter Todd, Adam Back, and Jeff Garzik and several other major contributors - and, oddly enough, all their arguments seem reasonable - heck, even Luke-Jr seems reasonable to me on the blocksize debate, and I always thought he was a whackjob overly influenced by superstition and numerology - and now today I'm reading the article by Bram Cohen - the inventor of BitTorrent - and I find myself agreeing with him too!
I say to myself: What's going on with me? How can I possibly agree with all of these guys, if they all have such vehemently opposing viewpoints?
I mean, think back to the glory days of a couple of years ago, when all we were hearing was how this amazing unprecedented grassroots innovation called Bitcoin was going to benefit everyone from all walks of life, all around the world:
...basically the entire human race transacting everything into the blockchain.
(Although let me say that I think that people's focus on ideas like driverless cabs creating realtime fare markets based on supply and demand seems to be setting our sights a bit low as far as Bitcoin's abilities to correct the financial world's capital-misallocation problems which seem to have been made possible by infinite debt-based fiat. I would have hoped that a Bitcoin-based economy would solve much more noble, much more urgent capital-allocation problems than driverless taxicabs creating fare markets or refrigerators ordering milk on the internet of things. I was thinking more along the lines that Bitcoin would finally strangle dead-end debt-based deadly-toxic energy industries like fossil fuels and let profitable clean energy industries like Thorium LFTRs take over - but that's another topic. :=)
Paradoxes in the blocksize debate
Let me summarize the major paradoxes I see here:
(1) Regarding the people (the majority of the core devs) who are against a blocksize increase: Well, the small-blocks arguments do seem kinda weird, and certainly not very "populist", in the sense that: When on earth have end-users ever heard of a computer technology whose capacity didn't grow pretty much exponentially year-on-year? All the cool new technology we've had - from hard drives to RAM to bandwidth - started out pathetically tiny and grew to unimaginably huge over the past few decades - and all our software has in turn gotten massively powerful and big and complex (sometimes bloated) to take advantage of the enormous new capacity available.
But now suddenly, for the first time in the history of technology, we seem to have a majority of the devs, on a major p2p project - saying: "Let's not scale the system up. It could be dangerous. It might break the whole system (if the hard-fork fails)."
I don't know, maybe I'm missing something here, maybe someone else could enlighten me, but I don't think I've ever seen this sort of thing happen in the last few decades of the history of technology - devs arguing against scaling up p2p technology to take advantage of expected growth in infrastructure capacity.
(2) But... on the other hand... the dire warnings of the small-blockians about what could happen if a hard-fork were to fail - wow, they do seem really dire! And these guys are pretty much all heavyweight, experienced programmers and/or game theorists and/or p2p open-source project managers.
I must say, that nearly all of the long-form arguments I've read - as well as many, many of the shorter comments I've read from many users in the threads, whose names I at least have come to more-or-less recognize over the past few months and years on reddit and bitcointalk - have been amazingly impressive in their ability to analyze all aspects of the lifecycle and management of open-source software projects, bringing up lots of serious points which I could never have come up with, and which seem to come from long experience with programming and project management - as well as dealing with economics and human nature (eg, greed - the game-theory stuff).
So a lot of really smart and experienced people with major expertise in various areas ranging from programming to management to game theory to politics to economics have been making some serious, mature, compelling arguments.
But, as I've been saying, the only problem to me is: in many of these cases, these arguments are vehemently in opposition to each other! So I find myself agreeing with pretty much all of them, one by one - which means the end result is just a giant contradiction.
I mean, today we have Bram Cohen, the inventor of BitTorrent, arguing (quite cogently and convincingly to me), that it would be dangerous to increase the blocksize. And this seems to be a guy who would know a few things about scaling out a massive global p2p network - since the protocol which he invented, BitTorrent, is now apparently responsible for like a third of the traffic on the internet (and this despite the long-term concerted efforts of major evil players such as the MPAA and the FBI to shut the whole thing down).
Was the BitTorrent analogy too "glib"?
By the way - I would like to go on a slight tangent here and say that one of the main reasons why I felt so "comfortable" jumping on the Bitcoin train back a few years ago, when I first heard about it and got into it, was the whole rough analogy I saw with BitTorrent.
I remembered the perhaps paradoxical fact that when a torrent is more popular (eg, a major movie release that just came out last week), then it actually becomes faster to download. More people want it, so more people have a few pieces of it, so more people are able to get it from each other. A kind of self-correcting economic feedback loop, where more demand directly leads to more supply.
(BitTorrent manages to pull this off by essentially adding a certain structure to the file being shared, so that it's not simply like an append-only list of 1 MB blocks, but rather more like a random-access or indexed array of 1 MB chunks. Say you're downloading a film which is 700 MB. As soon as your "client" program has downloaded a single 1-MB chunk - say chunk #99 - your "client" program instantly turns into a "server" program as well - offering that chunk #99 to other clients. From my simplistic understanding, I believe the Bitcoin protocol does something similar, to provide a p2p architecture. Hence my - perhaps naïve - assumption that Bitcoin already had the right algorithms / architecture / data structure to scale.)
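Here is a hedged sketch of that chunking idea (a toy model in Haskell, not the actual BitTorrent wire protocol): the file becomes an indexed map of fixed-size chunks, so a peer holding only chunk #99 can already serve chunk #99 to other peers.

```haskell
import qualified Data.ByteString as BS
import qualified Data.Map.Strict as Map

chunkSize :: Int
chunkSize = 1024 * 1024  -- 1 MB chunks, as in the example above

-- Split a file into independently addressable, fixed-size chunks.
splitChunks :: BS.ByteString -> Map.Map Int BS.ByteString
splitChunks = Map.fromList . zip [0 ..] . go
  where
    go bs
      | BS.null bs = []
      | otherwise  = let (c, rest) = BS.splitAt chunkSize bs
                     in c : go rest

-- A peer's state is just the set of chunks it holds; as soon as one
-- chunk arrives, that chunk can be offered to other peers.
serveChunk :: Map.Map Int BS.ByteString -> Int -> Maybe BS.ByteString
serveChunk held i = Map.lookup i held
```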
The efficiency of the BitTorrent network seemed to jibe with that "network law" (Metcalfe's Law?) about fax machines. This law states that the more fax machines there are, the more valuable the network of fax machines becomes. Or the value of the network grows on the order of the square of the number of nodes.
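(A quick back-of-the-envelope for that law: a network of n nodes has n(n-1)/2 ≈ n²/2 possible pairwise connections, so the potential value grows on the order of n², while the cost of adding one more node grows only linearly.)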
This is in contrast with other technology like cars, where the more you have, the worse things get. The more cars there are, the more traffic jams you have, so things start going downhill. I guess this is because highway space is limited - after all, we can't pave over the entire countryside, and we never did get those flying cars we were promised, as David Graeber laments in a recent essay in The Baffler magazine :-)
And regarding the "stress test" supposedly happening right now in the middle of this ongoing blocksize debate, I don't know what worries me more: the fact that it apparently is taking only $5,000 to do a simple kind of DoS on the blockchain - or the fact that there are a few rumors swirling around saying that the unknown company doing the stress test shares the same physical mailing address with a "scam" company?
Or maybe we should just be worried that so much of this debate is happening on a handful of forums which are controlled by some guy named theymos who's already engaged in some pretty "contentious" or "controversial" behavior like blowing a million dollars on writing forum software (I guess he never heard that reddit.com software is open-source)?
So I worry that the great promise of "decentralization" might be more fragile than we originally thought.
Scaling
Anyways, back to Metcalfe's Law: with virtual stuff, like torrents and fax machines, the more the merrier. The more people downloading a given movie, the faster it arrives - and the more people own fax machines, the more valuable the overall fax network.
So I kindof (naïvely?) assumed that Bitcoin, being "virtual" and p2p, would somehow scale up the same magical way BitTorrrent did. I just figured that more people using it would somehow automatically make it stronger and faster.
But now a lot of devs have started talking in terms of the old "scarcity" paradigm, talking about blockspace being a "scarce resource" and talking about "fee markets" - which seems kinda scary, and antithetical to much of the earlier rhetoric we heard about Bitcoin (the stuff about supporting our favorite creators with micropayments, and the stuff about Africans using SMS to send around payments).
Look, when some asshole is in line in front of you at the cash register and he's holding up the line so they can run his credit card to buy a bag of Cheetos, we tend to get pissed off at the guy - clogging up our expensive global electronic payment infrastructure to make a two-dollar purchase. And that's on a fairly efficient centralized system - and presumably after a year or so, VISA and the guy's bank can delete or compress the transaction in their SQL databases.
Now, correct me if I'm wrong, but if some guy buys a coffee on the blockchain, or if somebody pays an online artist $1.99 for their work - then that transaction, a few bytes or so, has to live on the blockchain forever?
Or is there some "pruning" thing that gets rid of it after a while?
And this could lead to another question: Viewed from the perspective of double-entry bookkeeping, is the blockchain "world-wide ledger" more like the "balance sheet" part of accounting, i.e. a snapshot showing current assets and liabilities? Or is it more like the "cash flow" part of accounting, i.e. a journal showing historical revenues and expenses?
When I think of thousands of machines around the globe having to lug around multiple identical copies of a multi-gigabyte file containing some asshole's coffee purchase forever and ever... I feel like I'm ideologically drifting in one direction (where I'd end up also being against really cool stuff like online micropayments and Africans banking via SMS)... so I don't want to go there.
But on the other hand, when really experienced and battle-tested veterans with major experience in the world of open-source programming and project management (the "small-blockians") warn of the catastrophic consequences of a possible failed hard-fork, I get freaked out and I wonder if Bitcoin really was destined to be a settlement layer for big transactions.
Could the original programmer(s) possibly weigh in?
And I don't mean to appeal to authority - but heck, where the hell is Satoshi Nakamoto in all this? I do understand that he/she/they would want to maintain absolute anonymity - but on the other hand, I assume SN wants Bitcoin to succeed (both for the future of humanity - or at least for all the bitcoins SN allegedly holds :-) - and I understand there is a way that SN can cryptographically sign a message - and I understand that as the original developer of Bitcoin, SN had some very specific opinions about the blocksize... So I'm kinda wondering if Satoshi could weigh in from time to time. Just to help out a bit. I'm not saying "Show us a sign" like a deity or something - but damn it sure would be fascinating and possibly very helpful if Satoshi gave us his/her/their 2 satoshis worth at this really confusing juncture.
Are we using our capacity wisely?
I'm not a programming or game-theory whiz, I'm just a casual user who has tried to keep up with technology over the years.
It just seems weird to me that here we have this massive supercomputer (500 times more powerful than all the supercomputers in the world combined) doing fairly straightforward "embarrassingly parallel" number-crunching operations to secure a p2p world-wide ledger called the blockchain to keep track of a measly 2.1 quadrillion tokens spread out among a few billion addresses - and a couple of years ago you had people like Rick Falkvinge saying the blockchain would someday be supporting multi-million-dollar letters of credit for international trade and you had people like Andreas Antonopoulos saying the blockchain would someday allow billions of "unbanked" people to send remittances around the village or around the world dirt-cheap - and now suddenly in June 2015 we're talking about blockspace as a "scarce resource" and talking about "fee markets" and partially centralized, corporate-sponsored "Level 2" vaporware like Lightning Network and some mysterious company is "stress testing" or "DoS-ing" the system by throwing away a measly $5,000 and suddenly it sounds like the whole system could eventually head right back into PayPal and Western Union territory again, in terms of expensive fees.
When I got into Bitcoin, I really was heavily influenced by vague analogies with BitTorrent: I figured everyone would just have a tiny little utorrent-type program running on their machine (ie, Bitcoin-QT or Armory or Mycelium etc.).
I figured that just like anyone can host their own blog or webserver, anyone would be able to host their own bank.
Yeah, Google and Mozilla and Twitter and Facebook and WhatsApp did come along and build stuff on top of TCP/IP, so I did expect a bunch of companies to build layers on top of the Bitcoin protocol as well. But I still figured the basic unit of bitcoin client software powering the overall system would be small and personal and affordable and p2p - like a bittorrent client - or at the most, like a cheap server hosting a blog or email server.
And I figured there would be a way at the software level, at the architecture level, at the algorithmic level, at the data structure level - to let the thing scale - if not infinitely, at least fairly massively and gracefully - the same way the BitTorrent network has.
Of course, I do also understand that with BitTorrent, you're sharing a read-only object (eg, a movie) - whereas with Bitcoin, you're achieving distributed trustless consensus and appending it to a write-only (or append-only) database.
So I do understand that the problem which BitTorrent solves is much simpler than the problem which Bitcoin sets out to solve.
But still, it seems that there's got to be a way to make this thing scale. It's p2p and it's got 500 times more computing power than all the supercomputers in the world combined - and so many brilliant and motivated and inspired people want this thing to succeed! And Bitcoin could be our civilization's last chance to steer away from the oncoming debt-based ditch of disaster we seem to be driving into!
It just seems that Bitcoin has got to be able to scale somehow - and all these smart people working together should be able to come up with a solution which pretty much everyone can agree - in advance - will work.
Right? Right?
A (probably irrelevant) tangent on algorithms and architecture and data structures
I'll finally weigh in with my personal perspective - although I might be biased due to my background (which is more on the theoretical side of computer science).
My own modest - or perhaps radical - suggestion would be to ask whether we're really looking at all the best possible algorithms and architectures and data structures out there.
From this perspective, I sometimes worry that the overwhelming majority of the great minds working on the programming and game-theory stuff might come from a rather specific, shall we say "von Neumann" or "procedural" or "imperative" school of programming (ie, C and Python and Java programmers).
It seems strange to me that such a cutting-edge and important computer project would have so little participation from the great minds at the other end of the spectrum of programming paradigms - namely, the "functional" and "declarative" and "algebraic" (and co-algebraic!) worlds.
For example, I was struck in particular by statements I've seen here and there (which seemed rather hubristic or lackadaisical to me - for something as important as Bitcoin), that the specification of Bitcoin and the blockchain doesn't really exist in any form other than the reference implementation(s) (in procedural languages such as C or Python?).
Curry-Howard anyone?
I mean, many computer scientists are aware of the Curry-Howard isomorphism, which basically says that the relationship between a theorem and its proof is equivalent to the relationship between a specification and its implementation. In other words, there is a long tradition in mathematics (and in computer programming) of first stating the theorem (the specification - the "what"), and then providing its proof (the implementation - the "how").
And it's not exactly "turtles all the way down" either: a specification is generally simple and compact enough that a good programmer can usually simply visually inspect it to determine if it is indeed "correct" - something which is very difficult, if not impossible, to do with a program written in a procedural, implementation-oriented language such as C or Python or Java.
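Here is a two-line hedged illustration of Curry-Howard in Haskell (standard textbook examples, nothing Bitcoin-specific): read each type as a theorem (a specification), and any program inhabiting that type as its proof (an implementation).

```haskell
-- Theorem: from (A implies B) and A, we may conclude B (modus ponens).
-- Proof: function application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- Theorem: (A and B) implies (B and A).
-- Proof: the program that swaps the components of the pair.
andCommutes :: (a, b) -> (b, a)
andCommutes (x, y) = (y, x)
```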
So I worry that we've got this tradition, from the open-source github C/Java programming tradition, of never actually writing our "specification", and only writing the "implementation". In mission-critical military-grade programming projects (which often use languages like Ada or Maude) this is simply not allowed. It would seem that a project as mission-critical as Bitcoin - which could literally be crucial for humanity's continued survival - should also use this kind of military-grade software development approach.
And I'm not saying rewrite the implementations in these kind of theoretical languages. But it might be helpful if the C/Python/Java programmers in the Bitcoin imperative programming world could build some bridges to the Maude/Haskell/ML programmers of the functional and algebraic programming worlds to see if any kind of useful cross-pollination might take place - between specifications and implementations.
For example, the JavaFAN formal analyzer for multi-threaded Java programs (developed using tools based on the Maude language) was applied to the Remote Agent AI program aboard NASA's Deep Space 1 probe, written in Java - and it took only a few minutes using formal mathematical reasoning to detect a potential deadlock which would have occurred years later during the space mission when the damn spacecraft was already way out around Pluto.
And "the Maude-NRL (Naval Research Laboratory) Protocol Analyzer (Maude-NPA) is a tool used to provide security proofs of cryptographic protocols and to search for protocol flaws and cryptosystem attacks."
These are open-source formal reasoning tools developed by DARPA and used by NASA and the US Navy to ensure that program implementations satisfy their specifications. It would be great if some of the people involved in these kinds of projects could contribute to help ensure the security and scalability of Bitcoin.
But there is a wide abyss between the kinds of programmers who use languages like Maude and the kinds of programmers who use languages like C/Python/Java - and it can be really hard to get the two worlds to meet. There is a bit of rapprochement between these language communities in languages which might be considered as being somewhere in the middle, such as Haskell and ML. I just worry that Bitcoin might be turning into an exclusively C/Python/Java project (with the algorithms and practitioners traditionally of that community), when it could be more advantageous if it also had some people from the functional and algebraic-specification and program-verification community involved as well.
The thing is, though: the theoretical practitioners are big on "semantics" - I've heard them say stuff like "Yes, but a C / C++ program has no easily identifiable semantics". So to get them involved, you really have to first be able to talk about what your program does (specification) - before proceeding to describe how it does it (implementation). And writing high-level specifications is typically very hard using the syntax and semantics of languages like C and Java and Python - whereas specs are fairly easy to write in Maude - and not only that, they're executable, and you can state and verify properties about them - which provides for the kind of debate Nick Szabo was advocating ("more computer science, less noise").
Imagine if we had an executable algebraic specification of Bitcoin in Maude, where we could formally reason about and verify certain crucial game-theoretical properties - rather than merely hand-waving and arguing and deploying and praying.
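For flavor only, here is a drastically simplified, hedged stand-in - written in Haskell rather than Maude, with toy fields and toy rules that are my assumptions, not the real consensus rules: the point is that the rules read like a specification yet also run as a checker.

```haskell
-- A toy executable "specification" of chain validity.
data Block = Block { height :: Int, sizeBytes :: Int } deriving Show

maxBlockSize :: Int
maxBlockSize = 1000000  -- the 1 MB limit at the heart of this debate

validBlock :: Block -> Bool
validBlock b = sizeBytes b >= 0 && sizeBytes b <= maxBlockSize

-- A chain is valid when every block is valid and heights step by 1.
validChain :: [Block] -> Bool
validChain bs =
  all validBlock bs
    && and (zipWith (\x y -> height y == height x + 1) bs (drop 1 bs))

main :: IO ()
main = print (validChain [Block 0 500000, Block 1 900000])  -- True
```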
And so in the theoretical programming community you've got major research on various logics such as Girard's Linear Logic (which is resource-conscious) and Bruni and Montanari's Tile Logic (which enables "pasting" bigger systems together from smaller ones in space and time), and executable algebraic specification languages such as Meseguer's Maude (which would be perfect for game theory modeling, with its functional modules for specifying the deterministic parts of systems and its system modules for specifiying non-deterministic parts of systems, and its parameterized skeletons for sketching out the typical architectures of mobile systems, and its formal reasoning and verification tools and libraries which have been specifically applied to testing and breaking - and fixing - cryptographic protocols).
And somewhat closer to the practical hands-on world, you've got stuff like Google's MapReduce and lots of Big Data database languages developed by Google as well. And yet here we are with a mempool growing dangerously big for RAM on a single machine, and a 20-GB append-only list as our database - and not much debate on practical results from Google's Big Data databases.
(And by the way: maybe I'm totally ignorant for asking this, but I'll ask anyways: why the hell does the mempool have to stay in RAM? Couldn't it work just as well if it were stored temporarily on the hard drive?)
And you've got CalvinDB out of Yale which apparently provides an ACID layer on top of a massively distributed database.
Look, I'm just an armchair follower cheering on these projects. I can barely manage to write a query in SQL, or read through a C or Python or Java program. But I would argue two points here: (1) these languages may be too low-level and "non-formal" for writing and modeling and formally reasoning about and proving properties of mission-critical specifications - and (2) there seem to be some Big Data tools already deployed by institutions such as Google and Yale which support global petabyte-size databases on commodity boxes with nice properties such as near-real-time and ACID - and I sometimes worry that the "core devs" might be failing to review the literature (and reach out to fellow programmers) to see if there might be some formal program-verification and practical Big Data tools out there which could be applied to coming up with rock-solid, 100% consensus proposals to handle an issue such as blocksize scaling, which seems to have become much more intractable than many people might have expected.
I mean, the protocol solved the hard stuff: the elliptic-curve stuff and the Byzantine Generals stuff. How the heck can we be falling down on the comparatively "easier" stuff - like scaling the blocksize?
It just seems like defeatism to say "Well, the blockchain is already 20-30 GB and it's gonna be 20-30 TB ten years from now - and we need 10 Mbps bandwidth now and 10,000 Mbps bandwidth 20 years from now - assuming the evil Verizon and AT&T actually give us that - so let's just become a settlement platform and give up on buying coffee or banking the unbanked or doing micropayments, and let's push all that stuff into some corporate-controlled vaporware without even a whitepaper yet."
So you've got Peter Todd doing some possibly brilliant theorizing and extrapolating on the idea of "treechains" - there is a Let's Talk Bitcoin podcast from about a year ago where he sketches the rough outlines of this idea in a very inspiring, high-level way - although the specifics have yet to be hammered out. And we've got Blockstream also doing some hopeful hand-waving about the Lightning Network.
Things like Peter Todd's treechains - which may be similar to the spark in some devs' eyes called Lightning Network - are examples of the kind of algorithm or architecture which might manage to harness the massive computing power of miners and nodes in such a way that certain kinds of massive and graceful scaling become possible.
It just seems like a kind of tiny dev community working on this stuff.
Being a C or Python or Java programmer should not be a pre-req to being able to help contribute to the specification (and formal reasoning and program verification) for Bitcoin and the blockchain.
XML and UML are crap modeling and specification languages, and C and Java and Python are even worse (as specification languages - although as implementation languages, they are of course fine).
But there are serious modeling and specification languages out there, and they could be very helpful at times like this - where what we're dealing with is questions of modeling and specification (ie, "needs and requirements").
One just doesn't see the practical, hands-on world of open-source github implementation-level programmers and the academic, theoretical world of specification-level programmers meeting very often. I wish there were some way to get these two worlds to collaborate on Bitcoin.
Maybe a good first step to reach out to the theoretical people would be to provide a modular executable algebraic specification of the Bitcoin protocol in a recognized, military/NASA-grade specification language such as Maude - because that's something the theoretical community can actually wrap their heads around, whereas it's very hard to get them to pay attention to something written only as a C / Python / Java implementation (without an accompanying specification in a formal language).
They can't check whether the program does what it's supposed to do - if you don't provide a formal mathematical definition of what the program is supposed to do.
Specification : Implementation :: Theorem : Proof
You have to remember: the theoretical community is very aware of the Curry-Howard isomorphism. Just like it would be hard to get a mathematician's attention by merely showing them a proof without also telling them what theorem the proof is proving - by the same token, it's hard to get the attention of a theoretical computer scientist by merely showing them an implementation without showing them the specification that it implements.
Bitcoin is currently confronted with a mathematical or "computer science" problem: how to secure the network while getting high enough transactional throughput, while staying within the RAM, bandwidth and hard drive limitations of current and future infrastructure.
The problem only becomes a political and economic problem if we give up on trying to solve it as a mathematical and "theoretical computer science" problem.
There should be a plethora of whitepapers out now proposing algorithmic solutions to these scaling issues. Remember, all we have to do is apply the Byzantine Generals consensus-reaching procedure to a worldwide database which shuffles 2.1 quadrillion tokens among a few billion addresses. The 21 company has emphatically pointed out that racing to compute a hash to add a block is an "embarrassingly parallel" problem - very easy to decompose among cheap, fault-prone, commodity boxes, and recompose into an overall solution - along the lines of Google's highly successful MapReduce.
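To make "embarrassingly parallel" concrete, here is a minimal Haskell sketch - toyHash is an invented stand-in for the real double SHA-256, and it needs the parallel package plus -threaded to actually run in parallel. The point is that disjoint nonce ranges can be searched with zero coordination between workers, which is exactly the property MapReduce-style decomposition exploits:

    import Control.Parallel.Strategies (parMap, rdeepseq)
    import Data.List (foldl')
    import Data.Maybe (catMaybes, listToMaybe)

    -- Stand-in "hash" - NOT cryptographic, just cheap and deterministic.
    toyHash :: String -> Int -> Int
    toyHash header nonce =
      foldl' (\h c -> (h * 31 + fromEnum c) `mod` 1000003) nonce header

    -- Search one nonce range for a "hash" below the target.
    search :: String -> Int -> (Int, Int) -> Maybe Int
    search header target (lo, hi) =
      listToMaybe [ n | n <- [lo .. hi], toyHash header n < target ]

    main :: IO ()
    main = do
      let ranges = [ (k * 100000, (k + 1) * 100000 - 1) | k <- [0 .. 7] ]
          -- Each range is independent: no shared state, no messages.
          hits = parMap rdeepseq (search "block-header" 50) ranges
      print (listToMaybe (catMaybes hits))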
I guess what I'm really saying (and I don't mean to be rude here) is that C and Python and Java programmers might not be the best qualified people to develop and formally prove the correctness of (note I do not say: "test", I say "formally prove the correctness of") these kinds of algorithms.
I really believe in the importance of getting the algorithms and architectures right - look at Google Search itself, it uses some pretty brilliant algorithms and architectures (eg, MapReduce, Paxos) which enable it to achieve amazing performance - on pretty crappy commodity hardware. And look at BitTorrent, which is truly p2p, where more demand leads to more supply.
So, in this vein, I will close this lengthy rant with an oddly specific link - which may or may not be able to make some interesting contributions to finding suitable algorithms, architectures and data structures which might help Bitcoin scale massively. I have no idea if this link could be helpful - but given the near-total lack of people from the Haskell and ML and functional worlds in these Bitcoin specification debates, I thought I'd be remiss if I didn't throw it out - just in case there might be something here which could help us channel the massive computing power of the Bitcoin network in such a way as to enable us to simply sidestep this kind of desperate debate where both sides seem right because the other side seems wrong.
https://personal.cis.strath.ac.uk/neil.ghani/papers/ghani-calco07
The above paper is about "higher dimensional trees". It uses a bit of category theory (not a whole lot) and a bit of Haskell (again not a lot - just a simple data structure called a Rose tree, which has a Wikipedia page) to develop a very expressive and efficient data structure which generalizes from lists to trees to higher dimensions.
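For the curious, the Rose tree the paper builds on is tiny. Here is the standard Haskell definition, plus two toy values (invented for illustration) showing how a chain - ie, something blockchain-shaped - is just the degenerate, non-branching case of a tree:

    -- A Rose tree: each node holds a value and any number of subtrees.
    data Rose a = Node a [Rose a] deriving Show

    -- A list is the degenerate case where no node ever branches:
    chain :: Rose Int
    chain = Node 1 [Node 2 [Node 3 []]]

    -- ...and branching is what tree-like scaling proposals would add:
    branched :: Rose Int
    branched = Node 1 [Node 2 [], Node 3 [Node 4 []]]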
I have no idea if this kind of data structure could be applicable to the current scaling mess we apparently are getting bogged down in - I don't have the game-theory skills to figure it out.
I just thought that since the blockchain is like a list, and since there are some tree-like structures which have been grafted on for efficiency (eg Merkle trees) and since many of the futuristic scaling proposals seem to also involve generalizing from list-like structures (eg, the blockchain) to tree-like structures (eg, side-chains and tree-chains)... well, who knows, there might be some nugget of algorithmic or architectural or data-structure inspiration there.
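As a reminder of how one of those grafted-on tree structures actually works, here is a toy Merkle-root computation in Haskell - combine and leafHash are invented stand-ins for the real double SHA-256, but the pairwise folding (including Bitcoin's trick of duplicating the last hash at odd-sized levels) is the genuine shape of the algorithm:

    combine :: Int -> Int -> Int
    combine x y = (x * 7919 + y) `mod` 1000003   -- stand-in, not SHA-256

    leafHash :: String -> Int
    leafHash = foldl (\h c -> (h * 31 + fromEnum c) `mod` 1000003) 7

    -- Pairwise-combine hashes level by level until one root remains.
    merkleRoot :: [String] -> Int
    merkleRoot []  = 0                -- arbitrary sentinel for an empty list
    merkleRoot txs = go (map leafHash txs)
      where
        go [h] = h
        go hs  = go (pair hs)
        pair (x:y:rest) = combine x y : pair rest
        pair [x]        = [combine x x]  -- odd count: duplicate the last hash
        pair []         = []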
So... TL;DR:
(1) I'm freaked out that this blocksize debate has splintered the community so badly and dragged on so long, with no resolution in sight, and both sides seeming so right (because the other side seems so wrong).
(2) I think Bitcoin could gain immensely by using high-level formal, algebraic and co-algebraic program specification and verification languages (such as Maude, including Maude-NPA, Mobile Maude, parameterized skeletons, etc.) to specify (and possibly also, to some degree, verify) what Bitcoin does - before translating to low-level implementation languages such as C and Python and Java which say how Bitcoin does it. This would help us communicate and reason about programs with much more mathematical certitude - and possibly obviate the need for many political and economic tradeoffs which currently seem dismally inevitable - and possibly widen the collaboration on this project.
(3) I wonder if there are some Big Data approaches out there (eg, along the lines of Google's MapReduce and BigTable, or Yale's CalvinDB), which could be implemented to allow Bitcoin to scale massively and painlessly - and to satisfy all stakeholders, ranging from millionaires to micropayments, coffee drinkers to the great "unbanked".
submitted by BeYourOwnBank to Bitcoin

Why Turing-complete smart contracts are doomed: "Kurt Gödel and Alan Turing both posed the same question: 'Can we find a universal procedure to prove that a mathematical theory is true or false?' They each, in their own way, answered 'NO': there exist some mathematical truths that cannot be proven."

(Cross-post from btc.)
Summary:
(1) Turing-complete languages are fundamentally inappropriate for writing "smart contracts" - because such languages are inherently undecidable, which makes it impossible to know, in general, what a "smart contract" will do before running it.
(2) We should learn from Wall Street's existing DSLs (domain-specific languages) for financial products and smart contracts, based on declarative and functional languages such as Ocaml and Haskell - instead of doing what the Web 2.0 "brogrammers" behind Solidity did, and what Peter Todd is also apparently embarking upon: ie, ignoring the lessons that Wall Street has already learned, and "reinventing the wheel", using less-suitable languages such as C++ and JavaScript-like languages (Solidity), simply because they seem "easier" for the "masses" to use.
(3) We should also consider using specification languages (to say what a contract does) along with implementation languages (saying how it should do it) - because specifications are higher-level and easier for people to read than implementations, which are lower-level and meant for machines to run - and also because ecosystems of specification/implementation language pairs (such as Coq/Ocaml) support formal reasoning and verification tools which could be used to mathematically prove that a smart contract's implementation is "correct" (ie, that it satisfies its specification) before even running it.
Details:
http://www.otherpress.com/features/alan-turing-kurt-godel-two-asymptotic-destinies/
https://duckduckgo.com/?q=turing+g%C3%B6del+undecidability&ia=web
https://duckduckgo.com/?q=turing+g%C3%B6del+incompleteness&ia=web
https://duckduckgo.com/?q=%22turing+complete%22+%22g%C3%B6del+incompleteness%22&ia=web
When I have more time later, I will hopefully be able to write up a more gentle introduction on all this stuff, providing more explanations, motivations, and examples for laypersons who are interested in getting a feel for the deep subtle mathematical implications at play here in these emerging "language design wars" around recent proposals to add "smart contracts" to cryptocurrencies.
Right now I'm just providing this quick heads-up / reminder / warning, alluded to in the title of the OP, with some more pointers to the literature in the links above.
People who already do have a deep understanding of mathematics and its history will get the message right away - by recalling the crisis in the foundations of mathematics which occurred in the early 1900s, involving concepts like Russell's paradox, Gödel's incompleteness theorem, undecidability, Turing completeness, etc.
Turing-complete languages lead to "undecidable" programs (ie, you cannot figure out what they do until after you run them)
One hint: recall that Gödel's incompleteness theorem proved that any consistent mathematical system powerful enough to express arithmetic (and hence any Turing-complete system) must be incomplete - that is, in any such system, it must be possible to formulate propositions which are undecidable within that system.
This is related to things like the Halting Problem.
And by the way, Ethereum's concept of "gas" is not a real solution to the Halting Problem: Yes, running out of "gas" means that the machine will "stop" eventually, but this naïve approach does not overcome the more fundamental problems regarding undecidability of programs written using a Turing-complete language.
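Here is a minimal Haskell sketch of that point - the instruction set below is invented and vastly simpler than the EVM. A gas meter guarantees that the interpreter halts, but the result Left "out of gas" tells you nothing about what the un-metered contract would eventually have done - and that question is precisely the undecidable one:

    data Instr = Push Int | Add | Jmp Int | Halt deriving Show

    -- Gas-metered interpreter: always terminates, by construction.
    runWithGas :: Int -> [Instr] -> Int -> [Int] -> Either String [Int]
    runWithGas gas prog pc stack
      | gas <= 0                    = Left "out of gas"
      | pc < 0 || pc >= length prog = Left "bad jump"
      | otherwise = case prog !! pc of
          Halt   -> Right stack
          Push n -> runWithGas (gas - 1) prog (pc + 1) (n : stack)
          Add    -> case stack of
                      (x:y:rest) -> runWithGas (gas - 1) prog (pc + 1) (x + y : rest)
                      _          -> Left "stack underflow"
          Jmp t  -> runWithGas (gas - 1) prog t stack

    -- For every gas budget g, runWithGas g loop 0 [] = Left "out of gas" -
    -- and nothing in that result distinguishes "slow" from "never halts".
    loop :: [Instr]
    loop = [Jmp 0]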
The take-away is that:
When using any Turing-complete language, it will always be possible for someone (eg, the DAO hacker, or some crook like Bernie Madoff, or some well-meaning but clueless dev from slock.it) to formulate a "smart contract" whose meaning cannot be determined in advance by merely inspecting the code: ie, it will always be possible to write a smart contract whose meaning can only be determined after running the code.
Take a moment to contemplate the full, deep (and horrifying) implications of all this.
Some of the greatest mathematicians and computer scientists of the 20th century already discovered and definitively proved (much to the consternation of their less-sophisticated colleagues - who nevertheless eventually were forced to come around and begrudgingly agree with them) that no universal procedure exists for deciding the truth of every mathematical proposition - or, equivalently, the behavior of every program.
The horrifying conclusion is that this applies directly to "smart contracts": for contracts written in a Turing-complete language, there is no general procedure for determining in advance what a given contract will do.
This all is based on a very, very deep result of mathematics (Gödel's Incompleteness Theorem, as referenced in some of the links above) - which even many mathematicians themselves had a hard time understanding and accepting.
And it is also very, very common for programmers to not understand or accept this deep mathematical result.
Most programmers do not understand the implications of Gödel's incompleteness theorem on Turing-complete languages
As a mathematician first, and a programmer second, I can confirm from my own experience that most programmers do not understand this important mathematical history at all, and its implications - it is simply too subtle or too foreign for them to grasp.
Their understanding of computing is childish, naïve, and simplistic.
They simply view a computer as a marvelous machine which can execute a sequence of instructions in some language (and please note that, for them, that language usually happens to simply "come with" the machine, so they unquestioningly accept whatever language that happens to be - ie, they almost never dive deeper into the subtle concepts of "language design" itself - a specialized area of theoretical computer science which few of them ever think about).
Paradigms lost
As we've seen, time after time, this failure of most programmers to contemplate the deeper implications of "language design" has led to the familiar litany of disasters and "learning experiences" where programmers have slowly abandoned one "programming paradigm" and moved on to the next, after learning (through bitter experience) certain hard facts and unpleasant, non-intuitive realities which initially escaped their attention when they were simply enjoying the naïve thrill of programming.
Today, in cryptocurrencies, we are seeing this sad history repeat itself, with plenty of examples of programmers who don't understand these subtle concepts involving the foundations of mathematics - specifically, the mathematical fact (Gödel's Incompleteness Theorem) that any logical system or language which is "powerful" enough to be "Turing complete" must also be incomplete - ie, it must admit propositions (and programs) which are undecidable within the system.
The naïve Ethereum people think they've cleverly sidestepped this with the notion of "gas", but actually all they're doing is cheating with a messy kludge: simply saying "we'll arbitrarily make the program stop running at some point" does not make "smart contracts" written in Ethereum "decidable" - as we've seen, these contracts can still blow up / go wrong in other ways before they run out of gas.
Peter Todd might also be an example of this confusion (given his history, my hunch is that he probably is - but I haven't had time to do a thorough investigation yet) - with his recent post proposing smart contracts in Bitcoin based on the lambda calculus.
Basically, the only way to avoid falling into the "Turing tar-pit" of confusing and misleading semantics and undecidable behavior will be to use slightly more restricted languages which are carefully designed / selected to not be Turing-complete.
There are plenty of non-Turing-complete languages available to learn from.
One possibility would be to consider languages which are based on intuitionistic logic / constructivism / Martin-Löf's type theory / Heyting logic - which is similar to classical Boolean logic except that Heyting logic rejects the Law of the Excluded Middle.
What all these "schools of mathematics" have in common is a more restricted and more concrete notion of "proof", supporting a safer mode of computation, where something is considered "proven" or "true" only if you can provide concrete evidence.
By the way, the word "witness" in "Segregated Witness" - meaning a proof that has been constructed, to "witness" the truth of a proposition, or the validity of a block - comes from the realm of constructivism in mathematics.
These languages are somewhat more restricted than Turing-complete languages, but they are still quite expressive and efficient enough to specify nearly any sort of financial rules or "smart contracts" which we might desire.
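A tiny Haskell sketch of what such a restricted language can look like - the constructors below are invented, loosely echoing the combinator-style DSLs discussed in the next section. There are no loops and no general recursion: evaluation recurses only on strictly smaller subterms, so computing any contract's value terminates, by construction:

    -- A toy non-Turing-complete contract language.
    data Contract
      = Pay Integer                    -- transfer a fixed amount
      | Both Contract Contract         -- perform both sub-contracts
      | IfAfter Int Contract Contract  -- time-locked branch

    -- Total by structural recursion: one case per constructor,
    -- recursive calls only on subterms, so this always terminates.
    valueAt :: Int -> Contract -> Integer
    valueAt _ (Pay n)         = n
    valueAt t (Both a b)      = valueAt t a + valueAt t b
    valueAt t (IfAfter d a b) = if t >= d then valueAt t a else valueAt t b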
In fact, the notion of "smart contracts" is actually not new at all, and a lot of related work has already been done in this area - and, interestingly, it is based mostly on the kinds of "functional languages" which most of the developers at Core/Blockstream, and at slock.it, are not familiar with (since they are trapped in the imperative paradigm of less-safe procedural languages such as C++ and JavaScript):
Wall Street is already writing DSLs for "smart contracts" - mostly using functional languages
Check out the many, many languages for smart contracts already being used by major financial firms, and notice how most of them are functional (based on Ocaml and Haskell), and not procedural (like C++ and JavaScript):
http://dslfin.org/resources.html
https://stackoverflow.com/questions/23448/dsl-in-finance
The lesson to learn here is simple: Just because we are storing our data on a blockchain and running our code on a permissionless distributed network, does not mean that we should ignore the rich, successful history of existing related work on designing financial products and "smart contracts" which has already been happening on Wall Street using functional languages.
In fact, if we want to run "smart contracts" on a permissionless distributed concurrent parallel network (rather than on a centralized system), then it actually becomes even more important to use functional and declarative paradigms and immutable data structures supported by languages like Ocaml and Haskell, and avoid the imperative and procedural paradigms involving mutable data structures, which are almost impossible to get right in a distributed concurrent parallel architecture. (See the video "The Future is Parallel, and the Future of Parallel is Declarative" for an excellent 1-hour explanation of this).
Only non-Turing-complete languages support formal reasoning and verification
Basically, a language which is not Turing-complete, but is instead based on the slightly more restricted "Intuitionistic Logic" or "Constructivism", satisfies an important property: it is possible to do "formal reasoning and verification" about any program written in that language.
This is what we need when dealing with financial products and smart contracts: we need to be able to know in advance "what" the program does (ie, before running it) - which can be done using tools such as formal reasoning and verification and "correctness proofs" (which are not applicable to Turing-complete languages).
Turing-complete languages for "smart contracts" are needlessly dangerous because you can't figure out in advance what they do
As the "language design wars" around cryptocurrencies and "smart contracts" begin to heat up, we must always insist on using only non-Turing-complete languages which enable us to use the tools of formal reasoning and verification to mathematically prove in advance that a "smart contract" program actually does "what" it is supposed to do.
Separating specification from implementation is essential for proving correctness
A specification stating "what the smart contract does" should ideally be spelled out separately from the implementation stating "how" it should do it.
In other words, a high-level, more compact and human-readable specification language can be used to mathematically (and in many cases (semi-)automatically) derive - and formally verify, ie, provide a mathematical correctness proof for - the low-level, hard-to-read, machine-runnable program in an implementation language, which tells the machine how the smart contract does what it does.
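A full Coq/Ocaml pipeline is beyond the scope of a post like this, but the shape of the idea fits in a few lines of Haskell - the example below is deliberately trivial and invented, and QuickCheck only samples the claim rather than proving it, but it shows the division of labor: a slow, obviously-right specification, a fast implementation, and a machine-checkable statement that the two agree:

    import Test.QuickCheck (NonNegative (..), quickCheck)

    -- Specification ("what"): the obvious, transparent definition.
    specSum :: Integer -> Integer
    specSum n = sum [0 .. n]

    -- Implementation ("how"): the closed form a machine should run.
    fastSum :: Integer -> Integer
    fastSum n = n * (n + 1) `div` 2

    -- QuickCheck samples this claim; a Coq proof would cover all n.
    main :: IO ()
    main = quickCheck (\(NonNegative n) -> fastSum n == specSum n)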
A simple list of "language design" requirements for smart contracts
So, if we care about the safety of smart contracts, the requirements for a smart-contract language should include:
(1) Our language should be non-Turing-complete - ie, it should be based instead on "Intuitionistic Logic" / "Constructivism";
(2) We should favor declarative languages (and also things like immutable data structures) - because these are the easiest to run on parallel architectures.
(3) Our toolbox should support formal reasoning and verification, allowing us to mathematically prove that a low-level machine-runnable implementation satisfies its high-level, human-readable specification before we actually run it.
Some YouTube videos for further study
There's a video discussing how declarative languages with immutable data structures (such as Haskell, which is purely functional) are a nice "fit" for parallel programming:
The Future is Parallel, and the Future of Parallel is Declarative
https://www.youtube.com/watch?v=hlyQjK1qjw8
There are also some videos about how Jane Street Capital has been successfully using the language OCaml (which includes functional, object-oriented, and imperative paradigms) to develop financial products:
Why OCaml
https://www.youtube.com/watch?v=v1CmGbOGb2I
Caml Trading
https://www.youtube.com/watch?v=hKcOkWzj0_s
Lessons from history
When I see Peter Todd writing a blog post where he embarks on informally specifying a new language for "smart contracts for Bitcoin" based on lambda calculus, it makes me shudder and recollect Greenspun's Tenth Rule, which states:
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Only now, it looks like Peter Todd is going to try to single-handedly re-implement languages like Ocaml and Haskell, and then try to build the same financial DSLs (domain-specific languages) which Wall Street already built on them.
I think a much better approach would be to show a bit more humility, and a little less of the "NIH" (not invented here) syndrome, and see what we can learn from the vast amount of existing work in this area - specifically, the DSLs (domain-specific languages) which Wall Street is already using with great success for automating financial products and smart contracts:
http://dslfin.org/resources.html
https://stackoverflow.com/questions/23448/dsl-in-finance
And remember, most of that existing work involving DSLs for financial products and smart contracts was done on top of functional languages like Ocaml and Haskell - it was not done on top of imperative languages like C++ and JavaScript (and Solidity, which is "JavaScript-like" in many ways).
There are reasons for this - and any so-called dev who ignores that vast body of existing, related work is simply a victim of too much ego and too little awareness of the fin-tech giants who have gone before him.
I'm sure Peter Todd is having a wonderful time being a geek with all this - and the hordes of suck-ups and wanna-bes who slavishly worship the C++ pinheads at Core/Blockstream will be duly impressed by all his pseudo-mathematical mumbo-jumbo - but this is mere mental masturbation if it ignores the major amount of related work that's already been done in this area.
Smart contracts for cryptocurrencies should use Wall Street's existing DSLs for financial contracts, written in Ocaml and Haskell, as a starting point. Eventually maybe we could also even use a language like Coq for writing specifications, and proving that the implementations satisfy the specifications. Any so-called "dev" who fails to acknowledge this previous work is simply not serious.
Ignorance is bliss, and cock-sure Peter Todd is probably merely embarking on a futile mission of hubris by trying to create all this stuff from scratch, based on his limited experience as a coder coming from the procedural / imperative paradigm, apparently unaware of the decades of related work which have shown that doing provably correct parallel programming is a gargantuan, arduous challenge which may very well turn out to be insurmountable at this time.
Lord help us if this immature, ignorant vandal who wants Bitcoin to fail takes the ignorant followers of r/bitcoin and Core down the path of these so-called "smart contracts" - reinventing decades of work already done on Wall Street and in academia using Haskell and Ocaml, as they screw around with "easier" languages based on C++ and JavaScript.
Further reading
For more discussion about the emerging "language design wars" around the idea of possibly adding "smart contracts" to cryptocurrencies, here are some recent links from Reddit:
The bug which the "DAO hacker" exploited was not "merely in the DAO itself" (ie, separate from Ethereum). The bug was in Ethereum's language design itself (Solidity / EVM - Ethereum Virtual Machine) - shown by the "recursive call bug discovery" divulged (and dismissed) on slock.it last week.
https://np.reddit.com/btc/comments/4op2es/the_bug_which_the_dao_hacker_exploited_was_not/
https://np.reddit.com/ethereum/comments/4opjov/the_bug_which_the_dao_hacker_exploited_was_not/
Can we please never again put 100m in a contract without formal correctness proofs?
https://np.reddit.com/ethereum/comments/4oimok/can_we_please_never_again_put_100m_in_a_contract/
Would the smart formal methods people here mind chiming in with some calm advice on this thread?
https://np.reddit.com/haskell/comments/4ois15/would_the_smart_formal_methods_people_here_mind/
submitted by ydtm to ethereum
