The Greek Crisis, Money, and Bitcoin

Over the past week or two, there has been a nontrivial amount of commentary on Bitcoin and Greece. Many commenters seem to think it is a prime opportunity for Bitcoin to go mainstream, or even go so far as to be adopted as an official Greek currency. Others are much more skeptical, or downright derisive of the idea. I’ve discussed Bitcoin before (here, here, and here for instance). If you’ve read those posts, you’ll have a fair idea where I’m going to fall on this particular issue.

Tips for Canvassers and Peddlers

We’ve all experienced this situation. You’re lounging comfortably in your easy chair watching some inane show on the television and generally relaxing. The doorbell rings. Your first thought is to ignore it, but you realize that your curtains are open so your caller knows you are home. So you groan, struggle to your feet, stumble to the door and open it, hoping it’s something important like the local constabulary enforcing an evacuation order you were not previously aware of. Alas, before you is a generic person wearing a badge of some kind and carrying some sort of clipboard. As soon as you open the door, he launches into his spiel.

Bitcoins and Currency

Much has been made of Bitcoins and other so-called digital cryptocurrencies. I remember how, in the early days, Bitcoin was supposed to be the cure for all the woes of modern fiat currencies and the central banking system. Even at the time, I failed to see the logic in that claim. I should mention that I have profited some from Bitcoins, mostly by the fluke of having mined some when it was trivial to do so.

Before I get into Bitcoins, however, I should define a few terms. First, a fiat currency is one whose tokens have no real intrinsic value, or at most a negligible one. The only reason a fiat currency has value is that some authority mandates its use for some process that is not optional. In other words, its value exists by fiat. The dollar (take your pick which one) has no intrinsic value – the coins and notes used to represent dollars and cents are merely score-keeping tokens made of base metals or paper or what have you. Furthermore, most of the dollars that circulate never actually physically exist in this day of digital transactions.

Central banks, on the other hand, are both more complicated and simpler at the same time. Wikipedia has a reasonable write-up on central banks over here. It’s easy to get bogged down in the details, though. Roughly, the central bank usually oversees the operation of commercial banks, manages interest rates (whatever that means, really), and is often responsible for producing the national currency.

Neither fiat currency nor central banks are problematic on their own, regardless of what many fearmongers would have you believe. A fiat currency is arguably better than a commodity currency (gold, silver, copper, leaves, etc.) as long as there are sensible controls on how much is circulating. Central banks serve an important mediation role between commercial banks and also act as a banker for the government.

Where the problem comes in is fractional reserve banking. I do not use the term “lending” because you cannot lend what you do not have. Basically, the fractional reserve system is an institutionalized fraud which allows commercial banks to “lend” money they do not have so long as they have a sufficient reserve level. This has the side effect of multiplying the money supply well beyond the amount put in circulation by the central bank. This sounds wonderful – more money is good, right? Wrong. For a good description of what is wrong with this system, head on over to Positive Money. Their take is UK centric but their illustration of the problem applies everywhere.
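
To give a feel for the scale of the effect, here is a toy version of the textbook deposit-multiplier arithmetic. The numbers are invented for illustration, and Positive Money would quibble with the mechanics, but the point is the same: the banks end up supporting far more money than the central bank put in.

double reserve_ratio = 0.10;      /* banks must hold back 10% of deposits   */
double base_money    = 1000.0;    /* money originally issued centrally      */
double deposits      = 0.0;       /* total deposits created so far          */
double lendable      = base_money;

while (lendable > 0.01)           /* stop once the remainder is negligible  */
{
    deposits += lendable;                 /* a new deposit appears           */
    lendable *= (1.0 - reserve_ratio);    /* bank keeps 10%, "lends" the rest */
}
/* deposits ends up at roughly base_money / reserve_ratio, i.e. about 10000 */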

The practical upshot of the current money system is that without continual inflation, the economy will completely fall over dead because there will not be enough money to pay the interest on loans and therefore borrowers will default. As borrowers default, the money supply eventually starts shrinking, making it harder and harder for the remaining borrowers to pay their obligations. You end up with a downward spiral of defaults leading to a reduced money supply, deflation, hoarding, and other problematic situations. In other words, the economy becomes illiquid and stalls. By continually increasing the supply of money circulating, this default spiral can be staved off. Positive Money notes how this is problematic in the current system, in which the commercial banks are chiefly responsible for the total money supply.

The problem with inflation, however, is that it is basically stealing from the future to pay for the present. The dollar I put in a safe today will be worth far less in real value when I take it out in thirty years. At an annual inflation rate of two percent, that dollar will lose nearly half its real value over thirty years. Put another way, what you could buy today for $1.00 will cost you $1.81 in thirty years if inflation stays constant at two percent. If you start playing around with the mathematics and plug in the underlying cost of interest owed to the commercial banks, the implications become frighteningly clear. The current system is not sustainable and it is amazing it has not totally collapsed already.
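
If you want to check the arithmetic yourself, it is just compounding:

double price = 1.00;                  /* what one dollar buys today         */
for (int year = 0; year < 30; year++)
{
    price *= 1.02;                    /* 2% inflation each year             */
}
/* price is now about 1.81, so the dollar that went into the safe comes    */
/* out buying only about 55% of what it did when it went in.               */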

Enter Bitcoin. The creators of Bitcoin looked at the current situation and observed that inflation is bad and that we clearly need a currency with some sort of built-in hedge against inflation. What if we create a currency that has an absolute upper bound on the number of currency units that will ever be created, and prevent any central agency from having any control over the system?

That particular notion would make perfect sense in a steady state economy. A steady state economy is one where economic activity is neither growing nor declining. This is, of course, the only economic state that is actually sustainable, but that’s not the point of this discussion.

In the real world, economic activity is growing, on average, and it will continue to do so as long as the population continues to increase and demand for “stuff” continues.  That means, if you have a fixed currency supply, there will always be more and more stuff being chased by the same number of currency units. As the level of activity increases, the average prices must decrease if the currency available does not also increase. In other words, this situation ultimately has deflation built into it. While this is not necessarily bad in itself, it does cause some behaviour that is less helpful. Hoarding currency becomes profitable as it becomes more valuable over time, which, in turn, further reduces the currency supply, leading to further deflation. In short, even without a shrinking money supply (such as in the case of the default cycle mentioned above), the economy will eventually become illiquid.
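
The arithmetic behind that claim is trivial, but it is worth spelling out. Numbers here are invented purely for illustration:

double currency_units = 10000.0;                   /* fixed forever         */
double goods          = 1000.0;                    /* stuff being traded    */
double average_price  = currency_units / goods;    /* 10.0 per unit today   */

goods *= 2.0;                                      /* economic activity doubles */
average_price = currency_units / goods;            /* now 5.0: built-in deflation */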

The situation just described is exactly the situation with Bitcoin. What the designers of Bitcoin have created is not a currency but a scarce commodity akin to gold or silver. The same reason we abandoned gold and silver as currency units makes Bitcoin unsuitable as a general purpose currency. However, like gold or silver, it is useful for single transactions at a specific moment in time, and, potentially, as an inflation hedge.

In other words, while I do hold a small number of Bitcoins, I do not believe they should ever become a mainstream currency. That would be more deleterious to the overall economic health of the world than the current central bank based fractional reserve system of fiat currencies. And that is saying something because the current system is so horribly broken that its ultimate collapse is almost certainly looming large on the horizon.

Exceptions Are Evil

Exceptions are evil.

Now that I’ve pissed everyone off, I can get on with explaining. Rather than talk about this in the abstract, which is not helpful at all, I am going to toss in an example or two. It is far easier to illustrate my point with examples in this case.

First, let me start by saying that the general concept of exceptions is not necessarily evil. Indeed, any program that is properly written must deal with exceptions in some manner, preferably without crashing randomly. Where things tend to go horribly wrong is when a programming language implements support for some flavour of exceptions. For instance, C++. It can obviously also go horribly wrong when a programmer messes up in a language that doesn’t implement any version of exceptions directly.

Leaving aside the syntactic problems with exceptions for the moment, let us consider what an exception actually is. The word would suggest an exception should be reserved for a truly unusual circumstance – something that should not normally happen. But what exactly does “unusual” mean?

Let’s examine the case of allocating memory on the heap. Ordinarily, this succeeds without a lot of fuss. But what happens if it fails for some reason, say there is no more memory available or the process’s address space is full? In C, the malloc() family of functions returns NULL to indicate this condition. The program then has to check the return value or simply crash when dereferencing the resulting NULL pointer. In most programs, running out of memory would be an exceptional condition, so it would seem to be an excellent candidate for exceptions.
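
In C, that check is a one-liner. Here, fatal_error() is just a stand-in for whatever a given program does when it truly cannot continue:

size_t bufsize = 4096;
char *buffer = malloc(bufsize);
if (buffer == NULL)
{
    /* Allocation failed: no memory (or address space) left. For most   */
    /* programs this genuinely is exceptional, so report it and bail.   */
    fatal_error("out of memory");
}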

Let’s examine another case. Suppose you are opening a file to read some stuff from it. Now suppose access is denied or the file doesn’t exist. These are exceptions, too, right? Well, not necessarily. If you’re iterating through a list of possible file names to find a file that does exist, or to find which directory it exists in, running into a file-not-found situation is hardly an unusual circumstance, is it? Having a file with no access permission is likewise fairly normal on systems that support such things. If the file in question happens to be a critical part of the overall program that is running, that might constitute an exception, since that is an unusual circumstance meaning the program is incorrectly installed. But a missing input file to a compiler is likely not an exceptional condition, and is one that the compiler will want to deal with directly.
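
For instance, a search over candidate locations might look something like this. The file names and report_missing_config() are invented for the example:

FILE *fp = NULL;
const char *candidates[] = { "./app.conf", "/etc/app.conf", "/usr/local/etc/app.conf" };

for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]) && fp == NULL; i++)
{
    fp = fopen(candidates[i], "r");   /* a miss here is routine, not exceptional */
}

if (fp == NULL)
{
    /* Only now is it worth treating as an error, and even then it is    */
    /* handled right here rather than thrown up the stack.               */
    report_missing_config();
}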

On the surface the two situations above look the same, but they really aren’t. With the second example, the situations described are error conditions, but they are not unusual conditions. They are, in fact, ordinary conditions that are expected to occur in the ordinary operation of the system. They are, by definition, not exceptional.

One of the biggest issues I have with exceptions as they are implemented and used in modern systems is that the notion of an exception has been abused into a general purpose system for returning any error condition, routine or otherwise. I can possibly defend an exception on memory allocation failure, but I have a great deal of trouble accepting that an exception is the right solution when a file simply does not exist.

Modern systems have conflated error handling and exception handling. This may be partly due to the fact that without an exception mechanism, exceptions and ordinary errors have to be handled the same way – by an error indication from a particular call. The other source of the conflation is possibly due to the fact that many “software architects” do not understand why everything is not an exception.

All of that would be relatively minor, however, if the syntax for exceptions were not horribly verbose and insanely tedious to use, even when the language itself provides a solid, bug-free implementation. The biggest problem is that this excess verbosity and extra boilerplate code encourage moving the error handling to a point distant from the actual source of the problem. The idea is that the ordinary flow of the code is easy to discover by extracting the error handling from within the main flow. However, this makes the flow non-obvious in the case of something like a missing file. If you do have a solution to a missing file, say by trying a different file, you can’t just continue where you left off if you have one big block and handle all the exceptions at once. Instead, you have to wrap everything that can fail with its own exception handling boilerplate. Now you have duplicated the exact thing you thought you could avoid by using exceptions in the first place! For instance:

result = somefunc();
if (result < 0)
{
    result = tryotherfunc();
    if (result < 0)
    {
        handle_error();
    }
}

if (do_something_with_result(result) < 0)
{
    handle_error();
}

if (do_something_else_with_result(result) < 0)
{
    handle_error();
}

The above is relatively clear. The precise mechanics of how handle_error() would have to work depend on the specific calls, of course. Let’s see what that might look like if we have a language using a “try { code… } catch ….” structure:

try
{
    try
    {
        result = somefunc();
    }
    catch (exception)
    {
        result = tryotherfunc();
    }
    
    do_something_with_result(result);
    do_something_else_with_result(result);
}
catch (exception)
{
    handle_error();
}

That doesn’t look too bad, does it? But what if we need to do something different depending on which failure condition occurred with, say, do_something_with_result()? And suppose that do_something_else_with_result() might also throw the same exception for a different reason and we can handle that one too? Oops, now we need a try/catch block around each call. We can no longer get away with just the outer one. And for each exception we need to handle, we have to add another catch statement after the try block. It starts to get confusing rapidly.
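
Sketched in the same pseudo-syntax, with made-up exception and handler names, the same fragment balloons into something like this:

try
{
    result = somefunc();
}
catch (exception)
{
    result = tryotherfunc();
}

try
{
    do_something_with_result(result);
}
catch (bad_input_exception)
{
    handle_bad_input();
}
catch (resource_exception)
{
    handle_resource_problem();
}

try
{
    do_something_else_with_result(result);
}
catch (resource_exception)
{
    /* same exception type as above, but a different cause and recovery */
    handle_other_resource_problem();
}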

Of course, you have to add some sort of if/else structure to do the same thing in the first example, too, but that is not going to be any more verbose than the try/catch structure.

There is another case here that should be examined, too. Let’s look at the case where you want to remove a file. Now suppose we are using the first style. If an error occurs, the call returns an indicator of such. If we are using the second style with exceptions, it throws an exception. Now suppose you don’t care if the file didn’t already exist. In the former, you can just ignore the condition. In the latter, you have to explicitly trap the exception and ignore it. The same situation often occurs when closing a file before exiting a program. Sure, errors can occur at that point, but what can you actually do about them? If exceptions are being thrown in that case, they do no good, but you would still have to handle them or get mysterious unhandled exceptions at program shutdown.
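
To illustrate, here is the “file already gone, don’t care” case both ways. The first fragment assumes a POSIX-style system where remove() sets errno; remove_file() and file_not_found in the second fragment are invented names in the same pseudo-syntax as above:

/* Error-return style: a file that is already gone costs one extra test. */
if (remove(filename) != 0 && errno != ENOENT)
{
    handle_error();
}

/* Exception style: the "don't care" case still needs its own boilerplate. */
try
{
    remove_file(filename);
}
catch (file_not_found)
{
    /* deliberately ignored */
}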

There is another case where exceptions seem to be the best solution. Suppose the call is to some sort of implicit function, say an object constructor or an overloaded operator (overloading is evil too, by the way). In many languages, this is needed because objects can be constructed implicitly. That means there is no code flow that can trap an error return. The same thing applies to a destructor, which is often called implicitly. It is my contention, however, that if a constructor needs to do something that might fail (resource acquisition, for instance), it really should not be doing anything implicitly. Instead, some sort of factory operation should be used to obtain a new object. In that case, explicit destruction is then obviously indicated. In short, if an object cannot be implicitly instantiated, there is no need for exceptions. Similarly, if you eliminate the notion of operator overloading, you eliminate the possibility of needing to fail randomly in the middle of expressions, too.
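
Here is a sketch of what I mean by a factory, in C-style code. The widget type and function names are invented for illustration; the point is that the failure path is an ordinary return value checked at the call site, and nothing happens implicitly:

struct widget
{
    char   *buffer;
    size_t  size;
};

struct widget *widget_create(size_t size)
{
    struct widget *w = malloc(sizeof(*w));
    if (w == NULL)
    {
        return NULL;                  /* resource acquisition failed */
    }
    w->buffer = malloc(size);
    if (w->buffer == NULL)
    {
        free(w);                      /* clean up the partial object */
        return NULL;
    }
    w->size = size;
    return w;
}

/* Explicit destruction to match the explicit construction. */
void widget_destroy(struct widget *w)
{
    if (w != NULL)
    {
        free(w->buffer);
        free(w);
    }
}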

I should note that explicit construction is not actually horribly verbose in most cases, and certainly not terribly more verbose than what would be needed to express the static initializer anyway. Also, being explicit with an operation on an object does not preclude being able to chain operations together. It’s easy enough to put an object into an error state that can propagate through chained calculations and then be checked at the end. Sure, you might need multiple statements to chain the calculation together with explicit destruction being required and that might be a bit tedious, but it has the advantage of being absolutely clear what is going on.
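
A sketch of that sticky-error idea, with invented calc_* names (only calc_divide() is shown; the other operations would follow the same pattern):

struct calc
{
    double value;
    int    failed;
};

void calc_divide(struct calc *c, double divisor)
{
    if (c->failed)
        return;                       /* an earlier error already stuck; do nothing */
    if (divisor == 0.0)
    {
        c->failed = 1;                /* record the error and let it propagate */
        return;
    }
    c->value /= divisor;
}

/* Usage: chain the operations, then check a single flag at the end. */
calc_set(&c, x);
calc_scale(&c, factor);
calc_divide(&c, divisor);
if (c.failed)
{
    handle_error();
}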

And there is the final disadvantage of using exceptions instead of explicit instantiation and destruction, though this is not specific to exceptions per se. It is not at all clear whether a particular usage pattern is doing something dumb or not if you can just chain stuff together with implicit construction and destruction. Sure, you can create resource leaks with explicit destruction, but at least you can see if you are creating and destroying stacks of objects needlessly, and you can bail out in a controlled manner at any stage you like. It is also explicit that something which might fail is happening so you generally have a clear idea what failed if your program crashes at a particular location.

To this point, I haven’t even examined how the actual language exception implementation could be problematic, either. Suppose you have a language with explicit memory management like C++. Now suppose exceptions are objects. You throw an exception, but you have to dynamically allocate it somehow because it would go out of scope as the stack unwinds otherwise. How do you ensure the freshly minted exception gets freed correctly after the exception is handled? What if the object was a static object instead of a dynamic one (which also wouldn’t go out of scope)? What if memory allocation fails when creating the new exception object? Yowza! Things can go wrong in a hurry! Of course, in a language with automatic memory management, like JavaScript or Java, this is much less of a problem. (Though what do you do if the exception object fails to initialize? Throw an exception?)

In short, in my never humble opinion exceptions are useless syntactic HFCS (high fructose corn syrup). They taste great but have at best dubious benefits and more likely are deleterious to maintainability or correctness in the long run. Even when well implemented using something less verbose than try/catch, they do not seem to be a substantial improvement over classical error handling. And, if they are only used for truly exceptional circumstances, the classical error handling would still need to be present for many circumstances (non-exceptional failures).

There is nothing wrong with explicit error handling inline in cleanly structured code. If you think you need exceptions to make your code clear, you are going down the wrong path.

Ring Road Deal

So it seems we finally have a deal for the southwest Calgary ring road. More specifically, a deal for the portion that crosses Tsuu T’ina land. Of course, we are years away from having an operational road, but this is the first actual progress in essentially half a century. No, “almost” deals do not count as progress. Now we have actual real progress. And now we can let the armchair quarterbacks and backseat drivers get on with the hand-wringing about the cost and how our taxpayer dollars are going to waste and so on. Since I feel like it, I’m going to take apart some of the arguments they will make.

First up, there are complaints about the cost. Yes, it is expensive. But consider the fact that the Tsuu T’ina own the land we want to use. (I’m sure there are some legal confusions involved there, but whatever confusion is there, they essentially own the land.) They were at least using part of the land for some purposes, and those uses will have to be relocated at some cost and inconvenience.

Some people are arguing that it is not reasonable for the members of the band to benefit financially from the arrangement or that the magnitude of the benefit is too high. That’s simple jealousy at its root, I think. After all, who wouldn’t be jealous if his neighbour suddenly had several tens of thousands of dollars?

Let’s take the fact that it is the Tsuu T’ina out of the picture. Suppose the land in question crossed a big estate owned by you, on which you operate dozens of rental complexes, provide services, and generally rely for your livelihood. Would you still agree that you shouldn’t be paid for your land and for your inconvenience? Would you still agree that it is unreasonable to compensate your residents? Would you still agree that you should not gain materially by the transaction? Would you agree that you should not be allowed to negotiate the best possible deal for yourself? Exactly. If it’s reasonable for one landowner in the context, it’s reasonable for any landowner in the same context.

There is a reasonable argument to be made that the Tsuu T’ina themselves will benefit from the road and that, as a result, the compensation is too much. And that is true. They will benefit from the road. But that will not happen immediately. There will be much disruption as the road is constructed, and it will be longer yet before anything can be done to exploit the traffic. But even accepting the argument as reasonable, who benefits more from the road? Almost certainly the citizens of Calgary and other traffic passing through the region. The Tsuu T’ina do not have a large enough population that they need major roads to support it. Calgary does. The population differential is three orders of magnitude, after all. So is it not reasonable that those deriving the most benefit from the project pay for it? (Let’s leave aside the fact that it’s provincial money involved, which means other Albertans are on the hook too. That’s a whole other argument.)

I’ve read a few comments by ignoramuses who think that we should never, under any circumstances, direct any resources toward any First Nations simply due to the fact that they don’t bother using them to improve their houses. Well, first off, outside of cases of corruption, that’s not actually the case. And you see the same sort of corruption elsewhere. But let’s assume it isn’t corruption. Could it simply be that the people themselves value different things than we do? Perhaps they value common meeting areas or other things? Or perhaps they have a significant infrastructure debt inherited from previous years that has turned into a money pit? Whatever the situation, none of it justifies denying a deal simply because it might benefit the people directly. Flip it on its head. What if they wanted something from us? We would think it perfectly reasonable that we benefit from a deal with them, right, especially if the benefit to us was relatively minimal?

Now, we haven’t seen the full details of the agreement yet. Those are supposed to be released tomorrow. Until then, there is no way to accurately judge whether it is fair or not. My first reaction is that it likely is fair, or at the very least, the best deal that could be negotiated in the absence of the ability to expropriate (which applies to ordinary landowners but not First Nations; I’m not entirely certain I agree that expropriation is a reasonable power in the first place, but that’s another argument altogether).

For the moment, I’m pleased that decades of brangling over a road are likely done and we can get on with things.

Why Wikis Are Stupid

For a while, having a wiki was what all the cool kids were doing. It was the ultimate in collaboration, allowing anybody who cared to edit any page they wanted and read anything they wanted. Certain high-profile successes (Wikipedia for instance) only served to make the idea even more cool. For a time, wikis sprang up everywhere, even for sites where a traditional content management system made as much or more sense.

What people failed to realize at the time was that the very feature of wikis that makes them so useful for collaboration – their wide-open editing policies – also makes them wide open for abuse by the less scrupulous types out there. It should not have come as any surprise that open wikis suddenly became the hottest venue for spammers and other nefarious types to peddle their wares. After all, it happened on UseNET, email, forums, and every other open medium.

These days, running an open wiki requires intense oversight from the administrator. Once the spammers find an open wiki, they will hammer away at it until the end of time, adding ever more garbage to the content. Even simply reverting the spam edits does not remove the content from the wiki, which, after all, stores previous revisions that are still accessible. No, running an open wiki requires being able to permanently remove the garbage, and that must be done continually.

Of course, most wikis are really targeted at a fairly small community, so restricting edits to authorized users is a reasonable solution. But that also requires some oversight from the administrators. If one simply allows anyone who wishes to create an account and start editing immediately, the abuse noted above will still occur. After all, spammers can register accounts as easily as anyone else. That means there must be a manual vetting stage for new accounts, and that requires administrator intervention. And even once an account is approved, the activity must still be monitored and abusive accounts stopped in their tracks.

In the light of all that, is a wiki a good idea? In the general case, no, it is not. Not even a moderated one. Before putting up a wiki, you should consider carefully whether you really need the functionality. Is there a real benefit to it? Are you really just creating a site where people can discuss things? If so, maybe a forum is better. Are you just trying to get your information out, and only a handful of people in your organization will be editing content? If so, a standard content management system is probably all you need.

The fact that wikis are fairly rare compared to the number of sites out there should tell you something. And among the wikis that do exist, almost all require some form of authorization before edit access is allowed. That should also tell you something.

In short, wikis (as originally imagined) are stupid. They simply ignore the nature of the population in general.

Patent Reform

Patents as currently implemented are totally nonfunctional and a generally stupid idea. Note that I’m talking about patents here, not copyright, which is something totally different. In general terms, a patent is a monopoly grant to the patent holder, a monopoly which has legal force, usually for a limited time. Now, the notion of rewarding an inventor with a limited monopoly is, in general, a sound idea. However, patents have become particularly problematic in their current incarnation.

The biggest problem with most current patent systems is that they permit patenting things that are clearly not inventions. Computer software, for instance, is not an invention and should not be patentable in any form. In particular, algorithms for accomplishing tasks on computers should not be any more patentable than mathematical algorithms. After all, an algorithm itself is not a device! Similarly, simply finding a genetic structure in nature somewhere should not grant a patent to the discoverer. Even if the genes are artificial, it is dangerous to grant a monopoly on something that is inherently uncontrollable. What happens when the same genetic code appears in a human being?

Patents generally have a limited term, which is good. The term might be too long for many things, but it does, at least, expire in a predictable fashion. However, some patent systems, such as the one in the United States, make any patent under review secret and allow extensions to the review process, which allows a nefarious actor to essentially hide a patent until someone else manages to come up with the same thing, let their patent pass the review process, and then sue the poor sucker who had no possible way of knowing he was violating a patent that was not available to learn about. This is your basic submarine patent, if I have my terminology right.

I will avoid turning this into a long rant about the ills of patents. Instead, I will switch to my take on what a reasonable patent system would look like.

  • A working version of the invention is required in order to receive a patent. Regardless of what the patent is for, if you can’t construct a working version, you haven’t invented anything. Whether you should be compensated for having an idea that later turns into a real invention is a separate issue and has nothing to do with protecting a limited monopoly on a real invention.
  • A patentable invention must be a physically distinct device that must accomplish something clearly beyond the scope of all of its specific components. That means a physical device that is simply a general purpose computer in a special housing running a program is not an invention. It also means that anyone duplicating a physical invention on a general purpose computer has not violated the patent. Yes, this would make a great many things non-patentable. This is a good thing.
  • An invention must not be substantially similar to any previous invention, patented or otherwise. This is the so-called “prior art” exception. Thus, it should not be possible to patent a wheel given that there is clear prior art going back thousands of years, regardless whether there was a patent filed on it or not.
  • Nothing which forms a crucial underpinning for human life itself should be patentable. That means no gene patents. It is reasonable to consider molecular patents as long as they are not crucial components of human life. That means vitamins, genes, water, naturally occurring hormones, etc., cannot be patented.
  • Anything capable of self-replicating without intervention must not be patentable. That means plants which grow on their own in a field and can reproduce are not patentable. If there is no reasonable means to protect oneself from infringing on a patent for an invention, the invention is not patentable. There is no reasonable means to prevent your crop from cross-pollinating with your neighbour’s crop or to prevent seeds from your neighbour’s crop from ending up in your field.
  • Patents must be public and searchable for their entire duration from initial application to final expiry. It is not reasonable to expect anyone to be liable for infringing on something they had no possible way to know about.
  • Patents must be written in a language that is intelligible to an ordinary citizen competent in a related field of endeavour. If it is not intelligible to such a person, then how can he possibly avoid infringing on it?
  • Patents must cover exactly one clearly described invention. The current practice of including multiple claims on a single patent, starting with a ridiculously general description and moving to ever more detailed and complex claims is deleterious to understanding patents. Instead, each claim must be its own patent application with its own attendant fees and investigation.
  • Patents cannot be transferred except in the case of succession (death of the original holder, corporate restructuring). Simply disbanding a corporation would terminate patent protection.
  • Failure to take action immediately upon discovering patent infringement is deemed to be a non-revocable license grant to the infringer. Thus, if a defendant can demonstrate that the patent holder reasonably had knowledge of his activities, that is considered adequate defense to a patent suit. Things like sending a traceable request to the patent holder for a license grant but receiving no reply would qualify. The duration of “immediate” must, of course, take into account commercially reasonable response time based on the method of discovery. Enforcement can be as simple as “we grant you a royalty free license to do what you are already doing.”
  • A pattern of neglect in patent enforcement (failing to act upon discovery of infringement) may be construed as a general royalty free license grant to the world. In general, the more cases where enforcement is neglected, the more easily a future defendant can use this defense.

There are many other points I could raise but many of them are more general. Things such as punitive fines being based on cash flow and assets of the perpetrators should obviously apply in general.

I have no illusions that the above will ever happen or that it will work out exactly as I would expect if it ever did happen. Still, in my opinion it makes a reasonable starting point.