MasterCard Enables Fraud

Yes, the headline is clickbait. However, it is also accurate.

So I had some fraudulent charges on my MasterCard back in June. That did not unduly alarm me. I knew I needed to call my card issuer and dispute the charges. I did so and they reversed them, cancelled the card, and issued a new one. All was well with the world. This is what should happen, after all. Alas….

TL;DR: Cancelling a card and getting a replacement after a fraudulent charge doesn’t necessarily stop the fraudulent charges, due to some fuckwit at MasterCard thinking that “force billing” (allowing a merchant to obtain the new card number) is a good idea. My conclusion: “force billing” should be illegal.

Continue reading “MasterCard Enables Fraud”

Leap Second on Dec 31. Sigh.

Yet again, we have a leap second being added to UTC to further complicate everyone’s lives. Well, that might be overstating it, but it sure complicates the lives of server and network administrators, among others. The notion is that leap seconds are required to keep UTC in sync with Earth’s rotation and to prevent our clocks eventually drifting so far out of sync that solar noon falls at midnight. That notion is wrongheaded in the extreme, though. If UTC ever drifted far enough out of sync, we would simply apply an adjustment when converting from UTC to local time, and local time would still be approximately related to mean solar time. Continue reading “Leap Second on Dec 31. Sigh.”

Libraries are always good, right?

There is a pervasive belief in the software world that you should never re-invent the wheel and that an existing library is always the best solution. While there is some merit to the sentiment that re-inventing the wheel is often pointless or dangerous, I have recently come to the conclusion that an existing library is not always the best solution.

Continue reading “Libraries are always good, right?”

Minute Minuet

I was reading some amateur fiction the other day. It doesn’t matter which specific story it was or where I was reading it, though. I’ve seen this error show up with increasing frequency. Note that I’m referring to amateur fiction that has not been professionally edited. I would be horrified if a professional editor let this error through. This error has made my pet peeves list as a result of at least one author systematically getting it wrong.

So what is this error? Put plainly, it is writing “minuet” instead of “minute”. We all know that a minute is a unit of time consisting of sixty seconds and that there are sixty of them in an hour. It is also used figuratively to refer to a relatively short period of time. On the other hand, a minuet is a ballroom dance. The two really are not interchangeable, though if you squint at things correctly, you could make “I’ll be with you in a couple of minuets” make sense. However, unless the context is dancing, writing “minuet” is almost certainly wrong.

I do have to admit that writing “minuet” instead of “minute” is easy enough to do. The difference is a simple transposition of the last two letters. I could easily put this down to a simple typo if it occurs once. However, when it occurs systematically in a particular text, I have no choice but to conclude that the author is doing it intentionally and, thus, has no idea that they are making an error. I do wonder if this isn’t due to some idiotic preference for the much rarer “minuet” over the common “minute” in the autocompletion gimmick in some device or other.

In this case, “minuet” isn’t even pronounced the same as “minute” so it’s particularly jarring. I’m not going to confuse you with IPA representations of the words since I never could get my head around all the IPA symbols. However, “minuet” is pronounced something like “min-you-ette” while “minute” goes something like “min-utt” or “min-it” depending on your local dialect and/or accent.

Because “minuet” is a valid word, a spell checker will not catch it. Basic grammar checkers won’t either since both are nouns. This is the sort of thing that only actual proofreading will catch. If you are one who often writes “minuet” instead of “minute” (and don’t assume you aren’t – you could be doing it without realizing it), make sure you look specifically for this sort of error when proofreading. And if you don’t proofread, start doing so.


Lost Wizard’s Musings 2015-6

I’ve uploaded the sixth installment of Lost Wizard’s Musings. Head on over to Lost Wizard Enterprises Incorporated’s channel to see it and the previous five.

For anyone wondering why I don’t just write blog entries for them, I’m using them as a means to learn more about video production. This one is an experiment with the good old-fashioned chroma key effect, otherwise known as blue or green screen. So far, my conclusion is that to do a really good green screen effect, I need a much better quality camera – something that doesn’t introduce nearly so much noise into the image. Still, since I can’t afford to buy one, I’ll have to make do with what I have and put up with that funky fringe effect around my hair, though I expect I could reduce that with some hair spray or something.

I think I need to start writing scripts ahead of time, too. I might even want to consider some sort of teleprompter. This winging it business leaves way too much editing and fixup in post production.

“Responsive” design

These days, one of the big buzzwords is “responsive design”. But what, exactly, is a “responsive” design? What is it responding to? Theoretically, it means building a site that provides an optimal experience regardless of the device it is being viewed on. Here’s why that notion is stupid.

First off, you have no idea what device a visitor is viewing your site on. If you think you do, you are seriously deluded at best. How do you find out what device they have? Maybe you sniff the user agent string on the server side and send a page designed for that type of device. But how do you know that the user agent string isn’t lying to you? Even if you can trust it, how do you handle the case where the viewer has his web browser at a tenth of the screen size? Or maximized on a ridiculously large screen? Or maybe he is viewing his ridiculously large screen from across the street? The user agent string tells you none of this.

Now some dim bulb out there is probably thinking that this sounds like a brilliant application for javascript. Or, worse, jquery. Are you insane? That is just asking for trouble. Sure. Javascript allows you to query the window size and charge around adjusting your content to fit on the screen in some useful manner. It even allows you to handle the case where the window is resized! But what if the viewer has javascript disabled or is using a browser that doesn’t support it? How is it going to play with a screen reader for a blind visitor? Or a braille readout?

Wait, what? Who said anything about blind people? Think about it. The device showing the site may not be a bitmapped display. After all, what good is a bitmapped graphics display to a blind person? If you’re going to be responsive, why not be responsive to that, too? Okay, I’ll grant you that your content may not be relevant to blind people. It may also be that blind people are such a small part of your audience that you don’t care. But to make a site truly responsive, I contend that you must at least consider them.

Of course, we can also make use of newfangled features of CSS and have the web site choose a layout based on the actual viewport size. We could, say, suppress useless columns on smaller displays or reduce the navigation sprawl. Perhaps we limit the width of a text column to something pleasant to read instead of having it sprawl from one side to the other. This would even work for people resizing browser windows. It doesn’t solve the massive screen from a city block away problem, but that is not your problem anyway. That user likely has a magnifier of some kind. This may be the best overall solution once the requisite newfangled features are supported across all the major browsers.

There’s another aspect of site design that is related to responsive design. How many sites have you encountered that use mouseovers to provide critical access to the navigation with no other possible means of navigating? How does that navigation work on a touch screen interface where it is nearly impossible to actually do a hover? How many other user interface elements have you encountered that are difficult or less than ideal on something that is not substantially similar to a classic desktop computer?

It’s pretty clear that I agree with a “responsive” site that is implemented sensibly (which most are not), so why do I think the notion of “responsive design” is stupid? The whole notion is stupid because it should never have been required in the first place! Every site should be responsive. If we had not gone off the deep end with fancy gewgaws, widgets, gizmos, and buzzwords, every web site would already have been “responsive”. This will date me, but I remember the early days of the web when graphical web browsers were the new thing. I remember using Netscape 0.8. I remember when Lynx was the best option in many circumstances. Back then, we didn’t have these fancy-ass sites that we have now. But they worked! They were “responsive”! Nobody needed a buzzword to make something responsive back then. It just was. In short, the notion of “responsive design” should not be a novel thing. It should just be the thing. Design should just be responsive by default.


Bitcoins and Currency

Much has been made of Bitcoins and other so-called digital cryptocurrencies. I remember how, in the early days, Bitcoin was supposed to be the cure for all the woes of modern fiat currencies and the central banking system. Even at the time, I failed to see the logic in that claim. I should mention that I have profited some from Bitcoins, mostly by the fluke of having mined some when it was trivial to do so.

Before I get into Bitcoins, however, I should define a few terms. First, a fiat currency is one whose tokens have no real intrinsic value, or a negligible one. The only reason a fiat currency has value is because there is some authority that mandates its use for some process that is not optional. In other words, its value exists by fiat. The dollar (take your pick which one) has no intrinsic value – the coins and notes used to represent dollars and cents are merely score keeping tokens made of base metals or paper or what have you. Furthermore, most of the dollars that circulate never actually physically exist in this day of digital transactions.

Central banks, on the other hand, are both more complicated and simpler at the same time. Wikipedia has a reasonable write-up on central banks over here. It’s easy to get bogged down in the details, though. Roughly, the central bank usually oversees the operation of commercial banks, manages interest rates (whatever that means, really), and is often responsible for producing the national currency.

Neither fiat currency nor central banks are problematic on their own, regardless of what many fearmongers would have you believe. A fiat currency is arguably better than a commodity currency (gold, silver, copper, leaves, etc.) as long as there are sensible controls on how much is circulating. Central banks serve an important mediation role between commercial banks and also act as a banker for the government.

Where the problem comes in is fractional reserve banking. I do not use the term “lending” because you cannot lend what you do not have. Basically, the fractional reserve system is an institutionalized fraud which allows commercial banks to “lend” money they do not have so long as they have a sufficient reserve level. This has the side effect of multiplying the money supply well beyond the amount put in circulation by the central bank. This sounds wonderful – more money is good, right? Wrong. For a good description of what is wrong with this system, head on over to Positive Money. Their take is UK centric but their illustration of the problem applies everywhere.
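To put some standard textbook numbers on that multiplication (the figures are mine, not Positive Money’s): with a ten percent reserve requirement, each new deposit can be re-lent at ninety percent, re-deposited, and re-lent again, and the series tops out at roughly ten times the original base money. A throwaway snippet makes the point:

#include <stdio.h>

int main(void)
{
    double base = 100.0;          /* money created by the central bank */
    double reserve_ratio = 0.10;  /* banks must keep 10% of deposits on hand */
    double deposits = 0.0;
    double lendable = base;

    /* Each round, what was lent out gets redeposited and re-lent at 90%. */
    for (int round = 0; round < 100; round++)
    {
        deposits += lendable;
        lendable *= 1.0 - reserve_ratio;
    }

    printf("base money %.2f -> total deposits %.2f (multiplier %.1f)\n",
           base, deposits, deposits / base);
    return 0;
}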

The practical upshot of the current money system is that without continual inflation, the economy will completely fall over dead because there will not be enough money to pay the interest on loans and therefore borrowers will default. As borrowers default, eventually the money supply starts shrinking making it harder and harder for the remaining borrowers to pay their obligations. You end up with a downward spiral of defaults leading to reduced money supplies, deflation, hoarding, and other problematic situations. In other words, the economy becomes illiquid and stalls. By continually increasing the supply of money circulating, this default spiral can be staved off. Positive Money notes how this is problematic in the current system in which the commercial banks are chiefly responsible for the total money supply.

The problem with inflation, however, is that it is basically stealing from the future to pay for the present. The dollar I put in a safe today will be worth far less in real value when I take it out in thirty years. At an annual inflation rate of two percent, that dollar will lose nearly half its real value over thirty years. Put another way, what you could buy today for $1.00 will cost you $1.81 in thirty years if inflation stays constant at two percent. If you start playing around with the mathematics and plug in the underlying cost of interest owed to the commercial banks, the implications become frighteningly clear. The current system is not sustainable and it is amazing it has not totally collapsed already.
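For anyone who wants to check the arithmetic, it is simple compounding; a throwaway snippet like this (my own illustration) shows the erosion:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double rate = 0.02;   /* 2% annual inflation */
    int years = 30;

    double price = pow(1.0 + rate, years);   /* what today's $1.00 of goods will cost */
    double value = 1.0 / price;              /* what a stored dollar will still buy */

    printf("$1.00 of goods today costs $%.2f in %d years\n", price, years);
    printf("a dollar stored today will buy only %.0f%% of what it buys now\n", value * 100.0);
    return 0;
}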

Enter Bitcoin. The creators of Bitcoin looked at the current situation and observed that inflation is bad and that we clearly need a currency with some sort of built-in hedge against inflation. So what if we created a currency with an absolute upper bound on the number of currency units that will ever exist, and prevented any central agency from having any control over the system?

That particular notion would make perfect sense in a steady state economy. A steady state economy is one where economic activity is neither growing nor declining. This is, of course, the only economic state that is actually sustainable, but that’s not the point of this discussion.

In the real world, economic activity is growing, on average, and it will continue to do so as long as the population continues to increase and demand for “stuff” continues.  That means, if you have a fixed currency supply, there will always be more and more stuff being chased by the same number of currency units. As the level of activity increases, the average prices must decrease if the currency available does not also increase. In other words, this situation ultimately has deflation built into it. While this is not necessarily bad in itself, it does cause some behaviour that is less helpful. Hoarding currency becomes profitable as it becomes more valuable over time, which, in turn, further reduces the currency supply, leading to further deflation. In short, even without a shrinking money supply (such as in the case of the default cycle mentioned above), the economy will eventually become illiquid.

The situation just described is exactly the situation with Bitcoin. What the designers of Bitcoin have created is not a currency but a scarce commodity akin to gold or silver. The same reason we abandoned gold and silver as currency units makes Bitcoin unsuitable as a general purpose currency. However, like gold or silver, it is useful for single transactions at a specific moment in time, and, potentially, as an inflation hedge.

In other words, while I do hold a small number of Bitcoins, I do not believe they should ever become a mainstream currency. That would be more deleterious to the overall economic health of the world than the current central bank based fractional reserve system of fiat currencies. And that is saying something because the current system is so horribly broken that its ultimate collapse is almost certainly looming large on the horizon.

Why C++ Is Bad

First of all, this is an opinion piece. If you don’t agree, fine. Regardless of your opinion, comment if you must, but remember, comments that are completely unconstructive will not be approved.

Now, on to the point of this rant. C++ sucks. It is, perhaps, tied for the worst language design in the history of computing. Here is why.

First, C++ builds on C, but instead of outright replacing the features of C for which C++ provided alternatives, it simply bolted additional aspects onto the existing C language. Thus, instead of having only one keyword to make an aggregate type as in C, you now have two with slightly different semantics. Instead of pointers, you have references, which are almost but not quite pointers, and you still have pointers too. Instead of disallowing the C standard library and providing its own, C++ has its own standard library but you can also use the entire C standard library. Indeed, the end result is basically all the festering badness of C combined with a poorly conceived and implemented object model.
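A few lines of perfectly legal C++ show the duplication I mean; the struct/class pair and the pointer/reference pair each do almost, but not quite, the same job:

#include <cstdio>

struct Point { int x, y; };            /* C-style aggregate: members public by default */
class Pixel { public: int x, y; };     /* C++ "class": identical except members default to private */

void bump(int *p) { *p += 1; }         /* the C way: pass a pointer */
void bump(int &r) { r += 1; }          /* the C++ way: pass a reference, which compiles to much the same thing */

int main(void)
{
    int n = 0;
    bump(&n);   /* pointer version */
    bump(n);    /* reference version */
    printf("%d\n", n);
    return 0;
}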

Let me go back to early C++ before the standards committees got their hooks into it. It was a simple enough language which had useful features compared to C. It added operator overloading, function overloading, and objects. In all, it seemed a fairly decent expansion on C. There was only one problem. There was no way to handle error conditions in constructors or destructors, making it impossible to properly handle resource acquisition and release. Why it isn’t possible to return an error from constructors and destructors does make some sense given how they are actually used in many circumstances. It is also not an insurmountable problem, after all, since careful programming can avoid most problems in the destructor and a “factory” method can solve the constructor problem.
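Here is a rough sketch of that factory workaround (the names are mine, not from any particular codebase): the factory does the fallible work and reports failure explicitly, while the constructor itself is reduced to something that cannot fail.

#include <cstdio>
#include <cstdlib>
#include <new>

class Buffer
{
public:
    /* The factory does the risky work and can report failure explicitly. */
    static Buffer *create(size_t size)
    {
        void *mem = malloc(size);
        if (mem == NULL)
            return NULL;

        Buffer *b = new (std::nothrow) Buffer(mem, size);
        if (b == NULL)
        {
            free(mem);
            return NULL;
        }
        return b;
    }

    ~Buffer() { free(mem_); }

private:
    /* The constructor is trivial and cannot fail; it only stores what it is given. */
    Buffer(void *mem, size_t size) : mem_(mem), size_(size) {}

    void *mem_;
    size_t size_;
};

int main(void)
{
    Buffer *b = Buffer::create(4096);
    if (b == NULL)
    {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    delete b;
    return 0;
}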

Even early C++ showed the beginnings of a horribleness, though. The standard library, particularly the iostream stuff, demonstrates a complete lack of understanding of real world I/O problems. Rather than simply encapsulating something like the C stdio subsystem, it provided a ridiculously baroque pile of crap that could only be used effectively for very basic text I/O. While stdio sucks somewhat for structured input, iostream sucks more. Instead of a relatively compact scheme like the printf() family of functions for formatted text output, you have to write a wordy chain of manipulators and output values that does little more than obfuscate just what you are attempting to output in the first place. Instead of being able to scan a format string and compare it with a relatively short list of formatting codes and explanations, you have to somehow divine the initial state of the output stream and then track potentially dozens of state shifting objects just to work out how numbers are going to be formatted. And all of this using overloaded bit shift operators (apparently because hardly anybody does bit shifting).
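To make the comparison concrete, here is the same line of output done both ways (a stock illustration of my own, nothing exotic):

#include <cstdio>
#include <iomanip>
#include <iostream>

int main(void)
{
    double price = 3.14159;
    int count = 42;

    /* stdio: the format string says everything in one place */
    printf("%-10s %5d @ %8.2f\n", "widgets", count, price);

    /* iostream: the same output as a chain of manipulators and sticky flags */
    std::cout << std::left << std::setw(10) << "widgets" << ' '
              << std::right << std::setw(5) << count
              << " @ " << std::fixed << std::setprecision(2) << std::setw(8) << price
              << '\n';
    return 0;
}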

C++ apologists and fans will claim that the C++ way is more object oriented and that I clearly just don’t understand that the C++ way is the one I really want. To them I point out that I learned C++ first and then C. Yes, that’s right. I learned the C++ way first. Then I found printf() and friends and I was enlightened.

Fortunately, it is perfectly possible to completely ignore the iostream bullshit and go straight to using stdio. Consider that stdio gives you pretty much everything that iostream attempts to provide. The FILE pointer is basically an object with methods on it (all the stdio functions that operate on FILE pointers), including constructors (fopen() among others) and a destructor (fclose()). Used according to the defined interface, this is a perfectly good implementation of object orientation. There is no reason that stdio could not have provided a means to attach arbitrary user defined stream implementations to the FILE object or additional format specifiers to the printf() family of functions.
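In other words, treated as an interface, stdio already looks like this (plain C, which also happens to be valid C++; the file name is just an example):

#include <stdio.h>

int main(void)
{
    /* "Constructor": acquire the resource, report failure via NULL */
    FILE *f = fopen("example.txt", "w");
    if (f == NULL)
    {
        perror("fopen");
        return 1;
    }

    /* "Methods": every operation takes the object as its first argument */
    fprintf(f, "count = %d\n", 42);
    fflush(f);

    /* "Destructor": release the resource, and it can even report errors */
    if (fclose(f) != 0)
    {
        perror("fclose");
        return 1;
    }
    return 0;
}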

Had C++ stopped there, it would have been bad enough, but not horrible. Instead, additional features were slowly grafted onto the language. One such feature was templates, which were conceived in the absence of any inkling of how to properly implement them. Sure, the notion looks brilliant on paper. Let’s write a generic set of code for, say, a linked list, and have the compiler generate specific code for the contents of the list. And this works brilliantly in a single file compilation. But what happens if the template is instantiated in several different files in the same project? Oops! Suddenly you have linker errors from multiply defined symbols! This was such a problem that I remember needing to use “-fno-implicit-templates” and have a separate file that instantiated all of the templates needed for a project. That was bad. It actually needed a thing called “weak symbols” or something like that and special support from the linker to make it work properly.
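For reference, the explicit instantiation mechanism that workaround relied on looks roughly like this, reconstructed from memory and collapsed into one file so it stands alone (in the real arrangement the template lived in a header, every translation unit was compiled with -fno-implicit-templates, and one dedicated file held the explicit instantiations):

/* Normally the template definition lives in a header included everywhere. */
template <typename T>
class List
{
public:
    void push(const T &value) { last = value; }   /* toy body, just to have something to instantiate */
    T last;
};

/* The dedicated instantiation file contained lines like this, forcing the
   compiler to emit the code for each specialization exactly once. */
template class List<int>;

int main(void)
{
    List<int> l;
    l.push(42);
    return l.last == 42 ? 0 : 1;
}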

Even templates were not horrible on the surface except for the terrible syntax selected for them. (< and > anyone? Duh! Obviously a bad idea!) They should have had a distinct syntax using distinct tokens. But that would have potentially broken previous code or something like that. That just wouldn’t do. Instead of inventing a new syntax with new tokens and calling it C++ Version 2 or something, they overloaded existing tokens.

At around the same time came namespaces, which seem like a good idea and, in fact, are probably one of the few features of modern C++ that is not a horrible festering pile of crap. At the very least, they allow one to avoid name conflicts between different packages. Even the choice to use the same scope resolution operator (::) is not a horrible one. I don’t actually have an issue with namespaces save that they change the semantics of the language at a fairly late date, and even that is minor.

The biggest thing that causes no end of trouble is exceptions, which were ostensibly a solution to the fact that constructors and destructors cannot report errors. But they actually solve the wrong problem there. In fact, exceptions are not necessarily a bad idea when used judiciously (which they aren’t), but the execution in C++ is terrible: it interacts horribly with other language constructs, notably object construction and destruction, especially when resource exhaustion is involved or the exception object itself needs resources. Why it was deemed a good idea to be able to throw an arbitrary object as an exception mystifies me – it makes no sense whatsoever. There should have been strict restrictions on what was throwable. Leaving aside the horrible coding style that exceptions encourage, which is hardly the language’s fault, the execution in C++ makes it nearly impossible to do them right.
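For anyone who has not run into it, all of the following are perfectly legal, which gives a taste of how unconstrained the mechanism is:

#include <string>

void f(int which)
{
    if (which == 0)
        throw 42;                        /* throw an int */
    if (which == 1)
        throw "something went wrong";    /* throw a string literal, caught as const char * */
    throw std::string("or any old object");
}

int main(void)
{
    try
    {
        f(1);
    }
    catch (int)                   { return 1; }
    catch (const char *)          { return 2; }
    catch (const std::string &)   { return 3; }
    return 0;
}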

As C++ developed over the years, it became clear that the ivory tower academics (read idiots who do no real work) had increasing influence, attempting to turn C++ into a poster child for “abstract data types” and all that rot that they teach in comp sci programs. The absolute worst aspect of modern C++ is the standard template library (STL) which epitomizes the academic ideal of data structure design. What that means is it is horribly inefficient, excessively verbose, and difficult to do any real work using it. Worst of all, it attempts to implement a series of data types that make absolutely no sense in C++ or which cannot be sensibly implemented, and these standard templates often have a convoluted incestuous relationship. This is in addition to the non-template standard types which suffer from all the same problems. The thing all this stuff forgets is that there is no one-size-fits-all solution to any non-trivial data structure, and that often holds true for trivial ones too.

Of course, I can simply ignore the standard template library and the standard data types, for the most part, in my own code, but as soon as I attempt to interface with a library or code written by someone else, I have no alternative but to deal with that crap. The problem is further exacerbated by the existence of such “helpful” libraries as Boost.

In short, C++ is a shining example of design by academic committee. (An academic committee is orders of magnitude worse than a plain committee.) This is the absolute worst way to design a programming language. Period. And this is why C++ is bad, and also why I do everything I can to avoid using it at all. I find object orientation in plain C to be far more pleasant than modern C++. (Yes, you can do object orientation in C with relatively little overhead, and with much more flexibility than in C++. All it takes is a bit of discipline, and that is required to do things right in C++ too.)
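Since people always ask what object orientation in plain C looks like, here is the pattern I have in mind, sketched with invented names: an opaque type, a constructor, a couple of methods, and a destructor.

#include <stdio.h>
#include <stdlib.h>

/* In a real project only "struct counter;" and the function prototypes would
   appear in the header; the layout stays hidden in the .c file. */
struct counter
{
    int value;
};

/* constructor */
struct counter *counter_create(int initial)
{
    struct counter *c = (struct counter *) malloc(sizeof *c);
    if (c == NULL)
        return NULL;
    c->value = initial;
    return c;
}

/* methods */
void counter_bump(struct counter *c)        { c->value++; }
int counter_value(const struct counter *c)  { return c->value; }

/* destructor */
void counter_destroy(struct counter *c)     { free(c); }

int main(void)
{
    struct counter *c = counter_create(10);
    if (c == NULL)
        return 1;
    counter_bump(c);
    printf("%d\n", counter_value(c));
    counter_destroy(c);
    return 0;
}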

Anyway, that’s it for my rant. You may or may not agree. That is your prerogative. There is plenty on the internet lambasting C++ and plenty praising it. I invite you to read both sides with an open mind and make your own determination. Consider actual real world situations with real code to understand the issue. Examples presented in literature on both sides are contrived to support the point being made. Examine the examples and understand how they work and why. That can be as enlightening as the prose in any argument.


Exceptions Are Evil

Exceptions are evil.

Now that I’ve pissed everyone off, I can get on with explaining. Rather than talk about this in the abstract, which is not helpful at all, I am going to toss in an example or two. It is far easier to illustrate my point with examples in this case.

First, let me start by saying that the general concept of exceptions is not necessarily evil. Indeed, any program that is properly written must deal with exceptions in some manner, preferably without crashing randomly. Where things tend to go horribly wrong is when a programming language implements support for some flavour of exceptions. For instance, C++. It can obviously also go horribly wrong when a programmer messes up in a language that doesn’t implement any version of exceptions directly.

Leaving aside the syntactic problems with exceptions for the moment, let us consider what an exception actually is. The word would suggest an exception should be reserved for a truly unusual circumstance – something that should not normally happen. But what exactly does “unusual” mean?

Let’s examine the case of allocating memory on the heap. Ordinarily, this succeeds without a lot of fuss. But what happens if it fails for some reason, say there is no more memory available or the process’s address space is full? In C, the malloc() family functions return NULL to indicate this condition. The program then has to check the return value or simply crash when dereferencing the resulting NULL pointer. In most programs, running out of memory would be an exceptional condition so it would seem to be an excellent candidate for exceptions.
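The classic C idiom, for reference:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *buf = (char *) malloc(1024 * 1024);
    if (buf == NULL)
    {
        /* The exceptional condition is reported in-band; no unwinding machinery needed. */
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    /* ... use buf ... */
    free(buf);
    return 0;
}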

Let’s examine another case. Suppose you are opening a file to read some stuff in it. Now suppose access is denied or the file doesn’t exist. These are exceptions, too, right? Well, not necessarily. If you’re iterating through a list of possible file names to find a file that does exist, or which directory it exists in, experiencing a file not found situation is hardly an unusual circumstance, is it? Having a file with no access permission is likewise fairly normal on systems that support such things. If the file in question happens to be a critical part of the overall program that is running, that might constitute an exception since that is an unusual circumstance meaning that the program is incorrectly installed. But a missing input file to a compiler is likely not an exceptional condition and is one that the compiler will want to deal with directly.
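A sketch of that search-for-a-file case (the file names are invented) shows why a missing file is just part of the normal flow:

#include <stdio.h>

/* Try each candidate location in turn; a missing file is expected, not exceptional. */
static FILE *open_config(void)
{
    static const char *candidates[] =
    {
        "./app.conf",
        "/usr/local/etc/app.conf",
        "/etc/app.conf",
    };

    for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++)
    {
        FILE *f = fopen(candidates[i], "r");
        if (f != NULL)
            return f;          /* found one; the earlier "failures" were routine */
    }
    return NULL;               /* genuinely absent everywhere; now it is an error */
}

int main(void)
{
    FILE *f = open_config();
    if (f == NULL)
    {
        fprintf(stderr, "no configuration file found\n");
        return 1;
    }
    fclose(f);
    return 0;
}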

On the surface the two situations above look the same, but they really aren’t. With the second example, the situations described are error conditions, but they are not unusual conditions. They are, in fact, ordinary conditions that are expected to occur in the ordinary operation of the system. They are, by definition, not exceptional.

One of the biggest issues I have with exceptions as they are implemented and used in modern systems is that the notion of an exception has been abused into a general purpose system for returning any error condition, routine or otherwise. I can possibly defend an exception on memory allocation failure, but I have a great deal of trouble accepting that an exception is the right solution when a file simply does not exist.

Modern systems have conflated error handling and exception handling. This may be partly due to the fact that without an exception mechanism, exceptions and ordinary errors have to be handled the same way – by an error indication from a particular call. The other source of the conflation is possibly due to the fact that many “software architects” do not understand why everything is not an exception.

All of that would be relatively minor, however, if the syntax for exceptions were not horribly verbose and insanely tedious to use, even when the language itself provides a solid bug-free implementation. The biggest problem is that this excess verbosity and extra boilerplate code encourages moving the error handling to a point distant from the actual source of the problem. The idea is that the ordinary flow of the code is easy to discover by extracting the error handling from within the main flow. However, this makes the flow non-obvious in the case of something like a missing file. If you do have a solution to a missing file, say by trying a different file, you can’t just continue where you left off if you have one big block and handle all the exceptions at once. Instead, you have to wrap everything that can fail with its own exception handling boilerplate. Now you have duplicated the exact thing you thought you could avoid by using exceptions in the first place! For instance:

result = somefunc();
if (result < 0)
{
    result = tryotherfunc();
    if (result < 0)
    {
        handle_error();
    }
}

if (do_something_with_result(result) < 0)
{
    handle_error();
}

if (do_something_else_with_result(result) < 0)
{
    handle_error();
}

The above is relatively clear. The precise mechanics of how handle_error() would have to work depends on the specific calls, of course. Let’s see what that might look like if we have a language using a “try { code… } catch ….” structure:

try
{
    try
    {
        result = somefunc();
    }
    catch (exception)
    {
        result = tryotherfunc();
    }
    
    do_something_with_result(result);
    do_something_else_with_result(result);
}
catch (exception1)
{
    handle_error();
}

That doesn’t look too bad, does it? But what if we need to do something different depending on which failure condition occurred with, say, do_something_with_result()? And suppose that do_something_else_with_result() might also throw the same exception for a different reason and we can handle that one too? Oops, now we need a try/catch block around each call. We can no longer get away with just the outer one. And for each exception we need to handle, we have to add another catch statement after the try block. It starts to get confusing rapidly.
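Spelled out in the same pseudo-notation as above (with invented handler names), the handle-each-call-separately version ends up looking something like this, which is exactly the shape we were trying to escape:

try
{
    result = somefunc();
}
catch (exception)
{
    result = tryotherfunc();
}

try
{
    do_something_with_result(result);
}
catch (exception1)
{
    handle_first_failure();
}

try
{
    do_something_else_with_result(result);
}
catch (exception1)
{
    handle_second_failure();    /* same exception type, different recovery */
}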

Of course, you have to add some sort of if/else structure to do the same thing in the first example, too, but that is not going to be any more verbose than the try/catch structure.

There is another case here that should be examined, too. Let’s look at the case where you want to remove a file. Now suppose we are using the first style. If an error occurs, the call returns an indicator of such. If we are using the second style with exceptions, it throws an exception. Now suppose you don’t care if the file didn’t already exist. In the former, you can just ignore the condition. In the latter, you have to explicitly trap the exception and ignore it. The same situation often occurs when closing a file before exiting a program. Sure, errors can occur at that point, but what can you actually do about them? If exceptions are being thrown in that case, they do no good but you would still have to handle them or get mysterious unhandled exceptions at program shutdown.
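Concretely, the classic interface lets you say “I don’t care if it was already gone” in a line or two (the file name is invented, and the errno detail is POSIX behaviour), whereas a throwing interface forces a try/catch whose only job is to swallow the error:

#include <errno.h>
#include <stdio.h>

int main(void)
{
    /* Classic style: ignore the one failure we expect and do not care about,
       but still report anything else. */
    if (remove("stale.tmp") != 0 && errno != ENOENT)
        perror("remove");

    return 0;
}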

There is another case where exceptions seem to be the best solution. Suppose the call is to some sort of implicit function, say an object constructor or an overloaded operator (overloading is evil too, by the way). In many languages, this is needed because objects can be constructed implicitly. That means there is no code flow that can trap an error return. The same thing applies to a destructor, which is often called implicitly. It is my contention, however, that if a constructor needs to do something that might fail (resource acquisition, for instance), it really should not be doing anything implicitly. Instead, some sort of factory operation should be used to obtain a new object. In that case, explicit destruction is then obviously indicated. In short, if an object cannot be implicitly instantiated, there is no need for exceptions. Similarly, if you eliminate the notion of operator overloading, you eliminate the possibility of needing to fail randomly in expressions, too.

I should note that explicit construction is not actually horribly verbose in most cases, and certainly not terribly more verbose than what would be needed to express the static initializer anyway. Also, being explicit with an operation on an object does not preclude being able to chain operations together. It’s easy enough to put an object into an error state that can propagate through chained calculations and then be checked at the end. Sure, you might need multiple statements to chain the calculation together with explicit destruction being required and that might be a bit tedious, but it has the advantage of being absolutely clear what is going on.
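By an error state that propagates, I mean something like this toy calculation type (entirely my own sketch): once any operation fails, every later operation becomes a harmless no-op, and one check at the end covers the whole chain.

#include <cstdio>

class Calc
{
public:
    explicit Calc(double v) : value_(v), failed_(false) {}

    Calc &div(double d)
    {
        if (!failed_)
        {
            if (d == 0.0)
                failed_ = true;     /* record the failure, keep going harmlessly */
            else
                value_ /= d;
        }
        return *this;               /* allow chaining either way */
    }

    Calc &add(double d)
    {
        if (!failed_)
            value_ += d;
        return *this;
    }

    bool ok() const      { return !failed_; }
    double value() const { return value_; }

private:
    double value_;
    bool failed_;
};

int main(void)
{
    Calc c(100.0);
    c.div(4.0).add(1.0).div(0.0).add(5.0);   /* chain away; the error just propagates */

    if (!c.ok())
    {
        fprintf(stderr, "calculation failed\n");
        return 1;
    }
    printf("%f\n", c.value());
    return 0;
}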

And there is the final disadvantage of using exceptions instead of explicit instantiation and destruction, though this is not specific to exceptions per se. It is not at all clear whether a particular usage pattern is doing something dumb or not if you can just chain stuff together with implicit construction and destruction. Sure, you can create resource leaks with explicit destruction, but at least you can see if you are creating and destroying stacks of objects needlessly, and you can bail out in a controlled manner at any stage you like. It is also explicit that something which might fail is happening so you generally have a clear idea what failed if your program crashes at a particular location.

To this point, I haven’t even examined how the actual language exception implementation could be problematic, either. Suppose you have a language with explicit memory management like C++. Now suppose exceptions are objects. You throw an exception, but you have to dynamically allocate it somehow because it would go out of scope as the stack unwinds otherwise. How do you ensure the freshly minted exception gets freed correctly after the exception is handled? What if the object was a static object instead of a dynamic one (which also wouldn’t go out of scope)? What if memory allocation fails when creating the new exception object? Yowza! Things can go wrong in a hurry! Of course, in a language with automatic memory management, like javascript or java, this is much less of a problem. (Though what do you do if the exception object fails to initialize? Throw an exception?)

In short, in my never humble opinion exceptions are useless syntactic HFCS (high fructose corn syrup). They taste great but have at best dubious benefits and more likely are deleterious to maintainability or correctness in the long run. Even when well implemented using something less verbose than try/catch, they do not seem to be a substantial improvement over classical error handling. And, if they are only used for truly exceptional circumstances, the classical error handling would still need to be present for many circumstances (non-exceptional failures).

There is nothing wrong with explicit error handling inline in cleanly structured code. If you think you need exceptions to make your code clear, you are going down the wrong path.