“Personal” Computing Long Term

It may be inconceivable to many people today, but the days of everybody having a personal computer with massive capabilities sitting around mostly idle, used only to read email or write a letter to grandma, are numbered. The vast majority of people do not need a full-blown, totally independent computer. They don’t do anything remotely complex or out of the ordinary on it. Or, if they do, they don’t do it often enough to warrant such a resource-intensive device.

As technology develops, it becomes clearer and clearer that most people just want to get their email, write their letters, and maybe balance their chequebooks. Very few people need a two-dozen-core monster with four ganged high-power video cards connected to more screen real estate than Times Square. If you need it, you know it, and you likely know what you’re doing and what you’re going to pay for it. For the rest of the world, for whom tablets or smartphones seem to be sufficient, any sit-down computing done at a keyboard and monitor certainly does not need even the low-powered computers currently available.

Now suppose that instead of every person having a desktop computer with monitor, keyboard, mouse, etc., plugged in and sucking up power, most people instead had a much lower-power device, more akin in processing power to a high-end smartphone, attached to a reasonably sized screen, keyboard, and mouse. Even that alone reduces the power footprint noticeably, and if you multiply that by hundreds of millions, you’re suddenly talking about real energy reduction.
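To get a feel for the scale of that reduction, here is a back-of-envelope sketch. Every figure in it is an illustrative assumption of mine (the post gives no numbers): a desktop drawing roughly 100 W, a phone-class thin client around 10 W, 300 million users, four hours of use a day.

```python
# Back-of-envelope: replace N desktops with low-power thin clients.
# ALL figures below are illustrative assumptions, not measurements.
DESKTOP_W = 100      # assumed average draw of a mostly idle desktop
THIN_CLIENT_W = 10   # assumed draw of a phone-class device
USERS = 300e6        # "hundreds of millions"
HOURS_PER_DAY = 4    # assumed daily usage

saved_w = (DESKTOP_W - THIN_CLIENT_W) * USERS          # instantaneous savings
saved_twh_per_year = saved_w * HOURS_PER_DAY * 365 / 1e12
print(f"~{saved_twh_per_year:.0f} TWh/year saved")
```

Even with these rough guesses the savings land in the tens of terawatt-hours per year, which is the "real energy reduction" scale the paragraph is gesturing at.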

Now suppose you take some of that energy reduction and budget it for larger centralized processing resources. Make those resources available on a usage basis to people who occasionally need something more than their TinyBox™ device, say to render an edit of Christmas at Uncle Fred’s, or whatever. Or maybe they need to store some files that are larger than the TinyBox™ can handle, or maybe they want to be able to access them from other TinyBox™ devices.

Sure, it sounds like we would rapidly get back to the same energy consumption we had before. However, once the installations reach a certain usage level, something magical happens: the resources actually required to service the users’ needs stop rising linearly and start to look more like a logarithm. This is the so-called economy-of-scale curve. It works because, as the number of users increases, the likelihood of too many users needing resources at exactly the same time decreases, so the total computing power required shrinks relative to the sum of every user’s peak. If the controller software further prioritizes particular types of workloads, the overall load can be spread into off-peak times, reducing the peak further still.
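The sublinear-scaling argument above can be sketched numerically. Suppose each user independently needs a heavy-compute slot some fraction of the time, and the provider sizes capacity so that demand is satisfied almost always. The 5% duty cycle and 99.9% service target below are my illustrative assumptions, not figures from the post; the binomial model is the standard statistical-multiplexing toy model, not anything the author specifies.

```python
from math import exp, lgamma, log

def binom_logpmf(n, c, p):
    """Log of P(X = c) for X ~ Binomial(n, p), in log space to
    avoid underflow for large n."""
    return (lgamma(n + 1) - lgamma(c + 1) - lgamma(n - c + 1)
            + c * log(p) + (n - c) * log(1 - p))

def capacity_needed(n_users, p_active=0.05, sla=0.999):
    """Smallest capacity c such that simultaneous demand exceeds c
    less than (1 - sla) of the time, under the binomial model."""
    cdf = 0.0
    for c in range(n_users + 1):
        cdf += exp(binom_logpmf(n_users, c, p_active))
        if cdf >= sla:
            return c
    return n_users

for n in (10, 100, 1000, 10000):
    c = capacity_needed(n)
    print(f"{n:>6} users -> capacity {c:>4} ({c / n:.0%} of aggregate peak)")
```

Running this shows exactly the effect described: with a handful of users you must provision a large fraction of the theoretical peak, but as the user count grows the required capacity per user falls steadily toward the average duty cycle.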

Of course, all of this will require a major paradigm shift from users, and it will require solid infrastructure from the providers of the centralized services. Additionally, it will likely only ever be viable in areas with high population density, like city centres or high-rises. Still, it will come as energy budgets become tighter. In fact, the end result will likely be that any portable device one has will interface directly with the chosen centralized resources, and even having a fixed display and keyboard installation will become much less common.

Yes, people, I know I have just described “the cloud”. While I don’t see how “the cloud” as implemented today is at all a good idea, ultimately it will happen, and when it gets here it will have a net beneficial effect on the environment – and not just on so-called “emissions”; there’s more to an environmental footprint than CO₂, people!


2 thoughts on ““Personal” Computing Long Term”

  1. It will happen despite the efforts, and in part because of the efforts, of Microsoft, Apple, and the like. Oddly, though it sounds like a bit of a contradiction, Microsoft and Apple want end users to buy their products. Portable devices are popular and so damn convenient. So Microsoft and Apple will push their portable operating systems and devices even while they lose share on desktops and laptops, which arguably have been the source of the lion’s share of their profits.

    That said, it’s amazing what some of these portable devices are capable of doing. My current phone, a Samsung Galaxy Note (obviously slightly dated), is capable of, and has been used for, such things beyond the usual web browsing and text messaging as:

    * Tethering
    * Acting as a flashlight
    * Email client
    * Editing text/documents
    * Compiling code (yes, my phone has applications such as gcc/g++ and javac! O_o )
    * Telnet/FTP/SSH/SFTP servers and clients
    * VPN
    * Collaborating in and contributing to research projects (e.g. pressureNET)

    I still primarily use a desktop, but my phone has become one of my most important peripherals. Typically, I’m ssh’d into my desktop via my laptop or phone, letting my desktop do most of the heavy lifting while I browse and do “fun stuff” on the other devices.

    1. Indeed. I have a Note II and I do much of the same stuff with it. Before I had it, I used a netbook tethered to an older Razr. I still use the netbook when I need to do something substantial since even a tiny too-small keyboard is better than the touch screen for doing any amount of typing. I also have a full sized notebook that I use as a portable workstation to set up for longer term work at a particular location. But my desktop is always online (modulo internet and power problems) and I often end up logging into it to do things simply because it’s more powerful and I don’t have to worry about battery power or what have you.

      It’s not hard to see how this will evolve into centralized computing infrastructure eventually.
