Archive

Archive for the ‘Computing’ Category

The future of mobile devices

24.09.2015

There are two main directions of evolution for mobile devices: sensors and form factor.

While there’s been some development in sensor technology, there are still a lot of possibilities, so I’m looking forward to seeing a startup make advances in this direction.

Pocket mobile devices are quite popular, but the era of wearable computing is yet to come. Wrist devices seem promising. The current wave of smartwatches has barely scratched the surface. Apple and the other manufacturers just don’t get it, so one day some company will come along and create the perfect wrist computer.

Below is a breakdown of what an actually useful wrist computer could do.

Readable screen

A computer-like backlit mini-panel is a joke. It consumes a lot of energy, so it must be turned off when you’re not looking at it to save battery. The solution is obvious – take a look at the Kindle. Current Kindle screens are slow to refresh, but they consume very little power and are readable in all lighting conditions, from bright sunlight to darkness. The optional backlight in a wrist device could be automatic, conditional on day/night as well as the “look at me” gesture (like in current smartwatches).

No charging

I am supposed to charge a smartwatch… every day? That is the biggest joke. That is not the direction this technology should be going in. Solar cells and kinetic charging have been around for 30 years or more. It’s time these technologies were leveraged in wearable devices.

Voice commands

A touchscreen on a watch-sized computer, however useful and necessary, is really too small for most things, esp. typing. The input of such a computer should be primarily based on voice recognition. Regardless of whether it can just listen in for your commands or whether you need to press a button first (and it should be ready within milliseconds), the input should be prefiltered and the device should be able to tune in to the owner’s voice, so that it is usable even in loud or public places and does not accept commands from other people.

As for the actual speech recognition, solutions like SoundHound promise a future where you can say anything to your computer and it will understand you.

Hardened and waterproof

When the device does not need to be charged (or can optionally be charged wirelessly) and does not have any buttons, it could potentially be made waterproof. The main challenge for making it waterproof would still be the microphone and speaker.

Connectivity and apps

You can expect these from any kind of computer these days. You need all sorts of notifications around the clock and you need the ability to call people and see them at the same time. The main challenge here lies in energy consumption.

Lots of sensors

Finally, there are lots of useful sensors that could make this kind of device a really useful companion:

  • GPS, accelerometer, gyroscope, compass – nothing new here, you can expect these types of sensors in every mobile device these days. They can tell you your location on demand, figure out what you’re doing with the device (or with your hand), etc. I’m looking forward to being able to use GPS without an active data connection, a feature the mobile Google Maps app has apparently forgotten about – or maybe they can’t do that due to patents (all hail patent law!).
  • Temperature, pressure and humidity – it would be useful if your watch could tell you these.
  • Heart rate and blood pressure – if you are in pursuit of healthy living, you need to know these.
  • Spectrometer – if you go to a restaurant, you want to know whether the fish you’ve been served is contaminated with heavy metals. Even though spectrometers are still the domain of extraterrestrial rovers, someday they will revolutionize the way we shop and help us avoid unhealthy substances served to us by the food industry.

So would you wait for a wrist computer that is useful and not tedious, or can the coolness of the Apple Watch or Apple Watch-like smartwatches lure you?

Categories: Computing

Can floating point representation be improved?

24.07.2015

If you have ever written any code that uses floating point numbers, you are probably aware that the widespread floating point format, also known as IEEE 754, is full of caveats.

Due to the representation used, floating point computations are often numerically unstable. The errors of certain operations grow rapidly, results depend on the order of operations, and you can obtain different results on different architectures.
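
To make the order dependence concrete, here is a minimal JavaScript sketch (JavaScript’s Number is an IEEE-754 double, so it exhibits exactly these caveats); the values are arbitrary, chosen only to make the rounding visible:

// Floating point addition is not associative: the result depends on
// the order in which the additions are performed.
var a = 1e16;
var b = -1e16;
var c = 1;
console.log((a + b) + c); // 1
console.log(a + (b + c)); // 0 - the 1 is lost when added to -1e16

// The classic decimal rounding surprise:
console.log(0.1 + 0.2 === 0.3); // false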

Here is an incredibly promising proposal for unum, a number format which improves on the current floating point format in all areas: John Gustafson Explains Energy Efficient Unum Computing.

There are many areas where this could provide improvements, not only in the quality of programs but also in energy efficiency.

In particular, scripting languages would benefit from it a lot. For example, JavaScript has a single Number type. This creates a lot of problems: it is hard to make computations reliable, and integer precision is limited. Using a floating point representation for integers was a very bad design choice for JavaScript. With unum, this could change and a single numeric format would make sense.
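
A quick sketch of the integer precision limit mentioned above; it follows directly from the 53-bit significand of an IEEE-754 double:

// Integers stored in a double are exact only up to 2^53.
var limit = Math.pow(2, 53);          // 9007199254740992
console.log(limit === limit + 1);     // true - 2^53 + 1 is not representable
console.log(limit - 1 === limit - 2); // false - still exact below 2^53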

Another example is statically typed languages, where an elaborate type system leads to headaches. With unum, this would no longer be a problem and the type system could be simplified, especially because unums can (but do not have to) be represented with a variable number of bits. Integer calculations can be carried out as fast with unums as they can in the regular two’s complement representation, and at the same time unums support floating point numbers.

Will unum prove to be useful and become widespread? I certainly hope so.

Categories: Computing

C++ wrapper for WinAPI

16.04.2015

By popular request, I have just uploaded the full WinAPI Wrapper library source code to GitHub.

[WinAPI Wrapper on GitHub]

This library is a very thin C++ wrapper for the Win32 API, which is a C API. The overhead of the wrapper is close to zero.

The last modifications to this source date from 2003; development ceased around that timeframe. I haven’t done any development on or for Windows for many years now, and I rarely even fire up Windows these days. The library can still be used for writing a wide range of apps, although I imagine that the Win32 API has evolved since then and the library could benefit from some adjustments.

Whether I will maintain it further remains to be seen. 🙂

Categories: Computing

Which programming language is the best?

15.11.2014

In this fascinating study, the authors analyzed commits on GitHub to see what effect programming languages have on the number of bugs.

Surprisingly, it does not seem to matter which programming language you choose – you will have the same number of bugs in your code.

The main difference between programming languages is in the types of bugs that programs written in them have. That’s no surprise though: for example, while you can easily leak memory in C++, it’s quite hard to do so in Python.

I suppose what matters is who writes the code, not how? This would explain why it is so hard to find bugs in Prof. Donald E. Knuth’s code.

Programming languages are for humans, they are essentially a human interface for compilers. In the future, when programmers are replaced by AI, programming languages will disappear as AI will produce optimal machine byte code and all software will be bug-free.

Categories: Computing

From iOS to Android

29.07.2014

My iPod Touch is almost 4 and it’s growing old. I decided to swap it for a cheap Android phone. How did it go? Below are my impressions.

But first, what is an iPod Touch? Well, for all practical purposes, an iPod Touch is just an iPhone without the phone. Mine was a 4th generation device, the first one with a retina display. I always thought that the price difference between the iPhone and the iPod Touch was just too big; it’s not worth paying the extra bucks for the phone functionality. Plus, this was back in the US, where cell phone plans are crazy expensive. Instead, I always carried a cheap, old, simple phone with a prepaid SIM card. You have to understand that I don’t use a cell phone much, on average maybe ~5 times a month. Data plans? I’ve always had WiFi at home and at work.

The iPod Touch has been a great companion for all these years, so to speak.

  • It has a great build quality (metal case, Gorilla glass screen).
  • The retina display is perfect as the human eye cannot distinguish separate pixels. Everything looks crisp and smooth.
  • Obviously it has all the goodness you can expect from a mobile device of this class, including useful apps for e-mail, news, calendar, and much more.
  • Obviously there are lots of games for it, useful esp. in a waiting room.
  • I could take it everywhere. My notes, my calendar, always at hand.
  • I could watch Netflix in bed. And it doubled as an alarm clock.

On the down side:

  • I had to carry a separate phone device, for emergencies.
  • I sometimes missed the smartphone function, esp. when away from home and without WiFi access – e.g. I couldn’t read news or use maps. Also, it has no GPS.
  • It started showing its age. It was the last iPod Touch with a single-core CPU, so some apps are really slow on it, including the web browser.
  • Only 2 days on a single charge under light use. I do have to give it credit though: after almost four years I still don’t see any battery deterioration.

Finally, my old simple phone’s battery started giving up and I got fed up with my prepaid plan. While looking for a new cell phone plan, I discovered that you can actually get a new phone for free with one.

Now, let me tell you about one of the differences between Europe and the US. In the US, cell phone plans really are a rip-off. For example, the cheapest plans go for $40 a month; maybe you can find a $30 one these days if you are lucky. But in Europe, you can really afford a plan and you don’t regret it. For example, my new plan is £5 a month and it comes with a free Android phone. It is a two-year contract, but hey, it’s only £5 a month! Free family and home calls – my most frequent call destinations. Over the course of two years, I expect to pay about the same amount I paid for the iPod Touch – and for this price I get phone service with data, useful occasionally and in emergencies.

So how does the Android phone stand up against the iPod Touch?

This is the LG D160, also marketed as LG L40.

Let’s get the obvious stuff out of the way:

  • It came free with the plan, so no wonder the build quality is worse. It’s made of plastic, both the body and the screen, so I don’t expect great durability.
  • While the LG’s physical screen dimensions are the same as the iPod Touch’s, the resolution is lower, so individual pixels are visible, along with anti-aliasing artifacts and the like.
  • It’s Android, so I don’t expect to get any updates. I picked the only phone which had the latest Android 4.4 (KitKat); the list of phones on offer notably included much more expensive models still running Android 2.3 (Gingerbread). In contrast, my iPod Touch received two major version updates over the years until Apple gave up on it. I’m still waiting for an Android phone manufacturer that provides updates for a couple of years.

Now, after a few days of use and despite the above drawbacks, I am actually pleasantly surprised and expect to have a good time with this device, which may successfully replace the four-year-old iPod Touch.

  • It has a dual core CPU, which outruns the old iPod Touch’s CPU in most cases. For instance, the browser is much snappier. Apps start quicker.
  • I installed most, if not all the important apps I used on the iPod Touch. Either the same or equivalent apps are available on Android. Many of the apps synced data without any problems and delivered the same experience out of the box, like the Calendar app, which obviously works with Google Calendar on all kinds of devices. The migration to Android was pain-free.
  • Just by trying something new, I learned new stuff. For example, the Stocard app stores loyalty/rewards/gift cards, so there’s no need to carry them in the wallet anymore. (This applies to iOS too; I just found it so useful that I thought it was worth mentioning.)
  • While the build quality is poorer and the screen is the same size, this LG phone actually feels better in my hand than the iPod Touch, even though it’s probably a few grams heavier. I’m not sure yet whether that’s because it’s thicker or shorter, but it is somehow better to hold.
  • I don’t know why iOS feels better put together. Maybe it’s because the LG’s screen is pixelated and generally worse, while the iPod Touch’s retina display was smooth and crisp? Maybe it’s something about the graphic design and fonts? I’m not sure. Maybe it’s just a matter of getting used to it.
  • Maybe this goes back to the CPU speed, but the LG’s home screen scrolling parallels that of the iPod Touch. The scrolling on many Android phones I’ve seen in the past used to be choppy, but on this one it is smooth.
  • The drop-down control center, dragged from the top of the screen on Android, is much better than the equivalent functionality on iOS. For example, turning data, WiFi, etc. on and off is super easy.
  • I miss notifications on the lock screen like on iOS, where I didn’t have to unlock the device to see the notifications.
  • The keyboard and typing were actually better on iOS. The keys were slightly bigger and clearer, and the key-press indicator was more visible. Also, switching between letters/numbers/symbols is slower on Android. I could type faster on iOS – although typing is something you want to avoid on pocket devices like these anyway.
  • I haven’t used the built-in GPS much, but it seems to work. It may come in handy one day. The iPod Touch didn’t have one.
  • I don’t care too much about the quality of the built-in camera, because there is no way a cheap lens and sensor like this can parallel those of a DSLR. But a review wouldn’t be any good if it didn’t mention the camera, so below are photos taken by the iPod Touch 4 and the LG D160. They were taken at the same time, so the lighting conditions and the scene are identical. Judge for yourself.
    • iPod Touch 4: [photo]
    • LG D160: [photo]

Although I hope to replace it with a device of better build quality at some point, I think I will get a good run out of the LG D160. I would certainly recommend it to a friend with an old iPod Touch.

Categories: Computing

OO design and classes in JavaScript

2.05.2014

JavaScript is a language which has a lot of crufty syntax, but underneath the cruft it has many useful features.

One of the problems people encounter when coming to JavaScript with experience from other languages is that there are no classes in JavaScript.

The rest of this post assumes you have basic knowledge of JavaScript.

Here are two basic ways to create objects in JavaScript:

// The most common way - using constructor function
function Point(x, y)
{
    this.x = x;
    this.y = y;
}
var p = new Point(1, 2);

// Using a create function
function CreatePoint(x, y)
{
    return { x: x,
             y: y };
}
var p = CreatePoint(1, 2);

This is not exactly object-oriented programming, is it? Let’s say we stick with it – how do we introduce inheritance? JavaScript has prototypal inheritance, which is not how most developers understand inheritance. Let me give you an example:

// The prototype
function Base(a)
{
    this.a = a;
    this.print = function() {
        console.log("a=" + this.a + ", b=" + this.b);
    };
}
var proto = new Base(0);

// Usable constructor
function Derived(b)
{
    this.b = b;
}
Derived.prototype = proto;

// Classic approach to adding more members to the prototype
Derived.prototype.hello = function() {
    console.log("Hello!");
};

var o1 = new Derived(1);
var o2 = new Derived(2);

o1.print(); // prints: a=0, b=1
o2.print(); // prints: a=0, b=2

proto.a = -1;

o1.print(); // prints: a=-1, b=1
o2.print(); // prints: a=-1, b=2

First observation: objects created by the Derived constructor share the same instance of the prototype, not a copy. If the prototype object changes, all objects which use this prototype see these changes.

Second observation: the base “class” is non-customizable from the Derived constructor. We don’t call the Base constructor from the Derived constructor. One workaround would be to add an Init function in the prototype, which would set some members of the object.
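
A minimal sketch of that Init-based workaround might look like this (the Init name and the exact split between prototype and constructor are just one possible choice):

// The prototype exposes an Init() which the Derived constructor calls
// to set per-instance members, imitating a base constructor call.
function Base()
{
    this.Init = function(a) {
        this.a = a; // sets a member on the instance being constructed
    };
    this.print = function() {
        console.log("a=" + this.a + ", b=" + this.b);
    };
}

function Derived(a, b)
{
    this.Init(a); // "call the base constructor" by hand
    this.b = b;
}
Derived.prototype = new Base();

var d = new Derived(1, 2);
d.print(); // prints: a=1, b=2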

Third observation: if we have lots of functions and members in the base “class”, prototypal inheritance can in theory save on memory (the same members are not duplicated across all instances).
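
This sharing is easy to see with the objects from the example above:

// Both instances resolve print() to the single function object stored
// on the shared prototype, so only one copy exists in memory.
console.log(o1.print === o2.print);      // true
console.log(o1.hasOwnProperty("print")); // false - print lives on the prototype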

Fourth observation: there is no such thing as private properties.

“Real” classes in JavaScript

Contrary to what most people think, constructor functions may be used the same way as classes are used in other OO languages.

The key to taking JavaScript to the next level is closures. A closure is formed when an inner function (one defined inside another function) accesses a variable of the outer function; that variable is then captured by the inner function. In JavaScript a very interesting thing happens with closures: the captured variables survive the end of the function which declared them and are still usable in the inner functions which access them.
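
Here is a minimal sketch of a closure variable outliving the function which declared it (makeCounter and next are names invented for this example):

function makeCounter()
{
    var count = 0; // becomes a closure variable
    return function() {
        count += 1; // still accessible after makeCounter() has returned
        return count;
    };
}

var next = makeCounter();
console.log(next()); // 1
console.log(next()); // 2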

Let’s get on with it: Here is an idiom which lets us create classes in disguise using constructor functions, just like in any other OO language.

// Class (constructor function)
function Rectangle(x1, y1, x2, y2) // Constructor arguments
{
    // Private variables
    var w = x2 - x1;
    var h = y2 - y1;

    // Note: reuse arguments as members!

    // Public functions
    this.getWidth = function() {
        return w;
    };
    this.getHeight = function() {
        return h;
    };
    this.getArea = function() {
        return this.getWidth() * this.getHeight();
    };

    // Accessor
    this.getX1 = function() {
        return x1;
    };
}

// Usage
var o = new Rectangle(1, 1, 3, 4);
console.log(o.getArea()); // 6

What about inheritance? Easy:

function Square(x, y, size)
{
    // Call base class constructor on this object
    Rectangle.apply(this, [x, y, x + size, y + size]);

    // Other Square-specific members follow...
}

var s = new Square(1, 1, 2);
console.log(s.getArea()); // 4

Last but not least, here is a very useful idiom for member functions whose work completes later via a callback, handy in user interfaces:

function delay(time, func)
{
    window.setTimeout(func, time); // one-shot timer (setInterval would fire repeatedly)
}

function SomeObject(x)
{
    // Save 'this' for lambdas
    var self = this;

    var v = x * x;
    this.publicV = x * x * x;

    this.getValue = function() {
        return v;
    };

    // Private member
    var alterValue = function(newx) {
        v = newx * newx; // access private variable
        self.publicV = newx * newx * newx; // access 'this' via 'self'
    };

    this.setValues = function(x1, time, x2) {
        alterValue(x1);
        delay(time, function() {
            // 'this' is bound to something else in a lambda function,
            // use self instead
            alterValue(x2);
        });
    };
}

var o = new SomeObject(2);
console.log(o.getValue());     // 4
o.setValues(5, 1000, 6);
console.log(o.getValue());     // 25

// Wait more than 1 second – after at least one second, the value will have changed again
delay(2000, function() {
    console.log(o.getValue()); // 36
});

Summary

In effect, the above approach works like classes in other object-oriented languages. Arguably, it’s cleaner than the typical prototype-based approach, in which one assigns members to a prototype outside of the constructor function and the guts of the object end up scattered around the source file(s).

I haven’t measured the performance of the above approach versus a prototype-based approach, but my gut feeling is that modern JavaScript engines deal with it comparably well.

Categories: Computing

Boilerplate

6.10.2013

Programming languages have different levels of verbosity. Some languages have terse syntax, so you need less text to express what you want the computer to do. Others require you to type elaborate constructs, often multiple times, to achieve the same thing.

Usually you don’t have a choice of programming language. You are hired by a company which already has some existing code, and you have to work with that code base. Or you are targeting a specific platform and have no choice but to use a particular language.

Regardless of the language you use, you still have to make many choices when designing the software you write, and the choices you make will contribute to the size of the source code and may indirectly affect maintainability, extensibility and robustness.

So what makes a program a good program? I have one theory.

Copy&paste

They don’t teach how to write good programs in schools. Most schools only teach you the mechanics of programming: they show you the tools, but they don’t teach you how to use them effectively.

My programming adventure started in high school with Turbo Pascal. One of my first projects was a simple game. One day I found a bug and realized that I had already fixed it once in another function. I noticed that both pieces of code containing the bug had originally been copied from yet another function.

This was one of my first lessons, and as a programmer you never stop learning. The lesson learnt was that the copy&paste approach to programming is a bad practice. If you have to modify one piece of copied code for whatever reason, you likely have to modify all the copies – that’s a lot of unnecessary manual labor, which is something programmers hate. If you have just written a new expression which looks similar to or exactly like an existing piece of code, you should instead put it in a new function and call that function in both places.

Summary: copy&paste == bad programming practice.

Beyond copy&paste

Not too long ago I was reading some articles criticizing C++, the language I use the most. One of the valid points was that C++ makes you type the same code in more than one place. A typical location of duplicated code is class definitions: first you define a class in a header file, typing the function declarations there, then you type exactly the same function signatures in the source file where you define the functions.

class Vehicle {
public:
    void StartMotor();
    void Accelerate(double acc);
};

void Vehicle::StartMotor() // you had to type this again!
{
    :::
}

void Vehicle::Accelerate(double acc) // and this too!
{
    :::
}

If you later need to modify a function, e.g. change the number of arguments or their types, you have to do it at least twice. It is actually even worse if you have derived classes which override virtual functions: to change the interface, you have to change it in 2*C places, where C is the number of classes which declare that function.

Yet worse, it may happen that you change the function’s signature in the derived class but forget to change it in the base class. As a result, you will have a bug in your program: if you use a pointer to the base class to call the function, the function from the derived class will not be called, since it has a different signature. Fortunately, modern compilers issue a warning when this happens, but you still have to write the same piece of code twice.

Sounds like copy&paste? Well, that’s how I write new classes in C++: I declare the class’s functions in a header file, then copy them to a source file and use an editor macro to expand them by removing the semicolons and adding braces on new lines.

Boilerplate code

Welcome to boilerplate code. The definition of boilerplate code is exactly that: redundant code which you have to write to make a well-formed program, but which is unnecessary from your perspective as a programmer.

But the notion of boilerplate code extends beyond what you strictly have to write to make a well-formed program. Consider a C++ program where you use the well-known libpng library to load a PNG image. In the basic version, you could write a single function like this:

bool LoadPng(const char* filename, std::vector<char>* image)
{
    :::
}

Inside the function you call the PNG library, which has a C interface, to verify that the file exists and is a PNG; then you load the headers, determine the dimensions and color format of the image, and finally load the image data. Without going into the details, here is how a piece of that function could look:

bool LoadPng(const char* filename, std::vector<char>* image)
{
    :::

    png_structp png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    if (!png_ptr)
    {
        printf("Failed to create png structure\n");
        return false;
    }

    png_infop info_ptr = png_create_info_struct(png_ptr);
    if (!info_ptr)
    {
        printf("Failed to create png info structure\n");
        png_destroy_read_struct(&png_ptr, (png_infopp)NULL, (png_infopp)NULL);
        return false;
    }

    :::
}

This is maybe 10% of the code you have to write to load a PNG file. There are only two lines above which actually do something potentially useful or necessary. The rest is boilerplate code.

In particular, notice that every place an error can potentially occur, you have to handle it, so you end up writing error handling code many, many times. If an error occurs later in the function, you have to free the already allocated data structures before returning, and you have to write exactly the same cleanup code for every call which could fail.

I bet there are many programs out there which have a bug in their PNG loading code and don’t handle the error conditions fully correctly, so in some circumstances these programs will leak memory or behave unpredictably.

Now, the situation above can be improved by writing a C++ wrapper class for loading PNGs. But I wouldn’t go too far with it, or we will just shift the boilerplate code elsewhere instead of reducing it.

I can imagine somebody writing a PNG loader class and declaring one member function per function of the libpng library. Such a PNG loader class would simply build a C++ interface over the library’s C interface. That approach may be appealing to some, but the problem is that 90% of that class would be… boilerplate code! There would be only one place in the whole program where this class is used – the LoadPng() function. So all that code would be written in vain, would only be a maintenance chore, and would be a potential place for bugs to hide. Moreover, the compiler would generate more unnecessary code, contributing to the program’s final size.

class PNGLoader {
public:
    PNGLoader();
    ~PNGLoader();
    static bool SigCmp(...);
    void CreateReadStruct(...); // throw on error
    void CreateInfoStruct(...); // throw on error
    :::
    void ReadImage(...); // throw on error
};

A fact of life is that many programmers call the above approach a “design”. They create beautiful class designs and hierarchies which only take up space and engineering time, but contribute little to the program.

And if you happen to be a C++ hater, please know that the above problem affects not only object-oriented languages, but all programming languages in general. Programmers often put too much work and thought into the form instead of focusing on the content.

So it seems to me that the best approach is to preserve a balance between the amount of code and functionality. Sure, you can write a beautiful command line argument parser, but what good is it if your program only handles two arguments anyway? Handle them correctly, but avoid too much boilerplate code which you will never use.

In the case of the PNG loader, a good choice is a single function like LoadPng() which internally uses something like Boost.ScopeExit to handle errors and corner cases. Boost.ScopeExit is actually a good way of safely handling many kinds of resources in C++.

Quality

In general, programs in source code form consist of:

  1. Comments and whitespace, generally harmless if used wisely and not used to comment out dead code,
  2. Data structures, describing the internal state of the program,
  3. Algorithms, which are mathematical transformations of the program state, and last but not least:
  4. Boilerplate code, which clutters the programs, makes them harder to understand, hides bugs and generally causes programs to be big and slow.

To write good programs, avoid boilerplate code like the plague. It’s not the only rule for writing good programs, but I think it’s an important one.

Categories: Computing