Archive

Archive for the ‘Computing’ Category

Which programming language is the best?

15.11.2014 Leave a comment

In this fascinating study, the authors analyzed commits on GitHub to see what effect programming languages have on the number of bugs.

Surprisingly, it does not seem to matter which programming language you choose – you will have the same number of bugs in your code.

The main difference between programming languages is what types of bugs programs written in them have. That's no surprise though: for example, while you can leak memory in C++, it's quite hard to leak it in Python.

I suppose what matters is who writes the code, not how? This would explain why it is so hard to find bugs in prof. Donald E. Knuth’s code.

Programming languages are for humans, they are essentially a human interface for compilers. In the future, when programmers are replaced by AI, programming languages will disappear as AI will produce optimal machine byte code and all software will be bug-free.

Categories: Computing

From iOS to Android

29.07.2014 Leave a comment

My iPod Touch is almost four years old and it's growing old. I decided to swap it for a cheap Android phone. How did it go? Below are my impressions.

But first, what is an iPod Touch? Well, for all practical purposes, an iPod Touch is just an iPhone without the phone. Mine was a 4th-generation device, the first generation with a retina display. I always thought the price difference between the iPhone and the iPod Touch was too big; it's just not worth paying the extra bucks for the phone functionality. Plus, that was back in the US, where cell phone plans are crazy expensive. Instead, I always carried a cheap, simple old phone with a prepaid SIM card. You have to understand that I don't use a cell phone much, on average maybe ~5 times a month. Data plans? I've always had WiFi at home and at work.

The iPod Touch has been a great companion all these years.

  • It has a great build quality (metal case, Gorilla glass screen).
  • The retina display is perfect as the human eye cannot distinguish separate pixels. Everything looks crisp and smooth.
  • Obviously it has all the goodness you can expect from a mobile device of this class, including useful apps for e-mail, news, calendar, and much more.
  • Obviously there are lots of games for it, useful esp. in a waiting room.
  • I could take it everywhere. My notes, my calendar, always at hand.
  • I could watch Netflix in bed. And it doubled as an alarm clock.

On the down side:

  • I had to carry a separate phone device, for emergencies.
  • I sometimes missed the smartphone function, esp. when away from home and without WiFi access – e.g. I couldn’t read news or use maps. Also, it has no GPS.
  • It started showing its age. It was the last iPod Touch with a single-core CPU, so some apps are really slow on it, including the web browser.
  • Only 2 days on a single charge under light use. I do have to give it credit though: after almost four years I still don’t see any battery deterioration.

Finally, my old simple phone's battery started giving up and I got fed up with my prepaid plan. While looking for a new cell phone plan, I discovered that you can actually get a new phone for free with one.

Now, let me tell you about one of the differences between Europe and the US. In the US, cell phones really are a rip-off: the cheapest plans go for $40 a month, maybe $30 these days if you are lucky. In Europe, you can actually afford one without regrets. For example, my new plan is £5 a month and comes with a free Android phone. It is a two-year contract, but hey, it's only £5 a month! Free family and home calls – my most frequent call destinations. Over the course of two years, I expect to pay about what I paid for the iPod Touch – and for that price I get phone service with data, useful occasionally and in emergencies.

So how does the Android phone stand up against the iPod Touch?

This is the LG D160, also marketed as LG L40.

Let’s get the obvious stuff out of the way:

  • It came free with the plan, so no wonder the build quality is worse. Both the body and the screen are plastic; I don't expect much durability.
  • While the LG's physical screen dimensions are the same as the iPod Touch's, the resolution is lower, so individual pixels are visible, along with anti-aliasing artifacts.
  • It's Android, so I don't expect to get any updates. I picked the only phone on offer with the latest Android K (4.4 KitKat); the list of phones sold notably included much more expensive phones still running Android G (2.3 Gingerbread). In contrast, my iPod Touch received two major version updates over the years before Apple gave up on it. I'm still waiting for an Android phone manufacturer that provides updates for a couple of years.

Now, after a few days of use and despite the above drawbacks, I am actually pleasantly surprised and expect to have a good time with this device, which may successfully replace the four-year-old iPod Touch.

  • It has a dual core CPU, which outruns the old iPod Touch’s CPU in most cases. For instance, the browser is much snappier. Apps start quicker.
  • I installed most, if not all the important apps I used on the iPod Touch. Either the same or equivalent apps are available on Android. Many of the apps synced data without any problems and delivered the same experience out of the box, like the Calendar app, which obviously works with Google Calendar on all kinds of devices. The migration to Android was pain-free.
  • Just by trying something new, I learned new stuff. For example, the Stocard app holds loyalty/rewards/gift cards, so there is no need to carry them in the wallet anymore. (This applies to iOS too; I just found it so useful that I thought it was worth mentioning.)
  • While the build quality is poorer and the screen has the same size, this LG phone actually feels better in my hand than the iPod Touch, even though it's probably a few grams heavier. I'm not sure yet whether it's because it's thicker or shorter, but it is somehow better to hold.
  • I don't know why iOS feels better put together. Maybe it's because the LG's screen is pixelated and generally worse, while the iPod Touch's retina display was smooth and crisp? Maybe it's something about graphics design and fonts? I'm not sure. Maybe it's just a matter of getting used to it.
  • Maybe this goes back to the CPU speed, but the LG's home screen scroll quality parallels the iPod Touch's. Scrolling on many Android phones I've seen in the past used to be choppy, but on this one it is smooth.
  • The drop-down control center, draggable from the top on Android, is much better than the equivalent functionality on iOS. For example, turning data, WiFi, etc. on and off is super easy.
  • I miss notifications on the lock screen like on iOS, where I didn't have to unlock the device to see them.
  • The keyboard and typing were actually better on iOS. Keys were slightly bigger and clearer, and the key-press indicator was more visible. Switching between alpha/numeric/symbol layouts is also slower on Android, so I could type faster on iOS. Then again, typing is something you want to avoid on pocket devices like these anyway.
  • I haven’t used the built-in GPS much, but it seems to work. It may come handy one day. iPod Touch didn’t have it.
  • I don’t care too much about the quality of the built-in camera, because there is no way a cheap lens and sensor like this can parallel those of aDSLR. But a review wouldn’t be any good if it didn’t mention the camera. So below are the photos taken by the iPod Touch 4 and theLGD160. These photos were taken at the same time, so the lighting conditions were identical and the scene is the same as well. Judge it yourself.
    • iPod Touch 4: (photo)
    • LG D160: (photo)

Although I am hoping to replace it with a device of better build quality at some point, I think I will get a good run out of the LG D160. I would certainly recommend it to a friend with an old iPod Touch.

Categories: Computing

OO design and classes in JavaScript

2.05.2014 Leave a comment

JavaScript is a language which has a lot of crufty syntax, but underneath the cruft it has many useful features.

One of the problems people encounter when coming to JavaScript from other languages is that there are no classes in JavaScript.

The rest of this post assumes you have basic knowledge of JavaScript.

Here are two basic ways to create objects in JavaScript:

// The most common way - using constructor function
function Point(x, y)
{
    this.x = x;
    this.y = y;
}
var p = new Point(1, 2);

// Using a create function
function CreatePoint(x, y)
{
    return { x: x,
             y: y };
}
var p = CreatePoint(1, 2);

This is not exactly object-oriented programming, is it? Let’s say we stick with it, how do we introduce inheritance? JavaScript has prototypal inheritance, which is not how most developers understand inheritance. Let me give you an example:

// The prototype
function Base(a)
{
    this.a = a;
    this.print = function() {
        console.log("a=" + this.a + ", b=" + this.b);
    };
}
var proto = new Base(0);

// Usable constructor
function Derived(b)
{
    this.b = b;
}
Derived.prototype = proto;

// Classic approach to adding more members to the prototype
Derived.prototype.hello = function() {
    console.log("Hello!");
};

var o1 = new Derived(1);
var o2 = new Derived(2);

o1.print(); // prints: a=0, b=1
o2.print(); // prints: a=0, b=2

proto.a = -1;

o1.print(); // prints: a=-1, b=1
o2.print(); // prints: a=-1, b=2

First observation: objects created by the Derived constructor share the same instance of the prototype, not a copy. If the prototype object changes, all objects which use this prototype see these changes.

Second observation: the base “class” is non-customizable from the Derived constructor. We don’t call the Base constructor from the Derived constructor. One workaround would be to add an Init function in the prototype, which would set some members of the object.

Third observation: if we have lots of functions and members in the base “class”, prototypal inheritance can in theory save on memory (the same members are not duplicated across all instances).

Fourth observation: there is no such thing as private properties.

“Real” classes in JavaScript

Contrary to what most people think, constructors may be used the same way as classes are used in other OO languages.

The key to taking JavaScript to the next level is closures. A closure is formed when an inner function (one defined inside another function) accesses a variable of the outer function. In JavaScript a very interesting thing happens with closures: such variables survive the end of the function which declared them and remain usable in the inner functions which access them.

Let’s get on with it: Here is an idiom which lets us create classes in disguise using constructor functions, just like in any other OO language.

// Class (constructor function)
function Rectangle(x1, y1, x2, y2) // Constructor arguments
{
    // Private variables
    var w = x2 - x1;
    var h = y2 - y1;

    // Note: the constructor arguments (x1, y1, x2, y2) are also captured
    // and can be reused as private members!

    // Public functions
    this.getWidth = function() {
        return w;
    };
    this.getHeight = function() {
        return h;
    };
    this.getArea = function() {
        return this.getWidth() * this.getHeight();
    };

    // Accessor
    this.getX1 = function() {
        return x1;
    };
}

// Usage
var o = new Rectangle(1, 1, 3, 4);
console.log(o.getArea()); // 6

What about inheritance? Easy:

function Square(x, y, size)
{
    // Call base class constructor on this object
    Rectangle.apply(this, [x, y, x + size, y + size]);

    // Other Square-specific members follow...
}

var s = new Square(1, 1, 2);
console.log(s.getArea()); // 4

Last but not least, here is a very useful idiom for deferring work inside member functions, handy in user interfaces:

function delay(time, func)
{
    window.setTimeout(func, time); // run func once, after 'time' ms
}

function SomeObject(x)
{
    // Save 'this' for lambdas
    var self = this;

    var v = x * x;
    this.publicV = x * x * x;

    this.getValue = function() {
        return v;
    };

    // Private member
    var alterValue = function(newx) {
        v = newx * newx; // access private variable
        self.publicV = newx * newx * newx; // access 'this' via 'self'
    };

    this.setValues = function(x1, time, x2) {
        alterValue(x1);
        delay(time, function() {
            // 'this' is bound to something else in a lambda function,
            // use self instead
            alterValue(x2);
        });
    };
}

var o = new SomeObject(2);
console.log(o.getValue());     // 4
o.setValues(5, 1000, 6);
console.log(o.getValue());     // 25

// Wait more than 1 second; after one second the delayed callback
// inside setValues fires and the value changes again
delay(2000, function() {
    console.log(o.getValue()); // 36
});

Summary

In effect, the above approach works like classes in other object-oriented languages. Arguably, it’s cleaner than a typical prototype-based approach in which one assigns members to a prototype outside of the constructor function – the guts of the object in a typical prototype-based approach are scattered around the source file(s).

I haven’t measured the performance of the above approach versus a prototype-based approach, but my gut feeling is that modern JavaScript engines deal with it comparably well.

Categories: Computing

Boilerplate

6.10.2013 Leave a comment

Programming languages have different levels of verbosity. Some languages have terse syntax, so you need less text to express what you want the computer to do. Others require you to repeatedly type elaborate constructs, often multiple times, to achieve the same.

Usually you don’t have a choice of programming language. You are hired by a company who already has some existing code and you have to work with that code base. Or you are targeting a specific platform and you have no choice, but to use a particular language.

Regardless of the language you use, you still have to make many choices when designing the software you write, and the choices you make will contribute to the size of the source code and may indirectly affect maintainability, extensibility and robustness.

So what makes a program a good program? I have one theory.

Copy&paste

They don’t teach how to write good programs in schools. In most schools they only teach you the mechanics of programing: they show you the tools, but they don’t teach you how to use them effectively.

My programming adventure started in high school with Turbo Pascal. One of my first projects was a simple game. At some point I found a bug and realized that I had already fixed the same bug once in another function. Both pieces of code containing the bug had originally been copied from yet another function.

This was one of my first lessons, and as a programmer you never stop learning. The lesson learnt was that the copy&paste approach to programming is a bad practice. If you have to modify one piece of copied code for whatever reason, you likely have to modify all the copies – a lot of unnecessary manual labor, which is something programmers hate. If you are about to write a new expression which looks similar to or exactly like an existing piece of code, you should instead put it in a new function and call that function in both places.

Summary: copy&paste == bad programming practice.

Beyond copy&paste

Not too long ago I’ve been reading some articles criticizing C++, the language I use the most. One of the rightful points was that C++ needs you to type the same code at least twice in more than one place. A typical location of duplicate code is class definitions. First you define a class in a header file, so you type the function declarations there, then you type exactly the same function signatures in a source file where you define the functions.

class Vehicle {
public:
    void StartMotor();
    void Accelerate(double acc);
};

void Vehicle::StartMotor() // you had to type this again!
{
    :::
}

void Vehicle::Accelerate(double acc) // and this too!
{
    :::
}

If you later need to modify a function, e.g. change the number of arguments or their types, you have to do it at least twice. It is actually even worse if you have derived classes and override virtual functions: to change the interface, you have to change it in 2*C places, where C is the number of classes which declare that function.

Worse yet, it may happen that you change the function's signature in the derived class but forget to change it in the base class. As a result, you will have a bug in your program: if you call the function through a pointer to the base class, the function from the derived class will not be called, since it has a different signature. Fortunately, modern compilers issue a warning when this happens, but you still have to write the same piece of code twice.

Sounds like copy&paste? Well, that's how I write new classes in C++: I declare the class's functions in a header file, copy them to a source file, then use an editor macro to expand them by removing semicolons and adding braces on new lines.

Boilerplate code

Welcome to boilerplate code. That is exactly the definition of boilerplate code: redundant code which you have to write to make a well-formed program, but which is unnecessary from your perspective as a programmer.

But boilerplate extends beyond what you have to write just to make a well-formed program. Consider a C++ program which uses the well-known libpng library to load a PNG image. In the basic version, you could write a single function like this:

bool LoadPng(const char* filename, std::vector<char>* image)
{
    :::
}

Inside the function you call the PNG library, which has a C interface: you verify that the file exists and is a PNG, load the headers, determine the dimensions and color format of the image, and finally load the image data. Without going into the details, here is how a piece of that function could look:

bool LoadPng(const char* filename, std::vector<char>* image)
{
    :::

    png_structp png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    if (!png_ptr)
    {
        printf("Failed to create png structure\n");
        return false;
    }

    png_infop info_ptr = png_create_info_struct(png_ptr);
    if (!info_ptr)
    {
        printf("Failed to create png info structure\n");
        png_destroy_read_struct(&png_ptr, (png_infopp)NULL, (png_infopp)NULL);
        return false;
    }

    :::
}

This is maybe 10% of the code you have to write to load a PNG file. Only two lines of the code above actually do something potentially useful or necessary. The rest is boilerplate.

Notice in particular that everywhere an error can occur, you have to handle it, so you write error handling code many, many times. If an error occurs later in the function, you have to delete the already allocated data structures before returning, and you have to write essentially the same cleanup code many times, once for every call which could fail.

I bet that there are many programs out there, which have a bug in their PNG loading code and don’t handle the error conditions fully correctly. So in some circumstances these programs will leak memory or behave unpredictably.

Now, the situation above can be improved by writing a C++ wrapper class for loading PNGs. But I wouldn't go too far with it, or we will just move the boilerplate code elsewhere instead of reducing it.

I can imagine somebody writing a PNG loader class, declaring one function of the class per function of the libpng library. Such a class would simply build a C++ interface over the library's C interface. That approach may be appealing to some, but the problem is that 90% of that class will be… boilerplate code! There would be only one place in the whole program where this class is used – the LoadPng() function. So all that code would be written in vain, be a maintenance chore, and give bugs a place to hide. Moreover, the compiler would generate much more unnecessary code, adding to the program's final size.

class PNGLoader {
public:
    PNGLoader();
    ~PNGLoader();
    static bool SigCmp(...);
    void CreateReadStruct(...); // throw on error
    void CreateInfoStruct(...); // throw on error
    :::
    void ReadImage(...); // throw on error
};

A fact of life is that many programmers call the above approach a “design”. They create beautiful class designs and hierarchies, which are only taking space and engineering time, but contribute little to the program.

And if you happen to be a C++ hater, please know that the above problem affects not only object-oriented languages, but all programming languages in general. Programmers often tend to put too much work and thought into the form instead of focusing on the contents.

So it seems to me that the best approach is to preserve a balance between the amount of code and functionality. Sure, you can write a beautiful command line argument parser, but what good is it if your program only handles two arguments anyway? Handle them correctly, but avoid too much boilerplate code which you will never use.

In the case of the PNG loader, a good choice is a single function like LoadPng() which internally uses something like Boost.ScopeExit to handle errors and corner cases. Boost.ScopeExit is in general a good way of safely handling many kinds of resources in C++.

Quality

In general, programs in source code form consist of:

  1. Comments and whitespace, generally harmless if used wisely and not used to comment out dead code,
  2. Data structures, describing the internal state of the program,
  3. Algorithms, which are mathematical transformations of the program state, and last but not least:
  4. Boilerplate code, which clutters the programs, makes them harder to understand, hides bugs and generally causes programs to be big and slow.

To write good programs, avoid boilerplate code like the plague. It’s not the only rule for writing good programs, but I think it’s an important one.

Categories: Computing

Advertisement!

4.10.2013 Leave a comment

AdBlock is a wonderful little browser plugin. It does not get in the way. If you have it, you may not even know it’s there.

All it does for you is a favor. By blocking ads, it removes all the unnecessary bling from your view, so the websites you browse contain only what you are interested in.

The functionality of AdBlock should really be part of browsers. Obviously Google would shoot themselves in the foot if they added it in Chrome. I suppose other browsers are trying to be politically correct by not including similar functionality.

There is a group of people who are against using AdBlock, because it strips them of potential income by preventing visitors from clicking the ads on their websites.

But I like AdBlock a lot, you wanna know why?

Let’s take Facebook, which is one of the most popular websites. It started off as a website who helped people get back together. Had a friend in school? Now it’s easy to reconnect! But Facebook accumulated a lot of users who upload a lot of information about themselves. It turned out to be a great source of information for which many companies pay prime money. After cashing on selling information about their users, Facebook also started serving ads to their users. Double win!

But I am not really against Facebook; I only dislike their clunky web UI. If you use Facebook, do you check out the things your friends post? Sometimes they post links to YouTube videos. Unfortunately, lots of YouTube videos are blocked in many countries. Germany, for instance, is among the countries leading in this kind of Internet censorship. People from certain countries may in fact find that ironic!

So here are the two biggest problems the Internet has in this day and age:

  1. People are the product. We, the users of the Internet, anything we produce and any information available about us are being traded.
  2. Censorship is gaining strength, even in “highly developed” countries.

To me, AdBlock is our little means of getting back at them, a way of getting censorship onto our side.

Categories: Computing

Rest in peace, Steve

22.09.2013 1 comment

Steve Jobs did a lot of good for humanity. Maybe he was not always a good person (e.g. he used to park in handicapped spots), but let him, who is without sin, cast the first stone. Steve showed us that a single company can make great, high quality products. He was a genius in bringing a vision to market.

Sure, a Dell or HP laptop can be useful, but frankly, after years of using an aluminium MacBook I can't even look at the plasticky laptops. PC laptops have the same bad build quality they had 15 years ago. Once, in a store, I thought I was having a revelation: I saw an HP laptop which looked like a MacBook ripoff. I thought: great, finally they are trying to copy Apple and bring good quality to PC users. But when I touched it, I found it was the same plastic as its black cousins, only in an aluminium color. Nice try.

Say what you want, but MacBook Airs are like devices from Sci-Fi movies from the previous decade. The latest batch is not only thin and light, they also outlive most other laptops on a single charge.

Ultimately, Steve drove the latest revolution in computing. With iPhone, iPod touch and later iPad, he showed us that one can really make a phone or a PDA which is really useful. A really personal device, which is easy to use and beautiful. Everything before iPhone was clumsy and choppy.

Steve was the heart of Apple; he made the company work efficiently and effectively. But I always knew that if Steve were to leave Apple, the company would not do so well anymore.

Regrettably Steve is no longer with us. It’s been a tragedy for his family, for Apple and for all of us.

Every company has a period of getting there, its top days and a decay. The length of decay usually depends on how much wealth and mass the company has accumulated during its top days.

Apple is already past its best times. The problem with the market for electronic devices is that as soon as you stop innovating, you are dead. iOS 7 is the first sign of Apple's demise. If you are not familiar with iOS 7, it looks a lot like a cross between Android and Metro (Windows 8). I personally find the Metro design too simplistic. In short: I wholeheartedly hate it and find it repulsive. It looks as if it had been “designed” by a wannabe artist who thinks MS Paint is a great tool for making graphics. In my opinion, Metro is not something I would recommend another company to copy. Unfortunately, iOS 7 looks a lot like it. I am sure that Samsung is really happy now.

I truly hope that I am wrong and that Apple will show us many great innovations. They have a lot of talented employees, but how well their talent will be used depends on the management. I wish Apple all the best and expect them to stay on top of further innovations, although I feel that the loss of Steve and the current developments don’t bode well for them. If this trend continues, Apple may be out of business (or bought out) in less than 10 years.

Categories: Computing

new is an abomination

20.09.2013 Leave a comment

If you’re seriously into writing code in C++, I strongly recommend watching the recordings from the Going Native 2013 conference.

One of the talks reminded me of the following guideline: Avoid using the new operator and never use the delete operator. It’s very easy to make a mistake when using them and the consequences are usually severe. Obviously you need to replace them with RAII (use constructors and destructors for acquiring and releasing resources, respectively).

The following seemingly innocuous example demonstrates the problem with the new operator:

class MyClass {
    OtherClass* ptr;
public:
    MyClass()
        : ptr( new OtherClass )
    {
        // ... do some work here ...
    }
    ~MyClass() {
        delete ptr;
    }
};

What’s wrong here? The problem is not obvious at the first glance. If some code in the “do some work here” section throws an exception for whatever reason, the compiler has no way of knowing whether the object construction has been successfully finished or not, so the destructor’s body will never be invoked. If this happens, the object under ptr member will simply leak.

It may not seem serious at first glance, but someone could spend weeks chasing down this leak, especially if the exception is thrown rarely.

What scares me is that this approach to handling memory resources is very common…

What are the solutions?

  • If it’s a single object, try to make it a member of the class directly. This is  solution is particularly good if the parent class needs to be copyable.
  • If you have to allocate it dynamically for whatever reason, use std::unique_ptr in C++11 or std::auto_ptr in C++98 (with caveats!). In this case the parent class must not be copyable, so prevent that with an idiom: delete the copy constructor and assignment operator in C++11, or make them private in C++98.
  • If you need a dynamically allocated array of objects, use std::vector.

The way of storing the object has to be carefully chosen depending on the usage scenario.

Categories: Computing

Mutex vs. binary semaphore

28.05.2013 8 comments

Mutices and semaphores are among the most basic tools in multithreaded programming. However, most people I have asked do not know the difference between them. So let me introduce them.

Consider a resource which is shared between multiple threads. For example a container. You don’t want to have multiple threads modifying the container simultaneously, or one thread modifying the container while other threads are reading from it, otherwise you will end up with classical race conditions and unpredictable things will happen.

To guard a resource from other threads while you are accessing it, you use a synchronization primitive, which you conceptually associate with the guarded resource. Threads can lock the synchronization primitive when they need to access the resource. Then they can release the primitive after they are finished with accessing the resource. When the synchronization primitive is already locked, a thread trying to lock it will wait/stall until the other thread who locked it – unlocks it.

At first glance, both a mutex and a binary semaphore fit the description of the above synchronization primitive. Well, not quite: using a binary semaphore in place of a mutex is a bad idea.

Conceptually a semaphore is like an integer. You can increment it and you can decrement it. If the semaphore’s value is 0, the thread trying to decrement it will wait/stall until somebody else increments it. This way, the semaphore never has a negative value.

A binary semaphore is just a semaphore capped at one, i.e. its value cannot exceed one. You can treat the decrement operation as “lock” and the increment operation as “unlock”.

The problem with the semaphore is that any thread can increment it or decrement it. In particular, if the semaphore’s value is 0 (“locked”), another thread can increment it (“unlock”), even if this is not the thread which locked it! It takes more discipline to write code which correctly uses binary semaphores for locking and there is still a potential for error.

Another problem is that most semaphore implementations allow sharing semaphores between processes. This makes them much heavier than e.g. POSIX mutices or critical sections on Windows, which are lightweight, because they only work within one process and don’t require calling into kernel space in most cases.

Unlike binary semaphores, mutices may also have another interesting property, depending on the implementation: they can be recursive. A recursive mutex can be locked twice by the same thread. This allows you to write an accessor function without having to care whether the mutex is already locked by the current thread. However, it is generally not recommended to use recursive mutices: needing them is a sign of a bad design and indicates that there may be problems with the interfaces, or even hidden multithreading bugs.
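The recursive behaviour can be sketched with Python’s `threading.RLock` (the recursive mutex of its standard library). The accessor functions here are hypothetical; with a plain `threading.Lock` the inner lock would deadlock:

```python
import threading

rlock = threading.RLock()  # recursive: the same thread may lock it again

def get_size(data):
    with rlock:            # inner lock by the same thread: fine with RLock
        return len(data)

def add_and_report(data, value):
    with rlock:            # outer lock
        data.append(value)
        return get_size(data)  # would deadlock here with a plain Lock

print(add_and_report([], 42))  # → 1
```

This is exactly the convenience (and the design smell) described above: `add_and_report` does not need to know whether `get_size` takes the lock, which also makes it easy to stop thinking about who holds what.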

In general, mutices (or critical sections on Windows) are recommended over binary semaphores as synchronization primitives between multiple threads in the same process.

Categories: Computing

RIP, MacBook White

13.04.2013 Leave a comment

We got the MacBook White over 5.5 years ago. It withstood the trial of time. A year after we got it, it survived a spill of tea, which killed the IR sensor for the remote. Two years later the inverter cable went flaky and the backlight became intermittent, so we stopped closing the lid, because otherwise it was difficult to restore the backlight. Then the fan started getting loud, so I replaced it with a new one. Then I upgraded the memory to 4GB and upgraded the OS. A year ago the battery died, so I replaced it. A few days ago the kids inadvertently pushed it off the coffee table, and a day later the hard drive died.

That’s it, I will stop trying to keep it alive and I will let it die.

Until its last minutes, the MacBook White worked very well, almost as well as when it was new. The only reason to ever reinstall the OS was to upgrade it. After the memory upgrade it was even able to run Lion without any problems.

I expect the new generations of MacBooks sold today to be even better.

  • The very nice looking plastic of which the MacBook White was made was nevertheless plastic. There were tiny cracks here and there, and tiny pieces chipped off on the edges of the keyboard. All current MacBooks are made of aluminium and are not as easily susceptible to this kind of damage.
  • The lid hinges in the aluminium MacBooks feel much more solid. I am not sure if this is because the hinges are better, but I have had no problems whatsoever with the one I’ve been using for over 3 years now.
  • The latest generations of MacBooks don’t have hard drives. Hard drives are delicate; some people rightfully call them spinning discs of rust. The latest MacBooks have flash-based non-volatile memory instead, which should theoretically have a longer average lifetime and be more reliable than a mechanical hard drive.

Apple’s Mac OS X integrates really well with the hardware. But one can also run Linux or Windows just fine on MacBooks, either in a virtual machine or natively. It’s certainly a piece of hardware worth recommending. It is expensive, but it is worth every penny.

Categories: Computing

Android calendar idea

3.02.2013 Leave a comment

I have an idea for an Android device which I want to share with you.

At some point in my life I started using calendars on portable devices, first on a Palm Z22, then on an iPod touch. Wherever I am, I can always turn on the device and check whether I have anything to do that day or the next, so I can plan ahead. There is so much going on that it’s hard to remember all the things I have to do, let alone things planned months ahead, such as dentist appointments.

But when I am home, the problem is that I have to walk to the place where I put my device, unlock it, then open the calendar app. This costs time.

This is easily solvable by hanging a calendar on the wall in a central place of the house, such as the kitchen. Another good spot is the fridge door. But a static, paper calendar can only be consulted where it hangs; I cannot check it when I am away from home.

Android to the rescue! I’ve seen people using their Android tablets as picture frames. Why not use an Android tablet as a calendar? There could even be a device especially suited for this task. The nice thing about Google calendar is that you can share it with other people, so you could have a common family account and all members of the family would share their calendars with it (you can have multiple calendars with your Google account).

The device I am looking for could be described as follows:

  • It is an Android tablet.
  • It is very thin and very light.
  • It has an e-ink screen, so it does not consume much energy. The screen keeps displaying the last image even when the battery is discharged.
  • It has a low power CPU. The CPU can be slow, it does not matter for this purpose.
  • It does not need to have any connectors.
  • It has WiFi.
  • It has a solar cell with which it charges its battery. No charger necessary.
  • It has a touch screen as an input device.
  • No other gimmicks necessary, no Bluetooth, no camera.
  • It can be hung on the wall, it can stand on the shelf or it can be attached to the fridge door using magnets on its back.
  • It has no unlock screen. In the default mode it displays the calendar app.
  • It is cheap. The upper limit would be $50, but $25 price tag would be perfect. There are e-ink readers which cost less than this (although they are subsidized). Some printed calendars cost this much.

I would certainly purchase such a device if it were available. So far I have failed to find one. If you find a similar device, let me know.

Categories: Computing