Basic principles of programming

6.05.2012 4 comments

This post is meant to serve as advice for beginner programmers. If you don’t consider yourself a beginner, read on and check if you agree.

So here are a few basic principles which a programmer should follow to write good code, no matter what the language is.

When I started my adventure with programming back in the fall of 1993, I wish somebody had laid these out for me. There are many good books about the dos and don’ts of programming, such as Effective C++, but I think it’s still difficult to find simple basic principles like the ones below, esp. for novice programmers.

These principles apply to most areas of programming and should be taught at the beginning of programming courses. Unfortunately, most programming courses focus on tools: programming languages, environments, data structures, etc. They don’t touch the craft and art of programming.

Copy & paste

When writing one of my first programs in Turbo Pascal, I quickly learned that pasting copied pieces of code around a program leads to a lot of unnecessary work later. Let’s say you have exactly the same piece of code in N places, all meant to do the same thing – or even N places with very similar code doing very similar things. One day you will have to change that code slightly or enhance it, or you will find a bug in it. You will then have to find all N places and apply the same change N times. There is a high probability that you will miss some of the places – the larger the N, the higher the probability. This leads either to bugs not being fixed entirely or to new ones being introduced.

The takeaway from this is that multiple copies of the same pieces of code should be avoided. Similar code should be collapsed into one function and that function invoked wherever it’s needed. Avoid copy & paste.
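
For illustration, a minimal C++ sketch – the function names and the 64-character limit are made up:

    #include <iostream>
    #include <string>

    // One shared helper instead of N pasted copies: a future fix or
    // enhancement is applied in exactly one place.
    bool is_valid_name(const std::string& name) {
        return !name.empty() && name.size() <= 64;
    }

    void register_user(const std::string& name) {
        if (!is_valid_name(name)) { std::cerr << "bad name\n"; return; }
        // ... actual registration ...
    }

    void rename_user(const std::string& name) {
        if (!is_valid_name(name)) { std::cerr << "bad name\n"; return; }
        // ... actual renaming ...
    }

    int main() {
        register_user("alice");
        rename_user(""); // triggers the shared validation, not a pasted copy
    }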

Code clarity

Variables, functions, members, etc. should be named after what they do. Most variable names should be nouns and most function names verbs (e.g. a variable num_args vs. a function count_args()). Names should be short, but long enough to give the reader a clue what they mean. Some schools teach that variables should never have long names. That’s fine, but in some circumstances a longer name is necessary so that the reader can easily understand what is happening in the code.
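
A tiny, hypothetical C++ illustration of the noun/verb convention:

    #include <cstdio>

    // Verb for the function (it does something),
    // noun for the variable (it is something).
    int count_spaces(const char* text) {
        int num_spaces = 0; // noun: the value it holds
        for (; *text; ++text) {
            if (*text == ' ')
                ++num_spaces;
        }
        return num_spaces;
    }

    int main() {
        std::printf("%d\n", count_spaces("one two three")); // prints 2
    }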

There is another term for using meaningful identifier names: self-documenting code. When clear names are used, there is less need for comments and other kinds of documentation.

Comments should be used whenever the names of variables, constructs etc. are not sufficient to understand what’s going on. This applies especially to more complicated pieces of code, algorithms, etc.

But why bother with all of this? One word: maintenance. Sometimes a person other than the author has to maintain the code – fix bugs, add new functionality, refactor or reuse it. The less time that person needs to understand what’s going on, the better. Often even the author has to return to code he wrote earlier and may not remember why certain decisions were made.

Language constructs which promote bugs

Every language has constructs which promote bugs. Such constructs should be avoided. They may be useful or even necessary in certain situations, but there they are a necessary evil. In most other cases we’re better off without them.

Examples include: goto in C++ (a bit less so in C), C-style macros in C++, and the == operator in JavaScript.
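
To illustrate the macro problem, a minimal sketch (SQUARE is a made-up example, not from any real codebase):

    #include <iostream>

    #define SQUARE(x) x * x // the classic C-style macro trap

    int main() {
        // The macro expands textually to 1 + 2 * 1 + 2, which is 5, not 9.
        std::cout << SQUARE(1 + 2) << '\n';

        // A function has no such surprise: arguments are evaluated first.
        auto square = [](int x) { return x * x; };
        std::cout << square(1 + 2) << '\n'; // 9
    }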

In general, any language feature which has gotchas and may behave in an unexpected way (e.g. friend or protected in C++) should be avoided, unless it is specifically beneficial in a given situation. When such a feature is used, a precise comment should be added describing the use case.

The unfortunate thing is that until you know a particular language really well, you don’t know what its tricky constructs are. They are usually not advertised in the language manuals. Sometimes there are books which explain why particular features are dangerous. So the best advice one can give here is: stay alert!

Diligence vs. ignorance

Or I should say: willful ignorance. Programming has become ubiquitous, and some languages, like JavaScript, have a very low barrier to entry. That’s good, but it also has a disadvantage: programmers don’t put enough thought into what they are doing. I’ve seen too much mindlessly written code in my career. Some programmers simply assume they are writing throwaway code and don’t care about its quality. Others just implement the first solution that comes to mind, without weighing its advantages and disadvantages – as if they only wanted to finish the current task and move on, as if the code were going to be thrown away right after being written, or as if they were going to quit soon and didn’t care who would maintain it. But code tends to outlive the task; somebody has to maintain or extend it. This leads to the same piece of code being reimplemented over and over again, which is a huge waste. If the first implementor had given enough thought to what he was doing, the original piece of code could have been used for years, perhaps even reused elsewhere.

The advice here is: be diligent. Learn about the environment surrounding the code you write (e.g. callers, callees, etc.). Learn about all the use cases. Try to think of all the things that your approach may break. It does take experience to write good code, but it also takes common sense.

Code reviews

It’s good to have an additional pair of eyes look over your code. If you’re writing code for fun, have a friend take a look at it. If you’re working for a company, have a coworker review your code and review his code in return. I don’t know why code reviews are not customary at many companies. Reviews take only a small amount of time, but they bring big benefits: they unify the code to ease future maintenance, promote coding style conventions, encourage good habits and suppress bad ones. It’s even more beneficial if somebody more experienced reviews your code – you will learn from him.

Reviews are not an ultimate solution – they will not find all bugs, and in fact many bugs will slip through – but they help improve code quality in the long run.

Categories: Computing

The Thing (2011)

3.05.2012 Leave a comment

A post in a new category – movie reviews. Hopefully everyone’s favorite. 🙂 This new category will in fact contain not only reviews, but also my commentary.

The Thing is a movie about an alien lifeform which crash-landed on Earth in Antarctica long ago. It’s actually a prequel to John Carpenter’s original movie from 1982 with the same title.

If you like movies about evil aliens like I do, you will certainly like this one. The prequel blends very well with the original movie – in fact, at the end it is surprisingly well connected to it. They did not overdo the monsters, but made them more believable using current technology. There is a lot of action, and boring scenes were kept to a minimum to sustain the story. Definitely worth watching if you like thrillers, action and aliens.

Spoilers

The lifeform in the movie is bloodthirsty, but it does not merely consume Earth’s lifeforms. When the organism catches and swallows a dog or a human – or even just sprinkles one with its fluids – the alien cells attack the prey’s cells and convert them into alien cells.

It isn’t said in the movie, but one can suspect that the alien lifeform attacked some other intelligent aliens who had a spaceship, and this way came into possession of the spaceship it used to come to Earth. This could potentially explain why a graceful landing was too hard for the creature.

What does not make much sense is how such a lifeform – with the ability to understand and imitate lifeforms from other planets – could evolve in the first place. All lifeforms are bound to their environment; they are adapted to its pressure and chemical composition. Advanced organisms don’t have the flexibility to live in environments beyond their home, because they are too complex for that. I find it hard to believe that a species from another planet could come to Earth and breathe our air, eat our meat, let alone change itself into one of us in a matter of minutes.

However there is one scenario in which such a life form could be possible – if it was deliberately created as a weapon or even just as a crazy experiment.

But even if such a lifeform existed, I still find it quite improbable that it could find another alien with spaceship technology whom it could mimic. I would argue that civilisations able to travel in space to other planets must have evolved beyond the physical limitations of biological life and consist of cybernetic beings – what we today call “robots” or “androids”. For this reason I think that even if such a creature existed, it wouldn’t be able to spread effectively to other planets.

Categories: Movie Reviews, Universe

Linux Mint

26.04.2012 2 comments

What is up with Linux?

I use Linux a lot. For years I’ve been using Gentoo Linux. Gentoo is great if you want to learn about Linux – how it works and how it’s put together. On Gentoo you choose all the packages you want installed, and you also choose which options are enabled in every package. Gentoo compiles all the packages you chose, and their dependencies, from sources. It also comes with excellent documentation describing most setup and configuration options, esp. for popular packages.

After years of using Gentoo, one of its drawbacks started bothering me: updating. I kept updating it too often, and updating Gentoo takes a lot of time, because all packages are compiled from sources. Another common problem was that newer packages tended to break things in the system. Most often some auxiliary package or library didn’t compile, or was built with options incompatible with (or breaking) other packages, and I had to search for workarounds. At other times configuration files changed and I had to find out how to use the new ones. In short, I was spending too much time fixing these breakages.

At some point I decided to switch to Ubuntu, which used to be the most popular distribution. I switched to Ubuntu only at home and left Gentoo on my desktop at work. It mostly worked fine, but I didn’t use it much. Ubuntu gets major updates every six months, and updating is easy: you just click and it downloads the updated packages. Unfortunately, updates sometimes break portions of the system – apparently no Linux distribution has figured out how to avoid breaking users’ systems during updates. For example, sometimes the update would stop in the middle because of some configuration problem and I had to find a workaround to continue the upgrade. At other times some packages (e.g. Bluetooth) stopped working and I needed to fix them.

But the most annoying thing about Ubuntu is that it keeps changing the default programs. For example, the program for browsing images would change once a year: the update process would remove the old program and install a new, different one. Or the program for playing music would change. The settings would not be carried over from the old program, so I would have to configure the new one from scratch. In the latest incarnation, half a year ago, the default desktop environment was changed to the Ubuntu-sponsored Unity. That was the end of Ubuntu for me.

Then I found out about Linux Mint. It is another Debian-based distribution (like Ubuntu), but it comes in several flavors, so it’s easier to choose the one you like. I am particularly fond of the XFCE desktop environment, which is very simple and fast, yet has all the features of the major desktop environments (KDE, Gnome).

So I switched to Linux Mint – this time both at home and at work (although I still keep Gentoo on the side for some tasks, since its nature makes it very convenient for development). In my opinion Mint is generally nicer than Ubuntu, probably because it comes in many flavors and the XFCE version appealed to me. Of course you could say that you can install and use XFCE in Ubuntu too, but then you would also have to keep KDE or Gnome around, and I especially don’t like keeping a lot of garbage I don’t use. The package manager and updater are also nicer on Mint than on Ubuntu.

Very shortly after switching to Mint I found out an inconvenient truth: sound does not work! It worked neither at home nor at work, and these computers have very different, but not uncommon, motherboards. ALSA actually supports them just fine – I never had any problems with sound on Gentoo or Ubuntu. Fortunately I don’t do many sound-related things on Linux. I determined that the problem lies in PulseAudio, which is one of many sound managers for Linux. This is a general problem with Linux (still, after so many years!) – there is no single sound solution. I don’t know why some distributions choose PulseAudio, because for many people it simply doesn’t work. After spending many hours trying to get any sound out of it, I uninstalled PulseAudio and got sound working through ALSA in most applications. Except Flash! I still don’t know how to get sound in my browser. I hope Google soon releases Chrome with built-in Flash so it will just work. It looks like currently Chrome uses the Flash plugin from the distribution, and Mint has its own version of Flash. Another possibility could be to replace Mint’s Flash with another build, but I haven’t tried that yet.

Recently I did a major upgrade of Mint and ran into serious issues with that too. The update died in the middle, leaving a couple of broken packages, because my partition ran out of space. Somehow the package manager cannot figure out in advance that it’s going to run out of space and warn the user. Again I had to look on the Web for a solution, which involved falling back to apt (a command-line tool) to fix it. Then the graphics drivers stopped working, and it took me some time to figure out that one package had been mysteriously uninstalled during the update; I had to reinstall it manually. After that the update finished fine and I had no other issues. But these issues shouldn’t have happened in the first place. It’s a pity the package updaters are so crappy. Like I wrote before, no Linux distribution has figured out how to handle system updates gracefully.

There are other issues I have with Mint: the Apple Magic Mouse doesn’t work – apparently the drivers for that mouse are unusable, although the mouse works fine in Windows 7. Also, the QEMU which comes with Mint hangs. I tried QEMU compiled manually, but it’s very slow. I ended up using QEMU compiled on Gentoo, and that works great; so far I haven’t had time to figure out what’s wrong on Mint.

Despite the problems I described above, I am still happy with Mint, and it seems to work better for me on the desktop than Ubuntu (it has XFCE and I choose what I want installed) or Gentoo (no constant lengthy updates and manual fixes, at least not that often). On the other hand, because of these problems I don’t know if I would recommend Linux to people who are not computer gurus.

Besides Linux I also use MacOSX, Windows XP, Windows 7, iOS and Android. All of them have their good and bad sides. Linux is nice on the desktop for software development once you get used to it. Although I must say I like MacOSX better – but that could be a topic for another post…

Categories: Computing

How to buy a gaming rig

22.04.2012 Leave a comment

I used the Diablo III Open Beta to see how my 7-year-old computer would handle it. Well, it is not really 7 years old: 5 years ago I upgraded the CPU from a single-core 2GHz to a dual-core 1.8GHz, and this year I upgraded the graphics card to a GF 460GT and added a second gig of RAM. But you could say that except for the graphics card, everything in that “rig” is quite old. The motherboard supports only PCI Express 1.0, which has half the bandwidth of the 2.0 standard supported by this graphics card.

I tried the beta on a MacBook Pro with an Intel Core 2 Duo at 2.66GHz (above the recommended spec for this game) and a GF 9600M GT (the bottom of the minimum spec). It plays horribly at the minimum resolution (800×600) with all settings set to low. Barely playable.

Then I tried the beta on the dual-core Opteron at 1.8GHz (well below the minimum spec, which is 2.2GHz) with the GF 460GT (above the recommended spec, but on PCI Express 1.0 only). It plays smoothly at 1920×1080 with all settings set to high. The beta is supposedly not optimized yet, so the final game will probably play even more smoothly.

The bottom line: if you’re buying or beefing up a gaming rig, buy as cheap a CPU as you can find, then spend the saved money on a graphics card.

Categories: Other

Diablo III Open Beta

20.04.2012 Leave a comment

This weekend, you can play Diablo III Beta without a special key! It’s time to check if the hardware will be able to handle the game. 🙂

Link to Battle.net

Categories: Other

How to hire great programmers?

20.04.2012 Leave a comment

Noticed on Herb Sutter’s blog: quotes from Steve Jobs.

The interesting quote is: “In most businesses, the difference between average and good [employee] is at best 2 to 1. […] But in software, it’s at least 25 to 1. […] The secret of my success is that we have gone to exceptional lengths to hire the best people in the world.”

I guess the difference between people in the middle of that range (the “12s”) and people closer to the bottom (the “1s”) can easily be spotted, per my previous post. But how do you tell the difference between the best ones (the “25s”) and the middle ones (the “12s”)? Especially this: how do you tell whether a person has the potential to become a “25”? How did Steve do that?

Categories: Computing, Other

C++ renaissance

12.04.2012 Leave a comment

Today I stumbled upon this very interesting comparison of which languages are used to write popular software (it was mentioned on Herb Sutter’s blog).

I encourage you to go through the table yourself and contemplate it. To sum it up, most of the popular software which we use every day is written in either C or C++. The author makes a point that other languages, esp. languages which don’t compile directly to executable machine code (a.k.a. native code), are still in a niche and always will be, because no matter how fast computers become, we will use their extra power and resources for new features instead of wasting them on non-native languages.

Back in the old days, when Moore’s law directly influenced CPU frequencies, advocates of non-native languages argued that at some point CPUs would become so fast that native languages would lose their usefulness due to their clunkiness – among other similar arguments.

The year 2006 came and CPU frequencies hit a wall. It turned out that, in order to make further progress in performance, CPU manufacturers had to start packing more and more cores into their products. We can leverage that in the many algorithms which are parallelizable, but there are still a lot of things our programs have to do sequentially, and there is no way around that. Sure, multiple applications can use separate cores, but multiple cores don’t work miracles – we are stuck with a frequency limit!
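
This limit is captured by Amdahl’s law: if a fraction s of a program is inherently sequential, no number of cores can speed it up by more than 1/s. A minimal C++ sketch of the arithmetic (the 25% sequential fraction is a made-up figure for illustration):

    #include <cstdio>
    #include <initializer_list>

    // Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n),
    // where s is the inherently sequential fraction of the work.
    double amdahl_speedup(double s, int n) {
        return 1.0 / (s + (1.0 - s) / n);
    }

    int main() {
        const double s = 0.25; // assumed: 25% of the work cannot be parallelized
        for (int n : {1, 2, 4, 8, 1024})
            std::printf("%5d cores -> %.2fx\n", n, amdahl_speedup(s, n));
        // Even with 1024 cores the speedup stays below the 1/s = 4x ceiling.
    }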

Before 2006 everything was going well for non-native languages: Java was at its peak and Microsoft promoted C# and .NET as its future. Today Microsoft backs C++11 and encourages developers to “go native”, while the future of C# is uncertain (there are fears that Microsoft will drop .NET).

When Google came out with its Chrome browser and revolutionized – if not leveled – the browser landscape, some of the most significant improvements went into the JavaScript engine. Chrome was released in the midst of a JavaScript performance war with Firefox and Opera, and Chrome did it better. Yet after years of improving their JavaScript engine, Google effectively gave up on JavaScript performance and decided to develop a new language, Dart, which is typed and is meant to compile to native code in the browser. Consider this: web scripting is important and the performance of website scripts must improve, but to get there Google decided it couldn’t rely on a non-native language anymore! This does not mean that JavaScript will go away, but Google is betting on Dart as a native language for the future of more sophisticated web programming.

I think non-native languages are still very useful for scripting, prototyping and other tasks where you need a simple language that does not have to produce lightning-fast results. But non-native languages cannot replace native languages for more sophisticated tasks where the software must meet resource constraints. In other words, the usefulness of Java, C# et al. has been strongly overrated; they will remain niche languages at least for the next decade.

Update

A friend of mine pointed out that job postings indicate that Java is in higher demand than C or C++. That is a valid point – every language has its own purpose and application. It does not change the fact that non-native languages will not replace native languages any time soon.

Categories: Computing

Interviewing

1.04.2012 2 comments

However easy it may seem, interviewing software engineers is hard.

If you have a little experience in developing software, you probably participate in the hiring process once in a while. It goes like this: you come up with a few questions, see how candidates tackle them and give your opinion on whether the candidates fit the position you are interviewing for. Sounds easy.

But how do you know that your questions measure anything? If you are a novice interviewer, you will make mistakes. You will sometimes make bad judgements even if you are a seasoned interviewer. This is why most companies throw several interviewers at each candidate. If all interviewers agree about a candidate, the hiring manager has an easy task. If there is too much disagreement between interviewers, the manager will either dismiss the candidate or recommend him to another team. That also sounds easy. But it’s far from perfect, so let’s look at how the process works and what could be improved.

How hiring works at most companies

The first steps of the hiring process involve attracting candidates, e.g. by advertising open job positions. When candidates send in their resumés, these go through the HR filter, which is typically based on keywords. For this reason, if you are looking for a job, you should put as many keywords and acronyms as you can come up with into your “experience” section, just to get through the Great Filter. If you are a C++ developer applying for a C++ position but you have used C# in the past, just throw it in, even if you are not interested in a C# position or have very little experience with it – to an HR person you may appear more valuable than other candidates.

The great fault of this process is not that bad candidates – those who have no experience but a pretty resumé – slip in through the filter, but that the filter potentially rejects good candidates. Some candidates may lack resumé-writing skills but could make great employees.

Then there is the interview series, where multiple interviewers question each candidate, as described above. Usually it is easy to dismiss weak candidates; for example, most candidates are unable to answer simple programming questions, and a person who does not care about programming won’t make a great employee – unless the goal is to hire a bunch of drones for a thoughtless job, which is always a bad idea and usually a sign of bad management.

The fault here is not that this step sieves through the sea of candidates lacking competence for the particular job. The problem is that the process fails to distinguish between great hires and mediocre hires, and it also usually fails to match candidates to a particular position (more on this later).

The last step is salary negotiation, which I am not going to cover here, although this is also an interesting topic.

Questions

Probably everybody has their own favorite types of questions. First there are the technical questions, which focus on knowledge. For a long time this was my favorite kind; the problem was that almost no candidate was able to answer them all. For example, I would ask about five various C++ keywords. I could count on the fingers of one hand the candidates who actually knew them all. Most candidates are only able to guess a few.

My question is: why are you applying for a position which specifies a particular language you will be using, and yet you are not even willing to learn what all its keywords are for? OK, C++ is one of the most difficult languages and has over 60 keywords. But I have seen several candidates claiming to be “advanced” or “experienced” in C++ who still could not even guess some keywords. You might say that some of these keywords are rarely used or not useful, or that it’s easy to look them up, but would you expect a candidate to do his job seriously, with attention to detail, if he is not even willing to prepare for the interview? (BTW, “I could look it up” is not an entirely bad answer either.)

Indeed, overly technical questions do not help us determine how good a fit a particular candidate is for the job. Knowing rare keywords by heart does not help with good design, thorough testing, or finishing the job at all.

For years I would indulge myself, abusing the fact that other interviewers were seeing the candidates as well, so they would compensate for my being too harsh. I would also throw in a piece of C code where the candidate had to track what number gets printed at the end. Very tedious – not everybody knows that 1.0f is represented as 0x3F800000U. But I was not satisfied; in the end, knowledge != wisdom. I realised that I am looking for something more than just knowledge in the candidates.
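
(For the curious, here is a quick way to verify the 1.0f claim on your own machine – a minimal sketch assuming the usual IEEE 754 single-precision floats:)

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        float f = 1.0f;
        std::uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits); // the safe way to inspect raw bits
        // Prints 0x3F800000 on IEEE 754 machines:
        // sign 0, biased exponent 127 (0x7F), mantissa 0.
        std::printf("0x%08X\n", static_cast<unsigned>(bits));
    }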

Then there are the riddle kinds of questions. Some companies are famous for giving sophisticated riddles, and the proponents of riddles claim this is the best way to hire smart people. But do you really want to hire smart people who don’t know anything about the technology they are going to be working with for the coming years? Or people who are extremely smart but don’t make good team players? Or whom nobody wants to work with, because they are rude and don’t respect others’ opinions? I guess the riddle questions are not the best kind either.

Two kinds of employees

There are two kinds of employees. This is a simplification, but it boils down to this, and the recruiting process should take it into account, because everything else – the whole “lifetime” of an employee at a company – is centered around it. If an employee of one kind takes a position that requires an employee of the other kind, the employment will not be successful, and in the best-case scenario will be terminated sooner rather than later, no matter how good the employee really is. This classification applies not only to software engineering, but to most, if not all, jobs.

The first kind is more common. These employees must be told what to do. They are not able to find problems and fix them on their own; somebody else must find work for them. They are usually not proactive. Novice employees of this kind will not be able to complete tasks without guidance, will not research problems thoroughly, and will not look for potential issues. When faced with obstacles, they will be hopelessly stuck until somebody pushes them in the right direction. After many years of work they will finally learn how to cope with most of these problems, but they will still need to be told what to do and when, and they will still not pursue new potential tasks on their own. Be careful not to dismiss them: they are very valuable when assigned the right job, and they can handle many if not most common tasks. They are good at performing repetitive or simple tasks – after long training, even sophisticated ones. They are great to have if the direction is clear. They always need supervision.

The second kind is much less common. These employees are the drivers. They go out looking for problems, find solutions, and fix things. They are proactive and often don’t require supervision. They should not be micromanaged, because they will feel they don’t have control, will not be able to perform, and will not be satisfied with their job. These employees are often good at researching difficult problems and driving things to completion. It is always good to have at least one on a team.

There are groups within each kind, and there is a special group within the second kind – let’s call them high achievers for now. I admire them the most. They constantly strive to improve themselves and treat their job almost like an art; they are the craftsmen. Whatever problem you throw at them, they will solve it.

Now don’t be fooled: you don’t want to assemble a team of high achievers. The perfect team is a balance of people of each kind, so that the entire team can make forward progress as well as solve problems as they are encountered. The last thing you want is to assemble a team of smart gurus who won’t communicate with each other and will rip the team apart.

This is true not only in software development but in most kinds of jobs. For example, there have been many attempts at building all-star teams in sports, but eventually those teams turned out mediocre, because the top players were not able to work together.

The question is: how do you determine during an interview which group a candidate belongs to? Throw in a riddle? People from both groups can be smart and able to solve riddles. Technical questions are not good either. To be frank, I don’t know yet how to distinguish between the two kinds of candidates.

Two kinds of employees, second try

To make it easier, somebody suggested another classification to me: those who are hard-working and those who are not. This is orthogonal to the previous classification. Hard-working in this case does not mean willing to do overtime; it simply means bent on finishing tasks, finding solutions, and being honest about the work. Even people who need constant guidance can be split into those who just don’t care about the job and those who are willing to learn and build their careers.

The nice thing about this classification is that it is easy to devise simple questions with which you can judge, with a high level of certainty, whether a person is hard-working or not. You can start by asking for details about their previous assignments and judge how committed they were to completing them and how competent they are.

I like to think of these two classifications as two dimensions of an employee classification scale. There are other dimensions as well, such as technical knowledge, which is also important but trivial to assess.

Other dimensions, such as “fit for the team”, are often overrated and over-advertised by HR or management. What matters is whether a person is a team player or not – but sometimes even this does not matter. If you need a “high achiever” to tackle the difficult problems your team comes across, it matters less whether he is a team player than how effective he is, although you still need to back the other tasks in the project with team players.

The bottom line

The bottom line is whether the candidate will be able to perform the job. Will he write good code or not? Will he care about the code he writes or not? Will he be willing to improve his skills or not?

Currently I focus on presenting candidates with real-world problems and asking how they would solve them. For example, what would they do if they came across badly written code? Would they leave it alone or try to fix it, and why or why not? This gives me some idea about the candidates and allows me to compare them, and discussing the solutions with them also lets me judge their experience. I also like to give them a simple (but not too trivial) piece of code to write and see how they cope with the task.

I know this is not a perfect way to interview; I am still looking for a better one.

Categories: Computing

Facebook and Twitter

31.03.2012 Leave a comment

I set up a Facebook page and a Twitter account so that it’s easier for you to follow this blog. The Facebook page is facebook.com/chris.dragan.name and the Twitter account is @chrisdraganname. Hopefully this post will be the first to be publicized on both.

Twitter was trivial, but setting up a Facebook page was not as easy as one would have thought. After setting up the Facebook page for the blog, I learned that WordPress refused to publish my posts to it; I had to create a profile. Even after I created the profile and bound the page to it, WordPress still refused to connect to Facebook.

Let’s see how it goes, all for your convenience, dear reader.

Update

WordPress still refuses to publish posts to the Facebook page and wants to publish to my profile instead. Very confusing. You can still like my blog page on Facebook – it’s public.

Categories: Other

Nokia’s road to demise

21.03.2012 Leave a comment

When Nokia dumped Symbian, I said that they either had to embrace an existing solution, such as Android, or create something from scratch.

The likely reason for dumping Symbian was that it was too deeply rooted in the past. They thought it would be hard for Symbian to compete with iOS and Android. Maybe, maybe not.

But then they chose to replace Symbian with Windows Mobile/Phone. It was obvious this was not going to fly: Windows Phone had a niche market and it was unlikely to pick up. Basically, they bet on a dead horse. They jumped on it because Microsoft paid them to use Windows Phone – they were tempted by the money.

Now an ex-Nokia exec confirms that it was a very bad decision.

I guess they did not go with Android, because they had their heads too high in… the clouds.

Categories: Computing