Archive for the ‘Computing’ Category

Linux Mint

26.04.2012

What is up with Linux?

I use Linux a lot. For years I've been using Gentoo Linux. Gentoo is great if you want to learn about Linux, how it works and how it's put together. On Gentoo you choose every package you want installed, and you also choose which options are enabled in each package. Gentoo compiles all the packages you choose and their dependencies from source. It also comes with excellent documentation describing most setup and configuration options, especially for popular packages.

After years of using Gentoo, one of its drawbacks started bothering me: updating. I kept updating it too often, and updating Gentoo takes a lot of time, because all packages are compiled from source. Another very common problem was that newer packages quite often broke things in the system. Most often some auxiliary package or library didn't compile, or came with options incompatible with or breaking other packages, and I had to search for workarounds. At other times configuration files changed and I had to find out how to use the new ones. I was spending too much time fixing these breakages.

At some point I decided to switch to Ubuntu, which used to be the most popular distribution. I switched to Ubuntu only at home and left Gentoo on my desktop at work. It mostly worked fine, but I didn't use it much. Ubuntu gets major updates every six months, and in addition updating is easy: you just click and it downloads the updated packages. Unfortunately it sometimes breaks portions of the system when you update; apparently no Linux distribution out there has figured out how to avoid breaking users' systems during updates. For example, sometimes in the middle of updating the process would stop because of some configuration problem and I had to find out how to work around it to continue the upgrade. At other times some packages stopped working (e.g. Bluetooth) and I needed to fix them.

But the most annoying thing about Ubuntu is that it changes the default programs all the time. For example, the program for browsing images would change once a year, i.e. the update process would remove the old program and install a new, different one. Or the program for playing music would change. The settings would not be copied from the old program, so I would have to configure the new one from scratch. In the latest incarnation, half a year ago, the default desktop environment was changed to an Ubuntu-sponsored one (Unity). That was the end of Ubuntu for me.

Then I found out about Linux Mint. This is another Debian-based distribution (like Ubuntu), but it comes in several flavors, so it's easier to choose the one you like. I am particularly a fan of the XFCE desktop environment, which is very simple and fast, yet has all the features of the major desktop environments (KDE, Gnome).

So I switched to Linux Mint, this time both at home and at work (although I still keep Gentoo on the side for some tasks, since by its nature it's very convenient for development). In my opinion Mint is generally nicer than Ubuntu, probably because it comes in many flavors and the XFCE version appealed to me. Of course you could say that you can install and use XFCE in Ubuntu too, but then you would also have to keep KDE or Gnome, and I wouldn't like that; I especially don't like keeping a lot of garbage I don't use. The package manager and updater are also nicer on Mint than on Ubuntu.

Very shortly after switching to Mint I found out an inconvenient truth: sound does not work! It worked neither at home nor at work, and these computers have very different, but not uncommon, motherboards. ALSA actually supports them just fine and I never had any problems with sound on Gentoo or Ubuntu. Fortunately I don't do much sound-related work on Linux. I determined that the problem lies in pulseaudio, which is one of many sound managers for Linux. This is a general problem with Linux (still, after so many years!): there is no single sound solution. I don't know why some distributions choose pulseaudio; it simply doesn't work for a lot of people. After spending many hours trying to get any sound out of it, I uninstalled pulseaudio and got sound working through ALSA in most applications. Except Flash! I still don't know how to get sound in my browser. I hope that Google soon releases Chrome with built-in Flash and it will just work. It looks like Chrome currently uses Flash from the distribution, and Mint has its own version of Flash. Another possibility could be to replace Mint's Flash with another build, but I haven't tried that yet.

Recently I did a major upgrade of Mint and ran into serious issues with it as well. The update died in the middle with a couple of broken packages, because my partition had run out of space. Somehow the package manager cannot figure out in advance that it is going to run out of space and warn the user. Again I had to search the Web for a solution, which involved falling back to apt (a command-line tool) to fix the broken packages. Then the graphics drivers stopped working, and it took me some time to figure out that one package had been mysteriously uninstalled during the update; I had to reinstall it manually. Afterwards the update finished fine and I had no other issues. But I feel these issues shouldn't have happened in the first place. It's a pity the package updaters are so crappy. As I wrote before, no Linux distribution has figured out how to handle system updates gracefully.

Other issues I have with Mint: the Apple Magic Mouse doesn't work. Apparently the drivers for that mouse are unusable. The mouse works fine in Windows 7. Also, the QEMU which comes with Mint hangs. I tried QEMU compiled manually, but it's very slow. I ended up using QEMU compiled on Gentoo, and that works great; so far I haven't had time to figure out what's wrong on Mint.

Despite the problems described above, I am still happy with Mint, and it seems to work better for me on the desktop than Ubuntu (it has XFCE and I choose what I want installed) and Gentoo (no constant lengthy updates and manual fixes, at least not that often). On the other hand, because of these problems I don't know if I would recommend Linux to people who are not computer gurus.

Besides Linux I also use Mac OS X, Windows XP, Windows 7, iOS and Android. All of them have their good and bad sides. Linux is nice on the desktop for software development once you get used to it. Although I must say that I like Mac OS X better, but that could be a topic for another post…

Categories: Computing

How to hire great programmers?

20.04.2012

Noticed on Herb Sutter’s blog: quotes from Steve Jobs.

The interesting quote is: “In most businesses, the difference between average and good [employee] is at best 2 to 1. […] But in software, it’s at least 25 to 1. […] The secret of my success is that we have gone to exceptional lengths to hire the best people in the world.”

I guess the difference between people in the middle of that range (12) and people close to the bottom (1) can easily be spotted, as per my previous post. But how do you tell the difference between the best ones (25) and the middle ones (12)? And especially: how do you tell whether a person has the potential to become a “25”? How did Steve do that?

Categories: Computing, Other

C++ renaissance

12.04.2012

Today I stumbled upon this very interesting comparison of which languages are used to write popular software (it was mentioned on Herb Sutter’s blog).

I encourage you to go through the table yourself and contemplate it. To sum it up, most of the popular software we use every day is written in either C or C++. The author makes the point that other languages, especially languages which don't compile directly to executable machine code (a.k.a. native code), are still in a niche and always will be, because no matter how fast computers become, we will use their extra power and resources for new features instead of wasting them on non-native languages.

Back in the old days, when Moore's law directly influenced CPU frequencies, the advocates of non-native languages always argued that at some point CPUs would become so fast that native languages would lose their usefulness due to their clunkiness, among other similar arguments.

Then the year 2006 came and CPU frequencies hit a wall. It turned out that in order to make further progress in performance, CPU manufacturers have to pack more and more cores into their products. We can leverage that in many algorithms which are parallelizable, but there are still a lot of things our programs have to do sequentially, and there is no way around that. Sure, multiple applications can use separate cores, but multiple cores don't work miracles; we are stuck with a frequency limit!

Before 2006 everything was going well: Java was at its peak and Microsoft promoted C# and .NET as its platform of the future. Today Microsoft backs C++11 and encourages developers to “go native”, while the future of C# is uncertain (there are fears that Microsoft will drop .NET).

When Google came out with its Chrome browser and revolutionized, if not leveled, the browser landscape, some of the most significant improvements went into the JavaScript engine. The Chrome browser was released in the midst of a JavaScript performance war between Firefox and Opera, and Chrome did it better. After years of improving their JavaScript engine, Google effectively declared that they had given up on JavaScript performance: they decided to develop a new language (Dart), which is typed and is meant to compile in the browser into native code. Consider this: web scripting is important and the performance of website scripts must improve, but to get there Google decided they can't rely on a non-native language anymore! This does not mean that JavaScript will go away, but Google is placing its bet on Dart, a native language, for the future of more sophisticated web programming.

I think non-native languages are still very useful for scripting, prototyping and other similar tasks, where you need a simple language which does not have to produce lightning-fast results. But non-native languages cannot replace native languages for more sophisticated tasks where the software must meet resource constraints. In other words, the usefulness of Java, C# et al. has been strongly overrated; they will remain niche languages at least for the next decade.

Update

A friend of mine pointed out that job postings indicate that Java is in higher demand than C or C++. That is a valid point; every language has its own purpose and application. It does not change the fact that non-native languages will not be replacing native languages any time soon.

Categories: Computing

Interviewing

1.04.2012

However easy it may seem, interviewing software engineers is hard.

If you have a little experience in developing software, you probably participate in the hiring process once in a while. It goes like this: you come up with a few questions, see how candidates tackle them and give your opinion on whether the candidates fit the position you are interviewing for. Sounds easy.

But how do you know that you can measure anything with your questions? If you are a novice interviewer, you will make mistakes. You will sometimes make bad judgements even if you are a seasoned interviewer. This is why most companies throw several interviewers at each candidate. If all interviewers agree about a candidate, the hiring manager has an easy task. If there is too much disagreement between interviewers, the manager will either dismiss the candidate or recommend him to another team. This also sounds easy. But it's far from perfect, so let's look at how the process works and what could be improved.

How hiring works at most companies

The first steps of the hiring process involve attracting candidates, e.g. by advertising open job positions. When the candidates send in their resumés, they go through the HR filter, which is typically based on keywords. For this reason, if you are looking for a job, you should put as many keywords and acronyms as you can come up with into your “experience” section, just to get through the Great Filter. If you are a C++ developer applying for a C++ position, but you have used C# in the past, just throw it in, even if you are not interested in a C# position or have very little experience with it; to an HR person you may appear more valuable than other potential candidates.

The great fault of this process is not that bad candidates slip in through the filter (those who have no experience but a pretty resumé), but that the filter potentially rejects good candidates. Some candidates may lack resumé-writing skills, but could make great employees.

Then there is the interview series, where multiple interviewers question each candidate, as described above. Usually it is easy to dismiss weak candidates; for example, most candidates are unable to answer simple programming questions. A person who does not care about programming won't make a great employee, unless the goal is to hire a bunch of drones for a thoughtless job, which is always a bad idea; such an approach is usually a sign of bad management.

The fault here is not that this sieves out the sea of candidates lacking competence for this particular job. The problem is that the process fails to distinguish between great hires and mediocre hires, and also usually fails to match candidates to a particular position (more about this later).

The last step is salary negotiation, which I am not going to cover here, although this is also an interesting topic.

Questions

Probably everybody has their own favorite types of questions. First, there are the technical questions, which focus on knowledge. For a long time this was my favorite kind; the problem was that almost no candidate was able to answer them all. For example, I would ask about five different C++ keywords. I could count on the fingers of one hand the candidates who actually knew them all. Most candidates are able to guess only a few keywords.

My question is: why are you applying for a position which specifies a particular language you will be using, if you are not even willing to learn what all its keywords are for? OK, C++ is one of the most difficult languages and has over 60 keywords. I have seen several candidates claiming to be “advanced” or “experienced” in C++, yet unable to even guess some keywords. You might say that some of these keywords are rarely used or not useful, or that it's easy to look them up, but would you expect the candidate to do his job seriously, with attention to detail, if he is not even willing to prepare for the interview? (By the way, “I could look it up” is not an entirely bad answer.)

Indeed, overly technical questions do not help us determine how good a fit a particular candidate is for the job. Knowing rare keywords by heart does not help with good design, thorough testing or finishing the job at all.

For years I would indulge myself and abuse the fact that there were other interviewers interviewing the candidates as well, so they would compensate for my being too harsh. I would also throw in a piece of C code where the candidate had to track what number gets printed in the end. Very tedious. Not everybody knows that 1.0f is represented as 0x3F800000U. But I was not satisfied; in the end, knowledge != wisdom. I realized that I was looking for something more than just knowledge in the candidates.
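
For illustration, here is a small program of my own in the spirit of those tracing questions (not the actual code I used in interviews); it assumes 32-bit IEEE 754 floats and a 32-bit unsigned int:

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 1.0f;
    unsigned u;
    memcpy(&u, &f, sizeof u);  /* reinterpret the float's bits as an integer */
    printf("0x%08X\n", u);     /* prints 0x3F800000: sign 0, exponent 127, mantissa 0 */
    return 0;
}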

Then there are the riddle kinds of questions. Some companies are famous for giving sophisticated riddles and excel at it. The proponents of riddles claim this is the best way to hire smart people. But do you really want to hire smart people who don't know anything about the technology they are going to be working with for the coming years? Or people who are extremely smart, but don't make good team players? Or whom nobody wants to work with because they are rude and don't respect others' opinions? I guess the riddle questions are not the best kind either.

Two kinds of employees

There are two kinds of employees. It is a simplification, but it boils down to this, and the recruiting process should take it into account, because everything else, the whole “lifetime” of an employee at a company, is centered around it. If an employee of one kind takes a position that requires an employee of the other kind, the employment will not be successful and in the best-case scenario will be terminated sooner rather than later, no matter how good the employee really is. This classification applies not only to software engineering, but to most, if not all, jobs.

The first kind is more common. These employees must be told what to do. They are not able to find problems and fix them on their own; somebody else must find work for them. They are usually not proactive. Novice employees of this kind will not be able to complete tasks without guidance, will not research problems thoroughly, and will not look for potential issues. When faced with obstacles, they will be hopelessly stuck until somebody pushes them in the right direction. After many years of work they will finally learn how to cope with most of these problems, but they will still need to be told what to do and when, and they will still not pursue new potential tasks on their own. Be careful not to dismiss them: they are very valuable when assigned the right job and they can handle many if not most common tasks. They are good at performing repetitive or simple tasks, and after long training even sophisticated ones. They are great to have if the direction is clear. They always need supervision.

The second kind is much less common. These employees are the drivers. They go out looking for problems, find solutions and fix them. They are proactive and often don't require supervision. They should not be micromanaged, because then they will feel they have no control, will not be able to perform, and will not be satisfied with their job. These employees are often good at researching difficult problems and driving things to completion. It is always good to have at least one on a team.

There are groups within each kind, and there is a special group within the second kind; let's call them high achievers for now. I admire them the most. They constantly strive to improve themselves and treat their job almost like an art: they are the craftsmen. Whatever problem you throw at them, they will solve it.

Now don't be fooled: you don't want to assemble a team made up entirely of high achievers. The perfect team is a balance of people of each kind, so the entire team can make forward progress as well as solve problems as they are encountered. The last thing you want is to assemble a team of all smart gurus who won't want to communicate with each other and will rip the team apart.

This is true not only in software development but in most kinds of jobs. For example, there have been many attempts at building all-star teams in sports, but eventually these teams turned out mediocre, because the top players were not able to work together.

The question is: how do you determine during an interview which group a candidate belongs to? Throw in a riddle? People from both groups can be smart and able to solve riddles. Technical questions are not good either. To be frank, I don't know yet how to distinguish between the two kinds of candidates.

Two kinds of employees, second try

To make it easier, somebody suggested another classification to me: those who are hard-working and those who are not. This is orthogonal to the previous classification. Hard-working in this case does not mean willing to do overtime; it simply means bent on finishing tasks, finding solutions and being honest about the work. Even people who need constant guidance can be split into those who just don't care about the job and those who are willing to learn and build their careers.

The nice thing about this classification is that it is easy to devise simple questions with which you can judge, with a high level of certainty, whether a person is hard-working or not. You can start by asking for details about their previous assignments and judge how willing they were to complete them and how competent they are.

I like to think of these two classifications as two dimensions of an employee classification scale. There are other dimensions as well, such as technical knowledge, which is also important but trivial to assess.

Other dimensions, such as “fit for the team”, are often overrated and over-advertised by HR or management. What matters is whether a person is a team player or not, but sometimes even this does not matter: if you need a “high achiever” to tackle the difficult problems your team comes across, what counts is not whether he will be a team player but how effective he will be, although you still need to back the other tasks in the project with team players.

The bottom line

The bottom line is whether the candidate will be able to perform the job or not. Will he write good code or not? Will he care about the code he writes or not? Will he be willing to improve his skills or not?

Currently I am focusing on presenting candidates with real-world problems and asking them how they would solve them. For example, what would they do if they faced badly written code? Would they leave it alone or try to fix it, and why or why not? This gives me some insight into the candidates and allows me to compare them, and discussing the solutions with them also allows me to judge their experience. I also like to give them some simple (but not too trivial) piece of code to write and see how they cope with the task.

I know this is not a perfect way to interview; I am still looking for a better way.

Categories: Computing

Nokia’s road to demise

21.03.2012

When Nokia dumped Symbian, I said that they would either have to embrace an existing solution, such as Android, or create something from scratch.

The reason for dumping Symbian was likely that it was too rooted in the past. They thought it would be hard for Symbian to compete with iOS and Android. Maybe, maybe not.

But then they chose to replace Symbian with Windows Mobile/Phone. It was obvious this was not going to fly: Windows Phone had a niche market and it was unlikely to pick up. Basically, they bet on a dead horse. They jumped on it because Microsoft paid them to use Windows Phone; they were tempted by the money.

Now one of Nokia's ex-executives confirms that it was a very bad decision.

I guess they did not go with Android, because they had their heads too high in… the clouds.

Categories: Computing

Language drawbacks

13.03.2012

The longer I am in the business of writing code (over 11 years and counting), the more nuisances I see in the set of technologies we use to write software.

When you start programming, you learn your programming language for years. Then you learn another language, which has some similarities to the first one, but also some other, new features. And then you learn even more languages (assuming you are that kind of person). There are features which most languages have, like ways to declare variables, invoke subroutines or create loops, and there are features which only one group of languages shares, like closures, coroutines, classes or templates.

Eventually you start to realize that there are language features which are useful and promote good ways of programming, i.e. improve readability and maintainability and reduce the number of bugs, but there are also features which are best avoided, because they encourage bad style which leads to bugs or an unmaintainable mess.

I could list a dozen or two such bad features in C++, such as macros, goto, protected, etc. I'm giving C++ here only as an example; every language has such features. In the case of C++ they are legacy features, and hard to remove. Perhaps the compiler could have a mode where these features are turned off completely. Perhaps the standards committee could even propose such a set of obsolete or deprecated features. Last month I had an opportunity to ask Bjarne Stroustrup, the creator of C++, what his opinion is about deprecating some features; his response was that despite former attempts to create a subset of C++, it is hard to do, because everybody has their own set of favorite features.
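
To show the kind of trouble I mean, here is a classic macro pitfall (my own illustration, not an example from that conversation):

#include <iostream>

#define SQUARE(x) x * x   // macros are blind textual substitution

int main()
{
    int n = 3;
    // Expands to n + 1 * n + 1, which is 7, not the 16 the caller expected.
    std::cout << SQUARE(n + 1) << std::endl;
    return 0;
}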

There are people who claim there should be only two languages in existence, such as C and Python (I knew such a person). Yet these languages, like any others, carry their own sets of flawed features.

I argue that we actually need more programming languages, and Neil McAllister seems to have nailed it. Because we can't fix the existing languages, we need new languages that build upon the experience of existing ones and avoid their mistakes.

Let's take JavaScript. This language has many very useful features, such as persistent closures (you can return a lambda function and the state of the closures from its parent will be preserved), and it gave birth to the JSON format. But it also has as many, if not more, terrible pitfalls, such as default globals (undeclared variables are global by default), semicolon insertion (semicolons are automatically added wherever they fit the parser, even if you don't want them), type promotion (it's hard to predict what type your variables will have in each expression) and so on. JavaScript has a very low entry barrier and almost anybody can write code in it, but relatively few people know what really happens in their JS programs; e.g. most people don't know the difference between the == and === operators or the difference between obj[“prop”] and obj.prop. I only recently realized the subtle difference between named and unnamed functions.

Not long ago I took a look at Lua, praised by some. After a few steps of an online tutorial I learned that variables assigned without being explicitly declared are global by default. Why would anybody create a language which does something like that? Why do we still see new languages with such features? (Lua is not new.)

You might ask what's wrong with that. Well, when you write a program, you make mistakes. Some mistakes are quickly caught by the parser, but many subtle ones are not. If you forget to declare a variable inside a function in JavaScript or Lua and you assign to it, the variable will be global. It may overwrite an existing global variable, it may leak your local state, it may hold unreleased memory until the end of the program, or it may even be prone to race conditions if you invoke the function from multiple threads at once. If you are not the only person working on a project, the probability of that happening is even higher.

My point is that every language feature which introduces uncertainty, or has some other kind of drawback, contributes directly to bugs and increases the amount of time people have to spend on making a program stable, or even making it work at all.

The same person who claimed that there should be only two languages was ignorant of features which promote good style, such as many of the features C++ has over C, like RAII or exceptions, which reduce the number of lines of code one has to write, the number of places to modify and the potential number of bugs. That person was admittedly known for producing stable code, even if it was sometimes convoluted, and it was not easy to find bugs in it. But here is the thing: one swallow does not make a spring. All people make mistakes, some more, some less. If a language feature promotes bugs, many programmers will suffer because of that feature.

So there are language features we don’t want, which we try to avoid. But what about features which are missing?

The basic purpose of a computer's existence is to do repetitive tasks so that humans can do the harder ones. This is why we don't have to solve difficult equations by hand anymore, and why we don't program in machine code; computers do these and many more things for us.

I recently watched Bret Victor’s presentation and he asked a very cool question: why the heck do we have to check function arguments for correctness, over and over and OVER again? When you write a function, you’re supposed to check the arguments first. When you are interviewing a potential new employee, the first thing you look at in his code is whether he checked the arguments. But isn’t this what computers are for? So why are we still doing the computers’ job?
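
Here is a minimal sketch (my own, hypothetical) of the boilerplate Victor's question targets: every function re-validates its inputs by hand, because the language cannot express or enforce these constraints for us.

#include <cassert>
#include <cstddef>

double average(const double* values, std::size_t count)
{
    assert(values != nullptr);  // the same manual checks, function after function
    assert(count > 0);
    double sum = 0.0;
    for (std::size_t i = 0; i < count; ++i)
        sum += values[i];
    return sum / count;
}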

How many undiscovered features are still waiting to be added to new languages, to help us write software in a better way?

Categories: Computing

C++11

6.02.2012

A few days ago I attended the Going Native 2012 conference. I captured my thoughts while they were fresh, sitting on a plane going back home a few hours after the conference; later I added a summary of the current state of C++ and where it is possibly going.

The Conference

The Going Native 2012 conference was devoted to the new C++ standard, C++11. The conference was organized by Microsoft at their main campus in Redmond. The name implies a switch from languages which rely on virtual machines, like Java or .NET, back to native languages which compile into binary code natively executed by target architectures.

It was great fun and a pleasure to attend the conference. A number of distinguished speakers who are directly involved in the creation and standardization of C++ were present, the talks were very interesting and of great value, and it was all well organized. I very much enjoyed being able to talk to the speakers in person between the talks, as well as having photos taken with them.

Everything was transmitted live over the Internet and it is all available for replay, for free, which adds value to the conference, because everybody interested in the new standard can benefit from it.

The attendance fee was very low, so I was surprised that I was still able to sign up a month and a half after Going Native 2012 was announced. I found out that many people signed up only a few weeks before it took place. That was surprising, considering the low fee and the high-quality content which could be expected from the speakers.

Some speculate that Microsoft put on this conference so generously to advertise that they are not abandoning C++, as many have thought, given that Microsoft's compiler in its current state is very much behind in support for the new standard. To fix this, they announced that they are working hard on improving support for the new standard in the upcoming version of their compiler. They are also planning more frequent releases, to bring new features to programmers sooner. Their implementation of the standard library is also going to be complete in the upcoming release.

Other good things about the conference: good food (nothing to complain about from my side), an excellent and energizing atmosphere, and the ability to meet various people and talk to them about their experiences with the language. I also came back with some splendid trophies.

The talks spanned over two days. On the evening of day one there was a dinner in a billiard club in downtown Bellevue.

The Speakers and the talks

Me with Bjarne Stroustrup

Most of all I enjoyed the opening keynote by Bjarne Stroustrup, the creator of the C++ language. To me his talk wonderfully explained the gist of programming in C++: the emphasis on style. It was very down to earth and applicable to all C++ code, especially industrial code. I have the privilege and pleasure of working with great engineers, and what Bjarne touted I see every day in our code reviews. One of the most important things when writing C++ code is to make it clean and understandable to others and to your future self. The less time you have to spend understanding the code, the better the code is. This actually applies to every language, not only C++. C++11 gives programmers new, outstanding tools which will improve code immensely. When used correctly, of course; that's where style comes into play.

Hans Boehm provided insight into the threading capabilities of the new standard library, which he authored. It's great that threads finally made it into the standard library. Hans gave a good introduction to the most important aspects of that new feature.

Next, Stephan T. Lavavej, a.k.a. STL, gave a good talk about standard library optimizations and a few new features which are very useful in conjunction with the standard library. STL is a very interesting personality; he is notable for talking very fast and is relatively young compared to the other speakers. Smells like a genius?

Me with Andrei Alexandrescu

Andrei Alexandrescu, known for bold and ingenious use of templates, talked about variadic templates, which greatly simplify certain template use cases and make it easier to write functions which accept a variable number of arguments. In his second talk, on the second day, he presented a proposal for a future version of C++: static if, a version of “if” which is resolved at compile time and whose branches are compiled on an as-needed basis, much like specialized templates. It improves the use of templates even more. Andrei also has an interesting personality; he is a great joker and showman. His talks are really fun to watch.
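
To give a feel for the feature, here is a small sketch of my own (not an example from Andrei's talk) of the simplification variadic templates bring: one type-safe function taking any number of arguments, instead of a separate overload for every arity.

#include <iostream>

void print()  // base case: the argument pack is empty
{
    std::cout << std::endl;
}

template <typename Head, typename... Tail>
void print(const Head& head, const Tail&... tail)
{
    std::cout << head << ' ';
    print(tail...);  // peel off one argument and recurse on the rest
}

int main()
{
    print(1, "two", 3.5);  // prints: 1 two 3.5
    return 0;
}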

On the second day, Herb Sutter, the chair of the C++ standards committee, talked about Microsoft's Visual C++ compiler, its current state and its future. He also pointed out a number of new C++11 features which will be immensely useful in common, everyday code. In one of his short talks he also mentioned C++ AMP, Microsoft's proposed extension to C++, which allows portions of a C++ program to run on GPUs and other kinds of parallel hardware. The extension is simple, open for other compilers to adopt, and integrates well with the C++11 language; likely something like this will be added to the standard itself in the future.
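
For the curious, here is a small sketch in the C++ AMP style, based on my understanding of the preview API (so treat it as an approximation rather than authoritative): the lambda marked restrict(amp) is compiled to run on the GPU.

#include <amp.h>
#include <vector>

void double_all(std::vector<float>& data)
{
    concurrency::array_view<float, 1> av(static_cast<int>(data.size()), data);
    concurrency::parallel_for_each(av.extent,
        [=](concurrency::index<1> idx) restrict(amp)
        {
            av[idx] *= 2.0f;  // runs on the accelerator for each element
        });
    av.synchronize();  // copy the results back to the host vector
}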

Chandler Carruth, who works on tools based on Clang, gave an introduction to the Clang compiler, a relatively new C++ compiler implementation based on the LLVM backend. It is open source and rivals GCC. Among its advantages are real openness (it's available for commercial use) and modularity. Modularity is especially important, because you can take parts of the compiler and reuse them for various purposes. Some tools being developed on top of Clang include one that automatically inserts a minimal set of #includes and one that refactors C++ code, even if it is partially hidden behind macros. Another awesome feature of Clang is its diagnostic messages, which not only cleanly point out where a problem lies, even in complicated template code, but also suggest possible solutions.

I was not familiar with Clang or C++ AMP before; these are two really interesting technologies which will likely affect the way we program in C++ in the near future, in a positive way.

Last but not least, Bjarne together with Andrew Sutton talked about the history and current state of concepts, a feature which was not accepted into C++11 and which many people miss. Concepts are about specifying intent, or template argument constraints, when writing templates. The work on concepts is not finished; Bjarne and Andrew are still working on them and plan to propose them soon for addition to a future version of the C++ standard.

C++11 introduces a lot of useful features, and the preferred ways of writing C++ programs have changed. It is advantageous to use the new features to produce better code. The new standard therefore made a lot of books obsolete, in a way: books which teach C++ should now focus on different features than before. It will probably take 3-5 years for new books to catch up with the standard. Herb promised that the committee will attempt not to repeat this with the next version of the standard; they will mostly focus on fixes and useful additions, without improvements as enormous as this time.

The standards committee also wants to address the scarcity of good, portable C++ libraries and standard library features which would integrate well with the current standard library. Their current approach is to make it easier for contributors to submit proposals for standard library extensions and to relax the strictness of reviews.

The state of C++11

My takeaway from the conference is that the C++ standard has caught up with modern language features (such as lambdas, closures, threads, etc.). A lot of features which were missing for a very long time are now available. Among them are tools which significantly improve programming style by automating tasks in the compiler instead of requiring the programmer to do them explicitly, such as the auto keyword.
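
A tiny example of that automation (my own illustration): with C++11's auto, the compiler deduces the iterator type instead of the programmer spelling it out.

#include <map>
#include <string>

int total(const std::map<std::string, int>& counts)
{
    int sum = 0;
    // C++98 required: std::map<std::string, int>::const_iterator it = counts.begin();
    for (auto it = counts.begin(); it != counts.end(); ++it)
        sum += it->second;
    return sum;
}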

The current compilers implement a subset of C++11 features. In the coming versions of Microsoft Visual C++ and GCC this will be a major subset. Fewer and fewer features are missing. It will probably take about two more years before all major compilers come with full support for C++11.

Probably even more years will pass before many projects upgrade their compilers to the latest versions and are able to leverage C++11 features. I'm hoping this happens sooner rather than later, because C++11 really is a better language than C++98, and code written in C++11 will statistically be better.

The future of C++

It is easy to notice the current trends in computing and where C++ is going. Future revisions of the C++ standard will provide even more expressiveness to the programmer and more opportunities for writing better and cleaner code.

The tools are also evolving: we will see faster compilers which optimize code better than today's and issue better diagnostic messages. We will also see more tools built on compilers, which will give us more ways to modify, transform and refactor our code, and which will provide better instrumentation and other ways of detecting bugs.

We can expect more libraries for C++, either standardized or as part of the standard library, which will provide useful functionality readily available for use in our programs, without the need to dig through the Internet or write our own. Many languages, such as C# and Python, come with lots and lots of libraries and allow the programmer to leverage them in many fields out of the box. C++ will gain similar capabilities.

C++ will likely become better at leveraging the architectures of today and tomorrow. The language and the standard library will include functionality similar to C++ AMP or Thrust. The programmer will be able to leverage vector instructions, multiple CPU cores and heterogeneous architectures (e.g. CPU+GPU) right in the C++ code, without the need for external tools, and the code will just work.

Categories: Computing

Upgrades

28.01.2012

I’ve been silent this month, but I’m preparing for the highly anticipated release of Diablo 3. I used to be a gamer back in the day, but haven’t had time to play games in recent years. I’m a fan of the Diablo genre, it’s been almost 12 years since the second part. The third installment was announced almost 5 years ago and it’s been in the making probably for more than that. I’ve had a chance to play the demo recently and I can’t wait for the final release.

So I blew the dust off my almost six-year-old computer. It has a dual-core 1.8GHz Opteron, 1GB of RAM and a GF 6600 graphics card. That's below the game's official spec, so I upgraded the RAM to 2GB and acquired a GF 460GT graphics card (still in shipment). I'm holding off on the CPU upgrade until I'm sure it's really needed. The official spec recommends a 2.2GHz CPU, but I have a feeling my current one should do just fine; after all, what should matter most is the graphics card, shouldn't it? All I know for sure is that I can't go higher than 2.6GHz, the fastest Opteron for my motherboard, and these are scarce these days since they were discontinued a long time ago.

I played the demo on a 27″ (I think) monitor and it looked great. So I started thinking of upgrading my 19″ monitor as well, though my wife objected and proposed that I use the TV instead. Well, we'll see how our 50″ plasma deals with games.

On this occasion I switched from Ubuntu to Linux Mint. I got fed up with Ubuntu since they keep adding more and more cruft and bloat, and I got aggravated by them constantly changing image viewers, media players and even the desktop environment. The Linux Mint setup I chose is based on the rolling Debian distribution. By rolling I mean there are no major upgrades; new packages simply appear, ready for upgrade, from time to time. This is similar to Gentoo: you can keep the system up to date all the time, if and when you want, without depending on major releases. I also switched back to the good old XFCE, which is one of the leanest but still useful desktop environments for GNU/Linux.

Not long after, I screwed up my Gentoo-based Linux desktop at work. It came at the wrong moment, of course. So I decided to switch it to Linux Mint as well. It works well so far, with two exceptions: QEMU is slow as hell and there was no sound! I still haven't figured either out. QEMU is problematic, since I compiled it from source (the one from the package database was not just slow, it hung). I don't have a clue how to approach it yet; maybe I'll end up trying to rebuild the kernel. As for sound, the matter is embarrassing (for Linux Mint, of course). I tried everything, I think, and there was still no sound. Finally, in anger, I wiped out pulseaudio. Now I can play sound from the command line through ALSA, but I still have no sound in Chrome and other apps. Doh!

No wonder the year of the Linux desktop never came! Well, it's not that surprising after all, but Linux did get into our pockets masquerading as Android…

Categories: Computing

Case-insensitive identifiers

23.12.2011

Recently I came across Jeff Atwood's article about the idea of case-insensitive identifiers. I think this is an interesting idea; here's why.

Why have case-sensitive identifiers at all? Function names, variable names, object member names. Having two variables in your program which overlap in any scope and whose names differ only by case is generally a bad idea. To somebody who tries to read and understand the program, they are indistinguishable; most likely they are a programming error or a leftover from an older version of the code. It would probably be a good idea for statically typed languages to forbid two variables whose names differ only by case.

Let's take dynamically typed languages, such as Python or JavaScript. One of their advantages over statically typed languages is that they allow a faster development cycle, because less text is needed to write a program, so source code is more concise. More concise source code is statistically easier to read, review and therefore maintain. However, the disadvantage of dynamically typed languages is that variable references are not checked at compile time, but resolved at run time. Hence it is easy for bugs to hide in Python or JS programs: the kind of bugs that can only be detected at run time, in very specific situations.

Let’s consider the following function in Python:

class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def AddVectors(v1, v2):
    return Vector(v1.x + v2.x, v1.y + v2.y)

Most of the time there won’t be a problem with it, but sometimes the caller may pass malformed input (e.g. input read from a wrong file) and the function will raise an exception. That’s the drawback of dynamically typed languages.

But in some code path the programmer may just make a mistake:

class Empty:
    pass

v1 = Empty()
v1.X = 1                 # note the capital X and Y; AddVectors expects .x and .y
v1.Y = 2
v2 = AddVectors(v1, v1)  # raises AttributeError: 'Empty' object has no attribute 'x'

This program will obviously fail with an exception. It is a programming error, but does it have to be?

I argue that no, it should not really be a programming error. This kind of bug may be very annoying if it occurs in a rarely traversed path, and causes rare, unnecessary crashes for the end user.

Because using two variables differing only by case should be avoided anyway, as it leads to confusion and therefore bugs, it would actually be useful if identifiers in dynamically typed languages were case-insensitive.

This problem does not directly apply to statically typed languages, because all variable references are resolved at compile time, so the programmer has the opportunity to catch all spelling errors before the program is executed for the first time. Still, it would not hurt if the compiler (say, for C++ or Java) did not allow two variables differing only by case; this would lead to cleaner and better code.

Categories: Computing

Operator precedence

30.11.2011

Recently I came across a bug where the author forgot to use parentheses in a conditional expression. The code went like this:

if (AAA &&
    BBB || CCC || DDD)   /* parsed as (AAA && BBB) || CCC || DDD */
{
    /* ... */
}

The bug was obvious, because this is what the author really meant:

if (AAA && (BBB || CCC || DDD))

But this is not how the compiler understood it.

There are so many operators that it's hard to remember their precedence. Not many people remember off the top of their heads whether the | (bitwise or) operator has higher or lower precedence than & (bitwise and) or ^ (bitwise xor). Let alone the << and >> (bitwise shift) operators, which in C++ are also used as stream operators, and which have higher precedence than the other bitwise operators. There are other surprises too, such as the comparison operators having higher precedence than the bitwise and/or/xor operators.
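
That last surprise bites in practice. Here is an illustration of my own (not the bug from the beginning of this post): == binds tighter than &, so the intended flag test below never fires.

#include <stdio.h>

int main(void)
{
    unsigned flags = 0x2;
    if (flags & 0x2 == 0x2)          /* parsed as flags & (0x2 == 0x2), i.e. flags & 1 */
        printf("flag set?\n");       /* never printed, since 0x2 & 1 == 0 */
    if ((flags & 0x2) == 0x2)
        printf("flag really set\n"); /* what was meant */
    return 0;
}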

These days it's not uncommon to use more than one language in everyday work; various scripting languages especially come to mind. Many languages share the same set of operators, but operator precedence may vary between languages; for example, Python's operator precedence is different from C's.

All this results in errors when writing code and creates unnecessary maintenance problems.

There are several groups of operators which obviously have higher precedence than others. For example, the * (multiply) operator has higher precedence than the + (add) operator; we were taught this in mathematics lessons in elementary school. It's also no surprise that arithmetic operators have higher precedence than logical operators. But other combinations are ambiguous. Should bitwise operators have higher or lower precedence than arithmetic operators?

To avoid bugs and make the code easier to read for anybody who will be maintaining or extending it, it is a good practice to use parentheses. It’s good to have this rule in coding conventions for any project.

Languages should impose the usage of parentheses in ambiguous situations in their grammars. It is easy to define such grammar rules even in the simplest notations, like BNF. For example, such a rule could forbid mixing different bitwise operators without parentheses, or mixing arithmetic and bitwise operators, etc. This would help to avoid subtle bugs which are sometimes difficult to spot.

Categories: Computing