
Archive for March, 2012

Facebook and Twitter

31.03.2012

I set up a Facebook page and a Twitter account so that it’s easier for you to follow this blog. The Facebook page is facebook.com/chris.dragan.name and the Twitter account is @chrisdraganname. Hopefully this post will be the first to be publicized on both.

Twitter was trivial, but setting up a Facebook page was not as easy as one would think. After setting up the Facebook page for the blog, I learned that WordPress refused to post my posts to it; I had to create a profile. Even after I created the profile and bound the page to it, WordPress still refused to connect to Facebook.

Let’s see how it goes. All for your convenience, dear reader.

Update

WordPress still refuses to publish posts to the Facebook page and wants to publish to my profile instead. Very confusing. You can still like my blog page on Facebook; it’s public.

Categories: Other

Nokia’s road to demise

21.03.2012

When Nokia dumped Symbian, I said that they either had to embrace an existing solution, such as Android, or create something from scratch.

The likely reason for dumping Symbian was that it was too deeply rooted in the past. They thought it would be hard for Symbian to compete with iOS and Android. Maybe, maybe not.

But then they chose to replace Symbian with Windows Mobile/Phone. It was obvious this was not going to fly: Windows Phone had a niche market and it was unlikely to pick up. Basically, they bet on a dead horse. They jumped at it because Microsoft paid them to use Windows Phone; they were tempted by the money.

Now one of Nokia’s former executives confirms that it was a very bad decision.

I guess they did not go with Android, because they had their heads too high in… the clouds.

Categories: Computing

Nanotechnology

14.03.2012

Nanotechnology is one of the technologies of the future which we will develop and embrace. The idea of nanotechnology was pioneered by Eric Drexler, who in his book “Engines of Creation” described the potential and many uses of this technology.

The whole idea centers on the ability to manipulate individual atoms for various purposes: creating new materials, building entire devices from scratch with unparalleled precision, or modifying molecules in living organisms, including repairing human bodies.

In recent years we have seen incremental progress in the ability to manipulate individual atoms, with or without the help of carbon nanotubes. However, we are still very far from mastering the technology; we still need one or more breakthroughs.

There is a lot of debate concerning nanotechnology, including its feasibility and dangers. But nanotechnology is already all around us: we and all lifeforms are its creation. Nanotechnological devices lie at the basis of all living cells and form the nanomechanical parts of all organelles.

When nanotechnology finally arrives, it will change our world more than cars or computers did. We will be able to manufacture goods at home: we will have a pot or a chamber filled with a medium, download designs from the internet, throw in raw materials such as dirt, wait, and take out a TV or parts of a car to assemble. Just as we have paid and free software, we will eventually have paid and free designs of devices to assemble at home.

Carbon and silicon will become the most commonly used materials. Surely people will still want to use wood and other traditional materials, but those will be more expensive and less durable.

A lot of people will lose low-paid jobs, especially in manufacturing and distribution, but more intellectual jobs will open up instead. After all, we are good at thinking; we should let robots and computers do the mechanical work. There will still be demand for food, but food production has already been automated to some extent.

Categories: Universe

Language drawbacks

13.03.2012

The longer I am in the business of writing code (over 11 years and counting), the more nuisances I see in the set of technologies we use to write software.

When you start programming, you learn your programming language for years. Then you learn another language, which has some similarities to the first one, but also some other, new features. And then you learn even more languages (assuming you are that kind of person). There are features which most languages have, like ways to declare variables, invoke subroutines or create loops, and there are features which only one group of languages shares, like closures, coroutines, classes or templates.

Eventually you start to realize that some language features are useful and promote good ways of programming, i.e. they improve readability and maintainability and reduce the number of bugs, while other features are best avoided, because they encourage bad style that leads to bugs or an unmaintainable mess.

I could list a dozen or two such bad features in C++, such as macros, goto and protected. I’m using C++ here as an example; every language has such features. In C++’s case they are legacy and hard to remove. Perhaps the compiler could have a mode where these features are turned off completely. Perhaps the standards committee could even propose such a set of obsolete or deprecated features. Last month I had an opportunity to ask Bjarne Stroustrup, the creator of C++, for his opinion on deprecating some features. His response was that despite former attempts to create a subset of C++, it is hard to do, because everybody has their own set of favorite features.

There are people who claim there should be only two languages in existence, such as C and Python (I knew such a person). Yet these languages, like any others, carry their own sets of drawback features.

I argue that we actually need more programming languages, and Neil McAllister seems to have nailed it. Because we cannot fix the existing languages, we need new languages that build upon the experience of existing ones and avoid their mistakes.

Let’s take JavaScript. This language has many very useful features, such as persistent closures (you can return a lambda function, and the state of the closures from its parent will be preserved), and it gave birth to the JSON format. But it also has as many, if not more, terrible pitfalls: default globals (undeclared variables are global by default), semicolon insertion (semicolons are automatically inserted wherever they fit the parser, even if you don’t want them), type promotion (it is hard to predict what type your variables will have in each expression), and so on. JavaScript has a very low barrier to entry, so almost anybody can write code in it, but relatively few people know what really happens in their JS programs; for example, most people don’t know the difference between the == and === operators, or the difference between obj[“prop”] and obj.prop. Only recently did I realize the subtle difference between named and unnamed functions.
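To make these points concrete, here is a small sketch of my own (the function and variable names are just illustrative) showing a persistent closure, the == versus === difference, and the two ways of reading a property:

```javascript
// Persistent closure: the returned function keeps its parent's state alive.
function makeCounter() {
  let count = 0;                 // captured by the closure below
  return function () {           // an unnamed (anonymous) function
    count += 1;
    return count;
  };
}

const next = makeCounter();
next();                          // returns 1
next();                          // returns 2; `count` persisted between calls

// == coerces types before comparing, === does not:
console.log(0 == "");            // true  ("" coerces to the number 0)
console.log(0 === "");           // false (different types, no coercion)

// obj["prop"] and obj.prop read the same property, but the bracket form
// also works for keys that are not valid identifiers:
const obj = { prop: 1, "two words": 2 };
console.log(obj["prop"] === obj.prop);  // true; obj["two words"] has no dot form
```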

Not long ago I took a look at Lua, which some people praise. After a few steps of an online tutorial, I learned that variables which are assigned to without being explicitly declared are global by default. Why would anybody design a language that does something like that? Why do we still see new languages with such features? (Lua itself is not new.)

You might ask: what’s wrong with that? Well, when you write a program, you make mistakes. Some mistakes are quickly caught by the parser, but many subtle ones are not. If you forget to declare a variable inside a function in JavaScript or Lua and you assign to it, the variable becomes global. It may overwrite an existing global variable, leak your local state, hold unreleased memory until the end of the program, or even be prone to race conditions if you invoke the function from multiple threads at once. If you are not the only person working on the project, the probability of this happening is even bigger.

My point is that every language feature which introduces uncertainty, or has some other kind of drawback, contributes directly to bugs and increases the amount of time people have to spend making a program stable, or even making it work at all.

The same person who claimed that there should be only two languages dismissed features that promote good style, such as many of the features C++ has over C, like RAII or exceptions, which reduce the number of lines one has to write, the number of places to modify, and the potential number of bugs. That person was admittedly known for producing stable code, even if it was sometimes convoluted, and it was not easy to find bugs in it. But here is the thing: one swallow does not make a spring. All people make mistakes, some more, some less. If a language feature promotes bugs, many programmers will suffer because of it.

So there are language features we don’t want, which we try to avoid. But what about features which are missing?

The basic purpose of computers’ existence is to do repetitive tasks so that humans can do harder ones. This is why we don’t have to solve difficult equations by hand anymore, and why we don’t program in machine code; computers do these and many other things for us.

I recently watched Bret Victor’s presentation, and he asked a very good question: why the heck do we have to check function arguments for correctness, over and over and OVER again? When you write a function, you are supposed to check the arguments first. When you interview a potential new employee, the first thing you look at in their code is whether they checked the arguments. But isn’t this what computers are for? So why are we still doing the computers’ job?
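Here is a sketch of the kind of repetition meant here, plus one possible way to mechanize it (the `checked` wrapper and all names are hypothetical, my own illustration, not anything from the talk):

```javascript
// The same defensive checks, repeated by hand at the top of every function --
// exactly the mechanical work a language or tool could do for us:
function divide(a, b) {
  if (typeof a !== "number" || typeof b !== "number") {
    throw new TypeError("divide: both arguments must be numbers");
  }
  if (b === 0) {
    throw new RangeError("divide: division by zero");
  }
  return a / b;
}

// One way to mechanize it: a hypothetical `checked` wrapper that applies
// validators, so each function states its contract once instead of
// re-implementing the checks inline.
function checked(validators, fn) {
  return function (...args) {
    validators.forEach((check, i) => check(args[i], i));
    return fn(...args);
  };
}

const isNumber = (x, i) => {
  if (typeof x !== "number") throw new TypeError(`argument ${i} must be a number`);
};

const add = checked([isNumber, isNumber], (a, b) => a + b);
```

A language could go further and do this from declared types or contracts, with no runtime wrapper at all; that is the direction the question points at.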

How many undiscovered features are still waiting to be added to new languages, to help us write software in a better way?

Categories: Computing