Archive
Game consoles
High-end game consoles are past their best days. The top consoles from Microsoft and Sony haven’t been refreshed in 6-7 years. Meanwhile the PC platform has kept improving, and Nintendo has shown with the Wii that a console doesn’t need top hardware to be popular. In recent years console manufacturers have faced a new challenge – the iOS and Android platforms introduced casual gaming and started eroding the console market.
The nice thing about iOS and Android is that games for these platforms are super cheap. Spending a few bucks a month on a few games is not a big deal; the household budget will certainly not notice it. More sophisticated games cost $5+, which is still not a big deal. For a small fraction of the cost of one console game, one can purchase several good-quality games and play them anywhere, not tethered to a TV.
Nintendo has already released a new version of their console – the Wii U. However, it seems that the new console doesn’t sell as well as they anticipated. Sony and Microsoft are expected to release new versions of their consoles later this year. Will they enjoy better sales, or will they face problems similar to Nintendo’s?
Well, the Android ecosystem isn’t sleeping either. Ouya is one example of an Android-based console; it was an overwhelming success on Kickstarter last year, confirming that this is what users want. Ouya will certainly steal more market share from the big guns.
I anticipate that Sony’s, Microsoft’s and Nintendo’s consoles will face really tough competition. My advice for them is to jump on the Android bandwagon, otherwise they may share the same fate as Nokia.
Do you think the high-end consoles will survive?
Package management on Ubuntu, Linux Mint, Debian, etc.
One area in which Linux shines compared to, let’s say, Windows, is how the OS is put together. An entire working Linux distribution is just a set of installed packages. Every file in the system outside of the user’s home directory belongs to some package. There are no mysterious directories containing lots of files of questionable origin and purpose. One can remove any unwanted packages which came pre-installed. In fact, one can even install the whole OS from scratch onto a clean drive, package by package.
For years I’ve been using Gentoo Linux, which distributes packages in source code form, so installing any package involves compiling it from source. The compilation itself is automatic; the user only needs to choose which packages to install, like on any other distribution. When installing Gentoo Linux, one can choose to start from a pre-installed base, or start from scratch and install individual packages one by one, including the kernel, the C library and all the basic packages.
This approach is great when you want to learn how Linux works and how it is built. Thanks to this, Gentoo Linux is very configurable, since you can choose dependencies and features for every package. I once leveraged this to build a fully functional mini Linux OS which occupied 20 MB and booted in 3 seconds.
Gentoo Linux comes with a very good package management system. At the core there is the emerge tool, which is used to install and uninstall packages. There is also an optional tool called equery, which can provide various information about the installed packages.
Not long ago I switched my desktop to Linux Mint. Linux Mint is either Debian-based or Ubuntu-based, depending on the flavor. Ubuntu leverages all the tools and structure from Debian, although its packages are often not compatible with Debian.
The immediate problem I faced was how to manage the packages installed on my system. It turns out that on Debian and all its derivatives, package management is more complicated than on Gentoo Linux. There are two tools available which provide similar functionality. The first one is dpkg, the primary package management tool on Debian. Unfortunately dpkg is not that easy to use and is missing several features, so there is another tool called Debian APT, which is in fact a conglomerate of several separate tools, such as apt-get. On top of that there are graphical tools, such as aptitude or Synaptic, which try to make package management easier, although they lack the more sophisticated functionality. Overall, package management on Debian feels like an afterthought: the authors of dpkg apparently did not intend to address its shortcomings, so other teams kept creating other tools, which have shortcomings of their own.
To install new packages I like to use Software Manager on Linux Mint. This is the best graphical package management tool I’ve ever used on Linux. I just type a portion of the name of the package I want to install, and it gives me a list of all packages matching this name, so I click on the one I was looking for and install it. I used Synaptic before on Ubuntu, but Software Manager is easier to use and less confusing.
However, to acquire any kind of information about packages installed on my system, or to uninstall packages, I use command line tools. It’s not because I am used to this approach from Gentoo Linux, there is simply no better way.
Here is how to perform various package management tasks on Debian-derived Linux distributions like Ubuntu or Linux Mint.
Install a package
sudo apt-get install $PACKAGE_NAME
List installed packages, optionally matching a pattern
dpkg --get-selections [$PACKAGE_PATTERN]
dpkg -l [$PACKAGE_PATTERN]
List files installed by a package
dpkg -L $PACKAGE_NAME
Find which package a file belongs to
dpkg -S $FILE
Show a package’s dependencies
apt-cache depends $PACKAGE_NAME
Uninstall a package together with its configuration files
sudo dpkg --purge $PACKAGE_NAME
sudo apt-get remove --purge $PACKAGE_NAME
Uninstall automatically installed packages which are no longer needed
sudo apt-get autoremove --purge
Show all automatically or manually installed packages
apt-mark showauto
apt-mark showmanual
Mark a package as automatically or manually installed
sudo apt-mark auto $PACKAGE_NAME
sudo apt-mark manual $PACKAGE_NAME
Update all packages in the system
sudo apt-get update
sudo aptitude safe-upgrade
Remove packages which are no longer needed
sudo apt autoremove
Update GRUB configuration
sudo update-grub
Multithreading in scripting languages
Scripting languages and multithreading don’t go together. Sure, many scripting languages implement something which looks and works like threads, but in the end these are usually fake threads, which work like a single CPU thread and run on a single CPU core. Fake threads are of course useful, but they cannot leverage the full power of a modern CPU. Or GPU.
Today CPUs have many cores and GPUs have hundreds of little “hardware threads”. Typically a GPU has multiple cores, each of which can execute multiple threads simultaneously.
All this computing power cannot be harnessed in general purpose scripting languages. This is because implementing proper multithreading support in scripting languages is very difficult. At some point Mozilla dropped multithreading support in their JS engine as they thought multithreading in JavaScript was not needed at all and the complicated multithreading implementation was a drag on the engine’s robustness.
What makes many scripting languages useful is dynamic typing. Variables don’t have predetermined types; the types of the objects they reference can change over time. But from the CPU’s point of view, objects in scripting languages are complex, and it is difficult to make them modifiable atomically with regard to other threads.
However, it would be great to use a general purpose scripting language and be able to leverage multiple CPU cores as well as GPUs. Scripting languages are easy to use – this is their purpose – so they are perfect for simple tasks or prototyping.
Options
There are three general approaches to creating a general-purpose scripting language with support for robust multithreading:
1. Making the interpreter/engine of a scripting language thread-safe.
This is certainly doable, but in effect it would make the interpreter very slow. Python has been criticized for years for being slow even without real multithreading support.
2. Designing a new scripting language specifically for multithreading.
This is an interesting option. Imperative and functional languages have a bigger potential for this approach. It is very difficult to design a new programming language which will be easy to use, though.
3. Modifying an existing language and applying some limitations for multithreaded operation.
This is something in between the other two options, and the most promising one. The idea is to not sacrifice too much of a language’s flexibility but still provide robust multithreading support.
All three approaches have to take into account several common limitations.
Variables
The most common classes/kinds of variables on which functions operate are: locals, arguments, closures and globals.
Local variables which are not captured as closures by inner functions are always thread-safe, because they are local to the executing function’s context.
This can be complicated by coroutines – functions which preserve their state across multiple calls. Local variables in a coroutine are no longer thread-safe if two threads call the coroutine simultaneously.
Function arguments are something between closures and locals from the programmer’s perspective. An object passed from the caller to the callee is accessible to both. If the callee is used as a thread function, the arguments are shared between two threads.
Closures share some similarities with function arguments, although they are in a way local variables to the function they belong to. Multiple threads may get spawned using a local function which has access to the closures. This way multiple threads may be trying to modify the closures.
Globals are the most susceptible to manipulation from multiple threads. Globals are not recommended in general, yet they are typically more pervasive in scripting languages, especially in languages where functions are first-class objects and share the global namespace with other objects.
Immutability (or constness) is a very useful trait for variables. Immutable variables, or rather their immutable values, are thread-safe: no thread can modify them, so all threads can safely read them simultaneously. Immutability is therefore the ultimate key to thread safety. Unfortunately most scripting languages do not support explicit immutability – it is difficult to impose and control in a dynamically typed language, where attempts to modify immutable objects would just become another source of unexpected exceptions.
Possible solution
An example possible solution based on option 3 from the above list would be to leverage an existing language model but limit data exchange between threads. If threads cannot exchange data beyond special facilities like mailboxes, the risk of deadlocks or data races is low. It would work as follows.
All objects would have an additional binary state attribute: each object would be either unique or shared.
A unique object is an object which is only accessible to the current thread and the thread can do anything with the object.
A shared object is an object which is accessible to multiple threads. A shared object is immutable and cannot be modified by any thread.
And here is how this new state would be used:
- When a program or script is started, all objects are marked as unique or shared, whichever is more convenient in the long run. In languages which have explicit immutability, immutable objects would be marked as shared; otherwise all new objects are marked as unique.
- When a new thread is spawned, first all objects accessible to this thread are marked as shared. This is a deep operation and applies to all children of the global object and their children and so on. It also applies to the locals of the current function and all closures accessible to the current thread, as well as all saved coroutine state (continuations). Both the parent and the child thread proceed as usual once the child thread is spawned.
- Because shared objects are immutable, whenever a thread attempts to modify a shared object, a shallow copy of the object being modified is created, and that shallow copy is modified. The new copy is marked as unique.
Critique
There are two problems with the above approach. First of all, spawning a new thread is quite expensive, because it requires walking all data visible to the parent thread and marking it as shared.
Secondly, the operation of shallow-copying shared objects to make them unique (mutable) again is non-trivial, because all other objects which reference them would have to be modified in the same manner as well.
This second problem could be avoided if objects were always referenced by handles, with the actual object pointers stored in a big per-thread array to which the handle is an index. Only the object under the handle would be copied and made unique; other threads wouldn’t notice the change.
The handle/array approach would also reduce the impact of the first problem, because instead of walking a tree of objects the interpreter would only have to walk a linear array.
The handle/array approach would have a negative impact on performance of modern engines where raw pointers are used internally to reference objects.
Alternative approach
Another approach would be to use a thread identifier to indicate which thread owns a particular object. If the thread identifier stored in an object equals the current thread’s identifier, the object is unique and mutable and can be modified by this thread. If the identifier differs from the current thread’s identifier, the object is considered shared and therefore immutable.
When a thread spawned another thread, the parent thread would be assigned a new identifier. This way all objects belonging to the parent would automatically be marked as shared, and the child thread would inherit them directly without any complicated operations.
Now when either the parent or the child thread tried to modify such an object, it would still have to make a shallow copy. It goes without saying that the new copy would be marked with the modifying thread’s identifier, because all objects created by a particular thread are always marked with the identifier of their creator.
With this approach objects would still have to be referenced by handles. Therefore a copy of the handle space would have to be made for each thread when a thread is spawned.
Summary – impact on the interpreter/engine
- All object handles would have to be translated to access the actual objects.
- To mitigate the impact of object handles, local objects in functions could still be held by pointer instead of by handle.
- This would be OK even if callees spawned threads which had access to the local variables, because spawning a thread would mark those objects as shared, so the local objects would have to be replaced anyway when the parent thread later modified them.
- However, all handles would still have to be kept up to date for closures as well as in coroutines.
- Global objects and properties of local objects would still have to be translated.
- Object refinement would remain the same. Reading object properties would be unaffected.
- Object modification would involve checking object owning thread identifier and creating a shallow copy of modified shared objects for the current thread.
- Each thread would have its own handle space.
- Creating a new thread would involve copying handle space from the parent thread.
Request for comments
Do you have any ideas how to further mitigate the penalties of using object handles?
Or can you think of a better general approach to the problem of multithreading in scripting languages?
Alternative point of view on C++11
Everybody loves C++11. What’s there not to like? In fact everybody I know wants to switch to C++11 at the first opportunity, including me. This is because C++11 is a better language than C++98. Or is it?
John Sonmez argues in his post that besides all the great new features which make C++11 feel like a new language, there is one big problem with it – it’s HUGE!
I agree with John’s statement, C++ has grown big. It’s been hard to learn and master. It’s even more complex now. I’ve been learning C++ for many, many years, and it still surprises me sometimes, and I haven’t started using C++11 in everyday code yet.
There are lots of very nice features in C++, but there are also lots of features which are tricky. There are features which invite bugs. Sure, friends, multiple inheritance and even goto all have their place, but in most cases they will make someone miserable in the long run. And it’s not about “don’t like them, so don’t use them”. It’s about all those inexperienced programmers who stumble upon them and assume they are a good idea to use.
One possibility to improve this state of affairs could be to remove, limit or forbid certain features. Subsetting is discussed every now and then. As good a solution as it may sound, it won’t solve all the problems. Some suggest the problem lies in C++’s compatibility with C, and subsetting definitely won’t solve that, as the remaining features will need to stay unchanged.
I suspect Andrei Alexandrescu would say something like: “Don’t like C++’s complexity and C compatibility? Switch to D”. Yes, there’s that.
C++ just can’t break backwards compatibility. This is why new languages are created. One day some programmer or group will gather all the best ideas from C++ and create a new language, which will be simpler and will avoid many pitfalls of C++.
Until that happens, I will happily look forward to using C++11 in my projects.
Photography
I recently read an article about daguerreotypes. This was the first commercially available photography technology, dating back to the 19th century. What struck me was that these first captured images had good composition! No wonder: composition was developed by painters long before photography was even remotely feasible.
Today photography is accessible to everyone. Most often people take photos with their pocket computers, commonly referred to as phones. The photos taken with these mobile devices are of very poor quality, not only technically, due to low-quality lenses and sensors, but also because the authors of these photos know nothing about composition, lighting etc. Photography is available to the masses, not only to the talented ones. The richer ones spend money on digital SLR cameras, but many of them leave their DSLRs in the closet at home, considering them too bulky and heavy, and turn back to the cheap pocket devices.
If you are wondering what I mean, go over the photos of your friends on Facebook. Unless you only know professional photographers, I am sure you will find plenty of gems, such as a photo of a man with his face in the middle, legs cut off, and the upper half of the photo containing only the ceiling.
One could argue that this is normal when you make any technology available to the average Joe. But it does not have to be this way. Today’s pocket computers are getting more sophisticated and powerful. They are capable of analyzing images taken with their cameras. The next step is to improve the camera software. Today’s camera software on mobile devices has sophisticated image processing algorithms to convert the ultra-poor quality data from the cheap sensors into photos. But the authors of this software need to push it to the next level.
The desktop entry-level photo editing software, such as iPhoto or Picasa, is already capable of recognizing faces in photos for the purpose of cataloging them. These algorithms need to be merged with the camera software. Then we need to add algorithms which will recognize aspects such as composition and aid the casual photographer in improving their photos.
I expect that future generations of digital cameras in pocket computers will suggest to the users how to improve their photos using simple means. If the person decides to shoot a bad photo anyway, the software will crop it automatically giving the option to restore the original as the last resort. This will improve the quality of most photos which are shown to us.
Multithreading
Recently, when reviewing a piece of code which was going to be used in a multithreaded environment, I came to the conclusion that writing thread-safe code is twice as difficult as writing regular single-threaded code.
Concurrency adds another dimension to programs. A function in a multi-threaded program must be aware that other pieces of code will be executing at the same time. What happens when two threads call this function simultaneously? What happens if other functions start changing the data this function uses?
Not only does the code have to be written to make sure that nobody messes it up during execution, but the lifetime of resources must also be considered. The resources used by a thread must be allocated and freed even more diligently than in single-threaded programs. Threads are resources too: while it’s easy to spawn them, stopping them must also be planned.
Here are a few tips for those new to the world of multiple threads of execution. There are exceptions to every “rule” described below, so always stay alert!
- Use mutexes to guard resources which can be modified simultaneously by multiple threads.
- Reading is a thread-safe operation, unless someone can modify the data you’re reading. Data which is initialized once before any threads are created or const data does not have to be guarded – unless multiple threads can attempt to initialize it.
- Use as few mutexes as possible to reduce the risk of deadlocks.
- When using multiple mutexes to lock multiple resources, always lock mutexes in the same order and unlock them in reverse order to avoid deadlocks.
- Prefer lock-free constructs. They usually rely on atomic operations. Creating a good lock-free construct can be quite tricky. If multiple atomic operations are needed, maybe it’s time to use a mutex – multiple atomic operations can lead to race conditions.
- A good design pattern applicable to many situations is a queue with worker threads. Such a queue can be designed to be fed by either one or many threads, which keep adding elements to its tail. Worker threads keep retrieving items from the head of the queue. In some situations a lock-free queue is useful; in others it’s good to have a semaphore to indicate to worker threads that something is in the queue and a mutex to guard the queue’s guts. Such a queue is quite easy to write and reasonably safe.
- Alternatively you can think of such a queue as a messaging mechanism, where threads can exchange information using messages. Software which relies on message passing instead of sharing data between threads statistically has a lower chance of defects.
- Another relatively safe construct is the barrier. In a model with barriers you create multiple seemingly identical threads, which occasionally wait on a barrier to synchronize with each other. All threads in a group must stop on a barrier; only then can they continue execution. The programming model based on barriers can be found in CUDA or OpenCL and is relatively safe, although deadlocks may occur if threads have a way of avoiding barriers.
- Group resources into classes and guard them with a single mutex. Have all the public functions of such classes lock the mutex.
- In general highly cohesive modules are easier to maintain in multi-threaded environments.
- For every function make sure that it is safe to call it simultaneously from multiple threads.
- For every class make sure that it is safe to call any number and combination of the class’s functions simultaneously from multiple threads.
- Make all of a class’s member variables private. This is generally encouraged in C++, but it’s even more important for thread safety.
- Make sure that functions you call are thread-safe.
- Avoid global variables. You can get away with them in single-threaded programs, but they can and will mess things up severely in multi-threaded code.
- Avoid static variables. They are just a form of globals and will also lead to problems.
- Avoid nested mutexes. Some platforms allow nested mutex locking, such as Windows’ critical sections or recursive POSIX mutexes. The problem is that needing a recursive mutex is a symptom of a bad multi-threaded design. There are exceptions to this, as always, but usually such code will sooner or later get out of hand and you will spend a lot of time debugging spurious failures. Needing a recursive mutex indicates that there is no single, clean way to enter a function or section of code, and users of the code may get into situations which you failed to predict.
- Avoid passing function arguments and local variables to other threads by pointer or reference. This way you can always rely on them being thread-safe.
- When passing data between threads, design well when that data will be created and destroyed. Make sure it doesn’t leak. Make sure it isn’t accessed after it’s destroyed.
- When writing C or C++ code, avoid the volatile keyword, unless you use it for hardware resources and you know exactly what you are doing. The volatile keyword has nothing to do with thread safety.
If you are writing single-threaded code, keep in mind that some day you or somebody else may need to use it in a multi-threaded environment. Therefore most of the hints above apply to some extent to single-threaded programs as well!
Rewrite from scratch?
One day we decided to import a piece of code of significant size from another project. It made perfect sense and allowed us to avoid spending months writing what others had already written. This happens quite often in many projects, because code reuse saves time and money.
Days after making it work, we discovered a saddening truth: lots of resource leaks! The source project was sloppy with resources, because it did not matter much in that particular project, and perhaps they didn’t even have the right tools to know the leaks existed.
A few hours spent with a leak checker revealed that even though the code was written in C++, it was written in a C-like fashion, with objects allocated directly with new, stored in raw pointers and never released. Some containers were semi-hand-crafted: they used STL containers underneath, but held raw pointers.
Lots of programmers given such code would scream: “Rewrite from scratch!”. But is it really such a good idea? I don’t think so. I don’t doubt that there are projects which absolutely need a rewrite, but in many cases refactoring comes to the rescue. In the end refactoring is a less costly alternative. If there are tests for the original code (hopefully!), the approach is to slowly fix the code. In this particular case the first thing is to gradually introduce the right resource allocation and management mechanisms which are not prone to leaking.
The size of the task of importing foreign code is hard to estimate. One of the reasons is that taking code from one project and putting it in another, new environment will make the code behave differently enough to reveal obscure bugs. On the other hand if that code remains maintained and shared between projects then its quality will improve. So everybody can benefit from reusing, and if needed refactoring and fixing existing code.
What’s the magic word?
Apparently cracking an 11-character password with lowercase letters and numbers plus a single symbol takes under 3 weeks if you have a lot of computing power.
If you only have a few powerful computers it will take 54 years.
I think the author forgot to add that it will take 54 years “using today’s technology”, and did not adjust for the fact that computing power doubles every 18 months. This is still true for GPUs, and CPUs may start improving like this again in a few years with manycore architectures. One may argue that we will eventually hit a wall once we get down to a few molecules, but there’s still quantum computing and optical computing to explore, so computers will keep getting faster in the years to come.
It looks like no password is safe.
Most common source of bugs
There is one single source of most bugs. It’s obvious, yet almost nobody talks about it. That source is statistics.
You can call it a stretch, but it’s true. No matter how you classify a bug, whether it’s a race, a leak, whether a programming error, a typo or a misunderstanding, statistics influences whether that bug occurs or not in the first place.
Using malloc/free all over the code or using lots of gotos is not a source of bugs per se. Most occurrences of constructs widely perceived as bug-prone will not produce an incorrect program. Until these constructs meet statistics.
If we put it this way, it sounds hopeless, because statistics quantifies everything around us, every programming technique. But we can use this fact to choose which programming techniques we use or even as far as which programming language we use, to reduce the number of bugs.
Using the above example, if we write a program explicitly calling malloc() and free() to allocate and release memory as needed, we will occasionally forget to call free() and introduce a memory leak in the code. In C++ the simplest solution to that is to never use free() or the delete operator, but to rely on destructors to do their job instead (e.g. the use of std::vector or std::make_shared() in C++11, etc.). This way we will never forget to free memory, the compiler will do this for us.
Basic principles of programming
This post is meant to serve as advice for beginner programmers. If you don’t consider yourself a beginner, read on and check if you agree.
So here are a few basic principles which a programmer should follow to write good code, no matter what the language is.
When I started my adventure with programming back in the fall of 1993, I wish somebody had laid these out for me. There are many good books about the dos and don’ts of programming, such as Effective C++, but I think it’s still difficult to find the simpler, basic principles like the ones below, especially for novice programmers.
These principles apply to most areas of programming. They should be taught at the beginning of programming courses. Unfortunately most programming courses focus on tools, such as programming languages, environments, data structures, etc. but they don’t touch the craft and art of programming.
Copy & paste
When writing one of my first programs in Turbo Pascal, I quickly learned that pasting pieces of copied code around a program leads to a lot of unnecessary work later. Let’s say you have exactly the same piece of code in N places, and that code is meant to do the same thing – or even N places with very similar code doing a very similar thing. One day you will have to change that code slightly or enhance it. Or you will find a bug in it. You will have to find all N places and apply the same change N times. There is a high probability that you will miss some of the places – the larger the N, the higher the probability. This leads to either not fixing existing bugs entirely or introducing new ones.
The takeaway from this is that multiple copies of the same pieces of code should be avoided. Similar code should be collapsed into one function and that function invoked wherever it’s needed. Avoid copy & paste.
Code clarity
Variables, functions, members etc. should be named after what they do. Most variables should be nouns. Most functions should be verbs (e.g. variable num_args vs. function count_args()). The names should be short, but long enough to give a clue to the reader what they mean. There are some schools which teach that variables should never have long names. This is OK, but in some circumstances it’s necessary to give longer names so that the reader can easily understand what is happening in the code.
There is another term for using meaningful identifier names: self-documenting code. When clear names are used, there is less need for comments and other kinds of documentation.
Comments should be used whenever the names of variables, constructs etc. are not sufficient to understand what’s going on. This applies especially to more complicated pieces of code, algorithms, etc.
But why bother with all of this? One word: maintenance. Sometimes a person other than the author has to maintain the code – fix bugs, add new functionality, refactor or reuse. The less time that person needs to spend to understand what’s going on, the better. Often even the author may need to return to the code he’s written and may not remember why certain decisions were made.
Language constructs which promote bugs
Every language has constructs which promote bugs. Such constructs should be avoided. They may be useful or necessary in certain situations, but in these situations they are the necessary evil. In most other cases we’re better off without them.
Examples include: goto in C++, a bit less in C, C-style macros in C++, == operator in JavaScript.
In general any language feature which has gotchas, which may behave in an unexpected way (e.g. friend or protected in C++), should be avoided, unless specifically beneficial in a certain situation. When used, precise comments should be added describing the use case.
The unfortunate thing is that until you know a particular language really well, you don’t know what these tricky constructs are. They are usually not advertised in the language manuals. Sometimes there are books which help to learn about why particular features are dangerous. So the best advice one could give here is: stay alert!
Diligence vs. ignorance
Or I should say: willful ignorance. Programming has become ubiquitous and some languages like JavaScript have a very low barrier to entry. That’s good, but it also comes with disadvantages, such as programmers not putting enough thought into what they are doing. I’ve seen too much mindlessly written code in my career. Some programmers simply assume that they are writing throwaway code and don’t care about its quality. Others just implement the first solution which comes to their mind, without weighing the advantages and disadvantages of that solution. It’s as if they only wanted to finish their current task and move on, as if the code they write was going to be thrown away right after being written, or as if they were going to quit soon and didn’t care who would maintain it. But code tends to outlive the task, and somebody has to maintain or extend it. This leads to the same piece of code being reimplemented over and over again, which is a huge waste. If the first implementor gave enough thought to what he was doing, the original piece of code could be used for years, perhaps even reused.
The advice here is: be diligent. Learn about the environment surrounding the code you write (i.e. callers, callees, etc.). Learn about all the use cases. Try to think of all things that your approach may break. It does take experience to write good code, but it also takes common sense.
Code reviews
It’s good to have an additional pair of eyes review your code. If you’re writing code for fun, have a friend take a look at it. If you’re working for a company, have a coworker review your code and review his code in return. I don’t know why code reviews are not customary at many companies. Reviews take only a small amount of time, but they have the big benefit of unifying the code to ease future maintenance, promoting coding style conventions, promoting good behaviors and suppressing bad ones, etc. It’s even more beneficial if somebody more experienced reviews your code – you will learn from him.
Reviews are not an ultimate solution – they will not find all bugs, and in fact many bugs will slip through – but they help improve code quality in the long run.