Nanotechnology
Nanotechnology is one of the technologies of the future that we will develop and embrace. The idea was pioneered by Eric Drexler, who in his book “Engines of Creation” described the potential and the many uses of this technology.
The whole idea centers on the ability to manipulate individual atoms for various purposes: creating new materials, building entire devices from scratch with unparalleled precision, or modifying molecules in living organisms, including repairing human bodies.
In recent years we have seen incremental progress in the ability to manipulate individual atoms, with or without the help of carbon nanotubes. However, we are still very far from mastering the technology; one or more breakthroughs are still needed.
There is a lot of debate around nanotechnology, including its feasibility and its dangers. But nanotechnology is already all around us – we and all other lifeforms are its creation. Nanoscale machinery lies at the basis of every living cell and makes up the nanomechanical parts of all organelles.
When nanotechnology finally arrives, it will change our world more than cars or computers did. We will be able to manufacture goods at home: we will have a pot or a chamber filled with a medium, download designs from the Internet, throw in raw materials such as dirt, wait, and take out a TV or parts of a car to assemble. Just as we have paid and free software today, we will eventually have paid and free designs of devices to assemble at home.
Carbon and silicon will become the most commonly used materials. People will surely still want wood and other traditional materials, but those will be more expensive and less durable.
A lot of people will lose low-paid jobs, especially in manufacturing and distribution, but more intellectual jobs will open up instead. After all, we are good at thinking; we should let robots and computers do the mechanical work. There will still be demand for food, but food production has already been automated to some extent.
Language drawbacks
The longer I am in the business of writing code (over 11 years and counting) the more nuisances I see in the set of technologies we use to write software.
When you start programming, you learn your programming language for years. Then you learn another language, which has some similarities to the first one but also new features of its own. And then you learn even more languages (assuming you are that kind of person). There are features which most languages have, like ways to declare variables, invoke subroutines or create loops, and there are features which only some groups of languages share, like closures, coroutines, classes or templates.
Eventually you start to realize that there are language features which are useful and promote good ways of programming, i.e. improve readability and maintainability and reduce the number of bugs, but there are also features which are best avoided – features which encourage bad style that leads to bugs or an unmaintainable mess.
I could list a dozen or two such bad features in C++, such as macros, goto, protected, etc. I'm using C++ here as an example; every language has such features. In the case of C++ they are legacy and hard to remove. Perhaps the compiler could have a mode where these features are turned off completely. Perhaps the standards committee could even propose such a set of obsolete or deprecated features. Last month I had an opportunity to ask Bjarne Stroustrup, the creator of C++, what his opinion is about deprecating some features; his response was that despite former attempts to create a subset of C++, it is hard to do, because everybody has their own set of favorite features.
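To illustrate why macros are high on that list, here is a small sketch of a classic pitfall (my own example, not anything from the conversation):

#include <iostream>

// Classic macro pitfall: arguments are substituted textually,
// so a side-effecting argument may be evaluated more than once.
#define MAX(a, b) ((a) > (b) ? (a) : (b))

// The modern alternative is type-checked and evaluates each argument exactly once.
template <typename T>
T max_of(T a, T b) { return (a > b) ? a : b; }

int main() {
    int i = 0;
    int j = MAX(i++, -1);     // i++ runs twice: once in the comparison, once in the chosen branch
    std::cout << i << ' ' << j << '\n';   // prints: 2 1
    int k = max_of(i++, -1);  // well-defined: i is incremented exactly once
    std::cout << i << ' ' << k << '\n';   // prints: 3 2
}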
There are people who claim there should be only two languages in existence, such as C and Python (I knew such a person). Yet these languages, like any others, carry their own sets of problematic features.
I argue that we actually need more programming languages, and Neil McAllister seems to have nailed it. Because we can't fix existing languages, we need new languages that build upon the experience of existing ones and avoid their mistakes.
Let's take JavaScript. This language has many very useful features, such as persistent closures (you can return a lambda function and the state of the closure from its parent will be preserved), and it gave birth to the JSON format. But it also has as many, if not more, terrible pitfalls, such as default globals (undeclared variables are global by default), semicolon insertion (semicolons are automatically added wherever they fit the parser, even if you don't want them), type coercion (it's hard to predict what type your variables will have in each expression) and so on. JavaScript has a very low barrier to entry – almost anybody can write code in JavaScript – but there are relatively few people who know what really happens in their JS programs; e.g. most people don't know the difference between the == and === operators, or the difference between obj["prop"] and obj.prop. Only recently did I realize the subtle difference between named and anonymous functions.
Not long ago I took a look at Lua, praised by some. After a few steps of an online tutorial I learned that variables which are assigned without being explicitly declared are global by default. Why would anybody create a language which does something like that? Why do we still see new languages with such features? (Lua is not new.)
You might ask, what's wrong with that? Well, when you write a program, you make mistakes. Some mistakes are quickly caught by the parser, but many subtle ones are not. If you forget to declare a variable inside a function in JavaScript or Lua and you assign to it, the variable will be global. It may overwrite an existing global variable, it may leak your local state or hold unreleased memory until the end of the program, or it may even be prone to race conditions if you invoke the function from multiple threads at once. If you are not the only person working on a project, the probability of this happening is even greater.
My point is that every language feature which introduces uncertainty, or has some other kind of drawback, contributes directly to bugs and increases the amount of time people have to spend making a program stable, or even making it work at all.
The same person who claimed that there should be only two languages was dismissive of features which promote good style, such as many of the features that C++ has over C, like RAII or exceptions, which reduce the number of lines of code one has to write, the number of places to modify and the potential number of bugs. That person was admittedly known for producing stable code, even if it was sometimes convoluted, and it was not easy to find bugs in his code. But here is the thing: one swallow does not make a spring. All people make mistakes, some more, some less. If a language feature promotes bugs, many programmers will suffer because of that feature.
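To make the RAII point concrete, here is a minimal sketch (my own example, not that person's code). The resource is released in the destructor, so every early return and every exception path gets cleanup for free, instead of each path needing its own fclose call:

#include <cstdio>
#include <stdexcept>

// A tiny RAII wrapper around a C file handle.
class File {
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("cannot open file");
    }
    ~File() { std::fclose(f_); }           // single place where the resource is released
    File(const File&) = delete;
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

int count_lines(const char* path) {
    File file(path);                        // acquired here...
    int lines = 0;
    for (int c; (c = std::fgetc(file.get())) != EOF; )
        if (c == '\n') ++lines;
    return lines;                           // ...released automatically here, or on any exception
}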
So there are language features we don’t want, which we try to avoid. But what about features which are missing?
The basic purpose of a computer's existence is to do repetitive tasks so that humans can do harder ones. This is why we don't have to solve difficult equations by hand anymore, and this is why we don't program in machine code; computers do these and many other things for us.
I recently watched Bret Victor's presentation, and he asked a very cool question: why the heck do we have to check function arguments for correctness, over and over and OVER again? When you write a function, you're supposed to check the arguments first. When you are interviewing a potential new employee, the first thing you look at in their code is whether they checked the arguments. But isn't this what computers are for? So why are we still doing the computers' job?
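One way to stop writing the same checks by hand – a minimal sketch in C++, with names of my own invention rather than anything from the talk – is to encode the precondition in a type, so it is verified once at the boundary and every function accepting that type gets the guarantee for free:

#include <cassert>
#include <cstddef>

// The constructor is the single place where the precondition is checked.
struct NonEmptyIntSpan {
    const int* data;
    std::size_t size;

    NonEmptyIntSpan(const int* d, std::size_t n) : data(d), size(n) {
        assert(d != nullptr && n > 0);  // checked once, at construction
    }
};

// No argument checking needed here: the type already guarantees it.
int first_element(NonEmptyIntSpan s) {
    return s.data[0];
}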
How many undiscovered features are still waiting to be added to new languages, to help us write software in a better way?
C++11
A few days ago I attended the Going Native 2012 conference. I captured my thoughts while they were fresh, sitting on a plane going back home a few hours after the conference; later I added a summary of the current state of C++ and where it is possibly going.
The Conference
The Going Native 2012 conference was devoted to the new C++ standard, C++11. The conference was organized by Microsoft at their main campus in Redmond. The name implies a switch from languages which rely on virtual machines, like Java or the .NET languages, back to native languages which compile to binary code executed directly by the target architecture.
It was great fun and a pleasure to attend the conference. A number of distinguished speakers directly involved in the creation and standardization of C++ were present, the talks were very interesting and of great value, and it was all well organized. I very much enjoyed being able to talk to the speakers in person in between the talks, as well as having photos taken with them.
Everything was streamed live over the Internet and is also available for replay, for free, which adds value to the conference, because everybody interested in the new standard can benefit from it.
The attendance fee was very low, so I was surprised that I was still able to sign up a month and a half after Going Native 2012 was announced. It turned out that many people signed up only a few weeks before it took place. That was surprising, considering the low fee and the high-quality content that could be expected from the speakers.
Some speculate that Microsoft put on this conference with such generosity to show that they are not abandoning C++, as many had thought, given that Microsoft's compiler in its current state lags far behind in support for the new standard. To fix this, they announced that they are working hard on improving support for the new standard in the upcoming version of their compiler. They are also planning more frequent releases to bring new features to programmers sooner. Their implementation of the standard library is also going to be complete in the upcoming release.
Other good things about the conference: good food (nothing to complain about from my side), an excellent and energizing atmosphere, and the ability to meet various people and talk to them about their experiences with the language. I also came back with some splendid trophies.
The talks spanned over two days. On the evening of day one there was a dinner in a billiard club in downtown Bellevue.
The Speakers and the talks
Most of all I enjoyed the opening keynote by Bjarne Stroustrup, the creator of the C++ language. To me his talk wonderfully explained the gist of programming in C++ – the emphasis on style. It was very down to earth and applicable to all C++ code, especially industrial code. I have the privilege and pleasure of working with great engineers, and what Bjarne touted I see every day in our code reviews. One of the most important things when writing C++ code is to make it clean and understandable to others and to your future self. The less time you have to spend understanding the code, the better the code is. This actually applies to every language, not only C++. C++11 gives programmers new, outstanding tools which will improve code immensely – when used correctly, of course; that's where style comes into play.
Hans Boehm provided insight into the threading capabilities of the new standard library, which he authored. It's great that threads finally made it into the standard library. Hans gave a good introduction to the most important aspects of that new feature.
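For flavor, here is a minimal sketch of the C++11 threading facilities (my own toy example, not code from the talk):

#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        // Each worker runs a lambda on its own thread
        // (output from different workers may interleave).
        workers.emplace_back([i] { std::cout << "worker " << i << " running\n"; });
    }
    for (auto& t : workers) {
        t.join();  // wait for every worker before main exits
    }
}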
Next, Stephan T. Lavavej, a.k.a. STL, gave a good talk about standard library optimizations and a few new language features which are very useful in conjunction with the standard library. STL is a very interesting personality; he is notable for talking very fast and is relatively young compared to the other speakers. Smells like a genius?
Andrei Alexandrescu, known for bold and ingenious use of templates, talked about variadic templates, which greatly simplify certain template use cases and make it easier to write functions that accept a variable number of arguments. In his second talk, on the second day, he discussed a proposal for a future version of C++: static if, a version of "if" which is resolved at compile time and whose branches are compiled on an as-needed basis, much like specialized templates. It improves the usefulness of templates even further. Andrei also has an interesting personality; he is a great joker and showman. His talks are really fun to watch.
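A tiny sketch of the kind of thing variadic templates make possible (my own illustration, not Andrei's code):

#include <iostream>

void print_all() {}  // base case: no arguments left

// One function template that accepts any number of arguments of any printable types.
template <typename First, typename... Rest>
void print_all(const First& first, const Rest&... rest) {
    std::cout << first << ' ';
    print_all(rest...);  // peel off one argument at a time, resolved at compile time
}

int main() {
    print_all(1, 2.5, "three");  // prints: 1 2.5 three
}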
On the second day, Herb Sutter, the chair of the C++ standards committee, talked about Microsoft's Visual C++ compiler, its current state and its future. He also pointed out a number of new C++11 features which will be immensely useful in common, everyday code. In one of his short talks he also mentioned C++ AMP, Microsoft's proposed extension to C++ which allows portions of a C++ program to run on the GPU and to leverage other kinds of parallel hardware. The extension is simple, open for other compilers to adopt, and integrates well with the C++11 language; something like it will likely be added to the standard itself in the future.
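For a rough idea of what C++ AMP code looks like, here is a sketch modeled on Microsoft's published hello-world examples (details of the API may differ):

#include <amp.h>
#include <vector>

// Add 1 to every element of a vector on an accelerator (typically the GPU).
void add_one(std::vector<int>& v) {
    concurrency::array_view<int, 1> av(static_cast<int>(v.size()), v);
    concurrency::parallel_for_each(av.extent,
        [=](concurrency::index<1> idx) restrict(amp) {
            av[idx] += 1;
        });
    av.synchronize();  // copy the results back to the host vector
}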
Chandler Carruth, who is working on tools based on Clang, gave an introduction to the Clang compiler, a relatively new C++ compiler implementation built on the LLVM backend. It is open source and rivals GCC. Among its advantages are real openness – it's available for commercial use – and its modularity. Modularity is especially important, because you can take parts of the compiler and reuse them for various purposes. Some tools being developed on top of Clang include one which automatically inserts the minimal set of #includes, and tools which refactor C++ code even when it is partially hidden behind macros. Another awesome feature of Clang is its diagnostic messages, which not only cleanly point out where a problem lies, even in complicated template code, but also suggest possible solutions.
I was not familiar with Clang or C++ AMP before; these are two really interesting technologies which will likely affect the way we program in C++ in the near future, in a positive way.
Last but not least, Bjarne, together with Andrew Sutton, talked about the history and current state of concepts, a feature which was not accepted into C++11 and which many people miss. Concepts are about specifying intent – constraints on template arguments – when writing templates. The work on concepts is not finished; Bjarne and Andrew are still working on them and are soon going to propose them for addition to a future version of the C++ standard.
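To give a flavor of what concepts are about, here is a minimal sketch in the syntax that eventually landed in a later standard (C++20) – not necessarily the exact syntax Bjarne and Andrew presented:

#include <concepts>

// A named constraint: T must support operator< yielding something convertible to bool.
template <typename T>
concept Ordered = requires(T a, T b) {
    { a < b } -> std::convertible_to<bool>;
};

// The constraint documents intent and gives a clear error message
// when min_of is instantiated with a type that has no operator<.
template <Ordered T>
T min_of(T a, T b) {
    return (b < a) ? b : a;
}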
C++11 introduces a lot of useful features, and the preferred ways of writing C++ programs have changed. It is advantageous to use the new features to produce better code. The new standard therefore made a lot of books obsolete in a way; books which teach C++ should now focus on different features than before. It will probably take 3-5 years for new books to catch up with the standard. Herb promised that the committee will try not to repeat this with the next version of the standard: they will mostly focus on fixes and useful additions, without improvements as enormous as this time.
The standards committee also wants to address the scarcity of good, portable C++ libraries and standard library features which would integrate well with the current standard library. Their current approach is to make it easier for contributors to submit proposals for standard library extensions and to relax the strictness of reviews.
The state of C++11
My takeaway from the conference is that the C++ standard has caught up with modern language features (such as lambdas, closures, threads, etc.). A lot of features which had been missing for a very long time are now available. Among them are tools which significantly improve programming style by moving some tasks into the compiler instead of requiring the programmer to do them explicitly, such as the auto keyword.
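A trivial sketch of the kind of boilerplate auto removes (my own example):

#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> ages{{"Ada", 36}, {"Alan", 41}};

    // C++98: std::map<std::string, int>::iterator it = ages.begin();
    // C++11: the compiler deduces the iterator type for us.
    for (auto it = ages.begin(); it != ages.end(); ++it) {
        std::cout << it->first << ": " << it->second << '\n';
    }
}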
The current compilers implement a subset of C++11 features. In the coming versions of Microsoft Visual C++ and GCC this will be a major subset. Fewer and fewer features are missing. It will probably take about two more years before all major compilers ship with full support for C++11.
Probably even more years will pass before many projects upgrade their compilers to the latest versions and are able to leverage C++11 features. I'm hoping this will happen sooner rather than later, because C++11 really is a better language than C++98, and code written in C++11 will statistically be better.
The future of C++
It is easy to notice the current trends in computing and where C++ is going. Future revisions of the C++ standard will provide even more expressiveness to the programmer and more opportunities for writing better and cleaner code.
The tools are also evolving: we will see faster compilers which optimize code better than today's and issue better diagnostic messages. We will also see more tools built on top of compilers, which will give us more ways to modify, transform and refactor our code, as well as better instrumentation and other ways of detecting bugs.
We can expect more libraries for C++, either standardized or added to the standard library, which will provide useful functionality readily available for use in our programs without the need to dig through the Internet or write our own libraries. Many languages such as C# and Python come with lots and lots of libraries and let the programmer use them effectively in many fields out of the box. C++ will gain similar capabilities.
C++ will likely be able to better leverage the architectures of today and tomorrow. The language and the standard library will include functionality similar to C++ AMP or Thrust. The programmer will be able to leverage vector instructions, multiple CPU cores and heterogeneous architectures (e.g. CPU+GPU) right in the C++ code, without the need for external tools, and the code will just work.
Upgrades
I've been silent this month, but I'm preparing for the highly anticipated release of Diablo 3. I used to be a gamer back in the day but haven't had time to play games in recent years. I'm a fan of the Diablo series; it's been almost 12 years since the second installment. The third was announced almost 5 years ago and has probably been in the making for longer than that. I had a chance to play the demo recently and I can't wait for the final release.
So I blew the dust off my almost six-year-old computer. It has a dual-core 1.8 GHz Opteron, 1 GB of RAM and a GF 6600 graphics card. That's below the game's official spec, so I upgraded the RAM to 2 GB and acquired a GF 460GT graphics card (still in shipment). I'm holding off on the CPU upgrade until I'm sure it's really needed. The official spec recommends a 2.2 GHz CPU, but I have a feeling my current one should do just fine; after all, what matters most should be the graphics card, shouldn't it? All I know for sure is that I can't go higher than 2.6 GHz – that was the fastest Opteron for my motherboard, and those are scarce these days since they were discontinued a long time ago.
I played the demo on a 27″ (I think) monitor and it looked great. So I started thinking about upgrading my 19″ monitor as well, though my wife objected and proposed that I use the TV instead. Well, we'll see how our 50″ plasma deals with games.
On this occasion I switched from Ubuntu to Linux Mint. I got fed up with Ubuntu, since they keep adding more and more cruft and bloat. I got aggravated by them constantly changing image viewers, media players and even the desktop environment. The Linux Mint setup I chose is based on a rolling Debian distribution. By rolling I mean there are no major upgrades; new packages simply become available for upgrade from time to time. This is similar to Gentoo: you can keep the system up to date all the time, if and when you want, and you don't have to depend on major releases. I also switched back to good old XFCE, which is one of the leanest but still usable desktop environments for GNU/Linux.
Not long after, I screwed up my Gentoo-based Linux desktop at work. It came at the wrong moment, of course. So I decided to switch it to Linux Mint as well. It works well so far – with two exceptions: QEMU is slow as hell, and there was no sound! I still haven't figured either out. QEMU is problematic, since I compiled it from source (the one from the package repository was also slow and kept hanging). I don't have a clue how to approach it yet; maybe I'll end up trying to rebuild the kernel. As for sound, the matter is embarrassing (for Linux Mint, of course). I tried everything I could think of and there was still no sound. Finally, in frustration, I wiped out PulseAudio. Now I can play sound from the command line through ALSA, but I still have no sound in Chrome and other apps. Doh!
No wonder the year of the Linux desktop never came! Well, it's not that surprising after all, but Linux did get into our pockets masquerading as Android…
Case-insensitive identifiers
Recently I came across Jeff Atwood's article about the idea of case-insensitive identifiers. I think this is an interesting idea; here's why.
Why have case-sensitive identifiers at all? Function names, variable names, object member names. Having two variables in your program which overlap in any scope and whose names differ only by case is generally a bad idea. To somebody who tries to read and understand the program they are nearly indistinguishable, and most likely one of them is a programming error or a leftover from an older version of the code. It would probably be a good idea for statically typed languages to forbid two identifiers that differ only by case.
Let's take dynamically typed languages, such as Python or JavaScript. One of their advantages over statically typed languages is that they allow a faster development cycle, because less text is needed to write a program, so the source code is more concise. More concise source code is statistically easier to read, review and therefore maintain. However, the disadvantage of dynamically typed languages is that variable references are not checked at compile time but resolved at run time. Hence it is easy for bugs to hide in Python or JS programs – the kind of bugs that can only be detected at run time, in very specific situations.
Let’s consider the following function in Python:
class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def AddVectors(v1, v2):
    return Vector(v1.x + v2.x, v1.y + v2.y)
Most of the time there won’t be a problem with it, but sometimes the caller may pass malformed input (e.g. input read from a wrong file) and the function will raise an exception. That’s the drawback of dynamically typed languages.
But in some code path the programmer may just make a mistake:
class Empty:
    pass

v1 = Empty()
v1.X = 1
v1.Y = 2
v2 = AddVectors(v1, v1)
This program will obviously fail with an exception: AddVectors looks up v1.x and v1.y, while the object only has attributes named X and Y. It is a programming error, but does it have to be?
I argue that no, it should not really be a programming error. This kind of bug may be very annoying if it occurs in a rarely traversed path, and causes rare, unnecessary crashes for the end user.
Because having two variables whose names differ only by case should be avoided anyway – it leads to confusion and therefore bugs – it would actually be useful if identifiers in dynamically typed languages were case-insensitive.
This problem does not directly apply to statically typed languages, because all variable references are resolved at compile time, so the programmer has the opportunity to catch all spelling errors before the program is executed for the first time. Still, it would not hurt if the compiler (say for C++ or Java) disallowed two identifiers differing only by case – this would lead to cleaner and better code.
Are we alone?
Once in a while a theory pops up that questions whether Earth is special for whatever reason. After all we know that our planet is just a speck in the vast depths of the Universe, but as of today we have no proof that life ever existed anywhere beyond Earth.
Is there life anywhere else in the Universe?
The elements which are the basic building blocks of life as we know it are abundant in the Universe. Water is everywhere: we have evidence of it on the Moon, on Mars and on various other planets and moons in our system, not to mention comets. It is relatively easy to create membranes, of which living cells are composed, by taking a mix of chemical compounds and shocking it with an electric current. Give these processes billions of years and single-celled organisms will likely evolve.
Mars may have harbored life in the past. It certainly had liquid water and may still occasionally have it. Within a few decades we may find out whether it did indeed have life.
Europa (a Jovian moon) is thought to have a liquid ocean underneath its icy surface. One day we may find out whether there is anything living in it or not.
So out of the 12-15 major bodies in our Solar System (planets and large moons), there are three or four which harbor, or might once have harbored, life.
In recent years we have learned that at least half of all stars harbor massive planets. And that is just from observations under particular conditions: the orientation of the observed star's planetary system must be right for us to notice the star's movement. We can't detect smaller planets yet. So it is reasonable to assume that most stars have complex planetary systems like ours.
Our Galaxy contains 200-400 billion stars. That means there are probably a trillion planets and moons. 10% of those may have conditions suitable for single-celled life.
So life very likely exists beyond Earth and is abundant in our own Galaxy. In addition, we estimate that the observable Universe contains 100 billion galaxies (10^11), and the total size of the Universe is not known (it may be one or two orders of magnitude bigger). The odds of extraterrestrial life existing are quite high!
Do extraterrestrial civilizations exist?
We estimate that a star like ours lives for about 10 billion years. It took roughly 5 billion years for our civilization to appear on our planet. Multicellular organisms have existed for only 1-1.5 billion years; before that, our planet may not have had the conditions necessary for them – we are not really sure, because most of our planet's surface has been recycled by plate tectonics.
Had the extinction event not occurred 65 million years ago, maybe the dinosaurs would eventually have evolved a civilization? Either way, life on our planet has had at least two chances to evolve a civilization.
If life is so abundant in our Galaxy that there may be 10 billion planets or moons on which life may abound, and many of these bodies may have had billions of years to evolve a civilization, then it becomes quite clear that there may be plenty of planets that will produce a civilization within their lifetime. How many exactly? Millions? Billions? We can't estimate right now for lack of statistical data… but they must be abundant.
After a civilization leaves its planet, life will remain on it and continue evolving. So a single planet may potentially produce more than one civilization.
If extraterrestrial civilizations are so abundant, why haven't they contacted us yet?
This question is also known as the Fermi paradox.
It took 5 billion years for us to appear on Earth. We started putting together something that could be called an early civilization only a mere few thousand years ago. The real transformation started about 200 years ago and has been speeding up ever since, exponentially. The means of communication of 200 years ago are today considered primitive. The means of communication used today will likewise be considered primitive in a few tens or hundreds of years. So the aliens may not even know how to contact us – that's how fast our technology is changing.
Besides, we may not be interesting to contact at all! To them, we are about as interesting to contact as fish are to us – and I'm not saying we are edible to them. We have nothing to offer them, just as we would have nothing to offer humans from 100 years in the future. We are not that different from any other creature living on this planet; we haven't really managed to produce intelligent beings able to survive cosmic conditions and explore the Galaxy. Plus, like other mindless animals, we cause a lot of suffering to ourselves. Our systems – corporations, governments, politicians – are resource-driven (money).
Given how rapidly our technology changes today, in a few tens or hundreds of years we may change significantly. Maybe then we will be worth contacting, or maybe we will discover their existence and be willing and able to contact them ourselves.
Operator precedence
Recently I came across a bug where the author forgot to use parentheses in a conditional expression. The code went like this:
if (AAA &&
    BBB || CCC || DDD)
{
    /* ... */
}
The bug was obvious, because this is what the author really meant:
if (AAA && (BBB || CCC || DDD))
But this is not how the compiler understood it: && binds more tightly than ||, so the original condition parses as (AAA && BBB) || CCC || DDD.
There are so many operators that it's hard to remember their precedence. Not many people remember off the top of their head whether the | (bitwise or) operator has higher or lower precedence than & (bitwise and) or ^ (bitwise xor), let alone the << and >> (bitwise shift) operators, which in C++ are also used as stream operators and which have higher precedence than the other bitwise operators. There are other surprises too, such as the comparison operators having higher precedence than the bitwise and/or/xor operators.
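A small illustration of the comparison-versus-bitwise surprise (my own example):

#include <cstdio>

int main() {
    unsigned flags = 0x6;
    // Intended: check whether the low two bits equal 0x2.
    // Because == binds more tightly than &, this parses as flags & (0x3 == 0x2),
    // i.e. flags & 0, which is always false.
    if (flags & 0x3 == 0x2) {
        std::printf("never printed\n");
    }
    // What was meant:
    if ((flags & 0x3) == 0x2) {
        std::printf("low bits are 0x2\n");
    }
}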
These days it's not uncommon to use more than one language in everyday work, especially with the various scripting languages around. Many languages share the same set of operators, but operator precedence may vary between them; for example, Python's operator precedence is different from C's.
All this results in errors when writing code and creates unnecessary maintenance problems.
There are several groups of operators which obviously have higher precedence than others. For example, the * (multiply) operator has higher precedence than the + (add) operator; we were taught this in mathematics lessons in elementary school. It is also no surprise that arithmetic operators have higher precedence than logical operators. But other combinations are ambiguous: should bitwise operators have higher or lower precedence than arithmetic operators?
To avoid bugs and make the code easier to read for anybody who will be maintaining or extending it, it is a good practice to use parentheses. It’s good to have this rule in coding conventions for any project.
Languages should require parentheses in ambiguous situations, right in their grammars. Such grammar rules are easy to define even in the simplest notations like BNF. For example, such a rule would forbid mixing different bitwise operators without parentheses, or mixing arithmetic and bitwise operators, and so on. This would help avoid subtle bugs which are sometimes difficult to spot.
The shift in personal computing
Last month Samsung sold more smartphones than any other manufacturer; Apple is #2 and Nokia is #3. The predictions of some analysts last year that Android would take over are coming true, while Google is trying to make Android a more solid platform and fix some of the mistakes they have made.
It’s interesting to observe the battle between Apple and Google, with multiple hardware vendors involved on the Google side, while other well established cell phone and smartphone manufacturers begin to struggle.
But where is it all going? It looks like in the coming years small mobile devices will replace PCs. We will no longer be chained to a desk. We won't need to carry around a bulky laptop. Instead we will be carrying a small touchscreen device. Take a look at designs like the ones from Motorola, where you can plug the smartphone into a bigger device. Your pocket device will become your universal personal computer. You will plug it into a dock, or just connect a monitor. You can already do this today with some tablets. Most devices also already support Bluetooth keyboards, and Android supports a mouse. Future designs will simply become more convenient to use.
Mobile devices are becoming more and more versatile. You can write and print documents using mobile apps or online office suites; you can take, upload and edit photos (some online tools like Picasa provide basic but easy-to-use photo editing); you can browse the Internet, watch movies, listen to music and play games – most common tasks can be done on a small device which fits in your pocket.
PCs will remain a niche for specialized uses which require sophisticated software and high-end hardware. Because of that they will become more expensive, which will push even more users toward smaller devices.
This does not bode well for existing monopolies.
The Big Bounce and the shape of the Universe
So we evolved on this rock somewhere in the suburbs of a large galaxy. But by looking around (read: into the stars) we are able to figure some things out, like where it all came from and where it is going. There is still a lot to learn – we have barely scratched the surface – but let's try to sum up some of the things we already know.
Until recently the widely (or wildly) popular theory was that it all started with a Big Bang! But where did this singularity which gave birth to our Universe come from? What triggered the explosion of space, time, energy, matter and information?
The cosmic microwave background should be uniform. However, astronomers who observe and analyze it find some irregularities. These irregularities could be remnants of a previous incarnation of our Universe – evidence for an interesting new theory which says it wasn't a Big Bang but rather a Big Bounce. The Universe existed before, but for some reason, perhaps gravity, it compressed into one spot. The compression reached a critical point and it all exploded again.
The Universe has been expanding for the last ~14 billion years. It is likely going to slow down at some point. Whatever compressed it before will stop the expansion and induce a collapse again. Gravity is a force we know, but we can't yet tell for sure whether it is the real cause of the collapse. In fact we can't even reliably estimate the mass of our own galaxy, let alone the whole Universe, and that would be necessary to tell whether it is gravity pulling it all back together or not.
Looking far away with our best telescopes, we can see faint light that reaches us after billions of years. The farther we look, the older the light. The “oldest” light we see is from about 14 billion years ago. This is how we estimate the age of the Universe. The shift of the spectrum of that light towards red indicates that the Universe is expanding.
Some theories propose that the expansion is caused not only by matter flying apart after the Big Bounce, but also by the expansion of space itself. This would explain why two points of space sufficiently far from each other recede from each other at a speed which effectively appears greater than the speed of light. And if places far away from us recede effectively faster than light, we will never be able to see what is beyond a certain point.
Because of this “faster-than-light” expansion, the Universe is bigger than what we can see – according to some estimates by an order of magnitude, or even more. The truth is, at this point we don't even know how big the Universe really is; the estimates vary. We suspect it is ~14 billion years old but much bigger than the distance light would travel in ~14 billion years.
What does the Universe look like; what is its shape? The simplest answer is that it fills the inside of an expanding sphere. The shockwave of the initial explosion is really fast, so we can't get out and see it from the outside. Some even propose that there is no space outside – that space exists only inside the Universe – which physically makes sense, because what we think of as empty space, a completely sterile vacuum, is in fact not empty but full of energy. Since there is no energy or anything else beyond the expanding sphere of the Universe, we can't treat whatever is out there as ordinary space as we know it.
I do not really believe that the Universe is an expanding sphere sitting in an unexplainable, unmeasurable, infinite, sterile and pristine void without bounds or end. That does not make any sense to me. So I have my own theory.
In my opinion our Universe is the three-dimensional surface of a four-dimensional sphere. This is mind-boggling and impossible to imagine “from the outside”, but it is quite easy to grasp from the inside if we compare it to our two-dimensional life on the surface of the Earth. If we were able to freeze time and then send a spaceship in any randomly chosen direction, the ship would eventually return from the opposite side. This means that to us, confined to the three-dimensional space of our Universe, there is no boundary, there is no escape, there is no getting out. No matter where we go, we will eventually get back to where we started.
What I think really expands is the four-dimensional sphere on whose surface our three-dimensional Universe is located. It is as if the Earth were a balloon and somebody were inflating it. When we inflate a balloon, its volume expands and so does its surface. The number of atoms on the surface stays the same; the expansion of the balloon simply causes them to move away from each other. Something similar happens to our Universe.
The fourth dimension of the expanding sphere is not time. At some point the Universe will stop expanding and will start collapsing again, but time will not start running backwards. (Or will it?)
So what if it is true – if the Universe has multiple lives, expanding and collapsing back and forth? What is causing it? It could be gravity. My theory is that the Universe behaves like a perfect pendulum. It swings back and forth. When it expands, the expansion slows down until it stops at a certain point; then it collapses back, and the collapse accelerates until the Universe finally collapses into a singularity. With all matter and energy in one spot, at maximum “kinetic” energy, it bounces and starts expanding again, and the expansion decelerates until it stops again at the same point as before. If the contents of the Universe have nowhere to escape, the process is perfectly conserved and the Universe bounces back and forth forever. And every iteration of the Universe can be different – it doesn't need to be identical; the elements of matter and energy don't have to follow the same paths as before the bounce.
Maybe it is like that, maybe not. Even if it is, it does not explain how this all started, where it came from, what is inside the four-dimensional sphere and what is outside. Will anyone ever be able to find out?
First post!
I'm starting this blog to have a place to publish my ideas and thoughts, and perhaps also to share things that seem interesting to me. To describe it as a brain dump would be a bit of an exaggeration, though.
I don't know many blogs – I follow one or two – so I don't really have a picture of how to shape this one yet. Some rules out there say that a good blog should be focused; frankly, I don't really care. If someone finds my future posts interesting – great!
Googling revealed that this blogging website is #1, although not the most user-friendly. The W logo looks like the Volkswagen logo at first glance. The deciding factor for me was the availability of highly rated mobile apps, so I don't have to use a PC to add new posts.
So this is a beginning, let’s see where we get from here…

