While Grace Hopper used her nanosecond wires to explain the delay in satellite communication and why computers need to get smaller, the lesson is more relevant than ever today. We are officially past the era of computing where people glued together libraries to form programs. Today, virtually every new application built is distributed.
The problem is that if you think latency due to disk seeks is bad, then imagine reaching a machine at the other end of the world. From here, my round-trip time to yahoo.com is around 200ms, far more than the average seek time on an old hard disk. There is a common trick in the high-performance computing world for battling latency: latency hiding. While a message passes through the slow network, you do something else. Then, when the program desperately needs the data from that message in order to continue, you gamble that it has already arrived. Operating systems have used the same trick for years while waiting on the disk to return data: run another program in the meantime.
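A minimal sketch of latency hiding in Python (the remote host and the 200ms delay are illustrative stand-ins for a real network call, not measurements):

```python
import concurrent.futures
import time

def fetch_remote(host):
    # Stand-in for a slow network round trip (~200 ms, as in the text).
    time.sleep(0.2)
    return "response from " + host

with concurrent.futures.ThreadPoolExecutor() as pool:
    # Fire off the slow request, but do not block on it yet.
    future = pool.submit(fetch_remote, "yahoo.com")

    # Latency hiding: do useful local work while the message is in flight.
    local_work = sum(i * i for i in range(100_000))

    # Only now, when we truly need the data, do we gamble that it
    # has already arrived; if not, result() blocks for the remainder.
    response = future.result()
```

If the local work takes longer than the round trip, the wait in `result()` is zero and the network latency has been hidden entirely.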
And this is why any modern programming language must tackle concurrent operations. From now on, most programs will be distributed. Client programs will be, because they live on mobile phone-sized systems and have to pull in data from multiple sources. Server programs will be, because distribution is key to scaling and redundancy. In short, every program will run into a situation where getting data amounts to asking another system for it -- and then handling the latency inherent in that communication. The limit of a computer from here on out will not be the number of instructions it can retire on its cores. Rather, it will be the amount of communication it can perform and how well it deals with it. On the memory bus. On the network. To the satellite.
I am making a bet: distribution will be huge, and it will be solved by message-passing concurrency. That is, I claim we already have the necessary tools in the toolbox to tackle the problem. There is not going to be a concurrent doomsday where the world curls up in a corner and deadlocks. There is not going to be a parallel doomsday revolution either. If the internet has taught us anything, it is that we can handle the problems which come forth to rear their ugly heads (and breathe fire). The reason I am betting on message-passing concurrency is that it is easy to program and fast enough for a 200ms round trip in most cases. The nanosecond wires spent crossing the network utterly dwarf the price we pay to pass a message.
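A minimal sketch of message-passing concurrency, using Python threads and queues as a stand-in for processes talking over a network (the doubling worker is just an illustrative payload):

```python
import queue
import threading

def worker(inbox, outbox):
    # Each message carries everything the worker needs; there is no
    # shared mutable state, only the two queues.
    for msg in iter(inbox.get, None):   # None acts as the stop sentinel
        outbox.put(msg * 2)

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

# Send work as messages, then tell the worker to stop.
for n in (1, 2, 3):
    inbox.put(n)
inbox.put(None)
t.join()

results = [outbox.get() for _ in range(3)]
```

Because the only coupling between sender and worker is the messages themselves, swapping the in-process queue for a socket changes the transport, not the program structure.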
I am making another bet as well: parallelism is not going to be as huge as we think it is. For years we relied on our computing technology getting faster every other year -- and that low-hanging fruit is not there anymore. But it disguised another trend with even greater consequences: computers keep getting smaller and smaller. My mobile phone has the computing power of a state-of-the-art computer from 2001-2002. Imagine that, in 8 years! It is no longer about splitting a computation so it can run on a single machine with many cores. It is about splitting it so it can run on many machines with many cores.
Today is 10/10/10, but the day is special only on the calendar. The change did not happen overnight; there has been a slow crawl towards a more distributed world for some years. But undoubtedly, mobile devices will propel us with full force into the new world order.
-- All Hail! Think concurrently!