The danger of blindly pursuing 10x productivity is that you may end up reaching for the only viable trick readily at your disposal: increasing the time spent working.
There are numerous irritating things about this "solution". The most important one is that it doesn't scale. At best it scales linearly, which means, everything else being equal, that it can provide a factor of 2, perhaps 2.5 if we are really stretching it: going from a 40-hour week to an 80-hour week doubles your hours at most, and pushing to 100 buys you 2.5x. Anything more than that requires the use of dubious substances and funny drugs.
There are many employers, however, who would love to see you scale along that axis, for one reason or another. It is quite vampiric, and I don't think it works over longer stretches of time. The only way to get more productivity is by other tricks, especially if you are to find a factor-of-10 increase.
But I think that the only way to really understand the original paper is to read it yourself. So I did.
The 1966 myth paper
The underlying problem is that it is notoriously hard to run accurate controlled tests. The original paper, "EXPLORATORY EXPERIMENTAL STUDIES COMPARING ONLINE AND OFFLINE PROGRAMING PERFORMANCE", 20 December 1966[1], studies whether having on-line access to a machine helps productivity, and finds that it does compared to off-line access. In passing, the large diversity in programmers' completion times is mentioned. If you disagree with my reading here, I suggest you go read the paper yourself. It is always good to look at old research, and there is a good chance that you were not programming, nor born, when this paper was written.

The paper itself is actually pretty interesting. There are some amusing comments in it, pertaining to the viewpoint that having an on-line editor is detrimental to development; that is, off-line programming forces the user to cook up better programs. Another amusing fact is that while debugging happened either on-line or off-line, actual programing (note the missing 'm') was always carried out off-line.
The analysis in the article is rather sound when it comes to deciding whether on-line or off-line debugging is more efficient. Even for the small set of participants, an ANOVA is performed, and even with that small sample there is a significant advantage for on-line debugging (today, most people would probably not be surprised by this, but it was not an established fact back in the day).
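For the curious, the kind of test the paper ran is easy to reproduce today. Below is a minimal sketch in Python using scipy; the debug-hour numbers are invented for illustration and are not the paper's data:

    # One-way ANOVA comparing debug hours across two conditions,
    # mirroring the paper's on-line vs. off-line comparison.
    # The numbers are invented, not taken from the paper.
    from scipy.stats import f_oneway

    online_hours = [4.0, 7.0, 9.0, 12.0, 16.0, 20.0]     # hypothetical
    offline_hours = [12.0, 18.0, 26.0, 34.0, 50.0, 58.0] # hypothetical

    f_stat, p_value = f_oneway(online_hours, offline_hours)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

With a gap like the one above, even a dozen participants is enough for the F-test to come out significant at the usual 5% level.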
The paper works with two groups. The second group, of 9 persons, consists of inexperienced programmers. They are not interesting to our 10x myth at all: since they are newcomers, the speed at which they absorb and learn is going to dominate. Specifically, new programmers today come with vastly different experience and skill sets, so naturally their diversity is high. Back in the day the situation may have been different, I think. In any case, I won't account for these, as they are not as interesting as the remaining 12 participants:
The "horrid" portion of the performance frequency distribution is the long tail
at the high end, the positively skewed part in which one poor performer can
consume as much time or cost as 5, 10, or 20 good ones. Validated techniques
to detect and weed out these poor performers could result in vast savings of
time, effort, and cost.
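To see how such a long tail produces the famous ratios, here is a small sketch with invented completion times, where a single straggler alone accounts for a 20x figure:

    # Twelve invented completion times (hours); not the paper's data.
    # One straggler at the high end drives the headline ratio.
    completion_hours = [3, 4, 5, 6, 7, 8, 9, 11, 14, 19, 35, 60]

    best, worst = min(completion_hours), max(completion_hours)
    print(f"worst/best: {worst / best:.0f}x")     # 20x, from one outlier

    # Comparing against the median is far less dramatic.
    median = sorted(completion_hours)[len(completion_hours) // 2]
    print(f"worst/median: {worst / median:.1f}x") # about 6.7x

The headline ratio is a comparison of the two extremes, so a single poor performer is all it takes to produce it.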
Given that the article is from 1966, saving CPU hours on the system during debugging is as important as everything else. It is not only about getting rid of "horrid" programers; it is also a question of saving valuable machine time.

Another important point in the article is:
Programers primarily wrote their programs in JTS (JOVIAL Time-Sharing—a procedure-oriented
language for time-sharing).
This implies that some programers chose another language, especially if you also take the following quote from later in the paper:

a. A substantial performance factor designated as "programing
speed," associated with faster coding and debugging,
less CPU time, and the use of a higher-order language.
b. A well-defined "program economy" factor marked by shorter
and faster running programs associated to some extent with
greater programing experience and with the use of machine
language rather than higher-order language.
In other words, this strongly suggests a confounding variable: the choice of programming language. Given the general state of compilers and interpreters in 1966, I am willing to bet that the difference in performance contributing to the 10x myth is programming language choice.

Very few will object to the claim that machine code is harder to write than higher-level programs in JOVIAL, especially at that point in time, when knowledge of a system's machine language was more specialized and required more intimate knowledge of the machine. I find it pretty amusing that this little fact has been hidden completely from the discussion about the paper. It is also odd that there is no complete table with the 12 users, their times, and their language choice. It would make the material stronger, since you would be able to run experiments on the data yourself.
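To illustrate how much a pooled ratio can hide, here is a sketch with invented numbers: the spread within each language group is modest, but pooling a machine-language group with a JTS group balloons the best-to-worst ratio:

    # Invented task times (hours) for two language groups;
    # not reconstructed from the paper.
    machine_lang_hours = [40, 48, 55, 60]  # hypothetical machine-code times
    jts_hours = [5, 7, 8, 10]              # hypothetical JTS times

    def spread(xs):
        """Best-to-worst ratio within a sample."""
        return max(xs) / min(xs)

    print(f"within machine language: {spread(machine_lang_hours):.1f}x")  # 1.5x
    print(f"within JTS:              {spread(jts_hours):.1f}x")           # 2.0x
    pooled = machine_lang_hours + jts_hours
    print(f"pooled:                  {spread(pooled):.1f}x")              # 12.0x

This doesn't prove the paper's spread is language choice alone, but it shows how easily a pooled ratio can dwarf the within-group ones.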
If the 10x programmer exists, this paper isn't the one proving it. And if the 10x programmer exists, that programmer is writing Haskell :)
[0] https://medium.com/about-work/6aedba30ecfe
[1] http://www.dtic.mil/dtic/tr/fulltext/u2/645438.pdf
Written in Acme and converted with StackEdit.