So the discussion of the 10x programmer has come up again, this time because Shanley Kane wrote on Medium about the 10x programmer being a myth[0]. Shanley makes the case that the original 1966 paper used as a basis for the argument is not really about high productivity. My take is more or less in alignment with Shanley's: the 10x programmer is a myth. What is far more likely is that in most cases there is a confounding variable to which the apparent productivity difference between programmers should be attributed.
    The danger of blindly pursuing 10x productivity is that you may end up using the only viable trick right at your disposal: increasing the time spent working.
    There are numerous irritating things about this "solution". The most important one is that it doesn't scale. At best it scales linearly, which means, everything else being equal, that it can only provide a factor of 2, perhaps 2.5 if we stretch it. Anything more than that requires the use of dubious substances and funny drugs.
    There are many employers, however, who would love to see you scale along that axis, for one reason or another. It is quite vampiric, and I don't think it works over longer stretches of time. The only way to get more productivity is through other tricks, especially if you are to find the factor-10 increase.
    But I think the only way to really understand the paper is to read it yourself. So I did.

    The 1966 myth paper

    The underlying problem is that it is notoriously hard to run accurate controlled tests. The original paper, "EXPLORATORY EXPERIMENTAL STUDIES COMPARING ONLINE AND OFFLINE PROGRAMING PERFORMANCE", 20 December 1966[1], studies whether having on-line access to a machine helps productivity, and the finding, compared to off-line access, is positive. In passing, the large diversity in programmers' completion times is mentioned. If you disagree with my findings here, I suggest you go read the paper yourself. It is always good to look at old research, and there is a good chance that you were not programming, nor born, when this paper was written.
    The paper itself is actually pretty interesting. There are some amusing comments in it, pertaining to the viewpoint that having an on-line editor is detrimental to development: off-line programming, that is, forces the user to cook up better programs. Another amusing fact is that while debugging happened either on-line or off-line, actual programing (note the missing 'm') was always carried out off-line.
    The analysis in the article is rather sound when it comes to deciding whether on-line or off-line debugging is more efficient. Even for the small set of participants, an ANOVA is performed, and even with the rather small sample, the result significantly favors on-line debugging (today, most people would probably not be surprised by this, but it was not a given back in the day).
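    For the curious, the one-way ANOVA the paper applies boils down to comparing the variance between groups to the variance within them. Here is a minimal sketch in Python; the debug-hour figures are invented for illustration and are not the paper's data:

```python
def f_statistic(groups):
    """One-way ANOVA F-statistic: between-group mean square over
    within-group mean square."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    # Between-group sum of squares, k - 1 degrees of freedom.
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares, n - k degrees of freedom.
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical debug hours for on-line vs. off-line participants.
online = [4.0, 6.0, 5.0, 7.0, 5.5, 6.5]
offline = [9.0, 12.0, 10.0, 14.0, 11.0, 13.0]

# A large F means group membership explains most of the variance.
print(f_statistic([online, offline]))
```

    A large F relative to the F-distribution's critical value is what lets the paper call the on-line advantage significant even with so few participants.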
    The paper works with two groups. The second group, of 9 persons, consists of inexperienced programmers. They are not interesting to our 10x myth at all: since they are newcomers, the speed at which they absorb and learn is going to dominate. Specifically, new programmers today come with vastly different experience and skill sets, so naturally their diversity is high. Back in the day, it may have been a different situation, I think. Yet I won't account for these, as they are not as interesting as the remaining 12 participants:
    The "horrid" portion of the performance frequency distribution is the long tail
    at the high end, the positively skewed part in which one poor performer can
    consume as much time or cost as 5, 10, or 20 good ones. Validated techniques
    to detect and weed out these poor performers could result in vast savings of
    time, effort, and cost.
    
    Given that the article is from 1966, saving CPU hours on the system for debugging is as important as everything else. It is not only about getting rid of "horrid" programmers; it is also a question of saving valuable machine time.
    Another important point in the article is:
    Programers primarily wrote their programs in JTS (JOVIAL Time-Sharing—a procedure-oriented
    language for time-sharing).
    
    This implies that some programmers chose another language, especially if you also take into account the following quote from later in the paper:
    a. A substantial performance factor designated as "programing
      speed," associated with faster coding and debugging,
      less CPU time, and the use of a higher-order language.
    
    b. A well-defined "program economy" factor marked by shorter
      and faster running programs associated to some extent with
      greater programing experience and with the use of machine
      language rather than higher-order language.
    
    In other words, this strongly suggests a confounding variable: the choice of programming language. Given the general state of compilers and interpreters in 1966, I am willing to bet that the difference in performance contributing to the 10x myth is programming language choice.
    Very few will object to the fact that machine code is harder to write than higher-level programs in JOVIAL, especially at that point in time, when knowledge of a system's machine language was more specialized and required more intimate knowledge of the machine. I find it pretty amusing that this little fact has been hidden completely from the discussion about the paper. It is also odd that there is no complete table with the 12 participants, their times, and their language choices. It would make the material stronger, since you would be able to run experiments on the data yourself.
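    To see how such a confounder can manufacture a large spread on its own, here is a small simulation. Every number in it is invented for illustration: the per-language base times and the at-most-2x within-language skill spread are assumptions, not figures from the paper:

```python
import random

random.seed(1966)  # arbitrary seed, for reproducibility only

def hours(language):
    # Assumed base completion time per language (invented numbers):
    # a higher-order language is simply faster to work in than machine code.
    base = {"JTS": 10, "machine": 60}[language]
    # Within a language, individual skill varies by at most 2x (assumption).
    return base * random.uniform(1.0, 2.0)

jts = [hours("JTS") for _ in range(6)]
machine = [hours("machine") for _ in range(6)]
pooled = jts + machine

# Pooled best-to-worst ratio folds the language effect into "programmer skill".
print(max(pooled) / min(pooled))  # far larger than any within-language spread
# Within a single language, the spread stays below 2x by construction.
print(max(jts) / min(jts))
```

    The pooled best-to-worst ratio is dominated by the language effect, while the within-language ratios stay modest. A table of the 12 participants with times and language choices would let one run exactly this kind of check on the real data.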
    If the 10x programmer exists, this paper isn't the one proving it. And if the 10x programmer exists, that programmer is writing Haskell :)
    [0] https://medium.com/about-work/6aedba30ecfe
    [1] http://www.dtic.mil/dtic/tr/fulltext/u2/645438.pdf
    Written in Acme and converted with StackEdit.