So, one searches a job index in Denmark:
ML: 0
Lisp: 0
Haskell: 0
Erlang: 0
Python: 1 - Manage a Java EE build system(!)
Perl: 2-3 system admin jobs.
The conclusion is clear: the Law of the Secret Weapons(tm) still holds. Pick anything from the list above and you will be able to enter almost any field with the best technical platform in 4-6 months' time. I am sick of the Java and .NET jobs which have me code in a mediocre language with mediocre IDE-crap on some project I have absolutely no love for.
I can understand that the functional toolset and mindset of ML and Haskell put people off. Lisp is a harder case, since it does have considerable backing in the commercial world. The same goes for Erlang, given that it currently enjoys some hype. But seeing so few Python jobs astounds me. It is fairly easy to learn and has strong backing from Google. It is not going to perish in the next 10 years. And you can deliver great software with it in a short amount of time.
Python also has the advantage that its dynamic typing makes it easier for the PHP crowd to pick up. Static types are awfully cool, but they require some training and routine to use to your advantage. Most integration work is also complete bliss to pull off in Python if you know what you are doing.
So any company that wants to use any of the above can have me. I would be delighted to rip some Java/C# programs apart with a real language.
-
First you find the pid of the process you want to profile. Then you run
fprof:trace([start, {file, "f"}, {procs, [pid(0,1292,0)]}]).
and let it cook for a while. "f" will contain the raw profile data. When it has cooked for some time, you can run
fprof:trace(stop).
to close the file and stop the trace. Then you read in the data with
fprof:profile(file, "f").
which builds a database in memory from the profile run. The analysis output is then generated in 150 columns with
fprof:analyse([{dest, "f.analysis"}, {cols, 150}]).
to the file "f.analysis". To understand the profile output, you can read the Tools Users Guide which has a chapter on fprof and describes the output format. Let the CPU-cycle hunt begin!1View comments
-
So I am hacking a lot of Erlang at the moment. The problem is that Erlang is a dynamically typed language, and bugs are sneaking into my code. This is where the dialyzer enters the game. It is a tool which takes Erlang code, infers types for it and uses them in a static analysis to find errors. For me, this is a very valuable tool. You could just run the code, but it may take some time before you cover all the cases, so periodically running the dialyzer can find bugs before they show up in running code. You compile your code and then you run
dialyzer -r "path-to-ebin-beams"
and the dialyzer will process your code and report problems. You will have to produce a PLT first. The PLT is the persistent lookup table holding type information from the other applications you have stored in it. Creating the PLT takes time. A lot of time. If you run the dialyzer without a PLT, you get hints on how to produce the initial one. I most often use the dialyzer before committing a larger set of code or after merging a branch upwards (from less stable to more stable code). Sometimes it finds messy stuff. And then I can go and fix it!
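To illustrate the kind of defect it reports, here is a tiny made-up module (the module name, functions and spec are invented for this example). Running the dialyzer over its compiled beam flags a call that can never succeed:

%% area.erl -- a made-up module with a defect the dialyzer catches.
-module(area).
-export([circle/1, report/0]).

-spec circle(number()) -> float().
circle(R) ->
    math:pi() * R * R.

%% The argument is an atom, so circle/1 can never succeed here;
%% the dialyzer reports the call as breaking the contract.
report() ->
    io:format("area: ~p~n", [circle(radius)]).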
-
So, writing a torrent client goes through 3 phases. In the first phase, you implement what is in the spec: the obvious things you need and the building blocks for the later steps. In the second phase, you read the spec and figure out how things must fit together. Luckily, there is only one way they can fit in most places, but there are some nasty little exceptions where you need to read the code of other clients to figure out what should happen. In the third phase, you go beyond the spec and implement all the features that are more or less undocumented, but crucial for speed. Here is an example:
It does reciprocation and number of uploads capping by unchoking the four peers which it has the best download rates from and are interested. Peers which have a better upload rate but aren't interested get unchoked and if they become interested the worst uploader gets choked.
So this is simple enough to implement: sort the peers on download speed to the client and go through them one by one, choking and unchoking accordingly. But you need to recalculate this on state changes, and then there is optimistic unchoking, which must also be handled. Seems easy enough. The etorrent client currently has numerous problems with the above scheme. First of all, the measurement of download rates from peers must be a running average, dampened and fine-grained. Second, the 'four peers' part is a lie, and no modern client does that. Rather, they choose the number of slots based on the sum of the download rates and a heuristic. Of course, this can only be understood if you read the source code of other clients. This is the undocumented, hairy part of the BitTorrent protocol.
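As a rough sketch of the basic scheme only (not the rate-sum heuristic real clients use), assuming a made-up peer record carrying a dampened download rate and an interest flag:

%% choke_sketch.erl -- sketch of the basic reciprocation scheme above.
%% The peer record is invented for illustration; a real client keeps
%% dampened running averages and picks NumSlots from a heuristic.
-module(choke_sketch).
-export([rechoke/2]).

-record(peer, {pid, download_rate = 0.0, interested = false}).

%% Split Peers into {Unchoke, Choke} given NumSlots upload slots.
rechoke(Peers, NumSlots) ->
    %% Best downloaders (from our point of view) first.
    Sorted = lists:sort(
               fun(A, B) -> A#peer.download_rate >= B#peer.download_rate end,
               Peers),
    TopInterested = lists:sublist(
                      [P || P <- Sorted, P#peer.interested], NumSlots),
    Threshold = case TopInterested of
                    [] -> 0.0;
                    _  -> (lists:last(TopInterested))#peer.download_rate
                end,
    %% Per the spec quote: uninterested peers with a better rate than
    %% the worst unchoked interested peer also stay unchoked.
    FastUninterested = [P || P <- Sorted,
                             not P#peer.interested,
                             P#peer.download_rate > Threshold],
    Unchoke = TopInterested ++ FastUninterested,
    {Unchoke, Peers -- Unchoke}.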