Comments on JLOUIS Ramblings: Errlxperimentation: Clustering of Erlang Error Reports
Jesper Louis Andersen (https://plus.google.com/108725849902883879959)

Jlouis (2011-01-09 21:14):
@Thomas:

The similarity notion may still need some tuning, indeed.

Another way to look at it is this, however:

Suppose we let G = (V, E) be a directed call-graph. Every vertex v in V is a {Module, Function, Arity} entry, and (v,u) is in E if the function v calls u. Augment this graph with a source node, s, and a sink node, t. Every possible "start function" v of a spawned function is connected to the source by (s,v), and every possible point of error u is connected to the sink by (u,t).

A bigram [u, v] in a stack trace corresponds to an edge in E. Thus, a stack trace is a <i>path</i> in the call graph just mentioned, from the source to the sink (the source and sink act as "padding" for the first and last entries in the stack trace, respectively).

The similarity notion for two traces/paths is thus: <i>how many edges do the two paths have in common?</i> If they are completely equivalent the similarity is 1.0, and if they have nothing in common the similarity is 0.0. For each edge they do have in common, the similarity increases by 1/n, where n is the length of the longest trace in consideration.

Currently it looks as if this works quite well experimentally, but as you say, it needs a good rationale for why this is so, theoretically. Also, my experiments so far are not very extensive, so I do not know whether this works on a larger scale.

There are a couple of knobs we can twist. The weights of the bigrams are all 1; we could, for instance, give the type errors or division-by-zero errors at the bottom of the trace more weight, making them more important. The second knob is the similarity of not two traces, but two clusters of traces. We currently take the minimum over any pair drawn from the two clusters, but there are several alternative metrics to try here.
Finally, the third knob is the dendrogram tree formed, which can be changed from a binary tree into an n-way tree or something else entirely.

Jlouis (https://www.blogger.com/profile/02990737394952724516)
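The edge-overlap similarity described above can be sketched in a few lines of Python (an illustrative sketch, not the author's code; the trace entries and the "$source"/"$sink" padding symbols are made up for the example):

```python
def bigrams(trace, source="$source", sink="$sink"):
    """Pad a stack trace with source/sink nodes and return its set of edges.

    Using a set discards edge multiplicity (e.g. under recursion); that is a
    simplification in this sketch.
    """
    path = [source] + list(trace) + [sink]
    return set(zip(path, path[1:]))

def similarity(t1, t2):
    """Shared call-graph edges, each worth 1/n, where n is the edge count
    of the longer padded trace. Identical traces give 1.0, disjoint 0.0."""
    e1, e2 = bigrams(t1), bigrams(t2)
    n = max(len(e1), len(e2))
    return len(e1 & e2) / n

def cluster_similarity(c1, c2):
    """Similarity of two clusters of traces: the minimum over any pair,
    as the comment describes (complete linkage, in clustering terms)."""
    return min(similarity(a, b) for a in c1 for b in c2)
```

For example, ["m:f/1", "m:g/2", "m:h/0"] and ["m:f/1", "m:g/2", "err:badarith/0"] share the edges ($source, m:f/1) and (m:f/1, m:g/2) out of four, giving similarity 0.5.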
Thomas (2011-01-09 07:13):

Interesting blog post.

If I understand this correctly (yes, I should read the article), you characterize a distance between Erlang stack traces: the closer two traces are, the more likely they point to the same error.

Clearly, one error can expose itself as many faults, and hence this is useful for finding the error when you observe the faults.

But I fail to understand the intuition behind your measure. The bigrams form the vector, as far as I can see; that is, a vector is the collection of bigrams in a stack trace? But then, wouldn't any type error or division-by-zero error have very many ways to get there, and hence many different stack traces? In that case, the most important information in the stack trace is the final failure, which is what programmers do look at first: the error message of the last call. There is no weight in your vector, as far as I can see, that compensates for this.

Thus, if you can demonstrate that the similarity notion you use is a useful one, then I would surely encourage you to wrap this up in an article for the upcoming Erlang workshop. It seems like a cool thing to present there!

Thomas (https://www.blogger.com/profile/05696162248969800304)