Process mining algorithms can be partitioned by the type of model they output: imperative miners output flow diagrams showing all possible paths through a process, whereas declarative miners output constraints showing the rules governing a process. For processes with high variability, the latter approach tends to provide better results, because an imperative miner would produce a so-called “spaghetti model”, which attempts to show all possible paths and is nearly impossible to read. However, studies have shown that one size does not fit all: many processes contain both structured and unstructured parts and therefore do not fit strictly into one category or the other. This has led to the recent introduction of hybrid miners, which aim to combine flow- and constraint-based models to provide the best possible representation of a log. In this paper we focus on a core question underlying the development of hybrid miners: given a log, can we determine a priori whether the log is better suited for imperative or declarative mining? We propose using the concept of entropy, commonly used in information theory. We consider different entropy measures that could be applied and show through experimentation on both synthetic and real-life logs that these measures do indeed give insight into the complexity of the log and can act as an indicator of which mining paradigm should be used.
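The abstract does not spell out the entropy measures considered; as a minimal illustrative sketch (not the paper's actual measures), one natural candidate is the Shannon entropy of the trace-variant distribution of a log: a highly structured log concentrates its traces in a few variants and yields low entropy, while a highly variable log spreads them across many variants and yields high entropy. The function and toy logs below are hypothetical.

```python
from collections import Counter
from math import log2

def trace_variant_entropy(log):
    """Shannon entropy (in bits) of the trace-variant distribution.

    `log` is a list of traces, each trace a tuple of activity labels.
    A log with a single variant yields 0; a log where every trace is
    a distinct variant yields log2(len(log)).
    """
    counts = Counter(log)
    n = len(log)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical toy logs: a structured log vs. a highly variable one.
structured = [("a", "b", "c")] * 8
variable = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c"),
            ("c", "b", "a"), ("b", "c", "a"), ("c", "a", "b"),
            ("a", "b"), ("b", "a")]

print(trace_variant_entropy(structured))  # 0.0 (one variant)
print(trace_variant_entropy(variable))    # 3.0 (8 distinct variants)
```

On this toy data, the structured log would suggest imperative mining and the variable log declarative mining; the paper's contribution is evaluating which such measures actually track this distinction on synthetic and real-life logs.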
|Title of host publication: International Conference on Business Process Management: BPM 2017: Business Process Management Workshops
|Published: 17 Jan 2018
|Series: Lecture Notes in Business Information Processing