The automata used in the previous experiments were randomly generated.
In practice, however, the automata to be treated by the algorithm may
have typical properties that are not reflected in this test data. For
this reason, results are presented for a number of automata that were
generated using approximation techniques for context-free grammars. In
particular, we have used automata created by Nederhof with the
technique described in [14], as well as a small number of automata
created using the technique of [18] (as implemented by Nederhof). We
have restricted our attention to automata with at least 1000 states in
the input.
The automata typically contain many jumps. Moreover, the number of
states of the resulting automaton is often smaller than the number of
states of the input automaton. Results are given in Tables 1
and 2. One of the most striking examples is the ygrim automaton,
consisting of 3382 states and 9124 jumps. For this example, the
per graph implementations ran out of memory (after a long time),
whereas the implementation of the per subset algorithm produced the
determinised automaton (containing only 9 states) within a single CPU
second. The FSM implementation took much longer for this example
(whereas for many of the other examples it is faster than our
implementations). Note that this example has the highest ratio of
jumps to states. This confirms the observation that the per subset
algorithm performs better on inputs with a high deterministic jump
density.
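The per subset strategy computes the epsilon-closure once for each subset built during determinisation, instead of once per input state or over the whole graph in advance. The following Python sketch illustrates the idea; the function names and the automaton representation (dictionaries of sets) are our own assumptions for exposition, not the implementation measured above.

```python
from collections import deque

def eps_closure(states, jumps):
    """Epsilon-closure of a set of states.  jumps maps a state to the
    set of states reachable via a single epsilon-move ("jump")."""
    closure = set(states)
    todo = deque(states)
    while todo:
        q = todo.popleft()
        for r in jumps.get(q, ()):
            if r not in closure:
                closure.add(r)
                todo.append(r)
    return frozenset(closure)

def determinise(start, finals, delta, jumps):
    """Subset construction with per-subset epsilon-closure.
    delta maps state -> {symbol: set of target states}."""
    start_set = eps_closure({start}, jumps)
    seen = {start_set}
    todo = deque([start_set])
    dfa_delta, dfa_finals = {}, set()
    while todo:
        S = todo.popleft()
        if S & finals:
            dfa_finals.add(S)
        # Collect, per symbol, all targets reachable from any state in S.
        by_symbol = {}
        for q in S:
            for a, targets in delta.get(q, {}).items():
                by_symbol.setdefault(a, set()).update(targets)
        for a, T in by_symbol.items():
            # One closure computation per constructed subset, not per state.
            T = eps_closure(T, jumps)
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return start_set, dfa_finals, dfa_delta
```

Because the closure is taken once per subset, an input with many jumps but few resulting subsets (as with ygrim, which determinises to only 9 states) involves only a handful of closure computations.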
Table 1: The automata generated by approximation algorithms. The table
lists the number of states, transitions and jumps of the input
automaton, and the number of states of the determinised machine using
the efree^{s}, efree^{t}, and efree^{t,c} variants, respectively.

                      Input                       Output (#states)
Id        #states   #trans   #jumps    efree^s   efree^t   efree^{t,c}
g14          1048      403     1272        137       137           131
ovis4.n      1424     2210      517        164       133           107
g13          1441     1006     1272        337       337           329
rene2        1800     2597       96        846       844           844
ovis9.p      1868     2791     2688       2478      2478          1386
ygrim        3382     5422     9124          9         9             9
ygrim.p     48062    63704   109296        702       702           702
java19      54369    28333    51018       1971      1971          1855
java16      64210    43935    41305       3186      3186          3078
zovis3      88156    78895    68093       5174      5154          4182
zovis2      89832    80400    69377       6561      6541          5309
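The claim that ygrim has the highest jumps-to-states ratio can be checked directly from the #states and #jumps columns of Table 1:

```python
# (#states, #jumps) for each input automaton, taken from Table 1.
inputs = {
    "g14":     (1048, 1272),
    "ovis4.n": (1424, 517),
    "g13":     (1441, 1272),
    "rene2":   (1800, 96),
    "ovis9.p": (1868, 2688),
    "ygrim":   (3382, 9124),
    "ygrim.p": (48062, 109296),
    "java19":  (54369, 51018),
    "java16":  (64210, 41305),
    "zovis3":  (88156, 68093),
    "zovis2":  (89832, 69377),
}

# Jump density: number of jumps divided by number of states.
density = {name: jumps / states for name, (states, jumps) in inputs.items()}
densest = max(density, key=density.get)
print(densest, round(density[densest], 2))  # -> ygrim 2.7
```

Only ygrim and ygrim.p exceed two jumps per state, and these are exactly the examples on which the per graph implementations ran out of memory.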

Table 2: Results for automata generated by approximation algorithms.
CPU-time in seconds. The dashes in the table indicate that the
corresponding algorithm ran out of memory (after a long period of
time) for that particular example.

Id        graph^t  graph^{t,c}  graph^s  graph^{s,a}  subset  state     FSM
g14           0.4          0.4      0.3          0.3     0.4    0.2      0.1
ovis4.n       0.9          1.1      0.8          1.0     0.7    0.6      0.6
g13           0.9          0.8      0.6          0.6     1.2    0.7      0.2
rene2         0.2          0.3      0.2          0.2     0.2    0.2      0.1
ovis9.p      36.6         16.0     16.9         17.0    25.2   20.8     21.9
ygrim           -            -        -            -     0.9   21.0    512.1
ygrim.p         -            -        -            -   562.1      -   4512.4
java19       55.5         67.4     52.6         45.0    25.8   19.0      3.8
java16       30.0         45.8     35.0         29.9    11.3   12.1      3.0
zovis3      741.1        557.5        -        407.4   358.4  302.5    325.6
zovis2      909.2        627.2        -        496.0   454.4  369.4    392.1

20000710