Merge lp:~afrantzis/lava-test/alf-testdefs into lp:lava-test/0.0

Proposed by Alexandros Frantzis
Status: Rejected
Rejected by: Neil Williams
Proposed branch: lp:~afrantzis/lava-test/alf-testdefs
Merge into: lp:lava-test/0.0
Diff against target: 375 lines (+319/-1)
10 files modified
abrek/test_definitions/average_parser.py (+73/-0)
abrek/test_definitions/clutter-eglx-es20.py (+33/-0)
abrek/test_definitions/es2gears.py (+13/-0)
abrek/test_definitions/glmark2-es2.py (+40/-0)
abrek/test_definitions/glmemperf.py (+13/-0)
abrek/test_definitions/gtkperf.py (+3/-1)
abrek/test_definitions/qgears.py (+21/-0)
abrek/test_definitions/render-bench.py (+33/-0)
abrek/test_definitions/timed_test_runner.py (+51/-0)
abrek/test_definitions/x11perf.py (+39/-0)
To merge this branch: bzr merge lp:~afrantzis/lava-test/alf-testdefs
Reviewers:
Zygmunt Krynicki (community): Needs Information
Paul Larson: Pending
Review via email: mp+36990@code.launchpad.net

Description of the change

Test definitions for Linaro User Platforms graphics-related benchmarks.

Paul Larson (pwlars) wrote:

I haven't had time to look at all of these fully, but here are a few observations on what I've seen so far:
First off, the clutter tests that had been segfaulting for me previously are no longer doing that. Something, somewhere, already got fixed. Yay!
Also, it would be much easier to review these separately if at all possible; that way, some of the simpler ones can get in quickly while we deal with the issues in the others.

1 === added file 'abrek/test_definitions/average_parser.py'
We need to find a different place for these. Anything under test_definitions is assumed to be a test. If a helper is only used by a single test, you can put it in a directory named after that test; if it's shared among more than one test, then we really ought to put it in the common code somewhere.

277 === added file 'abrek/test_definitions/timed_test_runner.py'
same here

301 + def _runsteps(self, resultsdir, quiet=False):
302 + outputlog = os.path.join(resultsdir, 'testoutput.log')
303 + for (cmd, runtime, info) in zip(self.steps, self.runtime, self.info):
304 + # Use a pty to make sure output from the tests
305 + # is immediately available (line-buffering)
I've been looking at doing something like this for everything, so I want to take a look at moving this out as well.
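
For context, the core of the pty trick under discussion looks roughly like this when pulled out on its own. This is a minimal sketch in the Python 2 style of the rest of the code; run_under_pty is an illustrative name, not something in this branch:

    import os
    import pty
    import subprocess

    def run_under_pty(cmd):
        # Run cmd with its stdio attached to a pty so that the child's
        # libc line-buffers stdout (as on a terminal) instead of fully
        # buffering it as it would for a pipe.
        pid, ptyfd = pty.fork()
        if pid == 0:
            # Child: stdin/stdout/stderr are now the pty slave.
            subprocess.call(cmd, shell=True)
            os._exit(0)
        chunks = []
        while True:
            try:
                chunk = os.read(ptyfd, 1024)
            except OSError:
                # On Linux, reading the master raises EIO once the
                # child has exited and closed the slave side.
                break
            if not chunk:
                break
            chunks.append(chunk)
        os.close(ptyfd)
        os.waitpid(pid, 0)
        return "".join(chunks)

The TimedTestRunner in this branch layers a fixed runtime on top of this: the parent sleeps for the allotted time, delivers SIGINT to the child, and then drains the pty into the output log.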

Alexandros Frantzis (afrantzis) wrote:

Agreed, I will break this up into separate merge proposals.

About the shared runner/parser code, perhaps it is worth having a testdef utilities directory where we can put commonly used parser/runner classes, so that the main code remains clean.
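
One possible shape for that, purely illustrative (neither the package nor the module paths below exist in this branch), is to keep the helpers outside test_definitions so they are never picked up as tests:

    # Hypothetical layout:
    #   abrek/testdef_utils/__init__.py
    #   abrek/testdef_utils/average_parser.py     (AverageParser, MultiAverageParser)
    #   abrek/testdef_utils/timed_test_runner.py  (TimedTestRunner)
    #
    # A test definition would then import the shared classes explicitly:
    from abrek.testdef_utils.average_parser import AverageParser
    from abrek.testdef_utils.timed_test_runner import TimedTestRunner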

Paul Larson (pwlars) wrote:

> Agreed, I will break this up into separate merge proposals.
>
> About the shared runner/parser code, perhaps it is worth having a testdef
> utilities directory where we can put commonly used parser/runner classes, so
> that the main code remains clean.
Yes, that sounds good

Zygmunt Krynicki (zyga) wrote:

Alf: is this still valid? I'm inclined to reject it, since you are splitting these up into separate branches.

review: Needs Information

Unmerged revisions

41. By Alexandros Frantzis

Download qgears2 package from personal apt repo.

40. By Alexandros Frantzis

Update clutter-eglx-es20 test definition for new abrek API.

39. By Alexandros Frantzis

Update x11perf test definition for new abrek API.

38. By Alexandros Frantzis

Really append 'units' and 'result' fields to all gtkperf test cases.

37. By Alexandros Frantzis

Update render-bench test definition for new abrek API.

36. By Alexandros Frantzis

Update qgears test definitions for new abrek API.

35. By Alexandros Frantzis

Update glmemperf for new abrek API.

34. By Alexandros Frantzis

Update glmark2-es2 test for new abrek API.

33. By Alexandros Frantzis

Update es2gears test for new abrek API.

32. By Alexandros Frantzis

Update average_parser and timed_test_runner for new abrek API.

Preview Diff

1=== added file 'abrek/test_definitions/average_parser.py'
2--- abrek/test_definitions/average_parser.py 1970-01-01 00:00:00 +0000
3+++ abrek/test_definitions/average_parser.py 2010-09-29 14:13:40 +0000
4@@ -0,0 +1,73 @@
5+import re
6+import abrek.testdef
7+import pickle
8+
9+class AverageParser(abrek.testdef.AbrekTestParser):
10+ """Single test average result parser
11+
12+ Parses a series of results from the same test and computes the
13+ average.
14+ """
15+ def parse(self):
16+ filename = "testoutput.log"
17+ pat = re.compile(self.pattern)
18+ total = 0.0
19+ count = 0
20+ with open(filename, 'r') as fd:
21+ for line in fd:
22+ match = pat.search(line)
23+ if match:
24+ d = match.groupdict()
25+ total = total + float(d['measurement'])
26+ count = count + 1
27+
28+ if count > 0:
29+ avg_result = total/count
30+ else:
31+ avg_result = 0
32+
33+ avg = {'measurement':avg_result}
34+ self.results['test_results'].append(avg)
35+ if self.fixupdict:
36+ self.fixresults(self.fixupdict)
37+ if self.appendall:
38+ self.appendtoall(self.appendall)
39+
40+class MultiAverageParser(abrek.testdef.AbrekTestParser):
41+ """Multiple test average result parser
42+
43+ Parses a series of results from multiple tests and computes the
44+ average for each test.
45+ """
46+ def parse(self):
47+ filename = "testoutput.log"
48+ pat = re.compile(self.pattern)
49+ pat_end = re.compile("^\$End AbrekTest\$ (?P<info>.*)")
50+ total = 0.0
51+ count = 0
52+ with open(filename, 'r') as fd:
53+ for line in fd:
54+ match = pat.search(line)
55+ if match:
56+ d = match.groupdict()
57+ total = total + float(d['measurement'])
58+ count = count + 1
59+ else:
60+ match = pat_end.search(line)
61+ if match:
62+ d = match.groupdict()
63+ if count > 0:
64+ avg_result = total/count
65+ else:
66+ avg_result = 0
67+ avg = pickle.loads(d['info'].decode('string-escape')[1:])
68+ avg['measurement'] = avg_result
69+ self.results['test_results'].append(avg)
70+ total = 0.0
71+ count = 0
72+
73+ if self.fixupdict:
74+ self.fixresults(self.fixupdict)
75+ if self.appendall:
76+ self.appendtoall(self.appendall)
77+
78
79=== added file 'abrek/test_definitions/clutter-eglx-es20.py'
80--- abrek/test_definitions/clutter-eglx-es20.py 1970-01-01 00:00:00 +0000
81+++ abrek/test_definitions/clutter-eglx-es20.py 2010-09-29 14:13:40 +0000
82@@ -0,0 +1,33 @@
83+import abrek.testdef
84+import pickle
85+from average_parser import MultiAverageParser
86+from timed_test_runner import TimedTestRunner
87+
88+clutter_tests = [
89+ ("test-interactive", "test-rotate", 10),
90+ ("test-interactive", "test-actors", 10),
91+ ("test-interactive", "test-fbo", 10),
92+ ("test-interactive", "test-animator", 10),
93+ ("test-interactive", "test-random-text", 10),
94+]
95+cmd_fmt = "CLUTTER_SHOW_FPS=1 /usr/lib/clutter-1.0/tests/eglx-es20/%s %s"
96+
97+def create_test_cmds(tests):
98+ return [cmd_fmt % args[:2] for args in tests]
99+
100+def create_test_runtimes(tests):
101+ return [args[2] for args in tests]
102+
103+def create_test_info(tests):
104+ return ["%r" % pickle.dumps({'test_case_id':'%s.%s' % args[:2], 'units':'fps'}) for args in tests]
105+
106+
107+parse = MultiAverageParser(pattern="\*\*\* FPS: (?P<measurement>\d+) \*\*\*",
108+ appendall={'result':'pass'})
109+inst = abrek.testdef.AbrekTestInstaller(deps=["clutter-eglx-es20-1.0-tests"])
110+run = TimedTestRunner(create_test_cmds(clutter_tests),
111+ create_test_runtimes(clutter_tests),
112+ create_test_info(clutter_tests))
113+
114+testobj = abrek.testdef.AbrekTest(testname="clutter-eglx-es20", installer=inst,
115+ runner=run, parser=parse)
116
117=== added file 'abrek/test_definitions/es2gears.py'
118--- abrek/test_definitions/es2gears.py 1970-01-01 00:00:00 +0000
119+++ abrek/test_definitions/es2gears.py 2010-09-29 14:13:40 +0000
120@@ -0,0 +1,13 @@
121+import re
122+import abrek.testdef
123+from average_parser import AverageParser
124+from timed_test_runner import TimedTestRunner
125+
126+parse = AverageParser(pattern="=\W+(?P<measurement>\d+\.\d+) FPS",
127+ appendall= {'test_case_id':'es2gears', 'units':'fps',
128+ 'result':'pass'})
129+inst = abrek.testdef.AbrekTestInstaller(deps=["es2gears"])
130+run = TimedTestRunner(["es2gears"], [16], [""])
131+
132+testobj = abrek.testdef.AbrekTest(testname="es2gears", installer=inst,
133+ runner=run, parser=parse)
134
135=== added file 'abrek/test_definitions/glmark2-es2.py'
136--- abrek/test_definitions/glmark2-es2.py 1970-01-01 00:00:00 +0000
137+++ abrek/test_definitions/glmark2-es2.py 2010-09-29 14:13:40 +0000
138@@ -0,0 +1,40 @@
139+import re
140+import abrek.testdef
141+
142+RUNSTEPS = ["glmark2-es2"]
143+
144+class Glmark2Parser(abrek.testdef.AbrekTestParser):
145+ def parse(self):
146+ PAT1 = "^\W+(?P<subtest>.*?)\W+FPS:\W+(?P<measurement>\d+)"
147+ filename = "testoutput.log"
148+ pat1 = re.compile(PAT1)
149+ in_results = False
150+ cur_test = ""
151+ with open(filename, 'r') as fd:
152+ for line in fd.readlines():
153+ if line.find("Precompilation") != -1:
154+ in_results = True
155+ if in_results == True:
156+ match = pat1.search(line)
157+ if match:
158+ d = match.groupdict()
159+ d['test_case_id'] = "%s.%s" % (cur_test, d['subtest'])
160+ d.pop('subtest')
161+ self.results['test_results'].append(d)
162+ else:
163+ if line.startswith("==="):
164+ in_results = False
165+ else:
166+ cur_test = line.strip()
167+
168+ if self.fixupdict:
169+ self.fixresults(self.fixupdict)
170+ if self.appendall:
171+ self.appendtoall(self.appendall)
172+
173+parse = Glmark2Parser(appendall={'units':'fps', 'result':'pass'})
174+inst = abrek.testdef.AbrekTestInstaller(deps=["glmark2-es2"])
175+run = abrek.testdef.AbrekTestRunner(RUNSTEPS)
176+
177+testobj = abrek.testdef.AbrekTest(testname="glmark2-es2", installer=inst,
178+ runner=run, parser=parse)
179
180=== added file 'abrek/test_definitions/glmemperf.py'
181--- abrek/test_definitions/glmemperf.py 1970-01-01 00:00:00 +0000
182+++ abrek/test_definitions/glmemperf.py 2010-09-29 14:13:40 +0000
183@@ -0,0 +1,13 @@
184+import abrek.testdef
185+
186+RUNSTEPS = ["glmemperf"]
187+PATTERN = "^(?P<test_case_id>\w+):\W+(?P<measurement>\d+) fps"
188+
189+inst = abrek.testdef.AbrekTestInstaller(deps=["glmemperf"])
190+run = abrek.testdef.AbrekTestRunner(RUNSTEPS)
191+parse = abrek.testdef.AbrekTestParser(PATTERN,
192+ appendall={'units':'fps',
193+ 'result':'pass'})
194+
195+testobj = abrek.testdef.AbrekTest(testname="glmemperf", installer=inst,
196+ runner=run, parser=parse)
197
198=== modified file 'abrek/test_definitions/gtkperf.py'
199--- abrek/test_definitions/gtkperf.py 2010-09-22 17:55:02 +0000
200+++ abrek/test_definitions/gtkperf.py 2010-09-29 14:13:40 +0000
201@@ -27,7 +27,9 @@
202 if match:
203 self.results['test_results'].append(match.groupdict())
204
205-parse = GtkTestParser(appendall={'units':'seconds', 'result':'pass'})
206+ self.appendtoall({'units':'seconds', 'result':'pass'})
207+
208+parse = GtkTestParser()
209 inst = abrek.testdef.AbrekTestInstaller(deps=["gtkperf"])
210 run = abrek.testdef.AbrekTestRunner(RUNSTEPS)
211
212
213=== added file 'abrek/test_definitions/qgears.py'
214--- abrek/test_definitions/qgears.py 1970-01-01 00:00:00 +0000
215+++ abrek/test_definitions/qgears.py 2010-09-29 14:13:40 +0000
216@@ -0,0 +1,21 @@
217+import re
218+import abrek.testdef
219+from average_parser import AverageParser
220+from timed_test_runner import TimedTestRunner
221+
222+INSTALL_STEPS = [
223+ 'sudo add-apt-repository "deb http://people.canonical.com/~afrantzis/packages/ ./"',
224+ 'sudo apt-get update',
225+ 'sudo apt-get install -y --force-yes qgears2'
226+ ]
227+
228+parse = AverageParser(pattern="=\W+(?P<measurement>\d+\.\d+) FPS",
229+ appendall={'test_case_id':'qgears', 'units':'fps',
230+ 'result':'pass'})
231+
232+inst = abrek.testdef.AbrekTestInstaller(INSTALL_STEPS,
233+ deps=["python-software-properties"])
234+run = TimedTestRunner(["qgears"], [10], [""])
235+
236+testobj = abrek.testdef.AbrekTest(testname="qgears", installer=inst,
237+ runner=run, parser=parse)
238
239=== added file 'abrek/test_definitions/render-bench.py'
240--- abrek/test_definitions/render-bench.py 1970-01-01 00:00:00 +0000
241+++ abrek/test_definitions/render-bench.py 2010-09-29 14:13:40 +0000
242@@ -0,0 +1,33 @@
243+import re
244+import abrek.testdef
245+
246+class RenderBenchParser(abrek.testdef.AbrekTestParser):
247+ def parse(self):
248+ PAT1 = "^Test: (?P<test_case_id>.*)"
249+ PAT2 = "^Time: (?P<measurement>\d+\.\d+)"
250+ filename = "testoutput.log"
251+ pat1 = re.compile(PAT1)
252+ pat2 = re.compile(PAT2)
253+ cur_test = None
254+ with open(filename, 'r') as fd:
255+ for line in fd:
256+ match = pat1.search(line)
257+ if match:
258+ cur_test = match.groupdict()['test_case_id']
259+ else:
260+ match = pat2.search(line)
261+ if match:
262+ d = match.groupdict()
263+ d['test_case_id'] = cur_test
264+ self.results['test_results'].append(d)
265+
266+ self.appendtoall({'units':'seconds', 'result':'pass'})
267+
268+RUNSTEPS = ["render_bench"]
269+
270+inst = abrek.testdef.AbrekTestInstaller(deps=["render-bench"])
271+run = abrek.testdef.AbrekTestRunner(RUNSTEPS)
272+parse = RenderBenchParser()
273+
274+testobj = abrek.testdef.AbrekTest(testname="render-bench", installer=inst,
275+ runner=run, parser=parse)
276
277=== added file 'abrek/test_definitions/timed_test_runner.py'
278--- abrek/test_definitions/timed_test_runner.py 1970-01-01 00:00:00 +0000
279+++ abrek/test_definitions/timed_test_runner.py 2010-09-29 14:13:40 +0000
280@@ -0,0 +1,51 @@
281+import abrek.testdef
282+import pty
283+import time
284+import os
285+import subprocess
286+import signal
287+import sys
288+
289+class TimedTestRunner(abrek.testdef.AbrekTestRunner):
290+ """Test runner class for running tests for specific amounts of time
291+
292+ steps - list of steps to be executed in a shell
293+ runtime - list of runtimes for each step
294+ info - informational string to append to start/end markers
295+ """
296+ def __init__(self, steps=[], runtime=[], info=[]):
297+ super(TimedTestRunner, self).__init__(steps)
298+ self.runtime = runtime
299+ self.info = info
300+
301+ def _runsteps(self, resultsdir, quiet=False):
302+ outputlog = os.path.join(resultsdir, 'testoutput.log')
303+ for (cmd, runtime, info) in zip(self.steps, self.runtime, self.info):
304+ # Use a pty to make sure output from the tests
305+ # is immediately available (line-buffering)
306+ (pid, ptyfd) = pty.fork()
307+ if pid == 0:
308+ try:
309+ print "$Start AbrekTest$ %s" % info
310+ subprocess.call(cmd, shell=True)
311+ except:
312+ pass
313+
314+ print "$End AbrekTest$ %s" % info
315+ sys.exit()
316+ else:
317+ time.sleep(runtime)
318+ os.kill(pid, signal.SIGINT)
319+ with open(outputlog, 'a') as fd:
320+ out = os.read(ptyfd, 1024)
321+ while out != "":
322+ fd.write(out)
323+ out = ""
324+ try:
325+ out = os.read(ptyfd, 1024)
326+ except:
327+ pass
328+
329+ os.close(ptyfd)
330+
331+
332
333=== added file 'abrek/test_definitions/x11perf.py'
334--- abrek/test_definitions/x11perf.py 1970-01-01 00:00:00 +0000
335+++ abrek/test_definitions/x11perf.py 2010-09-29 14:13:40 +0000
336@@ -0,0 +1,39 @@
337+import re
338+import abrek.testdef
339+
340+x11perf_options = "-repeat 3"
341+
342+x11perf_tests = [
343+ # Antialiased text (using XFT)
344+ "-aa10text",
345+ "-aa24text",
346+
347+ # Antialiased drawing (using XRENDER)
348+ "-aatrapezoid300",
349+ "-aatrap2x300",
350+
351+ # Normal blitting
352+ "-copypixwin500",
353+ "-copypixpix500",
354+
355+ # Composited blitting
356+ "-comppixwin500",
357+
358+ # SHM put image
359+ "-shmput500",
360+ "-shmputxy500",
361+
362+ "-scroll500",
363+ ]
364+
365+RUNSTEPS = ["x11perf %s %s" % (x11perf_options, " ".join(x11perf_tests))]
366+PATTERN = "trep @.*\(\W*(?P<measurement>\d+.\d+)/sec\):\W+(?P<test_case_id>.+)"
367+
368+inst = abrek.testdef.AbrekTestInstaller(deps=["x11-apps"])
369+run = abrek.testdef.AbrekTestRunner(RUNSTEPS)
370+parse = abrek.testdef.AbrekTestParser(PATTERN,
371+ appendall={'units':'reps/s',
372+ 'result':'pass'})
373+
374+testobj = abrek.testdef.AbrekTest(testname="x11perf", installer=inst,
375+ runner=run, parser=parse)
