# WHAT - AutoQA discussion - Making results maintainer friendly
# WHO - jlaska, tflink, vhumpa, kparal
# WHERE - http://openetherpad.org/AutoQA-0-5-0-brainstorm and teleconference
# WHY - "because we care to send the very best"

= Proposed Agenda =

# What is the most important triage information for maintainers?
# What could our logs look like?
#* Depcheck
#* Upgradepath
# When should we be sending emails to maintainers?

= Minutes =

== Most important triage information for maintainers ==

- what failed
- what passed
- the reason why
- link to test documentation
  -> not like this: http://fedoraproject.org/wiki/QA:Depcheck_Test_Case
  -> maintainer-oriented documentation (most common failure causes, how to resolve them, how to find them in the log)

== What could our logs look like ==

- HTML logs?
  -> HTML upgradepath mockup: http://kparal.fedorapeople.org/autoqa/autoqa%20test%20output.png
  -> sample HTML result from autotest-web failures: http://jlaska.fedorapeople.org/screenshots/Screenshot.png
- Would be easier to do with upgradepath?
  -> easier to highlight the relevant lines
- Might be able to do algorithmically for depcheck?
  -> harder to do accurately?

Plaintext mockups:
http://tflink.fedorapeople.org/autoqa/log_reformat/depcheck/tflink_depcheck_log_mockup.txt
http://tflink.fedorapeople.org/autoqa/log_reformat/upgradepath/tflink_upgradepath_log_mockup.txt

High-level proposals:
- HTML output with the highlights and the first 1k lines (or so) of the log output
- Drill down into separate logs for different packages/updates - for either plaintext or HTML

Design goals:
- Minimize first-result page scrolling for maintainers (doesn't apply to detailed logs)
  -> for depcheck we just cut out the important part ("xxx has depsolving problems")
     -> link to the full output log (without autotest debug messages)
  -> for upgradepath we select just the few lines concerning the relevant packages
     -> a link to the full output log may not be necessary
  -> i.e. we will have 3 logs: a "pretty log" for maintainers, a "full log" with the whole test output, and a "debug log" with autotest debug messages
- Provide a solution in $time_frame
  -> like the idea of spending time exploring different solutions, then deciding on the best option to pursue within the given time frame

Agreed:
* provide maintainer-oriented documentation for test cases
* plain text is a must-have; HTML is a nice-to-have for the next release
* the 3-log solution seems ideal for now (pretty, full, and debug)
* gather test result usage data to estimate the disk space required to hold 30 days of test results
* desired default bodhi email notification frequency:
  - don't send a notification until *all* tests have completed for an update ... ✔
  - don't send any email if all tests pass ... ✔; make it configurable, and ask experienced people (jlaska mentioned some)
  - don't re-send a notification for repeated FAIL results within 3 days ... ✔; make it configurable
  - *send* a notification if a result changes (PASS->FAIL or FAIL->PASS) ... ✔
    -> send it whenever any _test_ result changes state; make it configurable if reasonable
  - opt-in support to get all notifications?
    -> wait for feedback from 0.5.0 and react accordingly in 0.6.0 if necessary
* priority:
  1. notification frequency (depends on bodhi)
  2. pretty logs ... then release
  3. other stuff ...

== Next Steps ==

* autoqa-0.5.0
  * 298 - test.py - split postprocess_iteration reporting into standalone methods
  * 314 - decrease the volume of 'PASSED' email sent to maintainers from bodhi
  * 315 - create per-item logs for multi-item tests
  * 316 - depcheck: extract the possible cause of a failure
  * 317 - document depcheck and upgradepath
  * 318 - provide access to test documentation
  * 319 - create HTML log output if possible
  * 321 - figure out how much disk space is needed to store 1 month of test results
* jlaska - reach out to dmalcolm and esantiago for thoughts on maintainer result notification frequency
* create mockups for plaintext logs ("pretty log")
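The agreed notification rules could be sketched as a single decision function. This is only an illustration of the minutes, not actual AutoQA or bodhi code; all names (`should_notify`, `RESEND_INTERVAL`, the result dicts) are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the agreed default bodhi notification rules.
# Names and data shapes are illustrative, not real AutoQA/bodhi code.
RESEND_INTERVAL = timedelta(days=3)  # configurable per the minutes


def should_notify(results, prev_results, last_fail_mail, now):
    """Decide whether to mail the maintainer about an update's test results.

    results        -- dict of test name -> 'PASS'/'FAIL' (or an in-progress state)
    prev_results   -- the same dict from the previous run, or None on first run
    last_fail_mail -- datetime of the last FAIL notification, or None
    now            -- current time
    """
    # Rule: wait until *all* tests have completed for the update.
    if any(r not in ('PASS', 'FAIL') for r in results.values()):
        return False
    # Rule: always notify when any test result changes state.
    if prev_results is not None and results != prev_results:
        return True
    # Rule: stay silent when everything passes.
    if all(r == 'PASS' for r in results.values()):
        return False
    # Rule: suppress repeated FAIL mails for 3 days.
    if last_fail_mail is not None and now - last_fail_mail < RESEND_INTERVAL:
        return False
    return True
```

For example, a FAIL result re-sent one day after the last FAIL mail is suppressed, while the same result re-sent a week later goes out again.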
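The "pretty log" idea (cut out the important part, drop autotest debug chatter) could be approximated by a simple per-package filter over the full log. A minimal sketch; the `DEBUG` prefix and the depcheck message text shown in the test are assumptions, not the real log format:

```python
def pretty_log(full_log, package):
    """Extract only the lines relevant to one package from the full test log.

    Sketch of the maintainer-facing "pretty log": keep lines that mention
    the package (e.g. depcheck's "xxx has depsolving problems" block) and
    drop everything else, including autotest debug messages, which would
    live in the separate debug log.
    """
    keep = []
    for line in full_log.splitlines():
        if line.startswith('DEBUG'):
            continue  # assumed debug-line marker; goes to the debug log
        if package in line:
            keep.append(line)
    return '\n'.join(keep)
```

A per-item filter like this also lines up with ticket 315 (per-item logs for multi-item tests), since the same full log can be sliced once per package/update.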