Checking is a process of confirmation, verification and validation. All checks return binary "pass/fail" metrics and can be executed by machines.
Testing is a process of exploration, discovery, investigation and learning. Checking is a subset of testing, so all checks are tests, but not all tests are checks. In this context, checks can be automated and tests cannot.
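To make the "binary pass/fail" part of the definition concrete, here is an illustrative (hypothetical, not from Taskotron) check: any machine-runnable verification that reduces to a single pass/fail outcome.

```python
# Illustrative only: a "check" in this terminology is anything a machine
# can execute and reduce to a binary pass/fail result.
def rpm_name_check(filename):
    """Trivially check that a filename looks like an RPM package."""
    return filename.endswith(".rpm")
```

Exploratory testing, by contrast, produces observations and questions rather than a single boolean, which is why it resists this kind of automation.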
See [1], [2], [3] for more discussion on testing vs. checking.
A task is a unit of work that can be executed by Taskotron. It can be a check, but it doesn't have to be.
[1] Testing and Checking Refined
[2] Testing vs. Checking
[3] Tests vs. Checks: The Motive for Distinguishing
This isn't a talk about checking vs. testing, so I won't harp on this too much. The point is to make sure my terminology is understood, not to evangelize. There are links for further reading if folks are interested.
Why is it that we have so many things which would be beneficial to run at scale but no reasonable way to actually run them at scale?
Taskotron is first and foremost a system to enable Fedorans to automate tasks which should be executed on a pre-defined schedule. It is designed to do as much as it needs to but delegate the specialized functionality to appropriate code.
Of course, this includes the checks that used to run in AutoQA but it is in no way limited to just those things.
While Taskotron is similar in concept to AutoQA, it has been redesigned to work around many of the limitations we found over years of working on AutoQA.
To get a bit more into the realm of the concrete, Taskotron does:
Note that no test runner is specified: there is no requirement for a specific test runner, and as many runners could be supported as there are resources to support them.
At the time that Taskotron started, there was no other option that was:
This isn't quite absolute, but exceptions will be rare and made only if there are no other sane options.
python runtask.py -i foo-1.2-3.fc99 -t koji_build -a x86_64 \
    ../task-rpmlint/task-rpmlint.yml
TAP version 13
1..1
ok - $CHECKNAME for Koji build datagrepper-0.4.2-1.fc21.noarch.rpm
  ---
  details:
    output: |
      datagrepper.noarch: W: spelling-error Summary(en_US) webapp -> web app, web-app, weapon
      datagrepper.noarch: W: spelling-error %description -l en_US webapp -> web app, web-app, weapon
      datagrepper.noarch: W: spelling-error %description -l en_US api -> pi, ape, apt
      datagrepper.noarch: W: spelling-error %description -l en_US datanommer -> manometer
      1 packages and 0 specfiles checked; 0 errors, 4 warnings.
  item: datagrepper-0.4.2-1.fc21.noarch.rpm
  outcome: PASSED
  summary: RPMLINT PASSED for datagrepper-0.4.2-1.fc21.noarch.rpm
  type: koji_build
  ...
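Because the result is plain TAP, any consumer can recover the binary outcome without understanding the YAML details block. A minimal sketch (my own illustration, not Taskotron code) of pulling pass/fail out of a TAP stream:

```python
# Sketch: extract (passed, description) pairs from a TAP version 13 stream.
# Only the "ok" / "not ok" test lines are inspected; the indented YAML
# diagnostic block is ignored here.
def tap_outcomes(tap_text):
    """Yield (passed, description) for each test line in a TAP stream."""
    for line in tap_text.splitlines():
        if line.startswith("not ok"):
            yield False, line.partition("- ")[2]
        elif line.startswith("ok"):
            yield True, line.partition("- ")[2]

example = (
    "TAP version 13\n"
    "1..1\n"
    "ok - rpmlint for datagrepper-0.4.2-1.fc21.noarch.rpm\n"
)
```

This split, binary outcome up front and rich diagnostics in the YAML block, is what lets machine consumers and humans read the same report.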
From a high level, it isn't much more complicated:
- Scheduling from incoming fedmsgs
- Execution in a Buildbot job
- Reporting into ResultsDB
Scheduling is handled by taskotron-trigger and works off of fedmsgs.
Right now it's using simple per-message-type logic but that will be changing once we have more tasks and a better idea of what the scheduling needs are.
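The per-message-type logic can be thought of as a simple topic-to-task mapping. The sketch below is purely illustrative (the topic names and task mapping are hypothetical, not taskotron-trigger's actual configuration):

```python
# Hypothetical sketch of per-message-type trigger logic: each fedmsg topic
# the trigger cares about maps to the tasks to schedule for it.
TOPIC_TASKS = {
    "org.fedoraproject.prod.buildsys.build.state.change": ["rpmlint"],
    "org.fedoraproject.prod.bodhi.update.request.testing": ["depcheck"],
}

def tasks_for(topic):
    """Return the list of tasks to schedule for an incoming message topic."""
    return TOPIC_TASKS.get(topic, [])
```

A static mapping like this is easy to reason about, but as the paragraph above notes, it will be replaced once there are more tasks and the real scheduling needs are clearer.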
In the production instance, all tasks are executed with Buildbot. The process is similar to:
This is vague, but at the same time, we don't pretend to be experts in all of Fedora. The whole idea here is to enable automation as much as it is to get the work done. I'm not going to stand up here and claim to be the only authority on what needs to be automated; instead of trying to do all the automation ourselves, I want to see what ideas and approaches other Fedorans have.