Thoughts on Drupal Testbot Evolution
Having spent the better part of the last year in the PIFT/PIFR issue queues, chasing the holy grail of sustained testbot stability, I find myself at a bit of a crossroads.
Over time, I've developed quite a large testbot "capability wish list" of things I'd like to see from the testing infrastructure. And while I've managed to build up a reasonable testbot commit count, the truth is that I've focused primarily on quick fixes and low-hanging fruit to date. With most of those now out of the way, I look at the next few tasks at hand ... namely, i) support for sandbox testing, ii) development of new generic PIFR Clients for things like performance testing, CodeSniffer, or PHPUnit support, and iii) support for more granular client/environment selection per test, which would support concepts such as the one illustrated in this mockup. Each time I start down these paths, evaluating the potential database structure changes and the major re-architecting involved ... a little voice in the back of my mind pipes up, warning "Here be dragons!".
All great changes are preceded by chaos
With DrupalCon now only a week away, a number of exciting initiatives are finally starting to converge ... and with the D8 freeze dates announced, the rate of change is quickly increasing. Looking around the Drupal community, we see discussion on topics such as porting Project* (and Drupal.org) to D7, potentially replacing Simpletest with PHPUnit tests, performance testing, code coverage, and more ... all of which could prove rather disruptive to our neglected testbots.
Take the porting of Drupal.org to D7 as an example. While Project* has often been publicly maligned as the main roadblock to upgrading d.o to Drupal 7, very few of these discussions have even identified a D7 upgrade of PIFT as a release blocker for this initiative. While an initial D7 port does exist in the repository, it is unmaintained and at least a year out of date ... which means that as soon as the port of Project* is complete, we're going to need to port PIFT (and in a hurry!).
If we look at the current roadblocks standing between us and automated testing support for sandbox projects, the main concern lies within the Project* suite as well ... namely, the fact that project_issue does not support software versions on sandbox issue queues, meaning there is no way to tell the testbot which branch of a project the patch-under-test should be applied against. While this could be addressed today with a little extra coding, anyone with the requisite Project* knowledge is too focused on the D7 port to create the patch, especially considering that the effort would be thrown away once the D7 port is completed.
It is this same 'throwaway' concern that has me hesitating when it comes to developing new PIFR Clients at this point in time. The current implementation has an inherent 'all or nothing' limitation: we cannot turn on an environment for 'some' patches/projects without enabling that environment for 'all' patches/projects. Thus, each new PIFR client or environment increases the risk that an error in any one environment prevents us from obtaining results from *any* of the applicable environments for a given test. This is a fundamental limitation that requires some heavy re-design before we can realistically consider introducing additional PIFR clients and environments ... and if we're already looking at a rewrite of PIFT to support migrating d.o to D7, and our current Project* integration is about to change, I start to question whether the current architecture has a lifespan long enough to justify these efforts, relative to taking a 'fresh start' approach.
Fear, uncertainty, and discomfort are the compasses towards growth
Whichever approach we end up taking, I see the following as the key attributes of our testing infrastructure in the future:
- Stability & Availability
- Whether core or contrib, the entire community depends heavily on the testing infrastructure every day. Stability and availability are key.
- Simplicity
- History has shown that the complexity of the current implementation is a significant barrier to entry for new contributors and patch writers: even the most experienced community members steer clear of the testing infrastructure, due to a lack of knowledge of its internal workings and the steep on-ramp to gaining that knowledge.
- Maintainability
- Continued use of custom and complex components, kept running only through the heroic efforts of a chosen few, is not a recipe for long-term success. Hand in hand with the 'Simplicity' item above, we need to move towards a model that supports the easy on-boarding of new contributors, patch writers, and maintainers. This includes better leveraging existing contrib modules and patterns, and less custom (or 'magic') code.
- Decoupling of Environments/Clients
- We need to separate the concepts of 'environment' (e.g. MySQL or PostgreSQL) and 'client' (e.g. Simpletest or Coder), and provide a way to request a particular client (or subset of clients) for any given test request. In addition, I see a need to separate the reporting of results per environment/client, to eliminate today's interdependencies and the risk they introduce.
- Increased Testing Granularity
- While we would still want a full test run for any 'final' patch before committing, it would be an efficiency boost to be able to specify a subset of tests (or of test types) to invoke on any given test request during the ongoing development and evaluation of a particular patch. Likewise, the ability to limit a Coder run to a particular file might prove useful for projects under development.
- Interim Reporting
- Currently, the wait for a testbot response on a core Simpletest run is typically somewhere between 45 minutes and an hour. It would be nice if we could support batching the tests and providing some level of interim reporting, so that a failure in the first batch could be identified and corrected without waiting for the full test suite to complete.
- Flexible & Future Proof
- The future testing infrastructure should also be flexible enough to accommodate other items currently being discussed within the community, such as support for new testing frameworks, per-issue repositories and branches, and communication using something other than XML-RPC.
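To make the decoupling and granularity ideas above a little more concrete, here is a minimal sketch of what a test request could look like if environments and clients were selected independently and reported per pair. This is purely illustrative (the real implementation would live in PHP within PIFT/PIFR); all names here (`TestRequest`, `run_request`, the runner keys) are hypothetical, not existing APIs.

```python
from dataclasses import dataclass, field

@dataclass
class TestRequest:
    """A hypothetical test request naming its clients and environments explicitly."""
    project: str
    patch_url: str
    clients: list        # e.g. ["simpletest", "coder"] -- requested per test
    environments: list   # e.g. ["mysql", "postgres"]
    test_subset: list = field(default_factory=list)  # optional granularity

def run_request(request, runners):
    """Run each (environment, client) pair independently and report per pair.

    'runners' maps (environment, client) tuples to callables. A failure in
    one pair is recorded but no longer blocks results from the others,
    removing the current 'all or nothing' interdependency.
    """
    results = {}
    for env in request.environments:
        for client in request.clients:
            try:
                results[(env, client)] = runners[(env, client)](request)
            except Exception as exc:
                # Isolate the failure; the remaining pairs still report.
                results[(env, client)] = f"error: {exc}"
    return results
```

Under this model, a broken PostgreSQL environment would surface as one failed entry in the results, while the MySQL/Simpletest results would still come back intact.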
Even if you stumble, you are still moving forward
Admittedly, this is a fairly ambitious list ... and in proposing a 'fresh start' approach, I must also acknowledge that the probability of my developing a solution to meet the above goals is fairly slim, and next to zero if you add 'in a timely fashion' to the equation. But even with this in mind, consider the Warren Buffett quote: "In a chronically leaking boat, energy devoted to changing vessels is more productive than energy devoted to patching leaks" ... and as someone who's been patching leaks for the last six months, I'm starting to think it may be time to at least start scanning the ocean for another ship. :)
As for what's next, my desire is to come out of DrupalCon with a solid plan and roadmap for the future, and I will be seeking out a number of individuals for their thoughts on the matter during the conference. While we likely won't be able to find a time that works for everyone, I hope to host a BoF centered on what the various evolutionary options might look like from a testing-infrastructure point of view, and will continue the conversation during Friday's sprints. I know there are a number of opinions on this subject, and I'm willing to step up and act as the collection point for your thoughts and comments ... but I will also be looking for others willing to step up and help out, both in defining the roadmap and in turning the dream into reality.
Please consider joining the conversation and adding your own thoughts to the discussion. In the interest of keeping the discussion and follow-up within the drupal.org environment (for tracking and history purposes), please use this issue to add your comments.