Automated Coder Reviews ... a Roadmap

As I've mentioned in other posts, I've been working on 'enabling automated coder reviews for New Project applications' as one of my personal Drupal initiatives. As things progress, however, I continue to run into a domino-like series of prerequisites which must be solved before this feature can become a reality.

I'll be using this post to lay out a roadmap of tasks which need to be completed in order to reach the 'holy grail' of automated coder testing, and hopefully update the status of each item as they progress.

Update PIFR code to Enable Coder Testing

Prior to the 'Great Git Migration', Coder-based testing had been developed, tested, and subsequently disabled (because virtually no module could pass a full Coder + Coder Tough Love review). Since it was not being used, the Coder review component of the testing infrastructure wasn't given any attention during the migration, and required some tweaking to get it working again in the new environments.
Current Status: Coding Complete, Code Updates Deployed to Production
September 26 update: Essentially complete

Enable Advisory Tests

Because of the large number of exceptions that Coder generates, we need to update the testing infrastructure to support 'Advisory' reviews ... in other words, reviews which can return result details, but do not affect the pass/fail status of the overall patch/branch/project review. The issue for this initiative is located here.
Current Status: Coding Complete, Initial tests passed, Pending additional testing and review.
September 26 update: Patch @ pending RTBC
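To illustrate the idea of an advisory review (this is a hypothetical sketch, not the actual PIFR code), the advisory review still contributes its detail messages to the result, but is excluded when the overall pass/fail status is computed:

```python
# Hypothetical sketch of advisory vs. blocking review aggregation;
# names and data shapes are illustrative, not the real PIFR structures.

def overall_status(reviews):
    """Pass/fail is decided only by non-advisory (blocking) reviews.

    Each review is a dict like:
      {'name': 'coder', 'advisory': True, 'passed': False, 'details': [...]}
    """
    blocking = [r for r in reviews if not r.get('advisory')]
    return all(r['passed'] for r in blocking)

reviews = [
    {'name': 'simpletest', 'advisory': False, 'passed': True,  'details': []},
    {'name': 'coder',      'advisory': True,  'passed': False,
     'details': ['Line 12: missing doc comment']},
]

# Coder failed, but since it is advisory, the patch still passes overall.
print(overall_status(reviews))  # True
```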

Deploy Coder Environment on Production Testbots

Once the advisory reviews are ready to enable on the production testing infrastructure, we'll need to make some configuration tweaks on the testbots to enable them to perform Coder runs. There are currently a couple of outstanding items related to this.

First, once this is enabled, both SimpleTest AND Coder results need to be completed before a test status is returned to d.o ... if we only enable Coder on a single testbot, a failure on that testbot causes test results to freeze until a new Coder environment becomes available. Thus, production deployment is an all-or-nothing item.
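The constraint above can be pictured as a simple gate: no status goes back to d.o until every enabled environment has reported in. A minimal sketch, with hypothetical names:

```python
# Hypothetical sketch of the completion gate; not actual PIFR code.

REQUIRED_ENVIRONMENTS = {'simpletest', 'coder'}

def ready_to_report(completed):
    """Only report a status back to d.o once every required environment
    has finished for this test request.

    `completed` is the set of environment names that have results.
    """
    return REQUIRED_ENVIRONMENTS <= set(completed)

print(ready_to_report({'simpletest'}))           # False: Coder still pending
print(ready_to_report({'simpletest', 'coder'}))  # True
```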

Secondly, the daily client certification process for a testbot with both SimpleTest and Coder environments enabled may be somewhat less dependable than one with only SimpleTest enabled. Certification does usually work, but I have seen higher than expected certification failure rates on my local testbot. This may simply be because having two environments doubles the potential break points in the certification process ... but more investigation and testing would be prudent.
Current Status: Initial Testing in scratch passed, additional testing and review recommended.
September 26 update: Pending above, Needs communication plan prior to rollout, to avoid confusion with Coder comments appearing in issue patch statuses.

Enabling testing of Sandboxes

Currently, the testing infrastructure uses Release Node IDs as a unique key in a number of places during the communication between d.o, qa.d.o, and the testbots. While this was fine for CVS, the lack of release support in sandbox projects means that the current testing infrastructure cannot support sandbox testing.

There are two potential resolutions to this issue:

1. Enable support for releases in sandbox projects

Enabling sandbox releases would allow the current testing infrastructure to work without any code changes. I suspect this approach would basically consist of reversing the patch located here, and hiding the 'downloads' table from sandbox project pages (to make it more difficult for end-users to download sandbox code). I've also proposed limiting sandboxes to the creation of '-dev' releases only, which would require additional coding.
Current Status: Pending discussion/decisions
September 26 update: Essentially rejected - looking instead at option 2

2. Refactor the Testing Infrastructure to key off of VCS Label IDs instead of Release Node IDs

Because the new version control system has a unique ID for each branch/tag in every repository, these IDs would make a good unique identifier for replacing the release node IDs ... and would also support sandboxes. This requires some more disruptive coding changes in PIFT (and possibly PIFR).
Current Status: Pending development. Sandbox code (not quite working) here, with notes/approach documented here.
September 26 update: No update. Work queued behind Project Dependency deployment on d.o.
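The refactoring essentially amounts to swapping the key used to identify what is being tested. As a rough illustration (hypothetical structures, not the real PIFT/PIFR schema):

```python
# Hypothetical illustration of keying test results by VCS label ID
# instead of release node ID; not the real PIFT/PIFR schema.

results_by_release_nid = {}   # old scheme: fails for sandboxes (no releases)
results_by_label = {}         # new scheme: works for any branch/tag

def record_result(repo_id, label_id, status):
    # Every branch/tag has a unique label ID in the new version control
    # system, so (repo_id, label_id) identifies a testable target even
    # for sandbox projects that have no release nodes.
    results_by_label[(repo_id, label_id)] = status

record_result(repo_id=1234, label_id=42, status='pass')
print(results_by_label[(1234, 42)])  # pass
```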

Contrib Project Dependencies

Since mid-January, calculation of contrib module project dependencies hasn't been working ... I suspect that something accidentally got dropped during the Git Migration. To properly test contrib projects, the testing infrastructure needs to be able to figure out what dependencies to test/load for a given review. A new module (Project dependency) has been developed to address this issue, and testing/d.o deployment is the target of rfay's code sprint next Friday.
Current Status: Coding Complete, pending further testing, integration, and d.o deployment.
September 26 update: deployment imminent (this week?)
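Conceptually, dependency calculation means parsing each module's .info file for its dependencies[] lines and walking the results transitively. A toy sketch (the real Project dependency module is considerably more involved, and the module data here is made up):

```python
import re

# Toy sketch of contrib dependency calculation from .info files;
# module names and file contents are illustrative only.

INFO_FILES = {
    'views':  "name = Views\ndependencies[] = ctools\n",
    'ctools': "name = Chaos tools\n",
    'panels': "name = Panels\ndependencies[] = views\ndependencies[] = ctools\n",
}

def parse_dependencies(info_text):
    """Extract the dependencies[] = ... lines from a .info file."""
    return re.findall(r'^dependencies\[\]\s*=\s*(\S+)', info_text, re.M)

def all_dependencies(module, seen=None):
    """Transitively collect everything that must be enabled for `module`."""
    seen = set() if seen is None else seen
    for dep in parse_dependencies(INFO_FILES.get(module, '')):
        if dep not in seen:
            seen.add(dep)
            all_dependencies(dep, seen)
    return seen

print(sorted(all_dependencies('panels')))  # ['ctools', 'views']
```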

Add 'On-demand' Test Triggers on d.o

The current testing infrastructure assumes that if a project has a release, it's going to be (relatively) stable. Therefore, enabling testing on a project is as simple as setting a checkbox; and every patch and commit will trigger a re-test.

The same is not true for sandbox projects, which are likely to be unstable and/or outright broken more often than not. As such, the current 'always-on' testing triggers won't really work. Instead, sandbox testing should be handled 'on-demand', which in turn requires a new interface or process on d.o from which to trigger these tests.

My initial thought was a new 'testing' tab on the project page, which would contain the latest SimpleTest and Coder status messages for each branch of the repository which has been tested, along with a form which could be used to trigger new tests. An initial stab at some code (basically a copy of the drupalorg_gitinstructions module) can be found in the sandbox here. However, I quickly got stalled, as most of the information is stored on qa.d.o and not d.o itself ... so I need to re-evaluate what data is currently available in the pift database tables (on d.o), what new data we want to pull over from the PIFR tables (on qa.d.o), and whether we want to replicate the parts of the qa.d.o data we're interested in inside the d.o database ... or generate a new XML-RPC call to facilitate the information transfers.
Current Status: Some (very early) code started, needs strategy and UI design discussions.
September 26 update: Patch pending review at, which will enable this for projects with releases. Needs RTBC and d.o deployment, after which it will require some refactoring to support sandboxes (coordinated with the rest of the testing infrastructure refactoring ...)
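Whichever transport is eventually chosen (replication or a new XML-RPC call), the d.o side ultimately needs an upsert of per-branch test status. A hypothetical sketch of that local cache (none of these table or field names are real):

```python
# Hypothetical sketch of caching qa.d.o test status on d.o, keyed by
# branch; the field names and data shapes are invented for illustration.

local_status = {}  # d.o-side cache: branch -> latest result summary

def sync_from_qa(rows):
    """Merge a batch of status rows pulled from qa.d.o (via replication
    or an XML-RPC call) into the local cache, keeping only the newest
    row for each branch."""
    for row in rows:
        key = row['branch']
        current = local_status.get(key)
        if current is None or row['timestamp'] > current['timestamp']:
            local_status[key] = row

sync_from_qa([
    {'branch': '7.x-1.x', 'result': 'pass', 'timestamp': 100},
    {'branch': '7.x-1.x', 'result': 'fail', 'timestamp': 90},
])
print(local_status['7.x-1.x']['result'])  # pass
```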


With regard to the sandbox project releases item above, there may be an option to allow 'unpublished' release nodes on sandboxes. The work required to do this includes:

Updates to project_release.module:

  • project_release_alter_project_settings_form
    • Add 'Publish sandbox project releases' setting
  • project_release_release_nodeapi:
    • add 'case: Presave':
      • if the project is a sandbox and 'allow unpublished sandbox releases' is true, unset the node's published flag ($node->status)
      • call drupal_set_message() explaining that the release has been created, but release nodes are not published for sandbox projects, and the release will not be visible to non-maintainers.
  • Update the node/%/edit/releases text.
    • if 'allow unpublished sandbox' is enabled, hide the supported versions table and provide an explanation that sandbox releases are not published/visible to non-privileged users, but may be created in preparation for 'full project' promotion (in which case they may need to be published manually), or for use by other components (such as the automated testing infrastructure).

Update the 'promote to full project' code to publish any unpublished sandbox releases (since maintainers don't have access to the publish checkbox themselves).
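The presave rule above boils down to a small conditional. Sketched in Python for clarity (the real implementation would be a nodeapi hook in project_release.module; the function and field names here are hypothetical):

```python
# Python sketch of the proposed presave logic for sandbox releases; the
# real code would be a PHP nodeapi hook, and these names are invented.

def presave_release(node, project):
    """Unpublish release nodes created in sandbox projects when the
    'allow unpublished sandbox releases' setting is enabled, and return
    the status messages that would be shown to the maintainer."""
    messages = []
    if project['is_sandbox'] and project['allow_unpublished_sandbox_releases']:
        node['published'] = False
        messages.append(
            'The release has been created, but release nodes are not '
            'published for sandbox projects and will not be visible to '
            'non-maintainers.')
    return node, messages

node, messages = presave_release(
    {'title': '7.x-1.x-dev', 'published': True},
    {'is_sandbox': True, 'allow_unpublished_sandbox_releases': True})
print(node['published'])  # False
```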

