Revisiting Drupal's "New Project Application" Process (Part 3)

Once again, it's been a few weeks since my last post on this topic, where I promised to outline a new vision of how a revamped process might look. But before I dive into it, I should mention that this is just one person's view of what the process could be, based on experience with the current process and a number of individual conversations with other members of the Drupal community. It is not intended as a definitive version of the new model, but rather as a base for further discussion.

I'll provide a flowchart of the full process proposal at the bottom of this article ... but for now, I'll break it into individual concepts in the interest of keeping things easily digestible.

Concept 1: Separation of 'code approval' from 'developer approval'

One of my long-standing criticisms of the existing Project Application process is the fact that approval is 'all-or-nothing': if your module doesn't get approved then you don't get approved ... or, alternatively, there is no way to approve a simple module without also giving the author free rein in contrib, even though the module may not contain enough code to demonstrate programming competence and knowledge of Drupal APIs/licensing/security requirements. One of the suggestions which came out of the post-DrupalCon discussions on this topic was the development of a 'quiz' to validate a user's knowledge of Drupal's APIs and common security issues. To me, this would serve as a great tool to validate the 'developer approval' side of this equation.

Thus, the first piece of this proposal:

User passes 'Full Project Certification' quiz --> User is granted the 'Create Full Projects' permission
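
To make the intent concrete, here's a minimal sketch of that gate. The User class, the pass threshold, and the permission string are all hypothetical placeholders of my own, not anything from drupal.org's actual codebase:

```python
# Minimal sketch of Concept 1: passing the certification quiz grants the
# global permission. All names here are hypothetical placeholders.

QUIZ_PASS_THRESHOLD = 0.8  # assumed pass mark; the real cutoff is up for debate


class User:
    def __init__(self, name):
        self.name = name
        self.permissions = set()


def handle_quiz_result(user, score):
    """Grant 'Create Full Projects' on a passing quiz score."""
    if score >= QUIZ_PASS_THRESHOLD:
        user.permissions.add('create full projects')


alice = User('alice')
handle_quiz_result(alice, 0.9)
assert 'create full projects' in alice.permissions
```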

Concept 2: Review Automation

In all of my discussions, this has been the one piece that everyone agrees on ... we need to automate as many of the review process checks as possible. The logical place to enforce these checks is just before the project is promoted to 'full project' status. I see these automated checks including the following (a rough sketch of a few of them appears after the list):

1. Automated Coder Reviews (Must pass cleanly ... but at what Coder severity level?)
2. Automated XSS Security Scans (http://drupal.org/node/786856)
3. Namespacing Review (ensure functions are namespaced properly, verify project shortname uniqueness)
4. Licensing Review (scan for license.txt files, inclusion of external libraries ... how?)
5. Documentation Review (doc blocks, presence of README.txt, etc.)
6. Repository Scan (repository contains code, key Drupal module files such as .info/.module present)
7. Demonstrated Basic Git Understanding (maintainer has successfully created a -1.x branch and a -1.0 tag)
etc.
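
To give a feel for how simple most of these could be, here's a rough, tool-agnostic sketch of a few of the checks, run against a local checkout of a sandbox. The function names and return conventions are my own invention; a real implementation would hook into drupal.org's testing infrastructure:

```python
# Rough sketches of checks 3, 5, 6, and 7 from the list above, run against
# a local checkout of a sandbox. Function names and conventions are my own.
import os
import re
import subprocess


def check_readme(path):
    """Check 5 (partial): a README.txt must exist in the project root."""
    return os.path.exists(os.path.join(path, 'README.txt'))


def check_module_files(path, shortname):
    """Check 6 (partial): the key Drupal module files must be present."""
    return all(os.path.exists(os.path.join(path, shortname + ext))
               for ext in ('.info', '.module'))


def check_namespacing(path, shortname):
    """Check 3 (partial): every function in the .module file must be
    prefixed with the project shortname."""
    with open(os.path.join(path, shortname + '.module')) as fh:
        names = re.findall(r'^function\s+(\w+)\s*\(', fh.read(), re.M)
    return all(name.startswith(shortname + '_') for name in names)


def check_git_basics(path):
    """Check 7: at least one -1.x release branch and one -1.0 tag exist."""
    branches = subprocess.run(['git', 'branch', '--list'], cwd=path,
                              capture_output=True, text=True).stdout
    tags = subprocess.run(['git', 'tag', '--list'], cwd=path,
                          capture_output=True, text=True).stdout
    return '-1.x' in branches and '-1.0' in tags
```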

As the above checks would help to improve the overall code quality of contrib projects, I see them being applied against ALL sandboxes before promotion, not just those from new contributors. This raises the question of non-typical projects which have a valid reason for failing one of the automated tests (or which trip false positives) ... which I will address in a bit.

For now, the 'typical' process for the majority of people (those who already have full project access) is outlined in the next image.

Concept 3: Users Without "Create Full Projects" Permissions

The above process is fine for existing contributors, but how do we handle the folks who do not have the "Create Full Projects" access? They do have the option of attempting the aforementioned challenge quiz, but making this mandatory would introduce a new (and significant!) barrier to entry for new contributors ... and we're trying to remove roadblocks, not put more up!

So instead, once a user's project passes all of the automated tests, we have them create a ticket with their formal "Project Submission" notice in an issue queue. Here, human volunteers pick up the ticket and perform a sanity check on the following:

  1. Validate that the automated test results look correct (i.e. yup, they all passed!)
  2. Ensure the sandbox project page has a reasonable description of the project (i.e. the project page contains more text than "this is my sandbox i'll add description later")
  3. Ensure the sandbox actually contains a project (i.e. to discourage 'namespace squatting' and 'Just_for_Spam' projects)
  4. Ensure the applicant's chosen namespace makes sense for the project (e.g. "mydrupalmodule" might pass a namespace uniqueness test, but let's keep it out of contrib)
  5. Ensure the applicant's chosen namespace doesn't conflict with existing projects (i.e. avoid ending up with three different projects using the entityapi, entity_api, and entities_api namespaces ... a naive sketch of such a check appears after this list)
  6. Ensure this isn't simply image slideshow module #374. (Allow 'duplicate' modules to encourage innovation ... these are intended as 'sanity checks', not 'justification points'!)
  7. Ensure the code doesn't include any external libraries.
  8. Ensure the code repository isn't totally gitfucked.
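
As promised in point 5, here's a naive illustration of what a namespace-conflict check could look like. The normalisation rules (strip separators, crude de-pluralisation) are purely my own guess at what should count as a 'conflict', and the project list is stand-in data:

```python
# Naive namespace-conflict check for point 5 above: entityapi, entity_api,
# and entities_api should all collide with one another.
import re

EXISTING_PROJECTS = {'entity_api', 'views', 'ctools'}  # stand-in data


def _singular(word):
    """Crude de-pluralisation: entities -> entity, views -> view."""
    if word.endswith('ies'):
        return word[:-3] + 'y'
    if word.endswith('s'):
        return word[:-1]
    return word


def normalise(shortname):
    """Lowercase, split on separators, de-pluralise, and re-join."""
    words = re.split(r'[_-]+', shortname.lower())
    return ''.join(_singular(w) for w in words)


def conflicts_with_existing(candidate):
    return normalise(candidate) in {normalise(p) for p in EXISTING_PROJECTS}


print(conflicts_with_existing('entityapi'))     # True
print(conflicts_with_existing('entities_api'))  # True
print(conflicts_with_existing('rules'))         # False
```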


Concept 4: Why Keep the Queue?

Okay ... I know what you're thinking.

"This looks alot like the 'Project Applications Queue' ... the same painful, overloaded, understaffed, and unappreciated issue queue that got us all talking about this in the first place. It's still a bottleneck ... You can't just change the label to *"Project Submission"* and expect everything to change. Just give up and scrap the entire queue already!"

I know it sounds crazy right now ... but please read on. There are a few good reasons.

  1. First of all, the name change is simply to help reinforce the fact that this is no longer a 'project review' queue. The checks listed above are intended as quick sanity checks before rubber-stamping the application with an RTBC status.
  2. While it doesn't really lend itself to a full 'mentoring' opportunity, the human touch point does allow an opportunity to provide direction and correction for things the applicant may not fully understand, but which cannot be caught via automated tests. (e.g. "No, you don't need to create a new branch/tag after every commit!")
  3. Automated processes are great ... until you don't fit the assumptions they are built around, at which point they become the largest PITA. If a particular module gets blocked by the automated tests due to a testing bug or false positive, or contains code which falls outside the assumptions the tests are based on, then the maintainer can use the issue queue to explain the situation and request an exception ... allowing folks with 'full project access' an alternative when the automated testing says 'omgwtf? kthxbai!'.

So once the automated tests are in place, we mark all existing applications as 'postponed', with instructions to the applicants to re-apply once their module passes the automated testing requirements. (In fact, I'd suggest creating a new queue, and closing off the existing CVS and Project App queues entirely after directing existing applicants to the new process). But how do we keep the queue from growing out of control again?

Concept 5: Application Auto-pruning

As long as the number of open issues is kept at a manageable level, and the action required to address an issue is relatively minor, issue queues are actually very efficient. With the existing Project Application queue, however, making progress on an issue often took an hour or so of the reviewer's time, and sometimes days of effort on the part of the applicant ... and most tickets needed to be revisited numerous times before being closed off. As a result, the number of open issues in the queue grew to the point where the reviewers could no longer keep up, and the sheer volume began to serve as a demotivator for people volunteering their time to perform reviews.

To prevent this scenario from recurring, there is a need to ensure that i) applications can be vetted and approved with a minimal time commitment, and ii) a process is put in place to prevent unattended applications from slipping through the cracks.

Moving from a 'review' process to a 'sanity check' helps to satisfy the first condition. To satisfy the second condition, any application which goes idle (no comments or updates) for two weeks, and has not been marked as 'needs work', 'postponed', or 'closed (won't fix)', is automatically marked RTBC.

The theory here is that auto-pruning the queue in this manner will keep its size at a manageable level, while the two-week delay gives reviewers a reasonable chance to revisit applications flagged as having potential issues before they are pushed through (without permanently blocking an application when that revisit doesn't happen).
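
Expressed as a sketch of a scheduled job, the pruning rule is simple. Note that the field names and the dict-based issue structure below are hypothetical, not drupal.org's actual schema:

```python
# Sketch of the auto-RTBC rule: idle for two weeks and not in a blocking
# status means the submission gets pushed through.
from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(weeks=2)
BLOCKING_STATUSES = {'needs work', 'postponed', "closed (won't fix)"}


def prune(issues, now=None):
    """Run periodically; marks idle, unblocked submissions as RTBC."""
    now = now or datetime.utcnow()
    for issue in issues:
        idle = (now - issue['last_updated']) >= IDLE_LIMIT
        if idle and issue['status'] not in BLOCKING_STATUSES:
            issue['status'] = 'reviewed & tested by the community'
```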

Concept 6: Per-Project Promotion (without the 'Create Full Projects' permission)

The last required piece of this puzzle is decoupling the 'Promote Project' link from the 'Create Full Projects' permission ... instead enabling it on a per-project basis. When someone (who? would this require 'full project' rights, or a Git admin?) marks a Project Submission as 'fixed', this should trigger the appearance of the 'Promote Project' link on that project ... but the exact timing of when the project is promoted to official contrib status should still be left in the control of the applicant/maintainer.
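
One way the decoupled access check could look is sketched below. Every name here is a hypothetical illustration of the idea, not existing drupal.org code:

```python
# Sketch of Concept 6: the 'Promote Project' link unlocks via the global
# permission OR a per-project approval flag set when the submission ticket
# is marked 'fixed'. All names are hypothetical.

def on_submission_fixed(project):
    """Called when a Project Submission ticket reaches 'fixed'."""
    project['promotion_enabled'] = True


def can_promote(user_permissions, project):
    """Show the 'Promote Project' link for this user/project pair?"""
    return ('create full projects' in user_permissions
            or project.get('promotion_enabled', False))


sandbox = {'name': 'mymodule'}
on_submission_fixed(sandbox)
assert can_promote(set(), sandbox)  # applicant can now promote at will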

Putting it all together ...

Combining the above concepts/flowcharts gives us the following result ... a complex and convoluted flowchart, which should hopefully result in a much simpler process. Comments, opinions, and feedback encouraged!!!


Comments

After finally reading everything, I'd have to say that your process goes even further than what I was thinking and even has a shortcut that would help significant supporters be unhindered.

The one tiny change I would suggest would be to add a qualification on the Quiz to Create Full Project part of the flow chart: the user must have gone through the 'Submit a Project' process either a certain number of times, or managed to get through it without needing major changes for approval.

Theoretically the automated tests should force most people to take enough care when creating a new project, but there are always going to be people that need to be pushed to take the time to get it right.
