
Who’s Keeping Score?

By jcori In Community Stories, Help Articles

Posted September 2nd, 2016

We are in the process of rolling out our new scorecards for Topcoder. Take a peek at the new scorecard here and find out more by reading the help center documentation. To ensure the scorecard’s effectiveness, we took some existing challenges and ran a parallel review, testing our new draft scorecard against our existing scorecards. We also tested a member-suggested, more wide-open scorecard, but our tests showed that approach could cause more issues in review and appeals. Based on these results, we decided to use the new scorecard linked above.

In addition to the cleaner scorecard, the challenge specification process is being improved as well. Clearer specifications will make it easier for submitters and reviewers to ensure the client receives the work they need. Specifications will now enumerate the requirements, both major and minor, so that each requirement is called out explicitly. Reviewers will validate that each enumerated requirement has been covered by the submission. The deployment guide has also been revamped to remove the confusion around the old RTF or Word Doc template. Now you can write your deployment guide in markdown using our new template for deployment guides.
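For illustration only, a markdown deployment guide might look something like the sketch below. The section names here are hypothetical, not taken from the official template; refer to the actual template in the repo for the authoritative structure.

```markdown
# Deployment Guide

## Prerequisites
- Runtime and version your submission needs (e.g. Node.js, Java)
- Any required services (database, message queue, etc.)

## Configuration
| Variable | Description                  | Example |
| -------- | ---------------------------- | ------- |
| PORT     | HTTP port the app listens on | 3000    |

## Local Deployment
1. Install dependencies
2. Start the application

## Verification
Concrete steps a reviewer can follow to confirm each enumerated
requirement in the specification is satisfied.
```

A structure like this helps reviewers map the verification steps back to the enumerated requirements in the specification.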

We will be working on additional scorecards that are tailored toward different tech standards and technologies. Associating each scorecard with specific standards will help drive quality and make sure the best patterns and practices are followed. Please keep an eye on the topcoder-standards GitHub repo. We’ll be adding more information to this repo and the associated wiki. This is the community spot where we will host all templates and standards for challenges, and we’re always open to feedback, so please use GitHub issues if you have comments or suggestions.

Lastly, we want to thank all the members for their generous feedback, and our Community Advisory Board member and copilot callmekatootie for helping to drive this process. I know this is a sensitive area, but we think these changes will help drive quality reviews, minimize appeals, and produce quality outputs. That said, nothing’s carved in stone, so please keep the dialogue open and let us know what we can do as we continue to work through the review, scorecard, and overall Topcoder challenge process.