A transparent project on transparency?

Since my last blog post I’ve made some good progress on a set of survey questions (see the “Two Projects” tab) and also on understanding the overall lifecycle of this project. I’m grateful for the help of Dr. Jennifer Rice. As I was pondering how helpful she has been, it occurred to me that I didn’t actually know what I was asking her to do. I also wasn’t entirely sure what I was asking myself to do, and while I’ve done some work, I didn’t really know what was next. So I set out to draw a picture of the project that would show the sequence of events, the dependencies among them, and where I would need outside input. It ended up looking like this:

[Project lifecycle diagram — click the image to see it full-size, or click here for the latest version]


The diagram shows that I’ve created a survey, revised it for Bias and Loading, and revised it for consistency. Dr. Rice is currently reviewing it. Then I’ll open it up to friends and interested parties for their feedback, after which I plan to revise it again to take their comments into account. After that, Dr. Rice explained to me, the survey needs the approval of an Institutional Review Board, whose job it is to oversee research involving human subjects. Many concepts seem analogous to the steps I take as a programmer when creating an e-commerce or even HIPAA-compliant system: protect people’s identity and privacy, don’t collect personal information you don’t need, etc. In the diagram that is the “IRB Approval?” step. It is iterative because the IRB will make recommendations that need to be factored into the project somehow, or may need further explanation. When they are satisfied we can begin testing the survey by asking people to fill it out.

Simultaneously with the development of the survey, I have begun working on three instruments. They are “Scorecards” only in a colloquial sense, but that is what I call them. These instruments are each derived from a subset of the questions in the survey. All of the questions related to Gauging Transparency ended up in the Software Project Transparency Scorecard. All of the questions related to Personal Outcome will end up in the Personal Outcome Scorecard. Similarly there will be a Project Outcome Scorecard. I’ve only worked on the first one, but it will likely be the largest.

The scorecards provide a value for each possible answer to each question included in them. There are also often one or more disqualifying responses, and these are given a nil value so that they do not affect correlations, averages, etc. Most questions are of the type “Never, Rarely, Sometimes, Often, Always,” but sometimes Always is best and sometimes Never is best. Other questions have their own unique set of choices.
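
To make that concrete, here is a minimal sketch of how a response might be scored. The question IDs, value tables, and polarity flips below are placeholders I made up for illustration, not the actual scorecard coefficients:

# Hypothetical scoring sketch: each question maps every possible answer to a
# value, "reversed" questions flip the scale, and disqualifying answers score
# None (nil) so they stay out of totals, averages, and correlations.

FREQUENCY_VALUES = {"Never": -2, "Rarely": -1, "Sometimes": 0, "Often": 1, "Always": 2}

QUESTIONS = {
    # question id: (value table, disqualifying answers)
    "Q1_publish_roadmap":    (FREQUENCY_VALUES, {"Not applicable"}),
    "Q2_hide_known_defects": ({k: -v for k, v in FREQUENCY_VALUES.items()}, set()),  # here "Never" is best
}

def score_answer(question_id, answer):
    """Return the value for one answer, or None for a disqualifying response."""
    values, disqualifying = QUESTIONS[question_id]
    if answer in disqualifying:
        return None  # nil: leave it out of totals and correlations
    return values[answer]

def total_score(responses):
    """Sum the non-nil values for one respondent's answers."""
    scores = [score_answer(q, a) for q, a in responses.items()]
    return sum(s for s in scores if s is not None)

print(total_score({"Q1_publish_roadmap": "Often", "Q2_hide_known_defects": "Never"}))  # 1 + 2 = 3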

As a start, I have assigned consistent but arbitrary coefficients for every single response to every single question in the scorecard. This is an uninformed scorecard. That is, it might ask the right questions, but it doesn’t weight the answers with any insight into which responses are actually more important than others, or by how much.

One goal of the survey is to shed light on how transparency practices correlate with outcomes: specifically, which practices seem correlated with better outcomes, and which do not. I think the scorecard can be improved and made less arbitrary in a few ways (a rough code sketch follows the list):


1. Questions that have no significant correlation should be dropped from the scorecard.

2. Questions that have a strong correlation should get a higher coefficient. How much higher could be driven by the strength of the correlation relative to the strength of the other correlations.

3. Questions which correlate strongly with each other, but are not equivalent in what they are asking, define a subset of practices of interest.
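
Here is a rough sketch of what that tuning could look like, assuming each respondent’s per-question scores have already been computed and paired with an outcome score. The cutoff and the scaling rule are placeholders for illustration, not decisions the project has made:

import numpy as np

# Hypothetical tuning pass: given each question's per-respondent scores and an
# outcome score per respondent, drop weakly correlated questions and scale the
# coefficients of the rest by relative correlation strength.
# (Nil answers would need to be masked out before this step.)

MIN_ABS_R = 0.2  # made-up cutoff for "no significant correlation"

def tune_coefficients(question_scores, outcomes, base_coefficients):
    """question_scores: {question_id: [score per respondent]}
       outcomes: [outcome score per respondent]
       base_coefficients: {question_id: current arbitrary coefficient}"""
    correlations = {
        q: np.corrcoef(scores, outcomes)[0, 1]
        for q, scores in question_scores.items()
    }
    # 1. Drop questions with no meaningful correlation to the outcome.
    kept = {q: r for q, r in correlations.items() if abs(r) >= MIN_ABS_R}
    if not kept:
        return {}
    # 2. Scale the surviving coefficients by correlation strength relative to
    #    the strongest correlation that made the cut.
    strongest = max(abs(r) for r in kept.values())
    return {q: base_coefficients[q] * abs(r) / strongest for q, r in kept.items()}

Item 3 would be a separate pass over the same data, correlating question scores against each other rather than against the outcome, to find clusters of related practices.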


At the moment, with these arbitrary coefficients, the Software Project Transparency Scorecard is informed by 81 different questions. The worst score is -128 and the best score is +152. As I mentioned, this is with arbitrary coefficients. The number of questions included and the coefficients will change based on data derived from survey responses.
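
For what it’s worth, those bounds fall out of the per-answer values directly: the worst possible score is the sum of each question’s most negative non-nil value, and the best is the sum of each question’s most positive one. A small self-contained sketch, with invented value tables:

def score_bounds(value_tables):
    """Worst and best possible totals, ignoring nil (disqualifying) answers.
    value_tables: {question_id: {answer: value}} for non-nil answers only."""
    worst = sum(min(values.values()) for values in value_tables.values())
    best = sum(max(values.values()) for values in value_tables.values())
    return worst, best

# Toy example with two questions worth -2..+2 and -1..+3 respectively.
print(score_bounds({
    "Q1": {"Never": -2, "Sometimes": 0, "Always": 2},
    "Q2": {"No": -1, "Partially": 1, "Yes": 3},
}))  # (-3, 5)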

After the survey responses have been used to tune the scorecards, I think it would be possible to build a CGI tool that asks a series of questions and then uses the answers to make some conditional recommendations. For example: if you are seeing a problem with X, these practices have been correlated with projects that saw less of X. And if someone isn’t seeing a problem with X, even though their responses resemble those of people who did, then in their project it simply isn’t a problem and the tool has nothing to say about it. I don’t think this survey can show causality, so the strongest suggestion we can make is that a correlation exists, and that if a problem is perceived, these practices might be worth considering. Not cause and effect, and no guarantees: the goal is to propose relevant questions to consider.
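
The recommendation logic itself could be quite small; the CGI part would just be a thin wrapper around something like the sketch below. The problem names, practice names, and correlation table are invented placeholders, not findings from the survey:

# Hypothetical recommendation pass: only speak up about problems the respondent
# says they are actually seeing, and phrase it as a correlation, not a cause.

# problem -> practices that (in this made-up table) correlated with seeing less of it
CORRELATED_PRACTICES = {
    "missed deadlines": ["publishing the project roadmap", "sharing defect counts openly"],
    "low user trust": ["announcing known issues before release"],
}

def recommendations(perceived_problems, current_practices):
    """perceived_problems: problems the respondent says they are seeing.
       current_practices: practices they already follow."""
    suggestions = {}
    for problem in perceived_problems:
        candidates = [p for p in CORRELATED_PRACTICES.get(problem, [])
                      if p not in current_practices]
        if candidates:
            suggestions[problem] = candidates
    return suggestions

for problem, practices in recommendations(
        {"missed deadlines"}, {"sharing defect counts openly"}).items():
    print(f"You reported {problem}. These practices correlated with seeing less of it: {', '.join(practices)}")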


[Project lifecycle diagram — click the image to see it full-size, or click here for the latest version]

Which brings me up to the present moment. Feel free to follow my progress. I’m trying to operate this project transparently, so anyone can look at my work as I am doing it, in the hope that with their help the project will have a better chance of success. For me, success is helping people.