Even though I signed up late, I was able to volunteer as a specialist mentor, serve as a mock judge, and provide logistical support for the regional finals as the scoring data guy. My comments come from those three perspectives. Before I get to them, I just wanted to say how impressed I was with the quality of an almost entirely volunteer-run organization, and how thankful I am for the opportunity to give my time.
Specialist Mentor – I provided pitch book feedback to 5 different teams. I am not sure whether teams use the mentor profiles; none of these teams reached out to me initially, and they said they didn’t even know I was available. They were more than happy to take the feedback, and some of them incorporated the suggestions they agreed with into their presentations.
Not all teams use Launch Pad either, and from a management perspective it could be a useful tool for CleanTech. I was engaged 6 different times, but only 1 of those engagements was recorded. Granted, not all teams need all of the services provided, but it should at least be known who is using what and how often.
Mock Judge – This is where the rubber meets the road for the teams, and I thought the organizers did a good job of recruiting a mix of different kinds of judges. That mix matters because it gives the teams a balanced feedback loop. The biggest room for improvement in this process is managing the teams’ documents – their presentations, summaries, and worksheets. There just isn’t enough time to ask all of your questions and give all of your feedback in the time allotted. I am fine with the time allotment, but I wonder if there is a way to document your questions and feedback and provide them to the teams – perhaps by allowing judges and teams to share documents?
Even though I didn’t do it, I considered filling out my scorecards ahead of time so I could take my time analyzing the materials and not worry about fatigue in the afternoon session. Then, if needed, I could have modified the scores and commentary based on the pitch. This might be overkill for a mock judge session, but it could be a good idea for the regionals and finals.
Scoring Data Guy – The most obvious thing I noticed is how inefficient it is to print out score sheets, have judges fill them out, and then manually input the scores. Ideally, one of the sponsors would provide tablets for the judges, which would eliminate the paper, remove the potential for input error, and store the scores instantly. That way, we could spend more of our time analyzing the data.
I also noticed that the second question in the legal section (Is their corporate and cap structure free of issues?) received the lowest scores of any question. Surprisingly, the questions in the sustainability section were the next lowest-scoring. The topics are important, but the questions might need to be reworked.