Evaluation & Feedback
Big Ideas firmly believes that rigorous program evaluation is key to understanding whether the Contest is meeting its goals. Accordingly, Big Ideas gathers feedback to conduct an impact assessment and a process evaluation each contest year, measuring the program’s impact and learning how it can improve its offerings.
To accurately measure outcomes, the Blum Center has rigorously monitored and evaluated the Contest using: annually gathered data; surveys of applicants, judges and mentors; external evaluators; and the social science expertise of UC Berkeley graduate student researchers. In the last few years, Big Ideas has integrated statistical analysis techniques into its monitoring and evaluation systems. For instance, to determine whether teams experience improvement in certain skills, a before-and-after analysis is conducted, controlling for key variables such as student status or start-up background.
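As a rough illustration of the before-and-after analysis described above, the sketch below (using entirely hypothetical ratings and a hypothetical 1–5 confidence scale) compares each respondent's pre- and post-program scores, stratified by a control variable such as prior start-up background:

```python
# Illustrative sketch with made-up data: compare self-reported confidence
# before and after the program, stratified by start-up background.
from statistics import mean

# Each record: (pre_score, post_score, has_startup_background)
responses = [
    (2, 4, False), (3, 4, False), (2, 3, False),
    (4, 5, True),  (3, 4, True),  (4, 4, True),
]

def mean_change(records):
    """Average post-minus-pre change for a group of respondents."""
    return mean(post - pre for pre, post, _ in records)

novice  = [r for r in responses if not r[2]]
veteran = [r for r in responses if r[2]]

print(f"Mean gain, no start-up background: {mean_change(novice):+.2f}")
print(f"Mean gain, start-up background:    {mean_change(veteran):+.2f}")
```

Stratifying by the control variable in this way lets evaluators see whether the apparent skills gain is driven by one subgroup rather than the program as a whole.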
Big Ideas utilizes three surveys each year that provide information that feeds into both analyses: a Pre-proposal Feedback Survey (for all student contest participants), a Full Proposal Feedback Survey (for finalist students), and a Judge & Mentor Feedback Survey. Additionally, on a routine basis Big Ideas issues an Alumni Feedback Survey to former contest winners in order to capture more information on teams’ progress after they leave the contest. Survey questions are developed by Big Ideas staff and are refined each year to ensure that they provide accurate measurements of skill development and provide opportunities for feedback on Contest components (see the Tools section for the 2015-2016 Metrics Framework to learn about the types of questions asked in each survey).
The Blum Center measures Big Ideas’ impact in three key ways:
- The size and diversity of the applicant pool
- The transformative nature of the program on applicants in terms of improvements in skills development and entrepreneurial aspirations
- The amount of progress achieved by Big Ideas winners to date
These three metrics reflect the broad scope of the Big Ideas pipeline, which transforms early-stage undergraduates and graduate students into a comprehensive network of innovators.
The size and diversity of the applicant pool
The Big Ideas contest aims to attract a broad pool of student entrepreneurs representing a diverse mix of academic disciplines, ages, genders and ethnicities. It uses the following guiding questions to inform its evaluation process:
- Do Big Ideas submissions incorporate a variety of diverse perspectives and approaches (in terms of students’ backgrounds and academic foci)? Does Big Ideas also encourage participation from student populations that typically lack access to key entrepreneurial support services (e.g. female, low-income, minority students)?
- How much startup background do applicants typically have?
- Are there key characteristics that determine the success of an applicant?
To answer these questions, a series of indicators are collected across all participant teams (see a full list of survey questions in the Big Ideas Metrics Framework).
The transformative nature of the program on applicants
Big Ideas seeks to better understand the extent of its contribution to applicants’ development over the course of the program. It uses the following guiding questions to inform its evaluation process:
- To what extent do students develop skills critical to project management and entrepreneurial success through the Big Ideas program (e.g., grant-writing skills, project management skills, leadership skills)?
- To what extent did these skills help them develop a highly innovative and viable project?
- To what extent do students change the way they think about their roles in society?
In order to assess the impact of the Big Ideas program on students’ development over the course of their participation, Big Ideas uses a mix of quantitative and qualitative evidence to gauge how students value the provided services.
Big Ideas team leads are asked to rate their confidence in a number of different skill areas at the time they submit their Pre-proposal and Full Proposal applications. They are also asked to report on the likelihood of implementing or working for a social venture in the next year, and to rank their top sectors of interest. The results of these two surveys are analyzed to see if there is any significant difference between the two rounds of reporting.
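One simple way to test for a significant difference between the two rounds of reporting is a paired comparison of each team lead's own scores. The sketch below uses hypothetical 1–5 confidence ratings and computes the paired t-statistic directly:

```python
# Hedged sketch with hypothetical ratings: paired t-statistic comparing
# Pre-proposal and Full Proposal confidence scores from the same team leads.
from math import sqrt
from statistics import mean, stdev

pre  = [2, 3, 2, 4, 3, 2, 3, 4]   # hypothetical Pre-proposal ratings
post = [4, 4, 3, 5, 4, 3, 4, 4]   # hypothetical Full Proposal ratings

diffs = [b - a for a, b in zip(pre, post)]
# t = mean difference / standard error of the differences
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
print(f"Paired t-statistic over {len(diffs)} team leads: {t:.2f}")
```

A paired design is appropriate here because the same individuals answer both surveys, so each respondent serves as their own control.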
In the 2015-2016 contest year, Big Ideas found the following:
- Finalist teams reported the greatest increases in confidence in their ability to: estimate the need within a particular market for a product or service; clearly and concisely explain their social venture in written or verbal form; and develop a plan for implementing their venture.
- Teams that took advantage of Big Ideas resources in the Final Round were more likely to win and receive higher amounts of funding. First-place winners used an average of 3.8 Big Ideas support services, winners 3.2, and non-winning finalists 2.9. Each additional Full Proposal resource utilized by a team correlates with a $400 increase in award amount allotted.
- Non-UC Berkeley students are much less likely to access contest resources (15% of non-UC Berkeley students signed up for Pre-proposal Advising compared to 39% of Berkeley applicants, and on average UC Berkeley teams used 3.6 Full Proposal resources compared to 2.5 utilized by non-UC Berkeley teams).
- Big Ideas raises the prospects of teams implementing social ventures. Between the Pre-proposal and Full Proposal deadlines, 16% of teams report a greater likelihood of starting a social venture within 12 months (even before funding decisions are announced).
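The resources-to-funding relationship noted above can be estimated with an ordinary least-squares slope of award amount on resource count. The sketch below uses made-up per-team numbers purely to illustrate the calculation:

```python
# Minimal sketch with hypothetical data: least-squares slope estimating
# the award-amount increase associated with each additional resource used.
def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical per-team data: Full Proposal resources used, award in dollars.
resources = [2, 3, 3, 4, 5]
awards    = [3000, 3400, 3500, 3800, 4200]

print(f"Estimated award increase per resource: ${slope(resources, awards):.0f}")
```

Note that a slope like this shows correlation only; stronger teams may both use more resources and win larger awards, so it should not be read as a causal effect.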
The quantitative approach is also supplemented with free-answer responses in the surveys, where teams can describe in detail what they perceive the impact of Big Ideas to be. Year after year, the mentorship and advising hours with Big Ideas staff are overwhelmingly cited as the most useful contest offering. Specifically, teams valued having a dedicated industry professional who could connect them with the resources they needed and offer a great deal of specific feedback on the design of the project. The different perspectives and last-minute feedback provided by Big Ideas staff were also reported to improve the quality of submissions. The amount of detail provided in the judging feedback is also mentioned as an important resource utilized by teams.
In their responses, teams also mentioned that they achieved a great deal in developing their proposal writing, team building, and project management skill sets. The framework and deadlines of the application provided teams with a set of deliverables that forced them to be accountable. In order to strengthen their projects to meet the criteria demanded of the Big Ideas application, applicants sought partners, conducted market surveys, built prototypes, and tested their hypotheses. For many teams, Big Ideas was the extra push they needed to actually execute a social venture.
Sample responses include:
- “The competition made my team really think about our implementation plan, and exactly what resources we potentially needed or were lacking currently. We had to think more about the social impact side and not just focus on the technology of our device.”
- “The best part about participating in Big Ideas was the incredible amount of self-growth; I’ve learned how to talk with people, overcome implementation obstacles, delegate tasks to my team. I think this project has taught me how to be an adult!”
- “The best part about participating in the Big Ideas competition was getting hands-on experience with the intricate process of turning a simple idea into a fully-researched, fully-staffed, fully-funded project. Our entire team learned that coming up with the initial idea was the most straightforward part of the process. As we began to develop the idea, we bumped into gaps in research and practical barriers that pushed us to reach our goals.”
- Embed the surveys in the application platform. Prior to 2012, surveys were conducted using a Google Form after finalists submitted their Full Proposal. Despite sending multiple reminder emails, the surveys were completed by only a small portion of finalists and non-finalists (less than 20%), and Big Ideas staff were therefore unable to analyze results from a representative sample. As a result, Big Ideas offered $5 Amazon.com gift cards as an incentive for participation, and although this improved the response rate, only about half of finalists and even fewer applicants who were not chosen as finalists completed the survey. Thus, the contest moved towards embedding the survey in the application form for both rounds, which has led to a 100% response rate. To encourage applicants to answer as candidly as possible, they are assured that their survey answers will not be associated with their project and will have no impact on their proposals’ scores.
- Avoid bias in responses by surveying applicants at two points in time. Big Ideas previously measured skills development by asking participants, at the end of the contest, to recall the growth they experienced over the course of the year. As a result, the Contest would receive 100% positive response rates when students were asked if they felt that they became more creative in thinking about solutions to problems, enhanced their critical thinking ability, or were better able to transform innovative ideas into applications. By surveying students at both the Pre-proposal and Full Proposal applications, Big Ideas is able to capture a more representative picture of the improvement students made through participating in Big Ideas.
Progress achieved by winners to date
Big Ideas also evaluates the extent to which teams continue to work on their Big Ideas projects and the difference that those teams are making as part of its impact assessment. Initially, Big Ideas created a LinkedIn group to connect past winners and keep track of their updates. Big Ideas staff hoped that the group would provide a forum for past winners to share their accomplishments with each other and with staff, but the LinkedIn group has proven relatively inactive, and has therefore not been a particularly effective evaluation tool. Thus, Big Ideas gathers information on past winners primarily by issuing alumni surveys and conducting phone interviews.
Alumni surveys and phone interviews
Alumni surveys are sent out every couple of years to capture information on graduated teams. Big Ideas captures three key metrics to help assess its influence: additional revenue generation, number of people working on the project, and number of beneficiaries or clients served to date. The Alumni Survey also requests a more detailed response about the progress projects have made to date. It prompts the respondent to report on the team’s current involvement in the project, whether any key pivots have been made, and its current stage (design, pilot, scale, etc.). It also asks alumni to describe any key challenges they are facing in implementation, and what gaps are preventing them from taking the project to the next level. Questions also focus on how the program can better prepare or support teams to deal with these obstacles and on teams’ plans for future work (see the Alumni Update Survey in the Tools section).
Phone interviews cover the same content as the Alumni Surveys. Outreach to former Big Ideas winners was conducted in 2011 (to 2010 winners), and again in 2014 (to 2012-2013 winners), via follow-up phone calls to teams that did not respond to the survey. The phone interviews ask the same questions as the survey, but are a more effective means of reaching past winners. These phone calls allow Big Ideas to keep up-to-date with winners’ stories, which have been used in Big Ideas newsletters, in pitching Big Ideas to potential category sponsors, and as informal evidence of the impact of the Contest in grant proposals. Reconnecting with past winners has also allowed staff to develop a greater sense of connection to and commitment from past winners to the Contest.
- Conduct both online surveys and phone interviews. Alumni surveys have very low response rates, which is an ongoing challenge. As a result, phone interviews are conducted to supplement missing data points that the surveys fail to capture. Following up with Big Ideas alumni over the phone was found to be a more effective means of reaching teams after they graduate, and better in sustaining the relationship with teams over time.
- Continuity and transparency are key. Because alumni outreach is conducted every few years by different Big Ideas staff, former winners are sometimes frustrated by repeated surveys or phone interviews asking for the same information. Alumni outreach typically falls under the work program of a short-term contracted staff position, and knowledge carryover is challenging. It is important for the person conducting alumni outreach to build upon existing conversations with teams, and to be transparent that their responses are being taken into account through adjustments made to the program. The use of Salesforce has been instrumental in establishing continuity for Big Ideas.
Big Ideas also uses all three surveys and input from staff to conduct an informal process evaluation each year to assess its execution of the program. The team collects a great deal of feedback from students, judges, and mentors on whether they utilized the resources offered and found them effective. It explores which of its strategies are most effective in conducting outreach to students and recruiting judges and mentors. It also gauges whether participation in the Contest is seamless for students, judges, and mentors (see all three surveys in the Tools section for sample process evaluation questions).
Big Ideas develops a set of recommendations each year on how the program can be improved the following year, and uses these lessons to inform its long-term strategy. The process evaluation allows the team to prioritize resources in future years, and to continually reflect on how it can best serve teams going through its program.
- Alumni Update Survey
- Full Proposal Applicant Feedback Survey
- Judge & Mentor Feedback Survey
- Metrics Framework
- Pre-Proposal Applicant Feedback Survey