This survey was initially conceived as a minor update of the 2007 survey (Palo Alto Election Information Survey Report of Results as of November 13, 2007). The intent was to preserve as much direct comparability as possible by changing as little as possible. However, while the additions seemed small, those testing the draft responded that it had become too long, and this was handled with some reorganization. Grouping the questions ranking the sources into categories, instead of presenting a single list, significantly reduced the amount that needed to be read. It also eliminated having the same phrasing recur in a series of questions; one complaint of the early testers was that they felt they were being asked the same question multiple times. In the 2007 survey, the information sources were ranked on a scale of 1 to 5; testers reported that reducing that to a scale of 1 to 3 made the survey much more approachable.
Be aware that the 2009 election was unusual: because the School Board seats were uncontested, they did not appear on the ballot. This may have affected a number of factors relevant to this survey's results: (1) how engaged different classes of voters were, and (2) who was involved in campaigns. Or the people normally focused on the School Board races may have simply shifted to the City Council campaigns.
The 2007 survey was created by three of the campaigns for the School Board, and hence its geographic categorization was based on elementary schools, versus the neighborhood focus of the 2009 survey. Historical aside: In 2007, PAN was close to releasing its own survey when we learned of the campaigns' survey and attempted to collaborate. When the authors of that survey released it as-is, PAN decided it was close enough and that it was better to publicize that survey than to have competing surveys.
This survey was publicized in a manner very similar to the 2007 survey. It was announced to the Palo Alto Neighborhoods (PAN) email list for distribution to their neighborhood email lists, to the candidates for distribution to their supporters, and to the newspapers. Surges in responses can be seen when announcements went out to specific neighborhood email lists and when the announcement was posted on Palo Alto Online. There was no discernible uptick when the announcement appeared in the hardcopy of the Palo Alto Weekly, and the announcement is not known to have appeared in either the Daily News or the Daily Post (please notify me if it did). It is not known whether any of the candidates helped advertise this survey.
The easiest way to track when responses were received is to look at the first slot in Q2, but be aware that those times are UTC (GMT), not local time. The rough timeline was:
Subject: Please take survey on information sources used in voting for City Council candidates
Please take a survey about the information sources that you used to make your decision about who to vote for in the 2009 City Council election at either
- http://www.PANeighborhoods.org/ under "Voters' Sources of Information..."
- http://www.surveymonkey.com/s.aspx?sm=1nQ5L0GoTccK2hyRearJhw_3d_3d
We are interested in which of a range of information sources were and were not consulted, and which of them you found useful. This data can be used by candidates in future elections to better allocate their time and money to get better information to the voters. The current high cost of campaigning is a deterrent for well-qualified people. It is important to get responses from the broad range of voters, not just those who are more politically active.
The sponsor of this survey is Palo Alto Neighborhoods (PAN), an umbrella group of neighborhood associations. During the campaign, PAN published the candidates' responses to a questionnaire covering a broad range of issues and conducted a Candidates Forum.
The results of this survey will be compiled and placed online for public use (on the PAN website).
The story "Neighborhood group wants voters' opinions: Survey on influence of campaign information could help future City Council candidates, group says" ran as a featured story (under Local Headline News, credited to the Palo Alto Weekly and Palo Alto Online staff). This quickly doubled the number of responses. After a short tenure as a featured story (1-2 days), it dropped down into the list of stories represented by the first part of their headlines for another brief interval, and then dropped off the home page. The story was mentioned in the hardcopy Palo Alto Weekly of November 13 on page 8, but that synopsis omitted the link to the survey. A short version of the story, with the link to the survey, appeared in the hardcopy Weekly of November 20 on page 22.
If you look at the results of Q17, you will see that some neighborhoods are over-represented, and this is almost certainly attributable to the prominence with which announcements were made to their email lists.
Interestingly, the results of this survey suggest that the caveat in the 2007 survey (that a significant number of respondents were also associated with the campaigns) may not have been true: the same PAN lists publicized that survey and it drew a similar number of responses (347 vs. 339), yet 72% of this survey's respondents were not involved in the campaigns and only 7% were actively involved (Q15).
Response rates: Barron Park encompasses about 1600 households and has 658 email addresses on its primary email list. Some of the subscribers to this list are residents of other neighborhoods and some are non-voters (non-citizens, not registered to vote, etc.). The 68 responses represent 10.5% of the email list membership. Just before the second announcement, the response rate was just under 7%. The likely reason for the large response from Barron Park is not just the prominent manner in which the survey was announced, but also that the sender (Moran) was a well-known, long-time Barron Park neighborhood leader.
Crescent Park encompasses about 1600 households and its email list has approximately 550 members, for a response rate of 8.4%.
Midtown has about 3000 households and an email list of 900, for a response rate of 4.7%.
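For anyone who wants to recompute these figures, a minimal sketch follows (the list sizes and Barron Park's 68 responses are from the text above; the Crescent Park and Midtown response counts are back-calculated from the stated rates, so treat them as approximate):

    # Response rate = responses / email-list size, using the figures quoted above.
    neighborhoods = {
        "Barron Park":   (68, 658),  # 68/658 is ~10.3%; list membership varies over time
        "Crescent Park": (46, 550),  # ~46 back-calculated from the stated 8.4%
        "Midtown":       (42, 900),  # ~42 back-calculated from the stated 4.7%
    }
    for name, (responses, list_size) in neighborhoods.items():
        print(f"{name}: {responses}/{list_size} = {responses / list_size:.1%}")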
Q1. How did you cast your ballot this year?:
Logically, this question belongs among the demographic information questions at the end of the survey,
but since it provided significant context for many of the other answers,
I wanted to reduce the chance that it would be skipped (as often happens with demographic questions).
Secondarily, the demographic questions were visually quite long—because of the list of neighborhoods—so
moving this question to the front better balanced the pages.
Much of the campaign schedule was predicated on the belief that many or most voters would mail in their ballots shortly after receiving them (in early October). I wanted to separate such people from those who mailed in their ballots at the end of the campaign. Since voters were advised to mail the ballot on or before the Saturday before Election Day, and since many people would give themselves a little cushion, I decided that the preceding week was the appropriate measure; additionally, Sunday provided a psychologically significant dividing line.
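As a concrete illustration of that dividing line, here is a minimal sketch (Election Day 2009 was Tuesday, November 3; the exact cutoff Sunday is my reading of the rationale above, not something specified in the survey):

    from datetime import date, timedelta

    ELECTION_DAY = date(2009, 11, 3)                  # Tuesday, November 3, 2009
    # Voters were advised to mail by the preceding Saturday (October 31);
    # the Sunday opening that final week (October 25) serves as the divide.
    CUTOFF_SUNDAY = ELECTION_DAY - timedelta(days=9)  # Sunday, October 25

    def ballot_period(mailed_on: date) -> str:
        # Bucket a mail-in date per the dividing line described above
        # (an illustration, not the survey's exact answer wording).
        if mailed_on < CUTOFF_SUNDAY:
            return "mailed well before Election Day"
        if mailed_on < ELECTION_DAY:
            return "mailed in the final week"
        return "voted on or about Election Day"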
Q2. What were the sources of information about candidates that MOST INFLUENCED who you voted for? Please name up to five.:
This free/unaided recall question was placed on the first page,
with the questions enumerating the common sources of information on the next page.
Although the intent was to keep that enumeration from influencing these choices,
I deliberately chose not to configure the survey to prevent people from returning from the second page
to change their responses to this question (and there is no way to know whether this happened).
Q3-Q9:
The information sources were grouped into categories partly to reduce the amount of reading required (by eliminating repetitive phrasing) and partly to reduce cognitive load by visibly grouping related items together.
A 3-step scale was used because we didn't think that most survey takers would make finer-grained distinctions; it also seemed to make the survey feel much simpler to those taking it.
Individual newspapers:
The early drafts of this survey enumerated the newspapers under the categories of Endorsements, Advertisements, and News Coverage.
However, the testers said this made the survey too long and gave the feeling that the same question was being asked several times.
The latter observation sparked the realization that the typical survey taker was unlikely to make many of those distinctions: for example, would respondents remember in which newspapers they saw candidate ads, much less be able to report whether the same ads in different newspapers had different effectiveness or utility?
And even if these distinctions were adequately captured, changes in practices and policies at the newspapers could render the results irrelevant in future campaigns (for example, a newspaper changing its policy on ad placement, as occurred during this campaign).
Finally, the survey results are intended as a resource for candidates considering where to put their efforts, and newspaper endorsements and news coverage are largely out of their control; rather, they are something candidates may need to compensate for.
The SJ Mercury News should have been cited more prominently in the text of this survey's questions, judging from the mentions in the various comment fields. This mistake occurred because, among those creating and testing the survey, the Merc was viewed as having bowed out of participating in the campaign: it decided not to interview candidates or make endorsements, confining itself to reprinting a selection of stories from the Daily News, often with several days' delay (the two are part of the same chain). However, we forgot that the Daily News is distributed primarily from boxes in the commercial districts and thus is not readily available to many residents, whereas those residents may have home delivery of the Merc (or access at work).
The "N/A" category covers multiple situations, some of which warranted being included in the calculation of the average response and others that didn't. We chose not to distinguish these subcases in the survey because the costs outweighed the likely benefits.
First, it would have increased the size of the survey when we already knew that the initial draft was judged to be too long.
Second, the choices involved fine gradations and it was unclear that we could get meaningful responses; moreover, such fine gradations add disproportionately to the cognitive load, which can cause people to give up before the end.
For the record, some of the identified subcases of "N/A" were: (1) not being aware of the information source (example: didn't see the announcement of a candidates forum), (2) disregarding the source based upon experience in previous elections, and (3) not having access to the source (examples: don't have cable TV, don't use the Internet).
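For anyone tabulating the responses, here is a minimal sketch of the implied averaging, assuming all "N/A" answers are simply excluded from the mean (the tallies are made up for illustration):

    # Hypothetical tallies for one information source on the 1-to-3 scale.
    tallies = {1: 40, 2: 55, 3: 30, "N/A": 25}  # respondents per answer choice

    rated = {score: n for score, n in tallies.items() if score != "N/A"}
    n_rated = sum(rated.values())                                    # 125
    average = sum(score * n for score, n in rated.items()) / n_rated
    print(f"Average of {n_rated} ratings (N/A excluded): {average:.2f}")  # 1.92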
Q10. If you were dissatisfied with any of the above sources of information…:
From discussions with people during the campaign,
there seemed to be great dissatisfaction with a wide range of the information sources.
The concern was that this would not be adequately captured by the standard "Other comments" question (Q11 here), so a question explicitly prompting for such dissatisfaction was included.
Q11. Any other comments on these or other sources?:
Allowing such comments is a standard practice in survey design.
In addition to capturing useful information for the current survey,
what gets entered can provide guidance on how to improve the next iteration of the survey.
Q12. Age and Q13. Gender:
These standard demographic questions were included both to allow comparison of the survey's respondents to those who voted,
and to allow anyone so interested to see if there were differences between these groups in their use of
the various information sources.
Q14. Are you involved in City issues BETWEEN campaigns? and Q15. Were you involved in any candidate's campaign?:
Since the survey's respondents were self-selected,
the concern was that the respondents would be disproportionately people who were active in the campaigns
and/or the issues related to the campaign.
Q16. What general region of Palo Alto do you live in? and Q17. NEIGHBORHOOD ASSOCIATION: …:
Q16 was included to capture geographic information from people who might find Q17's list of neighborhood associations
too daunting and simply skip it.
This concern also caused us to put the "Don't Know" entry first, rather than in its usual position at the end of the list.
The neighborhood association information was highly relevant to this survey because their email lists played an important role in announcing many of these information sources: candidate events, candidate websites, forums, cable TV and web rebroadcasts of forums … However, as mentioned under "How Publicized" above, different neighborhood associations have different practices for distributing such information. PAN was also responsible for a prominent candidate questionnaire and one of the forums.