Academics Call Foul On Florida Test of Voting Machines

Two academics say that tests run by the state of Florida to determine what went wrong with e-voting machines in the disputed Congressional District 13 race were badly designed, and the results are essentially useless. They've released their response (.pdf) detailing problems with the tests -- which some have touted as evidence that there was nothing wrong with the machines, made by Election Systems & Software (ES&S).

For those who haven't been following the issue closely, a little background: Some 18,000 ballots cast on the touch-screen machines didn't record a vote in the CD-13 race between Democrat Christine Jennings and Republican Vern Buchanan. Buchanan won the race by fewer than 400 votes and was sworn into Congress. Jennings is calling for a re-vote because hundreds of voters claimed they had problems with the machines -- although they voted for Jennings or Buchanan, when they reached the review screen at the end of the ballot it showed no choice selected in the CD-13 race. These are the voters, of course, who caught the problem. The Jennings camp says there are likely voters who never looked at the review screen or didn't look closely enough to see that no vote appeared on the review screen before casting their ballot.

So Florida conducted two tests -- a mock election on ten machines (five used in the election and five set up for the election but never actually used) and a review of the source code. According to the reports the testers released in January, they found nothing wrong with the machines, in either the hands-on test or the source code review, that would account for the high undervote rate in that race.

Now David Dill at Stanford and Dan Wallach from Rice University have released their response to the Florida reports. Among the problems:

The testers examining the machines defined accuracy as the machine making a correct electronic copy of the review screen. Dill and Wallach say this ignores whether the review screen itself is correct: if a voter touched the screen to vote for a candidate and the vote showed up on the ballot page but not on the review screen, the machine would still be considered accurate under the Florida testers' definition, because it faithfully copied the error from the review screen.
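
To make the distinction concrete, here is a minimal sketch in Python; the function and variable names are hypothetical and not drawn from the ES&S code or the state's test plan. It contrasts the state's test oracle, which only checks that the cast-vote record copies the review screen, with the stricter check Dill and Wallach describe, which also asks whether the review screen matched the voter's selection.

```python
# Hypothetical model of a single ballot interaction -- the names and
# structure are illustrative, not taken from the ES&S firmware.

def state_oracle(review_screen, cast_record):
    """Florida's definition of accuracy: the cast-vote record is a
    faithful copy of whatever the review screen showed, even if the
    review screen itself dropped the voter's choice."""
    return cast_record == review_screen

def intent_oracle(voter_choice, review_screen, cast_record):
    """The stricter check: both the review screen and the cast-vote
    record must match what the voter actually selected."""
    return review_screen == voter_choice and cast_record == voter_choice

# A selection that appeared on the ballot page but was dropped before
# the review screen: None stands for "no choice shown/recorded".
voter_choice, review_screen, cast_record = "Jennings", None, None

print(state_oracle(review_screen, cast_record))                 # True: passes the state's test
print(intent_oracle(voter_choice, review_screen, cast_record))  # False: an undervote the test misses
```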

The testers didn't test for latency issues with the screens. Numerous voters complained they had to touch the screens repeatedly or with extra pressure to get their selections to register, but the tests weren't designed to look for that. Dill and Wallach say that in videos of the state testing there were several instances when a tester had to touch the screen more than once to get it to register, yet that isn't reflected in the state's report.
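
As a rough illustration of what such a test could look like, and nothing more -- the threshold and contact durations below are invented, and a real test would drive the machine's actual touchscreen hardware -- a latency probe might replay touches of varying contact time and count how many fail to register:

```python
# Hypothetical latency probe. The threshold and contact durations are
# invented; a real test would exercise the physical touchscreen.

REGISTER_THRESHOLD_MS = 150  # assumed minimum contact time the sensor needs

def sensor_registers(contact_ms):
    """Simulated sensor that silently drops touches shorter than the threshold."""
    return contact_ms >= REGISTER_THRESHOLD_MS

contact_times = [80, 120, 200, 95, 300, 140]  # simulated voter touches, in ms
missed = [t for t in contact_times if not sensor_registers(t)]
print(f"{len(missed)} of {len(contact_times)} touches failed to register")
```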

The state didn't test for calibration errors, although some voters complained that they selected one candidate and a vote for a different candidate appeared.
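
Here is a hypothetical sketch of how a calibration error produces exactly that symptom; the coordinates, offset, and candidate button layout are invented for illustration:

```python
# Hypothetical model of a calibration error. Coordinates, offset, and
# button layout are invented for illustration.

BUTTONS = {"Jennings": (0, 100), "Buchanan": (100, 200)}  # vertical pixel ranges

def candidate_at(y):
    """Map a screen y-coordinate to the candidate button it falls inside."""
    for name, (top, bottom) in BUTTONS.items():
        if top <= y < bottom:
            return name
    return None

CALIBRATION_OFFSET = 60  # a drifted calibration shifts every reading downward

raw_touch_y = 90                             # the voter presses inside Jennings' button
mapped_y = raw_touch_y + CALIBRATION_OFFSET  # where the machine thinks the touch landed

print(candidate_at(raw_touch_y))  # Jennings -- the voter's intent
print(candidate_at(mapped_y))     # Buchanan -- what the miscalibrated screen records
```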

The testers examining the source code did not verify that the compiled binary executable code used on the machines during the election was consistent with the source code they examined.
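
One common way to close that gap, assuming the firmware build is reproducible -- which the real ES&S toolchain may well not be -- is to rebuild the reviewed source and compare a cryptographic hash of the result against the binary extracted from the machines. A minimal sketch; the build command and file paths are hypothetical:

```python
import hashlib
import subprocess

def sha256_of(path):
    """Hash a file so two binaries can be compared byte for byte."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The build command and file paths below are hypothetical; the actual
# firmware build process is proprietary.
subprocess.run(["make", "firmware"], check=True)        # rebuild from the reviewed source
rebuilt = sha256_of("build/firmware.bin")               # hash of the fresh build
deployed = sha256_of("extracted/machine_firmware.bin")  # hash of the binary from a machine

if rebuilt == deployed:
    print("Deployed binary matches the reviewed source")
else:
    print("Mismatch: machines may be running code other than what was reviewed")
```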

You'll find the state's two reports here (.pdf) and here (.pdf).