# Student Satisfaction Survey Results

In the Winter term of 2016, I taught the University of Calgary’s second logic course from a textbook remixed from the Open Logic Project.  Traditionally, Logic II has been taught from Boolos, Burgess & Jeffrey’s Computability and Logic: my colleague Ali Kazmi used it as the required text in Fall 2015, and before that he, Nicole, and I taught from it twice a year.  One aim Nicole and I had specifically for the OLP was that it should provide a better text for Logic II, since neither we nor our students seemed to be very happy with “BBJ”.

In order to ascertain whether the OLP-derived text fares better with students, we did something radical: we asked them what they thought of it.  Ali graciously gave permission to run the same textbook survey in his class, so we have something of a baseline.  A direct comparison of the two books as textbooks for the course is not easily made, since Ali and I used the books differently: I stuck closer to my text than he did to BBJ, I assigned homework problems from the text, and we assessed students differently, so it’s difficult to control for or compare teaching outcomes.  With small samples like ours, the results are probably also not statistically significant.  But the results are nevertheless interesting, I think, and also gratifying.

We obtained clearance from the Conjoint Faculties Research Ethics Board for the study.  All students in each section of Logic II in F15 and W16 were sent links to an electronic survey.  As an incentive to participate, one respondent from each group was selected to receive a \$100 gift certificate to the University of Calgary bookstore.  The surveys were started in the last week of classes and remained open for 3 weeks each.  Response rates were comparable (23/43 in F15, 23/42 in W16).  The survey was anonymous and developed with the help of the Taylor Institute for Teaching and Learning, who also administered it; results were not given to us until after the grade appeal deadline in W16.

We asked 23 questions.  The first three concerned how students accessed and used the textbooks.  In the F15 section, the textbook was not made available electronically; students were expected to buy their own copy (about \$40).  Most respondents did, although almost a quarter apparently pirated electronic copies.  In W16, the OLP-derived text was available for free as a PDF, and students had the option to buy a print copy for \$10.  Over half the respondents still opted to buy one.  We also asked students how they used the texts in hardcopy and electronic form.

Those using the OLP-derived printed text underlined significantly less than those who used BBJ. I’m guessing the OLP text is better structured, so it’s not as necessary to provide structure and emphasis yourself by underlining. In fact, one student commented on BBJ as follows: “Very little in the way of highlighting, underlining, or separating the information. It was often just walls of text broken up by the occasional diagram.”

When using the electronic versions (both PDFs), students’ habits did not differ much between F15 and W16, although more students took notes electronically in F15. I suspect this is because the PDF provided in W16 was optimized for screen reading, with narrow margins, leaving little space for PDF sticky notes compared with the PDF of the print book used in F15. Also notable: highlighting and bookmarking are not very common among users of the PDF.

The second set of questions concerned the frequency with which students consulted the textbook, generally and for specific purposes.  W16 students used the OLP-derived text significantly more often than F15 students did, and for all purposes.

The difference is especially striking for the questions about how often students consult the textbook for exams and homework assignments:

We next asked a series of questions about the quality of the texts. These questions were derived from the “Textbook Assessment and Usage Scale” by Regan Gurung and Ryan Martin. On all but one of these questions, over half the respondents rated the OLP-derived text positively (4 or 5 on a 5-point Likert scale). The contrast with students’ opinions of BBJ is starkest in the overall evaluations:

The one exception was the question “How well are examples used to explain the material?”:

This agrees with what we’ve heard in individual feedback: more, better examples!

Lastly, we were interested in what students think of the prices of textbooks for Logic II. We asked them how much they would be willing to spend and how much the price influenced their decision to buy the text. Interestingly, students seemed more willing to spend money on a textbook in the section (W16) in which they liked the textbook better. They also thought the free/cheap textbook was better value for money than the commercial one.

We also collected demographic data. Respondents from both sections were similar: almost all men (the course is mainly taken by Computer Science and Philosophy majors), evenly divided among 2nd-, 3rd-, and 4th-year students, plus a couple of grad students in each section (Logic II is required for the Philosophy PhD program). Students in W16 expected higher grades than those in F15, but that may well be an effect of differences in assessment and grading style rather than of better student performance.

If you care, there’s an interactive dashboard with all the graphs, and the raw data.