Student Response System Technology in Accounting Information Systems Courses
ABSTRACT
Student Response Systems (SRSs), also known as clickers, are posited to increase class participation and enhance active learning. In this study, we evaluate the perceived effectiveness of, and student satisfaction with, SRSs in Accounting Information Systems classes over several semesters. We also provide additional analyses to determine how SRSs are used in the classroom and which student characteristics and aspects of the classroom experience appear to be related to perceived satisfaction. We find three factors that explain 58% of the variation in SRS satisfaction: learning, environment, and class interaction. Two of these factors (learning and environment) are affected by variation in the way the system is used (participation mode vs. quiz mode), and all three are affected by the gender of the student. We find that gender is not directly related to overall satisfaction. In addition, we propose a model for SRS satisfaction based on our exploratory results.
INTRODUCTION
Every semester, higher education instructors face the challenge of capturing student attention and maintaining that attention throughout a semester-long course. Without incorporating elements of standup comedy and/or reality television, many professors and instructors find themselves increasingly distant from their students. As their anecdotes get old, educators discover a need to change up their presentation style and pedagogy to make use of technology—both to increase student learning and to better relate to students.
One potential enhancement is the use of a student response system, which records, aggregates, and reports student responses in real time. These responses have many uses, including establishing student presence (at least physically); eliciting student opinions; and/or assessing comprehension—all of which the instructor can use to change the direction of a class session in real time. Vendors of SRSs tout their ability to revolutionize classroom presentation (see, for example, einstruction.com and h-itt.com).
This paper presents the results of using one such system, eInstruction's Classroom Performance System (CPS), in an Accounting Information Systems course at a public university over seven semesters. The instructor started using the system in an environment where students were furnished with response pads, aka “clickers,” as a component of the classroom setting, and later used the system where students were required to purchase the hardware as part of the course-required materials. Additionally, the instructor expanded the use of the system from attendance-based participation and novelty to quizzes and more active interaction.
To evaluate effectiveness, the instructor administered a paper-and-pencil survey at the end of each semester to determine users' impressions of the student response system (SRS). This paper presents the results of this exploratory analysis. Our goal is to shed light on two main questions: What factors did learners perceive as being important in their classroom experiences? What factors appear to moderate those experiences?
In the remainder of the paper we present a brief review of prior research into SRS use and describe the student surveys administered to Accounting Information Systems students. We then present our exploratory analysis of students' responses by looking for underlying constructs related to student satisfaction with the SRS, student characteristics that are directly related to SRS satisfaction or indirectly related to satisfaction through moderating constructs, and the direct and indirect relationships between how the SRS was used in the classroom and satisfaction. We conclude with limitations of our study and directions for future research.
BACKGROUND AND RELEVANT LITERATURE
Instructors in science and education were the first to use and conduct research on SRS technology. In general, SRS research suggests that the use of such systems provides clear benefits to instruction, allowing for increased levels of interaction between instructor and student (Bryant and Hunton, 2000) and improving the learning environment in the classroom (Beatty, 2004). Attendant problems with SRS implementation are also described (Arter et al., 2008). The main conclusion to date is that the benefits of SRSs may indeed outweigh the additional effort required (Caldwell, 2007). One of the main predictors of successful use, however, appears to be a willingness to integrate SRS questions actively into the presentation, often a significant change in classroom management (Beatty, 2004; Draper and Brown, 2004; Martin et al., 2006; Arter et al., 2008).
Bryant and Hunton (2000) summarize the theoretical underpinnings for considering educational technology innovation as a means to improve learning. They suggest that transmission mode and learner control are significant in learning. Transmission mode is more effective when it is synchronous; therefore, learning is expected to increase through the use of interactive measures. Learner control contributes to enhanced learning as well. Both elements are possible with SRS adoption. In higher education, Arter et al. (2008) developed and presented significant justification for the use of SRSs in large classrooms, where additional interaction is beneficial, and in classrooms of all sizes, where immediate feedback can be obtained.
Beatty (2004) describes the uses of classroom communication systems in the ten-year experience of the University of Massachusetts Physical Education Research Group. This group developed curriculum and pedagogic techniques for teaching and researched teaching with SRSs. Beatty suggests that effective use of such systems involves a complete rethinking of the classroom instructional model—in particular, making the system's use an integral part of the course delivery.
Consistent with studies of SRS use in the sciences and higher education, many studies in business and economics find favorable student satisfaction with SRSs (Guthrie and Carlin, 2004; Freeman et al., 2006; Masikunas et al., 2007; Segovia, 2008; Eastman et al., 2011). Eastman et al. (2011) present an exploratory study of satisfaction in marketing classes and find that students who pay more attention due to the use of the technology are more satisfied and have a more positive attitude toward its use. Similarly, Ghosh and Renna (2009) find that economics students perceive that the technology improved their performance.
In accounting, Mula and Kavanagh (2009) find support for increases in participation, understanding, and a positive learning experience. They find no improvement in performance, but a decline in failure rates. Carnahan and Webb (2007) found only limited effects on learning and a decline in oral participation by students in management accounting classes after controlling for timing of SRS use and course content. Consistent with prior studies, they found positive student perceptions of SRS usage, though these were not associated with greater student satisfaction with the course.
Therefore, our major research question is, “What are the major factors apparent in student satisfaction with SRS?”
SRS Techniques
One major difference in the way SRSs have been used has been in how rewards are associated with system use. One approach has been to use the systems as pure voting systems where students are rewarded for participation with little or no potential to suffer for incorrect answers. The other approach is to use the system more in a quiz/test mode where rewards are tied specifically to getting the correct answer.
Graham et al. (2007) found that reluctant learners had positive impressions of the SRS and participated more freely when the system was used for responses without penalty. In contrast, Willoughby and Gustafson (2009) report on an experiment in which SRS technology was used in two distinct ways, with different point values attached to correct answers. The research question was the extent to which the penalty or payoff for incorrect or correct answers was associated with improved performance and student interaction in a group learning environment. They found that performance on SRS tasks improved for the high-stakes treatment groups, though interaction and willingness to share declined. In addition, there was no difference between groups in overall performance on standardized measures of learning. They suggest that SRS use in this setting can enhance interaction and participation as long as the stakes are not so high that higher-achieving group members dominate discussion and feed correct answers to group members with little discussion.
Similarly, Edens (2008) found that anxiety, preparation and attendance were higher when SRS use was a significant portion of the grade (25% vs zero). Exam scores were not significantly different.
Therefore, an additional research question is, “How is student satisfaction with SRS affected by differences in how the system is used?”
Gender Effects
Flansburg (2004) suggests that females have a complicated and conflicted relationship with technology and that gender is a factor in this setting. Females absorb a gender-related belief that they are not as technologically proficient as males. Additionally, the structure and design of technology by males plays a large role in a ‘disconnect’ between females and new technologies. Although her work focuses principally on the use of the internet, her comments seem quite appropriate to the use of SRSs as well. She calls for research evaluating gender differences in the uses of technology. While information regarding gender is often gathered in key studies, such analysis is rarely presented.
MacGeorge, et al. (2008) find little effect of gender on levels of satisfaction with SRS in large section classes in communications, natural resources, and introduction to business. Sprague and Dahl (2010) found no significant gender difference in SRS use in marketing classes. Edens (2008) found gender differences in the relationship between goal orientation and SRS use, but no significant differences in satisfaction or perceived value as a learning tool.
Therefore, our research is focused on exploring accounting student perceptions of SRS application over time. What factors appear to be related to SRS satisfaction? How might these factors be moderated by pedagogical differences like using the system in a participation mode versus a quiz mode, and how are these factors potentially moderated by student characteristics, specifically gender?
SURVEY DESIGN
To address these research issues, an accounting instructor at a medium-sized public university administered a survey to upper-level undergraduate students in an accounting information systems (AIS) course. The instructor distributed the paper-and-pencil survey at the end of each semester for seven consecutive semesters. The survey included thirteen statements related to SRS use. Students responded to each statement by circling a number on a five-point Likert scale, ranging from “strongly disagree” (1) to “strongly agree” (5). The individual statements were designed to elicit students' perceptions of how clickers affected their learning experiences (see the Appendix for the thirteen statements). The Cronbach's Alpha for these items was 0.92, which indicates that the items are measuring similar concepts (Nunnally and Bernstein, 1994).
We also included two general statements about students' overall level of satisfaction: one statement for satisfaction with the course itself and the other for satisfaction with the classroom response system. Cronbach's Alpha for all 15 items was 0.92. Students rated satisfaction on a five-point scale, ranging from “not satisfied at all” (1) to “completely satisfied” (5) (see the Appendix). Additionally, we included demographic items on the survey—including age, gender, college major, and GPA.
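Cronbach's Alpha can be computed directly from the item-response matrix. The sketch below is a minimal illustration of that calculation in Python; the DataFrame, sample size, and column names are placeholders we invented for illustration, not the data set used in this study.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's Alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Placeholder data standing in for the thirteen five-point Likert items
# (one row per student); the study itself reports an Alpha of 0.92.
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 6, size=(200, 13)),
                         columns=[f"Item{i:02d}" for i in range(1, 14)])
print(round(cronbach_alpha(responses), 2))
```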
The instructor informed students that their responses were completely anonymous. Students were given one percentage point (10 points in a 1,000-point semester) of extra credit for filling out the survey. Since the survey was anonymous, students self-reported completing the survey by signing a survey-completion sheet taped to the instructor's door. The instructor also informed students that he or she would not look at the results until after course grades were posted.
PARTICIPANTS
The students surveyed are almost evenly split between males (49%) and females (51%), and their average age is 23, which is consistent with that of most senior-level college students at a traditional public institution. Based on informal feedback, a majority of the students enrolled in the later semesters of the course had previous experience using the clicker system. The average self-reported GPA of the students surveyed is 3.2, and the majority of students expected a B in the course. Accounting Information Systems is a required course only for accounting majors at this mid-sized university. Therefore, the students surveyed were almost exclusively accounting majors (98%). Exhibit 1 presents descriptive statistics for the survey participants.

RESULTS
Dependent Variable
The main goal of our exploratory research is to identify constructs related to students' satisfaction with clicker use in the classroom. We combined Survey Items 12 and 15 (original item numbers used in our survey, see the Appendix) to use as our measure for satisfaction with the SRS technology:
- Item 12, “I would recommend the use of SRS technology in BAAC 328 next year” (RecommendSRS12), using a 5-point Likert scale from “Strongly Disagree” to “Strongly Agree.”
- Item 15, “Rate your overall satisfaction with the SRS system” (SRSSatisfaction15), using a 5-point satisfaction scale from “Not Satisfied at All” to “Completely Satisfied.”
Survey responses to these items individually and our composite variable, Satisfaction, are summarized in Exhibit 2. Although the two items are based on different scales, responses on these items are highly correlated, r = 0.61. Moreover, the Cronbach's Alpha of 0.75 indicates that Satisfaction has sufficient inter-item reliability, especially for a two-item measure. We considered adding Item 14, “Rate your overall level of satisfaction with the course,” but found it did not improve the reliability. Therefore it was not included in our analysis.
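For a two-item composite, the standardized Cronbach's Alpha is determined entirely by the inter-item correlation, so the reported reliability can be checked by hand (the raw Alpha will differ slightly when the two items have unequal variances, which is an assumption here):

\[ \alpha_{\text{standardized}} = \frac{2r}{1 + r} = \frac{2(0.61)}{1 + 0.61} \approx 0.76, \]

which is in line with the reported value of 0.75.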

Factor Analysis
To enhance interpretation and simplify the presentation, we use factor analysis as a data-reduction technique on the remaining survey items to help identify the underlying constructs measured by the survey instrument. We factor-analyzed the remaining twelve items, resulting in a 16:1 ratio of observations to scale items, which is greater than the minimum 10:1 ratio recommended by Nunnally and Bernstein (1994).
We use principal axis factoring (PAF) with a promax (non-orthogonal) rotation because we believe it is unrealistic to assume that the factors extracted will be unrelated, an assumption implicit in using principal components and orthogonal rotation (Gorsuch, 1983). Our initial PAF analysis for the twelve remaining items yields three factors, accounting for 46%, 6%, and 3% of the variance, respectively. Exhibit 3 presents the initial rotated factor pattern and factor correlation matrices. The high inter-factor correlations, ranging from 0.62 to 0.74, indicate that the underlying constructs are closely related to one another.
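An analysis of this form can be reproduced with the third-party factor_analyzer package. The sketch below is illustrative only: the synthetic data and column names are placeholders, and the package's "principal" extraction method is used as a stand-in for principal axis factoring rather than as the exact procedure run for this paper.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Placeholder data: three correlated latent traits driving twelve Likert-style items,
# standing in for the twelve remaining survey items (one row per student).
rng = np.random.default_rng(0)
n = 192
latent = rng.multivariate_normal(mean=[0, 0, 0],
                                 cov=[[1, .6, .6], [.6, 1, .5], [.6, .5, 1]], size=n)
items = pd.DataFrame(
    {f"Item{i:02d}": latent[:, i // 4] + rng.normal(0, 0.7, n) for i in range(12)}
)

# Principal-factor extraction with an oblique (promax) rotation.
fa = FactorAnalyzer(n_factors=3, method="principal", rotation="promax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)  # rotated factor pattern
_, proportion_var, _ = fa.get_factor_variance()             # variance explained per factor
print(loadings.round(2))
print(proportion_var.round(2))
print(pd.DataFrame(fa.phi_).round(2))                       # inter-factor correlation matrix
```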

The items loading on the first and most significant factor directly relate to ways in which the SRS may help students learn (variable names are indicated in parentheses):
- (ReinforceConcepts03) The SRS system reinforces important concepts presented in class.
- (HelpMeLearn13) I feel that SRS technology helps me learn.
- (EffectiveLearningTool10) The SRS system is an effective teaching and learning tool.
- (PrepareForExams02) The SRS system helps me prepare for the exams.
- (GaugeUnderstanding01) The SRS system helps me to gauge my level of understanding of course material.
We refer to the construct underlying this factor as Learning. The inter-item reliability of Learning is quite high, with a Cronbach's Alpha of 0.90, indicating that these five items are measuring a single construct.
The second factor contains three items relating to various ways in which clickers may affect the classroom environment:
- (FunToUse06) The SRS system is fun to use.
- (BreakUpClass07) I like the SRS system because it breaks up the class.
- (DesireToAttend05) The SRS system increases my desire to come to class.
We refer to this second construct as Environment. However, unlike Learning, not all three items load cleanly on one factor. While loadings of less than 0.30 are considered insignificant (Hair et al., 1998), this factor clearly has some noise. Moreover, this factor only accounts for six percent of the variance, and its Cronbach's Alpha is 0.72, which is minimally acceptable (Nunnally and Bernstein, 1994). This is not surprising, considering the factor consists of only three items.
The third factor is weak, only accounting for three percent of the variance—and two of the four items have significant cross-loadings. Moreover, we find no unifying theme related to these items:
- (WillingnessToAskQs11) The SRS system increases my willingness to ask questions in class.
- (StudentInteraction09) The SRS system increases my interaction with other students.
- (ImproveProblemSolving04) The SRS system improves my problem solving skills.
- (ConcentrateInClass08) I feel that the SRS system helps me concentrate more in class.
Although the highest loading items (WillingnessToAskQs11 and StudentInteraction09) are related to how students interact in class, ImproveProblemSolving04 seems more related to Learning than to class interaction as indicated by the significant loading on Learning, and ConcentrateInClass08 seems more related to Environment than to class interaction as indicated by the significant loading on Environment. Although the Cronbach's Alpha of 0.76 indicates that this factor has an acceptable measure of inter-item reliability, it does not appear to have a single underlying construct. Therefore, we eliminate Items 4 and 8 to create a third construct, Class Interaction.
The factor names selected are subjective, and we chose names based on the nature of the underlying variables seen to load highest on that factor. In addition, these constructs are similar in character to those found by MacGeorge et al. (2008).
Exhibit 4a presents the revised factor analysis and factor correlation matrix. In the revised solution, Learning accounts for 46% of the variance, Environment accounts for 7% of the variance, and Class Interaction accounts for 5% of the variance. Eliminating the two items with significant cross-loadings allows for a cleaner factor pattern with no cross-loadings greater than 0.30 and only two cross-loadings greater than 0.20. In addition, the factors in the revised solution are more orthogonal, especially Environment and Class Interaction, which now have a correlation of 0.49. This is considerably lower than the initial solution correlation of 0.62. Exhibit 4b shows the means and frequencies of these items.


The factor loadings on Learning and Environment are similar to the 12-item solution (see Exhibit 3), while the loadings for the two items of Class Interaction have become more disparate; StudentInteraction09 now has a much lower loading on Class Interaction. A further indication of this construct's weakness is that its Cronbach's Alpha is only 0.57, although this is not surprising for a two-item construct.
Based on our factor analysis, we created the composite variables for our three factors. The descriptive statistics for these composite variables and our dependent measure are shown in Exhibit 5. These are the variables that we use in our subsequent analysis.

Regression Analysis
After identifying the underlying constructs from our survey instrument—Learning, Environment, and Class Interaction—we use a stepwise regression to explore the relationships between these three constructs and our dependent measure, Satisfaction. Specifically, we wish to explore how satisfaction relates to students' perceptions of the clickers' effects on their learning, classroom environment, and classroom interaction, including both main effects and interactions. Exhibit 6 presents the stepwise regression results.

Model 1 shows that Learning accounts for 56.0% of the variance of student satisfaction with the clickers as measured by the adjusted R2. Model 2 shows that the addition of Environment increases the adjusted R2 to 62.7%, indicating that Environment adds a statistically and practically significant amount of explanatory power to the regression equation. However, when the Learning*Environment interaction is added to the regression model in Model 3, the explanatory power of the equation improves only slightly to 66.0%. Additionally, Class Interaction does not add any explanatory power to the regression as it is not significant at the 0.05 level. The failure of Class Interaction to have a significant effect on Satisfaction is not surprising due to the weakness of the factor.
Although Learning*Environment is statistically significant, we use Model 2, the main effects model, as our best model due to its parsimony, which makes the results easier to interpret. The coefficients on Learning (0.65, p < .001) and Environment (0.35, p < .001) indicate that these factors are both positively related to Satisfaction. Additionally, the stepwise regression results are consistent with our factor analysis, in which Learning accounted for most of the systematic variance (46%).
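The hierarchical comparison in Exhibit 6 can be approximated by fitting the nested ordinary least squares models in statsmodels and comparing adjusted R-squared values. This is a sketch under assumed variable names, with placeholder data generated to loosely mimic the reported Model 2 coefficients; it is not the exact stepwise procedure we ran.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data standing in for the composite scores (one row per student).
rng = np.random.default_rng(0)
n = 192
df = pd.DataFrame({
    "Learning": rng.normal(3.6, 0.8, n),
    "Environment": rng.normal(3.8, 0.8, n),
    "ClassInteraction": rng.normal(3.2, 0.8, n),
})
df["Satisfaction"] = 0.65 * df["Learning"] + 0.35 * df["Environment"] + rng.normal(0, 0.5, n)

models = {
    "Model 1": "Satisfaction ~ Learning",
    "Model 2": "Satisfaction ~ Learning + Environment",
    "Model 3": "Satisfaction ~ Learning * Environment",
    "Model 4": "Satisfaction ~ Learning * Environment + ClassInteraction",
}
for name, formula in models.items():
    fit = smf.ols(formula, data=df).fit()
    print(f"{name}: adj. R-squared = {fit.rsquared_adj:.3f}")
```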
Direct Effects of Within-Subject Variables on Satisfaction
After establishing that students' perceptions of Learning and Environment positively affect their satisfaction with using clickers in the classroom, we examine the potential effect of the following within-subject variables on Satisfaction: age, GPA, expected grade in the class, and gender. None of the Pearson correlations between Satisfaction and age (0.05, p ≤ .53), GPA (0.03, p ≤ .74), and expected grade in the class (0.09, p ≤ .23) are significant. Also, a one-way ANOVA shows no significant difference in Satisfaction across Gender (F = 2.22, p ≤ .14). Therefore, none of these four demographic variables are directly related to Satisfaction.
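These checks correspond to standard SciPy routines. The sketch below uses placeholder data and hypothetical column names for the demographics and the Satisfaction composite; it illustrates the form of the tests rather than reproducing our data.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Placeholder data standing in for the demographics and the Satisfaction composite.
rng = np.random.default_rng(0)
n = 192
df = pd.DataFrame({
    "Satisfaction": rng.normal(3.7, 0.8, n),
    "Age": rng.normal(23, 2, n),
    "GPA": rng.normal(3.2, 0.4, n),
    "ExpectedGrade": rng.integers(2, 5, n),   # e.g., 2 = C, 3 = B, 4 = A
    "Gender": rng.choice(["M", "F"], n),
})

# Pearson correlations between Satisfaction and each continuous/ordinal demographic.
for var in ["Age", "GPA", "ExpectedGrade"]:
    r, p = stats.pearsonr(df["Satisfaction"], df[var])
    print(f"Satisfaction vs. {var}: r = {r:.2f}, p = {p:.2f}")

# One-way ANOVA of Satisfaction across the two Gender groups.
f_stat, p_val = stats.f_oneway(df.loc[df["Gender"] == "M", "Satisfaction"],
                               df.loc[df["Gender"] == "F", "Satisfaction"])
print(f"Gender: F = {f_stat:.2f}, p = {p_val:.2f}")
```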
Indirect Effects: Relationships between the Within-Subject Variables and the Factors
Since we have already established relationships of Learning and Environment with Satisfaction, we next explore the possibility that the within-subject variables above indirectly affect Satisfaction through relationships with our factors. Here too, none of the three factors (Learning, Environment, and Class Interaction) is significantly correlated with age, GPA, or expected grade in the class. However, one-way ANOVAs show significant differences in the factors across Gender. Exhibit 7 summarizes these results.

Specifically, our results indicate that males are more likely than females to rate the clickers highly in terms of Learning, Environment, and Class Interaction. These results also suggest that Gender may have an indirect effect on Satisfaction through its effects on the three factors, which seem to be antecedents to Satisfaction. We find these results surprising because there is little research on clickers that finds significant gender effects.
Direct Effect of Teaching Mode on Satisfaction
In addition to examining the effects of within-subject variables on Satisfaction and its possible antecedents, we examine the direct and indirect effects of teaching mode on Satisfaction. Instructor use of clickers in this AIS course varied throughout the 3-1/2 year period of data collection. We refer to this categorical variable as Teaching Mode; it has three levels built from two basic types of clicker use. During the first three semesters of our sample period, the clicker portion of the overall course grade, five percent, was based solely on quantity of participation; that is, ([total responses] / [total questions]). In this mode, the instructor used the SRS to take roll and gauge student understanding of topics covered during the class period. Most questions were not prepared ahead of time. We refer to this teaching mode as Participation Mode.
During the last two semesters of the data-collection period, the instructor used the clickers in what we refer to as Quiz Mode; that is, the instructor used the SRS to administer (mostly) prepared quizzes one to two times per week. The instructor informed the students in advance when to expect a quiz and what material the quiz would cover. Quiz content included material covered in the prior class period as well as content assigned for the current period. The instructor generally integrated quizzes into lesson plans and included conceptual questions and problems requiring analysis. Grading was based on student performance ([correct responses] / [total questions]). Clicker quizzes accounted for 6% and 7.5% of the total course grade in the last two semesters, respectively.
Finally, during the two semesters between Participation Mode and Quiz mode, the instructor engaged in what we refer to as Mixed Mode. Mixed Mode integrates both Participation and Quiz grading rubrics into the clicker grade: four percent for class participation plus four percent for quiz performance.
Exhibit 8 presents the descriptive statistics, one-way ANOVA, and post-hoc means tests of Satisfaction by Teaching Mode. Students reported the lowest satisfaction with the clickers when questions were created during the class and graded based only on participation (mean = 3.49), and the highest satisfaction when questions were prepared in advance and graded based on performance (mean = 3.87). The one-way ANOVA indicates a significant difference between teaching modes, though the post-hoc comparisons show that the only significant difference is between Participation Mode and Quiz Mode (p ≤ .035).
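The one-way ANOVA and post-hoc comparisons of this kind can be approximated as follows. The sketch uses placeholder data, an assumed TeachingMode column name, and Tukey's HSD as the post-hoc procedure for illustration; it is not necessarily the specific post-hoc test reported in Exhibit 8.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder data standing in for Satisfaction scores across the three teaching modes.
rng = np.random.default_rng(0)
modes = rng.choice(["Participation", "Mixed", "Quiz"], size=192)
satisfaction = rng.normal(3.7, 0.8, size=192) + np.where(modes == "Quiz", 0.3, 0.0)
df = pd.DataFrame({"TeachingMode": modes, "Satisfaction": satisfaction})

# One-way ANOVA of Satisfaction by Teaching Mode.
anova_fit = smf.ols("Satisfaction ~ C(TeachingMode)", data=df).fit()
print(sm.stats.anova_lm(anova_fit, typ=2))

# Post-hoc pairwise comparisons between the three modes.
tukey = pairwise_tukeyhsd(endog=df["Satisfaction"], groups=df["TeachingMode"], alpha=0.05)
print(tukey.summary())
```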

Effects of Teaching Mode on Factors
In addition to having a direct effect on Satisfaction, Teaching Mode also appears to indirectly affect Satisfaction through Learning and Environment, as illustrated by the significance of the F-tests (p < .01 and p < .01, respectively) shown in Exhibit 9a. The post-hoc comparisons (Exhibit 9b) show that students reported significantly higher direct learning outcomes (Learning) when the clickers were used in Quiz Mode (mean = 3.93) than in either Participation Mode (mean = 3.46) or Mixed Mode (mean = 3.54). Additionally, students reported significantly higher scores on Environment in Quiz Mode as compared to Mixed Mode (p < .01), although students did not report significantly higher Environment scores in Quiz Mode as compared to Participation Mode (p ≤ .20). We observe no significant differences in Class Interaction (p ≤ .47).


Interaction between Gender and Teaching Mode
In a separate analysis (not reported), we found no significant interaction between Gender and Teaching Mode on Satisfaction. However, since both Gender and Teaching Mode are related to at least two of the factors, we believe a Gender-by-Teaching-Mode interaction is more likely to appear with one or more of the three factors. Moreover, the three constructs are related, as indicated by the inter-factor correlations of 0.64 (Learning x Environment), 0.64 (Learning x Class Interaction), and 0.49 (Environment x Class Interaction) (see Exhibit 4a). Therefore, we explore the relationship of both Gender and Teaching Mode to the combination of Learning, Environment, and Class Interaction with a 3 x 2 MANOVA (see Exhibit 10). The multivariate tests indicate that the overall model is significant, likely due to the significant main effects of both Gender and Teaching Mode, but we find no Gender-by-Teaching-Mode interaction.
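statsmodels provides a MANOVA interface that mirrors this 3 x 2 design. The following is a sketch under the same placeholder-data and variable-name assumptions as the earlier examples, not the exact analysis reported in Exhibit 10.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Placeholder data standing in for the three factor composites, Gender, and Teaching Mode.
rng = np.random.default_rng(0)
n = 192
df = pd.DataFrame({
    "Learning": rng.normal(3.6, 0.8, n),
    "Environment": rng.normal(3.8, 0.8, n),
    "ClassInteraction": rng.normal(3.2, 0.8, n),
    "Gender": rng.choice(["M", "F"], n),
    "TeachingMode": rng.choice(["Participation", "Mixed", "Quiz"], n),
})

# 3 x 2 MANOVA: three dependent variables, Teaching Mode (3 levels) by Gender (2 levels).
manova = MANOVA.from_formula(
    "Learning + Environment + ClassInteraction ~ C(TeachingMode) * C(Gender)",
    data=df,
)
print(manova.mv_test())  # Wilks' lambda, Pillai's trace, etc. for each effect
```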

Graphs of these relationships are shown in Exhibit 11a, and the descriptive statistics for each treatment level are presented in Exhibit 11b. Each graph in Exhibit 11a shows how the mean score on a factor is affected by Gender and Teaching Mode. Notice the consistent Gender effect: males consistently rated each factor higher than females for all Teaching Modes. This is consistent with males having a higher preference for, or tolerance of, technology and gadgetry, as mentioned in Flansburg (2004).




The effects of Teaching Mode on the factors are much less straightforward. We observe significant differences between Teaching Modes on Learning and Environment, but no significant differences between Teaching Modes on Class Interaction. Interestingly, the order of teaching mode ratings is not consistent. Students in Quiz Mode rated Learning significantly higher than students in either Mixed Mode or Participation Mode (see the post hoc tests for Learning in Exhibit 12). Learning ratings for Participation Mode and Mixed Mode were not significantly different. On the other hand, students in Mixed Mode rated Environment significantly lower than students in either Quiz Mode or Participation Mode (see Exhibit 12)—Environment ratings in Quiz Mode and Participation Mode were not significantly different.

The Teaching Mode effects discussed above are consistent across Gender and may indicate that perceived Learning improves as the stakes get progressively higher, whereas Environment suffers from the mixed signals conveyed by the Mixed Mode grading scheme: students earn points just for participating, but they are also marked down if they do not perform well.
DISCUSSION
We designed this study to determine whether or not it is worth the time, money, and effort to use clickers in our AIS courses. To achieve our goal, we created an instrument to gauge student satisfaction with clicker use in class. We also wanted to determine specific aspects of clicker use that contributed to, or detracted from, students' overall satisfaction with clicker use.
Our factor analysis revealed three constructs related to Satisfaction: Learning, Environment, and Classroom Interaction. Learning was the dominant factor in our solution, as it accounted for 46% of the sample's variance, and this was 79% (46%/58%) of the total explained variance of our three-factor model. This result did not surprise us because accounting students tend to be career-oriented. Another study (MacGeorge et al., 2008) found this factor, though it was not as strong, perhaps because their sample consisted of a cross-section of all university students. Improving the classroom environment also played a significant role in student satisfaction, though this effect is much smaller in our study as compared to MacGeorge et al. (2008), which we also attribute, at least partially, to differences between sample populations.
Including an overall measure of clicker satisfaction allowed us to extend prior research by examining the relationship between our three factors and overall satisfaction. Our stepwise regression demonstrated that both Learning and Classroom Environment are positively related to overall satisfaction. The Learning-by-Classroom-Environment interaction added little to the explanatory power of the model, likely due to the high correlation (0.64) between the two factors. Classroom Interaction also failed to add significant explanatory power to the model. We identified three possible explanations for this lack of significance: 1) the high correlation between Classroom Interaction and the other two factors (0.64 with Learning and 0.49 with Classroom Environment), 2) inherent weakness in the construct itself (Cronbach's Alpha of 0.57), and 3) the apparently lower importance of this factor to students, as it has the lowest mean score of all three factors (see Exhibit 5).
Our results related to gender may help explain the mixed results in other studies. We found no direct link between Gender and Overall Satisfaction. However, we found a persistent main effect of Gender on each of our three factors: males rate clickers higher than females on Learning, Environment, and Class Interaction. Moreover, this main effect persisted across methods of using clickers in class. In addition to showing that gender indirectly affects students' overall satisfaction with clickers through its effects on Learning and Environment, we extend prior research by measuring the direct and indirect effects of teaching mode on student satisfaction with clickers. Students reported higher overall satisfaction with clickers when their clicker grade depended on performance (Quiz Mode) than when it was based only on participation (see Exhibit 8). Additionally, Quiz Mode was positively related to both Learning and Environment.
We established that Learning and Environment are moderators of the effect of Gender on Satisfaction. Learning and Environment appear to be pure moderators between Gender and Satisfaction because Gender is not directly related to Satisfaction; Gender is related to Learning and to Environment, and both Learning and Environment are related to Satisfaction. We also showed that Learning and Environment moderate the relationship between Teaching Mode and Satisfaction, though Teaching Mode also has a direct relationship with Satisfaction. Moreover, Gender is related to Class Interaction, though we could not establish a relationship between Class Interaction and Satisfaction. Thus, we propose the model of satisfaction with the SRS technology shown in Exhibit 13. The dashed lines indicate relationships that were not significant in our analysis. We divide our model into three main parts:
Learning, Environment and Classroom Interaction. We believe that these three factors influence overall satisfaction. Consistent with prior research, we see a favorable reaction to opportunities to be actively involved in classroom activities and to provide immediate feedback to the instructor and classmates. Our results also indicate that two of the three factors independently moderate the relationships of both Gender and Teaching Mode with Overall Satisfaction.
Gender. Our study suggests that gender has a strong but indirect impact on overall satisfaction through its effect on Learning, Environment, and Classroom Interaction.
Teaching Mode. We demonstrated that variation in the specific use of the SRS (participation vs. quiz) has direct and indirect effects on overall satisfaction. These differences are in conflict with prior research (Graham et al., 2007) in that accounting students preferred the quiz mode rather than the participation or mixed modes of SRS use.



Limitations and Future Research Opportunities
This exploratory analysis of SRS use and the discussion above have several apparent limitations. A main limitation relates to generalizability. The data analyzed in this study was from a single instructor at a single institution. This lack of diversity limits the generalizability of the results. To the extent the factors we found relate to a potential comfort with the use of the technology, some of the observed effects could be a reflection of the instructor's personal comfort level as well. Future research could further evaluate applications of the system across instructors and in a variety of courses and other contexts.
A second group of limitations relates to the survey setting and the instrument we used. The survey instrument asked for students' perceptions, and students may have responded based on the instructor's enthusiasm for the system rather than on their true feelings. In addition, the response options were all presented in the same direction, with the most positive response on the right. It would have been better to include some reverse-coded items so we could identify and remove surveys where the student appeared to circle the same response all the way through the instrument without reading the questions seriously.
The model presented above has not been tested or validated with any formal methods. Further research needs to evaluate and validate this model as one describing the elements of satisfaction with SRS use in AIS and other accounting and business courses.
CONCLUSION
As AIS educators, we are constantly dealing with changes in technology. We devote much of our energy to understanding and incorporating changing conditions into our classrooms. This includes changes in both hardware and software. This study presents information about the use of SRS hardware and software in the classroom. We find significant support for the use of this technology in the AIS classroom.
The practical implication of our research is that AIS students are at least mildly satisfied with the use of clickers in the classroom. This is very encouraging because clickers are an additional cost that students bear. This satisfaction does appear to be moderated by the way the system is used (preference for quiz mode) and the gender of the student (higher ratings from males).

LEARNING, ENVIRONMENT, and INTERACTION by Gender and Teaching Mode Mean Graphs (Exhibit 11a)

Proposed Model of SRS Satisfaction (Exhibit 13)
