The Influence of Training Environment on Trainee Expertise
Competent computing skills are critical to successful business operations and to the accountants who sustain them. Developing such skills requires not only knowledgeable trainers but also facilities that can deliver instruction to accounting trainees efficiently. Technology-equipped training environments have long been espoused as essential for speeding delivery and enhancing the learning experience of trainees. This study examined the impact of training environment on knowledge and skill set development. Results suggest that there are limits to the extent to which technology-equipped training environments influence learning.
INTRODUCTION AND MOTIVATION
Skilled Information Technology (IT) professionals increasingly influence business operations because the competitive global environment demands competent technical skills as part of an indispensable skill set for many employees, especially accountants. The successful transfer of IT skills may depend on several behavioral characteristics of a trainee, such as computer playfulness, computer anxiety, perceived ease of use, and the application expertise needed to use a specific technology. In addition, a well-ordered, technology-equipped training environment is needed to create the positive expectations required for the retention of critical skills (Becker 1997; Harter and Harter 2004; Rose, Rose, and McKay 2007). As a result, organizations have invested heavily in technology-equipped training environments to better serve and educate their employees (Harter and Harter 2004).
Accountants represent a skilled group of IT professionals who must perform, and perform well, in an era of increased public scrutiny. By necessity, they must continuously update their technology-based skills and master specific accounting knowledge. Constant demands are being made on accounting education to improve accountants' professional skills through courses on new applications, updated spreadsheet functionality, databases, and Windows software (Romney, Cherrington, and Denna 1996). While firms and schools commonly use several different training approaches, the literature generally suggests that high-tech training environments are an essential ingredient for transferring technical knowledge and skills to employees (Rakes 1989; Evans 1998).
Media attention and political debate suggest that high-tech training environments are necessary for the acquisition of technical knowledge and skills (i.e., mastery learning). Touted as essential to the successful acquisition of knowledge and skills, these high-tech training environments exist to equalize the “haves” and “have-nots” with the intent of effectively engaging the learning processes of a wide range of trainees. Successful training environments should reduce anxiety, increase playfulness, and create the impression that technology is easy to use (Hackbarth, Grover, and Yi 2003). As a result, firms, learning centers, and universities are designing, outfitting, and maintaining costly, state-of-the-art high-tech training environments (Valenti 2002). Clearly, this suggests a positive expectation that a training environment will affect trainee behavior, knowledge acquisition, and skill set development.
Therefore, the objective of this study is to explore the relative effectiveness of three different training environments with respect to learning performance in order to answer the question: “To what extent do training environments influence performance through anxiety, playfulness, ease of use, and expertise?”
The remainder of the paper is organized as follows. Previous research is presented in the next section, followed by the research model and hypotheses. The subsequent sections describe the data analysis technique and present the results. The manuscript ends with limitations of the research, implications of the findings, and issues for future research.
PREVIOUS RESEARCH
A training environment includes “all the physical surroundings, psychological or emotional conditions, and social or cultural influences affecting the growth and development of an adult engaged in an educational enterprise” (Emmons and Wilkinson 2001; Hiemstra 1991). An important aspect of the training environment is the use of technology to transfer and share the meaning of technical knowledge and skills (Hackbarth, Grover, and Yi 2003). Nevertheless, simply having access to a high-tech environment does not ensure it will be used, or used well (Thompson, Higgins, and Howell 1991). Thus, instructor training and preparation play a vital role in how technologies are used in a training environment.
Ineffective instructor preparation or poor instructor training with available technologies may lead to poor perceptions of that technology. For instance, one study found subjects reporting significantly less positive attitudes toward videoconferencing following exposure to and use of the technology (Armstrong-Stassen, Landstrom, and Lumpkin 1998). Further, nearly three quarters (71%) of elementary school teachers and three quarters (75%) of middle and high school teachers reported high levels of technology integration in their training environments, but the students did not notice a difference (M2 Communications 2003).
Behavioral characteristics such as computer playfulness, computer anxiety, and perceived ease of use can help to explain the transference of IT skills and tend to be stronger predictors of learning and satisfaction than technology characteristics (Sarbaugh-Thompson and Feldman 1998). Arbaugh (2002) finds that although technological characteristics are important, the primary predictors of successful training experiences are the extent to which participants interact with the technology. Similarly, Hackbarth, Grover, and Yi (2003) found that system expertise was significantly related to perceived ease of use and that computer anxiety and computer playfulness may mediate the effect of system expertise on perceived ease of use.
In our view, training environments within a particular organization tend to have the same kinds of chairs, windows, wall coverings, and other basic features. It is the technology within training environments that differentiates one environment from another. This is especially true when computers, video equipment, and high-tech instructor consoles capable of delivering multimedia presentations are permanently installed (Evans 1998). A high-technology environment may be partially replicated with a computer cart, for instance, but this is only a low-cost (and temporary) fix and does not reflect the environmental design needed to optimize trainee learning. While some expert trainers may compensate for the lack of available technology in a training environment, it is clear that this type of environment limits instructor flexibility to present and evaluate individual learning in innovative ways. Trainers often face asymmetric learning situations, represented by high- and low-technology training environments, in which they must compensate for differing training environments between classes or courses. It is the impact of these asymmetric learning environments that is the focus of our study.
RESEARCH MODEL AND HYPOTHESES
Computer Playfulness, Computer Anxiety, Perceived Ease of Use, and System Expertise have been used predominantly to measure the effects of individual technologies, but not in the context of different training environments. Prior research suggests that the training environment matters and that people learn better in high-tech environments (Becker 1997; Harter and Harter 2004). A high-technology training environment is geared to improve a trainee's familiarity with technology and is particularly appropriate in this study since participants were learning an application (in this case, Microsoft Excel). It should be reemphasized that the objective of this study is not to engage in model testing, but to use an existing model to evaluate the impact of three different training environments on the level of trainee expertise.
To evaluate trainee performance within a training environment, we adapted the model from Hackbarth et al. (2003), as depicted in Figure 1. Hackbarth et al. examined the relationships between computer playfulness, anxiety, and ease of use. Our research extends Hackbarth et al. by examining the impact of Computer Playfulness, Computer Anxiety, and Ease of Use on three expertise variables (Self-Reported Application Expertise, Excel Test Time, and Excel Test Score) within the context of three different training environments. In addition, the model in Figure 1 shows prior-period Test Time, Test Score, and Self-Reported Application Expertise as antecedents to Computer Playfulness, Computer Anxiety, and Ease of Use. Although the training environment is not shown explicitly in Figure 1, the model is evaluated within the context of three different training environments.



[Figure 1: Application Expertise Model (adapted from Hackbarth, Grover, and Yi 2003)]
The high-technology training environment contained individually equipped computer desks facing the instructor console, each capable of either independent or linked access. The instructor console could be viewed directly on six large-screen TVs, three on each side of the room. This layout is conducive to demonstrations, as the training environment provides excellent sight lines. Other multimedia devices could be controlled through the instructor station. Trainees had online access to all teaching materials and software used in the course. The high-technology environment allows trainees to model real-world tasks by exposing them to live or taped demonstrations of the behaviors required for performance rather than working through computer-aided instruction alone.
The traditional-passive training environment supported presentations to a passive audience. No computers were present, and all computer assignments throughout the term were completed outside of class without the benefit of computer demonstrations. This was a typical training environment with integrated desks for trainees and a grease (marker) board at the front.
The hybrid training environment was also designed for presentation to a passive audience and was essentially a combination of the two other training environments. Specifically, the first component mirrored the traditional training environment with one exception: the instructor utilized a workstation at the front of the room, which was used for both lectures and demonstrations. Following the demonstrations, the subjects moved to a room with computers arranged facing forward, where the instructor could move from station to station to address specific questions.
COMPUTER PLAYFULNESS
Computer playfulness refers to an individual's tendency to interact spontaneously with a computer (Martocchio 1992). With experience, trainees are more apt to explore and interact with the computer. Being in a high-technology training environment gives trainees greater opportunity to become familiar with, and occasion to explore, the technologies being used. Venkatesh (1999) theorized that more favorable ease-of-use perceptions existed in game-based training because the method induced a higher level of playfulness and enhanced the user's intrinsic motivation. The more time a trainee has to learn, and the more success the trainee experiences in learning, the more playful the trainee is likely to become; the same holds to the extent that trainers can prepare trainees and manage their expectations of success. Thus, exposure to better training environments can lead to an enhanced training experience and thus greater playfulness with the computer (Hackbarth, Grover, and Yi 2003). Therefore, we hypothesize:
H1: The training environment has a positive effect on computer playfulness, whereby the high-tech environment yields the most positive effect and the traditional environment yields the least positive effect.
COMPUTER ANXIETY
Computer anxiety is the extent to which apprehension or fear occurs when an individual is faced with the possibility of using an Information System (IS) (Simonson et al. 1987; Stone, Arunachalam, and Chandler 1996) and is common among not only students but also experienced professionals (Lamberton, Fedorowicz, and Roohani 2005). Hackbarth et al. (2003) found that system expertise was significantly related to ease of use and that this relationship was mediated by playfulness and computer anxiety; in fact, computer anxiety was a full mediator, having twice the impact of playfulness. Computer anxiety exists when individuals become concerned about the implications of computer use, such as the loss of important data or fear of other possible mistakes (Sievert, Albritton, and Roper 1988; Thatcher and Perrewe 2002). To the extent that a trainer can manage the time trainees need to complete a specific task with an expected degree of competency, trainees should experience less computer anxiety. We would expect that as trainees learn in a higher-technology environment, anxiety will diminish; and as anxiety diminishes, trainees will perceive training materials as easier to use. Therefore, we hypothesize:
H2: The training environment will reduce computer anxiety, whereby the high-tech environment results in the greatest reduction in anxiety and the traditional environment results in the smallest reduction in anxiety.
PERCEIVED EASE OF USE
Perceived Ease of Use is defined as the extent to which a person believes using a technology will be free of effort (Davis 1989; Davis, Bagozzi, and Warshaw 1989). Davis (1989) found that prior training and experience did not have a significant impact on current performance levels. However, baseline performance levels (past achievement) have been shown to have a positive relationship with current performance (Szajna 1996). Individuals can change their perception of the ease of use of a technology over time as they gain expertise with it. For example, it is reasonable to believe that as individuals gain experience with a particular technology, they become more comfortable with that technology and may reach a higher level of expertise. As such, past performance can reasonably be expected to be positively related to ease of use. Venkatesh (1999) finds that trainees using game-based training reported higher levels of ease of use compared with traditional training. Therefore, we hypothesize the following:
H3: The training environment will have a positive effect on perceived ease of use, whereby the high-tech environment yields the most positive effect and the traditional environment will yield the least positive effect.
EXPERTISE
Expertise is the level of knowledge and experience demonstrated by application users, who are typically categorized as novice, intermediate, or expert. Several researchers confirm a causal link between experience and ease of use (Kanfer and Ackerman 1989), further suggesting that Test Time, Test Score, and Application Expertise affect Computer Playfulness, Computer Anxiety, and Ease of Use. Hackbarth et al. (2003) found that prior application expertise is a significant antecedent of ease of use and that experience has no significant effect on ease of use over and above the effects mediated by anxiety and playfulness. Clearly, if the technological level of the training environment can positively affect the application expertise of a trainee, then we would expect lower levels of anxiety and thus a positive impact on ease of use as trainees become more familiar with the training materials. This would occur with a commensurate decrease in test time, an increase in accuracy (test score), and a perception of increased expertise as trainees experience increased success. As trainees build their application expertise, they become more familiar with an application and form a more favorable perception of its ease of use (Curtis and Davis 2003). Thus, we hypothesize:
H4: The training environment will have a positive effect on expertise, whereby the high-tech environment yields the most positive level of expertise and the traditional environment yields the least positive level of expertise.
METHODOLOGY: THE STUDY
There were three distinct phases in this study: the initial assessment, the computer-based training, and the final assessment. During the initial and final assessment phases, two discrete steps were followed. First, trainees completed the survey instrument and then worked through the MS Excel assessment protocol described below. Trainees self-reported their level of expertise in using MS Excel and then provided perceptual evaluations of their Computer Anxiety, Playfulness, and Ease of Use with respect to MS Excel. Trainees were given an initial assessment to determine their baseline test scores and test times. A trainee's test score was the percentage of questions answered correctly; test time was the amount of time taken to complete the assessment. For the six-week training phase of this project, subjects were exposed to one of the three training environments (high-tech, traditional, or hybrid). Following the training phase, subjects completed the final assessment and retook the survey.
Participants were 107 undergraduate subjects (50 males and 57 females) from two public universities. The subjects' average age was 23 years, and 83% of subjects were single. Most subjects had access to computers at home (97%) and were connected online at home (94%). In addition, more than half of the subjects had access to computers at work (56%) and were connected online at work (56%). Fifty percent of subjects used the Internet at least six hours each week for work or school activities, while 32 percent of subjects used the Internet weekly for other reasons.
To examine our hypotheses, we conducted a field study using accounting students taking a core Accounting Information Systems (AIS) course at two large Midwestern universities. Three separate training environments were established: Traditional-Passive, Hybrid, and High-Tech, in the context of each of the three phases described above. At one university, students enrolled themselves into either a high-technology training environment or a traditional-passive training environment without any prior knowledge as to which training environment would be used. Students at the second university were placed into the hybrid training environment. These students were unaware of any other choice of classroom, as no other choices were available.
To evaluate differences in these three separate training environments, trainees in each environment used identical training materials and performed identical training routines for learning the MS Excel application as applied to accounting problems and issues. Application expertise was trained and assessed using a standardized tool, the Training Online Manager/Skills Assessment Manager (TOM/SAM). This tool evaluates trainee Excel application expertise by assigning problems weighted by difficulty and measuring accuracy and time to solve. Example hard, medium, and easy MS Excel questions (with difficulty as defined by TOM/SAM) follow:
- Hard: “Use Conditional formatting to automatically apply Bold to all values between 300 and 450 in cells C6 through C11.”
- Medium: “Enter a formula in cell B12 that uses Absolute references to multiply the value in cell B5 by the value in cell B6.”
- Easy: “Move cells B4 through C6 to cell A4 without using cut and paste.”
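To illustrate, one acceptable solution to the medium item above is to enter the formula =$B$5*$B$6 in cell B12; the dollar signs make the row and column references absolute, so they do not change if the formula is copied to another cell.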
This assessment protocol allowed trainees to solve a range of MS Excel problems with the option of using multiple techniques to arrive at the correct solution. Question complexity mirrored an approximate distribution of 50% easy, 30% medium, and 20% hard in terms of difficulty rankings as determined by the developers of the TOM/SAM software. MS Excel questions were selected by the authors based on the course learning objectives.
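Although TOM/SAM's internal weighting scheme is not published, the scoring protocol described above can be sketched in a few lines of code. The following Python sketch uses purely hypothetical difficulty weights and item data to illustrate how a difficulty-weighted score and a total test time might be computed:

# A minimal sketch of difficulty-weighted scoring in the spirit of the
# TOM/SAM protocol described above. The weights, item mix, and times
# are illustrative assumptions, not TOM/SAM's actual values.

WEIGHTS = {"easy": 1.0, "medium": 2.0, "hard": 3.0}  # hypothetical weights

def weighted_score(items):
    """Percent of available weighted credit earned.
    `items` is a list of (difficulty, correct, seconds) tuples."""
    earned = sum(WEIGHTS[d] for d, correct, _ in items if correct)
    available = sum(WEIGHTS[d] for d, _, _ in items)
    return 100.0 * earned / available

def test_time(items):
    """Total seconds spent: the Test Time measure used in the study."""
    return sum(seconds for _, _, seconds in items)

# A ten-item assessment mirroring the 50/30/20 easy/medium/hard mix:
items = ([("easy", True, 90)] * 5
         + [("medium", True, 180)] * 2 + [("medium", False, 240)]
         + [("hard", True, 300)] + [("hard", False, 320)])
print(round(weighted_score(items), 1))  # 70.6
print(test_time(items))                 # 1670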
Seven items measured Computer Playfulness. These items were preceded by the statement: “The following questions ask you how you would characterize yourself when you use Excel. For each adjective listed below, please circle the number that best matches a description of yourself when you interact with Excel” (Hackbarth, Grover, and Yi 2003). Items 2, 6, and 7 are reverse scored.

Eight items were used to measure Computer Anxiety. These items were preceded by the statement: “The following questions ask you how you would characterize yourself when you use Excel. For each adjective listed below, please circle the number that best matches a description of yourself when you interact with Excel.” Items 3, 6, and 7 are reverse scored.

Four items measured Perceived Ease of Use. These items were preceded by the statement: “The following questions ask you how you would characterize yourself when you use Excel. For each adjective listed below, please circle the number that best matches a description of yourself when you interact with Excel” (Hackbarth, Grover, and Yi 2003).

Information systems researchers argue that system expertise is not an objective time-based function, but rather an individual perception (Morrison and Brantner 1992). Using a measure adapted from Hackbarth et al. (2003), respondents identified themselves as novice, intermediate, or expert users of MS Excel to measure Application Expertise: “In general, how would you best characterize your experience with Excel? Place an X in the box next to the description that best describes your level of expertise with Excel.”
- Novice - You are a beginner computer user with little or no experience using Excel. You can work with Excel using an Excel book, on-line tutorial, or the assistance of a knowledgeable Excel user to help build basic spreadsheets.
- Intermediate - You have adequate knowledge and experience using Excel spreadsheet applications. You presume that your basic spreadsheet knowledge is transferable between different spreadsheet applications (e.g., you could build a simple budget spreadsheet in either Excel or Lotus 1-2-3 with little difficulty). You are able to apply templates and formulas to solve standard problems. You use Excel books and on-line tutorials to a lesser extent than the novice user but still seek answers to questions about lesser-used formulas, formatting, and issues that improve the user interface with Excel.
- Expert - You have practical experience and knowledge using Excel spreadsheet applications. You consider yourself an advanced user even though you may not understand all the features available in Excel. You are reasonably comfortable applying Excel spreadsheet features to ill-defined problems. The spreadsheet knowledge you do have would be easy to transfer to other spreadsheet applications (Lotus 1-2-3, Quattro Pro, etc.). You use a book or on-line tutorial to answer questions about rarely used functions and feel comfortable explaining Excel features to other Novice, Intermediate, and Expert users.
RESULTS
The initial mean values of students' expertise were: test score = 48.0, test time = 2,701 seconds, and self-reported expertise = 1.29. These values represent the initial assessment (at time period 0), prior to the computer-based training, as depicted in Figure 1. The final mean values were: test score = 63.5, test time = 2,361 seconds, and self-reported expertise = 1.82, representing the final assessment (at time period 1) following the computer-based training. On average, then, training produced a gain of 15.5 points in test score and a reduction of 340 seconds (roughly 5.7 minutes) in test time. Table 1 reports the means, standard deviations, item weights and loadings, variable composite reliability, and average variance extracted. Table 1 is discussed further in the following paragraphs.

Item reliability indicates whether the indicators for a particular latent variable measure only that latent variable. Following Hair et al. (2005), only items with loadings greater than or equal to 0.50 should be retained. As depicted in Table 1, each of the factors met this minimum suggested requirement.
Construct validity indicates the degree to which a latent construct is representative of the true construct and is often measured using the composite reliability criterion. This coefficient is somewhat similar to Cronbach's alpha, but is not weighted by the number of items per construct. The measure draws on the standardized loadings and measurement error for each item, and a popular rule of thumb is a minimum of 0.70 (Fornell and Larcker 1981). As shown in Table 1, the convergent validity criteria were satisfied.
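In symbols, for a construct with $n$ indicators having standardized loadings $\lambda_i$, composite reliability is conventionally computed as
$$\rho_c = \frac{\left(\sum_{i=1}^{n} \lambda_i\right)^2}{\left(\sum_{i=1}^{n} \lambda_i\right)^2 + \sum_{i=1}^{n} \operatorname{Var}(\varepsilon_i)},$$
where $\operatorname{Var}(\varepsilon_i) = 1 - \lambda_i^2$ is the measurement error variance of item $i$; values of $\rho_c \geq 0.70$ are taken to indicate acceptable reliability.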
Discriminant validity represents the extent to which measures of a given latent construct differ from measures of other latent constructs in the same model. Essentially, a latent construct should share more variance with its indicators than it shares with other latent constructs. To assess discriminant validity, Fornell and Larcker (1981) suggest the use of Average Variance Extracted (AVE), which is the average variance shared between a construct and its measures. The AVE is obtained by dividing the sum of the squared loadings by the number of items in the construct, whereas the variance shared between two constructs corresponds to the square of the correlation between them. The AVE should be greater than the variance (squared correlation) shared between the latent construct and the other latent constructs in the model. Examination of Table 1 provides evidence of discriminant validity.
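That is, with standardized indicators,
$$\mathrm{AVE} = \frac{1}{n}\sum_{i=1}^{n} \lambda_i^2,$$
and discriminant validity for construct $j$ requires $\mathrm{AVE}_j > r_{jk}^2$ for every other construct $k$ in the model, where $r_{jk}$ is the correlation between constructs $j$ and $k$.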
Table 2 presents the correlation matrix for the variables depicted in Figure 1. As shown, Computer Playfulness, Computer Anxiety, and Ease of Use are all highly correlated. This is consistent with prior literature that has examined these variables (cf. Hackbarth, Grover, and Yi 2003). The correlation between Expertise and Ease of Use is significantly negative. This is surprising given the extant literature suggesting that as users gain experience with an application, Ease of Use becomes less important (Davis 1989; Davis, Bagozzi, and Warshaw 1989).

However, one plausible explanation for this correlation is that as users gain expertise and become experts, they might discover additional advanced features and functionality of an application that they did not know previously existed. They begin to realize how much they do not know. As such, the negative correlation is reasonable. As users receive application training, we expect that they would be able to perform their tasks with more accuracy, more speed, and less anxiety—all of which are reflected in the correlation matrix. It is surprising that no significant correlation existed between Computer Playfulness and the Test Score, Test Time, and reported level of Expertise.
In this study, three separate training environments were established (Traditional-Passive, Hybrid, and High-Tech) within three distinct phases (the initial assessment, the computer-based training, and the final assessment). As discussed earlier, two discrete steps were followed during the initial and final assessment phases. First, the subjects completed the survey instrument and then worked through the MS Excel assessment. Table 3 depicts the subjects' self-reported levels of expertise.

As can be seen in Table 3, trainees were all relatively close in the time it took to complete the initial assessment, as well as in their self-reported level of expertise prior to training. Subjects in the traditional training environment scored marginally lower prior to training. On the other hand, trainees in the hybrid training environment clearly outperformed the subjects in the other training environments on the MS Excel assessment at the end of the course, as evidenced by their higher test scores and faster completion times.
Post-survey interviews with trainees in the Traditional-Passive environment suggest that trainees did not feel as though they had gained the level of expertise that the other training environments provided. One interesting point is that the subjects in the high-tech training environment showed the least improvement in knowledge and skill set development while also showing the largest improvement in the self-reported measure of expertise.
Table 4, Panel A, reports the t-tests for each variable, by training environment, for the initial assessment (the relevant means for the manifest variables are presented in Table 3). On the initial skills and knowledge assessment, the average subject in each type of training environment started from essentially the same point on each of the variables. However, subjects in the hybrid training environment achieved marginally higher scores on the initial assessment than did subjects in the high-tech training environment; specifically, they achieved a higher test score and took less time on the assessment than did their counterparts in the high-tech training environment.

As can be seen in Table 4, Panel B, the only expertise factor that differed across training environments was the time required to perform the assessment. Specifically, subjects in the hybrid training environment performed the application assessment significantly faster than did subjects in both the high-tech and traditional training environments. However, neither the test score received nor the subjectively determined level of expertise differed across the three training environments. Further, subjects in the traditional training environment experienced higher levels of anxiety than did subjects in the other two training environments. Surprisingly, subjects in the hybrid training environment felt that the target application was not as easy to use as did subjects in the other two training environments. Trainees in the hybrid and technologically advanced classrooms were more playful to begin with, but this was not as important at the end of the training period, possibly being overshadowed by the higher levels of anxiety. It might be hard to be playful when you recognize how much there is to learn.
Given that the subjects in the three training environments differed on several variables in their initial assessment of skills and knowledge, it is important to evaluate the relative degree of improvement following the training phase. The only training environment that did not see a significant improvement in the time to complete the skills and knowledge assessment was the traditional training environment. Both the traditional and the high-tech training environments experienced significant changes in the levels of the model variables. However, the hybrid training environment did not see any significant changes in the levels of ease of use, playfulness, or anxiety.
FURTHER ANALYSIS AND DISCUSSION
In general, Hypotheses H1, H3, and H4 were supported; H2 was not. Surprisingly, Computer Anxiety increased in the High-Tech training environment rather than decreasing. As hypothesized, Playfulness (H1) was more positive in the High-Tech training environment than in the Traditional training environment. H3 was supported: Ease of Use was more positive in the High-Tech training environment than in the Traditional training environment. H4 was also supported: trainees rated their Expertise more positively in the High-Tech training environment than in the Traditional training environment. While we find it useful to present the results of our hypothesis testing, the most interesting results came from the interactions of the different variables within and across the training environments. It is here that we should look for extensions to our study.
The model presented in Figure 1 posited Computer Playfulness, Computer Anxiety, and Ease of Use as three predictors of knowledge and skill set development (i.e., Test Time, Test Score, and MS Excel Expertise) in the context of three training environments. In addition, Figure 1 showed Test Time, Test Score, and MS Excel Expertise acting as antecedents to Playfulness, Anxiety, and Ease of Use. To evaluate these relationships, the structural paths of the model in Figure 1 were estimated for each of the training environments.
Construct differences are evaluated by examining the path coefficients of the model presented in Figure 1. For each training environment, evaluation of the structural paths was carried out using the PLS Graph software package (Gefen and Straub 2005). Statistical significance levels of the estimated path coefficients were determined using the bootstrap procedure. The t-value with n−1 degrees of freedom (where n is the number of sub-samples used in the bootstrap procedure) is the estimated bootstrap path coefficient divided by its standard error.
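That is, for each structural path,
$$t = \frac{\hat{\beta}}{\mathrm{SE}_{\mathrm{boot}}(\hat{\beta})},$$
where $\hat{\beta}$ is the estimated path coefficient and $\mathrm{SE}_{\mathrm{boot}}(\hat{\beta})$ is its standard error computed across the $n$ bootstrap sub-samples; the statistic is referred to a t distribution with $n-1$ degrees of freedom.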
The path loadings for each of the three training environments, and the corresponding t-scores for the model described in Figure 1, are presented in Table 5. The PLS results are indicative of a differential impact of the three training environments on the model. As shown in Table 5, two previously validated paths are significant in each of the three training environments: Playfulness to Ease of Use and Anxiety to Ease of Use. Surprisingly, only in the high-tech environment did Ease of Use positively impact a trainee's Test Score, Test Time, and self-reported assessment of Expertise; that is, only the trainees in the high-tech training environment exhibited a significant path from Ease of Use to Test Score, Test Time, and self-reported degree of Expertise. This is especially interesting in that trainees in the hybrid training environment experienced the highest degrees of Ease of Use, yet this did not translate into higher levels of Expertise.

The results of the PLS analysis generally help to explain the results of the hypothesis testing. The relationship between the training environment and Computer Playfulness is especially interesting in the traditional training environment, where trainees self-reported lower values of Expertise while actually having higher levels of Computer Playfulness than other trainees in the same environment. Similarly, in the hybrid training environment, the better trainees did on their initial MS Excel assessment, the lower their respective levels of Computer Playfulness and Computer Anxiety. This may be the result of prior experience using MS Excel, reflecting a mismatch between a trainee's actual performance and self-reported level of expertise.
LIMITATIONS AND FUTURE DIRECTIONS
Individuals and training environments vary greatly. In our study, we examined three different environments but believe they are representative of many other typical training environments, even though these manipulations have their own inherent limitations. The same instructor taught in both the high-technology and traditional-passive training environments. Even though the classes were standardized and the instructor used the capabilities of each training environment to the maximum extent possible, there could still be differences based on time of day, previous MS Excel experience of the trainees, or other uncontrolled variables.
It could be argued that idiosyncratic differences between instructors could have driven the results. However, we selected TOM/SAM training because of its highly structured methodology in order to minimize instructor differences. As such, we do not believe that this is a significant contributing factor to differences among training environments.
It has been suggested that, during training, trainers provide frequent and appropriate feedback that assists trainees in modifying dysfunctional performance attributions (Martocchio 1994). Future research should focus on more innovative technologies, such as the McGraw-Hill Training Environment Performance System, which allows students to interact in real time with the instructor by keying responses into a keypad that are then displayed on the front overhead screen. More innovative technologies may change the dynamics of the training environment by altering the current study's variables. In effect, they may motivate students to learn more, increase the content of the course, and use the high-tech training environment to its fullest advantage.
CONCLUSIONS
We find that trainees scored the same at the end of the training independent of the training environment. More intriguing is that trainees in the high-technology environment completed the assessment faster, perhaps indicating less anxiety and a greater perception of ease of use. This suggests that high-technology environments may not be fully engaged, or as necessary as some educators believe. It may be that trainees perceive a higher degree of expertise simply because they are in a high-technology training environment. Caporael (1985) suggests that the impact of a training environment will depend on the purposes and emergent uses to which technology is put. We may be creating false expectations for our trainees simply because they are in a high-technology environment. It may come down to good training and motivated learners (Rakes 1989). It may also be that trainees raise their level of effort to meet performance expectations in low-technology environments, further suggesting that we may not be using high-technology environments to their full potential. By increasing course content, demanding more performance from students, and using the high-tech classroom as an aid in the delivery of course content, we may come to truly understand the effect of training environment on expertise. We may be “dumbing down” courses not because of less content, but because we are delivering content more efficiently and slowing the learning process to fill the available time.
Interestingly, past research suggests that early job knowledge training influenced subsequent job knowledge acquisition with a positive increase in performance, and that early performance strongly influenced subsequent performance (Ree and Earles 1991). If high-technology training environments were better at presenting knowledge, we would have expected individuals in high-technology training environments to gain an early performance advantage and then maintain it, all else being equal. This did not happen. Research shows that individuals learn software programs faster while watching the instructor demonstrate the application and its functions. When trainees simply follow along, systematically imitating the instructor, the transfer of knowledge is stymied by the trainee's mimicking rather than learning. From a practical standpoint, a mixture of classroom environments with appropriately tailored lesson plans would be a more efficient and effective use of resources.
