Abstract: INTRODUCTION. The way teaching platforms are used is an important factor in how satisfied students and lecturers are with in-person university teaching. This study aims to identify patterns of behaviour and factors related to the use of teaching platforms in in-person teaching at Spanish universities. METHOD. A questionnaire was designed for teaching staff at Spanish universities. In a first stage, a cluster analysis was performed and the existence of significant differences between clusters in specific uses of the platforms was tested. In a second stage, a descriptive study of the clusters was carried out, examining the relationship between the clusters and demographic variables, perception of platform use, self-perception, and contextual variables. Finally, the circumstances that could promote the use of teaching platforms were analysed for each group. RESULTS. Four clusters were identified, and differences were observed both in the composition of the clusters and in the scores given to the variables measuring perception of the platforms and self-perception of technological competence. DISCUSSION. Not all usage patterns follow an ordering from lowest to highest use of the platform; some seem to reflect different ways of working. The data do not support gender or age stereotypes about teachers' use of technology. If universities wish to promote the proper use of teaching platforms, it would not be advisable to look for a single solution, nor should they focus on increasing the amount of training offered to lecturers.
Keywords: Higher education, Virtual campus, Questionnaire, Statistical analysis
Index: 1. Introduction • 2. Method • 2.1. Instrument • 2.2. Application of the Instrument. Participants • 2.3. Data Analysis • 3. Results • 3.1. Stage 1: Identification of Clusters and Classification of Platform Use Variables • 3.2. Stage 2: Description of the Clusters • 4. Discussion • 5. Conclusions • 6. Bibliographic References
How to cite: Carabias López, S., Arroyo-Barrigüete, J. L., de Asís de Ribera, F., Segura, M. y Hernández, A. (2025). El uso de plataformas digitales para la docencia presencial en las universidades españolas: análisis desde la percepción del profesorado. Revista Complutense de Educación, 36(4), 549-563. https://doi.org/10.5209/rced.96941
The UNESCO World Declaration on Higher Education for the Twenty-First Century explicitly expresses the aspiration for universities to make good use of technological innovations:
“Higher education institutions must set an example in taking advantage of the benefits and potential of new information and communication technologies, ensuring quality and maintaining high standards in educational practices and outcomes, with a spirit of openness, equity, and international cooperation” (UNESCO, 1998, p. 110, translated).
According to Martin et al. (2020), university lecturers consider digital teaching platforms to be the most important digital technology for teaching. These platforms are applications, usually web-based, designed to manage synchronous or asynchronous interactions between teachers and students as part of the learning process (Alshammari et al., 2018). Teaching support through these platforms has been growing in higher education (Rhode et al., 2017), and the exceptional situation generated by the COVID-19 pandemic gave a final boost to their use, especially in relation to synchronous tools (Sayaf, 2023; Thach et al., 2021).
Educational platforms are essential for remote teaching but are also widely used as support for in-person teaching at universities, mainly in relation to the processes involved in distributing material, assessment, communication, and collaboration between teachers and students. Moreover, they can simplify organisation and monitoring tasks (Kultur & Yazici, 2014; Mallon et al., 2023; Washington, 2019). The greater or lesser contribution of these platforms to improving learning processes depends largely on how they are used (Cabero-Almenara et al., 2019; Gómez Contreras et al., 2022). When it comes to promoting measures that enhance efficiency and satisfaction in university teaching and learning processes supported by educational platforms, it is helpful to understand the usage patterns of these platforms, the factors that explain the varying utilisation of their features, and the reasons preventing greater exploitation. However, literature reviews by Al-Nuaimi and Al-Emran (2021), Gómez Contreras et al. (2022), and Salgado-Chamorro et al. (2023) reveal that research on the use of educational platforms as support for in-person teaching is scarce. In Spain, the study by Esnaola-Arribillaga and Bezanilla (2020), on levels of use of a digital platform for in-person teaching at a specific university, stands out. Lucas-Barcia and Roa González (2021) highlight that studies on the use of platforms in Spanish universities are usually limited to a few institutions.
Studies on the use of digital platforms in higher education are also mostly quantitative, and data collection is usually based on questionnaires (Lucas-Barcia & Roa González, 2021), which may be aimed at teaching staff (Cabero-Almenara et al., 2019; Esnaola-Arribillaga & Bezanilla, 2020; Fathema et al., 2015; Fearnley & Amora, 2020; Garone et al., 2019; Lavidas et al., 2022; Thach & Lai, 2021) or students (Kozlova & Pikhart, 2021; Sayaf, 2023; Thach et al., 2021). Other studies use data from platform usage logs (Machajewski et al., 2019; Rhode et al., 2017; Sholeh & Andayati, 2023). Using a single source of information always constitutes a limitation. Machajewski et al. (2019) describe the limitations of using platform usage logs as the sole data source, since it is impossible to integrate platform usage information with variables associated with the teaching staff. Works such as Galura et al. (2023) and Kaewsaiha and Chanchalor (2021) take information from various sources. Qualitative studies have also been developed, such as Walker et al. (2016) and Washington (2019), based on open-ended questionnaires addressed to faculty members.
The present study collects, through a questionnaire, information from teaching staff at 22 Spanish universities on the use of educational platforms to support in-person higher education, with the following objectives:
1. Identify patterns of use of teaching platforms among teaching staff in Spanish universities.
2. Identify variables that may be related to differing levels of use of teaching platforms.
3. Inform the design of measures that could be adopted to promote the use of teaching platforms.
The study is based on a non-experimental, descriptive, and ex-post-facto design, where participant selection was by convenience and the purpose was exploratory, in line with the stated objectives.
An ad hoc questionnaire was designed for university teaching staff, which measures: firstly, the use of a digital platform for various processes associated with in-person teaching; secondly, variables that may be related to the use of the platform; and finally, circumstances that could encourage the use of teaching platforms. An initial version of the questionnaire, which included a final open-ended question for suggestions about the instrument, was answered by a group of 11 lecturers from different areas (education, statistics, mathematics, business economics, law, languages), selected for their knowledge of the subject. Each member contributed their own insights, from perspectives such as specialisation in the application of information and communication technologies to higher education, expertise in questionnaire development, or extensive experience in using platforms for their own teaching and team coordination. The final questionnaire, described below, incorporates the suggestions received.
In formulating the questions about the use of each process, we considered the studies by Al-Hamed (2022), Esnaola-Arribillaga and Bezanilla (2020), Garrote and Pettersson (2007), and Rhode et al. (2017), as well as the authors' teaching experience and the suggestions from the group that commented on the initial version. The questionnaire included 17 questions (see Table 1), all evaluated on a Likert scale from 1 ("Never") to 5 ("Usually"). The questions correspond to tasks carried out with the support of teaching platforms. The first three questions (P1, P2, P3) relate to the distribution of materials. The next four pertain to personalised assessment: P4, P5, and P6 cover asynchronous assessment, and P7 synchronous assessment. The following questions deal with bidirectional communication: P8 and P9 with synchronous communication, and P10 and P11 with asynchronous communication. Two further questions refer to organisational tasks (P12 and P13), two to monitoring tasks (P14 and P15), and two to unidirectional communication (P16 and P17).
| Item | Question |
|---|---|
| P1 | Do you use the platform to distribute written content (PowerPoint presentations, Word documents, articles, etc.) among students? |
| P2 | Do you use the platform to distribute audiovisual content (videos, audio recordings, etc.) among students? |
| P3 | Do you use the platform to share material with other lecturers? |
| P4 | Do you use the platform to receive student submissions? |
| P5 | Do you use the platform to evaluate or provide feedback on student submissions? |
| P6 | Do you use the platform to check for plagiarism in submissions? |
| P7 | Do you use the platform to administer exams to your students? |
| P8 | Do you use the platform to conduct workshops or group tutorials? |
| P9 | Do you use the platform to resolve student queries via video calls? |
| P10 | Do you use the platform to organise forums or chats? |
| P11 | Do you use the platform to communicate with students through messages? |
| P12 | Do you use the platform to schedule activities (planned exercises or assignments)? |
| P13 | Do you use the platform to organise collaborative student work? |
| P14 | Do you use the platform to review student activity reports? |
| P15 | Do you use the platform to check attendance? |
| P16 | Do you use the platform to publish grades? |
| P17 | Do you use the platform to post announcements? |
Regarding the variables that may be related to use of the platform, the literature offers multiple classifications (Salgado-Chamorro et al., 2023; Ziraba et al., 2020). According to the literature review by Al-Nuaimi and Al-Emran (2021), the most frequently considered models are, in order, the TAM (Davis, 1989), the model of DeLone and McLean (2003), and the UTAUT, developed by Venkatesh et al. (2003). It was considered desirable not to discard any potentially relevant factor in the descriptive phase of the cluster analysis. Therefore, the variables proposed in extensions of these theories, particularly those developed for the use of teaching platforms in higher education, were included, along with teaching vocation, suggested by the authors to provide a more comprehensive view of the participants' professional self-perception. All the variables that can be directly influenced by external agents were grouped under factors that could promote the use of teaching platforms, in order to focus the data on the third objective of the work. As usual, demographic variables were collected and, less commonly (Al-Nuaimi & Al-Emran, 2021), so were contextual variables. The selected variables are shown in Table 2, along with the corresponding questions in the questionnaire and the most relevant references.
| Variable | Question | References |
|---|---|---|
| **Demographic Variables** | | Al-Nuaimi and Al-Emran (2021) |
| Age | What is your age? Please write the answer as a number (e.g., 56) | |
| Gender | What is your gender? | |
| **Platform Perception** | | |
| Platform Satisfaction | What is your overall level of satisfaction with the platform you use? Please use a scale from 0 (completely dissatisfied) to 10 (completely satisfied) | DeLone and McLean (2003); García-Murillo et al. (2020) |
| Platform Usefulness | On a scale from 0 to 10, where 0 means "no use" and 10 means "absolutely essential," what level of usefulness do you think digital platforms have in the teaching process? | Davis (1989); Fathema et al. (2015) |
| Workload | How does the use of the platform affect your workload? | Venkatesh et al. (2003) |
| **Self-Perception** | | |
| Teaching Vocation | On a scale from 0 to 10, where 0 means "none" and 10 means "total," how would you rate your teaching vocation? | |
| Technological Competence | On a scale from 0 to 10, where 0 means "none" and 10 means "total," how would you rate your competence in using technology? | Fathema et al. (2015); Fearnley and Amora (2020); Lavidas et al. (2022) |
| **Teaching Context** | | Al-Nuaimi and Al-Emran (2021) |
| Dedication | How do you perform your work at the university? (full-time/part-time) | |
| Field of Knowledge | In which areas do you usually teach (subjects you teach)? | |
| Type of Contract | Is your job at the university permanent (tenured or indefinite contract) or temporary? | |
| Type of University | Is the university where you mainly teach private or public? | |
| Years of Experience | How many years of experience do you have in university teaching (considering all universities where you have taught)? Please write the answer as a number (e.g., 15) | |
| **Circumstances That Could Promote the Use of Teaching Platforms** | Please indicate which of the following circumstances could improve your level of satisfaction with the teaching platform you use (more time for teaching tasks, easier platform usability, more functionalities on the platform, more training sessions, more practical training, technical support for queries, greater recognition by the university). If there are other circumstances that could improve your satisfaction with the platform, please indicate them below. | Davis (1989); Fathema et al. (2015); Fearnley and Amora (2020); Venkatesh et al. (2003) |
Non-probabilistic convenience sampling was used. After verifying the identity of the recipients, emails were sent with a link to the survey in Microsoft Forms. A total of 303 responses were obtained. After the data cleaning process, in which incomplete or invalid observations were eliminated, the final sample consisted of 288 complete, valid, anonymised records corresponding to 288 lecturers who had given informed consent.
Twenty-two Spanish universities are represented in the sample, with a high percentage of responses corresponding to teaching staff at the Complutense University of Madrid (37.8%) and the Pontifical Comillas University (37.5%). A further 18.1% correspond to other public universities, and 6.6% to other private universities.
Those responding have an average age of 50.7 years, with a standard deviation of 8.7 years. 54% identify as female and 46% as male. Their average experience is 19.8 years, with a standard deviation of 10.6 years. 25% have a part-time contract at the university.
The data analysis was performed in two stages. In the first, the aim was to identify patterns based on the platform use dimensions indicated in Table 1. The methodology applied for this objective was hierarchical agglomerative clustering, chosen because it offers greater interpretability than non-hierarchical clustering algorithms. We used Euclidean distance and opted for Ward's linkage method, one of the most commonly used in practice, since, besides being robust to outliers, it tends to generate compact clusters. The presence of significant differences between clusters was confirmed with a Kruskal-Wallis test, with post-hoc analysis for those variables where differences were detected. A non-parametric test was chosen because the conditions required to assume a normal distribution were not met.
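As an illustration of this first stage, the following sketch shows how an analysis of this kind could be reproduced in R. It assumes a hypothetical data frame `responses` containing the 17 items P1–P17 and retains four clusters; the object names are illustrative and the code is not the authors' own script.

```r
library(NbClust)    # index-based suggestions for the number of clusters
library(dunn.test)  # Dunn's post-hoc test after Kruskal-Wallis

# Hypothetical data frame 'responses' with columns P1..P17 (Likert scores 1-5)
responses_std <- scale(responses)                # standardise the items

d  <- dist(responses_std, method = "euclidean")  # Euclidean distance matrix
hc <- hclust(d, method = "ward.D2")              # Ward's linkage, agglomerative

# Check how many clusters the usual validity indices suggest
nb <- NbClust(responses_std, distance = "euclidean",
              min.nc = 2, max.nc = 8, method = "ward.D2")

cluster <- factor(cutree(hc, k = 4))             # four groups, as in the study

# Kruskal-Wallis test per item, followed by Dunn's test with Bonferroni adjustment
for (item in colnames(responses)) {
  print(kruskal.test(responses[[item]], cluster))
  dunn.test(responses[[item]], cluster, method = "bonferroni")
}
```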
In the second stage, a descriptive analysis of each cluster was carried out. This included, firstly, the demographic, contextual, self-perception, and platform perception variables, with the aim of offering insights into their relationship with the use of teaching platforms. We identified which of these variables showed significant differences between clusters using a Kruskal-Wallis test, with post-hoc analysis in those variables where differences were detected. Finally, we described the circumstances mentioned by the teaching staff as possible drivers for promoting the use of teaching platforms within each cluster and tested the equal distribution among clusters of the selected circumstances using chi-square tests and likelihood ratios.
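For the second-stage contingency analyses, a minimal sketch is shown below. It assumes a hypothetical logical vector `selected`, indicating whether each lecturer ticked a given circumstance, together with the `cluster` assignment from the previous sketch; the likelihood-ratio (G) statistic is computed by hand because base R only provides the Pearson chi-square test.

```r
# Hypothetical inputs: 'cluster' (factor with four levels) and 'selected' (TRUE/FALSE)
tab <- table(cluster, selected)

# Pearson chi-square test of equal distribution of the circumstance across clusters
pearson <- chisq.test(tab)
print(pearson)

# Likelihood-ratio (G) test built from the observed and expected counts
obs <- as.numeric(tab)
exp <- as.numeric(pearson$expected)
G   <- 2 * sum(obs[obs > 0] * log(obs[obs > 0] / exp[obs > 0]))
df  <- (nrow(tab) - 1) * (ncol(tab) - 1)
pchisq(G, df = df, lower.tail = FALSE)  # p-value of the likelihood-ratio test
```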
The analysis was performed using IBM SPSS and the R programming environment, employing various libraries: “dplyr” (Wickham et al., 2021), “dunn.test” (Dinno, 2024), “fmsb” (Nakazawa, 2019), “gplots” (Warnes et al., 2019), and “NbClust” (Charrad et al., 2014). The selection of these two tools is due to specific reasons. SPSS was used for the more basic analyses since it is an accessible and reliable tool that allows for quick and efficient statistical analyses. In contrast, for more advanced analyses, such as cluster analysis, R was chosen because it offers greater flexibility in parameterisation and allows for the generation of highly customised graphics.
The results of the cluster analysis are shown in Figure 1, based on the 17 previously standardised questions from Table 1. The presence of four different groups of lecturers is observed (horizontal interpretation of Figure 1). To facilitate the interpretation of the results, Figure 2 shows the radar chart of the four identified clusters.
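A chart of this kind can be produced with the "fmsb" package listed among the tools above; the sketch below is purely illustrative, reuses the hypothetical `responses` and `cluster` objects from the earlier sketches, and is not the authors' actual plotting code.

```r
library(fmsb)

# Mean score of each item (P1..P17) within each cluster
cluster_means <- aggregate(responses, by = list(cluster = cluster), FUN = mean)[, -1]

# radarchart() expects the first two rows of the data frame to be the axis maximum and minimum
radar_df <- rbind(rep(5, ncol(cluster_means)),   # Likert maximum
                  rep(1, ncol(cluster_means)),   # Likert minimum
                  cluster_means)

cols <- c("firebrick", "steelblue", "darkgreen", "goldenrod")
radarchart(radar_df, axistype = 1, plty = 1, plwd = 2,
           pcol = cols, cglcol = "grey", vlcex = 0.8)
legend("topright", legend = paste("Cluster", 1:4),
       col = cols, lty = 1, lwd = 2, bty = "n")
```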
Cluster 1 can be characterised as the group that tends to make the least use of the platforms. It includes only 8.7% of the teaching staff who responded. Even so, its members regularly use the platform for tasks such as distributing materials (P1) or checking submissions for plagiarism (P6). In contrast, Cluster 4, which comprises 31.9% of the teaching staff, is the group that tends to use all the features of the platforms more than the others.
Clusters 2 and 3, with 32.3% and 27.1% of the responses respectively, make intermediate use of the platforms. Differences between them emerge depending on the type of task and tool. A noticeable difference between the two clusters is the habit of sharing material with other lecturers through the platform (P3), which is more frequent in Cluster 2 than in Cluster 3. An interpretation of these results is proposed in the discussion section of the article.
As expected, the Kruskal-Wallis test shows significant differences for all the questionnaire’s variables. The post-hoc analyses allow comparisons between specific processes (see Table 3, which includes the difference and the p-value with Bonferroni adjustment):
- There are significant differences between clusters C1 and C4 for all the tasks that appear in the questionnaire, although for P6 (plagiarism control) these differences are only significant at the 10% level.
- Regarding synchronous tools (questions P7, P8, and P9), no significant differences are observed between clusters C3 and C1, nor between clusters C2 and C4.
- In contrast, for asynchronous assessment and communication tasks (questions P5, P6, and P11), it is Cluster C2 that does not present significant differences with Cluster C1, while Cluster C3 behaves like Cluster C4.
| | | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 | P13 | P14 | P15 | P16 | P17 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| KW | | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| C1-C2 | Diff | -0.5 | -1.3 | -1.3 | -0.8 | -0.4 | 0.4 | -0.9 | -1.3 | -1.6 | -0.6 | -0.2 | -1.6 | -0.8 | -1.0 | -0.7 | -1.4 | -1.0 |
| | P-value | <0.001 | <0.001 | <0.001 | 0.037 | 1.000 | 0.619 | 0.002 | <0.001 | <0.001 | 0.010 | 1.000 | <0.001 | 0.004 | 0.002 | 0.016 | <0.001 | 0.024 |
| | Sig. | *** | *** | *** | ** | | | *** | *** | *** | *** | | *** | *** | *** | ** | *** | ** |
| C1-C3 | Diff | -0.7 | -0.9 | -0.8 | -1.7 | -1.4 | -0.7 | -0.1 | -0.1 | 0.1 | -1.3 | -0.9 | -2.4 | -0.8 | -0.4 | 0.0 | -1.5 | -1.5 |
| | P-value | <0.001 | 0.006 | 0.009 | <0.001 | <0.001 | 0.110 | 1.000 | 1.000 | 1.000 | <0.001 | 0.006 | <0.001 | 0.016 | 0.334 | 1.000 | <0.001 | <0.001 |
| | Sig. | *** | *** | *** | *** | *** | | | | | *** | *** | *** | ** | | | *** | *** |
| C1-C4 | Diff | -0.8 | -2.0 | -1.6 | -1.8 | -2.3 | -0.6 | -1.2 | -1.6 | -1.3 | -1.7 | -1.4 | -2.6 | -2.3 | -2.3 | -1.5 | -2.2 | -2.2 |
| | P-value | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | 0.093 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| | Sig. | *** | *** | *** | *** | *** | * | *** | *** | *** | *** | *** | *** | *** | *** | *** | *** | *** |
| C2-C3 | Diff | -0.2 | 0.3 | 0.5 | -0.8 | -1.0 | -1.1 | 0.8 | 1.3 | 1.7 | -0.7 | -0.7 | -0.7 | 0.1 | 0.5 | 0.6 | -0.1 | -0.5 |
| | P-value | 0.001 | 0.171 | 0.018 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | 0.003 | 0.002 | <0.001 | 1.000 | 0.023 | 0.001 | 1.000 | 0.052 |
| | Sig. | *** | | ** | *** | *** | *** | *** | *** | *** | *** | *** | *** | | ** | *** | | * |
| C2-C4 | Diff | -0.3 | -0.7 | -0.2 | -0.9 | -1.9 | -1.0 | -0.3 | -0.3 | 0.3 | -1.1 | -1.2 | -1.0 | -1.4 | -1.3 | -0.8 | -0.8 | -1.2 |
| | P-value | <0.001 | <0.001 | 0.880 | <0.001 | <0.001 | <0.001 | 0.721 | 0.726 | 0.466 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | 0.001 | <0.001 | <0.001 |
| | Sig. | *** | *** | | *** | *** | *** | | | | *** | *** | *** | *** | *** | *** | *** | *** |
| C3-C4 | Diff | 0.0 | -1.1 | -0.8 | -0.1 | -0.9 | 0.0 | -1.1 | -1.6 | -1.4 | -0.4 | -0.5 | -0.2 | -1.5 | -1.8 | -1.4 | -0.7 | -0.7 |
| | P-value | 1.000 | <0.001 | 0.001 | 1.000 | <0.001 | 1.000 | <0.001 | <0.001 | <0.001 | 0.039 | 0.010 | 0.227 | <0.001 | <0.001 | <0.001 | <0.001 | 0.004 |
| | Sig. | | *** | *** | | *** | | *** | *** | *** | ** | ** | | *** | *** | *** | *** | *** |

p-value: *** 0.01, ** 0.05, * 0.1
The description of the clusters concerning demographic variables is summarised in Tables 4 and 5.
| | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Average |
|---|---|---|---|---|---|
| Age | 52.5 | 52.3 | 47.6 | 51.3 | 50.7 |
| Years of Experience | 19.1 | 21.1 | 17.8 | 20.4 | 19.8 |
| Gender | |||||
| Female | 32% | 42% | 56% | 70% | 54% |
| Male | 68% | 58% | 44% | 30% | 46% |
| | | Age | Gender | |
|---|---|---|---|---|
| | | | Female | Male |
| KW | | 0.003 | <0.001 | |
| C1-C2 | Diff | 0.2 | -10% | 10% |
| | P-value | 1.000 | 1.000 | |
| | Sig. | | | |
| C1-C3 | Diff | 4.9 | -24% | 24% |
| | P-value | 0.048 | 0.349 | |
| | Sig. | ** | | |
| C1-C4 | Diff | 1.2 | -38% | 38% |
| | P-value | 1.000 | 0.009 | |
| | Sig. | | *** | |
| C2-C3 | Diff | 4.7 | -14% | 14% |
| | P-value | 0.002 | 0.499 | |
| | Sig. | *** | | |
| C2-C4 | Diff | 1.0 | -28% | 28% |
| | P-value | 1.000 | 0.002 | |
| | Sig. | | *** | |
| C3-C4 | Diff | -3.7 | -13% | 13% |
| | P-value | 0.015 | 0.637 | |
| | Sig. | ** | | |

p-value: *** 0.01, ** 0.05, * 0.1
The Kruskal-Wallis tests indicate that there are significant differences for both demographic variables (see Table 5). The post-hoc tests show that, in the case of age, Cluster C3 consists of younger teaching staff than the rest of the groups. Regarding gender, in the cluster that makes greatest use of the technologies (C4), there is a significantly higher percentage of women than in clusters C1 and C2. In both variables, we observe evidence contrary to some social stereotypes.
Table 6 summarises the description of the clusters based on the variables for the perception of the teaching platforms. Note that more than half of the teaching staff in Cluster C4 consider that the platforms decrease their workload, while only 20% of Cluster C1 share this opinion, which is consistent with these being the groups that make the greatest and least use of the platforms, respectively.
| | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Average |
|---|---|---|---|---|---|
| Platform Satisfaction | 6.4 | 7.8 | 7.8 | 8.2 | 7.8 |
| Platform Usefulness | 6.3 | 8.5 | 8.6 | 9.2 | 8.6 |
| Workload | |||||
| Decreases | 20% | 37% | 38% | 53% | 41% |
| No effect | 40% | 22% | 19% | 15% | 20% |
| Increases | 40% | 42% | 42% | 32% | 39% |
The Kruskal-Wallis test shows significant differences for the three platform perception variables (see Table 7). The post-hoc tests reveal that such differences always involve clusters C1 and C4, which make the least and greatest use of the platforms. In contrast, there is no significant difference in perception between clusters C2 and C3, which consist of teaching staff who make intermediate, but differently oriented, use of the platforms. These results seem to suggest that teaching staff's perceptions of the platforms are associated with their level of utilisation.
| | | Platform Satisfaction | Platform Usefulness | Workload | | |
|---|---|---|---|---|---|---|
| | | | | Decreases | No effect | Increases |
| KW | | 0.001 | <0.001 | 0.026 | | |
| C1-C2 | Diff | -1.4 | -2.2 | -17% | 18% | -2% |
| | P-value | 0.091 | <0.001 | 0.698 | | |
| | Sig. | * | *** | | | |
| C1-C3 | Diff | -1.3 | -2.3 | -18% | 21% | -2% |
| | P-value | 0.078 | <0.001 | 0.427 | | |
| | Sig. | * | *** | | | |
| C1-C4 | Diff | -1.8 | -2.9 | -33% | 25% | 8% |
| | P-value | 0.001 | <0.001 | 0.024 | | |
| | Sig. | *** | *** | ** | | |
| C2-C3 | Diff | 0.1 | -0.1 | -2% | 2% | 0% |
| | P-value | 1.000 | 0.711 | 1.000 | | |
| | Sig. | | | | | |
| C2-C4 | Diff | -0.4 | -0.7 | -17% | 6% | 10% |
| | P-value | 0.048 | <0.001 | 0.438 | | |
| | Sig. | ** | *** | | | |
| C3-C4 | Diff | -0.4 | -0.6 | -15% | 4% | 11% |
| | P-value | 0.096 | 0.019 | 0.927 | | |
| | Sig. | * | ** | | | |
p-value: *** 0.01, ** 0.05, * 0.1
The scores for the self-perception variables for each cluster are presented in Table 8.
| | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Average |
|---|---|---|---|---|---|
| Teaching Vocation | 8.9 | 9.3 | 8.9 | 9.2 | 9.1 |
| Technological Competence | 7.2 | 7.6 | 7.6 | 8.1 | 7.8 |
According to the Kruskal-Wallis test, no significant differences are observed for the teaching vocation variable, which has a very high score in all groups.
There are significant differences in the self-perception of technological competence. The post-hoc tests show that the score is significantly higher for Cluster C4 than for the rest of the clusters (see Table 9), which is consistent with it being the group that makes the greatest use of the platforms.
| | | Technological Competence |
|---|---|---|
| KW | | 0.008 |
| C1-C2 | Diff | -0.5 |
| | P-value | 1.000 |
| | Sig. | |
| C1-C3 | Diff | -0.5 |
| | P-value | 1.000 |
| | Sig. | |
| C1-C4 | Diff | -1.0 |
| | P-value | 0.025 |
| | Sig. | ** |
| C2-C3 | Diff | 0.0 |
| | P-value | 1.000 |
| | Sig. | |
| C2-C4 | Diff | -0.5 |
| | P-value | 0.021 |
| | Sig. | ** |
| C3-C4 | Diff | -0.5 |
| | P-value | 0.032 |
| | Sig. | ** |
Table 10 summarises the measurements of the contextual variables from the questionnaire for each cluster. According to the Kruskal-Wallis test, significant differences appear only based on whether the university is public or private. Table 11 shows the results of the post-hoc analyses for this variable.
| | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Average |
|---|---|---|---|---|---|
| Working hours | |||||
| Part-time | 28% | 23% | 18% | 25% | 23% |
| Full-time | 72% | 77% | 82% | 75% | 77% |
| Academic field | |||||
| Sciences | 16% | 22% | 12% | 12% | 15% |
| Social Sciences | 48% | 56% | 54% | 55% | 55% |
| Humanities | 20% | 15% | 14% | 20% | 17% |
| Others | 16% | 8% | 21% | 13% | 14% |
| Type of Contract | |||||
| Temporary | 20% | 24% | 21% | 32% | 25% |
| Tenured/Indefinite Contract | 80% | 76% | 79% | 68% | 75% |
| Type of University | |||||
| Public | 40% | 43% | 71% | 62% | 56% |
| Private | 60% | 57% | 29% | 38% | 44% |
| Years of Experience | 19.1 | 21.1 | 17.8 | 20.4 | 19.8 |
| | | Type of University | |
|---|---|---|---|
| | | Public | Private |
| KW | | 0.001 | |
| C1-C2 | Diff | -3% | 3% |
| | P-value | 1.000 | |
| | Sig. | | |
| C1-C3 | Diff | -31% | 31% |
| | P-value | 0.072 | |
| | Sig. | * | |
| C1-C4 | Diff | -22% | 22% |
| | P-value | 0.491 | |
| | Sig. | | |
| C2-C3 | Diff | -28% | 28% |
| | P-value | 0.003 | |
| | Sig. | *** | |
| C2-C4 | Diff | -19% | 19% |
| | P-value | 0.090 | |
| | Sig. | * | |
| C3-C4 | Diff | 9% | -9% |
| | P-value | 1.000 | |
| | Sig. | | |
The clearest differences by type of university are observed between clusters C2 and C3, which do not differ very much in intensity of platform use but do differ in the way they use platforms.
Finally, the questionnaire asked the teaching staff to identify the circumstances that could promote the use of teaching platforms. Table 12 indicates the percentage of teaching staff in each cluster who selected each factor (the majority response in each cluster, which in all cases exceeds 40%, is highlighted in bold). The responses given to the open-ended question did not provide new avenues but tended to make their answers more specific, indicating, for example, the type of features they wanted from the platform.
| | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Overall |
|---|---|---|---|---|---|
| Having more time for your teaching work | 24.00% | **40.90%** | **56.40%** | 37.00% | 42.40% |
| Greater ease of use of the platform | **44.00%** | 36.60% | 41.00% | 30.40% | 36.50% |
| Having technical support for queries | 20.00% | 31.20% | 34.60% | 35.90% | 32.60% |
| Greater recognition by the university | 16.00% | 21.50% | 16.70% | **47.80%** | 28.10% |
| Having more practical training | 24.00% | 28.00% | 29.50% | 27.20% | 27.80% |
| Having more features on the platform | 20.00% | 19.40% | 30.80% | 35.90% | 27.80% |
| Having more training sessions | 1.00% | 11.80% | 20.50% | 14.10% | 14.90% |
One can observe the low rating that increased training receives across all the groups. When the proposal is for such training to be more practical, the score increases considerably, and having technical support to respond to queries is valued even more highly. No significant differences are observed between clusters in any of these three demands (see Table 13).
| | Chi-Square | | Likelihood Ratios | |
|---|---|---|---|---|
| | P-value | Sig. | P-value | Sig. |
| Having more time for your teaching work | 0.012 | ** | 0.011 | ** |
| Greater ease of use of the platform | 0.298 | | 0.297 | |
| Having technical support for queries | 0.499 | | 0.468 | |
| Greater recognition by the university | <0.001 | *** | <0.001 | *** |
| Having more practical training | 0.958 | | 0.957 | |
| Having more features on the platform | 0.074 | * | 0.070 | * |
| Having more training sessions | 0.418 | | 0.436 | |
p-value: *** 0.01, ** 0.05, * 0.1
Overall, the most common demand among faculty members is having more time for teaching work. This is also the most frequent response in clusters C2 and C3.
In Cluster C1, however, the circumstance that most limits the use of platforms is difficulty of use (selected by 44% of its members), which is consistent with this group having the lowest self-perceived technological competence score.
In Cluster C4, the most common response is greater recognition by the University (selected by 47.8% of the teaching staff). It is striking that, in the rest of the groups, this circumstance scores between 16% and 21.5%. In fact, the chi-square and likelihood ratio tests of equiprobability that have been conducted show significant differences at a 1% significance level, as can be seen in Table 13.
The results obtained suggest interesting interpretations in relation to the objectives of the study. A first relevant finding is that greater use of the platforms is related to the perception variables, in the sense of higher perceived usefulness and satisfaction. The minimum and maximum values of these variables correspond to the clusters with the lowest and highest use of the platforms, and there are no differences between clusters that differ in the way they use the platforms but not in their level of use.
Furthermore, the level of use is related to the self-perception of higher technological competence, as pointed out by the study of Thach and Lai (2021) for remote teaching, and Fearnley and Amora (2020) concerning faculty members’ acceptance of platforms in higher education.
Information from the demographic variables does not support stereotypes of greater technology use by men or by younger individuals. In the case of gender, Del-Prete and Cabero-Almenara (2020) provide evidence in the same direction.
Effective measures to promote the use of platforms do not seem to be the same for different groups. In general, the most important factor affecting the use of platforms appears to be time constraints, whose importance was highlighted in Walker et al. (2016). However, for the group that makes the greatest use of platforms, recognition by the university is even more relevant, while the group that makes the least use feels hindered by the difficulty of use. It is worth noting the low rating given to increased training, whose limited impact has been highlighted in previous studies such as Thach and Lai (2021) for remote teaching, and Fearnley and Amora (2020). It is common in the literature to recommend increased training as a measure to promote platforms (Ziraba et al., 2020), and although it would be interesting to study whether its effect might change in another context, in this study it does not seem to be something demanded by teachers. Our results suggest that increasing training would not raise the level of platform use. It is true that the rating rises when the training is specified as practical, which may align with the need for training in didactic uses rather than purely technical ones (Del-Prete & Cabero-Almenara, 2019). This corroborates that working with a single combined construct for the facilities provided to teachers would entail a loss of nuance.
The patterns identified in this work do not correspond solely to greater or lesser use of the platforms. This is important since the literature highlights that it is the type of use that determines the impact on teaching quality. Indeed, two groups are identified whose use of the platforms depends on their features. Their responses on using the platforms to share materials with the teaching team suggest that the difference in the way they use platforms may stem from different ways of organising those platforms: by subject or by group. If this explanation is accepted, organising the platform by group would promote greater use of the platform for assessment (a feature with a positive impact on teaching; Cabero-Almenara et al., 2019), as well as for asynchronous communication tools, organising forums or chats, and scheduling activities. Organising the platform by subject would lead, comparatively, to greater use of synchronous communication tools and automated monitoring.
The proportion of teaching staff from public and private universities differs between these two groups, and this is the only significant contextual factor. Indeed, in public universities, the percentage of teaching staff who state that they often or usually share material with other teachers is 15%, while in private universities it is 31%. When the two largest universities are removed from the sample to avoid any bias they might introduce, the percentages become 18.9% in public universities and 31.6% in private ones. Nothing indicates that the public or private nature of a university determines the way platforms are organised, but there does seem to be a relationship, perhaps driven by another factor that it would be interesting to identify, such as the size of the university.
In relation to the first objective of the study, it can be concluded that patterns of teaching platform use can be identified among teaching staff in Spanish universities, differing both in the level of use and in the way the platforms are used, which can have an impact on learning quality.
Regarding the second objective, it can be concluded that self-perceived technological competence, perceived usefulness, and satisfaction with the platforms seem to be related to the level of use. For the demographic variables, there is no evidence supporting stereotypes related to gender and age. As for contextual variables, the results suggest different working styles between public and private universities.
Regarding the third objective, the measures that would effectively promote the use of platforms do not appear to be common to all teaching staff. Different groups have different demands, but none of them seem to place much value on increased training. To encourage teaching staff who make less use of the platforms, it seems advisable to try to simplify them, while to motivate the more advanced users, making them feel that their work is recognised is most effective. A consultation should be held before implementing concrete measures.
The use of a single source of information constitutes a limitation of the study, as does its local nature. Therefore, the results should be interpreted in terms of the perception of faculty members from Spanish universities. The most evident future line of research is the incorporation of other sources. For instance, understanding students’ perceptions of the impact of different uses of the platforms on learning would lead to more practical recommendations. Collecting information from teaching staff through open-ended questions or in-depth interviews could suggest new relevant variables.
A second limitation arises from variables not considered to date, such as teaching load and involvement in other activities, availability of support resources at the university, or the level of student interaction. Including more variables could answer open questions, such as what causes the differences in style observed between public and private universities.
Al-Hamed, N. (2022). Moodle as a learning management system: perceived efficacy and actual use. Journal of Educators online, 19(3), 1-19. https://doi.org/10.9743/JEO.2022.19.3.2
Alshammari, S. H., Bilal Ali, M. y Rosli, M. S. (2018). LMS, CMS and LCMS: The confusion among them. Science International, 30(3), 455-459.
Al-Nuaimi, M. N. y Al-Emran, M. (2021). Learning management systems and technology acceptance models: A systematic review. Education and Information Technologies, 26(5), 5499-5533. https://doi.org/10.1007/s10639-021-10513-3
Cabero-Almenara, J., Arancibia, M. y DelPrete, A. (2019). Technical and Didactic Knowledge of the Moodle LMS in Higher Education. Beyond Functional Use. Journal of New Approaches in Educational Research, 8(1), 25-33. https://doi.org/10.7821/naer.2019.1.327
Charrad, M., Ghazzali, N., Boiteau, V. y Niknafs, A. (2014). NbClust: An R Package for Determining the Relevant Number of Clusters in a Data Set. Journal of Statistical Software, 61(6), 1-36.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS quarterly, 319-340. https://doi.org/10.2307/249008
DeLone, W. H. y McLean, E. R. (2003). The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems, 19(4), 9-30. https://doi.org/10.1080/07421222.2003.11045748
Del-Prete, A. y Cabero-Almenara, J. (2019). Las plataformas de formación virtual: algunas variables que determinan su utilización. Apertura, 11(2), 138-153. http://dx.doi.org/10.32870/Ap.v11n2.1521
Del-Prete, A. y Cabero-Almenara, J. (2020). El uso del Ambiente Virtual de Aprendizaje entre el profesorado de educación superior: un análisis de género. Revista de Educación a Distancia, 20(62). https://doi.org/10.6018/red.400061
Dinno, A. (2024). dunn.test: Dunn's Test of Multiple Comparisons Using Rank Sums. R package version 1.3.6.
Esnaola-Arribillaga, I. y Bezanilla, M. J. (2020). Levels of Moodle use to support university face-to-face teaching. IEEE Revista Iberoamericana de Tecnologias del Aprendizaje, 15(3), 129-137. http://dx.doi.org/10.1109/RITA.2020.3008376
Fathema, N., Shannon, D. y Ross, M. (2015). Expanding the Technology Acceptance Model (TAM) to examine faculty use of Learning Management Systems (LMSs) in higher education institutions. Journal of Online Learning & Teaching, 11(2).
Fearnley, M. y Amora, J. (2020). Learning Management System Adoption in Higher Education Using the Extended Technology Acceptance Model. IAFOR Journal of Education, 8(2), 89-106. http://dx.doi.org/10.22492/ije.8.2.05
Galura, J. C., Delos Reyes, E. G. y Pineda, J. L. S. (2023). C5-LMS design using Google Classroom: user acceptance based on extended Unified Theory of Acceptance and Use of Technology. Interactive Learning Environments, 31(9), 6074-6083. https://doi.org/10.1080/10494820.2022.2028852
García-Murillo, G., Novoa-Hernández, P. y Rodríguez, R. S. (2020). Technological satisfaction about Moodle in higher education—A meta-analysis. IEEE Revista Iberoamericana de Tecnologías del Aprendizaje, 15(4), 281-290. http://dx.doi.org/10.1109/RITA.2020.3033201
Garone, A., Pynoo, B., Tondeur, J., Cocquyt, C., Vanslambrouck, S., Bruggeman, B. y Struyven, K. (2019). Clustering university teaching staff through UTAUT: Implications for the acceptance of a new learning management system. British Journal of Educational Technology, 50(5), 2466-2483. http://dx.doi.org/10.1111/bjet.12867
Garrote, R. y Pettersson, T. (2007). Lecturers’ attitudes about the use of learning management systems in engineering education: A Swedish case study. Australasian Journal of Educational Technology, 23(3). http://dx.doi.org/10.14742/ajet.1256
Gómez Contreras, J. L., Bonilla Torres, C. A. y Esteban Ojeda, Y. C. (2022). Uso de TIC y TAC en la educación superior: Un análisis bibliométrico. Revista Complutense de Educación. https://doi.org/10.5209/rced.73922
Kaewsaiha, P. y Chanchalor, S. (2021). Factors affecting the usage of learning management systems in higher education. Education and Information Technologies, 26(3), 2919-2939. https://link.springer.com/article/10.1007/s10639-020-10374-2
Kozlova, D. y Pikhart, M. (2021). The use of ICT in higher education from the perspective of the university students. Procedia Computer Science, 192, 2309-2317. https://doi.org/10.1016/j.procs.2021.08.221
Kultur, C. y Yazici, C. (2014). Adoption, diffusion, and implementation of course management systems: A faculty focus. In A. D. Benson y A. Whitworth (Eds.), Research on course management systems in higher education (21-46). Charlotte, NC: Information Age.
Lavidas, K., Komis, V. y Achriani, A. (2022). Explaining faculty members' behavioral intention to use learning management systems. Journal of Computers in Education, 9(4), 707-725. https://doi.org/10.1007/s40692-021-00217-5
Lucas-Barcia, E. y Roa González, J. (2021). Análisis de la producción científica sobre enseñanza universitaria virtual y semipresencial en revistas españolas de alto impacto. Revista complutense de educación, 32(4), 517-616. https://doi.org/10.5209/rced.70887
Machajewski, S., Steffen, A., Romero Fuerte, E. y Rivera, E. (2019). Patterns in faculty learning management system use. TechTrends, 63, 543-549.
Mallon, S., Richards, C. y Rixon, A. (2023). Student and teacher experiences of online synchronous learning. Journal of Applied Research in Higher Education, 15(5), 1688-1705. http://dx.doi.org/10.1108/JARHE-01-2022-0011
Martin, F., Polly, D., Coles, S. y Wang, C. (2020). Examining higher education faculty use of current digital technologies: Importance, competence, and motivation. International Journal of Teaching and Learning in Higher Education, 32(1), 73-86. http://www.isetl.org/ijtlhe/
Nakazawa, M. (2019). fmsb: Functions for Medical Statistics Book with some Demographic Data. R package version 0.7.0.
Rhode, J., Richter, S., Gowen, P., Miller, T. y Wills, C. (2017). Understanding faculty use of the learning management system. Online Learning, 21(3), 68-86. http://dx.doi.org/10.24059/olj.v21i3.1217
Salgado-Chamorro, D. L., Noble-Ramos, V. M. y Gomez-Jaramillo, S. (2023). Adoption of Learning Management Systems in Face-to-Face Learning: A Systematic Literature Review of Variables, Relationships, and Models. International Journal of Learning, Teaching and Educational Research, 22(12), 326-350. http://dx.doi.org/10.1007/s10639-021-10513-3
Sayaf, A. M. (2023). Adoption of e-learning systems: An integration of ISSM and constructivism theories in higher education. Heliyon, 9(2), Article e13014. https://doi.org/10.1016/j.heliyon.2023.e13014
Sholeh, M. y Andayati, D. (2023). Application of k-means algorithm in clustering model for learning management system usage evaluation. Journal of Applied Business and Technology, 4(3), 189-197. http:// dx.doi.org/10.35145/jabt.v4i3.130
Thach, P. N. y Lai, P. H. (2021). Lecturer attitudes and behavioural intentions to use learning management systems in Vietnam. Journal of Open, Flexible and Distance Learning, 25(2), 35-54. https://doi.org/10.61468/jofdl.v25i2.465
Thach, P., Lai, P., Nguyen, V. y Nguyen, H. (2021). Online learning amid Covid-19 pandemic: students' experience and satisfaction. Journal of E-learning and Knowledge Society, 17(1), 39-48. https://doi.org/10.20368/1971-8829/1135293
UNESCO (1998). Declaración mundial sobre la educación superior en el siglo XXI: visión y acción. Revista Educación Superior y Sociedad, 9(2), 97-113. https://www.iesalc.unesco.org/ess/index.php/ess3/article/view/171
Venkatesh, V., Morris, M. G., Davis, G. B. y Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS quarterly, 425-478. https://doi.org/10.2307/30036540
Walker, D. S., Lindner, J. R., Murphrey, T. P. y Dooley, K. (2016). Learning management system usage. Quarterly Review of Distance Education, 17(2), 41-50.
Warnes, G. R., Bolker, B., Bonebakker, L., Gentleman, R., Huber, W., Liaw, A., Lumley, T., Maechler, M., Magnusson, A., Moeller, S., Schwartz, M. y Venables, B. (2019). gplots: Various R Programming Tools for Plotting Data. R package version 3.0.1.1.
Washington, G. Y. (2019). The learning management system matters in face-to-face higher education courses. Journal of Educational Technology Systems, 48(2), 255-275. http://dx.doi.org/10.1177/0047239519874037
Wickham, H., François, R., Henry, L. y Müller, K. (2021). dplyr: A Grammar of Data Manipulation. R package version 1.0.4.
Ziraba, A., Akwene, G. C. y Lwanga, S. C. (2020). The adoption and use of Moodle learning management system in higher institutions of learning: A systematic literature review. American Journal of Online and Distance Learning, 2(1), 1-21. https://doi.org/10.47672/ajodl.489
Funding Information: Ministerio de Ciencia e Innovación (PID2020-116293RBI00) and the Santalucía Chair of Analytics for Education