Results and discussion

Demographic profile of the respondents

Table 7 shows the demographic characteristics of the respondents. Of the 285 respondents, 164 (57.5%) are male and 121 (42.5%) are female. The data were collected from different universities in China and Pakistan: 142 (49.8%) respondents are Chinese students and 143 (50.2%) are Pakistani students. The age-group section shows that the students are divided into three groups: under 20 years, 20–25 years, and 26 years and above. Most students belong to the 20–25 age group, 140 (49.1%), while 26 (9.1%) are under 20 years old and 119 (41.8%) are 26 or above. The fourth and last section of the table shows the students' programs of study: 149 (52.3%) are undergraduates, 119 (41.8%) are graduates, and 17 (6.0%) are postgraduates.

Table 7 Demographic distribution of respondents.

Structural model

The structural model explains the relationships among study variables. The proposed structural model is exhibited in Fig. 2.

Fig. 2
figure 2

Results model for the impact of artificial intelligence on human loss in decision-making, laziness, and safety in education.

Regression analysis

Table 8 reports the direct relationships in the model. The first is between artificial intelligence and loss in human decision-making, with a beta value of 0.277: a one-unit increase in artificial intelligence reduces human decision-making by 0.277 units among university students in Pakistan and China. With a t-value of 5.040, above the threshold of 1.96, and a p-value of 0.000, below 0.05, this relationship is statistically significant. The second relationship is between artificial intelligence and human laziness. Its beta value of 0.689 indicates that a one-unit increase in artificial intelligence increases laziness among Pakistani and Chinese university students by 0.689 units. The t-value of 23.257 exceeds the threshold of 1.96 and the p-value of 0.000 is below 0.05, so this relationship is also statistically significant. The third and last relationship is from artificial intelligence to the security and privacy issues of Pakistani and Chinese university students. Its beta value of 0.686 indicates that a one-unit increase in artificial intelligence increases security and privacy issues by 0.686 units. The t-value of 17.105 exceeds 1.96 and the p-value of 0.000 is below 0.05, indicating that this relationship is likewise statistically significant.
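The significance criteria applied above (|t| > 1.96 and p < 0.05 for a two-tailed test at the 5% level) can be stated compactly in code. This is an illustrative sketch, not part of the original analysis: the function and dictionary names are ours, while the beta, t, and p values are those reported in the text.

```python
# Significance check for the three direct paths reported above, using
# the two-tailed 5% criteria from the text: |t| > 1.96 and p < 0.05.
# Beta, t, and p values are taken directly from the reported results.
PATHS = {
    "AI -> loss in human decision-making": (0.277, 5.040, 0.000),
    "AI -> human laziness": (0.689, 23.257, 0.000),
    "AI -> security and privacy issues": (0.686, 17.105, 0.000),
}

def is_significant(t_value, p_value, t_crit=1.96, alpha=0.05):
    """Two-tailed significance test at the 5% level."""
    return abs(t_value) > t_crit and p_value < alpha

for path, (beta, t, p) in PATHS.items():
    verdict = "significant" if is_significant(t, p) else "not significant"
    print(f"{path}: beta={beta}, {verdict}")
```

All three paths satisfy both criteria, matching the conclusion drawn in the text.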

Table 8 Regression analysis.

Hypothesis testing

Table 8 also indicates that the results support all three hypotheses.

Model fitness

Once the reliability and validity of the measurement model are confirmed, the fitness of the structural model must be assessed in the next step. Several fit measures are available in SmartPLS, such as SRMR, chi-square, and NFI, but most researchers recommend SRMR for assessing model fit in PLS-SEM. When applying PLS-SEM, an SRMR value below 0.08 is generally considered a good fit (Hu and Bentler, 1998). The model fitness table shows an SRMR value of 0.06, below the 0.08 threshold, indicating that the model fits well.
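SRMR itself is the square root of the mean squared difference between the observed and model-implied correlations. A minimal sketch of this computation follows; the correlation matrices are hypothetical illustrations (not the study's data), and definitions vary on whether the diagonal is included — this version includes it.

```python
import numpy as np

def srmr(observed_corr, implied_corr):
    """Standardized root mean square residual: the square root of the
    mean squared difference between the observed and the model-implied
    correlations, taken over the lower triangle of the matrices."""
    obs = np.asarray(observed_corr, dtype=float)
    imp = np.asarray(implied_corr, dtype=float)
    rows, cols = np.tril_indices_from(obs)
    residuals = obs[rows, cols] - imp[rows, cols]
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical 3-indicator example: an SRMR below 0.08 would indicate
# good fit under the Hu and Bentler (1998) criterion used in the text.
observed = [[1.00, 0.52, 0.48],
            [0.52, 1.00, 0.55],
            [0.48, 0.55, 1.00]]
implied = [[1.00, 0.50, 0.50],
           [0.50, 1.00, 0.50],
           [0.50, 0.50, 1.00]]
print(srmr(observed, implied) < 0.08)
```

With the small residuals in this toy example, the SRMR comes out well under the 0.08 cutoff.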

Predictive relevance of the model

Table 9 shows the model's predictive power. Since the model has three dependent variables, there are three predictive values, one for each. The threshold for predictive relevance is a Q2 value greater than zero, and Q2 values of 0.02, 0.15, and 0.35 indicate that an independent variable has low, moderate, or high predictive relevance, respectively, for a given endogenous construct (Hair et al., 2013). Human laziness has the highest predictive relevance, with a Q2 value of 0.338, a moderate effect. Safety and security issues have the second-largest predictive relevance, with a Q2 value of 0.314, also a moderate effect. The smallest predictive relevance is for loss in decision-making, with a Q2 value of 0.033, a low effect. A greater Q2 value indicates greater predictive power.
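The Hair et al. (2013) benchmarks above amount to a simple classification rule, sketched here for illustration; the function name is ours, and the Q2 values are those reported in the text.

```python
# Classify Q2 predictive relevance using the benchmarks cited above
# (Hair et al., 2013): 0.02 = low, 0.15 = moderate, 0.35 = high.
def q2_relevance(q2):
    if q2 >= 0.35:
        return "high"
    if q2 >= 0.15:
        return "moderate"
    if q2 >= 0.02:
        return "low"
    return "no predictive relevance"

# Q2 values as reported in the text.
for construct, q2 in [("human laziness", 0.338),
                      ("safety and security issues", 0.314),
                      ("loss in decision-making", 0.033)]:
    print(f"{construct}: Q2={q2} -> {q2_relevance(q2)}")
```

Note that 0.338 sits just below the 0.35 cutoff, which is why it is classed as moderate rather than high.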

Table 9 IPMA analysis.

Importance performance matrix analysis (IPMA)

Table 10 shows the importance and performance of the independent variable for each dependent variable. Artificial intelligence has the same performance, 68.78%, for all three dependent variables: human laziness, loss in decision-making, and safety and security. Its importance is 68.9% for human laziness, 25.1% for loss in decision-making, and 74.6% for safety and security. The table shows that safety and privacy have the highest importance, so performance on this dimension should be increased to meet that importance. Figures 3–5 also show the importance of each of the three variables relative to performance with artificial intelligence.

Table 10 Multi-group analysis (gender).
Fig. 3
figure 3

Importance-performance map—human loss in decision making and artificial intelligence.

Fig. 4
figure 4

Importance-performance map—human laziness and artificial intelligence.

Fig. 5
figure 5

Importance-performance map—safety and privacy and artificial intelligence.

Multi-group analysis (MGA)

Multigroup analysis is a technique in structural equation modeling that compares the effects of the classes of a categorical variable on the model's relationships. The first grouping variable is gender, composed of male and female subgroups. Table 10 shows the gender comparison for all three relationships; the data contain 164 males and 121 females. The p-values for all three relationships are >0.05, which shows that gender does not moderate any of the relationships. Table 10 also shows the country-wise comparison for all three relationships in the model. The p-values for all three relationships are >0.05, indicating no moderating effect of country; by country of origin, the data contain 143 Pakistani and 142 Chinese respondents.
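The MGA decision rule described above reduces to checking whether the p-value of each between-group path difference falls below 0.05. A small sketch follows; the individual p-values are illustrative placeholders (the text reports only that all exceed 0.05), and the function and variable names are ours.

```python
# MGA moderation check: a grouping variable moderates a path when the
# p-value for the between-group difference in path coefficients is
# below 0.05.
def moderates(p_diff, alpha=0.05):
    return p_diff < alpha

# Hypothetical gender-difference p-values for the three paths; the
# text states only that all of them exceed 0.05.
gender_p_values = {
    "AI -> loss in decision-making": 0.62,  # hypothetical
    "AI -> human laziness": 0.41,           # hypothetical
    "AI -> safety and privacy": 0.27,       # hypothetical
}

# No p-value falls below 0.05, so gender moderates none of the paths.
print(any(moderates(p) for p in gender_p_values.values()))
```

The same check, applied to the country grouping, yields the same conclusion of no moderation.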

Discussion

AI is becoming an increasingly important element of our lives, with its impact felt in many aspects of daily life. Like any other technological advancement, it brings both benefits and challenges. This study examined the association of AI with loss in human decision-making, human laziness, and safety and privacy concerns. The results given in Tables 11 and 12 show that AI has a significant positive relationship with all of these variables. The findings also support the view that the use of AI technologies creates security- and privacy-related problems for users, in line with previous research (Bartoletti, 2019; Saura et al., 2022; Bartneck et al., 2021). Using AI technology in an educational organization likewise raises security and privacy issues for students, teachers, and institutions. In today's information age, security and privacy are critical concerns in the use of AI technology in educational organizations (Kamenskih, 2022). Effective use of AI technology requires specific skills, and insufficient knowledge of its use leads to security and privacy issues (Vazhayil and Shetty, 2019). Most educational organizations do not have AI experts to manage the technology, which further increases their vulnerability to security and privacy problems. Even if users have sound skills and organizations have experienced AI managers, no one can deny that a security or privacy control could be broken by mistake and lead to serious problems. Moreover, the fact that people with different levels of skill and competence interact in educational organizations can also lead to the hacking or leaking of personal and institutional data (Kamenskih, 2022). AI is based on algorithms and uses large data sets to automate instruction (Araujo et al., 2020). Any mistake in the algorithms will create serious problems, and unlike humans, the system will repeat the same mistake in making its own decisions.
This also increases the threat to institutional and student data security and privacy. The same challenge arises on the student side: students can easily be victimized because they are not well trained in using AI (Asaro, 2019). As the number of users, the division of competence, and distance increase, so do safety and privacy concerns (Lv and Singh, 2020). The consequences depend on the nature of the attack and the data leaked or used by the attackers (Vassileva, 2008).

Table 11 Model fitness.
Table 12 Predictive relevance of the model.

The findings show that AI-based products and services increase laziness among those who rely more heavily on AI. Although few studies have examined this factor in the past, those available in the literature endorse the findings of this study (Farrow, 2022; Bartoletti, 2019). AI in education fosters laziness in humans: it performs repetitive tasks in an automated manner and does not let humans memorize, use analytical skills, or exercise cognition (Nikita, 2023). This leads to an addictive tendency not to use human capabilities, thus making humans lazy. Teachers and students who use AI technology will slowly and gradually lose interest in doing tasks themselves, which is another important concern about AI in the education sector (Crispin Andrews). Teachers and students are getting lazy and losing their decision-making abilities as much of their work is assisted or replaced by AI technology (Baron, 2023). Posner and Fei-Fei (2020) suggested it is time to change AI for education.

The findings also show that excessive use of AI will gradually lead to the loss of human decision-making power. The results endorse the statement that AI is one of the major causes of the loss of human decision-making power, and several past researchers have likewise found that AI is a major cause of the gradual loss of people's decision-making (Pomerol, 1997; Duan et al., 2019; Cukurova et al., 2019). AI performs repetitive tasks in an automated manner and does not let humans memorize, use analytical skills, or exercise cognition, leading to the loss of decision-making capabilities (Nikita, 2023). An online environment for education can be a good option (VanLangen, 2021), but the physical classroom environment remains the prioritized mode of education (Dib and Adamo, 2014). In a real environment there is a significant level of interaction between teachers and students, which develops the character and civic foundations of the students: students can learn from other students, ask teachers questions, and experience the educational environment. Along with the curriculum, they can learn and adopt many positive understandings (Quinlan et al., 2014), including learning to use their cognitive power to choose among options. Unfortunately, the use of AI technology minimizes real-time physical interaction (Mantello et al., 2021) and the educational environment shared by students and teachers, which has a considerable impact on students' schooling, character, civic responsibility, and their power to make decisions, i.e., to use their cognition. AI technology reduces the cognitive power of humans to make their own decisions (Hassani and Unger, 2020).

AI technology has undoubtedly transformed, or at least affected, many fields (IEEE, 2019; Al-Ansi and Al-Ansi, 2023). Its applications have been developed for the benefit of humankind (Justin and Mizuko, 2017). Since technology assists employees in many ways, they must be aware of its pros and cons and must know its applications in a particular field (Nadir et al., 2012). Technology and humans are closely connected, and the success of one strongly depends on the other; there is therefore a need to ensure the acceptance of technology for human welfare (Ho et al., 2022). Many researchers have discussed users' perceptions of technology (Vazhayil and Shetty, 2019), and many have emphasized its legislative and regulatory issues (Khan et al., 2014). Careful selection is therefore necessary when adopting or implementing any technology (Ahmad and Shahid, 2015). Once imagined only in films, AI now runs a significant portion of technology, e.g., in health, transport, space, and business. As AI has entered the education sector, that sector has been affected to a great extent (Hübner, 2021). AI further strengthened its role in education during the recent COVID-19 pandemic and invaded the traditional way of teaching by providing many opportunities for educational institutions, teachers, and students to continue their educational processes (Štrbo, 2020; Al-Ansi, 2022; Akram et al., 2021). AI applications and technologies such as chatbots, virtual reality, personalized learning systems, social robots, and tutoring systems help the educational environment face modern-day challenges and shape education and learning processes (Schiff, 2021). AI also helps with administrative tasks such as admission, grading, curriculum setting, and record-keeping, to name a few (Andreotta and Kirkham, 2021). It can be said that AI is likely to affect, enter, and shape the educational process, on both the institutional and student sides, to a great extent (Xie et al., 2021).
This phenomenon hosts some questions regarding the ethical concerns of AI technology, its implementation, and its impact on universities, teachers, and students.

The study's findings are similar to those of the report published by the Harvard Kennedy School, which discusses AI concerns such as privacy, the automation of tasks, and decision-making. The report says that AI is not the solution to government problems but helps enhance efficiency; it is important to note that it does not deny the role of AI but highlights the issues. Another study argues that AI-based and human decisions must be combined for more effective decision-making: decisions made by AI must be evaluated and checked, and humans should choose the best from the ones recommended by AI (Shrestha et al., 2019). The role of AI cannot be ignored in today's technological world. It assists humans in performing complex tasks, provides solutions to many complex problems, and supports decision-making. On the other hand, it is replacing humans and automating tasks, which creates challenges and demands solutions (Duan et al., 2019). People are generally concerned about the risks and hold conflicting opinions about the fairness and effectiveness of AI decision-making, with broad perspectives shaped by individual traits (Araujo et al., 2020).

There may be many reasons for these controversial findings, but the cultural factor is considered one of the main ones (Elliott, 2019). According to researchers, people with strong cultural values have not adopted AI, so this cultural constraint remains a barrier to AI influencing their behavior (Di Vaio et al., 2020; Mantelero, 2018). Moreover, privacy is a term whose meaning differs from culture to culture (Ho et al., 2022): in some cultures, people consider minimal interference in personal life a major privacy issue, while in others, people even ignore such things (Mantello et al., 2021). The results are similar to those of Zhang et al. (2022), Aiken and Epstein (2000), and Bhbosale et al. (2020), which focus on the ethical issues of AI in education. These studies show that the use of AI in education is a reason for laziness among students and teachers. In short, researchers are divided on concerns about AI in education, just as in other sectors, but they agree on the positive role AI plays in education. AI in education leads to laziness, the loss of decision-making capabilities, and security and privacy issues, but all of these issues can be minimized if AI is properly implemented, managed, and used in education.

Implications

The research has important implications for technology developers, organizations that adopt the technology, and policymakers. The study highlights the importance of addressing ethical concerns during the development and implementation stages of AI technology. It also provides guidelines for governments and policymakers regarding the issues arising from AI technology and its implementation in any organization, especially in education. AI can revolutionize the education sector, but it has potential drawbacks. The implications suggest that we must be aware of the possible impact of AI on laziness, decision-making, privacy, and security, and that we should design AI systems that minimize this impact.

Managerial Implications

Those associated with the development and use of AI technology in education need to identify the advantages and challenges of AI in this sector and balance those advantages against the challenges of laziness, loss of decision-making, and privacy and security, while protecting human creativity and intuition. AI systems should be designed to be transparent and ethical in all respects. Educational organizations should use AI technology to assist their teachers in routine activities, not to replace them.

Theoretical Implications

One implication of AI in education is a loss of human decision-making capacity. Since AI systems can process enormous amounts of data and produce precise predictions, there is a risk that humans will become overly dependent on AI in making decisions. This may reduce critical thinking and innovation among both students and teachers, which could lower the standard of education. Educators should be aware of how AI influences decision-making processes and must balance the benefits of AI with human intuition and creativity. AI may also affect school security: AI systems can track student behavior, identify potential dangers, and flag situations where students might require more help, but there are worries that AI could be used to unjustly target particular student groups or violate students' privacy. Educators must therefore be aware of the potential ethical ramifications of AI and design AI systems that prioritize security and privacy for users and educational organizations. Another potential impact of AI on education is that it makes people lazier: teachers and learners may become more dependent on AI systems and lose interest in performing activities or learning new skills and methodologies. This might lead to a decline in educational quality and a lack of personal development. Teachers must therefore be aware of the possible detrimental effects of AI on learners' motivation and should create educational environments that motivate learners to participate actively in their education.

Conclusion

AI can significantly affect the education sector. Though it benefits education and assists in many academic and administrative tasks, concerns about the loss of decision-making, laziness, and security cannot be ignored. It supports decision-making, helps teachers and students perform various tasks, and automates many processes. Slowly and gradually, AI adoption and dependency in the education sector are increasing, which invites these challenges. The results show that using AI in education increases the loss of human decision-making capabilities, makes users lazy by performing and automating their work, and increases security and privacy issues.

Recommendations

  1. The designers' foremost priority should be ensuring that AI does not cause ethical concerns in education. Realistically, this is impossible, but at least severe ethical problems (both individual and societal) can be minimized during this phase.
  2. AI technology and applications in education need to be backed by solid and secure algorithms that ensure the security and privacy of the technology and its users.
  3. The biased behavior of AI must be minimized, and the issues of loss of human decision-making and laziness must be addressed.
  4. Dependency on AI technology in decision-making must be reduced to a certain level to protect human cognition.
  5. Teachers and students should be given training before using AI technology.

Future work

  1. Research can be conducted to study other concerns of AI in education that were not studied here.
  2. Similar studies can be conducted in other geographic areas and countries.

Limitations

This study is limited to three basic ethical concerns of AI: loss of decision-making, human laziness, and privacy and security. Several other ethical concerns remain to be studied. Other research methodologies can also be adopted to make the findings more general.