THE CYBER SECURITY CHALLENGE OVER AI GROWTH

The cybersecurity challenge of AI growth: human intervention as the first and primary emergency trigger in all AI implementations



Abstract

Cybersecurity issues are common in AI systems and IoT devices, yet there is a lack of research on this topic. While numerous studies examine how AI is used to counter cybersecurity threats, very few assess methods for improving AI operations in order to prevent additional cybersecurity issues. This study assesses the different cybersecurity threats that occur within AI operations, with a specific focus on understanding the triggers and factors that lead to human intervention in AI. The various forms of human intervention in AI, along with their causes and effects, are discussed in detail. It was found that such interventions are highly necessary, as AI cannot operate without prompts provided by humans and cannot understand human emotions, and is therefore unable to deliver the best results on its own. Furthermore, hackers and data thieves may gain access to the system, in which case immediate human action is required, which can only be achieved with skilled personnel who understand prompt engineering.

Chapter 1: Introduction

1.1 Brief introduction

The rise of Artificial Intelligence (AI) in recent years has contributed to significant ease of access for a wide range of operations. Since the 2010s, there have been consistent changes and implementations made for the growth of AI. AI also started being implemented within organisational operations around 2012, providing guidance and centralised handling of brand activities. However, as per Vlačić et al. (2021), since 2020 there has been an increase in access to AI solutions for B2C markets. Marketing operations have been the primary goal of such implementations, including the use of predictive models. The availability of AI in the hands of the general population allows for a wide range of activities such as text generation, image-to-text, text-to-image, image generation from prompts and more. However, human interaction is essential to achieving such goals. This study provides a detailed view of the factors influencing human interactions with AI and discusses the importance and necessity of such interactions. It also considers the cybersecurity challenges faced by AI and the impact of human interaction on dealing with such problems.

1.2 Background

AI has the ability to perform more logically than humans, which leads to easier management of complex actions and computing operations. However, it lacks the human emotional aspect and thereby makes mistakes stemming from this difference. While humans understand the ethical values, emotional aspects, thought processes and beliefs of other people, this is not the case for AI, which makes decisions on the basis of logical thinking (Zhou, 2021). Even though human intervention can lead to AI making more balanced decisions, such actions could also introduce unexplainable deviations and unpredictable behaviour. These could negatively affect decision-making and thereby lead to potential crisis situations. Human intervention is therefore a major requirement to ensure that AI systems operate to the best of their abilities (Korteling et al. 2021). This involves AI being used to make plans and decisions which can then be reviewed and tweaked by humans to be more in tune with the emotional aspects of people, without sacrificing predictability.

Additionally, AI makes decisions on the basis of the data it has access to. This may include company data pools or the internet, from which the AI collects information. However, as AI cannot discriminate between true data and biased or wrong data, its output cannot be treated as completely accurate (Janssen et al. 2020). Furthermore, there are security concerns in that AI can collect negative and harmful data with dire consequences for people. Since AI is used in surveillance, advanced military weaponry and autonomous vehicles, harmful data that influences these systems to act against humans can lead to the AI going rogue and causing huge problems in managing the aftermath. Thus, human intervention is necessary to ensure that data is not collected from harmful sources.

1.3 Aims and Objectives

The aims of the study are to identify the cybersecurity challenges arising from AI growth, the roles of humans in managing such challenges, and the impact of human-machine interaction on dealing with such problems. Accordingly, the objectives of the study can be stated as follows:

• To identify the factors leading to human interactions within AI operations

• To assess the cybersecurity challenges faced due to the growth of AI

• To analyse the impact of human interactions in the prevention of these challenges within AI operations

• To recommend additional actions that can be taken in order to deal with the cybersecurity challenges from the growth of AI in recent years

1.4 Research Questions

Based on the discussion above, the following research questions can be framed.

• What are the factors contributing to human interactions within AI-based operations?

• What cybersecurity challenges are usually faced due to the growth of AI?

• What are the impacts of human interactions in the prevention of these challenges within AI operations?

• What additional actions can be taken in order to deal with the cybersecurity challenges faced due to the growth of AI in recent years?

1.5 Rationale

This study takes into account the cybersecurity issues occurring within the use of AI technology and the role of human interaction in dealing with such issues. The artificial intelligence software market has experienced significant growth in recent years. The sector recorded its lowest market revenue in 2018, after which revenue grew consistently to approximately $70 billion by 2023 (explodingtopics.com, 2024). The market is also expected to surpass $100 billion by 2025. Thus, there is high demand for such services.

Figure 1: Revenue in AI software market

(Source: explodingtopics.com, 2024)

The primary issue motivating this study is the fact that the ability of artificial intelligence to collect data from harmful online sources can lead to systems malfunctioning, going rogue or contributing to dire situations. Furthermore, these problems could also enable hackers, unauthorised personnel and data thieves to feed incorrect or harmful data into the AI system, thereby leading to potentially major loss of life, finances or internal data. Additionally, as per Kröger et al. (2021), AI's ability to handle a wide range of data can allow data thieves to tap into the personal details of many people and target them individually. The likelihood of such an issue is highest at this time due to the rise of AI software that can be customised to individual needs. The free availability of AI models in locations such as GitHub and Hugging Face leads to individuals developing their own AI systems for personal or business functions. A lack of knowledge regarding the associated security issues can easily lead to serious negative situations and threats to life.

1.6 Research Outline

Figure 2: Research Outline

(Source: Created by Author)

The structure of the study includes a 5-step process which involves starting with an introduction chapter that presents the goals and objectives of the study followed by the literature review chapter that reviews the concepts involved in the study along with various theoretical implementations. The third chapter includes methodology which refers to the methods used in order to conduct the research. This includes the methods used for data collection, sources used, ethical values followed and more. This is followed by the data analysis section which provides a detailed view of the findings gained from the study in regards to the topic. Finally, the conclusion chapter presents overall findings received followed by recommendations for what can be done further in order to deal with the issue.


Chapter 2: Literature Review

2.1 Chapter Introduction

This section of the study analyses the relevant concepts and theories along with the collection and evaluation of past literature on the topic. Past research described AI as the future, and its applications were considered revolutionary. In 2022 and 2023 that future arrived, with AI applications becoming available to the general population through GPT-3.5 and GPT-4 text-generation technologies, llama.cpp, Stable Diffusion (SD 1.5 and SDXL 1.0) image-generation applications and more. The rise of these technologies also created opportunities for unauthorised personnel to target the vulnerabilities of such systems in order to collect people's personal details, leading to seriously hazardous situations and a rise in cybersecurity issues (Sarker et al. 2021). The factors causing humans to be involved in AI operations, and the ways in which this involvement helps prevent such cybersecurity issues, are presented in detail within this section.

2.2 Concepts of AI and Human-machine interaction


Figure 3: SqUID robots

(Source: bionichive.com, 2024)

The concept of AI refers to a tool, robot or software system that is able to learn on its own and evolve as it gains access to more data. AI was first implemented in robotics, acting as a 'brain' that enables robots to function. A major example is the SqUID robots developed by Bionichive to carry materials from one location to another within warehouses (bionichive.com, 2024). These robots use IoT (Internet of Things) devices such as motion sensors and proximity sensors, which allow easy collection of data for navigation through the warehouse, while a centralised AI system decides the actions to be carried out and the robots operate accordingly. This includes climbing racking in order to carry packages in and out. Eventually, AI applications were polished and improved until they could operate without a robotic body and act as pure software (bionichive.com, 2024). Recent programs such as ChatGPT, Stable Diffusion and DALL-E are examples of such software systems, fulfilling user requests based on data collected from online sources. AI implementations in business operations such as marketing processes, template setup and business plan development have contributed to significant growth and positive impacts for organisations in recent times.


Figure 4: Modern AI applications

(Source: Google Search)

Human-machine interaction mainly refers to the involvement of humans in managing and overseeing AI operations. Unsupervised AI models can collect harmful or inappropriate data with dire consequences, and can thus affect people negatively (Brito et al. 2022). Supervised models, by contrast, involve humans in ensuring that specific forms of data are not collected by the AI systems. This prevents harmful data from entering the AI system and thus also prevents possible harmful impacts on users. Additionally, human involvement in overseeing AI operations deters data thieves, hackers and others from targeting the systems, as they are under constant surveillance by people. The recent software systems presented earlier, such as ChatGPT, Stable Diffusion and DALL-E, are likewise limited in their scope of activities in order to ensure user safety. These tools also operate on the basis of actions taken by people: the user provides prompts for a goal to be achieved, and the AI systems curate content, images, videos and other services on the basis of these requirements.
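As a minimal illustration of the supervised-data idea described above, the following Python sketch routes items matching a blocklist to a human review queue instead of letting them enter a training pool directly. The blocklist keywords and queue names are hypothetical, not taken from any cited system.

```python
# Illustrative supervised ingestion step: content matching a blocklist is
# routed to a human reviewer rather than entering the training pool directly.
# BLOCKLIST and the queue names are hypothetical examples.
BLOCKLIST = {"weapon", "exploit"}

def route(item: str) -> tuple:
    """Return (queue_name, item), sending flagged items to human review."""
    needs_review = any(word in item.lower() for word in BLOCKLIST)
    return ("human_review", item) if needs_review else ("training_pool", item)

print(route("How to bake bread")[0])        # training_pool
print(route("Zero-day exploit sample")[0])  # human_review
```

A real pipeline would use far richer classifiers than keyword matching, but the structural point is the same: a human decision sits between collection and use.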

2.3 Cyber security issues involved with AI operations

In regards to AI operations the various cyber security issues that could occur are as follows.

Data Thefts

Running AI systems that can be accessed online creates opportunities for hackers and data thieves to tap into people's conversations and collect personal details such as phone numbers, email addresses, financial details and more. While financial details are not readily available, it is normal for people to use tools like ChatGPT to create emails or important documents (Sarker et al. 2021). The inclusion of highly sensitive data in an AI tool can lead to its exposure if it is collected by outsiders, so it is essential to prevent the use of such data within online tools. Running AI tools offline, within the computer system or mobile platform rather than online, can be more secure, but security issues can still allow easy collection of data once the device connects to the internet.
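One common mitigation for the prompt-exposure risk described above is to scrub personal details locally before a prompt leaves the machine. The following Python sketch is illustrative only: the regular expressions are simplified assumptions and would miss many real-world formats.

```python
import re

# Simplified, illustrative patterns for common personal details; a real
# redaction tool would need far more robust detection.
PII_PATTERNS = {
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace personal details with placeholder tags before a prompt
    is sent to any online AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Write to jane.doe@example.com, phone +44 7700 900123."))
# Write to [EMAIL], phone [PHONE].
```

Note that card numbers are matched first so the looser phone pattern does not consume them; ordering matters when patterns overlap.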

Data Alteration and integrity failure

The accuracy of data obtained from AI systems is questionable, as organisations usually have a vast amount of data needing evaluation and the AI systems lack an understanding of which aspects are important. An AI system might treat whatever is logically most valuable on paper as the most important aspect, while a requirement that is emotionally valuable may go unnoticed, leading to wrong decisions or the omission of important details (Bonfanti, 2022). Such systems, if left unsupervised, can lead to data alteration in which important aspects linked to emotional factors are ignored by the AI system. Additionally, outsiders and direct competitors of the brand may also try to access these systems in order to alter data, which can seriously harm the brand by affecting its ability to reach its goals or by introducing errors into calculated actions. Depending on the seriousness of the issue, the brand could experience financial failure or even complete failure from which it could not recover.

AI automation for malicious use

AI systems face a high chance of failure from malware. Hackers and unauthorised personnel may use AI programming techniques to develop malware that modifies itself to avoid detection. Such malware can trigger automated actions of the AI system that the owner never intended. Reprogramming the AI to delete or modify important data within the system, or to ignore the owner's commands, could lead to serious hazards, as confidential company information could become public knowledge. AI-based defence systems are common targets for such attacks, since they can give opponents an advantage in warfare (Dash et al. 2022). Similarly, in hospitals, important records can be collected or wiped clean using these AI programs, leading to chaos and problems with patient management. Such issues may be common in acts of terrorism and can seriously jeopardise health and safety.
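One standard defence against the silent modification of records described above is to keep cryptographic fingerprints of known-good data offline and compare them against the live copies. A minimal sketch using Python's standard-library `hashlib` (the record contents are placeholder examples):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that can be stored separately (e.g. offline)
    and later compared against the live data to detect tampering."""
    return hashlib.sha256(data).hexdigest()

# Placeholder example: a known-good record versus a tampered copy.
baseline = fingerprint(b"patient-records-v1")
print(fingerprint(b"patient-records-v1") == baseline)   # untouched data passes
print(fingerprint(b"patient-records-v1X") == baseline)  # any change is caught
```

Integrity checking does not prevent an intrusion, but it ensures that deletion or modification of records is detected quickly rather than discovered during a crisis.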

Apart from these, a large number of other issues can occur. Data poisoning is a common issue involving corrupting the behaviour of AI models (Shen and Xia, 2020). There can also be many false-positive detections, where important aspects of the AI models are flagged as bugs or faulty programming, creating serious problems for the IT team in managing operations. Similarly, such issues can slow down or completely stop AI model operations, as the model begins to function inconsistently.

2.4 Role of human intervention on dealing with the issues

Human intervention is necessary in the use of AI because it can help prevent access to the system by unauthorised people. As per Kyriakou and Otterbacher (2023), even though technological processes are effective and essential for overseeing operations and keeping track of possible unauthorised entry, fraudulent activities and more, these systems are still at risk of being hacked. This is mainly because hackers and data thieves are usually highly skilled in data collection through intrusion and can very easily protect themselves from being caught. However, having people oversee the complete network operations can help ensure that no such issues occur, or that manual methods of protecting the data and preventing a breach can be implemented in the moment.

The development of proper cybersecurity measures, needed to ensure effective organisational operations, is also a major task requiring human intervention. While AI systems can operate according to their programming, they lack the ability to develop cybersecurity measures on their own (Kyriakou and Otterbacher, 2023). Organisations maintain cybersecurity through their own varied processes, with each brand following the legislation in place within its country as well as its own strategies. The use of firewalls to prevent intrusion is a common process used by all organisations. However, as argued by Mahmood et al. (2020), methods such as network trunking are not used by all organisations, being uncommon and mostly confined to IT firms. These differences arise from human intervention and make it harder for hackers and data thieves to collect or delete important company information.

Additionally, humans are also involved in devising ethical guidelines. While AI systems may be able to draft a few rules and regulations acceptable for the organisation to implement, they cannot take ethical factors into account. AI cannot differentiate between what is ethically acceptable and what is not, because it lacks human-like understanding and behaviour. As stated by Saleem et al. (2022), neural mapping within AI to copy human behaviour is possible but is a complex challenge. No AI model yet available can operate in this human-like way, and thus human intervention remains necessary at this stage.

Cybersecurity threats also keep increasing and adapting over time. Hackers and data thieves may attack systems in multiple ways, including brute-force password cracking, using malware to trick the AI systems, and intruding into the system with small, hard-to-detect command-based files that collect information (Chakraborty et al. 2023). Humans are involved in constantly improving the AI systems to prevent these issues, and organisations invest heavily in improving their systems to protect internal data and proprietary methods of operation. Continuous Improvement (CI) is a concept which enables this constant improvement and evolution, helping to prevent these cybersecurity issues (Karamitsos et al. 2020). It involves using Agile principles to ensure continuous learning and updating of security protocols within the network, AI systems, connected devices and more. Human intervention is thereby highly necessary to achieve such results and to implement strategies effectively.
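As a concrete illustration of the brute-force countermeasures mentioned above, the following Python sketch implements a toy account-lockout policy. The class name and thresholds are hypothetical; production systems typically combine lockouts with rate limiting, CAPTCHAs and monitoring.

```python
import time
from typing import Dict, Optional

class LoginThrottle:
    """Toy lockout policy (illustrative thresholds): after `limit`
    consecutive failures, an account is locked for `lockout` seconds."""

    def __init__(self, limit: int = 5, lockout: float = 300.0) -> None:
        self.limit = limit
        self.lockout = lockout
        self.failures: Dict[str, int] = {}
        self.locked_until: Dict[str, float] = {}

    def allowed(self, user: str, now: Optional[float] = None) -> bool:
        # A login attempt is allowed once any existing lock has expired.
        now = time.monotonic() if now is None else now
        return now >= self.locked_until.get(user, 0.0)

    def record_failure(self, user: str, now: Optional[float] = None) -> None:
        # Count consecutive failures; lock the account at the limit.
        now = time.monotonic() if now is None else now
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.limit:
            self.locked_until[user] = now + self.lockout

    def record_success(self, user: str) -> None:
        # A successful login resets the failure counter.
        self.failures.pop(user, None)

throttle = LoginThrottle(limit=3, lockout=60.0)
for _ in range(3):
    throttle.record_failure("alice", now=0.0)
print(throttle.allowed("alice", now=1.0))   # False: account is locked
```

The `now` parameter exists only to make the sketch deterministic for testing; in use, the monotonic clock is consulted automatically.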

Finally, decision-making is the most important human task leading to intervention. While AI systems can provide insights on what decisions could be taken in a situation, human involvement is also necessary to ensure that the decisions are proper and based on facts. As stated by McCarthy (2022), AI systems operate on the basis of logical thinking and therefore provide decisions that are logically sound on paper. However, as argued by Marcos-Pablos and García-Peñalvo (2022), logical thinking is not always applicable in real-life situations; something that is acceptable in theory may not be acceptable in practice. There is a significant need to bring realistic considerations into the process in order to achieve results efficiently, which can only be done through human intervention.

2.5 Factors that trigger the human intervention in AI operations

The various widespread factors that trigger the need for human interventions have been presented as follows.

False Results

As AI collects data from online sources or from a data pool provided to it, such systems may face efficiency problems in their operations. The human mind can comprehend multiple ways of dealing with a situation, whereas AI provides decision-making insights only through its programmed way of handling the data. Logical thinking is the primary driver for lower-end AI devices and software systems, leading to complicated actions or decisions that seem sound on paper but not in real life (McCarthy, 2022). On the other hand, tuning AI systems for greater creative expression, in order to develop more unique solutions, can lead to random results. These extremes can cause the entire system to fail, since the best results usually require multiple strategies combined, which only humans can manage.

Uncertainty

Uncertainty occurs mostly when AI systems are fine-tuned to provide more creative results. While logical thinking leads to a structured process for dealing with issues that may not work in real life, creative thinking allows for activities that might. However, the randomness of the results creates opportunities for failure of the entire system and organisation. This random decision-making makes the resulting strategies uncertain to work as expected, and thus humans must be available to make the final decision.

Legal and Ethical issue

While AI devices and systems can collect data about legal and ethical factors, they are unable to gain a proper grasp of the issues or situations faced. This is mainly because of the lack of human-like emotional thinking, which leads to a lack of understanding of the importance of following legal guidelines as well as ethics (Zhou, 2021). Legal compliance must be maintained by people in order to prevent major backlash from government bodies and major organisations worldwide. It can also help prevent the failure and shutdown of brand operations, or at least penalties due to non-compliance with legal requirements. On the other hand, as argued by Devillers (2021), ethical issues need human understanding, as they are based on emotional concepts which AI systems cannot grasp.

Biased actions

AI systems depend entirely on the data collected from various online sources or data pools for their evaluation. This dependency seriously impairs their ability to recognise possible biases within the data. Human-made assessments can help track actions and activities that may contain biases, enabling strategies to remove biased data. A major example is data collected from a person who talks only about the quality of services from their own brand rather than all brands in the market (Ntoutsi et al. 2020). Because this person works within the brand, they may be biased towards their company, which can render the overall findings and collected data untrue. Human intervention is thereby necessary here, as it can help identify biases and provide cleaner data for evaluation, giving access to the expected findings.

Feedback management

Collecting feedback from people is also a major route to improved operations within the brand. As organisations operate to provide customers with the best quality of products and services, proper feedback management is essential. Collecting this feedback, however, requires human interaction and positive employee-customer relations. The feedback can help improve system operations and lead to higher levels of customer satisfaction and loyalty towards the brand (Raisch and Krakowski, 2021). Once collected, the feedback can be processed using AI systems to develop new and improved decisions, although the results may be uncertain due to the differing opinions and needs of individuals.

2.6 Clark-Wilson Security Model

Figure 5: Clark-Wilson Security Model

(Source: Xu et al. 2022)

The Clark-Wilson security model deals primarily with protecting data from being tampered with. This aligns with the threat of hackers or data thieves releasing malware into the system in order to collect, delete or modify data. The model is built on the concept of the well-formed transaction (Xu et al. 2022), which refers to the idea of having clear reasons for changes to data and acting within the constraints and boundaries of the system in order to prevent any manipulation. Internal data accuracy is the prime goal of the model, as it focuses on protecting data from external threats (Olanrewaju et al. 2022). Under this model, data modification may only be carried out after a series of checks. This is a three-step process: first, data objects can be accessed only through a specific program, which notifies the owner of a possible breach; second, separation of duties is enforced for each user accessing the system in order to prevent further unauthorised entry; and third, detailed auditing ensures that the data has not been manipulated or changed.
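The three-step process above can be sketched in code. In the following illustrative Python fragment, constrained data items (CDIs) change only through transformation procedures (TPs) that each user is explicitly authorised to run, and every attempt is written to an audit log. All names are hypothetical, and the sketch omits the certification rules of the full Clark-Wilson model.

```python
class ClarkWilsonStore:
    """Minimal sketch of Clark-Wilson style enforcement: CDIs may change
    only through TPs a user is authorised for, with full audit logging."""

    def __init__(self, authorisations):
        self.cdis = {}                        # constrained data items
        self.authorisations = authorisations  # user -> set of permitted TP names
        self.audit_log = []                   # every attempt, allowed or not

    def run_tp(self, user, tp_name, tp, item, value):
        # Separation of duties: only authorised users may run this TP.
        if tp_name not in self.authorisations.get(user, set()):
            self.audit_log.append((user, tp_name, item, "DENIED"))
            raise PermissionError(f"{user} may not run {tp_name}")
        # Well-formed transaction: the CDI changes only through the TP.
        self.cdis[item] = tp(self.cdis.get(item), value)
        self.audit_log.append((user, tp_name, item, "OK"))
        return self.cdis[item]

store = ClarkWilsonStore({"clerk": {"deposit"}})
deposit = lambda current, amount: (current or 0) + amount
store.run_tp("clerk", "deposit", deposit, "acct-1", 100)  # allowed
```

An unauthorised caller, such as a hypothetical "intruder" user, would trigger a `PermissionError` and leave a DENIED entry in the audit log, mirroring the notification and auditing steps described above.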

2.7 Literature Gap

Although the literature review presents a detailed analysis of the latest developments in AI growth and cybersecurity challenges, there is still a lack of understanding of subtle aspects related to ethical considerations and accountability in human-AI interactions. The available literature mainly highlights the requirement for human intervention aimed at curtailing security risks that lead to unlawful access to confidential data (Mohanta et al. 2020). Nevertheless, the ethical issues surrounding human supervision, decision-making and responsibility in AI systems remain largely neglected. In particular, the literature lacks a detailed investigation of ethical considerations in human-AI interactions and in situations where AI systems autonomously make decisions that may have societal consequences (Taddeo et al. 2019). Consideration of ethics such as transparency, responsibility and algorithmic bias is vital to ensuring that AI applications reflect human values. Understanding how human operators approach these ethical aspects, especially when presented with choices that could have far-reaching results, is vital for developing a sustainable framework for AI deployment.

In addition, the literature gap includes the study of frameworks and guidelines that can support ethical decision-making in AI operations. Though the Clark-Wilson Security Model is presented as a security-oriented approach, it addresses only data integrity and not the ethical component of decision-making (Bécue et al. 2021). A thorough understanding of the ethical aspects involved in human-AI interactions is necessary for the efficient formulation of the guidelines needed to encourage responsible AI use.

2.8 Summary

The literature review examines the growth of AI applications while highlighting the need for human involvement in the management of cybersecurity risks. It covers possible security risks, the potential use of AI automation for nefarious ends, and the susceptibility of AI systems to biased data. The review presents the Clark-Wilson Security Model for data integrity and emphasises the significance of human-machine interactions in preventing data breaches and manipulation. However, a gap in the literature is noted concerning responsibility and complex ethical issues in human-AI interactions, which calls for more research. The analysis lays the groundwork for an extensive investigation into the moral implications of AI development and cybersecurity issues.


Chapter 3: Methodology

3.1 Chapter Introduction

A research methodology describes the process and techniques used to identify and evaluate data on the research topic. It is a procedure that enables researchers to design the study to meet its objectives using the selected research instruments. It encompasses all significant aspects of research, including data collection techniques, research design, data analysis methods, and the whole framework within which the study is executed. It offers an in-depth action plan that enables researchers to stay on track, resulting in an effective, manageable and smooth process. A clear methodology empowers the reader to understand the methods and approach used to reach the conclusions. The methodology of this research applies paradigms suitable to the cybersecurity challenges of AI growth, which will help in conducting an effective study and delivering conclusive results.

3.2 Research Onion


Figure 6: Research Onion

(Source: Saunders et al. 2007)

The study applies a layered approach, based on the research onion model, to systematically unravel the complexities of AI development, human intervention and cybersecurity challenges. As influenced by Saunders et al. (2007), the philosophy layer of the research onion adopts interpretivism, which relates strongly to the study's focus on interpreting the subjective elements of human encounters with AI. This fits well with the requirement for humans to oversee AI-driven processes, since purely logical systems cannot make sound decisions during emergencies. The inductive nature of the approach layer is a good choice for reflecting the exploratory character of this study (Cheatham et al. 2019). The selected exploratory design supports the open-ended exploration of cybersecurity issues and human-machine interplay identified in the reviewed literature, while the action research strategy matches the pragmatic goal of suggesting real-life solutions. In this way, the research onion provides a scaffolded structure that helps researchers peel back layers to reveal the detailed mechanisms of AI integration and its consequences for cybersecurity under an interpretivist philosophy.

3.3 Research Philosophy

Research philosophy represents the underlying assumptions that form the researcher's theory of knowledge. The chosen philosophy of this study is interpretivism, which argues that reality does not exist by itself but is socially constructed. It thus values human phenomenological experiences and the meanings attributed to phenomena. Unlike positivism, which focuses on objectivity and quantifiable evidence, interpretivism embraces the qualitative nature of research, aiming to depict human interaction with AI along with its contextual and subtle aspects.

For this particular study, interpretivism is the best philosophy to adopt, as it suits the research focus on human-machine interactions and cybersecurity issues. Studying human experiences with AI systems requires an approach that honours idiosyncratic impressions and situational meaning. With the help of interpretivism, it is possible to analyse the varieties of human reactions to AI development in a comprehensive way, contributing to a deeper understanding of the complex situation (Ansari et al. 2022). This aligns with the qualitative data collection methods chosen, which emphasise depth and insight over quantitative measurement, making interpretivism the most suitable and enriching philosophical stance for this research.

3.4 Research Approach

The research approach outlines the method of reasoning used in the study to reach conclusions and insights. Inductive reasoning entails progressing from particular facts to more general conclusions, detecting patterns and trends within data. Deductive reasoning, on the other hand, involves making general statements and then applying them to particular cases (Kaloudi and Li, 2020). For this study on cybersecurity hurdles in AI development, an inductive method is considered ideal. The exploratory nature of this study can only capture the multilevel complexity of human interventions and AI cybersecurity challenges in an open-ended way. Inductive reasoning allows new theories and insights to arise from qualitative data. Given that the research seeks to identify subtle patterns in an increasingly fluid field of AI and cybersecurity, the inductive approach's emphasis on qualitative data is well suited to supporting the exploratory design, delivering a flexible methodology appropriate to such complexities. The inductive methodology derives general features from specific observations through qualitative data analysis, helping to identify patterns, themes and emerging theories within the investigated phenomena (Jain, 2021). This practice is particularly helpful where the research aims at discovering unique realities and meanings. In relation to human-machine interaction and the associated challenges in cybersecurity, the inductive approach allows a more nuanced analysis within the AI growth framework for an exploratory design in which new insights emerge from the data.

3.5 Research Design

The research design is the construction plan for the study, describing all stages of data gathering and processing. The design chosen here is exploratory, favoured for its flexibility and adaptability in probing new and complex phenomena. Exploratory research aims at creating insights, revealing relationships, and detecting new variables or questions for study. The approach is particularly helpful when the existing literature is scarce or when a research topic changes over time (Kaloudi and Li, 2020). The two other common research designs are descriptive, which merely attempts to describe a phenomenon, and explanatory, which aims to identify causation. In the context of research on cybersecurity challenges in AI growth, the exploratory design is optimal. It provides unbounded engagement with human interventions and their role in answering cybersecurity challenges arising from the growth of AI. Since the AI field is characterised by constant change and evolution, an exploratory design facilitates the discovery of new perspectives and relationships, corresponding to the study's overall objective of understanding every angle of the identified issues. Through the exploratory research design, this paper uncovers new insights about various aspects of the phenomenon of interest and provides a dynamic view of the interrelationships between human influences and AI growth risks. The exploratory design helps identify new dimensions, patterns, and insights emerging into themes, allowing the study to respond adaptively (Jain, 2021). This design consideration supports the objective of this research to better understand the multidimensional nuances of AI and cybersecurity.

3.6 Research Strategy

The research strategy refers to the implementation plan of the selected research design and methodology. Action research, a collaborative and iterative approach, is chosen for its pragmatic design aimed at practical solutions to the problem. Technical action research helps address a particular problem in a given context, practical action research aims at the improvement of specific practices or processes, and emancipatory action research intends to empower participants and overcome social inequality. The action research strategy is the best method for analysing cybersecurity challenges in AI growth because it aims at solving real-world problems. The recursive cycle of planning and reflection permits the constant refinement and improvement of strategies and solutions. Its collaborative nature is also applicable here, since stakeholders are actively engaged with the findings that help to combat cyber threats as AI grows.

3.7 Data Sources

Data sources relate to the origin of the information used in the research. The two main categories of data sources are primary and secondary. Primary data sources are first-hand, authentic material that the researcher gathers directly through procedures such as experiments, interviews, and surveys. Secondary data, by contrast, consists of information that has already been collected and published by others. Secondary data sources can be of different types, including academic literature and research outputs such as reports, articles, and other published materials (Soni, 2020). The use of secondary data conforms to the exploratory and qualitative nature of this research on cyber risks in AI development. It builds on pre-existing knowledge and insights drawn from multiple perspectives to offer a broad overview of the current extent of understanding. Drawing on the vast literature and documented cases in the field, human-machine interaction and the cybersecurity issues associated with AI are thoroughly explored.

3.8 Data Collection and Tools

Data collection refers to seeking the information needed to answer the research questions. The two types of data are qualitative and quantitative. Qualitative data provides information on attitudes, behaviours, and experiences, often generated using methods such as interviews, focus groups, and content analysis. Quantitative data is numerical and is analysed using statistical methods. Qualitative data collection is proposed for examining the cybersecurity challenges in AI growth uncovered by the research (Jain, 2021). Qualitative methods provide an in-depth analysis of people's relationships, ethical concerns, and the intricate relations between AI and cybercrime. Frequently used qualitative tools are interviews, content analysis, and case studies. These techniques capture human richness and detail, in line with interpretive philosophical research (Alhayani et al. 2021). Qualitative data can also be generated from primary-source tactics such as open-ended responses and observations to explore the subjective experience of human-machine interactions. The selected method guarantees a full understanding of the problems under consideration, enabling a deeper and more comprehensive analysis of the research outcomes.

3.9 Data Analysis method

Data analysis is a technique for collecting, modelling and evaluating data using different logical and statistical methods. Data analysis is significant in research as it makes the data simpler and more accurate. Its main aim is to procure ultimate insights that are unbiased and confined to the objectives of the research. Data analysis offers a detailed understanding of the issue by providing clear and relevant insights that foster further study (Ravindran, 2019). Here, the data analysis is conducted on the cybersecurity challenges over the growth of AI.

Thematic analysis is the form of data analysis used in the study. Thematic analysis is a qualitative approach that searches data sets for patterns in order to derive themes for the research. It offers a methodological element of data analysis that enables the researcher to interconnect the analysis of the frequency of a theme with the entire content (Vaismoradi and Snelgrove, 2019). Thematic analysis provides accuracy, deepens the overall meaning of the study, and gives an opportunity to acknowledge the potential of the subject in a broad manner. Using this technique, the issue of the cybersecurity challenge over AI growth can be examined through different themes, each focusing on a relevant aspect of the issue to demonstrate the whole issue effectively. Thus, the data analysis conducted will offer a conclusive understanding of the cybersecurity challenges over the growth of AI. It will provide detailed and relevant insights, fostering a wide perspective on the issue that can support effective decision-making and acknowledge the efficacy of the topic.

3.10 Ethical Considerations

Ethical considerations in the study are a group of principles that steer the research practices and designs towards the objectives. The principles include informed consent, anonymity, confidentiality, quality and voluntary participation. Adhering to research ethics enhances the credibility of the study and trust in the reliability of the research. Ethical considerations provide a trusted source of information on which readers can rely (Arifin, 2018). This study offers an ethical piece of work that serves as a reliable and valid source of information. The data has been gathered from appropriate sources that present accurate information, and it has been stored in a way that fosters the confidentiality and dependability of the information. This helps readers trust the information, leading to an effective study.

3.11 Timeline

Table 1: Gantt Chart

(Source: Created by Author)

From the initial step of a broad review of existing literature, everything culminates in the rational deployment of the selected study methodology. The following stages entail qualitative research to provide a detailed contextual understanding of human action in AI (Ansari et al. 2022). The time frame ends with a comprehensive breakdown of the conducted analysis and interpretation of findings, while actionable recommendations are formulated in a systematic flow of research. The timeline gives a well-organised view of the touchpoints that can be considered milestones in the course of the study's development. It starts with an in-depth survey of the literature to provide a broad understanding of what is known. The following steps involve methodological development, qualitative data collection, and intricate analysis. The timeline also allows time to consider findings and create recommendations that are implementable. This systematic approach enables sound progress, whereby every stage contributes in unison to the broad objective of understanding cybersecurity challenges during AI development and human interventions.


Chapter 4: Data Analysis

4.1 Introduction

This chapter involves conducting a detailed analysis of the research concepts. A thematic analysis has been conducted here, with the most common themes identified on the basis of the literature review. These themes are also evaluated in relation to the aims, objectives and research questions in order to ensure that no irrelevant assessments are conducted. After the presentation of findings on the basis of these themes, a discussion of the overall findings gained from both the literature review and the data analysis section is provided. This helps to gain a detailed understanding of the required aims and objectives of the project.

4.2 Theme identification

Based on the collected literature, it was found that human interventions are a necessary part of AI operations. The primary causes were found to be the inability of AI systems to take into account realistic scenarios, the thought processes and beliefs of humans, a lack of emotional understanding, a lack of understanding of legal and ethical issues, and more. However, there was a lack of understanding of the actual impacts of such interventions. Since the first objective involves identifying the factors leading to human interventions while the third objective involves assessing the impacts of these interventions, the first theme was developed to address these points. The first theme is stated as 'Human interventions and their impact on AI operations', which takes into account the causes of the need for human intervention and also presents insights into the impact of these interventions. It helps to address almost 50% of the primary goals of this research.

The second objective involves collecting insights regarding the various cybersecurity challenges that organisations face due to the growth of AI. These challenges refer to security issues that could occur within the brand and the possible impacts of these challenges on the organisation as well as the AI systems. This aspect was evaluated within the literature review section in the form of possible issues and how hackers may be able to access these AI systems. In order to evaluate further, the various security challenges faced by the AI system and connected networks have been analysed, as there are multiple ways to intrude into the system. Therefore, the second theme, developed on the basis of the second objective, is stated as 'Security challenges faced within AI systems and connected networks'. Furthermore, simply discussing the issues is not effective; plans and methods to prevent them must also be devised. This is addressed in the third theme, 'Role of human interventions in managing the security challenges in AI', which helps to understand the role of human intervention in the specific case where AI systems fail to manage security on their own. Finally, other factors that trigger human interventions in AI, along with additional strategies to deal with them, are presented using research and brainstorming, as this is the primary goal of the research topic. This forms the fourth and final theme of the study.

4.3 Thematic Analysis

4.3.1 Human interventions and their impact on AI operations

The primary human interventions, as found previously, involve the role of humans in providing the necessary direction for AI systems to operate, developing strategies and tactics for the operations and safety of the system, and devising policies that are followed to ensure proper results from the systems. It was seen that AI systems are unable to develop operational plans on their own, and human thought processes and operating patterns are necessary in setting them up (Helo and Hao, 2022). Additionally, AI is highly dependent on user input both for data collection and for results presentation. The connectivity of AI to the internet allows for the collection of a wide range of data for proper functionality and supports effective decision-making. However, the collection of this data needs to be overseen by a person for the best results, which can be referred to as the supervision of AI systems. The various impacts of these interventions are stated as follows.

Collecting data from online sources can lead to the collection of a lot of harmful and mixed data. People have differing opinions on a wide range of facts online, which may be collected by the AI system, sometimes leading to incorrect results. These incorrect results mainly stem from biased data collected from multiple sources, and the AI system is unable to distinguish which results are real and which are biased. Humans, on the other hand, are able to make such distinctions on the basis of argumentative assessments drawn from the wide range of data sources available online (Boni, 2021). Additionally, while some actions are possible in theory, they are not applicable in real life, which humans can easily recognise when biased views are involved. Furthermore, the collection of data by AI systems from online sources can include potentially harmful material, such as negative experiences and negative reactions to certain situations, which may lead to the generation of blatantly negative results, or possibly even the failure of the AI system or the system going rogue and out of the organisation's control.

There is also a significant need for complying with legal guidelines and ethical values, which may give rise to privacy issues if these matters are left up to AI systems. For an organisation to use AI for effective operations, the AI systems need to be fed with essential company data for making decisions. However, as argued by Larsson (2020), in order to comply with legal guidelines there is a significant need for the prevention of data breaches and the exposure of private information of people working within the organisation. AI systems are unable to understand these factors, and their results may lead to information about people, such as names, contact details and payment processes, being shared online. It is therefore the task of humans to ensure that the data of employees within the brand is not shared or exposed, as it can lead to vulnerability and possible attacks on them.

4.3.2 Security challenges faced within AI systems and connected networks

In regards to AI systems and connected networks, most security challenges occur when they are implemented alongside legacy systems that have not been updated. As per Sarker (2022), AI systems are implemented with modern strategies and mechanisms that prevent easy access by unauthorized personnel. However, this is not the case for legacy systems, as old and inefficiently operating devices have security problems that can be exploited to intrude into the system. Thus, it is highly important to upgrade the entire system rather than just implementing an additional AI within the network for the brand's operations. As per Gulati et al. (2022), IoT devices are mostly implemented in networks for data collection, and old IoT devices have far more potential for security issues than newer ones, which are constantly being updated and improved.

The security of the entire network to which the AI is connected also plays a significant role in the security of the whole system. Typical setups involve organisations using a Class C IP network, which allows connectivity of up to 254 host devices in a single network (Uroz and Rodríguez, 2022). A major example is an office within an organisation where 200 people work. An example IP network of 199.168.20.0 contains usable host addresses from 199.168.20.1 to 199.168.20.254, with 199.168.20.255 reserved as the broadcast address, thereby allowing access to multiple devices. The use of the old IPv4 contributes to potential failure issues due to lower levels of security, which can be managed with the use of the newer IPv6 method of IP addressing. Additionally, the networks are connected by a main router set up with authentication and login processes to control access to the network (Nair and Nair, 2021). These authentication schemes include WEP, WPA, WPA2 and other forms, which have positive impacts on preventing entry by others. However, there is a significant need for additional strategies, such as the use of Network Address Translation, trunking, DNS implementation and more.
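The address arithmetic described above can be checked with Python's standard ipaddress module. The sketch below uses the example 199.168.20.0 network from the text as a /24 (Class C) subnet; it is an illustration of the addressing concepts, not a configuration recommendation.

```python
import ipaddress

# The example Class C (/24) network discussed above
net = ipaddress.ip_network("199.168.20.0/24")

# A /24 spans 256 addresses; excluding the network address (.0)
# and the broadcast address (.255) leaves 254 usable host addresses
hosts = list(net.hosts())
print(net.num_addresses)    # 256
print(len(hosts))           # 254
print(hosts[0], hosts[-1])  # 199.168.20.1 199.168.20.254

# IPv6 subnets offer a vastly larger address space per network
net6 = ipaddress.ip_network("2001:db8::/64")
print(net6.num_addresses)   # 18446744073709551616
```

The same module can validate addresses and check subnet membership, which is useful when auditing which devices belong to a given office network.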

Network Address Translation (NAT) refers to presenting a specific IP address as a different address to prevent accessibility by others. For example, an IP address of 199.168.20.1 can be shown as 203.170.20.6, thereby keeping the network safe from unauthorized access. The IP addresses can be customized as needed using this method. On the other hand, as argued by Lyu et al. (2022), the use of DNS allows a specific IP address to be referred to by a name. A major example is Google, whose IP address is masked by using the name Google.com as its denotation. Finally, the use of a VPN (Virtual Private Network) can help to prevent the network from being found easily, as the brand uses different network addresses internally from those visible outside. Google uses this method: the main IP address seen externally does not correspond to its internal IP and network addresses. These methods make it difficult to pinpoint the actual network address and thereby prevent access to any of the server devices, the AI system, or the IoT devices that may be connected to the network.
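The address-mapping idea behind NAT can be sketched as a simple translation table. The mappings below reuse the example addresses from the text and are purely illustrative; real NAT is performed by routers at the packet level, not in application code.

```python
# Conceptual NAT translation table: internal (private-side) addresses
# are presented to the outside world as different external addresses.
nat_table = {
    "199.168.20.1": "203.170.20.6",  # example mapping from the discussion above
    "199.168.20.2": "203.170.20.7",
}

def translate_outbound(internal_ip: str) -> str:
    """Return the externally visible address for an internal host."""
    return nat_table[internal_ip]

def translate_inbound(external_ip: str) -> str:
    """Reverse lookup: map an external address back to the internal host."""
    reverse = {ext: internal for internal, ext in nat_table.items()}
    return reverse[external_ip]

print(translate_outbound("199.168.20.1"))  # 203.170.20.6
print(translate_inbound("203.170.20.6"))   # 199.168.20.1
```

The key point the sketch demonstrates is that an outside observer only ever sees the 203.170.20.x addresses, so the internal 199.168.20.x layout stays hidden.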

4.3.3 Role of human interventions in managing the security challenges in AI

Secure integrations are essential within AIs in order to ensure efficient operations of the entire system. To achieve this, human intervention is necessary because, while AI systems are able to work efficiently on their own, their security depends on many other aspects that need to be managed by people. One of the main interventions in this regard is the upgrading of legacy systems to prevent intrusions into the system from other areas. As per Kutscher et al. (2020), IoT devices, routers and similar equipment launched years ago often have outdated technological implementations that can be easily hacked by the modern tools and techniques used by hackers, data thieves and unauthorized people. IoT devices like temperature controllers and motion detectors mostly have serious security issues, as they have access points and nodes that can be easily targeted for intrusion. Thus, human interventions are necessary to integrate more modern IoTs with AI systems to achieve the best results in preventing unauthorized access. Additionally, as per Awouda et al. (2024), encapsulating these modern IoTs with AI systems to develop a digital twin system can help to keep both the IoTs and the AIs safe, thereby leading to smooth operations of the organisation without the fear of being hacked or of facing data deletion, exposure or modification by unauthorized people.

In regards to the network system presented in the previous theme, it can be stated that human interventions are necessary to set it up. The primary tasks involve using cross-cables for connectivity between devices within the network for easy operations, followed by the use of CLI-based routing commands. The use of routing protocols like RIP (Routing Information Protocol) and network monitoring protocols like SNMP (Simple Network Management Protocol) can be useful for managing network operations and keeping records in case of potential intrusion issues (Dayapala et al. 2022). Additionally, as per Kim and Park (2021), the use of a technique called trunking allows the development of multiple channels through which information can be transferred within the network, increasing the complexity faced by unauthorized people attempting to access the system. Implementing these trunking processes along with the other methods presented within the previous theme, namely VPN, DNS and NAT, can help to add multiple layers of security and complexity for hackers and data thieves.

The AI system can further be integrated with a cloud system, which allows the collection and storage of all data after processing by the AI systems. Cloud systems are effective in further improving the security of the entire system due to the use of additional unique authentication and encryption processes. As per Khakim et al. (202), AES (Advanced Encryption Standard) is an effective process that can be implemented by the chief technician within the brand to keep the data as well as the network system safe. AES-128 and AES-256 are commonly used within cloud networks in order to increase the security of the system.
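The practical difference between AES-128 and AES-256 is the key length: 128 versus 256 bits of key material. The sketch below generates suitably sized random keys with Python's standard secrets module; the encryption step itself is not shown, since performing AES would require a third-party cryptographic library.

```python
import secrets

# AES key sizes are fixed by the standard: 128-bit and 256-bit keys
# correspond to 16 and 32 bytes of key material respectively.
aes128_key = secrets.token_bytes(16)  # 128 bits
aes256_key = secrets.token_bytes(32)  # 256 bits

print(len(aes128_key) * 8)  # 128
print(len(aes256_key) * 8)  # 256

# The actual AES encryption would be performed with a cryptographic
# library (for example, the third-party 'cryptography' package); the
# Python standard library does not ship an AES implementation.
```

Using a cryptographically secure source such as secrets (rather than the random module) for key material is what makes the keys unpredictable to an attacker.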

4.3.4 Triggers for human interventions in AI and additional strategies for managing cybersecurity issues in AI

The primary triggers for human interventions in AI are similar to those of the first theme. However, there are multiple factors that trigger this need, not limited to ethical issues, legal complications and uncertainty. AIs simply cannot operate without a person providing the direction based on which they should operate. While AIs have access to a wide range of data from online sources and data pools, which allows them to make decisions, they cannot implement any action without direction from humans to apply a specific process or method of operation. As per Afzaal et al. (2021), the use of effective prompts is necessary for quality performance and actionable results from AI systems. These prompts act as the direction towards which the AI is supposed to work.

Similarly, another major trigger for human intervention is overseeing the network for any potential breaches and attacks. AIs, while having significant safety and data protection measures, might still face intrusion by unauthorized personnel who are skilled or able to find unique ways into the network. In such cases, the AI systems are unable to take any action to prevent the issues. However, as argued by Boni (2021), human involvement in keeping track of potential issues like these can lead to their prevention, as skilled personnel will be able to stop and block such attempts on the spot. Effective model testing, training data improvements and more can help to ensure the effective operation of the system and the prevention of system failures as well as additional hacking and intrusion issues (Yurkofsky et al. 2020). The use of a Continuous Improvement (CI) process based on Agile principles can be beneficial for improved operations.

In regards to additional strategies, consistently updating algorithms to keep track of intrusion and entry of people within the system on the basis of specific time-stamps can be beneficial. Keeping a record of each person who enters the system, along with time-stamps and the IP addresses of the devices used to enter, provides a list on the basis of which legal action can be taken against the people involved (Kundel et al. 2022). Further methods include increasing the security of APIs (Application Programming Interfaces) and related interfaces to prevent accessibility by others. Authentication based on multiple passkeys, authorization based on personnel data and the use of encryption keys can be applied to protect the data within the network, which is also used by the AI system for its functionality.
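The time-stamped access record described above can be sketched as a simple append-only log. The field names, user names and IP addresses below are illustrative assumptions, not a production audit system.

```python
from datetime import datetime, timezone

# Append-only access log: each entry records who entered the system,
# when, and from which device IP, so a list is available if legal
# action against intruders becomes necessary.
access_log: list[dict] = []

def record_access(user: str, ip_address: str) -> None:
    access_log.append({
        "user": user,
        "ip": ip_address,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_access("alice", "199.168.20.14")
record_access("bob", "199.168.20.73")

for entry in access_log:
    print(entry["timestamp"], entry["user"], entry["ip"])
```

In a real deployment such records would be written to tamper-evident storage (for example, a dedicated logging server) rather than held in memory, so that an intruder cannot erase the evidence of their entry.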

4.4 Summary and Discussion

Based on the above assessments, it can be stated that human intervention is a necessary factor. As per the collected literature, it was found that AI requires a direction to be provided for it to operate and deliver results accordingly. AIs thereby cannot operate in a fully automated way, and the required criteria need to be provided for them to produce the best results. The data analysis section clarified this aspect in further detail. As per Boni (2021), there is a significant need for people to provide prompts in order for the AI system to work efficiently. Additionally, it was also stated that AI systems are unable to take human aspects like emotions into account, which leads to purely logical results that may not be applicable. Theoretical knowledge is much different from practical implementation, which the AI cannot take into consideration. Therefore, human involvement is a necessary factor.

In regards to the issues involving cybersecurity, based on the literature review, it was found that it is common for hackers and data thieves to attack the networks and AI systems of large organisations or the defence operations of major government bodies. It was stated that this issue could be managed by implementing more encryption. However, the secondary research yielded different findings. It was found that while encryption is a part of the data protection process, there are multiple factors and processes that need to be implemented to achieve the best results. As per Gulati et al. (2022), the IoT devices connected to the networks need to be upgraded and the legacy systems updated with newer devices and tools to prevent easy access to the system. Furthermore, as per Lyu et al. (2022), further improvements are needed in regards to the implementation of NAT, VLAN trunking, VPN utilisation and DNS to achieve better and multiple levels of security. To further manage and prevent additional security threats, the use of Continuous Improvement (CI) was also found to be highly efficient.


Chapter 5: Conclusions and Recommendations

5.1 Conclusion

Based on the above assessments, it can be stated that human intervention in AI is a necessity, and AI systems cannot operate without such interventions. This is mainly because, while AI can collect data from online sources or a data pool within the organisation, it lacks the ability to make decisions based on what is required to be achieved. A person skilled in using AIs needs to be available to provide the prompts that will lead to the specific results required by the brand. Additionally, while AI is suited to logical problem-solving processes, it is inefficient at understanding people's feelings. This lack of emotional understanding contributes to inefficient decision-making, which may not always be applicable in real life. It lacks an understanding of the realism and feasibility of various tasks for humans, and thereby the plans and strategies need to be adjusted by humans for the best results.

The increased usage of AI and its availability to the general population in recent years has led to significant data collection and presentation. The use of personal details to simplify tasks often leads to data being stored, which hackers and data thieves can target not only for data collection but also for the theft of funds from bank accounts, thereby affecting people's lifestyles negatively. Each government has its own legislation to protect data and prevent such issues, such as the Data Protection Act 2018 in the UK.

5.2 Recommendations

Based on the assessments, the following recommendations can be provided in order to deal with cybersecurity threats within AI systems and their connected networks:

· Scheduled security audits are the primary actions that need to be implemented in order to deal with these threats within AI (Akula and Garibay, 2021). These scheduled security audits involve people assessing any changes within the system, potential data thefts, intrusions and more, followed by efficient firewall management to ensure that no unauthorized network devices can affect the system.

· The development and implementation of a server within the network that specifically focuses on managing and maintaining security by keeping track of changes can be highly efficient. This can be automated with the use of IoTs encapsulated within a digital twin technology working alongside AIs. This server system can store SNMP results, which allows tracking of who is accessing the data, when they are accessing it, and whether they are modifying or deleting it in any way.

· Access control and authentication are necessary as they help to control the privileges of system users. The use of strong passwords and policies can lead to the prevention of such data theft issues (Patwary et al. 2021). Employee training programs can be highly beneficial for increasing awareness of all these issues and the methods that can be implemented to manage them.
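The access control and strong-password recommendation above can be supported by storing only salted password hashes rather than plaintext passwords. The sketch below uses Python's standard hashlib PBKDF2 function; the iteration count is an illustrative choice, and real systems would tune it to their hardware.

```python
import hashlib
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only the salt and hash are stored,
    never the plaintext password."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in
    constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because the salt is random per user, identical passwords produce different stored hashes, which blunts precomputed-table attacks if the credential store is ever stolen.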

5.3 Limitations and Future Scope

This study was limited to secondary research, and thus future research could focus on multiple forms of data. Conducting both primary and secondary assessments could have contributed to the availability of a larger amount of data, thereby leading to easier management of biases. Surveys could be used to collect data from people who use AI, while interviews could be conducted with authoritative figures who can shed light on these sorts of issues. Additionally, future research should have higher budgets available. These budgets can be used to collect data from various books, journals, articles and websites, which can contribute to significant improvements in addressing and dealing with major issues affecting the security of AI systems. Additionally, a longer time-frame for the research (about 12 weeks in total) could allow for better opportunities to collect data. Collecting data from various paid resources like paid journals and books can also be achieved with a higher budget and more time.


Reference List

Afzaal, M., Nouri, J., Zia, A., Papapetrou, P., Fors, U., Wu, Y., Li, X. and Weegar, R., 2021. Explainable AI for data-driven feedback and intelligent action recommendations to support students' self-regulation. Frontiers in Artificial Intelligence, 4, p.723447.

Akula, R. and Garibay, I., 2021. Audit and assurance of AI algorithms: a framework to ensure ethical algorithmic practices in artificial intelligence. arXiv preprint arXiv:2107.14046.

Alhayani, B., Mohammed, H.J., Chaloob, I.Z. and Ahmed, J.S., 2021. Effectiveness of artificial intelligence techniques against cyber security risks apply of IT industry. Materials Today: Proceedings, 531.

Ansari, M.F., Dash, B., Sharma, P. and Yathiraju, N., 2022. The Impact and Limitations of Artificial Intelligence in Cybersecurity: A Literature Review. International Journal of Advanced Research in Computer and Communication Engineering.

Arifin, S.R.M., 2018. Ethical considerations in qualitative study. International Journal of Care Scholars, 1(2), pp.30-33.

Bécue, A., Praça, I. and Gama, J., 2021. Artificial intelligence, cyber-threats and Industry 4.0: Challenges and opportunities. Artificial Intelligence Review, 54(5), pp.3849-3886.

bionichive.com, 2024. Our Company. Available at: https://bionichive.com/company/about-us/ [Accessed 27/01/2024].

Bonfanti, M.E., 2022. Artificial intelligence and the offence-defence balance in cyber security. Cyber Security: Socio-Technological Uncertainty and Political Fragmentation. London: Routledge, pp.64-79.

Boni, M., 2021. The ethical dimension of human–artificial intelligence collaboration. European View, 20(2), pp.182-190.

Brito, L.C., Susto, G.A., Brito, J.N. and Duarte, M.A., 2022. An explainable artificial intelligence approach for unsupervised fault detection and diagnosis in rotating machinery. Mechanical Systems and Signal Processing, 163, p.108105.

Chakraborty, A., Biswas, A. and Khan, A.K., 2023. Artificial intelligence for cybersecurity: Threats, attacks and mitigation. In Artificial Intelligence for Societal Issues (pp. 3-25). Cham: Springer International Publishing.

Cheatham, B., Javanmardian, K. and Samandari, H., 2019. Confronting the risks of artificial intelligence. McKinsey Quarterly, 2(38), pp.1-9.

Dash, B., Ansari, M.F., Sharma, P. and Ali, A., 2022. Threats and Opportunities with AI-based Cyber Security Intrusion Detection: A Review. International Journal of Software Engineering & Applications (IJSEA), 13(5).

Dayapala, B., Palanisamy, V. and Suthaharan, S., 2022, December. Investigation of Routing Techniques to Develop a Model for Software-Defined Networks using Border Gateway Protocol. In 2022 4th International Conference on Advancements in Computing (ICAC) (pp. 150-155). IEEE.

Devillers, L., 2021. Human–robot interactions and affective computing: The ethical implications. Robotics, AI, and Humanity: Science, Ethics, and Policy, pp.205-211.

explodingtopics.com, 2024. 57 NEW AI Statistics (Jan 2024). Available at: https://explodingtopics.com/blog/ai-statistics [Accessed 27/01/2024].

Gulati, K., Boddu, R.S.K., Kapila, D., Bangare, S.L., Chandnani, N. and Saravanan, G., 2022. A review paper on wireless sensor network techniques in Internet of Things (IoT). Materials Today: Proceedings, 51, pp.161-165.

Helo, P. and Hao, Y., 2022. Artificial intelligence in operations management and supply chain management: An exploratory case study. Production Planning & Control, 33(16), pp.1573-1590.

Jain, J., 2021. Artificial intelligence in the cyber security environment. Artificial Intelligence and Data Mining Approaches in Security Frameworks, pp.101-117.

Janssen, M., Brous, P., Estevez, E., Barbosa, L.S. and Janowski, T., 2020. Data governance: Organizing data for trustworthy Artificial Intelligence. Government Information Quarterly, 37(3), p.101493.

Kaloudi, N. and Li, J., 2020. The AI-based cyber threat landscape: A survey. ACM Computing Surveys (CSUR), 53(1), pp.1-34.

Karamitsos, I., Albarhami, S. and Apostolopoulos, C., 2020. Applying DevOps practices of continuous automation for machine learning. Information, 11(7), p.363.

Khakim, L., Mukhlisin, M. and Suharjono, A., 2020. Security system design for cloud computing by using the combination of AES256 and MD5 algorithm. In IOP Conference Series: Materials Science and Engineering (Vol. 732, No. 1, p. 012044). IOP Publishing.

Kim, N.W. and Park, J.S., 2021. A Case Study of the Implementation and Verification of VLAN-applied Network Based on a Five-step Scenario. The Journal of the Korea Institute of Electronic Communication Sciences, 16(1), pp.25-36.

Kolianov, A.Y., 2021. Artificial Intelligence in Media Discourse of 2010s. Дискурс, 7(4), p.59.

Korteling, J.H., van de Boer-Visschedijk, G.C., Blankendaal, R.A., Boonekamp, R.C. and Eikelboom, A.R., 2021. Human- versus artificial intelligence. Frontiers in Artificial Intelligence, 4, p.622364.

Kröger, J.L., Miceli, M. and Müller, F., 2021. How data can be used against people: A classification of personal data misuses. Available at SSRN 3887097.

Kundel, R., Siegmund, F., Hark, R., Rizk, A. and Koldehofe, B., 2022. Network testing utilizing programmable network hardware. IEEE Communications Magazine, 60(2), pp.12-17.

Kutscher, V., Olbort, J., Anokhin, O., Bambach, L. and Anderl, R., 2020. Upgrading of legacy systems to cyber-physical systems. Proceedings of TMCE 2020.

Kyriakou, K. and Otterbacher, J., 2023. In humans, we trust: Multidisciplinary perspectives on the requirements for human oversight in algorithmic processes. Discover Artificial Intelligence, 3(1), p.44.

Larsson, S., 2020. On the governance of artificial intelligence through ethics guidelines. Asian Journal of Law and Society, 7(3), pp.437-451.

Lyu, M., Gharakheili, H.H. and Sivaraman, V., 2022. A survey on DNS encryption: Current development, malware misuse, and inference techniques. ACM Computing Surveys, 55(8), pp.1-28.

Mahmood, S., Mohsin, S.M. and Akber, S.M.A., 2020, January. Network security issues of data link layer: An overview. In 2020 3rd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET) (pp. 1-6). IEEE.

Marcos-Pablos, S. and García-Peñalvo, F.J., 2022. Emotional intelligence in robotics: a scoping review. In New Trends in Disruptive Technologies, Tech Ethics and Artificial Intelligence: The DITTET Collection 1 (pp. 66-75). Springer International Publishing.

McCarthy, J., 2022. Artificial Intelligence, Logic, and Formalising Common Sense. Machine Learning and the City: Applications in Architecture and Urban Design, pp.69-90.

Mohanta, B.K., Jena, D., Satapathy, U. and Patnaik, S., 2020. Survey on IoT security: Challenges and solution using machine learning, artificial intelligence and blockchain technology. Internet of Things, 11, p.100227.

Müller, V.C., 2020. Ethics of artificial intelligence and robotics.

Nair, K.K. and Nair, H.D., 2021, August. Security considerations in the Internet of Things Protocol Stack. In 2021 International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD) (pp. 1-6). IEEE.

Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M.E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E. and Kompatsiaris, I., 2020. Bias in data-driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), p.e1356.

Olanrewaju, O.I., Enegbuma, W. and Donn, M., 2022. Data Quality Assurance in Environmental Product Declaration Electronic Database: An Integrated Clark-Wilson Model, Machine Learning and Blockchain Conceptual Framework.

Patwary, A.A.N., Naha, R.K., Garg, S., Battula, S.K., Patwary, M.A.K., Aghasian, E., Amin, M.B., Mahanti, A. and Gong, M., 2021. Towards secure fog computing: A survey on trust management, privacy, authentication, threats and access control. Electronics, 10(10), p.1171.

Raisch, S. and Krakowski, S., 2021. Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), pp.192-210.

Ravindran, V., 2019. Data analysis in qualitative research. Indian Journal of Continuing Nursing Education, 20(1), pp.40-45.

Saleem, R., Yuan, B., Kurugollu, F., Anjum, A. and Liu, L., 2022. Explaining deep neural networks: A survey on the global interpretation methods. Neurocomputing.

Sarker, I.H., 2022. AI-based modeling: Techniques, applications and research issues towards automation, intelligent and smart systems. SN Computer Science, 3(2), p.158.

Sarker, I.H., Furhad, M.H. and Nowrozy, R., 2021. AI-driven cybersecurity: an overview, security intelligence modeling and research directions. SN Computer Science, 2, pp.1-18.

Saunders, M., Lewis, P. and Thornhill, A., 2007. Research Methods for Business Students. 4th edition. England: Pearson Education Limited.

Shen, J. and Xia, M., 2020. AI data poisoning attack: Manipulating game AI of Go. arXiv preprint arXiv:2007.11820.

Soni, V.D., 2020. Challenges and Solution for Artificial Intelligence in Cybersecurity of the USA. Available at SSRN 3624487.

Taddeo, M., McCutcheon, T. and Floridi, L., 2019. Trusting artificial intelligence in cybersecurity is a double-edged sword. Nature Machine Intelligence, 1(12), pp.557-560.

Uroz, D. and Rodríguez, R.J., 2022. Characterization and evaluation of IoT protocols for data exfiltration. IEEE Internet of Things Journal, 9(19), pp.19062-19072.

Vaismoradi, M. and Snelgrove, S., 2019. Theme in qualitative content analysis and thematic analysis.

Vlačić, B., Corbo, L., e Silva, S.C. and Dabić, M., 2021. The evolving role of artificial intelligence in marketing: A review and research agenda. Journal of Business Research, 128, pp.187-203.

Xu, D., Wang, W., Zhu, L., Zhao, J., Wu, F. and Gao, J., 2022. CL-BC: A Secure Data Storage Model for Social Networks. Security and Communication Networks, 2022.

Yurkofsky, M.M., Peterson, A.J., Mehta, J.D., Horwitz-Willis, R. and Frumin, K.M., 2020. Research on continuous improvement: Exploring the complexities of managing educational change. Review of Research in Education, 44(1), pp.403-433.

Zhou, Z., 2021. Emotional thinking as the foundation of consciousness in artificial intelligence. Cultures of Science, 4(3), pp.112-123.


