Use of a Value Model to Ethically Govern Various Applications of Small UAS

Abstract: Widespread use of small unmanned aircraft systems (sUAS) is becoming prominent in the US. From structural health monitoring to journalism, sUAS are allowing people to gain a view of their surroundings and conduct their jobs in ways never before possible. With this come many ethical concerns that must be addressed before the sight of a sUAS flying overhead is widely acceptable to a majority of the population. Currently, UAS operations in civil airspace are governed by the Federal Aviation Administration (FAA) Part 107 rules, but these regulations do not address certain ethical considerations. This paper uses the concept of a value model to quantify these ethical concerns so that they may be encoded into the design of a UAS and into the evaluation of missions before those missions are conducted. This could prove valuable in addressing the ethical challenges faced when integrating unmanned aerial system (UAS) operations into the airspace, especially in densely populated areas.


Introduction
Civil applications of unmanned aerial systems (UAS) are becoming more prevalent across various industries in society [1]. The benefits of these applications, such as structural health monitoring and 3-D mapping, are not largely debated. However, a wide variety of obstacles are making the implementation and regulation of these applications difficult. The authors herein focus on ways to establish a rigorous mathematical model for the ethical governance of small unmanned aerial system (sUAS) civil applications, as well as government applications that involve interaction with the civilian population, e.g., surveillance. To this end, a method is proposed to quantify the various qualitative concerns in such a way that they may be modeled to govern the ethical use of UAS amongst the civilian population. Currently, ethical concerns are quantified primarily through a focus on safety and the conduct of risk assessment. Lazatin [2] created a method of estimating the risk of operating UAS by estimating the number of casualties and the cost of property damage using probability factors for air-to-air and ground collisions of aircraft. Whereas he focused on risk, this study will focus on other prevalent ethical concerns within a mission space in addition to safety risk. Weibel and Hansman [3] illustrate three areas of risk mitigation and raise key issues on how to determine appropriate UAS policies to ensure that an acceptable level of safety is achieved. Weibel and Hansman also created a model of ground impact that was used to calculate the UAV system reliability required to meet a target level of safety [4]. The recurring theme in these approaches is safety, but safety is not the only ethical concern to consider when conducting UAS operations. Others include privacy and legality.
The primary goal of this paper is to establish a foundation for a model that can combine quantifiable factors like the autonomy and the technology readiness levels with harder to quantify qualitative ethical concerns of a mission space. A way to quantify safety, privacy, and legal concerns into metrics and create a method to evaluate a mission along ethical lines is proposed.

Background
The establishment of this framework requires drawing on literature steeped in the ethics of UAS operations. Literature in machine ethics was used to establish a basis for what ethical rules an autonomous agent should follow when conducting any kind of operation. This literature helped establish which ethical concerns need to be considered when autonomous machines collect data from the populace, and it informed a discussion of the liability associated with accidents or incidents when operating sUAS. Literature on the ethical use of sUAS amongst civilian populations was collected not only to identify further ethical concerns that may arise but also to identify applications of sUAS that raise differing ethical concerns. Lastly, literature on the term ethical governance gave rise to the topic of this paper: evaluating whether a mission is ethically sound before the mission is executed. This differs from machine ethics as usually implemented, i.e., during the operation.

Machine Ethics
A consideration of the ethics concerning autonomous machines was essential to establishing a basis for what it means for a machine, let alone a UAS, to operate ethically. There is a known code of ethics called privacy, accuracy, intellectual property, and access (PAPA) [5]. PAPA was created to address the need to deal with threats to a person's dignity resulting from the misuse of information. These four elements pose multiple questions that should be asked when questioning the ethicality of a machine's operation.
The privacy element asks: What information about one's self or one's associations must a person reveal to others under what conditions and with what safeguards? What things can people keep to themselves and not be forced to reveal to others [5]?
The accuracy element asks: Who is responsible for the authenticity, fidelity, and accuracy of information? Similarly, who is to be held accountable for errors in information and how is the injured party to be made whole [5]?
The property element asks: Who owns information? What are the just and fair prices for its exchange? Who owns the channels, especially the airways, through which information is transmitted? How should access to this scarce resource be allocated [5]?
Lastly, the accessibility element asks: What information does a person or an organization have a right or a privilege to obtain under what conditions and with what safeguards [5]? This concept is relevant to this study because UAS are at the very least information gathering technology and as a result, should follow the same code of ethics as any information technology.
There is a discussion within machine ethics about whether the responsibility falls on the manufacturer, designer, owner, or user of an autonomous robot when the operation of that robot leads to a harmful event. With advances in programming techniques that equip the agent with the ability to learn and, hence, depart from its original program, it becomes impossible for the programmer to exhaustively test the behaviors of his or her creations [6]. Thus, in this regard, the programmer cannot be held responsible if they did not design with the intent to harm. The opposing argument is that engineers cannot be freed from all responsibility on the sole ground that they do not have complete control over the causal chains implied by the actions of their robots [6]. Two opposing questions therefore arise. The first asks: For a highly autonomous machine, is the designer responsible for actions the machine takes outside of its intended behavior? The opposing question asks: Are designers ever completely free from blame for the actions of their creations? These arguments lead to the notion that, as robots become more autonomous, they should be treated more as colleagues than as manufacturing tools. This notion, in turn, may lead to the further notion that robots be treated as moral agents, especially if a robot's output is being used as an expert recommendation. This increase in the assignment of moral agency could, in turn, lead to the blame falling solely on the autonomous robot itself and no one else.

Ethical Use of the UAS Amongst Civilian Populations
There are multiple literary works that discuss the ethics surrounding the use of UAS. Ethical principles must be adhered to for any project that deals with information, namely respect for human autonomy, avoiding harm, beneficence, and justice [7]. For ethical aerial scientific data collection, there are three elements: airspace management, data mission success, and ethics (privacy by design) [7]. This incorporates the mission environment as well as the design of the UAS itself to ensure ethical scientific data collection.
An ethical concern that is prevalent in the civil applications of UAS is the concern for privacy. A civilian's right to privacy can be divided into four different categories: intrusion upon seclusion, appropriation of a name or likeness for profit, public disclosure of private facts, and presentation of an individual in a false light [8]. This alludes to the question of whether it is ethical for a UAS flying overhead to collect data on an individual without that individual's knowledge or prior consent. Regarding this dilemma, it is proposed that possible solutions could be privacy by design, privacy by assessment, and education. Privacy by design has to do with embedding privacy and data protection throughout the entire life cycle of a technology, from early design stages to disposal [8]. This notion adds to the concept of assigning the issue of privacy not only to the users of the UAS but to the designers of the UAS as well. This gives way to the potential of pursuing privacy in the design phase, rather than post design in the form of regulation [9]. Privacy by assessment is a structured process that assists in evaluating the influence that a UAS equipped with new technology will have on an individual's privacy [8]. Education in this regard has to do with taking steps to educate the public regarding the benefits associated with the use of UAS while also educating civilian operators about the damaging consequences that their actions can have on an individual's right to privacy [8]. A damaging consequence could arise from the use of visible and constant surveillance technology. The use of surveillance technology may inhibit an individual's freedom of assembly or freedom of expression, i.e., a chilling effect. The chilling effect discourages individual participation in social movements or public dissent activities [10].
So, it is argued that the mass development of UAS surveillance could lead to an environment where individuals believe that a UAS is watching them at all times [10].
Along with the concern for privacy, there is the question of which applications are most acceptable to the civilian population. One study concluded that civilian support for the use of drones is based on the perceived social benefit of the application [11].

Ethical Governance
An ethical governor is a mechanism that conducts an evaluation of the ethical appropriateness of a plan before its execution, prohibiting plans deemed unethical. In the study where the term ethical governor is used, the list of formal ethical concerns was: do not harm people, do not harm animals, do not damage self, and do not damage property [12]. The authors utilize a Beliefs, Desires, and Intentions (BDI) language to create an ethical governor that operates within a UAS using ethical concerns as constraints for the machine. The ethical governor can be divided into two main processes: Evidential reasoning and constraint application. The evidential reasoning process transforms incoming perceptual, motor, and situational awareness data into evidence in the form of logical assertions to be used by the constraint application process [13]. The constraint application process is the result of two interacting components: The evidence generation module and the evidence blackboard. The evidence generation module receives perceptual information, target information, overt behavioral responses from the behavioral control system, and geographical constraints. The evidence blackboard is the communications medium between evidential reasoning and constraint application [13].
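The two-process structure can be sketched in code. The following is a minimal, hypothetical illustration: the assertions, constraint names, and sensor fields are invented for this sketch and are not drawn from [13].

```python
# Hypothetical constraints: an evidence assertion maps to the
# planned behavior it forbids.
CONSTRAINTS = {
    "people_present": "low_altitude_pass",
    "over_private_property": "record_video",
}

def evidential_reasoning(sensor_data):
    """Evidential reasoning: transform raw perceptual data into
    logical assertions posted to the evidence blackboard."""
    blackboard = set()
    if sensor_data.get("people_detected", 0) > 0:
        blackboard.add("people_present")
    if sensor_data.get("over_private_property"):
        blackboard.add("over_private_property")
    return blackboard

def constraint_application(planned_actions, blackboard):
    """Constraint application: prohibit any planned action that a
    triggered constraint forbids."""
    forbidden = {CONSTRAINTS[a] for a in blackboard if a in CONSTRAINTS}
    return [action for action in planned_actions if action not in forbidden]

blackboard = evidential_reasoning({"people_detected": 2, "over_private_property": True})
print(constraint_application(["record_video", "survey_area"], blackboard))  # ['survey_area']
```

Here the governor strikes the forbidden action before execution rather than reacting during flight, matching the pre-mission evaluation this paper pursues.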

The Learned Hand Formula
The Learned Hand [14] formula is an algebraic test comparing the investment in precaution (B) with the product of the probability (P) and magnitude of harm (L) resulting from an accident. If PL is greater than B, then the defendant should be liable; if B equals or exceeds PL, then the defendant should not be held liable. This concept is adapted for the present model as follows: B is the social benefit, the probability term becomes L, the likelihood of an ethical concern being violated, and the harm term becomes M, a magnitude valued according to the number of ethical concerns within the mission space. So, if the product of magnitude (M) and likelihood (L) exceeds the social benefit (B), then the mission is deemed unethical to execute.
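As a minimal sketch of the adapted decision rule (the function and argument names are illustrative, not from [14]):

```python
def is_mission_ethical(B, L, M):
    """Adapted Learned Hand test: a mission is ethical when the product
    of the magnitude (M) of ethical concerns and the likelihood (L) of
    a violation does not exceed the social benefit (B)."""
    return M * L <= B

# A low-benefit mission with sizeable ethical exposure fails the test.
print(is_mission_ethical(B=1, L=0.5, M=6))  # False
```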

Proposed Method for Ethical Governance of UAS
The ethical governance proposed in this paper is similar to the ethical governance mentioned in the literature only in that it evaluates the ethical appropriateness of a mission before the mission is executed. Most conversations concerning machine ethics have to do with how the machine conducts itself, either autonomously or semi-autonomously, during the mission's operation. The value model being proposed will create metrics out of the various ethical concerns that arise when conducting sUAS operations amongst the civilian population. This includes operations regarding civil applications as well as applications used by law enforcement, e.g., surveillance.

Value Model Formulation of Autonomy Level and Technological Readiness Level
The first part of the value model concerns the characteristics of the sUAS itself that influence how this ethical governor will affect the sUAS. One of these characteristics is the autonomy level (AL) of the sUAS. The AL, shown in Table 1 below, influences how the ethical governor will be used. This goes back to the notion of who carries the blame for ethical violations. If the AL is high, then the ethical governor will be encoded into the sUAS system itself so that it evaluates whether a mission is ethical and acts accordingly. With this, there needs to be an accompanying metric of the UAS's technological readiness level (TRL), shown in Table 2 below, which addresses how much testing has been done on the system itself along with the AL. These two characteristics allow an Autonomy and Technological Readiness Assessment (ATRA) to be used [15]. An example of the relationship between the TRL and AL as they conduct the ATRA is shown in Figure 1.

Table 1. Autonomy levels. [Table content not recovered; its highest entry is level 10, Fully Autonomous, characterized by Swarm Group Decision Making.]

Table 2. Technology readiness levels [15]. [Table content not recovered; its columns are Level and Description.]

Incorporating Ethical Concerns
As mentioned, there is a notion that the perceived social benefit can be a way of measuring how acceptable an application is to the public. This could be turned into a metric where various applications are rated based on their benefits to society. The application of UAS to structural health monitoring, for example, benefits society not only by enabling monitoring of the health of infrastructure but also by eliminating the need for roads, bridges, etc. to be shut down for monitoring to occur, owing to reduced human personnel. On the other hand, the application of a hobbyist flying his sUAS around town to take pictures for monetary value that only he receives has a relatively low social benefit. Through this, there needs to be a distinction between applications that are acceptable only at the times they are needed and those conducted at a constant rate. For example, the use of sUAS for surveillance by the police has a perceived high social benefit. But there needs to be a distinction between surveillance used by the police only as a mission calls for it and constant surveillance. Constant surveillance can lead to detrimental effects on society [10] that may outweigh its perceived benefit to society. Outside of the social benefit, the possible ethical concerns that could be violated by certain applications should be quantified into the model. Since privacy is a broad ethical concern, it can be divided into the aforementioned categories: intrusion upon seclusion, appropriation of a name or likeness for profit, public disclosure of private facts, and presentation of an individual in a false light [8]. This privacy concern applies to any operation where a sUAS is operating a camera, e.g., aerial visual inspection, surveillance, and journalism. Another ethical concern is safety; this is about ensuring that the public will not come to harm while sUAS are in operation. This directly correlates to the ATRA.
The ATRA needs to indicate that with a high AL there is a high TRL; otherwise, there may be a concern for the sUAS's susceptibility to hacking and system malfunctions, which could result in crashes harmful to the populace. This, in turn, affects the ethical concern regarding safety. This is where the mission objectives and the environment wherein the mission is being conducted come into play (Figure 2). Depending on the environment and the objective of the mission, certain ethical concerns may or may not come into play. For example, a structural health monitoring operation on a wind turbine in an unpopulated area will not have any ethical concerns regarding the civilian population, whereas a health monitoring operation done on a busy bridge has ethical concerns regarding safety as the sUAS conducts its mission.


Proposed Model
The challenge with quantifying such qualitative concerns stems from deciding how metrics can be used to illustrate the various factors effectively in the model. These factors cannot be as easily quantified as, for example, the autonomy level and technological readiness level. The proposed way to quantify these concerns is based on determining and evaluating the tradeoff between perceived social benefits and possible ethical violations. Once again, the example of using sUAS for police surveillance illustrates that tradeoff. If the police are pursuing a suspect and that suspect has gone into hiding, the social benefit of the police using a sUAS overhead to find the suspect is high. On the other hand, constant generalized surveillance of an area has different implications. The social benefit would be based solely on the use of a UAS in a specific event in anticipation of a favorable outcome that could be captured on film. Outside of that instance, the constant surveillance will more likely than not be collecting information on everyone in the area without their permission or possibly their knowledge. This violates the concern for privacy. Therefore, an ethical governor may keep this sort of continuous surveillance mission from being conducted by the sUAS on which it is encoded. The question remains: how can a metric be created for ethical concerns? To create a metric, the ethical concerns would have to be valued so that they can be used in the model. If the literature is any indication, privacy is the top ethical concern with regard to sUAS used amongst the civilian population, with safety next in line [8]. While this may be counterintuitive, the public is more concerned with what the drone is doing flying around and less with the possibility of it crashing. The evaluation of ethical concerns is proposed to be the probability of an ethical violation multiplied by the magnitude of said violation.
This value should be compared to the value given to the social benefit of the mission depending on where that mission falls within the social benefit metric. If the ethical concern value exceeds the perceived social benefit, then the ethical governor will keep the sUAS from executing that particular mission.
As stated, the Learned Hand formula model is broken down into three parts: the social benefit (B), the likelihood (L) of an ethical concern being violated, and the magnitude (M), a value assigned based on the number of ethical concerns within the mission space. The social benefit of the mission is based on the task being conducted. Essentially, if a mission is potentially saving lives, it has the highest social benefit, and if the mission is being conducted only for the pilot's personal gain, then it has the lowest social benefit. Table 3 below gives a metric for social benefit. In this table, Public Service refers to uses such as police or firefighting work. Construction refers to any work being done using UAS on infrastructure. Journalism is using UAS for video and pictures that are submitted for news reporting purposes. Photography describes the various businesses that are contracted to take aerial pictures, and the Hobbyist category is anyone who uses a UAS for operations where no monetary gain is received. This metric gives the value that the product of probability and magnitude must either match or fall below for the mission to be deemed ethical to conduct.
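The social benefit metric can be represented as a simple lookup. Only the endpoint values are fixed by the worked examples later in the paper (Public Service = 5, Hobbyist = 1); the intermediate values below are assumptions for illustration, ordered by the ranking of categories given above:

```python
# Social benefit metric (after Table 3). The intermediate values
# (Construction, Journalism, Photography) are assumed for illustration.
SOCIAL_BENEFIT = {
    "Public Service": 5,  # police, firefighting
    "Construction": 4,    # assumed
    "Journalism": 3,      # assumed
    "Photography": 2,     # assumed
    "Hobbyist": 1,        # no monetary gain
}
print(SOCIAL_BENEFIT["Hobbyist"])  # 1
```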
The second part, the "Magnitude of Ethical Concerns," has to do with the factors of the environment that could lead to ethical concerns being violated. Currently, the factors are population density, structures in the mission space, size of UAS, and prevalence of private property. The population density, simply put, indicates how many people are potentially in danger if a UAS fails and crashes, and how many people could potentially have their privacy violated by a UAS flying overhead. Structures in the mission space states how many buildings are in the area that the UAS would have to avoid during the mission. Size of the UAS is based not on all UAS but on UAS that are allowed to operate within civilian airspace (i.e., UAS weighing less than 55 lb). Prevalence of private property is the number of private residences and other properties within the mission space. Each of these factors is given a value of 1-3 based on the actual measurement of the factor itself, as shown in Table 4.

Table 4. Magnitude factors.

Factor: Metric
Population Density (P_d): 1-3
Structures in mission space (S_m): 1-3
Size of UAS (S_U): 1-3
Prevalence of private property (P_P): 1-3

The third part, the "Likelihood of violating an ethical concern," is ultimately going to decide whether a mission is ethical within this model. This part is broken down into the factors of proximity to restricted airspace, proximity to private property, TRL, AL, obstacle avoidance sensors, and payload. The proximity to restricted airspace has to do with how close the operation is to airport airspace or other airspace where flight is prohibited or permission is needed. Obstacle avoidance sensors rule out the possibility of a UAS crashing into a structure if still operational, and payload states whether the payload of the UAS is considered intrusive or non-intrusive. An example of an intrusive payload would be infrared imaging, compared to the non-intrusive payload of regular photography.

Table 5. Likelihood.
It should be noted, as shown in Table 5 above, that the likelihood factors are assigned equal weight: each can contribute up to 15% (0.15) to the likelihood of a violation. The assumption is that there is never a 100% likelihood of an ethical violation. Each of the aforementioned factors, for both the likelihood and the magnitude, has its own way of being measured, and those measurements determine the value it is given within the metric. If the product of M and L does not exceed the value of B, then the mission is deemed ethical.
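The complete evaluation described above can be sketched as follows (the function names are illustrative, not from the paper):

```python
def magnitude(p_d, s_m, s_u, p_p):
    """Magnitude M: sum of the four environment factors, each rated 1-3
    (population density, structures, UAS size, private property)."""
    factors = (p_d, s_m, s_u, p_p)
    assert all(1 <= f <= 3 for f in factors)
    return sum(factors)

def likelihood(contributions):
    """Likelihood L: sum of the six factor contributions, each capped at
    0.15, so L can never reach a certainty of 1.0 (maximum 0.90)."""
    assert len(contributions) == 6
    assert all(0 <= c <= 0.15 for c in contributions)
    return sum(contributions)

def verdict(B, M, L):
    """A mission is ethical when M * L does not exceed B."""
    return "ethical" if M * L <= B else "unethical"
```

For instance, `verdict(1, magnitude(2, 3, 1, 3), likelihood([0.15, 0.15, 0, 0.08, 0, 0]))` returns "unethical".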

Example 1 Using Proposed Model
A hobbyist photographer is taking pictures of the landscape in his neighborhood using a DJI Mavic Pro, and his neighborhood is near the airport.
Starting with the "social benefit" metric, this operator is a hobbyist, meaning his social benefit receives a value of 1. For establishing the magnitude (M), the "population density" is given a value of 2 because the mission is conducted in a suburban neighborhood with a moderate population density. There are many residences in the area, giving a value of 3 to "structures in the mission space" as well as a value of 3 to "prevalence of private property" because of the numerous private residences. The Mavic Pro, a small consumer UAS, is given a value of 1 for size. These values are shown in Table 6.

Table 6. Magnitude for example 1.

Factor: Metric
Population Density (P_d): 2
Structures in mission space (S_m): 3
Size of UAS (S_U): 1
Prevalence of private property (P_P): 3

Next, the likelihood of an ethical violation occurring needs to be evaluated. For this example, since the neighborhood is near an airport, the "proximity to restricted airspace" factor is assigned a value of 0.15. Since the mission is being operated in a residential area with many private properties, the "proximity to private property" factor is assigned 0.15. The TRL of a Mavic Pro is high, so that factor is assigned 0, and its AL is medium, so the AL factor is assigned 0.08. The Mavic Pro has obstacle avoidance, so that factor is given a 0, and the camera on a Mavic Pro is a regular HD camera, so the "payload" factor is given a 0 as well. Table 7 displays these values.
In this example, the product of the magnitude of the ethical concerns (9) and the likelihood of those concerns being violated (0.38) is 3.42, which outweighs the mission's social benefit of 1. The mission in this example is therefore deemed unethical by the proposed model.
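The arithmetic of this example can be checked directly (the variable names are illustrative):

```python
# Example 1: hobbyist photographer near an airport.
B = 1                                  # social benefit: Hobbyist
M = 2 + 3 + 1 + 3                      # P_d + S_m + S_U + P_P = 9
L = 0.15 + 0.15 + 0 + 0.08 + 0 + 0    # likelihood contributions
print(round(M * L, 2), M * L > B)      # 3.42 True -> unethical
```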

Example 2 Using Proposed Model
The local police department is planning to use an in-house-built UAS, modeled after the DJI Matrice 900, equipped with an infrared camera to search for a murder suspect hiding within a downtown apartment building.
Since this mission type falls under "Public Service," the social benefit of this mission is 5. The population density of a downtown residential area is high, so that factor receives a value of 3. There are numerous structures, including skyscrapers, in the area, so the "structures in mission space" factor receives a 3. The size of the UAS is given a value of 2, and the prevalence of private property in the area is moderate, so that factor is given a 2. The magnitude for this example is displayed in Table 8.
For likelihood shown in Table 9, the mission does not take place anywhere near restricted airspace, so the "proximity to restricted airspace" factor receives a value of 0. The mission will operate close to some private properties in that downtown area, so the "proximity to private property" factor receives a value of 0.15. Being in-house built, the UAS has a medium TRL receiving a value of 0.08 and is semi-autonomous, so it receives a value of 0.08 for the AL factor. Sensors were placed on the UAS, so the obstacle avoidance factor receives a value of 0, and since the UAS is equipped with an infrared camera which is considered an invasive payload, its "payload" factor is given a value of 0.15.
In this example, the product of the magnitude and the likelihood does not outweigh the value of the social benefit, so the model deems this operation ethical.
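The same arithmetic can be checked for this example. The grouping of factor values below follows the descriptions of Tables 8 and 9 in the text; the decision rule (product of attribute sums compared against social benefit) is an illustrative reading of the model, as in Example 1.

```python
# Example 2: police search with an in-house-built UAS, infrared camera payload.
# Magnitude: population density, structures, UAS size, private property.
magnitude = 3 + 3 + 2 + 2
# Likelihood: restricted airspace, private property proximity, TRL,
# autonomy level, obstacle avoidance, payload.
likelihood = 0.0 + 0.15 + 0.08 + 0.08 + 0.0 + 0.15
social_benefit = 5  # "Public Service" mission type

ethical_cost = magnitude * likelihood
print(round(ethical_cost, 2), ethical_cost <= social_benefit)
```

Here the magnitude sums to 10 and the likelihood to 0.46, so the ethical cost of 4.6 falls below the social benefit of 5 and the mission is deemed ethical, matching the text.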
These examples illustrate how this model can be used. While the result of each example may seem intuitive, it is important to see how changing the values of any of the metrics, especially those within the likelihood, could make even a mission with a high social benefit become unethical. This suggests that the metrics for the social benefit need to be re-evaluated, potentially by breaking the social benefit element into multiple contributing factors, as was done for the likelihood and the magnitude; determining the best way to do so will require further research. It should also be noted that the metric values here were assumed based on the environment. In the future, these metrics will have defining measures, e.g., the population density will need to exceed a certain threshold to fall under a certain value. For instance, Example 2 assumes that all downtown residential areas have a high population density; further research will be conducted to validate such assumptions.

Discussion and Future Work
This value model illustrates a simplified method for evaluating operational scenarios by taking attributes that are measured differently and combining them through metrics. The value model can illustrate the strengths and weaknesses of a mission, as well as the tradeoffs that could be made to deem a mission ethical. This could allow operations to be tailored around the value of the operation's defined social benefit. The examples show that even though each factor is measured differently, the metrics allow all of the factors in the model to be combined on the basis of their values. The ability to encode a model that ethically governs the civil applications of sUAS could prove beneficial in reassuring the public that the sUAS they see flying overhead are operating within regulations and ethics. The difficulty with this implementation is the complexity of the decision space. For missions that do not operate in the vicinity of population centers, ethical governance may not be as essential to the design of sUAS; operations conducted in the midst of the civilian population, which draw the most ethical concern, take precedence. In this paper, the proposed approach assigns importance to ethical concerns based on what the public perceives as beneficial. Another limitation of this study is the bias that may be at play when the metrics are developed, mainly the metric of social benefit. The factors that define the magnitude and the likelihood can be quantified and placed within metrics based on data about the environment of the operation and the tools being used; even the concern of privacy can be quantified by delineating the spaces in which people should feel entitled to their privacy (e.g., within their homes). The social benefit, on the other hand, is solely at the discretion of the researcher or stakeholder, and others may have different opinions on the social benefit of different UAS operations.
In addition, there is an assumption that lessening the ethical impact of UAS technology on the public is a priority for companies and other entities conducting UAS operations. Stakeholders may not hold these ethics in high regard if the consequences for ethical violations are weak or if the benefits of completing the mission outweigh any ramifications. Further research will be conducted to provide a more precise measure of social benefit and ethical governance. This research will continue by further breaking down the factors into ranges of measurement that map onto metric values; for instance, how far away (in feet or miles) an operation needs to be from a private residence so as not to violate its occupants' privacy. This will determine more precisely what value is given to each factor. Additional factors will also be sought to further define the magnitude and likelihood attributes and increase the complexity of this model, and the social benefit will be broken down further so that a social benefit value can be assigned more consistently to the task being conducted by the UAS. The hope is that this simple model will be transformed over time into a highly complex model that incorporates a multitude of factors encompassing all possible ethical concerns within a UAS operation conducted in civil airspace.
Funding: This research received no external funding.