How should engineers deploy AI responsibly?

3/3/2023

In the past decade, artificial intelligence (AI) has gone from an emerging tool confined to research groups to a technology that embedded systems developers must grapple with. With it comes a host of ethical questions that demand answers. If AI is allowed to make decisions, what role should humans have in defining the parameters it uses? And should people have recourse to contest the decisions AI makes? How should such questions be addressed, can we learn from other disciplines, and do any ethical codes of conduct already exist? In this article, Stuart Cording, electronics engineer and freelance writer, addresses some of these issues.

[Image: a woman and a humanoid robot face each other. If AI is allowed to make decisions, what role should humans have in defining the parameters it uses?]



Ethical issues arise in the use of AI

Engineers rarely deal with ethics, the branch of philosophy that, at its simplest, concerns what we consider right and wrong. After all, much of what engineers deal with is black and white, functioning or non-functioning, with little room for gray zones. It could be argued that the natural desire to “do the right thing” is inherent in the engineering psyche, and thus that we are, ethically speaking, always trying to do good and improve the world.

For example, the developers of an automotive braking system are inherently focused on delivering a safe system that functions correctly under all conditions. Additionally, there are standards and checks in place to ensure the safety of the resulting product. The same applies to industrial engineers developing robotic systems that operate in close proximity to humans.


AI isn’t simply a new tool

So why can’t AI simply be incorporated into the engineering toolbox like other technologies before it? Well, AI and its subfields, such as machine learning (ML) and deep learning (DL), enable capabilities that previously could only have been implemented by humans. In the past decade alone, the image recognition accuracy of DL tools has risen from 70% to around 98%; by comparison, humans average around 95%. Such capability is often available as ready-to-use open-source models that anyone can use, and the hardware required is relatively cheap and easy to source.

Thus, the barrier to entry is very low. A task that previously required a human to review images can now be done by a machine. In and of itself, this is no immediate threat and is comparable to building a robot that can replace a human assembly operator. The real issue is the ease of scalability: suddenly, thousands of images per second can be reviewed, with financial investment and hardware availability the only limiting factors. While this could benefit an optical inspection system by improving quality in a factory, it could also be deployed for nefarious purposes in authoritarian states.
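To illustrate how low that barrier is, the sketch below, which assumes PyTorch and torchvision are installed and uses hypothetical image file names, loads a publicly available pretrained classifier in a handful of lines. Turning it into a system that reviews thousands of images per second is then mostly a question of hardware.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# A publicly available, pretrained classifier - no training data or ML
# expertise needed to get started.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()        # the matching input pipeline
labels = weights.meta["categories"]      # human-readable class names

def classify(path: str) -> str:
    """Return the most likely class label for a single image file."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        scores = model(image)
    return labels[scores.argmax().item()]

# Scaling up is largely a hardware question: the same loop, batched and run
# on a GPU, can work through thousands of images per second.
for path in ["inspection_001.jpg", "inspection_002.jpg"]:   # hypothetical files
    print(path, "->", classify(path))
```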

The dual-use dilemma

This dual-use dilemma has existed for centuries. The humble butter knife is also a dagger, and a ticking clockwork mechanism can be the trigger for a bomb. There are always two types of users: those who use technology as intended and those who use or repurpose it for malevolent objectives.

Scientists have often grappled with this issue when developing viruses and dangerous chemicals. In their paper “Ethical and Philosophical Consideration of the Dual-use Dilemma in the Biological Sciences,” Miller and Selgelid discuss these issues at length. For example, should chemicals be developed that could cause mass destruction so that antidotes can be developed? And if such work is undertaken, should the results be shared fully with the research community, or should the outcomes be shared in a manner that limits a reader’s ability to replicate the experiment?

Their paper lays out a range of options for regulating dual-use experiments and sharing the resulting information. At one extreme, the decision is left entirely in the hands of those conducting the experiments; at the other, it is up to governments to legislate. In the middle ground, research institutes and governmental or independent authorities are proposed as arbiters. The authors recommend these middle ways as the best approach, balancing the moral value of academic freedom against the circumstances in which it may need to be overridden.
For engineers considering AI in embedded systems, the paper offers useful ideas for dealing with some of the same ethical challenges.

Engineers must also be aware of the increased number of domains their AI-driven technology touches. For example, ML algorithms can improve the safety of drones, enabling them to avoid collisions with objects or people. But with little effort, the same hardware and software framework could be reprogrammed for nefarious or military purposes; the addition of face recognition technology could allow the device to attack and injure a human target autonomously. The ethical question this potential misuse raises is: are we obliged to implement a form of security that prevents the execution of unauthorized code?
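What such a safeguard might look like is sketched below. This is only an illustrative pattern, not a complete secure-boot implementation; the key value and file names are placeholders, and in practice the check would live in a bootloader rather than an application. The idea is simply that the device refuses to execute any firmware image that was not signed by a trusted key.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Public key provisioned into the device at manufacture (example value only).
TRUSTED_PUBLIC_KEY = bytes.fromhex(
    "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
)

def firmware_is_authorized(image: bytes, signature: bytes) -> bool:
    """Accept a firmware image only if it carries a valid signature
    made with the manufacturer's private key."""
    try:
        Ed25519PublicKey.from_public_bytes(TRUSTED_PUBLIC_KEY).verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Hypothetical update files delivered to the device.
with open("drone_update.bin", "rb") as fw, open("drone_update.sig", "rb") as sig:
    if firmware_is_authorized(fw.read(), sig.read()):
        print("Signature valid - applying update")
    else:
        print("Unauthorized code - update rejected")
```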


Weaknesses in deep learning algorithms

Another issue is that of weaknesses in DL algorithms that make their way into production code. In their paper “Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors,” Wu et al. create t-shirts printed with adversarial patterns that, when worn, cause the wearer not to be detected as a person by an AI camera. Other research has shown that autonomous vehicles can be fooled by stickers with particular patterns applied to roads and road signs, with results that endanger all types of road users. Given that these blind spots in the underlying algorithms cannot be predicted in advance, what is the correct way forward when one is discovered? Should all affected vehicles be taken off the road until the issue is resolved?
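Wu et al.’s attack uses a carefully optimized printable patch, but the underlying mechanism can be hinted at with the classic fast gradient sign method (FGSM) from the adversarial-examples literature. The sketch below assumes PyTorch and an already loaded model, input image tensor, and label; it shows how a barely visible perturbation is computed directly from the model’s own gradients.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module,
                 image: torch.Tensor,
                 label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: nudge every pixel slightly in the
    direction that most increases the model's classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # epsilon is small enough that the change is barely visible to a human,
    # yet it can be enough to flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Because such perturbations are derived from gradients no human reviewer ever inspects, these blind spots are typically found only by actively searching for them.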

Microcontrollers offer only limited performance for neural networks, especially compared to the servers that run AI such as ChatGPT or DALL·E. However, the semiconductor industry is investing heavily in embedded edge AI, and many new devices provide some form of neural network or AI acceleration. As the capabilities of the algorithms running on such devices grow, our understanding of their precise behavior diminishes. And this is where the risks of undiscovered problems lurk and ethical concerns fester.
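For context, a minimal sketch of what deploying a network to such a device usually involves is shown below; it assumes TensorFlow is installed and uses a hypothetical keyword_spotter model. The trained network is quantized and converted into a compact flatbuffer, which is then typically executed by a runtime such as TensorFlow Lite for Microcontrollers.

```python
import tensorflow as tf

# Load a trained Keras model (hypothetical file name) and convert it to an
# 8-bit quantized TensorFlow Lite flatbuffer small enough for an MCU.
model = tf.keras.models.load_model("keyword_spotter.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
tflite_model = converter.convert()

with open("keyword_spotter.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Converted model size: {len(tflite_model) / 1024:.1f} KiB")
```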


Finding engineering resources for ethical AI deployment

Many organizations have developed AI ethics principles, as noted by Blackman in his article “A Practical Guide to Building Ethical AI.” However, many of these statements are high-level, using terms such as “fairness” that are difficult for engineers to operationalize. He also points out that, while engineers are acutely aware of business-relevant risks, they lack both the training academics receive in answering ethical questions and the institutional support to do so.

So, where should we go for our AI ethical guidelines? UNESCO provides a “Recommendation on the Ethics of Artificial Intelligence.” It mainly encourages member states and governments to develop policy frameworks and oversight mechanisms, covering AI in all its guises, but embedded developers will find it hard to extract concrete guidelines that could be applied in an engineering organization. The European Commission also offers “Ethics Guidelines for Trustworthy AI.” Here, the focus is clearer: trustworthy AI should be lawful, ethical, and robust. Chapter two provides an in-depth explanation of the ethical principles and the tensions that may arise between them. As a result, this resource is probably better suited to engineering teams attempting to grapple with the ethics of AI technology.

Then there is the report “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” by Brundage et al., which makes four high-level recommendations. The first suggests that policymakers collaborate closely with researchers to mitigate the potential malicious use of AI. The second encourages researchers and engineers to take the dual-use nature of their work seriously. The third calls for best practices to be identified and collected, while the fourth seeks to expand the range of stakeholders and domain experts involved in discussing these challenges.

IBM offers a clear explanation of its approach to AI ethics. It sees AI as augmenting human intelligence and declares that such technology should be transparent and explainable. It also declares that data, the element that makes training AI possible, and the insights derived from it belong to their creator.

But perhaps the most straightforward document to consume is “In brief: Bosch code of ethics for AI.” With strong principles already in place within the organization, the use and role of AI is broken down into three approaches. In the human-in-command (HIC) approach, people use the results of AI to make decisions; AI serves only as an aid. Human-in-the-loop (HITL) allows humans to influence or change AI decisions. Finally, in human-on-the-loop (HOTL), AI makes the decisions, but humans define the parameters used to make them and have the opportunity to appeal a decision for review.
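As a rough illustration only (this mapping is not taken from the Bosch document; the OversightMode enum, decide() function, and ask_human callback are hypothetical names invented for this sketch), an embedded team might encode the three approaches along these lines:

```python
from enum import Enum, auto
from typing import Callable, Optional

# Record of autonomous decisions so they remain reviewable and appealable.
audit_log: list[str] = []

class OversightMode(Enum):
    HUMAN_IN_COMMAND = auto()   # HIC: AI output is only an aid; a person decides
    HUMAN_IN_THE_LOOP = auto()  # HITL: a person may change or veto the AI decision
    HUMAN_ON_THE_LOOP = auto()  # HOTL: AI decides within human-defined parameters

def decide(ai_decision: str,
           mode: OversightMode,
           ask_human: Callable[[str], Optional[str]]) -> str:
    """Route an AI decision through the selected oversight mode.

    `ask_human` presents a proposal to a person and returns their answer,
    or None if they accept it unchanged.
    """
    if mode is OversightMode.HUMAN_IN_COMMAND:
        # The person decides; the AI result is presented purely as an aid.
        answer = ask_human(f"AI suggests: {ai_decision}")
        if answer is None:
            raise RuntimeError("Human-in-command requires an explicit human decision")
        return answer
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        # The AI decision takes effect unless the person overrides it.
        override = ask_human(f"AI decided: {ai_decision}. Override?")
        return override if override is not None else ai_decision
    # HUMAN_ON_THE_LOOP: act autonomously, but log the decision so a person
    # can audit it later and the outcome can be appealed.
    audit_log.append(ai_decision)
    return ai_decision
```

The useful design point is that the oversight mode is an explicit, reviewable choice rather than something implied by wherever the AI call happens to sit in the code.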


[Image: an image recognition AI detects vehicles on a road. Dual-use dilemma: AI can be used for the greater good, but also for malicious purposes]

Ethics guidelines for AI use exist

Responsible use of AI requires answers to complex ethical questions. Without experience in dealing with such questions, it is easy for embedded systems engineers to get tied in knots trying to deliver answers. Autonomous vehicles are already pushing the boundaries of AI and have been responsible for several road injuries and deaths. This, coupled with deep-fake images, videos, and news stories, casts AI as a technology in a bad light and makes the general public reticent to accept this new advancement.

However, every newly introduced technology has its teething issues. Railways were notoriously dangerous at their inception, and it took time to recognize the challenges and respond to them appropriately. As highlighted above, ethical challenges and concerns around dual use have long existed in the scientific community. But, as shown here, there are answers and helpful resources available from within our engineering community as we grapple with the ethical challenges arising from AI deployment.