The Pentagon faces an ethical dilemma over the use of artificial intelligence (AI) in combat: keep full human supervision, which can sacrifice speed and accuracy, or let AI operate without human control, which could lead to unethical outcomes. The authors propose a novel approach, the “AI operator,” which enables a synergistic relationship between military personnel and AI without compromising ethics. The solution draws inspiration from the historical use of war elephants in combat, where a dedicated handler managed and directed the animals. AI operators, like these historical handlers, would oversee an AI system’s development, training, and deployment while upholding ethical considerations.
Key Points:
- The use of AI in warfare presents a tension between maintaining ethical standards through human oversight and the speed and accuracy gains of unsupervised AI operations.
- The authors suggest the concept of an “AI operator,” modeled on the historical role of a mahout directing a war elephant, to balance human oversight with AI efficiency.
- The AI operator would bridge the gap between military personnel and AI, understanding the nuances of the AI system and ensuring that ethical standards are met.
- To prevent model biases, diverse teams should build AI models from unique data sets and training environments rather than tweaking and customizing existing models (a minimal sketch of this idea follows the list).
- Embracing this collaborative approach between humans and AI would turn the commitment to ethical values from a perceived limitation into a strategic advantage, and it could extend beyond the battlefield to other applications where ethical considerations matter.
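As a rough illustration of the model-diversity point above, here is a minimal Python sketch: two models built independently, standing in for separate teams with their own data sets, decide cases only when they agree and defer disagreements to a human reviewer (the "AI operator" role). The data, model choices (scikit-learn's LogisticRegression and RandomForestClassifier), and the defer-on-disagreement rule are illustrative assumptions, not details from the article.

```python
# Sketch (assumed, not from the article): independently trained models vote;
# disagreements go to a human operator instead of being resolved automatically.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in data; a real system would use each team's own data set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Disjoint training sets stand in for "unique data sets and training environments".
X_a, y_a = X[:900], y[:900]
X_b, y_b = X[900:1800], y[900:1800]
X_new = X[1800:]  # incoming cases to decide

# Each "team" trains its own model from scratch on its own data.
model_a = LogisticRegression(max_iter=1000).fit(X_a, y_a)
model_b = RandomForestClassifier(random_state=0).fit(X_b, y_b)

pred_a = model_a.predict(X_new)
pred_b = model_b.predict(X_new)

# Consensus cases proceed; disagreements are flagged for human review.
agree = pred_a == pred_b
print(f"{agree.sum()} of {len(X_new)} cases decided by consensus")
print(f"{(~agree).sum()} cases flagged for the human operator to review")
```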
Continue reading at https://www.c4isrnet.com/opinion/2023/06/14/what-war-elephants-can-teach-us-about-the-future-of-ai-in-combat/