Cybersecurity entrepreneur Josh Lospinoso discusses the dangers and potential uses of artificial intelligence (AI) in military systems. He stresses the importance of securing existing systems and deploying AI responsibly, warning against rushed development that neglects security.
Josh Lospinoso, a former Army captain and cybersecurity entrepreneur, discussed the threats and applications of artificial intelligence (AI) in an interview. He explained that data poisoning, in which an attacker feeds deliberately corrupted training data to an AI system, can significantly distort the system's behavior. Data poisoning has not yet been widespread, but there have been notable cases, such as Microsoft's chatbot Tay, which users manipulated into producing offensive output. Lospinoso highlighted the use of AI in cybersecurity, including email filters and malware detection, as well as the rise of adversarial AI techniques used by hackers. He expressed concern about the vulnerabilities of military software systems, arguing that existing systems must be secured before AI capabilities are layered on top of them. While he acknowledged AI's potential benefits in areas like maintenance and operational intelligence, he cautioned against rushing AI development without adequate security measures. Lospinoso also emphasized that AI algorithms are not yet suitable for making decisions in lethal weapon systems.
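To make the data-poisoning idea concrete, here is a minimal illustrative sketch (not from the interview, and deliberately simplified): a toy nearest-centroid classifier is trained on clean one-dimensional data, then retrained on a training set into which an attacker has injected mislabeled points. The injected points drag the learned class centroid far out of position, so the poisoned model misclassifies inputs the clean model handled easily. All names and numbers here are invented for illustration.

```python
import random

random.seed(0)

def make_data(n):
    # Synthetic two-class data: class 0 clusters near 0.0, class 1 near 5.0.
    data = []
    for _ in range(n):
        data.append((random.gauss(0.0, 1.0), 0))
        data.append((random.gauss(5.0, 1.0), 1))
    return data

def train_centroids(data):
    # A toy "model": the mean feature value of each class.
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in sums}

def predict(centroids, x):
    # Assign the class whose centroid is nearest.
    return min(centroids, key=lambda c: abs(x - centroids[c]))

def accuracy(centroids, data):
    return sum(1 for x, y in data if predict(centroids, x) == y) / len(data)

def poison(data, n_fake):
    # Poisoning attack: inject fake samples that sit deep in class-0
    # territory (near -5.0) but carry the label 1, dragging the learned
    # class-1 centroid toward (and past) the class-0 centroid.
    fakes = [(random.gauss(-5.0, 1.0), 1) for _ in range(n_fake)]
    return data + fakes

train = make_data(200)   # 400 clean training points
test = make_data(100)    # 200 clean test points

clean_model = train_centroids(train)
poisoned_model = train_centroids(poison(train, 400))

print("clean accuracy:   ", round(accuracy(clean_model, test), 3))
print("poisoned accuracy:", round(accuracy(poisoned_model, test), 3))
```

The clean model separates the classes almost perfectly, while the poisoned model's accuracy collapses, because the corrupted class-1 centroid now lies on the wrong side of the class-0 cluster. Real poisoning attacks against production ML systems are subtler, but the mechanism, corrupting training data to shift what the model learns, is the same one Lospinoso describes.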