OpenAI's recent deal with the US military has sparked controversy, with the company facing user backlash and concerns over AI's role in war. The initial agreement, described as 'opportunistic and sloppy', has since been revised, but questions remain about the power dynamics between government and private entities in AI development and deployment. OpenAI CEO Sam Altman acknowledged that the initial deal was rushed and emphasized the complexity of the issues involved, promising further changes to ensure the system is not used for domestic surveillance and to limit access by intelligence agencies without specific modifications.

The controversy has coincided with a surge in ChatGPT uninstalls, with Sensor Tower reporting a 200% increase in uninstall rates. Meanwhile, Anthropic's Claude has gained popularity, despite its past refusal to help create autonomous weapons.

The use of AI in military operations, exemplified by Palantir's work on Project Maven, raises ethical concerns. While Palantir supports a 'human in the loop' approach, the absence of a 'safety-conscious actor' in the Pentagon's AI deliberations is a significant issue, according to Oxford University professor Mariarosaria Taddeo. The BBC's AI Unpacked week explores these complex topics, shedding light on the implications of AI in warfare and beyond.