Artificial Intelligence in Security and Defense: Ally or Risk for Uruguay?

What happens to your data as you walk through the city? Cameras and sensors analyze your movements, record patterns and can even predict behavior, all thanks to artificial intelligence (AI). This technology is already part of our daily lives and, although it can make us feel safer, it also raises concerns about our privacy and the ethical boundaries we must establish. Is Uruguay prepared for this challenge?
The Promise and the Peril

AI can help prevent crime, detect cyber threats and optimize military resources. But what happens if that technology is based on biased data? What if it becomes a tool for mass surveillance? The risks are not minor: loss of privacy, discrimination and failures that could endanger human lives.
Big Brother: Who Controls AI?
On the regulatory front, global organizations such as the UN and the OECD are already warning about these dangers and promoting the responsible use of AI. In Latin America, the OAS is promoting ethical principles. In Uruguay, the National Artificial Intelligence Strategy 2024-2030 and the National Cybersecurity Strategy 2024-2030 are setting a course, but is it enough? There is still no independent body to oversee the use of AI in security and defense.
What is Uruguay doing?
The government has taken some initial steps, although not specifically aimed at the military and police:
- AI training for civil servants in general, offered by AGESIC in 2024, focusing on text analysis and data processing.
- Postgraduate courses such as the Specialization in Strategic Intelligence, taught at the Center for Higher National Studies (CALEN) of the Ministry of Defense, which covers information analysis, although not AI specifically.
On the operational front, the following has been implemented:
- Pilot programs with cameras that identify criminal patterns.
- Use of drones and predictive analysis tools at borders.
Meanwhile, the private sector is advancing rapidly. Banks and insurance companies use AI mainly in closed environments, working on their own databases to detect fraud and analyze risk. In contrast, public surveillance with cameras and drones operates on live images and personal data that is not always stored, which poses different challenges. The balance is also different: companies seek profitability, while the state must safeguard fundamental rights.
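To make the contrast concrete, here is a minimal sketch of the kind of closed-environment anomaly detection a bank might run on its own records, using scikit-learn's IsolationForest. The features, synthetic data and thresholds are purely illustrative, not any institution's actual system.

```python
# Minimal sketch: fraud screening inside a bank's closed environment.
# Features, synthetic data and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transaction history: [amount_usd, hour_of_day, km_from_home]
history = np.column_stack([
    rng.lognormal(3.5, 0.6, 5000),   # typical purchase amounts
    rng.normal(14, 4, 5000) % 24,    # mostly daytime activity
    rng.exponential(5.0, 5000),      # usually close to home
])

# Trained only on the institution's internal data: nothing leaves the bank.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Score incoming transactions; -1 marks candidates for analyst review,
# not automatic blocking.
incoming = np.array([[45.0, 13.0, 2.0],       # ordinary purchase
                     [9800.0, 3.0, 450.0]])   # large, 3 a.m., far away
for tx, label in zip(incoming, model.predict(incoming)):
    status = "flag for analyst review" if label == -1 else "clear"
    print(f"amount={tx[0]:8.2f} hour={tx[1]:4.1f} -> {status}")
```

The decisive point is that both training and scoring stay inside the institution's own perimeter, which is precisely the contrast with live public surveillance drawn above.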
AI in Public Safety: Surveillance with Electronic Eyes
Today in Uruguay, AI helps detect criminal patterns and monitor borders. But experience in other countries has shown that these systems can discriminate based on skin color or misidentify people. Moreover, there is still no body in Uruguay to evaluate whether these technologies respect human rights.
AI in Defense: Drones and Cyberdefense
In the military sphere, AI is already used in drones, intelligence analysis and cyber defense. There are projects for the predictive maintenance of equipment and systems that seek to anticipate threats. However, the risk that autonomous weapons may one day act without human intervention is generating debate at a global level.
In cyber defense, an additional concern arises: the most advanced AI solutions tend to operate in the cloud, on servers located in other countries. Sending sensitive national security data to foreign infrastructure could compromise the state's digital sovereignty. This dilemma is being discussed today in international forums; some countries have already decided to migrate their entire infrastructure to the cloud, while others choose to maintain local control.
Critical Thinking and the Risk of Passive Automation

While AI-based automation promises faster and more efficient decision-making, it also poses less visible but equally critical risks. One of these is the weakening of critical thinking in human operators.
When automated systems constantly generate predictions or recommendations, there is a danger that operators will accept these results automatically, without questioning them. This “passive automation” can lead to a loss of the analytical capacity to detect errors, anomalies or biases in the results produced by AI.
Added to this is another challenge: cognitive overload. In emergency situations, the combination of multiple streams of real-time information (sensors, cameras, predictive analytics, drones) can overwhelm decision-makers. Paradoxically, an excess of data can lead to hasty or erroneous decisions.
Therefore, strengthening critical thinking skills must be a central pillar in the training of operators working with AI. The ability to question automated decisions and to maintain human intervention as a counterweight is key to avoiding the blind delegation of responsibilities to systems that, while powerful, can also fail.
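One concrete way to institutionalize that counterweight is a routing rule that lets the system act autonomously only on low-impact, high-confidence outputs, sending everything else to an operator. The sketch below is a hypothetical illustration: the Prediction fields, labels and the 0.95 threshold are assumptions, not any fielded doctrine.

```python
# Sketch of a human-in-the-loop gate: automated output is acted on only
# when the model is confident AND the action is low-impact; everything
# else is queued for an operator. All thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # e.g. "suspicious_vehicle" (hypothetical label)
    confidence: float  # model's own probability estimate, 0.0-1.0
    high_impact: bool  # would acting on this affect a person directly?

def route(p: Prediction, auto_threshold: float = 0.95) -> str:
    """Decide whether a prediction may be handled automatically."""
    if p.high_impact:
        # Decisions that affect people always require a human.
        return "OPERATOR_REVIEW"
    if p.confidence >= auto_threshold:
        return "AUTO_HANDLE"
    return "OPERATOR_REVIEW"

# Example: even a confident identification of a person is never automated.
queue = [
    Prediction("unattended_package", 0.97, high_impact=False),
    Prediction("person_match",       0.99, high_impact=True),
    Prediction("plate_match",        0.80, high_impact=False),
]
for p in queue:
    print(f"{p.label:20s} conf={p.confidence:.2f} -> {route(p)}")
```

The exact rule matters less than the principle: the system may recommend, but accountability stays with a person.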
What More Could Be Done?
AI could optimize police and military logistics, and state logistics in general, if properly managed. It could also improve state cybersecurity by automating processes and even anticipating attacks before they occur. The implementation of AI in cybersecurity systems would make it possible to detect vulnerabilities in real time, automate responses to incidents, and strengthen the protection of critical infrastructures.
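As a simple illustration of what automated detection and response might look like, the sketch below flags source addresses producing bursts of failed logins and proposes, rather than executes, a block. The log format, the five-minute window and the threshold of 20 attempts are all assumptions for the example.

```python
# Illustrative sketch of automated incident handling: flag source IPs
# with bursts of failed logins and propose (not execute) a block.
# Log format, window and threshold are assumptions.
from collections import Counter
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # detection window
THRESHOLD = 20                  # failed attempts before alerting

def detect_bruteforce(events):
    """events: iterable of (timestamp, source_ip, success_flag)."""
    failures = Counter()
    window_start = None
    alerts = []
    for ts, ip, success in sorted(events):
        # Reset counts when the window expires (coarse but simple).
        if window_start is None or ts - window_start > WINDOW:
            failures.clear()
            window_start = ts
        if not success:
            failures[ip] += 1
            if failures[ip] == THRESHOLD:
                # Proposal only: an operator or policy approves the block.
                alerts.append((ts, ip, "propose firewall block"))
    return alerts

# 25 failed attempts from one address in under a minute triggers an alert.
t0 = datetime(2025, 1, 1, 3, 0)
burst = [(t0 + timedelta(seconds=i), "203.0.113.7", False) for i in range(25)]
print(detect_bruteforce(burst))
```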
Furthermore, its integration into internal security operations and into the National Emergency System (SINAE) could revolutionize crisis and emergency management. In the case of SINAE, AI could assist in the prediction of extreme weather events, better coordinate the distribution of resources in floods or fires and optimize integration with national health services.
For example, in Finland, AI has been used to coordinate rapid responses to snowstorms, and in Spain the Red Cross uses predictive analysis systems to manage health emergencies and natural disasters.
This technology could also contribute to optimizing police patrol routes in real time and improving the distribution of medical resources in crisis situations, creating a digital ecosystem that allows for faster and more accurate responses.
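A toy example of how such an ecosystem might rank zones and apportion scarce response teams is sketched below. The risk formula, weights and figures are invented for illustration and are not a validated hydrological model or actual SINAE data.

```python
# Toy allocation of scarce response teams across flood-prone zones.
# The risk formula, weights and all figures are invented for illustration.
def risk_score(rain_mm: float, river_level_m: float, population: int) -> float:
    # Hypothetical weighted score, not a validated hydrological model.
    return (0.4 * (rain_mm / 100)
            + 0.4 * (river_level_m / 5)
            + 0.2 * (population / 100_000))

zones = {
    "Durazno":   risk_score(rain_mm=180, river_level_m=4.2, population=34_000),
    "Canelones": risk_score(rain_mm=60,  river_level_m=1.1, population=520_000),
    "Artigas":   risk_score(rain_mm=140, river_level_m=3.8, population=73_000),
}

TEAMS_AVAILABLE = 10
total = sum(zones.values())
# Proportional allocation; a human dispatcher reviews before deployment.
for zone, score in sorted(zones.items(), key=lambda kv: -kv[1]):
    teams = round(TEAMS_AVAILABLE * score / total)
    print(f"{zone:10s} risk={score:.2f} -> {teams} response teams")
```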
Likewise, AI-based process automation could increase the efficiency of the State in administrative management. It could make administrative procedures faster and more reliable, improve state procurement processes and strengthen the verification of regulatory compliance in tenders. It would also facilitate the monitoring and registration of weapons, ammunition and other hazardous materials, such as explosives and chemical or nuclear substances, improving the control and traceability of these critical resources.
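Traceability of this kind is often implemented as an append-only, tamper-evident log. The sketch below hash-chains registry records so that any retroactive alteration is detectable; the record fields and serial numbers are hypothetical.

```python
# Sketch of tamper-evident traceability for a weapons/ammunition registry:
# each record stores the hash of the previous one, so altering history
# breaks the chain. Record fields and serials are hypothetical.
import hashlib
import json

def add_record(chain: list, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, **record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

registry: list = []
add_record(registry, {"serial": "UY-000123", "event": "registered", "holder": "Unit A"})
add_record(registry, {"serial": "UY-000123", "event": "transferred", "holder": "Unit B"})
print(verify(registry))            # True
registry[0]["holder"] = "Unit X"   # tampering with history...
print(verify(registry))            # ...is detected: False
```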
Food for Thought, or a Prompt to Write?
Uruguay is at a turning point in terms of the adoption of AI in security and defense. The benefits can be enormous, but it is necessary to move forward responsibly. Here are some ideas to think about together:
- Could it be useful to have an independent AI ethics council to ensure that technological decisions are transparent and fair?
- Should we guarantee that there is always human supervision in critical decisions assisted by AI? Automation helps, but human experience is still key.
- How can we ensure that the data collected by the State is handled with total transparency, informing citizens about its use?
- Perhaps it is time to expand AI training for police and the military, preparing our security forces for a more technological future.
- Could we strengthen SINAE by integrating AI to anticipate emergencies, improving coordination with health and internal security services?
- Improving state cybersecurity with AI sounds promising: automating the detection of and response to digital threats would better protect our infrastructures.
- Would it be positive if AI streamlined administrative procedures, tenders and the registration of weapons and hazardous materials, reducing time and human error?
- Finally, how can we encourage an open debate with society on these issues, ensuring that everyone feels part of the technological change?
Retired Colonel of the Uruguayan National Army Pedro M. Gómez is an instructor of Strategy and Cyberdefense at the Center for Higher National Studies (CALEN) in Uruguay, and holds a Master's in Security and Defense from the Inter-American Defense College. He was commander of the Uruguayan Army's Cyberdefense Unit.
Disclaimer: The views and opinions expressed in this article are those of the author. They do not necessarily reflect the official policy or position of any agency of the U.S. government, Diálogo magazine, or its members, nor the Uruguayan government. This article was machine translated.
References:
AGESIC. 2024. “Estrategia Nacional de Inteligencia Artificial 2024-2030.”
Asaro, Peter. 2012. “On Banning Autonomous Lethal Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-making.” International Review of the Red Cross 94 (886).
Benjamin, Ruha. 2019. Race after Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity.
Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Brundage, Miles, et al. 2018. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”
Gerlich, Michael. 2023. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Oxford University Press.
MDN. 2024. “Especialización en Inteligencia Estratégica.”
OAS. 2023. "Lineamientos Interamericanos sobre Inteligencia Artificial."
OECD. 2019. "Principios sobre Inteligencia Artificial."
O'Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishing Group.
UN. 2024. "Pacto para el Futuro."
WEF. 2020. “The Future of Jobs Report 2020.” World Economic Forum.
Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism. New York: PublicAffairs.