This article is the second in a series on digital security. The first article looked at how the United States can bring technology experts into the Department of Defense, develop new ideas at a faster pace, and share those innovations with allies. The series concludes with an essay on how the Army's 1941 Louisiana Maneuvers can serve as a model for training today's armed forces for future conflicts rather than past wars.
The US Department of Defense has taken a growing interest in artificial intelligence (AI). Military planners have run a series of war games to determine whether AI can replace humans in combat. Based on these early tests, computers seem able to outperform human operators; in one recent test, an AI system defeated its human opponent in a simulated dogfight. Some military planners even worry that keeping a human in the loop dangerously slows decision-making in high-tempo situations.
But using artificial intelligence in place of humans in combat ignores the lessons Silicon Valley has learned from deploying the technology elsewhere. Experience with artificial intelligence has shown how dangerous it is to assume that computers are "better" or "more accurate" than humans. Automating complex processes for accuracy, speed, and efficiency can make failure less frequent, but it also makes failure harder to anticipate and more severe when it does occur. Successfully exploiting AI therefore does not mean replacing humans with computers for discrete tasks. It means embracing the relationship between the technology and the people who interact with it.
How should the military think about this? We believe our experience at Rebellion Defense offers useful insights. We are a software company that helps intelligence and homeland security customers use artificial intelligence in their work. In our development process, we discourage defense partners from focusing on where machines can take over pieces of an existing system, and instead explore how the system itself could change if people and machines worked together. This leads us to three key principles: (a) create knowledge, don't lose it; (b) find hybrid-intelligence opportunities; and (c) budget for mistakes.
As Silicon Valley discovered, introducing AI into complex systems fundamentally changes them, altering both the kinds of errors that occur and the consequences those errors carry.
Our first brush with the problem of using machines to eliminate human error came in automating the work that system administrators do to set up, upgrade, and configure servers. Software engineers built automated systems that read plain-text configuration files and reconfigure servers on the fly as needed. This greatly reduced the number of errors caused by mistyped commands or by forgetting to update a server or two on the list. Companies that adopted this approach to automation cut errors and increased efficiency. Eliminating human error also allowed systems to grow larger, faster, and more complex. But that growth came at a price. In automated systems, errors can propagate across many subsystems very quickly, often bringing down large numbers of seemingly unrelated services. In November 2020, one such bug shut down much of the internet for several hours. Cascading failures caused by the unpredictable behavior of error-mitigation technology take down the world's top technology companies several times a year.
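To make this concrete, here is a minimal sketch in Python of the kind of configuration automation described above. The file name, line format, and apply_config stub are hypothetical illustrations rather than any particular company's tooling; the point is that a single bad line in the file now propagates to every server at once, with no human pausing between steps.

```python
# Minimal sketch of declarative server configuration, for illustration only.
# Assumes a plain-text file "servers.cfg" with lines like:
#   web-01 nginx=1.24 workers=8
# and a hypothetical apply_config() that pushes settings to a host.

from dataclasses import dataclass


@dataclass
class ServerConfig:
    host: str
    settings: dict


def parse_config(path: str) -> list[ServerConfig]:
    """Read the desired state of every server from a plain-text file."""
    configs = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            host, *pairs = line.split()
            settings = dict(pair.split("=", 1) for pair in pairs)
            configs.append(ServerConfig(host, settings))
    return configs


def apply_config(config: ServerConfig) -> None:
    """Placeholder for the code that would actually reconfigure a host."""
    print(f"reconfiguring {config.host}: {config.settings}")


if __name__ == "__main__":
    # Every host in the file is updated automatically: no more mistyped
    # commands or forgotten servers, but also no human pausing to notice
    # that a single bad line is about to be applied everywhere at once.
    for config in parse_config("servers.cfg"):
        apply_config(config)
```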
The unintended consequences of AI have also taken more tangible forms. Amazon operates more than 110 fulfillment centers across the country, developed to improve efficiency and safety. But to the company's surprise, smart robots and artificial intelligence turned out to increase workplace injuries and accidents in the online retailer's warehouses. In some robotic warehouses, injury rates were five times the industry average.
Here, too, reducing errors allowed operations to run faster and grow more complex. Amazon's robots increased human injuries because they pushed the pace of warehouse work beyond what the human body can tolerate. There were fewer accidents caused by people moving around the warehouse, but more injuries from repetitive stress and fatigue as workers kept up with an ever-increasing pace at their processing stations. In building its robotic workforce, Amazon tried to swap humans for machines without looking at the system as a whole or considering how humans would adapt to the automation. Truly safe systems account for the interaction between humans and machines rather than trying to replace one with the other.
Situations like these are why Google famously described machine learning and artificial intelligence as a "high-interest credit card": the benefits are very real, but preventing unexpected or dangerous behavior in automated systems proved harder and more expensive over the long term than the company anticipated. When previously unconnected systems begin sharing data through AI models, the models retrain on data produced by other models, and side effects are guaranteed. Google has tried to manage these effects by keeping the employees who operate its systems closely connected to the researchers who develop the artificial intelligence.
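To illustrate the kind of hidden feedback loop described above, here is a toy simulation, a sketch under assumed numbers rather than anything resembling Google's systems. Two estimators each retrain partly on the other's output; the coupling gives both of them a systematic bias that neither would have on its own.

```python
# Toy simulation of a hidden feedback loop between two models, for
# illustration only. Each "model" is just a running estimate that retrains
# partly on the other model's output instead of on ground truth.

import random

TRUE_VALUE = 100.0      # the real quantity both models try to estimate
COUPLING = 0.5          # fraction of each update drawn from the other model

model_a, model_b = 100.0, 100.0

for step in range(1, 11):
    # Each model sees a noisy ground-truth measurement...
    obs_a = TRUE_VALUE + random.gauss(0, 1)
    obs_b = TRUE_VALUE + random.gauss(0, 1)
    # ...but also "retrains" on the other model's latest output,
    # amplified slightly by hypothetical downstream processing.
    model_a = (1 - COUPLING) * obs_a + COUPLING * (model_b * 1.02)
    model_b = (1 - COUPLING) * obs_b + COUPLING * (model_a * 1.02)
    print(f"step {step}: model_a={model_a:.1f}, model_b={model_b:.1f}")
```

Both estimates settle around 102 rather than 100: a small, systematic error created entirely by the interaction between the two systems.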
Given these lessons, how should the Department of Defense use AI in military contexts? We support three main principles.
Both the United States and Russia have stories of the Cold War nearly devolving by accident into a hot nuclear war, averted only because experienced human operators recognized that the machine's conclusion was absurd and ignored or overruled the warnings from early-detection systems.
Stories like these are often used to argue for keeping a human in the loop, that is, human supervision of AI outputs. But such oversight does not help if the human operator lacks the experience or knowledge to judge what the correct output should be. Safety researchers point to the "ironies of automation": automating tasks erodes the knowledge and experience of the very people the system relies on for good judgment. The first step in integrating AI into an existing system often leaves the most complex and consequential analysis to human workers while handing the basic analysis to machines. Conventional thinking assumes a person will perform better once freed from the burden of simple tasks. In fact, accuracy on the difficult tasks is built on experience with the simple ones. In industry this is sometimes called the "moral crumple zone": the human operator ends up as the scapegoat for the machine's errors because they nominally hold a supervisory role, even though the machine's complexity has made it impossible for them to know what the correct output should be.
Instead of asking how to make systems run faster or produce fewer errors, we focus on how people build and maintain their understanding of operations by working with the systems and observing the machines. For example, intelligence, surveillance, and reconnaissance tasks typically involve a range of analysts, from junior analysts who focus on annotation and data entry to senior analysts who build and refine interpretive reports from that data. Those with a high-level view of intelligence, surveillance, and reconnaissance tend to assume that the best way to introduce AI is to have it replace the junior analysts, or at least filter incoming data for relevance. After all, people get tired, miss things, and make mistakes that AI could catch.
But when we interviewed military personnel working in intelligence, surveillance, and reconnaissance, they were quick to point out that the analysts at every level came up through the lower levels. The professional judgment junior analysts need for the tasks machines cannot perform is honed by hours and hours of tedious data-entry work. Replace the junior analysts with artificial intelligence and that expertise is never built. The entire system becomes more brittle as a result, and the mistakes it makes carry far more dangerous and far-reaching consequences.
One advantage of artificial intelligence is that the mistakes humans make and the mistakes machines make are very different from each other. So-called hybrid intelligence aims to combine the complementary strengths of human and machine intelligence to achieve better results than either could alone.
At the heart of hybrid intelligence is Moravec's paradox, which tells us that pattern matching is hard and computationally expensive for computers but cheap and easy for humans, while computation is hard and labor-intensive for humans but cheap and easy for computers. For decades, computers have been able to take the metadata from a photo and cross-reference it against geographic boundary data to determine the country where it was taken. But only the most advanced computers can work out what the photo is actually a picture of. Rather than cataloguing the shortcomings of humans and machines separately, we should look at how they best work together as a team.
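As a small illustration of the "cheap and easy for computers" half of the paradox, here is a sketch in Python that reads GPS coordinates from a photo's EXIF metadata and checks them against country boundary polygons. The input files (photo.jpg, countries.geojson) and the "name" property are assumptions for the example; deciding what the photo actually depicts remains the human's job.

```python
# Minimal sketch: read GPS coordinates from a photo's EXIF metadata and
# look up which country they fall in. Illustration only; "photo.jpg" and
# "countries.geojson" are assumed inputs, not real assets.
# Requires: pip install Pillow shapely

import json

from PIL import Image
from shapely.geometry import Point, shape

GPS_TAG = 34853  # standard EXIF tag id for the GPSInfo block


def dms_to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to decimal degrees."""
    degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -degrees if ref in ("S", "W") else degrees


def photo_location(path):
    """Return (longitude, latitude) from a JPEG's EXIF data, or None."""
    exif = Image.open(path)._getexif() or {}
    gps = exif.get(GPS_TAG)
    if not gps:
        return None
    lat = dms_to_degrees(gps[2], gps[1])   # 1: N/S ref, 2: latitude DMS
    lon = dms_to_degrees(gps[4], gps[3])   # 3: E/W ref, 4: longitude DMS
    return lon, lat


def country_of(lon, lat, boundaries_path="countries.geojson"):
    """Return the name of the country polygon containing the point, if any."""
    point = Point(lon, lat)
    with open(boundaries_path) as f:
        for feature in json.load(f)["features"]:
            if shape(feature["geometry"]).contains(point):
                return feature["properties"]["name"]
    return None


if __name__ == "__main__":
    location = photo_location("photo.jpg")
    if location:
        print(country_of(*location))
```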
This starts in the product-development phase, when designers look for hybrid-intelligence opportunities by mapping the system and breaking its work down into tasks such as pattern matching and computation. Consider the task of maintaining a fleet of armored vehicles. Artificial intelligence can help predict and prevent equipment failures by running calculations over the sensor feeds from thousands of moving parts.
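A minimal sketch of that division of labor might look like the following, with hypothetical vehicles, sensor readings, and thresholds: the machine does the statistics over the sensor feed and nominates vehicles for inspection, while a human mechanic makes the call.

```python
# Toy sketch of the division of labor described above: the machine does the
# cheap-for-computers part (statistics over sensor readings), and a human
# mechanic does the judgment call. Vehicles, sensors, and thresholds are
# hypothetical.

import statistics

# Hypothetical recent vibration readings per vehicle (one value per trip).
SENSOR_FEEDS = {
    "vehicle-101": [0.91, 0.91, 0.94, 0.92, 0.95],
    "vehicle-102": [0.88, 0.95, 1.30, 1.62, 1.98],  # trending upward
    "vehicle-103": [1.02, 0.99, 1.01, 0.98, 1.00],
}

ANOMALY_THRESHOLD = 2.0  # flag readings more than 2 std devs above the fleet mean


def flag_for_inspection(feeds, threshold=ANOMALY_THRESHOLD):
    """Return vehicles whose latest reading is anomalously high for the fleet."""
    all_readings = [x for values in feeds.values() for x in values]
    mean = statistics.mean(all_readings)
    stdev = statistics.stdev(all_readings)
    flagged = []
    for vehicle, values in feeds.items():
        score = (values[-1] - mean) / stdev
        if score > threshold:
            flagged.append((vehicle, round(score, 2)))
    return flagged


if __name__ == "__main__":
    # The model only nominates candidates; a mechanic decides whether the
    # vibration trend actually means a part is about to fail.
    for vehicle, score in flag_for_inspection(SENSOR_FEEDS):
        print(f"{vehicle}: anomaly score {score}, schedule human inspection")
```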