The Pentagon pushes for autonomous weapons to offset China - Politics | PoFo


#14655165 ... story.html

The exotic new weapons the Pentagon wants to deter Russia and China

Little noticed amid the daily news bulletins about the Islamic State and Syria, the Pentagon has begun a push for exotic new weapons that can deter Russia and China.

Pentagon officials have started talking openly about using the latest tools of artificial intelligence and machine learning to create robot weapons, “human-machine teams” and enhanced, super-powered soldiers. It may sound like science fiction, but Pentagon officials say they have concluded that such high-tech systems are the best way to combat rapid improvements by the Russian and Chinese militaries.

David Ignatius writes a twice-a-week foreign affairs column and contributes to the PostPartisan blog.

These potentially revolutionary U.S. weapons systems were explained in an interview last week by Robert Work, the deputy secretary of defense, and Air Force Gen. Paul Selva, the vice chairman of the Joint Chiefs of Staff. Their comments were the latest in a series of unusual recent disclosures about what, until a few months ago, was some of the military’s most secret research.

“This is how we will make our battle networks more powerful, hopefully, and inject enough uncertainty in the minds of the Russians and the Chinese that, you know, if they ever did come to blows with us, would be able to prevail in a conventional [non-nuclear] way. That, for me, is the definition of conventional deterrence,” Work explained.

Within the Pentagon, this high-tech approach is known by the dull phrase “third offset strategy,” emulating two earlier “offsets” that checked Russian military advances during the Cold War. The first offset was tactical nuclear weapons; the second was precision-guided conventional weapons. The latest version assumes that smart, robot weapons can help restore deterrence that has been eroded by Russian and Chinese progress.

Gen. Joseph F. Dunford, the chairman of the Joint Chiefs of Staff, voiced an early warning during his confirmation hearing in July when he said that Russia posed the greatest “existential” threat to the United States. Work said in a recent speech that because the United States has focused on the Middle East since 2001, “our program has been slow to adapt as these high-end threats have started to re-emerge.”

The Pentagon’s 2017 budget includes some money to prime the high-tech pump: $3 billion for advanced weapons to counter, say, a Chinese long-range attack on U.S. naval forces; $3 billion to upgrade undersea systems; $3 billion for human-machine teaming and “swarming” operations by unmanned drones; $1.7 billion for cyber and electronic systems that use artificial intelligence; and $500 million for war-gaming and other testing of the new concepts.

The Obama administration, sometimes chided for being slow to respond to Russian and Chinese threats, seems to have concluded that America’s best strategy is to leverage its biggest advantage, which is technology. The concepts are reminiscent of President Reagan’s “Star Wars” initiative, but 30 years on.

The high-tech resurgence got a boost last year from the blue-ribbon Defense Science Board, which conducted a “summer study” of autonomous, robot weapons. “Imagine if we are unprepared to counter such capabilities in the hands of our adversaries,” the board warned.

The game partly is about messaging the Russians and Chinese. Work has described Russia as “a resurgent great power” and China as “a rising power with impressive latent technological capabilities [that] probably embodies a more enduring strategic challenge.” In a Feb. 2 budget announcement, Defense Secretary Ashton Carter spoke of Russian “aggression” in Europe and said: “We haven’t had to worry about this for 25 years, and while I wish it were otherwise, now we do.”

Carter raised some eyebrows in that budget message when he described the Pentagon’s “Strategic Capabilities Office,” a highly classified initiative that he began in 2012 when he was undersecretary. He noted that the office was working on advanced navigation for smart weapons using micro-cameras and sensors; missile-defense systems using hypervelocity projectiles; and swarming drones that are “really fast, really resistant.”

Work illustrated the new willingness to discuss exotic weaponry. During the interview, he showed off a small “Perdix” micro-drone, less than a foot long, which flew with 25 of its mates in a tight grid last summer after being launched from a large plane. These organized drones are part of the Pentagon’s vision of future combat.

The Ukraine and Syria battlefields have offered sobering demonstrations of Russian capabilities. In the interview and other public comments, Work catalogued Russian military advances that include automated battle networks, advanced sensors, drones, anti-personnel weapons and jamming devices.

“Our adversaries, quite frankly, are pursuing enhanced human operations,” Work warned a gathering at the Center for a New American Security in December. “And it scares the crap out of us, really.”

This is bad. Very, very, very, very bad.
No, very, very, very, very, very bad is increasing the potential for gigadeath wars that could end the human race before it has a chance to go into space.

This open letter was announced on July 28 at the opening of the IJCAI 2015 conference.
Journalists who wish to see the press release may contact Toby Walsh.
Hosting, signature verification and list management are supported by FLI; for administrative questions about this letter, please contact Max Tegmark.

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

reference: ... ignatories

The list of Signatories for this letter includes:

Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”
Nils J. Nilsson, Department of Computer Science, Stanford University, Kumagai Professor of Engineering, Emeritus, past president of AAAI
Barbara J. Grosz, Harvard University, Higgins Professor of Natural Sciences, former president of AAAI, former chair of IJCAI Board of Trustees
Tom Mitchell, CMU, past president of AAAI, Fredkin University Professor and Head of the Machine Learning Department
Eric Horvitz, Microsoft Research, Managing Director, past president of AAAI, co-chair of AAAI Presidential Panel on Long-term AI Futures, member of ACM, IEEE CIS
Martha E. Pollack, University of Michigan, Provost, Professor of Computer Science & Professor of Information, past president of AAAI, Fellow of AAAS, ACM & AAAI
Henry Kautz, University of Rochester, Professor of Computer Science, past president of AAAI, member of ACM
Demis Hassabis, Google DeepMind, CEO
Yann LeCun, New York University & Facebook AI Research, Professor of Computer Science & Director of AI Research
Oren Etzioni, Allen Institute for AI, CEO, member of AAAI, ACM
Peter Norvig, Google, Research Director, member of AAAI, ACM
Geoffrey Hinton, University of Toronto and Google, Emeritus Professor, AAAI Fellow
Yoshua Bengio, Université de Montréal, Professor
Erik Sandewall, Linköping University, Sweden, Professor of Computer Science, member of AAAI, ACM, Swedish Artificial Intelligence Society
Francesca Rossi, Padova & Harvard, Professor of Computer Science, IJCAI President and Co-chair of AAAI committee on impact of AI and Ethical Issues, member of ACM
Bart Selman, Cornell, Professor of Computer Science, co-chair of the AAAI presidential panel on long-term AI futures, member of ACM
Joseph Y. Halpern, Cornell, Professor, member of AAAI, ACM, IEEE
Richard S. Sutton, Univ. of Alberta, Professor of Computer Science and author of the textbook “Reinforcement Learning: An Introduction”
Toby Walsh, Univ. of New South Wales & NICTA, Professor of AI and President of the AI Access Foundation
David C. Parkes, Harvard University, Area Dean for Computer Science, Chair of ACM SIGecom, AAAI Fellow and member of AAAI presidential panel on long-term AI futures, member of ACM
Berthold K.P. Horn, MIT EECS & CSAIL, Professor EECS, member of AAAI, IEEE CS
Gerhard Brewka, Leipzig University, Professor for Intelligent Systems, past president of ECCAI, member of AAAI
John S Shawe-Taylor, University College London, Professor of Computational Statistics and Machine Learning, member of IEEE CS
Hector Levesque, University of Toronto, Professor Emeritus, Past President of IJCAI, member of AAAI
Ivan Bratko, University of Ljubljana, Professor of Computer Science, ECCAI Fellow, member of SLAIS
Pierre Wolper, University of Liège, Professor of Computer Science, member of AAAI, ACM, IEEE CS
Bonnie Webber, University of Edinburgh, Professor in Informatics, member of AAAI, Association for Computational Linguistics
Ernest Davis, New York University, Professor of Computer Science, member of AAAI, ACM
Mary-Anne Williams, University of Technology Sydney, Founder and Director, Innovation and Enterprise Lab (The Magic Lab); ACM Committee Eugene L. Lawler Award for Humanitarian Contributions within Computer Science and Informatics; Fellow, Australian Computer Society, member of AAAI, IEEE CIS, IEEE CS, IEEE RAS
Frank van Harmelen, VU University Amsterdam, Professor of Knowledge Representation, ECCAI Fellow, member of the Academia Europaea
Csaba Szepesvari, University of Alberta, Professor of Computer Science, member of AAAI, ACM
Raja Chatila, CNRS, University Pierre and Marie Curie, Paris, Researcher in Robotics and AI, member of AAAI, ACM, IEEE CS, IEEE RAS, President IEEE Robotics and Automation Society (Disclaimer: my views represent my own)
Noel Sharkey, University of Sheffield and ICRAC, Emeritus Professor, member of British Computer Society, Institution of Engineering and Technology UK
Ramon Lopez de Mantaras, Artificial Intelligence Research Institute, Spanish National Research Council, Director, ECCAI Fellow, former President of the Board of Trustees of IJCAI, recipient of the AAAI Robert S. Engelmore Memorial Lecture Award
Carla Brodley, Northeastern University, Dean and Professor of the College of Computer and Information Science, member of AAAI, ACM, IEEE CS
Ann Nowé, Vrije Universiteit Brussel, Professor of Computer Science (AI), ECCAI board member, BNVKI chairman, member of IEEE CIS, IEEE CS, IEEE RAS
Vadim Stefanuk, IITP RAS (Moscow), RUPF, Leading Researcher, Professor of AI in RUPF, ECCAI Fellow, Vice-Chairman of RAAI
Bruno Siciliano, University of Naples Federico II, Professor of Robotics, Fellow of IEEE, ASME, IFAC, Past-President IEEE Robotics and Automation Society
Bernhard Schölkopf, Max Planck Institute for Intelligent Systems, Director, member of ACM, IEEE CIS, IMLS board & NIPS board
Mustafa Suleyman, Google DeepMind, Co-Founder & Head of Applied AI
Juergen Schmidhuber, The Swiss AI Lab IDSIA, USI & SUPSI, Professor of Artificial Intelligence
Dileep George, Vicarious, Co-founder
D. Scott Phoenix, Vicarious, Co-founder
Ronald J. Brachman, Yahoo, Chief Scientist and Head of Yahoo Labs, member of AAAI, ACM, Former president of AAAI, former Secretary-Treasurer of IJCAI, Inc., Fellow of AAAI, ACM, and IEEE.
Stephen Hawking, Director of research at the Department of Applied Mathematics and Theoretical Physics at Cambridge, 2012 Fundamental Physics Prize laureate for his work on quantum gravity
Elon Musk, SpaceX, Tesla, Solar City
Steve Wozniak, Apple Inc., Co-founder, member of IEEE CS
Jaan Tallinn, co-founder of Skype, CSER and FLI
Frank Wilczek, MIT, Professor of Physics, Nobel Laureate for his work on the strong nuclear force
Max Tegmark, MIT, Professor of Physics, co-founder of FLI
Daniel C. Dennett, Tufts University, Professor, Co-Director, Center for Cognitive Studies, member of AAAI
Noam Chomsky, MIT, Institute Professor emeritus, inductee in IEEE Intelligent Systems Hall of Fame, Franklin medalist in Computer and Cognitive Science
Barbara Simons, IBM Research (retired), Past President ACM, ACM Fellow, AAAS Fellow
Stephen Goose, Director of Human Rights Watch's Arms Division
Anthony Aguirre, UCSC, Professor of Physics, co-founder of FLI
Lisa Randall, Harvard, Professor of Physics
Martin Rees, Co-founder of CSER and Astrophysicist
Susan Holden Martin, Lifeboat Foundation, Advisory Board, Robotics/AI
Peter H. Diamandis, XPRIZE Foundation, Chairman & CEO
Hon. Jean Jacques Blais, Founding Chair, Pearson Peacekeeping Center, Former Minister of Defence for Canada (1982-83)

Full list here
#14655389 ... -go-to-war

Except that the World Economic Forum has recently been discussing the topic, with one gentleman representing the Western arms industry who would love to see this go forward, because it would mean a lot of government contracts for his company and those similar to the one he represents. As we all know, the US is also the world's largest arms dealer ... ters/3105/
#14655403 ... n#t-554341 ... anguage=en

All I ask is that you don't respond right away. Just watch the video (or videos if you can), sleep on it and then perhaps respond the next day after digesting the information a bit. ... t-carrier/

Iran uses a drone to spy on an aircraft carrier

Boston Dynamics just released a new version of the Atlas robot

Bottom line is that the human resources available to Russia and China are larger than those of NATO put together. In the case of a Sino-Russian alliance, and other satellites that might follow, we will be short on manpower, so improving the quality of the individual soldier is required to combat the threats the future might possess. This is not a realm for philosophers or humanists; it is a realm of what we have to do to be safe.
Dagoth Ur wrote:I am sure they would love to have such a product to push as it would be ludicrously expensive and would make them fabulously wealthy. But in reality no one is going to okay a programme of billion-dollar-a-piece soldiers when you can get a bunch of meat to clog up the grinder for dirt cheap.

You are mistaken on two points:

a) The human cost. Any US war is lost past the thousandth death; at that point, public opinion wants it to end. On the other hand, no one cares about the thousandth destroyed drone.

b) The financial cost. Strap a gun onto a commercial drone with duct tape, load the right software on it, and you have a soldier. The hardware will soon cost $100 and be readily available, while the software will be open-source.

Besides, existing equipment (drones, artillery, tanks, trucks, submarines, etc.) will be less expensive and more effective once it is automated (soon). Human operators are expensive: wages, food, energy, oxygen, housing, sanitation, entertainment, training, medical costs, fuel, shielding, space, etc.

And because you cannot afford to lose human beings, you cannot freely sacrifice them. You need recon, you need backup, etc. This is expensive. But drones are expendable. For example, you can cover all the villages in a 100 km² area in one day with $100 drones. Villagers will get angry and destroy 10% of them; so what?
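To make the expendability arithmetic explicit, here is a minimal back-of-envelope sketch in Python. It uses only the figures the post itself assumes ($100 per drone, one drone per km², 10% daily attrition); the function name and numbers are illustrative, not from any real procurement model:

```python
# Back-of-envelope sketch of the expendability argument: what one day of
# blanket drone coverage costs in replacements, at the quoted $100 unit
# price and an assumed 10% daily attrition rate. All figures illustrative.

def daily_surveillance_cost(num_drones: int, unit_cost: float, attrition_rate: float) -> float:
    """Cost of replacing the drones lost during one day of coverage."""
    return num_drones * attrition_rate * unit_cost

# Say 100 drones cover the 100 km^2 area (1 km^2 each), at $100 apiece,
# losing 10% per day to angry villagers.
lost_cost = daily_surveillance_cost(100, 100.0, 0.10)
print(f"Daily replacement cost: ${lost_cost:.0f}")
```

Under these assumptions, a day of blanket coverage costs on the order of $1,000 in losses, which is negligible next to the political cost of a single human casualty — that asymmetry is the core of the argument.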

Human soldiers will persist: they will become cops. The killing will be performed by the cheap and expendable drones that escort them. As for hardware and vehicles, they will be automated, with a few exceptions.

Ummon wrote:No, very, very, very, very, very bad is increasing the potential for gigadeath wars that could end the human race before it has a chance to go into space.

As long as your drones are autonomous and can only receive limited remote orders, no force can massively turn them against you. Those drones are designed with the idea that they will be hacked by enemy forces: at worst, the attackers will only be able to order some of them to go back to your HQ to get new orders.

Autonomous does not mean "able to rebel". Nor does it mean "controlled by a central AI that may decide to rebel". I understand your concern, but do not jump to conclusions.
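The "limited remote orders" design can be sketched concretely. The following is a hypothetical illustration, not any real military system: the drone's radio interface accepts only a small whitelist of benign commands, each authenticated with a pre-shared key, so even a successful hijacker can at worst recall the drone. All names and commands are invented.

```python
# Hypothetical sketch: a drone that accepts only a tiny whitelist of
# benign remote orders, each authenticated with an HMAC over a
# pre-shared key. No "attack" command exists over the air at all.
import hmac
import hashlib

ALLOWED_COMMANDS = {"RETURN_TO_BASE", "HOLD_POSITION"}

def verify_order(command: bytes, signature: bytes, key: bytes) -> bool:
    """Check the HMAC-SHA256 signature of a received command."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def handle_order(command: str, signature: bytes, key: bytes) -> str:
    """Reject anything outside the whitelist or with a bad signature."""
    if command not in ALLOWED_COMMANDS:
        return "REJECTED: unknown command"
    if not verify_order(command.encode(), signature, key):
        return "REJECTED: bad signature"
    return f"ACCEPTED: {command}"

key = b"pre-shared-secret"
sig = hmac.new(key, b"RETURN_TO_BASE", hashlib.sha256).digest()
print(handle_order("RETURN_TO_BASE", sig, key))  # ACCEPTED: RETURN_TO_BASE
print(handle_order("ATTACK", sig, key))          # REJECTED: unknown command
```

The design choice this illustrates is exactly the poster's point: the remote command set is so restricted that "hacked" bounds the damage at a recall, not a redirection of force.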

Also, we now need autonomous police drones to protect the skies over our homeland. Strapping a gun onto an off-the-shelf drone is too easy, and the police are currently mostly impotent against such threats. And they must act quickly: sometimes they have less than a minute between take-off and action.
Saeko wrote:People vastly overestimate the capabilities of current ai tech in dealing with real world situations. This is nothing more than a welfare scam by the arms industry.

There are many tasks that are very reasonable and safe from the software point of view:
* Automatic destruction of submarines in a given perimeter.
* Automatic destruction of intercontinental missiles in a given perimeter.
* Automatic scouting of a given perimeter by swarms of cheap and unarmed drones.

Avoiding problems in those cases is not that big a deal. The hardware is the real problem, imo.
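The common software primitive behind the three "given perimeter" tasks above is a geofence test: is a detected contact inside the declared zone? A minimal sketch of the standard ray-casting point-in-polygon test; the coordinates and the patrol box are invented for illustration:

```python
# Minimal geofence sketch: the "given perimeter" tasks above all reduce
# to deciding whether a detected contact lies inside a declared polygon.
# Uses the classic ray-casting point-in-polygon test.

def inside_perimeter(point, polygon):
    """Ray-casting test: is (x, y) strictly inside the closed polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A 10 km x 10 km square patrol box (coordinates in km).
box = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(inside_perimeter((5, 5), box))   # True
print(inside_perimeter((15, 5), box))  # False
```

Whatever the sensor and weapon hardware look like, this is the part of the software stack that is genuinely easy to specify and verify — which is why these perimeter tasks are the ones the poster calls "reasonable and safe" on the software side.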
