
NIPS 2018 Adversarial Vision Challenge Results Announced: CMU's Eric Xing Team Wins Two Championships

Author: Wieland Brendel

Compiled by Machine Heart

Contributors: Zhang Qian, Wang Shuting

Today, the results of the NIPS 2018 Adversarial Vision Challenge were released. The competition was divided into three tracks: Defense, Untargeted Attack, and Targeted Attack. The Petuum-CMU team led by Eric Xing won two championships; in the Untargeted Attack track, the LIVIA team from Canada took first place and Tsinghua's TSAIL team took second. This article outlines how the winning teams approached the problem; full details will be presented at the NIPS Competition workshop on December 7, 9:15-10:30.

NIPS 2018 Adversarial Vision Challenge address: https://www.crowdai.org/challenges/nips-2018-adversarial-vision-challenge-robust-model-track

Today, the results of the NIPS 2018 Adversarial Vision Challenge were announced; participants submitted more than 3,000 models and attacks. This year's competition focused on real-world scenarios in which the attacker has only limited access to the model (up to 1,000 queries per sample), and the model returns only its final decision rather than gradients or confidence scores. This setup simulates the typical threat scenario faced by deployed machine learning systems, and the competition is expected to drive the development of effective decision-based attacks and of more robust models.
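This decision-only threat model can be pictured as a minimal query interface. The sketch below is purely illustrative (the class, the toy linear classifier, and all parameter names are assumptions, not the challenge's actual evaluation code); the point is that an attacker sees only the predicted label and is limited by a query budget:

```python
import numpy as np

class DecisionOnlyModel:
    """Wraps a classifier so an attacker sees only the predicted label,
    never gradients or confidence scores (hypothetical interface)."""

    def __init__(self, weights: np.ndarray, max_queries: int = 1000):
        self.weights = weights          # a toy linear classifier
        self.max_queries = max_queries  # per-sample query budget
        self.queries = 0

    def __call__(self, x: np.ndarray) -> int:
        if self.queries >= self.max_queries:
            raise RuntimeError("query budget exhausted")
        self.queries += 1
        # Only the argmax label is returned: no logits, no gradients.
        return int(np.argmax(self.weights @ x))

model = DecisionOnlyModel(weights=np.array([[1.0, -1.0], [-1.0, 1.0]]))
label = model(np.array([0.3, 0.9]))  # -> 1
```

Every decision-based attack in the challenge has to work through an interface of roughly this shape.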

A ready-made submission template on the CrowdAI platform.

All winning approaches performed at least an order of magnitude better than the standard baselines (such as transfer attacks from a vanilla model, or the plain boundary attack), measured by the median size of the L2 perturbation. We asked the top three teams in each track (Defense, Untargeted Attack, Targeted Attack) to describe their approaches. The winners will present their methods at the NIPS Competition workshop on December 7, 9:15-10:30.
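The ranking metric above, the median L2 perturbation size across samples, is simple to state precisely. The sketch below is my own illustration of that metric, not the organizers' scoring code:

```python
import numpy as np

def median_l2_score(originals, adversarials):
    """Median L2 norm of the perturbations across all samples --
    the quantity the challenge ranked submissions by (sketch)."""
    dists = [np.linalg.norm((a - o).ravel())
             for o, a in zip(originals, adversarials)]
    return float(np.median(dists))

# toy example: two 4-dim "images" with perturbations of norm 5 and 1
orig = [np.zeros(4), np.zeros(4)]
adv = [np.array([3.0, 4.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0, 1.0])]
median_l2_score(orig, adv)  # -> 3.0
```

Using the median rather than the mean keeps a few catastrophic failures (or lucky successes) from dominating the score.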

A common theme among the winners of the attack tracks was a low-frequency version of the boundary attack combined with the use of different defenses as surrogate models. On the defense track, the winners used a novel approach to robust models (details may not be known before the workshop) together with adversarial training based on a new gradient-based iterative L2 attack. Over the next few weeks we will release more information about the results, including visualizations of adversarial samples generated against the defense models. The winning teams will be notified within a few weeks.

Defense

First place: Petuum-CMU Team (code name "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

To learn deep networks that are robust to adversarial samples, the authors analyzed the generalization ability of models under adversarial perturbations. Based on this analysis, they propose a new formulation for learning robust models with guarantees on both generalization and robustness.

Second place: Wilson team (no response received from the team yet)

Third place: LIVIA team (code name "Jerome R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, École de technologie supérieure (ÉTS Montreal), Canada

The authors trained a robust model via adversarial training with their proposed gradient-based iterative L2 attack (Decoupled Direction and Norm, DDN), which is fast enough to be used during training. At each training step, they find an adversarial example close to the decision boundary (using DDN) and minimize the cross-entropy loss on this example. The model architecture is unchanged, and inference time is unaffected.
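The core idea of decoupling the update direction from the perturbation norm can be illustrated on a toy linear classifier. The sketch below is only a numpy illustration of that principle, not the authors' DDN implementation; all parameter names and values (`gamma`, `alpha`, the toy model) are made up:

```python
import numpy as np

def ddn_like_attack(w, b, x0, y, steps=100, gamma=0.05, alpha=0.3):
    """Decoupled direction-and-norm sketch on the linear classifier
    sign(w @ x + b): a gradient step sets the direction, while the
    perturbation norm eps is grown or shrunk separately."""
    is_adv = lambda x: np.sign(w @ x + b) != y
    delta = alpha * (-y) * w / np.linalg.norm(w)  # first step toward boundary
    eps = np.linalg.norm(delta)
    best = None
    for _ in range(steps):
        g = -y * w                                # gradient of the logit
        delta = delta + alpha * g / np.linalg.norm(g)
        if is_adv(x0 + delta):
            # keep the smallest adversarial perturbation seen so far
            if best is None or np.linalg.norm(delta) < np.linalg.norm(best - x0):
                best = x0 + delta
            eps *= (1 - gamma)                    # adversarial: shrink norm
        else:
            eps *= (1 + gamma)                    # clean: grow norm
        delta = delta * eps / np.linalg.norm(delta)  # project onto eps-sphere
    return best if best is not None else x0 + delta

w, b = np.array([1.0, 1.0]), 0.0
x0, y = np.array([1.0, 1.0]), 1  # a correctly classified point
x_adv = ddn_like_attack(w, b, x0, y)
```

The returned point flips the label while its distance to `x0` stays close to the true distance to the decision boundary (sqrt(2) for this toy problem), which is what makes a DDN-style attack useful inside a training loop.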

Untargeted Attack

First place: LIVIA team (code name "Jerome R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, École de technologie supérieure (ÉTS Montreal), Canada

The attack is based on a collection of surrogate models (including robust models trained with DDN, the new attack method proposed by the authors). For each surrogate model, the authors considered two attack directions: the gradient of the cross-entropy loss at the original image, and the direction produced by running the DDN attack. For each direction, they performed a binary search on the perturbation norm to find the decision boundary. They then took the best resulting adversarial example and refined it with the boundary attack from "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models".
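The binary search on the norm along a fixed direction can be sketched as follows. This is a hypothetical helper, not the team's code; `is_adv` stands in for the decision-only model:

```python
import numpy as np

def boundary_binary_search(is_adv, x0, direction, lo=0.0, hi=10.0, iters=25):
    """Binary search on the perturbation scale along a fixed direction:
    shrink [lo, hi] until hi is the smallest scale found that still
    flips the model's decision (sketch)."""
    d = direction / np.linalg.norm(direction)
    assert is_adv(x0 + hi * d), "hi must already be adversarial"
    for _ in range(iters):
        mid = (lo + hi) / 2
        if is_adv(x0 + mid * d):
            hi = mid   # still adversarial: the boundary is closer
        else:
            lo = mid   # not adversarial: the boundary is farther
    return x0 + hi * d

# toy decision-only oracle: the decision flips once x[0] exceeds 3
is_adv = lambda x: x[0] > 3.0
x_adv = boundary_binary_search(is_adv, np.zeros(2), np.array([1.0, 0.0]))
```

Each query costs one call to the black-box model, so 25 iterations locate the boundary along one direction within a ~3e-7 scale interval at a fixed, predictable query cost.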

Second place: TSAIL team (code name "csy530216" on the leaderboard)

Authors: Shuyu Cheng & Yinpeng Dong

The authors use a heuristic search algorithm, similar to the boundary attack, to iteratively improve adversarial samples. The starting point is found with a BIM attack transferred from an Adversarial Logit Pairing baseline. In each iteration, a random perturbation is sampled from a Gaussian distribution with a diagonal covariance matrix, which is updated using past successful trials to model the promising search directions. The authors restrict the perturbation to the central 40×40×3 region of the 64×64×3 image: they first generate 10×10×3 noise and then resize it to 40×40×3 using bilinear interpolation. Restricting the search space makes the algorithm more efficient.
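The restricted, upsampled perturbation sampling described above might look like this in numpy. This is a sketch with a hand-rolled bilinear resize (not the team's code, and without the adaptive covariance update):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Minimal bilinear resize for an (H, W, C) array."""
    h, w, _ = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]; wx = (xs - x0)[None, :, None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def centered_low_freq_noise(rng, img_shape=(64, 64, 3), patch=40, low=10):
    """Sample low-resolution Gaussian noise, upscale it bilinearly,
    and paste it into the central patch of a zero perturbation."""
    noise = bilinear_resize(rng.standard_normal((low, low, 3)), patch, patch)
    pert = np.zeros(img_shape)
    off = (img_shape[0] - patch) // 2
    pert[off:off + patch, off:off + patch, :] = noise
    return pert

pert = centered_low_freq_noise(np.random.default_rng(0))
```

Sampling only 10×10×3 values instead of 64×64×3 shrinks the search space by roughly a factor of 40, which is why the restriction makes the heuristic search so much more query-efficient.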

Third place: Petuum-CMU Team (code name "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

The authors integrated a variety of robust models and a variety of adversarial attack methods under different distance metrics in Foolbox to generate perturbations. They then chose the best attack as the one that minimizes the maximum distance when attacking the robust models under the different distance metrics.
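The "minimize the maximum distance" selection rule amounts to a small min-max over a table of attack results. The sketch below illustrates the rule only; the attack and model names and the numbers are made up:

```python
def pick_best_attack(dist_table):
    """Given dist_table[attack][model] = L2 distance achieved by an
    attack against a model, pick the attack whose worst-case (maximum)
    distance across the models is smallest (sketch of the rule)."""
    worst = {atk: max(per_model.values())
             for atk, per_model in dist_table.items()}
    return min(worst, key=worst.get)

dists = {
    "attack_a": {"model_1": 2.1, "model_2": 9.0},
    "attack_b": {"model_1": 1.4, "model_2": 1.9},
    "attack_c": {"model_1": 0.9, "model_2": 3.2},
}
pick_best_attack(dists)  # -> 'attack_b'
```

Optimizing the worst case across an ensemble of robust models favors attacks that transfer broadly rather than ones that exploit a single model.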

Targeted Attack

First place: Petuum-CMU Team (code name "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

The authors used Foolbox to integrate a variety of robust models and different attack methods to generate adversarial perturbations. They found that this ensemble approach makes targeted attacks more effective across different robust models.

Second place: fortiss team (code name "ttbrunner" on the leaderboard)

Authors: Thomas Brunner & Frederik Diehl, fortiss GmbH, Germany

This attack method is similar to the boundary attack, but it does not sample perturbations from a standard normal distribution. Instead, the authors use low-frequency perturbation patterns, which transfer well and which the defender cannot easily filter out. They also use the projected gradient of a surrogate model as a prior for sampling. In this way, they combine the advantages of PGD and the boundary attack: flexibility and sample efficiency.
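One way to picture such a biased proposal distribution is to blend low-frequency noise with a surrogate gradient direction. The sketch below is purely illustrative: the blending weight, names, and the use of nearest-neighbor upsampling for the low-frequency noise are my assumptions, not the authors' exact construction:

```python
import numpy as np

def biased_proposal(rng, surrogate_grad, shape=(64, 64, 3), low=8, bias=0.5):
    """Proposal sketch: blend upsampled low-frequency Gaussian noise
    with a surrogate model's gradient direction; `bias` trades off
    the prior against pure exploration (all names illustrative)."""
    # low-frequency noise: sample at low resolution, upsample by
    # block repetition (nearest-neighbor) via a Kronecker product
    noise = np.kron(rng.standard_normal((low, low, 3)),
                    np.ones((shape[0] // low, shape[1] // low, 1)))
    noise /= np.linalg.norm(noise)
    grad = surrogate_grad / np.linalg.norm(surrogate_grad)
    prop = bias * grad + (1 - bias) * noise
    return prop / np.linalg.norm(prop)

rng = np.random.default_rng(0)
proposal = biased_proposal(rng, surrogate_grad=np.ones((64, 64, 3)))
```

With `bias` near 1 the method behaves like transfer-based PGD; with `bias` near 0 it degrades gracefully to a low-frequency boundary attack, which is the combination of strengths described above.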

Third place: LIVIA team (code name "Jerome R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, École de technologie supérieure (ÉTS Montreal), Canada
