
Talk:Adversarial machine learning

From Wikipedia, the free encyclopedia

Citation overkill


I removed a slew of references from the first sentence because it's not a controversial statement. This broke some references by name, which some bot will surely come and fix in a short while. More generally, I think this article has too many references per statement, and could do with some trimming so that only the best/most cited (pick any one) remain. QVVERTYVS (hm?) 09:23, 15 January 2015 (UTC)[reply]

The first sentence makes it clear that this article is about the security applications of adversarial machine learning, not adversarial machine learning itself. This article should be linked from an article devoted exclusively to adversarial machine learning. Adversarial machine learning tunes its learning toward worst-case rather than average-case performance; it optimizes relative to the minimax value of a game. I've only read one story about adversarial machine learning, but this article does not tell me anything about it outside of security applications. I am not an expert on this at all, but adversarial machine learning has many applications beyond security. Dave44000 (talk) 12:08, 17 October 2016 (UTC)[reply]
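(For reference, the minimax formulation mentioned above is usually written as the saddle-point problem <math>\min_\theta \, \mathbb{E}_{(x,y)} \left[ \max_{\|\delta\| \le \epsilon} L\left(f_\theta(x+\delta),\, y\right) \right]</math>, i.e. the learner minimizes the loss under the worst-case perturbation an adversary can choose inside an <math>\epsilon</math>-ball, rather than the average-case loss. This is one standard formalization, not necessarily the one the original poster had in mind.)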

My wording edits


I am not knowledgeable in this field, but I just made a few small edits that, I hope, clear up some confusion. I'm still left with many parts of this article that are hard to follow, and where I don't feel confident enough to make a change. For example, if Google mangled a picture of a dog so that *both* humans and computer vision systems misclassified it, what does that have to do with adversarial machine learning? It sounds more like significant image distortion. How does denial of service "increase the wrong classification rate" (taxonomy section)? What is "Snort" (referred to in the "attacks against clustering algorithms" section)? "If clustering can be safely adopted in such settings, this remains questionable": what does "this" refer to (same section)? What is a "ladder algorithm" or a "Kaggle-style competition"? Hyperlinks, or at least references to outside discussion, are needed here. — Preceding unsigned comment added by Mcswell (talk · contribs) 04:30, 1 June 2019 (UTC)[reply]

And your point?


This sentence doesn't appear obviously relevant: "Clustering algorithms are used in security applications. Malware and computer virus analysis aims to identify malware families, and to generate specific detection signatures." If it is relevant, can someone make the reasoning explicit, beyond the fact that clustering is a machine learning method? Mcswell (talk) 21:29, 25 February 2021 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Adversarial machine learning. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 09:38, 27 June 2017 (UTC)[reply]

Expanding the definition of Adversarial Machine Learning


This wiki-document is a nice summary of the main referenced paper, but the definitions are incorrect. "Adversarial" is a generic term that does not apply only in the context of malicious behaviours. In the context of machine learning, adversarial can mean competition between learning systems in order to accelerate and augment the learning process. This is exactly the case for Generative Adversarial Networks. I realize the article makes some claims regarding the term; nevertheless, applying the term in such a narrow way is incorrect.

Alternatively, adversarial machine learning applies to the more general idea of coordinating multiple systems that have conflicting goals in order to train one or all of them in some optimized way. With this definition, we can categorize work on self-play game intelligence as adversarial machine learning.
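(To make the conflicting-goals idea concrete: the GANs mentioned above are trained via the two-player saddle-point objective <math>\min_G \max_D \; \mathbb{E}_{x \sim p_\text{data}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]</math>, in which the generator <math>G</math> and the discriminator <math>D</math> are literally adversaries of one another, with no attacker or security setting involved.)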

Here is an example: https://subscription.packtpub.com/book/game_development/9781789138139/5/ch05lvl1sec38/adversarial-self-play

The article should be expanded to take other adversarial machine learning ideas into account, and the definition updated accordingly.

Bruce.matichuk (talk) 18:55, 25 June 2019 (UTC)Bruce.matichuk[reply]

Could rename the article "Adversarial attack and defense". Olli Niemitalo (talk) 09:40, 13 July 2019 (UTC)[reply]
@Olli Niemitalo and Bruce.matichuk: I believe that's really narrow, but what about "Adversarial attacks in machine learning"? It's broad enough to keep the definitions well within scope, and it's clear to anyone new. I think that modifying the entire article to encompass Generative Adversarial Networks would be overkill at this point. Tell me what you think :) agucova (talk) 20:52, 13 July 2019 (UTC)[reply]
Yes, that title sounds very clear. Olli Niemitalo (talk) 11:04, 14 July 2019 (UTC)[reply]
That wouldn't help; it would still be ambiguous. By a layman's English definition, GANs are also "adversarial attacks" in that one part of the system is attacking another. Rolf H Nelson (talk) 18:16, 14 July 2019 (UTC)[reply]
Strongly oppose expanding the scope, we already have an article for generative adversarial networks. Adversarial self-play and GANs are a different topic from hardening systems against adversarial attacks masterminded by human beings. Rolf H Nelson (talk) 18:16, 14 July 2019 (UTC)[reply]
As far as renaming, there's no perfect term. The vast majority of the research community, and the media, use "adversarial machine learning" to refer specifically to attacks from "sophisticated" or "malicious" (i.e. human) parties. If there's a strong source presenting an alternate term, we can consider it. Rolf H Nelson (talk) 18:16, 14 July 2019 (UTC)[reply]
Well, I'm disappointed to learn this, because I am developing adversarial learning methods that are not based on networks. Adversarial Machine Learning seemed like the right article.
I've never heard of an "adversarial attack". Would that just be an "attack"? The article ought to match the terminology used in existing fields. The field is called cybersecurity, and in all the varieties listed under both Cybersecurity and Vulnerability, nobody thought to add a redundant "adversarial" to the name. I am highly skeptical that "adversarial machine learning" more often refers to the security aspects of machine learning than to adversarial learning methods.
For some of the sources, what is called "adversarial machine learning" should instead be called "adversarial machine teaching".
After looking at source [2] ("Adversarial Machine Learning-Industry Perspectives", 2020), the three corporate websites the authors cite when first introducing "Adversarial Machine Learning" are:
[1] Google, which says "Machine learning security" but not "adversarial machine learning".
[2] Microsoft, which says "Securing machine learning"; "adversarial" appears only in a reference, "Attacking machine learning with adversarial examples", and that reference only discusses adversarial examples.
[3] IBM, which does use "Adversarial Machine Learning". Apparently that's where this terrible misuse came from. IBM uses it to refer to tampering with training data, whereas the Microsoft reference mentioned above is concerned with inputs to a pretrained network, so the corporate websites are not even talking about the same things, yet the authors of source [2] put both topics under IBM's phrase. We really ought to be citing IBM's website instead of source [2], but does a company's website count as a valid source? 2406:3100:1018:2:5A:0:1299:3D35 (talk) 07:59, 16 March 2023 (UTC)[reply]

Some work needed


Hi, I want to propose some edits to this article, since it's a bit weak as it currently stands. Compare this page to something like robust statistics, and it should be relatively clear what I mean. Here are some changes I suggest:

  • Include motivation for learning robust predictors (e.g. spam filtering is good, but also add self-driving cars, recent news article citations, etc.)
  • Rewrite the history section to include a broad summary of the field's origins (i.e. adversarial/robust statistics, robust optimization, etc.)
  • Include the canonical definitions of adversarial robustness in the context of robust statistics (error of learning/discriminating distributions from corrupted samples)
  • Include more recent definitions/measures of adversarial robustness in the context of machine learning (epsilon robustness)
  • Include desirable properties of adversarially robust predictors (e.g. smoothness/Lipschitzness; might be nice to give the 3-line proof of this; a sketch follows this list)
  • Include historically significant methods for learning adversarially robust predictors (e.g. adversarial training & randomized smoothing) & maybe mention the fundamental challenge of learning robustly in the context of deep learning
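For reference, here is a minimal version of the Lipschitzness argument requested above (a generic margin-based sketch, not any particular paper's statement): suppose each logit of <math>f</math> is <math>K</math>-Lipschitz and that, at input <math>x</math>, the predicted class <math>i</math> beats the runner-up <math>j</math> by a margin <math>m = f_i(x) - f_j(x) > 0</math>. Then for any perturbation with <math>\|\delta\| \le m/(2K)</math>,

<math>f_i(x+\delta) - f_j(x+\delta) \ge \bigl(f_i(x) - K\|\delta\|\bigr) - \bigl(f_j(x) + K\|\delta\|\bigr) = m - 2K\|\delta\| \ge 0,</math>

so the prediction cannot flip: smoothness plus a margin yields a certified radius of <math>m/(2K)</math>.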

Let me know your thoughts! Sebastian-JF (talk) 18:06, 12 August 2020 (UTC)[reply]

2013 or 2014?


Citations needed for: "until 2013 researchers are still hoping that non-linear classifier is free of adversarial attacks".

It is also wrong that the first adversarial example for a DNN was discovered in 2014. — Preceding unsigned comment added by 24.80.47.80 (talk) 16:20, 30 September 2020 (UTC)[reply]

Request to include bullet on certified defenses against adversarial examples


As robust machine learning is an active area of research, the section on defenses is quite outdated. Specifically, I found there was no discussion of certified defenses against adversarial examples. I have suggested the following edits, but in the spirit of obstinately following procedures, I'm submitting them here to be reviewed. This is merely a start to modernize the section on defenses. Ultimately I agree with the comment of Sebastian-JF above that definitions of robustness to adversarial inputs, desirable properties, and families of approaches to verifying these properties would be in order.

  • Specific text to be added or removed (below the bullet point for "Adversarial training" under "Defenses"): * Robustness certification/certified training: while adversarial training can increase a model's resistance to adversarial examples, it provides no guarantee that the resulting model will be invulnerable to adaptive attacks.[1] To deal with this, some approaches provide certificates of local robustness, which formally guarantee that no adversarial examples exist within an ε-ball around the point under consideration.[2][3][4] While certification can be applied post hoc to a pre-trained model, this is computationally intractable for large models; thus, state-of-the-art certification methods employ specialized training procedures that produce models which can be efficiently certified against adversarial examples.[5] Typically, these methods conservatively over-approximate the adversary during training. In order to be computationally efficient, these over-approximations may be very loose on typical networks; however, when incorporated into training, it is possible to learn networks on which they are tight enough for practical purposes.[6] (A toy illustration of the certification idea follows this list.)
  • Reason for the change: As mentioned, there is a large volume of work on deterministically provable defenses against adversarial examples that is completely missed in the article.
  • References supporting change: See below.
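To make the certification idea in the first bullet concrete, here is a minimal, self-contained sketch of interval bound propagation, one common way to conservatively over-approximate the adversary (the toy network, weights, and function names are invented for illustration and are not taken from any of the cited papers):

<syntaxhighlight lang="python">
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through the affine map x -> W @ x + b.
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread per output coordinate
    return new_center - new_radius, new_center + new_radius

def certify(x, eps, layers, true_class):
    # Sound but incomplete check: True means no adversarial example exists in
    # the l-infinity ball of radius eps; False only means "could not certify".
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU is monotone, so it maps bounds to bounds
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    worst_other = max(hi[c] for c in range(len(hi)) if c != true_class)
    return lo[true_class] > worst_other

# Toy 2-input, 3-hidden-unit, 2-class ReLU network with made-up weights.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 2)), np.zeros(3)),
          (rng.normal(size=(2, 3)), np.zeros(2))]
x = np.array([1.0, -0.5])
print(certify(x, eps=0.05, layers=layers, true_class=0))
</syntaxhighlight>

If the lower bound on the true class's logit exceeds every other class's upper bound, no point in the ε-ball can change the prediction; the converse does not hold, because interval bounds are deliberately loose. That looseness is exactly what the proposed text says certified training is designed to tighten.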

DrKlas (talk) 00:38, 11 February 2023 (UTC)[reply]

References

  1. ^ Cite error: The named reference :5 was invoked but never defined (see the help page).
  2. ^ Jordan, Matt; Lewis, Justin; Dimakis, Alexandros (2019). "Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes". Advances in Neural Information Processing Systems (NeurIPS). arXiv:1903.08778.
  3. ^ Tjeng, Vincent; Xiao, Kai; Tedrake, Russ (2017). "Evaluating Robustness of Neural Networks with Mixed Integer Programming". arXiv:1711.07356.
  4. ^ Fromherz, Aymeric; Leino, Klas; Fredrikson, Matt; Parno, Bryan; Păsăreanu, Corina (2021). "Fast Geometric Projections for Local Robustness Certification". International Conference on Learning Representations (ICLR). arXiv:2002.04742.
  5. ^ Li, Linyi; Qi, Xiangyu; Xie, Tao; Li, Bo (2021). "SoK: Certified Robustness for Deep Neural Networks". arXiv:2009.04131.
  6. ^ Leino, Klas; Wang, Zifan; Fredrikson, Matt (2021). "Globally Robust Neural Networks". International Conference on Machine Learning (ICML). arXiv:2102.08452.

Wiki Education assignment: Linguistics in the Digital Age


This article was the subject of a Wiki Education Foundation-supported course assignment, between 21 August 2023 and 11 December 2023. Further details are available on the course page. Student editor(s): Jolsen1022 (article contribs).

— Assignment last updated by Fedfed2 (talk) 00:53, 9 December 2023 (UTC)[reply]