Upcoming Computer Science Research Seminar: “Vulnerability Relationship between Feature Vector Scale and Convolutional Neural Networks with Adversarial Examples”

On Thursday, October 21 at 9 am EDT, join the Department of Computer Science at BU MET for its next research seminar with Dr. Sang-Woong Lee, Professor at Gachon University. Entitled “Vulnerability Relationship between Feature Vector Scale and Convolutional Neural Networks with Adversarial Examples,” this virtual seminar will be moderated by Dr. Reza Rawassizadeh, Associate Professor of Computer Science.

The abstract for “Vulnerability Relationship between Feature Vector Scale and Convolutional Neural Networks with Adversarial Examples” is as follows:

In the field of image classification, deep convolutional neural networks (DCNNs) can be made to misclassify images by perturbation noise. Images artificially crafted to cause such misclassification are called adversarial examples. There are various conjectures about why DCNNs are vulnerable to noise such as adversarial examples. We hypothesize that DCNNs are vulnerable to noise because of ‘unfair data learning’, and further assume that unfair data learning causes feature vectors to be learned at different scales in feature space. Thus, the trained data will exhibit different vulnerabilities to noise depending on the scale of its feature vector. We use a DCNN and the CIFAR-10 dataset to conduct vulnerability tests for each scale section of the feature vectors. The vulnerability experiments compare the cosine similarity between the feature vectors of original and noisy images and observe the error rate by scale section. Experimental results showed high sensitivity in the small-scale section, with low cosine similarity and a high error rate. In the large-scale section, by contrast, high cosine similarity and a low error rate indicate robustness to noise.
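The core measurement described in the abstract can be illustrated with a minimal NumPy sketch. This is not the speakers’ experimental code: it substitutes random synthetic vectors for real DCNN features of CIFAR-10 images, and simply shows why, under a fixed-magnitude perturbation, small-scale feature vectors tend to yield lower cosine similarity than large-scale ones.

```python
import numpy as np

# Hypothetical stand-ins for DCNN feature vectors (not real CIFAR-10 features):
# 1000 random 64-dimensional vectors with deliberately varied scales (L2 norms).
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 64)) * rng.uniform(0.5, 5.0, size=(1000, 1))

# Fixed-magnitude additive noise, modeling a perturbation of the input image.
noisy = feats + rng.normal(scale=0.5, size=feats.shape)

def cosine_similarity(a, b):
    """Row-wise cosine similarity between two batches of vectors."""
    return np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )

scales = np.linalg.norm(feats, axis=1)          # scale of each feature vector
sims = cosine_similarity(feats, noisy)          # similarity after perturbation

# Partition into scale sections (quartiles) and report mean similarity per section.
edges = np.quantile(scales, [0.0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (scales >= lo) & (scales <= hi)
    print(f"scale section [{lo:5.2f}, {hi:5.2f}]: "
          f"mean cosine similarity = {sims[mask].mean():.3f}")
```

Because the noise magnitude is fixed, the same perturbation rotates a short feature vector through a larger angle than a long one, so the small-scale section shows lower cosine similarity, mirroring the sensitivity the abstract reports for that section.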

>> Learn more and register.

Archived seminars from this series can be found on the Department of Computer Science events web page.
