Using a Deep Convolutional Auto-encoder to Mitigate Adversarial Attacks on Digital Images


Putu Widiarsa Kurniawan S, Yosi Kristian, Joan Santoso

Abstract

Adversarial attacks on digital images pose a serious threat to the use of machine learning in everyday applications. The Fast Gradient Sign Method (FGSM) has proven effective at attacking machine learning models, including classifiers of digital images from the ImageNet dataset. This study addresses that problem by applying a Deep Convolutional Auto-encoder (AE) as a mitigation technique against adversarial attacks on digital images. The results show that the FGSM attack succeeds on most digital images, although some images are more resistant to it, and that the AE mitigation effectively reduces the impact of adversarial attacks on the majority of images. Model accuracy under attack and after mitigation was 14.58% and 91.67%, respectively.
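The FGSM attack summarized above perturbs each input pixel by a small step in the direction of the sign of the loss gradient. A minimal NumPy sketch on a toy logistic-regression "classifier" (illustrative only; the model, weights, and epsilon here are assumptions, not the study's ImageNet/auto-encoder setup, but for this linear model the gradient has a closed form so no autodiff library is needed):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": logistic regression over a flattened 8x8 image.
w = rng.normal(size=64)
b = 0.0

def predict(x):
    return sigmoid(x @ w + b)

def fgsm(x, y_true, eps):
    """x_adv = x + eps * sign(dL/dx) for binary cross-entropy loss.

    For logistic regression, dL/dx = (p - y) * w, so the gradient
    is available in closed form.
    """
    p = predict(x)
    grad_x = (p - y_true) * w
    # Keep the perturbed image in the valid pixel range [0, 1].
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

x = rng.uniform(0.0, 1.0, size=64)   # a clean "image" in [0, 1]
y = 1.0                              # its true label
x_adv = fgsm(x, y, eps=0.1)

# The perturbation is bounded by eps in the L-infinity norm, and the
# model's confidence in the true class drops after the attack.
print(np.max(np.abs(x_adv - x)))
print(predict(x), predict(x_adv))
```

The same one-step structure carries over to deep models, where `grad_x` is obtained by backpropagation; the mitigation side of the study then passes `x_adv` through a trained auto-encoder to suppress the perturbation before classification.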


Section
Articles