![computer vision](https://blogs.gwu.edu/aparna-shankar/files/2024/02/download-3e5996c8860bfa21.jpg)
In the evolving landscape of image classification, understanding how classifiers respond to image corruptions and potential adversarial attacks is paramount. Adversarial attacks seek to trick classifiers into misclassifying images through carefully crafted perturbations: intentional distortions that are subtle to the human eye yet capable of undermining the accuracy of even the most robust models. This blog post explores image corruption and its impact on classification results.
Using blur as a corruption method provides an interesting perspective on the impact of image degradation on classification accuracy. Why blur? Blur, in its various forms, mimics real-world distortions encountered in images due to factors like motion, environmental conditions, or lens imperfections. By progressively increasing the blur levels in a controlled manner, we sought to reveal the classifier's response to images that are gradually losing their clarity.
Now, for the classifier at hand, I used a pre-trained Inception-V3 model trained on a dataset of six distinct categories of nature images: Buildings, Streets, Forests, Mountains, Glaciers, and Seas. This model achieved an accuracy of 84%. To make the experiments smoother, I saved the trained model as a .h5 file, which lets us put the classifier through its paces on new image inputs. The intrigue, however, lies in trying to challenge and ultimately break this seemingly robust classifier using stack blur effects. The basic idea behind a stack blur is to average the color values of neighboring pixels, weighting nearby pixels more heavily than distant ones, which produces a smooth, Gaussian-like blurring effect.
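To illustrate the setup, here is a minimal sketch of loading the saved model and classifying a single image. The file name `nature_classifier.h5`, the sample image `forest_sample.jpg`, the class order, and the 299×299 input size are all assumptions made for illustration, not the exact values used in this experiment.

```python
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

# Hypothetical class order and file names -- adjust to match the actual training setup.
CLASS_NAMES = ["Buildings", "Streets", "Forests", "Mountains", "Glaciers", "Seas"]

model = load_model("nature_classifier.h5")  # restore the saved Inception-V3 classifier

def classify(path, target_size=(299, 299)):
    """Load an image, resize it to the model's input size, and return per-class scores."""
    img = image.load_img(path, target_size=target_size)
    x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)  # shape (1, H, W, 3)
    probs = model.predict(x, verbose=0)[0]
    return dict(zip(CLASS_NAMES, probs))

print(classify("forest_sample.jpg"))
```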
To experiment with corruption levels, I applied the stack blur effect with radii increasing from 0 to 50 to an image depicting a forest-like area. The images below show how the picture gracefully degrades across the different blur levels, mimicking the challenges posed by real-world scenarios; they were generated along the lines of the sketch that follows.
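Here is a hedged sketch of the corruption step. Pillow has no built-in stack blur, so `ImageFilter.BoxBlur`, which likewise averages neighboring pixel values, is used as a stand-in; the file names are illustrative placeholders.

```python
from PIL import Image, ImageFilter

original = Image.open("forest_sample.jpg")  # hypothetical forest image

# Apply progressively stronger blur; radius 0 leaves the image untouched.
for radius in range(0, 51, 10):
    blurred = original.filter(ImageFilter.BoxBlur(radius))
    blurred.save(f"forest_blur_{radius:02d}.jpg")  # save each level for inspection
```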
The classifier's ability to make accurate predictions diminishes with higher blur levels, mirroring real-world scenarios where image clarity is compromised. As the blur radius increases, the model struggles to extract relevant features, which, I think, is what makes it prone to misclassifying the image. In this experiment, as the image underwent escalating levels of corruption, the classifier's confidence that it belonged to the "forest" category dropped to a mere 2.9%. Caught in the web of these distortions, the classifier fell victim to misclassification. To visualize this, here is a plot of classification accuracy against the amount of corruption, which shows a steadily decreasing curve as the intensity of the blur increases.
![accuracy Vs amount of corruption plot](https://blogs.gwu.edu/aparna-shankar/files/2024/02/graph-74fc6e0cb9f025e9-1024x506.png)
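For reference, here is a rough, self-contained sketch of how such a curve could be produced, again treating the file names, class order, and input size as illustrative assumptions and using BoxBlur as the stand-in for stack blur.

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image, ImageFilter
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image as keras_image

CLASS_NAMES = ["Buildings", "Streets", "Forests", "Mountains", "Glaciers", "Seas"]  # assumed order
model = load_model("nature_classifier.h5")  # hypothetical saved classifier

original = Image.open("forest_sample.jpg").convert("RGB")
radii = list(range(0, 51, 5))
forest_scores = []

for radius in radii:
    blurred = original.filter(ImageFilter.BoxBlur(radius)).resize((299, 299))
    x = np.expand_dims(keras_image.img_to_array(blurred) / 255.0, axis=0)
    probs = model.predict(x, verbose=0)[0]
    forest_scores.append(probs[CLASS_NAMES.index("Forests")])  # confidence in the forest class

# Plot how the forest-class confidence decays as the blur radius grows.
plt.plot(radii, forest_scores, marker="o")
plt.xlabel("Blur radius (amount of corruption)")
plt.ylabel('Predicted "forest" probability')
plt.title("Classifier confidence vs. corruption level")
plt.show()
```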
Our exploration of the impact of blur underscores the vulnerabilities classifiers face when confronted with manipulated visual input. The world of image classification keeps changing, urging us to explore further, learn more, and strengthen our defenses against various distortions. As we uncover these illusions and nuances, the need for robust and resilient image classifiers becomes ever clearer.