How to Apply Filters to Images Using Python and OpenCV
To begin with, we need to understand that images are essentially matrices of numbers. In an 8-bit image, each value lies in the 0-255 range and corresponds to the brightness of one pixel; the exact range depends on the image's bit depth, which commonly runs from 8-bit up to 30-bit. The 24-bit format is commonly referred to as "true color" because it combines three 8-bit channels, one for each of the R, G, and B components.
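As a quick illustration of this structure, here is a minimal NumPy sketch (the pixel values are arbitrary, chosen only to show the layout):

```python
import numpy as np

# A tiny 2x2 grayscale "image": one 8-bit value (0-255) per pixel,
# where 0 is black and 255 is white.
gray = np.array([[0, 64],
                 [128, 255]], dtype=np.uint8)
print(gray.shape, gray.dtype)  # (2, 2) uint8

# A 24-bit "true-color" image stacks three 8-bit channels per pixel,
# one per color component.
color = np.zeros((2, 2, 3), dtype=np.uint8)
color[0, 0] = (255, 0, 0)
print(color.shape)  # (2, 2, 3)
```

Note that OpenCV stores color channels in BGR order rather than RGB when it loads an image.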
Now that we have covered how images are structured, we can move on to the concept of the "kernel", which is the fundamental structure behind image filtering. A kernel is a small matrix that slides over the whole image and applies a mathematical operation to the pixels it currently covers. This operation is called "2D convolution", and it works as follows:
1. Start by picking a kernel size; 3x3 pixels is a common choice. The kernel size changes how localized the extracted features are: with a large kernel, features are more likely to be global rather than local. Larger kernels are also less computationally efficient, since they require more individual calculations per pixel. On the other hand, larger kernels tend to reduce noise better, although they may introduce artifacts. Finding a balanced kernel size is therefore quite important when filtering an image.
2. Next, fill the kernel with filter-specific values; for example, the 3x3 Gaussian blur kernel uses the weights [[1, 2, 1], [2, 4, 2], [1, 2, 1]], normalized by their sum. These values determine the filter's behavior, and they can be chosen for many purposes such as blurring, sharpening, edge detection, and so on.
3. Once the filter values are decided, place the kernel over the top-left pixel of the image, so that the center of the kernel corresponds to that pixel. Multiply each kernel value by its corresponding image pixel and sum the results. To preserve the original brightness, the result should be normalized: for an averaging filter, divide the sum by the number of elements in the kernel (equivalently, use a kernel whose entries sum to 1).
4. Finally, repeat the previous step across the image from the top-left pixel to the bottom-right, row by row, writing the results into a new image. It is important not to overwrite the input image, because already-filtered pixels would otherwise feed into the following operations.
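The steps above can be sketched directly in NumPy. This is a minimal, naive version (the function name is illustrative, and border handling is skipped, so the output is slightly smaller than the input):

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution: slide the kernel over the image,
    multiply element-wise, and sum (steps 3 and 4 above)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    # Write into a fresh array so earlier results never feed later ones (step 4).
    out = np.zeros((ih - kh + 1, iw - kw + 1), dtype=np.float32)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            region = image[y:y + kh, x:x + kw]
            out[y, x] = np.sum(region * kernel)
    return out

# A 3x3 mean-blur kernel, normalized so its entries sum to 1 (step 3).
kernel = np.ones((3, 3), dtype=np.float32) / 9
image = np.arange(25, dtype=np.float32).reshape(5, 5)
print(convolve2d(image, kernel))
```

OpenCV's filter2D does the same thing, but pads the image borders so the output keeps the input's size, and runs far faster than this Python loop.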
As the process above shows, although the algorithm looks simple, implementing it manually is rather time-consuming. It is therefore easier to use OpenCV, as shown below.
import cv2
import numpy as np

img = cv2.imread("HeliView.jpg")
img = cv2.resize(img, (0, 0), None, .25, .25)

# Each kernel is normalized so that its entries sum to 1,
# which preserves the overall brightness of the image.
gaussianBlurKernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 16
sharpenKernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)
meanBlurKernel = np.ones((3, 3), np.float32) / 9

gaussianBlur = cv2.filter2D(src=img, kernel=gaussianBlurKernel, ddepth=-1)
meanBlur = cv2.filter2D(src=img, kernel=meanBlurKernel, ddepth=-1)
sharpen = cv2.filter2D(src=img, kernel=sharpenKernel, ddepth=-1)

horizontalStack = np.concatenate((img, gaussianBlur, meanBlur, sharpen), axis=1)

cv2.imwrite("Output.jpg", horizontalStack)
cv2.imshow("2D Convolution Example", horizontalStack)
cv2.waitKey(0)
cv2.destroyAllWindows()
In the code, we first import the required libraries, OpenCV and NumPy. We then read the image from the project directory and resize it to a quarter of its original size, so that several instances can be compared side by side. In this case we use .jpg as our image format; more information about image formats can be found here. Next, we define the aforementioned kernels; here we use three, but the number of examples could be increased. Notice that each kernel is normalized so that its entries sum to 1 (for example, the 3x3 Gaussian kernel is divided by 16, the sum of its weights), which preserves the original brightness. We then apply each filter to the image with OpenCV's filter2D method, producing one variation per filter. After obtaining the images, we combine them horizontally using NumPy's concatenate function, and last but not least we save the result in the working directory. The result is the following:
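A quick standalone check of this normalization rule (a sketch; the kernel values are the standard 3x3 forms):

```python
import numpy as np

# A filter preserves overall brightness when its kernel entries sum to 1:
# - the 3x3 Gaussian weights sum to 16, so they are divided by 16;
# - the 3x3 mean kernel has nine equal entries, hence the division by 9;
# - the classic sharpen kernel already sums to 1 and needs no division.
gaussian = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 16
mean = np.ones((3, 3), np.float32) / 9
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)

for name, k in (("gaussian", gaussian), ("mean", mean), ("sharpen", sharpen)):
    print(name, float(k.sum()))  # each sum is 1, up to float rounding
```

A kernel whose entries sum to more (or less) than 1 would uniformly brighten (or darken) the output.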
From left to right, the images correspond to the original, the Gaussian blur, the mean blur, and the sharpen filter.
From this outcome, the effect of changing the kernel can be observed. The mean blur does not favor any pixel; it produces an evenly spread blur across the whole image. The Gaussian blur, on the other hand, weights the central pixels of each neighborhood more heavily, which can be seen by comparing the cockpit and the sky. Be sure to run the code locally and observe the changes yourself to gain a better understanding of the subject. It is highly recommended to try various kernels, especially ones for horizontal and vertical edge detection.
To conclude, the concept behind these filters is really interesting, but one important thing to mention is their practical usage. When the phrase "image filter" comes up, many people think of the filters seen in photography, mobile apps, or photo-editing software. Those filters are actually combinations of the basic filters covered in this post, plus many more, layered to create the desired effect.
Although those filters and programs are useful in many cases, they still lack one important feature: practicality. It is not easy to apply an effect to an image without installing a dedicated program or writing your own code, as we did here, which is obviously not a feasible option for many people. Starting from this idea, we incorporated the most commonly used image-processing functionalities, such as blur and sharpen, into image4io. Users can modify and share images anywhere, without compromises, by using a simple URL.