Developing an Algorithm to Delete Near-Duplicate Images in Hadoop
Keywords:
Abstract
Near-duplicate images are copies of an image that have been degraded by noise, compressed, reduced in resolution during transmission, or altered by other digital image-processing operations. An ideal storage system optimizes storage space by managing, structuring, and organizing data efficiently, so that space is reserved for valuable, useful information and useless data is discarded. The space occupied by insignificant data is called wasted space; it grows as such files accumulate, making it harder to manage storage and organize data and degrading overall system performance. Hadoop is used to store and process big data and relies on distributed storage: data is divided into parts (blocks), and these blocks are spread across machines called DataNodes. Researchers have developed techniques that remove duplicate data fragments to save storage space in Hadoop, but each node may still contain unimportant near-duplicate files occupying part of this space. In this research, we therefore present a technique for deleting near-duplicate images stored within the DataNodes, using the Discrete Cosine Transform (DCT).
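To make the detection step concrete, the sketch below shows a DCT-based perceptual hash in the style of pHash, a common way to compare near-duplicate images. The function names, the 8x8 coefficient block, and the bit-distance threshold are illustrative assumptions, not parameters taken from this research.

```python
import numpy as np
from PIL import Image
from scipy.fftpack import dct

def dct_phash(path, hash_size=8, highfreq_factor=4):
    """Compute a 64-bit DCT perceptual hash of the image at `path`."""
    # Shrink to a small grayscale square (32x32 by default) to discard detail.
    size = hash_size * highfreq_factor
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = np.asarray(img, dtype=np.float64)
    # 2-D DCT: apply a 1-D DCT along the rows, then along the columns.
    coeffs = dct(dct(pixels, axis=0, norm="ortho"), axis=1, norm="ortho")
    # Keep the top-left hash_size x hash_size block of low-frequency
    # coefficients, which carry the image's coarse perceptual content.
    low = coeffs[:hash_size, :hash_size]
    # Threshold against the median to get a compact binary fingerprint.
    return (low > np.median(low)).flatten()

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))

# Images whose hashes differ in few bits are near-duplicate candidates;
# the threshold (e.g. 10 of 64 bits) is an assumption that must be tuned.
# if hamming_distance(dct_phash("a.jpg"), dct_phash("b.jpg")) <= 10:
#     ...  # mark the second image for deletion from the DataNode
```

Because the low-frequency DCT coefficients change little under noise, compression, or rescaling, two near-duplicate images yield hashes that differ in only a few bits, which is what makes this kind of fingerprint suitable for the deletion step described above.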