Holistic Analytics of Digital Artifacts: Unique Metadata Association Model


Ashok Kumar Mohan, Sethumadhavan Madathil, Lakshmy K. V.
Copyright: © 2021 | Pages: 23
DOI: 10.4018/IJDCF.20210901.oa5

Abstract

Investigating any crime scene that involves digital evidence requires identifying the atomic files behind the scenes, including those that have been intentionally scrubbed out. The volume of data generated across digital devices and the diversity of the technologies involved further slow down traditional digital forensic investigation strategies. Dynamically scrutinizing the concealed or sparse metadata matches in the less frequent archives of evidence spread across heterogeneous sources, and finding their associations with other artifacts in the collection, remains an arduous task for investigators. Through its unique pockets (UP), unique groups (UG), and unique association (UA) model, this article addresses the distinctive challenges involved in identifying latent associations that are buried within meager metadata field-value pairs. Both the existing similarity models and the proposed unique mapping models are verified by the unique metadata association model.
Article Preview

Introduction

Metadata, in general, is "data about data": in principle, a unique set of attributes (data) that describes distinctive properties of the object (data) it accompanies at all times. A digital forensic investigation views the same definition as "evidence about evidence", a set of clues (evidence) about an object of digital archaeological interest (evidence), as noted in the digital forensic research of Raghavan, S. (2013). Filtering over metadata connects the missing dots, allowing an investigator to locate a specific suspect document and prove its origin by reconstructing the timeline in a forensically sound manner. Most metadata is piggybacked onto the context file and records information such as the file name, file size, file extension, and modified, accessed, and created (MAC) timings. For a digital forensic investigator, metadata is a unique way to learn something, or everything, about what surrounds the actual data. It can be visualized as a cover layer closely surrounding a piece of evidence, completely or partially, at all times, so that the forensic analyst has a better idea of what the evidence is about and the potential clues it reveals to support the investigator's hypothesis. Everything from a unique name, information on how data is combined, when and by whom the data was created and reproduced, lists of web pages visited, and even network packets and system logs can be classified as metadata. Balasubramanian, V., Doraisamy, S. G., et al. (2016) examine the ever-growing body of lecture videos and propose a multimodal metadata extraction system based on Naive Bayes and rule-based classification over keyphrases and topic-based segments of the video files.
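
As a concrete illustration of the piggybacked file-system metadata described above, the following minimal Python sketch reads the name, size, extension, and MAC timestamps of a single artifact. The artifact path is purely hypothetical, and the snippet illustrates only the general idea of file-system metadata, not any part of the proposed model.

import os
from datetime import datetime, timezone
from pathlib import Path

def file_metadata(path: str) -> dict:
    """Collect the basic metadata the file system piggybacks onto a file."""
    st = os.stat(path)

    def to_utc(ts: float) -> str:
        return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

    return {
        "file_name": Path(path).name,
        "file_extension": Path(path).suffix.lower(),
        "file_size_bytes": st.st_size,
        "modified": to_utc(st.st_mtime),  # M of MAC
        "accessed": to_utc(st.st_atime),  # A of MAC
        "created": to_utc(st.st_ctime),   # C of MAC (creation time on Windows, metadata-change time on POSIX)
    }

if __name__ == "__main__":
    # "evidence/report.docx" is a hypothetical artifact path used for illustration.
    print(file_metadata("evidence/report.docx"))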

The primary purpose of metadata is to sort a huge library of objects, index them for easy access, support bug fixing, and version them for tracking. A supplementary task of any standard library model, particularly with respect to metadata, is to help the investigator find the actual information they are looking for. It therefore makes sense for evidential data to be associated through compelling relationships established via unique metadata matches; such classification of metadata not only makes the investigator's job easier but also provides a sound justification for the resulting algorithm. Traditional file-system metadata, as portrayed by Daniel, L., & Daniel, L. (2012), covers the broad categories of the more common types of metadata: it holds timestamps, with their associated time zones, accumulated by the operating system and rendered chronologically whenever an artifact or file is produced, accessed, or modified. In the present-day NTFS file system, as explained by Casey, E. (2009), the metadata created by the file system resides within its traditional indexing data structure, the Master File Table ($MFT). Compared with traditional FAT-based file systems, NTFS metadata comprises several complementary pieces of information, such as the origin of the file, its current active status (on disk or in the trash), and its access control permissions. Present-day big data technologies such as Hadoop and Cassandra already include a built-in backup node, as described by Krishnan, K. (2013), which holds an exact copy of the majority of the file-system metadata. About one-third of the files collected from annual snapshots of Windows computers by Agrawal, N., Bolosky, W. J., et al. (2007) belonged to the ten most commonly used Windows file formats, namely exe, gif, jpg, mp3, wma, dll, htm, cpp, lib, and h. Rajendiran, K., Kannan, K., et al. (2020) emphasize the application of machine learning in cyber forensics to automate and enhance investigation strategies. A sketch of indexing artifacts by shared metadata values follows below.
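
The following Python sketch is an assumed illustration of the indexing idea discussed above, not the authors' UP/UG/UA model: it maps metadata field-value pairs to the artifacts carrying them, so that files sharing a value, for instance one of the ten common Windows extensions listed above, can be pulled together as a group. The evidence folder path is hypothetical.

from collections import defaultdict
from pathlib import Path

# The ten common Windows file formats reported by Agrawal, N., Bolosky, W. J., et al. (2007).
COMMON_EXTENSIONS = {".exe", ".gif", ".jpg", ".mp3", ".wma",
                     ".dll", ".htm", ".cpp", ".lib", ".h"}

def index_by_metadata(root: str) -> dict:
    """Map each (field, value) metadata pair to the list of files carrying that value."""
    index = defaultdict(list)
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        st = path.stat()
        pairs = [
            ("extension", path.suffix.lower()),
            ("size_bytes", st.st_size),
            ("modified_day", int(st.st_mtime // 86400)),  # coarse daily time bucket
        ]
        for field, value in pairs:
            index[(field, value)].append(str(path))
    return index

if __name__ == "__main__":
    # "evidence/" is a hypothetical folder of collected artifacts.
    idx = index_by_metadata("evidence/")
    for (field, value), files in idx.items():
        if field == "extension" and value in COMMON_EXTENSIONS and len(files) > 1:
            print(field, value, "->", len(files), "files share this value")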
