Psychological Study of Cyber-Bullying Against Adolescent Girls in India Using Twitter

The rise in students' digital activity and social media presence, combined with weak regulation of platforms, has given rise to another form of bullying, popularly known as cyberbullying. Cyberbullying, one of the most adverse issues prevalent in schools nationwide, refers to bullying that happens over any web-interfaced or electronic platform. It is an activity that significantly affects the mental and physical health of its victims. With the anonymity and information technology infrastructure available today, the frequency and propagation of cyberbullying remain high. Understanding cyberbullying trends and preventing them using suitable machine learning algorithms could help numerous school students lead better lives and make better decisions, helping them grow and flourish into capable future leaders. Hence, the authors' aim in this research paper is to focus on adolescent girls using various tools and techniques such as text analytics and image analytics. For this paper, the authors study a sample of netizens. The analysis is conducted in New Delhi, and real-world data is extracted from Twitter in English. The data is mined using appropriate data mining algorithms to find hidden patterns, after which the analyses required to understand the psychology of girls and boys and the tonality and voice of the tweets/posts are conducted.


INTRODUCTION
Using technology to annoy, intimidate, shame, or target another individual is called cyberbullying. Online threats and mean, offensive, or disrespectful emails, comments, blog posts, or notifications all fall under this definition. Posting personal information, photos, or videos intending to harm or shame another person also counts. Cyberbullying often involves images, tweets, or web pages that are not taken down until the affected user asks for them to be removed. When discussing adolescence, one must realize that it is an especially sensitive period of one's life, in which one is vulnerable to external duress. Discrimination is described as intimidation or derogatory remarks directed at a person's gender, sexuality, sexual identity, ethnicity, or physical distinctions and is illegal in several states. As a result, the police could get involved, and bullies may suffer drastic consequences (Ben-Joseph, 2018). Although several scholars have studied the impact of cyberbullying on teens and attempted to develop automated tools for detecting it, those techniques have failed to take into account the vastly different social media world that teenagers currently live in, which is unlike the one that existed even five or ten years ago. Teenagers are well known for their prolific use of image- and video-sharing applications and limited-time tweets. Visual content, in particular, accounts for more than 70% of all online traffic. At the same time, the use of images and video for cyberbullying has increased significantly, with some claiming that "cyberbullying grows bigger and meaner with images and videos." Indeed, the growing prevalence of image and multimodal content for cyberbullying was one of the major themes found in recent cyberbullying reports.
Although it is widely recognized that decoding multimodal content is critical for cyberbullying detection, the cyberbullying detection literature is still primarily based on (sophisticated) text processing, and its accuracy remains limited. There are currently few projects that use visual features to spot cyberbullying. Understanding cyberbullying trends and preventing them using suitable machine learning algorithms could help numerous school students lead better lives and make better decisions, which help them grow and flourish into capable future leaders. Hence, this research paper aims to focus on adolescent girls using various tools and techniques like text analytics and image analytics (Reynolds et al., 2011). Hate speech tends to be an offensive form of interaction in which a hate agenda is expressed through misconceptions. Hate speech targets protected characteristics such as gender, sexuality, race, and disability; it can dishearten individuals or groups and lead to unwelcome crimes. Real-world data can be extracted using appropriate data mining algorithms to find hidden patterns and then conduct the analyses required to understand the psychology of girls and boys and the tonality and voice of the tweets/posts. Understanding psychology, color, and personality traits will help draw insights from the expressions collected. The authors will study the sample's user bios, likes, and comments using a lexical and syntactical approach. Since the data is extracted from Twitter, i.e., a secondary data source, the authors will address the gap in current psychological analyses. Beyond textual data, the extracted database is examined with a heavy focus on geospatial locations and images. It is a known fact that roughly 70% of the content on web-based social media websites comprises images.
Hence, it is essential to focus not just on the posts or captions but also on the images to get a clear picture of the online scenario. Girls are much more vulnerable to perceiving negative comments seriously and internalizing them, which is more likely to harm their mental health. This severely impacts the quality of their mental health and hinders them from achieving their fullest potential. It is also noted that although cyberbullying has been around for a while, very few pieces of literature address the research gap taken up by the authors. Through this article, the authors will comprehensively understand and scrape through the respondents' profiles. They will ensure that they can obtain all the information about the users through their profiles, assess textual, social, and visual clues to form their analysis, and finally declare a tweet flagged due to its explicit content. They will use machine learning algorithms for the analysis and create a system that constantly keeps learning, one that can change the life of not just one adolescent but many more. Such a comprehensive methodology aims to eliminate the need for self-administered questionnaires, which are subject to responder bias and are used widely worldwide to understand practices like cyberbullying, cyber victimization, etc. A self-administered test requires respondents to manually choose the option that applies to them most, which leads to an unknown bias between the respondents' thoughts and how they actually are. A system that constantly keeps learning can eliminate this bias, thereby providing a clear picture of the Twitter scenario. The authors will use a corpus from data scraping via Twitter and refine their results. Once the authors have the right sample size and population, the next step is to ensure the data is pre-processed and ready for analysis.
In this paper, they will also use techniques on numerical datasets, like transformation, to get a balanced dataset that provides accurate results. Once this is complete, the next phase is to move on to a number of machine learning models and choose the one that provides the most accurate results. Extensive experimental evaluations of real-world multimodal social network datasets demonstrate and validate that the authors' approach outperforms current cyberbullying identification models. They will concentrate on the data collection and feature engineering process, emphasizing feature selection algorithms before employing a variety of machine learning algorithms to predict cyberbullying behaviors. Finally, the problems and obstacles have been identified, presenting new investigative avenues for researchers to explore. The authors will focus on deepening the role of ML in cyberbullying detection and prevention. Specifically, the following issues (Angelis & Perasso, 2020) are addressed:
• ML models predicting cyberbullying;
• Identifying the most used ML algorithms and their evaluation methods;
• Understanding the implication of ML for prevention;
• Highlighting the main theoretical and methodological issues of ML algorithms in predicting cyberbullying.
Huang et al. (2014) discuss detecting cyberbullying using textual and social features. The authors use a Twitter corpus and ask three students to label the tweets as bullying or not bullying. They then analyze social network features, like the number of friends and network embeddedness, and focus on improving the accuracy of detection. They use the ego network to understand and draw insights from the corpus and use algorithms like J48, SMO, Dagging, Naïve Bayes, ZeroR, etc., to classify the tweets after balancing them using SMOTE. Singh et al. (2017) and Chatzakou et al. (2017) propose principled and scalable methods for detecting bullying and offensive activity on Twitter.
They suggest a rigorous approach for extracting content-, user-, and network-based attributes, with the aim of determining what distinguishes bullies and aggressors from casual users. Bullies make fewer posts, engage in fewer online forums, and are less well-known than regular users, while aggressor posts tend to be more negative. Machine learning recognition algorithms like J48, LADTree, LMT, NBTree, Random Forest (RF), and Functional Tree are used to identify users displaying bullying and violent activity on a corpus of 1.6M tweets shared over three months, achieving 90% AUC. Cheng, Li et al. (2019) investigate the novel issue of detecting cyberbullying in a multimodal setting by jointly exploiting the text, spatial location, and visual cues of social media information. This challenge is difficult due to the complex combination of cross-modal similarities across various modalities, systemic dependencies between separate social network sessions (like Instagram and Vine), and the diverse attribute knowledge of different modalities. They suggest XBully, a novel cyberbullying identification system that reformulates multimodal social media data as a heterogeneous network and then attempts to learn node embedding representations from it. Extensive experimental evaluations on real-world multimodal social network datasets demonstrate that the XBully system outperforms current cyberbullying identification models. Al-Hashedi et al. (2019) conducted an observational analysis of the efficacy and efficiency of deep learning algorithms combined with word embeddings in identifying cyberbullying texts. GRU, LSTM, and BLSTM were the three deep learning algorithms tested. Four separate word embedding models were investigated for feature representations: word2vec, GloVe, Reddit, and ELMo.
ELMo captured word sense by extracting detail from the word's context, overcoming the flaws of pre-trained word embedding models. The 10-fold cross-validation methodology was used to ensure correct performance measurement. The findings of the experiments revealed that BLSTM outperformed the other models in identifying cyberbullying messages. The dataset, comprising 12,772 posts, was drawn from Formspring.me. Chen et al. (2012) elaborate that the existing literature on message-level offensive language identification cannot reliably identify offensive content because the textual contents of online social media are highly unstructured and informal. A more practical solution is to track user offensiveness. The authors propose the Lexical Syntactic Feature (LSF) design to detect offensive content and classify potentially offensive users of social media. In terms of detecting aggressive material, the LSF system substantially outperformed current methods. In sentence offense detection, it achieves a precision of 98.24 percent and a recall of 94.34 percent; in user offense detection, a precision of 77.9 percent and a recall of 77.8 percent, taking 10 msec per sentence. Li and Tagami (2014) concentrate on identifying relation-based cyberbullying, which is a human-to-human assault. Relationship-based cyberbullying has recently gained recognition as a new form of cyberbullying, and detecting it remains a challenge. Because it attacks a human relationship, detection should keep track of how the relationship changes. They suggest generating a communication network as the first step in relation-based cyberbullying identification. The system is divided into two steps to reduce false negatives, which occur when students are friends in school but are not detected as friends on the Social Networking Service (SNS), a major issue in identifying cyberbullying. Capua et al.
(2016) suggest a potential solution for the automated identification of bully traces on a social network using methods derived from NLP (Natural Language Processing) and machine learning. They create a model based on Growing Hierarchical SOMs that can effectively cluster documents containing bully traces based on the semantic and syntactic features of textual sentences. The GHSOM network model was designed with Twitter in mind but was also checked against other social media platforms like YouTube and Formspring. Finally, the findings suggest that the proposed unsupervised solution can be used successfully in certain situations, with decent results under K-fold validation. In their paper, Foong and Oussalah (2017) outline an online framework for detecting and tracking cyberbullying incidents in online networks and groups. Insults, swear words, and second-person pronouns are the three basic natural language elements that the machine detects. A classification scheme and ontology-like logic were used to identify the presence of certain entities in the forum documents, which would send a warning to security, prompting them to take necessary action. The machine has been evaluated on two different forums and has shown itself capable of detecting such entities. The authors have wisely used the support vector machine (SVM) classifier because of its demonstrated utility in binary classification and its theoretical soundness. Misuse of emerging networks, such as social media (SM) sites, has spawned a modern breed of online aggression and abuse. Garadi et al. (2019) highlight new ways of demonstrating violent behavior on social media platforms. The reasons for developing prediction models to combat offensive behavior in SM were also discussed. The authors examine cyberbullying prediction models in depth and discuss the major problems that arise when building such models in SM.
Their paper gives an outline of the general mechanism for detecting cyberbullying and, most specifically, the approach. Although the data collection and feature engineering processes have been detailed, the focus is mostly on feature selection algorithms and then the application of various machine learning algorithms to forecast cyberbullying behaviors. Ratadiya and Mishra (2019) note that deep learning-based methods have relied on classic convolution- and recurrence-based sequential models; these, however, are computationally inefficient and require more memory. The authors suggest a multi-headed attention-based method for detecting profane text. The model is combined with power-weighted average ensemble techniques to boost efficiency even further. In comparison to previous methods, the proposed solution needs no extra memory and is less complex. Their model's enhanced performance on publicly accessible real-world data further supports this claim, offering flexible and lightweight models to counter the evils of cyberspace. Andleeb et al. (2019) note that, in contrast to a previous analysis on the same dataset that only considered textual features, their study extracts three categories of features from the dataset: textual, behavioral, and demographic features.

BACKGROUND/REVIEW OF LITERATURE
Textual characteristics contain bullying terms that, if present in the text, can indicate a true cyberbullying outcome. Personality attribute characteristics are derived for users who have been bullied in the past and may bully again in the future. Age, gender, and location are among the demographic characteristics derived from the dataset. The method is tested using various consistency metrics with both classifiers, and the SVM classifier outperforms the Bernoulli NB with an average accuracy of 87.14 percent. As per Abbass et al. (2020), using data derived from social media websites, a system is created to forecast significant categories of social media crimes. Data (tweet) pre-processing, a classification model generator, and prediction are the three modules that make up the proposed architecture. To construct a predictive model that classifies given data into various types of crime, Multinomial Naive Bayes (MNB), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM) are used. Furthermore, the N-gram language model is used in conjunction with these machine learning algorithms to determine the best value of n and assess the system's accuracy at various settings, including unigram, bigram, trigram, and 4-gram. The results show that all three algorithms achieve accuracy values greater than 90%, with the Support Vector Machine outperforming the others marginally. Alasadi et al. (2020) suggest a fairness-aware fusion mechanism that guarantees that fairness and consistency remain essential considerations when integrating data from different modalities. The contributions from various modalities are incorporated in this Bayesian context in a way that considers the different trust levels associated with each feature and the interdependencies between features. This system, in particular, applies weights to various modalities depending on their precision and their fairness.
The results of using the system to solve a multimodal (visual + text) cyberbullying identification problem show how effective it is at achieving both accuracy and fairness. Roy et al. (2020) believe it is important to monitor user posts and filter hate speech-related content before it spreads. Twitter, however, receives over 600 messages every second and about 500 million tweets daily; it is almost impossible to manually filter any detail from such a large amount of incoming traffic. In this regard, a Deep Convolutional Neural Network is used to create an integrated framework. The proposed DCNN model uses the tweet text and the GloVe embedding vector to extract the meanings of tweets through convolution, and it outperformed current models with precision, recall, and F1-score values of 0.97, 0.88, and 0.92 for the best case, respectively. According to Behzadi et al. (2021), many people use their social media platforms to spread hatred online, which is why many experts have focused on the issue of cyberbullying awareness over the last decade. The authors of that paper use transfer learning to address this problem. They use a variety of small BERT models that they fine-tune with hate-speech data, and they use the focal loss function to deal with class-imbalanced data. On the hate-speech dataset, the authors obtained state-of-the-art results of 0.91 accuracy, 0.92 recall, and 0.91 F1-score using this method. The more lightweight BERT models are considerably faster in detection and ideal for real-time cyberbullying applications using a transfer learning pipeline.
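Several of the surveyed systems address the class imbalance inherent in cyberbullying corpora, whether through focal loss as above or through SMOTE oversampling as in Huang et al. (2014). A minimal, library-free sketch of the SMOTE idea follows; the function name and toy vectors are illustrative, not taken from any cited paper:

```python
import random

def smote_oversample(minority, majority_count, k=2, seed=42):
    """Minimal SMOTE-style oversampling: synthesize new minority-class
    points by interpolating between a random minority sample and one of
    its k nearest minority neighbours (squared Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = list(minority)
    while len(synthetic) < majority_count:
        base = rng.choice(minority)
        # k nearest neighbours of `base` within the minority class
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        neighbour = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, neighbour)))
    return synthetic
```

In practice the tuples would be feature vectors of bullying tweets (the minority class), oversampled until they match the count of non-bullying tweets; production code would use a library implementation such as imbalanced-learn's SMOTE.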
In this paper, Gutiérrez-Esparza et al. (2019) discuss findings from studies on identifying instances of cyber-aggression on social media, focusing on Spanish-language users in Mexico. To characterize offensive remarks in three specific cases of cyber-aggression (bigotry, abuse based on sexual identity, and violence against women), they used Random Forest, Variable Importance Measures (VIMs), and OneR. Experiments with OneR show that it improves the comment classification process in the three cyber-aggression cases to more than 90%. The proper characterization of cyber-aggression remarks will aid in developing strategies to combat the phenomenon. Potha and Maragoudakis (2014) take a sequential data modeling approach to the issue, formulating the predator's questions using a Singular Value Decomposition (SVD) representation. This procedure aimed to see whether classification techniques could accurately forecast the severity of a cyberbullying attack and look for similarities in each predator's linguistic style. Each signal is parsed by a neural network that, using feature weighting and dimensionality reduction techniques, predicts the degree of insult within a query given a window of two to three previous questions. They observed that, after applying SVD and considering the second dimension, the plot of the time series data was very similar to the plot of the class attribute. Hee et al. (2015) created and implemented a new cyberbullying annotation scheme that captures the presence and nature of cyberbullying, the role of the post author (harasser, victim, or bystander), and various fine-grained cyberbullying categories such as insults and threats. They presented their findings on the automated detection of cyberbullying in web blogs and the possibility of detecting more fine-grained cyberbullying types. An F-score of 55.39 percent is obtained for the first task.
It was also found that detecting fine-grained categories is more difficult, owing to data scarcity and the fact that they are often articulated in a subtle and tacit manner. Meliana et al. (2019) note that some words posted on social media can be legally actionable; intimidation, for example, is covered by the ITE Law and can be grounds for removal from Twitter, and examples of intimidation on the platform are easy to find. There are many techniques for retrieving data from social network platforms, one of which is the clustering or data-grouping process. The Naive Bayes and Decision Tree J48 classification methods were used in the analysis. Naive Bayes obtained an average success rate of 92 percent (with 8 percent not observed) for the overall scenario, while Decision Tree J48 obtained an accuracy value of 100 percent. Psychology-based bullying, or related cyberbullying, is the most common form of cyberbullying.
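The N-gram feature step described for Abbass et al. (2020) earlier in this review can be sketched in a few lines. This toy extractor (illustrative, not the cited authors' code) produces the word n-gram counts that a downstream MNB, KNN, or SVM classifier would consume:

```python
from collections import Counter

def ngram_features(text, n=2):
    """Count word n-grams (bigrams by default) in a tweet; the Counter
    acts as a sparse feature vector for an MNB/KNN/SVM classifier."""
    tokens = text.lower().split()
    grams = (" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return Counter(grams)
```

Sweeping n from 1 to 4 reproduces the unigram-to-4-gram comparison described above for selecting the best value of n.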

RESEARCH DESIGN
Data collection will be done through Twitter. Six thousand tweets are extracted from Twitter, and exploratory data analysis follows this extraction. The authors will then try to understand the various user profiles based on the available information. In the analysis, the authors will consider the tweets that have the most impact: those with the highest polarity and subjectivity values. One may sort the tweets to obtain the information required for deep-diving into the analysis. The steps must be performed cautiously to ensure the correct information is collected. Beginning with the Twitter extraction, the authors ask a sample population to list the 20 most common bad words or profanities noticed on social media platforms. These words are then used to filter and extract the tweets that will be further analyzed. The pictorial flowchart representation of the research design is as follows. Once the data is extracted and it is time to deep-dive into the tweets, the next step is to focus on three major components and their sub-components. They can be summarized in a table to clarify why the software flagged the tweets based on high values in the two metrics (polarity and subjectivity). This helps keep the analysis accurate and ensures good results, which can be used to help those in need.
• Extract data from Twitter using keywords.
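The extraction step above can be sketched as follows, assuming Tweepy's v2 client and valid API credentials; the helper names and the example word list are illustrative, not the authors' actual code:

```python
def build_query(profanities, lang="en"):
    """OR together the crowd-sourced profanity list into a Twitter
    search query restricted to original (non-retweet) English tweets."""
    words = " OR ".join(profanities)
    return f"({words}) lang:{lang} -is:retweet"

def fetch_tweets(bearer_token, query, n=100):
    """Pull up to n recent tweets matching the query (network call;
    requires the tweepy package and a valid bearer token)."""
    import tweepy  # deferred so build_query stays usable without tweepy
    client = tweepy.Client(bearer_token=bearer_token)
    resp = client.search_recent_tweets(query=query, max_results=min(n, 100))
    return [tweet.text for tweet in (resp.data or [])]
```

Calling build_query with the 20 crowd-sourced terms would yield the kind of keyword filter used to assemble the tweet corpus.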

OBSERVATIONS AND INFERENCES
The authors have conducted a sentiment analysis on a set of 6,000 tweets. The tweets were extracted through the Tweepy library available in Python, and sentiment analysis was performed through the TextBlob package. Sentiment analysis is a tool that enables us to understand tweets relating to a particular subject or topic; in this case, it helped us understand the overall sentiment about the degree of cyberbullying that takes place on the platform. Once the tweets were extracted from Twitter, they were arranged in a data frame, forming the corpus. A corpus is a collection of text documents containing data that lets us capture sentiments. The data frame was further streamlined, and the polarity and subjectivity of the tweets were assessed. The tweets were filtered using some of the most common swear words on social media platforms. The list of these words was decided by floating a questionnaire to respondents aged 18-29 residing in urban areas. The respondents selected a list of the 20 most common swear words, and their various variations were accounted for. This helped further streamline the program and search for the most relevant tweets based on common filters. Polarity is a measure that gives each tweet a score ranging from -1 to +1: a score of -1 denotes a strongly negative sentiment, whereas a score of +1 denotes a strongly positive sentiment. Subjectivity denotes the degree of opinion present in the tweet; as the name suggests, it simply denotes how subjective a tweet is. The subjectivity score lies between 0 and +1.
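The TextBlob scoring described above can be sketched as follows, together with a range-validation helper; the function names are illustrative, and score_tweets assumes the textblob package is installed:

```python
def score_tweets(tweets):
    """Score each tweet's polarity (-1..+1) and subjectivity (0..1)
    with TextBlob (requires the textblob package)."""
    from textblob import TextBlob  # deferred so the helper below needs no extras
    return [
        {"text": t,
         "polarity": TextBlob(t).sentiment.polarity,
         "subjectivity": TextBlob(t).sentiment.subjectivity}
        for t in tweets
    ]

def scores_are_valid(rows):
    """Range check on the scored rows: polarity must stay within
    [-1, 1] and subjectivity within [0, 1]."""
    return all(-1.0 <= r["polarity"] <= 1.0 and 0.0 <= r["subjectivity"] <= 1.0
               for r in rows)
```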
In the analysis, the authors find that multiple tweets were retweeted and favorited by others on the platform. The top 10 tweets by retweets are listed as follows (Table 1).
Furthermore, it is noticed that the count of tweets that were positive in nature was only 1,871 out of 6,000. This means that about 30% of the tweets contained a positive connotation and were not negatively impacting people's emotions (Table 2).
What is interesting about this diagnosis is that multiple tweets scored a 1 out of 1, indicating strongly positive sentiment. A sanity check that the algorithm works properly is that none of the polarity values exceed 1.
There were 2,921 tweets with a polarity of less than zero, indicating negative sentiments; these were the main focus of the authors' study. They comprised almost 50% of the dataset of extracted tweets and hence were the tweets that could be perceived as offensive. The following tweets could be indicative of cyberbullying on Twitter (Table 3).
It is again interesting to note that upon sorting the tweets in ascending order, the smallest value obtained is -1, which also validates the soundness of the algorithm used. The third category is the neutral one. Here, the focus is on the number and proportion of extracted tweets that were neutral. These tweets could be facts or statements and do not have any impact on people's sentiments. They comprised only 1,208 tweets, about 20% of the entire dataset. Therefore, analysis of these statements is beyond the scope of this paper (Table 4).
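The three-way split reported above (1,871 positive, 2,921 negative, 1,208 neutral) follows directly from bucketing the polarity scores; a minimal sketch with illustrative helper names:

```python
def sentiment_distribution(polarities):
    """Bucket polarity scores the way the analysis does: positive (> 0),
    negative (< 0), and neutral (== 0), returning counts and shares."""
    pos = sum(1 for p in polarities if p > 0)
    neg = sum(1 for p in polarities if p < 0)
    neu = len(polarities) - pos - neg
    total = len(polarities) or 1  # avoid division by zero on empty input
    return {"positive": (pos, pos / total),
            "negative": (neg, neg / total),
            "neutral": (neu, neu / total)}

def most_negative_first(tweets_with_polarity):
    """Sort (tweet, polarity) pairs ascending so the most negative,
    highest-impact tweets appear first."""
    return sorted(tweets_with_polarity, key=lambda tp: tp[1])
```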
The tweets with a polarity ranging from 0 to -1 are arranged in ascending order. This is done to ensure that the tweets with maximum impact are highlighted in the analysis, leading to more accurate and effective implicative instance generation. This gives the following results (Table 5): when arranged in ascending order, the most extreme tweets show a polarity of -100%, and the values rise toward 0, with the smallest non-zero magnitude observed being -1.11%. A larger magnitude indicates a higher probability of the tweet being perceived negatively; conversely, the same logic applies to all positive percentage values.
Looking at retweets, it is noticed that multiple tweets rank high on polarity as well as subjectivity. These are the tweets that are flagged due to the severity of the negative sentiments associated with them. User behavior can be further understood from the most popular users' follower counts, whether they have a profile picture, and their pinned tweets, if any, to assess each user's online presence and personality. Later, this information may be used to understand the online persona of those who post violative content in their tweets.

CONCLUSION
The authors notice that out of the 10 most popular accounts, 3 are actually verified accounts. This suggests that influential account holders may also be the ones creating an unhealthy social world. Further analysis of the tweets of verified users shows that even though they are popular tweets, they are not all necessarily promoting a negative cyber environment. Only one-third of the tweets have been flagged as portraying a negative emotion; two of the extracted tweets are either positive or neutral, as per the scores obtained (Tables 6-9).
Further analysis of the top negative-sentiment tweets among the most retweeted tweets in the corpus shows that there is a prevalent issue of cyberbullying on the social media website. These range from snide remarks to words intended to hurt someone or a community. These users are potential 'aggressors' or may be deemed 'flagged' users as per the results of the Twitter extraction study. The users flagged by the analysis seem to have a large number of posted tweets, indicating that they are active users who tweet regularly, and almost all of them have a moderately high number of followers, ranging in the thousands (in one case, even millions). Furthermore, the users have maintained a certain frequency of tweets, which may be deemed high, with the lowest tweet count among flagged users being 754 and the highest being 148,700. This shows that while the users' tweets do not conform to a single pattern, certain conversations have been flagged by the program. Although all profiles did have profile pictures, not all of these images were distinctly clear (barring a few exceptions), which helped the users maintain their anonymity and thereby draft tweets that may not be suitable for all audiences. Hurting political or emotional sentiments is extremely easy, and there is always a digital footprint or copy left behind. Hence, audiences and masses must exercise caution in this day and age, where information is malleable and can be given meanings never intended.

FUTURE WORK AND LIMITATIONS
Although there are certain areas where the program needs further attention, there are also advantages to it, as discussed in prior sections. Understanding a tweet's true meaning as a human would perceive it requires much work. There may be many instances where flagged tweets are not actually profane; they merely contain a certain amount of commonly used profane language in the bodies of their texts. This illustrates that there will always be some error percentage, which will hamper the analysis. Since the participants are not taking questionnaires or surveys, the entire process is undertaken with the help of technology. Hence, a limitation of the analysis is that the participants are not voluntarily involved, which will make it difficult to reach out to such netizens to provide them with actual support in severe cases. It can thus be noted that although analyzing without a questionnaire eliminates the bias component, it makes it difficult to ensure proper care for either the aggressor or the victim. What is offensive to one may not be so to another. Future work in this domain can include setting alarms to raise specific objections known as flags. This can further help in the regulation of the social media platform, bringing down the severity of offensiveness on the particular platform. When an alarm goes off, it can be used to determine the following aspects:
• To what extent was the tweet offensive: This can be indicated with the help of certain metrics, including a percentage of offensiveness.
• Why was it tweeted: Was the tweet a religious, political, emotional, or psychological one? Was it based on current events? Or was it tweeted just to 'get back' at someone via a platform that protected the aggressor?
• Did it make people feel inferior: To what extent did the tweet not conform to social norms? Was it responsible for making people feel inferior? If yes, what segment of people felt directly attacked by such a tweet? Was it consistent across all geographies?
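A toy version of such an alarm, combining the polarity score with the profanity list, might look as follows; the cutoff and the offensiveness percentage are illustrative choices, not metrics from the study:

```python
def flag_tweet(text, polarity, profanities, polarity_cutoff=-0.5):
    """Toy flagging rule in the spirit of the proposed alarms: flag a
    tweet when it is strongly negative AND contains a listed profanity,
    and report an 'offensiveness' percentage."""
    words = set(text.lower().split())
    hits = words & {w.lower() for w in profanities}
    flagged = bool(hits) and polarity <= polarity_cutoff
    # crude offensiveness score: magnitude of negative polarity, scaled to 0-100
    offensiveness = round(min(1.0, abs(min(polarity, 0.0))) * 100)
    return {"flagged": flagged, "matched": sorted(hits),
            "offensiveness_pct": offensiveness}
```

A real deployment would replace the word-match with the variation-aware filters described earlier and route flagged results to the proposed 'talk to us' support bots.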
Raising such flags would help in understanding the geographic locations of netizens and in taking further actions to mitigate such instances, raise awareness, or simply ensure that proper psychological care is imparted. This can be done with the help of various bots and 'talk to us' commands. Furthermore, on the basis of the user's profile, a system of verification can be implemented. Some starting points of this system include:
• Colour the perpetrators 'Red': those who are indulging in bad behaviour. This could be indicated publicly on the user's profile with the help of an exclamation mark emoji, which could ensure that all users are on their best behaviour on the platform.
• This format can be especially helpful for girls or women who are harassed on a daily basis. They need only report incidents with authentic proof, which can then be used to flag the offending accounts.
Sending a friend request (FR) to anyone should not mean that the party initiating contact is romantically interested in the other; it can simply indicate a wish to be academically connected, yet people often misinterpret it. For those who want to connect with industry experts or simply make friends on the platform by sending requests to accounts, such a system could help them make better decisions and foster a safer online environment. This can be especially helpful to women who may otherwise seem too direct or appear to be leading someone on, which might not be the case.

Kavya Sharma is a practitioner of data science and analytics. After completing her master's degree in data science from Symbiosis International University, Pune, she has been working at Gartner Inc., India.

APPENDIX
Krishna Kumar Singh is an educator and trainer with more than 20 years of experience. He is a certified professional and trainer in smart IT infrastructure, big data analytics, and decision science. He has been working as an associate professor and Board of Studies (BoS)