Preprint
Article

Psychometric Properties of a Cyberaggression Measure in Mexican Students

A peer-reviewed article of this preprint also exists.

Submitted:

18 November 2023

Posted:

20 November 2023

Abstract
Cyberaggression is an important problem today; it can affect adolescents in different ways. Reliable and valid measures are therefore necessary to study the phenomenon properly. The aim of the present study was to generate validity and reliability evidence for a Spanish-language cyberaggression scale in a population of adolescents from northwestern Mexico. The results contribute to research on cyberaggression among adolescents in Mexico. The measure differs from other measures by detecting the different participant roles, including the bystander, an actor whose presence has not been sufficiently explored in the cyber context.
Keywords: 
Subject: Social Sciences  -   Psychology

1. Introduction

Throughout human history, aggression has existed in interpersonal relationships. However, as culture, society, and technology have changed over time, aggressive behaviors have adapted to their evolving context. Currently, much social interaction among young people occurs through social networks/cyberspace [1]. Information and Communication Technologies (ICTs) have given rise to new opportunities such as access to information, stimulating cross-cultural interaction, and improving communication and socialization. Despite that, cyberaggression issues such as cyberbullying, catfishing, scamming, cyber dating violence, sextortion, etc. have emerged [1]. Cyberaggression has become a global problem for adolescents in recent years, and scholars have devoted increasing attention to it [2].
Researchers have used the terms “cyber victimization, e-bullying or electronic bullying, cyberstalking, electronic aggression, online harassment, cyber harassment, electronic victimization, peer victimization in cyberspace, cyber violence, online bullying, or ill-intended behaviors in cyberspace” interchangeably [3(p. 3)], and there is also no consensus on the definition, since some definitions take into account aspects such as vulnerability or power imbalance, repetition, and the intentionality of the act, while other studies do not [3]. For this reason, the term cyberaggression is used in this study, because it is an umbrella term that covers all cyber-based aggressive behaviors, defined as an act that seeks to cause harm through electronic means [4,5,6], even when there is no power difference between the perpetrator and the victim and no repetition by the perpetrator. However, when referring to prior research, we retain the term used by the authors.
Cyberaggression has been related to various negative outcomes in people involved in any role, but most studies of its consequences have focused on perpetrators and victims, even though it can affect bystanders too [7]. The negative effects can be physical, emotional, social, and psychological. Some of the harmful outcomes are distress [8], mental health problems such as anxiety, depression, and fear [9], low levels of self-esteem and empathy [10], social exclusion [11], and suicide attempts [12].
Findings regarding the prevalence of cyberaggression or cyberbullying are inconsistent, which may be due to measurement differences [3]. From ten studies carried out in Mexico, Vega-Cauich [13] determined that victimization prevalence rates ranged from 3% to 52%, while perpetration rates ranged from 3% to 23%. In addition, existing research in Mexico on the characterization of the three roles (aggressors, targets, bystanders) is sparse. Therefore, the present study seeks to validate a Spanish-language scale measuring cyberaggression that can be used in Mexico and other Latin American countries, from the perspective of the participant roles involved [14].

1.1. Cyberaggression in Mexico

Cyberaggression is recognized as a problem internationally, although most studies have been carried out in the United States, Europe, Australia and to a lesser extent in other countries of the Global North, leaving research in the Global South lacking [10,15,16].
The National Survey on Availability and Use of Information Technologies in Households (ENDUTIH) reported that in 2020, 84.1 million people in Mexico were Internet users, representing 72% of the population six years of age or older. Regarding location, 78.3% of the urban population uses the Internet, compared with 50.4% of the rural population [17]. Seventy-six percent of the Mexican population aged six or older use a cell phone, 91.8% of which are smartphones. The most used applications are instant messaging and tools for access to social networks.
The Module on Cyberbullying (MOCIBA), whose aim was to generate statistical information on the prevalence of the problem among Internet users aged 12 and over, in addition to characterizing the situations experienced, was administered in 2020 to a sample representative of 103.5 million Mexicans [18]. The results indicated that 75% of the sample stated that they had used the Internet in the last three months; of those, 21% declared that they had experienced some type of cyberaggression in the last 12 months. Cybervictimization was mainly concentrated among persons aged 12 to 19, where 22.2% of the males and 29.2% of the females had been victims, with very similar figures for those aged 20 to 29 (23.3% of males, 29% of females).
Regarding gender, across the various types of cybervictimization, males reported greater victimization in most situations, with only small differences between genders. However, it is notable that females had a high prevalence of situations of a sexual nature, with differences of more than 15% between genders [18]. On the other hand, among the victims who claimed to know their aggressor, both men (59.4%) and women (53.2%) reported that most of the aggressors were males, while female aggressors were reported far less often (by 13.7% of male and 18.6% of female victims).
Vega-Cauich [13] carried out a meta-analysis of bullying and cyberbullying, which synthesizes the studies published in Mexico between 2009 and 2017. This researcher concluded that cybervictimization occurs in 21% of the student population between 10 and 22 years old, and that cyberaggression is perpetrated by 11% of students. Other investigations carried out in the country reported rates between 2.4% and 44% who experienced cybervictimization [19,20,21].

1.2. Participant Roles in Cyberaggression

According to Salmivalli [22], there are several participant roles in bullying: the victims, who are the students who suffer the aggression; the bullies, who perpetrate violent conduct against other students; and the bystanders, who witness the violent events. These roles have also been described in cyberaggression.
Although there are different reasons why perpetrators commit aggressive behavior, the search for attention is very important [23,24]. Therefore, the role of observers is crucial in the phenomenon of aggression between peers, since by witnessing violent acts and not intervening, they function as reinforcers of the behavior. Salmivalli et al. [14] classified the observers into four different types: 1) assistants are those who help the aggressors; 2) reinforcers are those who support the aggressor by encouraging them and laughing at the victim; 3) defenders are those who intervene by helping victims or reporting incidents to a school authority; 4) uninvolved are those who avoid aggressive events and do not act.
Most of the extant research on cyberaggression has focused on the victims and aggressors [7], but the current research considers it relevant to also characterize the bystanders, since they are also affected by the problem, and in addition, the role of these adolescents can be crucial in stopping or encouraging aggressors [7].

1.3. Factors Related to Cyberaggression

Meta-analyses published in recent years [25,26] have shown considerable variation in the prevalence of cyberaggression, whether as a victim or as a perpetrator, according to demographic and individual factors such as gender, race or ethnicity, sexuality, personality, and weight, among many others. These meta-analyses highlight that gender is relevant, but they point out that it is still not clear how this relationship works, since some studies point to females as the main victims and males as the main aggressors, others have shown the opposite, and still others indicate no significant relationship. Chun et al. [3], in their review of cyberbullying measurements, note the need for sex-sensitive measures: even though many studies have found differences between females and males, this aspect is rarely considered when creating or validating measures, and it is necessary for understanding the relative impact of cyberbullying.
Regarding age, it has been found that even though cyberbullying can occur at an early age during primary education (under 12 years of age), it peaks in adolescence and declines between 17-18 years and adulthood [26,27]. Likewise, factors like time spent online and presence on social media across different apps are important [27], but what is perhaps more interesting is the interaction among these factors and those mentioned before, such as social media presence and gender, access to technology (time spent online) and age, and also age and gender [16].

1.4. Cyberaggression/Cyberbullying Measurements

Self-report has been the most widely used technique to measure cyberbullying, and it has been shown that descriptive, analytical, and explanatory analyses can be carried out with this method (Espinoza & Juvonen, 2013). In addition, when the aim is to build multidimensional conceptual models or to study prevalence across the different ways in which bullying occurs, a multi-item scale is recommended (Thomas et al., 2015).
In recent years, several studies where cyberbullying and cyberaggression measures are developed have been published. The most complete and current review to date is that carried out by Chun et al. [3], in which they analyzed measures published until May 2020. These previous studies have provided important contributions to knowledge of the cyberbullying phenomenon; however, it is important to examine some common limitations that are mentioned in this study.
The first limitation of previously published measures is the evident problem of the lack of agreement between research on conceptualization and operationalization, which can lead to confusion about what is measured and what is not, as well as conclusions about the relationships with other constructs that are not valid. Ansary [28] mentions that in the face of this problem, some researchers have used global measures of cyberaggression as indicators of cyberbullying. However, this compromises the internal validity and distorts findings on the true prevalence values of both problems.
Another limitation Chun et al. [3] emphasize is that all the studies were conducted in developed countries. Knowing that the sociocultural context affects the responses and understanding of the interviewees, it should be considered when developing a scale. The review by Herrera-López et al. [15] indicates that the values of cyberbullying in developing countries are close to those reported in developed countries, but that the publications are very few and of low impact, which fuels a technological gap between countries.
Chun et al. [3] also mention that even though 17.2% of the studies claim not to have observed gender differences in the victimization or perpetration of cyberbullying, 42.2% of the studies did report such differences. Despite this, they did not find measures sensitive to this variable, even though the results suggest that the ways males and females experience the problem are not necessarily the same; the authors therefore suggest that gender-sensitive cyberbullying measures are needed to reflect the reality of adolescents more reliably.
Finally, a fourth limitation mentioned in the systematic review is that the studies do not follow a guide for the development of the scales; in addition, the vast majority do not report the necessary psychometric analysis and tend to underestimate the importance of validation. The Standards for Educational and Psychological Testing proposed by the American Educational Research Association (AERA), the American Psychological Association (APA) and the National Council on Measurement in Education (NCME) exist precisely to promote a systematization in the development of tests and give a basis for researchers to support the quality of the measures, so it is important that they be used as a reference framework to guarantee solid practices in the procedures of validity and reliability of the measures.
There are several features that researchers must consider when seeking to create or validate a measure. Morgado et al. [29] urge researchers to pay special attention to three quantities. The first is the number of participants, since studies should be carried out with large, representative samples. The second is the number of tests performed, since it is important not only to check statistical reliability but also to demonstrate construct validity, an aspect that, according to Chun et al. [3], some studies overlook. The last is the number of items: although short scales demand less time from respondents, their reliability can be compromised, so it is necessary to prioritize scales with enough items to keep reliability within an acceptable range.
Translating and adapting a measure from one culture to another must be done through an appropriate methodology that guarantees the stability of its meaning and metric characteristics across cultures. Beaton et al. [30] and Ortiz-Gutierrez and Cruz-Avelar [31] propose processes of cross-cultural adaptation of measures. In these processes, the problems of cultural and language adaptation that arise when taking an instrument from one environment to another are reviewed.
Herdmann et al. [32] state that words can have different meanings from one culture to another or may lack an equivalent term in a given culture; even when the language is the same, meanings can be totally opposite due to the sociocultural context. Therefore, it is important that developers of new measures modify items that are not suitable at a conceptual level in the new context. Researchers must always consider that the process of adapting and translating a measure to another culture does not guarantee that it will be valid in that new culture.
Because most studies use the term cyberbullying, the present study sought measures that, even when using this term, could measure cyberaggression; such is the case of the Cyberbullying Test by Garaigordobil [33]. Factor analysis shows that this scale identifies cybervictims and cyberaggressors, can also identify people who belong to both groups (cybervictims/cyberaggressors), and includes a dimension to detect cyberbystanders, a feature few measures have.
Garaigordobil [33] reports acceptable reliability values for the three dimensions: cybervictimization with α = .82, cyberaggression with α = .91, and cyberbystander with α = .87. An EFA is presented as evidence of internal structure, yielding a three-factor solution that explained 42.39% of the variance. In addition, the scale demonstrates good convergent and divergent validity. The author also performed a confirmatory factor analysis that confirmed the fit of the three-factor model with good statistical values [χ²/df = 4.88, CFI = .91, GFI = .92, RMSEA = .056, SRMR = .050]. The scale was administered to a representative sample of secondary and high school students in Spain, adolescents between 12 and 18 years old, with an equitable gender distribution.
Because its results can be used without differentiating between response frequencies, Garaigordobil’s Cyberbullying Test is pertinent to this study: first, its structure makes it possible to identify bystanders of cyberbullying, fundamental actors in the dynamic, since their attitude can stop or encourage aggressors [7]; second, its validation sample consisted of 3,026 adolescents between the ages of 12 and 18, similar to the target population of this research.

1.5. Present study

Without a doubt, cyberaggression is an important problem today; it can affect adolescents in different ways regardless of the role they play, since it has been associated with mental health and other problems. In addition, the different roles are associated with gender: aggressors are more often males, while victims, especially of sexual aggression, are more often females, suggesting an urgent need for a gender-sensitive measure; moreover, participation in cyberaggression is greatest during adolescence, before the age of 17. Accordingly, the research presented here aims to generate evidence of validity and reliability for a scale that measures cyberaggression, establishing its psychometric properties in a population of adolescent students in northwestern Mexico from the perspective of the participant roles involved. To achieve this aim as reliably as possible, the Standards for Educational and Psychological Testing proposed by AERA, APA, and NCME [34] were used as a framework for the psychometric analysis procedure.

2. Materials and Methods

2.1. Participants

Stratified random sampling was carried out, excluding community, private, and multigrade secondary schools; 10% of the schools, distributed across nine municipalities, were chosen at random in order to obtain greater variability and a sample representative of the state of Sonora in northwestern Mexico. The classrooms in each school were also randomly chosen, taking one group from the second grade and one from the third grade (equivalent to 8th and 9th grade in the American system). Students who could not answer due to a physical or cognitive condition were excluded from the sample.
The sample was made up of 1,695 students, distributed among 55 schools. The second-grade sample was made up of 760 students (44.8%) and the third-grade sample of 935 (55.2%). Age was not reported in the survey, but it is known that in these grades it ranges between 12 and 15 years; and with respect to gender, equitable samples were obtained, with 873 women (51.5%) and 822 men (48.5%).
During data collection, survey administrators reviewed the response sheets to identify careless or inattentive response patterns and exclude those respondents. Subsequently, during data capture, participants with more than 10% missing data were also excluded, while responses were imputed with the item mean for participants with less than 10% missing responses [35].
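The exclusion-and-imputation rule described above can be sketched as follows; the function name and toy data are illustrative, not part of the study's actual pipeline.

```python
import numpy as np
import pandas as pd

def clean_responses(df: pd.DataFrame, max_missing: float = 0.10) -> pd.DataFrame:
    """Drop respondents with more than `max_missing` missing items,
    then impute remaining gaps with the item (column) mean."""
    keep = df.isna().mean(axis=1) <= max_missing  # proportion missing per respondent
    cleaned = df.loc[keep].copy()
    return cleaned.fillna(cleaned.mean())         # column means of retained respondents

# Toy data: 3 respondents x 10 Likert items (1-4)
items = pd.DataFrame([
    [1, 2, 1, 1, 2, 1, 1, 2, 1, 1],             # complete: kept
    [3, 2, np.nan, 1, 2, 1, 1, 2, 1, 1],        # 10% missing: kept, gap imputed
    [4, np.nan, np.nan, 1, 2, 1, 1, 2, 1, 1],   # 20% missing: excluded
])
result = clean_responses(items)
```

After cleaning, two respondents remain and the single gap in the second row is filled with the mean of that item among retained respondents.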

2.2. Procedure

In 2017, an agreement was established with the School Safety Management of the Ministry of Education and Culture (SEC) to study antisocial behavior in public elementary schools in the state of Sonora. A work team, made up of psychologists and psychology students, was trained to standardize the application procedure. The government agency notified the selected schools about the study by email and work teams of 2 to 5 people visited the schools.
The administration of all the measures took place in each school during class hours from October to November 2018. The trained staff introduced themselves to the responsible administrators or teaching staff, who helped select the groups and locate the classrooms for survey administration. In the classrooms, the students were invited to participate in the study, with an explanation that participation was voluntary and confidential; they were instructed to provide their answers on the electronic sheets and asked to sign an informed consent form. The procedure complied with the Code of Ethics of the Psychologist of the Mexican Society of Psychology [36].

2.3. Measures

A sociodemographic questionnaire was included, collecting personal and academic data such as gender, grade, school performance, and social networks used, among other aspects.

2.3.1. Cyberbullying Measure

The Cyberbullying Test created by Garaigordobil [33] was modified to adapt the vocabulary to the population participating in the study. This measure includes 15 cyberbullying behaviors grouped by the role enacted in the cyberbullying phenomenon, repeating the same question for each role depending on whether the respondent was a victim, an aggressor, or an observer/bystander. These items focus on the behavior, regardless of the electronic means by which it was carried out. Responses were given on a four-point Likert-type frequency scale ranging from never to always.

2.3.2. Cultural adaptation

A cultural adaptation of the Spanish measure was carried out, considering linguistic and cultural factors. An iterative filtering procedure was conducted with a group of researchers native to the northwest zone and the state of Sonora, who independently adjusted the language of the material from Spain's Spanish, taking care that the items conserved semantic, conceptual, idiomatic, and experiential or cultural equivalence, adapting the terms to Mexican culture (e.g., "mobile" modified to "cell phone", "hang it on the internet" changed to "upload it to the internet").
Finally, the adaptation was reviewed by five social science specialists familiar with the language and culture of young people between the ages of 12 and 15, following the recommendations of Beaton et al. [30] and Ortiz-Gutiérrez and Cruz-Avelar [31], bearing in mind that this process of adapting and translating a measure to another culture does not guarantee its validity in the new culture.

2.4. Data analysis

Data were first subjected to Parallel Analysis (PA), given the strong evidence that this is the most suitable method for determining the appropriate number of factors to retain [37].
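Parallel analysis as used here compares the eigenvalues of the observed correlation matrix against a chosen percentile of eigenvalues obtained from Monte Carlo-simulated random data of the same size; factors are retained while the observed eigenvalue exceeds the simulated one. The sketch below is a generic illustration of this logic, not the exact routine used in the study.

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 200,
                      percentile: float = 95, seed: int = 0) -> int:
    """Number of factors whose observed eigenvalue exceeds the given
    percentile of eigenvalues from simulated random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]  # descending
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.normal(size=(n, p))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(sims, percentile, axis=0)
    return int(np.sum(obs > threshold))

# Toy check: 9 items generated from 3 orthogonal factors (loadings 0.9)
rng = np.random.default_rng(1)
factors = rng.normal(size=(1000, 3))
loadings = np.zeros((9, 3))
for j in range(9):
    loadings[j, j // 3] = 0.9          # each item loads on exactly one factor
X = factors @ loadings.T + 0.3 * rng.normal(size=(1000, 9))
n_factors = parallel_analysis(X)
```

With this clean three-factor structure the function recovers three factors, mirroring the result reported for the scale.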
Confirmatory Factor Analysis (CFA) was used to collect evidence about structural validity based on the theoretical framework of the Cyberbullying Test [33], which has three subscales: victim, perpetrator, and bystander. To test global fit, we used the following criteria: Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI) higher than 0.90, and Root Mean Square Error of Approximation (RMSEA) and Standardized Root Mean Square Residual (SRMR) lower than 0.08 [38,39]. The loading of the first item of each factor was fixed to 1 to identify the model. Internal consistency was measured using Cronbach’s alpha.
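Cronbach's alpha, used here for internal consistency, follows directly from the item variances and the variance of the total score; a minimal sketch:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Perfectly consistent items (identical columns) yield alpha = 1
perfect = np.array([[1, 1, 1], [2, 2, 2], [4, 4, 4]], dtype=float)
alpha = cronbach_alpha(perfect)
```

Real response data produce values below 1; the subscale alphas reported in this study (.82 to .91) fall in the conventionally acceptable range.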
With the results obtained by the CFA, an Item Response Theory (IRT) approach was applied with the Rasch model, using the Andrich Rating Scale Model (RSM) for polytomous items, to assess item fit and confirm the unidimensionality of the subscales, based on infit and outfit mean-square (MnSq) values between 0.5 and 1.5; the difficulty of each item was also analyzed [40].
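The infit and outfit mean-square statistics used as the retention criterion are, respectively, the information-weighted and unweighted averages of squared standardized residuals. The sketch below shows this generic computation (not Winsteps' internal routine); all inputs are hypothetical.

```python
import numpy as np

def rasch_fit_mnsq(observed: np.ndarray, expected: np.ndarray,
                   variance: np.ndarray) -> tuple[float, float]:
    """Infit/outfit mean-square for one item across respondents.
    observed: raw responses; expected: model-expected scores;
    variance: model variance of each response under the Rasch model."""
    z2 = (observed - expected) ** 2 / variance       # squared standardized residuals
    outfit = z2.mean()                               # unweighted mean square
    infit = (variance * z2).sum() / variance.sum()   # information-weighted mean square
    return infit, outfit

# When every squared residual equals its model variance, both statistics are 1,
# i.e., the data show exactly the randomness the model predicts.
variance = np.array([0.20, 0.25, 0.25, 0.16])
expected = np.zeros(4)
observed = np.sqrt(variance)
infit, outfit = rasch_fit_mnsq(observed, expected, variance)
```

Values well above 1.5 indicate noisier responses than the model expects (the criterion that led to dropping three items here); values below 0.5 indicate overly deterministic responses.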
Pearson’s r correlations were used to assess the convergent and discriminant validity of the measure, taking the presence of correlation with a bullying measure [41] as evidence of convergent validity and the absence of correlation with attachment to the neighborhood [42] as evidence of discriminant validity.
As the last construct validity test, factorial invariance across gender was examined. Using the three-factor model (cyberaggressor, cybervictim, and cyberbystander), we computed a Multi-Group Confirmatory Factor Analysis (MGCFA) with males and females to address the configural and measurement invariance of the model. The changes in CFI (∆CFI) and RMSEA (∆RMSEA) caused by the invariance constraints were examined, with invariance rejected when ∆CFI ≤ -.01 or ∆RMSEA ≥ .015 [43].
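The ∆CFI/∆RMSEA decision rule is simple arithmetic over the fit indices of two nested models; a sketch using the cutoffs from [43], with all index values hypothetical:

```python
def invariance_holds(cfi_free: float, rmsea_free: float,
                     cfi_constrained: float, rmsea_constrained: float) -> bool:
    """Invariance is rejected when CFI drops by .01 or more, or RMSEA
    rises by .015 or more, after adding equality constraints."""
    delta_cfi = cfi_constrained - cfi_free
    delta_rmsea = rmsea_constrained - rmsea_free
    return delta_cfi > -0.01 and delta_rmsea < 0.015

# Hypothetical nested-model comparisons
ok = invariance_holds(0.913, 0.041, 0.910, 0.043)   # small changes: holds
bad = invariance_holds(0.913, 0.041, 0.895, 0.060)  # large changes: rejected
```

Applied step by step (loadings, then intercepts, then error variances), this rule reproduces the sequential testing reported in the Results.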
PA and convergent and discriminant validity were analyzed using SPSS 26 [44]; internal consistency and Rasch models using WINSTEPS 3.65.0 [45]; and CFA and MGCFA using AMOS [46].

3. Results

In the PA, random values generated by Monte Carlo simulations and the 95th percentile were used, showing that the optimal number of factors is three (Table 1), so the three subscales of the original measure were used in the Rasch calibration of items. In the first round of analysis, one item on each scale showed an infit or outfit value exceeding the established criteria, so three items were removed as inconsistent with the Rasch model (“Have you been harassed in an attempt to isolate you from your contacts on social networks?”, “Have you sent offensive and insulting messages by cell phone or Internet?”, “Have you seen someone send offensive and insulting messages via cell phone or the Internet?”). The analyses were performed again for each scale, where all item scores presented a satisfactory fit with their construct; infit values ranged from 0.75 to 1.42 and outfit values from 0.58 to 1.44 (Table 2).
Taking the results of the previous analysis as support, a CFA was performed using only the items with satisfactory values in the Rasch analysis. The initial model showed a poor fit [χ²(df) = 5876.518(816), χ²/df = 7.202, TLI = .88, CFI = .89, RMSEA = 0.061 (90% CI = .059-.062), SRMR = .042]. The modification indices were reviewed, covariances between some errors were allowed, and four items were eliminated: two from the cybervictimization dimension (“Have you been sent offensive and insulting messages via cell phone or the Internet?” and “Has someone blackmailed you, forcing you to do things you didn’t want in exchange for not divulging your private things online?”), one from cyberaggression (“Have you blackmailed or forced someone to do things they did not want in exchange for not divulging their private things on the Internet?”), and one from cyberobservers (“Have you seen how someone has been blackmailed or forced to do things they didn’t want in exchange for not divulging their private things on the Internet?”). This resulted in a three-dimensional model with an acceptable fit [χ²(df) = 2965.71(635), χ²/df = 4.670, TLI = .93, CFI = .94, RMSEA = 0.047 (90% CI = .045-.048), SRMR = .039].
To measure convergent validity, each dimension was correlated with the victim, aggressor, and bystander dimensions of the bullying measure, and significant correlations were obtained at the 0.01 level; discriminant validity was assessed through correlations with the Attachment to the Neighborhood measure, with which none of the three cyberbullying dimensions correlated significantly (Table 3).
Finally, the MGCFA was performed to test the measurement invariance of the cyberaggression scale. Initially, the configural invariance model (M1) was tested, in which factor loadings, intercepts, and error variances were freely estimated. The indices obtained (CFI = .913; RMSEA = .041; χ²/df = 3.806) indicated that the model fit the data adequately (Table 4). In Model 2, the factor loadings were constrained to be equal between men and women; compared with Model 1, ∆CFI was < 0.01 and ∆RMSEA was < 0.015. When comparing Model 3, in which the intercepts as well as the factor loadings were constrained across groups, with Model 2, there were no significant changes in CFI or RMSEA. Finally, in Model 4 (strict invariance), the error variances were also constrained; in the comparison with Model 3, ∆CFI was > 0.01 and ∆RMSEA > 0.015, contrary to expectations, resulting in partial invariance, but one sufficient to perform analyses of moderation effects between genders.

4. Discussion

The aim of the present study was to generate validity and reliability evidence for a cyberaggression scale in a population of adolescents from northwestern Mexico. The Garaigordobil Cyberbullying Test [33] was designed to obtain information on the different roles of cyberbullying in the Spanish population. However, even though the original measure was in Spanish, it was necessary to adapt the items to the context and terms used in Mexico so that Mexican adolescents would understand the measure adequately.
The results of this study contribute to research on cyberaggression in adolescents in Mexico. The measure in this article differs from other measures by detecting the different roles, including the bystander, contributing to the state of knowledge about cyberaggression and confirming the presence of an actor that has not been sufficiently explored in the cyber context [7]. This has practical implications, since it is an adequate measure for evaluating cyberaggression in Mexican adolescents and facilitates the identification of aggressors, victims, and bystanders of cyberaggression, and thus the planning of actions to address the problem.
The elimination of certain items from the cyberaggression measure, based on rigorous analysis results, allowed us to obtain a more precise and reliable measure of the phenomenon. By performing analyses such as the Rasch analysis and confirmatory factor analysis, we can identify problems in the items of a scale and eliminate those that do not meet the standards necessary to guarantee its validity and reliability. The removal of items that do not work as they should, due to inconsistencies in responses, discrimination issues, social sensitivity, redundancy, or ambiguity, can significantly improve the quality of the measure and its ability to adequately capture the phenomenon it is intended to study. In addition, these analyses not only improve the quality of the scales, but also help to guarantee the applicability of the results in intervention and prevention contexts.
Each of the analyses and indicators presented in this study is important, since they represent a methodological contribution to the field of cyberaggression research. Among these, the analysis of invariance by gender is noteworthy, added in this study to the analyses previously carried out in the validation by Garaigordobil [33]. This analysis meets the need identified by Chun et al. [3] for measures sensitive to this variable, allowing comparative analyses of these groups against the means of other factors [47]. Mexican researchers will be able to use this measure with confidence that it is adequate, since the psychometric tests necessary to support its validity and reliability have been carried out. Under current standards, validity and reliability are not properties of the measures themselves; rather, they represent the legitimacy of their use for specific purposes [34]. The measure is useful for researchers because it allows an orderly and systematic application of theory, with reliable statistical models that provide precision and specificity in measurement. Having valid and reliable measures for populations with specific sociocultural characteristics contributes to understanding the phenomenon studied, since it makes it possible to analyze the individual differences of the subjects more precisely.

Limitations and Recommendations

Our study has some limitations that must be acknowledged. First, we used a cross-sectional design, collecting data at a single point in time. Although this design is useful for studying certain phenomena, it has limitations, particularly with respect to generalizability: our findings may not fully represent all regions of the country, given differences in demographic, social, cultural, or economic factors that may affect the prevalence, incidence, or severity of the phenomenon under investigation. Cross-cultural studies are essential to assess the replicability of the measurement model in culturally diverse populations. Consequently, caution should be exercised when interpreting our results and generalizing them to other contexts. Finally, data collection relied on self-reports, so the students’ responses could be influenced by social desirability [48].

5. Conclusions

We strongly recommend that researchers in other regions replicate the validity procedure that we conducted in this study, to advance our understanding of the phenomenon and inform evidence-based interventions and policies. As researchers, it is essential that we ensure the scales and measures we use are valid and reliable. Invalid or unreliable measures can lead to inaccurate or inconsistent findings, which can have serious implications for research and practice. Therefore, we believe it is crucial for researchers to rigorously test the validity of their measures before using them in their studies. By doing so, we can increase the confidence and trustworthiness of our findings and advance the scientific understanding of cyberaggression.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the ethical code of the Mexican Society of Psychology [36].

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Paat, Y.-F., & Markham, C. Digital crime, trauma, and abuse: Internet safety and cyber risks for adolescents and emerging adults in the 21st century. Social Work in Mental Health 2021, 19, 18–40. [CrossRef]
  2. Machackova, H., Dedkova, L., Sevcikova, A., & Cerna, A. Bystanders’ Supportive and Passive Responses to Cyberaggression. Journal of School Violence 2018, 17, 99–110. [CrossRef]
  3. Chun, J., Lee, J., Kim, J., & Lee, S. An international systematic review of cyberbullying measurements. Computers in Human Behavior 2020, 113, 106485. [CrossRef]
  4. Corcoran, L., Guckin, C. M., & Prentice, G. Cyberbullying or Cyber Aggression?: A Review of Existing Definitions of Cyber-Based Peer-to-Peer Aggression. Societies 2015, 5, 2. [CrossRef]
  5. Grigg, D. W. W. Cyber-aggression: Definition and concept of cyberbullying. Australian Journal of Guidance and Counselling 2010, 20, 143–156. [CrossRef]
  6. Pyżalski, J. From cyberbullying to electronic aggression: Typology of the phenomenon. Emotional and Behavioural Difficulties 2012, 17, 305–317. [Google Scholar] [CrossRef]
  7. González, V., Prendes, M. P., & Bernal, C. Investigación sobre adolescentes que son observadores de situaciones de ciberacoso. Revista de Investigación Educativa 2020, 38, 259–273. [CrossRef]
  8. Cao, X., Khan, A. N., Ali, A., & Khan, N.A. Consequences of Cyberbullying and Social Overload while Using SNSs: A Study of Users’ Discontinuous Usage Behavior in SNSs. Information Systems Frontiers 2020, 22, 1343–1356. [CrossRef]
  9. Baier, D. Consequences of Bullying on Adolescents’ Mental Health in Germany: Comparing Face-to-Face Bullying and Cyberbullying. Journal of Child and Family Studies 2019, 11. [Google Scholar] [CrossRef]
  10. Zych, I., Baldry, A. C., Farrington, D. P., & Llorent, V. J. Are children involved in cyberbullying low on empathy? A systematic review and meta-analysis of research on empathy versus different cyberbullying roles. Aggression and Violent Behavior 2019, 45, 83–97. [CrossRef]
  11. Jawaid, A., Riby, D. M., Owens, J., White, S. W., Tarar, T., & Schulz, P.E. “Too withdrawn” or “too friendly”: Considering social vulnerability in two neuro-developmental disorders. Journal of Intellectual Disability Research: JIDR 2012, 56, 335–350. [CrossRef]
  12. Elgar, F. J., Napoletano, A., Saul, G., Dirks, M. A., Craig, W., Poteat, V. P., Holt, M., & Koenig, B. W. Cyberbullying Victimization and Mental Health in Adolescents and the Moderating Role of Family Dinners. JAMA Pediatrics 2014, 168, 1015–1022. [CrossRef]
  13. Vega-Cauich, J. I. Prevalencia del bullying en México: Un meta-análisis del bullying tradicional y cyberbullying. 2019, 15, 18. [Google Scholar]
  14. Salmivalli, C., Lagerspetz, K., Björkqvist, K., Österman, K., & Kaukiainen, A. Bullying as a group process: Participant roles and their relations to social status within the group. Aggressive Behavior 1996, 22, 1–15. [CrossRef]
  15. Herrera-López, M., Romera, E. M., & Ortega-Ruiz, R. Bullying y Cyberbullying en Latinoamérica. Un estudio bibliométrico. Revista Mexicana de Investigación Educativa 2018, 23, 125–155.
  16. Smith, P. K., Görzig, A., & Robinson, S. (2019). Cyberbullying in Schools: Cross-Cultural Issues. In G. W. Giumetti & R. M. Kowalski (Eds.), Cyberbullying in Schools, Workplaces, and Romantic Relationships (1st ed., pp. 49–68). Routledge.
  17. INEGI. Comunicado de prensa (352/21). INEGI, IFT, SCT. 2021. [Google Scholar]
  18. INEGI. Módulo sobre Ciberacoso MOCIBA 2020. Principales Resultados. INEGI. 2021. [Google Scholar]
  19. Frías, S. M., & Finkelhor, D. Victimizations of Mexican youth (12–17 years old): A 2014 national survey. Child Abuse & Neglect 2017, 67, 86–97. [CrossRef]
  20. Gámez-Guadix, M., Villa-George, F., & Calvete, E. Psychometric Properties of the Cyberbullying Questionnaire (CBQ) Among Mexican Adolescents. Violence and Victims 2014, 29, 232–247. [CrossRef]
  21. Martínez, R., Pozas, J., Jiménez, K., Morales, T., Miranda, D. A., Delgado, M. E., & Cuenca, V. Prevención de la violencia escolar cara a cara y virtual en bachillerato. Psychology, Society & Education 2015, 7, 2. [CrossRef]
  22. Salmivalli, C. Participant role approach to school bullying: Implications for interventions. Journal of Adolescence 1999, 22, 453–459. [Google Scholar] [CrossRef]
  23. Austin, S. M., Reynolds, G. P., & Barnes, S. L. School Leadership and Counselors Working Together to Address Bullying. Education 2012, 133, 283–290.
  24. Salman Almahasnih, A. F. The Phenomenon of Bullying: A Case Study of Jordanian Schools at Tafila. World Journal of Education 2019, 9, 243. [Google Scholar] [CrossRef]
  25. Kowalski, R. M., Limber, S. P., & McCord, A. A developmental approach to cyberbullying: Prevalence and protective factors. Aggression and Violent Behavior 2019, 45, 20–32. [CrossRef]
  26. Lozano-Blasco, R., Cortés-Pascual, A., & Latorre-Martínez, M. P. Being a cybervictim and a cyberbully – The duality of cyberbullying: A meta-analysis. Computers in Human Behavior 2020, 111, 106444. [CrossRef]
  27. Kowalski, R. M., Giumetti, G. W., & Cox, H. (2019). Differences in Technology Use Among Demographic Groups: Implications for Cyberbullying Research. In R. M. Kowalski & G. W. Giumetti (Eds.), Cyberbullying in Schools, Workplaces, and Romantic Relationships (1st ed., pp. 15–31). Routledge.
  28. Ansary, N. S. Cyberbullying: Concepts, theories, and correlates informing evidence-based best practices for prevention. Aggression and Violent Behavior 2020, 50, 101343. [Google Scholar] [CrossRef]
  29. Morgado, F. F. R., Meireles, J. F. F., Neves, C. M., Amaral, A. C. S., & Ferreira, M. E. C. Scale development: Ten main limitations and recommendations to improve future research practices. Psicologia: Reflexão e Crítica 2017, 30, 3. [CrossRef]
  30. Beaton, D. E., Bombardier, C., Guillemin, F., & Ferraz, M. B. Guidelines for the Process of Cross-Cultural Adaptation of Self-Report Measures. Spine 2000, 25, 3186–3191. [CrossRef]
  31. Ortiz-Gutiérrez, S., & Cruz-Avelar, A. Proceso de traducción y adaptación cultural de instrumentos de medición en salud. Actas Dermo-Sifiliográficas 2018, 109, 202–206. [CrossRef]
  32. Herdman, M., Fox-Rushby, J., & Badia, X. A model of equivalence in the cultural adaptation of HRQoL instruments: The universalist approach. Quality of Life Research: An International Journal of Quality of Life Aspects of Treatment, Care and Rehabilitation 1998, 7, 323–335. [CrossRef]
  33. Garaigordobil, M. Psychometric Properties of the Cyberbullying Test, a Screening Instrument to Measure Cybervictimization, Cyberaggression, and Cyberobservation. Journal of Interpersonal Violence 2017, 32, 3556–3576. [Google Scholar] [CrossRef]
  34. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. Estándares para Pruebas Educativas y Psicológicas. American Educational Research Association. 2018. [Google Scholar] [CrossRef]
  35. Dodeen, H. M. Effectiveness of Valid Mean Substitution in Treating Missing Data in Attitude Assessment. Assessment & Evaluation in Higher Education 2003, 28, 505–513. [Google Scholar] [CrossRef]
  36. Sociedad Mexicana de Psicología, A.C. Código Ético del Psicólogo, 4a ed.; Trillas, 2007. [Google Scholar]
  37. Hayton, J. C., Allen, D. G., & Scarpello, V. Factor Retention Decisions in Exploratory Factor Analysis: A Tutorial on Parallel Analysis. Organizational Research Methods 2004, 7, 191–205. [CrossRef]
  38. Kline, R. B. Principles and practice of structural equation modeling. 2016. [Google Scholar]
  39. Marsh, H. W., Hau, K.-T., & Wen, Z. In Search of Golden Rules: Comment on Hypothesis-Testing Approaches to Setting Cutoff Values for Fit Indexes and Dangers in Overgeneralizing Hu and Bentler’s (1999) Findings. Structural Equation Modeling: A Multidisciplinary Journal 2004, 11, 320–341. [CrossRef]
  40. Linacre, J. M. Winsteps® Rasch measurement computer program User’s Guide. Version 3.61.2. Winsteps.com. 2007. [Google Scholar]
  41. Fregoso, D., Vera, J. A., Duarte, K. G., Tánori, J., & Cuevas, M. C. (in press). Efecto de familia, comunidad y escuela sobre la percepción de violencia según el observador alentador y defensor. Interdisciplinaria.
  42. Duarte, K. G. Percepción de los estudiantes de secundaria sobre la socialización en las colonias y su relación con la violencia escolar. [Unpublished master's thesis]. Centro de Investigación en Alimentación y Desarrollo A.C. 2018. [Google Scholar]
  43. Chen, F. F. Sensitivity of Goodness of Fit Indexes to Lack of Measurement Invariance. Structural Equation Modeling: A Multidisciplinary Journal 2007, 14, 464–504. [Google Scholar] [CrossRef]
  44. IBM Corp. IBM SPSS Statistics for Windows (26.0) [Computer Program]. IBM Corp, 2019. [Google Scholar]
  45. Linacre, J. M. Winsteps® (3.65.0) [Computer Program]. Winsteps.com. 2009. [Google Scholar]
  46. Arbuckle, J. L. Amos (Version 22) [Computer Program]. IBM SPSS, 2013. [Google Scholar]
  47. Dimitrov, D. M. Testing for Factorial Invariance in the Context of Construct Validation. Measurement and Evaluation in Counseling and Development 2010, 43, 121–149. [Google Scholar] [CrossRef]
  48. Fisher, R. J., & Katz, J. E. Social-desirability bias and the validity of self-reported values. Psychology & Marketing 2000, 17, 105–120. [CrossRef]
Table 1. Actual and Random Eigenvalues for Parallel Analysis.
Actual Eigenvalue   Average Random Eigenvalue   95th Percentile Random Eigenvalue
18.712 1.301 1.328
3.429 1.271 1.292
2.803 1.248 1.266
1.217 1.229 1.247
1.037 1.212 1.225
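The retention rule behind Table 1 (Horn's parallel analysis) keeps a factor only while its actual eigenvalue exceeds the corresponding eigenvalue from random data. A minimal sketch in Python, using the 95th-percentile values from Table 1 (function and variable names are illustrative):

```python
# Horn's parallel analysis retention rule: keep the k-th factor only if its
# actual eigenvalue exceeds the k-th eigenvalue obtained from random data.
def factors_to_retain(actual, random_95th):
    """Count leading factors whose actual eigenvalue beats the random one."""
    count = 0
    for real, rand in zip(actual, random_95th):
        if real > rand:
            count += 1
        else:
            break  # stop at the first factor that fails the comparison
    return count

# Values taken from Table 1
actual = [18.712, 3.429, 2.803, 1.217, 1.037]
random_95th = [1.328, 1.292, 1.266, 1.247, 1.225]

print(factors_to_retain(actual, random_95th))  # → 3
```

With these values, three factors are retained, consistent with the three-role structure of the measure (cybervictim, cyberaggressor, cyberbystander).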
Table 2. Rasch parameters of the measure.
α Infit Outfit Difficulty
Min Max Min Max Min Max
Cybervictim
Model 1 a .93 0.67 1.46 0.49 1.48 -0.63 0.35
Model 2 b .92 0.75 1.42 0.58 1.44 -0.60 0.36
Cyberaggressor
Model 1 a .95 0.84 1.60 0.50 1.57 -0.79 0.23
Model 2 b .95 0.86 1.26 0.60 1.33 -0.25 0.19
Cyberbystander
Model 1 a .94 0.83 1.41 0.67 1.51 -0.33 0.39
Model 2 b .93 0.84 1.30 0.69 1.21 -0.34 0.38
a Model with all participants and all items. b Model without items with high infit/outfit values.
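Model 2 in Table 2 was re-estimated after removing items with high infit/outfit values. A common screening guideline (e.g., Linacre's suggestion that mean-square fit values between roughly 0.5 and 1.5 are productive for measurement) can be sketched as follows; the item statistics below are hypothetical, not the study's actual item-level data:

```python
# Screen Rasch item fit: flag items whose infit or outfit mean-square (MNSQ)
# falls outside an acceptable band (0.5–1.5 is a commonly cited guideline).
def misfitting_items(items, low=0.5, high=1.5):
    """Return names of items whose infit or outfit MNSQ is out of range."""
    flagged = []
    for name, infit, outfit in items:
        if not (low <= infit <= high) or not (low <= outfit <= high):
            flagged.append(name)
    return flagged

# Hypothetical item statistics: (item, infit MNSQ, outfit MNSQ)
items = [
    ("item01", 0.95, 0.90),
    ("item02", 1.60, 1.57),  # infit above 1.5 -> flagged
    ("item03", 0.67, 0.49),  # outfit below 0.5 -> flagged
]

print(misfitting_items(items))  # → ['item02', 'item03']
```

Dropping the flagged items and refitting yields the tighter fit ranges shown for Model 2 in each subscale.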
Table 3. Convergent and discriminant validity.
Victim of Bullying   Aggressor of Bullying   Bystander of Bullying   Attachment to the Neighborhood
Cybervictim .454** .436** .320** -.016
Cyberaggressor .324** .378** .216** .014
Cyberbystanders .327** .337** .313** -.003
** Correlation is significant at the 0.01 level.
Table 4. Factorial Invariance.
χ² (df) | χ²/df | CFI | RMSEA (90% CI) | Contrast | Δχ² (p > .05) | ΔCFI (≤ .01) | ΔRMSEA (≤ .015)
Model 1. 4833.124 (1270) | 3.806 | .913 | .041 (.039-.042) | | | |
Model 2. 5080.127 (1305) | 3.893 | .908 | .041 (.040-.043) | M2 vs M1 | 247.003* | -.005 | 0
Model 3. 5256.986 (1343) | 3.914 | .904 | .041 (.040-.043) | M3 vs M2 | 176.858* | -.004 | 0
Model 4. 11017.737 (1414) | 7.792 | .765 | .063 (.062-.064) | M4 vs M3 | 5760.752* | -.139 | .022
Notes. Model 1= Configural Invariance; Model 2= M1+ Weak Measurement Invariance; Model 3= M2+ Strong Measurement Invariance; Model 4= M3+ Strict Measurement Invariance; *p < 0.0001.
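The decision rules applied in Table 4 follow Chen [43]: a more constrained nested model is accepted when CFI decreases by no more than .01 and RMSEA increases by no more than .015. A minimal sketch applying these cutoffs to the table's contrasts (labels are illustrative):

```python
# Chen's (2007) criteria for comparing nested invariance models: accept the
# more constrained model if CFI drops by at most .01 and RMSEA rises by at
# most .015 relative to the less constrained model.
def invariance_holds(delta_cfi, delta_rmsea, cfi_cut=-0.01, rmsea_cut=0.015):
    return delta_cfi >= cfi_cut and delta_rmsea <= rmsea_cut

# Contrasts from Table 4: (ΔCFI, ΔRMSEA)
contrasts = {
    "M2 vs M1 (weak)":   (-0.005, 0.0),
    "M3 vs M2 (strong)": (-0.004, 0.0),
    "M4 vs M3 (strict)": (-0.139, 0.022),
}

for label, (dcfi, drmsea) in contrasts.items():
    print(label, invariance_holds(dcfi, drmsea))
# Weak and strong invariance hold; strict invariance is rejected.
```

This reproduces the pattern in Table 4: configural, weak, and strong invariance are supported across gender, while strict (residual) invariance is not.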
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.