Review Analysis
State of the Art of AI-Based Neuroimaging Technology: Neuroprediction involves the use of structural or functional brain characteristics to forecast treatment outcomes, prognoses, and behavior. The use of neurovariables, though novel, does not in itself raise fundamentally new ethical issues, at least for now (Morse, 2015). Effective brain-mapping technologies must still overcome a number of challenges, such as continually observing and modulating neural activity. One response is to replace simple open-loop neurostimulation devices with closed-loop approaches that respond to the moment-to-moment state of the brain (Herron et al., 2017). Novel experimental frameworks are needed that leverage computational approaches able to rapidly perceive, interpret, and modify vast volumes of data from behaviorally relevant brain circuits (Redish and Gordon, 2016); AI/ML in computational psychiatry is one such emerging approach.
Explainable artificial intelligence (XAI), a relatively new set of methodologies, combines sophisticated AI and ML algorithms with potent explanatory methods to produce explainable solutions that have been successful in a variety of domains (Fellous et al., 2019). Recent studies have shown that XAI may guide the characterization of basic brain circuit changes and therapeutic interventions (Holzinger et al., 2017; Langlotz et al., 2019). XAI for neurostimulation in mental health is a development of the brain-machine interface (BMI) design (Vu et al., 2018). Multivoxel pattern analysis (MVPA) studies multivoxel patterns in the human brain, combining data from several voxels within a region to distinguish between subtle cognitive activities or stimulus categories (Ombao et al., 2017). Noninvasive anatomical and functional neuroimaging technologies have advanced significantly over the last 10 years, yielding large quantities of data and supporting statistical software. Modeling and learning approaches for high-dimensional datasets are therefore crucial for applying statistical machine learning to these enormous volumes of neuronal data with increasing accuracy (Alexandre et al., 2014). In motor decision-making, a BMI can intervene both before and during movement execution, and an initiated movement may still be stopped up to 200 ms after it has started. MVPA methods have accordingly gained popularity in neuroimaging in health and clinical research (Hampshire and Sharp, 2015). Population-level neural data can be decoded to show that self-initiated movements can be vetoed within this 200 ms window after being triggered (Schultze-Kraft et al., 2016). To some extent, intentions, perceptual states, and healthy versus diseased brains can be distinguished, as in lie-detection methods (Blitz, 2017). Clinical applications focus on neurological disorders, given the broad agreement that response inhibition is an emergent property of a network of distinct brain regions (Jiang et al., 2019).
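To make the MVPA idea concrete, the sketch below is a minimal illustration, not the pipeline of any cited study: the voxel array, trial counts, and condition labels are synthetic assumptions. It trains a linear classifier to decode a cognitive condition from multivoxel activity patterns and estimates accuracy with cross-validation.

```python
# Minimal MVPA-style decoding sketch (synthetic data, hypothetical shapes).
# Each row is one trial's activity pattern across voxels in a region of
# interest; the label is the cognitive condition presented on that trial.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500          # assumed sizes for illustration
y = rng.integers(0, 2, size=n_trials)  # two conditions, e.g., face vs. house
X = rng.normal(size=(n_trials, n_voxels))
X[y == 1, :50] += 0.4                  # weak condition signal in a voxel subset

# A linear SVM is a common MVPA decoder; scaling each voxel is standard.
decoder = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(decoder, X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

The same pattern generalizes to more than two conditions or to region-by-region "searchlight" analyses; only the data loading changes.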
Behavioral traits can be associated with features of the human brain, opening up new opportunities for constructing predictive algorithms, including prediction of an individual's criminal disposition (Mirabella and Lebedev, 2017). The validity of such prediction models is judged by their ability to generalize; for most learning algorithms, the standard practice is to estimate generalization performance on held-out data. The adoption of neuroprediction, as defined here, requires approaches that carry inference from the group level down to individual predictions (Tortora et al., 2020). The progress of neuroimaging in conjunction with AI, particularly ML techniques such as brain mapping, fMRI, convolutional neural networks (CNNs), natural language processing (NLP) and speech recognition, has resulted in brain-reading devices backed by cloud-based neuro-biomarker banks. Potential future applications of these technologies include deception detection, neuromarketing, and brain-computer interfaces (BCIs). Some of these methods may prove useful in forensic psychiatry (Meynen, 2019). The prospective use of fMRI has been demonstrated in forecasting rates of recidivism among individuals with criminal backgrounds (Aharoni et al., 2013). Studies have thus focused on the use of neural data for predictive functions within criminal justice.
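Since the validity of such models rests on out-of-sample generalization, the sketch below shows the standard estimation procedure: fit a classifier on brain-derived features and score it on held-out folds rather than on the training data. It is a hypothetical illustration with synthetic features, not the Aharoni et al. protocol.

```python
# Hedged sketch of generalization estimation for a neuroprediction model.
# Features and labels are synthetic stand-ins for brain-derived measures
# (e.g., regional activations) and observed rearrest outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_features = 100, 8
X = rng.normal(size=(n_subjects, n_features))   # assumed neuro-features
y = (X[:, 0] + rng.normal(scale=1.5, size=n_subjects) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"Held-out AUC: {auc.mean():.2f} (training-set scores would overstate this)")
```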
Convergence of AI and Neuroprediction in Forensics: Research has concentrated on structural and functional neuromarkers of personality disorders whose main characteristic is persistent antisocial conduct, such as antisocial personality disorder (ASPD) and psychopathy, as these are most strongly correlated with high rates of recidivism (Umbach et al., 2015). Given the drive to collect biomarkers of the "criminal" brain and to integrate neurobiology into assessment, neuroprediction should aid socio-rehabilitative strategies rather than curb individual rights (Coppola, 2018). Various techniques can improve the accuracy of risk evaluations and help uncover effective therapies in forensic psychiatry. This method, known as "A.I. Neuroprediction" (Zico Junius Fernando et al., 2023), involves identifying neurocognitive factors that might predict the likelihood of reoffending. It is necessary to identify the enduring effects of these tools while recognizing the contributions of neuroscience and artificial intelligence to the assessment of the risk of violence (Bzdok and Meyer-Lindenberg, 2018).
The combination of neuroprediction and AI has potential for supporting law enforcement and judicial institutions in early risk assessment, intervention, and rehabilitation initiatives (Gaudet et al., 2016; Jackson et al., 2017; Greely & Farahany, 2019; Hayward & Maas, 2020). However, this confluence also presents ethical, legal, and privacy problems: privacy (Farayola et al., 2023), bias and discrimination (Ntoutsi et al., 2020; Srinivasan & Chander, 2021; Belenguer, 2022; Shams et al., 2023), consent and coercion (Ghandour et al., 2013; Klein & Ojemann, 2016; Rebers et al., 2016), and cognitive liberty (Muñoz, 2023; Shah et al., 2021; Daly et al., 2019; Lavazza, 2018; Ienca & Andorno, 2017; Sommaggio et al., 2017; Ienca, 2017). The ethical consequences of anticipating criminal propensities, and the possible exploitation of such insights, underscore the necessity for rigorous ethical frameworks and strict laws (Poldrack et al., 2018; Eickhoff & Langner, 2019). Moreover, guaranteeing openness, accountability, and fairness in the employment of these technologies inside the criminal justice system becomes crucial (Meynen, 2019). The use of AI-powered brain-mapping technology (Belenguer, 2022) to predict acts of violence and subsequent rearrests is a cause for concern. Such methodologies may be used in the future within forensic psychiatry and criminal justice; however, diluting the right to privacy (Ligthart, 2019) can lead to serious ethical and legal consequences.
Technologies used in crime detection, investigation and prediction: This section covers traditional AI, computer vision, data mining and AI decision-making models for the criminal justice system. Between 2018 and 2023, there was a large influx of literature reviews across interdisciplinary domains discussing the various technologies and software instruments used in the criminal justice system (Varun Mandalapu et al., 2023). Machine learning is a subset of artificial intelligence, while deep learning and data mining methods are subsets of ML. Machine learning methods use statistical models and algorithms to first analyze and subsequently predict from a set of data, whereas deep learning uses neural networks with multiple layers to model complex and intricate relationships between inputs and outputs (C. Janiesch et al., 2021; W. Safat et al., 2021). ML techniques involve training datasets, generated mainly through supervised and unsupervised learning methods. Traditional AI and ML technologies, such as support vector machines, decision trees, random forests and logistic regression, have been heavily exploited for analyzing the facts of a crime and identifying patterns in order to predict similar criminal activities (S. Kim et al., 2018). These traditional tools also achieve very high accuracy in anomaly detection and crime data analysis with limited datasets (S. Goel et al., 2021). Notable examples of ML regression techniques include the ARIMAX method applied in the city of Yogyakarta, with an RMSE of 6.68 (E.P. Utomo et al., 2018); crime data analysis via ARIMA (C. Catlett et al., 2019); random forest (RF), RepTree and ZeroR (D.M. Raza et al., 2021); and RMSE values for Chicago crimes of 57.8 (CDR1), 29.85 (CDR2), and 16.19 (CDR3) (C. Catlett et al., 2014). Clustering-related methods include linear regression (LR), logistic regression (LOR) and gradient boosting, applied to Saint Petersburg, Russia, crime data with an R-squared of 0.9 (V. Ingilevich and S. Ivanov, 2018). Random forest regression (RFR) applied to data from the Department of Informatics of the Brazilian Public Health System (DATASUS) achieved up to 97% accuracy, with an adjusted R-squared of 80% on average (L.G.A. Alves et al., 2018). Deep learning algorithms such as convolutional and recurrent neural networks (RNNs) are promising for crime prediction (Sarker, 2021). Trained on crime data with spatial or temporal components, predictive policing based on these algorithms has been found to be quite accurate in specific cities in the USA (A. Meijer and M. Wessels, 2019). Predictive models often use features such as time, location, and type of crime incident to predict future criminal activities and identify criminal hotspots (S. Hossain et al., 2020).
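As an illustration of this feature-based approach, the snippet below trains a random forest on time, location and crime-type features to flag likely hotspot cells. It is a minimal sketch with synthetic records, not any cited system; the feature names and the planted hotspot pattern are assumptions.

```python
# Hedged sketch: random-forest hotspot prediction from time/location features.
# Records are synthetic; real systems would use historical incident data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(2)
n = 5000
hour = rng.integers(0, 24, n)            # time of incident
cell_x = rng.integers(0, 10, n)          # coarse grid location
cell_y = rng.integers(0, 10, n)
crime_type = rng.integers(0, 4, n)       # encoded offense category
X = np.column_stack([hour, cell_x, cell_y, crime_type])
# Synthetic ground truth: some grid cells are busier at night.
hotspot = (cell_x < 3) & (cell_y < 3) & ((hour >= 20) | (hour <= 2))
y = (hotspot | (rng.random(n) < 0.05)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=2))
```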
With computer vision and video analysis used for crime prediction (Neil Shah et al., 2021), these technologies analyze video footage from surveillance cameras at various locations to detect, identify and classify criminal activities such as theft, assault and robbery. Drone and aerial technologies likewise conduct surveillance when monitoring a city's safety and security. Deep learning algorithms (M. Saraiva et al., 2022) are used for analyzing criminal data from various sources, enhancing the responsiveness of crime prevention in real time. Data mining methods (T. Chandrakala et al., 2020) stand as a powerful asset supporting criminal investigative procedures. In digital forensics, a technology known as the NSVNN (Umar Islam et al., 2023) is currently being developed and is considered reliable for anomaly detection in criminal investigation. Other deep learning mechanisms, such as the deep belief network (DBN) and clustering-based methods (Ashraf et al., 2022), provide novel approaches for anomaly identification in digital forensics. Deep neural networks (DNNs) can also use a feature-level data fusion method (Kang HW, Kang HB, 2017) to efficiently fuse multimodal data from several domains within related environmental contexts. Researchers have also used Google TensorFlow to forecast crime hotspots, evaluating RNN architectures (Zhuang Y, 2017) on precision, accuracy and recall. A comparative study of violent crime patterns (McClendon L, Meghanathan N, 2015) was carried out using the open-source data mining software WEKA: three algorithms, namely linear regression, additive regression and decision stump, were implemented to determine the efficiency and efficacy of the ML algorithms. This approach was intended to predict violent crime patterns and determine criminal hotspots, profiles and trends.
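In the same spirit as that WEKA comparison, the snippet below compares a linear regression against a decision stump (a depth-1 decision tree) on a simple crime-rate regression task. It is a hedged re-creation in Python with synthetic data, not the original study's dataset or its additive-regression implementation.

```python
# Hedged sketch mirroring a WEKA-style comparison: linear regression vs.
# a decision stump (depth-1 tree) on synthetic community-level crime rates.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 5))                    # assumed socio-economic features
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

models = {
    "linear regression": LinearRegression(),
    "decision stump": DecisionTreeRegressor(max_depth=1),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.2f}")
```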
Fairness versus Bias: The process models built around these technologies are often accused of being biased, with no profound fairness in their predictive or deterministic algorithms. In the justice system, fairness means the rule of law; when AI-based investigation and justice delivery occur, fairness and the absence of bias are of paramount importance. AI algorithms must prioritize fairness as their use expands across many jurisdictions worldwide for forecasting recidivism risk. In one study, the discrimination, bias, fairness and trustworthiness of AI algorithms were measured to ensure the absence of prejudice (Daniel Varona et al., 2022). However, unchecked discrimination creates unfairness in AI algorithms for predicting recidivism (Ninareh Mehrabi et al., 2021). Scholars, following the logic of "garbage in, garbage out" (GIGO) or "rubbish in, rubbish out" (RIRO), have attributed unfair AI algorithms to the quality of pretrained datasets. Discrimination in AI/ML algorithms has been defined (Verma & Rubin, 2018) as "bias in modeling, training, and usage" (Ferrer, 2021). Arguably, algorithms cannot eliminate discrimination alone, because the outcomes are shaped by the initial data received; when the underlying data are unfair, AI systems can perpetuate widespread inequality (Chen, 2023). Frameworks have been proposed for discovering and removing two types of discrimination (Lu Zhang et al., 2016), where indirect discrimination follows from direct discrimination: as with a group classifier (direct discrimination based on historical data), tuning a neutral nonprotected attribute in the system (indirect discrimination) causes unfairness and inequality. Direct discrimination has been analyzed to audit black-box algorithms and mitigate bias rooted in pretrained datasets or attributes, covering discrimination, bias, unfairness, and untrustworthiness (Daniel Varona et al., 2022). Additionally, a novel probabilistic formulation has been introduced for indirect (unintended and not necessarily unfair) data preprocessing to limit control-group discrimination and distortion in individual datasets (Flavio du Pin Calmon et al., 2018). Sources of unfairness are not limited to discrimination; they also include bias. The types of bias include data bias, model bias and model evaluation bias, as described in the review (Michael Mayowa Farayola et al., 2023). The use of historical data has been found to cause measurement bias (Richard et al., 2023; Dana Pessach et al., 2022; Eike Petersen et al., 2023). Even fair data are not sufficient, as a biased model can yield unfair predictions without justification (Davinder Kaur et al., 2022). One study (Arpita Biswas and Suvam Mukherjee, 2021) presents a use-case scenario in which unfairness can increase due to incorrect evaluation metrics, i.e., biased feedback. A fairness pipeline model comprising preprocessing, in-processing and postprocessing steps has been constructed (Mingyang Wan et al., 2023; Felix Petersen, 2021). While preprocessing safeguards the ethical growth of the AI model, the in-processing phase focuses on tuning the algorithm, and the postprocessing phase targets the assessment stage of the AI lifecycle to address concerns relating to prejudice and bias.
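As a concrete instance of the preprocessing stage of that pipeline, the snippet below computes instance weights that make group membership and outcome statistically independent in the training data. It is a minimal sketch on synthetic data in the general spirit of reweighing approaches, not the implementation of any pipeline cited above.

```python
# Hedged sketch of a preprocessing-stage fairness intervention: reweighing.
# Each (group, label) cell gets weight expected_freq / observed_freq so that
# group membership and outcome become independent under the weights.
import numpy as np

rng = np.random.default_rng(5)
n = 8000
group = rng.integers(0, 2, n)
# Planted dependence: positive outcomes are rarer in group 0.
label = (rng.random(n) < np.where(group == 1, 0.5, 0.3)).astype(int)

weights = np.empty(n)
for g in (0, 1):
    for y in (0, 1):
        cell = (group == g) & (label == y)
        expected = (group == g).mean() * (label == y).mean()
        weights[cell] = expected / cell.mean()

# Weighted positive rates are now equal across groups:
for g in (0, 1):
    m = group == g
    print(f"group {g}: weighted positive rate = "
          f"{np.average(label[m], weights=weights[m]):.3f}")
```

The weights would then be passed to a downstream learner, e.g., `LogisticRegression().fit(X, label, sample_weight=weights)`, so the in-processing phase trains on a debiased distribution.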
AI delivering Justice: Neurodata and other neural biomarkers used to predict recidivism can clearly be of interest for additional purposes, such as to health insurers or when evaluating potential employees, which also raises consent issues (Caulfield and Murdoch, 2017). Artificial intelligence should not be allowed to hallucinate in critical arenas such as the criminal justice system. Data integrity is likewise imperative: a thorough examination of pretrained data is needed to detect and correct biases at their origin. The admissibility in court of neurological evidence gathered using neuroimaging methods such as fMRI has been doubted in legal cases in most developed nations, as in United States v. Jones (2012). Algorithmic transparency can never be negated either; closed-source risk assessment tools therefore need to be reconsidered. Courts have encountered challenges in assessing the dependability and pertinence of such evidence. AI also plays an impactful role in sentencing and decision-making across many nations, and a range of judicial rulings has addressed the use of AI algorithms in sentencing. The case of Wisconsin v. Loomis (2016) in the United States highlighted the need for openness in the use of AI-generated risk assessments within sentence determinations. Carpenter v. United States (2018) highlighted the constitutional consequences of using people's personal data for predictive objectives, addressing apprehensions around privacy and data gathering.
The COMPAS algorithm (Belenguer, 2022), developed by Northpointe, is a tool used in U.S. courts to assess the likelihood of a defendant committing another offense. It uses risk assessment scales to predict general and violent recidivism, as well as pretrial offending. The algorithm's practitioner guide describes the behavioral and psychological factors used to predict reoffending and criminal trajectories. The General Recidivism Scale predicts the probability of engaging in new criminal behavior after release, while the Violent Recidivism Scale assesses the probability of violent reoffending after a prior conviction. However, a ProPublica investigation (C. Rudin, 2019) revealed that black defendants were almost twice as likely as others to be classified as higher risk by COMPAS even when they did not actually reoffend. COMPAS is claimed to offer superior precision compared with individuals lacking criminal justice expertise, yet it does not in fact exceed their level of accuracy.
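The disparity ProPublica reported can be expressed as a gap in group-wise false positive rates. The sketch below shows how such an audit is computed; the scores, labels, and cutoff are synthetic assumptions for illustration, not the actual COMPAS data.

```python
# Hedged sketch of a ProPublica-style audit: false positive rate by group.
# A "false positive" here is a defendant labeled high risk who did not reoffend.
import numpy as np

rng = np.random.default_rng(6)
n = 7000
group = rng.integers(0, 2, n)                 # two demographic groups
reoffended = rng.integers(0, 2, n)
# Planted bias: risk scores shifted upward for group 1.
risk_score = rng.normal(size=n) + 0.8 * reoffended + 0.5 * group
high_risk = risk_score > 0.9                  # assumed cutoff for "high risk"

for g in (0, 1):
    nonreoffenders = (group == g) & (reoffended == 0)
    fpr = high_risk[nonreoffenders].mean()
    print(f"group {g}: P(high risk | did not reoffend) = {fpr:.2f}")
```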
Existing AI technologies in India: In India, the Punjab Police, in collaboration with Staqu Technologies, has implemented an artificial intelligence-powered facial recognition system. The Cuttack Police have used AI-powered devices to assist investigating officers in adhering to investigative protocols. The Uttar Pradesh Police have introduced an AI-powered facial recognition application named 'Trinetra' to help resolve criminal cases. The government of Andhra Pradesh has introduced 'e-Pragati', a database containing electronic Know Your Customer (e-KYC) information for millions of individuals in the state. In collaboration with IIT Delhi, the Delhi Police has established an artificial intelligence center to manage criminal activities (Varun VM, 2020). It is important to note that the right to privacy is guaranteed paramount importance under Article 21 of the Indian Constitution, and the banking of neuro-biomarkers may not be permitted where it violates that right. Utilizing artificial intelligence in judicial settings has the potential to affect case outcomes and may also lead to disparities in sentencing. Moreover, without well-curated neuro-biobanks, designing AI algorithms for predictive policing, assessing recidivism risk and offering deterministic judgments is likely to be impossible. If incorporated in India, the use of neuroprediction and artificial intelligence in the criminal justice system will likely give rise to ethical considerations about biases and the possibility of prejudice.
Conclusion
Summary of Key Findings: The key findings of this review shed light on the optimal prioritization strategy for addressing biases in AI technologies, focusing particularly on humanly biased pretrained datasets and algorithmic learning/training models. Techniques such as model bias evaluation and in-phase processing checks are needed to identify biases inside the learning and training algorithms, guaranteeing that they do not perpetuate or magnify preexisting prejudices. Ongoing assessment remains essential, requiring consistent evaluation and improvement of both the data and the algorithms to minimize any biases that arise or remain. To ascertain the default outcome in the setting of inaccurate predictions, it is necessary to comprehend the origins of biases and their dissemination inside the AI system. Ensuring that responsibility and correction mechanisms are in place throughout both the data curation and algorithmic learning phases is also essential for establishing fairness and accuracy in decision-making powered by artificial intelligence. Moreover, thorough cross-validation techniques, recalibration, scrupulous data gathering and simultaneous verification are essential across the wide range of brain data sources. This approach ensures privacy, promotes fairness, confronts prejudices and strengthens human-machine dependability. Undoubtedly, a fair and unbiased trial demands an equitable and flawless algorithm; pretrained data previously shaped by human biases will naturally introduce biases into the system. The same principle applies in logical argumentation: soundness implies validity, but validity does not imply soundness.
While the optimal strategy depends on the specific context, addressing biases in pretrained datasets is considered foundational because of their direct impact on biased outputs regardless of the model used. Once datasets are verified for biases, evaluating algorithmic learning/training models becomes crucial to ensure that they do not introduce additional biases. The review further emphasizes the importance of scrutinizing processing models alongside algorithms and training data to safeguard against biases introduced during human-machine interactions. It also highlights that the increase in false positives and false negatives in deterministic/predictive methods can be influenced by both pretrained datasets and the default settings of training models. Biased datasets are a fundamental cause of biased predictions, while adjusting model settings such as decision thresholds shifts the balance between false positives and false negatives, as sketched below. These findings underscore the importance of meticulous consideration and calibration of both datasets and model settings to minimize errors and maintain unbiasedness and accuracy, ensuring the delivery of justice and governance by a fair algocracy. The concept of "bias in, bias out" elucidates the fundamental challenge in AI development, emphasizing the necessity of unbiased and representative data to mitigate the perpetuation of systemic biases. In contexts such as criminal justice, where AI-driven risk assessment tools can exacerbate existing biases, meticulous attention should be given to data collection and processing to foster fairness and accuracy in AI systems.
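To illustrate how a decision threshold shifts that balance between false positives and false negatives, the snippet below sweeps a cutoff over synthetic risk scores and reports both error rates; the score model and thresholds are arbitrary assumptions, not any deployed tool's settings.

```python
# Hedged sketch: how the decision threshold trades false positives for
# false negatives on synthetic risk scores.
import numpy as np

rng = np.random.default_rng(7)
n = 20000
y = rng.integers(0, 2, n)                       # true outcome
score = rng.normal(size=n) + 1.2 * y            # imperfect risk score

for threshold in (0.0, 0.6, 1.2):
    pred = score > threshold
    fp = np.mean(pred[y == 0])                  # harmless people flagged
    fn = np.mean(~pred[y == 1])                 # risky people missed
    print(f"threshold {threshold:.1f}: FP rate {fp:.2f}, FN rate {fn:.2f}")
```

Raising the threshold lowers the false positive rate but raises the false negative rate, which is why threshold calibration is itself a policy choice rather than a purely technical one.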
Closing Remarks: In conclusion, the reviewed literature has focused mainly on software currently used across the globe, together with its performance analysis and criticism in the public domain. From bytes to bars, this review describes the use of AI algorithms either to send or keep criminals in jail, or at least to predict their likelihood of committing similar crimes. AI algorithms are thus now under public scrutiny, and their deterministic approach is likely to be publicly questioned. Such examination of AI algorithms is due to their contested efficacy for predictive policing, crime pattern analysis, and resource allocation. This highlights the importance of careful calibration to minimize errors and ensure equitable outcomes, as these algorithms use previous crime data to forecast upcoming criminal activity and alert law enforcement. Nevertheless, the presence of biases in historical data poses issues, as discussed above, which may lead to the continuation of excessive policing of some groups or classes of citizens. In the current scenario, AI uses advanced algorithms to analyze large datasets and detect trends and irregularities in criminal behavior. Nevertheless, the effectiveness of these methods depends on the precision of the data, the strength of the algorithms, and the capacity to interpret the results. AI aids in optimizing resource allocation by forecasting regions that need heightened law enforcement. Ethical issues, algorithmic transparency, and accountability remain of utmost importance. The use of AI in judicial courts needs to be closely examined, since it may lead to inconsistencies in sentencing. To fully realize the promise of AI while ensuring fairness and ethical norms, it is crucial to adopt a comprehensive strategy involving collaboration among AI specialists, legal professionals, ethicists, and lawmakers. There is real difficulty in determining the underlying sources of the biases that result in false-positive and false-negative outcomes; learning and training algorithms may unintentionally magnify these biases, or prove ineffective at mitigating them, if training is conducted under an unsupervised learning model. The pursuit of fairness, equality and equity therefore requires a comprehensive methodology, as this study argues. The key takeaway is to find, address and remove any form of bias at every stage of an AI algorithm in order to maintain fairness and accuracy in decision-making processes.