This study addressed algorithmic bias in predictive policing, using the Chicago Police Department's Strategic Subject List (SSL) dataset. We focused on identifying and mitigating age-related bias, an underexplored area in prior research. We introduced Conditional Score Recalibration as a bias mitigation strategy alongside the well-established Class Balancing technique. Conditional Score Recalibration reassesses the risk scores of individuals initially assigned moderately high-risk scores and reclassifies them as low risk if they meet three conditions: no prior arrests for violent offenses, no prior arrests for narcotics offenses, and no involvement in shooting incidents. These strategies were applied to a Random Forest model, and fairness was evaluated using Equality of Opportunity Difference, Average Odds Difference, and Demographic Parity. The results showed a significant improvement in model fairness, particularly with respect to age, without compromising the model's accuracy. These findings challenge the often-assumed trade-off between fairness and accuracy, demonstrating that both can be achieved together.
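The recalibration rule and one of the fairness metrics can be illustrated with a minimal sketch. This is not the authors' implementation: the record field names, the "moderately high-risk" score band of 250–400 (SSL scores range roughly 0–500), and the low-risk replacement value are all hypothetical choices made for illustration; only the three eligibility conditions come from the text.

```python
# Hypothetical score band treated as "moderately high risk" and the
# value used to mark a recalibrated record as low risk (illustrative only).
MODERATE_BAND = (250, 400)
LOW_RISK_SCORE = 0

def recalibrate(record):
    """Conditional Score Recalibration sketch: return a copy of `record`,
    reclassified as low risk when all three study conditions hold."""
    lo, hi = MODERATE_BAND
    eligible = (
        lo <= record["score"] <= hi            # in the moderate high-risk band
        and record["violent_arrests"] == 0     # no prior violent-offense arrests
        and record["narcotic_arrests"] == 0    # no prior narcotics arrests
        and not record["shooting_incident"]    # never involved in a shooting
    )
    out = dict(record)
    if eligible:
        out["score"] = LOW_RISK_SCORE
    return out

def equality_of_opportunity_difference(y_true, y_pred, group, unpriv, priv):
    """True-positive-rate gap between the unprivileged and privileged
    groups; 0 indicates equal opportunity."""
    def tpr(g):
        preds = [p for t, p, gg in zip(y_true, y_pred, group)
                 if t == 1 and gg == g]
        return sum(preds) / len(preds) if preds else 0.0
    return tpr(unpriv) - tpr(priv)
```

A record in the 250–400 band with a clean history on all three conditions is reassigned the low-risk score, while any violent arrest, narcotics arrest, or shooting involvement leaves the score untouched; the metric helper compares per-group true positive rates in the same spirit as the Equality of Opportunity Difference reported in the study.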