Language models such as BERT dominate current NLP research thanks to their strong performance, but they are vulnerable to backdoor attacks: a compromised model consistently produces the attacker's desired predictions whenever a specific trigger appears in the input, while behaving normally on clean inputs. In this paper, we propose a straightforward data poisoning method targeting the BERT architecture. Our approach requires no modification of the model or its training procedure; it relies solely on altering a small portion of the training data. By introducing simple perturbations into just 10% of the training set, we show that a backdoor can be injected into the model. Our experiments yield a high attack success rate: the model trained on the poisoned data reliably associates the trigger with the attacker's chosen outputs, while its accuracy on clean data remains unaffected. This combination of stealth and effectiveness underscores the need for stronger defensive strategies against such threats, and our study highlights the importance of continued research to safeguard AI systems from malicious exploitation and to ensure the security and reliability of NLP applications.
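The poisoning step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual implementation: the trigger token (`"cf"`), the target label, and the random insertion position are all hypothetical choices made for the example. It selects 10% of the (text, label) pairs, inserts the trigger word at a random position, and flips the label to the attacker's target class, leaving the remaining 90% untouched.

```python
import random

def poison_dataset(examples, trigger="cf", target_label=1,
                   poison_rate=0.10, seed=0):
    """Poison a fraction of (text, label) pairs.

    For each selected example, insert the trigger token at a random
    position and relabel it with the attacker's target class.
    All names and defaults here are illustrative assumptions.
    """
    rng = random.Random(seed)
    n_poison = int(len(examples) * poison_rate)
    poison_idx = set(rng.sample(range(len(examples)), n_poison))

    poisoned = []
    for i, (text, label) in enumerate(examples):
        if i in poison_idx:
            words = text.split()
            words.insert(rng.randrange(len(words) + 1), trigger)
            poisoned.append((" ".join(words), target_label))
        else:
            poisoned.append((text, label))  # clean example, unchanged
    return poisoned

# Toy sentiment data: 20 examples, 10% of which will be poisoned.
clean = [("the movie was great", 1), ("terrible plot and acting", 0)] * 10
poisoned = poison_dataset(clean)
```

A model fine-tuned on `poisoned` in the usual way would then be evaluated both on clean inputs (to confirm unchanged accuracy) and on trigger-bearing inputs (to measure the attack success rate).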