American Sign Language (ASL) is the primary language of many deaf individuals, and sign language makes everyday communication easier for them. In today's digital era, however, these users also need to communicate online, and can benefit from technology when communicating in person with people who do not know sign language. This research presents a program that translates American Sign Language into plain English. The study uses the OpenCV library to detect hand signals and a trained model to classify the captured images, so that the program can translate them into letters and words. The program is trained on a data set of over 2,000 images, the largest data set available for this task. With over 90\% accuracy, the result is a basic computer program, backed by the largest available data set, that enables users to communicate with a wide variety of words and expressions.
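The pipeline the abstract describes, capturing a frame, preprocessing the hand image, and classifying it into a letter, can be sketched as follows. This is a minimal illustration under stated assumptions only: the preprocessing mimics what OpenCV's `cvtColor`/`resize` calls would do using plain NumPy, and `StubClassifier` is a hypothetical stand-in for the trained model, whose architecture the abstract does not specify.

```python
import numpy as np

ALPHABET = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def preprocess(frame: np.ndarray, size: int = 64) -> np.ndarray:
    """Reduce an HxWx3 color frame to a normalized grayscale square.

    In the real pipeline this step would use cv2.cvtColor and
    cv2.resize on webcam frames; here it is approximated in NumPy.
    """
    gray = frame.mean(axis=2)                      # crude grayscale conversion
    h, w = gray.shape
    ys = np.linspace(0, h - 1, size).astype(int)   # nearest-neighbor resize
    xs = np.linspace(0, w - 1, size).astype(int)
    small = gray[np.ix_(ys, xs)]
    return small / 255.0                           # scale pixels to [0, 1]

class StubClassifier:
    """Hypothetical stand-in for the trained sign-recognition model."""

    def __init__(self, size: int = 64, n_classes: int = 26, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(size * size, n_classes))

    def predict(self, img: np.ndarray) -> str:
        scores = img.ravel() @ self.w              # linear scores per letter
        return ALPHABET[int(np.argmax(scores))]

# Placeholder for a single captured webcam frame (e.g. 640x480 BGR).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
letter = StubClassifier().predict(preprocess(frame))
print(letter)
```

In the full system, recognized letters would be accumulated over successive frames and mapped to words; the stub model above would be replaced by the model trained on the 2,000-image data set.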