The Transformer architecture, while adept at capturing context through self-attention, falls short of encoding complex syntactic structures effectively. Addressing this gap, we introduce the Linguistic Structure through Graphical Interpretation with BERT (LSGIB) approach for Machine Translation (MT). Combining the strengths of the Graph Attention Network (GAT) and BERT, LSGIB captures syntactic dependencies in the source language as explicit knowledge, enriching the source representation and supporting more accurate target-language generation. Our empirical analysis leverages gold-standard syntax-annotated sentences together with a Quality Estimation (QE) model, enabling us to assess translation improvements in terms of syntactic accuracy beyond the traditional BLEU metric. The LSGIB model demonstrates superior translation quality across diverse MT tasks while maintaining robust BLEU scores. We further examine which sentence lengths benefit most from LSGIB and which syntactic dependencies are captured most precisely, observing that GAT's ability to learn a specific dependency relation directly influences the translation quality of sentences containing that relation. Additionally, we find that incorporating syntactic structure into BERT's intermediate and lower layers offers a novel approach to modeling linguistic structure in source sentences.
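To make the GAT-plus-BERT combination concrete, the following is a minimal sketch (in PyTorch) of a single-head graph attention layer that propagates information along dependency arcs of the source sentence, applied on top of BERT token representations. The class name `SyntaxGAT`, the single-head formulation, and the adjacency encoding are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntaxGAT(nn.Module):
    """One single-head graph attention layer over a dependency graph (assumed design)."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        # Attention scoring vector, as in the original GAT formulation.
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (batch, seq_len, dim)  BERT hidden states of source tokens
        # adj: (batch, seq_len, seq_len), 1 where a dependency arc or a
        #      self-loop connects tokens i and j, else 0 (self-loops keep
        #      every softmax row well-defined)
        z = self.proj(h)
        n = z.size(1)
        # Pairwise features [z_i ; z_j] for every token pair (i, j).
        zi = z.unsqueeze(2).expand(-1, -1, n, -1)
        zj = z.unsqueeze(1).expand(-1, n, -1, -1)
        scores = F.leaky_relu(self.attn(torch.cat([zi, zj], dim=-1))).squeeze(-1)
        # Mask out pairs with no dependency arc before normalizing.
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)
        return F.elu(torch.matmul(alpha, z))  # syntax-aware token states
```

Under this reading, the resulting syntax-aware states would be fused back into the encoder stack (e.g., at BERT's lower or intermediate layers, per the abstract's finding) before decoding; the exact fusion point and mechanism are design choices the paper investigates.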