Fundus image registration plays a crucial role in the clinical evaluation of ocular diseases that require meticulous monitoring, such as diabetic retinopathy and macular degeneration. Aligning multiple fundus images enables longitudinal analysis of patient progression, widens the field of view, and increases resolution for detailed examination. Currently, prevalent methodologies rely on feature-based approaches for fundus registration; however, some of these methods produce an excessively dense set of feature points whose mutual similarity hampers matching. This study introduces a novel fundus image registration technique that employs a U-Net, trained and evaluated on FIVES, a recent large dataset for blood vessel segmentation, to extract feature points, prioritizing their spatial distribution over their abundance. The method then applies the medial axis transform and pattern detection to obtain feature points, characterizes them with the Fast Retina Keypoint (FREAK) descriptor, and matches them to compute the transformation matrix. The vessel segmentation achieves an Intersection over Union (IoU) of 0.7559, while evaluation on the Fundus Image Registration Dataset (FIRE) yields an area under the curve (AUC) of 0.596 for registration error, refining similar earlier methods and suggesting promising performance comparable to prior methodologies.
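As an illustrative sketch of the pattern-detection step on a vessel skeleton (not the paper's actual implementation), the following assumes the common crossing-number heuristic: walking once around a skeleton pixel's 8-neighbour ring, one 0-to-1 transition marks an endpoint and three or more mark a bifurcation or crossing. The function name and the synthetic plus-shaped skeleton are purely hypothetical examples.

```python
import numpy as np

# 8-neighbour offsets in circular order around a pixel.
RING = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def classify_skeleton(skel):
    """Return endpoint and junction coordinates of a 1-pixel-wide binary
    skeleton via the crossing number of the 8-neighbour ring:
    1 transition -> endpoint, >= 3 transitions -> bifurcation/crossing."""
    p = np.pad(skel.astype(np.uint8), 1)  # zero border simplifies edge pixels
    ends, joins = [], []
    for r, c in np.argwhere(skel > 0):
        ring = [p[r + 1 + dr, c + 1 + dc] for dr, dc in RING]
        # Count 0 -> 1 transitions walking once around the ring (cyclic).
        cn = sum(a == 0 and b == 1 for a, b in zip(ring, ring[1:] + ring[:1]))
        if cn == 1:
            ends.append((int(r), int(c)))
        elif cn >= 3:
            joins.append((int(r), int(c)))
    return ends, joins

# Synthetic "vessel" skeleton: a plus sign with one central crossing.
skel = np.zeros((11, 11), dtype=np.uint8)
skel[5, :] = 1
skel[:, 5] = 1

ends, joins = classify_skeleton(skel)
print(len(ends), joins)  # prints "4 [(5, 5)]"
```

Such endpoint and junction locations would serve as the sparse, well-distributed keypoints that a binary descriptor like FREAK can then characterize for matching.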