Transformer-based models have recently demonstrated strong performance on natural language processing tasks. However, these models tend to be complex and demand extensive training. Moreover, although the multi-head self-attention mechanism in the Transformer is designed to capture semantic relationships between words in a sequence, it encounters limitations when handling short sequences, which reduces its effectiveness in five-polarity Arabic sentiment analysis. The switch transformer has recently emerged as a high-performing alternative. Nevertheless, when such models are trained with single-task learning, they often fail to achieve strong performance and struggle to produce robust latent feature representations, particularly on small datasets. This challenge is especially pronounced for Arabic dialects, which are low-resource languages. Given these constraints, this research introduces a novel approach to Arabic sentiment analysis that combines multitask learning with a shared switch-transformer encoder to improve model adaptability and refine sentence representations. By incorporating a mixture-of-experts (MoE) mechanism that decomposes the problem into smaller, more manageable sub-problems, the model becomes adept at handling lengthy sequences and intricate input-output relationships, benefiting both five-polarity and three-polarity Arabic sentiment analysis tasks. The proposed model effectively discerns sentiment in Arabic dialect sentences. Empirical results demonstrate its strong performance in evaluations on the Hotel Arabic-Reviews Dataset, the Book Reviews Arabic Dataset, and the LABR dataset.
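To make the described architecture concrete, the sketch below shows a minimal, self-contained PyTorch implementation of a switch-style MoE feed-forward layer inside a shared encoder block with two task-specific heads (three-polarity and five-polarity). All module names, layer dimensions, and the top-1 routing rule are illustrative assumptions for exposition, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchFFN(nn.Module):
    """Switch-style mixture-of-experts feed-forward layer: a router sends
    each token to exactly one expert (top-1 routing), decomposing the
    problem across smaller specialist sub-networks."""
    def __init__(self, d_model=256, d_ff=512, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # gating network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)])

    def forward(self, x):                            # x: (batch, seq, d_model)
        gate = F.softmax(self.router(x), dim=-1)     # routing probabilities
        weight, idx = gate.max(dim=-1)               # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e                          # tokens routed to expert e
            if mask.any():
                out[mask] = expert(x[mask])
        return out * weight.unsqueeze(-1)            # scale by gate probability

class MultiTaskSwitchEncoder(nn.Module):
    """One shared switch-transformer encoder block feeding two
    task-specific heads: three-polarity and five-polarity sentiment."""
    def __init__(self, vocab_size=30000, d_model=256, n_heads=4):
        super().__init__()
        # Positional encodings are omitted for brevity in this sketch.
        self.embed = nn.Embedding(vocab_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.moe = SwitchFFN(d_model)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.head3 = nn.Linear(d_model, 3)           # three-polarity head
        self.head5 = nn.Linear(d_model, 5)           # five-polarity head

    def forward(self, token_ids):
        h = self.embed(token_ids)
        attn_out, _ = self.attn(h, h, h)
        h = self.norm1(h + attn_out)                 # residual + layer norm
        h = self.norm2(h + self.moe(h))              # MoE feed-forward sub-layer
        sent = h.mean(dim=1)                         # pooled sentence representation
        return self.head3(sent), self.head5(sent)

# Joint multitask training step: the two task losses are summed so the
# shared encoder is updated by gradients from both objectives.
model = MultiTaskSwitchEncoder()
ids = torch.randint(0, 30000, (8, 32))              # dummy batch of token ids
y3 = torch.randint(0, 3, (8,))
y5 = torch.randint(0, 5, (8,))
logits3, logits5 = model(ids)
loss = F.cross_entropy(logits3, y3) + F.cross_entropy(logits5, y5)
loss.backward()
```

Sharing the encoder lets the two sentiment tasks regularize each other's latent representations, while top-1 routing activates only one expert per token, so model capacity grows without a proportional increase in per-token computation.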