The reliability of the RCS was tested. The overall Cronbach’s alpha for the seven items was 0.835, indicating a high level of internal consistency. Reliability remained robust when broken down by professional group: GPs had a Cronbach’s alpha of 0.884, hospital physicians 0.857, administrative personnel 0.776, and nurses 0.788.
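For context, Cronbach's alpha compares the sum of the individual item variances with the variance of the summed scale. A minimal sketch of the computation (hypothetical toy data, not the study's responses) could look like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of summed scale)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items (7 for the RCS)
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # sample variance of the scale sum
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

With perfectly consistent responses the function returns 1.0; values such as the 0.835 reported above indicate high but not perfect internal consistency.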
3.2. Relational Coordination Scoring
The RC scores within the SABES–ASAA healthcare setting revealed varying degrees of perceived collaboration quality, both within and between professional groups (Table 3).
Internally, nurses reported the best within-group RC score (4.25), rated as ‘moderate’. GPs likewise rated their within-group RC as ‘moderate’, though slightly lower (4.17). Hospital physicians and administrative staff scored 3.92 and 3.22, respectively; both fall into the ‘weak’ category. Externally, RC between professional groups was generally perceived as weaker. GPs rated the quality of their RC with hospital physicians (3.30) and administrative personnel (2.93) as ‘weak’ and with nurses (3.97) as ‘moderate’. Hospital physicians rated their external RC with GPs (3.26), administrative personnel (3.15), and nurses (3.35) as ‘weak’. Similarly, administrative staff rated their external RC as ‘weak’ across the board (3.11 to 3.19). Nurses rated their RC with administrative staff as ‘moderate’ (3.58), but their RC with GPs (3.44) and hospital physicians (3.27) as ‘weak’. Notably, none of the external RC scores reached the ‘strong’ category.
The weighted overall scores, which account for response rates, suggest ‘weak’ RC for work with GPs and administrative staff and ‘moderate’ RC for work with hospital physicians and nurses. This classification does not differ from the unweighted one.
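The weighting and classification logic can be sketched briefly. The function below computes a response-rate-weighted mean of per-group scores and applies the between-group bands used in this section (weak < 3.5, moderate 3.5 to < 4.0, and an assumed ‘strong’ cut-off at 4.0, consistent with 3.97 being described as close to ‘strong’). The group scores and rates in the example are hypothetical, not the study’s actual values:

```python
def weighted_overall(scores: dict, response_rates: dict) -> float:
    """Response-rate-weighted mean of per-group RC scores."""
    total = sum(response_rates.values())
    return sum(scores[g] * response_rates[g] / total for g in scores)

def classify_between_group(score: float) -> str:
    """Between-group RC bands: weak < 3.5, moderate 3.5-4.0, strong >= 4.0 (assumed cut-offs)."""
    if score < 3.5:
        return "weak"
    return "moderate" if score < 4.0 else "strong"

# Hypothetical example: RC scores for work with each group, weighted by response rate.
scores = {"GPs": 3.30, "hospital physicians": 3.55, "nurses": 3.65, "administration": 3.10}
rates = {"GPs": 0.40, "hospital physicians": 0.25, "nurses": 0.20, "administration": 0.15}
overall = weighted_overall(scores, rates)
```

With equal response rates the weighted mean reduces to the plain average, which is why the weighted and unweighted classifications can coincide, as reported above.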
The strongest intergroup coordination was reported by GPs for the RC between GPs and nurses (3.97), which borders on the ‘strong’ category. The weakest intergroup coordination was likewise reported by GPs, for the RC between GPs and administrative staff (2.93).
3.2.1. Within-Group Relational Coordination by Dimension
Table 4 details the internal RC scores by dimension within various healthcare professional groups.
GPs demonstrated moderate RC within their group, with four of the seven dimensions scoring above the moderate threshold; only ‘Frequency of communication’, ‘Accuracy of information’, and ‘Shared goals’ fell below it.
Hospital physicians reported a weak overall RC: the ‘Shared goals’, ‘Shared knowledge’, ‘Accuracy of information’, ‘Frequency of communication’, ‘Problem-solving communication’, and ‘Timeliness of communication’ dimensions all scored below the moderate threshold, leaving ‘Mutual respect’ as the only dimension above it.
Administrative staff showed the weakest internal RC, with all seven dimensions falling below the threshold for moderate coordination. The ‘Frequency of communication’, ‘Shared goals’, and ‘Shared knowledge’ dimensions are areas of concern.
Nurses, in contrast, showed the best overall internal RC and had no weak dimensions, underlining their more cohesive internal communication and collaboration practices.
The overall weighted scores across all professional groups indicated that the ‘Frequency of communication’, ‘Shared goals’, and ‘Shared knowledge’ dimensions were most frequently identified as weak.
3.2.2. Between-Group Relational Coordination by Dimension
Table 5 illustrates the external RC scores of different professional groups within the healthcare setting by dimension, with particular emphasis on dimensions where the scores were classified as ‘weak’ (<3.5).
In the analysis of between-group RC, the overall weighted score indicates a general trend towards weak external coordination across the professional groups. This finding is consistent across most dimensions.
The dimension of ‘Frequency of communication’ reveals particularly low scores across all groups, with the lowest score reported by administrative staff, followed by hospital physicians and GPs. Nurses rated this dimension slightly higher, but it still falls within the ‘weak’ category. ‘Timeliness of communication’ also received weak ratings, particularly from hospital physicians. In contrast, ‘Accuracy of information’ was rated comparatively higher, with administrative staff reporting the highest score. However, hospital physicians rated this dimension lower, indicating variability in the perceived precision of information exchange.
‘Problem-solving communication’ achieved the highest weighted overall score among the dimensions, bordering on ‘moderate’ coordination. This dimension was rated highest by GPs and lowest by hospital physicians. The dimensions of ‘Shared goals’ and ‘Shared knowledge’ received weak ratings overall, with administrative personnel reporting particularly low scores. ‘Mutual respect’ stands out with a ‘moderate’ overall score, highlighting a level of professional esteem, particularly as rated by hospital physicians and administrative staff.
Table S1 illustrates the RC scores among healthcare professional groups across the seven dimensions, shedding light on the dynamics of internal and external coordination. The ‘Frequency of communication’ dimension is generally rated ‘weak’ across all groups, by GPs (3.19), hospital physicians (3.12), and administrative staff (2.45) alike. Nurses, however, rated it as ‘moderate’ (3.57) for GPs, suggesting a relatively better frequency in their interactions. ‘Timeliness of communication’ scores range from weak to moderate, with GPs rating it ‘strong’ (4.11) for nurses; other groups, including hospital physicians (3.32) and administrative staff (3.32), reported ‘weak’ scores for this dimension. The ‘Accuracy of information’ dimension sees a mix of weak and moderate ratings: GPs rated it as ‘moderate’ for both hospital physicians (3.55) and nurses (3.67), while administrative staff rated it moderately across all groups (3.63). ‘Problem-solving communication’ is rated as ‘moderate’ across most groups, with GPs rating it ‘strong’ (4.27) for nurses. ‘Shared goals’ is generally rated ‘weak’, except for GPs rating it ‘strong’ for nurses (4.09). The ‘Shared knowledge’ dimension received weak to moderate ratings, with GPs rating it ‘strong’ for nurses (4.40) but ‘weak’ for administrative staff (2.95); the lowest score was reported by administrative staff for their own group (2.71). ‘Mutual respect’ is the strongest dimension overall, with moderate to strong ratings; both GPs (4.46) and hospital physicians (4.17) rated it ‘strong’ for nurses, indicating high levels of respect among these groups.
The overall weighted scores reflect a trend towards moderate coordination, with the highest scores in ‘Mutual respect’ and the lowest in ‘Frequency of communication’.
3.3.1. Referral Compliance and Feedback Among General Practitioners
Approximately 20% of GPs believe that their compliance with the referral criteria does not exceed 50%. Furthermore, a clear majority of GPs (63.4%) seldom or never received feedback from specialists.
Regarding hospital physicians, nearly half (48.4%) perceived over 30% of referrals from GPs as being inappropriately prioritized. Feedback on such referrals is also lacking, with 57.6% of hospital physicians indicating that they rarely or never provide feedback. Moreover, almost one-third of hospital physicians (29.9%) rated the quality of clinical questions posed by GPs as poor or very poor.
3.3.2. Impact of Demographic and Professional Factors on Relational Coordination
The overall RC score did not differ by rater gender (male, female, not specified). Likewise, neither the within-group nor the between-group scores, in either the weighted or the unweighted version, differed significantly between genders, overall or within the professional subgroups. The sub-scores for the rated professional groups, however, differed significantly by gender for hospital physicians (male 3.72, female 3.48, not reported 3.75; p = 0.001), administrative staff (male 3.16, female 3.36, not reported 2.98; p = 0.001), and nurses (male 3.60, female 3.91, not reported 3.53; p < 0.001), but not for GPs. ‘Frequency of communication’ (male 3.16, female 3.35, not reported 3.15; p = 0.001), ‘Accuracy of information’ (male 3.48, female 3.59, not reported 3.22; p = 0.01), and ‘Timeliness of communication’ (male 3.42, female 3.46, not reported 3.13; p = 0.03) were the dimensions that differed by gender.
Furthermore, both the overall and the between-group RC scores differed between the raters’ languages (German 3.58 vs. Italian 3.45, p = 0.003, and German 3.42 vs. Italian 3.23, p < 0.001, respectively), while no difference was found for the within-group score. The weighted scores did not differ at all. The only profession-specific sub-score that differed between languages was that of the GPs (German 3.64, Italian 3.23; p < 0.001). The dimensions ‘Timeliness of communication’ (German 3.58, Italian 3.21; p < 0.001), ‘Shared goals’ (German 3.53, Italian 3.41; p = 0.028), and ‘Mutual respect’ (German 4.06, Italian 3.80; p < 0.001) also differed between languages.
Finally, the largest differences were found between health districts, with the restriction that administrative staff were not included in this analysis. The overall RC (Health District–1 3.49; Health District–2 3.71; Health District–3 3.67; Health District–4 3.49), the within-group RC (Health District–1 4.11; Health District–2 4.11; Health District–3 4.15; Health District–4 4.06), the between-group RC (Health District–1 3.28; Health District–2 3.57; Health District–3 3.51; Health District–4 3.30), and the sub-scores for the subgroups GPs, hospital physicians, and nurses all differed highly significantly between health districts (p < 0.001 each). The dimensions ‘Frequency of communication’ (Health District–1 3.32; Health District–2 3.37; Health District–3 3.34; Health District–4 3.28; p < 0.001), ‘Timeliness of communication’ (Health District–1 3.30; Health District–2 3.68; Health District–3 3.70; Health District–4 3.45; p = 0.003), ‘Accuracy of information’ (Health District–1 3.52; Health District–2 3.72; Health District–3 3.70; Health District–4 3.44; p < 0.001), ‘Shared goals’ (Health District–1 3.43; Health District–2 3.69; Health District–3 3.74; Health District–4 3.41; p = 0.04), and ‘Shared knowledge’ (Health District–1 3.45; Health District–2 3.52; Health District–3 3.28; Health District–4 3.43; p < 0.001) also differed significantly between health districts, while ‘Problem-solving communication’ and ‘Mutual respect’ did not. Weighted RC scores differed significantly for the overall RC score (Health District–1 3.43; Health District–2 3.67; Health District–3 3.63; Health District–4 3.42; p = 0.007) and the within-group RC score (Health District–1 4.03; Health District–2 4.06; Health District–3 4.09; Health District–4 3.95; p < 0.001), while the weighted between-group score did not differ.
Table S2 presents a comprehensive analysis of the correlations between RC scores and metric demographic and professional factors, as well as feedback practices. Findings indicate that age and years of service do not significantly correlate with overall RC scores. However, notable correlations are observed for feedback practices and perceptions. Specifically, hospital physicians’ perceptions of inappropriate referral priority correlate negatively with both overall and between-group RC scores. Similarly, the quality of clinical questions posed by GPs shows a significant negative correlation with RC. The ‘Frequency of communication’ dimension correlates positively with overall RC, suggesting its importance in relational dynamics. Other dimensions, including timeliness, accuracy, problem-solving, shared goals, and mutual respect, also exhibit significant correlations, particularly with hospital physicians’ perceptions.