Mesh sensitivity analysis was first performed to determine whether the numerical solution is independent of the domain discretization (mesh resolution). Three systematically refined meshes of , , and million elements were used with the same operating parameters. Richardson extrapolation was initially considered, but the solutions differed so minutely between meshes that the extrapolation would hold no quantitative value for assessing mesh convergence. Instead, the medium mesh of million elements was selected. Our previous work demonstrated that a particle diameter to minimum cell side length ratio of up to particle diameters can be used while still conserving accurate momentum exchange between the two phases.
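Although Richardson extrapolation was not ultimately needed here, the standard procedure it would follow can be sketched as below; the sample values and the refinement ratio are illustrative, not taken from this study.

```python
import math

def richardson_extrapolate(f_fine, f_med, f_coarse, r):
    """Estimate the observed order of convergence p and a
    mesh-independent value from three systematically refined
    meshes with a constant refinement ratio r."""
    # observed order of convergence
    p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
    # extrapolated (mesh-independent) estimate from the finest solution
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)
    return p, f_exact

# Illustrative values only: a quantity converging at second order.
p, f_exact = richardson_extrapolate(1.5, 3.0, 9.0, r=2.0)
```

Note that when the solutions are already mesh-converged, f_med is nearly equal to f_fine and the ratio inside the logarithm becomes ill-conditioned, which is why the extrapolation carried no quantitative value in this case.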
4.1. Particle Full Development
A fully developed flow is critical in these simulations. If the flow were not fully developed before the outlet, the particle flux and velocities would depend entirely on both the insertion location and the initial velocities. It is difficult to conclude that the experimental work performed by Hardalupas et al. [1] produced a fully developed flow. The pipe used in the study is in length with a Reynolds number of 13000. For single-phase flows, this Reynolds number is well into the turbulent regime; therefore, no laminar boundary layer forms and a turbulent velocity profile develops. To further demonstrate this, an idealized entry length to fully developed flow is calculated using an assumed velocity profile at the inlet and the classical solution developed by Latzko [
48] given as
where
for the circular tube. Calculating this, an entry length of
is obtained for an idealized single-phase flow. The pipe used by Hardalupas et al. [1] is well above this entry length, but this does not guarantee that the particles are fully developed because of particle slippage during particle insertion and mixing. Indeed, this is briefly discussed in the paper, where the results for the largest particle size ( ) showed signs of a lack of full development. In the work of Mena and Curtis [49], it is postulated and concluded that 51 pipe diameters is sufficient for fully developed flow, measured using pressure transducers placed periodically along the pipe. In that study, water is the carrier fluid; viscous effects are therefore significantly higher than in air, and the particle slip velocity would be less of a concern.
To the current authors’ knowledge, Lau and Nathan [44] and their subsequent work, Lau and Nathan [7], are the only studies to test for fully developed flow. They performed particle experiments with and with identical Stokes numbers to test for particle slippage. They found that the particle concentrations and velocities converged to similar values and therefore concluded that was a sufficiently long nozzle for full flow development at Stokes numbers of
and Reynolds numbers of 10000 and 20000. In the study performed by Hardalupas et al. [
1], a Stokes number of
and
with a Reynolds number of 13000 is used. Given the larger Stokes number of 50, and the fact that the pipe of Lau and Nathan [44] is almost times the length of that of Hardalupas et al. [1], the flow may not be fully developed. Therefore, no firm conclusion can be drawn.
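Since the argument above hinges on the Stokes number, a common definition can be sketched as follows: the Stokes-drag particle response time over the convective time scale D/U. This is a generic textbook form; the exact definition used in the cited studies may differ.

```python
def stokes_number(rho_p, d_p, mu_f, u_f, d_jet):
    """St = tau_p / tau_f, with tau_p the Stokes-drag particle response
    time and tau_f = d_jet / u_f the convective fluid time scale."""
    tau_p = rho_p * d_p**2 / (18.0 * mu_f)  # particle response time [s]
    tau_f = d_jet / u_f                     # fluid time scale [s]
    return tau_p / tau_f
```

Particles with St much less than 1 closely follow the gas phase, whereas a particle at St = 50 responds slowly to fluid fluctuations, making any slip introduced at insertion harder to wash out within a short pipe.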
To test for fully developed flow in the numerical Hardalupas et al. [1] test case, simulations were performed using only the nozzle (pipe) flow with the same operating parameters as previously outlined, but instead of being inserted at , the particles are inserted at
with sampling at the nozzle exit and at several negative (upstream) locations, referenced from the nozzle exit with positive in the flow direction. In addition, two simulations were performed to test for the location of full development and to determine whether the inlet velocity makes a significant difference in flow development.
It was realized that most of the flow development occurs in the first 40 nozzle diameters, while the last experiences only very minute changes in all parameters (Figure 2). The inside of the nozzle for the Lau and Nathan (Lau3, Table 1 and Table 4) test case is also examined. Full development occurs in roughly the first 30 nozzle diameters, with only minute differences observed in the last 10 nozzle diameters (Figure 3).
4.2. Hardalupas et al. Results
The results at the nozzle exit for all the Hardalupas et al. [1] test cases show a similar trend for the mean and root mean square velocities near the central regions of the jet. Towards the outer regions near the nozzle wall, however, the particles exhibit a more laminar-shaped profile rather than the more turbulent shape demonstrated by the experiments (Figure 4a and Figure 5a). This is believed to be caused by the interpolation method used for the velocity in the particle-fluid interactions.
To achieve effective momentum coupling, a cell with a minimum side length of at least times the particle diameter is required for these types of flow problems [16]. To model the boundary layer at the wall of a CFD simulation, one must either fully integrate to the wall or use an algebraic wall function. Fully integrating to the wall requires an extremely fine mesh with, at the very least, 8-10 cells located below the [23]. Achieving this while also achieving effective momentum coupling would require extremely small particles, far smaller than those used in many cases, including the experimental studies considered in this work. Therefore, a wall-function approach is used in this current work to accurately model the strong viscous effects on the fluid velocity near the wall while still achieving effective momentum coupling. There is no perfect solution here, however, because there is also concern with the fluid velocity and void fraction interpolation when calculating the particle-fluid drag coupling force. A wall function is used in this current work to model the fluid boundary layer within the larger near-wall cell, but that same wall function is not used when interpolating the particle coupling parameters; a linear interpolation method is used instead.
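To illustrate the mismatch discussed above, a standard two-layer wall function (a generic textbook form with the von Karman constant 0.41 and log-law constant 9.8; not necessarily the exact function used in this work) can be compared with a linear interpolation toward the wall:

```python
import math

KAPPA = 0.41  # von Karman constant
E = 9.8       # smooth-wall log-law constant

def u_plus(y_plus):
    """Dimensionless velocity from a standard two-layer wall function."""
    if y_plus < 11.63:                   # viscous sublayer: u+ = y+
        return y_plus
    return math.log(E * y_plus) / KAPPA  # log-law region

# Linear interpolation between the wall (u = 0) and the first cell
# centre underestimates the log-law velocity halfway to the wall.
u_cell = u_plus(60.0)    # wall-function velocity at the cell centre
u_linear = 0.5 * u_cell  # linear estimate at y+ = 30
u_wallfn = u_plus(30.0)  # wall-function value at y+ = 30
```

Because the log-law profile is concave, a linear blend between the wall and the first cell center systematically underpredicts the fluid velocity seen by a particle inside that cell.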
In CFD-DEM, the interpolation method used is a linear cell point method. This method breaks each cell face into triangles that, together with the cell center point, define tetrahedra; it then cycles through the tetrahedra to determine which one contains the particle center. An inverse-distance linear interpolation is then performed from the known velocities at the cell centers. This produces a different distribution of coupling velocities than is modeled by the wall function. Optimally, an interpolation method that follows the wall function would be used, but that is beyond the scope of this work.
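The inverse-distance step can be sketched as follows; this is a minimal one-component version, whereas the actual implementation operates on the vertices of the tetrahedron identified by the cell point search:

```python
import math

def idw_velocity(p, centers, velocities, power=1):
    """Inverse-distance interpolation of one velocity component at the
    particle position p from the surrounding cell-centre values."""
    weights = []
    for c, v in zip(centers, velocities):
        d = math.dist(p, c)
        if d == 0.0:  # particle sits exactly on a cell centre
            return v
        weights.append(1.0 / d**power)
    wsum = sum(weights)
    return sum(w * v for w, v in zip(weights, velocities)) / wsum

# Midway between two cell centres the estimate is the plain average.
u_mid = idw_velocity((0.5, 0.0), [(0.0, 0.0), (1.0, 0.0)], [0.0, 2.0])
```

Because the weights depend only on distance, a particle near the wall receives a velocity blended linearly between neighboring cell centers rather than following the steep wall-function profile, which is the source of the discrepancy described above.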
Although there are discrepancies at the nozzle exit, very similar trends between the numerical and experimental results are observed further in the flowfield for the lower (0.23) and higher (0.86) mass fractions, respectively.
In general, for the particle simulations (Hardalupas1 and Hardalupas2), discrepancies associated with the interpolation method are observed at the nozzle exit for both the mean and root mean square velocities and the particle number fluxes (Figure 4a). As previously outlined, this can be attributed to the issues with the interpolation method near the wall. That being said, relative accuracy is achieved for the velocities, where the general change in results between the two mass fractions is similar to that of the experiments. At location , similar slopes are achieved between all numerical and experimental results, with the numerical results showing a slight decrease in slope relative to the experimental results near the outer regions of the jet. At , higher mean velocities are observed for the numerical results, with the root mean square velocities closely following and a smaller spread in the particle number fluxes. It is concluded that with particles, results up to should be viable for the optimization of an industrial unit, but anything further downstream would need more analysis. Work needs to be performed on pipe flows with wall functions in CFD-DEM to solve the interpolation issues, but it is observed that the particles “correct” themselves after they exit the nozzle. This can be attributed to the relatively low Stokes numbers, at which the particles closely follow the highly accurate and validated fluid flow away from wall effects.
For the particles, the same interpolation-related errors at the nozzle exit are observed for all fields, but with the same relative accuracy, or change in trends, between the two mass fluxes (Figure 5a,b). An interesting note for the nozzle exit is that the smaller mass loading (Hardalupas3) produces a flatter particle number flux profile than the larger mass loading (Hardalupas4). It is postulated that if the mass loading continued to increase, a more triangular particle number flux profile would be achieved. Further downstream, similar slopes are observed in the mean velocity and particle number flux results, but a significantly lower root mean square velocity is observed for the numerical results. It is curious to note that the trends for these smaller particles line up quite well up to , whereas for the larger particles the results can only be trusted up to . It can be concluded that as the Stokes number increases, where the particles tend to “go their own path,” the numerical results start deviating from the experimental results.
4.3. Lau and Nathan Single-Phase Results
The single-phase results for the Hardalupas et al. [1] test cases were validated in our previous work of Weaver and Mišković [23], but the Lau and Nathan [7] results were not. Therefore, three different simulation setups were used to confirm that the single-phase results were accurate before adding the particulate phase. Without this step, there is a risk of obtaining a model with a combination of parameters that fits the two-phase flow even though each phase separately is inaccurate. This would make it difficult to conclude with any confidence that the model can be generally applied to other similar flows and accurately represents the physics.
All cases start with the Lau3 test case ( ) but without the inclusion of particles. The default Launder and Spalding [22] k-ε turbulence model is used for one case. For the second case, the empirical coefficients of the turbulence model are changed to and . This modification was researched and validated extensively in our previous work of Weaver and Mišković [23]. The third simulation uses the modified k-ε turbulence model but without the use of co-flow, to test the effect that co-flow has on these types of numerical simulations.
There is very little difference between the simulations at the nozzle exit for both the mean and root mean square velocities, as shown in Figure 6a–d. At location , the unmodified k-ε turbulence model produces spreading that more closely follows the experiment for the mean velocity, but the modified k-ε model produces a drop in axial velocity that more closely matches the experiment. It is therefore up to the researcher to weigh the pros and cons of using the modified k-ε model, depending on whether accurate spreading or an accurate axial drop in mean velocity is more important. That being said, in the current authors’ previous work, both the spreading rates and the axial drop in mean velocity were improved with the slight modification of the empirical coefficients [23] for two different sets of data provided by Bogusławski and Popiel [24] and Hardalupas et al. [1]. For the root mean square velocities, the modified k-ε model produces spreading rates that more closely match the experiments at all sampling locations. Adding a co-flow makes no appreciable difference at any sampling location. Considering all this, it was decided to use the modified k-ε model with a co-flow for all subsequent simulations of the Lau and Nathan [7] test cases.
4.5. Coarse Graining Results
Some DEM simulations use a large number of particles, upwards of 6 million. This requires a significant number of compute cores to track the particles and calculate all collisions between them. Moreover, developing codes that scale to a very large number of compute cores is difficult because of the slowed communication between compute nodes. To circumvent this, many researchers have turned to coarse-graining methods. The essence of coarse-graining is that multiple particles are represented as a single grain, and the grain is then tracked through time. This significantly decreases the number of trajectories tracked throughout the system and, depending on the type of coarse-graining model, can reduce the number of tracking points by . With a coarse-graining factor of 2, , the example of 6 million particles reduces to 750 thousand parcels, which reduces the overall computational cost significantly. Additional computational savings can be realized by increasing the integration time-step, since the duration of the soft-sphere collisions will generally be smaller by orders of magnitude [32].
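The reduction quoted above can be reproduced directly; here the lumping of k³ particles per grain is written out explicitly, where k denotes the coarse-graining factor:

```python
def parcel_count(n_particles, k):
    """Number of grains (parcels) when k**3 particles are represented
    by a single coarse grain of k times the original diameter."""
    return n_particles // k**3

# The 6 million particle example above with a coarse-graining factor of 2.
n_parcels = parcel_count(6_000_000, 2)  # 750,000 parcels
```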
As previously mentioned in
Section 2.4, the type of coarse-graining used for this analysis was first proposed by Radl et al. [
33] and further expanded to include the Hertz nonlinear contact model by Nasato et al. [
34]. Radl et al. [
33] conclude that the major hurdle in using a coarse-graining approach is to correctly compute the collision rate and inter-parcel or inter-grain stress, with there being no easy way to scale the interaction parameters in an inertial flow regime [
33]. This conclusion is significant because it will be difficult to relate the collision frequencies, forces, and stresses of this current work from an unscaled system (no coarse-graining) to a scaled (coarse-grained) one. That being said, if the overall trends are followed, coarse-graining may still prove useful in an industrial application.
To optimize an industrial-scale unit, a statistical analysis needs to be performed on parameters including collision frequencies and the forces acting on particles. At first glance, it would therefore seem that coarse-graining should not be used in this type of setting because of the disparity in inter-grain stresses between the unscaled and scaled systems. That being said, the goal of numerical simulations, and of any experimental study for that matter, is to find the optimum setup through an analysis of changes in geometric and operating parameters. Since this is the ultimate practical goal, and exact answers in the physical space may not be necessary, one can, for example, compare two different geometries of an unscaled system against a scaled system using those same geometries.
To investigate the accuracy of coarse-graining, two sets of simulations with the same operating parameters as the Hardalupas1 case are performed. The first set of simulations is unscaled, while the second set is scaled. In each set, a plate is inserted at two different axial locations:
and
relative to the nozzle exit, resulting in four simulations in total. The collision frequencies and the statistics of the forces applied to the particles when they hit the plate are then output for both the and cases. The ratio of the results obtained from the and simulations is calculated, and the ratios are compared between the scaled and unscaled systems to determine the accuracy of coarse-graining. The collision frequencies and the means/variances of the normal/tangential forces are analyzed for all collisions (Figure 8a), and the results are also analyzed grouped by location in intervals of on the plate. These results are then used to determine whether the coarse-graining method predicts results that follow the trends of the unscaled system.
The ratios we are comparing for collision frequencies are given by
where
and
are the collision frequencies for the unscaled DEM system for the simulations with the 5D and 15D plates, respectively, and and are the collision frequencies for the scaled system with a coarse-graining factor of 2 for the 5D and 15D plates, respectively. Similarly, for the force statistics we have
The collision frequencies and force statistics are then output for all collisions (Figure 8a) and for the concentric regions (Figure 8b), and the calculated ratios and associated errors are shown in Table 5.
It should be noted, for completeness, that the percent error is calculated from the traditional formula given by
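The comparison pipeline reduces to the following sketch. The variable names, the direction of the 15D-over-5D ratio, and the sample frequencies are assumptions of this sketch, not values from the study:

```python
def ratio(f_15d, f_5d):
    """Ratio of a statistic between the 15D and 5D plate simulations."""
    return f_15d / f_5d

def percent_error(approx, exact):
    """Traditional percent error, with the unscaled ratio as reference."""
    return abs(approx - exact) / abs(exact) * 100.0

# Illustrative collision frequencies (collisions per second).
r_unscaled = ratio(f_15d=4.0e4, f_5d=5.0e4)  # unscaled DEM
r_scaled = ratio(f_15d=4.2e4, f_5d=5.0e4)    # coarse-grained, factor 2
err = percent_error(r_scaled, r_unscaled)
```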
When analyzing all the collisions acting on the plate, the error is less than for all parameters, as indicated in Table 5. These results demonstrate the accuracy of comparing ratios of the scaled to the unscaled system when analyzing all collisions on the plate.
To further analyze the trends of particle collisions with the plate, the collisions are considered independently within specific concentric regions on the plate, as shown in Figure 8b, and the same analysis of collision frequency and force statistics is performed.
In Figure 9, the relative errors between the scaled and unscaled system ratios are plotted for each region. A slightly different conclusion is reached than when analyzing all collisions together. The majority of the mean values have an error of less than , with the only region above this being at , with an error of just above . The highest errors in the force statistics are the variance errors, which are in line with the collision frequency errors; most of the error is observed in regions where the collision frequency is low. To investigate this further, Figure 9b shows the individual collision frequencies for both the unscaled and scaled systems at the 5D and 15D plate positions, with the corresponding relative errors in the collision frequency ratios. The collision frequency is high near the center region, with a very low percentage error; conversely, the collision frequency is lowest in the outer regions, with a relatively high percentage error. Considering all this, along with the very low error of less than for all collisions, it can be concluded that relative accuracy has been achieved that provides significant results for the practical application of an industrial flow problem.
A kernel density estimate (KDE) plot is then used to visualize the distribution of collisions on the plate (Figure 10). This plot uses a Gaussian smoothing algorithm to estimate the probability density function, allowing the distribution of particle impacts across the plate to be visualized. The probability of particle collisions between and is very comparable, with the largest difference towards the outer regions, as demonstrated when comparing at location (Figure 10b and Figure 10d). This reassuringly confirms the use of coarse-graining for an industrial flow optimization problem.
Of special interest is comparing the flow profiles both inside the nozzle and outside in the jet. This could be beneficial in future simulations because if flow development occurs sooner with coarse-graining, the computational cost is further reduced by allowing a shorter nozzle.
Very little difference in the length of full development for the particles is observed in all parameters (Figure 11). Furthermore, there is also very little difference between the scaled and unscaled systems in the jet stream for all fields, with the highest error observed for the root mean square velocity (Figure 12).
If the desire is to optimize an industrial unit, then the goal is to pinpoint a design that performs better than another. Therefore, from a practical standpoint, if relatively accurate answers are achieved that follow the same trends as changes in the design of the physical system, then the goal can be reached from numerical simulations alone. With that in mind, coarse-graining is deemed a very useful tool in the optimization of an industrial unit for high-speed jet flows because of the significant reduction in simulation cost. That being said, these results and conclusions should not be extended to other types of systems, in particular systems in which a large number of particle-particle collisions in a quasi-steady-state inertial regime significantly dictates the bulk flow behavior.