GPU-enabled particle-particle particle-tree scheme for simulating dense stellar cluster system
- Masaki Iwasawa^{1, 2},
- Simon Portegies Zwart^{2} and
- Junichiro Makino^{1, 3}
https://doi.org/10.1186/s40668-015-0010-1
© Iwasawa et al. 2015
Received: 19 February 2014
Accepted: 15 June 2015
Published: 3 July 2015
Abstract
We describe the implementation and performance of the \(\mathrm {P}^{3}\mathrm{T}\) (Particle-Particle Particle-Tree) scheme for simulating dense stellar systems. In \(\mathrm{P}^{3}\mathrm{T}\), the force experienced by a particle is split into short-range and long-range contributions. Short-range forces are evaluated by direct summation and integrated with the fourth-order Hermite predictor-corrector method with block timesteps. For long-range forces, we use a combination of the Barnes-Hut tree code and the leapfrog integrator. The tree part of our simulation environment is accelerated using graphics processing units (GPUs), whereas the direct summation is carried out on the host CPU. Our code delivers excellent performance and accuracy for star cluster simulations with large numbers of particles, even when the core size of the star cluster is small.
Keywords
methods: N-body simulations

PACS Codes
95.10.Ce, 98.10.+z

1 Background
Direct N-body simulation has been the most useful tool for studying the evolution of collisional stellar systems such as star clusters and galactic centers (Aarseth 1963). The force calculation, whose cost is \(O(N^{2})\), is the most compute-intensive part of direct N-body simulations. Barnes and Hut (1986) developed a scheme that reduces the calculation cost to \(O(N\log N)\) by constructing a tree structure and evaluating multipole expansions. Dehnen (2002, 2014) developed a scheme that reduces the calculation cost to \(O(N)\) by combining the fast multipole method (Greengard and Rokhlin 1987) with the tree code. Recently, graphics processing units (GPUs), devices originally developed for rendering graphics, have come into use for scientific simulations. The tree code has been implemented on GPUs, where it is much faster than on CPUs (Gaburov et al. 2010; Bédorf et al. 2012). Bédorf et al. (2014) parallelized the tree code over GPUs and showed good scalability up to 18,600 GPUs. They also simulated the Milky Way Galaxy with N of up to 242 billion and reported an average calculation time per iteration of 4.8 seconds on 18,600 GPUs.
Tree schemes are widely used for simulations of collisionless systems. For collisional systems, however, the use of the tree code has been very limited. One reason might be that a collisional stellar system spans a wide range of timescales, so it is essential that each particle has its own integration timestep. This scheme is called the individual timestep or the block timestep (McMillan 1986). However, when the tree code and the block timestep are used together, the tree structure must be reconstructed at every block timestep, because the positions of the integrated particles are updated. The cost of the usual complete reconstruction of the tree is \(O(N\log N)\) and is not negligible.
To reduce the cost of reconstructing the tree, McMillan and Aarseth (1993) introduced a local reconstruction of the tree. They demonstrated good performance, but there seems to be no obvious way to parallelize their scheme.
Recently, Oshino et al. (2011) introduced another approach to combining the tree code and the block timesteps, which they called the \(\mathrm{P}^{3}\mathrm{T}\) scheme. This scheme is based on the idea of Hamiltonian splitting (Kinoshita et al. 1991; Wisdom and Holman 1991; Duncan et al. 1998; Chambers 1999; Brunini and Viturro 2003; Fujii et al. 2007; Moore and Quillen 2011). In the \(\mathrm {P}^{3}\mathrm{T}\) scheme, the Hamiltonian of the system is split into short-range and long-range parts, which are integrated with different integrators. The long-range part is evaluated with the tree code and integrated using the leapfrog scheme with a shared timestep. The short-range part is evaluated with direct summation and integrated using the fourth-order Hermite scheme (Makino and Aarseth 1992) with block timesteps. They investigated the accuracy and performance of the \(\mathrm{P}^{3}\mathrm{T}\) scheme for planetary-formation simulations and showed that it achieves high performance.
In this paper, we present an implementation of the \(\mathrm {P}^{3}\mathrm{T}\) scheme on GPUs and report its accuracy and performance for star cluster simulations. We find that the \(\mathrm{P}^{3}\mathrm{T}\) scheme performs very well for star cluster simulations, even when the core of the cluster becomes small.
The structure of this paper is as follows. In Section 2, we briefly describe the \(\mathrm{P}^{3}\mathrm{T}\) scheme. In Section 3, we report the accuracy and performance of the \(\mathrm{P}^{3}\mathrm{T}\) scheme. We summarize these results in Section 4.
2 Methods
2.1 Formulation
Here \(W(s_{ij})\) is a smooth transition function. A suitable \(W(s_{ij})\) is zero when the distance between two particles is smaller than the inner cutoff radius \(r_{\mathrm{in}}\) and unity when the distance is larger than the outer cutoff radius \(r_{\mathrm {cut}}\). This splitting was introduced by Chambers (1999) to avoid undesirable energy errors from close encounters between particles. A similar splitting has been used in the \(\mathrm{P}^{3}\mathrm{M}\) (Particle-Particle Particle-Mesh) scheme, in which the long-range part of the interaction is evaluated using FFTs (Hockney and Eastwood 1981).
The tree algorithm is used for the evaluation of \(\boldsymbol {F}_{\mathrm{soft},i}\) to reduce the calculation cost.
This \(K(x)\) is the lowest-order polynomial satisfying the requirement that derivatives up to the third order are zero at \(x=0\) and \(x=1\) (i.e. the lowest-order polynomial is of seventh order, because there are eight boundary conditions at \(x=0\) and \(x=1\)).
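For concreteness, the unique polynomial satisfying these eight conditions is \(K(x)=35x^{4}-84x^{5}+70x^{6}-20x^{7}\); its derivative, \(140x^{3}(1-x)^{3}\), vanishes to third order at both ends. A minimal sketch in Python follows; the rescaling of the pair separation to \(x\in[0,1]\) and the function names are our assumptions:

```python
import numpy as np

def K(x):
    # Unique seventh-order polynomial with K(0) = 0, K(1) = 1 and
    # first, second and third derivatives vanishing at x = 0 and x = 1
    # (eight boundary conditions, hence degree seven).
    x = np.clip(x, 0.0, 1.0)
    return x**4 * (35.0 - 84.0 * x + 70.0 * x**2 - 20.0 * x**3)

def transition(r, r_in, r_cut):
    # Rescale the pair separation r to x in [0, 1] between the inner
    # and outer cutoff radii (this mapping is our assumption about how
    # K is applied); the result is 0 for r < r_in and 1 for r > r_cut.
    return K((r - r_in) / (r_cut - r_in))
```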
- (1)
At time t, calculate, using the tree code, the acceleration due to \(H_{\mathrm{soft}}\), \(\boldsymbol {a}_{{\mathrm{soft}},i}\), and construct a list of all particles that come within \(r_{\mathrm{cut}}\) of particle i during \(\Delta t_{\mathrm{soft}}\). Here, \(\Delta t_{\mathrm{soft}}\) is the timestep for the soft Hamiltonian.
- (2)
Update the velocities of all particles with \(\boldsymbol {v}_{\mathrm{new}, i}=\boldsymbol {v}_{\mathrm{old},i}+(1/2)\Delta t_{\mathrm{soft}} \boldsymbol {a}_{\mathrm{soft},i}\).
- (3)
Integrate all particles to time \(t+\Delta t_{\mathrm{soft}}\) under \(H_{\mathrm{hard}}\), using the neighbour list and the fourth-order Hermite integrator with block timesteps.
- (4)
Calculate the acceleration due to \(H_{\mathrm{soft}}\) at the new time \(t+\Delta t_{\mathrm{soft}}\) and update the velocities of all particles with \(\boldsymbol {v}_{\mathrm{new},i}=\boldsymbol {v}_{\mathrm{old},i}+(1/2)\Delta t_{\mathrm{soft}} \boldsymbol {a}_{\mathrm{soft},i}\).
- (5)
Go back to step 2.
Here η is the accuracy parameter of the timestep, with a typical value of 0.1; \(\Delta t_{\max}\) is the maximum timestep, which should be smaller than \(\Delta t_{\mathrm{soft}}\); \(\boldsymbol {a}_{i}^{(n)}\) is the nth time derivative of the acceleration of particle i; \(a_{0}\) is a constant introduced to prevent \(\Delta t_{i}\) from becoming too small when the distance to the nearest neighbor is close to \(r_{\mathrm{cut}}\); and α is a parameter controlling \(a_{0}\). In that case the acceleration from \(H_{\mathrm{hard}}\) becomes very small, and there is no need for a very small \(\Delta t_{i}\). According to Oshino et al. (2011), for \(\alpha\le1\), α hardly affects the energy error. We therefore set \(\alpha= 0.1\) for all simulations.
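A minimal sketch of such a criterion, assuming the standard fourth-order Aarseth formula with \(a_{0}\) added to the acceleration magnitude (the precise placement of \(a_{0}\) and the power-of-two rounding are our assumptions):

```python
import numpy as np

def hermite_timestep(a, a1, a2, a3, eta, dt_max, a0=0.0):
    """Aarseth-style timestep for the Hermite (hard) part.

    a, a1, a2, a3: hard acceleration of particle i and its first three
    time derivatives (3-vectors). a0 floors the acceleration so that
    dt does not collapse when the hard force is tiny; adding a0 to |a|
    in the standard criterion is our assumed placement.
    """
    s0 = np.linalg.norm(a) + a0
    s1 = np.linalg.norm(a1)
    s2 = np.linalg.norm(a2)
    s3 = np.linalg.norm(a3)
    dt = np.sqrt(eta * (s0 * s2 + s1 * s1) / (s1 * s3 + s2 * s2))
    # Block timesteps: round down to a power of two, capped by dt_max.
    return 2.0 ** np.floor(np.log2(min(dt, dt_max)))
```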
Symbols and definitions for the accuracy parameters of the \(\mathrm{P}^{3}\mathrm{T}\) scheme:

| Symbol | Definition |
| --- | --- |
| α | timestep softening; α = 0.1 for all runs |
| γ | ratio of inner to outer cutoff radius (\(r_{\mathrm{in}}/r_{\mathrm{cut}}\)); γ = 0.1 for all runs |
| \(\Delta r_{\mathrm{buff}}\) | width of the buffer shell; standard value \(\Delta r_{\mathrm{buff}}=3\sigma\Delta t_{\mathrm{soft}}\) |
| \(\Delta t_{\mathrm{soft}}\) | timestep of the soft part; standard value \(\Delta t_{\mathrm{soft}}=(1/256)(N/16\mathrm{K})^{-1/3}\) |
| \(\Delta t_{\max}\) | maximum timestep of the hard part; standard value \(\Delta t_{\max}=\Delta t_{\mathrm{soft}}/4\) |
| ϵ | Plummer softening length; standard value ϵ = 4/N |
| η | accuracy parameter for the timestep criterion; standard value η = 0.1 |
| \(r_{\mathrm{cut}}\) | outer cutoff radius of the smooth transition functions W and K; standard value \(r_{\mathrm{cut}}=4\Delta t_{\mathrm{soft}}\) |
| \(r_{\mathrm{in}}\) | inner cutoff radius of the smooth transition functions W and K (\(r_{\mathrm{in}}=\gamma r_{\mathrm{cut}}\)) |
| θ | opening criterion for the tree; standard value θ = 0.4 |
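To make the standard choices concrete, a small helper collecting the defaults from the table above (class and property names are ours; σ is supplied by the caller since it evolves with the system):

```python
from dataclasses import dataclass

@dataclass
class P3TParams:
    """Standard accuracy parameters from the table above (N-body units)."""
    N: int
    alpha: float = 0.1   # timestep softening
    gamma: float = 0.1   # r_in / r_cut
    eta: float = 0.1     # Hermite timestep accuracy
    theta: float = 0.4   # tree opening angle

    @property
    def dt_soft(self):
        # standard value: (1/256) (N / 16K)^(-1/3), with K = 2**10
        return (1.0 / 256.0) * (self.N / (16 * 1024)) ** (-1.0 / 3.0)

    @property
    def dt_max(self):
        return self.dt_soft / 4.0

    @property
    def eps(self):
        # Plummer softening length
        return 4.0 / self.N

    @property
    def r_cut(self):
        return 4.0 * self.dt_soft

    @property
    def r_in(self):
        return self.gamma * self.r_cut

    def dr_buff(self, sigma):
        # buffer-shell width for a given velocity dispersion sigma
        return 3.0 * sigma * self.dt_soft
```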
2.2 Implementation on GPUs
Even with the Barnes-Hut tree algorithm, obtaining \(\boldsymbol {F}_{\mathrm{soft},i}\) is still costly and dominates the total calculation time (Oshino et al. 2011). To accelerate this part, we use GPUs, modifying the sequoia library (Bédorf, Gaburov and Portegies Zwart, submitted to ComAC), on which the high-performance tree code for parallel GPUs, Bonsai (Bédorf et al. 2012), is based. Our library calculates the long-range forces on all particles, \(\boldsymbol {F}_{\mathrm{soft},i}\), with the Barnes-Hut tree algorithm (up to the quadrupole moment), whereas \(\boldsymbol {F}_{\mathrm {hard},i}\) is calculated on the host computer. The library also returns, for each particle, the list of particles within a distance h of it. We use this list of neighbors to calculate \(\boldsymbol {F}_{\mathrm{hard},i}\). The value of h should be sufficiently larger than \(r_{\mathrm{cut}}\) to guarantee that particles not on the neighbor list of particle i do not enter the sphere of radius \(r_{\mathrm{cut}}\) around particle i during the time interval \(\Delta t_{\mathrm{soft}}\).
We call the sphere with radius \(r_{\mathrm{cut}}\) the neighbor sphere, and the shell between the sphere of radius h and the neighbor sphere the buffer shell. Particles whose nearest neighbor is outside the sphere of radius h are considered isolated, and particles on the neighbor list are considered neighbor particles. We denote the width of the buffer shell by \(\Delta r_{\mathrm{buff}}\) (i.e. \(h=r_{\mathrm{cut}}+\Delta r_{\mathrm{buff}}\)). One soft step then proceeds as follows; a code sketch is given after the list.
- (1)
Evaluate the long-range forces on all particles, \(\boldsymbol {F}_{\mathrm {soft},i}\), using the GPU.
- (2)
Particles are divided into two groups, isolated and non-isolated, using the neighbour list made on the GPU.
- (3)
For non-isolated particles, \(\boldsymbol {F}_{\mathrm{hard},i}\) are calculated on the host computer.
- (4)
All particles receive a velocity kick from \(\boldsymbol {F}_{\mathrm{soft},i}\) for \(\Delta t_{\mathrm{soft}}/2\).
- (5)
Isolated particles are drifted by \(\boldsymbol {r}_{i} \leftarrow\boldsymbol {r}_{i}+\Delta t_{\mathrm{soft}}\boldsymbol {v}_{i}\).
- (6)
Non-isolated particles are integrated with the fourth-order Hermite scheme for \(\Delta t_{\mathrm{soft}}\).
- (7)
Evaluate \(\boldsymbol {F}_{\mathrm{soft},i}\) and make the neighbour list in the same way as in steps 1 and 2.
- (8)
All particles receive the velocity kick again for \(\Delta t_{\mathrm{soft}}/2\).
- (9)
Go back to step 3.
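For illustration, a minimal sketch of steps 1-8 in Python; `gpu.tree_force_and_neighbours` and `host.hermite_integrate` are hypothetical stand-ins for the sequoia-based GPU library and the host-side Hermite integrator, and `parts` is assumed to hold numpy position and velocity arrays:

```python
import numpy as np

def p3t_step(parts, dt_soft, r_cut, dr_buff, gpu, host):
    """One soft step of the GPU-split scheme (steps 1-8 above)."""
    h = r_cut + dr_buff

    # (1)-(2): soft forces and neighbour lists on the GPU;
    # particles with an empty neighbour list are isolated.
    a_soft, nb = gpu.tree_force_and_neighbours(parts, h)
    iso = np.array([len(lst) == 0 for lst in nb])

    # (4): first half kick for all particles.
    parts.vel += 0.5 * dt_soft * a_soft

    # (5): isolated particles drift freely for dt_soft.
    parts.pos[iso] += dt_soft * parts.vel[iso]

    # (3), (6): non-isolated particles are integrated under H_hard
    # with direct-summation forces from their neighbour lists.
    host.hermite_integrate(parts, np.where(~iso)[0], nb, dt_soft)

    # (7): soft forces at t + dt_soft; this evaluation also serves
    # as steps 1-2 of the next iteration (step 9).
    a_soft, nb = gpu.tree_force_and_neighbours(parts, h)

    # (8): second half kick for all particles.
    parts.vel += 0.5 * dt_soft * a_soft
```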
3 Results
3.1 Accuracy and performance
We performed a number of test calculations using the \(\mathrm {P}^{3}\mathrm{T}\) scheme on GPUs to study its accuracy and performance. In this section, we describe the results of these tests. For most of them we adopted a Plummer model (Plummer 1911) with 128K (hereafter \(\mathrm{K}=2^{10}\)) equal-mass particles as the initial condition. We use the so-called N-body units, or Heggie units, in which the total mass \(\mathrm{M}=1\), the gravitational constant \(\mathrm{G}=1\) and the total energy \(E=-1/4\) (Heggie and Mathieu 1986). To avoid the singularity of the gravitational potential, we use Plummer softening and set \(\epsilon= 4/N\). Since this value is the typical separation of a hard binary in N-body units, we can follow the evolution of the system up to the moment of core collapse.
Note that, in this paper, we use the energy error as an indicator of the accuracy of the scheme. However, energy conservation does not guarantee the accuracy of a simulation (though it is necessary). We therefore perform realistic simulations in Section 3.2 and check the statistical properties of the stellar systems by comparing the results with those of the Hermite scheme, which is widely used in collisional stellar system simulations. As we will see later, for simulations of the core collapse of a star cluster, when the relative energy error is ≲10^{−3} at the moment of core collapse, the behavior of the core collapse with the \(\mathrm{P}^{3}\mathrm{T}\) scheme agrees very well with that with the Hermite scheme.
3.1.1 Accuracy
With the \(\mathrm{P}^{3}\mathrm{T}\) scheme, we have six accuracy parameters. First, we discuss how each parameter controls the accuracy of the \(\mathrm{P}^{3}\mathrm{T}\) scheme; we then describe the accumulation of the energy error in a long-term integration. To measure energy errors accurately, we calculate the potential energies by direct summation instead of with the tree code for all runs in this paper.
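For reference, a minimal sketch of this direct-summation energy measurement (function names are ours; G = 1 in N-body units):

```python
import numpy as np

def total_energy(pos, vel, mass, eps):
    """Total energy with the potential evaluated by direct summation
    (O(N^2)), using the Plummer softening of the text."""
    kinetic = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))
    potential = 0.0
    for i in range(len(mass) - 1):
        dr = pos[i + 1:] - pos[i]
        r = np.sqrt(np.sum(dr**2, axis=1) + eps**2)
        potential -= np.sum(mass[i] * mass[i + 1:] / r)
    return kinetic + potential

# Relative energy error used throughout: |(E(t) - E(0)) / E(0)|.
```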
Effect of \(r_{\mathrm{cut}}\), \(\Delta t_{\mathrm{soft}}\) and θ
We can see that the error is smaller for smaller θ, smaller \(\Delta t_{\mathrm{soft}}\), or larger \(r_{\mathrm{cut}}\). Roughly speaking, the error depends on two terms, \(\sigma\Delta t_{\mathrm{soft}}/r_{\mathrm{cut}}\) and θ. If \(\sigma\Delta t_{\mathrm{soft}}/r_{\mathrm{cut}}\) is large, it determines the error; in this regime, the error is dominated by the truncation error of the leapfrog integrator. If it is small enough, θ determines the error; in other words, the tree force error dominates the total error. Even for a value of θ as small as 0.2, the tree force error dominates if \(\sigma\Delta t_{\mathrm{soft}}/r_{\mathrm{cut}}\lesssim0.05\).
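A compact statement of this rule of thumb (the threshold 0.05 is the value quoted for \(\theta=0.2\); applying it to other θ is an extrapolation):

```python
def dominant_error(dt_soft, r_cut, sigma, c=0.05):
    # sigma * dt_soft / r_cut compares the distance travelled per soft
    # step with the cutoff radius; above the threshold c, the leapfrog
    # truncation error dominates, below it the tree force error does.
    x = sigma * dt_soft / r_cut
    return "leapfrog truncation" if x > c else "tree force (theta)"
```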
Effect of \(\Delta r_{\mathrm{buff}}\)
Effect of \(\Delta t_{\max}\) and η
Long term integration
3.1.2 Calculation cost
In this section, we discuss the calculation cost of the \(\mathrm {P}^{3}\mathrm{T}\) scheme and its dependence on the number of particles N, required accuracy, and other parameters.
First, we construct a simple theoretical model of the dependence of the calculation cost on the parameters of the integration scheme, such as N, \(\Delta t_{\mathrm{soft}}\), θ and \(r_{\mathrm{cut}}\). We then derive the optimal set of parameters from the model and compare the model with the results of the numerical tests. We find that the calculation cost per unit time is proportional to \(N^{4/3}\).
Theoretical model
The calculation cost for the force evaluations in \(\mathrm{P}^{3}\mathrm {T}\) is split into the tree part and the Hermite part. For the tree part, the cost of evaluating the forces on all particles per tree step is \(O(\theta^{-3}N\log N)\). Since we use a constant timestep for the tree part, the cost of integrating the particles per unit time for the tree part is \(O ( \theta^{-3}N\log N/\Delta t_{\mathrm {soft}} )\).
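This scaling can be written down directly (a sketch; the prefactor is implementation-dependent and omitted, so the value is useful only for comparing parameter choices):

```python
import math

def tree_cost_per_unit_time(N, theta, dt_soft):
    # O(theta^-3 N log N / dt_soft), up to an implementation-dependent
    # prefactor.
    return N * math.log(N) / (theta**3 * dt_soft)
```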
The bottom panel of Figure 7 shows that all curves eventually approach constant values for both large and small \(r_{\mathrm{cut}}\). For large \(r_{\mathrm{cut}}\), the timesteps of the non-isolated particles are determined by N, not by \(r_{\mathrm{cut}}\) (see equation (25)), whereas for small values of \(r_{\mathrm{cut}}\) the non-isolated particles have timesteps of \(\Delta t_{\max}\). This is because most neighbouring particles are in the buffer shell and not in the neighbour sphere. For the runs with \(\Delta t_{\mathrm {soft}}=1/2{,}048, 1/1{,}024\mbox{ and }1/512\), we can see bumps in \(N_{\mathrm{step}}\) at \(r_{\mathrm{cut}} \sim1/512\) due to the dependence on \(r_{\mathrm{cut}}\) shown in equation (26).
Optimal set of accuracy parameters
In this section, we derive the optimal values of \(r_{\mathrm{cut}}\) and \(\Delta t_{\mathrm{soft}}\) from the point of view of balancing the calculation costs between the tree and the Hermite parts; in other words, we express \(r_{\mathrm{cut}}\) and \(\Delta t_{\mathrm{soft}}\) as functions of N such that \(N_{\mathrm{int}, \mathrm{hard}}/N_{\mathrm{int}, \mathrm{soft}}\) is independent of N. Following the discussion in Section 3.1.1, and because the energy errors can be controlled through \(\Delta t_{\mathrm{soft}}/r_{\mathrm{cut}}\) and \(\Delta t_{\mathrm{soft}}/\Delta r_{\mathrm{buff}}\), both \(r_{\mathrm{cut}}\) and \(\Delta r_{\mathrm{buff}}\) should be proportional to \(\Delta t_{\mathrm{soft}}\).
These requirements are met for \(N_{\mathrm{int}, \mathrm{hard}} \propto N^{7/3}(r_{\mathrm{cut}}+\Delta r_{\mathrm{buff}})^{3}\) (or \(N^{2}(r_{\mathrm{cut}}+\Delta r_{\mathrm{buff}})^{3}\)), \(\Delta t_{\mathrm{soft}} \propto N^{-1/3}\) and \(r_{\mathrm{cut}} \propto N^{-1/4}\); both \(N_{\mathrm{int}, \mathrm{hard}}\) and \(N_{\mathrm{int}, \mathrm{soft}}\) are then proportional to \(N^{4/3}\) (or \(N^{5/4}\)). Here we have neglected the \(\log N\) dependence of the tree part.
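As a sketch, these scalings can be applied by anchoring them to the standard values at \(N=16\mathrm{K}\) from the parameter table; the anchoring and the function name are our assumptions:

```python
def optimal_parameters(N, N_ref=16 * 1024,
                       dt_ref=1.0 / 256.0, rcut_ref=1.0 / 64.0):
    # dt_soft ~ N^(-1/3) and r_cut ~ N^(-1/4), anchored at the standard
    # choices for N_ref = 16K from the parameter table.
    dt_soft = dt_ref * (N / N_ref) ** (-1.0 / 3.0)
    r_cut = rcut_ref * (N / N_ref) ** (-1.0 / 4.0)
    return dt_soft, r_cut
```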
We also perform simulations using a direct Hermite integrator with the same η, and a standard tree code with the same θ and \(\Delta t_{\mathrm{soft}}\). The former uses the Sapporo GPU library (Gaburov et al. 2009) and the latter the Bonsai GPU library (Bédorf et al. 2012). The calculation time of our \(\mathrm{P}^{3}\mathrm{T}\) implementation is also proportional to \(N^{4/3}\), as presented above, while that of the Hermite integrator is proportional to \(N^{7/3}\). The \(\mathrm{P}^{3}\mathrm{T}\) scheme is faster than the direct Hermite integrator for \(N > 16\mathrm{K}\), and at \(N=1\mathrm{M}\) (\(\mathrm{M}=2^{20}\)) it is about 50 times faster. The pure tree code is slightly faster than the \(\mathrm{P}^{3}\mathrm{T}\) scheme, but its integration errors are worse by several orders of magnitude (see Figures 6 and 10).
3.2 Examples of practical applications
In Sections 3.1.1 and 3.1.2, we presented a detailed discussion of the accuracy and performance of our \(\mathrm{P}^{3}\mathrm{T}\) scheme. However, those were simple simulations, with stellar systems in dynamical equilibrium. In this section, we study the performance of our \(\mathrm{P}^{3}\mathrm{T}\) scheme applied to more realistic, and more difficult, simulations by comparing the results with those of the Hermite scheme. In Section 3.2.1, we discuss the simulation of star clusters up to core collapse. In Section 3.2.2, we discuss a galaxy model with a massive central black hole binary.
3.2.1 Star cluster down to core collapse
In this section, we discuss the performance of our \(\mathrm {P}^{3}\mathrm{T}\) scheme for the simulation of the core collapse of a star cluster. First, we describe the initial condition and parameters of the integration scheme. Next, we compare the calculation results obtained by the \(\mathrm{P}^{3}\mathrm{T}\) and Hermite schemes, and finally, the calculation speed.
Initial conditions
We apply the \(\mathrm{P}^{3}\mathrm{T}\) scheme to the evolution of a star cluster consisting of 16K stars up to the moment of core collapse (Lynden-Bell and Eggleton 1980). We use an equal-mass Plummer model as the initial density profile and adopt \(\eta=0.1\). We apply the Plummer softening \(\epsilon= 4/N = 1/4{,}096\). The simulations are terminated when the core number-density exceeds 10^{6}, at which point the mean interparticle distance in the core is comparable to ϵ. Next, we set θ. We must choose θ so that the tree force error is smaller than the force fluctuations responsible for two-body relaxation. Hernquist et al. (1993) pointed out that, for \(\theta=0.5\) with monopole and quadrupole terms, the tree-force error is much smaller than these fluctuations. We therefore choose \(\theta=0.4\) with quadrupole terms as the standard model. For comparison, we also perform a run with \(\theta=0.8\).
To resolve the motions of the particles in the core, we require \(\Delta t_{\mathrm{soft}}\) to be smaller than 1/128 of the dynamical time of the core (\(\sim\sqrt{3\pi/16\rho_{\mathrm{core}}}\), where \(\rho_{\mathrm {core}}\) is the core density). To reduce the calculation cost of the Hermite part, we require \(r_{\mathrm{cut}} \propto\rho_{\mathrm{core}}^{-1/3}\) and set the initial value \(r_{\mathrm{cut}} = 1/64\). We also adjust \(\Delta r_{\mathrm{buff}} = 3\sigma_{\mathrm{core}}\Delta t_{\mathrm{soft}}\), where \(\sigma _{\mathrm{core}}\) is the velocity dispersion in the core, and \(\Delta t_{\mathrm{max}} =\Delta t_{\mathrm{soft}}/4\), as \(\Delta t_{\mathrm{soft}}\) and \(\sigma_{\mathrm{core}}\) change. To calculate \(\rho_{\mathrm{core}}\) and \(\sigma_{\mathrm{core}}\), we use the formula proposed by Casertano and Hut (1985). The same simulation is repeated using the fourth-order Hermite scheme with block timesteps and the same value of \(\eta= 0.1\).
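For illustration, a minimal sketch of this parameter update; the function name, the reference density `rho0`, and the power-of-two rounding of \(\Delta t_{\mathrm{soft}}\) are our assumptions (the rules for \(t_{\mathrm{dyn}}\), \(r_{\mathrm{cut}}\), \(\Delta r_{\mathrm{buff}}\) and \(\Delta t_{\max}\) are those stated above):

```python
import math

def core_collapse_parameters(rho_core, sigma_core, rho0, rcut0=1.0 / 64.0):
    """Rescale the accuracy parameters as the core evolves.

    rho0 is the core density at which r_cut = rcut0 held; rounding
    dt_soft down to a power of two (our assumption) keeps the
    block-timestep hierarchy aligned.
    """
    t_dyn = math.sqrt(3.0 * math.pi / (16.0 * rho_core))
    dt_soft = 2.0 ** math.floor(math.log2(t_dyn / 128.0))
    r_cut = rcut0 * (rho_core / rho0) ** (-1.0 / 3.0)
    dr_buff = 3.0 * sigma_core * dt_soft
    dt_max = dt_soft / 4.0
    return dt_soft, r_cut, dr_buff, dt_max
```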
Results
Calculation speed
Initially the \(\mathrm{P}^{3}\mathrm{T}\) scheme is much faster than the Hermite scheme, but after \(\rho_{\mathrm{core}} \sim10^{4}\) it becomes slightly slower, because in the \(\mathrm{P}^{3}\mathrm{T}\) scheme \(\Delta t_{\mathrm{soft}}\) is proportional to \(\rho_{\mathrm{core}}^{-1/2}\). However, even for the \(\mathrm {P}^{3}\mathrm{T}\) scheme, the CPU time spent after \(\rho_{\mathrm{core}}\) reaches 10^{4} is small. As a result, the total calculation time to the moment of core collapse with the \(\mathrm{P}^{3}\mathrm{T}\) scheme is smaller than that with the Hermite scheme by a factor of two.
3.2.2 Orbital evolution of SMBH binary
In this section, we discuss the performance of the \(\mathrm {P}^{3}\mathrm{T}\) scheme applied to simulations of a galaxy with a supermassive black hole (SMBH) binary. First, we describe the initial conditions and parameters of the integration scheme. Next, we compare the calculation results obtained with the \(\mathrm{P}^{3}\mathrm{T}\) and Hermite schemes, and finally, the calculation speed.
Initial conditions and methods
We use Plummer models with \(N=16\mathrm{K}, 128\mathrm{K}\mbox{ and }256\mathrm{K}\) as the initial galaxy model. Two SMBH particles, each with a mass of 1% of that of the galaxy, are placed at positions \((\pm 0.5, 0.0, 0.0)\) with velocities \((0.0, \pm 0.5, 0.0)\). We use three values of the cutoff radius for the three different kinds of interactions. For the interaction between field stars (FSs), we set \(r_{\mathrm{cut},\mathrm{FS}-\mathrm{FS}} = 1/256\). For the interaction between the SMBHs, the force is not split and \(F_{\mathrm{soft}}=0\); in other words, the force between the SMBHs is integrated with the pure Hermite scheme. We set the cutoff radius between an SMBH and an FS to \(r_{\mathrm{cut}, \mathrm{BH}-\mathrm{FS}}=1/32\), which is large enough that \(\Delta t_{\mathrm{soft}}\) is smaller than the Kepler time of a particle orbiting the SMBH binary at a distance of \(r_{\mathrm{cut}, \mathrm{BH}-\mathrm{FS}}\). We use the Plummer softening \(\epsilon= 10^{-4}\) for the FS-FS and FS-SMBH interactions. For the SMBH-SMBH interaction, we use no softening. The accuracy parameter of the timestep criterion is \(\eta_{\mathrm{FS}}=0.1\) for FSs and \(\eta_{\mathrm{BH}}=0.03\) for SMBHs. We adopt \(\Delta r_{\mathrm{buff}} = 3\sigma\Delta t_{\mathrm{soft}}\), \(\Delta t_{\max}=\Delta t_{\mathrm{soft}}/4\) and \(\theta=0.4\).
We use \(\Delta t_{\mathrm{soft}}=1/1{,}024\) at \(T=0\) and, as the binary becomes harder, decrease \(\Delta t_{\mathrm{soft}}\) to suppress the aliasing error of the binary. As the standard model, we keep \(\Delta t_{\mathrm{soft}}\) below half of the Kepler time of the SMBH binary, \(t_{\mathrm{kep}}\). For \(N=128\mathrm{K}\) only, we also perform two other runs, with \(\Delta t_{\mathrm{soft}}< t_{\mathrm{kep}}/4\) and \(\Delta t_{\mathrm{soft}}< t_{\mathrm{kep}}\).
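A minimal sketch of this adjustment, assuming \(t_{\mathrm{kep}}=2\pi\sqrt{a^{3}/(G(m_{1}+m_{2}))}\) with G = 1 and a halving strategy for \(\Delta t_{\mathrm{soft}}\) (both the halving and the function name are our assumptions):

```python
import math

def dt_soft_for_binary(a_bin, m_bh1, m_bh2, dt_current, frac=0.5):
    # Halve dt_soft until it is below frac * t_kep of the SMBH binary;
    # frac = 1/2 is the standard run, 1/4 and 1 were also tested.
    t_kep = 2.0 * math.pi * math.sqrt(a_bin**3 / (m_bh1 + m_bh2))
    dt = dt_current
    while dt >= frac * t_kep:
        dt *= 0.5
    return dt
```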
We also perform the same simulations with the Hermite scheme, using the same \(\eta_{\mathrm{FS}}\) and \(\eta_{\mathrm{BH}}\).
Results
Calculation speed
4 Conclusions
We have described the implementation and performance of the \(\mathrm {P}^{3}\mathrm{T}\) scheme for simulating dense stellar systems. In our implementation, the tree part is accelerated using a GPU. The accuracy and performance of the \(\mathrm{P}^{3}\mathrm{T}\) scheme can be controlled through six parameters: \(r_{\mathrm{cut}}\), \(\Delta r_{\mathrm{buff}}\), \(\Delta t_{\mathrm{soft}}\), \(\Delta t_{\max}\), η and θ. We find that \(\Delta r_{\mathrm {buff}} \gtrsim2\sigma\Delta t_{\mathrm{soft}}\) is a good choice to prevent non-neighbour particles from entering the neighbour sphere. The integration errors can be controlled through \(\sigma\Delta t_{\mathrm{soft}}/r_{\mathrm{cut}}\). For \(\theta= 0.2\), if we set \(\Delta t_{\mathrm{soft}}\) to less than \(0.05 r_{\mathrm{cut}} / \sigma\), the integration error is smaller than the tree force error. For the Hermite part, if we choose \(\eta\lesssim0.2\), the errors hardly depend on \(\Delta t_{\max}\).
From the point of view of the balance of the calculation costs between the tree and Hermite parts, we derive the optimal set of accuracy parameters, and find that the calculation cost is proportional to \(N^{4/3}\).
The \(\mathrm{P}^{3}\mathrm{T}\) scheme is suitable for simulating stellar systems with large N and a high density contrast, such as star clusters or galactic nuclei. We demonstrated the efficiency of the code and showed that it is able to integrate N-body systems to the moment of core collapse. We also performed simulations of a galaxy with an SMBH binary and found that the \(\mathrm{P}^{3}\mathrm{T}\) scheme can be applied to such simulations.
Finally, we discuss the possibility of implementing in \(\mathrm{P}^{3}\mathrm {T}\) two effects that are important for star cluster evolution. The first is a tidal field, which dramatically changes the collapse time and the evaporation time of a star cluster. The tidal field can be included in the soft part.
The other is the effect of stellar-mass binaries, which play an important role in halting core collapse. In this paper, we introduced Plummer softening and neglected these binary effects. However, we could treat them by integrating stellar-mass binaries in the hard part.
Our \(\mathrm{P}^{3}\mathrm{T}\) code is incorporated in the AMUSE framework and is free to use (Portegies Zwart et al. 2013; Pelupessy et al. 2013).
The GTX680 does not have ECC (Error Check and Correct) memory. However, we do not observe any large energy errors in any of our runs, which indicates that hardware errors do not affect our results. Betz et al. (2014) performed molecular dynamics simulations to investigate the rate of bit-flip events. They observed a single bit-flip event in about 4,700 GPU hours without ECC and concluded that bit-flip errors are exceedingly rare.
Declarations
Acknowledgements
We are grateful to Jeroen Bédorf for preparing the GPU cluster and GPU library. We also thank Shoichi Oshino, Daniel Caputo and Keigo Nitadori for stimulating discussions, and Edwin van der Helm for carefully reading the manuscript. This work was supported by NWO (grants VICI [#639.073.803], AMUSE [#614.061.608] and LGM [#612.071.503]), NOVA and the LKBF.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
- Aarseth, SJ: Dynamical evolution of clusters of galaxies, I. Mon. Not. R. Astron. Soc. 126, 223-255 (1963)
- Barnes, J, Hut, P: A hierarchical \(O(N \log N)\) force-calculation algorithm. Nature 324, 446-449 (1986)
- Dehnen, W: A hierarchical \(O(N)\) force calculation algorithm. J. Comput. Phys. 179, 27-42 (2002)
- Dehnen, W: A fast multipole method for stellar dynamics. Comput. Astrophys. Cosmol. 1, 1 (2014)
- Greengard, L, Rokhlin, V: A fast algorithm for particle simulations. J. Comput. Phys. 73, 325-348 (1987)
- Gaburov, E, Bédorf, J, Portegies Zwart, S: Gravitational tree-code on graphics processing units: implementation in CUDA. Proc. Comput. Sci. 1, 1119-1127 (2010)
- Bédorf, J, Gaburov, E, Portegies Zwart, S: A sparse octree gravitational N-body code that runs entirely on the GPU processor. J. Comput. Phys. 231, 2825-2839 (2012)
- Bédorf, J, Gaburov, E, Fujii, MS, Nitadori, K, Ishiyama, T, Portegies Zwart, S: 24.77 Pflops on a gravitational tree-code to simulate the Milky Way Galaxy with 18600 GPUs. In: SC'14 Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 54-65. IEEE Press, Piscataway (2014). doi:10.1109/SC.2014.10
- McMillan, SLW: The vectorization of small-n integrators. In: Hut, P, McMillan, SLW (eds.) The Use of Supercomputers in Stellar Dynamics. Lecture Notes in Physics, vol. 267, pp. 156-161. Springer, Berlin (1986)
- McMillan, SLW, Aarseth, SJ: An \(O(N \log N)\) integration scheme for collisional stellar systems. Astrophys. J. 414, 200-212 (1993)
- Oshino, S, Funato, Y, Makino, J: Particle-particle particle-tree: a direct-tree hybrid scheme for collisional N-body simulations. Publ. Astron. Soc. Jpn. 63, 881-892 (2011)
- Kinoshita, H, Yoshida, H, Nakai, H: Symplectic integrators and their application to dynamical astronomy. Celest. Mech. Dyn. Astron. 50, 59-71 (1991)
- Wisdom, J, Holman, M: Symplectic maps for the n-body problem. Astron. J. 102, 1528-1538 (1991)
- Duncan, MJ, Levison, HF, Lee, MH: A multiple time step symplectic algorithm for integrating close encounters. Astron. J. 116, 2067-2077 (1998)
- Chambers, JE: A hybrid symplectic integrator that permits close encounters between massive bodies. Mon. Not. R. Astron. Soc. 304, 793-799 (1999)
- Brunini, A, Viturro, HR: A tree code for planetesimal dynamics: comparison with a hybrid direct code. Mon. Not. R. Astron. Soc. 346, 924-932 (2003)
- Fujii, M, Iwasawa, M, Funato, Y, Makino, J: BRIDGE: a direct-tree hybrid N-body algorithm for fully self-consistent simulations of star clusters and their parent galaxies. Publ. Astron. Soc. Jpn. 59, 1095-1106 (2007)
- Moore, A, Quillen, AC: QYMSYM: a GPU-accelerated hybrid symplectic integrator that permits close encounters. New Astron. 16, 445-455 (2011)
- Makino, J, Aarseth, SJ: On a Hermite integrator with Ahmad-Cohen scheme for gravitational many-body problems. Publ. Astron. Soc. Jpn. 44, 141-151 (1992)
- Hockney, RW, Eastwood, JW: Computer Simulation Using Particles. McGraw-Hill, New York (1981)
- Plummer, HC: On the problem of distribution in globular star clusters. Mon. Not. R. Astron. Soc. 71, 460-470 (1911)
- Heggie, DC, Mathieu, RD: Standardised units and time scales. In: Hut, P, McMillan, SLW (eds.) The Use of Supercomputers in Stellar Dynamics. Lecture Notes in Physics, vol. 267, pp. 233-235. Springer, Berlin (1986)
- Portegies Zwart, S, Boekholt, T: On the minimal accuracy required for simulating self-gravitating systems by means of direct N-body methods. Astrophys. J. Lett. 785, L3 (2014)
- Gaburov, E, Harfst, S, Portegies Zwart, S: SAPPORO: a way to turn your graphics cards into a GRAPE-6. New Astron. 14, 630-637 (2009)
- Betz, RM, DeBardeleben, NA, Walker, RC: An investigation of the effects of hard and soft errors on graphics processing unit-accelerated molecular dynamics simulations. Concurr. Comput., Pract. Exp. 26, 2134-2140 (2014)
- Lynden-Bell, D, Eggleton, PP: On the consequences of the gravothermal catastrophe. Mon. Not. R. Astron. Soc. 191, 483-498 (1980)
- Hernquist, L, Hut, P, Makino, J: Discreteness noise versus force errors in N-body simulations. Astrophys. J. Lett. 402, L85 (1993)
- Casertano, S, Hut, P: Core radius and density measurements in N-body experiments: connections with theoretical and observational definitions. Astrophys. J. 298, 80-94 (1985)
- Begelman, MC, Blandford, RD, Rees, MJ: Massive black hole binaries in active galactic nuclei. Nature 287, 307-309 (1980)
- Makino, J, Funato, Y: Evolution of massive black hole binaries. Astrophys. J. 602, 93-102 (2004)
- Berczik, P, Merritt, D, Spurzem, R: Long-term evolution of massive black hole binaries. II. Binary evolution in low-density galaxies. Astrophys. J. 633, 680-687 (2005)
- Merritt, D, Mikkola, S, Szell, A: Long-term evolution of massive black hole binaries. III. Binary evolution in collisional nuclei. Astrophys. J. 671, 53-72 (2007)
- Nitadori, K, Makino, J: Sixth- and eighth-order Hermite integrator for N-body simulations. New Astron. 13, 498-507 (2008)
- Portegies Zwart, S, McMillan, SLW, van Elteren, E, Pelupessy, I, de Vries, N: Multi-physics simulations using a hierarchical interchangeable software interface. Comput. Phys. Commun. 183, 456-468 (2013)
- Pelupessy, FI, van Elteren, A, de Vries, N, McMillan, SLW, Drost, N, Portegies Zwart, SF: The astrophysical multipurpose software environment. Astron. Astrophys. 557, A84 (2013)