 Research
 Open Access
A new hybrid technique for modeling dense star clusters
 Carl L. Rodriguez^{1},
 Bharath Pattabiraman^{2, 3, 4},
 Sourav Chatterjee^{2, 3},
 Alok Choudhary^{2, 4},
 Wei-keng Liao^{2, 4},
 Meagan Morscher^{2} and
 Frederic A. Rasio^{2}
https://doi.org/10.1186/s40668-018-0027-3
© The Author(s) 2018
 Received: 6 August 2018
 Accepted: 22 November 2018
 Published: 28 November 2018
Abstract
The “gravitational million-body problem,” to model the dynamical evolution of a self-gravitating, collisional N-body system with ∼10^{6} particles over many relaxation times, remains a major challenge in computational astrophysics. Unfortunately, current techniques to model such systems suffer from severe limitations. A direct N-body simulation with more than 10^{5} particles can require months or even years to complete, while an orbit-sampling Monte Carlo approach cannot adequately model the dynamics in a dense cluster core, particularly in the presence of many black holes. We have developed a new technique combining the precision of a direct N-body integration with the speed of a Monte Carlo approach. Our Rapid And Precisely Integrated Dynamics code, the RAPID code, statistically models interactions between neighboring stars and stellar binaries while directly integrating the orbits of stars or black holes in the cluster core. This allows us to accurately simulate the dynamics of the black holes in a realistic globular cluster environment without the burdensome \(N^{2}\) scaling of a full N-body integration. We compare RAPID models of idealized globular clusters to identical models from the direct N-body and Monte Carlo methods. Our tests show that RAPID can reproduce the half-mass radii, core radii, black hole ejection rates, and binary properties of the direct N-body models far more accurately than a standard Monte Carlo integration, while remaining significantly faster than a full N-body integration. With this technique, it will be possible to create more realistic models of Milky Way globular clusters with sufficient rapidity to explore the full parameter space of dense stellar clusters.
1 Main text
The dynamics of dense star clusters is one of the most challenging problems of modern computational astrophysics. The large number of particles, high interaction rate, and large number of processes with vastly different physical timescales conspire to make globular clusters (GCs) and galactic nuclei (GN) uniquely difficult to model. In particular, the large numbers of black holes (BHs) in both GCs and GN often dynamically interact on much shorter timescales than the rest of the cluster (Spitzer 1969). Although comprising only a small fraction of the total cluster mass, these BHs provide the dominant energy source for GCs, especially after the BH-driven core collapse (Morscher et al. 2015). Hence, understanding their dynamics is critical to understanding the overall evolution and present-day appearance of these systems (Mackey et al. 2008). Unfortunately, since the orbital and interaction timescales of these BHs are frequently orders of magnitude smaller than the interaction timescale of a typical star in the cluster, resolving these effects can be particularly difficult.
GCs have been intensely investigated with modern stellar dynamics codes, with the majority of work focusing on two approaches. The N-body approach directly integrates the force of every particle on every other particle, with the current generation of codes (Portegies Zwart et al. 2001; Harfst et al. 2008; Nitadori and Aarseth 2012; Capuzzo-Dolcetta et al. 2013; Wang et al. 2015) making extensive use of state-of-the-art hardware acceleration and algorithmic enhancements. While extremely precise, this approach can require more than a year (e.g., Heggie 2014; Wang et al. 2016) to complete a full simulation of a realistic Milky Way GC.
As such, an approximate Monte Carlo (MC) technique is often used in place of a full direct summation (Hénon 1971; Giersz 1998; Joshi et al. 2000; Freitag and Benz 2001). Whereas an N-body approach computes the orbits of stars directly, the orbit-sampling approach assumes that particle orbits remain fixed on a dynamical timescale, changing only due to slight perturbations from two-body encounters between neighboring particles. This allows the orbits to be sampled statistically, and since computing a single orbit in a fixed spherical potential is faster than computing the precise orbits in the full-N potential of a cluster, these MC models can be generated in at most a few days or weeks. However, the assumptions of spherical symmetry and dynamical equilibrium break down in the BH-dominated core, where the potential and the particle orbits are primarily determined by a small number of particles. This can lead to a substantial underprediction of the core radii by MC techniques (compared to direct N-body), particularly during the deep collapses that produce dynamically-assembled binaries (Morscher et al. 2015; Rodriguez et al. 2016a).
GCs are formed as the result of a burst of star formation in the early universe. Approximately 10 to 20 Myr after this formation is complete, the most massive stars in the cluster collapse, yielding hundreds to thousands of BHs (Belczynski et al. 2006). As the BHs are more massive than the typical cluster star, they are rapidly driven to the center of the GC by dynamical friction (Fregeau et al. 2002); once there, the number density of BHs is sufficient to form binaries via three-body encounters. While it was long assumed that these BHs would not be retained in GCs to the present day (e.g., Sigurdsson and Hernquist 1993), recent evidence has begun to suggest otherwise.
The past decade has seen the first detections of BHs in GCs, starting with the first detection in an extragalactic GC by Maccarone et al. (2007) and several recent detections in Milky Way GCs (Chomiuk et al. 2013; Strader et al. 2013; Miller-Jones et al. 2015), including two BH candidates in M22 (Strader et al. 2012) and the recent dynamical measurement of a \(\gtrsim 4.5 M_{\odot }\) BH in NGC 3201 (Giesers et al. 2018). These observational results complemented recent theoretical results suggesting that GCs can potentially retain hundreds of BHs up to the present day (Mackey et al. 2007; Downing 2012; Morscher et al. 2013, 2015; Kremer et al. 2018; Askar et al. 2018). This has led to a new theoretical understanding that the number of BHs retained in a GC directly controls the size and density of its observational core (e.g., Merritt et al. 2004; Mackey et al. 2008; Sippel and Hurley 2013; Breen and Heggie 2013; Kremer et al. 2018a; Arca Sedda et al. 2018). The importance of BHs in GCs cannot be overstated. In addition to determining the structural and evolutionary properties of their host clusters, these BHs also have important implications for BH astrophysics. GCs can produce X-ray binaries at a significantly higher rate than the galactic field (Clark 1975), suggesting that there might be ∼100s of low-mass X-ray binaries in Milky Way GCs (Pooley et al. 2003). Furthermore, recent studies have shown that the second generation of gravitational-wave detectors can potentially detect ≳100 binary BH mergers per year from binaries forged in the cores of GCs (Rodriguez et al. 2015, 2016b; Antonini et al. 2016), with recent detections (Abbott et al. 2017) by LIGO/Virgo showing spin alignments suggestive of dynamical formation. As such, understanding the dynamics of these systems is critical.
What is needed is a technique that combines the speed of the MC approach with the precision of a direct N-body integration. In this paper, we describe a new code, the Rapid And Precisely Integrated Dynamics (RAPID) code, which combines both methods into a “best of both worlds” approach. In this method, the majority of particles are modeled with our parallel Hénon-style code, the Cluster MC (CMC) code (Pattabiraman et al. 2013), while the orbits of BHs are integrated directly with the Kira N-body integrator (Portegies Zwart et al. 2001). We find that this technique accurately reproduces the core radii and BH dynamics of a full direct N-body integration, with runtimes similar to the MC approach. Although we only integrate the BH orbits directly in the current work, the method is general, allowing us to select any population of particles in the cluster for N-body integration.
In Sect. 2, we briefly review the N-body and MC approaches and describe their combination as implemented in the RAPID code. In Sect. 3, we describe a single RAPID timestep, illustrating the technical details of the approach, while in Sect. 4, we describe the parallelization strategy that allows us to compute particle positions and velocities via orbit sampling and direct N-body integration simultaneously. In Sect. 5, we show the results of an analytic toy model, comparing the inspiral due to dynamical friction of a single particle as predicted by theory, direct N-body, and RAPID. Finally, in Sect. 6, we compare the properties of four idealized GCs as modeled by NBODY6, CMC, and RAPID. Throughout the paper, we will frequently refer to the “stars” and “BHs” in the cluster separately. In our current method, the stars are modeled with CMC and the BHs are integrated with Kira. This shorthand delineates which systems are being modeled by which technique, even though both populations are treated as point-mass particles.
2 Hybridization approach
In this section, we provide a brief overview of the current methods employed to model GCs, and describe how our approach combines the virtues of both. Both the N-body and MC approaches are the result of decades of precision work by multiple groups. For a more comprehensive description of collisional N-body dynamics, see Aarseth (2003) or Dehnen and Read (2011). A review of MC methods can be found in Freitag (2008).
It should be noted that RAPID is not the first attempt at a hybrid N-body/statistical-sampling approach to stellar dynamics. In particular, the hybrid approach developed by McMillan and Lightman (1984b) combined a Fokker-Planck sampling code with a direct N-body approach, in order to study GCs undergoing core collapse (McMillan and Lightman 1984a; McMillan 1986). The RAPID code continues this tradition of attempting to “have it all”, by combining the best of the direct-integration and statistical-sampling methods.
2.1 Direct Nbody integration
The physical principle behind a direct N-body integrator is simple: since the force on any given particle is the sum of the gravitational forces from every other particle in the system, the most accurate way to model such a system is to numerically sum all the forces. The most frequently used of these codes, the NBODY series, have been improved and finely tuned with additional physics, including stellar evolution (Hurley et al. 2001), algorithmic regularization (Aarseth 1999), post-Newtonian chain regularization (Aarseth 2012), and GPU acceleration (Nitadori and Aarseth 2012). With advanced hardware and a minimal number of simplifying assumptions, direct integration is the most precise method available for modeling dense stellar systems.
However, this precision comes at a cost. Naively, the cost of an N-body integration scales as \(N^{2}\), since one must evaluate the force of every particle on every other particle at every timestep. In practice, most modern N-body codes do not evaluate the force between every pair of particles at every timestep, instead opting for a variable “block” timestep approach in which only certain particles have their forces re-evaluated at a given time. Despite this, and many other algorithmic improvements (such as employing a nearest-neighbor scheme to accelerate force evaluations), the computational cost to integrate a cluster of N particles forward by a given physical time scales as \(\mathcal{O}(N ^{2})\), regardless of the timestep scheme or mass distribution of the cluster (Makino and Hut 1988).
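To illustrate where the \(N^{2}\) cost comes from, a minimal direct-summation force loop can be sketched as follows (a Python/NumPy sketch for illustration only, not the actual NBODY implementation; the optional softening parameter `eps` is our addition):

```python
import numpy as np

def direct_forces(pos, mass, G=1.0, eps=0.0):
    """Naive O(N^2) pairwise gravitational accelerations.

    pos: (N, 3) positions; mass: (N,) masses; eps: optional softening length.
    Every pair is evaluated explicitly, which is the origin of the N^2 cost
    discussed above.
    """
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = pos[j] - pos[i]                 # separation vector
            r2 = dr @ dr + eps * eps             # softened squared distance
            acc[i] += G * mass[j] * dr / r2**1.5 # Newtonian acceleration
    return acc
```

Production codes replace this double loop with block timesteps and neighbor schemes, but, as noted above, the asymptotic scaling per unit physical time remains quadratic.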
This steep scaling makes large-scale simulations of massive star clusters exceedingly challenging. The largest simulation attempted with NBODY6 is currently the \(N = 5\times 10^{5}\) model of the galactic GC M4 performed by Heggie (2014), requiring 2.5 years on a dedicated GPU system. More recently, the current state-of-the-art parallelized code NBODY6++GPU (Wang et al. 2015) can model a realistic (\(N = 10^{6}\)) cluster in little more than a year (Wang et al. 2016). Despite these remarkable achievements, simulation times in excess of ∼1 year for large systems preclude any reasonable exploration of the parameter space of initial conditions of GCs, and any collisional models of GN (\(N = 10^{7}\)–\(10^{9}\)) remain beyond the capabilities of the current generation of direct-summation techniques. To answer astrophysical questions related to such systems, a more rapid technique is called for.
For our hybrid approach, we use the Kira N-body integrator, included as part of the Starlab software package (Portegies Zwart et al. 2001). Like the NBODY series of codes, Kira is a 4th-order Hermite predictor-corrector integrator with a block timestep scheme. Kira also integrates close encounters and tightly-bound multiples using Keplerian regularization, where sufficiently-isolated hyperbolic and tightly-bound binaries are evolved as analytic two-body systems. Additionally, Kira organizes its internal data using easily-modifiable C++ class structures, and includes an easily-customizable module for including an external gravitational potential. These two features make it ideal for inclusion in the hybrid method.
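The 4th-order Hermite predictor-corrector scheme used by Kira (and the NBODY codes) can be sketched as below; this is a minimal single-shared-timestep version for illustration, without the block timesteps or regularization of the production integrators:

```python
import numpy as np

def acc_jerk(pos, vel, mass, G=1.0):
    """Pairwise accelerations and jerks (time derivatives of acceleration)."""
    n = len(mass)
    acc = np.zeros_like(pos)
    jerk = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = pos[j] - pos[i]
            dv = vel[j] - vel[i]
            r2 = dr @ dr
            acc[i] += G * mass[j] * dr / r2**1.5
            # d/dt (dr / r^3) = dv / r^3 - 3 (dr.dv) dr / r^5
            jerk[i] += G * mass[j] * (dv / r2**1.5 - 3.0 * (dr @ dv) * dr / r2**2.5)
    return acc, jerk

def hermite_step(pos, vel, mass, dt, G=1.0):
    """One 4th-order Hermite predictor-corrector step."""
    a0, j0 = acc_jerk(pos, vel, mass, G)
    # Predictor: third-order Taylor expansion
    xp = pos + vel * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6
    vp = vel + a0 * dt + j0 * dt**2 / 2
    # Evaluate forces at the predicted state
    a1, j1 = acc_jerk(xp, vp, mass, G)
    # Corrector: Hermite interpolation of the force over the step
    vc = vel + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12
    xc = pos + (vel + vc) * dt / 2 + (a0 - a1) * dt**2 / 12
    return xc, vc
```

For a circular two-body orbit this scheme conserves energy to high accuracy over many steps, which is why the Hermite family is the standard choice for collisional dynamics.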
2.2 Orbitsampled Monte Carlo
For some cases, such as a spherically asymmetric mass distribution, the orbit of the particle must be integrated numerically (e.g., Vasiliev 2014; Vasiliev et al. 2015); however, for most applications to large collisional star clusters (such as GCs and GN), the background gravitational potential can be assumed to be spherical. This allows the clever theorist to determine a star’s position and velocity by analytically sampling a random point along its orbit. This orbit-sampling MC approach, first developed by Hénon (1971) and built upon by multiple groups (Stodółkiewicz 1982; Giersz 1998; Joshi et al. 2000; Freitag and Benz 2001), can model stellar systems with \(N \gtrsim 10^{7}\) particles in a fraction of the time of a direct N-body simulation. Unlike a direct N-body integration, the orbit calculation and dynamical encounters in the MC method scale linearly with the number of particles; only the sorting of particles by radius, with its characteristic \(N\log N\) complexity, limits the scaling. Furthermore, the MC method computes the interactions of particles on a relaxation timescale, as opposed to the dynamical timescale of a direct N-body integration. Put together, the computational cost of the MC method scales as \(\mathcal{O}(N \log N)\) per half-mass relaxation time, versus \(\mathcal{O}(N^{3})\) for a direct N-body approach. Because of this, the MC method can easily model large systems that are simply beyond the reach of other techniques.
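The core of Hénon-style orbit sampling can be sketched as follows, assuming a Plummer sphere as a stand-in spherical background potential. A star of energy E and angular momentum J spends time dt ∝ dr/|v_r| at each radius between its turning points (where \(v_{r}^{2} = 2(E - \varPhi (r)) - J^{2}/r^{2} = 0\)), so positions are drawn with weight 1/|v_r|. The grid-based inverse-transform sampling here is our simplification; production codes locate the turning points far more carefully:

```python
import numpy as np

def plummer_phi(r, G=1.0, M=1.0, b=1.0):
    """Plummer-sphere potential, a stand-in spherical background."""
    return -G * M / np.sqrt(r * r + b * b)

def sample_orbit_radius(E, J, phi, rng, r_max=100.0, n_grid=4000):
    """Sample a radius along a fixed orbit in a spherical potential.

    E and J must correspond to a bound orbit in phi. Radii are drawn with
    weight 1/|v_r| (the time spent per unit radius) between pericenter and
    apocenter, via a discretized inverse-transform sample.
    """
    r = np.linspace(1e-4, r_max, n_grid)
    vr2 = 2.0 * (E - phi(r)) - (J / r)**2    # radial velocity squared
    ok = vr2 > 0                             # radii accessible to the orbit
    w = np.zeros(n_grid)
    w[ok] = 1.0 / np.sqrt(vr2[ok])           # time weight dt/dr = 1/|v_r|
    cdf = np.cumsum(w)
    cdf /= cdf[-1]
    # side='right' guarantees the chosen grid point has nonzero weight
    return r[np.searchsorted(cdf, rng.random(), side='right')]
```

Drawing a single radius this way replaces an entire dynamical time of direct integration, which is the source of the method's speed.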
However, the assumptions that enable the speed of the MC method can easily break down in some of the most interesting regions of parameter space. Once mass segregation is complete, the evolution of a GC is largely determined by the small number of BHs that have accumulated in the core. This can consist of as few as hundreds or even tens of BHs. Since the dynamics of these small, spherically asymmetric systems change rapidly on an orbital timescale, the MC method is unable to accurately follow the evolution of these BHs in the center of the cluster. And since this small cluster of BHs forms the hard binaries whose binding energy acts as a power source for the entire cluster, their dynamics must be accurately modeled to understand the longterm evolution of the cluster.
Our orbit-sampling Cluster MC code, CMC, was first developed by Joshi et al. (2000), based on the original developments of Hénon (1971) and Stodółkiewicz (1982). As the code considers interactions between individual stars, CMC incorporates multiple physical processes, including stellar evolution (Hurley et al. 2000; Hurley et al. 2002), strong three-body and four-body scatterings with the small-N integrator Fewbody (Fregeau et al. 2004), probabilistic three-body binary formation (Morscher et al. 2013), and physical collisions. Additionally, CMC has recently been parallelized to run on an arbitrary number of computer processors (Pattabiraman et al. 2013). This MPI parallelization makes CMC an ideal code base for RAPID, as the current parallelization scheme can be easily expanded to allow the N-body integration to run in parallel with the MC.
2.3 Hybrid partitioning
The hybrid scheme divides particles between the MC and N-body integrators according to one of two criteria:
a mass criterion, which divides the system according to a specified threshold, where particles above the threshold are considered BHs and particles below it are considered stars, and

a stellar evolution criterion, in which objects identified as BHs by stellar evolution are integrated by Kira, and all other objects are integrated by CMC.
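The two criteria above can be sketched as a simple partitioning step (the threshold value and the stellar-type code marking a BH are illustrative choices of ours, not CMC's actual bookkeeping):

```python
import numpy as np

# Illustrative: in the SSE/BSE stellar-type convention, type 14 denotes a BH
BH_STELLAR_TYPES = {14}

def partition_particles(mass, ktype, mode="mass", m_threshold=10.0):
    """Split particles into MC 'stars' and N-body 'BHs'.

    mode='mass':    particles with mass above m_threshold go to the N-body set.
    mode='stellar': particles whose stellar-evolution type marks them as BHs.
    Returns a boolean mask: True = integrate with the direct N-body code.
    """
    if mode == "mass":
        return np.asarray(mass) > m_threshold
    return np.isin(ktype, list(BH_STELLAR_TYPES))
```

The mass criterion is what the idealized two-component models in Sect. 6 use; the stellar-evolution criterion applies when full stellar evolution is switched on.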
There are two reasons to focus on BHs in our hybridization scheme. The first is that, by limiting the integration to a persistent set of particles, we can avoid the large communications overhead that is incurred each time a particle must be transferred back and forth between the MC and the N-body. This would occur much more frequently if, for instance, we divided our computational domains according to radius, with the N-body integrating particles in the core and the MC integrating particles in the halo (similar to McMillan and Lightman 1984b). Secondly, by limiting the N-body to only BHs, we sidestep the difficulties of treating binary stellar evolution during the N-body integration. Although Kira includes a built-in package for binary and single stellar evolution (the SeBa package), it is not compatible with the stellar evolution in CMC (the Binary Stellar Evolution of Hurley et al. 2002). We will explore ways to integrate self-consistent stellar evolution into the hybrid approach in a future work.
However, in realistic clusters, we find that the segregation between BHs and stars is extreme, with the innermost regions of the cluster completely dominated by BHs. In Fig. 2, we show the cumulative fraction of BHs as a function of cluster radius for a typical GC model with \(N=10^{6}\) and full stellar evolution (see Rodriguez et al. 2018). After 100 Myr, the central region of the cluster is completely dominated by BHs, with 75% of the objects less than 0.01 pc from the cluster center being BHs. These are the objects that primarily participate in the dynamical formation of binaries that drive the cluster evolution (Breen and Heggie 2013; Morscher et al. 2015). Furthermore, any non-spherical effects that arise from having a small number of particles in the cluster center will be limited to these central BHs, ensuring that the hybrid approach correctly integrates the 3D potential in the central regions.
3 RAPID timestep
3.1 Compute the potential
In the RAPID code, two potentials are calculated: the full spherical potential, Φ, of all stars and BHs, and an MC-only potential, \(\varPhi ^{\mathrm{MC}}\), computed only with the stars in CMC. The MC potential is sent from the CMC processes to the N-body process and used as an external potential for the Kira integration.
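The spherical potential evaluation can be sketched using the standard Hénon expression for radially sorted particles, \(\varPhi (r_{k}) = -G ( M_{k}/r_{k} + \sum_{i>k} m_{i}/r_{i} )\). For simplicity, this sketch includes particle k itself in the enclosed mass \(M_{k}\), a convention choice of ours:

```python
import numpy as np

def henon_potential(r, m, G=1.0):
    """Spherical cluster potential at each radially sorted particle:
    Phi(r_k) = -G * ( M(<=r_k)/r_k + sum_{i>k} m_i/r_i ).

    r: (N,) sorted radii; m: (N,) masses.
    """
    r = np.asarray(r, float)
    m = np.asarray(m, float)
    m_enc = np.cumsum(m)                     # enclosed mass up to particle k
    outer = np.cumsum((m / r)[::-1])[::-1]   # suffix sums: sum_{i>=k} m_i/r_i
    outer = np.append(outer[1:], 0.0)        # shift to strictly i > k
    return -G * (m_enc / r + outer)

# The hybrid code evaluates this twice per timestep: once over all particles
# (Phi), and once with the BHs excluded (Phi^MC), the latter serving as the
# external potential handed to Kira.
```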
3.2 Select the timestep
The RAPID timestep is chosen in a similar fashion to CMC’s. The timescale for each physical process is computed for all particles in the cluster (stars and BHs), and the minimum (or a fraction of the minimum) is selected as the current timestep. In Kira, this timestep determines how many dynamical times the N-body system will be advanced.
Unlike CMC, RAPID can produce BH multiples with an arbitrary number of components. To compute the interaction timescale for scatterings between stars and BH multiple systems, the timestep is chosen using the same prescription as the \(T_{\mathrm{BS}}\) and \(T_{\mathrm{BB}}\) timesteps, but with the semimajor axis of the outermost binary pair as the effective width of the system.
For the standard definition of the relaxation time from Binney and Tremaine (2008), \(\theta _{\mathrm{{max}}} = \pi /2\). However, for the systems considered here (particularly the highlyidealized twocomponent models presented in Sect. 6), this averaging can sometimes smooth out the otherwise short relaxation times between a heavy object and a neighboring lighter object (particularly if there is only one heavy object in that bin of 40 particles). For the theoretical comparison shown in Sect. 5, we still set \(\theta _{\mathrm{{max}}}\) to the theoretical value of \(\pi /2\), but average the above quantities over the closest 2 particles. For the numerical comparison in Sect. 6, we average over the nearest 40 particles, but use \(\theta _{\mathrm{{max}}} = 1\) to calculate the relaxation timestep, which was found (Fregeau and Rasio 2007) to provide a good compromise between accuracy and speed for such twocomponent systems.
3.3 Perform dynamical interactions

Star-star interactions are handled in the same fashion as in a pure-CMC integration (see Pattabiraman et al. 2013 and references therein).

Star-BH interactions are also handled in the same fashion (by CMC); however, in addition to updating the local BH information stored in CMC, the dynamical changes to each particle are communicated back to the Kira process once the interaction step is complete.

BH-BH nearest-neighbor interactions are skipped, since such encounters will be performed with greater accuracy in the N-body integration.
In addition to two-body relaxation, CMC integrates strong scattering encounters between neighboring multiples (such as binary-single or binary-binary neighbors) using the Fewbody small-N integrator (Fregeau et al. 2004). In RAPID, we also allow for strong encounters between neighboring stars and BHs. However, as Kira can produce higher-order BH multiple systems (triples, quadruples, etc.), we have modified Fewbody to perform scatterings between systems of arbitrary multiplicity, such as single-multiple and binary-multiple encounters. We ignore multiple-multiple scatterings, since CMC does not track stellar triples or higher-order multiples, and BH-BH encounters are performed naturally in Kira.
Once Fewbody has completed the scattering (which can take several seconds of CPU time for compact higher-order multiple systems), the output is sent back to CMC and Kira separately. For higher-order multiples, the full 3D position and velocity of each BH component, relative to the multiple center-of-mass, is sent to the Kira process. The multiple is then reinserted into the N-body integration at its previous position.
The only exception to this procedure is the formation of mixed-multiple systems (a binary or higher-order multiple with both BH and stellar components). We evolve any BH-star binaries in CMC. For higher-order multiples with both stellar and BH components, the multiple is hierarchically broken apart into smaller components. Any star or BH-star binary is evolved by CMC, while any single BH, binary BH, or BH multiple is evolved by Kira. The kinetic energies of the newly-broken components are adjusted to ensure conservation of energy. While this limits the modeling of BH-non-BH systems (such as low-mass X-ray binaries), these systems predominantly form with low-mass BHs at the outer region of the BH subsystem, where mixing between stars and BHs is more common (Kremer et al. 2018b). Because of this, these systems are less likely to participate in the strong three-body encounters that form BH binaries in the central regions of the cluster.
3.4 Perform stellar evolution
RAPID considers realistic stellar evolution using the Single Stellar Evolution (SSE) and Binary Stellar Evolution (BSE) packages of Hurley et al. (2000, 2002). This is identical to the previous implementation in CMC. No stellar evolution is required by the direct N-body, since the only particles integrated by Kira are BHs. As stated above, all mixed BH-star objects are integrated in CMC. This ensures that the binary stellar evolution for BH-star systems is performed consistently.
3.5 Calculate new orbits and positions
After the dynamical information for each particle has been updated, a new orbital position and velocity, consistent with the particle’s new energy and angular momentum, must be computed. Since the dynamical state of each particle is up-to-date in both CMC and Kira, the MC and N-body integrations can be performed in parallel by their respective processes.
3.5.1 Orbit calculation (MC)
3.5.2 Orbit calculation (Nbody)
Because we have dynamically perturbed the BHs through scattering and a new external potential, the gravitational force and its higher derivatives must be recalculated before resuming the Kira integration. Otherwise, the dynamical changes to the BH velocities would produce discontinuities in the higher derivatives of the force, breaking the smoothness needed for the 4th-order Hermite integrator to work. The N-body system is reset using Kira’s built-in reinitialization function, which recomputes the acceleration and jerk for each BH explicitly. The positions and velocities of all the particles (up to any changes from dynamical interactions) are not modified.
3.6 Sort radially
Finally, once all the relevant physics has been applied and the data collected on the CMC processes, the particles must be sorted in order of increasing radial distance from the GC center. The sorting is performed in parallel by all CMC processes using the parallel Sample Sort algorithm described in Pattabiraman et al. (2013).
4 Parallelization strategy
To incorporate the Kira integrator into CMC, we make the following modifications to our parallelization strategy. When the simulation starts, all particles are evolved using the CMC scheme. As described in Sect. 2.3, the activation criterion for the N-body integrator depends on the type of particles being integrated. For point-particle simulations, the N-body integrator is started immediately, whereas for star clusters modeled with stellar evolution, the N-body integrator is only activated once a certain number (∼25) of BHs have formed. Once the activation condition is met, we divide the entire set of particles into two sets: MC stars and N-body BHs.
At the same time, we divide the existing set of p processes into two separate groups: a single Kira process for integrating the BHs, and \(p-1\) CMC processes for integrating the stars. Any MPI communication is handled by two custom intra-communicators: one corresponding to all p processes and one restricting communication to the \(p-1\) CMC processes. The latter allows us to employ the same parallelization strategy described in Pattabiraman et al. (2013) with minimal modification. When the processes are split, all the BHs are sent to the Kira process, where their coordinates are converted from \((r,v_{r},v_{t})\) space to the full 6D phase space by randomly sampling the orientations of the position and velocity vectors. This sets the initial conditions for the N-body integration. In addition, the Kira process maintains a radially-sorted array of \((r,v_{r},v_{t})\) for all BHs. This facilitates easier communication with CMC and minimizes the required MPI communication between CMC and Kira every timestep. Although the CMC processes hand over their BHs to the Kira process, the BHs are not deleted from their local arrays. The BHs are left intact yet inert, so that their positions and velocities can be updated upon completion of the N-body integration.
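The conversion between the reduced MC coordinates and the full 6D phase space (and the projection back, used when Kira returns its results) can be sketched as follows; this is a minimal version and the helper names are ours:

```python
import numpy as np

def to_6d(r, vr, vt, rng):
    """Promote reduced MC coordinates (r, v_r, v_t) to a full 6D phase-space
    point by drawing a random orientation, as when handing BHs to the N-body."""
    # Random unit vector for the position direction
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)
    # Random tangential unit vector perpendicular to the position direction
    t = rng.normal(size=3)
    t -= (t @ u) * u
    t /= np.linalg.norm(t)
    return r * u, vr * u + vt * t

def to_reduced(x, v):
    """Project a 6D phase-space point back onto the (r, v_r, v_t) basis."""
    r = np.linalg.norm(x)
    vr = (v @ x) / r                            # radial velocity component
    vt = np.sqrt(max((v @ v) - vr * vr, 0.0))   # tangential speed
    return r, vr, vt
```

The round trip preserves (r, v_r, v_t) exactly, which is what makes the random orientation an allowed degree of freedom under the MC code's spherical-symmetry assumption.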
Once the CMC and Kira MPI processes have been initialized, the hybrid method must allow both codes to interact while minimizing the amount of MPI communication. This is accomplished by a series of intermediate arrays, designed to store and transmit the minimum amount of information back and forth between CMC and Kira. Communication only occurs at two points during a RAPID timestep. The first occurs after the relaxation and strong encounters have been performed, in order to communicate the dynamical changes from CMC to Kira. The second occurs after both systems have completed their respective orbit computations, to communicate the new dynamical positions and velocities from Kira back to CMC.
4.1 CMC to Kira communication

The MC potential, \(\varPhi ^{\mathrm{MC}}\), is sent to the Kira process as two arrays, containing the radius and cluster potential of every star (excluding the BHs) in CMC. The Kira process selects 30 of these stars as described in Sect. 3.5.2, and passes the information to the Kira integrator to use when computing the external force.

The two-body relaxations are communicated as an array of objects, each containing a particle ID and a 3-dimensional Δv⃗. These weak velocity perturbations are added to the single BHs and the centers-of-mass of any BH multiple systems before the N-body system is reinitialized by Kira.

The results of strong encounters are communicated differently depending on the type of encounter. For binary BHs that have experienced a strong encounter with a star, only the changes in semimajor axis and eccentricity are communicated back to Kira. For triples and higher-order multiples, the full position and velocity of every BH in the multiple is communicated to Kira. To reduce communication, the hierarchical information is not transmitted; the Kira process reconstructs the hierarchy locally before the N-body system is reinitialized.
Since the information described above does not drastically change the radial positions or velocities of the particles in the N-body (by assumption, the MC approach requires that \(\Delta v / v \ll 1\)), the dynamical state of the N-body system is preserved between RAPID timesteps. The one exception is strong encounters in which a single bound BH multiple is broken into components. Since strong encounters in CMC are performed by assuming the scatterings are isolated from the cluster potential at infinity, the resultant components cannot be placed at the same infinite location in the N-body system. For such systems, the components are placed at the correct radius with random orientations, similar to the initial transfer of BHs from CMC.
Finally, we allow for the possibility that CMC may create new BHs, either through stellar evolution or through strong encounters that produce single or binary BHs. We add any such new BHs to Kira. Each new BH is then flagged as a BH in the local array of the CMC process that created it, to ensure it is not evolved by CMC during the next timestep.
4.2 Kira to CMC communication
After the Kira integrator has computed the orbits of the BHs, the new dynamical state must be communicated back to the CMC processes. First, the 6D phase-space information for each particle in the N-body is projected back onto the reduced \((r,v_{r},v_{t})\) basis and copied back into place in the intermediate BH array on the Kira process. The intermediate BH array is then divided up and communicated back to the respective CMC processes.
In addition to the positions and velocities, any new objects, such as single BHs, newly-formed binaries, or higher-order multiples, are communicated as new objects to the last \(p-1\) CMC processes, to be placed in the correct process once the CMC (and intermediate BH) arrays are sorted. For single and binary BHs, this is accomplished by sending the usual \((r,v_{r},v_{t})\) for each system, plus the semimajor axis and eccentricity for any binaries, to the CMC processes. For higher-order multiples, the full 6D dynamical information is transmitted back using the same array that sent the strong encounters from CMC in the first communication. Again, only the positions and velocities of the BH multiple components are communicated, so the hierarchical information is reconstructed on the local CMC process after communication.
Because the N-body integration is performed in a lower-density environment than the full cluster, it is possible for Kira to produce pathologically wide binaries and higher-order multiples. This effect is particularly problematic at late times, when a handful (∼10) of BHs can easily produce binaries with separations greater than the local interparticle separation of stars. Since the unphysically large interaction cross-sections of these systems drastically shrink the CMC timestep, we break apart any multiple system whose apocenter distance is greater than 10% of the local interparticle separation of stars at that radius. The kinetic energy of the CMC stars is adjusted to ensure energy conservation. This criterion is the same one used to break wide binaries produced by Fewbody in CMC.
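The wide-binary criterion above can be sketched as follows; estimating the local interparticle separation from a local stellar number density `n_local` is our assumption for illustration, as the text does not specify how the separation is measured:

```python
import numpy as np

def should_break_binary(a, e, n_local, frac=0.1):
    """Break a binary if its apocenter a(1+e) exceeds `frac` (10% above) of
    the local interparticle separation of stars.

    n_local: local stellar number density; the separation is estimated as
    the radius of a sphere containing one star on average (our assumption).
    """
    d_interparticle = (4.0 * np.pi * n_local / 3.0) ** (-1.0 / 3.0)
    return a * (1.0 + e) > frac * d_interparticle
```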
5 Analytic comparison
Because RAPID and CMC are designed to model two-body relaxation by averaging various quantities over several neighboring stars, special care must be taken for RAPID to accurately model the behavior of a single particle. To that end, we set the maximum scattering angle to the typical value of \(\theta _{\mathrm{{max}}} = \pi / 2\), while we reduce to two the number of neighboring particles over which the quantities in Equation (5) are averaged. In other words, the quantities used to compute the timestep consider only the nearest particles when computing the local two-body relaxation timescale (while the global timestep is chosen as the minimum of all nearest-neighbor timesteps). This ensures that the two-body relaxation timestep chosen for the RAPID simulations can appropriately model the dynamical friction experienced by a single massive object.
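Schematically, the shared timestep is the minimum over all stars of a local relaxation time built from nearest-neighbor averages. The sketch below uses an illustrative form \(t_{\mathrm{rel}} \sim \theta_{\mathrm{max}}^{2}\,\langle v\rangle^{3} / (G^{2} n \langle m\rangle^{2} \ln \gamma N)\); the exact prefactors in the real code follow Freitag & Benz (2001) and Fregeau & Rasio (2007), and all names here are hypothetical.

```python
import numpy as np

G = 1.0  # Hénon N-body units

def min_relaxation_timestep(r, vr, vt, m, theta_max=np.pi / 2, k=2, gamma=0.01):
    """For each star, estimate the local density, mean mass, and velocity
    dispersion from its k nearest neighbors in radius, form a local
    two-body relaxation time, and return the global minimum.  Schematic:
    coefficients are illustrative, not those of the actual code."""
    N = len(m)
    ln_lambda = np.log(gamma * N)
    order = np.argsort(r)
    r_s, m_s = r[order], m[order]
    v2_s = (vr**2 + vt**2)[order]
    dt_min = np.inf
    for i in range(N):
        lo, hi = max(0, i - k), min(N - 1, i + k)
        # local number density from the shell spanned by the neighbors
        shell_vol = 4.0 * np.pi / 3.0 * (r_s[hi]**3 - r_s[lo]**3)
        n_loc = (hi - lo + 1) / shell_vol
        m_loc = m_s[lo:hi + 1].mean()
        sigma3 = np.mean(v2_s[lo:hi + 1]) ** 1.5
        t_rel = theta_max**2 * sigma3 / (G**2 * n_loc * m_loc**2 * ln_lambda)
        dt_min = min(dt_min, t_rel)
    return dt_min
```

Note that the timestep scales as \(\theta_{\mathrm{max}}^{2}\), which is why reducing the maximum deflection angle (as discussed below) also reduces the timestep.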
We reiterate that these results were obtained by reducing the number of particles used to compute various average quantities (Sect. 3.2). This allows the timestep to be properly calibrated for the dynamical friction of a single particle (the massive BH). In a standard CMC and RAPID run, a larger averaging kernel can be used, since for realistic clusters with a continuous mass function, there will be many massive and light objects within the innermost 40 particles (which typically set the minimum relaxation time for the cluster). For more idealized clusters, where the mass function is discrete and there can sometimes be only one massive BH per 40-particle kernel (especially before mass segregation), the averages must be computed carefully. In this section, we have accomplished this by setting \(\theta _{\mathrm{{max}}}\) to the standard value of \(\pi / 2\) and taking the minimum timestep computed over averages between the nearest-neighbor particles (usually the average between the BH and its two neighboring stars). More generally, this reduction in timestep can also be accomplished by reducing the maximum angle for a two-body deflection to \(\theta _{\mathrm{{max}}}=1\). We elect for the latter in the next section, as it has been demonstrated to work well for the idealized two-component systems considered there (Fregeau and Rasio 2007).
As an aside, it is interesting to note that Fig. 4 provides an excellent way to test the value of γ commonly used in numerical work involving the Coulomb logarithm (\(\log \varLambda \equiv \log (\gamma N)\)). Even small changes (such as \(\gamma = 0.005\) or \(\gamma =0.02\)) produce obviously worse agreement in Fig. 3. This suggests that the value of \(\gamma =0.01\) used in many previous studies of multi-mass clusters is appropriate.
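The sensitivity is easy to quantify: since the dynamical-friction (inspiral) timescale scales as \(1/\log (\gamma N)\), doubling or halving γ shifts the predicted timescale by roughly 10% for the \(N \sim 6.5\times 10^{4}\) models used here. A quick illustrative check:

```python
import numpy as np

N = 65536  # the 64k models considered in this comparison
lnL = {g: np.log(g * N) for g in (0.005, 0.01, 0.02)}
for g, val in lnL.items():
    print(f"gamma = {g:5.3f}:  ln(gamma N) = {val:.2f}")

# Inspiral times scale as 1/ln(gamma N); factor-of-two changes in gamma
# shift the predicted timescale by ~10-12%:
print(lnL[0.01] / lnL[0.005], lnL[0.01] / lnL[0.02])
```

A ∼10% shift in the inspiral time of the massive BH is evidently enough to produce a visible mismatch against the direct N-body curve.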
6 Numerical comparison
Table 1 The model names, initial conditions, and runtimes for each of the four idealized GC models. The runtimes quoted are the relevant wall times for each method. We list the single runtime for each NBODY6 run, and the mean and standard deviation of the 10 CMC and RAPID runs for each cluster. The CMC and RAPID models were run on 4 Intel Xeon E5-2670 Sandy Bridge processors, while the NBODY6 models were run on 8 processors and 1 Nvidia Tesla M2090 GPU
Model Name     \(M_{\mathrm{BH}}/M_{\mathrm{star}}\)   \(m_{\mathrm{BH}}/m_{\mathrm{star}}\)   \(T_{\mathrm{end}}\) \((t_{dyn})\)   CMC (hr)     NBODY6 (hr)   RAPID (hr)
64k-0.01-10    0.01                                    10                                      14,000                            5.0 ± 1.7    425           1.8 ± 0.2
64k-0.02-10    0.02                                    10                                      20,000                            8.4 ± 0.9    304           2.6 ± 0.3
64k-0.01-20    0.01                                    20                                      10,000                            2.6 ± 0.4    554           1.3 ± 0.2
64k-0.02-20    0.02                                    20                                      20,000                            5.3 ± 0.7    512           2.7 ± 0.6
Immediately obvious from Table 1 is that the runtimes for RAPID are much shorter than the typical runtimes for NBODY6, sometimes by factors of a few hundred. Slightly more surprising is that the RAPID runtimes also tend to be shorter than the runtimes for CMC. This largely arises from the improved treatment of the cluster center in RAPID. As will be shown in the next section, RAPID reproduces the less-dense core radii of the full N-body models better than CMC, with the latter producing core radii 2–4 times more compact than NBODY6. Because the timestep of the MC is dominated by the relaxation in the densest region of the cluster, the more compact CMC models require many more timesteps to resolve the deep core collapses, resulting in longer runtimes.
6.1 Core and half-mass radii
At late times, both the half-mass and core radii predicted by RAPID begin to diverge from those determined by NBODY6. This divergence is to be expected: as the BHs are ejected from the cluster, the orbits of individual BHs are determined less by their encounters with other BHs than by two-body relaxation with stars controlled by the MC. Given the disagreement between the pure CMC models and NBODY6, this late-time divergence is unsurprising. It also suggests that RAPID will be most effective when modeling systems that retain a large number of BHs. Recent work (e.g., Mackey et al. 2007; Downing 2012; Morscher et al. 2013, 2015; Kremer et al. 2018; Askar et al. 2018) has shown that the most massive GCs can retain hundreds to thousands of BHs up to the present day. Given that, RAPID should be able to model realistic GCs throughout their entire evolution far more accurately than a traditional MC method.
6.2 Binary formation
What explains the substantial improvement in the RAPID core radii? In Rodriguez et al. (2016a), we explored a direct comparison between CMC and a state-of-the-art direct N-body simulation of 10^{6} particles (the DRAGON simulation, Wang et al. 2016). There, we found good agreement between most of the structural parameters of the two cluster models (e.g., the half-mass radii, the formation and ejection rates of BHs, etc.). However, the one notable exception was the evolution of the inner parts of the cluster, such as the core radii and the innermost Lagrange radii (the radius enclosing a certain fraction of the cluster mass). This was especially true when considering the Lagrange radii of only the BHs (Rodriguez et al. 2016a, Fig. 7), where the innermost few BHs would fall into a much deeper state of collapse (by nearly two orders of magnitude) into the cluster center than the equivalent radii from the N-body model. The cluster would remain in this deep state until the formation of a BH binary, which would reverse the collapse and bring the inner Lagrange radii back into agreement with the N-body results.
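For reference, a Lagrange radius is straightforward to compute from a snapshot (a minimal sketch, assuming arrays of radii and masses for the particles of interest):

```python
import numpy as np

def lagrange_radius(r, m, fraction):
    """Radius enclosing `fraction` of the total mass (e.g. 0.5 gives the
    half-mass radius): sort the particles by radius, accumulate the mass,
    and return the radius where the cumulative mass first reaches the
    target."""
    order = np.argsort(r)
    cum_mass = np.cumsum(m[order])
    idx = np.searchsorted(cum_mass, fraction * cum_mass[-1])
    return r[order][idx]
```

Restricting `r` and `m` to the BHs alone gives the BH Lagrange radii discussed above.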
It was speculated that the reason for this discrepancy lay in the analytic prescription that CMC employs to model the dynamical formation of binaries during three-body encounters of single BHs. This prescription, from Morscher et al. (2013), may underestimate the formation rate of BH binaries, especially given that the probability of binary formation scales as \(v^{-9}\) in the local velocity dispersion. Because these interactions typically involve only a handful of objects in the cluster center, where the standard MC assumptions of spherical symmetry and \(T_{\mathrm{rel}} \gg T_{\mathrm{dyn}}\) break down, it is not obvious that CMC’s statistical approach to binary formation based on locally averaged quantities can correctly model this process.^{2} This difficulty was one of the primary motivators for the development of RAPID: by directly integrating the BH dynamics every timestep, we can explicitly model the complicated three-body encounters between single BHs on a dynamical timescale.
We believe this discrepancy is responsible for the deep collapses observed in Rodriguez et al. (2016a). In those models, the most massive objects would naturally find themselves in the center and continue to collapse until a binary was formed. This caused the deep collapses noted there, which were not reproduced in the direct N-body model. Here, the deep collapses have smoothed out into a more continuous underprediction, since the two-component models studied here have equal masses for all BHs, whereas in Morscher et al. (2015) and Rodriguez et al. (2016a), it was consistently the most massive BHs decoupling from the rest of the core that were responsible for the deep collapses.
6.3 BH retention
In each of the four clusters, the RAPID models eject BHs at a much faster rate than the CMC models, in better agreement with the NBODY6 models. Although the BH ejection rate for NBODY6 is slightly faster than for the other two methods (particularly for the \(m_{\mathrm{{BH}}}/m_{\mathrm{{star}}}=10\) cases), in each case RAPID performs far better than the pure MC approach. This is consistent with the difference in half-mass radii between NBODY6 and RAPID noted in the previous section, where systems with less-massive BHs expand faster in NBODY6 than in RAPID. Since the overall expansion of the cluster regulates the ejection rate of BH binaries, models that expand more rapidly will eject BHs more rapidly. For BHs in the cluster, NBODY6 and RAPID produce a similar number of BH binaries and BH triples over time. This is a noticeable improvement over CMC, especially when considering BH triples, which are explicitly removed from the MC integration.
6.4 Ejected BH systems
Although we do not show it here, the distribution of orbital eccentricities of the ejected binaries closely follows the theoretically predicted thermal distribution (\(p(e)\propto e\), Jeans 1919), regardless of binary mass and cluster properties. Because the gravitational-wave merger time for binary BHs is determined by the semi-major axis and eccentricity at ejection (Peters 1964), RAPID will be able to model the merger rate of binary BHs from dense stellar clusters with accuracy similar to a direct N-body approach.
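The thermal distribution is trivial to sample by inverting its CDF: since \(p(e) = 2e\) on \([0,1]\) gives \(P(<e) = e^{2}\), drawing a uniform variate \(U\) and taking \(e = \sqrt{U}\) reproduces it exactly. A quick sanity check (illustrative only) recovers its mean of 2/3:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.random(100_000)
e = np.sqrt(u)   # inverse-CDF sampling of the thermal distribution p(e) = 2e

print(e.mean())  # expectation value of a thermal distribution is 2/3
```

The same inverse-CDF trick is the standard way to draw initial binary eccentricities for cluster initial conditions.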
6.5 Energy conservation
The total energy conservation of RAPID can vary over a single run, and usually lies within 2–3% of the initial energy for the duration of the run. There are two main sources of error which contribute to this energy flux. The first is the difficulty of integrating higher-order multiples and very close encounters accurately, particularly those that occur in close triple systems. This issue is not limited to the Kira integrator, and is one of the well-known problems common to all collisional dynamics simulations (the so-called “terrible triples”). Although Kira does implement Keplerian regularization for isolated two-body systems and higher-order multiples, we still find that occasionally long-lived triples can induce substantial jumps in energy conservation. Furthermore, as the CMC potential is only applied to the center of mass of any multiple system, any tidal effects upon the multiples from the background cluster potential are not incorporated correctly. These integration errors manifest as discontinuous jumps in the overall energy conservation, which can be seen in the bottom panels of Fig. 10.
The second source of error arises from the integration of orbits in a fixed external potential. This error takes two specific forms. First, the MC method, as described above, has an inconsistency in the computation of the potential. When a timestep is performed in CMC, the potential is computed first, before the dynamical encounters take place and the new orbits are calculated. However, the new orbits are calculated using the original potential, which does not take into account the evolution of the cluster while the particles are dynamically interacting. While the work done by neighboring particles is correctly accounted for, the work done on each particle by the change in potential is ignored. To compensate for this energy drift, CMC employs a technique developed by Stodółkiewicz (1982), in which the work done by the changing potential is explicitly added to the kinetic energy of the particle at the end of each timestep. This allows energy to be conserved in the MC to 1 part in 10^{3} over a run (Fregeau and Rasio 2007). In RAPID, we self-consistently correct the velocities of stars controlled by the MC in a similar fashion, but do not account for this energy drift in the BHs. We will explore modifications to the external potential (such as a time-dependent background potential for the Kira integrator) in a future work.
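Schematically, the correction enforces per-particle energy conservation across the potential update (hypothetical names; this is a sketch of the idea, not the actual implementation):

```python
import numpy as np

def stodolkiewicz_correction(v_old, phi_old, phi_new):
    """Adjust a particle's speed so that its total specific energy is
    preserved when the background potential at its position changes from
    phi_old to phi_new:
        (1/2) v_new^2 + phi_new = (1/2) v_old^2 + phi_old,
    i.e. v_new^2 = v_old^2 - 2 (phi_new - phi_old).
    Clipped at zero in this sketch to avoid imaginary speeds."""
    v2_new = v_old**2 - 2.0 * (phi_new - phi_old)
    return np.sqrt(np.maximum(v2_new, 0.0))
```

Applying this at the end of each timestep is what keeps the MC energy error at the 1-in-10^{3} level quoted above; the BHs in RAPID currently receive no such correction.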
Additionally, the 4th-order Hermite integration scheme, while typical for collisional stellar dynamics, is known to produce systematic energy errors when integrating many orbits in a fixed potential (Binney and Tremaine 2008, cf. Fig. 3.21). This produces a small but systematically positive energy drift while integrating the BHs over many orbits. This is particularly problematic for the Kira integrator, which assumes that external potentials are weak perturbations to the internal dynamics of the cluster. While this assumption is valid for a cluster evolving in a galactic tidal field, in our current approach the external potential from the MC particles is much stronger than the interparticle forces of the N-body integration. Methods to improve the long-term stability of the N-body integration, allowing for many orbits in a fixed potential while still treating close encounters accurately, are currently being investigated.
This issue may also be exacerbated by the particular combination of the dynamical timesteps between the MC and N-body employed here. By integrating both systems for an identical length of time and combining the results, it is entirely possible that RAPID cannot adjust to rapid dynamical changes that occur during either the MC or N-body integrations. Investigations into using an adaptive timestep between the two computational domains, similar to the effective operator splitting developed by Fujii et al. (2007) and employed in Portegies Zwart et al. (2013), are currently underway.
Somewhat unexpectedly, Fig. 11 shows that this net energy drift does not depend on the number of BHs present in the cluster, but on the mass ratio between the individual stars and BHs. For clusters with a smaller \(m_{\mathrm{{BH}}}/m_{\mathrm{{star}}}\), this net energy drift is slower. This indicates that for more realistic clusters containing many BHs and stars of different masses, this energy drift should improve. This is confirmed by preliminary testing of RAPID on clusters with a realistic initial mass function.
7 Conclusion
In this paper, we described the motivation and development of a new hybrid technique for dynamically modeling dense star clusters. By combining our Cluster Monte Carlo (CMC) code with the Kira direct N-body integrator, we are able to combine the speed of the MC approach with the accuracy of a direct summation. This hybrid code, the Rapid And Precisely Integrated Dynamics (RAPID) code, is designed to accurately model the non-equilibrium BH dynamics that powers the overall evolution of GCs and GNs. Given recent observational detections of BH candidates in GCs, and the importance of theoretical modeling of GC BHs to X-ray binary astrophysics (Pooley et al. 2003) and gravitational-wave astrophysics (Rodriguez et al. 2015; Antonini et al. 2016), understanding the dynamics of BHs in clusters is crucial to understanding BH astrophysics.
We found that the hybrid approach is able to replicate both the half-mass radius and the core radius for several N-body models of idealized GCs with much greater accuracy than a traditional MC integration. Unlike a purely MC approach, RAPID can model the highly non-spherical and rapidly changing dynamics of the few BHs in the center of the cluster. This suggests that the RAPID approach can follow the dynamics of BH systems with comparable accuracy to a direct N-body integration, but with roughly the same integration time (within a factor of 2) as an orbit-sampling Monte Carlo approach.
With this technique, it will be possible to explore regions of the GC parameter space that have remained beyond the computational feasibility of direct N-body computations. In particular, by treating the central BH subcluster correctly, RAPID can explore regions of the GC and GN parameter space, including clusters with massive central BHs, that have previously been inaccessible to direct collisional methods.
Two issues remain to be addressed. First, the Kira N-body integrator does not completely conserve energy in the presence of a large external potential. This effect, a well-known drawback of 4th-order Hermite integrators, will need to be addressed. Efforts are currently underway to increase the computational order of the Hermite predictor-corrector, improving both the accuracy and speed of the integration (e.g., Nitadori and Makino 2008), and to incorporate a time-dependent potential in the N-body integrator, to account for the work done by the total cluster potential on the BHs.
Secondly, the regularization of binaries and higher-order multiples in Kira is based on Keplerian regularization for sufficiently unperturbed systems (see Portegies Zwart et al. 2001). However, this regularization does not include any tidal effects from the external Monte Carlo potential, which will affect the long-term evolution of binaries retained by the cluster. Incorporation of these physical effects into the regularization scheme is currently underway.
Future work will also investigate hardware acceleration of the N-body integrator. The Kira integrator is designed to run on the specialized GRAPE series of hardware, which yields substantial improvements in computational speed. When combined with the Sapporo GPU/GRAPE library (Gaburov et al. 2009), Kira can be run on modern, distributed GPU systems with performance comparable to the NBODY series of codes (Anders et al. 2012). Hardware acceleration was not implemented in the current RAPID version, since we have not considered systems with sufficiently large numbers of BHs for efficient GPU usage; however, the development of the Sapporo2 library (Bédorf et al. 2015) provides efficient GPU saturation for small-N systems. We will explore the advantages of a RAPID integration with Sapporo2 in a future paper.
RAPID is designed to be a single-purpose code incorporating all the necessary physics to model dense star clusters. However, such “kitchen-sink” codes, in which many numerical codes are integrated into a single parallel infrastructure, are often difficult to extend or modify for different purposes, particularly with regard to the shared timestep. The energy drift noted in Sect. 6.5 arises from a combination of the 4th-order Hermite integrator and the particular combination of the two dynamical timesteps. While the parallel design of RAPID makes it difficult to explore variations on this code structure, there do exist more modular approaches to computational stellar dynamics that may prove helpful. For example, the Astrophysical Multipurpose Software Environment (AMUSE, Portegies Zwart et al. 2013) can be used to easily swap different N-body integrators into a large-scale astrophysics code. Furthermore, there exist methods of combining different dynamical timesteps in a single code (e.g., the Bridge approach, Fujii et al. 2007), similar to the operator-splitting approach developed by Wisdom and Holman (1991), that enable large multi-scale simulations to be performed with an adaptive, shared timestep. Because this leapfrog-like approach is already implemented in AMUSE (as is a selection of different N-body integrators), we are exploring the possibility of integrating RAPID into AMUSE, allowing for greater precision and flexibility in the N-body timestep.
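The Bridge scheme itself is simple to state: each subsystem receives a half-step velocity “kick” from the other subsystem's forces, evolves internally with its own integrator for the shared timestep, then receives another half-kick. A minimal interface sketch follows (the `kick`/`evolve` methods are assumptions for illustration, not the AMUSE API):

```python
def bridge_step(fast, slow, dt):
    """One Bridge-style shared step (cf. Fujii et al. 2007), schematically:
    half-kick each subsystem by the *other* subsystem's forces, evolve each
    in isolation with its native integrator over dt, then half-kick again.
    `fast` and `slow` are assumed to expose kick(other, dt) and evolve(dt)."""
    fast.kick(slow, dt / 2)   # external half-kicks
    slow.kick(fast, dt / 2)
    fast.evolve(dt)           # internal evolution with each native integrator
    slow.evolve(dt)
    fast.kick(slow, dt / 2)   # closing external half-kicks
    slow.kick(fast, dt / 2)
```

In a RAPID-like setting, `fast` would be the directly integrated BH subsystem and `slow` the MC-controlled stars, with `dt` free to adapt to the faster of the two domains.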
In NBODY6, the sum is performed out to whichever is greater of the half-mass radius or three times the previous core radius.
Of course, Hénon’s principle ensures that the rate of binary formation and hardening will automatically adjust to satisfy the energy flux of the cluster across the half-mass radius (e.g., Breen and Heggie 2013). Because CMC can model the global properties of realistic clusters correctly, the rate of binary formation must be correct on a relaxation timescale, regardless of the specific implementation of three-body binary formation. What we are interested in here is the behavior of the inner parts of the core on a dynamical timescale, where the assumptions of the MC approach explicitly break down.
Declarations
Acknowledgements
We thank Eugene Vasiliev, Simon Portegies Zwart, Vicky Kalogera, Claude-André Faucher-Giguère, Douglas Heggie, Philip Breen, and Fabio Antonini for useful discussions.
Availability of data and materials
The authors have elected not to release the RAPID or CMC codes publicly at this time. The data presented in this paper are available upon request.
Authors’ information
Carl L. Rodriguez is a Pappalardo Fellow.
Funding
CR was supported by an NSF GRFP Fellowship, award DGE0824162, and is currently supported by the MIT Pappalardo Fellowship in Physics. This work was supported by NSF Grant AST1312945 and NASA Grant NNX14AP92G.
Authors’ contributions
The RAPID code was developed in equal parts by CR and BP, with CR developing most of the astrophysical features and BP developing most of the parallel infrastructure; SC, MM, and FR assisted with the former, while AC and WL assisted with the latter. CR prepared this manuscript and performed the tests detailed within. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Aarseth, S.J.: Publ. Astron. Soc. Pac. 111, 1333 (1999)
 Aarseth, S.J.: Gravitational N-Body Simulations. Cambridge Univ. Press, Cambridge (2003)
 Aarseth, S.J.: Mon. Not. R. Astron. Soc. 422, 841 (2012)
 Abbott, B.P., et al.: Phys. Rev. Lett. 118, 221101 (2017)
 Anders, P., Baumgardt, H., Gaburov, E., Portegies Zwart, S.: Mon. Not. R. Astron. Soc. 421, 3557 (2012)
 Antonini, F., Chatterjee, S., Rodriguez, C.L., Morscher, M., Pattabiraman, B., Kalogera, V., Rasio, F.A.: Astrophys. J. 816, 65 (2016)
 Arca Sedda, M., Askar, A., Giersz, M.: Mon. Not. R. Astron. Soc. 479, 4652 (2018)
 Askar, A., Arca Sedda, M., Giersz, M.: Mon. Not. R. Astron. Soc. 478, 1844 (2018)
 Bédorf, J., Gaburov, E., Portegies Zwart, S.: Comput. Astrophys. Cosmol. 2, 8 (2015)
 Belczynski, K., Sadowski, A., Rasio, F.A., Bulik, T.: Astrophys. J. 650, 303 (2006)
 Binney, J., Tremaine, S.: Galactic Dynamics, 2nd edn. Princeton Univ. Press, Princeton (2008)
 Breen, P.G., Heggie, D.C.: Mon. Not. R. Astron. Soc. 432, 2779 (2013)
 Capuzzo-Dolcetta, R., Spera, M., Punzo, D.: J. Comput. Phys. 236, 580 (2013)
 Casertano, S., Hut, P.: Astrophys. J. 298, 80 (1985)
 Chandrasekhar, S.: Astrophys. J. 97, 243 (1943)
 Chomiuk, L., Strader, J., Maccarone, T.J., et al.: Astrophys. J. 777, 69 (2013)
 Clark, G.W.: Astrophys. J. 199, L143 (1975)
 Dehnen, W., Read, J.I.: Eur. Phys. J. Plus 126, 28 (2011)
 Downing, J.M.B.: Mon. Not. R. Astron. Soc. 425, 2234 (2012)
 Fregeau, J.M., Cheung, P., Portegies Zwart, S.F., Rasio, F.A.: Mon. Not. R. Astron. Soc. 352, 1 (2004)
 Fregeau, J.M., Joshi, K.J., Portegies Zwart, S.F., Rasio, F.A.: Astrophys. J. 570, 171 (2002)
 Fregeau, J.M., Rasio, F.A.: Astrophys. J. 658, 1047 (2007)
 Freitag, M.: In: Aarseth, S.J., Tout, C.A., Mardling, R.A. (eds.) The Cambridge N-Body Lectures. Lecture Notes in Physics, vol. 760, pp. 123–158. Springer, Dordrecht (2008). https://doi.org/10.1007/978-1-4020-8431-7
 Freitag, M., Benz, W.: Astron. Astrophys. 375, 711 (2001)
 Fujii, M., Iwasawa, M., Funato, Y., Makino, J.: Publ. Astron. Soc. Jpn. 59, 1095 (2007)
 Gaburov, E., Harfst, S., Portegies Zwart, S.: New Astron. 14, 630 (2009)
 Giersz, M.: Mon. Not. R. Astron. Soc. 298, 1239 (1998)
 Giesers, B., et al.: Mon. Not. R. Astron. Soc. 475, L15 (2018)
 Harfst, S., Gualandris, A., Merritt, D., Mikkola, S.: Mon. Not. R. Astron. Soc. 389, 2 (2008)
 Heggie, D.C.: Mon. Not. R. Astron. Soc. 445, 3435 (2014)
 Heggie, D.C., Hut, P.: The Gravitational Million-Body Problem: A Multidisciplinary Approach to Star Cluster Dynamics. Cambridge Univ. Press, Cambridge (2003)
 Hénon, M.: Astrophys. Space Sci. 14, 151 (1971)
 Hurley, J.R., Pols, O.R., Tout, C.A.: Mon. Not. R. Astron. Soc. 315, 543 (2000)
 Hurley, J.R., Tout, C.A., Aarseth, S.J., Pols, O.R.: Mon. Not. R. Astron. Soc. 323, 630 (2001)
 Hurley, J.R., Tout, C.A., Pols, O.R.: Mon. Not. R. Astron. Soc. 329, 897 (2002)
 Jeans, J.H.: Mon. Not. R. Astron. Soc. 79, 408 (1919)
 Joshi, K.J., Rasio, F.A., Portegies Zwart, S.: Astrophys. J. 540, 969 (2000)
 Kremer, K., Chatterjee, S., Rodriguez, C.L., Rasio, F.A.: Astrophys. J. 852, 29 (2018b)
 Kremer, K., Chatterjee, S., Ye, C.S., Rodriguez, C.L., Rasio, F.A.: arXiv:1808.02204 (2018a)
 Kremer, K., Ye, C.S., Chatterjee, S., et al.: Astrophys. J. 855, L15 (2018)
 Maccarone, T.J., Kundu, A., Zepf, S.E., Rhode, K.L.: Nature 445, 183 (2007)
 Mackey, A.D., Wilkinson, M.I., Davies, M.B., et al.: Mon. Not. R. Astron. Soc. 379, L40 (2007)
 Mackey, A.D., Wilkinson, M.I., Davies, M.B., Gilmore, G.F.: Mon. Not. R. Astron. Soc. 386, 65 (2008)
 Makino, J., Hut, P.: Astrophys. J. Suppl. Ser. 68, 833 (1988)
 McMillan, S.L.W.: Astrophys. J. 307, 126 (1986)
 McMillan, S.L.W., Lightman, A.P.: Astrophys. J. 283, 813 (1984a)
 McMillan, S.L.W., Lightman, A.P.: Astrophys. J. 283, 801 (1984b)
 Merritt, D., Piatek, S., Portegies Zwart, S., Hemsendorf, M.: Astrophys. J. Lett. 608, L25 (2004)
 Miller-Jones, J.C.A., Strader, J., Heinke, C.O., et al.: Mon. Not. R. Astron. Soc. 453, 3919 (2015)
 Morscher, M., Pattabiraman, B., Rodriguez, C., Rasio, F.A., Umbreit, S.: Astrophys. J. 800, 9 (2015)
 Morscher, M., Umbreit, S., Farr, W.M., Rasio, F.A.: Astrophys. J. 763, L15 (2013)
 Nitadori, K., Aarseth, S.J.: Mon. Not. R. Astron. Soc. 424, 545 (2012)
 Nitadori, K., Makino, J.: New Astron. 13, 498 (2008)
 Pattabiraman, B., Umbreit, S., Liao, W.-k., et al.: Astrophys. J. Suppl. Ser. 204, 15 (2013)
 Peters, P.C.: Phys. Rev. 136, 1224 (1964)
 Pooley, D., Lewin, W.H.G., Anderson, S.F., et al.: Astrophys. J. 591, L131 (2003)
 Portegies Zwart, S., McMillan, S.L.W., van Elteren, E., Pelupessy, I., de Vries, N.: Comput. Phys. Commun. 184, 456 (2013)
 Portegies Zwart, S.F., McMillan, S.L.W., Hut, P., Makino, J.: Mon. Not. R. Astron. Soc. 321, 199 (2001)
 Rodriguez, C.L., Amaro-Seoane, P., Chatterjee, S., et al.: Phys. Rev. Lett. 120, 151101 (2018)
 Rodriguez, C.L., Haster, C.-J., Chatterjee, S., Kalogera, V., Rasio, F.A.: Astrophys. J. 824, L8 (2016b)
 Rodriguez, C.L., Morscher, M., Pattabiraman, B., et al.: Phys. Rev. Lett. 115, 051101 (2015)
 Rodriguez, C.L., Morscher, M., Wang, L., et al.: Mon. Not. R. Astron. Soc. 463, 2109 (2016a)
 Sigurdsson, S., Hernquist, L.: Nature 364, 423 (1993)
 Sippel, A.C., Hurley, J.R.: Mon. Not. R. Astron. Soc. 430, L30 (2013)
 Spitzer, L.J.: Astrophys. J. 158, L139 (1969)
 Stodółkiewicz, J.S.: Acta Astron. 32(1–2), 63–91 (1982)
 Strader, J., Chomiuk, L., Maccarone, T.J., Miller-Jones, J.C.A., Seth, A.C.: Nature 490, 71 (2012)
 Strader, J., Seth, A.C., Forbes, D.A., et al.: Astrophys. J. 775, L6 (2013)
 Vasiliev, E.: Mon. Not. R. Astron. Soc. 446, 3150 (2014)
 Vasiliev, E., Antonini, F., Merritt, D.: Astrophys. J. 810, 49 (2015)
 Wang, L., Spurzem, R., Aarseth, S., et al.: Mon. Not. R. Astron. Soc. 450, 4070 (2015)
 Wang, L., Spurzem, R., Aarseth, S., et al.: Mon. Not. R. Astron. Soc. 458, 1450 (2016)
 Wisdom, J., Holman, M.: Astron. J. 102, 1528 (1991)