Review Of Home Range Analyses

Ranges are analysed for many reasons, including estimation of range sizes and habitat use for conservation projects, estimation of range structure and overlap for behavioural studies, and combinations of all these parameters for demographic modelling. Just as there is no one best method for all statistical tests, there is no single best method of range analysis. Methods differ in smoothness of fit to locations and are constrained by sample size.

The different analysis techniques divide loosely into families, whose relationships and properties are outlined in the next few paragraphs. The special value of each technique is mentioned again at the start of the section dealing with its implementation in Ranges. For a more comprehensive recent review, see "A Manual for Wildlife Radio Tagging" (Kenward 2001).

The two main families of range analysis techniques are either primarily parametric, based on estimating location density distributions, or non-parametric, based on linkage distances between individual locations and usually involving a ranking process. The density-based techniques apply the most smoothing. An early circular approach (Calhoun & Casby 1958, Harrison 1958) was extended to estimate bivariate normal ellipses (Jennrich & Turner 1969). These approaches assume that locations are distributed normally on one or two axes about the arithmetic mean x and y coordinates of all the locations. The implicit assumptions of normality and of mononuclear ranges are seldom met (White & Garrott 1990), but ellipses remain useful for extremely smoothed estimates of range size when too few locations are available to give appreciable detail.
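
As a rough illustration (not the Ranges implementation), the following sketch estimates the area of the bivariate normal ellipse expected to contain a fraction p of the locations, from the sample covariance of the coordinates in the spirit of Jennrich & Turner (1969); the function name and the use of numpy are assumptions for the example.

```python
import numpy as np

def ellipse_home_range_area(x, y, p=0.95):
    """Area of the bivariate-normal ellipse expected to contain a
    fraction p of locations, from the sample covariance of the
    coordinates (illustrative sketch, not the Ranges code)."""
    cov = np.cov(np.vstack([x, y]))      # 2x2 covariance about the mean coordinates
    chi2_q = -2.0 * np.log(1.0 - p)      # chi-square quantile with 2 df
    return np.pi * chi2_q * np.sqrt(np.linalg.det(cov))

# toy data: even 10-12 locations can give a stable (if heavily smoothed) index
rng = np.random.default_rng(1)
x, y = rng.normal(0, 100, 12), rng.normal(0, 60, 12)
print(ellipse_home_range_area(x, y))     # area in squared map units
```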

More sophisticated parametric techniques use kernel estimators to allow for multinuclear distributions (Dixon & Chapman 1980, Donn & Rennolls 1983, Worton 1989). Location density is estimated over a matrix of intersections of a grid, which is placed arbitrarily (i.e. without reference to the coordinate system used for the locations). Contours are then interpolated between the intersections. Density-contouring confers shape that is lacking in ellipse models, but still includes assumptions about the density distribution that substantially affect the results. Contouring on Gaussian kernels is mathematically more robust than harmonic mean contouring, and the use of techniques such as least squares cross validation (Worton 1989) to estimate the smoothness of the contouring has some value for providing optimal smoothing (Seaman & Powell 1996), at least for moderate numbers of locations (Hemson et al. 2005). However, applying any continuous density distribution smooths contours into unused areas that border high-use areas, especially when outlying locations extend the density distribution, and the more smoothing, the less precise the fit to the pattern of locations.
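
To make the grid-and-contour idea concrete, here is a minimal sketch of kernel contouring with fixed (reference) smoothing rather than least squares cross validation, assuming numpy and scipy; the function name, grid size and padding are illustrative choices, not the Ranges defaults.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kernel_contour_area(x, y, p=0.95, n_grid=100):
    """Approximate area of the p utilisation contour from a Gaussian
    kernel density estimated at grid intersections (fixed smoothing)."""
    x, y = np.asarray(x), np.asarray(y)
    kde = gaussian_kde(np.vstack([x, y]))              # default reference bandwidth
    pad = 3 * max(x.std(), y.std())                    # let the grid extend past outliers
    gx = np.linspace(x.min() - pad, x.max() + pad, n_grid)
    gy = np.linspace(y.min() - pad, y.max() + pad, n_grid)
    xx, yy = np.meshgrid(gx, gy)
    dens = kde(np.vstack([xx.ravel(), yy.ravel()]))
    cell = (gx[1] - gx[0]) * (gy[1] - gy[0])           # area of one grid cell
    order = np.sort(dens)[::-1]                        # densest cells first
    cum = np.cumsum(order) * cell                      # cumulative probability volume
    idx = min(np.searchsorted(cum, p), len(order) - 1)
    return cell * np.count_nonzero(dens >= order[idx]) # cells inside the p contour
```

Because the kernel spreads density beyond the locations, the area returned includes the smoothing into bordering unused ground described above.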

The linkage techniques are based on creating polygons with lines that link adjacent locations. The earliest approach linked peripheral locations to give a minimum sum of linkage distances (imagine stretching string round locations marked by pegs) as a convex polygon (Dalke & Sime 1938, Mohr 1947), which provides comparability between studies due to its widespread use (Harris et al. 1990). However, outlying locations cause a convex polygon round the peripheral locations to include large unvisited areas. This problem is avoided by excluding the largest linkage distances: locations furthest from a range centre can be excluded to give mononuclear peeled polygons that estimate single-outline territories (Michener 1979); the linkage distance along edges can be restricted to create concave polygons (Stickel 1954, Harvey & Barbour 1965) that may fragment into separate areas; or locations can be grouped into multinuclear clusters with a minimal sum of nearest-neighbour distances (Kenward 1987, Kenward et al. 2001). In effect, the peripheral convex polygon uses the largest linkages and is the most smoothed of these techniques, while restricting the edge distance reduces the smoothing (as does minimising the sum of nearest-neighbour distances). The totally unsmoothed (effectively unlinked) option is achieved when each location is surrounded by a grid cell that has the width of the minimal linkage between locations (i.e. the tracking resolution). Finding the maximum range areas without any smoothing to link locations requires very large numbers of observations to record animals in each grid cell they visit, but with smaller samples of locations the neighbour linkage methods can give detailed outlines enclosing adjacent locations to approximate multinuclear cores.
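
As a sketch of the simplest member of this family, the code below computes a peripheral convex polygon with an optional single peel about the arithmetic mean, assuming numpy and scipy; it omits the iterative mean recalculation and the concave and cluster variants.

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area(x, y, percent=100):
    """Minimum convex polygon area; percent < 100 peels off the
    locations furthest from the arithmetic mean (mononuclear peel)."""
    pts = np.column_stack([x, y])
    if percent < 100:
        d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
        pts = pts[d.argsort()[: int(np.ceil(len(pts) * percent / 100))]]
    return ConvexHull(pts).volume       # for 2-D input, .volume is the area
```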

The original concave polygons had the peripheral linkage distances restricted to half the span, the maximum distance between any two locations. However, the span is strongly influenced by outlying locations, and a maximum linkage of half the span is an arbitrary choice; it is therefore best replaced by one of two more rigorous uses of neighbour-linkage distances.

These two methods are implemented as incremental hierarchical cluster analysis (Kenward 1987) and local convex hulls (Getz & Wilmers 2004). Both use nearest-neighbour distances to define polygons that include the densest locations, and then fuse polygons that have one or more locations in common. Both can produce utilisation plots as distances are increased to reduce density. However, they differ in how they reduce location density within polygons and in the fusing rules. Cluster analysis starts with the 3 locations that have the smallest sum of nearest-neighbour distances, then adds locations to each cluster, or starts new clusters of 3, always minimising this sum. LoCoH forms a convex hull of N locations round each location, then ranks the hulls by size and fuses outlines of those that touch. Although the arbitrary choice of N is problematic, fusing hull outlines can create voids (holes) within polygon outlines, and it avoids a problem in cluster analysis: outlines around separate clusters can occasionally overlap if they are convex polygons. However, using concave outlines can now solve this problem in cluster analysis, in which the hierarchical incremental use of nearest-neighbour distances avoids arbitrary choices.
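
A minimal fixed-k LoCoH sketch is given below, assuming numpy, scipy and shapely; the function name, the default k and the coverage rule are illustrative rather than the published algorithm.

```python
import numpy as np
from scipy.spatial import ConvexHull, cKDTree
from shapely.geometry import Point, Polygon
from shapely.ops import unary_union

def locoh_isopleth(x, y, k=5, p=0.95):
    """Fixed-k LoCoH sketch: a convex hull round each location and its
    k-1 nearest neighbours; hulls are merged smallest-first until the
    merged outline covers fraction p of the locations."""
    pts = np.column_stack([x, y])
    tree = cKDTree(pts)
    hulls = []
    for pt in pts:
        _, idx = tree.query(pt, k=k)          # the location plus its k-1 neighbours
        hv = ConvexHull(pts[idx])             # k >= 3, non-collinear points assumed
        hulls.append(Polygon(pts[idx][hv.vertices]))
    hulls.sort(key=lambda h: h.area)          # smallest (densest) hulls first
    merged = hulls[0]
    for h in hulls[1:]:
        if sum(merged.intersects(Point(q)) for q in pts) >= p * len(pts):
            break
        merged = unary_union([merged, h])
    return merged                             # shapely geometry; .area gives size
```

Merging the smallest hulls first makes the densest nuclei emerge before sparser ground, which is what produces the utilisation plots mentioned above.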

Another recent refinement to avoid arbitrary choices is the objective definition of a distance that excludes as outliers those locations with nearest-neighbour distances beyond the normal distribution (Kenward et al. 2001), which produces an excursion-exclusive home range of usual movements in the sense of Burt (1943). The use of an objective exclusion distance was originally implemented in cluster analysis, but it can also be applied to derive Objective Restricted-Edge Polygons (OREPs) which, like LoCoH, can include voids within polygon outlines.
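
As an illustration of the idea (the published criterion may differ in detail), this sketch drops locations whose nearest-neighbour distance lies beyond the upper tail of a normal distribution fitted to all the nearest-neighbour distances; numpy and scipy are assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def exclude_excursions(x, y, z=2.326):
    """Keep locations whose nearest-neighbour distance falls within the
    normal distribution of all such distances (one-tailed; z=2.326 is
    roughly P=0.01, an illustrative choice)."""
    pts = np.column_stack([x, y])
    d, _ = cKDTree(pts).query(pts, k=2)   # k=2: first neighbour is the point itself
    nn = d[:, 1]                          # distance to the true nearest neighbour
    return pts[nn <= nn.mean() + z * nn.std()]
```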

All linkage-based (polygon-generating) range analyses are best used with a boundary strip. The strip is half the tracking resolution, which makes allowance for the real position of the location being up to this distance on either side of the registered coordinates. Use of a boundary strip provides a consistent relationship between the different linkage techniques: outlying locations in cluster or restricted-edge analyses become isolated grid cells, and if the tracking resolution is used to set the linkage restriction of concave polygons, the range is estimated as grid cells.
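
A boundary strip is simply a buffer of half the tracking resolution. The sketch below, with hypothetical coordinates and resolution and assuming shapely, shows a polygon gaining its strip and an outlying location becoming an isolated grid cell.

```python
from shapely.geometry import Point, Polygon

resolution = 50.0                     # hypothetical tracking resolution in metres
strip = resolution / 2                # boundary strip width

core = Polygon([(0, 0), (120, 10), (150, 140), (20, 120)])   # e.g. a cluster core
outlier = Point(400, 400)                                    # an isolated location

with_strip = core.buffer(strip, join_style=2)   # mitred corners keep edges straight
cell = outlier.buffer(strip, cap_style=3)       # square buffer: an isolated grid cell
print(with_strip.area, cell.area)               # cell.area == resolution**2 == 2500.0
```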

A number of studies have examined how the performance of different analysis methods for estimating the area, shape and internal structure of home ranges is affected by the number and distribution of locations. Estimating the minimum number of locations required is important, because there is now a consensus that individual ranges, not locations, should be the sample units for statistical tests, to avoid assumptions about independence of locations within each range (Kenward 1992, Aebischer et al. 1993, Otis & White 1999).

In general, the more detail on shape and structure that is required, the more locations are needed to define a range. The most stable estimates are ellipses, which can require as few as 10 fixes to give a stable index of area, assuming that consecutive fixes are independent in time. However, ellipses estimate an animal's true trajectory least precisely (Robertson et al. 1998) and are highly sensitive to outlying locations unless there is centre-weighting (Samuel & Garton 1985). Density contouring improves on precision, and can give stable area estimates with only 15-20 fixes, although at least 30 locations are often necessary for smoothing of kernel contour estimates by least squares cross validation (Seaman et al. 1999). Harmonic mean estimation gives greater precision than kernels with nominal (fixed) smoothing (Robertson et al. 1998) and lower sensitivity to outliers, but can require unusually extensive calculations to minimise matrix-dependence effects. Compared with contouring, polygon peeling gives similar or better precision, but also typically requires at least 30 locations for stability. Expansive range outlines, including ellipses and contours containing 95% of the location density or MCPs round all the locations, seem suitable for estimating the habitat available to an animal, while peeled polygons or contours containing 40-90% of the locations are useful for examining range overlaps. Cores from cluster polygons contain the highest density of trajectory, and cluster polygons seem especially appropriate for examining habitat use and internal structure of ranges when habitats are coarse-grained relative to range size (Kenward et al. 2001). However, the number of locations required for stability with these techniques is large (albeit less than with unconnected grid cells).
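
One common way to judge whether a sample is large enough is an incremental area analysis: plot the estimate against the number of locations and look for a plateau. A sketch using the 100% convex polygon (numpy and scipy assumed; any of the estimators above could be substituted) is:

```python
import numpy as np
from scipy.spatial import ConvexHull

def incremental_areas(x, y, start=5):
    """100% convex polygon area as locations accumulate; plotted against
    sample size, a plateau suggests the area estimate has stabilised."""
    pts = np.column_stack([x, y])
    return [ConvexHull(pts[:n]).volume for n in range(start, len(pts) + 1)]
```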

Ranges implements all of the analyses that have been used in more than one refereed publication and that are robust to a wide variety of location distributions and sample sizes. The latter condition excludes Fourier analysis (Anderson 1982). Polygon peeling techniques differ from the location-density polygon of Hartigan (1987) and mononuclear peeled convex hulls (Glendinning 1991, Worton 1995a); however, peeling with iterative recalculation of the arithmetic mean should give very comparable results. Cluster analysis should give results similar to those from Dirichlet tessellations (Wray et al. 1992), without the problems of setting limits to outer tiles. The favouring of a minimal sum of nearest-neighbour distances for optimising Local Convex Hulls (Getz et al. 2007) also converges that technique on cluster analysis.

Which method to use depends on the number of locations and the biological questions. With 10-12 locations, ellipses can give robust results for questions of range size. With more locations, density contours can give some shape to ranges for questions about sociality and use of resources. If you have at least 30 locations with low spatio-temporal correlation, linkage techniques become appropriate for more precise definition of areas visited, with clustering or edge restriction based on outlier exclusion for defining range core polygons that conform to abrupt habitat boundaries. To avoid a priori choices, you may choose several techniques and adjust alpha accordingly, say to 0.01 for P<0.05 with 5 techniques. A broad selection could be: 99% ellipses as an expansive size index whose overlaps define neighbouring animals; 95% kernel contours for a size index with shape for more subtle social and habitat assessments (Walls & Kenward 2001); outlier-excluded cluster cores for a tight definition of home range in grainy habitats; 50% kernels as a more probabilistic index of similar size; and 50% clusters to see if there is a tight focus on any particular resources. The peripheral (100%) convex polygons can provide comparability with older studies but tend not to outperform the other 5 in statistical tests.