Review Of Home Range Analyses

Ranges are analysed for many reasons, including estimation of range sizes and habitat use for conservation projects, estimation of range structure and overlap for behavioural studies, and combinations of all these parameters for demographic modelling. Just as there is no one best method for all statistical tests, there is no single best method of range analysis. Methods differ in smoothness of fit to locations and are constrained by sample size.

The different analysis techniques divide loosely into families, whose relationships and properties are outlined in the next few paragraphs. The special value of each technique is mentioned again at the start of the section dealing with its implementation in Ranges. For a more comprehensive recent review, see "A Manual for Wildlife Radio Tagging" (Kenward 2001).

The two main families of range analysis techniques are either primarily parametric, based on estimating location density distributions, or non-parametric, based on linkage distances between individual locations and usually involving a ranking process. The density-based techniques apply the most smoothing. An early circular approach (Calhoun & Casby 1958, Harrison 1958) was extended to estimate bivariate normal ellipses (Jennrich & Turner 1969). These approaches assume that locations are distributed normally on one or two axes about the arithmetic mean of the x and y coordinates for all the locations. The implicit assumptions, that locations are normally distributed and that ranges are mononuclear, are seldom met (White & Garrott 1990), but ellipses remain useful for heavily smoothed estimates of range size when too few locations are available to give appreciable detail.
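
As an illustration, a bivariate normal ellipse area can be sketched directly from the location coordinates. The sketch below assumes planar (x, y) locations in an (n, 2) array and omits the small-sample corrections of the Jennrich & Turner estimator; it is illustrative, not Ranges' own implementation.

```python
# A minimal sketch, assuming planar (x, y) locations in an (n, 2) array.
import numpy as np
from scipy.stats import chi2

def ellipse_area(xy, p=0.95):
    """Area of the bivariate normal ellipse expected to enclose a
    proportion p of the location density."""
    xy = np.asarray(xy, dtype=float)
    cov = np.cov(xy, rowvar=False)   # 2 x 2 covariance about the mean centre
    c = chi2.ppf(p, df=2)            # squared Mahalanobis radius for coverage p
    return np.pi * c * np.sqrt(np.linalg.det(cov))
```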

More sophisticated parametric techniques use kernel estimators to allow for multinuclear distributions (Dixon & Chapman 1980, Donn & Rennolls 1983, Worton 1989). Location density is estimated over a matrix of intersections of a grid, which is placed arbitrarily (i.e. without reference to the coordinate system used for the locations). Contours are then interpolated between the intersections. Density contouring confers shape that is lacking in ellipse models, but still includes assumptions about the density distribution that substantially affect the results. Contouring on Gaussian kernels is mathematically more robust than harmonic mean contouring, and the use of techniques such as least squares cross validation (Worton 1989) to estimate the smoothness of the contouring has some value for providing optimal smoothing (Seaman & Powell 1996), at least for moderate numbers of locations (Hemson et al. 2005). However, applying any continuous density distribution smooths contours into unused areas that border high-use areas, especially when outlying locations extend the density distribution, and the more the smoothing, the less precise the fit to the pattern of locations.
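
For illustration, fixed-kernel contouring can be sketched as below, assuming an (n, 2) array of locations and the reference bandwidth chosen by scipy's gaussian_kde; the grid extent, padding and isopleth search are simplifying choices for the sketch, not Ranges' implementation.

```python
# A minimal sketch, assuming an (n, 2) array of locations and the reference
# bandwidth chosen by scipy; grid extent and padding are arbitrary choices.
import numpy as np
from scipy.stats import gaussian_kde

def kernel_density_level(xy, p=0.95, n_grid=100):
    """Estimate density over a grid and return the density level whose
    contour encloses a proportion p of the estimated utilisation."""
    xy = np.asarray(xy, dtype=float)
    kde = gaussian_kde(xy.T)                      # Gaussian kernels
    pad = 3.0 * xy.std(axis=0)
    gx = np.linspace(xy[:, 0].min() - pad[0], xy[:, 0].max() + pad[0], n_grid)
    gy = np.linspace(xy[:, 1].min() - pad[1], xy[:, 1].max() + pad[1], n_grid)
    xx, yy = np.meshgrid(gx, gy)
    dens = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
    d = np.sort(dens.ravel())[::-1]               # densities, highest first
    cum = np.cumsum(d) / d.sum()                  # cumulative density fraction
    level = d[np.searchsorted(cum, p)]            # level enclosing p of density
    return xx, yy, dens, level
```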

The linkage techniques are based on creating polygons with lines that link adjacent locations. The earliest approach linked peripheral locations to give a minimum sum of linkage distances (imagine stretching string round locations marked by pegs) as a convex polygon (Dalke & Sime 1938, Mohr 1947), which provides comparability between studies owing to its widespread use (Harris et al. 1990). However, outlying locations cause a convex polygon round the peripheral locations to include large unvisited areas. This problem is avoided by excluding the largest linkage distances, either by excluding locations furthest from a range centre to give mononuclear peeled polygons that estimate single-outline territories (Michener 1979), or by restricting the linkage distance along edges to create concave polygons (Stickel 1954, Harvey & Barbour 1965) that may fragment into separate areas, or by creating multinuclear clusters of locations with a minimal sum of nearest-neighbour distances (Kenward 1987, Kenward et al. 2001). In effect, the peripheral convex polygon uses the largest linkages and is the most smoothed of these techniques, while restricting the edge distance reduces the smoothing (as does minimising the sum of nearest-neighbour distances), with a totally unsmoothed (effectively unlinked) option of surrounding each location by a grid cell that has the width of the minimal linkage between locations (i.e. the tracking resolution). Finding the maximum range areas without any smoothing to link locations requires very large numbers of observations to record animals in each grid cell they visit, but with smaller samples of locations the neighbour linkage methods can give detailed outlines enclosing adjacent locations to approximate multinuclear cores.
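
A peripheral convex polygon is straightforward to sketch. The example below assumes planar coordinates and returns the enclosed area, without the boundary strip discussed further on.

```python
# A minimal sketch, assuming planar coordinates; no boundary strip is added.
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area(xy):
    """Area of the peripheral convex polygon round all locations."""
    hull = ConvexHull(np.asarray(xy, dtype=float))
    return hull.volume   # for 2-D input, .volume is the enclosed area
```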

The original concave polygons had the peripheral linkage distances restricted to half the span (the maximum distance between any two locations). However, the span is strongly influenced by outlying locations, and using a maximum linkage of half the span is an arbitrary decision. In this method, as in cluster analysis, the acceptable linkage distance could be incremented, as an equivalent of increasing the smoothing of contours. A refinement in cluster analysis was to define objectively a distance that excluded as outliers those locations with nearest-neighbour distances beyond the normal distribution (Kenward et al. 2001), but plotting outlines round clusters remains problematic: convex polygons around separate clusters occasionally overlap, and a concave solution again raises the issue of defining edge distances. A solution has been to use the outlier-exclusion distance to set the limit for restricted-edge polygons. The resulting Objective-Restricted-Edge Polygons should give results almost identical to those of the nearest-neighbour convex hull approach advocated by Getz & Wilmers (2004).
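
The outlier-exclusion idea can be sketched as below. The one-tailed normal cutoff on nearest-neighbour distances is an illustrative assumption, since Kenward et al. (2001) define the exclusion distance in more detail than this sketch.

```python
# A minimal sketch; the one-tailed normal cutoff is an illustrative
# assumption, not the exact criterion of Kenward et al. (2001).
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import norm

def nearest_neighbour_outliers(xy, alpha=0.05):
    """Flag locations whose nearest-neighbour distance lies beyond the
    upper (1 - alpha) tail of a normal fitted to all such distances."""
    xy = np.asarray(xy, dtype=float)
    dist, _ = cKDTree(xy).query(xy, k=2)   # k=2: self plus nearest neighbour
    nn = dist[:, 1]
    cutoff = nn.mean() + norm.ppf(1 - alpha) * nn.std()
    return nn > cutoff, cutoff
```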

All linkage-based (polygon-generating) range analyses are best used with a boundary strip. The strip is half the tracking resolution, which allows for the true position of the animal lying up to this distance on either side of the registered coordinates. Use of a boundary strip provides a consistent relationship between the different linkage techniques. Thus, outlying locations in cluster or restricted-edge analyses become isolated grid cells; moreover, if the tracking resolution is used to set the linkage restriction of concave polygons, the range is estimated as grid cells.
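
A boundary strip can be sketched by buffering a polygon outline by half the tracking resolution. The example below assumes the shapely library and a convex-polygon outline; it is a sketch of the idea, not Ranges' implementation.

```python
# A minimal sketch, assuming the shapely library and a convex outline;
# the strip is half the tracking resolution, as described above.
from shapely.geometry import MultiPoint

def polygon_with_boundary_strip(xy, resolution):
    """Convex polygon round the locations, expanded on all sides by a
    boundary strip of half the tracking resolution."""
    hull = MultiPoint([tuple(p) for p in xy]).convex_hull
    return hull.buffer(resolution / 2.0)
```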

A number of studies have examined how the performance of different analysis methods, for estimating area, shape and internal structure of home ranges, is affected by numbers and distribution of locations. Estimating minimum numbers of locations required is important, because there is now a consensus that individual ranges, not locations, should be the sample units for statistical tests, to avoid assumptions about independence of locations within each range (Kenward 1992, Aebischer et al. 1993, Otis & White 1999).

In general, the more detail on shape and structure that is required, the more locations are needed to define a range. Ellipses give the most stable estimates, requiring as few as 10 fixes for a stable index of area, assuming that consecutive fixes are independent in time. However, ellipses estimate an animal’s true trajectory least precisely (Robertson et al. 1999) and are highly sensitive to outlying locations unless there is centre-weighting (Samuel & Garton 1985). Density contouring improves on that precision, and can give stable area estimates with only 15-20 fixes, although at least 30 locations are often necessary for smoothing of kernel contour estimates by least squares cross validation (Seaman et al. 1999). Harmonic mean estimation gives greater precision than kernels with nominal (fixed) smoothing (Robertson et al. 1999) and lower sensitivity to outliers, but can require unusually extensive calculations to minimise matrix-dependence effects. Compared with contouring, polygon peeling gives similar or better precision, but also typically requires at least 30 locations for stability. Expansive range outlines, including ellipses, contours containing 95% of the location density, and minimum convex polygons (MCPs) round all the locations, seem suitable for estimating the habitat available to an animal, while peeled polygons or contours containing 40-90% of the locations are useful for examining range overlaps. Cores from cluster polygons contain the highest density of the trajectory, and cluster polygons seem especially appropriate for examining habitat use and internal structure of ranges when habitats are coarse-grained relative to range size (Kenward et al. 2001), with similar results to be expected from objective-restricted-edge polygons. However, the large numbers of locations required for stability with both these techniques are likely to be exceeded only by those needed to estimate ranges as unconnected grid cells.
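
Mononuclear polygon peeling with iterative recalculation of the arithmetic mean can be sketched as below. The core percentage and recalculation rule follow the description above; the code is illustrative rather than Ranges' own.

```python
# A minimal sketch of mononuclear peeling about the arithmetic mean,
# recalculated after each exclusion; illustrative, not Ranges' own code.
import numpy as np
from scipy.spatial import ConvexHull

def peeled_polygon(xy, core=0.95):
    """Convex polygon round the proportion `core` of locations closest
    to the iteratively recalculated arithmetic mean."""
    pts = np.asarray(xy, dtype=float)
    n_keep = int(np.ceil(core * len(pts)))
    while len(pts) > n_keep:
        centre = pts.mean(axis=0)                      # recalculate the mean
        far = np.argmax(((pts - centre) ** 2).sum(axis=1))
        pts = np.delete(pts, far, axis=0)              # peel furthest location
    return ConvexHull(pts)
```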

Ranges implements all of the analyses that have been used in more than one refereed publication and are robust to a wide variety of location distributions and sample sizes. The latter condition excludes Fourier analysis (Anderson 1982). Polygon peeling techniques differ from the location-density polygon of Hartigan (1987) and mononuclear peeled convex hulls (Glendinning 1991, Worton 1995); however, peeling with iterative recalculation of the arithmetic mean should give very comparable results. Cluster analysis should give results similar to those from Dirichlet tessellations (Wray et al. 1992), without the problems of setting limits to outer tiles.

Which method to use depends on the number of locations and the biological questions. With 10-12 locations, ellipses can give robust results for questions of range size. With more locations, density contours can give some shape to ranges for questions about sociality and use of resources. With at least 30 locations of low spatio-temporal correlation, linkage techniques become appropriate for more precise definition of areas visited, with clustering or edge restriction based on outlier exclusion to define range-core polygons that conform to abrupt habitat boundaries. To avoid a priori choices, you may apply several techniques and adjust alpha in Bonferroni fashion, say to 0.01 for P<0.05 across 5 techniques (0.05/5 = 0.01); a broad selection could be ellipses, contour peripheries and cores, peripheral convex polygons and outlier-excluded cores (Walls & Kenward 2001).