Computer-aided method for calculating animal configurations during social interactions from two-dimensional coordinates of color-marked body parts

Citation
P. Sustr et al., Computer-aided method for calculating animal configurations during social interactions from two-dimensional coordinates of color-marked body parts, BEHAV RE ME, 33(3), 2001, pp. 364-370
Citations number
12
Subject categories
Psychology
Journal title
BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS
ISSN journal
0743-3808
Volume
33
Issue
3
Year of publication
2001
Pages
364 - 370
Database
ISI
SICI code
0743-3808(200108)33:3<364:CMFCAC>2.0.ZU;2-L
Abstract
In an experiment investigating the impact of preweaning social experience on later social behavior in pigs, we were interested in the mutual spatial positions of pigs during paired social interactions. To obtain these data, we applied a different-colored mark to the head and back of each of 2 pigs per group and videotaped the pigs' interactions. We used the EthoVision tracking system to provide xy coordinates of the four colored marks every 0.2 sec. This paper describes the structure and functioning of a FoxPro program designed to clean the raw data and use it to identify the mutual body positions of the 2 animals at 0.2-sec intervals. Cleaning the data was achieved by identifying invalid data points and replacing them by interpolation. An algorithm was then applied to extract three variables from the coordinates: (1) whether the two pigs were in body contact; (2) the mutual orientation (parallel, antiparallel, or perpendicular) of the two pigs; and (3) whether the pig in the "active" position made snout contact in front of, or behind, the ear base of the other pig. Using these variables, we were able to identify five interaction types: Pig A attacks, Pig B attacks, undecided head-to-head position, "clinch" resting position, or no contact. To assess the reliability of the automatic system, a randomly chosen 5-min videotaped interaction was scored for mutual positions both visually (by 2 independent observers) and automatically. Good agreement was found between the data from the 2 observers and between each observer's data and the data from the automated system, as assessed using Cohen's kappa coefficients.
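
The authors' program was written in FoxPro and is not reproduced in this record. The following is a minimal Python sketch of two of the steps described in the abstract: replacing invalid tracking points by interpolation and classifying the mutual orientation of the two animals from their head and back marks. The angle bins (45/135 degrees), the use of None to flag invalid points, and the function names are illustrative assumptions, not details taken from the paper.

    import math

    def interpolate_invalid(track):
        """Replace None entries in a list of (x, y) points by linear interpolation
        between the nearest valid neighbours (gaps at the ends are left as None)."""
        cleaned = list(track)
        for i, p in enumerate(cleaned):
            if p is None:
                prev_i = next((j for j in range(i - 1, -1, -1) if cleaned[j] is not None), None)
                next_i = next((j for j in range(i + 1, len(track)) if track[j] is not None), None)
                if prev_i is not None and next_i is not None:
                    t = (i - prev_i) / (next_i - prev_i)
                    x0, y0 = cleaned[prev_i]
                    x1, y1 = track[next_i]
                    cleaned[i] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        return cleaned

    def mutual_orientation(head_a, back_a, head_b, back_b):
        """Classify the relative body-axis orientation of two animals as
        'parallel', 'antiparallel', or 'perpendicular'."""
        angle_a = math.atan2(head_a[1] - back_a[1], head_a[0] - back_a[0])
        angle_b = math.atan2(head_b[1] - back_b[1], head_b[0] - back_b[0])
        deg = abs(math.degrees(angle_a - angle_b)) % 360
        if deg > 180:              # fold the difference onto [0, 180]
            deg = 360 - deg
        if deg < 45:
            return "parallel"
        if deg > 135:
            return "antiparallel"
        return "perpendicular"

    if __name__ == "__main__":
        track = [(0.0, 0.0), None, (2.0, 2.0)]
        print(interpolate_invalid(track))   # [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
        # Pig A facing right, pig B facing left -> antiparallel
        print(mutual_orientation((2, 0), (0, 0), (3, 1), (5, 1)))

The remaining steps described in the abstract (detecting body contact, locating snout contact relative to the ear base, and combining the three variables into the five interaction types) would build on the same coordinate geometry and are omitted here.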