In Preparation
2022
Abstract
Despite the abundance of behavioral evidence showing the interaction between attention and prediction in infants, the neural underpinnings of this interaction are not yet well understood. Endogenous attention in adults has been largely localized to the frontoparietal network. However, resting-state and neuroanatomical investigations have found that this frontoparietal network exhibits a protracted developmental trajectory and relies on weak, unmyelinated long-range connections early in infancy. Can this developmentally nascent network nevertheless be modulated by predictions? Here, we conducted the first investigation of infant frontoparietal network engagement as a function of the predictability of visual events. Using functional near-infrared spectroscopy, we analyzed the hemodynamic response in the frontal, parietal, and occipital lobes as infants watched videos of temporally predictable or unpredictable sequences. We replicated previous findings of cortical signal attenuation in the frontal and sensory cortices in response to predictable sequences and extended these findings to the parietal lobe. We also estimated background functional connectivity (i.e., after regressing out task-evoked responses) and found that frontoparietal functional connectivity was significantly greater during predictable than unpredictable sequences, suggesting that this frontoparietal network may underlie how the infant brain communicates predictions. Taken together, our results illustrate that temporal predictability modulates the activation and connectivity of the frontoparietal network early in infancy, supporting the notion that this network may be functionally available early in life despite its protracted developmental trajectory.
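As a rough illustration of the background-connectivity step described above, the sketch below regresses a task-evoked model out of two channels' time series and then correlates the residuals. This is a minimal example under stated assumptions, not the study's actual pipeline: the function name, the design-matrix setup, and the choice of Pearson correlation are all illustrative.

import numpy as np

def background_connectivity(signal_a, signal_b, design_matrix):
    """Correlate two channels after removing task-evoked variance.

    signal_a, signal_b : 1-D arrays, hemodynamic time series (n_timepoints,)
    design_matrix      : 2-D array (n_timepoints, n_regressors) of task
                         regressors (e.g., convolved with a hemodynamic
                         response function)
    """
    # Add an intercept column so the residuals are mean-centered.
    X = np.column_stack([np.ones(len(signal_a)), design_matrix])

    def residualize(y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
        return y - X @ beta                           # task-removed residual

    res_a, res_b = residualize(signal_a), residualize(signal_b)
    # Pearson correlation of the residuals estimates background connectivity.
    return np.corrcoef(res_a, res_b)[0, 1]

Correlating residuals rather than raw signals is what distinguishes background connectivity from ordinary task-state connectivity: shared task-evoked responses would otherwise inflate the correlation between regions.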
Abstract
Infants' looking behaviors are often used to measure attention, real-time processing, and learning, frequently using low-resolution videos. Despite the ubiquity of gaze-related methods in developmental science, current analysis techniques typically involve laborious post hoc coding, imprecise real-time coding, or expensive eye trackers that can increase data loss and require a calibration phase. As an alternative, we propose using computer vision methods to perform automatic gaze estimation from low-resolution videos. At the core of our approach is a neural network that classifies gaze directions in real time. We evaluated our method, called iCatcher, against manually annotated videos from a prior study in which infants looked at one of two pictures on a screen. We demonstrated that the accuracy of iCatcher approximates that of human annotators and that it replicates the prior study's results. Our method is publicly available as an open-source repository at https://github.com/yoterel/iCatcher.
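To make the core idea concrete, here is a minimal, hypothetical sketch of a frame-level gaze classifier in PyTorch. It is not iCatcher's actual architecture (see the repository for that); the class name, input size, and three-way left/right/away label set are illustrative assumptions.

import torch
import torch.nn as nn

class GazeClassifier(nn.Module):
    """Toy CNN mapping a cropped face image to discrete gaze classes."""

    def __init__(self, n_classes: int = 3):  # e.g., left / right / away
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 100x100 -> 50x50
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 50x50 -> 25x25
        )
        self.classifier = nn.Linear(32 * 25 * 25, n_classes)

    def forward(self, x):                     # x: (batch, 3, 100, 100)
        h = self.features(x)
        return self.classifier(h.flatten(1)) # per-class logits

# Usage: one forward pass per video frame yields a gaze-direction label,
# which is how a classifier like this can run in real time on a video stream.
model = GazeClassifier()
frame = torch.randn(1, 3, 100, 100)           # dummy cropped-face frame
gaze_class = model(frame).argmax(dim=1)       # index of predicted direction

Framing gaze estimation as classification over a few screen-relevant directions, rather than regressing a precise gaze point, is what lets this style of method work on low-resolution video without a calibration phase.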