
Beyond visual integration: sensitivity of the temporal-parietal junction for objects, places, and faces

Abstract

One important role of the TPJ is its contribution to the perception of the global gist of hierarchically organized stimuli, in which individual elements create a global visual percept. It is well known that hierarchical, global stimuli activate TPJ regions and that simultanagnosia patients show deficits in recognizing both hierarchical stimuli and real-world visual scenes. However, the link between clinical findings in simultanagnosia and neuroimaging in healthy subjects is missing for real-world global stimuli, such as visual scenes, and the role of the TPJ in real-world scene processing remains unexplored. In the present study, we first localized TPJ regions that responded significantly to the global gist of hierarchical stimuli and then investigated their responses to visual scenes, as well as to single objects and faces as control stimuli. All three stimulus classes evoked significantly positive univariate responses in the previously localized TPJ regions. In a multivariate analysis, we were able to demonstrate that voxel patterns of the TPJ were classified significantly above chance level for all three stimulus classes. These results demonstrate that the TPJ is involved in the processing of complex visual stimuli beyond visual scenes and that it is sensitive to different classes of visual stimuli, each with a specific signature of neuronal activation.

Highlights

Left and right hemispheric TPJ regions show comparable univariate BOLD responses to different object classes (objects, faces, places).

Demonstration that the TPJ has unique activation patterns for the different object classes.

TPJ regions involved in global perception show specifically positive activations compared to TPJ regions not responding to global shapes.

Above-chance decoding of objects, faces, and places from TPJ regions involved in global perception.

Introduction

The temporo-parietal junction (TPJ) is involved in various cognitive functions, such as understanding other people’s intentions and behavior (Theory of Mind; Saxe and Kanwisher) [11, 44, 75, 76], visual search and orienting of attention [14, 28, 38, 39], and visual stimulus detection [5]. Studies of patients with damage to bilateral temporo-parietal cortices exhibiting simultanagnosia, together with functional imaging studies, suggested a TPJ involvement in the perception of hierarchical global stimuli [2, 25, 32, 48] and of objects in demanding viewing conditions [13, 58, 69, 71].

Several studies that associated temporo-parietal brain regions with global perception of hierarchical stimuli used Navon-like stimuli [56], in which a global percept is constructed from local elements [29, 33, 67, 68, 85]. However, a Navon-like, hierarchical stimulus was always intended as a representation of real-world visual scenes in which individual elements create a global visual percept: e.g., humans, trees, walkways, and grass create the global scene impression of a park. It is also known that patients suffering from simultanagnosia show significant deficits in perceiving global, hierarchical stimuli and in grasping the gist of visual scenes like the Broken Window Picture [3, 35, 66, 72].

A recent fMRI study [58] showed the connection between clinical observations in simultanagnosia and the functionality of the healthy human brain. It was demonstrated that the TPJ responds particularly to objects presented in demanding viewing conditions, which is in line with patient studies reporting particular deficits for demanding object presentations in simultanagnosia [13, 66]. However, the link between clinical findings in simultanagnosia and neuroimaging in healthy subjects is missing for real-world global stimuli, like visual scenes. While it is well known that Navon-like, hierarchical stimuli used as artificial representations of real-world global scenes activate TPJ regions [29, 33, 67, 68, 85], and that simultanagnosia patients show deficits during the recognition of hierarchical stimuli and visual scenes, the role of the TPJ in real-world scene processing is unclear.

In the current study, we aimed to investigate the role of the TPJ in the processing of real-world scenes. We first conducted an fMRI TPJ localizer experiment [33] to identify voxels in the TPJ responding to global, hierarchical stimuli. In an independent fMRI experiment, we showed real-world scenes as well as objects and faces as control stimuli. We hypothesized that TPJ voxels responding to artificial representations of global scenes should show stronger responses to real-world visual scenes than to objects and faces. While previous research has already shown an involvement of the TPJ in the processing of visual scenes [55], human faces [46], and object use [83], the present study aims at finding differences in univariate activations or unique voxel patterns between the three stimulus classes (see below).

Beyond univariate responses, multivariate voxel patterns in selected regions of interest provide a unique and sensitive insight into the functionality of a particular brain region. We hypothesized that TPJ areas (responding to global shapes) of the healthy human brain should show unique voxel patterns for real-world scenes, and possibly for real-world complex objects and faces. The existence of unique voxel patterns for real-world scenes (and other stimulus classes) would provide evidence for specific mechanisms supporting the perception of different stimulus classes with different perceptual requirements. Eventually, this result would help to understand the mechanisms underlying simultanagnosia, in which lesions in posterior temporo-parietal brain regions gradually impair the perception of hierarchical stimuli, real-world visual scenes, or even coherent, complex objects. Since the perceptual impairments observed in simultanagnosia are usually not an all-or-nothing phenomenon (e.g., they are modulated by task difficulty; Huberle & Karnath [32, 66]), specific activation patterns might compensate for deficits under certain conditions when crucial brain regions are only partially damaged. In our multivariate analysis approach, we trained a machine learning classifier to assess the specificity of voxel response patterns in the TPJ to real-world scenes, objects, and faces.

Methods

Eighteen healthy individuals participated in the experiments (6 left-handed, 7 female; mean age = 26 years, SD = 3). All had normal or corrected-to-normal vision and gave written informed consent prior to scanning. Participants reported no history of neurological or psychiatric disorders. The experiment was approved by the ethics committee of the medical faculty of the University of Tübingen and conducted in accordance with the Declaration of Helsinki.

The sample size was chosen based on prior experience with fMRI studies involving the TPJ and visual processing providing the necessary statistical power [7, 33, 58, 67, 68].

MRI scans were acquired using a 3T Siemens Magnetom Trio scanner (Siemens AG, Erlangen, Germany) with a 64-channel head coil. Stimuli were presented using Matlab (The Mathworks, Inc., Natick, MA, USA) and the Psychophysics Toolbox [9, 62] and shown on an MR-compatible screen placed behind the scanner bore, which participants viewed via a mirror mounted on the head coil. Behavioral responses were collected using a fiber-optic button response pad (Current Designs, Haverford, PA, USA).

In the fMRI localizer experiment, we presented Navon-like global shape stimuli [56] that had been applied in previous neuroimaging studies [7, 33, 58, 67, 68]. The stimuli showed the global shape of either a circle or a square constructed from local images of squares or circles and were presented in all possible combinations of global and local elements (congruent and incongruent). Each stimulus consisted of 900 small elements organized in 30 columns and 30 rows. In order to minimize learning effects, the global objects were presented at one of four different positions within an individual stimulus (left top, right top, left bottom, right bottom), and luminance and contrast were varied between the objects and their background (e.g., dark objects on a light background and vice versa). We created 192 different stimuli (4 combinations of objects at the global and local level, 48 stimuli per combination differing in luminance and position of the global objects). The stimulus images were scrambled at two levels (20% and 80%) so that the global form could either be recognized (‘intact global perception’) or not (‘scrambled global perception’; Fig. 1A). The stimuli were scrambled by exchanging the small images of objects at the local level with each other; the scrambling percentage indicates the proportion of relocated local elements relative to their total number. Stimuli were presented in two runs (duration of each run: about 7 min), each consisting of 168 experimental trials (42 intact circles, 42 intact squares, 42 scrambled circles, 42 scrambled squares). The 168 stimuli were selected pseudo-randomly from the set of 192 available stimuli, keeping a balanced number of intact and scrambled global stimuli. The global forms were presented at 5.4° visual angle (size of the background square of each stimulus). Participants were instructed to respond via button press whether the stimulus was a circle or a square.
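The scrambling procedure described above can be sketched as follows (a minimal Python illustration, not the authors' original Matlab code; the grid size and swap logic follow the description in the text):

```python
import numpy as np

def scramble_grid(grid, fraction, rng=None):
    """Relocate `fraction` of the local elements by exchanging them
    among each other; the remaining elements of the 30 x 30 grid of
    local images stay in place."""
    rng = np.random.default_rng() if rng is None else rng
    flat = grid.flatten()                      # copy of the 900 elements
    n_move = int(round(fraction * flat.size))  # e.g., 0.8 * 900 = 720
    idx = rng.choice(flat.size, size=n_move, replace=False)
    flat[idx] = flat[rng.permutation(idx)]     # exchange selected elements
    return flat.reshape(grid.shape)

# 30 x 30 grid of local element identifiers
grid = np.arange(900).reshape(30, 30)
scrambled_80 = scramble_grid(grid, 0.80)  # 'scrambled global perception'
scrambled_20 = scramble_grid(grid, 0.20)  # 'intact global perception'
```

Because the selected elements are only permuted among themselves, every local element is preserved and only positions change, matching the description of exchanging local images with each other.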

Fig. 1

Stimulus material. (A) In the localizer fMRI experiment, we showed the global shapes of either a circle or a square constructed from local images of squares or circles. All possible combinations of global and local elements were presented. The images were scrambled at two levels (20% and 80%) so that the global form could either be recognized (intact global shapes) or not (scrambled global shapes). (B) In the main fMRI experiment, we presented visual stimuli from three different categories: places, objects, and faces

In the main fMRI experiment, we used visual stimuli of three different classes: places, objects, and faces (Fig. 1B). We chose classic place stimuli as the representation of visual scenes. The original images were taken from multiple sources [8, 41, 42, 59, 61, 70, 74]. Since the three stimulus classes differ systematically, e.g., faces are mostly round whereas objects can have various shapes, we controlled for low-level stimulus features. In a first step, all stimuli were converted to grayscale using the rgb2gray function in Matlab. In a next step, object and face stimuli were normalized for size by extending the stimuli on their longer axis to the edge of the image background and adjusting the other axis accordingly, i.e., if an object/face was taller than wide, we stretched it to fully cover the y-axis of the background image and adjusted stimulus size accordingly on the x-axis. Luminance was adjusted across all stimuli by calculating the average pixel value across all stimuli and adjusting the pixel values of each stimulus to this average. For object and face stimuli, only the non-white portions of the stimulus were considered when calculating luminance per individual stimulus and across stimuli. To control for differences in spatial frequencies between stimulus classes, we calculated spatial frequencies across all stimuli and excluded the images with the 10% lowest and highest values. We did not filter any spatial frequencies from the images, since this would have induced another systematic bias, e.g., through blurring, and might have caused an unnatural appearance of a stimulus. All stimuli were presented at a size of 5° visual angle (size of the background square of each stimulus). The stimuli were selected from the databases such that as few similar objects as possible were included. Four runs with a duration of 8 min each were conducted. Per run, 180 experimental trials were presented (60 per class) in a pseudorandomized order. Twelve stimuli (four from each stimulus class) were repeated, and stimulus repetitions had to be indicated via button press in a one-back task. In total, 673 different stimuli were presented throughout the experiment.
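The luminance-matching step can be sketched as follows (a minimal Python illustration under the stated assumptions of grayscale images with a white background; the authors used Matlab, and the helper name is ours):

```python
import numpy as np

def match_luminance(images, white=255):
    """Shift each grayscale image so that its mean pixel value
    (computed over non-white pixels only) equals the grand mean
    across all images."""
    masks = [img < white for img in images]           # ignore white background
    means = [float(img[m].mean()) for img, m in zip(images, masks)]
    target = float(np.mean(means))                    # grand mean luminance
    matched = []
    for img, m, mu in zip(images, masks, means):
        adj = img.astype(float)
        adj[m] += target - mu                         # shift non-white pixels
        matched.append(np.clip(adj, 0, white))
    return matched

# two toy 'stimuli' with different mean luminance
imgs = [np.full((4, 4), 100.0), np.full((4, 4), 200.0)]
matched = match_luminance(imgs)
```

After matching, both toy images have the same mean luminance while the white background (for object/face stimuli) would be left untouched.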

We compared the image features contrast, luminance, and spatial frequency between the three stimulus classes for all stimuli used in the present study. We calculated a linear model per image feature with stimulus type (objects, faces, places) as predictor and the respective feature values of the individual stimuli as dependent variable, and found significant differences between the three stimulus types for contrast (F = 12.03, p = 7.9 × 10⁻⁶), luminance (F = 55.07, p < 2.0 × 10⁻¹⁶), and spatial frequency (F = 866.56, p < 2.0 × 10⁻¹⁶). However, each stimulus type showed a distinct pattern across image features. Place stimuli showed the highest values for spatial frequency (p < 2.0 × 10⁻¹⁶ compared against faces and objects). In contrast, object stimuli had the highest luminance (p < 2.0 × 10⁻¹⁶ compared against faces and p = 0.0953 compared against places) and face stimuli the highest contrast (p = 9.0 × 10⁻⁶ compared against objects and p = 6.7 × 10⁻⁷ compared against places). In conclusion, each of the three stimulus types showed image characteristics that varied unsystematically from the other stimulus types, which makes it unlikely that these image features systematically influenced BOLD responses in non-visual areas of the brain, such as the TPJ.
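A linear model with stimulus type as the only predictor is equivalent to a one-way ANOVA, so the F values reported above can be computed as the ratio of between-group to within-group mean squares (a generic sketch with hypothetical feature values; the authors fitted the models in R):

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    k, n = len(groups), all_vals.size
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical feature values (e.g., luminance) per stimulus class
rng = np.random.default_rng(1)
objects = rng.normal(0.60, 0.05, 200)
faces = rng.normal(0.55, 0.05, 200)
places = rng.normal(0.50, 0.05, 200)
F = one_way_anova_f(objects, faces, places)
```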

Both fMRI experiments were event-related designs; stimuli were presented for 300 ms with an inter-stimulus interval of 1700 ms. During the inter-stimulus interval, a central fixation crosshair was presented. The events were ordered in an optimal rapid event-related design specified by optseq2 ([15]; https://surfer.nmr.mgh.harvard.edu/optseq), adding additional fixation baseline time (80 s in the fMRI localizer experiment and 120 s in the main fMRI experiment) distributed between the trials.

MRI data acquisition

Functional images were acquired using multiband echo-planar-imaging (EPI) sequences with parameters from the HCP [53]: TR = 1000 ms, TE = 37 ms, flip angle = 52°, FOV = 187 × 187 mm², 72 slices, voxel size = 2 × 2 × 2 mm³. Single-band reference images (TR = 1000 ms, TE = 37 ms, flip angle = 52°, FOV = 187 × 187 mm², 72 slices, voxel size = 2 × 2 × 2 mm³) were collected before each functional run. Per participant, two T1-weighted anatomical scans (TR = 2280 ms, 176 slices, voxel size = 1.0 × 1.0 × 1.0 mm³, FOV = 256 × 256 mm², TE = 3.03 ms, flip angle = 8°) were collected at the end of the experimental session.

fMRI data analysis

Data pre-processing and model estimation were performed using SPM12 (http://www.fil.ion.ucl.ac.uk/spm). Functional images were realigned to each participant’s first image, aligned to the AC-PC axis, and slice-time corrected. The original single-band image was then co-registered to the pre-processed functional images, and the anatomical image was co-registered to the single-band image. The resolution of the single-band image was up-sampled before the anatomical image was aligned to it. Functional images were smoothed with a 4 mm FWHM Gaussian kernel. Time series of hemodynamic activation were modeled based on the canonical hemodynamic response function (HRF) as implemented in SPM12. Low-frequency noise was eliminated with a high-pass filter of 128 s. Correction for temporal autocorrelation was performed using an autoregressive AR(1) process. Movement parameters (roll, pitch, yaw; linear movement in x-, y-, and z-directions) estimated in the realignment were included as regressors of no interest. To avoid bias associated with spatial normalization, analyses were conducted in native space. For the fMRI localizer experiment, we used two experimental regressors: intact global shapes and scrambled global shapes. For the main fMRI experiment, the experimental regressors consisted of the three stimulus classes: places, objects, and faces. Hit trials from the one-back task and trials with accidental/erroneous button presses were not modeled explicitly. To report MNI coordinates of our functional ROIs, we normalized the anatomical and functional data and re-created the functional ROIs in MNI space using the method described below for native space.

Region of interest (ROI) analysis

Anatomical ROIs were created applying Freesurfer’s cortical reconstruction routine [16, 22] and the Destrieux atlas [19] for each subject. To create an individual anatomical TPJ ROI for each participant, we combined the posterior third of the superior temporal gyrus (Freesurfer labels 11174 and 12174), the sulcus intermedius primus (labels 11165 and 12165), the angular gyrus (labels 11125 and 12125), and the posterior half of the supramarginal gyrus (labels 11126 and 12126). Since the functional anatomy of global perception within posterior temporo-parietal brain areas is still under debate, with possibly high inter-individual variability, we decided to create individual global TPJ ROIs within liberal anatomical boundaries [7, 58]. Our individual anatomical-functional ROIs might therefore represent an adequate anatomical correlate of global perception in individual posterior temporo-parietal brain areas.

In a next step, we identified individual voxels that showed higher signals for intact global shapes compared to fixation baseline as functional ROIs involved in global shape perception. The voxel-level threshold was set to p < 0.05 (uncorrected) without a cluster threshold. Each participant’s individual global shape TPJ ROI was created as the intersection between the functional intact global shapes vs. baseline contrast and the anatomical TPJ ROI. We identified functional global shape TPJ ROIs in both hemispheres in all 18 participants. Example structural and functional TPJ ROIs are presented in Fig. 2A. The average size of the individual TPJ global shape ROIs was 1216.22 mm³ (SD = 902.07 mm³) in the left hemisphere and 1027.78 mm³ (SD = 515.95 mm³) in the right hemisphere. The mean center of mass was located at the MNI coordinates x = −48.33 (SD = 6.48), y = −41.67 (SD = 9.76), z = 32.78 (SD = 9.26) for left hemispheric global shape TPJ ROIs and x = 47.56 (SD = 4.68), y = −44.67 (SD = 8.03), z = 31.11 (SD = 8.71) for right hemispheric global shape TPJ ROIs. Since the ROIs were created based on individual anatomy and individual BOLD responses to the functional localizer task, they differed substantially in size and form, including partially connected or distinct subclusters (see Fig. 2A).
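Constructing the functional ROI amounts to intersecting a thresholded statistical map with the anatomical mask; on boolean voxel arrays this is a single logical AND (a schematic sketch; the actual ROI creation used SPM12 contrasts and Freesurfer masks):

```python
import numpy as np

def global_shape_roi(t_map, anat_mask, t_threshold):
    """Voxels inside the anatomical TPJ mask whose intact-global-shapes
    vs. baseline statistic exceeds the (uncorrected) threshold."""
    return (t_map > t_threshold) & anat_mask

# toy 2 x 2 x 2 volume: one voxel is both supra-threshold and in the mask
t_map = np.zeros((2, 2, 2))
t_map[0, 0, 0] = 3.2
t_map[1, 1, 1] = 3.5           # supra-threshold but outside the mask
anat = np.zeros((2, 2, 2), dtype=bool)
anat[0, 0, 0] = True
roi = global_shape_roi(t_map, anat, t_threshold=1.74)
```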

Fig. 2

Univariate ROI Analysis. (A) Individual anatomical TPJ ROIs were created applying Freesurfer’s cortical reconstruction routine [16, 22] and the Destrieux atlas [19]. From these ROIs, we then identified voxels that showed higher signals for intact global shapes compared to baseline for each subject (red voxels: global TPJ ROI, blue voxels: control ROI). Percent signal change values and beta-coefficients extracted from these individual ROIs were then used for univariate and multivariate statistical analyses. Example ROIs from four representative subjects are presented in standard MNI space on a surface version of the ch2 brain using the BrainNet viewer [84]. (B) Percent signal change values for places, objects, and faces from left and right hemispheric global shape TPJ ROIs with corresponding error bars (standard error of the mean). The dots represent the individual data points per participant

As control ROI, we selected all voxels of the individual anatomical TPJ ROIs not responding to global shapes (Fig. 2A). The average size of these individual control ROIs was 11104.20 mm³ (SD = 1493.74 mm³) in the left hemisphere and 11562.70 mm³ (SD = 1479.70 mm³) in the right hemisphere. The mean center of mass was located at the MNI coordinates x = −49.44 (SD = 2.15), y = −51.44 (SD = 2.73), z = 29.44 (SD = 2.55) for left hemispheric and x = 51.00 (SD = 1.85), y = −46.33 (SD = 3.77), z = 30.00 (SD = 3.36) for right hemispheric ROIs.

We used MarsBar (http://marsbar.sourceforge.net) to extract the mean percent signal change from the individual global shape TPJ ROIs for all three experimental conditions of the main fMRI experiment (per run and participant). For statistical data analysis, we applied linear mixed-effects models using R’s lme4 and lmerTest packages. Model estimation was done using restricted maximum likelihood (REML). Statistical significance was assessed using the Anova function provided by the car package.

Multivoxel pattern analysis (MVPA)

A univariate analysis can only demonstrate differences in signal strength between experimental conditions in a ROI, e.g., stronger signals for places vs. objects in TPJ regions. In contrast, an MVPA allows distinguishing between multivariate patterns evoked by distinct experimental conditions [26, 27]. First, we created feature vectors from the fMRI data following the approach suggested by Mumford et al. [54]: for each stimulus class from the main fMRI experiment and the fMRI localizer experiment and for every participant, we calculated beta regression coefficient images for each experimental trial separately by running a general linear model including a regressor for the respective trial as well as another regressor for all other trials. For this analysis, we used unsmoothed images and did not apply any high-pass filtering in the statistical model. The resulting beta values of voxels from the individual global shape TPJ ROIs (and control ROIs) were then used as features for training and testing support vector machines (SVM) using the R package e1071.

We aimed at demonstrating that TPJ areas responding to intact global shapes show specific voxel pattern responses for places in contrast to objects and faces. Per participant and hemisphere, we selected beta values for every experimental trial from the main fMRI experiment (objects, faces, places) from every voxel of the previously defined global shape TPJ ROI. Each experimental trial was treated as an observation and each voxel as a feature for the machine learning model. We split the data across all experimental runs randomly into a training set (80% of trials = 576 data points) and a test set (20% of trials = 144 data points) and trained an SVM with a linear kernel on the training set. Using the training data, we conducted a grid search with 10-fold cross-validation to optimize the regularization parameter C = [0.01, 0.1, 1, 10, 100] and gamma = [0.1, 0.5, 1, 2]. Using the built-in tune() function of the e1071 package for cross-validation, we sampled trials randomly from the training data, avoiding the systematic bias that can be induced by a leave-one-run-out cross-validation. Using the SVM model, we predicted from the voxel patterns of the test set (per participant and hemisphere) whether an individual trial was an object, face, or place. We calculated classification accuracy and the predictive value per condition. The predictive value per condition was calculated as: correct classifications (condition A) / [correct classifications (condition A) + incorrect classifications (condition A)]. The overall classification accuracy was calculated as the proportion of correctly classified trials relative to all classified trials across all three stimulus classes: [correct classifications (A) + correct classifications (B) + correct classifications (C)] / [all classifications (A) + all classifications (B) + all classifications (C)]. We interpret a potential difference in classification accuracies between conditions as noisier data in one condition compared to the others.
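The two performance measures defined above can be written down directly (a Python sketch of the formulas with toy labels; the actual classification used R's e1071):

```python
import numpy as np

def overall_accuracy(true_labels, predicted):
    """Correctly classified trials relative to all classified trials."""
    true_labels, predicted = np.asarray(true_labels), np.asarray(predicted)
    return float((true_labels == predicted).mean())

def predictive_value(true_labels, predicted, label):
    """Correct classifications of `label` divided by all trials that
    were classified as `label` (correct + incorrect)."""
    true_labels, predicted = np.asarray(true_labels), np.asarray(predicted)
    classified_as = predicted == label
    return float((true_labels[classified_as] == label).mean())

true = ["face", "place", "object", "face", "place", "object"]
pred = ["face", "place", "face", "face", "object", "object"]
```

Note that the predictive value corresponds to the precision of a class: it is normalized by the number of trials *classified as* that class, not by the number of trials that truly belong to it.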

To explore possible differences between a linear and a non-linear representation of stimulus activation patterns we repeated the MVPA with a radial basis kernel (with otherwise identical methods and parameters as with the linear SVM kernel). This analysis was motivated by the conceptual reasoning that a linear decodability of information indicates an abstract encoding of this information in the respective brain area beyond low-level object features, like shape or size [45, 52]. A linear representation (with higher classification accuracies for a linear kernel) would indicate that a brain region encodes high level stimulus concepts beyond simple low-level features.

Whole brain analysis

We used the spatially normalized functional data to calculate the following whole brain contrasts across all participants: intact vs. scrambled global perception, objects vs. faces and places, faces vs. objects and places as well as places vs. objects and faces. We used the AAL3 toolbox for SPM12 [73] to extract the overlap of the clusters from the respective contrasts with the AAL3 regions.

Results

Behavioral data

To ensure attention to the visual stimuli during the main fMRI experiment, participants were instructed to indicate stimulus repetitions via button press in a one-back task. We calculated a linear mixed-effects model with percent correct values as dependent variable, stimulus (objects, faces, places) as fixed effect, and participant as random effect. Participants detected repetitions of objects (mean: 90%, SD: 14), faces (mean: 94%, SD: 6), and places (mean: 88%, SD: 11) reliably. We observed no significant main effect of stimulus (χ² = 3.29, p = 0.193).

During the localizer fMRI experiment, participants were instructed to indicate whether a stimulus showed a circle or a square. For intact global stimuli, responses were mostly correct (mean: 97%, SD: 16), for scrambled global stimuli, responses were around chance level (mean: 46%, SD: 50).

Univariate ROI analysis

We observed positive BOLD signals for all three experimental conditions in the global TPJ ROI (Fig. 2B): objects (mean percent signal change left hemisphere: 0.06%; right hemisphere: 0.08%), faces (left hemisphere: 0.06%; right hemisphere: 0.07%) and places (left hemisphere: 0.05%; right hemisphere: 0.06%). In contrast, we observed only deactivations in the remainder of the anatomically defined TPJ (Fig. 2B). We found negative BOLD signals for objects (left hemisphere: -0.02%; right hemisphere: -0.01%), faces (left hemisphere: -0.02%; right hemisphere: -0.02%) and places (left hemisphere: -0.03%; right hemisphere: -0.03%).

To statistically quantify the differences between stimulus classes and ROIs, we used percent signal change values as dependent variable in a linear mixed-effects model with fixed effects for stimulus (objects, faces, places), ROI (global TPJ ROI, control ROI), and hemisphere (left vs. right), and participant and run as random effects. There was a significant main effect of stimulus (χ² = 7.20, p = 0.027) and ROI (χ² = 399.72, p < 2.0 × 10⁻¹⁶), but no effect of hemisphere (χ² = 3.05, p = 0.081) and no significant interaction (p > 0.678). See Table 1 for the full model output. This model was calculated to demonstrate the significant difference in univariate activation between the global TPJ ROI and the control ROI.

Table 1 Parameter estimates and results of the univariate ROI analysis (comparison between ROIs)

To assess statistical differences between stimulus classes and hemispheres in the global TPJ ROI, we used percent signal change values as dependent variable in a linear mixed-effects model with fixed effects for stimulus (objects, faces, places) and hemisphere (left vs. right), and participant and run as random effects. There was no significant main effect of stimulus (χ² = 2.19, p = 0.335) or hemisphere (χ² = 1.51, p = 0.220) and no significant interaction (χ² = 0.25, p = 0.882). The full model output is presented in Table 2.

Table 2 Parameter estimates and statistical results of the univariate ROI analysis (global TPJ ROI)

We calculated the same model for the control ROI and observed a significant main effect of stimulus (χ² = 7.53, p = 0.023) but not of hemisphere (χ² = 2.03, p = 0.154) and no significant interaction (χ² = 0.77, p = 0.679). See the full model output in Table 3. Pairwise comparisons showed a significant difference between places and objects (χ² = 6.39, p = 0.011) and between places and faces (χ² = 4.45, p = 0.035), but not between objects and faces (χ² = 0.15, p = 0.694). This analysis shows that the significant main effect of stimulus in the first analysis (including the factor ROI) is driven by greater deactivation for places in the control ROI and is therefore not relevant for the analysis of the global TPJ ROI.

Table 3 Parameter estimates and statistical results of the multivariate ROI analysis (global TPJ ROI)

MVPA

In the left and right hemispheric global TPJ ROI, the overall classification accuracy was significantly above chance level (Fig. 3A; left hemisphere: mean: 40.5%; t-test against 33% chance level: t(17) = 4.34, p = 4.4 × 10⁻⁵; right hemisphere: 40.2%; t(17) = 6.15, p = 4.8 × 10⁻⁷).

Fig. 3

Multivoxel Pattern Analysis (MVPA). (A) Average classification accuracy in the global TPJ ROI across all three stimulus conditions (places, objects, faces) in the left and right hemisphere. (B) Average predictive values (correct classifications of a stimulus class relative to all trials classified as that class) for places, objects, and faces in left and right hemispheric global shape TPJ ROIs. In both panels, the error bars indicate the standard error of the mean. The dashed line indicates the 33% chance level. Asterisks indicate results significantly above the 33% chance level. The dots represent the individual data points per participant

In the left hemispheric global TPJ ROI (Fig. 3B), we observed predictive values significantly above chance for objects (41.9%; t(17) = 2.80, p = 0.018) and places (43.5%; t(17) = 2.86, p = 0.018), but not for faces (36.1%; t(17) = 0.94, p = 0.363). The predictive values in the right hemispheric global TPJ ROI (Fig. 3B) were significantly above chance for objects (41.1%; t(17) = 4.31, p = 3.8 × 10⁻⁴), faces (42.4%; t(17) = 2.97, p = 0.007) and places (37.4%; t(17) = 2.88, p = 0.007).

All t-tests of classification accuracies against chance are corrected for multiple comparisons using the False Discovery Rate (FDR) per MVPA analysis.
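Assuming the standard Benjamini-Hochberg step-up procedure (e.g., as implemented in R's p.adjust), the FDR-adjusted p values can be sketched as:

```python
import numpy as np

def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p values (step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)   # p_(i) * m / i
    # enforce monotonicity from the largest p value downwards
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out
```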

To assess statistical differences of SVM classification between stimulus classes and hemispheres, we used predictive values as dependent variable in a linear mixed-effects model with fixed effects for stimulus (objects, faces, places) and hemisphere (left vs. right), and participant as random effect. There was no significant main effect of stimulus (χ² = 0.36, p = 0.736) or hemisphere (χ² = 0.01, p = 0.922) and no significant interaction (χ² = 4.64, p = 0.098). See Table 3 for the full model output.

To statistically investigate the differences in classification accuracies between SVM models applying a linear vs. a radial kernel (in the global TPJ ROI), we calculated a linear mixed-effects model with predictive values (per stimulus class) as dependent variable, fixed effects for kernel (linear, radial), stimulus (objects, faces, places), and hemisphere (left vs. right), and participant as random effect. We observed higher classification accuracies for the linear kernel across all stimulus classes and both hemispheres (see Table 4) and a significant main effect of kernel (χ² = 13.23, p = 2.8 × 10⁻⁴). There was no other significant main effect (p > 0.106) and no significant interaction (p > 0.125). See Table 5 for the full model output.

Table 4 Predictive values per kernel, hemisphere and stimulus class (global TPJ ROI)
Table 5 Parameter estimates and results of the multivariate ROI analysis (comparison of linear and radial kernels)

We repeated the MVPA for the control ROI using a linear kernel and compared the classification accuracies between the two ROIs (global TPJ ROI vs. control ROI). In the left and right hemispheric control ROIs, the overall classification accuracy was numerically above chance level but not significantly so (left hemisphere: mean: 35.8%; t-test against 33% chance level: t(17) = 2.07, p = 0.054; right hemisphere: 34.9%; t(17) = 1.37, p = 0.207). In the left hemispheric control ROI, we observed predictive values numerically above chance, but not significantly, for objects (36.8%; t(17) = 1.39, p = 0.182), places (33.5%; t(17) = 0.14, p = 0.891), and faces (37.1%; t(17) = 1.32, p = 0.205). The predictive values in the right hemispheric control ROI were also not significantly above chance for objects (34.0%; t(17) = 0.28, p = 0.780), faces (33.6%; t(17) = 1.99, p = 0.845), and places (37.2%; t(17) = 1.37, p = 0.190). Next, we compared predictive values between the global TPJ ROI and the control ROI. We used predictive values as dependent variable in a linear mixed-effects model with fixed effects for ROI (global TPJ ROI vs. control ROI), stimulus (objects, faces, places), and hemisphere (left vs. right), and participant as random effect. We observed a significant main effect of ROI (χ² = 8.63, p = 0.003) but no other significant main effects or interactions (p > 0.061). The full model output is presented in Table 6.

Table 6 Parameter estimates and results of the multivariate ROI analysis (comparison of global TPJ ROI and control ROI)
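
The against-chance comparisons above are one-sample t-tests of per-participant accuracies against the 33% chance level. A minimal sketch, using simulated (not actual) accuracy values:

```python
# One-sample t-test against chance level, as used for the control-ROI accuracies.
# The accuracy values here are simulated stand-ins for 18 participants.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
chance = 1 / 3                                 # three stimulus classes
acc = rng.normal(0.358, 0.05, 18)              # per-participant decoding accuracies

t, p = stats.ttest_1samp(acc, popmean=chance)  # two-sided by default
print(f"t(17) = {t:.2f}, p = {p:.3f}")
```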

Whole brain analysis

For the contrasts intact vs. scrambled global perception and objects vs. faces and places, we only report results uncorrected for multiple comparisons (p < 0.001). For the contrasts faces vs. objects and places, as well as places vs. objects and faces, a considerable number of voxels and clusters survived correction for multiple comparisons (p < 0.05, FWE). For all contrasts, we set a cluster threshold of 100. Results of the whole brain analysis are shown in Table 7 and Fig. 4.

Table 7 Location of cluster peaks from all whole brain contrasts in MNI space and % overlap with left and right hemispheric AAL regions
Fig. 4

Whole brain results. Whole brain results are presented in standard MNI space on a surface version of the ch2 brain using the BrainNet Viewer [84]. The color bar indicates the t value of a given voxel for the respective contrast. For the contrasts Intact vs. Scrambled global perception and Objects vs. Faces and Places, no voxel survived FWE correction for multiple comparisons.

Discussion

In the present study, we showed that bilateral TPJ regions contribute to the perception of several independent object classes (objects, faces, places), with unique activation patterns coding each class specifically. We demonstrated that TPJ regions involved in the perception of global, hierarchical structures are also active during the perception of coherent objects of several object classes. This suggests that both hierarchically organized stimuli and coherent objects are processed in similar bilateral TPJ regions. While univariate responses were significantly positive but not different between the three object classes (objects, faces, places), we identified unique activation patterns for each object class in our multivariate analysis. This suggests that the TPJ may contribute a specific strategy to the processing of each object class.

Our results are in line with previous work showing a significant contribution of the TPJ to the perception of coherent objects [37, 43, 58, 77, 80, 81]. A recent study by Nestmann and colleagues [58] demonstrated that TPJ regions, predominantly in the left hemisphere, responded significantly positively to object stimuli. There, TPJ regions of interest were functionally localized using hierarchical global shapes [7, 58, 67, 68], and object stimuli were presented under different viewing conditions. In contrast, anatomically defined TPJ regions that did not respond to global shapes showed a significantly negative response. With the present study, we were able to replicate and extend the results of Nestmann et al. [58]: we demonstrated that the TPJ is involved not only in the perception of object stimuli but also in the processing of faces and places, with a unique signature of neuronal activations for each object class.

Our results are also in good agreement with studies in simultanagnosia that demonstrated significant perception deficits for hierarchical stimuli as well as for objects, scenes, and places [13, 17, 57, 66]. Simultanagnosia patients not only suffer from impairments in perceiving hierarchical stimuli like Navon letters [56]; they also show significant problems grasping the gist of visual scenes like the Broken Window Picture [3, 35, 66, 72]. The present study confirms these clinical observations by demonstrating that TPJ regions encoding global structures also contribute generally (significant univariate responses) and specifically (unique voxel patterns) to the perception of object classes with which simultanagnosia patients show deficits, e.g., coherent objects, faces, and visual scenes [13, 57, 66]. In conclusion, it is plausible to assume a distributed network of TPJ voxels that contributes to global perception and the perception of coherent structures and that, when lesioned, causes symptoms of simultanagnosia.

Our results are in line with several neuroimaging studies that have demonstrated dorsal contributions to object perception [18, 23, 24, 40]. Numerous studies showed significant contributions of the posterior temporal sulcus, which was included in our anatomical ROI definition, to face [63,64,65] and social scene perception [36, 82]. However, the present study used non-social scenes (without displaying human interaction), adding inanimate scenes to the categories of stimuli processed by the TPJ.

A recent study [1] demonstrated an interesting contribution of dorsal brain areas to the perception of objects and their local elements. In a localizer experiment, the authors identified bilateral anterior and posterior regions in the intraparietal sulcus (IPS) that processed object-centered relations of local object parts. They then demonstrated that the object category (e.g., boats, cars) from an independent experiment could be successfully decoded from the right posterior IPS. This result shows crucial and specific contributions of dorsal brain areas to highly specific processes of object perception. The study by Ayzenberg and Behrmann [1] also suggests a more fine-grained and potentially more informative approach to investigating scene perception in posterior temporo-parietal brain areas, e.g., modulating visual scenes composed of different numbers of local objects to directly test the involvement of the TPJ in hierarchical scene perception.

Our results are also supported by several studies applying TMS to the TPJ, a method that, unlike functional neuroimaging, allows causal conclusions. It was shown that inhibitory TMS over the TPJ significantly reduced the ability to mentally rotate face stimuli, indicating a significant representation of visually presented faces in the TPJ region [86]. It was also demonstrated that a semantic advantage in object processing was significantly disturbed by TMS inhibition over the TPJ, suggesting a higher-order representation of objects in this brain area [60]. A study using movies as visual scene stimuli (designed to test mechanisms of theory of mind) showed that predictions about future events of the presented scene were significantly influenced by TMS over the TPJ [4]. All three studies support our findings of significant and specific contributions of the TPJ to face, object, and scene processing and suggest a particular high-level mechanism of visual perception for all three stimulus types facilitated by TPJ regions.

Our analysis of differences between linear and radial kernels of the SVM models demonstrated significantly better classification accuracies for the SVM model using the linear kernel. The better decoding of the linear classifier indicates a high-level, abstract representation of each object class in TPJ regions: a linearly decodable representation suggests that TPJ regions in fact encode high-level stimulus concepts beyond simple low-level object features [45, 52].
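
The linear vs. radial kernel comparison can be illustrated with a small scikit-learn sketch. The data here are synthetic stand-ins for TPJ voxel patterns, and the injected weak linear class signal is an assumption for demonstration only, not a property of the study's data.

```python
# Sketch of a linear vs. radial (RBF) kernel SVM comparison on synthetic
# "voxel pattern" data with three stimulus classes (objects/faces/places).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 90, 200
X = rng.normal(size=(n_trials, n_voxels))      # noise baseline
y = np.repeat([0, 1, 2], 30)                   # three class labels
X[y == 1, :20] += 0.5                          # weak linear signature, class 1
X[y == 2, 20:40] += 0.5                        # weak linear signature, class 2

# 5-fold cross-validated accuracy for each kernel
for kernel in ("linear", "rbf"):
    acc = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(f"{kernel}: mean CV accuracy = {acc:.2f}")
```

With a linear class structure like this, the linear kernel typically matches or exceeds the RBF kernel, which is the pattern the study interprets as evidence for an abstract, linearly decodable representation.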

We hypothesized that place stimuli might elicit the strongest responses in the TPJ compared to objects and faces. This hypothesis was based on clinical observations that simultanagnosia patients struggle most with scene/place stimuli compared to single objects [72] and on the fact that every real-world visual scene is effectively a hierarchical global stimulus in which individual elements create a superior percept. However, there were no significant univariate differences between places, objects, and faces, while the MVPA detected unique activation signatures for all three stimulus classes across the TPJ voxels responding to global shapes. The behavioral deficits in global scene perception typical of simultanagnosia could therefore arise from problems interpreting an activation pattern specific to place stimuli that is rendered incomplete by lesions to the TPJ. Another reason for the similar univariate activations across all three stimulus classes could be the general functionality of the TPJ as a brain area providing the necessary resources of visual attention to all kinds of object-like visual stimuli [5, 31]. A higher sensitivity for hierarchical Navon-like shapes [29, 33, 34, 67, 68] and for objects in demanding viewing conditions [13, 57, 58, 66] might also be explained by a higher attentional demand in the TPJ for more complex object-like visual content.

Another possible explanation for the absence of significant differences in the univariate analysis could be that the complexity of otherwise coherent objects also requires significant contributions from posterior temporo-parietal brain areas [13, 57, 58, 66]. This explanation is in line with the Recognition-by-Components Theory [6], which postulates that objects are visually processed by assembling various local components into a superior, coherent percept. A comparable explanation can be applied to face stimuli, which per se can be seen as hierarchically organized entities in which local elements, like mouth, eyes, and nose, create a superior percept. Several studies suggested holistic, Gestalt-like processing of faces in healthy human participants [30, 78, 79] (for a review, see Maurer et al. [50]), while studies with patients suffering from simultanagnosia also showed significant deficits in face recognition [49, 51]. Taken together, the absence of a significant difference between neuronal signals for places, objects, and faces can be explained by possibly similar visual processing mechanisms for all three stimulus types.

The results of the present study fit well with the Recognition-by-Components Theory [6], which postulates that real-world objects are assembled from various local components into a superior, coherent percept. Since it was previously shown that posterior temporo-parietal brain areas in close vicinity to the TPJ [29, 33, 67, 68] are involved in the processing of hierarchically organized visual stimuli [56], a mechanism of general feature integration for real-world objects is a possible explanation for the results of our univariate and multivariate analyses.

Another popular theory of visual perception, the theory of visual attention (TVA) [10], and especially one of its extensions, the contour detector (CODE) theory of visual attention [47], might also help explain the present results. The CODE theory claims that visual attention clusters nearby items into perceptual groups, which applies to individual objects and their subcomponents as well as to relations between objects in space. This mechanism is very similar to the visual integration known from the processing of hierarchically organized Navon-like stimuli [56] and can therefore help explain the present results. Since the TPJ has been reported to be involved in visual search and the orienting of attention [14, 28, 38, 39], the CODE theory fits well with our results showing a connection between the TPJ and perceptual mechanisms processing different kinds of real-world object types.

A possible limitation of the current study is its focus on posterior temporo-parietal brain regions in close vicinity to the TPJ [7, 33, 58, 67, 68]. Neuroimaging studies [20, 21] and studies with simultanagnosia patients [12, 57] showed a significant involvement of more posterior, occipital, or ventral brain regions in mechanisms of global perception. Significant differences between stimulus conditions in the univariate analysis might therefore be discovered by focusing on more posterior/ventral areas as ROIs.

In conclusion, we demonstrated that the TPJ responds to several kinds of object stimuli, extending expectations derived from clinical observations in simultanagnosia. A multivariate analysis showed that TPJ subregions responding to global shapes carry a unique activation pattern for places, objects, and faces. These results provide important new insights into the role of the TPJ in visual perception and point towards a general role of the TPJ as a brain area significantly supporting visual perception.

Data availability

The fMRI data and analysis scripts (univariate and multivariate analyses) are available at the following URL: https://osf.io/v8w7f. The structural and functional scans are not publicly available due to the data protection agreement of the University of Tübingen, as approved by the ethics committee of the medical faculty of the University of Tübingen and signed by the participants. Scans are available on request from the corresponding author under a formal data sharing agreement and after obtaining informed consent from each participant.

References

  1. Ayzenberg V, Behrmann M. The dorsal visual pathway represents object-centered spatial relations for object recognition. J Neurosci. 2022;42(23):4693–710. https://doi.org/10.1523/JNEUROSCI.2257-21.2022.

  2. Bálint R. Seelenlähmung Des Schauens, optische ataxie, räumliche Störung Der Aufmerksamkeit. Monatsschrift Für Psychiatrie Und Neurologie. 1909;25(1):51–66.

  3. Balslev D, Odoj B, Rennig J, Karnath H-O. Abnormal center-periphery gradient in spatial attention in simultanagnosia. J Cogn Neurosci. 2014;26(12). https://doi.org/10.1162/jocn_a_00666.

  4. Bardi L, Six P, Brass M. Repetitive TMS of the temporo-parietal junction disrupts participant’s expectations in a spontaneous theory of mind task. Soc Cognit Affect Neurosci. 2017;12(11):1775–82. https://doi.org/10.1093/scan/nsx109.

  5. Beauchamp MS, Sun P, Baum SH, Tolias AS, Yoshor D. Electrocorticography links human temporoparietal junction to visual perception. Nat Neurosci. 2012;15(7):957–9. https://doi.org/10.1038/nn.3131.

  6. Biederman I. Recognition-by-components: a theory of human image understanding. Psychol Rev. 1987;94(2):115–47. https://doi.org/10.1037/0033-295X.94.2.115.

  7. Bloechle J, Huber S, Klein E, Bahnmueller J, Moeller K, Rennig J. Neuro-cognitive mechanisms of global gestalt perception in visual quantification. NeuroImage. 2018;181(April):359–69. https://doi.org/10.1016/j.neuroimage.2018.07.026.

  8. Brady TF, Konkle T, Alvarez GA, Oliva A. Visual long-term memory has a massive storage capacity for object details. Proc Natl Acad Sci USA. 2008;105(38):14325–9. https://doi.org/10.1073/pnas.0803390105.

  9. Brainard DH. The Psychophysics Toolbox. Spat Vis. 1997;10(4):433–6. https://doi.org/10.1163/156856897X00357.

  10. Bundesen C. A theory of visual attention. Psychol Rev. 1990;97(4):523–47. https://doi.org/10.1037/0033-295x.97.4.523.

  11. Bzdok D, Langner R, Schilbach L, Jakobs O, Roski C, Caspers S, Laird AR, Fox PT, Zilles K, Eickhoff SB. Characterization of the temporo-parietal junction by combining data-driven parcellation, complementary connectivity analyses, and functional decoding. NeuroImage. 2013;81:381–92. https://doi.org/10.1016/j.neuroimage.2013.05.046.

  12. Chechlacz M, Rotshtein P, Hansen PC, Riddoch JM, Deb S, Humphreys GW. The neural underpinnings of simultanagnosia: disconnecting the visuospatial attention network. J Cogn Neurosci. 2012;24(3):718–35. https://doi.org/10.1162/jocn_a_00159.

  13. Cooper AC, Humphreys GW. Coding space within but not between objects: evidence from Balint’s syndrome. Neuropsychologia. 2000;38(6):723–33. https://doi.org/10.1016/S0028-3932(99)00150-5.

  14. Corbetta M, Shulman GL. Control of goal-directed and stimulus-driven attention in the brain. Nat Rev Neurosci. 2002;3(3):201–15. https://doi.org/10.1038/nrn755.

  15. Dale AM. Optimal experimental design for event-related fMRI. Hum Brain Mapp. 1999;8(2–3):109–14.

  16. Dale AM, Fischl B, Sereno MI. Cortical surface-based analysis. I. Segmentation and surface reconstruction. NeuroImage. 1999;9:179–94. https://doi.org/10.1006/nimg.1998.0395.

  17. Dalrymple KA, Birmingham E, Bischof WF, Barton JJS, Kingstone A. Experiencing simultanagnosia through windowed viewing of complex social scenes. Brain Res. 2011;1367:265–77. https://doi.org/10.1016/j.brainres.2010.10.022.

  18. Dekker T, Mareschal D, Sereno MI, Johnson MH. Dorsal and ventral stream activation and object recognition performance in school-age children. NeuroImage. 2011;57(3):659–70. https://doi.org/10.1016/j.neuroimage.2010.11.005.

  19. Destrieux C, Fischl B, Dale A, Halgren E. Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature. NeuroImage. 2010;53(1):1–15. https://doi.org/10.1016/j.neuroimage.2010.06.010.

  20. Fink GR, Halligan PW, Marshall JC, Frith CD, Frackowiak RS, Dolan RJ. Where in the brain does visual attention select the forest and the trees? Nature. 1996;382(6592):626–8. https://doi.org/10.1038/382626a0.

  21. Fink GR, Halligan PW, Marshall JC, Frith CD, Frackowiak RS, Dolan RJ. Neural mechanisms involved in the processing of global and local aspects of hierarchically organized visual stimuli. Brain. 1997;120(1):1779–91. https://doi.org/10.1093/brain/120.10.1779.

  22. Fischl B, Salat DH, Busa E, Albert M, Dieterich M, Haselgrove C, Van Der Kouwe A, Killiany R, Kennedy D, Klaveness S, Montillo A, Makris N, Rosen B, Dale AM. Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain. Neuron. 2002;33(3):341–55. https://doi.org/10.1016/S0896-6273(02)00569-X.

  23. Freud E, Plaut DC, Behrmann M. ‘What’ is happening in the dorsal visual pathway. Trends Cogn Sci. 2016;20(10):773–84. https://doi.org/10.1016/j.tics.2016.08.003.

  24. Freud E, Culham JC, Plaut DC, Behrmann M. The large-scale organization of shape processing in the ventral and dorsal pathways. ELife. 2017;6:1–26. https://doi.org/10.7554/eLife.27576.

  25. Friedman-Hill SR, Robertson LC, Treisman A. Parietal contributions to visual feature binding: evidence from a patient with bilateral lesions. Sci (New York N Y). 1995;269(5225):853–5. https://doi.org/10.1126/science.7638604.

  26. Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Sci (New York N Y). 2001;293(5539):2425–30. https://doi.org/10.1126/science.1063736.

  27. Haynes J-D, Rees G. Decoding mental states from brain activity in humans. Nat Rev Neurosci. 2006;7(7):523–34. https://doi.org/10.1038/nrn1931.

  28. Himmelbach M, Erb M, Karnath H-O. Exploring the visual world: the neural substrate of spatial orienting. NeuroImage. 2006;32(4):1747–59. https://doi.org/10.1016/j.neuroimage.2006.04.221.

  29. Himmelbach M, Erb M, Klockgether T, Moskau S, Karnath H-O. fMRI of global visual perception in simultanagnosia. Neuropsychologia. 2009;47(4):1173–7. https://doi.org/10.1016/j.neuropsychologia.2008.10.025.

  30. Hole GJ, George PA, Dunsmore V. Evidence for holistic processing of faces viewed as photographic negatives. Perception. 1999;28(3):341–59. https://doi.org/10.1068/p2622.

  31. Horiguchi H, Wandell BA, Winawer J. A predominantly visual subdivision of the right Temporo-Parietal Junction (vTPJ). Cereb Cortex. 2016;26(2):639–46. https://doi.org/10.1093/cercor/bhu226.

  32. Huberle E, Karnath H-O. Global shape recognition is modulated by the spatial distance of local elements–evidence from simultanagnosia. Neuropsychologia. 2006;44(6):905–11. https://doi.org/10.1016/j.neuropsychologia.2005.08.013.

  33. Huberle E, Karnath H-O. The role of temporo-parietal junction (TPJ) in global gestalt perception. Brain Struct Function. 2012;217(3):735–46. https://doi.org/10.1007/s00429-011-0369-y.

  34. Huberle E, Driver J, Karnath H-O. Retinal versus physical stimulus size as determinants of visual perception in simultanagnosia. Neuropsychologia. 2010;48(6):1677–82. https://doi.org/10.1016/j.neuropsychologia.2010.02.013.

  35. Huberle E, Rupek P, Lappe M, Karnath H-O. Perception of biological motion in visual agnosia. Front Behav Neurosci. 2012;6(August):56. https://doi.org/10.3389/fnbeh.2012.00056.

  36. Isik L, Koldewyn K, Beeler D, Kanwisher N. Perceiving social interactions in the posterior superior temporal sulcus. Proc Natl Acad Sci USA. 2017;114(43):E9145–52. https://doi.org/10.1073/pnas.1714471114.

  37. James TW, Humphrey GK, Gati JS, Menon RS, Goodale MA. Differential effects of viewpoint on object-driven activation in dorsal and ventral streams. Neuron. 2002;35(4):793–801. https://doi.org/10.1016/S0896-6273(02)00803-6.

  38. Karnath H-O, Fruhmann Berger M, Küker W, Rorden C. The anatomy of spatial neglect based on voxelwise statistical analysis: a study of 140 patients. Cereb Cortex. 2004;14(10):1164–72. https://doi.org/10.1093/cercor/bhh076.

  39. Karnath H-O, Rennig J, Johannsen L, Rorden C. The anatomy underlying acute versus chronic spatial neglect: a longitudinal study. Brain. 2011;134(Pt 3):903–12. https://doi.org/10.1093/brain/awq355.

  40. Konen CS, Kastner S. Two hierarchically organized neural systems for object information in human visual cortex. Nat Neurosci. 2008;11(2):224–31. https://doi.org/10.1038/nn2036.

  41. Konkle T, Brady TF, Alvarez GA, Oliva A. Conceptual distinctiveness supports detailed visual long-term memory for real-world objects. J Exp Psychol Gen. 2010a;139(3):558–78. https://doi.org/10.1037/a0019165.

  42. Konkle T, Brady TF, Alvarez GA, Oliva A. Scene memory is more detailed than you think: the role of categories in visual long-term memory. Psychol Sci. 2010b;21(11):1551–6. https://doi.org/10.1177/0956797610385359.

  43. Kosslyn SM, Alpert NM, Thompson WL, Chabris CF, Rauch SL, Anderson AK. Identifying objects seen from different viewpoints. A PET investigation. Brain. 1994;117(5):1055–71. https://doi.org/10.1093/brain/117.5.1055.

  44. Krall SC, Rottschy C, Oberwelland E, Bzdok D, Fox PT, Eickhoff SB, Fink GR, Konrad K. The role of the right temporoparietal junction in attention and social interaction as revealed by ALE meta-analysis. Brain Struct Function. 2014. https://doi.org/10.1007/s00429-014-0803-z.

  45. Kriegeskorte N, Kievit RA. Representational geometry: integrating cognition, computation, and the brain. Trends Cogn Sci. 2013;17(8):401–12. https://doi.org/10.1016/j.tics.2013.06.007.

  46. Li B, Solanas MP, Marrazzo G, Raman R, Taubert N, Giese M, Vogels R, de Gelder B. A large-scale brain network of species-specific dynamic human body perception. Prog Neurobiol. 2023;221:102398. https://doi.org/10.1016/j.pneurobio.2022.102398.

  47. Logan GD. The CODE theory of visual attention: an integration of space-based and object-based attention. Psychol Rev. 1996;103(4):603–49. https://doi.org/10.1037/0033-295x.103.4.603.

  48. Luria A. Disorders of simultaneous perception in a case of bilateral occipito-parietal brain injury. Brain. 1959;82:437–49.

  49. Marotta J, Locheed K. Posterior cortical atrophy: the role of simultanagnosia in deficits of face perception. J Vis. 2011;11(11):580. https://doi.org/10.1167/11.11.580.

  50. Maurer D, Le Grand R, Mondloch CJ. The many faces of configural processing. Trends Cogn Sci. 2002;6(6):255–60. https://doi.org/10.1016/S1364-6613(02)01903-4.

  51. Meek BP, Locheed K, Lawrence-Dewar JM, Shelton P, Marotta JJ. Posterior cortical atrophy: an investigation of scan paths generated during face matching tasks. Front Hum Neurosci. 2013;7(June):309. https://doi.org/10.3389/fnhum.2013.00309.

  52. Misaki M, Kim Y, Bandettini PA, Kriegeskorte N. Comparison of multivariate classifiers and response normalizations for pattern-information fMRI. NeuroImage. 2010;53(1):103–18. https://doi.org/10.1016/j.neuroimage.2010.05.051.

  53. Moeller S, Yacoub E, Olman CA, Auerbach E, Strupp J, Harel N, Uğurbil K. Multiband multislice GE-EPI at 7 tesla, with 16-fold acceleration using partial parallel imaging with application to high spatial and temporal whole-brain fMRI. Magn Reson Med. 2010;63(5):1144–53. https://doi.org/10.1002/mrm.22361.

  54. Mumford JA, Turner BO, Ashby FG, Poldrack RA. Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses. NeuroImage. 2012;59(3):2636–43. https://doi.org/10.1016/j.neuroimage.2011.08.076.

  55. Nardo D, Console P, Reverberi C, Macaluso E. Competition between visual events modulates the influence of salience during free-viewing of naturalistic videos. Front Hum Neurosci. 2016;10(June):1–16. https://doi.org/10.3389/fnhum.2016.00320.

  56. Navon D. Forest before trees: the precedence of global features in visual perception. Cogn Psychol. 1977;9(3):353–83. https://doi.org/10.1016/0010-0285(77)90012-3.

  57. Neitzel J, Ortner M, Haupt M, Redel P, Grimmer T, Yakushev I, Drzezga A, Bublak P, Preul C, Sorg C, Finke K. Neuro-cognitive mechanisms of simultanagnosia in patients with posterior cortical atrophy. Brain. 2016;aww235. https://doi.org/10.1093/brain/aww235.

  58. Nestmann S, Wiesen D, Karnath HO, Rennig J. Temporo-parietal brain regions are involved in higher order object perception. NeuroImage. 2021;234(March):117982. https://doi.org/10.1016/j.neuroimage.2021.117982.

  59. Olmos A, Kingdom FAA. A biologically inspired algorithm for the recovery of shading and reflectance images. Perception. 2004;33(12):1463–73. https://doi.org/10.1068/p5321.

  60. Ortiz-Tudela J, Martín-Arévalo E, Chica AB, Lupiáñez J. Semantic incongruity attracts attention at a pre-conscious level: evidence from a TMS study. Cortex. 2018;102:96–106. https://doi.org/10.1016/j.cortex.2017.08.035.

  61. Park S, Konkle T, Oliva A. Parametric Coding of the size and clutter of natural scenes in the human brain. Cereb Cortex (New York N Y : 1991). 2015;25(7):1792–805. https://doi.org/10.1093/cercor/bht418.

  62. Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis. 1997;10(4):437–42. https://doi.org/10.1163/156856897X00366.

  63. Pinsk MA, Arcaro M, Weiner KS, Kalkus JF, Inati SJ, Gross CG, Kastner S. Neural representations of faces and body parts in macaque and human cortex: a comparative FMRI study. J Neurophysiol. 2009;101(5):2581–600. https://doi.org/10.1152/jn.91198.2008.

  64. Puce A, Allison T, Gore JC, McCarthy G. Face-sensitive regions in human extrastriate cortex studied by functional MRI. J Neurophysiol. 1995;74(3):1192–9.

  65. Puce A, Allison T, Bentin S, Gore JC, McCarthy G. Temporal cortex activation in humans viewing eye and mouth movements. J Neuroscience: Official J Soc Neurosci. 1998;18(6):2188–99.

  66. Rennig J, Karnath H-O. Stimulus size mediates gestalt processes in object perception - evidence from simultanagnosia. Neuropsychologia. 2016;89:66–73. https://doi.org/10.1016/j.neuropsychologia.2016.06.002.

  67. Rennig J, Bilalić M, Huberle E, Karnath H-O, Himmelbach M. The temporo-parietal junction contributes to global gestalt perception-evidence from studies in chess experts. Front Hum Neurosci. 2013;7:513. https://doi.org/10.3389/fnhum.2013.00513.

  68. Rennig J, Himmelbach M, Huberle E, Karnath H-O. Involvement of the TPJ area in processing of novel global forms. J Cogn Neurosci. 2015;27(8):1587–600. https://doi.org/10.1162/jocn_a_00809.

  69. Riddoch MJ, Humphreys GW. Object identification in simultanagnosia: when wholes are not the sum of their parts. Cognit Neuropsychol. 2004;21(2):423–41. https://doi.org/10.1080/02643290342000564.

  70. Righi G, Peissig JJ, Tarr MJ. Recognizing disguised faces. Visual Cognition. 2012;20(2):143–69. https://doi.org/10.1080/13506285.2012.654624.

  71. Robertson L, Treisman A, Friedman-Hill S, Grabowecky M. The Interaction of spatial and object pathways: evidence from Balint’s syndrome. J Cogn Neurosci. 1997;9(3):295–317. https://doi.org/10.1162/jocn.1997.9.3.295.

  72. Roid GH. Stanford-Binet Intelligence scales. Riverside Publishing; 2003.

  73. Rolls ET, Huang C-C, Lin C-P, Feng J, Joliot M. Automated anatomical labelling atlas 3. NeuroImage. 2020;206:116189. https://doi.org/10.1016/j.neuroimage.2019.116189.

  74. Sareen P, Ehinger KA, Wolfe JM. CB database: a change blindness database for objects in natural indoor scenes. Behav Res Methods. 2016;48(4):1343–8. https://doi.org/10.3758/s13428-015-0640-x.

  75. Saxe R, Kanwisher N. People thinking about thinking people. The role of the temporo-parietal junction in theory of mind. NeuroImage. 2003;19(4):1835–42.

  76. Saxe R, Wexler A. Making sense of another mind: the role of the right temporo-parietal junction. Neuropsychologia. 2005;43(10):1391–9. https://doi.org/10.1016/j.neuropsychologia.2005.02.013.

  77. Sugio T, Inui T, Matsuo K, Matsuzawa M, Glover GH, Nakai T. The role of the posterior parietal cortex in human object recognition: a functional magnetic resonance imaging study. Neurosci Lett. 1999;276(1):45–8. https://doi.org/10.1016/S0304-3940(99)00788-0.

  78. Tanaka JW, Farah MJ. Parts and wholes in face recognition. Q J Exp Psychol A. 1993;46(2):225–45. https://doi.org/10.1080/14640749308401045.

  79. Tanaka JW, Sengco JA. Features and their configuration in face recognition. Mem Cognit. 1997;25(5):583–92. https://doi.org/10.3758/bf03211301.


  80. Terhune KP, Liu GT, Modestino EJ, Miki A, Sheth KN, Liu C-SJ, Bonhomme GR, Haselgrove JC. Recognition of objects in non-canonical views: a functional MRI study. J Neuroophthalmol. 2005;25(4):273–9. https://doi.org/10.1097/01.wno.0000189826.62010.48.


  81. Valyear KF, Culham JC, Sharif N, Westwood D, Goodale MA. A double dissociation between sensitivity to changes in object identity and object orientation in the ventral and dorsal visual streams: a human fMRI study. Neuropsychologia. 2006;44(2):218–28. https://doi.org/10.1016/j.neuropsychologia.2005.05.004.


  82. Walbrin J, Downing P, Koldewyn K. Neural responses to visually observed social interactions. Neuropsychologia. 2018;112:31–9. https://doi.org/10.1016/j.neuropsychologia.2018.02.023.


  83. Wurm MF, Schubotz RI. The role of the temporoparietal junction (TPJ) in action observation: agent detection rather than visuospatial transformation. NeuroImage. 2018;165:48–55. https://doi.org/10.1016/j.neuroimage.2017.09.064.

  84. Xia M, Wang J, He Y. BrainNet Viewer: a network visualization tool for human brain connectomics. PLoS ONE. 2013;8(7):e68910. https://doi.org/10.1371/journal.pone.0068910.


  85. Zaretskaya N, Anstis S, Bartels A. Parietal cortex mediates conscious perception of illusory gestalt. J Neurosci. 2013;33(2):523–31. https://doi.org/10.1523/JNEUROSCI.2905-12.2013.


  86. Zeugin D, Notter MP, Knebel JF, Ionta S. Temporo-parietal contribution to the mental representations of self/other face. Brain Cogn. 2020;143:105600. https://doi.org/10.1016/j.bandc.2020.105600.



Funding

Open Access funding enabled and organized by Projekt DEAL. This work was supported by the Deutsche Forschungsgemeinschaft (KA 1258/23-1; RE 3629/2-1).


Author information

Authors and Affiliations

Authors

Contributions

HOK and JR conceptualized the study. JR and CL collected and analyzed the data. JR wrote the main manuscript and prepared the figures. All authors reviewed the manuscript.

Corresponding author

Correspondence to Johannes Rennig.

Ethics declarations

Ethical approval

The study was approved by the ethics committee of the medical faculty of the University of Tübingen and conducted in accordance with the Declaration of Helsinki. All participants gave written informed consent prior to participation in the study.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Rennig, J., Langenberger, C. & Karnath, HO. Beyond visual integration: sensitivity of the temporal-parietal junction for objects, places, and faces. Behav Brain Funct 20, 8 (2024). https://doi.org/10.1186/s12993-024-00233-2



  • DOI: https://doi.org/10.1186/s12993-024-00233-2

Keywords