
Simultaneity in the millisecond range as a requirement for effective shape recognition


Neurons of the visual system are capable of firing with millisecond precision, and synchrony of firing may provide a mechanism for "binding" stimulus elements in the image for purposes of recognition. While the neurophysiology is suggestive, there has been relatively little behavioral work to support the proposition that synchrony contributes to object recognition. The present experiments examined this issue by briefly flashing dots that were positioned at the outer boundary of namable objects, similar to silhouettes. Display of a given dot lasted only 0.1 ms, and temporal proximity within dot pairs and among dot pairs was varied as subjects were asked to name each object. In Exp 1, where the display of dot pairs was essentially simultaneous (0.2 ms to show both), there was a linear decline in recognition of the shapes as the interval between pairs increased from 0 ms to 6 ms. Compared with performance at 0 ms of delay, even the 2 ms interval between pairs produced a significant decrease in recognition. In Exp 2 the interval between pairs was constant at 3 ms, and the interval between pair members was varied. Here also a linear decline was observed as the interval between pair members increased from 0 ms to 1.5 ms, with the difference between 0 ms and 0.5 ms being significant. Thus minimal transient discrete cues can be integrated for purposes of shape recognition to the extent that they are synchronously displayed, and coincidence in the millisecond and even submillisecond range is needed for effective encoding of image data.


A cornerstone principle of neurophysiology is the idea that neurons are either intrinsically designed to be selective with respect to the stimuli to which they will respond, or through connections with other units, can be made to be selective [1–4].

A corollary is the concept of a "rate code," this being the notion that the strength or salience of the stimulus is reflected in the average rate at which the cell fires [5]. In this regard, it is assumed that the timing of individual spikes is random and must be averaged over some interval – generally thought to be in the 20–200 ms range.

This time interval seems consistent with various perceptual phenomena, such as the frequency at which a flickering stimulus fuses, the rate that provides for smooth motion in a rapid sequence of still images, and the duration of visible persistence resulting from a brief flash. The fact that an observer can combine partial shape cues over a hundred milliseconds or more to achieve object recognition also suggests that exact timing of the spike signal is not critical.

Eriksen and Collins [6, 7] for example, examined the interval across which two dot patterns could be integrated to allow recognition of a three-consonant trigram. A portion of the dots needed to see the letters of the trigram were contained in each pattern, and random dots were added so that the letters could not be identified by inspection of either pattern alone. However, when presented in succession the information from the two patterns could be combined to allow successful recognition over an interval upward of 100 ms.

A prior study from this lab used a similar approach, i.e., the minimal transient discrete cue protocol [8], in which dots that marked the boundary of namable shapes were broken into two subsets. The number of dots in the subsets allowed for successful recognition with a 75% probability if both subsets were shown very briefly and with no delays. The ability to integrate the information from brief, successive display of the two subsets was a function of room illumination and of the time interval inserted between them. With dim illumination recognition levels fell only by half with a subset interval of 80 ms, and in the dark the hit rate fell less than 25% when the interval between the two subsets was 270 ms.

Results such as these show that shape cues can be combined over many tens or even hundreds of milliseconds. This suggests that the exact timing of spikes being sent forth from the retina is relatively unimportant for conveying shape cues. Put otherwise, and with specific reference to the recognition of shapes using briefly flashed dots, one would think that recognition should not be much affected by the order in which the dots were presented, or small differentials of time interval.

It is somewhat surprising, therefore, to learn that neurons can respond to stimuli with millisecond precision, and to hear proposals that synchrony of firing may be essential for image encoding and object recognition. Von der Malsburg [9, 10] was among the first to suggest that coordinated firing of neurons might be used to specify what stimulus elements belong to a given object and to differentiate among the objects in a scene. This has been called the "binding hypothesis." One aspect of this hypothesis relates to the processing of extended contours that cross two or more non-overlapping receptive fields. Here it is proposed that synchronous firing provides a special signal that affirms the unity of the contour stimulus. In support of this possibility, coordinated firing across separate receptive fields in response to contours and gratings has now been reported for the retina as well as cortex [11–17].

It would be good to determine whether synchronous neural activity provides a special benefit for processing of cues needed for object recognition. This issue was tested in two experiments using stimulus conditions that would be expected to generate various degrees of synchronized neural response. Similar to the methods used in the earlier report [8], boundary dots were briefly displayed to elicit recognition of namable objects, mostly animals and manufactured items. The boundary dots were displayed in pairs, with various time intervals being inserted between successive pairs and/or between the pair members. The results indicate that simultaneity in the millisecond and even submillisecond range has a major influence on whether the stimuli can elicit recognition of the objects.


Recognition judgments were collected from a total of 22 subjects, 8 for Exp 1 and 14 for Exp 2, using the minimal transient discrete cue (MTDC) protocol [8]. Except for the timing conditions detailed below, the stimuli to be judged and the task conditions were the same for both experiments.

Subjects were asked to name objects, each of which was suggested by a set of dots that marked locations at the boundary of the object, with the dots being displayed very briefly and in rapid succession. This will be described as display of a "shape pattern," or simply "shape" with the understanding that the pattern was designed to provide the minimal cues needed for naming the object.

One hundred fifty shape patterns, listed in Table 1, were shown to each subject. To create each shape pattern, an image of each object was sized and discretized so that the largest dimension of the object, either vertical or horizontal, fit to the edges of a 64 × 64 grid. Then a cursor was moved to trace the outer boundary of the object, marking the grid cells that were crossed by that boundary. This provided an x,y address for each marked cell, and the table of addresses provided the basis for subsequent display using a 64 × 64 array of LEDs. This LED array is hereafter described as the "display board."

Table 1 The names of shapes used in both experiments are listed, and for each shape, the table also provides the following information: Perimeter: the number of dots in the full inventory of boundary locations; Area: the number of dots enclosed within the perimeter, and including the perimeter dots; Skip: the skip factor, which specified that every Nth dot would be included in the sample that was shown to a given subject; Dot% and Dot#: the percentage and number of dots that were displayed as a result of applying the skip factor.

The shape patterns were shown on the display board under the control of a Mac G4 computer and microprocessor slave. Each LED emitted with a peak wavelength of 660 nm, with a rise/fall time of 50–100 ns, and with a luminance of 10 Cd/m2. Background luminance, measured from the wall on which the display board was mounted, was 1 Cd/m2. Subjects viewed the display board from a distance of 3.5 m. At this distance the diameter of each LED subtended 4.9 arc' of visual angle, center-to-center spacing was 7.4 arc', and the span of the full array was 7.7 arc° in each direction.
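These viewing-geometry figures can be checked with elementary trigonometry. A minimal Python sketch, assuming a 5 mm LED package and 7.5 mm center-to-center spacing (physical sizes inferred from the reported angles and distance, not stated in the text):

```python
import math

def visual_angle_arcmin(size_m, distance_m):
    """Visual angle (in minutes of arc) subtended by an object of a given
    size viewed at a given distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m))) * 60

D = 3.5                 # viewing distance in meters, from the Methods
led_diameter = 0.005    # assumed 5 mm LED package (inferred, not stated)
led_spacing = 0.0075    # assumed 7.5 mm center-to-center (inferred)

print(round(visual_angle_arcmin(led_diameter, D), 1))  # -> 4.9 arcmin
print(round(visual_angle_arcmin(led_spacing, D), 1))   # -> 7.4 arcmin
```

Both assumed sizes reproduce the reported 4.9 arc' and 7.4 arc' values, so the stated geometry is internally consistent.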

Based on unpublished data gathered to formulate the protocols of the present and related experiments, the number of dots and their spacing was adjusted to provide approximate equivalence in potential for recognition of each shape pattern. Each shape was shown to a given subject only once, and only some of the dots in the boundary were shown. As illustrated in Fig. 1, selection of the display set for a given subject began by randomly choosing a starting point and then selecting every Nth dot, with N ranging from 3 to 10. Table 1 lists the number of boundary dots, the value of N, and the number of dots in the display set for each of the objects. This method of picking dots for the display set was the same for both experiments.

Figure 1

The average shape displayed 57 dots, this being every 4th dot from the full inventory of dots in the boundary. The full inventory of boundary dots for a shape that matches this average is shown in panel A. At B the method for choosing the display set is illustrated. To select this set for a given subject, a random dot was chosen as the starting point, here indicated by an arrow. Then, counting clockwise, every Nth dot was marked for inclusion in the set of dots to be displayed (every fourth dot for this example). The full complement of dots that would be shown, i.e., the display set, is provided in panel C.
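The sampling rule described above and in Fig. 1 — a random starting point, then every Nth dot around the boundary — can be sketched in a few lines of Python; the boundary list of x,y addresses here is a placeholder, not data from the study:

```python
import random

def display_set(boundary, skip, rng=random):
    """Pick every `skip`-th dot from the ordered list of boundary addresses,
    starting at a randomly chosen dot and wrapping around the boundary."""
    n = len(boundary)
    start = rng.randrange(n)
    return [boundary[(start + i * skip) % n] for i in range(n // skip)]

# a 228-dot boundary with skip factor 4 yields the average 57-dot display set
boundary = [(i, i) for i in range(228)]  # placeholder x,y addresses
print(len(display_set(boundary, 4)))  # -> 57
```

Because the start point wraps around the closed boundary, every subject sees the same spatial density of cues, just anchored at a different dot.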

For display of a shape to a given subject, adjacent dots of the display set were then yoked to form pairs. The two members of a given pair were always displayed sequentially, but the order in which pairs were displayed was random. Fig. 2 illustrates this protocol, with each of the panels on the left showing the full array of dots that constituted the display set for one of the objects, and the pairs (as filled circles) that were chosen for display. Note that the process would continue until all pairs were shown, with any odd remaining dot being displayed at the end of the sequence. Each panel on the right illustrates what would be displayed in Exp. 1 during a given 0.2 ms interval, with the time interval between successive pairs being varied, as detailed below.

Figure 2

Adjacent dots from the display set were formed into pairs. The members of a given pair were shown sequentially, but the order of pair presentation was chosen at random. The left panels show the successive display of four pairs from the display set, and this sampling would continue until all pairs were shown. The right panels illustrate that the pairs would be seen as brief flashes of light at the specified positions within the array of LEDs. Dot size is not to scale for purposes of illustration.
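The pairing protocol lends itself to a short sketch: adjacent dots are yoked into pairs, pair order is randomized, members within a pair stay in sequence, and any odd leftover dot goes last. Function and variable names are illustrative, not from the study's software:

```python
import random

def paired_display_order(dots, rng=random):
    """Yoke adjacent dots into pairs, show the pairs in random order with
    pair members kept sequential, and put any odd leftover dot at the end."""
    pairs = [dots[i:i + 2] for i in range(0, len(dots) - 1, 2)]
    rng.shuffle(pairs)
    order = [dot for pair in pairs for dot in pair]
    if len(dots) % 2:
        order.append(dots[-1])
    return order

order = paired_display_order(list(range(7)))
print(len(order))  # -> 7: three shuffled pairs, then the leftover dot
```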

Fig. 3 illustrates the timing conditions for the two experiments. All dots from the display set were shown one at a time, with pulse durations of 0.1 ms for each dot (designated as T1). The T2 interval specified time from offset of the first member of a pair till onset of the second member, and T3 specified the interval between successive pairs, measured from onset of the first pair till onset of the next pair. For Exp. 1 the T2 interval was a constant 0 ms, and there were five levels of T3, these being 0 (nominally), 2, 4, 6 and 8 ms. For Exp. 2 T3 was held constant at 3 ms, and there were four levels of T2, these being 0.0, 0.5, 1.0, and 1.5 ms. All timing was specified to a precision of 0.1 ms.

Figure 3

In Exp. 1, each dot of the display set was flashed for a duration of 0.1 ms, and there was no temporal separation between members of each pair. The temporal separation of pairs was varied from 0 to 8 ms. The time line has been expanded for the illustration of Exp. 2, most of it being used to illustrate the alternative intervals at which the second member of a given pair would be positioned. In this example, the pair formed by dots 33 and 34 are separated for an interval that varied between 0 and 1.5 ms, and the spacing between one pair and the successive pair (dots 71 and 72 in this illustration) was a constant 3 ms for all pairs in the display set.
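The timing conditions can be expressed as a pulse-onset schedule. This sketch treats the nominal T3 = 0 condition as back-to-back pairs, as described in the Results; it is an illustration of the timing rules, not the display-board firmware:

```python
def onset_schedule(n_pairs, t1=0.1, t2=0.0, t3=2.0):
    """Onset time (ms) of every 0.1 ms pulse: T1 is the pulse width, T2 runs
    from offset of a pair's first member to onset of its second, and T3
    from the onset of one pair to the onset of the next."""
    step = max(t3, 2 * t1 + t2)   # nominal "T3 = 0" means back-to-back pairs
    times = []
    for p in range(n_pairs):
        first = p * step
        times.append(first)            # first member of the pair
        times.append(first + t1 + t2)  # second member after T1 + T2
    return times

# Exp. 1 condition with T3 = 4 ms: pairs at 0, 4, 8 ms; members 0.1 ms apart
print(onset_schedule(3, t3=4.0))
```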

There was no demand for speed on the task, but subjects generally gave an immediate answer by saying the name, or indicating that no name for the shape pattern came to mind. The experimenter judged the acceptability of each answer without any information as to the timing level that had been displayed, and entered this data into a computer log.


The minimal transient discrete cue protocol is based on the concept that the information from the brief display of each dot must be combined across time for the full complement of cues to be sufficient for recognition. The question of interest is whether the degree of synchrony in the display of dot pairs (Exp 1) and/or of pair members (Exp 2) contributes to the integration process.

Subjects either could or could not identify a given shape, so the decision is binary. The appropriate model for such data is a generalized linear mixed model with binomial errors and a logit link function [18]. Logit values (loge(proportion/(1 – proportion))) were calculated, and treatment differences were compared using the standard error of the difference (SED) for these values. Subject and shape variables were treated as random effects in the analysis of data from each experiment, and T3 and T2 intervals were fixed effects for Exps. 1 and 2, respectively. A quadratic term was included in the model to test for possible nonlinear effects.
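The logit link is easy to state concretely. A minimal sketch of the transform and its inverse (the full mixed model would of course be fit with dedicated statistical software):

```python
import math

def logit(p):
    """Log-odds of a proportion: loge(p / (1 - p))."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Back-transform a logit value to a proportion (hit rate)."""
    return 1 / (1 + math.exp(-x))

print(round(logit(0.70), 3))             # -> 0.847
print(round(inv_logit(logit(0.70)), 2))  # -> 0.7
```

Working on this scale keeps predicted hit rates bounded between 0 and 1, which is why the regression lines in Figs. 4 and 5 are linear on the logit axis rather than the percentage axis.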

For the data from Exp. 1, using 8 subjects and judging the 150 shapes, there was a significant decline in recognition as a function of T3 interval (p < .001), with no indication of nonlinear effect (p = 0.75). A unit increase in separation of pairs corresponded to the odds of recognition being multiplied by a factor of 0.80 (95% confidence limit = 0.76, 0.83). There was a significant difference in hit rate at 2 ms, compared to performance at 0 ms (p < .05).

For Exp. 2, with 14 subjects and again judging the 150 shapes, there was a linear decline in recognition as a function of the T2 interval (between members of each pair) that was significant at p < .001. A unit increase in separation of pair members corresponded to the odds of recognition being multiplied by a factor of 0.35 (95% confidence limit = 0.19, 0.66). There was no indication of a nonlinear effect (p = 0.28). The difference between the recognition level at T2 = 0 versus at T2 = 0.5 ms was significant at p < .001. This was also found to be true when the data from just the first eight subjects was analyzed alone, so the effect size for the two experiments is in the same range.
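The reported odds factors yield a concrete prediction rule: each added millisecond multiplies the odds of recognition by a constant (linearity on the logit scale). A sketch, assuming a baseline hit rate of 70%, the approximate level observed at zero delay:

```python
def predicted_hit_rate(p0, odds_factor_per_ms, interval_ms):
    """Hit rate predicted when each added millisecond multiplies the odds
    of recognition by a constant factor (linear on the logit scale)."""
    odds = (p0 / (1 - p0)) * odds_factor_per_ms ** interval_ms
    return odds / (1 + odds)

# Exp. 1: odds x 0.80 per ms of T3, from an assumed ~70% baseline at zero delay
print(round(predicted_hit_rate(0.70, 0.80, 8), 2))  # -> 0.28
```

The predicted value at T3 = 8 ms falls just under 30%, in line with the hit rates reported for the longest interval.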

Mean levels of shape pattern recognition for the two experiments are illustrated in Figs. 4 and 5, along with regression lines that were calculated using only a linear model. Note that the right axis of each plot shows the logit scale that provides the appropriate measure of effect for the binary data, and error bars should be interpreted only in relation to this scale. The left scales show the means that were backtransformed into hit rates, these being almost identical to the means of the raw data.

Figure 4

In Exp. 1, with 8 subjects and 150 shapes, each dot pair was presented within a 0.2 ms interval, and the T3 interval between pairs was varied. The hit rate declined across the 8 ms range for T3, and relative to hit rates at T3 = 0, the decline in recognition was significant even with a T3 of 2 ms. The right scale shows the logit values that were the basis for statistical analysis, and the error bars (+/- SEMs) should be interpreted against this scale. The ordinate on the left shows the corresponding levels of percent recognition.

Figure 5

For Exp. 2 (14 subjects, 150 shapes), dot pairs were separated by a constant T3 interval of 3 ms, and T2 – the interval between members of each pair – was varied. Hit rates declined significantly with as little as 0.5 ms of separation between the pair members, and the decline in recognition was linear across the T2 intervals that were tested.

Fig. 4 shows percentage recognition of the shapes to be over 70% when the dot pairs are displayed as rapidly as possible, i.e., with 0 ms of separation between offset of the last member of one pair and the onset of the first member of the next pair. Recognition of shapes declined as a linear function of T3, and as reported above, not only was the overall decline significant, even the drop from 0 to 2 ms proved to be significant. This may support an inference that synchrony in the millisecond range is a factor in "binding" of the shape cues, but see discussion of this issue below.

Fig. 5 shows recognition levels when 3 ms was provided between successive pairs, and with the temporal separation between the pair members being variable. The 3 ms T3 interval allowed for a hit rate in the 65% range when there was no temporal separation of the pair members, i.e., at T2 = 0. As T2 was increased the subject's ability to identify the shapes decreased, and as indicated in the analysis above, even a 0.5 ms interval between pair members produced a significant decline in recognition. This supports the proposition that neural responding is sensitive to submillisecond differentials in stimulus presentation, as discussed below.


As outlined at the outset, it is commonly thought that the timing of individual spikes is rather random, and that the essential information about stimulus attributes is conveyed by an average across intervals in the 20–200 ms range. A number of investigators have challenged that view [9–17, 19–23], suggesting that precise synchronous firing of neurons provides a special designation of what stimulus components belong together. This has been described as "binding."

The binding concept is most often invoked to explain how one would define one shape from others that might be present in an image, though for the present work, it can be discussed in terms of aggregating partial cues for a single object. The goal is to combine those cue components that belong together. Von der Malsburg [9, 10] argued that highly correlated activity, i.e., synchrony of firing across these shape components, is essential to that process. The degree of temporal contiguity would depend upon the specific linking to be done, but synchrony of spikes in the 1–10 ms range would be most likely needed in the early stages of image encoding.

The present work used very brief flashes from an array of LEDs to mark the outer boundary of shapes, and varied the timing of those flashes to determine how these manipulations affected recognition of the shapes. For Exp. 1, where zero separation of pulse-pairs was a constant condition, the average hit-rate was in the 70% range when there was no delay between successive pairs. Recognition dropped as the interval between pairs was increased, with hit rates being less than 30% when 8 ms was inserted between each pair. This falls within the time range that might be expected for initial image encoding [10], and these results might reflect a role for synchrony of cue components for eliciting recognition of the various shapes.

In evaluating whether the results of Exp. 1 relate to synchrony mechanisms, one must consider another hypothesis that is commonly invoked in discussions of iconic memory. It is widely believed that visible persistence provides the basis for integration, this being the sustained visibility of a brief stimulus for a period that is considerably longer than the stimulation itself [2426]. This model specifies that recognition can be accomplished as long as one has not exceeded the duration of the integration window for essential cues. With an increase in the T3 interval, progressively more dots would exceed this duration, and one would expect hit rates to decline.

There are several reasons to reject an explanation that is based on the duration of visible persistence. First, a previous study from this laboratory [8] measured not only the time interval across which transient boundary dots could be integrated, but took independent measures of the duration of visible persistence. The duration of visible persistence of subjects did not predict the time interval across which the subjects could integrate the shape cues, nor did it predict the rate of decline that was observed as the interval between display subsets was increased.

Second, each incremental increase in the interval between dot pairs lengthens the total display time in proportion to the number of pairs. If the decline in recognition were due to exceeding the duration of the integration window, one would expect a geometric change in the rate of decline in recognition once the cumulative display time exceeded that interval. In the present experiment a linear decline was found, which seems more consistent with a synchrony hypothesis, which is further discussed below.

Third, the evidence suggests that the duration across which the transient discrete shape cues can be integrated is not a fixed interval. In the prior study mentioned above [8], the display set was broken into odd and even subsets, each containing half the dots to be shown. With dim room illumination, hit rates were in the 75–80% range when the interval between subsets was 20 ms or less, and they declined to about half that level with an interval of 80 ms. From the plot shown in Fig. 4, one can infer a comparable level of decline, i.e., a 50% drop in recognition levels from the maximum, when the interval between pairs was 6–8 ms. This provides a total display time, on average, of somewhere between 170 and 230 ms. This is over twice as long as the duration over which the odd-even subsets could elicit a 50% decline in recognition when subjects were tested with the room dim [8].
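The total-display-time arithmetic behind the 170–230 ms estimate can be made explicit. This sketch assumes one T3 slot per pair (onset to onset), with a leftover odd dot occupying one additional slot; it is a reconstruction of the arithmetic, not a quote of the authors' calculation:

```python
def total_display_time_ms(n_dots, t3_ms):
    """Approximate time to show a whole display set when successive pairs
    are T3 ms apart (onset to onset): one slot per pair, plus one slot for
    a leftover odd dot; final pulse widths are ignored."""
    slots = n_dots // 2 + n_dots % 2
    return slots * t3_ms

# the average 57-dot set occupies 29 slots: ~174 ms at T3 = 6, ~232 ms at T3 = 8
print(total_display_time_ms(57, 6), total_display_time_ms(57, 8))  # -> 174 232
```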

It is more plausible that the ability to combine successive dot pairs into a code that can effectively elicit recognition depends on the temporal contiguity between successive pairs. Based on the data from Exp. 1, the linkage that takes place, i.e., binding, appears to be a linear function of time intervals in the millisecond range.

The second experiment provides additional evidence that simultaneity contributes to the processing and integration of shape information. Separation of pulse-pairs was held constant at 3 ms, so total time to show all the dots was the same for each of the T2 intervals that separated the pair members. For the T2 = 0 condition, wherein stimulus pairs were virtually simultaneous, recognition levels were in the 70% range. Providing even 0.5 ms of separation between the pulses produced a significant drop in recognition, and with an interval of 1.5 ms the hit-rates had dropped into the 40% range. Thus simultaneous (or near simultaneous) presentation has a substantial benefit for integration of the successive stream of partial shape cues.

The benefit from millisecond and submillisecond simultaneity of stimulus pulses may reflect encoding operations taking place at the earliest stages of visual processing, i.e., in the retina. Full field stimulation of ganglion cells with random flicker causes a reduction of spontaneous activity, and these neurons then provide only sparse firing that is tightly linked to the brightness transitions [27–33].

Meister et al. [32] as well as Brivanlou et al. [34] examined the stimuli that would elicit correlated activity of neighboring retinal ganglion cells, and concluded that the synchrony was provided by joint activation of overlapping portions of their receptive fields. They suggested that spiking amacrine cells provide the basis for the synchronous firing.

Mastronarde [35] found synchrony in On and Off Y cells, wherein the mutual influence was restricted to cells of the same class. Antidromic activation of one of these cells would lead to an increase in the firing rate of neighbors that began in about 0.5 ms and lasted for 1.5 ms. This investigator suggested that the joint activity was from direct electrical coupling by gap junctions rather than being a response to common input. Gap junctions would provide electrotonic linkage between adjacent neurons, essentially combining their receptive fields.

Several groups have found synchronized firing to a moving slit from retinal ganglion cells [16, 17, 36]. At least for direction-selective On cells of rabbit, chemical blockade of gap-junction communication abolished synchronous firing, possibly by disrupting input from wide-field amacrine cells [36].

DeVries [37] suggested that gap junctions were responsible for yoking the responses in 4 of the 5 classes of ganglion cells where synchrony was observed, and similar results were reported by Hu and Bloomfield [38] for Off-center ganglion cells of rabbits, but not for On-center cells. Hidaka et al. [39] examined rat retina using dual patch recordings and tracer labels, and demonstrated dendrodendritic gap junctions and electrotonic coupling of alpha ganglion cells of the same type, including On-center cells.

Nirenberg et al. [40] argue against the hypothesis that correlated firing provides a special signal. Using an information measure, they examined the firing patterns of isolated mouse retina, and concluded that 90% of the information that can be recovered from the cell firing can be derived from the independent activity of the separate ganglion cell responses. It can be said, however, that the final 10% that could not be accounted for may well be highly significant information, and in particular may signal the position of key boundary markers that allow the object to be identified.

Roelfsema et al. [41] and Palanca and DeAngelis [42] found little evidence that synchrony serves to bind contours that were part of a common form. Their results challenge at least the most general form of the synchrony/binding hypothesis [10, 22], but cannot be taken as evidence that synchrony provides no perceptual benefit. Even if synchrony does not serve to bind all contours, it might provide a means to mark events that are temporally coincident, such as a common moving edge.

Although synchrony-based encoding may begin in the retina, it is possible that the process continues, yielding correlated firing of cortical neurons. For the present results, an increase in the interval between pair members (T2) produced a 17% decline in recognition for each millisecond that was added to the interval, whereas recognition declined by only 5.3% for each millisecond that was added between pairs (T3). This suggests two separate processing stages, retinal and cortical, with the former being especially sensitive to the time interval.

It is certainly the case that cortical neurons are capable of synchronous firing, and most of the evidence and theorizing about the role of synchrony for binding of stimulus attributes is based on recordings taken from cortex (for reviews, see [23, 43–46]). In this regard, the present results lead to a slight modification of the proposal that synchrony contributes to the analysis of boundaries, in that isolated dots were used rather than contours. There can be no doubt that these boundary markers provide the necessary information for shape recognition, for the shapes are indeed identified. So we can say that simultaneity in the presentation of boundary markers, and most likely the synchronized neural responses that they generate, contributes to the binding of information that is important for ultimate recognition of shapes.


Objects can be identified from brief display of dots that mark the outer boundary of the objects. Recognition drops as a linear function of temporal separation in the display of successive dot-pairs, or with temporal separation of members of a pair. In the latter condition, recognition is significantly impaired if as little as half a millisecond of time is provided between offset of first and onset of the second member of the pair. These results support proposals based on neurophysiological findings that argue for synchrony of firing as a special encoding process.


Computer programming for conduct of this research was done by David Gorin, DarkHorse Software. LED emission was measured by Dr. Andrew Jones, USC Space Science Center. Statistical analysis was done by Leigh Callinan, Bendigo Scientific Data Analysts. This research was supported by the Quest for Truth Foundation.



arc°: degrees of visual angle

arc': minutes of visual angle

Cd/m2: candela per meter squared

GaAlAs: gallium aluminum arsenide

LED: light emitting diode

loge: natural log

N: number used to specify which dots from address list will be displayed

T1: pulse width

T2: temporal separation between members of subset pairs

T3: temporal separation between subset pairs


  1. Barlow HB: Summation and inhibition in the frog's retina. J Physiol. 1953, 119: 69-88.


  2. Lettvin JY, Maturana HR, McCulloch WS, Pitts WH: What the frog's eye tells the frog's brain. Proc Inst Radio Eng. 1959, 47: 1940-1951.


  3. Hubel DH, Wiesel TN: Receptive fields of single neurons in the cat's striate cortex. J Physiol. 1959, 148: 574-591.


  4. Hubel DH, Wiesel TN: Receptive fields, binocular interactions and functional architecture in the cat's visual cortex. J Physiol. 1962, 160: 106-154.


  5. Adrian ED: The basis of sensation. 1928, New York: W.W. Norton


  6. Eriksen CW, Collins JF: Some temporal characteristics of visual pattern perception. J Exp Psychol. 1967, 74: 476-484. 10.1037/h0024765.


  7. Eriksen CW, Collins JF: Sensory traces versus the psychological moment in the temporal organization of form. J Exp Psychol. 1968, 77 (3): 376-382. 10.1037/h0025931.


  8. Greene E: Information persistence in the integration of partial cues for object recognition. Percept Psychophys.

  9. Von der Malsburg C: The correlation theory of brain function. Internal Report 81–82, Max Planck Institute, 1981. Reprinted in Models of Neural Networks II. Edited by: Domany E, van Hemmen JL, Schulten K. 1994, Berlin: Springer, 95-119.


  10. Von der Malsburg C: The what and why of binding: the modeler's perspective. Neuron. 1999, 24: 95-104. 10.1016/S0896-6273(00)80825-9.


  11. Engel AK, Konig P, Kreiter AK, Schillen TB, Singer W: Temporal coding in the visual cortex: new vistas on integration in the nervous system. Trends Neurosci. 1992, 15: 218-226. 10.1016/0166-2236(92)90039-B.


  12. Engel AK, Konig P, Singer W: Direct physiological evidence for scene segmentation by temporal coding. Proc Natl Acad Sci USA. 1991, 88: 9136-9140. 10.1073/pnas.88.20.9136.


  13. Maldonado PE, Friedman-Hill S, Gray CM: Dynamics of striate cortical activity in the alert Macaque: II. Fast time scale synchronization. Cereb Cortex. 2000, 10: 1117-1131. 10.1093/cercor/10.11.1117.


  14. Samonds JM, Allison JD, Brown HA, Bonds AB: Cooperation between area 17 neuron pairs enhances fine discrimination of orientation. J Neurosci. 2003, 23: 2416-2425.


  15. Samonds JM, Zhou Z, Bernard MR, Bonds AB: Synchronous activity in cat visual cortex encodes collinear and cocircular contours. J Neurophysiol. 2006, 95: 2602-2616. 10.1152/jn.01070.2005.

  16. Amthor FR, Tootle JS, Grzywacz NM: Stimulus-dependent correlated firing in directionally selective retinal ganglion cells. Vis Neurosci. 2005, 22: 769-787.

  17. Chatterjee S, Merwine DK, Grzywacz NM: Stimulus-dependent response correlations between rabbit retinal ganglion cells [abstract]. J Vision. 2006, 6: 62a.

  18. Schall R: Estimation in generalized linear models with random effects. Biometrika. 1991, 78: 719-727.

  19. Abeles M: Time is precious. Science. 2004, 304 (5670): 523-524. 10.1126/science.1097725.

  20. Abeles M: Corticonics. 1991, Cambridge: Cambridge University Press

  21. Singer W: Time as coding space in neocortical processing: a hypothesis. The Cognitive Neurosciences. Edited by: Gazzaniga MS. 1997, Cambridge, MA: MIT Press

  22. Singer W: Time as coding space?. Curr Opin Neurobiol. 1999, 9: 189-194. 10.1016/S0959-4388(99)80026-9.

  23. Singer W, Gray CM: Visual feature integration and the temporal correlation hypothesis. Ann Rev Neurosci. 1995, 18: 555-586. 10.1146/

  24. Coltheart M: Iconic memory and visible persistence. Percept Psychophys. 1980, 27: 183-228.

  25. Long GM: Iconic memory: A review and critique of the study of short-term visual storage. Psychol Bull. 1980, 88: 785-820. 10.1037/0033-2909.88.3.785.

  26. Nisly SJ, Wasserman GS: Intensity dependence of perceived duration: Data, theories, and neural integration. Psychol Bull. 1989, 106: 483-496. 10.1037/0033-2909.106.3.483.

  27. Berry MJ, Warland DK, Meister M: The structure and precision of retinal spike trains. Proc Natl Acad Sci USA. 1997, 94: 5411-5416. 10.1073/pnas.94.10.5411.

  28. Arnett DW: Statistical dependence between neighboring retinal ganglion cells in goldfish. Exp Brain Res. 1978, 32: 49-53. 10.1007/BF00237389.

  29. Arnett DW, Spraker TE: Cross-correlation analysis of the maintained discharge of rabbit retinal ganglion cells. J Physiol. 1981, 317: 29-47.

  30. Johnsen JA, Levine MW: Correlation of activity in neighbouring goldfish ganglion cells: relationship between latency and lag. J Physiol. 1983, 345: 439-449.

  31. Mastronarde DN: Correlated firing of retinal ganglion cells. Trends Neurosci. 1989, 12: 75-80. 10.1016/0166-2236(89)90140-9.

  32. Meister M, Pine J, Baylor DA: Concerted signaling by retinal ganglion cells. Science. 1995, 270: 1207-1210. 10.1126/science.270.5239.1207.

  33. Meister M: Multineuronal codes in retinal signaling. Proc Natl Acad Sci USA. 1996, 93: 609-614. 10.1073/pnas.93.2.609.

  34. Brivanlou IH, Warland DK, Meister M: Mechanisms of concerted firing among retinal ganglion cells. Neuron. 1998, 20: 527-539. 10.1016/S0896-6273(00)80992-7.

  35. Mastronarde DN: Interactions between ganglion cells in the cat retina. J Neurophysiol. 1983, 49: 350-365.

  36. Ackert JM, Wu SH, Lee JC, Abrams J, Hu EH, Perlman L, Bloomfield SA: Light-induced changes in spike synchronization between coupled ON direction selective ganglion cells in mammalian retina. J Neurosci. 2006, 26: 4206-4215. 10.1523/JNEUROSCI.0496-06.2006.

  37. DeVries SH: Correlated firing in rabbit retinal ganglion cells. J Neurophysiol. 1999, 81: 908-920.

  38. Hu EH, Bloomfield SA: Gap junctional coupling underlies short-latency spike synchrony of retinal α ganglion cells. J Neurosci. 2003, 23: 6768-6777.

  39. Hidaka S, Akahori Y, Kurosawa Y: Dendrodendritic electrical synapses between mammalian retinal ganglion cells. J Neurosci. 2004, 24: 10553-10567. 10.1523/JNEUROSCI.3319-04.2004.

  40. Nirenberg S, Carcieri AL, Latham PE: Retinal ganglion cells act largely as independent encoders. Nature. 2001, 411: 698-701. 10.1038/35079612.

  41. Roelfsema PR, Lamme VAF, Spekreijse H: Synchrony and covariation of firing rates in the primary visual cortex during contour grouping. Nat Neurosci. 2004, 7: 982-991. 10.1038/nn1304.

  42. Palanca BJA, DeAngelis GC: Does neural synchrony underlie visual feature grouping?. Neuron. 2005, 46: 333-346. 10.1016/j.neuron.2005.03.002.

  43. Munk MHJ, Neuenschwander S: High-frequency oscillations (20–120 Hz) and their role in visual processing. J Clin Neurophysiol. 2000, 17 (4): 341-360. 10.1097/00004691-200007000-00002.

  44. Engel AK, Roelfsema PR, Fries P, Brecht M, Singer W: Role of the temporal domain for response selection and perceptual binding. Cereb Cortex. 1997, 7: 571-582. 10.1093/cercor/7.6.571.

  45. Aertsen A, Arndt M: Response synchronization in the visual cortex. Curr Opin Neurobiol. 1993, 3: 586-594. 10.1016/0959-4388(93)90060-C.

  46. Konig P, Engel AK: Correlated firing in sensory-motor systems. Curr Opin Neurobiol. 1995, 5: 511-519. 10.1016/0959-4388(95)80013-1.

Author information

Corresponding author

Correspondence to Ernest Greene.

Additional information

Competing interest statement

The author declares that he has no competing interests.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Greene, E. Simultaneity in the millisecond range as a requirement for effective shape recognition. Behav Brain Funct 2, 38 (2006).


Keywords

  • Image Encode
  • Shape Pattern
  • Visible Persistence
  • Successive Pair
  • Pair Member