The analysis of meals, especially of spontaneous meals, has long occupied a problematic place in the behavioral neuroscience of eating. Only a small minority of eating researchers, now or ever, have concerned themselves with meal patterns. At first glance this seems paradoxical: after all, eating occurs only, or mainly, as meals, so why are meals not the major focus of eating research? Many factors contribute to this situation. I name five:
First, identifying the basic unconditioned physiological stimuli controlling meals has been agonizingly slow and difficult. By the 1970s the classic gastric and glucostatic accounts of meal initiation and termination (7, 21) were no longer influential, and only in the last 10 years have a few alternative mechanisms been well established (e.g., 6, 15, 28). The poor understanding of the physiological basis for meals rendered meal pattern analysis uninteresting for many investigators.
Second, in contrast to the poorly elaborated physiological mechanisms, environmental, ecological or “economic” variables, such as caging conditions, foraging cost, etc., very potently affected meal patterns (10, 22).
Third, even under the simplest conditions, meal pattern analysis did not produce consistent functional relationships. Perhaps the outstanding example concerns early reports (19, 20) that the durations of intermeal intervals were closely correlated with the sizes of the preceding meals (the “satiety ratio”), suggesting that meal initiation is triggered by some nutritional consequence of the depletion of the immediately preceding intake. This finding proved discouragingly difficult to replicate (8, 14, 23).
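The satiety ratio is simple to compute, which makes its poor replicability the more striking. The sketch below shows the calculation on simulated meal records; the data, the 30 min/g “depletion” slope, and the noise level are invented for illustration and are not taken from the cited reports.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical meal records: meal sizes (g) and the intermeal intervals
# (min) that follow them. Parameters here are illustrative only.
sizes = rng.uniform(1.0, 4.0, 40)
intervals = 30.0 * sizes + rng.normal(0.0, 10.0, 40)  # noisy "depletion" model

# The satiety ratio: minutes of non-eating "bought" per gram eaten.
satiety_ratio = intervals / sizes
r = np.corrcoef(sizes, intervals)[0, 1]
print(f"size-interval correlation r = {r:.2f}, "
      f"mean satiety ratio = {satiety_ratio.mean():.0f} min/g")
```

Under the depletion hypothesis this correlation should be strong and positive; the replication failures cited above found it weak or absent in spontaneous feeding.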
Fourth, there was little unanimity as to the appropriate definition of a meal. Furthermore, most definitions were arbitrary, and few were biologically based [an important exception is the criterion that a meal ends when the sequence of postprandial satiety behaviors appears (3), but this biological definition has not been applied to spontaneous eating]. Finally, there were no points of contact between the various definitions.
Fifth, although formal theories of meal size appeared and were tested (e.g., 12, 27), there was no similar progress for rats’ meal patterns. Perhaps for this reason, formally questionable mathematical procedures were tolerated. For example, the most popular quantitative method for meal pattern definition is log-survivorship analysis (9, 26), which is suspect because the exponential functions used to fit such data yield the theoretically disturbing prediction that the probability of initiating a new meal remains constant as time since the last meal passes, rather than increasing with time, as predicted by most, if not all, concepts of hunger and satiety.
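To make the objection concrete: fitting a straight line to a log-survivorship plot is mathematically equivalent to assuming exponentially distributed intermeal intervals, and the exponential distribution has a constant hazard. The sketch below (simulated data; the 60-min scale is an arbitrary illustrative choice) makes the implicit assumption visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated intermeal intervals (min); an exponential distribution is the
# implicit model whenever a log-survivorship plot is fit with a straight line.
intervals = rng.exponential(scale=60.0, size=5000)

# Empirical survivor function S(t) = P(interval > t).
t = np.sort(intervals)
surv = 1.0 - np.arange(1, t.size + 1) / t.size

# For exponential data, log S(t) = -t/scale: a straight line whose constant
# slope is the hazard. The fitted model therefore says the probability of
# starting a new meal per unit time never rises with time since the last
# meal -- precisely the theoretically disturbing property noted above.
mask = surv > 0
slope = np.polyfit(t[mask], np.log(surv[mask]), 1)[0]
print(f"constant hazard implied by the fit: {-slope:.4f} meals/min")
```

Any concept of accumulating hunger instead predicts a hazard that grows with time since the last meal, i.e., a log-survivorship curve that bends downward rather than staying straight.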
Thus meal pattern analysis was an ugly duckling. For more than 50 years there has been a more attractive alternative: total food intake. Hypothalamic lesions that made animals eat to obesity or starve to emaciation were discovered more than 50 years ago (2, 5, 18). At the same time, convincing arguments were presented that animals eat for calories (1) and that body adiposity is a biologically regulated variable, like blood oxygen content (17). Thus, both a problem and a theoretical approach appeared for eating that was couched in terms of total food intake (daily?, weekly? monthly?) expressed in units of metabolizable energy content, without reference to meals. Nor was theoretical support for behavior-less analysis of eating lacking. Stellar’s (29) theory of hypothalamic centers for motivation focused on overall drive and explicitly omitted the effector mechanisms involved. More generally, overenthusiastic application of cybernetics and control theory (30, 33) often transmogrified biology into information processing. The influence of the kinds of ideas that support analysis of total food intake rather than meals seemed to be waning in the 1980s but has enjoyed a major renaissance since modern molecular biology has revolutionized the science of eating.
Given all of this, why bother with meals? The reason is in fact rather obvious. The meal is a biological unit of eating behavior and therefore should not be ignored. “Why?” was succinctly stated in 1955 by John Brobeck (4) in the first lines of his address at a major conference on the regulation of eating: “Nearly all of the published studies of the regulation of food intake neglect the fact that the total amount of food eaten is always the product of two factors, the number of meals multiplied by the intake of the average meal. … Any procedure altering food intake does so through some change in one or both of these.” That Brobeck first reported that ventromedial hypothalamic lesions increase body weight by increasing total food intake (5) and that lateral hypothalamic lesions decrease total food intake and body weight (2), and, as far as I know, himself never measured meal patterns is perhaps ironic, but establishes his credentials as an impartial expert. Brobeck made this point because he was interested in the brain and recognized that a neurological analysis of eating needs to be performed at the level of what the brain produces, that is, movements. A neuroscience of eating that does not include behavior will sooner or later come to a dead end before reaching its goal.
It may be argued that bites, chews, licks, and swallows, not meals, are the movements of eating. These microstructural elements of eating are integral parts of the problem and provide a direct link from behavior through lower motor neurons into the interneuronal networks controlling them (13, 16, 27). They alone, however, will fail to account for the molar rhythms of eating. This indicates that a supervening component of the controlling interneuronal networks, which may perhaps be called the meal mechanism, organizes the timing of bites, etc., to produce the initiation, maintenance, and termination of meals and the durations of intermeal intervals. These supervening mechanisms are analogous to those pointed out by Sherrington (25) that create the intrinsic rhythm of the scratch reflex, which is not equal to the frequency with which the flea bites, and that inhibit scratching while the dog is walking, etc. Admittedly, we are not very far along in the understanding of these neural networks. But that the work is difficult does not change the fact that it is the central challenge for the physiology of eating.
At this point, even the sympathetic reader may be tempted to throw up her hands. How can we reasonably invest scarce research resources in meal pattern analysis in the face of its lackluster, if not dismal, history? Take heart. An extremely encouraging answer is found in the article by Zorilla and colleagues (35) in this issue of the American Journal of Physiology-Regulatory, Integrative and Comparative Physiology.
Building on recent theoretical and empirical advances in meal measurement in agricultural animals (31, 34), Zorilla et al. (35) have defined meals in a wholly novel way, by considering drinking as well as eating in the definition. This new approach differs fundamentally from what came before. It is useful to outline how.
Meals are typically defined by an intermeal interval criterion that determines whether consecutive eating observations belong to one meal or two. Previously, only the intervals between eating events were considered. The major novelty of Zorilla et al.’s (35) “drinking-explicit” metric is that prandial drinking is also considered. They first compiled the intervals between all ingestive events, whether eating or drinking, into a single distribution and then computed joint estimates of bout size and bout duration that produced minimum rates of change of both parameters. This revealed a single minimum inflection point, at 300 s (see Fig. 1C of Ref. 35), which was taken as the criterion for separating ingestive behavior into individual eating and drinking bouts. Back-calculation of eating per se yielded a mean meal size of 2.3 g and a mean intermeal interval of 66 min (see Table 1 of Ref. 35).
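The segmentation step itself is easy to picture. In the sketch below, the event log and its field names are hypothetical; only the 300-s criterion comes from Ref. 35. Pooled eating and drinking events closer together than the criterion are grouped into one meal, and meal size is then back-calculated from the eating events alone.

```python
# Hypothetical event log: (time_s, kind, grams). Events separated by no
# more than the 300-s criterion of Zorilla et al. belong to one meal.
events = [
    (0, "eat", 0.5), (120, "drink", 0.0), (250, "eat", 0.8),
    (4200, "eat", 0.6), (4300, "eat", 0.7),
]

CRITERION_S = 300  # within-meal interval criterion (Fig. 1C of Ref. 35)

meals = []
current = [events[0]]
for prev, nxt in zip(events, events[1:]):
    if nxt[0] - prev[0] <= CRITERION_S:
        current.append(nxt)          # same meal: gap is within the criterion
    else:
        meals.append(current)        # gap exceeds criterion: close the meal
        current = [nxt]
meals.append(current)

# Back-calculate eating per se: sum only the eating events in each meal,
# so prandial drinks delimit meals without inflating meal size.
meal_sizes = [round(sum(g for _, kind, g in m if kind == "eat"), 2)
              for m in meals]
print(meal_sizes)  # → [1.3, 1.3]
```

Note that the drink at 120 s bridges two eating events into a single meal; a drinking-naïve analysis applied to the same log would have to decide the same question using the eating events alone.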
What is special about this method? First, as described above, it takes a wholly new approach by integrating prandial drinking into the analysis. Second, the authors took unusually great pains to validate the approach; three validations deserve mention. First, the resulting probability of beginning the next meal was an increasing function of time since the last one (see Fig. 2 of Ref. 35), providing the construct validity that is lacking in the most common alternative method, break point analysis. Second, the method produced significant satiety ratios, that is, after larger meals rats waited longer to eat again (see Fig. 7 and Table 3 of Ref. 35), a result that has been difficult to obtain in the past. Third, and perhaps most dramatically, analysis of videotapes of the rats showed that, under the drinking-explicit meal criterion, 9 of 10 rats displayed the behavioral sequence of postprandial satiety (3) within 15 min of meal end, whereas under the drinking-naïve methods only 1 of 10 did (see Fig. 8 of Ref. 35). Thus the drinking-explicit method 1) has predictive validity with respect to the major biological definition of meals, the behavioral sequence of postprandial satiety, and 2) has discriminative validity with respect to the most frequent alternative meal definition methods, which lack this validity. In sum, Zorilla et al. (35) successfully addressed three of the five issues mentioned above that have discouraged meal pattern analysis.
The data described so far would suffice to make this a landmark paper in meal pattern analysis. However, Zorilla et al. (35) went on to provide two more interesting analyses that should further entice investigators to adopt their method.
First, they extended the analysis to deprivation-induced eating. This was done by comparing meal patterns during a 2-h feeding period after 22 h food deprivation and under ad libitum conditions. Not surprisingly, deprivation affected meal size and intermeal interval. More importantly, however, the drinking-explicit and drinking-naïve meal definitions produced qualitatively different estimations of meal patterns: according to the former, deprivation selectively increased meal size, whereas according to the latter, deprivation selectively increased meal frequency (see Fig. 9 of Ref. 35). This is a crucial result. Experiments designed to probe the influence of deprivation on eating would look for changes in meal size controls if they were informed by drinking-explicit analyses and would look for meal timing controls if they were informed by drinking-naïve analyses. In light of the discussion above, the smart money is certainly on the drinking-explicit prediction.
Finally, realizing that the necessity of measuring drinking responses might deter many laboratories from using the drinking-explicit method, Zorilla et al. (35) tested a “drinking-implicit” method. The intereating interval distribution was assumed to contain three processes: one producing short intereating intervals in which no drinking occurred, one producing slightly longer intereating intervals in which drinking occurred but was not measured (hence, “drinking-implicit”), and one producing still longer intereating intervals corresponding to intermeal intervals. Analysis of just these data produced a criterion intereating interval of 18 min and misclassified very few observations. The implication is that a three-process “drinking-implicit” analysis of intereating intervals alone can provide a meal criterion that is reasonable, if not quite perfect.
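The three-process idea can be sketched as a mixture model fit to log-transformed intereating intervals. Everything below is my own illustration: the simulated data, the component parameters, and the generic expectation-maximization fit; the paper’s actual estimation procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated log10 intereating intervals (s) from three hypothetical
# processes: short no-drink pauses, longer pauses spanning an unmeasured
# ("implicit") drink, and true intermeal intervals. Parameters are
# illustrative, not taken from Ref. 35.
logs = np.concatenate([
    rng.normal(1.2, 0.3, 600),   # ~16-s within-meal pauses
    rng.normal(2.2, 0.3, 200),   # ~160-s pauses hiding a drink
    rng.normal(3.5, 0.3, 200),   # ~50-min intermeal intervals
])

# Minimal EM fit of a three-component Gaussian mixture on log intervals.
K = 3
mu, sd, pi = np.array([1.0, 2.0, 3.0]), np.full(K, 0.5), np.full(K, 1 / K)
for _ in range(200):
    z = (logs[:, None] - mu) / sd                      # standardized residuals
    dens = pi * np.exp(-0.5 * z**2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)         # E-step: responsibilities
    n = r.sum(axis=0)                                  # M-step: update parameters
    pi, mu = n / logs.size, (r * logs[:, None]).sum(axis=0) / n
    sd = np.sqrt((r * (logs[:, None] - mu) ** 2).sum(axis=0) / n)

# The meal criterion falls where the densities of the second and third
# components (drink-spanning pauses vs. intermeal intervals) cross.
order = np.argsort(mu)
i2, i3 = order[1], order[2]
grid = np.linspace(mu[i2], mu[i3], 2000)

def comp(i, x):
    return (pi[i] * np.exp(-0.5 * ((x - mu[i]) / sd[i]) ** 2)
            / (sd[i] * np.sqrt(2 * np.pi)))

crit_log = grid[np.argmin(np.abs(comp(i2, grid) - comp(i3, grid)))]
print(f"drinking-implicit criterion ~ {10 ** crit_log / 60:.0f} min")
```

With only the intereating intervals as input, the crossing between the second and third fitted components plays the role of the 18-min criterion reported in the paper; no drinking measurements are required.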
Where do we stand? To start, note that progress has been made on three of the five general objections to meal pattern analysis described above. Nevertheless, as the authors agree, the generality of the findings remains an important issue. Zorilla et al. (35) tested meals of chow pellets taken by 400-g to more than 500-g male Wistar rats housed in medium-sized cages. Would the data change importantly if, for example, smaller rats, female rats, larger cages, different foods, or cages with different complexities [such as sleeping niches (22)] were used, if “foraging” costs were imposed, or if mice were used? What about other species? That cow and pig meals have already proved amenable to the new metric is certainly encouraging, but establishing its validity in mice is especially important. Acute and chronic manipulation of gene expression is one of the most powerful and promising methods in behavioral neuroscience, but it is useful only to the extent that the resulting changes in phenotype can be sensitively and validly measured (11). In eating science, meal patterns should be among the first phenotypes analyzed in newly created transgenic animals, which are mainly mice. Indeed, some initial reports from this front have already contained surprises (32). Investigations of drinking-explicit meal patterns in mice are eagerly awaited.
The drinking-explicit meal analysis method is not difficult. Measuring both eating and drinking responses is increasingly common, and a reasonable alternative does not require the drinking data at all. The mathematics are straightforward, although described only briefly, so some investigators may need a short consultation to implement them.
In sum, Zorilla et al. (35) may have served up the richest intellectual dish for the science of meals since Richter (24) first recorded them experimentally. Following up on very creative work linking the analyses of spontaneous eating and drinking in agricultural animals (31, 34), they have produced a method that may well revolutionize functional analyses of eating in laboratory rodents. To the extent that it is successful, it should also inform the next generation of mechanistic analyses of eating (and, indeed, drinking) and potentially go a long way toward solving a gap in physiology that has worried perspicacious students for more than a half century.
Copyright © 2005 the American Physiological Society