








The Behavior Analyst 2003, 26, 1-14 No. 1 (Spring)
Procedures classified as positive reinforcement are generally regarded as more desirable than those classified as aversive (those that involve negative reinforcement or punishment). This is a crude test of the desirability of a procedure to change or maintain behavior. The problems can be identified on the basis of theory, experimental analysis, and consideration of practical cases. Theoretically, the distinction between positive and negative reinforcement has proven difficult (some would say the distinction is untenable). When the distinction is made purely in operational terms, experiments reveal that positive reinforcement has aversive functions. On a practical level, positive reinforcement can lead to deleterious effects, and it is implicated in a range of personal and societal problems. These issues challenge us to identify other criteria for judging behavioral procedures. Key words: negative reinforcement, punishment, positive reinforcement, aversive control
The purpose of this article is to cause you to worry about the broad endorsement of positive reinforcement that can be found throughout the literature of behavior analysis. I hope to accomplish this by raising some questions about the nature of positive reinforcement. At issue is whether it is free of the negative effects commonly attributed to the methods of behavioral control known as "aversive." The topic of aversive control makes many people uncomfortable, and relatively few people study it (Baron, 1991; Crosbie, 1998). Ferster (1967) expressed the common view when he wrote, "It has been clear for some time that many of the ills of human behavior have come from aversive control" (p. 341). I believe that much of what has been said about aversive control is mistaken, or at least misleading. Aversive control, in and of itself, is not necessarily bad; sometimes it is good. And, more to the point, the alternative (positive reinforcement) is not necessarily good; sometimes it is bad. Aversive control is an inherent part of our world,
This paper is based on the presidential address delivered to the Association for Behavior Analysis convention in Toronto, Ontario, May 2002. Correspondence should be sent to the author at the Department of Psychology, West Virginia University, P.O. Box 6040, Morgantown, West Virginia 26506-6040 (e-mail: Michael.Perone@mail.wvu.edu).
an inevitable feature of behavioral control, in both natural contingencies and contrived ones. When I say that aversive control is inevitable, I mean just that: Even the procedures that we regard as prototypes of positive reinforcement have elements of negative reinforcement or punishment imbedded within them.
It is important to be clear about the narrow meaning of aversive control in scientific discourse. A stimulus is aversive if its contingent removal, prevention, or postponement maintains behavior (that constitutes negative reinforcement) or if its contingent presentation suppresses behavior (punishment). That is all there is to it. There is no mention in these definitions of pain, fear, anxiety, or distress, nor should there be. It is easy to cite instances of effective aversive control in which such negative reactions are absent. Aversive control is responsible for the fact that we button our coats when the temperature drops and loosen our ties when it rises. It leads us to come in out of the rain, to blow on our hot coffee before we drink it, and to keep our fingers out of electrical outlets. The presence of aversive control in these cases clearly works to the individual's advantage.
2 MICHAEL PERONE
By the same token, it is easy to cite cases in which the absence of aversive control is to the individual's disadvantage. Dramatic demonstrations are possible in laboratory settings. Figure 1 illustrates an experiment reported over 100 years ago by E. W. Scripture, director of the first psychological laboratory at Yale. A frog was placed in a beaker of water, which was then heated at a rate of 0.002°C per second. Scripture (1895) reported that "the frog never moved and at the end of two and one half hours was found dead. He had evidently been boiled without noticing it" (p. 120). It was not Scripture's intention to kill the frog; his goal was to study rate of stimulus change in sensory processes. We also should forgive Scripture for his unwarranted inference about the frog's awareness. For our purposes, it is enough to acknowledge that this particular environmental arrangement clearly was not in the frog's long-term best interest. Because the gradual rise in water temperature did not establish a change of scenery as a negative reinforcer, no escape behavior was generated. There just wasn't enough reinforcement (negative reinforcement) to control adaptive behavior.

It should be remembered that the definition of an aversive stimulus (or, for that matter, a positive reinforcer) is based on function, not structure. Aversiveness is not an inherent property of a stimulus. It depends critically on the environmental context of the stimulus, and it cannot be measured apart from the effect of the stimulus on behavior. Consider electric shock, a stimulus so closely associated with the analysis of aversive control that we tend to think of it as inherently aversive. The error is understandable, but it is still an error. Illustrative data come from an experiment by de Souza, de Moraes, and Todorov (1984). These investigators studied rats responding on a signaled shock-postponement schedule. The independent variable was the intensity of the shock, which was varied across a wide range of values in a mixed order. Responding was stabilized at each intensity value. Figure 2 shows the results for 5 individual rats, along with the group average. When the intensity was below about 1 mA, the rats did not respond much, and as a result they avoided only a small percentage of the shocks. At intensities above 1 mA, however, the rats were more successful, avoiding between 80% and 100% of the shocks. By this measure, then, shocks below 1 mA are not aversive.

Now consider a study by Sizemore and Maxwell (1985), who used electric shock to study not avoidance, but punishment. In baseline conditions, rats' responding was maintained by variable-interval (VI) 40-s schedules of food reinforcement. In experimental conditions, some responses also produced an electric shock. Sizemore and Maxwell found that shocks as low as 0.3 to 0.4 mA completely, or almost completely, suppressed responding.
The shaded bars in Figure 2 show where these values fall in relation to de Souza et al.'s (1984) avoidance functions.
If a stimulus is contingent on some behavior and if it effectively reduces that behavior, then it is punishment. And, by definition, the more effective it is, the more aversive it is. But schoolteachers are not the only ones to forget this; even behavior analysts writing for professional audiences can be found to slip up. For example, an article advocating the use of time-out in parental discipline suggested that "through use of time-out, parents learn that punishment need not be aversive or painful."¹ The authors correctly classified time-out as a form of punishment, but erred by suggesting that it is not aversive. If time-out is not aversive, it could not possibly function as a punisher. The verbal behavior of these authors may be under control of some dimension of the stimulus events besides their aversiveness; perhaps some events are mistakenly described as nonaversive because they are aesthetically inoffensive, or because they do not leave welts or bruises. Granted, it may well be that some forms of aversive control should be preferred over others. Teachers and parents might be right to prefer time-out over spanking. But the justification for the preference cannot be that one is aversive and the other is not.
Some commentators have serious reservations about any form of aversive control. In Coercion and Its Fallout, Sidman (1989) worried about the negative side effects of aversive control: "People who use punishment become conditioned punishers themselves. ... Others will fear, hate, and avoid them. Anyone who uses shock becomes a shock" (p. 79).

¹ Although this quote does come from a published source in general circulation, the source is not identified because no purpose would be served other than perhaps to embarrass the authors. Equivalent errors are easy to find in the literature, and it would be unfair to single out any particular error for special attention.

This is a powerful indictment of punishment. But Sidman was concerned with aversive control more broadly, and he extended his treatment to negative reinforcement. According to Sidman (1989), punishment and negative reinforcement constitute "coercion." Control by positive reinforcement is given dispensation.

The problem is that the distinction between positive and negative reinforcement is often unclear, even in laboratory procedures. Michael (1975) suggested that the distinction be abandoned altogether, not only in scientific discourse but also as a rough and ready guide to humane practice. A portion of Michael's essay is especially relevant:

[It might be argued] that by maintaining this distinction we can more effectively warn behavior controllers against the use of an undesirable technique. "Use positive rather than negative reinforcement." But if the distinction is quite difficult to make in many cases of human behavior the warning will not be easy to follow; and it is an empirical question at the present time whether such a warning is reasonable – a question which many feel has not been answered. (pp. 41-42)

To illustrate the empirical difficulties in distinguishing between positive and negative reinforcement, consider a pair of experiments conducted by Baron, Williams, and Posner (unpublished data). They studied the responding of rats on progressive-ratio schedules in which the required number of responses increases, with each reinforcer, over the course of the session. The effectiveness of the reinforcer is gauged by the terminal ratio: the highest ratio the animal will complete before responding ceases.
The left panel of Figure 3 shows results from 3 rats whose responding produced a signaled time-out from an avoidance schedule that operated on another lever. As the duration of the time-out was raised, the terminal ratio increased, showing that longer time-outs are more effective negative reinforcers. The right panel shows results from rats working for sweetened condensed milk mixed with water.
NEGATIVE EFFECTS OF POSITIVE REINFORCEMENT 5
Figure 3. Effectiveness of negative and positive reinforcers as shown by the highest ratio completed by rats on a progressive-ratio schedule. Left: Responding produced a signaled time-out from an avoidance schedule; reinforcer magnitude was manipulated by changing the duration of the time-out. Right: Responding produced a solution of sweetened condensed milk in water (0.01-ml or 0.05-ml cups); magnitude was manipulated by changing the concentration of the milk. (Unpublished data collected at the University of Wisconsin-Milwaukee by Baron, Williams, and Posner)
Raising the concentration of the milk increased the terminal ratio in much the same way as raising the duration of the time-out from avoidance. The contingencies in these two experiments (presenting milk or removing an avoidance schedule) can be distinguished as positive and negative. But the functional relation between responding and reinforcer magnitude appears to be the same.
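The progressive-ratio logic is simple enough to sketch in a few lines (a toy model, not the procedure's actual implementation; the step size and the "give-up" threshold are illustrative assumptions):

```python
def terminal_ratio(step, give_up_at):
    """Highest ratio completed before the response requirement
    exceeds a hypothetical give-up threshold for the reinforcer."""
    ratio, completed = step, 0
    while ratio <= give_up_at:
        completed = ratio   # requirement met; reinforcer delivered
        ratio += step       # requirement grows for the next reinforcer
    return completed

# A more effective reinforcer (a longer time-out, a richer milk
# concentration) supports a higher give-up threshold, and hence a
# higher terminal ratio.
weak, strong = terminal_ratio(5, 23), terminal_ratio(5, 60)
assert strong > weak
```

On this sketch, the positive and negative contingencies differ only in what the reinforcer is, not in how the terminal ratio responds to its magnitude, which is the parallel the two experiments display.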
Those who play the lottery may be the first to object to proposals to ban the lottery. Other examples may seem more mundane, but they are just as socially significant. Positive reinforcement is implicated in eating junk food instead of a balanced meal, watching television instead of exercising, buying instead of saving, playing instead of working, and working instead of spending time with one's family. Positive reinforcement underlies our propensity towards heart disease, cancer, and other diseases that are related more to maladaptive lifestyles than to purely physiological or anatomical weaknesses.
I hope you are beginning to share my concerns. If so, you might be thinking something along these lines: If contingencies of positive reinforcement can be so bad, is it possible to avoid aversive control? The answer is no.
Aversive control is inevitable because every positive contingency can be construed in negative terms. The point can be made many ways. As Baum (1974) has noted, reinforcement can be understood as a transition from one situation to another. The transition involved in positive reinforcement presumably represents an improvement. But the production of improved conditions may also be regarded as an escape from relatively aversive conditions. Thus, we may say that the rat presses a lever because such behavior produces food (a positive reinforcer) or because it reduces food deprivation (a negative reinforcer). The issue may be one of perspective.

But outside the laboratory, I cannot help but be impressed with the propensity of people to respond to the negative side of positive contingencies. Consider college students. In my large undergraduate courses I have tried a variety of contingencies to encourage class attendance. Early on, I simply
scored attendance and gave the score a weighting of 10% of the course grade. There were lots of complaints. The students clearly saw this system as punitive: Each absence represents a loss of points towards the course grade. So I switched to a system to positively reinforce attendance. When students come to class on time, they earn a point above and beyond the points needed to earn a perfect score in the course. Thus, a student with perfect attendance, and a perfect course performance, would earn 103% of the so-called maximum. A student who never came to class, but otherwise performed flawlessly, would earn 100%. If course points function as reinforcers, then this surely is a positive contingency. But the students reacted pretty much the same as before. They saw this as another form of punishment: With each absence I was denying them a bonus point. Of course the students are right. Whenever a reinforcer is contingent on behavior, it must be denied in the absence of that behavior.

Perhaps the propensity to see the negative side of positive contingencies depends on sophisticated verbal and symbolic repertoires that may filter the impact of contingencies on human behavior. Certainly college students can, and often do, convert course contingencies to symbolic terms, then proceed to manipulate those terms: They calculate the number of points attainable, including bonus points, and label 103% as the maximum. They keep a running tally as the maximum they can attain drops below 103%; thus, they calculate a reduced maximum after each absence. It would appear, then, that when they miss class they deliver the punishing stimulus themselves! So it might be argued that the punishment contingency is an indirect by-product of a certain kind of problem-solving repertoire unique to humans.
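The students' bookkeeping can be reproduced with a short calculation (illustrative numbers only: the text does not give the number of class meetings or the per-class bonus, so 30 meetings at 0.1 point each are assumed to make perfect attendance come out at 103%):

```python
MEETINGS = 30   # assumed number of class meetings
BONUS = 0.1     # assumed bonus per on-time attendance

def points_earned(attended):
    """Positive framing: each attendance adds a bonus point."""
    return 100 + BONUS * attended

def points_remaining(absences):
    """Negative framing: each absence lowers the attainable maximum."""
    return 103 - BONUS * absences

# The two descriptions of the same contingency are numerically identical.
for absences in range(MEETINGS + 1):
    assert abs(points_earned(MEETINGS - absences)
               - points_remaining(absences)) < 1e-9
```

Whether the contingency is described as "a bonus earned" or "a maximum lost" changes nothing in the arithmetic, which is exactly why the students' punitive reading of it cannot be dismissed.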
AVERSIVE FUNCTIONS OF POSITIVE REINFORCEMENT

Is verbal or symbolic sophistication necessary for schedules of positive reinforcement to manifest aversive functions, or is it possible that a more basic process is at work? One answer comes from research on the conditioned properties of discriminative stimuli associated with the components of multiple schedules. Working in Dinsmoor's laboratory at Indiana University, Jwaideh and Mulvaney (1976) trained pigeons on a multiple schedule with alternating VI components of food reinforcement. When the pecking key was green, a rich schedule was in effect, one that allowed the bird to earn food every 30 s on average. When the key was red, a leaner schedule was in effect; the bird could earn food every 120 s. In the next phase, the colors signaling the VI schedule components were withheld unless the bird pecked side keys. This arrangement is called an observing-response procedure because pecking the side keys allows the bird to see the color correlated with the VI schedule underway on the main key. The experimental question is this: Will the colors correlated with the VI schedules of food reinforcement serve as reinforcers themselves? That is, will they maintain responding on the observing keys?

To answer this question, Jwaideh and Mulvaney manipulated the consequences of pecking the two observing keys. Figure 4 shows the experimental manipulations as well as the results from 1 bird. Response rates on the two observing keys are shown across four experimental conditions. In the first panel, both keys produced green or red, depending on which schedule was in effect on the main key; response rates on the two keys were about equal. In the remaining three panels, pecks on one key continued to produce green or red, but pecks on the other key could produce only green, the color correlated with the rich schedule. The two response rates differed under these conditions. The bird pecked at high rates on the key that produced only green (the color correlated with the rich schedule), and responding was suppressed on the key that also produced red.

Figure 4. Rates at which a pigeon pecked concurrently available observing keys to produce colors correlated with the VI 30-s and VI 120-s components of a compound schedule of food reinforcement, as reported by Jwaideh and Mulvaney (1976). Responding on the key that produced red (correlated with VI 120 s) as well as green (correlated with VI 30 s) was suppressed relative to responding on the key that produced only green.
                        Upcoming Reinforcer
  Past Reinforcer       Small                     Large
  Small                 5 regular, 5 w/ escape    5 regular, 5 w/ escape
  Large                 5 regular, 5 w/ escape    5 regular, 5 w/ escape

Figure 5. Metzger and Perone's method for comparing pausing and escape in the four possible transitions between fixed ratios ending in small or large food reinforcers. Over the course of a session, half of the transitions included the activation of an escape key that could be pecked to suspend the schedule. Pausing was measured in the transitions without the escape option.
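The session structure in Figure 5 amounts to a shuffled list of 40 transitions; it can be sketched as follows (my reconstruction of the scheduling logic, not the authors' software):

```python
import random

def build_session(seed=None):
    """Four transition types (past x upcoming reinforcer magnitude),
    10 occurrences each per session, half with the escape key lit."""
    transitions = [
        (past, upcoming, with_escape)
        for past in ("small", "large")
        for upcoming in ("small", "large")
        for with_escape in (True, False)
        for _ in range(5)
    ]
    random.Random(seed).shuffle(transitions)  # irregular sequence
    return transitions

session = build_session(seed=1)
assert len(session) == 40   # 4 transition types x 10 occurrences each
```

Pausing would be scored only in the 20 transitions without the escape option, and escape only in the 20 with it, so the two measures come from interleaved but non-overlapping samples of the same session.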
A fixed ratio ending in a large reinforcer could be followed by one ending in a small reinforcer or a large reinforcer. The four types of transitions were programmed in an irregular sequence. Each occurred 10 times per session, five times with the escape key available to initiate time-out and five times without it. When the escape key was available, both the food and escape keys were lit. A single peck on the escape key initiated a time-out, during which the houselight and food key were turned off and the escape key was dimmed. Another peck on the escape key turned on the houselight and food key and reinstated the FR schedule so that pecks on the food key led eventually to reinforcement; the escape key was turned off. The bird did not have to peck the escape key: If it pecked the food key first, the escape key was simply turned off, and pecks on the food key led eventually to reinforcement.

Figure 6 shows results from 1 bird. The upper panels show the pauses that occurred during the transitions without the escape option. These data come from the last 10 sessions at each of several FR sizes. In each panel, pauses in the four transitions are shown separately. Pausing is a joint function of the FR size, the magnitude of the past reinforcer, and the magnitude of the upcoming reinforcer. Most important, however, is the general pattern in the functions across the six conditions in the upper panel. Note also that the highest data point is always the open circle to the right. This represents pausing in the transition after a large reinforcer and before a small one.

The middle panels in Figure 6 show the escapes that occurred in the same sessions, but in the other half of the transitions, the ones with the escape option available. The general pattern of escape behavior is strikingly similar to the pattern of pausing. Indeed, the probability of escape is highest under the same conditions that produce the longest pauses: in the transition after a large reinforcer and before a small one.

The bottom panels present an alternative measure of escape behavior: the percentage of the session the bird spent in the self-imposed time-outs. These data are not as pretty as the others, but they do fall into line.

My students and I have replicated this experiment with fixed-interval schedules leading to different reinforcer magnitudes, and with transitions involving large and small FR sizes. The results are pretty much the same: Pausing and escape change in tandem. Both forms of behavior are influenced by the same variables in the same ways.

Figure 7 shows data from an experiment in which escape was studied with mixed schedules as well as multiple schedules. In both cases, an FR led to either small or large reinforcers. In the mixed-schedule condition, however, no key colors signaled the upcoming reinforcer magnitude. Pauses were relatively brief on the mixed schedule, and escape behavior was absent. In the multiple-schedule condition (when the upcoming magnitude was signaled), pausing increased, particularly in the large-to-small transition, and the pattern of escape behavior resembled the pattern of pausing.

The clear parallels in the data on pausing and escape suggest that pausing functions as a means of escape from the aversive aspects of the schedule.
Figure 6. One pigeon's pausing and escape in the transitions between fixed ratios ending in small or large food reinforcers (1-s or 7-s access to grain). The reinforcer magnitudes were signaled by distinctive key colors. Data are medians (and interquartile ranges) over 10 sessions. The ratio size was manipulated across conditions. Top: pausing. Middle: number of escapes; the session maximum was five per transition. Bottom: percentage of the session spent in the escape-produced time-out. (Unpublished data)

It seems likely that the pause-respond pattern that typifies performance on FR (and fixed-interval) schedules represents a combination of positive and negative reinforcement.
THE UBIQUITY OF AVERSIVE CONTROL

This observation brings me back to the general theme of this paper. Inside and outside the laboratory, aversive control is ubiquitous. Indeed, it seems to be unavoidable. Given this state of affairs, perhaps it would be worth considering whether aversive control is desirable, or at least acceptable.

In a book on teaching, Michael (1993) observed that "College learning is largely under aversive control, and it is our task to make such control effective, in which case it becomes a form of gentle persuasion" (p. 120). The idea is that aversive control might be acceptable if it generates behavior of some long-term utility. Think for a moment what it means to have a truly effective contingency of punishment or negative reinforcement. When a punishment contingency is effective, undesirable behavior is decreased and the aversive stimulus is almost never contacted. When an avoidance contingency is effective, desirable behavior is increased, and again there is minimal contact with the aversive stimulus. In my classes I impose rather stiff penalties when assignments are submitted late. Without this aversive contingency, late papers abound. With it, however, late papers are so rare that I doubt that I impose the penalty more often than once in a hundred opportunities.

In short, Michael is right: A well-designed program of aversive control is gentle, and a lot of good can come of it. That is fortunate, because it is impossible to construct a behavioral system free of aversive control. The forms of behavioral control we call "positive" and "negative" are inextricably linked. Thus, decisions about "good" and "bad" methods of control must be made quite apart from the question of whether the methods meet the technical specification of "positive reinforcement" or "aversive" control. We need to seek a higher standard, one that emphasizes outcomes more than procedures. Our chief concern should not be whether the contingencies involve the processes of positive reinforcement, negative reinforcement, or punishment. Instead, we should emphasize the ability of the contingencies to foster behavior in the long-term interest of the individual. Of course, this is all we can ask of any behavioral intervention, regardless of its classification.
REFERENCES
Azrin, N. H. (1961). Time out from positive reinforcement. Science, 133, 382-383.
Baron, A. (1991). Avoidance and punishment. In I. H. Iversen & K. A. Lattal (Eds.), Experimental analysis of behavior (Part 1, pp. 173-217). Amsterdam: Elsevier.
Baum, W. M. (1974). Chained concurrent schedules: Reinforcement as situation transition. Journal of the Experimental Analysis of Behavior, 22, 91-101.
Cohen, P. S., & Campagnoni, F. R. (1989). The nature and determinants of spatial retreat in the pigeon between periodic grain presentations. Animal Learning & Behavior, 17, 39-
Crosbie, J. (1998). Negative reinforcement and punishment. In K. A. Lattal & M. Perone (Eds.), Handbook of research methods in human operant behavior (pp. 163-189). New York: Plenum.
de Souza, D. D., de Moraes, A. B. A., & Todorov, J. C. (1984). Shock intensity and signaled avoidance responding. Journal of the Experimental Analysis of Behavior, 42, 67-
Dinsmoor, J. A. (1983). Observing and conditioned reinforcement. Behavioral and Brain Sciences, 6, 693-728. (Includes commentary)
Fantino, E. (1977). Choice and conditioned reinforcement. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 313-339). Englewood Cliffs, NJ: Prentice Hall.
Ferster, C. B. (1967). Arbitrary and natural reinforcement. The Psychological Record, 17, 341-347.
Hineline, P. N. (1984). Aversive control: A separate domain? Journal of the Experimental Analysis of Behavior, 42, 495-509.
Jwaideh, A. R., & Mulvaney, D. E. (1976). Punishment of observing by a stimulus associated with the lower of two reinforcement frequencies. Learning and Motivation, 7, 211-
Michael, J. (1975). Positive and negative reinforcement, a distinction that is no longer necessary; or a better way to talk about bad things. Behaviorism, 3, 33-44.
Michael, J. (1993). Concepts and principles of behavior analysis. Kalamazoo, MI: Association for Behavior Analysis.
Morse, W. H., & Kelleher, R. T. (1977). Determinants of reinforcement and punishment. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 174-200). Englewood Cliffs, NJ: Prentice Hall.
Perone, M., & Courtney, K. (1992). Fixed-ratio pausing: Joint effects of past reinforcer magnitude and stimuli correlated with upcoming magnitude. Journal of the Experimental Analysis of Behavior, 57, 33-46.
Perone, M., & Galizio, M. (1987). Variable-interval schedules of timeout from avoidance. Journal of the Experimental Analysis of Behavior, 47, 97-113.
Scripture, E. W. (1895). Thinking, feeling, doing. Meadville, PA: Flood and Vincent.
Sidman, M. (1989). Coercion and its fallout. Boston: Authors Cooperative.
Sizemore, O. J., & Maxwell, F. R. (1985). Selective punishment of interresponse times: The roles of shock intensity and scheduling. Journal of the Experimental Analysis of Behavior, 44, 355-366.
Skinner, B. F. (1971). Beyond freedom and dignity. New York: Knopf.
Skinner, B. F. (1983). A matter of consequences. New York: Knopf.
Thompson, D. M. (1965). Time-out from fixed-ratio reinforcement: A systematic replication. Psychonomic Science, 2, 109-110.