A brief description of the schedules of positive reinforcement and their effects on behavior. For some people, reinforcement is simply the delivery of a treat when the dog exhibits a good behavior. For DogSmiths, reinforcement is a little more complicated: certain schedules of reinforcement are appropriate at certain times.
Continuous reinforcement means the behavior is reinforced each time it occurs: one reinforcer for each response. Because every response is reinforced, the rate of behavior increases rapidly. However, under continuous reinforcement the animal responds only until it is satiated. Continuous reinforcement offers little resistance to extinction and produces stereotyped response topography (Pierce and Cheney, 2004, p. 128). It is also rare in the natural environment, where most behavior is reinforced on an intermittent schedule (Pierce and Cheney, 2004, p. 124).
The alternative to continuous reinforcement is the intermittent schedule of reinforcement, under which only some responses, not all, are reinforced. Intermittent schedules include ratio schedules and interval schedules. Ratio schedules deliver reinforcement after a set number of responses, whereas interval schedules deliver reinforcement for the first response after a set amount of time has passed. Both ratio and interval schedules can be either fixed or variable (Pierce and Cheney, 2004).
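The four basic intermittent schedules can be summarized as simple decision rules. The sketch below is my own illustration, not from the article; each function answers the question "does this response earn reinforcement?" for one schedule type, and all names and parameters are assumptions chosen for clarity.

```python
import random

def fixed_ratio(n_responses, ratio):
    """FR: reinforce every `ratio`-th response (FR 1 is continuous reinforcement)."""
    return n_responses % ratio == 0

def variable_ratio(ratio_mean):
    """VR: reinforce, on average, one response in every `ratio_mean` (randomized)."""
    return random.random() < 1.0 / ratio_mean

def fixed_interval(seconds_since_reinforcer, interval):
    """FI: reinforce the first response after `interval` seconds have elapsed."""
    return seconds_since_reinforcer >= interval

def variable_interval(seconds_since_reinforcer, current_interval):
    """VI: like FI, but `current_interval` is drawn around a mean each cycle."""
    return seconds_since_reinforcer >= current_interval

# An FR 5 schedule reinforces the 5th, 10th, 15th ... response:
print(fixed_ratio(5, 5))   # True: the fifth response earns the reinforcer
print(fixed_ratio(6, 5))   # False: the ratio requirement is not yet met again
```

Note how the two ratio rules count responses while the two interval rules watch the clock; that single difference drives the distinct response patterns described in the following paragraphs.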
Fixed ratio (FR) schedules produce a rapid run of responses followed by reinforcement and then a pause. This pause is referred to as the 'post-reinforcement pause' (PRP), and its length is influenced by the number of responses required and the size of the reinforcer. It can be argued that continuous reinforcement is itself a fixed ratio schedule (one response per reinforcer, shown as FR 1) in which every response is reinforced. If a fixed ratio schedule is carefully engineered, a gradual increase in the ratio requirement can support more behavior for a single delivery of reinforcement (Pierce and Cheney, 2004, p. 131).
Variable ratio (VR) schedules are based on an average of fixed ratios of different sizes (Pierce and Cheney, 2004, p. 131). Variable ratio schedules produce a faster rate of responding than fixed ratio schedules, because the post-reinforcement pause is reduced, if not eliminated, when the ratio contingency is changed from fixed to variable (Pierce and Cheney, 2004, p. 131).
Interval schedules provide reinforcement after a period of time has passed, not after a set number of responses, and like ratio schedules they can be fixed or variable. Unlike fixed ratio schedules, which reinforce steady performance and yield a steady run rate, fixed interval (FI) schedules produce a characteristic pattern of responding called scalloping: after reinforcement there is a pause, followed by a few probing responses, followed by rapid responding as the interval times out. Scalloping occurs because animals cannot tell the time and therefore produce more responses than the interval schedule requires (Pierce and Cheney, 2004, p. 133).
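The scallop pattern can be illustrated with a toy simulation. The model below is my own construction under a simple assumption: the probability of responding grows as the expected payoff time approaches, because the animal cannot judge elapsed time exactly. The interval length and the quadratic growth curve are arbitrary choices for illustration.

```python
import random

INTERVAL = 30  # assumed: seconds between reinforcer availabilities

def response_probability(seconds_into_interval):
    # Assumed model: likelihood of responding rises as the interval times out.
    return min(1.0, (seconds_into_interval / INTERVAL) ** 2)

responses_per_phase = {"early": 0, "middle": 0, "late": 0}
random.seed(0)  # fixed seed so the toy run is repeatable
for trial in range(1000):
    t = random.uniform(0, INTERVAL)  # a random moment within the interval
    if random.random() < response_probability(t):
        phase = "early" if t < 10 else "middle" if t < 20 else "late"
        responses_per_phase[phase] += 1

print(responses_per_phase)  # far more responses late in the interval
```

Plotting cumulative responses from such a run would trace the rounded "scallop" shape: flat just after reinforcement, steep just before the next one.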
Variable interval (VI) schedules provide reinforcement for the first response after a variable period of time has passed. Variable interval schedules “produce high steady run rates, higher than fixed interval schedules” (Chance, 2008, p. 183). To increase the rate of response on a variable interval schedule, you can add a rule that the reinforcer, once due, is available only for a set period of time. This rule is referred to as a “limited hold,” and it can be applied to any schedule of reinforcement. In general, ratio schedules produce a higher rate of behavior than interval schedules because they produce shorter interresponse times, the time between any two successive responses (Pierce and Cheney, 2004, p. 139).
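The limited hold rule amounts to a response window. A minimal sketch, with a function name and parameters of my own choosing: once the interval has timed out, the reinforcer stays available only for `hold` seconds, and a response outside that window earns nothing.

```python
def limited_hold(seconds_since_available, hold):
    """Reinforce only if the response arrives within `hold` seconds of availability."""
    return 0 <= seconds_since_available <= hold

print(limited_hold(2, 5))   # True: responded 2 s into a 5 s hold window
print(limited_hold(7, 5))   # False: window missed, reinforcer withdrawn
```

Shrinking the hold window penalizes slow responding, which is why the rule pushes response rates up.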
Other schedules of reinforcement include duration schedules and time schedules. Duration schedules make reinforcement contingent on a behavior being performed continuously for a period of time. Fixed duration schedules require the behavior to be performed for a set period, whereas variable duration schedules work around some average, so each performance of the behavior is reinforced after a different duration. Variable duration schedules, like variable interval and variable ratio schedules, appear random but in fact vary around a mean. With both fixed and variable duration schedules, reinforcement may not be forthcoming for some time, and if the behavior itself does not provide intrinsic reinforcement, the behavior may be weak (Chance, 2008, p. 184).
Time schedules of reinforcement can also be fixed or variable. Time schedules deliver reinforcement independently of behavior and are therefore referred to as noncontingent reinforcement (NCR) schedules. Fixed time schedules are similar to fixed interval schedules except that no behavior is required; variable time schedules deliver reinforcement at irregular intervals, again regardless of behavior.
Fixed time and variable time schedules deliver reinforcement with no regard to behavior; when no reinforcement is delivered at all, the procedure is extinction. Intermittent reinforcement schedules make extinction more difficult: behavior that has been reinforced more regularly has lower momentum and is more readily extinguished (Pierce and Cheney, 2004, p. 126).
Niki Tudge Copyright 2009. First Serial Rights