CHARLES HUGHES: Hello and welcome to ABA Course One. I'm Charlie Hughes, Professor of Special Education at Penn State.
In this first lesson, I will be introducing the field of Applied Behavior Analysis as a science, what makes it a science, and the key components and assumptions of this approach to explaining behavior. Then I will focus on defining and describing behavior. And then introduce some basic principles of behavior and behavior analysis, such as conditioning and reinforcement.
As we progress through my lesson, you will notice that it is divided into segments. And these segments are labeled according to the eight titles on this slide. At the end of each segment, you will stop and complete an activity designed to check your understanding of the content I just covered. After completing the activity, you can continue with the next segment or take a break. And I definitely do not recommend that you try to go through the presentation and activities in one sitting.
This next slide outlines a sequence of activities that you should follow as you complete my lesson, beginning with which chapters, or parts of chapters, you should read in the course textbook. Before watching this video, it will be very helpful to print out the PowerPoint handout and use it to follow along as you view my presentation. And this will help you keep track of where you are in the presentation, as well as minimize the amount of notes that you'll have to take.
Also later in the lesson, I will be using something called guided notes when I present new vocabulary related to Principles of ABA. Having the printed handout in front of you will help you use these guided notes as they are intended.
Now, as I mentioned earlier, you will watch a segment of my lesson and then stop to complete an activity. And along with reading the text and watching the video, I strongly recommend that you read the posts on the lesson forum daily to see what kinds of questions other students have had, as well as my responses to these questions. And please feel free to post any questions you might have also.
Now I also strongly recommend that you read the archives for this session. These archives include frequently asked questions by students over the years and my responses to those questions. Many of them are likely to be questions that you will have as you learn the content in my lesson.
Finally, you will complete and submit assignment one by the stated due date.
CHARLES HUGHES: Given that ABA is based on science, and in fact is a science, I thought it might be helpful to talk a bit about what makes something a science. First, let me explain why I believe ABA is a natural science, just as biology and physics are. The basic principles of any physical or biological science are discovered, not invented. They exist independently, regardless of whether we humans have recognized them or not.
So the force of gravity has existed as long as the physical universe has existed, long before an apple fell on Isaac Newton's head and he started thinking about why things fall down, and why we don't just float away. Second, electricity existed before Benjamin Franklin flew a kite in a lightning storm. He didn't invent electricity, he discovered it. And then we started figuring out ways that we could use it.
So the principles of behavior, whereby living organisms tend to repeat behaviors that are reinforced by the environment, or decreased if they are punished by the environment, were in effect long before BF Skinner and others discovered them. So if a behavior resulted in our early ancestors finding food, that behavior was repeated. Conversely, if a behavior resulted in your neighbor winding up in the stomach of a tiger, that behavior decreased, certainly for your neighbor. And once scientists observed and identified scientific principles, they usually began to figure out ways of applying these principles to solve problems through the use of scientific experimentation.
So what is science? Well, that is a very big question, but I think some of the words on this slide get at the essence of what it is. Words such as systematic, whereby step-by-step procedures are applied to study phenomena so that we can organize that knowledge in a way that helps us understand it better. We are, in essence, trying to understand how the natural world works. Underlying this is the presumption that the basic processes of the world, and beyond, are on the whole orderly, no matter where you are.
Finally, scientific knowledge is independent of personal, political, social, and economic considerations. Now this is a tough one. What I mean is that keeping science from being impacted by human biases is tough. It's probably impossible to do fully, but we should try to minimize it. Now think about the fact that Galileo was condemned by the church for his observation that the earth revolves around the sun, instead of the other way around. Scientific findings have often been put on hold for hundreds of years because they conflicted with current non-scientific beliefs or biases.
So there are three basic types of scientific investigation. These three investigation procedures allow us to understand natural phenomena, but at different levels. Now the first and lowest level is when we describe an observed phenomenon. The more systematically we observe something without bias, the more accurately we can describe it. The more accurately we can describe it, the better we can organize and classify the phenomenon. We can then use that data to begin to make hypotheses about why, and under what conditions, the phenomenon occurs, and then begin to test those hypotheses.
Now another type of investigation is prediction. After repeated and systematic observations of a phenomenon, when the data from these observations are analyzed and summarized, scientists often discover that two events, or situations, seem to occur together. That is, when one event occurs, another event also occurs with regularity. This is called a correlation. In a sense, we can say that x predicts y.
Now, in your text the authors use the example that the occurrence of winter predicts when certain birds fly south. So flying south is correlated with the onset of winter. Now remember at this point we are not controlling or manipulating these variables, we're only observing that they occur. Thus we cannot say for certain that x causes y, only that x might predict that y will occur.
While some correlations seem to have a causal relationship, we need to be careful. Many times, this leads to faulty assumptions. So for example, because of some correlations it was believed that vaccinations caused autism, a conclusion that has not been borne out. This whole thing got started when somebody noticed that autism typically is diagnosed around two to three years of age, and that is when most kids get their measles and rubella shots.
Another example, from the field of psychology: the pseudoscience of phrenology, which is predicting behavior based on the bumps on a person's head, came about in the 1800s because a doctor named Gall noticed that an executioner and a boy who liked to hurt animals both had a prominent bump above the ear. Because of these very limited correlations, he made the leap to claiming that we could explain and predict behavior based on where bumps on the head were located. Now we often laugh when we hear about conclusions based on spurious correlations, but they're still happening, and we must guard against making causal predictions from them. So to summarize, we can have two or more things that are highly correlated, but there may be other, yet unidentified variables that actually predict the phenomenon. So we need to look more closely at correlations, by using controlled experiments, to see if y occurs only when x occurs.
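As an aside not in the original lecture, here is a minimal Python sketch of the point above: a hidden third variable can make two measures look strongly correlated even though neither causes the other. All numbers and variable names here are invented purely for illustration.

```python
# Illustration only: a lurking variable (here, simulated "age") drives two
# measures that have no causal connection to each other.
import random

random.seed(0)

# Hidden third variable, e.g., a child's age in months.
ages = [random.uniform(12, 48) for _ in range(500)]

# Each measure tracks age plus its own independent noise;
# x does not cause y, and y does not cause x.
xs = [a + random.gauss(0, 2) for a in ages]
ys = [a + random.gauss(0, 2) for a in ages]

def pearson(u, v):
    """Plain Pearson correlation coefficient."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

r = pearson(xs, ys)
print(round(r, 2))  # strongly positive, despite no causal link between x and y
```

Only a controlled manipulation, holding the hidden variable constant or varying x directly, could reveal that the correlation is not causal.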
Controlled investigations provide further information about the level and strength of relationships between variables. Controlled experiments, and the information they yield, are the most useful for any scientific field of study. Controlled experiments manipulate events and variables for the purpose of establishing functional relationships. That is, a functional relation exists when y reliably occurs when x is used or presented. Now we do that by designing experiments that control, and by control I mean eliminate, other possible variables and their effects, so that we can isolate the impact of the variable in question. So we conduct experiments designed to answer questions such as whether the use of a particular math curriculum reliably results in better math performance. Or whether tantrums decrease when, and only when, a particular extinction procedure is used. Then we can say with confidence that other variables did not cause the change in the behavior.
Now before we move into ABA as a science, I want to cover some basic assumptions and strategies that all empirical sciences share. In your text these shared perspectives are called attitudes of science, and there are six of them. The first one is determinism. Determinism assumes that, to a large degree, the phenomena being studied, whether particle physics, human behavior, or gravity, are orderly and lawful. Now they have to be lawful in order for us to control and predict phenomena.
In other words, we can establish functional relationships such as, when we heat a gas, it expands. Or when gravitational pull increases, more thrust is needed to escape it. Or when a behavior is reinforced, it increases. And so on. Now if the universe were not orderly to a great degree, we really would know nothing, because we would not be able to predict anything.
The next attitude of science is empiricism. That is, we should objectively and without bias observe the studied phenomenon. Scientists need to proceed without individual prejudices and opinions, but as I noted earlier that's easier said than done, given that we are only human. So that is why we have things such as double blind studies, especially in the area of medicine. It's why we have more than one observer, or person, who measures the change in behavior. It's why replications of the experiment are performed by a variety of people, and so on.
Now as I mentioned before, experimentation is the basic strategy for establishing functional relationships. It is possible that a functional relation exists between two variables, such as variable x and variable y, but the only way to establish it is to control for the impact of other possible variables. But one, or even two, replications of a study are usually not enough to have full confidence in a functional relationship. So scientists rely on multiple replications of these experiments to make sure the results are reliable. Replication of results is a cornerstone of the scientific process. Many times we read about groundbreaking discoveries, such as cold fusion, only to find out that nobody else can replicate the findings.
Because science is about providing an explanation of why, how, and under what conditions phenomena occur, scientists hold to something that's called the law of parsimony. That is, they initially examine the simplest explanation over more complex explanations. Thus, the simplest, or the most parsimonious, or frugal explanations need to be ruled out before more complex explanations are considered. When you hear someone say that a theory is elegant, that's what he or she is talking about.
The last attitude of science presented here is philosophic doubt. This makes me think of an expression I heard in one of my statistics courses back in the day when I was a doctoral student. That statement is, in God we trust, everybody else needs data. Now what that's getting at is that we should not accept things as fact unless there is some form of proof, and we need to keep a healthy level of skepticism about that proof. So scientists need to continually question what is considered fact at any point in time. If they did not question and test and, if warranted, set aside their current beliefs, especially about their own research, we would still be feeling the bumps on people's heads to explain their behavior, and thinking that the sun revolves around the earth.
The quote on this slide shows that BF Skinner, considered by most to be the father of behaviorism, was a strong advocate of continual questioning about what is held as truth. Out of these attitudes of science, comes a definition of science. One that holds for ABA, as well as any natural science. Take a minute to read it over. Now as you can see, this definition contains such concepts as description, prediction, and control that lead to parsimonious explanations, and a healthy skepticism about what is currently known.
So that's enough scientific excitement for now. This is a good time to complete an activity to evaluate your memory and understanding of some of the content we just covered in this segment. Directions for what to do, and where to find this self-evaluation activity, are at the bottom of the video. Now when you've completed the activity and read the provided feedback, you can come back to the video and continue with segment two. Or, you could do something a bit more pleasurable, and come back to the video at a later time.
Please complete the Lesson 1 Segment 1 Activity.
CHARLES HUGHES: To this point, I have presented some basic tenets of all natural sciences, regardless of the area of study. Now we take this scientific framework and talk about the science behind applied behavior analysis, beginning with the domains of behavior analytic science and its practices, or applications. These four domains are behaviorism, the experimental analysis of behavior, applied behavior analysis, and the practices guided by behavior analysis.
So first, behaviorism is often characterized as the study of the theory and philosophy behind how organisms, including humans, behave. Now the primary activity and goal of this domain, is to develop theories that are overarching. That is, they account for all behavior.
Next, the experimental analysis of behavior deals with basic research designed to discover, or clarify, basic principles of behavior, as well as to identify functional relations between behaviors and their controlling variables. As I said, this is often referred to as basic research. All of those rat and pigeon studies that Skinner conducted were basic research into the principles of reinforcement, punishment, and the effects of different schedules of reinforcement. All of which you'll be learning about in much greater detail.
The domain of applied behavior analysis includes the application of behavioral principles to actual behavioral problems. For example, ABA might look at the effects of certain types of reinforcers on student behavior. Relatedly, the domain of practice guided by ABA is where the focus is on developing and validating practices to help people change their behaviors in ways that improve their lives. Now that is probably what most of you folks are interested in, and we will get into these practices later in these courses. But first, we want you to have a good foundational understanding of the science and the principles that are behind the practices.
To back up just a bit, one of the most famous examples of a discovery that came from basic research was by Skinner, as he attempted to discover and clarify operant behavior principles. In this case, the three-term contingency was used as the basic unit of analysis of behavior, whereby behavior can be explained by both antecedent and consequent stimuli. That is, Skinner began to look at the impact of both antecedent and consequent stimuli on behavior, and we're going to be talking about this a lot more later in the presentation.
Alright, so let's move to talking about the philosophy of behaviorism and what it is. But I want to start with what it is not, rather than what it is. And what it is not is what is known, or referred to, as mentalism. By now you've probably figured out that ABA is about behavior, and the unit of focus is observable behavior. And that behavior is affected by environmental events or stimuli. Now this puts behaviorism at odds with other psychological approaches to explaining human behavior, many of which are characterized by mentalism.
Now in mentalistic approaches, it is assumed that some mental dimension exists, and these mental dimensions cause, initiate, or mediate a person's behavior. So you hear explanations of behavior such as, well, I threw the glass against the wall because I was upset; thus, my mental state of upsettedness caused me to throw the glass. Because these mental dimensions cannot be observed or clearly defined, they are often referred to as explanatory fictions. Or as Julie Vargas states, "explanatory fictions are explanations that do not explain." They are also referred to as circular explanations. These are statements that look like explanations, but in which the cause is basically a restatement of the behavior to be explained. Quoting Vargas again, "The mentalistic and circular statement, Jenny reads well because she has high reading aptitude is the same as saying, Jenny reads well because she reads well."
So another example of a circular explanatory fiction would be saying, the student became hostile because of an aggressive attitude towards authority figures. Well, how do you know he has an aggressive attitude? Because he was hostile. Well, why was he hostile? Because he had an aggressive attitude. So around and around in circles we go.
So mentalistic approaches use explanatory fictions, whereby they attribute behavior to hypothetical constructs such as aptitude, motivation, self-esteem, attitudes, and so on. Now the use of such fictions hinders looking at the environmental stimuli that can be objectively observed. And through controlled experimentation, functional relationships between the behavior and those stimuli can be established.
Moving back from mentalism to behaviorism, there are two basic types of behaviorism. One is called methodological, and the other is called radical. Methodological behaviorists pretty much discount any events that cannot be defined by objective assessment. That is, if something cannot be directly observed, it doesn't count. They acknowledge that mental events such as thinking occur, but don't use them in the analysis of behavior.
Radical behaviorists, on the other hand, do consider these private events in behavior analysis. Skinner was one of the first behaviorists to acknowledge that thoughts and feelings are behavior, and only differ from public behaviors, or observable behaviors, by the lack of accessibility to them. Now however with new technology, this accessibility is changing, and we'll talk about that more as we get through the course. Basically, radical behaviorism is the broadest and most comprehensive form of behaviorism, and thus is a more encompassing theory than those used by methodological behaviorists.
Now I'm going to wrap up the discussion of science and ABA by talking about the defining characteristics of ABA. And when we finish with this, we're going to start in on the basic concepts and underlying principles of behavior. As I mentioned before, ABA is the domain that focuses on the application of behavioral principles. This application of a behavioral intervention, which is also called the independent variable, should have a positive and practical impact on an individual's day-to-day life.
Next, and not terribly surprising, ABA is behavioral. That is, changing behavior is the focus of the intervention. Now the behavior to be changed is referred to as the dependent variable, and we'll be using these terms more often later on. So the dependent variable must be observable, and thus measurable. Obviously, if you can't observe and measure the behavior, you're not going to know whether it has changed or not.
Now a quick note here about what behavior is. We will be looking very closely at the definition and characteristics of behavior later in this lesson. But for now, it is important to know that behavior is essentially an observable movement by an organism that has some impact on the external environment. And that movement must be clearly described in ways that make it observable and measurable. So a vague description might be something like, the purpose of the intervention is to reduce aggression, which doesn't really get specific. A more specific description might be, the purpose of the intervention is to reduce the number of times a student hits a peer. Much more precise.
Analytic means the experimenter has demonstrated control of variables in ways that allow us to determine a functional relationship between manipulated events. That is, between the intervention or independent variable, such as presenting a reinforcing stimulus contingent upon a target behavior occurring, and a reliable change in that behavior, or what we call the dependent variable.
In this case, we are trying to control variables in ways that will allow us, hopefully, to say that the target behavior increases when, and only when, contingent reinforcement is used. So this type of controlled analysis allows us to demonstrate that the behavior changes only when the intervention occurs. This analysis allows us to eliminate the possibility, or likelihood, that other variables are impacting changes in the behavior. Now these other variables are referred to as threats to internal validity. You will be learning much more about controlling variables and establishing functional relationships later in the course, when you learn about the different types of research designs used in ABA.
Technological, in the case of ABA, refers to describing what was done in an experiment or behavior change plan in sufficient detail that other professionals can replicate the procedures used. So basically, technological means that the experimenters have described exactly what they did, so that other people can replicate the study.
ABA interventions should always be based on, and flow from, the basic principles of behavior such as reinforcement, punishment, and so on. These principles, and how they link together, will be covered in more detail in this and later sessions. To be truly effective, behavioral interventions must improve behavior, and the improvement needs to be at a sufficient level to have a significant impact on the person's life.
Now often in educational research we say that the intervention had a statistically significant impact, but what we really want is that the impact has some social significance, and actually impacts the student's life. As an educational researcher, if my intervention made an improvement, but the improvement was insufficient for the student to be successful in the classroom, well one might fairly ask the question, so what?
Finally, we want the change in behavior to maintain over time, to occur in other environments, or for students to use their new skill with other related behaviors. If this occurs, we can say there was generalization. If generality of the target behavior does not occur, it really minimizes the effectiveness of the intervention. Most behaviors, but not all, are not truly functional unless they are maintained and used in other environments. So how many times have you heard, or said yourself, my kids can do this in my classroom, but they can't do it in other classrooms. Or something like, my kids can do this behavior at school, but they don't do it at home. This is a problem of generality. Later in this course series, we'll be talking in great detail about how to promote generality of behavior change.
So to wrap this segment up, here is a definition of ABA that you will find in your course textbook. I have underlined some keywords that are at the essence of this definition, and any other definition of ABA. First and foremost, it is a science and all that implies.
It assumes that there are basic principles that impact all of human behavior, and that ABA is the systematic application of those principles for the purpose of producing changes that are significant to the client. And finally, that we use experimentation in ways that clearly identify which variables are functionally related to the desired impact or outcome.
OK, so this is a good time to take a break from the lecture and complete another self-evaluation activity, to see how well you remember and understand the content contained in this segment. So again, follow the directions at the bottom of the screen, and take a short quiz and read the feedback. When you've done this and feel ready for some more exciting lecture, start the video again and we'll start talking about the basic vocabulary and principles of behavior and behavior analysis.
Please complete the Lesson 1 Segment 2 Activity.
CHARLES HUGHES: In these next segments, I'm going to begin to define basic terminology related to applied behavior analysis, as well as describe some basic principles of behavior. This lesson, as well as the next lesson, will cover information contained in content area three of the BCBA behavior analyst task list, that you can find on the first page of the text used for the course.
Now in the first segments, I stressed that ABA is a science, a science of behavior. And as such, it is focused on the search for relationships between behavior and its causes. By systematically and rigorously studying behavior, we hope to clarify the rules or principles of behavior, and to use this knowledge to solve problems, that is, change behavior. Now I stress that applied behavior analysis is based on scientific principles, just as in other natural sciences. As such, principles like reinforcement are not something that is made up or developed, these principles exist naturally, just as the principle of gravity exists naturally.
So hopefully, as you're beginning to see, ABA is concerned primarily with identifying environmental variables and how they impact behavior. Like all natural sciences, ABA has its own specialized vocabulary and basic underlying principles. These terms and principles, which I begin to define in this segment, are used throughout the ABA course series, and will form the basis for the various applied methodologies presented in subsequent courses. That is, understanding the basic principles and vocabulary, will help you understand what you are doing and why you are doing it, when you use behavioral procedures.
Now on a final note here, before we get started, much of this presentation follows a very structured organization. I will present a slide on the screen, containing the definition of a term or principle. Several keywords in the definition will be missing, in the same way they are on your PowerPoint handout that you printed out. I'll read the definition, while you read along and fill in the missing word. I will then read the definition a second time, and put up a slide that contains the missing words in bold print, just in case you missed something. After I've gone through this procedure of reading the definition twice, I will provide some elaboration and examples whenever necessary.
Now, for your information, this method of presentation is called guided notes, which has been shown to be effective for presenting factual content, such as what I'm presenting in this session. So this presentation won't be the most exciting one you've ever watched, and I really don't recommend viewing it in a horizontal position, but hopefully it will help you understand and use the basic principles of applied behavior analysis.
So let's start with a definition of what a basic principle is. It is a basic functional relationship between behavior and its controlling variables. So a basic principle is a basic functional relationship between behavior and its controlling variables. This relationship must be demonstrated by way of controlled experimentation that is replicated across many species, behaviors, and conditions. So this relationship must be demonstrated by way of controlled experimentation, replicated across many species, behaviors, and conditions. An example of a basic principle would be positive reinforcement.
Now replication is key to establishing a principle or a functional relationship, and this involves repeated experimentation. These experimental replications are not narrow, but rather are conducted in a variety of situations, across many species of animals, including humans, and across a variety of behaviors.
Now, an example of a behavioral principle is positive reinforcement, which we'll talk about in greater detail later. The principle that behaviors tend to be strengthened when followed by a pleasant consequence has been established through thousands of experiments, beginning with pigeons and rats and on up through human beings. And based on these controlled experiments, we know that the principle holds true in many environments and with many behaviors.
So let's talk a little bit more about what a functional relationship is. So when changes in an antecedent or consequent stimulus class consistently alter a dimension of a response class, a functional relationship is said to exist. So when changes in an antecedent or consequent stimulus class consistently alter a dimension of a response class, a functional relationship is said to exist. We seek to identify functional relationships between manipulated environmental events and behavior, through systematic manipulations.
So we seek to identify functional relationships between manipulated environmental events and behaviors, through systematic manipulations. So bottom line, a functional relationship occurs when an environmental event, whether it comes before or after a behavior, reliably changes an aspect of that behavior. So that is when something occurs, something else changes, and it does so consistently.
To establish or identify functional relationships, we have to manipulate different variables in a controlled fashion, to see if the behavior changes. To illustrate this type of manipulation, or experimentation, let's take a look at a graph. This type of experiment is sometimes called a reversal, or return-to-baseline, design, and you're going to learn more about this and other designs later in the course. Looking at this graph, we see that a behavior is measured during a baseline condition.
As you can see on this graph, the behavior, whatever it is, and it doesn't matter, is emitted initially at a low rate with little variability. Now when we begin to positively reinforce the behavior, as noted by the R+ above the second condition, we see the behavior begin to occur more often. But we can't yet say with confidence that the positive reinforcement was what caused the upswing in the behavior. That is, we can't yet say that a functional relationship exists between the behavior and the use of a positive reinforcer. So we take away the reinforcer, and we go back to the baseline condition.
Now, we see that without this particular reinforcer, the behavior goes back to where it was. Then a little bit later we reintroduce the reinforcement condition, and the behavior goes back up. It is this type of experimental manipulation that is needed to establish a functional relationship between an environmental event, in this case the introduction of a reinforcer after the occurrence of a behavior, and the behavior itself. Like I said, the types of research designs used in ABA will be covered in more detail in this course.
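The reversal (A-B-A-B) logic just described can be sketched in a few lines of Python. This is not from the lecture: the session counts below are simulated, and the base rates are arbitrary values chosen only to mimic the pattern on the graph; real data would come from direct observation.

```python
# Toy simulation of a reversal (return-to-baseline) design.
import random

random.seed(1)

def observe(sessions, base_rate):
    """Simulated count of the target behavior per observation session."""
    return [max(0, round(random.gauss(base_rate, 1))) for _ in range(sessions)]

phases = [
    ("baseline (A)",      observe(5, base_rate=2)),   # low, stable responding
    ("reinforcement (B)", observe(5, base_rate=10)),  # rate rises under R+
    ("baseline (A)",      observe(5, base_rate=2)),   # rate falls when R+ removed
    ("reinforcement (B)", observe(5, base_rate=10)),  # rate rises again
]

for label, data in phases:
    mean = sum(data) / len(data)
    print(f"{label:18s} mean = {mean:.1f}  data = {data}")
```

The behavior changes when, and only when, the reinforcement condition is in place, which is the pattern that supports claiming a functional relationship.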
So now let's examine the concept of behavior more closely. First by looking at the determinants of behavior, then defining what behavior is and isn't, and then looking at aspects of a response, which is a single instance of behavior. So behavior is determined by either organic or environmental variables.
Organic variables include genetics, physical attributes, and/or biochemical and neurological factors. So organic variables include genetics, physical attributes, and/or biochemical and neurological factors.
So organic variables can also influence behavior, and these include things that are genetically hardwired, such as instincts, which are certainly critical to the survival of a species. Physical attributes, such as loss of an arm, or how tall you are, blindness, a brain tumor, and so on. Biochemical factors are also organic variables that can influence or determine behavior, such as a lack of serotonin production.
So environmental determinants include current environmental events, and the organism's previous experiences with these or similar environmental conditions. So environmental determinants include current environmental events, and the organism's previous experiences with these or similar environmental conditions. That is, the organism's history of reinforcement.
So what we're talking about here is the organism's, the human's, the rat's, the pigeon's history of reinforcement. So we see that environmental determinants include what is going on in the present environment, as well as what occurred in the past. That is, a person's learning history, or history of reinforcement, can also play a part in determining behavior. In ABA we focus primarily on environmental variables, since we can, as educators, do something about these. I doubt as teachers that you will be doing much neurosurgery or gene splicing in your classrooms, so organic variables typically are not our purview.
So let's stop here and do another quick self-evaluation. You know how to do it, and we'll pick up in a bit.
Please complete the Lesson 1 Segment 3 Activity.
CHARLES HUGHES: OK, so let's get very basic and define what behavior means to the applied behavior analyst. Behavior is everything that organisms do. So behavior is everything that organisms do.
Behavior is the movement of an organism and its parts. Behavior is the interaction of the muscles and the glands of a living organism and the environment. So behavior is the movement of an organism or its parts. Behavior is the interaction of the muscles and the glands of a living organism and the environment.
B.F. Skinner called behavior "the detectable displacement in space through time of some part of the organism that results in some measurable change in the environment." One of Skinner's students, Ogden Lindsley, came up with the dead man's test to illustrate how you know something is a behavior or not. So if a dead man can do it, it isn't a behavior. If a dead man can't do it, it probably is behavior.
So bottom line here is a behavior has to be an interaction between a living organism and its environment. There has to be some movement initiated by the organism. Now, in your textbook, it points out that being blown over by a strong gust of wind, while involving movement, is not a behavior, because a dead man can be blown over, also.
So up next, I'll be talking about behavior in terms of responses, how we can classify those responses, and different ways we can measure these responses. So a response is a specific instance of behavior. Again, a response is a single instance of behavior.
A response cycle refers to the beginning, middle, and end of a response. So a response cycle refers to the beginning, middle, and end of a response.
Now, let me give you an example of a response cycle. Because I'm always doing stupid things, like Homer Simpson, I'm always doing this-- Doh! Doh! Now, let's break this response cycle down. So I have a beginning, a middle, and end of that particular response or behavior.
So in the beginning of the cycle is when I move my arm upwards. The middle of the cycle occurs when I make contact with my forehead. And the end is when my arm moves back down and stops. Now, you will see why it is important to identify the beginning and end of a response cycle later. But I wanted to introduce that definition to you.
Now I want to introduce two ways to classify behaviors-- topographically and functionally. Topography-- topography refers to the physical nature of responses. That is the exact form, configuration, or shape of the response; the appearance of the response; the force involved; and the actual movement. That's a mouthful. So topography refers to the physical nature of responses. That is the exact form, configuration, or shape of the response; the appearance of the response; the force involved; and the actual movement.
A topographical response class is a group of two or more responses that share a common form. So a topographical response class is a group of two or more responses that share a common form. Now, here's an example of a topographical response class. So here it is.
So I could do this on a bell in a hotel lobby. I could do it on a light switch to turn off the light. And in the old days, I could have sent a telegraph by pressing my finger on a telegraph key. Now, the behaviors look alike, but they do not perform the same function.
Now, I've created some animations for you to illustrate some of these basic principles. So what I want you to do now is to go to the animation and take a look at it. And when you get back, I want to just say a couple things about this. This illustration is designed to help demonstrate a topographical response class.
OK, hopefully you enjoyed that little animation, and you saw that we had a girl hitting a ball and that, in the next scene, a man was chopping with an ax. And hopefully what you saw was that they were both using the same arm motion, but there is a different outcome or function of their behavior. So that illustrates what a topographical response class is: you have behaviors that look alike, or take the same form, but have different functions.
Now, on the other hand, a functional response class is the inverse of topographical response class. And as we'll see later, it is more germane to ABA. So a function refers to the effect of a response in the environment. So a function refers to the effect of a response in the environment.
A functional response class is a group of two or more topographically different responses that have the same effect on the environment, usually producing a specific class of reinforcers. Again, we have a mouthful here. So a functional response class is a group of two or more topographically different responses that have the same effect on the environment, usually producing a specific class of reinforcers.
[VIDEO PLAYBACK]
-Mama, cookie.
-Here you go, little one. Here's your cookie.
-[CRYING]
-Here you go, little one. Here's your cookie.
[END VIDEO PLAYBACK]
PROFESSOR: So this animation hopefully illustrated a functional response class. We had two behaviors from the baby that looked different, but had the same outcome. So in the first scene, the baby came in, pointed to the cookie, and said, "cookie." And the outcome or effect is that the mother hands the cookie to the baby.
The second one, which is probably more realistic, is that the baby came in and pointed and then started whining and crying about that. But the mother knew what the baby wanted and gave it to him. So the outcome was the same, but the behaviors looked different. And that is essentially what a functional response class is.
So now we're going to take another short break and do another self-evaluation activity. And see you when you come back.
Please complete the Lesson 1 Segment 4 Activity.
CHARLES HUGHES: So, now let's talk about properties and dimensions of behavior. This information is critical when deciding which aspects of behavior to change and to measure. So, a property-- a property is a fundamental quality of a phenomenon. So, a property is a fundamental quality of a phenomenon. A dimensional quantity is a quantifiable aspect of a property. So, a dimensional quantity is a quantifiable aspect of a property. So, we're going to start with the fundamental property of something called temporal locus.
So, temporal locus. A single response occurs in time. More specifically, a response occurs at a certain point in time in relation to a preceding environmental event. Thus, one of the fundamental properties of a single response is temporal locus. So, a single response occurs in time and, more specifically, a response occurs at a certain point in time in relation to a preceding environmental event. Thus, one of the fundamental properties of a single response is temporal locus.
Now, the property of a temporal locus involves a response and its relation to an event that occurred right before it. Therefore, a dimensional quantity we can measure related to temporal locus is the time between the environmental event, which is often referred to as a stimulus, and the beginning of the response. That period of time between the stimulus and the response is called latency.
Now, one school situation that comes to mind with regards to latency is how long it takes between when a teacher tells her students to "open your books to page 42" and the length of time it takes for a student to do so. Another example of latency is related to American football. The quarterback says, "hike" and the center hands the ball to him. The amount of time between saying hike and when the ball is actually hiked is latency.
OK, so the dimensional quantity that's associated with temporal locus is latency, which is the amount of time between a stimulus and a response. Now you're going to see an animation which illustrates latency, the time between a stimulus and the behavior.
[VIDEO PLAYBACK]
-Hi, class. Can you tell me what nine times nine is?
-Huh?
-Oh, I know it.
-81?
[END VIDEO PLAYBACK]
PROFESSOR: So, in the animation that you just watched, we have the teacher asking what nine times nine is. And then, a student responds. But if you count the time between the teacher asking the question and the student's response, the time was 11 seconds. So, I could say that the latency of answering this question was 11 seconds. Another related example would be the amount of time between the teacher saying, "Line up for lunch" and when the students actually begin to line up.
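The arithmetic behind latency is just subtraction. Here is a minimal sketch, assuming we log event times in seconds (the timestamps are illustrative; only the 11-second gap echoes the animation):

```python
def latency(stimulus_onset, response_start):
    """Latency: time between a stimulus and the START of the response."""
    return response_start - stimulus_onset

# Teacher asks the question at t = 3.0 s; a student answers at t = 14.0 s.
answer_latency = latency(3.0, 14.0)
print(answer_latency)  # 11.0 seconds, matching the animation
```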
Now, the next fundamental property has to do with how long a response lasts, or its temporal extent. A second fundamental property of a single response is derived from the fact that a response occupies a certain amount of time. This property is called temporal extent. So, a second fundamental property of a single response is derived from the fact that a response occupies a certain amount of time. And this property is called temporal extent.
The dimensional quantity associated with temporal extent is duration, which is the amount of time between the beginning and the end of the response. So, the dimensional quantity associated with temporal extent is duration, which is the amount of time between the beginning and the end of the response.
[VIDEO PLAYBACK]
-Teacher, Dillon's falling asleep in class. Teacher, Rebecca ripped my paper. Teacher, Olivia said a bad word. Teacher, he poked me with his pencil. Teacher, Jessica's working ahead. Teacher, Robert wrote all over his desk. Teacher, Julia's passing notes.
[END VIDEO PLAYBACK]
PROFESSOR: OK, in that animation, what we had was a student exhibiting what we often call tattling behavior. Now, if we were looking at the duration of that, we would basically look at how long that class of behavior lasted. I counted that she kept doing it for 25 seconds. So, one could say the duration of her tattling behavior, which is not a great description, was 25 seconds. So again, duration is how long a particular behavior or class of behaviors lasts.
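Duration, too, is simple bookkeeping once each response cycle is logged with a beginning and an end. A minimal sketch, assuming cycles are recorded as (start, end) timestamp pairs in seconds (the tantrum times are invented for illustration):

```python
# Each response cycle logged as a (start, end) pair in seconds;
# these tantrum episodes are hypothetical.
tantrum_cycles = [(12.0, 47.0), (110.0, 150.0), (300.0, 325.0)]

def total_duration(cycles):
    """Sum of (end - start) across every response cycle observed."""
    return sum(end - start for start, end in cycles)

print(total_duration(tantrum_cycles))  # 100.0 seconds in all
```

Note that this is why identifying the beginning and end of a response cycle matters: without them, there is nothing to subtract.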
Because duration is how long behavior lasts, it's often measured when we want students to work on a particular task longer or when we want them to be off task for a shorter amount of time. It's a quantity of concern also when we're trying to measure, say, the length of a tantrum or how long it takes a student to get from one class to the next. Now, while the last two properties and quantities that we've talked about dealt with temporal aspects, that is time, the next properties and quantities deal with the frequency of responses, or how often the behavior occurs.
So, a third fundamental property is repeatability through time. It refers to the fact that a response can reoccur. So, a third fundamental property of behavior is repeatability through time. It refers to the fact that a response can reoccur. Countability is the dimensional quantity associated with repeatability, which is the number of responses or cycles of the response class. So, countability is the dimensional quantity associated with repeatability, which is the number of responses or cycles of the response class.
Simply stated, we are often concerned with how many times a behavior or response occurs. That is, the behavior may not be occurring enough or the behavior may be occurring too often. So, take a look at another animation, here.
[VIDEO PLAYBACK]
-Teacher, Dillon's falling asleep in class. Teacher, Rebecca ripped my paper. Teacher, Olivia said a bad word. Teacher, he poked me with his pencil. Teacher, Jessica's working ahead. Teacher, Robert wrote all over his desk. Teacher, Julia's passing notes.
[END VIDEO PLAYBACK]
PROFESSOR: So what you saw was a repeat of the last animation, only this time we're going to look at this behavior of tattling and count how many times it occurred. So, we might be looking at how many tattles there were. And I counted seven unique tattles from this relatively obnoxious young lady. So, again, that's what countability is: you identify a behavior and you count how often it occurs.
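Counting is the simplest quantity of all. A minimal sketch, assuming an observer codes each response cycle in an event log (the codes and entries are illustrative; the count of seven echoes the animation):

```python
# An event log from one observation session; each entry is the code
# for one response cycle (codes are hypothetical).
events = ["tattle", "on_task", "tattle", "tattle", "on_task",
          "tattle", "tattle", "tattle", "tattle"]

tattle_count = events.count("tattle")
print(tattle_count)  # 7, matching the animation
```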
So, the next property and quantity come into play when both repeatability and time are of interest. The fundamental property of a response class is the combination of both repeatability and temporal locus. So, the fundamental property of a response class is the combination of both repeatability and temporal locus. That is, behavior repeats and it occurs in time. Now, from this combination, we can derive several dimensional quantities.
Inter-response time-- inter-response time refers to the time between two successive responses. Usually, it is the time elapsed between the end of a response cycle and the beginning of the next response cycle. Now, there's another mouthful. So, from this combination of repeatability and temporal locus, we can derive several dimensional quantities. Inter-response time, or IRT, refers to the time between two successive responses. Usually, it is the time elapsed between the end of a response cycle and the beginning of the next response cycle.
So, in order to measure IRT, there needs to be at least two responses. We then measure the amount of time that occurred between the end of one response cycle and the beginning of the next one. So, let's take another look at that student and take a look-- this time, we're going to look at measuring IRT.
[VIDEO PLAYBACK]
-Teacher, Dillon's falling asleep in class. Teacher, Rebecca ripped my paper. Teacher, Olivia said a bad word. Teacher, he poked me with his pencil. Teacher, Jessica's working ahead. Teacher, Robert wrote all over his desk. Teacher, Julia's passing notes.
[END VIDEO PLAYBACK]
PROFESSOR: So here, if we wanted to quantify IRT, we would look at the time between when she ended one tattle and when she began the next. So, I counted the time that elapsed between the end of one tattle and the beginning of the next one. And she averaged about two to three seconds between each tattle. So, I could say that her IRT was roughly 2.5 seconds. Sometimes we want to know that.
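Here is how that IRT calculation looks in code. A minimal sketch, assuming (start, end) timestamps per response cycle; the timestamps are invented, chosen so the gaps average 2.5 seconds as in the lecture's estimate:

```python
# Response cycles logged as (start, end) times in seconds (illustrative).
cycles = [(0.0, 2.0), (4.0, 6.0), (9.0, 11.0), (13.5, 15.0)]

def inter_response_times(cycles):
    """IRT: time from the end of one response cycle to the start of the next."""
    return [nxt_start - prev_end
            for (_, prev_end), (nxt_start, _) in zip(cycles, cycles[1:])]

irts = inter_response_times(cycles)
mean_irt = sum(irts) / len(irts)
print(irts)      # [2.0, 3.0, 2.5]
print(mean_irt)  # 2.5 seconds
```

Notice that four responses yield three IRTs; as the lecture says, you need at least two responses before an IRT exists.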
Let's see. Another dimensional quantity related to time and repetition is frequency, also called rate of responding. This is the ratio of the number of responses over some period of time; thus we get cycles per unit of time. So, another dimensional quantity related to time and repetition is frequency, which is also called rate of responding. This is the ratio of the number of responses over some period of time and is referred to as cycles per unit of time.
For example, a common measure used for reading assessment is the number of words read correctly per minute. Later on in this course, guidelines for selecting rate as a dimensional quantity and the resulting calculation procedures will be gone over in more detail. Now, rate's relationship with IRT is an inverse one. So, the longer the IRT, the lower the rate. That is, the more time between occurrences of behavior, the lower the rate of the behavior occurring. And conversely, the shorter the IRT between a series of responses, the greater the rate.
Now, going back to our little student who was doing all of her tattling, we calculated the IRT to be about 2 and 1/2 seconds, and she had tattled seven times. If the IRT had increased to six seconds, that would have a definite impact on the rate of behavior. So, the rate of her tattling would decrease if there were longer periods of time in between. Now, that may not seem important now, but as we get into things, it will be.
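The rate arithmetic, and its inverse relation with IRT, fits in a few lines. A minimal sketch; the seven-tattles-in-25-seconds figures echo the animation, and everything else is illustrative:

```python
def rate(count, seconds, per=60.0):
    """Responses per unit of time (default: per minute)."""
    return count / seconds * per

# Seven tattles in a 25-second observation, as in the animation:
print(round(rate(7, 25.0), 1))  # 16.8 tattles per minute

# The inverse relation with IRT: the same seven responses spread over
# longer gaps (longer IRTs -> more total time) yield a lower rate.
assert rate(7, 25.0) > rate(7, 42.0)
```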
So, the final dimensional quantity that we'll cover deals with whether a behavior is increasing or decreasing. So, another dimensional quantity of a response class is celeration. Celeration. This term refers to the direction of change in behavior, either accelerating or decelerating in rate. So, another dimensional quantity of a response class is celeration. This term refers to the direction of change in behavior, either accelerating or decelerating in rate.
The key concept is whether a behavior is increasing (accelerating) or decreasing (decelerating). Now, the easiest way to ascertain celeration is to chart behavioral data over time. You then get a clearer picture of whether the behavior is increasing or decreasing. Now, in later courses, very precise methods for charting behavior as well as quantifying celeration will be provided. Now, I've provided a summary chart for you that shows the fundamental properties of behavior, the dimensional quantity, and the unit of measurement. Since this is information you're going to have to know from my lesson, you can use this chart as a study aid, if you wish.
So, just looking at this chart, you'll see that the first column deals with the fundamental property, such as temporal locus, temporal extent, repeatability, and so on. The middle column looks at the dimensional quantity and deals with what you're measuring, such as am I going to measure latency, duration, IRT, and so on. And the last column is unit of measurement. So, this is just a way of helping you remember what the different properties and quantities are. So, let's stop here and do another self-evaluation exercise before we move on.
Please complete the Lesson 1 Segment 5 Activity.
CHARLES HUGHES: In this next segment, I'll continue to talk about what behavior is and then begin to talk about the interaction of behavior with environmental stimuli and responses, which will lead us to one of the first discoveries of behaviorism, and that is respondent conditioning.
Public behavior. Public behavior is behavior that can be observed by others, even though sometimes, special instruments may be required. So public behavior is behavior that can be observed by others, even though sometimes special instruments may be required. Private behavior, also now known as private events, is behavior that cannot be observed by others. It is only accessible to the organism engaging in the private event. So private behavior, also known as private events, is behavior that cannot be observed by others. It is only accessible to the organism engaging in the private event.
So movement of the large or skeletal muscles is obvious, public behavior. For example, walking is obvious public behavior. It is very observable. Smaller muscles used with talking are less obvious, but are still public. Now, what about the smooth muscle movement found in organs and associated with things such as breathing, digesting, the heart beating, and so on? Well, they're not very obvious, but they can be measured and observed using some medical instruments.
So how about silent reading? We can observe the person holding a book; their eyes are moving, and maybe there's slight vocal cord movement. Do we know that they're actually reading from these indicators? Is it a public event? Well, not really. Now, up until recently, silent reading would clearly be considered a private event. Now, it's not so clear.
So using brain imaging techniques such as positron emission tomography (PET) or functional magnetic resonance imaging (fMRI), we can observe and actually quantify things such as the burning of glucose in different language centers of the brain during activities such as silent reading. So it's getting a little less clear. So now we can at least indirectly measure what used to be seen as very private behavior.
So now let's move on to the environment. The environment is the total constellation of stimuli which can affect behavior. Our environment is both outside the skin of an organism as well as within. So the environment is the total constellation of stimuli which can affect behavior, and the environment is both outside the skin of an organism as well as within. So to a behaviorist, especially a radical behaviorist, the environment also includes stimuli that occur within the organism as well as outside of it.
As I mentioned before, in ABA we are very concerned about environmental events which impact behavior. We call those changes in the environment stimuli. So a stimulus is a change in the environment which can affect behavior. It is an environmental event. So a stimulus is a change in the environment which can affect behavior. It is an environmental event.
It is common to talk about the onset and offset of a stimulus and the magnification versus attenuation of a stimulus. So when we talk about stimuli, it's common to talk about the onset and offset of a stimulus and the magnification versus the attenuation of the stimulus. So from this definition, we see that, like behavior, a stimulus has a beginning, or onset, and an end, or offset.
We are also interested in the magnitude or strength of the stimulus. At its essence, a stimulus is a change in the environment that affects behavior, and there are two basic types of stimuli. The first one is the antecedent. An antecedent stimulus is a stimulus that precedes a response. So an antecedent stimulus is a stimulus that precedes a response.
The other type of stimulus is called a consequence, which is a stimulus that follows a response. So a consequence stimulus is a stimulus that follows a response. Identifying and understanding both antecedent and consequence stimuli becomes very important when conducting a functional analysis of behavior, which is a technique that will be discussed more than once in these upcoming courses.
Contiguity is the nearness of events in time and space. Contiguity is the nearness of events in time and space, so it's the nearness of stimuli to behavior. So the magnitude or strength of a stimulus is often impacted by its nearness in time or space to the behavior. So for example, temporal contiguity plays a part in how much a consequent stimulus impacts behavior. That is, the sooner a consequence follows a behavior, the stronger the impact. Now, we'll talk about this a little bit more when we talk about the principle of reinforcement.
Now what we're going to do is we're going to start talking more about the interaction of environmental stimuli and responses. And first, I'm going to define what a contingency is, followed by the different types of contingencies, and then finally explain how there are two basic types of behavior, respondent and operant.
So contingency is a dependency between events. Contingency is a dependency between events. A contingency is said to exist between events when one depends on the other. So a contingency exists between events when one depends on the other. A contingency is kind of an if-then statement. So, a contingency is kind of an if-then statement.
An event that is stimulus-contingent is one that occurs if a particular stimulus occurs. So an event that is stimulus-contingent is one that occurs if a particular stimulus occurs. An event that is response-contingent is one that occurs if a particular response occurs. That is, the event is dependent upon the response. So an event that is response-contingent is one that occurs if a particular response occurs. That is, the event is dependent upon the response.
So there are two types of behavioral contingencies. One occurs when a stimulus brings about a behavior, such as a red traffic light brings about braking behavior. And the other type relates to a response or behavior affecting a consequent event. For example, if I study hard, the likely consequence is that I'll get good grades. Let's talk about the relationship of stimuli with the two types of behavior.
Respondent behavior. Respondent behavior is unlearned. It occurs or is elicited in response to a stimulus. So respondent behavior is unlearned. It occurs or is elicited in response to a stimulus. Respondent behavior is solely under the control of antecedent stimuli. So respondent behavior, one of the two types, is solely under control of antecedent stimuli.
Antecedent stimuli come before the behavior. The functional relation involved is described as a stimulus-response relationship. So the functional relationship involved in respondent behavior is described as a stimulus-response relationship.
Operant behavior, the second kind, acts or operates on the environment and is emitted rather than elicited. So operant behavior acts or operates on the environment and is emitted rather than elicited. Operant behavior is at least partially under the control of consequences. Operant behavior is at least partially under the control of consequences. Many operants are under the control of both antecedents, stimuli that come before, and consequences, which are stimuli that come after the behavior.
The key points here are that respondent behavior is unlearned, and it only occurs in response to an antecedent stimulus. This relationship is referred to as a stimulus-response or is sometimes written as S-R. For example, in your text, the stimulus-response relationship is illustrated by the example of when a bright light, which is the stimulus, is directed at your pupil and your pupils shrink, and the shrinking is the response. So stimulus-response.
Operant behavior, on the other hand, is learned, and it actually acts or operates on the environment. Operant behavior is under the control of consequent stimuli or consequences, but it also needs some form of antecedent stimulus to actually occasion the behavior. So does this chart look familiar?
Now, remember the three-term contingency that Skinner developed that I presented earlier. It's just like the stimulus-response functional relationship I just mentioned that represents respondent behavior, only with operant behavior, an additional letter is added after the R.
Now, the letters in the top row stand for stimulus, response, and consequence. That is, an environmental stimulus occurs, which then occasions a behavior that then results in another stimulus or consequence, and that consequence can impact whether the behavior will occur or not in the future. Now, the most common way you'll see this presented is using ABC. A stands for antecedent. B is for behavior, and C is for consequence.
Now, in-depth descriptions of how antecedents and consequences can be used to increase or reduce behaviors will be provided later in this course, as well as throughout the series of courses, but it is important to remember that this three-term contingency statement forms the basis of much of what we do in ABA. It is at the heart of techniques such as functional analysis, which is used in many, many ABA programs.
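To show what the three-term contingency looks like as data, here is a minimal sketch of an A-B-C record, the kind of log a functional analysis starts from. The entries, behavior codes, and tallying approach are all hypothetical illustrations, not a prescribed procedure:

```python
from collections import Counter

# A hypothetical A-B-C (antecedent-behavior-consequence) log.
abc_log = [
    {"antecedent": "teacher presents worksheet",
     "behavior": "student tears worksheet",
     "consequence": "worksheet removed"},
    {"antecedent": "teacher presents worksheet",
     "behavior": "student tears worksheet",
     "consequence": "worksheet removed"},
    {"antecedent": "peer takes toy",
     "behavior": "student cries",
     "consequence": "toy returned"},
]

# Tally which consequence follows each behavior; a consequence that
# reliably follows a behavior is a candidate maintaining variable.
pairs = Counter((e["behavior"], e["consequence"]) for e in abc_log)
print(pairs.most_common(1))
# [(('student tears worksheet', 'worksheet removed'), 2)]
```

Even this toy tally hints at why the three-term contingency matters: the consequence "worksheet removed" reliably follows tearing, which is exactly the sort of pattern a functional analysis probes.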
Now, what I'd like to do now is to describe two ways to classify functional relations in terms of the way that stimuli or environmental changes acquire their effectiveness to change behavior. So first, we have phylogenic provenance. That is, the effect of the stimulus on a specific response may be innate due to the evolutionary history of that species.
So phylogenic provenance, or the effect a particular stimulus might have on behavior, is when the effect of the stimulus on a specific response may be innate due to the evolutionary history of that species. These innate functional relations are called unconditioned or unlearned. So they're either unconditioned or unlearned.
In addition, the effect of a stimulus on behavior may be due to ontogenic-- some pretty fancy words here-- ontogenic provenance. That is, the effect of the stimulus on a specific response may be learned due to the experiential history of the organism. So in addition, the effect of a stimulus on behavior may be due to ontogenic provenance-- that is, the effect of the stimulus on a specific response may be learned due to the experiential history of the organism.
So these learned functional relations are called conditioned. So here's an example to illustrate unlearned (unconditioned, or phylogenic) and learned (conditioned, or ontogenic) behavior relations. So earlier, I noted that respondent behavior is unlearned and is always elicited by a stimulus.
For example, if I had a meat baster and I squirted a puff of air into your eye and you blinked, the puff of air is the stimulus, and the blink is the response. No learning required here. Your eye automatically does it, which means that it is phylogenic.
To illustrate how a stimulus may affect behavior via ontogenic provenance, I could ring a bell right before I puffed the air in your eyes. So I ring a bell, puff the air, and your eyes would blink. Later on, if I kept doing that, just hearing the bell would be sufficient to make you blink. In this case, the behavior of blinking is now learned, or conditioned.
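The bell-and-air-puff example can be sketched as a toy model. This is only a caricature of respondent conditioning: the five-pairing threshold is arbitrary, and real conditioning depends on many variables the model ignores:

```python
def respondent_conditioning(trials, pairings_needed=5):
    """Toy model: after enough bell+puff pairings, the bell alone
    elicits a blink. The threshold is an arbitrary illustration."""
    pairings = 0
    blinks = []
    for bell, puff in trials:  # each trial: (bell rung?, air puffed?)
        if puff:
            blinks.append(True)  # unconditioned reflex: puff -> blink
            if bell:
                pairings += 1    # bell paired with the puff
        else:
            # bell alone elicits a blink only after conditioning
            blinks.append(bell and pairings >= pairings_needed)
    return blinks

# Bell alone before any pairing: no blink. After five pairings: blink.
before = respondent_conditioning([(True, False)])
after = respondent_conditioning([(True, True)] * 5 + [(True, False)])
print(before[-1], after[-1])  # False True
```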
Now, let's begin to focus on respondent functional relationships by discussing what a reflex is. We will use this information as a prelude to describing a process called respondent conditioning. A reflex is a simple relationship between a specific stimulus and an innate involuntary response, also called an unconditioned reflex. So a reflex is a simple relation between a specific stimulus and an innate involuntary response, also called an unconditioned reflex.
In a reflex, a response is elicited by a stimulus. Reflexes are highly stereotypic-- that means they occur the same way every time. Although some reflexes involve the skeletal, sometimes called the striped or striated, muscles, most reflexes involve the smooth muscles and glands. Smooth muscles change the dimension of various internal organs.
Not all reflexive responses involve muscles. For example, sweating, crying, and salivating are responses involving glands. Reflexes are mediated by the autonomic nervous system.
So when I refer to something as a reflex, I'm referring to both the stimulus and response. Remember that-- both the stimulus and response. I'm also referring to something that is innate or hard-wired-- that is, instinctual. When these reflexes occur, the cerebral cortex, which is the thinking part of the brain, is bypassed, and everything is handled in the spinal cord and the hypothalamus, which is referred to as the autonomic nervous system.
The types of responses involved in a reflex are often related to changes in organs and glands, but can also include more obvious movements via the skeletal muscles. Some common reflexes include the patellar reflex. Think of going to the doctor, when they tap your knee to see whether your leg jerks-- that's the patellar reflex. Hey, this is actually a good time for a little quiz. What do you call the time between when the doctor taps your knee and when the knee actually jerks? Good remembering. That would be the latency-- the time between the stimulus and the response. Sorry about that.
So other reflexes are the gag reflex-- you stick something into your throat, you automatically gag. Infants have a sucking reflex. We have a startle reflex, where if somebody comes up behind you and screams and you, as they say, jump out of your shoes, that's a startle reflex. And the pupillary reflex-- if somebody shines a light in your eye, your pupils automatically constrict.
OK, so I am now going to define some terms related to respondent behavior that will be important to understand when we begin to talk about respondent conditioning. An unconditioned stimulus is a stimulus which elicits an unconditioned response without prior learning-- that is, due to an innate capacity to do so. It is abbreviated as US, in capital letters, and it is the stimulus part of a reflex.
An unconditioned response is a response which is elicited by an unconditioned stimulus without prior learning-- that is, due to phylogenic provenance. It is abbreviated as UR, and it is the response part of a reflex.
The last term here is the neutral stimulus, which is a stimulus that has no effect on a particular behavior. So to sum that up a bit: when the US, the unconditioned stimulus, and the UR, the unconditioned response, are put together, we have a reflex.
So given the patellar reflex described above, the unconditioned stimulus is the tap on the knee, and the unconditioned response is the knee jerk. In the startle reflex, the US is the loud noise, and the UR is the jumping backwards, increased heart rate, and so on. Then we have the NS, or neutral stimulus, which at the present time has no effect on the behavior in question.
Now, we're going to talk about the conditioned stimulus, which is a stimulus that elicits a conditioned response due to prior learning-- that is, due to ontogenic provenance. It is abbreviated as CS. As we'll see, it is the stimulus part of a conditioned reflex.
And because we have a conditioned stimulus, we have a conditioned response, which is a response elicited by a conditioned stimulus due to prior learning-- that is, due to ontogenic provenance. It is abbreviated as CR, and it is the response part of a conditioned reflex.
So notice that the conditioned stimulus, or CS, is created through a learning process, as is the conditioned response, or CR. So now we have the US, the UR, and the NS, and now we also have the CS and the CR, all of which are key concepts in the respondent conditioning paradigm. So there's a reason why we've been learning this stuff. Respondent conditioning is also known as classical conditioning or Pavlovian conditioning.
Now, many of you might already know this, but Ivan Pavlov was a Russian physiologist who was researching how the salivary glands of animals function. For this purpose, he had several dogs with tubes attached to measure their saliva secretions. He started noticing that the dogs produced more drool, or salivation, when his lab assistant came into the room, and that this increase in salivation did not occur when anyone else entered the room.
So he started looking at that. It wasn't what his experiments were after-- he just serendipitously observed it, and thought he'd take a look. Based on further observation, he discovered that his lab assistant was in charge of feeding the dogs. This got him thinking about how the lab assistant became a conditioned stimulus for drooling, so he set up a series of experiments and subsequently discovered the principle of respondent conditioning. His basic respondent conditioning experiment is illustrated in the next slide.
So it started off with the fact that, before any conditioning occurred, there is a reflex whereby if you're hungry and food is presented, you will salivate. It's true of dogs, it's true of us. Because it's a reflex, we have two things going on: an unconditioned stimulus which elicits an unconditioned response. In this case, food is the unconditioned stimulus-- we don't have to learn to want food when we're hungry-- and salivation is the unconditioned response. When we're hungry and food is presented, we salivate without any prior learning.
OK, now the respondent conditioning comes into play. In his experiment, he said: I'm going to present the food and measure the salivation, but at the same time I present the food, I'm going to ring a bell. Now, the bell at this point in time was called the neutral stimulus, because it did not yet have any impact on salivation. So he would present food, ring the bell, and the dog would salivate. And he did that over and over and over again.
And what he found out afterwards was that he could actually eliminate the food, ring the bell, and the dogs would still salivate. Notice that after the conditioning, we have a conditioned stimulus, the CS, and a conditioned response, the CR. The conditioned stimulus was the bell, and the conditioned response was the salivation. So by pairing a neutral stimulus with an unconditioned stimulus, you have respondent conditioning, where the previously neutral stimulus, now called the conditioned stimulus, can elicit the original response.
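Pavlov's pairing process can be sketched as a toy model. This is purely illustrative-- the function name, the counting of pairings, and the threshold of five trials are all invented for the example, not empirical values:

```python
# Toy model of respondent conditioning: each bell+food pairing strengthens
# the bell-salivation association until the bell alone elicits the response.
# The threshold of five pairings is an invented, illustrative number.

def bell_elicits_salivation(pairings, threshold=5):
    """Return True once the bell alone elicits salivation (the CR)."""
    strength = 0
    for _ in range(pairings):
        strength += 1  # one bell+food pairing
    return strength >= threshold

# Before conditioning, the bell is a neutral stimulus: no salivation.
print(bell_elicits_salivation(0))   # False
# After repeated pairings, the bell is a conditioned stimulus eliciting the CR.
print(bell_elicits_salivation(10))  # True
```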
This type of conditioning process has actually been used to explain a variety of behaviors-- one of the most common being what psychologists call phobias. A behaviorist describes a phobia as fearful types of behaviors, such as physical avoidance, crying, cringing, and so on, that are produced by what was once a neutral stimulus. These neutral stimuli have acquired aversive properties because they have been associated, or paired, with other stimuli-- unconditioned stimuli-- that naturally produce fear.
Now, a fellow named Watson conducted a classic, and to be honest somewhat disturbing, experiment in which he produced a phobia in a young child named Albert. He did this by placing a white rat, which at the time did not frighten or bother the child at all, next to the child, and whenever he put the rat with the child, he would start banging loudly on pipes.
The result was that the child, who previously had no problem with white furry critters at all, developed a phobia and was now deathly afraid of rats and other white furry animals, like cute little bunnies. Whenever a small white animal was placed next to him, he exhibited the same startle behaviors that were associated with the sudden loud noise of banging the pipes. Now, the good news is that they then reversed the phobia. So can you guess how they might have done that?
Yeah. What they did was start presenting white furry animals paired with a pleasant stimulus. And later on, after the conditioning and pairing, the kid actually liked little bunnies and things like that.
So here are a couple of questions to contemplate about this process of pairing neutral stimuli with conditioned or unconditioned stimuli. And I pose these questions really as a way of stressing that respondent conditioning is not just something done in labs by slightly strange people in white coats.
So think about-- why do commercials include attractive people when the product has nothing to do with their appearance? In terms of respondent conditioning, why are some of us afraid of dentists? They're usually nice people. Why does a dog cringe when you say, bad dog? Does he really understand English? And finally, here's a tough one. Why does a lie detector detect lies? How does that work?
All right. Here's a clue, because it is a little tougher, if you need it. What often happened when you told a lie as a child, before you got really good at it? Right. Oftentimes, you'd get caught. What happened when you got caught telling a lie? So I think you can start figuring out why lie detectors can detect physical responses when people lie, unless the person has absolutely no emotions whatsoever.
So anyway, I have two more pieces of information about respondent conditioning. In higher order conditioning, a neutral stimulus is paired with a previously conditioned stimulus, rather than with an unconditioned stimulus. And in general, the higher the order-- that is, the further away you get from the unconditioned stimulus-- the weaker the conditioning.
Respondent extinction is the process through which a CR, or conditioned response, is weakened by discontinuing the pairing of the CS with the US. That is, repeatedly presenting the CS without the US until the CS no longer elicits the CR.
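The extinction process just defined can be sketched in the same toy style as the conditioning model. The associative-strength values and the decay rate here are invented for illustration, not empirical quantities:

```python
# Toy sketch of respondent extinction: each CS-without-US trial weakens the
# association, with a floor at zero. Numbers are illustrative only.

def extinguish(strength, cs_alone_trials, decay=1):
    """Reduce associative strength by `decay` per CS-alone trial (floor at 0)."""
    for _ in range(cs_alone_trials):
        strength = max(0, strength - decay)
    return strength

s = 5                     # strength built up during conditioning
print(extinguish(s, 3))   # 2: the CR is weakened but not yet gone
print(extinguish(s, 10))  # 0: the CS no longer elicits the CR
```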
I hope all this is making sense. So I think now would be a good time to do another self-evaluation activity, and come on back and go into the next segment when you're ready.
Please complete the Lesson 1 Segment 6 Activity.
CHARLES HUGHES: All right, now on to operant behavior. Later in the 20th century, building on Pavlov's work, as well as work by people such as Watson and Thorndike, B.F. Skinner began to focus on operant behavior-- that is, behavior that operates or acts on the environment and is controlled, to a great extent, by the environmental consequences of the behavior. And again, we've seen this paradigm several times. But this is the operant conditioning paradigm, which is different from the respondent paradigm, which only includes the S and the R.
So as I've mentioned before, this three-term contingency, which is at the core of how operant behavior occurs, includes the stimulus that comes before the behavior, the behavior itself, and the stimulus that occurs as a consequence of the behavior occurring. So here are some examples.
So every time Eli hears his mother say, bedtime, Eli screams and hits his head. His mother sits with him and reads a story to calm him down. So the antecedent stimulus here is his mother saying, bedtime. And the behavior in question is Eli starts screaming and hits his head. The consequence of that behavior is his mother sits down and reads him a story.
And here's another one. So Jose's teacher says, Time to start work. Jose responds by pulling his hair. Jose's teacher calms him down and then has him return to work.
So the antecedent stimulus in this case was the teacher saying, It's time to work. The behavior in response to that stimulus was Jose pulling his hair. And the consequence of him doing that was the teacher going to him, calming him down, and having him return to his work.
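One minimal way to record these two three-term contingencies is as a simple data structure. The class and field names here are my own, chosen for illustration, not standard ABA notation:

```python
# A small record type for the three-term contingency: antecedent stimulus,
# behavior, and consequence. Names are illustrative, not standard notation.

from dataclasses import dataclass

@dataclass
class Contingency:
    antecedent: str   # stimulus that comes before the behavior
    behavior: str     # the response itself
    consequence: str  # stimulus or event that follows the behavior

eli = Contingency(
    antecedent="Mother says 'bedtime'",
    behavior="Eli screams and hits his head",
    consequence="Mother sits with him and reads a story",
)
jose = Contingency(
    antecedent="Teacher says 'time to start work'",
    behavior="Jose pulls his hair",
    consequence="Teacher calms him down and returns him to work",
)

for record in (eli, jose):
    print(record.antecedent, "->", record.behavior, "->", record.consequence)
```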
So operant behavior acts on the environment, which results from movement of the skeletal frame. Operants are usually contractions of the skeletal, or striated, muscles.
Operant behavior is voluntary versus reflexive, and is emitted versus elicited. Now, operant behavior is any behavior whose future frequency is determined by its consequences. That is, operant behavior is shaped and maintained by past consequences. Because consequences play such a critical role in operant conditioning, let's take a closer look at them.
A consequence is a stimulus or event that follows a response-- an event that occurs after a response.
Two principal types of consequences are reinforcing consequences and punishing consequences. Each type is further divided into two types: reinforcement can be positive reinforcement or negative reinforcement, and punishment can be positive punishment or negative punishment.
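This two-by-two division can be captured as a small lookup table. The terminology comes from the lecture; the helper function itself is just an illustrative sketch:

```python
# The four basic consequence types, keyed by whether a stimulus was added or
# removed and whether the behavior increases or decreases in the future.

def classify(stimulus_change, behavior_change):
    """stimulus_change: 'added'/'removed'; behavior_change: 'increases'/'decreases'."""
    table = {
        ("added",   "increases"): "positive reinforcement",  # SR+
        ("removed", "increases"): "negative reinforcement",  # SR-
        ("added",   "decreases"): "positive punishment",     # SP+
        ("removed", "decreases"): "negative punishment",     # SP-
    }
    return table[(stimulus_change, behavior_change)]

print(classify("added", "increases"))    # positive reinforcement
print(classify("removed", "increases"))  # negative reinforcement
```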
We'll get back to positive and negative reinforcement and punishment in a minute. But now it's time for another self-evaluation activity.
Please complete the Lesson 1 Segment 7 Activity.
CHARLES HUGHES: OK. Now, we're starting on the last segment of the lesson in which we're going to take a closer look at how consequences reinforce behavior or punish behavior.
This slide summarizes the four basic types of consequences and what kind of effect they can have on behavior. The arrows indicate whether the effect of a consequence is an increase or a decrease in the behavior. The plus sign means that a consequence was added when the behavior occurred, and the minus sign means that a consequence was taken away upon the occurrence of the behavior. So the SR plus stands for positive reinforcement, which means that a consequence was added and the behavior increased.
The SR negative stands for negative reinforcement. That means a consequence was taken away and the behavior increased.
The SP plus and SP minus deal with punishment, whereby the behavior is decreased. Punishment will be covered in the next session of this course, so I'm not going to deal with it here. What I'm going to be dealing with is only reinforcement.
So let's define it. Reinforcement is an environmental change which follows a response and which increases or maintains the future frequency of that response class, or behavior.
Reinforcement may follow a response because of a natural causal relation, because someone arranges it, or purely by accident.
Now, based on that definition, reinforcement is a change or stimulus in the environment that occurs after a response. It is contingent upon that response occurring. And it must increase-- or at the very least, maintain-- the behavior in question in the future.
So the principle of reinforcement can be stated thusly-- a response will occur more frequently, or will be maintained, if a reinforcer has immediately followed the response in the past.
Now I'm going to ask some questions that will probe your understanding of reinforcement to this point. This one's a little tricky. First, is praising a student behavior-- praising whatever the student did-- is that reinforcement?
The answer is, not necessarily. While we often give things the label of reinforcement because we think it should be reinforcing, the only way to know if something is a reinforcer is if the target behavior increases or maintains. Some kids don't like praise, and it has no effect on their behavior, or it can possibly result in a decrease in their behavior.
If I asked the same question about physical restraint, what would you say?
Well, the same answer for the same reason. Different kids respond differently to different consequences. And the only way to ascertain if reinforcement has occurred is if the consequence-- be it praise, be it restraint, or whatever-- increases or maintains the behavior.
So here's another question. I tell my students that I will give them some tokens that they can cash in later if they finish all their work during the period. Is telling them that they could earn tokens reinforcement?
Not technically. Remember, reinforcement is an environmental change that follows a response and is contingent upon the response occurring. In this scenario, telling them they could earn tokens occurred before the behavior occurred. So it's not technically reinforcement.
Now, two other points before moving on. First, please remember that you reinforce behavior, not the person. The name of this course is "Applied Behavior Analysis," not "Applied Person Analysis." So we reinforce behaviors, not people. For example, I would not say I reinforced my students for getting their work done, but rather that I reinforced my students' work completion.
Now, the second point deals with the popular view of ABA, and specifically the principle of reinforcement. And that is, reinforcement has become associated with the concept of manipulation, as if it were an artificial tool created to make people do what you want.
Now, as I've mentioned before, it is important to note that the principle of reinforcement was not created or invented. It was discovered. It occurs naturally and is, to a large part, how all living organisms learn and develop behavioral repertoires. It is, in fact, a very efficient way to learn, and is a product of natural evolution-- especially for those animals, like humans, whose behavior is not, to a large degree, reflexive or instinctual. Much of human behavior is controlled by naturally-occurring consequences, and societies are based, to a large part, on consensual consequences for behavior.
So here's some examples. A baby makes some cute sounds and smiles when his mother approaches. So she spends time cuddling and playing with him. The mother's response to her baby's cooing increases the frequency of pleasant sounds and smiles by the baby. This, in turn, increases the amount of time the mother spends playing with the baby and so on.
Another one. I read a science fiction book by Isaac Asimov, and I really enjoy it. I've been trying to read more of his works and, in fact, to read science fiction books by other authors. And I have continued this behavior for probably about 25 years.
Another example of reinforcement in action that is not artificial or whatever. I work at a job-- well, this one's kind of artificial. I work at a job, and I am paid for doing it. Now, it is certainly possible that I am reinforced by other consequences than being paid, but I really doubt if I would continue working at a job if they stopped paying me-- unless, of course, I was independently wealthy. And you're going to learn about how antecedent conditions, such as being wealthy, can affect the power of a reinforcer later in the ABA course.
Now, unfortunately, there are people for whom naturally-occurring reinforcers fail to maintain appropriate behavior. So as behavior change professionals, we sometimes have to go looking for other, more powerful reinforcers and actually apply them in a purposeful way. Now, let's talk about a specific type of reinforcement, and that is positive reinforcement.
Positive reinforcement is an environmental change in which a stimulus is added-- that's the positive-- or magnified following a response, which increases or maintains the future frequency of that behavior.
The key point here is that something is added or magnified following or during a response. Keep in mind that the term positive in positive reinforcement does not mean increasing behavior or refer to something good or nice. The positive in positive reinforcement refers to adding something to the environment following a response.
Remember that positive reinforcement is represented by the capital R with a plus sign next to it. And what do you do when you see a plus sign on a math problem? You add. Thus, positive reinforcement is when a stimulus is added contingent upon a behavior occurring, and the behavior increases or maintains in the future. And that's what you really need for something to be positive reinforcement.
Keep in mind that positive reinforcement is the process whereby a consequent stimulus is added following a response. Sometimes, when we talk about ABA with others, we want to refer to the stimulus rather than the process, and we have a slightly different term for that.
Sometimes, we talk about a positive reinforcer. A positive reinforcer is a thing-- specifically, a stimulus that, when presented following a response, increases or maintains the future frequency of that response. These reinforcers may include tangible items, attention from other people, and activities.
However, positive reinforcement may also be automatic, in which case the proprioceptive feedback produced by the response itself is reinforcing. And I'll talk about that in just a minute.
So a reinforcer is a thing. In this case, this thing is a consequent stimulus. And reinforcement is the process or procedure. For example, if I give one of my students some additional free time for working diligently on a task that is especially frustrating for her, and this diligent behavior increases or is maintained, the reinforcer is free time. The process of what I set up is called reinforcement.
So let's look at another animation to illustrate positive reinforcement.
[ELECTRONIC NOISES]
BOY: Man!
WOMAN: Wow. I didn't know you were so good at that.
[ELECTRONIC NOISES]
BOY: Yes! I got first place. Yes!
PROFESSOR: So in this animation, we have a young man playing a video game. And he kind of gets tired of it, and so he begins to quit. And then a young lady comes in, notices his score, and pays some attention to him, says, wow, you're really good, and all that type of thing. And then the boy picks up the controller and begins to play even more.
And the intimation here is that he continues to play, that his behavior of game-playing is increased or maintained in the future. So we have here a possible example of positive reinforcement where the positive attention the girl gave him could be the reinforcer, and so we have a consequence occurring that increases a behavior.
All right. So what I'd like to do now to wrap up this part on positive reinforcement is talk about what we call a few rules for reinforcement. Something to keep in mind as you might begin to apply reinforcement. First of all, you cannot tell if a given event or stimulus will be a reinforcer unless you try it and observe its effect.
Again, it goes back to my question about-- is praise reinforcement? Well, it could be, and maybe it isn't. The only way we're going to know is if it is presented contingent upon a behavior occurring, and that behavior increases or maintains in the future. So there are pieces of information you must know in order to say, yes, this is a reinforcer, or reinforcement occurred.
Conversely, as I mentioned before, keep in mind that what may be a reinforcer for one person may not be a reinforcer for another. I think, if you've taught for any length of time, you've learned that some kids like to be touched, and some kids do not. So for one kid, my putting a hand on his or her shoulder after he or she does something will actually turn out to be a reinforcer, whereas for another kid, if I touch them, quite the opposite.
When you use reinforcement, it is most effective when the reinforcer follows the behavior immediately. If you wait a day or two to present the reinforcer after a behavior, it will not work. So you want, whenever possible, the reinforcer to be presented soon after the behavior occurs.
And reinforcement must be contingent. There are some times when non-contingent reinforcement can work, but in general, you don't deliver the reinforcer unless the behavior occurs. So it's very contingent upon the behavior occurring.
We also know, as you're going to see when you start studying schedules of reinforcement, that typically, when it's a new behavior, and you're trying to strengthen it and make it occur more often, you have to reinforce it more frequently. And of course, after a while, you can start to thin that out.
And again-- sorry, I've got to say it again-- behavior is reinforced, not the person.
So next up is another type of reinforcement-- and it'll be the last topic we cover-- and that is negative reinforcement. Negative reinforcement is an environmental change in which a stimulus is subtracted-- or you can say withdrawn, removed, or attenuated, which means lessened-- following a response, which increases or maintains the future frequency of that behavior.
All right. So keep in mind that the word negative in negative reinforcement refers to taking away or subtracting a stimulus-- in this case, an unpleasant or aversive stimulus.
The term negative reinforcement is represented by a capital R with a minus sign next to it. And what do you do when you see a minus sign? You take away or subtract.
The key thing to remember is that both positive and negative reinforcement increase behavior. Positive reinforcement occurs when a pleasant-- to that person-- stimulus is added contingent upon a response, and results in the future increase or maintenance of the response. Negative reinforcement occurs when an unpleasant stimulus is subtracted or removed contingent upon a response or behavior, and results in the future increase or maintenance of that response.
And here's an example. Most cars nowadays have an annoying buzzer that goes off when you try to drive without your seat belt. So you start wearing your seat belt more. Thus, the behavior of wearing your seat belt is negatively reinforced by terminating or removing the unpleasant sound.
Another example of negative reinforcement is when a student is given an assignment to do in class, and the student starts cursing and throwing things. The teacher then sends the student to the office. And in the future, the student starts exhibiting that behavior any time he's given something to do.
Now, in terms of the student's behavior, what may be going on is that his behavior of acting out results in the removal of an unpleasant or aversive stimulus, which is working on the assignment. So if his acting out behavior is maintained or increases in the future whenever the teacher gives similar types of assignments, and he continues to act out, we can say that his acting out behavior is being negatively reinforced.
Now, before we leave this scenario, let's take a look at the teacher's behavior of sending students out of class. Is it possible that negative reinforcement may be at work here also? Well, maybe. If the teacher's behavior of removing the student is increased or maintained by removing the unpleasant stimulus of a student's acting out behavior, does that fit our definition of negative reinforcement?
Yeah. Sure. Quite possibly.
We have a stimulus that's being removed-- that would be the student and his behavior-- and the teacher keeps removing students in the future so that he can avoid that particular behavior. So just as an aside, this scenario occurs often when students are given assignments that are too difficult for them. Sometimes, a situation can be taken care of by simply spending a little more time making sure that the tasks that we give our students are appropriate to them, and then we don't need all this reinforcement stuff going on.
Here are a few more terms related to negative reinforcement.
For negative reinforcement to occur, there has to exist an irritant or aversive antecedent condition whose removal would be reinforcing.
A negative reinforcer is a stimulus that, when withdrawn following a response, increases or maintains the future frequency of that response.
Here's another term related to negative reinforcement, and that is escape. Escape is behavior that terminates or stops an aversive stimulus. Thus, it is maintained by negative reinforcement.
Now, the last term is called avoidance. So we had escape behavior. Now, we have avoidance behavior.
Avoidance behavior terminates a warning stimulus-- a conditioned aversive stimulus whose presence is correlated with the upcoming onset of an unconditioned aversive stimulus. And I'm going to give you some examples, so don't panic yet.
Avoidance delays the onset of the aversive stimulus. So sometimes, negative reinforcement involves escaping an aversive stimulus that is already occurring. Sometimes it involves avoiding the stimulus altogether by terminating some type of warning stimulus.
So again, we see that a negative reinforcer is the stimulus part of the process of negative reinforcement. It is what is being withdrawn or taken away. Two terms related to negative reinforcement are escape and avoidance. And as we saw in the slides, escape behavior terminates, or stops, an aversive stimulus while it is happening. So if I am being rained on, I can escape the aversive stimulus of getting wet by opening my umbrella. My umbrella-opening behavior is thus maintained by negative reinforcement, because opening the umbrella stopped me from getting wet.
On the other hand, avoidance terminates some type of warning that lets us know that it is likely that the aversive stimulus is about to occur. And if we do something quickly, we can avoid it.
For example, in a classic animal experiment, a rat learned to press a lever to escape an electric shock. Some of these experiments are kind of iffy. But later on, a light was turned on immediately before the shock. The rat learned to avoid the shock by pressing the lever when the light came on. So the light became a warning system, and the rat would press the lever before the shock occurred, and thus avoid the shock.
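The escape/avoidance distinction in the rat experiment can be sketched as a simple decision rule. This is just a study aid I'm adding for illustration-- the function name and event labels are my own, not standard ABA terminology:

```python
def classify_lever_press(pressed_during: str) -> str:
    """Classify the rat's lever press in the classic shock experiment.

    'shock'   -> the press terminates an aversive stimulus already present (escape)
    'warning' -> the press terminates the warning light before the shock (avoidance)
    """
    if pressed_during == "shock":
        return "escape (maintained by negative reinforcement)"
    if pressed_during == "warning":
        return "avoidance (maintained by negative reinforcement)"
    return "unclassified"

print(classify_lever_press("shock"))    # escape (maintained by negative reinforcement)
print(classify_lever_press("warning"))  # avoidance (maintained by negative reinforcement)
```

Notice that both branches end in negative reinforcement: in each case, a response is strengthened because an aversive stimulus (or its warning) is terminated.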
OK. Let's do another real quick little quiz here. In the above experiment, what is the negative reinforcer? The negative reinforcer.
Right. It's the shock. Remember, the negative reinforcer is the unpleasant stimulus whose termination immediately following a response reinforces that response.
So what response or behavior was negatively reinforced in the rat scenario? Right. The behavior that was reinforced was pressing the lever, and the lever was the thing that got the rat out of being semi-electrocuted.
So let me wind up the discussion about negative reinforcement with another classic example using some animation.
BOY: I want candy.
WOMAN: No, honey. It's too close to dinner, and you've already had some candy earlier today.
BOY: Aw. Can I please have candy?
WOMAN: No.
BOY: [WHINING] Just give me some candy. I'm starving to death, and I really want candy. That's why I really need some to help me get some more food.
WOMAN: All right. Fine. You can have some candy.
BOY: I want candy.
PROFESSOR: OK. I call this scenario "The Kid, the Candy, and the Checkout Counter." So does that sound familiar? Well, if you're a parent, you've probably gone through this.
So let's analyze that animation about what was going on there. Let's take a look at the mother's behavior. So what happened at the end is the mother gave in and gave the kid the candy. And then in the final scene, we see the mother in another situation. The kid asks for it, and she gives it to him right away.
And also you notice that, in the grocery store, the mother said no several times. The kid escalated. His behavior was extremely aversive. And not only that, we had the little group of disapproving people standing there judging her. So essentially, here we had the mother with a lot of aversive stimuli around her-- her child's whining, the embarrassment of that, the disapproving stares, and so on.
And so if we take a look at her behavior, which is giving the kid candy, we can say that her behavior was negatively reinforced. Giving the kid the candy-- then and in the future-- eliminated, reduced, or avoided the unpleasant stimuli of the kid whining, as well as other people judging her in negative ways. So looking at the mother's behavior, it's pretty clear that negative reinforcement was going on.
Now, what's interesting about a lot of behavioral scenarios like this is that there are other people behaving. So if we take a look at the child's behavior, think about what's going on. In that case, the child's behavior of whining could easily have been positively reinforced, in that a pleasant stimulus was delivered contingent upon his whining. And in the future, he will likely use whining to get his way-- which happens a lot.
We wonder why our kids keep whining. Well, probably because we're reinforcing that behavior. So hopefully that animation was useful in illustrating negative reinforcement and positive reinforcement.
In the next slide, I have put together a chart on reinforcement that you could use. This information is going to be in your assignments and in your quizzes, so feel free to use it as a study aid also if it helps.
So it's a summary of reinforcement. And so you see again we have an arrow. An arrow means that response is increased or maintained in the future. That's part of the definition of reinforcement. When we have the plus sign, that means we're adding a stimulus. And reinforcement by adding a stimulus is called positive reinforcement or R plus. We have the minus sign which relates to negative reinforcement, and the minus sign means that we are removing or subtracting a particular stimulus.
To summarize, both positive and negative reinforcement are similar in that they both maintain or increase behavior. They differ in that positive reinforcement involves the addition of a pleasant stimulus, while negative reinforcement involves the subtraction or avoidance of an unpleasant, aversive stimulus.
Now, I'd like to change gears here a bit before we wrap up, and discuss an issue that often comes up when ABA students are assessed on their knowledge of basic principles, such as the ones that I have covered in this session. Often, the format for this assessment is to present scenarios, such as the ones that I've described, and some of the animations, and ask that you identify the behavioral principle that is in play. So you're given this scenario and then asked to decide if it's, say, an example of positive reinforcement or negative reinforcement. This format is going to be used in your quizzes and tests for this course, as well as on the national board exam.
This is sometimes difficult for students at first, so I wanted to take a minute to provide you with some suggestions about how to go about answering these types of test items. First, when you are taking exams, you should focus on the information presented in that scenario. And don't read too much into it or go beyond the facts that are presented in that scenario. Then the first step is to figure out whose behavior is the focus of the scenario or item. If you read it carefully, you should be able to tell, say, if it's asking about the mother's behavior or the child's behavior. So you want to find out whose behavior are we talking about.
Next, you want to decide what principle is in play with regard to that key person's behavior. For example, if a scenario says that a behavior increased, you automatically know we're talking about reinforcement, because reinforcement is when behavior increases. Then you look at whether a stimulus was added or subtracted contingent upon the behavior occurring, to figure out whether it is positive reinforcement or negative reinforcement. The key here is to deal only with the facts that are presented to you, and not to go beyond the information presented.
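The two-step test just described can be sketched as a small helper function. Again, this is only an illustrative study aid with hypothetical names, not anything from the course materials:

```python
def identify_principle(behavior_change: str, stimulus_change: str) -> str:
    """Apply the two-step test from the lecture.

    Step 1: Did the behavior increase/maintain (reinforcement) or decrease (punishment)?
    Step 2: Was a stimulus added (positive) or removed (negative) contingent on the behavior?
    """
    if behavior_change == "increase":
        kind = "reinforcement"
    elif behavior_change == "decrease":
        kind = "punishment"
    else:
        return "unknown"
    sign = "positive" if stimulus_change == "added" else "negative"
    return f"{sign} {kind}"

# The checkout-counter scenario, from the mother's point of view:
# her candy-giving behavior increased, and an aversive stimulus
# (the whining) was removed contingent on it.
print(identify_principle("increase", "removed"))  # negative reinforcement
```

From the child's point of view, the same scenario would be `identify_principle("increase", "added")`, since a pleasant stimulus (candy) was added contingent on his whining.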
So let's wind things up. I'm going to do a quick summary of the session content, and wind up with a final self-evaluation activity for you to do. Please remember that the primary goal of this session was to define terms and principles. Specific application of these principles comes later in this course, as well as in the following courses.
The content I presented has stressed the following points. ABA is a science, and the principles discovered in this branch of natural science have been identified through experimentation and extensive replication. ABA is concerned primarily with environmental stimuli, both antecedent and consequent, which reliably alter a behavior or response. We call this type of relationship between a stimulus or stimuli and behavior a functional relationship.
Now, while primarily concerned with environmental determinants of behavior, applied behavior analysts are aware that behavior can also be determined by organic variables, including biological, genetic, and neurological factors. But applied behavior analysts are concerned with observable and measurable behavior, and focus on functional response classes. That is, topographically different behaviors that are functionally similar, in that they result in the same environmental effect.
We talked about how behavior itself has fundamental properties related to its occurrence in space and time, such as temporal locus, temporal extent, and repeatability. And based on the property of interest to the analyst, we can quantify dimensions of behavior in several ways: latency, duration, rate, IRT, and so on.
When we talk about the environment, we're saying the environment can be seen as a constellation of stimuli-- some of which may occur before a response, and some of which may occur after a response-- and that these stimuli or environmental changes can impact or control behavior. Some of these stimuli are innate or unconditioned, and some are learned or conditioned. Applied behavior analysts classify behavior as either respondent or operant.
Some behavior is learned through respondent conditioning, whereby a neutral stimulus is paired with an unconditioned stimulus. The neutral stimulus then becomes a conditioned stimulus that can elicit the response in question. So that's respondent conditioning.
Some behaviors are learned through operant conditioning, which involves the presentation or removal of a consequence contingent on the occurrence of the behavior. The two basic principles related to operant learning are reinforcement, which increases behavior, and punishment, which decreases behavior. And finally, for both reinforcement and punishment, consequent stimuli are added or subtracted. Remember, in ABA, positive means added-- not that something is good or nice-- and negative means taken away, not that something is bad.
Now, if all that made some sense to you, my job and yours is almost done. To complete it, I would like you to go ahead and do the last self-evaluation. And thank you for your time and attention.
Please complete the Lesson 1 Segment 8 Activity.
Complete the Lesson 1 Practice Activity.
This activity covers all Lesson 1 segments.
Please complete and submit Assignment 1 by the due date (see syllabus). This assignment consists of fill-in-the-blank and multiple-choice questions. It will be activated one week before the due date.