Operant Conditioning and Behaviorism
An historical outline
Around the turn of the twentieth century, Edward Thorndike set out to develop an objective experimental method for studying the mechanical problem-solving ability of cats and dogs. Thorndike devised a number of wooden crates which required various combinations of latches, levers, strings and treadles to open them. A dog or a cat would be put in one of these 'puzzle-boxes' and, sooner or later, would manage to escape from it. Thorndike's initial aim was to show that the anecdotal achievements of cats and dogs could be replicated under controlled, standardized conditions; however, he soon realized that the same equipment could be used to measure animal intelligence. His method was to set an animal the same task repeatedly, each time measuring how long it took to solve it. Thorndike could then compare these 'learning curves' across different situations and different species.
Thorndike was particularly interested in discovering whether his animals could learn their tasks through imitation or observation. He compared the learning curves of cats that had been given the opportunity to observe others escaping from a box with those of cats that had never seen the box being solved, and found no difference in their rate of learning. He obtained the same null result with dogs and, even when he showed the animals the method of opening a box by placing their paws on the appropriate levers and so on, he found no improvement.

He fell back on a much simpler trial-and-error explanation of learning. Occasionally, quite by chance, an animal performs an action which frees it from the box. When the animal finds itself in the same position again, it is more likely to perform the same action. The reward of being freed from the box somehow strengthens an association between a stimulus (being in a certain position in the box) and an appropriate action: reward acts to strengthen stimulus-response associations. The animal learns to solve the puzzle-box not by reflecting on possible actions and genuinely puzzling its way out, but by a quite mechanical selection of actions originally made by chance. By 1910 Thorndike had formalised this notion into a 'law' of psychology - the law of effect. In full it reads: "Of several responses made to the same situation those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections to the situation weakened, so that, when it recurs, they will be less likely to occur. The greater the satisfaction or discomfort, the greater the strengthening or weakening of the bond." Thorndike maintained that, in combination with the law of exercise (the notion that associations are strengthened by use and weakened by disuse) and the concept of instinct, the law of effect could explain all of human behavior in terms of the development of myriads of stimulus-response associations.

It is worth briefly comparing trial-and-error learning with classical conditioning. In classical conditioning a neutral stimulus becomes associated with part of a reflex (either the US or the UR). In trial-and-error learning no reflex is involved: a reinforcing or punishing event (a type of stimulus) alters the strength of the association between a neutral stimulus and a quite arbitrary response, and the response is not part of any reflex.
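The law of effect lends itself to a simple computational illustration. The following is a minimal sketch, not drawn from Thorndike's own work: the list of responses, the choice of 'pull string' as the effective response, and the strengthening and weakening increments are all illustrative assumptions.

import random

RESPONSES = ["press lever", "pull string", "step on treadle", "scratch at door"]
CORRECT = "pull string"  # hypothetical effective response for this box

# Strength of the bond between the box-stimulus and each response.
strengths = {r: 1.0 for r in RESPONSES}

def choose_response():
    # Respond with probability proportional to association strength.
    return random.choices(RESPONSES, weights=[strengths[r] for r in RESPONSES], k=1)[0]

def run_trial():
    # One confinement in the box: respond until the box opens, count the attempts.
    attempts = 0
    while True:
        attempts += 1
        response = choose_response()
        if response == CORRECT:
            # 'Satisfaction' (escape and food) strengthens the S-R bond.
            strengths[response] += 0.5
            return attempts
        # 'Discomfort' (remaining shut in) weakens the bond slightly.
        strengths[response] = max(0.1, strengths[response] - 0.05)

# Attempts per trial tend to fall over repeated trials - a crude learning curve.
print([run_trial() for _ in range(30)])

Run repeatedly, the number of attempts per trial tends to decline across trials, giving a rough analogue of Thorndike's learning curves without any 'reflection' on the animal's part - which is exactly the point of his mechanical account.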
The behaviorist position - that human behavior could be explained entirely in terms of reflexes, stimulus-response associations and the effects of reinforcers upon them, entirely excluding 'mental' terms like desires, goals and so on - was taken up by John Broadus Watson in his 1914 book 'Behavior: An Introduction to Comparative Psychology'. Watson had also been involved in the introduction of the most favoured subject in comparative psychology - the laboratory rat. One of his early jobs, which he used to fund his Ph.D., was as a caretaker whose duties included looking after laboratory rats used in studies intended to mimic 'real-life' learning tasks such as navigating complex mazes. Watson became adept at taming the rats and found he could train them to open a puzzle-box like Thorndike's for a small food reward. He also studied maze-learning, but simplified the task dramatically.

One type of maze is simply a long, straight alley with food at the end. Watson found that once the animal was well trained at running this 'maze' it did so almost automatically. Once started by the stimulus of the maze, its behavior becomes a series of associations between movements (or their kinaesthetic consequences) rather than between stimuli in the outside world. This was made plain by shortening the alleyway: the well-trained rats ran straight into the end wall. This became known as the 'kerplunk' experiment. The development of well-controlled behavioral techniques also allowed Watson to explore animals' sensory abilities experimentally - for example, their ability to discriminate between similar stimuli.

Watson's theoretical position was even more extreme than Thorndike's - he would have no place for mentalistic concepts like pleasure or distress in his explanations of behavior. He essentially rejected the law of effect, denying that pleasure or discomfort caused stimulus-response associations to be learned. For Watson, all that mattered was the frequency of occurrence of stimulus-response pairings. Reinforcers might cause some responses to occur more often in the presence of particular stimuli, but they did not act directly to cause their learning. Watson could therefore reject the notion that some mental trace of stimuli and responses must be retained in an animal's mind until a reinforcer causes an association between them to be strengthened - a rather mentalistic consequence of the law of effect.