General issues in interpretation
Adopting a 'problem solving' approach
Interpretation of CoPS results requires some thought. CoPS is a complex instrument and a careful, problem-solving approach is necessary. Teachers should resist the temptation to seek instant answers but instead should get used to considering a number of essential issues before reaching a conclusion. At first this approach may seem unfamiliar and a little slow, but with experience the task becomes quicker and easier. It is important not to lose sight of the fact that the interpretation (particularly in the case of the ‘at risk’ student) is likely to have a significant effect on the student’s education, and such decisions should not be made lightly or hurriedly.
A brief guide to interpretation is given in Brief pointers for interpretation of results. This may be used as a starting point, but it does not on its own provide sufficient information for a full interpretation to be carried out.
In this section, unless otherwise specified, the terms ‘scores’ or ‘results’ should be taken to mean measures of the accuracy of the student’s performance. In addition, however, CoPS gives the teacher information on the speed of the student’s responses.
Consistent with sound educational practice in general, and with the Special Educational Needs and Disability Code of Practice: 0-25 years (2014) in particular, teachers should not regard assessment as a single event, but rather as a continuing process. CoPS results should be considered together with other information about the student, including formal test data and informal observations made by the teacher. Strategies for intervention should not be regarded as set in stone but should be flexible and responsive to a student’s progress (or lack of progress). When reviewing a student’s progress, it may be helpful to reassess the student using CoPS, or, if the student is between 8 and 11 years by this time, LASS 8–11.
Guidance for interpretation table
Figure 32. Example of the Guidance for Interpretation section of the report
The Guidance for Interpretation table on the report provides enhanced guidance for interpreting each student’s results. Match the guidance to the CoPS Indications for Action Table found on the GL Ready Support website (www.glreadysupport.com). Interpreting the results from CoPS requires interpretation of the overall profile, and not just consideration of each individual subtest separately. Please see the Case Studies for further guidance on interpreting the whole profile.
It is not possible here to give a detailed account of the nature of dyslexia. Readers are recommended to consult Reid (2016).
In 2007, the British Dyslexia Association adopted the following definition of dyslexia:
‘Dyslexia is a specific learning difficulty that mainly affects the development of literacy and language related skills. It is likely to be present at birth and to be life-long in its effects. It is characterised by difficulties with phonological processing, rapid naming, working memory, processing speed, and the automatic development of skills that may not match up to an individual’s other cognitive abilities. It tends to be resistant to conventional teaching methods, but its effect can be mitigated by appropriately specific intervention, including the application of information technology and supportive counselling.’
The rationale behind CoPS is the identification of cognitive precursors of dyslexia and other problems in the development of literacy and numeracy, which the teacher can use (together with other information about the student) to formulate flexible intervention strategies with which to tackle the problems before they precipitate outright failure (see Singleton, 2002, 2003). This is entirely consistent with the Special Educational Needs and Disability Code of Practice: 0-25 years (2014) which stresses the importance of early identification of special educational needs.
Characteristics of dyslexia
Dyslexia is a variable condition and not all students with dyslexia will display the same range of difficulties or characteristics. Nevertheless, the following characteristics have been the most widely noted in connection with dyslexia.
- A marked inefficiency in the working or short-term memory system (Beech, 1997; Gathercole et al., 2006; Jeffries and Everatt, 2004; McLoughlin, Fitzgibbon and Young, 1993; Rack, 1997; Thomson, 2001). Memory difficulties may result in problems in retaining the meaning of text (especially when reading at speed), failure to marshal learned facts effectively in examinations, and disjointed written work or omission of words and phrases in written examinations, because students have lost track of what they are trying to express.
- Inadequate phonological processing abilities, which affect the acquisition of phonic skills in reading and spelling so that unfamiliar words are frequently misread; this may in turn affect comprehension. Not only has it been clearly established that phonological processing difficulties are seen in the majority of children with dyslexia (Snowling, 2000; Catts et al., 2005), but research has also indicated that this occurs in many adults with dyslexia (Beaton, McDougall and Singleton, 1997a; Ramus et al., 2003).
CoPS results and dyslexia
The chapters that follow show how CoPS results can be used very effectively to identify dyslexia in most cases. Although the composition of the CoPS subtests was determined by statistical analysis of longitudinal research data (see Research and statistical information), CoPS nevertheless fits the phonological deficit model more closely than it fits alternative models of dyslexia. Hence it should be expected that CoPS will be at its most effective in identifying students with the ‘classic’ form of dyslexia – which accounts for by far the majority of cases – characterised by cognitive difficulties that most notably affect the mapping of graphemes onto phonemes. But CoPS is actually rather broader in scope than might first meet the eye. Since it includes a number of key visual memory measures, CoPS is also adept at picking up ‘atypical’ cases of dyslexia where, instead of phonological deficits predominating, the chief problem concerns visual memory. Finally, a valuable advantage of including the separately normed speed scores in CoPS is that speed-of-processing factors can also be taken into account. Thus, in various ways CoPS encompasses a wide range of psychological correlates of dyslexia that have theoretical support from different camps. As an all-round screening and assessment tool, therefore, it has substantial theoretical validity as well as excellent predictive validity, the latter having been established in the original longitudinal study.
Must children be labelled?
Labels for different special educational needs (especially the label ‘dyslexia’) have been controversial for some years. The 1981 Education Act, which had encouraged a non-labelling approach to SEN, was superseded by the 1993 Education Act, and the Code of Practice for the Identification and Assessment of Special Educational Needs (1994), which recognised labelling of SEN categories, including the category ‘Specific Learning Difficulties (Dyslexia)’.
However, the 1994 Code of Practice was superseded by the 2001 SEN Code of Practice, which again moved away from use of labels and focused instead on areas of need and their impact on learning (DfES, 2001). The latest SEND Code of Practice (DfE, 2014) reiterates that ‘The purpose of identification is to work out what action the school needs to take, not to fit a pupil into a category… The support provided to an individual should always be based on a full understanding of their particular strengths and needs and seek to address them all using well-evidenced interventions targeted at their areas of difficulty’.
Many teachers are justifiably worried that labelling a student – especially at an early age – is dangerous and can become a self-fulfilling prophecy. Fortunately, the CoPS approach does not demand that students be labelled; instead, it promotes awareness of students’ individual learning abilities and encourages taking these into account when teaching. Since the CoPS report indicates a student’s cognitive strengths as well as limitations, it gives the teacher important insights into the student’s learning style. In turn, this provides essential pointers for curriculum development, for differentiation within the classroom, and for more appropriate teaching techniques. Hence it is not necessary to use labels such as ‘dyslexic’ when describing a student assessed with CoPS, even though parents may press for such labels.
By identifying cognitive strengths and weaknesses, it is easier for the teacher to differentiate and structure the student’s learning experience in order to maximise success and avoid failure. The intention is that students who would otherwise be likely to fail, and who may subsequently attract the label ‘dyslexic’, never reach that stage.
Screening or assessment?
CoPS can be used for routine screening of students who have no known difficulties in literacy and/or numeracy. It can equally well be used to assess students who are known to have difficulties in literacy and/or numeracy, or who are suspected of having dyslexia (e.g. because of a family history of the condition, or because the student has experienced problems in language development such as pronunciation difficulties). The former approach has the benefit of identifying students at risk of dyslexia of whom the teacher was previously unaware. In such cases, low-key early intervention can make a remarkable difference to the student’s development and prevent many agonies that would otherwise be likely to occur later. Whichever of these two approaches is adopted, the process of interpreting CoPS results is essentially the same.
When tests are used for screening, what is critical is that they can accurately discriminate between those who do and who do not possess the target characteristic (in this case, dyslexia).
Inaccuracy in screening is reflected in misclassifications: either ‘false negatives’ (cases where the test has inaccurately classified a student as not having dyslexia when actually they do) or ‘false positives’ (cases where the test has inaccurately classified a student as having dyslexia when in reality they do not). Singleton, Thomas and Horne (2000) reported a study in which the screening accuracy of CoPS was evaluated in comparison with various other measures. CoPS had an exceptionally low level of false negatives and false positives and performed better than all the alternative measures under consideration. This finding has been used to develop another program, Rapid, which gives an automatic interpretation of results in terms of probability of dyslexia. Results from Rapid can be exported into CoPS, so the two products can be used together effectively, both to screen and then to follow up with a full diagnostic assessment where this is necessary for developing teaching strategies.
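To make these terms concrete, the short sketch below computes false-negative and false-positive rates for a hypothetical screening sample. The numbers and the function name `screening_rates` are illustrative only; they are not drawn from CoPS validation data.

```python
# Illustrative sketch of screening-accuracy terms.
# All figures are hypothetical, not actual CoPS results.

def screening_rates(true_pos, false_pos, true_neg, false_neg):
    """Return (false-negative rate, false-positive rate) as fractions."""
    fn_rate = false_neg / (true_pos + false_neg)   # missed 'at risk' students
    fp_rate = false_pos / (true_neg + false_pos)   # wrongly flagged students
    return fn_rate, fp_rate

# Suppose 100 students are screened: 20 genuinely at risk, 80 not.
fn, fp = screening_rates(true_pos=18, false_pos=4, true_neg=76, false_neg=2)
print(fn)  # 0.1  -> 10% of at-risk students were missed
print(fp)  # 0.05 -> 5% of not-at-risk students were wrongly flagged
```

A good screening instrument keeps both rates low at once; lowering one by adjusting a cut-off score typically raises the other.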
Essential factors to take into account
Not one test but several
When considering CoPS results, it is important to bear in mind that it is not one test which is being interpreted, but the performance of a student on a number of related subtests. This is bound to be a more complex matter than single test interpretation. Hence the normative information (about how a student is performing relative to other students of that age) must be considered together with the ipsative information (about how that student is performing in certain cognitive areas relative to that same student’s performance in other cognitive areas). The pattern or profile of cognitive strengths and weaknesses is crucial.
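The normative/ipsative distinction above can be sketched numerically. The snippet below assumes standard scores with a population mean of 100; the subtest names and scores are invented for illustration and are not real CoPS output.

```python
# Hypothetical illustration of normative vs. ipsative interpretation.
# Subtest names and scores are invented, not real CoPS output.

scores = {  # assumed standard scores, population mean 100
    "Visual memory": 105,
    "Phonological awareness": 78,
    "Auditory memory": 82,
    "Colour discrimination": 110,
}

population_mean = 100
student_mean = sum(scores.values()) / len(scores)  # this student's own average

for subtest, score in scores.items():
    normative = score - population_mean  # vs. other students of the same age
    ipsative = score - student_mean      # vs. the student's own profile
    print(f"{subtest}: normative {normative:+.2f}, ipsative {ipsative:+.2f}")
```

Here the phonological subtest is weak both normatively (well below 100) and ipsatively (well below the student's own average), which is the kind of profile pattern the interpretation process looks for.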
Things that the computer cannot know
The computer is not all-seeing or all-knowing, nor is it infallible. For example, the computer cannot be aware of the demeanour and state of the student at the time of testing. Most students find the CoPS subtests interesting and show a high level of involvement in the tasks; in such cases the teacher can have confidence in the results produced. Occasionally, however, a student does not show such interest or engagement, and in these cases the results must be interpreted with more caution. Where a student produces a number of low scores, a simple first precaution in the interpretative process is to note the dates on which those subtests were carried out. If it turns out that those subtests were all carried out on the same day, there is cause to suspect that other, non-cognitive factors are involved. It may be that the student was unwell on that day, or anxious, or simply wanted to be doing what the rest of the class were doing at that time (e.g. at playtime). Or it may be that the adult supervising the student was impatient to finish and the student sensed this.
Speed (as opposed to accuracy) scores can often indicate whether a student was approaching the tasks with the right amount of application or concentration. Young children can easily become fatigued or bored with a task, and for this reason it is recommended that students should normally attempt only two or three CoPS subtests in a given session. Low accuracy scores with correspondingly high speed scores usually suggest that the student was tired, bored, not concentrating properly, found the task too difficult, or for some reason was over-eager to finish. The implications of speed scores are discussed in Speed scores.
Cognitive ability not attainment
It is important to remember that the performance being interpreted with CoPS is based on tests of cognitive ability rather than attainment. Teachers are most familiar with tests of attainment, such as reading, spelling, and mathematics. Assessment of cognitive abilities, however, requires a broader interpretative approach. Although cognitive abilities underlie attainment, other factors are obviously involved in the determination of attainment, such as the student’s general motivation towards education and opportunities for learning. CoPS subtests provide a very good prediction of later attainment but cannot provide an infallible prediction because of the intervention of these other factors. Of course, motivation is itself affected by attainment: students lose interest in activities in which they are failing, and often develop strategies to avoid being exposed to further failure (especially if that failure is public). Consequently, if, for example, two students exhibited identical ‘at risk’ CoPS results, the one with the poorer motivation would be regarded as being at greater risk (other things being equal). CoPS cannot measure motivation, but it is important for the teacher to take that factor into account.