Essential factors to take into account when interpreting results

LASS 11-15 is not one test, but several

When considering LASS 11-15 results, it is important to bear in mind that it is not one test that is being interpreted, but the performance of a student on a number of related tests. This is bound to be a more complex matter than single-test interpretation. Hence the normative information (about how a student is performing relative to other students of that age) must be considered together with the ipsative information (about how that student is performing in certain areas relative to that same student’s performance in other areas). The pattern or profile of strengths and weaknesses is crucial.

It is not legitimate to average a student’s performance across all tests in order to obtain a single overall measure of ability. This is because the modules in LASS are measuring very different areas of cognitive skill and attainment. It would be like adding the length of a person’s legs to their waist measurement in order to obtain a size for a pair of trousers. The trousers would be unlikely to fit very well!

However, where scores in conceptually similar areas are numerically similar, it is sometimes useful to average them. For example, if scores on the two memory modules (Cave and Mobile) were similar, it would be acceptable to refer to the student’s memory skills overall, rather than distinguishing between the two types of memory being assessed in LASS (i.e. visual memory and auditory-verbal memory). Similarly, if scores on the two phonological modules (Nonwords and Segments) were similar, it would be acceptable to refer to the student’s phonological skills overall. Note that this applies only to conceptually similar areas and where scores are numerically similar (within about 10 centile points of each other). It would not be legitimate to average scores across conceptually dissimilar modules (e.g. Reasoning and Nonwords). When scores are dissimilar, this indicates a differential pattern of strengths and/or weaknesses, which will be important in interpretation. In such cases it will be essential to consider the scores separately rather than averaging them. For example, if Cave and Mobile produce different results, this will usually indicate that one type of memory is stronger or better developed (or perhaps weaker or less well developed) than the other. This information will have implications for both interpretation and teaching.
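The averaging rule just described can be expressed compactly, as in the sketch below. This is purely illustrative: the function and the table of module pairs are hypothetical and are not part of the LASS software.

    from statistics import mean

    # Pairs of conceptually similar modules named above (a hypothetical
    # table, not part of LASS itself).
    SIMILAR_PAIRS = {
        frozenset({"Cave", "Mobile"}),        # the two memory modules
        frozenset({"Nonwords", "Segments"}),  # the two phonological modules
    }

    def combined_centile(module_a, score_a, module_b, score_b, tolerance=10):
        # Average two centile scores only when the modules are conceptually
        # similar AND the scores lie within about 10 centile points;
        # otherwise return None, signalling that the scores must be
        # interpreted separately.
        if frozenset({module_a, module_b}) not in SIMILAR_PAIRS:
            return None  # conceptually dissimilar: never average
        if abs(score_a - score_b) > tolerance:
            return None  # differential pattern: consider scores separately
        return mean([score_a, score_b])

    print(combined_centile("Cave", 45, "Mobile", 52))         # 48.5
    print(combined_centile("Cave", 45, "Mobile", 70))         # None (too far apart)
    print(combined_centile("Reasoning", 50, "Nonwords", 52))  # None (dissimilar)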

For further information on interpreting strengths and weaknesses, see Section 4.3.4.

Things which the computer cannot know

The computer is not all-seeing, all-knowing — nor is it infallible. For example, the computer cannot be aware of the demeanour and state of the student at the time of testing. Most students find LASS tests interesting and show a high level of involvement in the tasks. In such cases the teacher can have confidence in the results produced. Occasionally, however, a few students do not show such interest or engagement and in these cases the results must be interpreted with more caution. This is particularly the case where a student was unwell at the time of assessment or had some anxieties about the assessment. Teachers should therefore be alert to these possibilities, especially when results run counter to expectations.

Calculating discrepancy

When we observe the scores obtained by any given student, we will almost invariably find some differences. Some scores will be higher than others. But how do we determine whether any observed differences are ‘significant’?

By ‘significant’ we mean ‘so much poorer than the level that would be expected on the basis of the person’s age and intelligence, that the discrepancy is unlikely to be due to normal variation within the population or to chance’. What is important is not so much the absolute level of the student’s performance but rather the degree of discrepancy between their observed literacy skills and the level of literacy ability that we would reasonably expect such students to have. The conventional way in which psychologists make valid comparisons between performance on different tests or measures is by reference to standardised scores (such as centiles or standard deviation units), which have a clear rationale in psychometric test theory and practice.

On the other hand, poor literacy and/or study skills may also be the result of inadequate teaching or insufficient learning and/or experience and do not necessarily imply that the student has dyslexia. Establishing a discrepancy, as well as seeking evidence of neurological anomalies or cognitive impairments, helps the assessor to rule out these environmental factors as primary causes of the student’s problems. However, the discrepancy model of identification should not be used blindly: it should be part of a more extensive process by which the assessor seeks to build up an understanding of the individual’s difficulties based on quantitative and qualitative evidence.

There is an ongoing scientific debate about the role of intelligence in dyslexia (e.g. Ashton, 1996; Frederickson and Reason, 1995; Nicolson, 1996; Siegel, 1989a, 1989b, 1992; Solity, 1996; Stanovich, 1991; Turner, 1997). Some researchers argue that other types of discrepancy have better diagnostic value (e.g. between oral language abilities and written language abilities, or between listening comprehension and reading comprehension), although these could be problematic in the case of dyslexic individuals who have developed effective strategies for compensating for reading and writing difficulties. Others suggest that identifying those with chronic difficulty in phonological processing would be the most efficient way of diagnosing dyslexia (Snowling et al., 1997), although by no means all dyslexics seem to have phonological difficulties (Rack, 1997). For further discussion of these issues, see Section 3.4.5.

LASS automatically calculates whether or not there is a statistically significant discrepancy between the score on the Reasoning module and each of the other scores, and displays this in the Summary Table (see Section 2.4.3.1 for information about how to access the Summary Table). The size of the estimated discrepancy is shown, together with the associated statistical probability (p) value. A p value of < (i.e. less than) 0.001 means that the observed discrepancy would be found by chance on fewer than one in a thousand occasions (and hence is highly likely to be a true discrepancy, not simply the outcome of chance variation in the data). A p value of < 0.01 means that the observed discrepancy would be found by chance on fewer than one in a hundred occasions, and p < 0.05 means that the observed discrepancy would be found by chance on fewer than one in twenty occasions and would represent a true difference on 19 out of 20 occasions. All these values (p < 0.001, p < 0.01, p < 0.05) are regarded as ‘statistically significant’, but the confidence that one can place in them increases as the probability of a result due to chance variation decreases.

Major strengths will show up as a significant positive discrepancy when the individual test result is compared with the Reasoning test score (this is shown as a plus sign in the Discrepancy column of the Summary Table). Major weaknesses will show up as a significant negative discrepancy (this is shown as a minus sign in the Discrepancy column of the Summary Table).

It is sometimes useful to know whether there are significant discrepancies between tests other than the Reasoning test. For example, a teacher might want to know whether a student’s visual memory is significantly better than their auditory memory, or whether their phonic skills are much better than their phonological skills. To estimate the discrepancy between the results of any two modules, first consult the ‘Assessment Summary’ table in the Results engine. This will give z-scores (sometimes called ‘standard deviation units’) for each completed module. Calculate the difference between the two z-scores and look up the difference in Table 6 below; an illustrative calculation follows the table.

In the example given in Section 4.1.1.2, in which a student had a Sentence Reading score of centile 30 and a Reasoning score of centile 85, the z-scores are –0.67 and +1.12, respectively: a difference of 1.79, which is highly significant.
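Readers who wish to reproduce such conversions can use the inverse of the standard normal distribution, as sketched below. This is illustrative only: the conversions in the LASS norms tables (such as the values quoted in the example above) may differ slightly from textbook normal-curve values.

    from statistics import NormalDist

    _norm = NormalDist()  # standard normal: mean 0, standard deviation 1

    def centile_to_z(centile):
        # Centile (1-99) -> z-score, via the inverse normal CDF.
        return _norm.inv_cdf(centile / 100)

    def z_to_centile(z):
        # z-score -> centile, via the normal CDF.
        return 100 * _norm.cdf(z)

    print(round(centile_to_z(50), 2))  # 0.0 (the median)
    print(round(centile_to_z(85), 2))  # about 1.04
    print(round(z_to_centile(1.0)))    # about 84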

It should be remembered that when using LASS to identify dyslexia, teachers do not have to use a discrepancy approach if they would prefer not to. But the method of identifying cognitive deficits that are consistent with a significant discrepancy between observed literacy skills and expected literacy skills (on the basis of age and intelligence) is a tried and trusted one.

Table 6. Estimating discrepancy

z-score difference      Discrepancy estimate
less than 0.67          not significant
0.67 to 0.99            significant (p < 0.05)
1.00 to 1.66            significant (p < 0.01)
greater than 1.66       significant (p < 0.001)
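The Table 6 lookup can also be expressed as a simple rule, as in the sketch below. This assumes the two z-scores have already been read from the ‘Assessment Summary’ table; the function name is hypothetical.

    def discrepancy_estimate(z_a, z_b):
        # Classify the absolute difference between two module z-scores
        # according to the bands in Table 6.
        diff = abs(z_a - z_b)
        if diff > 1.66:
            return "significant (p < 0.001)"
        if diff >= 1.0:
            return "significant (p < 0.01)"
        if diff >= 0.67:
            return "significant (p < 0.05)"
        return "not significant"

    # The worked example above: z-scores of -0.67 and +1.12 differ by 1.79.
    print(discrepancy_estimate(-0.67, 1.12))  # significant (p < 0.001)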


Strengths and weaknesses

In considering a student’s profile it is important to consider strengths as well as weaknesses. Absolute strengths will appear as centile scores of 80 or above, while absolute weaknesses will appear as centile scores below 20 (see Section 4.1.2 for an explanation of the thresholds for interpreting absolute weaknesses). Relative strengths and weaknesses, however, are shown in terms of discrepancies between scores – usually between the Reasoning score and the other individual scores (see Section 4.3.3 for an explanation of ‘discrepancies’).
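These absolute thresholds can be summarised as a simple rule, sketched below. The centile cut-offs are those given above; the function name is hypothetical, and relative strengths and weaknesses require the discrepancy analysis of Section 4.3.3 rather than a single-score rule of this kind.

    def absolute_level(centile):
        # Label a single centile score on its own (no discrepancy involved).
        if centile >= 80:
            return "absolute strength"
        if centile < 20:
            return "absolute weakness"
        return "broadly average"

    print(absolute_level(85))  # absolute strength
    print(absolute_level(15))  # absolute weakness
    print(absolute_level(45))  # broadly average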

Generally, the teacher is most interested in discrepancies that occur when a student’s literacy skills are significantly below expected levels — i.e. scores that are much lower than the Reasoning score. Occasionally, however, a student will have scores that are much higher than the Reasoning score. Discounting Single Word Reading (for reasons that are explained elsewhere: see Sections 2.2.2 and 5.3), the area in which this is most likely to be encountered is visual memory (and sometimes auditory memory). Some students have visual memory skills that are surprisingly good and higher than would be predicted from their Reasoning score. This can still show up as a significant discrepancy — if the difference between the scores is statistically significant — but obviously such results need to be treated differently, since what is revealed is a particular and significant strength rather than a weakness. This strength can be utilised effectively in teaching and learning (see Chapter 6) and may reflect an individual learning style (see Section 6.1.3), but teachers should also be aware that strengths can sometimes cause problems. For example, students with very good visual memory skills sometimes fail to acquire satisfactory phonic skills in the primary stage because they find they can quite easily read words by remembering their visual patterns as whole units (rather than having to break the words down into component letters and using rules about letter–sound correspondences to decode the text). A student such as this will not necessarily have dyslexia — this will depend on the overall pattern of their LASS scores — but they will need help to enable them to improve their phonic skills.