How LASS 11-15 was developed

LASS 11-15 was created by Lucid in conjunction with the same research team from Hull University that collaborated in the development and validation of the assessment programs CoPS Cognitive Profiling System (Singleton, Thomas and Leedale, 1996, 1997; see also Singleton, Thomas and Horne, 2000) and CoPS Baseline Assessment (Singleton, Thomas and Horne, 1998; see also Singleton, Horne and Thomas, 1999). The first version of the program was released under the name LASS 11-15, but in 2001 it was renamed LASS Secondary because in that year Lucid released a version of the software for younger children, called ‘LASS Junior’ (Thomas, Singleton and Horne, 2001), covering the age range 8 years 0 months to 11 years 11 months. LASS Junior was renamed LASS 8-11 in 2009, at the same time that LASS Secondary reverted to its original title, LASS 11-15.

LASS 8-11 comprises the same tests as LASS 11-15, but these were redesigned specifically for the younger age group, with easier items and age-appropriate graphics, and were standardised using a representative sample of pupils in the 8–11 age range. For further information about LASS 8-11 and the other Lucid computerised assessment systems, please contact Lucid or consult the Lucid website (www.lucid-research.com).

The initial research to develop LASS 11-15 was carried out over the period 1997 to 1999, using a total of 2,366 students aged 11 years 0 months to 15 years 11 months (mean 13 years 3 months, standard deviation 14.0 months) attending 28 different secondary schools in the UK. There were 1,302 boys and 1,064 girls. Pilot versions of the eight tests in the suite were trialled with these students, and feedback was obtained from the teachers administering the system. The data obtained from these trials were subjected to item analysis, including determination of difficulty levels (pass rates) for each item, which were then incorporated into the adaptive algorithms of the tests in the suite. In addition, timings, progression rules and discontinuation rules were calibrated. The tests were then subjected to a standardisation process to obtain norms for each year group. Subsequent studies examined the validity and reliability of LASS 11-15, whether it is subject to any gender bias, and whether students preferred LASS 11-15 to being assessed with comparable conventional tests.
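
The adaptive algorithm itself is not described in this manual. Purely by way of illustration, the minimal Python sketch below shows the general principle of difficulty-graded adaptive item selection that such calibration supports: items calibrated by pass rate are presented in staircase fashion, moving to harder items after a correct response and easier items after an error, with a discontinuation rule after repeated failures. The names, stepping rule and thresholds are illustrative assumptions, not the actual LASS 11-15 implementation.

    # Illustrative sketch of difficulty-graded adaptive item selection.
    # The stepping and discontinuation rules are hypothetical; the
    # actual LASS 11-15 algorithm is not published.
    def run_adaptive_test(items, answer_fn, max_fails_in_row=3):
        """items: list of (item, pass_rate) pairs sorted from easiest
        (high pass rate) to hardest (low pass rate); answer_fn(item)
        returns True when the student answers correctly."""
        index = len(items) // 2            # start at medium difficulty
        step = max(1, len(items) // 4)     # initial step size
        fails_in_row = 0
        score = 0.0
        seen = set()
        while 0 <= index < len(items) and index not in seen:
            seen.add(index)
            item, pass_rate = items[index]
            if answer_fn(item):
                score += 1.0 - pass_rate   # harder items earn more credit
                fails_in_row = 0
                index += step              # try a harder item
            else:
                fails_in_row += 1
                if fails_in_row >= max_fails_in_row:
                    break                  # discontinuation rule
                index -= step              # drop back to an easier item
            step = max(1, step // 2)       # home in on the student's level
        return score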

At the time of publication of this fourth edition of the Teacher’s Manual, LASS 11-15 is in use in several thousand schools in the UK, and in many English-speaking schools across the world. In the years since its first release it has become an indispensable assessment tool for many teachers, and it is also widely used in many other settings, including prisons, youth offender centres, careers guidance centres, community support centres, and by national and local voluntary associations that serve the needs of students with dyslexia and other learning problems. Feedback from users indicates that students’ tendency to prefer this method of assessment over conventional forms is an important factor in many teachers’ decision to use LASS 11-15. However, the ease of use of the program, the flexibility of the system, and the value of the results in informing educational decisions also play a very significant part in decision-making.

Like all Lucid products, LASS 11-15 conforms to the British Psychological Society’s guidelines for the development and use of computer-based assessments (British Psychological Society, 1999a).

Standardisation

The eight tests in LASS 11-15 have been standardised so that teachers using the system can establish where any given student falls on any of the components of the suite in relation to the population norms. This means that direct and meaningful comparisons can be made between the individual tests taken by a single student, between different students, and between a student’s scores and national norms. The initial standardisation of LASS 11-15 was carried out in 1998, using a representative sample of 505 students (300 boys and 205 girls) attending 14 schools in different parts of the UK. The age range was 11 years 0 months to 15 years 11 months. The mean age was 13 years 2 months (standard deviation 14.3 months). For full details of the standardisation process, see Horne (2002).
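
Purely as a hedged illustration of what norm-referencing involves, the short Python sketch below converts a raw score to a z score and a centile using hypothetical year-group norms; the actual norm values and the score scale used by LASS 11-15 are not those shown here.

    # Hypothetical illustration of norm-referencing: convert a raw score
    # to a z score and centile using (invented) year-group norms.
    from statistics import NormalDist

    def z_score(raw, year_mean, year_sd):
        return (raw - year_mean) / year_sd

    def centile(raw, year_mean, year_sd):
        return 100 * NormalDist().cdf(z_score(raw, year_mean, year_sd))

    # A raw score of 34 against invented Year 8 norms (mean 28, SD 6):
    print(round(z_score(34, year_mean=28.0, year_sd=6.0), 2))  # 1.0
    print(round(centile(34, year_mean=28.0, year_sd=6.0)))     # 84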

Validity

Validity of new psychological and educational tests is usually established by comparing them with equivalent established tests. This is usually called ‘concurrent validity’. Some difficulties may arise in the case of computer-based tests, where the modes of response (typically using a mouse) are different from those used in conventional tests (typically either oral or written responses). Inevitably, this tends to result in somewhat lower correlation coefficients than those obtained when comparing two similar conventional tests (for a discussion of these issues, see Singleton, 2001).
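
For reference, the coefficients reported in the studies below are correlation coefficients (r); on the conventional reading this is Pearson’s product-moment correlation, which for n students with paired scores (x_i, y_i) on the two tests is

    r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2 \, \sum_{i=1}^{n}(y_i - \bar{y})^2}}

where \bar{x} and \bar{y} are the two sample means. Values of r close to +1 indicate close agreement between the computer-based and conventional measures.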

Bearing this limitation in mind, Horne (2002) carried out a concurrent validity study of LASS 11-15 using 75 students (47 boys and 28 girls), age range 11 years 6 months to 15 years 11 months (mean age 13 years 6 months; standard deviation 17.0 months). This sample had been randomly selected from Year 7 to Year 11 registers in five different secondary schools in different regions of England and Scotland, the schools having been chosen so that pupils from a broad range of socioeconomic backgrounds were adequately represented. (These were not the same schools in which the reliability study had been carried out.) The students were tested on LASS 11-15 (all modules except the Single Word Reading Test) and also tested within four weeks using well-known published conventional tests of skills that, as far as possible, were equivalent or similar to those in LASS 11-15. The order of test administration was counterbalanced to control for order effects. The results, which are shown in Table 1, indicate significant correlations between the LASS 11-15 tests and the comparison measures, with the highest correlation coefficients being obtained for the literacy measures (where there is the closest correspondence in the tasks involved). The somewhat lower correlation coefficients for the cognitive measures may be explained by differences in the modes of response (oral or motor in the conventional tests, mouse input in LASS 11-15) and in the requirements of the tasks (e.g. in WMS-III Spatial Span no semantic elements are included, whereas in the LASS 11-15 Cave test the student has to remember the object as well as its spatial position). Despite these inevitable limitations when comparing computer-based tests with conventional tests, it may be concluded that the results provide satisfactory concurrent validation for the tests in LASS 11-15. This study has been submitted for publication (see Horne, Singleton and Thomas, 2003a).

Table 1. Correlation coefficients obtained between LASS 11-15 tests and equivalent or similar conventional tests (n=75).

LASS 11-15 test Comparison test Correlation coefficient (r)*
Sentence reading NFER Sentence Completion Test 0.75
Spelling British Spelling Test Series 3 0.88
Reasoning Matrix Analogies Test 0.52
Cave (Visual memory) Wechsler Memory Scale (WMS-III) Spatial Span (total score) 0.37
Mobile (Auditory memory) Wechsler Memory Scale (WMS-III) Digit Span (total score) 0.55
Nonwords (Nonword reading) Phonological Assessment Battery (PhAB) Nonword Reading 0.43
Segments (Syllable segmentation) Phonological Assessment Battery (PhAB) Spoonerisms 0.45

* All correlations except Cave are significant at p<0.001 or better; the correlation for Cave was significant at the p<0.01 level.

Validity of assessment instruments may also be established by another method, in which the instrument is used to predict which individuals do, and which do not, fall into a given category. This is usually called ‘predictive validity’. In the case of LASS 11-15 the most obvious test of this would be to see how effective it was in identifying dyslexia in a group of students that contained both known dyslexic and known non-dyslexic individuals. Horne (2002) carried out such a study using 176 students (102 boys and 74 girls), age range 11 years 6 months to 15 years 11 months (mean age 13 years 7 months; standard deviation 17.4 months). This sample had been randomly selected from Year 7 to Year 11 registers in five different secondary schools in different regions of England and Scotland, the schools having been chosen so that pupils from a broad range of socioeconomic backgrounds were adequately represented. The sample was broken down into a group of 30 students (21 boys and 9 girls) who had been diagnosed by educational psychologists as having dyslexia, 17 students (11 boys and 6 girls) with other special educational needs (‘other SEN group’), and 129 students (76 boys and 59 girls) without special educational needs (‘non-SEN group’). The dyslexic group scored significantly lower than the non-SEN group on five of the seven LASS 11-15 tests (sentence reading, spelling, auditory memory, nonword reading and syllable segmentation). There were no significant differences between the dyslexic group and the non-SEN group on LASS 11-15 reasoning or visual memory. However, the other SEN group scored significantly lower than the non-SEN group on all seven of the LASS 11-15 tests used in the study. Comparable results were found when the same groups were compared on several conventional tests (the tests used are listed in the column headed ‘Comparison test’ in Table 1). These findings fit well with established views about dyslexia – i.e. that dyslexic students are comparatively poor on measures of literacy, phonological skills and auditory memory, and that these weaknesses are not due to low intelligence (see Snowling, 2000) – and provide validation for the use of LASS 11-15 in the identification of dyslexia. When the overall profile of scores was examined, LASS 11-15 was found to have correctly identified 79% of the dyslexic students as having dyslexia, compared with a 63% success rate for the equivalent conventional tests and only 59% using the phonological measures alone. These results provide convincing predictive validity for the use of LASS 11-15, which achieved rather greater accuracy than the combination of conventional tests. This study has been submitted for publication (see Horne, Singleton and Thomas, 2003b).
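
To make the identification rates concrete: in screening terms, the percentage of diagnosed dyslexic students correctly identified is the instrument’s sensitivity, TP/(TP + FN). The short sketch below recovers the approximate counts behind the percentages reported above (the exact counts are not given in the source, so rounding is assumed).

    # Worked arithmetic for the identification rates reported above.
    # 30 students had a prior dyslexia diagnosis (Horne, 2002); the
    # underlying counts are inferred from the percentages by rounding.
    diagnosed = 30
    for label, rate in [("LASS 11-15 profile", 0.79),
                        ("equivalent conventional tests", 0.63),
                        ("phonological measures alone", 0.59)]:
        identified = round(rate * diagnosed)
        print(f"{label}: about {identified} of {diagnosed} ({rate:.0%})")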

Reliability

The term ‘reliability’, when applied to a psychometric test, usually refers to the extent to which it can be expected to yield similar results when administered to the same individual on different occasions. This is sometimes referred to as ‘test-retest reliability’.

Horne (2002) investigated the test-retest reliability of LASS 11-15 using 101 students (55 boys and 46 girls) aged between 11 years 6 months and 15 years 11 months (mean age 13 years 8 months; standard deviation 16.5 months). This sample had been randomly selected from Year 7 to Year 11 registers in seven different secondary schools in different regions of England and Scotland, the schools having been chosen so that pupils from a broad range of socioeconomic backgrounds were adequately represented. The students were tested on LASS 11-15 (all modules except the Single Word Reading Test) and then retested four weeks later. The results (see Table 2) show that in all cases significant test-retest correlations were obtained, indicating satisfactory test-retest reliability. Higher correlations were found for the literacy measures than for the cognitive measures. It appears most likely that the somewhat lower (but nevertheless significant) correlations for the memory measures are due to the greater susceptibility of these tasks to practice effects arising from enhanced motivation and the application of strategic thinking at the retest. This study has been submitted for publication (see Horne, Singleton and Thomas, 2003a).
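
As a minimal sketch of how a coefficient of this kind is computed, the Python fragment below correlates two administrations of the same test using scipy; the scores are invented, and pearsonr also returns the p-value used to judge significance.

    # Test-retest reliability as the Pearson correlation between two
    # administrations of the same test (invented scores for ten pupils).
    from scipy.stats import pearsonr

    session1 = [12, 18, 25, 9, 30, 22, 15, 27, 11, 20]
    session2 = [14, 17, 26, 10, 29, 24, 13, 28, 12, 19]
    r, p = pearsonr(session1, session2)
    print(f"r = {r:.2f}, p = {p:.4f}")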

Table 2. Test-retest correlation coefficients for LASS 11-15 tests over a four-week period (n=101).

LASS 11-15 test Correlation coefficient (r)*
Sentence reading 0.85
Spelling 0.93
Reasoning 0.51
Cave (Visual memory) 0.53
Mobile (Auditory memory) 0.58
Nonwords (Nonword reading) 0.77
Segments (Syllable segmentation) 0.74

* All correlations are significant at p<0.001 or better.

Gender differences

Studies of gender differences in education typically find that girls outperform boys in school attainment (see Fergusson and Horwood, 1997) and that boys are more likely to be referred for educational difficulties (see Vardill, 1996). Nevertheless, it is generally held that psychological and educational tests should, as far as possible, be free of gender bias, so that decisions about children’s progress (especially where special support may be required) can be based on information derived from sources that favour neither girls nor boys. On the other hand, it has sometimes been suggested that computer-based tests may favour boys because of their supposed greater interest in computers (see Crook, 1996). If this is the case, it could distort results obtained using a computer-based assessment such as LASS 11-15.

Horne (2002) carried out a study to investigate possible gender bias in LASS 11-15, using 176 students (102 boys and 74 girls), age range 11 years 6 months to 15 years 11 months (mean age 13 years 7 months; standard deviation 16.7 months). This sample had been randomly selected from Year 7 to Year 11 registers in twelve different secondary schools in different regions of England and Scotland, the schools having been chosen so that pupils from a broad range of socioeconomic backgrounds were adequately represented. The results (see Table 3) showed that although girls scored consistently higher than boys on all except the Cave test (Visual memory), in no case were these differences found to be statistically significant. When the same sample was examined for possible gender bias on equivalent conventional tests (the tests used are listed in the column headed ‘Comparison test’ in Table 1), the only significant difference found between boys and girls was on the British Spelling Test Series 3, where girls outperformed boys. With this one exception, therefore, there was no evidence that either the conventional tests or the LASS 11-15 computer-based tests are biased in favour of boys or girls. This study has been submitted for publication (see Horne, Singleton and Thomas, 2003c).

Table 3. Gender comparisons on LASS 11-15 tests (mean z scores).

LASS 11-15 test Female Male
Sentence reading 0.87 0.71
Spelling 0.79 0.64
Reasoning 0.62 0.54
Cave (Visual memory) 0.27 0.33
Mobile (Auditory memory) 0.66 0.40
Nonwords (Nonword reading) 0.78 0.51
Segments (Syllable segmentation) 0.56 0.47
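
The manual does not state which significance test was used for these comparisons; an independent-samples t-test is the conventional choice for group means such as those in Table 3. A hedged sketch with invented z scores:

    # Comparing two groups' z scores with an independent-samples t-test.
    # The data are invented; the source does not state which test was used.
    from scipy.stats import ttest_ind

    girls = [0.9, 0.7, 1.1, 0.6, 0.8, 1.0, 0.5, 0.9]
    boys = [0.7, 0.6, 0.9, 0.4, 0.8, 0.7, 0.5, 0.6]
    t, p = ttest_ind(girls, boys)
    print(f"t = {t:.2f}, p = {p:.3f}")  # p > 0.05: no significant difference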

Student preferences

It is a fairly well-established finding that most students prefer computer-based tests to conventional tests (see Singleton, 1997, 2001, 2003). In the validity study carried out by Horne (2002), the students were asked whether they preferred the computer-based tests or the conventional tests. Of the 75 pupils, 54 (72%) preferred the computer-based tests, while only 17 (23%) preferred the conventional tests. There were no significant gender differences in this preference pattern. These findings have implications for assessment, especially where disaffected pupils are concerned. If students enjoy doing computer-based tests, they are likely to be more motivated and to stay on-task. This helps to produce results in which teachers can have confidence.