Designing Accessible Technology-Enabled Reading Assessments:
Recommendations from Teachers of Students with Visual Impairments

By Eric George Hansen, Cara Cahalan Laitusis, Lois Frankel, and Teresa King

The authors serve as research scientists, assessment specialist, and research associate at ETS (Educational Testing Service).

Abstract

There is a great need to ensure that innovative technology-enabled assessments are accessible for students with disabilities. This study examined the severity of accessibility challenges that students with visual disabilities (ranging from low vision to complete blindness) would encounter in accessing prototype reading tasks from ETS’s CBAL research system (Cognitively-Based Assessments of, for, and as Learning). This focus group study involved presenting prototype tasks to six teachers of students with visual impairments. The teachers (a) examined the prototype tasks, (b) evaluated the severity of accessibility problems that would be encountered by students having different primary methods for accessing text (e.g., braille, audio, visual enhancement), and (c) offered suggestions about how to improve the accessibility of the tasks. The report summarizes the results of this study and provides recommendations for improving the accessibility of innovative reading tasks.

Keywords:

assessment design, innovative assessment, teachers of students with visual impairments, accessibility

 

There is widespread interest in technology-enabled assessments (TEAs), that is, computer-based tests that go beyond the capabilities of simple multiple-choice test questions, and how they can enhance learning and teaching. TEAs may involve greater use of multimedia, interactivity, constructed response scoring, or other innovative features (Quellmalz & Pellegrino, 2009; Tucker, 2009). TEAs may permit “the presentation of more complex, multi-step problems for students to solve” in ways that are “more compelling than text alone” (Tucker, 2009, p. 4). Yet it is critical that such TEAs be accessible to students with disabilities, including visual disabilities.  The requirement for accessibility of state- and district-wide assessments is part of the Individuals with Disabilities Education Improvement Act (2004), which states: “All children with disabilities are included in all general state and district-wide assessment programs, . . . with appropriate accommodations and alternate assessments where necessary.”  This requirement means that all state- and district-wide assessment programs that use TEAs must develop accessibility features and possibly alternate assessments to allow students with disabilities to participate (Bechard et al., 2010; Hansen & Mislevy, 2006; Salend, 2009).1

TEAs are playing an increasingly prominent role in the assessment landscape. For example, the National Assessment of Educational Progress (NAEP) has been exploring the use of technology-rich assessment environments (Bennett, Persky, Weiss, & Jenkins, 2010).  Furthermore, the U.S. Department of Education’s request for proposals from consortia of states placed significant emphasis on TEAs for Race to the Top Assessment (RTTTA) funding, requiring that applicants “Use technology to the maximum extent appropriate to develop, administer, and score assessments and report assessment results” (Federal Register, 2010).  Funded applicants placed significant emphasis on computer-based delivery of assessments, including innovative item types.

Challenges for Students with Visual Disabilities

Students with visual disabilities may face some of the most significant as well as diverse challenges in accessing TEAs. A low vision student might use synthesized speech for lengthy documents (due to eyestrain), but can still use vision (often with visual enhancement [enlargement or color modification]) to help disambiguate any problematic words, to navigate, or to select portions of text to be read. A student who has virtually no usable sight typically needs content to be rendered in braille (refreshable and/or hard copy) or audio, and requires a means of non-visual navigation and response (primarily keyboard). Furthermore, for such a student, the best way to proceed through a text is not always start-to-finish.  This is particularly true in an assessment environment where students move back and forth among questions, passages, and answer choices, or when the assessment interface requires interactive responses, such as moving text or other objects from place to place.  It is important that investigations of the accessibility challenges of students with visual disabilities take into account the diverse needs within that population.

An Example of an Innovative TEA

ETS’s “Cognitively-Based Assessments of, for, and as Learning” (CBAL) is a frequently cited example of an innovative TEA (Embretson, 2010; Linn, 2010; Tucker, 2009).  CBAL is a long-term research and development program intended to create a model for a comprehensive system of K-12 assessment (Bennett, 2010; Bennett & Gitomer, 2009).  CBAL researchers envision an assessment system that documents what students have achieved (“of learning”), helps identify how to plan and adjust instruction (“for learning”), and is considered by students and teachers to be a worthwhile educational experience in and of itself (“as learning”).  The CBAL system model includes four components.  The first is a conceptual component consisting of domain-specific competency models.2 These models underlie the other three components: summative assessment, formative assessment, and professional support.

Focus on Prototype Reading Tasks

Some of the accessibility challenges associated with CBAL prototype tasks can be illustrated by focusing on CBAL reading tasks.  The CBAL reading assessment prototypes are being built to measure aspects of the CBAL reading competency model (i.e., CBAL reading construct) (O’Reilly & Sheehan, 2009a, 2009b) and, more recently, aspects of the unified CBAL English language arts competency model (Deane, 2012), which covers writing as well as reading.  These models were developed from extensive reviews of the learning sciences literature, state content standards, and other published analyses of the skills required for English literacy.  The CBAL reading competency model conceptualizes reading broadly to include paper as well as electronic sources and to encompass a variety of document types and representational forms that routinely appear within reading contexts.  These representational forms include graphics, video, and audio, all of which may need to be processed together by the reader, because they are meant as an integrated presentation of content (e.g., as on a Web page).  It should be emphasized that this view of reading is consistent with the new Common Core State Standards (CCSS) (Council of Chief State School Officers [CCSSO] & National Governors Association [NGA], 2010).  For example, one of the CCSS grade 6 reading standards is: “Integrate information presented in different media or formats (e.g., visually, quantitatively) as well as in words to develop a coherent understanding of a topic or issue” (p. 39).  Note that this is a standard for reading.  Under older conceptions of reading, it might be surprising to see this emphasis on multiple media and formats (e.g., pictures, animations/videos) and on the possible inclusion of content with a significant quantitative (mathematical) aspect.  These new conceptions of reading can create new challenges in making reading tasks accessible to students with visual disabilities.

The Nature of the CBAL Research and Development Program

In order to understand the timeliness of the present study, it is helpful to understand the nature of the CBAL initiative. CBAL is a long-term research and development program targeted at creating a model for a comprehensive summative and formative assessment system for K-12. The model incorporates many innovations, including a basis in learning sciences research, technology-based performance tasks, and a through-course approach to summative assessment.  Creating such a system model is an extremely complex undertaking due to the many constraints that must be simultaneously satisfied.  As a consequence of this complexity, CBAL has been using an iterative approach to design: a subset of the larger set of constraints is selected, and prototypes are designed to satisfy that subset.  As promising designs are identified that meet those constraints, new constraints are added, requiring the designs to be modified or rethought completely.  By way of illustration, an early subset of such constraints was that CBAL assessment prototypes needed to (a) measure important reading, writing, and mathematics competencies well and (b) model good teaching and learning practice for general student populations.  The evidence to date suggests that the CBAL research program is making progress toward at least the first of those constraints (Bennett, 2011).  Consequently, new constraints are being added, including ones related to accessibility for students who have disabilities or are English language learners.  The current project was conducted to evaluate selected CBAL reading task prototypes with the new constraint of accessibility for students with visual disabilities.
This constraint was expected to raise fundamental issues about how accessibility should be facilitated in innovative assessments in which construct definitions have broadened to include tasks more like those that most literate individuals must negotiate in their daily lives.  In particular, the CBAL reading competency model may include skills that are important for students to possess (e.g., the ability to integrate information across multiple electronic documents, graphics, audio, and video), but that cannot yet be easily assessed in an accessible manner. Note that the addition of these constraints does not constitute a retrofit because there is no established design, much less an existing assessment, to retrofit.  Thus, an investigation of the accessibility challenges of CBAL reading tasks for students with visual disabilities can help assessment designers and planners improve the accessibility of this and other innovative TEAs.

Sources of Data about the Accessibility of Assessments

Ideally, an investigation of accessibility challenges would include a significant component in which students with visual disabilities would interact directly with the innovative reading TEAs. However, as an initial step, it seemed useful to draw upon the expertise of teachers of students with visual impairments (TVIs). Once the critical accessibility challenges have been identified and candidate strategies for addressing them have been selected and prototyped, it would then be important to conduct studies that involve students with visual disabilities interacting directly with the tasks.

Purpose

The purpose of this study was to examine the nature and severity of the accessibility challenges that students with visual disabilities (e.g., low vision and blindness) would have in using technology-enabled reading assessments, based on a focus group of TVIs who examined prototype innovative TEA items.  The researchers then used the teachers’ severity ratings and suggestions to produce recommendations for improving the accessibility of innovative TEAs. This study will allow us to modify or rethink the existing designs for TEAs, create new prototypes and, in turn, examine the success of those accessibility solutions in future research efforts.

Method

Overall Strategy

The overall strategy of the project was to: (a) identify features of CBAL reading tasks that may present accessibility challenges, (b) obtain judgments from TVIs about the severity of these accessibility challenges, (c) elicit suggestions from TVIs about how to improve the accessibility of the CBAL reading tasks, and (d) develop recommendations for making tasks more accessible. In investigating the accessibility of CBAL reading tasks, we sought to take into account the ways the different subgroups of students with visual disabilities access content on a computer.

Participants

Recruitment. The participants in this study were all current or former teachers of students with visual impairments (TVIs) who had recently participated in a test development workshop at ETS that was part of the Technology Assisted Reading Assessment (TARA) project (Technology Assisted Reading Assessment Project, 2007). Teachers were selected for participation in the TARA project based on their expertise in assistive technologies (screen readers, braille, enlargement, etc.) and their experience working with students with visual disabilities. Of the seven TARA project teachers invited, six were able to participate. Teachers were paid for their participation in this focus group separately from payment for participation in the TARA project.

Background. TVIs were from states in different regions of the country (California, Arizona, Minnesota, Massachusetts, New York, and Texas).  Five of the six TVIs were female and all were sighted. TVIs had experience in working with students in a variety of grade levels and academic achievement levels. All had experience working with students with visual disabilities at the middle school level and were credentialed to do so by a state. Each of the teachers had over a decade of experience as a TVI (the average was 22.8 years; the range was from 10 to 31 years). Four of the teachers were current TVIs; one was a professor in a TVI program; and the other was a Webmaster at a school for students with visual impairments.  Five of the six TVIs had experience as itinerant teachers in public schools; four of the six had experience teaching in a specialized school. All of the TVIs had experience teaching students to use braille (hard copy and refreshable), screen readers (text-to-speech software), screen enhancers (e.g., enlargement, color modification), and electronic notetakers (portable devices with braille keyboard and braille or speech output).

Materials

Several different assessment and data collection instruments were used during the focus group.  These included: (a) a background questionnaire, (b) CBAL tasks, and (c) a task survey, each of which is described below.

Background questionnaire.  The background questionnaire included questions about the teachers’ experience in teaching students with visual impairments and their experience with various assistive technologies. It was emailed to the TVIs a few days before the face-to-face meeting and either mailed back or returned to the researchers at the face-to-face meeting.

Tasks and features.  Each of the four tasks was selected by researchers and test developers as containing a feature that would present a significant accessibility challenge to at least some students with visual disabilities. These targeted features were: (a) graphic organizer, (b) embedded graphics, (c) maneuvering multiple documents, and (d) multimedia.  We refer to each task by the name of its targeted feature.  These tasks are described in greater detail in the results section of this report.  Because these four features were selected as potentially the most problematic ones in the current CBAL prototypes, they may not represent the level of challenge associated with the prototypes on the whole.

Task survey.  The task survey was administered for each of the four CBAL tasks presented to the participants.  The survey first asked TVIs to indicate whether they had encountered a similar feature in their teaching or in assessment.  They were then asked to rate the severity of the accessibility challenge for each task for four groups of students with visual disabilities (see Table 1).  Each group is defined by its primary means of accessing text: braille, audio, 18 point or larger font, and 14 to 17 point font.

Table 1: Four reference groups of students with visual disabilities

Group   Primary means of accessing text
1       Braille (hard copy or refreshable)
2       Audio (synthesized, recorded, live reader)
3       18 point or larger font
4       14 to 17 point font

In Table 1, the sequence from group 1 (braille) through group 4 (14 to 17 point font) tends to correspond to a progression of decreasing severity of visual disability.3 Although individuals are grouped by primary means of accessing text, some individuals use a combination of means simultaneously (e.g., audio with braille or large print).  Finally, the task survey asked the participants to suggest solutions for the challenges they identified.

Procedure

TVIs responded to a survey of their backgrounds (as mentioned above) then participated in a face-to-face focus group discussion in Princeton, New Jersey. This section describes briefly the face-to-face focus group session. The session included: (a) an introduction, (b) a review of the tasks, and (c) a discussion and summary.

Introduction. The introduction included an overview of the CBAL reading initiative (including a high-level description of the CBAL reading competency model), a presentation about Evidence Centered Design (to support reasoning about accessibility and assessment validity; Hansen & Mislevy, 2006; Hansen, Mislevy, & Steinberg, 2008; Hansen, Zapata-Rivera, & Feng, 2009; Mislevy, Steinberg, & Almond, 2003; National Research Council, 2004, chapter 6), and a demonstration of the Voiced GRE system.4 

Review of tasks. The review of tasks involved, for each task, describing the targeted feature to the teachers, giving them time to complete the task survey, and then discussing the task as a group.  Each task (and its targeted feature) was presented on a large screen, and some possible accessibility challenges were explained. Teachers were given about 5 minutes to independently complete the task survey for each task. A key part of the survey asked teachers to rate the severity of the accessibility challenge posed by the feature.  Then, for each task, we had an open discussion in which suggestions were elicited from the teachers for improving the accessibility of the tasks.

Discussion and summary. After the discussion of each task individually, there was a final discussion to summarize thoughts about each task.  Researchers made notes of the teachers’ opinions about what the tasks were attempting to measure or teach and what the construct-irrelevant demands of the task might be for students with visual disabilities. Comments were summarized on a whiteboard.

Results

In this initial section of the results we will review what was learned about teachers’ previous experience with tasks similar to the ones used in this study, followed by the teachers’ responses to each of the CBAL tasks. In some cases not all teachers responded to the survey questions. The majority of teachers had encountered the first two features (graphic organizer and embedded graphics) during their teaching but were less familiar with them on assessments.  However, the majority of teachers had not encountered the third and fourth CBAL features (maneuvering multiple documents and multimedia) during teaching or assessments (even though they are frequently encountered on the Internet). Table 2 below shows the results for the questions about encounters with the features in teaching and assessment of students with visual disabilities.

Table 2: Teacher encounters with task features

                                      Encountered in teaching?   Encountered on an assessment?
Feature                               Yes        No              Yes        No
1. Graphic organizer                  4          2               3          2
2. Embedded graphics                  4          2               3          2
3. Maneuvering multiple documents     2          3               2          3
4. Multimedia                         2          4               1          5

In the remainder of the results section we will provide a task description and teacher feedback for each of the four CBAL tasks. The task description includes a detailed description of the task along with the targeted feature as understood by the researchers prior to the focus group.  The teacher feedback section includes ratings of the severity of the accessibility challenges posed by the feature for the four different groups of students with visual disabilities and potential solutions suggested by the teachers.  For all the tasks, the discussion by TVIs regarding the severity ratings focused largely on braille users (group 1) and audio users (group 2) rather than low vision users (groups 3 and 4). This emphasis on groups 1 and 2 is reflected in the emphasis in the descriptive material below.

Task 1: Graphic Organizer

Task description. This task, as shown in Figure 1, is a reading item that uses a graphic organizer.  This particular graphic organizer (also called a concept map) is a diagrammatic structure for presenting concepts or ideas and the relationships among them. The graphic organizer was chosen as a tool for CBAL in part because of its frequent use in classroom instruction and its capacity to reduce memory load. In particular, the use of graphic organizers may benefit students whose disabilities impact working memory (e.g., some types of learning disabilities). At the same time, the graphic organizer may impose an accessibility challenge for students with visual disabilities.  During discussions of this item the teachers were told that this task was designed to measure the student’s ability to “chunk and organize discourse structure.”  As presented in Figure 1, the organizer has a hierarchical structure with a set of nodes and interconnecting lines. This item asks the student to complete the partially filled out organizer based on what they have read about the benefits of school uniforms. This item uses a click-to-select, click-to-paste response format. Specifically, a student clicks on a statement from a bulleted list of possible benefits of school uniforms and then clicks on an empty node of the graphic organizer to transfer the statement from the list to the node.

This item presents several challenges for students with visual disabilities. For individuals with low vision, enlarging the font size might make it impossible to see the whole screen at once, resulting in the need for the student to scroll. In doing so, students would need to remember information that is not currently on screen, thereby increasing the working memory load. For individuals who are blind and rely on audio, the challenges are much more numerous and serious. Without sight, one cannot know where to click the mouse or easily grasp the structure of the organizer and other parts of the item. This inherent difficulty affects not only the person’s access to the bulleted list and the graphic organizer, but also the controls (“Next”), progress indicators (“Question number 18 of 29”), and other content (directions, other important documents).

Figure 1. Graphic organizer presented to teachers

For a detailed description of Figure 1, please see Appendix A.

Teacher Feedback. After an overview of the task, teachers were asked to independently rate the severity of the accessibility problems for four target groups.  Table 3 displays the results: five of the teachers indicated this task was a severe accessibility problem for braille readers; four of the teachers felt that this was a severe accessibility problem for audio users, and two felt that the task was a moderate challenge for audio users. (As mentioned earlier, the sequence from group 1 through group 4 involves, by and large, decreasingly severe visual disability.) Note that for the graphic organizer, the more severe the visual disability (i.e., the lower the group number), the greater the severity of the accessibility problem. This basic pattern, as will be seen below, was observed for all four tasks.

Table 3: Severity of accessibility problems with the graphic organizer for four groups

Group   Primary mode of accessing text   Severe   Moderate   Mild or none
1       Braille                          5        0          0
2       Audio                            4        2          0
3       18 point or larger font          2        1          1
4       14 to 17 point font              0        2          3

The teachers offered several potential solutions. Many of the solutions were typical testing accommodations (i.e., supplementing with hard copy braille, using a braille transcriber,5 or providing extra time) that are provided routinely to group 1 and 2 students by large-scale testing programs. Other potential solutions involved more innovative alternate supplemental formats of the item (i.e., creating an outline in a text file or describing the outline/hierarchy appropriately). Still other solutions focused on alternate formats for the question.  These suggestions included providing multiple choice options to allow students to choose between several outlines or to use pull-down menus. One teacher who had rated the graphic organizer as a “severe” problem for groups 1 and 2 indicated that with a combination of braille and audio, the task could be adapted so that it would no longer present a severe problem.  Several of the recommendations made by the teachers would require audio and tactile rendering and significant changes to layout and navigation.  A discussion of how some of these changes might be implemented is found in Appendix B.
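The teachers’ suggestion of rendering the organizer as a linear text outline can be illustrated with a short sketch. The code below is only a hypothetical illustration, not part of the CBAL prototypes: the node structure and labels are invented, and the point is simply that a hierarchical organizer can be flattened into an indented outline that reads sensibly in strict linear order via audio or refreshable braille.

```python
# Illustrative sketch (hypothetical structure, not from the CBAL prototypes):
# rendering a hierarchical graphic organizer as an indented text outline that
# a screen reader or refreshable braille display can present linearly.
# Empty nodes are announced as blanks to be filled in.

def organizer_to_outline(node, depth=0):
    """Return a list of outline lines for one node and its children."""
    label = node.get("label") or "[blank: choose a statement from the list]"
    lines = [("  " * depth) + "- " + label]
    for child in node.get("children", []):
        lines.extend(organizer_to_outline(child, depth + 1))
    return lines

# Hypothetical organizer for a school-uniforms item: a root claim with two
# supporting nodes, one of which the student must still fill in.
organizer = {
    "label": "School uniforms are beneficial",
    "children": [
        {"label": "Uniforms reduce peer pressure", "children": []},
        {"label": None, "children": []},  # empty node awaiting a statement
    ],
}

print("\n".join(organizer_to_outline(organizer)))
```

A student could then respond by pairing each blank in the outline with a numbered statement from the list, avoiding mouse-based click-to-paste entirely.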

Task 2: Embedded Graphics

Task description. This task has several embedded graphics, including a pie chart, a table, a bar chart, and a graph, representations routinely found in paper as well as in electronic instructional materials, though less commonly in current reading or English/language arts (ELA) assessment materials. The user scrolls the screen vertically and, in normal magnification, can see one or two of the graphics at a time. Figure 2, shown and described below, shows the instructions and the first graphic. In response to questions posed by the TVIs about the construct being measured, the CBAL reading expert responded that the task was designed to measure the student’s ability to refute a claim. This intention is supported by the text in the prompt, which states: “Select the graph that would most weaken this argument” (see Figure 2).  In CBAL reading, such items are used to assess the student’s facility in translating flexibly among different meaning representations—in this case text and graphics.  Multiple representations are common in academic materials as well as in the real world. 

The fact that a student must use information from multiple graphics to answer a single question, plus the fact that scrolling is needed to see all the images (for those who can see) makes this item an accessibility challenge even for people without visual impairments.  Even without magnification, only one graph can be seen on screen at a time (and only the first one can be seen at the same time as the instructions and stem). It is necessary to scroll the screen vertically to see the other graphs.  Therefore, all test takers are required to keep some information in memory.  Magnification for test takers with low vision would likely require them to scroll horizontally as well, making it unlikely that any of the graphs could be seen at once in their entirety.  These users would need to add a graphical mental map, thereby increasing the memory load already required by the presentation format. Test takers without sight would need to rely on tactile graphics and/or descriptions. In the latter case, the working memory load would likely be increased significantly.

Figure 2. Graphic, one of four used in a question

For a detailed description of Figure 2, please see Appendix A.

Teacher Feedback. As shown in Table 4, for braille users, three teachers rated embedded graphics as a severe accessibility problem and two rated it as a moderate problem in the context of this task. One teacher indicated that one “cannot use refreshable braille for graphics.”6 For audio users, three teachers rated embedded graphics as presenting a severe accessibility problem and one rated it as a moderate problem.  The number of graphics and the variety of graphic types in a single item struck some TVIs as excessive. In light of these issues, several teachers questioned whether interpretation of graphical information was properly part of the reading construct, given the consequences for students who are blind or have low vision.  However, such a construct revision could itself have unintended consequences for the great many students who, on a daily basis, must be able to comprehend graphical material in the context of reading both academic and real-world material.

Much of the discussion of this task focused on how the task might still be made accessible, under the assumption that being able to shift between textual, graphical, and/or multimedia representations is part of the construct. For example, assuming that understanding of “graphical” information does not necessarily mean visually perceived graphical information, then graphical information can more readily be presented as raised line (tactile) drawings or perhaps even as auditory descriptions of the graphics.  The ability to scroll on a screen to see several graphs was assumed not to be part of the intended construct, but rather a consequence of the presentation technology (although such presentation is frequently found on the Internet).  A number of solutions were suggested.  Several teachers cited the importance of carefully crafted descriptions of the graphs or charts, e.g., texts describing the data points and describing or summarizing the trends.  Another suggested solution was to provide an Excel spreadsheet file containing the data, which could then be read aloud to students using screen reader software.7  Additional suggestions included providing tactile drawings, reducing the number of graphics, making all charts the same type (e.g., pie chart), and providing manipulatives.
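The teachers’ recommendation of carefully crafted descriptions that state the data points and summarize the trend can be sketched in code. The sketch below is purely illustrative, assuming hypothetical chart data (the titles, units, and values are invented, not drawn from the CBAL task); a production item would of course use human-authored descriptions reviewed for construct fidelity.

```python
# Illustrative sketch (hypothetical data, not from the CBAL task): generating
# a text description of a simple data series that first lists the individual
# data points and then summarizes the overall trend, in the spirit of the
# teachers' recommendation for students who cannot see the graphs.

def describe_series(title, unit, points):
    """points: list of (category_label, value) pairs, in display order."""
    parts = [f"{title}. The chart shows {len(points)} values, in {unit}."]
    for label, value in points:
        parts.append(f"{label}: {value} {unit}.")
    values = [v for _, v in points]
    if values[-1] > values[0]:
        trend = "an overall increase"
    elif values[-1] < values[0]:
        trend = "an overall decrease"
    else:
        trend = "no overall change"
    parts.append(f"Across the categories there is {trend}, "
                 f"from {values[0]} to {values[-1]} {unit}.")
    return " ".join(parts)

# Hypothetical example: disciplinary referrals by year.
print(describe_series(
    "Disciplinary referrals by year", "referrals",
    [("2005", 120), ("2006", 95), ("2007", 80)],
))
```

Such a description can be rendered by a screen reader or embossed in braille, and it pairs naturally with the suggested spreadsheet alternative, which carries the same data in tabular form.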

Table 4: Severity of accessibility problems with embedded graphics for four groups

Group   Primary mode of accessing text   Severe   Moderate   Mild or none
1       Braille                          3        2          0
2       Audio                            3        1          0
3       18 point or larger font          2        1          0
4       14 to 17 point font              1        2          1

Task 3: Maneuvering Multiple Documents

Task description. This task requires the student to navigate among several tabs or documents in order to answer the question. The tasks are illustrated in Figures 3 and 4. The reading expert indicated that this task was intended to measure the ability to synthesize information across texts (i.e., to identify similarities or differences between texts, or to form a conclusion jointly based on them). Some participants questioned the extent to which skills such as remembering what is contained in several documents are part of the construct, even though working with multiple documents is key to successful performance in advanced academic settings as well as in many job environments. It was recognized that if some ability to remember what was in multiple documents were considered construct-relevant, great care would need to be taken to keep the memory requirement reasonable. This is an important issue because maneuvering between multiple documents is often much more time-consuming for students who are using media such as braille, large print, and audio, and this added time may impose a memory requirement on visually impaired students that is far greater than for students without any visual impairment.

This task included questions about multiple articles (documents), which required students to click on tabs to access the different articles.  While this item used a fairly intuitive interface for sighted users, as currently implemented it cannot be used at all by blind users working with a keyboard and screen reader. Specifically, the tabs and other controls did not make themselves “known” to a screen reader and could not all be reached through keystrokes. Even if that basic problem were remedied, this task would create navigational complexities similar to those described for the graphic organizer task.  In addition, for this task, audio users would need to hold a large amount of information in working memory due to the use of multiple articles (a problem that they might also encounter in real-world academic situations and that, therefore, might be considered part of the construct intended for measurement).

Figure 3. Maneuvering multiple documents, task 24

For a detailed description of Figure 3, please see Appendix A.

Figure 4. Maneuvering multiple documents, task 25

For a detailed description of Figure 4, please see Appendix A.

Teacher feedback. As shown in Table 5, all six teachers rated maneuvering multiple documents as a severe accessibility problem for braille users in the context of this task. One teacher mentioned the considerable difficulty of accessing tables using a refreshable braille display, due to limitations in the technology. For audio users, four teachers rated the problem as severe, and one rated it as moderate.

Table 5: Severity of accessibility problems with maneuvering multiple documents for four groups

Group   Primary mode of accessing text   Severe   Moderate   Mild or none
1       Braille                          6        0          0
2       Audio                            4        1          0
3       18 point or larger font          2        3          0
4       14 to 17 point font              1        2          1

Among the solutions offered by the teachers were changes to the layout and different response formats. One suggested layout change was to have one document with links to each point of view (instead of each point of view having its own document). Several suggestions were made for the response formats. One was to provide a more linear format for responding (e.g., for each person, simply indicating whether that person supports, opposes, or neither supports nor opposes school uniforms). Others included using drop-down menus rather than drag-and-drop to fill in the slots of the table; eliminating the table (for responding); enabling a cut-and-paste capability; and allowing students to highlight (mark up) portions of the text for later attention, which the delivery system would use to provide auditory and visual cues so that students can identify the text on their return. A discussion of how some of these changes might be implemented is found in Appendix B.
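The "linear format" and drop-down suggestions can be sketched concretely. The snippet below generates, for one person, a labeled drop-down menu in place of a drag-and-drop table slot; the function name, identifiers, and option wording are illustrative assumptions, not part of the actual prototype.

```typescript
// Sketch (hypothetical helper) of the teachers' linear-response
// suggestion: one labeled drop-down per person, instead of dragging
// phrases into a table.

type Stance = "supports" | "opposes" | "neither supports nor opposes";

function renderStanceMenu(person: string, stances: Stance[]): string {
  const options = stances
    .map((s) => `  <option value="${s}">${s}</option>`)
    .join("\n");
  // A <label> tied to the <select> lets a screen reader announce
  // whose stance is being chosen; no pointer interaction is required.
  const id = `stance-${person.toLowerCase().replace(/\s+/g, "-")}`;
  return (
    `<label for="${id}">${person}'s view on school uniforms</label>\n` +
    `<select id="${id}" name="${id}">\n${options}\n</select>`
  );
}

console.log(renderStanceMenu("Alison Dupres",
  ["supports", "opposes", "neither supports nor opposes"]));
```

Because a native select element is keyboard-operable and exposed to screen readers by default, this format avoids both the pointer requirement and the custom-control problems of drag-and-drop.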

Task 4. Multimedia

Task description. In this task students are required to watch a short video (having both audio and animation), read a poem that was the inspiration for creating the video, and then answer questions that require the student to compare and contrast the poem with the video. The task uses two tabs, one for the poem and the other for the video.

Teacher feedback. As shown in Table 6, for braille users, five teachers rated multimedia in the context of this task as a severe accessibility problem, and one rated it as moderate. For audio users, four teachers rated the problem as severe, and one rated it as moderate. The multimedia content was identified as largely inaccessible to students with severe visual or auditory impairments because it relies explicitly on sight and sound rather than text. In addition, several teachers indicated that it may be impossible to make the task accessible, or at least impossible to make it accessible to students who are blind without undermining validity.

Solutions suggested for making the task accessible to individuals with low vision included improving the color contrast and enlarging the video. In addition, several teachers emphasized the need for captions and audio descriptions. Additional changes could also be made to the interface for playing and pausing the video, such as keyboard controls and enlarged icons for the mouse-operated controls. However, teachers also noted that even if a keyboard and audio interface for the controls were added, it may be difficult to make this task accessible. For example, in some cases a rich visual track would require so much audio description that it could interfere with comprehension of the poem.

Table 6: Severity of accessibility problems with multimedia for four groups

Group   Primary mode of accessing text   Severe   Moderate   Mild or none
1       Braille                          5        1          0
2       Audio                            4        1          0
3       18 point or larger font          3        1          0
4       14 to 17 point font              2        2          0

Cross-cutting Teacher Feedback

In addition to task-specific feedback, several comments applied to all of the tasks or to assessment designs in general.  Some teachers emphasized the importance of assessment designers understanding how students with blindness and low vision access materials. Another cross-cutting theme was the potential fairness issue raised by imposing heavy demands upon students’ nontargeted knowledge, memory, and physical capabilities.

The TVIs suggested that, as a general rule, the format and layout should be consistent (both within and among test items) and that a tutorial is essential when the test format is novel. When a task's display contained multiple "panes" (or regions), the instructions were sometimes on the left, sometimes at the top of the screen, and sometimes in some other location. While such inconsistency is to be expected in research prototypes that are constantly evolving, it can lead to confusion and slow the student's progress through the test. The teachers expressed the need to involve TVIs as well as other experts (in accessibility, content, pedagogy, and assessment development) in the assessment development process. They also emphasized that essential content should be included in the test even if it is difficult to make accessible; that is, the essential content of assessments should not be reduced in rigor for students with visual disabilities.

Discussion and Implications

In summary, data were collected from a focus group with six teachers of students with visual impairments (TVIs). As expected, most teachers perceived that the CBAL reading tasks presented significant accessibility issues for students with visual disabilities. Furthermore, there was a tendency for the severity of accessibility challenges to correlate with the severity of the visual disability (e.g., low vision versus blindness). This is a useful reminder that the accessibility of a task may depend not only on efforts to design accessible tasks but also on the nature of the test taker’s disability. The challenge is to match appropriate accessibility features to the needs of each test taker while at the same time being careful to ensure that no task feature (accessibility-related or otherwise) undermines the validity of the assessment results. Implications of this study include the need for improved methods for designing assessment tasks that take into account many factors—among them, the construct (targeted skills) being measured, the characteristics of the student, and the nature of the tasks. The next section provides several recommendations to move in that direction. In order to provide context for these recommendations, it is important to keep in mind the limitations of the study and the nature of the CBAL research and development program.

Limitations of the Study

Several limitations of this study should be noted, any or all of which could have affected the results. First, the sample of items selected for review consisted of those likely to be the most problematic for students with visual disabilities; as such, these items may not represent CBAL reading tasks in general. Second, the teachers constituted a small convenience sample that may not fully represent the thinking of those who teach this population or the thinking of members of the population itself. Third, the teachers were given only a high-level overview of the relevant CBAL competency models and perhaps not enough detail about the rationale behind them, including the real-world skills in those models, how those skills were intended to be reflected in the items reviewed, or the extent to which those same skills might also be important for visually impaired students. Finally, the discussion focused largely on students with the most severe visual disabilities, who may be the most challenging segment of the population from an accessibility perspective.

Conclusions

This section presents five recommendations for the development of accessible TEAs in areas such as CBAL reading: (a) provide a precise definition of the construct to be measured, (b) avoid cumulatively excessive requirements for nontargeted skills, (c) follow accessibility guidelines and best practices to the extent feasible, (d) involve individuals with diverse expertise during the design and development process, and (e) consider multiple accessibility strategies. It concludes with areas for future work.

Recommendations

The following recommendations are based on feedback from the TVIs.

A. Provide a precise definition of the construct to be measured. What a task or assessment is intended to measure was a major theme of the discussions with TVIs. In order to guide accessibility decisions for innovative TEAs, there must be a precise definition of the construct, that is, the competency model (Thompson, Johnstone, & Thurlow, 2002; see the first element of "universally designed assessments"). This involves identifying the knowledge, skills, and abilities (KSAs), or competencies, that are part of the construct (targeted KSAs). However, it can also be very helpful to identify key KSAs that are not part of the construct (nontargeted KSAs) (Hansen, Mislevy, Steinberg, Lee, & Forer, 2005; Hansen & Mislevy, 2006). For example, for the vast majority of educational assessments, the ability to see is likely not part of the construct. A clear definition of the construct facilitates the identification of accessibility barriers: if a student's own capabilities cannot meet the requirement for a nontargeted KSA, such as sight, imposed by the testing situation, then an accessibility barrier likely exists. Identification of the barrier leads to consideration of possible strategies for addressing it. First, one can reduce the requirement for the nontargeted KSA by providing accommodations; for example, reading the content of a test aloud can reduce or eliminate the demand for sight. Second, one can increase the student's capability in that KSA, which may enable the student to meet the nontargeted requirement. In many cases in which the student's limited capability in a given KSA is due to a disability (e.g., the limitation on sight associated with blindness), the KSA may not be amenable to increase, placing additional importance on the first strategy (the use of accommodations). Notwithstanding the importance of reducing requirements for nontargeted KSAs, reducing requirements for targeted KSAs is typically inappropriate because doing so would undermine validity. A clear definition of the construct is critical to identifying accessibility barriers and developing strategies for dealing with them.

B. Avoid cumulatively excessive requirements for nontargeted skills. There is a need to recognize and avoid nonobvious (hidden) requirements for nontargeted KSAs that are cumulatively excessive. For example, if "hold content in working memory" is a nontargeted KSA, requirements for that KSA may become excessive even when requirements for more obvious KSAs (sight, hearing) are manageable. Audio rendering of a complex visual interface, for instance, may increase the working memory load above what a student with a visual disability can handle, resulting in an accessibility barrier. The excessive requirement for working memory may be related to requirements for a range of other KSAs, such as knowing how to use assistive technologies, navigating within the assessment platform, and being able to remember ideas over extended periods. Any one of these individual requirements might be manageable, but their sum may be cumulatively excessive. Another example is the working memory load associated with the use of screen magnification, which may require the user to keep in memory large portions of content that become invisible outside the viewing area. Thus, both obvious and hidden accessibility barriers can undermine the validity of the assessment results.8 Cumulatively excessive nontargeted requirements might be at least partially addressed by ensuring sufficient extra time, providing a tutorial and practice with the computer-based testing platform (an approach taken by the CBAL program for formative tasks), allowing students to take notes or use mnemonic aids, and giving extra breaks.

C. Follow accessibility guidelines and best practices to the extent feasible. Some of the problems identified by TVIs might be addressed by following existing accessibility guidelines and best practices. For example, the World Wide Web Consortium (W3C) Web Content Accessibility Guidelines (WCAG) 2.0 (Caldwell, Cooper, Reid, & Vanderheiden, 2008) emphasize the importance of allowing users to have content read aloud (e.g., via synthesized speech) and of keyboard operability (Caldwell et al., 2008; see guidelines 1.1 and 2.1), both of which are important to individuals who are blind. Guidance for implementing such features is available from the W3C and several other sources (see Architectural and Transportation Barriers Compliance Board, 2000 [Section 508]; Dolan et al., 2010; Hansen & Mislevy, 2006; Heath & Hansen, 2002; Thompson, Thurlow, Quenemoen, & Lehr, 2002; Thurlow, Lazarus, Albus, & Hodgson, 2010; IMS, 2011). Key accessibility features of a computer-based testing platform for students with visual impairments include: voicing of both content and navigation controls; text descriptions for nontext content (e.g., graphics, video, audio, tables); keyboard operability; visual enhancement (enlargement/magnification of text and graphics, color modification of text and background); support for refreshable braille displays; and practice and familiarization in the use of accessibility features. A variety of assessment delivery systems now implement several of these features (especially visual enhancement and read-aloud of text content, though generally not of navigation controls). As of this writing, the use of refreshable braille displays with computer-based tests is just beginning to be explored.

D. Involve individuals with diverse expertise during the design and development process. It is very helpful if, at any point in the assessment development process, assessment designers can draw upon experts in areas such as accessibility, technology, education of students with visual disabilities, special education in the content domain (e.g., reading), and research design. For most organizations, ensuring access to such diverse expertise will involve not only building expertise within the organization but also drawing upon outside experts (e.g., teachers of students with visual impairments and assistive technology experts). Access to such experts may be facilitated by becoming involved in organizations that are developing accessibility standards (e.g., the IMS Global Learning Consortium, WAI/W3C). The expertise of an organization's staff can also be improved through participation in teams that are developing prototypes and systems for accessible assessments. To the extent feasible, items and systems to be used by individuals with disabilities should be evaluated with a sample of such individuals.

E. Consider multiple accessibility strategies. Multiple strategies are needed to address the range of accessibility challenges identified by the TVIs. The following are offered as examples.

Provide a user interface with a consistent and simple format and layout. Accessible interfaces tend to be consistent, logical, and relatively simple, with the format and layout of tasks (e.g., the location of controls and content) kept consistent. A response interface that is appropriate for a blind user may allow the user to select, from a list of options, a choice that describes a task element (e.g., each of several people's views on an issue) rather than interactively filling out a table with the same information. Policies and practices for the use of highly visual representations and concepts need to be developed in consideration of the nature of the construct and an understanding of the accessibility implications.

Consider trade-offs between short and extended tasks. A task may arguably be only as accessible as its least accessible component, which, for an extended task (where many student actions may be required to complete the task), means more potential opportunities for inaccessibility. Such a consideration argues for using short (or discrete) tasks whenever possible. The advantages of this approach need to be weighed against the value of extended tasks in measuring the intended construct. For constructs such as that described by the CBAL reading competency model or by the English language arts Common Core State Standards (Council of Chief State School Officers & National Governors Association, 2010), extended tasks are essential. As noted, some CBAL reading tasks require students to copy or remember content from multiple documents, tabs, or information sources and fill in a slot in a form. The use of extended tasks is particularly important for assessing the competencies that students must develop to be college- and career-ready, because these same operations are encountered in real-world activities on a daily basis. Removing them from assessment, and from instruction, would be a disservice, certainly to the general student population and, arguably, to those with visual impairments as well. Resolving such issues depends on assessment designers having a clear understanding of the construct that the assessment is intended to measure and on the availability of task situations that allow that construct to be measured for students with diverse access needs.

Consider item replacement and deletion, as well as alternative items, when accessibility challenges are difficult. Where an item cannot be made accessible, one should seek to replace it with another item that can be. For example, one should braille those tasks that are braillable; where a task is not braillable and the content must be assessed, one should provide another task that measures the same content at the same difficulty level and is braillable. Where an item tests skills that are essential but cannot be made accessible (and cannot be replaced with another item that measures the same skills accessibly for virtually all students or be adapted to be accessible for students with visual impairments), that item might be replaced by one used only with visually impaired students; the original item might still be used for the general population to the extent that it measures skills that are part of the construct. While not the focus of this project, in some cases one may need to consider using "alternative" items that are specially created and structured for delivery specifically to students with visual disabilities. Creation or selection of replacement or alternative items must be done with caution so as to preserve content coverage, difficulty, and other essential measurement objectives; how such comparability might be achieved is beyond the scope of this project. In some cases, one may be able simply to delete the inaccessible item, without replacement, without undermining the validity of the assessment results. Where none of the foregoing strategies works, one may need to identify a replacement construct or skill that can be assessed. This approach is generally consistent with that set forth by Thurlow et al. (2009).9

Future Work

It seems clear that prototype innovative TEAs such as those examined in this project need to be made more accessible to students with visual disabilities. Future work should try out some of the suggestions provided by the teachers; following some of them may entail moving to assessment delivery platforms that are more accessible, and we are exploring such platforms. We are also exploring the use of evidence-centered design (ECD) to design items that are accessible while maintaining or enhancing validity. This study was a first step toward more accessible innovative items. There is no doubt that innovative assessment prototypes can be created that do better with respect to accessibility; whether such prototypes can succeed both in fully assessing the competencies of interest and in being accessible remains an open question. Future efforts should focus on maximizing accessibility while maintaining or enhancing validity.

Acknowledgements

The authors gratefully acknowledge the support of the ETS Research Allocation, through the Validity Initiative and the CBAL Initiative, for this project. They also acknowledge the contributions of Tenaha O'Reilly, for presenting background on CBAL reading and for participating in discussions about the competency model and other aspects of CBAL; Kathleen Sheehan, for helping ensure CBAL reading program participation in the study; and Randy Bennett, Don Powers, Ruth Loew, and James Carlson, for their helpful suggestions. Finally, and most importantly, we thank the six focus group participants for their time and excellent feedback.

References

Architectural and Transportation Barriers Compliance Board. (2000). Electronic and Information Technology Accessibility Standards (Section 508). Retrieved from http://www.access-board.gov/sec508/standards.htm

Bechard, S., Sheinker, J., Abell, R., Barton, K., Burling, K., Camacho, C. . . . Tucker, B. (2010). Measuring cognition of students with disabilities using technology-enabled assessments: Recommendations for a research agenda. Journal of Technology, Learning, and Assessment, 10(4). Retrieved from http://ejournals.bc.edu/ojs/index.php/jtla/issue/view/143

Bennett, R. E. (2011). CBAL: Results from piloting innovative K-12 assessments (Research Report ETS RR-11-23). Princeton, NJ: Educational Testing Service.

Bennett, R. E. (2010). Cognitively based assessment of, for, and as learning (CBAL): A preliminary theory of action for summative and formative assessment. Measurement: Interdisciplinary Research and Perspectives, 8(2–3), 70–91.

Bennett, R. E., & Gitomer, D. H.  (2009).  Transforming K-12 assessment: Integrating accountability testing, formative assessment, and professional support.  In C. Wyatt-Smith & J. Cumming (Eds.), Educational assessment in the 21st century: Connecting theory and practice (pp. 43-61).  New York, NY: Springer.

Bennett, R.E., Persky, H., Weiss, A., & Jenkins, F. (2010). Measuring problem solving with technology: A demonstration study for NAEP. Journal of Technology, Learning, and Assessment, 8(8). Retrieved from http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1627

Caldwell, B., Cooper, M., Reid, L. G., & Vanderheiden, G. (2008). Web content accessibility guidelines 2.0: W3C recommendation. Retrieved from http://www.w3.org/TR/WCAG20/

Council of Chief State School Officers (CCSSO) & National Governors Association (NGA) (2010). Common core state standards for English language arts & literacy in history/social studies, science, and technical subjects.  Retrieved from http://www.corestandards.org/assets/CCSSI_ELA%20Standards.pdf

Deane, P. (2012). Rethinking K-12 writing assessment. In N. Elliott and L. Perelman (Eds.), Writing Assessment in the 21st Century, pp.  87-100. New York: Hampton Press.

Dolan, R. P., Burling, K. S., Rose, Beck, R., Murray, E., Strangman, N., Jude, J., Harms, M., Way, W., Hanna, E., Nichols, A., & Strain-Seymour, E. (2010). Universal design for computer-based testing guidelines. Iowa City, IA: Pearson. Retrieved from http://www.pearsonedmeasurement.com/

Embretson, S. (2010).  Cognitively based assessment and the integration of summative and formative assessments.  Measurement: Interdisciplinary Research & Perspectives, 8(4), 180-184.

Federal Register. (2010, April 9). Department of Education: Overview information; Race to the Top Fund Assessment Program; notice inviting applications for new awards for fiscal year (FY) 2010. Federal Register, 75(68), 18171–. Retrieved from http://edocket.access.gpo.gov/2010/pdf/2010-8176.pdf

Hansen, E. G., & Mislevy, R. J. (2006). Accessibility of computer-based testing for individuals with disabilities and English language learners within a validity framework. In M. Hricko & S. Howell (Eds.), Online assessment and measurement: Foundation, challenges, and issues. Hershey, PA: Idea Group Publishing.

Hansen, E. G., Mislevy, R. J., & Steinberg, L. S. (2008). Evidence centered assessment design for reasoning about testing accommodations in NAEP reading and mathematics (Research Report 08-28). Princeton, NJ: Educational Testing Service.

Hansen, E. G., Mislevy, R. J., Steinberg, L. S., Lee, M. J., & Forer, D. C. (2005). Accessibility of tests for individuals with disabilities within a validity framework. System: An International Journal of Educational Technology and Applied Linguistics, 33(1), 107-133.

Hansen, E. G., Zapata-Rivera, D., & Feng, M. (2009, April). Beyond accessibility: Evidence centered design for improving the efficiency of learning-centered assessments. Paper presented at the meeting of the National Council on Measurement in Education, San Diego, CA.

Heath, A., & Hansen, E. G. (2002). Guidelines for testing and assessment (Section 9). IMS guidelines for developing accessible learning applications. IMS Global Learning Consortium. Retrieved from  http://www.imsproject.org/accessibility/accv1p0/imsacc_guidev1p0.html

IMS (2011).  APIP content & user profile tagging map. Lake Mary, FL: Author. Retrieved from http://www.imsglobal.org/community/forum/messageview.cfm?catid=110&threadid=662

Individuals with Disabilities Education Improvement Act. (2004). Public Law 108–466. Retrieved from http://www.copyright.gov/legislation/pl108-446.pdf

Kopriva, R. (2008). Improving testing for English language learners. New York, NY: Routledge.

Linn, R. L.  (2010).  Commentary: A new era of test-based educational accountability.  Measurement: Interdisciplinary Research & Perspectives, 8, 145–149.

Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1, 3-67.

National Center for Accessible Media (NCAM) (2008).  Effective practices for description of science content within digital talking books.  Retrieved from http://ncam.wgbh.org/publications/stemdx/index.html

National Research Council. (2004). Keeping score for all: The effects of inclusion and accommodation policies on large-scale educational assessment. Committee on Participation of English Language Learners and Students with Disabilities in NAEP and Other Large-Scale Assessments. Judith A. Koenig and Lyle F. Bachman (Eds.). Washington, D.C.: National Academy of Sciences.

O’Reilly, T., & Sheehan, K. M.  (2009a).  Cognitively based assessment of, for and as learning: A framework for assessing reading competency (Research Report 09-26).  Princeton, NJ: Educational Testing Service.  Retrieved from http://www.ets.org/Media/Research/pdf/RR-09-26.pdf

O’Reilly, T., & Sheehan, K. M.  (2009b).  Cognitively based assessment of, for and as learning: A 21st century approach for assessing reading competency (Research Memorandum 09-04).  Princeton, NJ: Educational Testing Service.

Quellmalz, E. S., & Pellegrino, J. W. (2009). Technology and testing. Science, 323(5910), 75-79.

Salend, S. (2009). Using technology to create and administer accessible tests. Teaching Exceptional Children, 41(3), 40-51.

Sweller, J., van Merriënboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251-296.

Technology Assisted Reading Assessment Project. (2007). Technology assisted reading assessment. Minneapolis, MN: University of Minnesota, National Accessible Reading Assessment Projects (NARAP). Retrieved from http://www.naraptara.info/

Thompson, S. J., Johnstone, C. J., & Thurlow, M. L. (2002). Universal design applied to large scale assessments (Synthesis Report 44). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://education.umn.edu/NCEO/OnlinePubs/Synthesis44.html

Thompson, S. J., Thurlow, M. L., Quenemoen, R. F., & Lehr, C. A., (2002). Access to computer-based testing for students with disabilities (Synthesis Report 45). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://education.umn.edu/NCEO/OnlinePubs/Synthesis45.html

Thurlow, M. L., Laitusis, C. C., Dillon, D. R., Cook, L. L., Moen, R. E., Abedi, J., & O’Brien, D. G. (2009). Accessibility principles for reading assessments. Minneapolis, MN: National Accessible Reading Assessment Projects. Retrieved from http://www.narap.info/publications/reports/NARAPprinciples.pdf

Thurlow, M., Lazarus, S. S., Albus, D., & Hodgson, J. (2010). Computer-based testing: Practices and considerations (Synthesis Report 78). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/nceo/OnlinePubs/Synthesis78/Synthesis78.pdf

Tucker, B. (2009). Beyond the bubble: Technology and the future of student assessment. Washington, DC: Education Sector. Retrieved from http://www.educationsector.org/publications/beyond-bubble-technology-and-future-student-assessment

Appendix A: Figure Descriptions

Unless such a student has and can make use of a tactile representation of the task feature (such as a graphic organizer),10 she or he will rely heavily on a verbal description of its structure and use. Because creating tactile representations for interactive tasks is at best complicated, and because not all blind students are skilled in the use of tactile representations, the figure descriptions are written under the assumption that no tactile representation is in use. Furthermore, each figure description may include some details that might not be included in a figure description for students taking the assessment. These details are included here to inform readers of presentation details that may affect the accessibility of the original figures. A figure description intended for students would use age-appropriate vocabulary. Additionally, an actual description for students might provide more detailed instructions on how to find specific features of the task (e.g., the item stem and bullet points) and how to interact with the interface.

Description of Figure 1

The figure shows location information (Task: CBAL Reading Test Section 1; Question 18 of 29) and “next” and “back” buttons (with the “back” button grayed out) in a color-differentiated bar at the top of the screen. The main portion of the screen displays instructions, a bulleted list, and a graphic organizer consisting of three levels: one top node (labeled “Benefits of School Uniforms”), three second-level nodes branching off the top node (“Benefits for Parents,” “Benefits for Teachers,” “Benefits for Students”), each of which has two bottom nodes branching off below it.  Below the node “Benefits for Parents” are nodes labeled “Would save money,” represented as having been filled out in advance, and “Clothes can change a person,” represented as having been filled out by a student from the bulleted list. Under “Benefits for Teachers” appear a blank node and a node represented as having been filled out in advance from the bulleted list by the phrase, “Fewer students would be sent home for Dress Code Violations.”  Under “Benefits for Students” is a node labeled “Peer pressure about clothing would be reduced,” represented as having been filled in by a student from the bulleted list.

The instructions read:

“More of the chart has now been filled in for you. Move three of the phrases (click on a phrase to select it for moving) from the list below into the proper places in the third row of the chart.

“To move a sentence into the chart, click on the sentence. Then click on an empty space where the sentence belongs. If you change your mind about a sentence, click on it again and then click on the bulleted list again.”

The bulleted list is:

  • Less time would be wasted disciplining students about clothing
  • [blank space, indicating that the entry was moved to the chart during this task]
  • Parents wouldn’t need to argue with kids about what to wear
  • Many jobs require a dress code
  • [blank space, indicating that the entry was moved to the chart during this task]

Description of Figure 2

The top panel is similar to the top panel in Figure 1, and indicates question number 21 of 29. The main panel shows two tabs: “Question / Your Answer” (currently selected) and “Alison Dupres” (not selected). The top of the panel reads:

“Alison makes the following argument:

“And how are all families supposed to be able to afford these uniforms? Not every family can buy whole new wardrobes for their children every year. More and more, families rely on clothes from older brothers and sisters to be handed down. If we have a new school uniform policy, those “hand-me-downs” will be impossible, at least for now. If all uniforms need to be purchased from the school for the same price, nobody will be able to shop around for the best bargains. Our parents already pay taxes for us to go to school—why should they be forced to pay even more money?”

Above the set of graphs is the instruction: “Select the graph that would most weaken this argument.”

The first graph is displayed. It is a pie chart titled, “Different Types of Clothing Purchased by Middle School Parents Across the USA.” The chart has a color-coded legend with percentage data shown next to and just outside each segment of the pie. The data are:

Type          Percent
Footwear      11%
Pants         29%
Shirts        33%
Jackets       9%
Accessories   19%

The remaining three graphs, not shown in this document, were part of the item and are briefly described as follows:

Table with two rows and five columns of data comparing the percentage of students who prefer new clothes to students who prefer hand-me-downs each year from 2004 to 2008. The data show that from 76% to 88% prefer new clothes and correspondingly from 24% to 12% prefer hand-me-downs.

Bar chart representing average expenditures in dollars for clothing by parents of middle school students for different regions of the country. Five regions are shown, with expenditures ranging from approximately $190 in the Southeast to $250 in the Northeast.

Line graph showing the number of parents of students at Wintergreen School who say they use hand-me-downs, for the years 2004 to 2008. Data points are shown for each year, and start at just over 120 parents for 2004, declining to approximately 115, 85, 65, and 10 for the years 2005 to 2008.

Description of Figure 3

The top panel is similar to those previously described, and shows question number 24 of 29.

The main panel has five tabs. The left-most tab, “Question / Your Answer,” is selected. The other tabs indicate arguments made for or against school uniforms by various individuals. The selected screen tab consists of instructions, a bulleted list of seven names, four of which match the nonselected tabs, and a chart with three columns and six data rows. The columns are headed “Oppose School Uniforms,” “Neither Support nor Oppose School Uniforms,” and “Support School Uniforms.” (The column headings are shaded, in all upper case, and centered in their columns). All of the data rows are blank and unshaded.

The instructions read:

“Move the names of the people who OPPOSE school uniforms into the column on the left. Move the names of the people who SUPPORT school uniforms into the column on the right. Move the names of the people who neither SUPPORT nor OPPOSE school uniforms into the middle column.

“To move a name into a column, click on the name. Then click on an empty space where the name belongs. If you change your mind about a name, click on it again and then click on the bulleted list again.”

Description of Figure 4

Figure 4 shows task 25 of 29 (with a similar top panel to the other figures). The screen is divided vertically, with the left side displaying four tabs, one of which is selected. The right side shows a multiple-choice question and a blank area indicating that text is to be pasted there.

The selected tab on the left side shows part of a letter from the principal to parents explaining her views on school uniforms. The letter’s heading is right-justified at the top, and the letter’s body consists of numbered paragraphs displayed ragged-right. Vertical scrolling is required to see the entire letter.

On the right side of the screen is the multiple choice question: “Which individual expressed a viewpoint opposite to Principal Kwo’s belief that school uniforms would save parents money?”  One of the four answer choices is to be selected with the mouse. Each choice is a name corresponding to one of the unselected tabs on the left side of the screen. Clicking on one of those tabs would display the argument written by the named individual. Below the answer choices is the additional instruction: “Copy and paste one or more sentences from one of the documents that supports your choice.”

Appendix B: Implementation Considerations

This appendix includes a detailed discussion of implementation considerations for two types of tasks (tasks which use graphic organizers and tasks which require maneuvering between multiple documents). 

Graphic Organizer

Even where the system provides a means for an audio user who is blind to interact via keyboard and audio, the challenge of accessing a graphic organizer (as used in task 1) can still be significant and require very careful design to address. The brief description that follows describes a possible way the graphic organizer might be delivered via keyboard and audio. It illustrates some of the interface issues that must be considered and the complexities facing the user of even a relatively simple interface. A consideration of those issues should help in deciding whether the best approach is to try to make the current delivery interface accessible or to develop a different format for the task.

Consider how one might attempt to make the current interface accessible via keyboard to an audio user who is blind and so is unlikely to be able to use a mouse.  First, the user would have to be given a description of the organizer’s layout and instructions for interacting with it, ideally accompanied by a tactile representation to assist the user in grasping the spatial layout. A user who does not have or cannot use a tactile representation must rely on a mental map developed with the aid of verbal descriptions and indicators provided during navigation.

One design approach for enabling access by audio users relies only on the tab and enter keys. Other approaches may add special keys for different kinds of movement, e.g., use the up arrow to go up to a higher node and the down arrow to go to a lower node.  There are tradeoffs to either approach: A tab-and-enter approach requires learning very few keys, but is not always efficient—there is no way to jump directly to a desired location, and it can be confusing: when should one use tab and when enter? An interface with more key combinations permits more direct movement, but can be more difficult to learn. The ideal would be to build in multiple approaches, just as the standard Windows interface does.

A tab-and-enter interface might work as follows: In general, when an item is first presented, pressing the tab key would cycle between major clusters of controls, with spoken indications of the control that has focus and its content, if any. For example, when the graphic organizer item is first presented, pressing the tab key might move the focus to the cluster for the item content (stem, bulleted list options); tabbing again moves to the graphic organizer (i.e., its cluster of nodes); again to a cluster of navigation controls (next, back); again to the cluster of document sources (e.g., the passage about the issue of school uniforms); again to feature options (student selection of desired accessibility features); again to the indicators (how many items are done and remaining to be done, time remaining); and so on. Once at a cluster, pressing the enter key selects that entire cluster, after which the tab key moves the focus within the cluster, with each node audibly announced as it gets focus (audio: “Graphic organizer: Root node, that is, the level 1 node.”)

If enter takes a user inside a cluster and tab moves within that cluster (typically in a circular fashion, so that tabbing from the last element returns one to the first element), some other key must be provided to exit from the within-cluster level of navigation to the between-clusters level of navigation. Typically the escape key can be pressed into service for this function, a strategy that is consistent with typical Windows practices.
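The two-level tab/enter/escape model described above can be illustrated with a minimal sketch. This is an assumption-laden prototype, not any actual CBAL implementation: the cluster names and element labels are drawn loosely from the figures, and the speak() stub stands in for a real text-to-speech call.

```python
# Minimal sketch of the two-level focus model: tab cycles between clusters,
# enter descends into a cluster, tab then cycles circularly within it, and
# escape returns to the between-clusters level. Cluster/element names and
# speak() are illustrative assumptions.

class FocusNavigator:
    def __init__(self, clusters):
        # clusters: dict mapping cluster name -> list of element labels
        self.clusters = list(clusters.items())
        self.cluster_idx = 0      # which cluster currently has focus
        self.element_idx = None   # None = between-clusters level

    def speak(self, text):
        print(text)  # stand-in for a text-to-speech call

    def tab(self):
        name, elements = self.clusters[self.cluster_idx]
        if self.element_idx is None:
            # Between-clusters level: cycle to the next cluster.
            self.cluster_idx = (self.cluster_idx + 1) % len(self.clusters)
            name, _ = self.clusters[self.cluster_idx]
            self.speak(f"{name}. Press enter to select.")
        else:
            # Within-cluster level: cycle circularly among the elements.
            self.element_idx = (self.element_idx + 1) % len(elements)
            self.speak(elements[self.element_idx])

    def enter(self):
        if self.element_idx is None:
            name, elements = self.clusters[self.cluster_idx]
            self.element_idx = 0
            self.speak(f"{name} selected. Press tab to move within this cluster.")
            self.speak(elements[0])

    def escape(self):
        if self.element_idx is not None:
            # Exit the within-cluster level back to the cluster level.
            self.element_idx = None
            name, _ = self.clusters[self.cluster_idx]
            self.speak(f"Returned to cluster level: {name}.")

nav = FocusNavigator({
    "Item content": ["Stem", "Bulleted list"],
    "Graphic organizer": ["Root node, level 1", "Benefits for Parents, level 2"],
    "Navigation controls": ["Next button", "Back button"],
})
```

Even this toy version makes the tradeoff concrete: with only three keys, every movement is sequential, so reaching a distant node requires repeated tabbing, which is one reason additional keys (e.g., arrow keys for moving between tree levels) may be worth the added learning burden.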

Other access methods might be implemented to improve navigation within the graphic-organizer tree structure. One approach would be to allow, within the cluster, the use of another key or keys such as the up or down arrows to move between one level of the tree and the next. Another strategy might involve having multiple clusters for the one graphic-organizer tree, perhaps with special keys to move between those clusters.

Regardless of the specific navigation methods provided (which would need to be user-tested and refined before implementation), it is generally important to provide a variety of information to users. Key examples include (a) audio indicators of one’s location and level within the organizer (along with the text, if any, in a given node), and (b) a way of auditorily assisting the user in interacting with the interface. The latter might be an audible prompt (“Level 2, press enter to select”; “Level 2 selected, press tab to move within this level”) or a standard context-sensitive help system that would provide the prompt in response to a help key.

Finally, a means would need to be provided for the student to select one of the bulleted statements and move it to the desired position in the organizer. One possibility would be to enable the standard Windows cut (control-x) and paste (control-v) keystrokes, since these are likely to be familiar to blind students.  Perhaps some use of tab and enter might also be feasible, but using them for cutting and pasting could interfere with their use for navigation.

Suffice it to say, even with keyboard and audio access, it can still be challenging to select and enter one’s response on a task such as this, particularly compared with the intuitive and familiar way that a sighted mouse user could perform the same task. The difficulty of interacting with the interface may materially interfere with the task’s ability to measure the intended construct.

Maneuvering Multiple Documents

One challenge faced in both tasks 3 and 4 was maneuvering between multiple documents.  For example, for Figure 3 a student would need to tab to each article, read it again, then tab back to enter the answer, and finally complete a “click and click” response which poses the same sorts of interface problems as previously discussed.  A similar accessibility challenge was found in task 25 (see Figure 4), which requires copy/paste operations between one of the several documents and the answer area.  Even without the interface issues for the particular platform or item type, finding an accessible way to ask and answer questions about multiple materials is problematic because document switching is a much more difficult task for blind students than for their sighted peers.  For example, for blind students, the intuitive visual interface of tabs and click-and-click is essentially unavailable.  Instead the student who relies on keyboard and audio must work from descriptions and audio cues.  See the discussion of the graphic organizer for a description of some of the issues and difficulties involved.  Students with low vision will (depending on their individual degrees of visual impairment) need to contend with the increased memory requirements resulting from the reduction in the amount of information that can be seen at once, if magnification is available, and/or with the same audio issues (if they use audio) faced by blind students.

To illustrate these problems concretely, this is what would happen for a blind student working with question 24 (from Figure 3), to which a tab/enter/escape and text-to-speech interface had been added.  On entering the item, they would hear something like this:

Question 24 of 29. This question requires you to indicate for each of six people whether they oppose school uniforms, support school uniforms, or neither support nor oppose school uniforms. On screen are a set of tabs. Selecting the first tab displays the directions, list of people, and a table for recording your responses. The rest of the tabs display each person’s statement of their views. Under the tabs are these directions, then the list of six people, followed by a table with three columns, titled from left to right, Oppose school uniforms, Neither support nor oppose school uniforms, Support school uniforms. The table has six rows under each column header. To respond to the question, use the tab and enter keys to return to this question, and then move the names of the people who oppose school uniforms into an empty cell in the column on the left, titled Oppose school uniforms. Move the names of the people who support school uniforms into the column on the right, titled Support school uniforms. Move the names of the people who neither support nor oppose school uniforms into the middle column. To move the names, use the tab and enter keys to move to and select a name, then use the tab key to move to an empty cell in the column where you wish to place each name, then press enter to place the name. Continue to use the tab and enter keys to place each name in one of the columns. If you change your mind about the placement of a name, tab to it, press enter, then tab to the list and press enter again to return the name to the list. You can then move the name to a new location in the table.

As the student enters each element, it is then spoken. For example, if the student tabs to the “Samantha Billings” tab on the top of the screen, he would hear, “Samantha Billings. Press enter to hear the text under this tab. Press escape to return to the tabs.” Samantha’s statement would then be read aloud by the system to the student. Ideally, there would be a means to pause, resume, and to navigate by word, sentence, paragraph, or character. After having listened to enough of Samantha’s statement to determine her position on the issue, the student would then press escape to move back to the tabs, then tab back to the question, then work through the interface to place Samantha Billings into one of the columns.  The student would then need to repeat this process for each of the other names.
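The select-and-place interaction for question 24 can also be sketched as a small data model. Everything here is an assumption for illustration: the shortened column labels, the row count, and the method names are hypothetical, and the names are borrowed from the figure descriptions above.

```python
# Hypothetical model of the "select a name, move to an empty cell, press
# enter to place it" interaction for question 24. Column labels are
# shortened from the figure description; the data structures are assumptions.

class PlacementGrid:
    COLUMNS = ["Oppose", "Neither", "Support"]

    def __init__(self, names, rows=6):
        self.unplaced = list(names)                      # the bulleted list
        self.grid = {c: [None] * rows for c in self.COLUMNS}
        self.held = None                                 # currently selected name

    def select(self, name):
        # Enter on a name in the list picks it up for moving.
        if name in self.unplaced:
            self.unplaced.remove(name)
            self.held = name

    def place(self, column, row):
        # Enter on an empty cell drops the held name there.
        if self.held is not None and self.grid[column][row] is None:
            self.grid[column][row] = self.held
            self.held = None

    def unplace(self, column, row):
        # Enter on a filled cell, then enter on the list, returns the name.
        name = self.grid[column][row]
        if name is not None:
            self.grid[column][row] = None
            self.unplaced.append(name)

grid = PlacementGrid(["Samantha Billings", "Alison Dupres"])
grid.select("Samantha Billings")   # enter on the name picks it up
grid.place("Oppose", 0)            # enter on an empty cell in the left column
```

Note that the model itself is trivial; the accessibility burden lies entirely in the navigation needed to reach each name, each statement, and each empty cell, which the sketch deliberately omits.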

For question 25 (shown in Figure 4), in addition to a way to work through the tabs, there would need to be a keyboard method for marking text in a document for copying, for moving between the two sides of the screen, and for pasting the text into the answer area. Depending on a student’s memory capacity, the student might have to read through (listen to) all of the statements multiple times in order to locate and place the requested sentences.

Endnotes

1 It is also important to make assessments accessible to English language learners (Kopriva, 2008).

2 The competency model is essentially a representation of the construct to be measured by the assessment.

3 The generalization of lower severity for successive group number is not exact. This inexactness may be most prominent in the progress from group 1 (braille) to group 2 (audio), which does not necessarily correspond to a decrease in severity of disability.

4 The Voiced GRE is intended to be usable by test takers who are blind and rely on audio. It is operable by keyboard and provides text-to-speech for both test content and navigation.

5 A braille transcriber determines how to most accurately present information from a source (e.g., text, graphics) into a braille version and then transcribes it into braille.

6 Refreshable braille is limited to braille text, whereas hard copy braille would also provide students with access to tactile graphics, which are sometimes referred to as “braille graphics,” “raised-line graphics,” or “raised-line drawings.”

7 The National Center for Accessible Media (NCAM) provides guidelines for describing graphics and for creating accessible tables (NCAM, 2008).

8 These barriers may also result in a reduction in the learning that some assessments are intended to result in (Hansen, Zapata, & Feng, 2009; Sweller, van Merrienboer, & Paas, 1998).

9 Thurlow et al. (2009) note: “A process to examine the kinds of adjustments that might be acceptable includes checking what the test is intended to measure, determining whether additional accommodations might be provided, and considering dropping items or identifying replacement skills” (p. 18).

10 It cannot be assumed that all blind individuals, even those who can read braille, can make use of tactile graphics. Like braille reading, interpretation of tactile graphics is a skill that must be learned, and which a given student may or may not have mastered.


The Journal of Blindness Innovation and Research is copyright (c) 2014 to the National Federation of the Blind.