Student Assessment for LD-Focused Schools Presentation

Presentation given for the Association of LD Schools


Welcome, everyone.

I'm really happy to be meeting with you. I've had a long relationship with a number of member schools from the Association of LD Schools, so it's great to have this opportunity.

I'm also excited to get your feedback and to share some of the insights my team and I have gained over years of helping students who learn differently, particularly around assessment and intervention.

I like to use the chat quite a bit during the presentation, so feel free to chime in with questions or feedback. That makes things more interactive and helps me tailor the material to the interests of the group.

So just a little background on me.

My whole career has really been focused on students who learn differently. I studied cognitive development at the Harvard Graduate School of Education, where I got my doctorate, and interned at Harvard Medical School. I then worked at a variety of educational technology firms and learning centers for kids who learn differently, and created a number of interventions and assessments, mostly using technology.

And along the way I had this radical idea. I was curious: are interventions actually helping students?

So I looked into the data that was available at the schools we were working with. We analyzed the data, but we really couldn't answer the question. Were the interventions working? The data was too far removed from where the students were in their learning and from where our interventions were meeting students in their learning.

So the next step was to bring in some outside assessments.

Even after bringing in the outside assessments, we weren't able to answer the question, because we still had the same problem: the assessments were far removed from where the students were in their learning and from what we were intervening on. There were other issues as well, which really had to do with the assessments just not being friendly for the students. In some cases they were very distressed, fatigued, and frustrated. We could tell just from watching the students that they weren't the same students we had in our intervention program or in the classroom; they were showing up quite differently on these standardized tests. So we felt we weren't getting a good reading of where the students were in their learning.

It made us question the scores coming back on the assessments, because in some cases there was so much discouragement, frustration, or anxiety with the assessment.

So I started looking into assessment more, and I really came to appreciate how it's one of the unique aspects of the school community: so many people depend on assessment data to fulfill their role.

It's central to the whole operation. When things are going really well with assessment, each stakeholder, each role, is getting the information that helps them do their best work.

And when we're not getting the best assessment data, there can be challenges in all of those different areas.

So I'm curious, if you don't mind putting it in the chat: what is your role at your school, and what questions do you have of assessment data?

What's your role, and what do you most want to learn or look for when you get results back from your students' assessment process?

What I came to appreciate is that there's a basic paradox between the mission, values, and culture of the independent school focused on LD students and the assessment tools that many of the schools I've worked with were using. On the one hand, you have these very sophisticated educational institutions focused on students with very intense learning needs. At the same time, they're often using some of the assessments used by the biggest school districts in the country, which ask very different questions of the data and have very different priorities and values. I've come to see that as a real paradox, because the question becomes: are the assessments really serving the LD-focused school mission?

So I'm just going to dip into the chat here and look at some of the questions.

So how best to explain data to families?

"I want to see student growth." "How to speak with families about data; how much do we share?" "What we're looking for is progress." "A system to better track and manage progress over the course of several years." "Identification of subskill deficits and tracking of their development." "Looking at data, tracking information to share with parents and teachers." "Student growth in areas that need remediation."

What jumps out at me is how different some of those questions are from, let's say, the larger public school districts in the country.

The focus on subskills and the focus on growth are very often not part of the equation there, along with other aspects of the assessment process that are more interesting at the LD-focused school level.

So on the one hand, we have LD-focused schools providing a highly specialized, differentiated experience for students who learn differently.

And then we have the reliance on large-scale assessments designed for the biggest school districts, which have very different objectives and values. I want to take a minute to go into some of those in a little more detail.

The first thing is that these assessments just weren't created to meet the sophisticated needs of LD-focused schools and their students.

You very often have highly trained staff working with sophisticated interventions and diagnostic tools, whereas in the larger public school districts there isn't the time to respond in such a nuanced and sophisticated way, and very often not the expertise.

And because the numbers of students are so large, there isn't the ability to respond to student needs in such a detailed way.

There are some real fundamental, systemic differences behind these institutions that produce those differences in values and questions. The driving force in many of the larger school districts is legal obligation. There's so much red tape, so many mandates and initiatives, and so many different education laws that there's a long list of boxes assessment needs to check.

At the LD-focused schools I've worked with, learning acceleration and growth is the motivating force: where are students now, and how do we help them move forward? That's absolutely primary, and it really distinguishes those schools.

The criterion for success in some of the larger districts is mastery of a broad set of standards, the state-approved curriculum standards.

The question is: has the student mastered this broad set of standards, which is often defined by forty or more specific standards with substandards and different skills?

So it's really more of a coverage model: has everything been covered and internalized or learned? At the LD-focused schools, it's much more about increasing critical learning ability, say math ability or reading ability. Every nuance and nook and cranny of the curriculum isn't as much the priority as moving forward in the core learning skills so that a student can go on to access curriculum at grade level or above.

The mechanisms are very different between the two institutions. You have uniform, large-group instruction, meaning we take that coverage model of broad standards and instruct students in groups of fifteen, twenty, or twenty-five at a time, compared to sophisticated individual intervention: assessing students, diagnostics, tutoring, and small-group instruction. Very different, and those require different kinds of data.

The information systems are quite different. The large public school districts are using a system of record.

The small LD-focused independent schools are using a system of improvement, and I'll talk a bit more about both. That distinction is very fundamental to this assessment question.

The strategies are different. The key strategy from my experience working with the larger public school districts is focusing on what they call the bubble students, the students right on the bubble of passing. If you do your early-in-the-year benchmark assessment, you'll identify students who are just below or just above the line. The curriculum then gets focused very intensively right in that area, to try to help as many students as possible who are on the verge of passing or not passing the state test over that hurdle.

At the independent LD-focused schools, it's really quite different. There isn't an arbitrary line to get over. It's: where are the students now in their learning, and how can we accelerate that learning? How can we remove learning blocks? How can we help them make as much progress as possible?

And the last one is the data protocol: how is data used?

At the larger public school districts, it's really data driven, based on compliance. Very often there are protocols where, if you get a certain score on a certain assessment, you go into this program, or you're labeled with this learning challenge or subgroup, or some other criteria like that. At the small LD-focused independent schools, it's data informed. It's usually educators coming together in a team meeting and looking at multiple sources of data; not just following an abstract protocol based on one test score, but coordinating multiple data points to create a holistic picture of how the student is doing in their learning, and then deriving next actions from there.

So the public districts are required by law to do testing and accommodations, and they really need to document that.

They're in a compliance-driven system, and they really need to ensure that students are tested and categorized for broad interventions and accommodations. One of the other big priorities is answering the question: does the student have access to the curriculum? In some cases that means simplifying the curriculum or modifying it in some way. And again, there's the focus on the bubble kids, the kids right on that pass-or-fail line.

With LD-focused schools, it's really about identifying where students are in their learning and giving them everything possible to succeed at their highest level. They may already be above a state pass-fail benchmark; it really doesn't matter. It's: where are they in their learning, and let's move forward from there.

So the focus is on the individual student and meeting her needs, and it's a problem-solving system: identifying the problems that present as barriers to the student's learning and then solving those problems.

Here, just from my experience working with LD-focused schools, are some of the goals they have for assessment, and many of you mentioned these in the chat.

The one that I had was: are our interventions working? Do we need to change the intervention?

Is the intervention helping the student catch up?

Another goal is identifying asynchronies, or learning gaps. That's where you have students with more complex learning profiles, where some aspects of ELA may actually be really strong, say vocabulary or comprehension, and other aspects really challenged, say decoding.

If you're just using a measure of ELA that averages everything into one score and doesn't provide component subscores, then that student can look quite different. In fact, the average of those high and low scores represents a student completely unlike the student being assessed, because none of their subscores are at that average level; they're all well above or well behind. So again, a much more complex view of student learning.

As far as the system of record: this is very common with large institutions, where the focus is on compliance and data needs to be centralized. There's really a focus on printing documents, saving documents, and accumulating documents in a case file.

The data is mostly used backward-looking, to document what happened.

There's not an emphasis on analytics, because the data isn't being used to generate new insights; it's being used to document what was done.

There's limited user interaction with the data. In other words, clicking, drilling down, relating one dataset to another, looking longitudinally: that's usually not the interaction. It's more static screens of data or PDF documents.

There's often no meaningful benchmarking in terms of the specific learning style or learning challenges of the student; it's just a general district average or national average.

The data is often looked at reactively. Sometimes the state test results aren't available until three to five months after the assessments were given.

That data is then reviewed, and changes are made for the next school year, very often half a year or a year later. So we're not focusing on driving continuous improvement with real-time feedback.

From my experience, LD-focused schools are much more interested in systems of improvement, or are already implementing them. It's all about real-time data. A lot of this data comes from their curriculum measures, intervention measures, tutoring data, classroom data, and small-group instruction, so that interventions, lessons, and curriculum can be modified in real time. In this environment, we're looking at real-time data that's dynamic and adaptive, adapting to the student as the student's rate of progress or learning trajectory changes.

The emphasis is more on analytics: being able to drill down into the data, look longitudinally, look at different skills, and compare the individual to the group and the class.

And the focus is really on the progress rate. Not that arbitrary bubble line, pass or fail, but: where did the student start, and what is their rate of progress?

The data is forward-looking in the sense that it's used to predict: if all things stay the same, where will this intervention take the student?

The data needs to be completely transparent. A lot of this has to do with the sophistication of the staff and teachers, who really want a more detailed view of the student's work and how the student did. They're not content to just get a printout that says: here's the percentile score, and here are the three skills that need to be emphasized or taught.

And then again, looking at longitudinal data, and looking with flexibility at student needs.

So if we look at the profile of students who learn differently, there are some general characteristics that really impact assessment.

One is that there's more variability in day-to-day scores. When we look at our assessment data, I can tell pretty quickly whether I'm looking at an LD-focused school or a larger school district, very often just by the variability in scores. The average student, close to the middle of the bell curve, will have scores from one test window to the next pretty close in percentile to the previous score and the next score. With students who learn differently, variability is more the rule; the percentile scores can vary dramatically: the fortieth percentile in the fall, the sixty-fifth in the winter, the thirty-fifth in the spring, and finishing the year at the seventieth or seventy-fifth percentile.

What that means is that if we rely on just one data point to make a decision, it really just depends which day we caught the student on. Because there's so much variability, we need longitudinal data. We need multiple data points, and we need to look at overall trends instead of just saying: this is your end-of-year assessment, that's your score, that's how you did. That approach leads to a lot of these frustration points.
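To make the point concrete, here is a minimal sketch of why a trend across test windows is more informative than any single data point. The percentile values are the invented ones from the example above, and the least-squares slope is just one simple way to summarize a trend:

```python
def trend_slope(scores):
    """Least-squares slope of scores across equally spaced test windows."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Illustrative percentiles: 40th in fall, 65th in winter,
# 35th in spring, 75th at year end.
windows = [40, 65, 35, 75]
print(trend_slope(windows))  # 7.5: a clearly positive overall trend
```

Any single window taken alone, such as the spring score of 35, would tell a very different and misleading story; the slope across all four points shows the student moving upward.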

These students are also very sensitive to the developmental level at which they're being assessed. It's really quite striking, and I'm guessing you've seen it before: sometimes a student will get into an assessment, see two or three questions, intuitively sense from past experience what's coming, and immediately shut down. They just start clicking. They don't want to do it, they don't want to be there, and it's really painful and frustrating for them. They're seeing longer reading passages than they can handle and vocabulary that isn't making sense to them.

Other students might persist longer and give it their best effort, because they don't have that history of being so outmatched and overwhelmed by assessments.

In this case, you get the experience where the student really just shuts down. And at the end of the assessment, you're not sure what you captured, because the student you saw taking the assessment was so unlike the student you see every day in school and in the classroom.

Other really important characteristics: the ability to focus for long periods of time can be a challenge, and so can the stamina for engagement. When you have these longer assessments that are also challenging and potentially overwhelming, you're hitting a lot of the sensitive areas of students who learn differently. What happens is you're not really getting a picture of their learning ability; you're getting a picture of all these other emotional factors.

And the last one on the list is that the possible causes for missing a test question can be extensive.

I'm just going to jump ahead here and take a test question from a state test.

The general strategy here, and again this is what districts will do, is an item analysis on the state test; let's say, in this case, third grade.

They'll look at which questions the most students missed, or which were missed more often than others, and that becomes an area of focus the next school year. The general assumption is that when a student answers this question incorrectly, it's because they haven't mastered the standard; in this case, 3.MD.C.5: recognize area as an attribute of plane figures and understand concepts of area measurement.

When you're working with students who learn differently, though, things look different. If you don't mind, put in the chat: what would be some of the reasons, other than not understanding area as an attribute of plane figures, for a student to answer this incorrectly?

One of them, of course, is that this is a math question, but there's so much language. Yes, Laura points out the student can't read the question or doesn't process the language. So not only is this really a reading comprehension question, it's also a language question: the structure of the language and the concepts being discussed are at a pretty high level.

In a larger district, yes, on a percentage basis, most students who missed this may have missed it because of this specific standard.

But when you're working with students who learn differently, there are easily ten other causes for why a student may have missed it. To take a student who learns differently who missed this question and then design an intervention around 3.MD.C.5 intuitively doesn't feel right to us, because we've got these other issues. Was it anxiety?

Was it reading? Was it language processing?

Was it their ability to focus for the entire test?

And then Gwen points out that it's not just about area: the student needs to understand true and false, and how to answer a true-false question. Absolutely.

And that's the risk, I think, for those of us who work with students who learn differently. The state of the art in education today is these large-scale assessments: deriving from one assessment question a single cause for why it was answered incorrectly, which in this case would be recognizing area as an attribute of plane figures, and then designing an intervention or emphasizing this in the curriculum based on that one question.

So if we look at the bell curve that these assessment results are based on and reported against, particularly the percentile scores, we can see where the large-scale assessments focus their attention.

They focus primarily on that sixty-eight percent in the dark blue in the middle.

Those are the students who are probably reading at a high enough level, and who have the stamina and focus to handle the longer assessments and understand that question. They understand the true-false format. They can handle the more complex language. So it's very possible that when they answer incorrectly, it really is just because of the math concept and not all those other issues.

When we go to the left of the bell curve, and that's where a lot of the students we work with are, that's where you have students answering for all sorts of other reasons. They guessed, or they don't understand the true-false setup.

There are three parts to the question, and conceptually they weren't sure what they were being asked or what they were supposed to do. So this is the challenge with taking the results from those assessments and scaling them up to all the different roles and stakeholders at the LD-focused school: it's all based on data at this level, and the data is quite challenging.

Another thing about this approach: even for this state-of-the-art practice of large-scale assessment, with teaching practices or emphasis recommended from assessment results, very often based on one test question, there isn't really a strong research basis for saying it's an effective form of education.

It's very widely practiced, and it's built into a lot of these larger assessment platforms.

But there are few major studies on it. This is one done by the US Department of Education that found no benefit for a school district that implemented the large-scale assessment, trained the teachers on using the data to emphasize instruction on the specific skills that were missed, made subgroups based on students who missed certain questions, and so on. So even in these environments, it's not an initiative or a framework with a lot of weight behind it, but it's very popular.

These are some of the pain points that really show up in LD-focused schools when using those kinds of assessments.

There are no accommodations in the form of audio narration for the math, so the math test becomes a reading test.

There's the issue that the tests are not adaptive enough. Even the adaptive tests, if we go back to the bell curve, are mostly adapting within that sixty-eight percent range. They're not adapting down to the fourteen percent or two percent on the left, and they're not adapting up to the fourteen or two percent on the right either. Sometimes, among the students we work with, we have students whose math skills are above level and whose ELA or reading skills are below level, and the test doesn't really hit them at either end.

Then there's the problem of the test being so long, which is necessary when the criterion is to assess the broad scope of all the standards. When you have to have a fairly superficial test assessing all these standards, you end up with a long test. And if you've watched students at your schools take these assessments, you may notice that in some cases, toward the end of the test, it's click, click, click, or just filling in bubbles.

There's the oversimplistic presentation of results, where that one missed test question gets turned into a recommendation for intervention or instruction on that one skill, when it really could be so many other things.

With the assessments being quite long, they're sometimes only given once a year, when we know that the norm with students who learn differently is up-and-down scores, and we need a longitudinal view instead of just one data point a year.

And there's the attempt to cover too much in one sitting, which leads to the long assessment.

Part of the response to this is making sure we assess our students at their instructional level, because when we assess students above their level, it doesn't really tell us where they are. If we have a fifth-grade student who is reading below level, that fifth-grade assessment, especially if it's not adaptive, really only tells us that the student is not at the fifth-grade level. It doesn't tell us whether they're at fourth, or third, or second, or first.

It only had test questions at that fifth-grade level.

It can give a very low percentile score, and it can take a guess at where the student's level is, but it doesn't really show it by going to that student's level.

If we think back to all the different stakeholders, and to those of you who put your role at the school in the chat, this leads to frustration in all these different parts of the school community. From the admissions perspective, we want to craft an incoming student class that has the needs our school addresses and fits where our curriculum and other students are. But if we're getting assessment data that's more large-scale, then we're not getting the detailed level we need, or we're relying on Woodcock-Johnson and independent psychoeducational testing that may be a few years old at this point, or done quite infrequently.

The biggest complaint I hear from teachers is that the progress they see with their students on their curriculum-based measures isn't reflected in the assessments.

So they're not very invested in the assessments, because they see such a gap between what the students can do when they're assessed on level and what they do when they're assessed with an assessment that doesn't adapt to their level.

This makes it hard for administrators to tell if programs and interventions are working, because there's this mismatch in the data. For the head of school, it's difficult to communicate institutional success to the community; for board members, similar.

For students, it can be a very frustrating and discouraging experience to repeatedly have these assessments scheduled that are not at level; they become stressful and anxiety-provoking.

That's where we have students who, just as the assessment starts, already begin to shut down, because they want to protect themselves from that icky feeling of really not doing well, question after question.

Then we have parents who have a poor understanding of where their students are, what their strengths and weaknesses are, and where they might be going in the future, because they're getting this mixed view from the data.

Tutors find it difficult to see what to work on and what has been mastered. And you also have donors, accreditation, and all these other players who make use of assessment data.

So what does it look like to have an LD-friendly assessment program?

Number one would be brief assessments. We need shorter assessments because our interest isn't so much in broad mastery of curriculum and state standards. We're really developing capacity and learning ability, and that's a narrower focus that goes more in depth.

And with students who learn differently, very often it's more intensive instruction and intervention in that area.

So a brief assessment is not only sufficient, it's better, because we don't have as many students getting discouraged and phoning it in or clicking their way through the test. And if they're getting tired or not feeling great, they can exit the test and complete it on another day. It's much more valuable to have the disruption of needing to come back and finish than to keep pumping more data into the system that isn't representative of the student's best learning.

We need math and language questions, meaning grammar and vocabulary and so on, that have audio narration, so that we're not measuring reading on every subscale and every question of the assessment. When we're measuring reading, it's focused on decoding and comprehension.

But in the other areas, we can have audio narration, so we can really see what the student's progress is in math and in the other areas of ELA.

We need assessments that are highly adaptive, because our students are beyond that narrow part of the bell curve. Just adapting within the range where most students in the country are isn't sufficient, because it's not where our students are.

And we also need the ability to assess off grade level. Even within an assessment that adapts up to four grade levels, sometimes you need to come in at a different level than that. Being able to assign the level at which a student starts an assessment is going to be a much better experience for the student, because they can come in a little bit below their level, start off with some success, and adapt into their challenge level.

And then we're also gonna get better data because we have more test questions that are targeted right at their learning level.

We also need assessments that adapt by subtest, subcategory, or domain.

Because if the test just adapts by overall ELA or overall math, it's, again, not hitting the student with questions at their learning level. Take, for example, an ELA assessment that adapts based on all of the questions combined, taken by a student whose comprehension and vocabulary are quite strong but whose decoding and foundational reading are relatively weaker.

We need those tests to get harder in the vocabulary and language areas where the student is more proficient, and we need them to get easier in areas where there's more challenge.
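To make the per-domain idea concrete, here's a minimal sketch in Python. The domain names and the simple up/down difficulty rule are illustrative assumptions on my part; real adaptive assessments typically use item-response-theory ability estimates rather than a one-step counter.

```python
# Minimal sketch of per-domain adaptive difficulty (hypothetical, simplified).
# Instead of one overall ELA difficulty, each subtest/domain tracks its own
# level, stepping up after correct answers and down after misses.

class PerDomainAdapter:
    def __init__(self, domains, start_level=0):
        # A separate difficulty level for each subtest/domain.
        self.levels = {d: start_level for d in domains}

    def record(self, domain, correct):
        # Correct answers make *that domain* harder; misses make it easier.
        self.levels[domain] += 1 if correct else -1

    def next_difficulty(self, domain):
        return self.levels[domain]

ela = PerDomainAdapter(["decoding", "vocabulary", "comprehension"])

# A student strong in vocabulary but weaker in decoding:
for _ in range(3):
    ela.record("vocabulary", correct=True)
    ela.record("decoding", correct=False)

print(ela.next_difficulty("vocabulary"))  # 3: vocabulary questions get harder
print(ela.next_difficulty("decoding"))    # -3: decoding questions get easier
```

The point of the design is that the two domains diverge: a single overall difficulty would have averaged those answers together and kept both domains at the wrong level.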

We also need assessments that capture an accurate baseline. If you've had the experience of team meetings or parent-teacher conferences where it's not clear exactly how the student has been doing, very often it's because there isn't a clear baseline of where the student was when they started.

At the public districts, it's enough to just say that the student's not at level or not passing the state test. But when we're working with students who learn differently, we need a baseline, because the baseline is how you're going to calculate your rate of progress. And without the rate of progress, we don't know if our interventions are working. We don't know if the student's catching up. We can't characterize the amount of progress they've made.

We need rich data views to inform decision making. That means transparent assessments where staff can see the test question, see the student's response, and see how much time the student spent.

Just like that example test question we looked at, very often when you see the question and you see the student's answer, that's when you know exactly what skill the student had trouble with, or why they had trouble with the question. Sometimes you see how much language there was. Sometimes you see they only spent three seconds on it, so you can tell they just guessed.

But, really, there's a hunger to have a deeper view of the data to develop that holistic picture of what's going on with the assessment process.

And then finally, we need LD-friendly data frameworks and workflows. That's basically saying that at LD-focused schools, we're using the data differently than at the larger public districts. We're not just assigning students to categories or labels or tiers. We're really doing the deep dive on the data, identifying what their needs are, designing interventions that meet them at that level, and then tracking their progress from there, which can be quite different.

So if you wanted to self-assess how your assessments are working for you, these are some questions that I put together from my experience of working with schools that focus on students who learn differently.

I've been to team meetings where we spent more time discussing the data, what the data means, what the different assessments mean, than we did helping the student. Everyone in the room had a different opinion on the assessments and the data. The teacher would offer, "Well, I was there when he took the assessment, and he raced through it," and somebody else would say, "Yes, but you can see a pattern here."

And, really, what we wanna do is come to a team meeting where, within the first five minutes or so, we've all looked at the data and we agree on where the student is, what the needs are, and what's going on. But if we don't have the right data to answer those questions, then we do spend the meeting figuring out what in the world this data is actually telling us and why it seems to conflict.

There's also the experience of your students when they take the assessment. When they finish an assessment, are they engaged and ready for what's next, or are they kind of done for the day? Are they fatigued and discouraged?

That's often an indicator of how the assessment was for them. And in my opinion, it impacts the data that you're getting from those assessments, because you're not seeing your student at their best.

Do your assessments give you a progress rate for each student in each subject, for each intervention program, for each domain?

The progress rate basically answers these questions:

Are our interventions working? Are our students catching up?

It allows you to compare intervention programs and their impact.

At the large public school districts, it's just enough to know that the student's not at level.

But when working with students who learn differently, we wanna know: what is the impact of what we're doing, and by what proportion or degree? Is it two times better? Is it three times better?

And so a progress rate helps answer those questions.
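To make the arithmetic concrete, here's a back-of-the-envelope sketch with made-up numbers (the scores, time spans, and program names are all hypothetical): growth from a known baseline, divided by time elapsed, gives a rate you can compare across intervention programs.

```python
# Back-of-the-envelope progress-rate calculation (all numbers made up).
# A progress rate is growth from baseline divided by time elapsed,
# which puts different interventions on a common scale.

def progress_rate(baseline, current, months):
    """Grade-level growth per month, measured from a known baseline."""
    return (current - baseline) / months

# Two hypothetical reading interventions, measured in grade-level
# equivalents over one nine-month school year:
program_a = progress_rate(baseline=2.0, current=3.8, months=9)  # 0.2 per month
program_b = progress_rate(baseline=2.0, current=2.9, months=9)  # 0.1 per month

print(f"Program A: {program_a:.2f} grade levels/month")
print(f"Program B: {program_b:.2f} grade levels/month")
print(f"Program A moved students at {program_a / program_b:.1f}x the rate of Program B")
```

Notice that without the grade-2.0 baseline, both students simply read as "below level" at every measurement, and the two-to-one difference between the programs is invisible.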

Do you get an accurate baseline for where each student is when starting the school year, or do you just generally know that the student's below level? Again, almost by definition, when a student's at an LD-focused school, we already know they're not at level, usually in language or reading. So that's not enough. We really need a baseline so that we can track progress throughout the year. And we have students who are challenged with language and reading who can actually test above level, not only in math, but also in language and reading. So it's about making progress from the baseline. And if we don't have the baseline, then we don't know.

And then there's the level of buy-in with the data that you're collecting. I've worked with schools where everyone's kind of dismissing the data: "Yes, we have the data, but we don't really use it." That happens because of some of the issues I've described so far, like the way students present when they're taking the assessments, how they feel, whether they engage with the assessments, or whether the assessments were so many years beyond their level that they weren't really giving them anything meaningful.

So you can often tell a lot just from the level of buy-in.

And then do your assessments match the richness and level of individualization of your curriculum, your school culture, your interventions, and everything else? Is it all aligned at that same level of individualization?

So, to tie things together, I'm going to present a case study here.

This is basically a composite of a pattern I've seen after being in many team meetings and parent-teacher conferences. So we have Jesse, a bright fifth-grade student who attends an independent school that specializes in supporting students who learn differently.

She started with the school in second grade after experiencing difficulties at her local elementary school in first grade.

During first grade, Jesse struggled with reading but excelled in math, as highlighted by standardized testing that showed a significant gap between her math and reading skills.

Jesse's parents enrolled her in an independent LD-focused school to better support her unique learning needs. Over the past three years, her classroom teachers have noted remarkable improvements in reading, progressing from frustration to greater fluency and comprehension.

She's also maintained her proficiency in math, excelling in the classroom and developing a love for learning.

So then we have a fifth-grade parent-teacher conference where concerns arise, as Jesse's standardized test scores continue to present a different picture.

Despite significant progress in reading and continued success in math through classwork, her recent test scores showed a decline in math, and her reading scores were below grade level. Jesse's parents expressed deep concern and questioned the effectiveness of the reading interventions and the quality of math instruction.

They asked why they couldn't find a school that could effectively support Jesse in both reading and math.

The teacher attempted to reassure Jesse's parents, explaining that standardized test scores might not accurately reflect Jesse's true abilities.

The teacher emphasized Jesse's strong performance in the classroom and with reading intervention work. However, this explanation offered little comfort as the standardized scores suggested otherwise.

So Jesse's parents struggled to make sense of the conflicting information and started to worry about Jesse's future academic progress.

The teacher mentioned that standardized test scores could be affected by anxiety and focus, but Jesse's parents left the meeting feeling distraught.

What should have been a celebration of Jesse's hard work and the transformative effects of the school culture, curriculum, and instruction turned into an event that left Jesse's parents questioning everything and rethinking all decisions.

In the weeks that followed, Jesse's parents met with the department heads for both math and ELA, followed by a meeting with the head of school.

In each conversation, they received slightly different interpretations of Jesse's various assessment results, which further eroded their confidence in the school's approach.

Frustrated and seeking clarity, they decided to consult an independent pediatric neuropsychologist to gain a more comprehensive understanding of Jesse's needs.

So the analysis here is that this is a problem of the fit of the assessment for the student, with a lot of the frustrations and misfit that I've been talking about in the previous slides.

So, again, I'm just using one example of a pattern I've seen a lot: an assessment that's used by the Houston Independent School District, Chicago Public Schools, and other large districts, and that results in a complete mismatch between Jesse's experience in the classroom, her performance in her classwork, and the standardized assessment results.

So the math assessment becomes increasingly language-based with each successive year and provides no audio narration.

So Jesse's math scores have declined because the math test is more of a reading comprehension test for her, and increasingly so with each year. In second and third grade, it was much more about simple number sentences and problems with simple language.

An LD-friendly assessment with audio narration and a mix of equation problems and language-based problems would have revealed that Jesse has been making expected progress in math every year and keeping pace with her peers. That's the benefit of parceling out the language part and the reading part.

While Jesse has made more than expected progress in reading each year, all that the assessment shows is that she is below level. She was below level when she arrived in grade two, and she's still below level in grade five.

Because the assessments being used did not capture a baseline level in grade two, they did not capture asynchronies in skill development. They cannot tell the story that Jesse has progressed from a pre-K level in decoding to nearly on grade level in decoding.

And while she is now a few months behind in decoding, she was previously several years behind her peer group.

Now that her decoding has dramatically improved, her comprehension is coming along as well, and she's expected to continue improving at a pace necessary to catch up to her peers, as she now reads much more often, both for pleasure and for school.

Jesse's teacher did not have specific training in data-informed decision making for students who learn differently and therefore was not in a position to convey to Jesse's parents the full story of Jesse's dramatic reading improvement and steady math improvement.

The combination of LD-friendly assessments and the right data-informed decision making and professional development would have led to a very different parent-teacher conference experience.

The other school leaders who spoke with Jesse's parents were not speaking from a common data informed school playbook, and therefore, each gave their own spin on the data interpretation.

If the school had a common data framework and data workflows, Jesse's parents would have heard the same reassuring story from everyone they spoke to at the school.

So it's a composite case study of a pattern that I've seen repeatedly over my career. And what is amazing to me is how much wasted time and frustration is created by not having the right data to tell the true story, or the whole story. Usually it results in additional team meetings, additional parent-teacher conferences, and additional testing.

And, really, at the root of it are assessments that weren't able to capture the progress of the student.

And that's really what most of my work these days is motivated by: trying to bring clarity to schools, to parents, to the classroom, and to team meetings by providing better data and by making the assessment process more comfortable for students.

No one really loves the standardized assessment process, but there are definitely ways to make it more comfortable, more engaging, and at the student's level, so they can feel and see the success that they're experiencing in other parts of the curriculum.

So that is what I set out to share with you today. I hope it was insightful and useful for your work. If there are any questions or any feedback, please feel free to either unmute yourself or comment in the chat.

Okay. Thank you, Laura, for your comment.

Let me just put my email up there.

So if you want a copy of the presentation slides, have any questions for me, or want any additional information, feel free to reach out by email. My email address is david at true progress dot com.

And, David, for those folks who are still on the call who might be interested in learning more about the assessment tool that you put together to help address some of these problems, is emailing you the best way to go about that?

Yeah. So anyone interested in looking at the Track My Progress assessment platform, which is our LD-friendly set of assessments, feel free to email me, and we can schedule a Zoom, do a quick demo, and see how it might fit for your students and staff.

David, I have a question for you.

Please.

So looking at Track My Progress as an assessment tool: I really appreciate everything you shared today about the differences between what public education needs in terms of testing and what our schools need. With Track My Progress, how does that compare for kids who are leaving our schools and heading back into a public school, or for kids who are in our schools with, say, state funding and need some sort of standardized assessment data to show that they are making progress?

Yeah, sure. So our assessments are nationally normed.

So they're standardized assessments, and they provide a percentile score, grade-level equivalent, proficiency score, and scale score: all of the psychometric properties that are typical of other assessments.

And our assessments are used by school districts across the country in the public setting as well. I would say that the school districts attracted to working with us have a larger spread of student ability. These are often very high-achieving districts that also have their share of learning challenges and students at the other end of the spectrum. It's really in that environment that you have a large spread of ability.

That's the environment the conventional assessments weren't designed for: they're designed to focus on the heart of the bell curve, whereas we're really focused on being as adaptive as possible. And we're as interested in the student above the bell curve as in the middle and below. So, yes, we do have public school districts using our assessments, and those assessments can be used to report back to sending schools and sending districts on student progress at your school.

Perfect. Thank you.

Yep.

Well, we are right at an hour. So I am going to take this time to say thank you to David for being here and for sharing with us what, as Laura said, was very validating of the work that we do.

And as we said, if you're interested in learning more about the product that he has created to help address this problem, please reach out to David directly, and he'd be happy to set up a time.

Thank you all for being here.

Alright. Thanks, everyone. Great day.

