Guiding question: How can technology be used to help transform “standardized assessment” practices?
We took the perspective of Administrators this week.
Here is our screencast.
Here is our original slide deck.
Here is a link to the connected learning video.
Please feel free to post comments and feedback below.
~Anthony on behalf of John, Sandra and Thava
I really enjoyed your take on this week’s problem. I feel it was very well thought out, and you came up with very logical solutions that I feel would really help enhance the current testing practices we have in Ontario. One thing I’m struggling with is that some of the suggested solutions for improvement may not be attainable by the school administrators. For example, one suggestion was to lengthen the duration of the testing throughout the school year and randomize the questions given. While I think it’s a very valid suggestion, the way the tests are created and administered by an external body such as EQAO may be out of the control of the school administrators, so while it’s a very good idea, those who are in that role may not be able to change the current practice in such a way. Great presentation everyone!
I do not disagree with your observations about the administrators being somewhat restricted in what changes they can directly effect with respect to testing practices. However, speaking completely as an outsider to the bureaucracy inherent in provincial education, ultimately everyone is accountable to someone. That being said, if administrators raise concerns and provide constructive feedback to their respective school boards, who in turn channel the same feedback up to the Ministry, is it plausible that the Ministry would have the authority to modify the mandate and/or testing practices of EQAO? Since there are corporations who create the psychometrically tuned standardized assessments and charge for their use, it lends credence to a client-service model where customer satisfaction should have an impact on current practices. This would most likely require a macro-level, collaborative response in order to effect any change.
Amazing work! I had never really considered external factors, such as mood, sickness and stress…so true! I agree with the comment…”inaccurate measure of school effectiveness.” My son’s EQAO grade 3 results came home last week…apparently he is at the “provincial standard.” However, he is not functioning at grade level in language. We continuously work with him at home, and he attends Oxford twice a week to work on reading and writing skills. He had the entire EQAO test scribed for him last June. So, did he answer the test questions on his own or was he prompted to answer correctly? I feel that he was prompted throughout most of the test. The top of the assessment results indicated that he had an accommodation to complete the test…BUT, it did not state what the accommodation was (in this case, that he had the test scribed for him). His school is always in the top 5 in Durham Region…now I know why!!!!!
Excellent YouTube video and slides, group! You focused on relevant issues and came up with innovative solutions. I like the authentic assessment and incorporating technology. I thought your questions were thought-provoking. I liked how you linked the administration to transformational leadership. Your solutions are realistic and resourceful. Great additional video resource! Thank you! Corrine McCormick-Brighton (CMB)
Thank you for your feedback. I think transformational leadership is akin to being visionary. The continuum of leadership ranges from managing to leading, the difference between the two terms being that managers have control whereas leaders most often have vision as well. This is demonstrated when you look at what is perceived as the greater compliment: “He/she is an excellent Manager” vs. “He/she is an excellent Leader.” Transformational leaders question the status quo and challenge its utility. This is the model of continuous quality improvement that leads to innovation. Thanks again.
I really like the idea of spreading the testing over a longer period of time to capture more reliable results of student ability than can be achieved during a brief high-stress period. I related to the comment that students can be disengaged by paper and pencil tests and not try their best. If more students interact positively with technology than with pencil and paper and the results offer feedback that can be acted on more quickly, the transition needs to begin now.
Great video, group. I really like your ideas and the solutions you came up with in order for standardized tests to provide an authentic picture of student learning. Although I agree, I was wondering how you would assess those “rich tasks” and “product-based tests” such as FLL so that the assessments were not subjective? One of the arguments around EQAO and its validity as a standardized test is that it is subjective, because there could be many different answers as well as many different markers for the tests.
Amy, great question about how to assess the rich tasks. I know in my environment of military training, we always aim for criterion-referenced vs. norm-referenced testing. The intent is to remove subjectivity, as you’re assessing students against a standard versus their performance relative to their peers. However, in any supply-based test response model, there will always be some variance and subjectivity in evaluating the response. One way we look at it is by determining what type of assessment instrument to use for the type of assessment deliverable, be it a product or a process. If it’s a process, we use a checklist to determine that the candidate completed all the steps or phases. If it’s a product, we use a solution or answer sheet. Then we try to calibrate our assessors ahead of time so that they all agree on what determines satisfactory vs. unsatisfactory performance. Granted, this would be logistically impossible in the case of EQAO and perhaps out of the scope of what an administrator could change. However, assessing rich tasks may provide a method to complement the EQAO process by providing additional feedback on student performance.
I think another direction administrators could go is to advocate for and pass along feedback, including the needs, wants and desires of students and teachers, in terms of standardized testing. No one knows their school community better than those who are part of it. With all the external factors to consider, administrators are the link that can communicate the school community’s needs to the government better than anyone else. For example, students from a low socioeconomic community who have little access to computers may not have mastered content and demonstrated skills digitally, indicating that digital technology may not be the best assessment tool to use for standardized testing. Alternatively, perhaps a school community relies heavily on using technology to demonstrate learning and would better exhibit this through standardized testing that uses digital tools. An administrator would know this about their school community and be better equipped to advocate for the best assessment methods.
Everyone has “off days,” and one-time tests can certainly reflect that, especially if the “off day” is caused by an external factor that affects many students, such as a heat wave or excitement over an upcoming or recently passed event. I whole-heartedly agree that administrators could be pursuing the possibility of testing over a period of time rather than one snapshot. This simply makes sense if they are looking for a statistical outcome that best represents the students’ knowledge and skills in a normalized manner. As Alissa mentioned, administrators may not be able to make this decision on their own; as I mentioned above, they are the best advocates for such requests from students and teachers.
Hi Robyn. I really like your point about technology possibly disadvantaging a particular student body. Perhaps the role of the administrator would be to evaluate the standardized testing process and look at implementing accommodations for the student body to level the playing field relative to other schools. I think we could have a very long discussion/debate over those topics. Thanks for the feedback.
Thanks so much for your comments Alissa. Our take in the solutions section was to look further than the specific perspective of our role and suggest wider solutions to update and improve the place of standardized testing in 2014.
Amy, you are right that a more open-ended task would seem more subjective and more difficult to assess, and would require dialogue and discussion. However, using a rubric and having answers or projects graded by a large number of teachers (crowdsourcing) might be a methodology that leverages technology but, more importantly, offers the opportunity for authentic assessment. This method of assessment for more open-ended projects worked well in a MOOC I completed last summer, where all the participants (admittedly educators) were asked to grade each other’s projects using a checklist and rubric. No doubt the high and low result deviations were eliminated from the data, but because your project was assessed by many, you tended to have a fair indication of your project’s effectiveness. In the First Lego League, open dialogue using clear guidelines allows teams to be ranked accordingly despite very different presentations. (Ranking creates quite the debate, as you can imagine!) Certainly there is little room for automation in this assessment process, but perhaps this might be a better indication of the student’s abilities.
I would agree with Anthony. Amy definitely raises a valid concern about subjectivity, but I think that with current and future technologies, there may be transformative methods of feedback and assessment that would not previously have been possible. We are starting to see some examples through MOOCs and crowdsourced responses to online content in general. These feedback methods are by no means perfect, but they do point in a direction that has previously not really been possible.
Looking ahead…could this subjectivity be eliminated by being assessed by a complex algorithm? To be honest, being evaluated this way does seem a bit concerning to me at first, and is also not a perfect solution, but I am also often amazed at how accurate the predictions are that Google is able to make about me based on all the “Big Data” it has.
I really like your suggestions, and I am finding that we all thought along the same path. I agree with Robyn’s comments about the administrator knowing their community best. A question I have is how the administrator would be able to change a standardized test in any way? There was a previous point made that they would have to channel the suggestions to the board office to send to the Ministry, but in reality, the admin would really not be able to change the test.
I really thought your video touched on many of the issues facing the standardized tests here in Ontario. I love the idea of getting feedback for the teachers. Currently in Ontario, the schools receive the same information as the parents: the level achieved by the students. There is no information about how a student answers or tries to complete any part of the test. This information would be wonderful for the teacher to have in order to create and implement a proper educational pathway for each student. You did a good job of showing how the administrators are truly the middlepeople trying to see and understand both sides of this issue.
Terrific overview of your discussions and debates on this topic from the viewpoint of an Administrator. It is unfortunate, as you suggest, that the public thinks that high standardized test scores mean effective teaching, and low test scores mean ineffective teaching. It is hard for the public to see all the variables that can adversely affect the results, such as the inclusion of exempted students, like those in the Learning and Life Skills classes.
Individually, Administrators can have a huge effect on how technology is being used in the school. If there is a lot of support and technology training for teachers and students alike, then students who use assistive technology should, logically, perform better on their standardized tests. In previous years, when there was little assistive technology training, I saw students with IEPs refuse to use assistive technology for EQAO. In subsequent years, I have also seen students who used technology more regularly and had no problems using assistive technology for EQAO. A technologically savvy administrator can model effective use for parents, teachers and students alike, and support the development of a 21st Century model school.
I agree with a number of comments on your presentation that suggest administrators have a limited individual effect on changes to standardized testing, but I believe collectively they may have more influence, whether through their own union or through speaking with the school board, and might be able to effect some of the changes you suggest.
During your presentation, you mentioned the Lego Robotics League challenge. I am just going through this with students in my school and am seeing all the incredible 21st Century learning with this project! What an awesome experience. Wish we could use this type of model for a standardized assessment :)
Thanks Cat. We appreciate your perspective, having been in the classroom during standardized tests, working with students with IEPs writing those tests and assessments, and also your experience as an administrator. I also like your comment about shifting funding away from standardized tests and towards enriching and immersive experiences like the First Lego League. Other experiences like making, programming and creating might also have promise in helping students prepare for the future. This article is quite interesting on a partnership between Google and Canadian charities to bring programming and coding to students across the country: http://www.cbc.ca/news/technology/google-announces-project-to-get-canadian-kids-coding-1.2785348 Perhaps we need to create a generation of makers, programmers, creators and world-changers (both online and F2F) rather than test-takers.