Many of my Twitter followers have been very interested in the work we are doing at the New Literacies Research Lab here in Connecticut. Currently we are working on an IES grant to develop valid and reliable measures of online reading comprehension, so I have been asked to document a few of the challenges and obstacles we face.
I am sure by now most of you are familiar with Leu's model of online reading comprehension: Question, Evaluate, Synthesize, and Communicate. Of these skills, synthesis has always been the hardest for us to measure. This plays out in both anecdotal evidence and our data. First, how do you make evident something that happens in the head (for you cognitive folks) or in the act of doing (for those more situatively inclined)? Second, in all of our factor analytic patterns we have not developed a model that separates synthesis from communication. As we begin our cognitive labs of our items, we are determined to get a measure of synthesis.
A Little Background
I figure we can start with Bloom. When in doubt with assessments, it's a great place to start. Bloom and his crew (1956) placed synthesis among the higher order thinking skills. It had to do with the assembly of knowledge: putting parts into a whole. When Krathwohl revised the taxonomy, his team renamed synthesis "create" and moved it to the top of the pile.
I like it there; it's where it belongs. Nothing makes me cringe more than when teachers equate synthesis to citations. It is an act of creation. You take multiple streams of information and combine them into something new (thanks, NCTE definition of 21st century literacies; I really enjoy that phrase). For us this has been especially hard to capture in a comprehension assessment.
I recently sat in our Scientific Advisory Board meeting and got to listen in as Spiro, Pearson, Kintsch, and Kleiman had a lively debate on synthesis. They all agreed it wasn't simply finding detail A and detail B to make summary statement AB. That was way too old-school Bloom. The SAB wanted synthesis to look more like: A says this, B says that, so therefore the answer is C.
Thus the act of creation. Of course, what isn't accounted for in this model is the prior knowledge and unique experiences people bring to knowledge assembly. Rand Spiro kept reminding us of this point. So many times new knowledge comes from non-linear paths. What we know often comes out of happenstance. Once again, how do you measure this in an assessment of online reading comprehension?
In our first round of cognitive labs we immediately noticed the difficulty of measuring synthesis. We started with one screen in SurveyMonkey that had students take notes on all the websites they found and then combine them into one summary sentence. Kids hated it. They wanted to take notes as they searched for info or judged websites.
So we encouraged note-taking throughout the task and just made synthesis a final statement. The problem this time was brevity. Their summary statements were short, but their final communication showed evidence of them integrating many of the details they read. Also, if we scored synthesis in the final communication, some students who could combine ideas and make them their own might lose points for not being able to use a blog, wiki, email, or discussion board (our communication tools). And what about prior knowledge? Should students not get a score point for using what they already knew? So we needed to change the model again.
Our Latest Iteration
The Scientific Advisory Board also suggested we needed to push the social aspect of our assessment and make it an authentic web experience (I would argue taking notes and using that information is authentic, but that's for another time). So we tried to accomplish two things at once: increase task authenticity and embed synthesis. In our next round of cognitive labs we are trying three new ideas: a testlet embedded in instant messaging, a testlet that uses SurveyMonkey but switches to instant messaging for synthesis, and a third version with your standard Word document for taking notes. It will be interesting to see how these three versions play out.
I know many of you are suggesting that instant messaging is so 2000-and-late. I guess that is the nature of the beast we are trying to study. The instant message interface is just a way to simulate two-way communication embedded within our assessments. We have a talented group of programmers working on a response capture object. The latest idea is to make the assessment look like a social network. I am excited about this idea, but I do worry that in chasing temporal and chic validity we may threaten actual validity. Have you ever used Facebook to solve a common problem or investigate an issue? I would still argue that discussion boards (for groups) or popular editorial blogs are where the debate around issues is centered. I guess there are specific fan pages out there that could serve as launching pads. It will be a wonderful line of investigation. If we go this route, there will be wonderful opportunities to capture synthesis, even if it is still an incomplete model.