DART Recap: Teaching Evaluation

Wed, 12/13/2017 - 3:43pm

Recap by Mitch Cota

The latest DART session began with a working paper by Sam Wineburg and Sarah McGrew, “Lateral Reading: Reading Less and Learning More When Evaluating Digital Information.” The paper served to kick off a discussion of how we evaluate sources as individuals from different levels of education and different professional backgrounds. It reports a case study of how three groups of people, history PhDs and faculty, Stanford undergraduates, and professional fact checkers, evaluate sources for their trustworthiness. Each participant was given two articles to evaluate under a time limit. Next, they were given a legal case and asked to find out who funded the plaintiffs’ legal fees, since the plaintiffs were children and could not have paid those fees themselves. Each group approached the problem with its particular skill set, and those different skill sets led to different outcomes.

The historians approached the problem through a deep evaluation of the articles themselves, a vertical reading. Their specialization in primary documents led them to analyze the articles on their own terms and to shy away from clicking any links that would lead to other sites, that is, from reading laterally. The fact checkers focused on examining each aspect of an article through lateral reading: they opened multiple browser windows to search for more information on the people, organizations, and topics involved, and they made use of the links presented throughout the articles.

I am of course simplifying the process, and I would encourage anyone invested in source evaluation to read the paper for the details. The reason for bringing this paper to our DART discussion was to examine how we can effectively fold lateral reading into literacy instruction for the different UT groups we each support. In an age of linked data, we need to reevaluate the way we evaluate sources. Lateral reading calls for less reading while sharpening the researcher’s critical eye when evaluating sources.

Particular attention was paid to the current climate of information consumption and evaluation. At a time when multiple news outlets and content producers are being called into question, it is of the utmost importance that we evolve our discussion of source evaluation so that information consumers have the tools they need to succeed. There was consensus about how difficult it is to assess the success of different approaches. The CRAP test in particular was called into question: checklists were seen as failing to give students the skills necessary to discern quality resources, and every checklist introduces new issues that can negate its effectiveness.

So, if we are moving away from the CRAP test and checklists are proving less effective, where does that leave us in the classroom? Gamifying evaluation was discussed as a highly effective way to get students engaged with source evaluation without producing checklist-like results. An apt comparison was drawn between students’ ability to effectively evaluate the social media presence of someone they know and the same toolset being effective in source evaluation. That bridge was suggested as a future move that could reframe students’ perspective on their own ability to evaluate sources.

The takeaways from the discussion focused on:

  • Lateral reading as a necessary skill in source evaluation
  • Balancing vertical and lateral reading based on topic, discipline, and professional background
  • Encouraging a broader and greater level of critical analysis
  • Accepting that experience still plays a large role in source evaluation, so there are limitations to what can be gained by incoming students and researchers
  • Focusing on habits of mind and ways of assessing these different skill sets we are providing
  • Finding ways to articulate the thought process that has become second nature to those who have fine-tuned their ability to evaluate resources
  • Encouraging a general assessment of the “lay of the land” when approaching the analysis of an article
  • Emphasizing the positive outcomes of “leaving the page” when analyzing online content

This topic affects us all to some extent, whether in our professional library settings or our personal lives. The paper is on the longer side, but it is definitely worth the read; it stimulated quite a bit of conversation about what we are doing now and where we could move in the future. I invite everyone to take a look!

Do you have an article or topic you would like to bring to DART? Feel free to contact Elise Nacca with any ideas and feedback!

DART Recap: Teaching Keywords

Tue, 10/24/2017 - 5:39pm

We discussed how we teach keywords in this DART, with the frame “Searching as Strategic Exploration” hovering in the background. A lot of us use some sort of mind mapping or concept mapping to work through keyword instruction, and many of us guide students by contextualizing keyword brainstorming in discussions of issues like audience and popular vs. scholarly information. For instance, Porcia starts such conversations off with a source-type activity in order to teach students about the ecosystem of information within their topic or discipline.

A couple of us – especially those who teach freshmen – observe that students are reluctant to brainstorm keywords or reformulate after failed searches, or don’t see the value in doing so. Students often don’t go beyond typing their topic, such as “pros and cons of neoliberalism,” into a search bar. Interrupting that habit is difficult, but some of us attempt it by following the conversation around the topic so students see how topics are debated and described in real life. Porcia mentioned PICO, a mnemonic for the elements of a clinical question: Patient/Problem, Intervention, Comparison, Outcome. Most of us non-science folks recognized that this is similar to how we teach students to identify stakeholders and controversies when researching topics. Another tactic Porcia uses is to give students an abstract alone and then ask them to pull out keywords and write a title for the paper. I’m excited to see where I can use this in a session or course!

Developing one’s topic and developing a search strategy go hand in hand, so narrowing a topic before searching often results in a topic with little written about it. Porcia teaches her students to investigate topics using something they already understand, the scientific method – this way they are compelled to test their topics through investigation, to acquire new information, and to build on existing knowledge, resulting in healthier topics. Sarah Brandt often sees students who start with an answer and then plug in evidence later. This got us wondering whether professors are explaining to students that research is how we learn about disciplines, that there is a scholarly conversation happening that they need to tap into. Gina’s students recognize that they are not passive consumers of information – this is empowering for them in the research process. As Joe commented, there is a transition that occurs when you recognize that the audience for your work is not just your professor, but also other scholars.

I don’t know that we have a silver bullet for teaching this tricky topic, but we shared our experiences and approaches. I’m looking forward to ongoing conversations about how we teach keywords, and I hope folks contribute their tried-and-tested approaches to the Toolkit!