Research Areas
My overall research goal is to examine the impact, value, and outcomes of information behavior and interaction, focusing on how information interaction improves human capabilities across work, school, and everyday life contexts. My research interests fall broadly into four areas:
- Searching as learning
- Credibility assessment in information seeking
- Credibility constructs in social media content
- Information behavior and information literacy
Searching as Learning
I am one of the leading researchers who initiated the research theme of “searching as learning” over the past five years. There is now a fast-growing community of researchers who see great opportunities to leverage and extend current search systems to foster learning, reconfiguring them from information retrieval tools into rich learning spaces in which search experiences and learning experiences are intertwined and even synergized. I was involved in organizing the first Searching as Learning (SAL) workshop, held in conjunction with the Information Interaction in Context (IIiX) Conference in Regensburg, Germany, in August 2014. There have been three additional workshops on this theme since then: SAL 2016, co-located with the SIGIR Conference; the Dagstuhl Seminar on Search as Learning (2017); and SAL 2018 at the ASIST Annual Meeting. I also co-edited a special issue on “Recent advances on searching as learning” for the Journal of Information Science and co-authored the introduction to the special issue (Hansen & Rieh, 2017).
I have contributed to the formulation of a research agenda for searching as learning with two distinct goals. The first goal is to develop a conceptual framework that demonstrates how learning and searching intersect in the searching process. For instance, my co-authors and I wrote a comprehensive review article (Rieh, Collins-Thompson, Hansen, & Lee, 2016) in which we proposed the concept of ‘comprehensive search’ to characterize iterative, reflective, and integrative search sessions that facilitate individuals’ critical thinking abilities (critical learning) and the development of new ideas (creative learning). The second goal is to develop methods, measures, and indicators that capture learning experiences and outcomes in search systems. In a paper presented at the first CHIIR (Conference on Human Information Interaction and Retrieval) (Collins-Thompson, Rieh, Haynes, & Syed, 2016), we developed a rich set of learning measures based on a lab-based user study of an interactive search system. The analysis of data collected from written summaries, questionnaires, and logs revealed a number of explicit and implicit indicators potentially useful for measuring learning in web searching.
Currently, I am focusing on investigating three research questions:
(1) What design features and functionalities of future search systems will foster human learning and improve the learning experience? I hope to design fundamentally different kinds of search systems whose purpose is not merely to find relevant content, but to provide a rich learning space in which learners can explore information broadly and openly. Through an extensive literature review and empirical studies, I want to identify a set of new design principles for search systems, such as academic library systems, that are most effective in supporting learning.
(2) What measures and methods are most effective for assessing learning in information searching? I want to expand on my earlier work (Collins-Thompson, Rieh, Haynes, & Syed, 2016) to develop reliable measures, methods, and instruments for capturing changes in people’s knowledge level, learning experiences, and learning outcomes in various contexts of search behavior.
(3) How should we conceptualize the types of learning that search systems can foster, beyond acquiring and retaining knowledge? What kinds of search interfaces and tools need to be developed to support higher-level cognitive learning, including critical thinking and creativity? What alternative evaluation measures can be developed to assess the critical thinking and creative processes that may occur when people interact with search systems?
- Smith, C. L. and Rieh, S. Y. (2019). Knowledge-Context in search systems: Toward information-literate actions. Proceedings of the ACM SIGIR Conference on Human Information Interaction & Retrieval (CHIIR ’19), 55-62.
- Rieh, S. Y., Collins-Thompson, K., Hansen, P., & Lee, H-J (2016). Toward searching as a learning process: A review of current perspectives and future directions. Journal of Information Science, 42(1), 19-34.
- Collins-Thompson, K., Rieh, S. Y., Haynes, C. C., & Syed, R. (2016). Assessing learning outcomes in web searching: A comparison of tasks and query strategies. Proceedings of the ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR ’16), 163-172.
Credibility Assessment in Information Seeking
I have developed two models of credibility assessment. The Model of Judgment of Information Quality and Cognitive Authority demonstrated that people make predictive judgments about which websites contain credible information and then follow through with evaluative judgments in which they assess the quality of the information they encounter. Another credibility model, a “unifying framework of credibility,” was developed with Brian Hilligoss based on the results of an empirical study of people’s credibility assessments in a variety of everyday life information seeking contexts. We identified three distinct levels of credibility judgments: construct, heuristics, and interaction.
My credibility research has shown that people rely extensively on credibility heuristics, which provide convenient ways of finding information and making quick credibility judgments. In a series of qualitative studies, we found ample evidence of such heuristic judgments, which helped people minimize the effort of examining the content of the websites they visited.
I have used diverse research methods for credibility research, including lab-based user studies, diaries and interviews, and experience sampling. These methods enabled us to capture the context of credibility assessments, including elements such as user goals, intentions, search tasks, and information behavior. We have been able to correlate multiple dimensions of credibility assessment with levels of user participation on the Internet, types of digital media, and topics of online content.
- Rieh, S. Y. (2014). Credibility assessment of online information in context. Journal of Information Science Theory and Practice, 2(3), 6-17.
- Rieh, S. Y., Kim, Y. M., Yang, J. Y., & St. Jean, B. (2010). A diary study of credibility assessment in everyday life information activities on the Web: Preliminary findings. Proceedings of the 73rd Annual Meeting of the American Society for Information Science and Technology, Vol. 47.
- Hilligoss, B. & Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing and Management, 44(4), 1467-1484.
Credibility Constructs in Social Media Content
I have expanded credibility constructs for social media research to demonstrate the role of credibility perceptions in content contributors’ online activities. For instance, we have investigated how content contributors assess credibility when gathering information for online content creation and mediation activities, as well as the strategies they use to establish the credibility of the content they create. In one study, we identified three distinctive ways of establishing credibility that are applied during different phases of content contribution: ensuring credibility during the content creation phase; signaling credibility during the content presentation phase; and reinforcing credibility during the post-production phase. We also discovered that content contributors tend to carry over the strategies they use for assessing credibility during information gathering to their strategies for establishing the credibility of their own content.
In another study, we examined how bloggers establish and enhance the credibility of their blogs through a series of blogging practices. Based on an analysis of interviews with independent bloggers who blog on a range of topics, we presented audience-aware credibility as a theoretical construct. Audience-aware credibility is defined as how bloggers signal their credibility based on who they think their audience is and how they provide value to that perceived audience. The analysis of bloggers’ credibility constructs, conceptualizations of their audience, and perceived blog value identified four types of bloggers who constructed audience-aware credibility in distinctive ways: Community Builder, Expertise Provider, Topic Synthesizer, and Information Filterer. The findings revealed that a multi-dimensional construct of audience-aware credibility serves as a driving factor influencing and shaping the blogging practices of all four types of bloggers.
- Jeon, G. YJ & Rieh, S. Y. (2015). Social search behavior in a social Q&A service: Goals, strategies, and outcomes. Proceedings of the 78th Annual Meeting of the American Society for Information Science and Technology.
- Rieh, S. Y., Jeon, G. YJ, Yang, J., & Lampe, C. (2014). Audience-aware credibility: From understanding audience to establishing credible blogs. Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media (ICWSM 2014), 436-445.
- St. Jean, B., Rieh, S. Y., Yang, J. Y., & Kim, Y. M. (2011). How content contributors assess and establish credibility on the Web. Proceedings of the 74th Annual Meeting of the American Society for Information Science and Technology, Vol. 48.
Information Behavior and Information Literacy
I have conducted a number of studies to better understand people’s information-related behavior. For instance, we investigated how people’s perceptions of information retrieval (IR) systems, search tasks, and their own self-efficacy influence the amount of invested mental effort (AIME) they put into using two different IR systems: a Web search engine and a library system. This study also explored the impact of mental effort on end-users’ search experiences. We found that when AIME is low, people put little conscious effort into searching and are thus less likely to learn from their searches. Our study demonstrated that subjects’ perceptions of systems and self-efficacy not only influence AIME in online searching but also shape people’s search experiences in important ways.
I collaborated with Prof. Karen Markey on her IMLS-funded project in which we designed, deployed, and evaluated the BiblioBouts information literacy game. This project led us to publish a book entitled “Designing Online Information Literacy Games Students Want to Play” (Markey, Leeder, & Rieh, 2014). In this book, we described how the game’s design evolved in response to student input and how students played the game in dozens of college classrooms, including their attitudes toward playing games to develop information literacy skills and concepts specifically, and toward educational games generally. My contributions to this project focused on providing theoretical frameworks of information literacy and a research design for the evaluation study, in which we examined the effectiveness of BiblioBouts for teaching students how to conduct library research.
- Rieh, S. Y., Bradley, D., Brennan-Wydra, E., Culler, T., Hanley, E., & Kalt, M. (2019). Librarian role-playing as a method for assessing student information literacy skills. Proceedings of the Annual Meeting of the Association for Information Science and Technology, 227-236.
- Markey, K., Leeder, C., & Rieh, S. Y. (2014). Designing online information literacy games students want to play. Lanham, MD: Rowman & Littlefield.
- Rieh, S. Y., Kim, Y. M., & Markey, K. (2012). Amount of invested mental effort (AIME) in online searching. Information Processing and Management, 48(6), 1136-1150.