Assessing Data Literacy

This article follows up on the analysis of data literacy begun in my previous LinkedIn article, Defining Data Literacy. (Cover image: Open Data Institute).

Evaluation or Assessment Framework

It is important to be able to evaluate or assess the level of data literacy competencies, both individually and across the organization, in order to gauge operational readiness and to plan future training and development. Here we first provide an overview of some data literacy assessment programs, then consider some approaches or models of data literacy assessment, and finally consider some data literacy assessment methods.

Assessment Programs

Following are some examples of relevant assessment programs illustrating key features of competency assessment in data literacy and related areas. This is by no means an exhaustive list.

OECD

Although not addressing data literacy specifically, it is worth considering in this context the OECD Programme for the International Assessment of Adult Competencies (PIAAC) and the Programme for International Student Assessment (PISA). PISA in particular "has become accepted as a reliable instrument for benchmarking student performance worldwide," and its "results have had an influence on policy reform in the majority of participating countries/economies" (Breakspear, 2012). While PISA is specifically focused on 15-year-old students, PIAAC "measures adults' proficiency in key information-processing skills - literacy, numeracy and problem solving - and gathers information and data on how adults use their skills at home, at work and in the wider community" (Kirsch & Thorn, 2016).

PIAAC Assessment. Kirsch & Thorn, 2016, 2.2.1.3

These tests are standardized objective tests, that is, they are defined to measure a specific range of competencies, and the assessments present questions to be answered or tasks to be completed, similar in form to a test or examination at a school or university. Specifically, in the case of the PIAAC literacy assessment, "access and identify tasks require respondents to locate information in a text, integrate and interpret tasks involve relating parts of one or more texts to each other, and evaluate and reflect tasks require the respondent to draw on knowledge, ideas or values" (Kirsch & Thorn, 2016, 2.2.1.3).

OECD has also begun to address questions more directly related to data literacy competencies, as for example in its study of the question "are 15-year-olds prepared to deal with fake news and misinformation?" It reports that "An average of 54% of students in OECD countries reported being trained at school on how to recognise whether information is biased or not," with Canada ranking above average in both performance and opportunity to learn (see diagram), and that "education systems with a higher proportion of students who were taught whether information is subjective or biased were more likely to distinguish fact from opinion" (Suarez-Alvarez, 2021, p. 113).

Guidelines for Assessment and Instruction in Statistics Education (GAISE)

Endorsed by the American Statistical Association, the Guidelines for Assessment and Instruction in Statistics Education (GAISE) emphasize that there is no one route to teaching and assessing statistical literacy, and note that "mastering specific techniques is not as important as understanding the statistical concepts and principles that underlie such techniques" (GAISE, 2016, 8).

The framework of essential concepts, together with 22 examples, emphasizes the integration of statistical reasoning in the context of real-world examples. Students are asked about the investigative process, which includes formulating questions, considering data, analyzing data, and interpreting results, a pattern resembling the data analytics workflow described above. These goals "require assessments of the students' statistical understanding through their written communication. For example, students should be able to interpret and draw conclusion[s] from standard output from statistical software" (Johnson, 2018).

Sample Statistical Literacy Assessment. Bargagliotti, et al., 2020, p. 110.

For example, in the assessment above, the student is presented with statistical data in the form of a graph and asked to answer questions interpreting the information depicted. The question involves recognition of patterns in the data as well as the use of these patterns for prediction or extrapolation.

Eckerson Group Data Literacy Imperative

By contrast with the OECD and GAISE programs, the Eckerson Group describes data literacy assessment specifically and includes assessment not only of individual data literacy but also of the organization (Wells, 2021). Assessments are based initially on a comprehensive Data Literacy Body of Knowledge (DLBOK) defined by the organization. The DLBOK is used for gap analysis and training program development.

“Individual assessment is only the beginning. It is the foundation upon which organizational assessment is built, and organizational assessment is an essential process when building a culture of data literacy. Literacy assessment with business impact is performed at three levels—by individual, by role, and by group.”

Data Literacy Assessment Process, Wells, 2022

A data literacy assessment based on the Eckerson DLBOK can be viewed at eLearning Curve (2022). It offers a set of 50 questions testing the respondent's data literacy knowledge, skills, and attitudes, covering terminology, processes, tools, functions, and expectations.

Data Literacy Assessment Example, eLearningCurve, 2022

Data Literacy Model-Based Assessment

The examples in the previous section, though they vary in content and format, are consistent in the requirement that assessments be based on a formal, or structured, representation of the knowledge being assessed. There is, however, little if any agreement on what such a model should look like.

The formal creation and validation of such a model is well beyond the scope of this article. However, it is important to consider the essential elements of such a model and to offer insight on what such a model would look like, for the purpose of further discussing the assessment of data literacy and mechanisms for developing or improving it.

Data Literacy Model-Based Assessment

In the analysis of data literacy competencies described in the first section of this report we obtained an unstructured list of competencies. These competencies were organized into different categories by various studies, but there was no consistency whatsoever in the categorization scheme from study to study.

What is offered here is a model based on a slightly modified full list of competencies drawn from the data literacy studies, cross-referenced with a comprehensive skills taxonomy. Again, there is a range of taxonomies to choose from, and a detailed discussion of them is beyond the scope of this report; therefore, for the sake of consistency with much of the work done previously, a slightly modified version of Bloom's taxonomy is used (Bloom, 1956).

While Bloom's taxonomy is best known for its six levels of cognitive skills development (from 'knowledge' to 'evaluation'), in fact three separate taxonomies were described: cognitive, affective, and psychomotor. These can be thought of as corresponding to the already-described taxonomy of knowledge, attitudes, and skills, respectively.

Bloom's Taxonomy

There are some caveats to this use. First, while Bloom's taxonomy is often thought of in terms of levels of achievement, the elements in each of the three domains are interpreted here simply as a taxonomy, with no presupposition as to progression through stages or higher degrees of attainment. Progression, rather, ought to be seen as occurring within each element (more on this below). Thus we would say that these elements can be represented as distinct skills or competencies.

Further, this taxonomy needs to be extended to accommodate both individual and organizational competencies. For this purpose, we revisit the definition of data literacy competencies from above.

Bloom's Taxonomy Adapted to Individual and Organizational Competencies

The following three subsections briefly expand this model, mooting possible definitions for each element.

Knowledge

Bloom's Taxonomy for the Knowledge Domain Adapted to Individual and Organizational Competencies

Skills / Competencies

Bloom's Taxonomy for the Skills Domain Adapted to Individual and Organizational Competencies

Attitudes

Bloom's Taxonomy for the Attitudes Domain Adapted to Individual and Organizational Competencies
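
As one possible, purely illustrative, way of representing the adapted model for assessment purposes, each taxonomy domain could hold a set of elements, with each element carrying separate individual and organizational competency statements. The element names and wording in the sketch below are placeholders, not the definitions proposed in the figures above.

```python
# Hypothetical sketch: representing the adapted model as a data structure in
# which each taxonomy domain holds elements with paired individual and
# organizational competency statements. All names and wording are placeholders.
from dataclasses import dataclass

@dataclass
class Competency:
    element: str          # taxonomy element (illustrative label only)
    individual: str       # what an individual should be able to do
    organizational: str   # what the organization should support or exhibit

model = {
    "knowledge (cognitive)": [
        Competency("comprehension",
                   "explain what a given dataset does and does not show",
                   "maintain shared definitions and documentation for key data"),
    ],
    "skills (psychomotor)": [
        Competency("manipulation",
                   "use standard tools to query and chart data",
                   "provide access to the tools and data needed for the role"),
    ],
    "attitudes (affective)": [
        Competency("valuing",
                   "treat data quality as part of one's own work",
                   "reward and reinforce evidence-based decision making"),
    ],
}

for domain, competencies in model.items():
    for c in competencies:
        print(f"{domain}: {c.element} -> individual: {c.individual}")
```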

Considerations

It was noted in the first section that the Databilities list of competencies does not include, among others, data ethics. It should be observed that the model just described, produced employing a theory-based approach, also does not include data ethics. It may be that Bloom’s is an inappropriate taxonomy to employ for this purpose, it may be that the specific list of competencies is not fully described (perhaps, for example, ‘ethics’ should be added as an ‘attitude’), or it may be that data ethics should not properly be considered a data literacy competency.

What such considerations make clear is that if any theory-based conceptualization of data literacy is employed, it should be mapped against existing discussions of data literacy, including the studies cited above, and validated through research and discussion with those responsible for managing and enhancing data literacy capabilities.

Levels

Levels. DuBois, 2022

Many data literacy assessment models report their results in terms of ‘levels’, where the level indicates a degree of proficiency.

For example, Means, et al. (2011) categorize teachers' ability to use data to inform instruction as 'below basic', 'basic', 'proficient', and 'advanced'. This test measured, for example, whether teachers could find relevant data in a complex table or graph (basic) or manipulate data from a complex table or graph to support reasoning (advanced).

The NU Data project, "a professional development intervention aimed at preparing special education teams to use data-based decision making to improve academic outcomes for students with disabilities" (Doll, et al., 2014), uses a "data knowledge scale", a "single factor measure of data literacy" (Sikorski, 2016).

QuantHub, which measures data literacy for commercial clients, offers a scale of seven 'personas', "each of which represents progressively more sophisticated skill levels up to 'data scientist'" (DuBois, 2022). DataLiteracy (2021), which "works hand-in-hand with organizations who seek to improve team-based data literacy", offers a five-level "maturity" rating. Many more examples of data literacy 'levels' can be identified in the literature. As with other measures, there is no standard or consistent approach defining how these levels are determined, nor even what they mean.

Role-Defined Data Literacy

It is arguable that a single-factor measure of data literacy is insufficient to account for the variability in both the set of data literacy competencies and also the varying degree to which each competency is required in different job functions or roles. Accordingly, a role-defined data literacy model is proposed here.

Defining a Skills Profile

This figure illustrates the calculation of a role-defined data literacy profile. It consists of a combination of the set of competencies as defined in the data literacy model with the actual job or function description. This allows for a definition of the relative importance of each competency for that function, demonstrated here in the form of a radar chart (also known as a spider chart).

As discussed above, the precise definition of data literacy competencies ought to be undertaken in consultation with relevant personnel. Job or function descriptions may be obtained from extant text (the example in the diagram is from the forces.ca Careers page) or drafted as text by managers and those occupying the position. The competency profile may be created by simply counting the frequency of relevant terms, or by a more nuanced analysis, perhaps using machine learning.
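
As a minimal sketch of how such a frequency-based profile might be computed, the following snippet counts occurrences of competency-related keywords in a job description and normalizes them into relative weights. The competency names, keyword lists, and sample text are illustrative placeholders, not part of any validated model; the resulting weights could then be plotted as a radar chart.

```python
# Minimal sketch: deriving a role-defined competency profile by counting
# competency-related keywords in a job or function description. Competency
# names, keyword lists, and sample text are illustrative placeholders.
import re
from collections import Counter

# Hypothetical mapping of competencies to indicative keyword prefixes.
COMPETENCY_KEYWORDS = {
    "collect data": ["collect", "gather", "acquire"],
    "analyse data": ["analyz", "interpret", "evaluat"],
    "present data": ["chart", "graph", "visualiz", "report"],
    "manage data":  ["store", "manag", "database", "record"],
}

def competency_profile(job_description: str) -> dict:
    """Return relative weights (summing to 1) for each competency."""
    words = re.findall(r"[a-z]+", job_description.lower())
    counts = Counter(words)
    raw = {
        comp: sum(c for w, c in counts.items()
                  if any(w.startswith(kw) for kw in keywords))
        for comp, keywords in COMPETENCY_KEYWORDS.items()
    }
    total = sum(raw.values()) or 1  # avoid division by zero for empty text
    return {comp: score / total for comp, score in raw.items()}

if __name__ == "__main__":
    text = ("The analyst will gather and analyze operational records, "
            "interpret trends, and report findings in charts and graphs.")
    profile = competency_profile(text)  # values could feed a radar chart
    for comp, weight in profile.items():
        print(f"{comp:15s} {weight:.2f}")
```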

The same process may be used to create actual competency profiles for each individual evaluated, by employing test results or actual communications generated by the person in question (such a process would be subject to ethical and privacy considerations). A similar process may be used to generate organizational-level competency profiles.

Assessment Methods

Four major forms of assessment were identified in the literature:

· Self-Report
· Skills Test (Open Response)
· Skills Test (Multiple Choice)
· Analysis

Self-Report

In a self-report assessment, the user is presented with a series of questions about their capabilities, to which they respond (presumably) honestly. While used in some cases for individual assessments, this form of assessment is particularly useful for assessing organizational capabilities, since there is typically no direct or objective means of testing at that level.

Self-Assessment. Jones, 2019

The website 'DataLiteracy', for example, offers a "17 Key Traits of Data Literacy Self-Assessment" evaluation in which, for each trait, respondents use a sliding-scale tool to indicate their proficiency and the relative importance of the trait (Jones, 2019). Similarly, a Udemy course offers a self-assessment-based "Data Literacy Assessment for Every Employee" (Jones, 2021). The Canada School of Public Service offered a very similar "How Data Literate Are You" quiz for federal employees (illustrated below).

There is good reason to be sceptical of self-reported cognitive capabilities, even when respondents are being honest. Subjective assessment may bias responses for a variety of reasons, including a desire to provide the right answer, or as a result of imposter syndrome. But even where no bias is present, “convincing evidence of the association between self-report scales and actual cognitive performance has not been demonstrated” (Williams, et al., 2017).

In research in 2020, NRC and DRDC researchers used the Databilities (Data to the People, 2023) data literacy assessment, which employs a self-report method. Aware of this concern, the researchers included two additional sets of questions in the study: a set of 'objective' questions to measure the respondents' actual capabilities (at least in statistical reasoning), and a set of questions intended to measure whether respondents had a bias toward pleasing others. Analysis found a correlation between objective test scores and self-reports. However, such analysis must be conducted with caution to avoid an autocorrelation effect (Fix, 2022).
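
The autocorrelation caution can be illustrated with a purely synthetic simulation (it bears no relation to the actual study data): when the objective score appears on both sides of a comparison, for example when an 'overconfidence gap' is correlated with the objective score itself, a strong correlation emerges even though the self-reports and objective scores were generated independently.

```python
# Synthetic illustration of the autocorrelation caution (Fix, 2022): reusing
# the objective score on both axes produces a spurious correlation from noise
# alone. Purely invented data; no connection to the NRC-DRDC study.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
objective = rng.uniform(0, 100, n)    # simulated objective test scores
self_report = rng.uniform(0, 100, n)  # independent simulated self-assessments

# Direct association: essentially zero by construction.
r_direct = np.corrcoef(objective, self_report)[0, 1]

# The "overconfidence gap" contains the objective score, so correlating it
# with the objective score is autocorrelated and strongly negative.
gap = self_report - objective
r_spurious = np.corrcoef(objective, gap)[0, 1]

print(f"corr(objective, self_report) = {r_direct:+.3f}")   # ~ 0.00
print(f"corr(objective, gap)         = {r_spurious:+.3f}") # ~ -0.71
```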

Skills Test (Open Response)

In an open-response skills test the respondent is asked a question and provided a space in which to provide a response. Typically, there is a correct answer, or minimally, a possibility that some answers may be better than others. Assessment of the response may be based on the factual content expressed in the response (i.e., the answer) or on criteria related to the formulation of the response (e.g., use of evidence or proper argumentation).

For example, students taking the Ontario Secondary School Literacy Test (OSSLT) (EQAO, 2020) might be given a passage to read and asked, “Explain why Montreal’s approach to graffiti is beneficial. Use specific details from the selection to support your answer.” This question would require a specific response (why it’s beneficial) produced in a specific way (using details from the text) (St. Mary’s, 2018). Or the question may require a more complex construction, as in the example illustrated below:

Evaluation criteria for such a question would include completeness (answering all the ‘w’ questions), comprehensiveness (addressing both image and text) and focus (toward a newspaper audience).

St. Mary’s Practice Test 2017/2018

An advantage of open-response assessments is that they more closely emulate real-world contexts. For example, being a data-literate teacher means being able to draw open-ended conclusions from a set of data (Athanases, et al. 2013). The same holds for other professions, including military professions, where there may be no fixed, specific, or ‘right’ answer to questions, only better or worse ways of working with the data.

Levels. Sickler, et al., 2021

Assessment of open-response tests therefore requires a set of evaluation criteria or rubric. While accepting that “there is not a clear consensus of what it means to be ‘data literate’,” Sickler, et al. (2021) propose a scoring rubric for the measurement of data literacy skills in undergraduate education. The rubric is based on “data skill indicators,” which on examination are analogous to the data literacy competencies identified in the first section of this report.

A major weakness of open-response assessment is the need for individual and interpretive grading, which requires significant human resources and time, and may introduce elements of subjectivity and bias. Work is underway on the development of automated assessment, for example, AI-supported essay grading (see, e.g., Kumar and Boulanger, 2020).

Skills Test (Multiple Choice)

The intent of multiple-choice tests is to obtain the same quality of assessment as with open-response assessments, but with less effort required on the part of both respondents and graders. Automated grading of multiple-choice tests is well-established and is frequently employed in online courses and web-based resources.

Multiple Choice. Ontario Secondary School Literacy Test (OSSLT), 2021

While they may be easier to grade, multiple-choice tests are difficult to design. It is important that the responses measure the skills being tested, and not unassociated skills (for example, incidental subject knowledge, or the ability to decipher double negatives). The choices offered need to be plausible, but distinct and easily distinguished by someone with the appropriate skill. It is important to design and develop such tests using recognized methodologies, such as Rasch modeling, which “assumes that the underlying construct that is being measured varies along a single dimension” (Bond & Fox, 2012).
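
For reference, the dichotomous Rasch model expresses the probability of a correct response as a logistic function of the difference between respondent ability and item difficulty. The sketch below shows that probability function only; it is not an estimation procedure, and the parameter values are illustrative.

```python
# Minimal sketch of the dichotomous Rasch model: the probability that a person
# answers an item correctly depends only on ability (theta) minus difficulty (b).
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A person whose ability equals the item difficulty has a 50% chance of success;
# easier items (lower difficulty) yield higher probabilities.
for b in (-1.0, 0.0, 1.0):
    print(f"difficulty={b:+.1f}  P(correct)={rasch_probability(0.0, b):.2f}")
```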

Most data literacy assessments are offered in the form of multiple-choice tests. The eLearning Curve (2022) assessment referenced above is one example. Another commercial example is the Questionmark Data Literacy test by Cambridge Assessment (2021). Most of the Ontario Secondary School Literacy Test (OSSLT) is in the form of multiple-choice questions, as is the Ontario School data literacy assessment used as 'objective questions' in the previously mentioned NRC-DRDC research project.

Analysis

“Content analysis is a method designed to identify and interpret meaning in recorded forms of communication by isolating small pieces of the data that represent salient concepts and then applying or creating a framework to organize the pieces in a way that can be used to describe or explain a phenomenon.” (Kolbe & Burnett, 1991)

Content Analysis. Golonka, et al., 2017, p. 161

Content analysis is more often used in the context of research than of testing and assessment, but as machine learning analysis becomes more prominent, we may expect it to be used more frequently to assess data literacy. In an analysis, a body of typically unstructured content is subjected to labeling and categorization by researchers to identify semantic, communicative, or cognitive elements. That said, the practice of content analysis as an educational research and assessment tool is well understood in the community (Kleinheksel, et al., 2020).

An illustration of this methodology may be found in the qualitative analysis of chat transcripts to identify peer interaction in text chat (Golonka, et al., 2017). Analysis requires a taxonomy of the entities being identified, as for example provided by a concept map. In the case of data literacy, this set of concepts is analogous to the data literacy competencies described above, and an analysis would contain two parts for each instance: the name or type of competency being attempted, and the degree to which the attempt was successful.
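
A minimal sketch of how such two-part codes might be aggregated into a per-competency summary is shown below; the segment identifiers, competency labels, and success judgments are invented for illustration.

```python
# Minimal sketch: aggregating content-analysis codes into a per-competency
# summary. Each code records the competency attempted in a segment and whether
# the attempt was judged successful. Labels and data are placeholders.
from collections import defaultdict

# (segment_id, competency, successful)
coded_segments = [
    (1, "interpret chart", True),
    (2, "interpret chart", False),
    (3, "identify data source", True),
    (4, "draw conclusion", True),
    (5, "draw conclusion", False),
    (6, "draw conclusion", True),
]

tally = defaultdict(lambda: {"attempts": 0, "successes": 0})
for _, competency, success in coded_segments:
    tally[competency]["attempts"] += 1
    tally[competency]["successes"] += int(success)

for competency, t in tally.items():
    rate = t["successes"] / t["attempts"]
    print(f"{competency:22s} attempts={t['attempts']} success_rate={rate:.0%}")
```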

Some examples of analysis used to identify data literacy are extant. Suryadi, et al. (2021) study data literacy in physics students (and, based on the analysis, find it deficient). Noting that "the meaning of data literacy varies depending on who uses it, and its concept is often conveyed in terms other than data literacy," Yousef, et al. (2021) use text analysis to identify data literacy communities. Piro and Hutchinson (2014) analyze "changes in perceptions of comfort toward data-literacy behaviors before and after an instructional intervention called a Data Chat."

Validity and Reliability

As mentioned in passing above, it can be a challenge to ensure that assessments accurately measure the data literacy skills and competencies they are intended to assess. Hence there should be a process to ensure the assessments are valid and reliable.

Validity and Reliability

As is often the case, indices from other disciplines can be useful here. For example, Cohen's kappa index goes beyond simple assessor agreement to take into account the possibility that assessors actually guess on at least some variables (McHugh, 2012). While mostly used in medicine, it has been used, for example, by Ebbeler, et al. (2016) to calculate the inter-rater agreement in data literacy assessments.
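
As a worked illustration, the following sketch computes Cohen's kappa for two hypothetical raters scoring the same set of responses; the labels and data are invented.

```python
# Minimal sketch: Cohen's kappa for two raters scoring the same responses
# (here simply 'yes'/'no' judgments). Data is invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e): observed vs. chance-expected agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[label] / n) * (counts_b[label] / n) for label in labels)
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.50 for this invented data
```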

A similar measure is Cronbach's alpha coefficient, which measures how closely related a set of items are as a group, and which was used by Delmas, et al. (2007) in an assessment of statistical literacy. It is a function not only of the interrelatedness of the items but also of the number of items evaluated. Other measures of reliability include Fleiss' kappa, the contingency coefficient, the Pearson r and the Spearman rho, the intra-class correlation coefficient, the concordance correlation coefficient, and Krippendorff's alpha (McHugh, 2012).
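
Returning to Cronbach's alpha, a minimal sketch using an invented item-response matrix shows how the coefficient combines the individual item variances with the variance of the total score.

```python
# Minimal sketch: Cronbach's alpha for a small invented item-response matrix,
# rows = respondents, columns = assessment items scored 0-5.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.93 for this invented data
```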

There are several different types of validity (Linn and Miller, 2005):

· Face validity - do the assessment items appear to be appropriate?
· Content validity - does the assessment content cover what you want to assess?
· Criterion-related validity - how well does the test measure what you want it to?
· Construct validity - are you measuring what you think you're measuring?

There is no single test for the validity of data literacy assessments; validation "involves amassing evidence that supports these interpretations and decisions" to produce a "validity argument". However, in other disciplines, quantitative approaches to content validity estimation such as Lawshe's CVR and Aiken's V are used, which are in turn based on expert assessments (Ikhsanudin & Subali, 2018).
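
A brief sketch of these two indices, using an invented expert panel, is shown below: Lawshe's CVR compares the number of 'essential' ratings against half the panel size, while Aiken's V normalizes the sum of ratings above the scale minimum.

```python
# Minimal sketch of two expert-panel content-validity indices. Panel data is
# invented for illustration.

def lawshe_cvr(n_essential: int, n_panelists: int) -> float:
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

def aikens_v(ratings, lowest: int, categories: int) -> float:
    """Aiken's V: sum(r - lowest) / (n * (categories - 1)) for ordinal ratings."""
    s = sum(r - lowest for r in ratings)
    return s / (len(ratings) * (categories - 1))

# Ten panelists, eight of whom rate an assessment item as "essential".
print(f"CVR       = {lawshe_cvr(8, 10):.2f}")  # 0.60

# Five experts rate the item's relevance on a 1-5 scale.
print(f"Aiken's V = {aikens_v([5, 4, 5, 4, 4], lowest=1, categories=5):.2f}")  # 0.85
```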



References


Athanases, Steven Z., et al. “Fostering Data Literacy through Preservice Teacher Inquiry in English Language Arts.” The Teacher Educator, vol. 48, no. 1, 2013, pp. 8–28.

Bargagliotti, Anna, et al. Pre-K-12 Guidelines for Assessment and Instruction in Statistics Education II (GAISE II): A Framework for Statistics and Data Science Education. American Statistical Association, 2020, https://www.amstat.org/docs/default-source/amstat-documents/gaiseiiprek-12_full.pdf.

Bloom, B. S. Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain. David McKay, 1956.

Breakspear, Simon. “The Policy Impact of PISA: An Exploration of the Normative Effects of International Benchmarking in School System Performance.” OECD, Feb. 2012, https://doi.org/10.1787/5k9fdfqffr28-en.

Canada School of Public Service (CSPS). How Data Literate Are You? Government of Canada, Apr. 2022, https://catalogue.csps-efpc.gc.ca/product?catalog=DDN302&cm_locale=en.

DataLiteracy. “The Data Literacy Score | Data Literacy.” Data Literacy | Learn the Language of Data, Feb. 2021, https://dataliteracy.com/data-literacy-score.

---. “About This Project - DataLiteracy.Ca.” DataLiteracy.Ca, Internet Archive, 22 Dec. 2021, https://web.archive.org/web/20211222131031/http:/dataliteracy.ca/about-this-data-literacy-project.

Data to the People. Databilities. Viewed 2023. https://www.datatothepeople.org/databilities

Delmas, Robert, et al. “Assessing Students’ Conceptual Understanding after a First Year Course in Statistics.” Statistics Education Research Journal, vol. 6, Jan. 2007, https://iase-web.org/documents/SERJ/SERJ6(2)_delMas.pdf.

Doll, Elizabeth. Using Data to Foster the School Success of Students with Disabilities. Institute of Education Sciences, 2011, https://ies.ed.gov/funding/grantsearch/details.asp?ID=1131.

DuBois, Jen. “Data Literacy Assessment Is Vital to Data Culture.” QuantHub, QuantHub, 24 Feb. 2022, https://quanthub.com/data-literacy-assessment.

Duncan, Alan D., Donna Medeiros, Aron Clarke, et al. “How to Measure the Value of Data Literacy.” Gartner, Gartner, Apr. 2022, https://www.gartner.com/en/documents/4003941.

---. “Toolkit: Data Literacy Individual Assessment.” Gartner, Gartner, Apr. 2020, https://www.gartner.com/en/documents/3983897.

Ebbeler, Johanna, et al. “Effects of a Data Use Intervention on Educators’ Use of Knowledge and Skills.” Studies in Educational Evaluation, vol. 48, Mar. 2016, pp. 19–31, https://doi.org/10.1016/j.stueduc.2015.11.002.

Education Quality and Accountability Office (EQAO). “Ontario Secondary School Literacy Test (OSSLT).” EQAO, Government of Ontario, 2020, https://www.eqao.com/the-assessments/osslt.

eLearningCurve. “Request Access to the Data Literacy-Self Assessment”. Apr. 2022, https://ecm.elearningcurve.com/Articles.asp?ID=369.

Fix, Blair. “The Dunning-Kruger Effect Is Autocorrelation – Economics from the Top Down.” Economics from the Top Down, 8 Apr. 2022, https://economicsfromthetopdown.com/2022/04/08/the-dunning-kruger-effect-is-autocorrelation.

GAISE College Report ASA Revision Committee. Guidelines for Assessment and Instruction in Statistics Education (GAISE). American Statistical Association, 2016, https://www.amstat.org/education/gaise.

---. Guidelines for Assessment and Instruction in Statistics Education (GAISE) College Report 2016. American Statistical Association, 2016, https://www.amstat.org/docs/default-source/amstat-documents/gaisecollege_full.pdf.

Golonka, Ewa M., et al. “Peer Interaction in Text Chat: Qualitative Analysis of Chat Transcripts.” Language Learning & Technology, vol. 21, no. 2, June 2017, pp. 157–78.

Ikhsanudin, and B. Subali. “Content Validity Analysis of First Semester Formative Test on Biology Subject for Senior High School.” Journal of Physics: Conference Series, vol. 1097, no. 1, Sept. 2018, p. 012039, https://doi.org/10.1088/1742-6596/1097/1/012039.

Johnson, Beth. “Assessing Written Communication Skills in STEM Courses.” Innovations in Teaching & Learning Conference Proceedings, vol. 10, Aug. 2018, https://doi.org/10.13021/G8itlcp.10.2018.2241.

Jones, Ben. “A Data Literacy Assessment for Every Employee.” Udemy Business, Udemy, July 2021, https://business.udemy.com/resources/data-skills-assessment-template.

---. “Take the 17 Key Traits of Data Literacy Self-Assessment.” DataLiteracy, 2019, https://dataliteracy.com/take-the-17-key-traits-self-assessment/.

Kirsch, Irwin, and William Thorn. Technical Report of the Survey of Adult Skills (PIAAC) (2nd Edition). Organisation for Economic Co-operation and Development (OECD), 2016, https://www.oecd.org/skills/piaac/PIAAC_Technical_Report_2nd_Edition_Full_Report.pdf.

Kleinheksel, A. J., et al. “Demystifying Content Analysis.” American Journal of Pharmaceutical Education, vol. 84, no. 1, Jan. 2020, https://doi.org/10.5688/ajpe7113.

Kolbe, Richard H., and Melissa S. Burnett. “Content-Analysis Research: An Examination of Applications with Directives for Improving Research Reliability and Objectivity.” Journal of Consumer Research, vol. 18, no. 2, Sept. 1991, pp. 243–50, https://doi.org/10.1086/209256.

Kumar, Vivekanandan, and David Boulanger. “Explainable Automated Essay Scoring: Deep Learning Really Has Pedagogical Value.” Frontiers in Education, Oct. 2020, https://doi.org/10.3389/feduc.2020.572367.

Linn, Robert L., and David M. Miller. Measurement and Assessment in Teaching, 9th Edition. Pearson, 2005, https://www.pearson.com/us/higher-education/product/Linn-Measurement-and-Assessment-in-Teaching-9th-Edition/9780131137721.html.

McHugh, Mary L. “Interrater Reliability: The Kappa Statistic.” Biochemia Medica, vol. 22, no. 3, Oct. 2012, p. 276.


Means, Barbara, et al. Teachers’ Ability to Use Data to Inform Instruction: Challenges and Supports. US Department of Education, Office of Planning, Evaluation and Policy Development, Feb. 2011, https://eric.ed.gov/?id=ED516494.

National Defence. The Department of National Defence and Canadian Armed Forces Data Strategy - Canada.Ca. Government of Canada, 3 Dec. 2019, https://www.canada.ca/en/department-national-defence/corporate/reports-publications/data-strategy.html.

Oberländer, Maren, et al. “Digital Competencies: A Review of the Literature and Applications in the Workplace.” Computers & Education, vol. 146, Mar. 2020, p. 103752, https://doi.org/10.1016/j.compedu.2019.103752.

Piro, Jody S., and Cynthia J. Hutchinson. “Using a Data Chat to Teach Instructional Interventions: Student Perceptions of Data Literacy in an Assessment Course.” New Educator, vol. 10, no. 2, 2014, pp. 95–111.

Sickler, Jessica, et al. “Measuring Data Skills in Undergraduate Student Work.” Journal of College Science Teaching, vol. 50, no. 4, Mar./Apr. 2021, https://www.nsta.org/journal-college-science-teaching/journal-college-science-teaching-marchapril-2021/measuring-data.

Sikorski, Jonathon David. Examination of the NU Data Knowledge Scale. University of Nebraska - Lincoln, 2016, https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1268&context=cehsdiss.

St. Mary’s. St. Mary’s Practice Test 2017/2018. 2018, https://drive.google.com/file/d/1LsDbvAy_YHepMVD9VVEEWH6w5U_Dk1-E/view.

Statistics Canada. The Daily - Begin Your Data Journey with Data Literacy Training Videos. Government of Canada, 23 Sept. 2020, https://www150.statcan.gc.ca/n1/daily-quotidien/200923/dq200923f-eng.htm.

Suarez-Alvarez, Javier. “Are 15-Year-Olds Prepared to Deal with Fake News and Misinformation?” OECD, May 2021, https://doi.org/10.1787/6ad5395e-en.

Suryadi, et al. “Data Literacy of High School Students on Physics Learning.” Journal of Physics: Conference Series, vol. 1839, no. 1, Mar. 2021, p. 012025, https://doi.org/10.1088/1742-6596/1839/1/012025.

United Nations Industrial Development Organization (UNIDO). UNIDO Competencies: Strengthening Organizational Core Values and Managerial Capabilities. United Nations, 2002, https://docplayer.net/9459584-Unido-competencies-strengthening-organizational-core-values-and-managerial-capabilities.html.

Wells, Dave. Building a Data Literacy Program. 4 Jan. 2021, https://www.eckerson.com/articles/the-data-literacy-imperative-part-i-building-a-data-literacy-program.

Williams, Paula G., et al. “On the Validity of Self-Report Assessment of Cognitive Abilities: Attentional Control Scale Associations With Cognitive Performance, Emotional Adjustment, and Personality.” Psychological Assessment, vol. 29, no. 5, 2017, pp. 519–30.

