The digital transformation of research has brought significant changes to how research is produced, disseminated, accessed, and used. Researchers can now collaborate across disciplines, institutions, and countries, use advanced tools and platforms, and share their outputs in various formats and channels. However, these changes also pose challenges for research evaluation and assessment, such as how to capture and measure the diversity, complexity, and dynamics of research activities and outputs, how to ensure the quality and validity of digital sources and data, and how to deal with ethical and legal issues related to data protection, privacy, and ownership.
-
The digital age has seen the rise of new research outputs, such as data sets, software, and interactive visualisations. These new outputs can be difficult to evaluate and assess using traditional methods.
-
Challenges: managing and interpreting vast amounts of digital data; ensuring data security and privacy in the digital research landscape. Opportunities: enhanced accessibility and dissemination of research outputs; integration of advanced technologies for data analytics and visualization.
-
In the digital age, the main challenges for research evaluation and assessment revolve around the vast volume of data, the rapid pace of technological change, and the need for new methodologies that can accurately capture the multifaceted nature of digital scholarship. Conversely, the opportunities are equally significant; digital transformation offers innovative tools for data analysis, enables broader dissemination and collaboration, and fosters interdisciplinary approaches that can enrich research evaluation. As a researcher, I see the potential to leverage artificial intelligence and big data analytics to develop more nuanced and dynamic assessment criteria that reflect the evolving landscape of digital scholarship.
One of the opportunities that the digital transformation offers for research evaluation and assessment is the use of alternative metrics, or altmetrics, which are indicators of the online attention and engagement that research outputs receive from various stakeholders and audiences, such as citations, downloads, views, shares, likes, comments, mentions, and endorsements. Altmetrics can complement traditional metrics, such as citation counts and impact factors, by providing a broader and more timely picture of the reach, influence, and relevance of research. However, altmetrics also have limitations and challenges, such as how to interpret and compare them across disciplines, platforms, and contexts, how to ensure their reliability and validity, and how to avoid gaming and manipulation.
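To make this concrete, here is a minimal sketch of how several altmetric signals might be rolled up into a single illustrative score. The signal names and weights are hypothetical, chosen only for this example; no standard weighting scheme is implied.

```python
# Minimal sketch: combining several altmetric signals for one research output
# into a single illustrative number. The signal names and weights below are
# hypothetical and for demonstration only; they are not a recognised standard.

def altmetric_score(signals, weights=None):
    """Return a weighted sum of online-attention counts for one output."""
    default_weights = {
        "downloads": 0.5,
        "views": 0.1,
        "shares": 1.0,
        "mentions": 2.0,   # e.g. news or policy mentions
        "comments": 1.5,
    }
    weights = weights or default_weights
    return sum(weights.get(name, 0.0) * count for name, count in signals.items())

# Example: hypothetical counts for a single paper.
paper = {"downloads": 420, "views": 3100, "shares": 55, "mentions": 4, "comments": 12}
print(altmetric_score(paper))  # 601.0
```

Any such weighting immediately raises the comparability and gaming concerns noted above, which is why altmetrics are best read alongside, not instead of, other evidence.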
-
In my opinion, the amount of research being produced is growing exponentially. This makes it difficult to keep up with the latest research and to identify the most important findings.
-
Opportunities: diversifying assessment criteria beyond traditional citation counts; incorporating diverse indicators such as social media mentions, downloads, and online engagement. Challenges: establishing standardized and universally accepted alternative metrics; addressing potential biases in alternative metrics.
Another opportunity that the digital transformation offers for research evaluation and assessment is the promotion of open science, which is a movement that advocates for making research more transparent, accessible, inclusive, and collaborative, by sharing not only the final outputs, but also the data, methods, protocols, code, and other intermediate products of research. Open science can enhance the quality, impact, and value of research by increasing its visibility, reproducibility, reusability, and interoperability. However, open science also poses challenges for research evaluation and assessment, such as how to incentivize and reward researchers for engaging in open practices, how to ensure the quality and integrity of open data and outputs, and how to balance openness with confidentiality and security.
-
Consider:
1. Transparency: share all stages of research, including data, methods, protocols, and code, to enhance visibility and reproducibility.
2. Accessibility: ensure research outputs are freely accessible to foster inclusivity and collaboration.
3. Quality assurance: implement rigorous checks to maintain the integrity of open data and outputs.
4. Confidentiality: balance openness with the need to protect sensitive information and uphold security.
5. Interoperability: promote standards that allow data and findings to be effectively used and integrated by others.
6. Collaboration: foster a culture of shared learning and growth within the research community.
With the right incentives, we can make science more accessible.
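As a small illustration of points 1 and 5, the sketch below shows a machine-readable record describing a shared output together with a simple completeness check. The field names and URLs are placeholders invented for this example; real projects would follow an established metadata standard such as DataCite or schema.org.

```python
# Minimal sketch: a machine-readable record for a shared research output.
# The field names and URLs are illustrative placeholders, not a formal schema.

required_fields = ["title", "creators", "licence", "access_url", "methods", "code_repository"]

record = {
    "title": "Survey of researcher attitudes to open peer review",
    "creators": ["A. Researcher", "B. Collaborator"],
    "licence": "CC-BY-4.0",
    "access_url": "https://example.org/dataset/123",     # placeholder URL
    "methods": "protocol.pdf",
    "code_repository": "https://example.org/code/123",   # placeholder URL
}

# Flag any required fields that are missing or empty before sharing.
missing = [f for f in required_fields if not record.get(f)]
if missing:
    print("Record is missing:", ", ".join(missing))
else:
    print("Record has all required fields for sharing.")
```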
-
To add to this, open science presents challenges in ensuring quality and credibility but offers opportunities for wider dissemination and collaborative advances. For example, open peer review can be more transparent but may introduce biases. Conversely, open data allows for reproducibility checks and novel insights from shared resources. In my experience, open science accelerates knowledge exchange and fosters innovation, yet it requires robust frameworks to maintain research integrity in the digital landscape.
One of the challenges that the digital transformation poses for research evaluation and assessment is the risk of misuse and abuse of metrics, which are quantitative indicators of research performance and impact, such as citation counts, impact factors, h-indexes, and rankings. Metrics can have negative effects on research quality and culture if they are used inappropriately, uncritically, or exclusively to judge and reward researchers, such as creating incentives for gaming, misconduct, or salami-slicing, or creating pressures for productivity, competition, or conformity. To address this challenge, a responsible approach to metrics is needed, which is based on the principles of robustness, transparency, diversity, and reflexivity, and which recognizes the limitations, uncertainties, and biases of metrics, and the importance of human judgment and qualitative evidence.
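As a reminder of how reductive a single indicator can be, here is a minimal sketch of the usual h-index calculation from per-paper citation counts; the citation numbers are invented for illustration.

```python
# Minimal sketch: computing an h-index from a list of per-paper citation
# counts. A single number like this hides context such as field norms,
# career stage, and the quality of individual papers.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # 3: three papers each have at least 3 citations
```

Two researchers with very different bodies of work can share the same h, which is exactly why human judgment and qualitative evidence remain essential.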
-
Shut down after just three days, Meta's Galactica AI was designed to write scientific research papers and Wikipedia articles. Trained on 48 million scientific articles and textbooks, Galactica outperformed larger models such as GPT-3 and BLOOM on benchmarks like BIG-bench (the Beyond the Imitation Game Benchmark). The only problem? Galactica was prone to spewing scientific misinformation and even generated misleading research, including a Wikipedia-style entry on the benefits of eating crushed glass. Of course, humans can also generate fake science, but large language models might significantly lower the barrier to entry and amplify the scale of the issue. Training objectives need to be clearly defined to ensure scientific accuracy.
-
Responsible metrics advocate for the ethical and equitable use of quantitative indicators in research evaluation. They emphasize contextual understanding, acknowledging diverse research contributions, and avoiding unintended consequences such as gaming or biases. Responsible metrics promote fair and transparent assessment practices that prioritize quality, integrity, and the broader societal impact of research over simplistic numerical measures.
-
Challenges: balancing quantitative and qualitative approaches, avoiding unintended consequences. Opportunities: holistic impact, responsible use, ethical frameworks.
Another challenge that the digital transformation poses for research evaluation and assessment is the need to engage with a wider range of stakeholders and audiences who have an interest or a role in research, such as funders, policymakers, practitioners, industry, media, civil society, and the public. Stakeholder engagement can enhance the relevance, impact, and value of research by ensuring that it addresses the needs, expectations, and perspectives of different groups and sectors, and by facilitating the communication, dissemination, and uptake of research findings and recommendations. However, stakeholder engagement also requires new skills, methods, and tools for research evaluation and assessment, such as how to identify and involve relevant stakeholders, how to co-produce and co-evaluate research with them, and how to assess and demonstrate the societal impact and value of research.
-
Stakeholder engagement involves actively involving relevant individuals or groups in decision-making processes to ensure their perspectives, concerns, and needs are considered and addressed. It fosters collaboration, builds trust, and enhances the relevance and impact of initiatives by incorporating diverse insights and expertise from those affected by or involved in a particular project or endeavor.
-
Challenges: engaging diverse expectations, bridging the academia-industry gap, ensuring participation. Opportunities: collaboration, diverse perspectives, societal impact.
The digital transformation of research is an ongoing and evolving process that will continue to shape and challenge research evaluation and assessment in the future. Therefore, researchers need to be aware of the trends and issues that affect their field and discipline, and to be proactive and adaptive in developing and adopting new approaches and methods that are fit for purpose and context. Moreover, researchers need to be critical and reflective about the purposes and practices of research evaluation and assessment, and to be responsible and ethical in using and producing metrics and evidence that can inform and improve research quality, impact, and value.
-
One issue with AI-powered peer review is that the confidentiality of raw manuscripts is not guaranteed. Ideally, research should expose new ways of thinking, doing, and being, which AI may not be able to assess. Furthermore, AI is limited to information stored digitally, whereas there is a wealth of knowledge, such as indigenous knowledge, that AI does not have access to. If we replace human peer review with AI, we may unfortunately find that only certain ways of thinking become accepted, and these are not necessarily the best ways to move humanity forward.
-
Challenges: adapting to emerging tech, navigating ethical shifts, accommodating interdisciplinary research. Opportunities: innovation, adaptive frameworks, continuous collaboration.
-
If the AI, as Nikita Jain mentions, spewed out scientific misinformation despite having been trained on so many articles, then surely that should not just call into question AI for the stated purpose but also indicate that this is what it learned from the source material. We cannot simply assume that scientific articles are immune to containing bad information or misinformation; we have all no doubt seen nonsense in some articles. One can be sure that the AI's outputs were examined very closely, but the source material could not be vetted by humans due to its sheer volume and the time required. It was presumably assumed that peer-reviewed papers would be fault-free. It is not so much the AI that is the problem as the quality of the source material.
-
In the digital age, research evaluation faces challenges like data overload and rapid tech changes, demanding updated assessment methods and addressing ethical concerns. Conversely, it offers opportunities such as innovative metrics (altmetrics), enhanced collaboration, and accessibility through digital platforms, alongside quicker feedback loops for research iteration. This dynamic interplay of challenges and opportunities will shape the future of research evaluation and assessment.