The Researcher's World
Publishers Universities Funders Researcher
Bootcamp Session 15: AI Policy
How researchers, publishers, and funders are approaching AI in research
Publishers
Challenges:
  1. Paper mills
  1. Exposure to litigation
  1. Research integrity
  1. Confusion of AI-generated content with plagiarism
  1. Fear of retraction/delisting
  1. Originality/Novelty
Opportunities:
  1. Automate parts of the peer review process
  1. Increase quantity of papers (works well with APC business model)
  1. Find appropriate reviewers
  1. Find editors for special issues
  1. Licensing content for AI (additional revenue source)?
Early adopter personas
  • 'Big 5': Springer Nature, Wiley, Elsevier, Taylor & Francis, Sage
  • Publishers in technology-related disciplines
How are publishers & funders regulating the use of AI tools in research papers?
1. The prohibitive approach
2. The encouragement approach
3. The "Declaration" Approach
AI Policy Done Right

Wiley

AI Guidelines

Generative AI tools are becoming an increasingly valuable part of the writing and research process, offering new ways to enhance creativity, streamline workflows, and tackle complex challenges. Whether you’re exploring these tools for the first time or already incorporating them into your writing, understanding how to use them responsibly ensures your work remains original, ethical, and aligned with professional publishing standards.

AI-Powered Publishing Use Cases

Coda

Grant Funders
Hurdles:
  1. Undermining peer review
  1. Research Integrity
  1. Confidentiality
  1. Bias
  1. Security of intellectual property
Opportunities:
  1. Help sift through endless applications
  1. Data replication
  1. Support open infrastructures?
Drafting v. Review
NIH: complete ban on use in review (see researchers' comments)
Wellcome Trust: guidelines for responsible use
Sample funder guidelines
UKRI Applicants Guidelines
  1. Apply caution when entering information into generative AI tools to develop an application. Sensitive or personal data of others must never be input into a generative AI tool without formal consent from the individual.
  1. Consider the risk of bias when using outputs from the generative AI tool or model and consider mitigation.
  1. Ensure proposals don't contain any information that is:
  • confidential and used without consent
  • falsified
  • fabricated
  • plagiarised
  • misrepresented
  1. All applications must comply with relevant intellectual property and data protection legislation.
  1. Applicants are expected to be transparent about where they have used generative AI tools in the development of an application. This information will not affect the assessment process.
  1. Applicants must not use generative AI during interviews, where these form part of the application process.
Assessor Responsibilities
Assessors, including reviewers and panellists, must:
  • not use generative AI tools as part of their assessment activities
  • comply with relevant intellectual property and data protection legislation
  • not take into account or speculate within their assessment whether generative AI has been used to develop the application
How might funders respond?
Reflective, contextual and personalised questions
Partially randomised funding (or ‘lottery’) process
Deeper relationships between funder and applicant
AI tools to support funders in running more streamlined processes
Researchers should:
  1. Remain responsible for scientific output
  1. Retain accountability
  1. Take a critical approach to avoid hallucinations and bias
  1. Not credit generative AI tools with authorship
Look out for bias in training data, bias introduced through prompting, invented citations, and interpretability limits
Use generative AI transparently
  1. Detail which generative AI tools have been used substantially, for example: interpreting data analysis, carrying out a literature review, identifying research gaps, formulating research aims, and developing hypotheses.
  1. Declare in methods or other appropriate section
  1. References to the tool could include the name, version, date, etc. and how it was used and affected the research process.
  1. If relevant, researchers make the input (prompts) and output available, in line with open science principles
  1. Take into account the stochastic (random) nature of generative AI tools and consider replication issues
Privacy, confidentiality, & intellectual property rights
  1. Generated or uploaded input (text, data, prompts, images, etc.) could be used for other purposes, such as training AI models. Protect unpublished or sensitive work (your own or others') by not uploading it into an external AI system unless there are assurances that the data will not be re-used.
  1. Don't provide third parties' personal data to external generative AI systems unless the data subject (individual) has given consent and there is a clear goal for which the personal data are to be used, ensuring compliance with EU data protection rules.
  1. Understand the technical, ethical and security implications regarding privacy, confidentiality and intellectual property rights. Check institutional guidelines, privacy options of the tools, who is managing the tool (public or private institutions, companies, etc.), where the tool is running and implications for any information uploaded.
Respect national, EU and international legislation
  1. Pay attention to the potential for plagiarism (text, code, images, etc.) when using outputs from generative AI. The output of a generative AI (such as a large language model) may be based on someone else’s results and require proper recognition and citation.
  1. The output produced by generative AI can contain personal data. If this becomes apparent, researchers are responsible for handling any personal data output responsibly and appropriately.
Learn how to use GenAI tools properly, including by undertaking training
  1. Stay up to date on the best practices and share them with colleagues and other stakeholders
  1. Aim to minimise the environmental impact of generative AI
Refrain from using GenAI tools in sensitive activities that could impact other researchers or organisations
  1. Avoiding the use of generative AI tools eliminates the potential risks of unfair treatment or assessment that may arise from these tools’ limitations (such as hallucinations and bias)
  1. Examples of risky use include peer review, evaluation of research proposals etc.
EC Recommendations for Research Institutions
1. Promote, guide and support the responsible use of generative AI in research activities.
a. Provide and/or facilitate training for all career levels and disciplines, including for research managers and research support staff, on using generative AI, especially on verifying output, maintaining privacy, addressing biases and protecting intellectual property rights and sensitive knowledge
Actively monitor the development and use of generative AI systems within organisations
  1. Remain mindful of the research activities and processes for which generative AI is used, to support its future use. This knowledge can be used to:
  • provide further guidance on using generative AI
  • identify training needs and understand what kind of support could be most beneficial
  • anticipate and guard against possible misuse and abuse of AI tools, to be published and shared with the scientific community
  1. Analyse the limitations of the technology and tools, and provide feedback and recommendations to researchers
  1. Keep track of the environmental impact of generative AI within their organisations and promote awareness raising initiatives
Reference or integrate generative AI guidelines into general research guidelines for good research practices and ethics
  1. Use guidelines as a basis for discussion; openly consult research staff and stakeholders on the use of generative AI and related policies
  1. Apply these guidelines whenever possible. If needed, they can be complemented with additional recommendations and/or exceptions that should be published for transparency.
Implement locally hosted or cloud-based generative AI tools that you govern
Ensure the appropriate level of cybersecurity of systems, especially those connected to the internet.
EC Recommendations for Research Funders
1. Promote and support the responsible use of generative AI in research.
2. Review the use of generative AI in their internal processes. Lead the way by ensuring they use it transparently and responsibly.
3. Request transparency from applicants on their use of generative AI, and facilitate ways to report it.
4. Monitor and get actively involved in the fast-evolving generative AI landscape.
Trustworthy AI
Ethical Principles for AI Systems:
  1. Respect for human autonomy
  1. Prevention of harm
  1. Fairness
  1. Explicability
Key Operational Requirements
  1. Human agency and oversight
  1. Technical robustness and safety
  1. Privacy and data governance
  1. Transparency
  1. Diversity, non-discrimination and fairness
  1. Environmental and societal well-being
  1. Accountability
Is Your Use Case 'Substantive Use'?
Universities & Research Institutions
Challenges:
  1. 'Cheating' on papers
  1. Alternative evaluation systems
  1. Slow adoption
  1. Cost
Opportunities:
  1. Increase institution-wide publications
  1. Win more funding from competitive agencies
  1. Support researchers
  1. New teaching models
Early adopter personas
  • Countries with developing research ecosystems
  • GDPR could delay progress in Europe
Thank you
Feel free to reach out!
Website: aclang.com, sciwriter.ai
Email: avi@aclang.com
Linkedin: Avi Staiman
Twitter: @sciwriter, @aletranslation
Researcher tools
Summarization
1. Before
Reading the abstracts from the journals in your field
2. Problem
  • Abstracts don't cover specific details researchers are interested in
  • Static content
  • One paper at a time
  • Difficult to understand
3. Solution
  • Semantic search
  • Intelligent summarization & comparison between articles in different journals
  • news feeds with updating summaries
  • lay-language summaries
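The "intelligent summarization" idea above can be illustrated with a minimal sketch. This is a classical frequency-based extractive summarizer, a deliberately simple stand-in for the LLM-based summarizers these tools actually use: it scores each sentence by how often its words appear across the whole text and keeps the top-scoring sentences in their original order.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score sentences by average word frequency across the text
    and return the top-scoring ones in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in top)
```

A real research-summarization product would layer an LLM, citation handling, and multi-paper comparison on top, but the core pattern (rank units of text, keep the most representative) is the same.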
Search & discovery
1. Before
Simple keyword search on Google Scholar or PubMed
2. Problem
  • Irrelevant results
  • Not comprehensive
  • Time reviewing results
  • Research literature has grown too large
3. Solution
  • Data extraction
  • Synthesized findings
  • Curation
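Semantic search over papers reduces to ranking documents by vector similarity to a query. A minimal sketch follows; the `embed` function here is a toy bag-of-words counter standing in for a real sentence-embedding model (a hypothetical placeholder, not what any particular product uses), while the cosine-similarity ranking is the part that carries over to production systems.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' -- a stand-in for a real
    dense sentence-embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, abstracts):
    """Rank abstracts by similarity to the query, best match first."""
    q = embed(query)
    return sorted(abstracts, key=lambda p: cosine(q, embed(p)), reverse=True)
```

Swapping the toy `embed` for a learned embedding is what turns this from keyword matching into genuinely semantic retrieval: synonyms and paraphrases land near each other in the vector space even when no keywords overlap.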
Literature review
Author networks & connections between papers
1. Before
Hours of reading in the library, siloed research environments
2. Problem
  • No good way to connect between papers and/or authors, especially in interdisciplinary studies
3. Solution
  • Lit search
  • Paper visualization
  • Robust author networks
  • visual graph
  • Narrow down which papers to read
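The author-network and paper-connection ideas above boil down to simple graph construction. A minimal sketch, assuming papers are given as `(title, [authors])` pairs: one function links authors who have co-written a paper, the other links papers that share at least one author. Visualization layers (the "visual graph") are omitted.

```python
from collections import defaultdict
from itertools import combinations

def coauthor_graph(papers):
    """Undirected co-authorship graph: author -> set of co-authors."""
    graph = defaultdict(set)
    for _title, authors in papers:
        for a, b in combinations(sorted(set(authors)), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def connected_papers(papers):
    """Link papers that share at least one author: title -> set of titles."""
    links = defaultdict(set)
    for (t1, a1), (t2, a2) in combinations(papers, 2):
        if set(a1) & set(a2):
            links[t1].add(t2)
            links[t2].add(t1)
    return links
```

Real tools enrich the edges with citation links and topic similarity, which is what makes them useful across disciplinary silos, but the underlying structure is this kind of graph.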
Writing
1. Before
Long, frustrating and expensive process of writing up research
50% of authors never write a second paper
2. Problem
  • Bias against EAL researchers
  • Delays research dissemination
  • Many scientists aren't writers
3. Solution
  • GenAI co-pilot tools to help researchers supercharge research writing and get back to the lab
10 unsolved challenges for researchers
Require mediation by professionals + valuable time + money.
  1. Generating data for research analysis
  1. Generating synthetic data to test research feasibility 
  1. Creating "digital twins" for experiments
  1. Methodology development 
  1. Paperwork for various committees (such as Helsinki)
  1. Data input, analyses and visualization
  1. Waiting for comments from reviewer/facilitator 
  1. Publication impact, self-marketing
  1. Upgrading existing presentations
  1. Effectiveness in niche areas
Solution: central tools / services that give researchers a fast, accessible one-stop shop, instead of spreading across several expensive tools and learning each one separately.
Solutions are too segmented