AI Ethics
Code of Ethics for Artificial Intelligence (AI) Usage
Proceedings of the International Conference on Health Science and Technology (ICHST)
Version: 1.0
Effective Date: 10 September 2025
1) Purpose & Scope
This document establishes ethical standards for the use of AI by all parties involved in the proceedings: authors, editors, peer reviewers, organizers, and supporting service providers. These rules apply to all stages: submission, peer review, editing, production, and post-publication.
2) Definitions
- Generative AI: Systems that produce text/image/audio/code (e.g., LLMs, text-to-image).
- Analytical AI Tools: AI for translation, grammar checking, plagiarism detection, summarization, statistical analysis, etc.
- Disclosure: A written and specific statement about the use of AI.
3) Core Principles
- Human accountability: Authors, editors, and reviewers remain fully responsible for content and decisions.
- Transparency: All uses of AI must be disclosed clearly, specifically, and in an auditable manner.
- Scientific integrity: Prohibition of fabrication, falsification, and plagiarism—including AI-facilitated plagiarism.
- Privacy & confidentiality: Do not input confidential/restricted data into public AI tools without written consent.
- Fairness & non-discrimination: Strive to mitigate algorithmic bias.
- Safety & legal compliance: Comply with copyright, data/model licenses, and data protection regulations (e.g., GDPR/local rules).
- Reproducibility: Record tool versions, settings (prompts), and workflows.
4) Author Guidelines
4.1 Permitted Uses (with disclosure)
- Editorial assistance: grammar, style consistency, light paraphrasing.
- Draft translation.
- Brainstorming structure or outline.
- Summarization/non-substantive explanations.
- Initial code/syntax or boilerplate that is reviewed and tested by the author.
4.2 Uses Requiring Detailed Disclosure
- Substantial text, images, tables, or code generated by AI.
- AI-based data analysis (e.g., ML/auto-ML) that influences results/conclusions.
- Automated data cleaning, labeling, or augmentation with AI.
- Use of plagiarism/AI detectors influencing revision decisions.
4.3 Prohibited Uses
- Listing AI as an author.
- Fabrication of citations, data, or results (hallucination).
- Submitting AI-generated manuscripts without critical human review.
- Uploading confidential/personal data to public AI services without permission.
- Copyright/license violations (including use of unauthorized models or datasets).
- Misleading scientific image manipulation (e.g., splicing without annotation).
4.4 Additional Author Responsibilities
- Factual verification and citations remain the author’s responsibility.
- Re-test AI-assisted code/analysis; provide repository or appendix with scripts, tool versions, and key parameters.
- Include an AI Disclosure Statement (see Section 8).
- Ensure rights for data/images/code generated with AI (licenses, attribution).
5) Reviewer Guidelines
- Do not upload manuscripts (or parts thereof) to public AI tools without editor approval, due to confidentiality.
- If using AI for summarization/style-checking, do so locally or with services that guarantee confidentiality, and disclose this to the editor in confidential notes.
- Decisions, recommendations, and arguments remain the reviewer’s own; avoid quoting raw AI output as justification.
6) Editor & Organizer Guidelines
- Establish secure AI channels for routine tasks (style editing, metadata).
- Document detection policies (if any), and do not rely solely on one AI detector metric to decide violations.
- Maintain an audit trail: policy versions, disclosure forms, decision logs.
- Provide guidance on model/image licensing and copyright.
- Provide consent forms if sensitive data is involved.
- Provide annual AI ethics training for editors and reviewers.
7) Policies for AI-Generated Images, Code, & Data
- Images/visuals: Must be labeled “AI-generated/edited,” with a description of the process. Edits that alter scientific meaning without disclosure are prohibited.
- Code: Cite source (AI + library), license, and test results. Authors are responsible for security and compliance.
- Synthetic data: Only if methodology, proportion of synthetic data, and impact on results are explained.
- External datasets/models: Must include version, license, and usage limitations.
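The code requirement above (cite the AI source and report test results) can be sketched as follows. This is a minimal, illustrative Python example, not a mandated format: the function name normalize(), the provenance wording, and the bracketed model placeholder are hypothetical, and authors should adapt the header to their own tools and appendix structure.

```python
# Provenance: initial draft of normalize() generated with [Model Name, Version];
# reviewed, modified, and tested by the authors (prompt summarized in Appendix A).

def normalize(values):
    """Scale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if lo == hi:
        # Degenerate case: all values identical, so every scaled value is 0.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Author-written check confirming the AI-drafted code behaves as intended;
# the result would be cited as the "test results" required by this section.
assert normalize([2, 4, 6]) == [0.0, 0.5, 1.0]
```

Keeping the provenance note next to the code, rather than only in the manuscript text, makes the attribution survive when the code is reused.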
8) Disclosure & Attribution Template
Place this statement in the Acknowledgments or Methods section, or on the first page after the abstract.
Example A (editorial assistance):
“The authors used [Tool Name, Version] for grammar checking and style adjustments. All interpretations and conclusions are the responsibility of the authors.”
Example B (substantive content):
“Section 3.2 (baseline), Figure 4, and the initial draft of the preprocess() function were generated with the assistance of [Model Name, Version] using the prompt provided in Appendix A. The authors reviewed, verified, and modified all outputs.”
Example C (AI-based analysis):
“Classification analysis used [AutoML/Model, Version] with parameters [summary]. Dataset, code, and training logs are available at [repository/DOI link].”
9) Privacy & Data Rights Compliance
- Ensure a legal basis for data processing (consent, contract, legitimate interest).
- Anonymize personal data; minimize data processed by third-party services.
- Review Terms of Service of AI tools; avoid services that store/retrain from confidential inputs.
- Follow restricted access rules for licensed data.
10) Enforcement, Sanctions, & Appeals
Violation levels & responses (examples):
- Minor: incomplete disclosure → request revision + education.
- Moderate: substantial AI content without disclosure → major revision or rejection.
- Severe: data/result fabrication, privacy/copyright violations → manuscript retraction, institutional notification, and/or temporary submission ban.
Appeals are allowed; submit evidence, logs, and clarification of AI use.
11) Traceability & Reproducibility
- Include an “AI Details” appendix with: tools & versions, prompts, key parameters, seed, access date, checkpoint/model card.
- Encourage use of model cards/datasheets and DOI for artifacts.
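One way to produce the “AI Details” appendix record listed above is to write it as a machine-readable file shipped with the repository. The sketch below is illustrative only: the tool name, prompt text, parameters, and model-card URL are hypothetical placeholders, and the exact fields should follow the list in this section.

```python
import json
from datetime import date

# Illustrative "AI Details" record covering the fields named in Section 11.
# All values here are hypothetical placeholders, not required content.
ai_details = {
    "tools": [{"name": "ExampleLLM", "version": "1.0"}],
    "prompts": ["Summarize the related-work section."],
    "parameters": {"temperature": 0.2},
    "seed": 42,
    "access_date": date.today().isoformat(),
    "model_card": "https://example.org/model-card",
}

# Save alongside the scripts so reviewers can audit and reproduce the runs.
with open("ai_details.json", "w") as f:
    json.dump(ai_details, f, indent=2)
```

A fixed file name and schema lets editors check disclosures mechanically rather than by reading free-form prose.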
12) Communication in Call for Papers (CFP)
Include the following summary in CFP and website:
- AI may be used as an assistance tool with mandatory disclosure.
- AI cannot be listed as an author or used to fabricate data/citations.
- Authors may be asked to provide prompts, logs, or code for verification.
13) Author Compliance Checklist
- I have reviewed all AI outputs and take responsibility for their content.
- I disclosed AI use specifically (tool, version, manuscript sections).
- I hold rights to data/images/code used/generated.
- No confidential/personal data was uploaded to public services without permission.
- AI-assisted code/analysis is reproducible (repo/appendix provided).
- AI-generated/edited images/visuals are labeled and not misleading.
- Citations and references were manually verified.
14) Policy Review
This policy will be reviewed at least annually or upon significant regulatory/technological changes. Versions and updates will be documented in a changelog.
Appendix A — Example AI Usage Reporting Format
- Tool & version: [e.g., GPT-4o mini (2025-xx-xx)]
- Purpose of use: [editorial / generative / analytical / translation / other]
- Affected sections: [e.g., Section 2.1, Table 3, Python code]
- Prompt/parameter summary: [short copy or full link in repo]
- Manual validation: [what was checked, by whom]
- Risks & mitigation: [bias, privacy, licensing]
- Related artifacts: [dataset, model, code, DOI link]
Contact & Inquiries:
LPPM Universitas Aisyiyah Yogyakarta, ichst@unisayogya.ac.id
https://proceeding.unisayogya.ac.id/index.php/proichst/aiethic