Most tools have a free or “freemium” version. Please see our collected Generative AI Tools for a sampling of useful tools that have free versions.
Simply put, “generative AI” is a class of machine-learning algorithms that leverage vast datasets to generate novel outputs such as original text, unique images, and new sounds.
Purely AI-created works are not protected by copyright law; however, if a human significantly modifies the work, it may qualify for copyright protection. Prompts are not protected by copyright.
Hallucinations (fabricated information), the quality of a prompt, and the opacity of a tool's training data all affect how accurate and reliable a generative AI tool may be. A good rule of thumb is to treat all output as potentially false until you can verify it through independent means. Human oversight and critical evaluation are crucial, particularly in a high-stakes field like health care. Never take AI-generated output at face value, especially in a clinical setting.
It depends on the tool, and many tools are not transparent about this, which is itself a red flag. In general, look for tools that state outright what sources are used to inform their responses.
It depends on the tool you are using. AI tools may retain the data you provide to train or otherwise enhance their services, but they generally do not claim ownership of user information and input. If data privacy is important to you, review a tool's data-handling practices before you start using it.
For the most part, this is determined by the publisher of your content. Most journals have strict policies about the use of AI in authors' work; do your homework beforehand to ensure you comply with publisher guidelines when considering where to submit your manuscript. Consequences for violating these policies can be severe and may even damage your professional reputation.
For students completing research projects: Ensure you consult with your faculty advisor before using AI in your research work.
In order to give sufficient space for instructors to explore uses of generative AI tools in their courses, and to set clear guidelines to students about what uses are and are not consistent with the NECO Honor Code, Academic Affairs has set forth the following policy guidance regarding generative AI in the context of coursework:
The use of or consultation with generative AI shall be treated analogously to assistance from another person unless clearly stated by a course instructor. In particular, using generative AI tools to substantially complete an assignment or exam (e.g. by entering exam or assignment questions) is not permitted. Students should acknowledge the use of generative AI and default to disclosing such assistance when in doubt.
Individual course instructors are free to set their own policies regulating the use of generative AI tools in their courses, including allowing or disallowing some or all uses of such tools. Course instructors should set such policies and be explicit in their course syllabi, on Canvas, and during class. Students who are unsure of policies regarding generative AI tools are encouraged to ask their instructors for clarification.
In short: It is entirely up to the faculty of record for the course, and they should be consulted prior to use of AI within coursework.
The short answer is to proceed with extreme caution. There are many things to consider, including HIPAA and ethical standards. NEVER upload patient information to a system that has not been approved by NECO IT and/or your clinic director.
According to the 11th edition of the AMA Manual of Style, use nonproprietary terms like "chatbot" instead of brand names such as "ChatGPT" in articles unless referring to a specific tool. After first mentioning an AI tool, include its brand name, version number, manufacturer, and date used in parentheses. In the reference list, AI tools should be cited following the software citation format, including details like version, access date, and source.
Below is an example of what the resource should look like in a reference list:
- ChatGPT. Version 4.0. OpenAI; 2025. Accessed January 1, 2025. https://openai.com/
Uploading copyrighted material into generative AI tools could potentially infringe on copyright holders' rights, especially when producing derivative works.
For responsible AI use, consider:
- Terms of Use: Check the specific AI tool's terms for guidance on using copyrighted material.
- Licensing: Secure proper licenses for copyrighted material for legal and ethical compliance.
- Copyright law: Stay informed about copyright law and AI-related changes.
- Transparency: Disclose the use of AI-generated content.
It can be incredibly difficult to tell, but there are some clues to look for:
- Unnatural Language Patterns:
  - Repetitive phrasing: AI models may overuse certain words or phrases.
  - Awkward sentence structures: The flow might feel robotic or unnatural.
  - Lack of nuance or subtlety: Emotions and complex ideas might be expressed in a simplistic way.
- Superficiality:
  - Lack of depth or originality: The work may lack unique insights or perspectives.
  - Generic and predictable: It might conform to common tropes or clichés.
  - Focus on surface-level information: It might prioritize facts over deeper analysis.
- Inconsistent Tone:
  - Sudden shifts in style: The writing might abruptly change from formal to informal.
  - Lack of a clear voice: The author's personality or unique style may be absent.
Important Notes:
- No foolproof method: AI detection tools are still under development and not always accurate.
- Human judgment is crucial: Ultimately, careful reading and critical analysis by a human are essential.
- Focus on the underlying ideas: Even if AI was used as a tool, evaluate the quality of the ideas and arguments presented.
It depends entirely on the policy of the sponsoring institution. Check what is allowable before including AI-generated output in your presentations, reports, or other materials.