References
All references cited throughout this guide, collected in one place.
- Alkaissi, H., McFarlane, S. I. (2023). Artificial Hallucinations in ChatGPT: Implications in Scientific Writing. Cureus, 15(2).
- Artificial Intelligence: A Reading List. (2024). House of Commons Library.
- Australian National University. (2023). Chat GPT and other generative AI tools: What ANU academics need to know.
- Bailey, J. (2023). One Way AI Has Changed Plagiarism. Plagiarism Today.
- Balloccu, S., Schmidtová, P., Lango, M., Dusek, O. (2024). Leak, Cheat, Repeat: Data Contamination and Evaluation Malpractices in Closed-Source LLMs. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 67–93, St. Julian’s, Malta. Association for Computational Linguistics.
- Birhane, A., Han, S., Boddeti, V., & Luccioni, S. (2024). Into the LAION's Den: Investigating Hate in Multimodal Datasets. Advances in Neural Information Processing Systems, 36.
- Braun, M., Vallery, A., Benizri, I. (2024). Obligations for Deployers, Providers, Importers and Distributors of High-Risk AI Systems in the European Union’s Artificial Intelligence Act. WilmerHale Blog.
- Bsharat, S. M., Myrzakhan, A., Shen, Z. (2023). Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4. arXiv:2312.16171.
- Cardon, P., Fleischmann, C., Aritz, J., Logemann, M., & Heidewald, J. (2023). The Challenges and Opportunities of AI-Assisted Writing: Developing AI Literacy for the AI Age. Business and Professional Communication Quarterly, 86(3), 257-295.
- Caulfield, J. (2023). ChatGPT Citations | Formats & Examples. Scribbr.
- Chang, K., Cramer, M., Soni, S., Bamman, D. (2023). Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7312–7327, Singapore. Association for Computational Linguistics.
- Chesterman, S. (2024). Good Models Borrow, Great Models Steal: Intellectual Property Rights and Generative AI. Policy and Society.
- Dodge, J., Prewitt, T., Tachet des Combes, R., Odmark, E., Schwartz, R., Strubell, E., ... & Buchanan, W. (2022). Measuring the carbon intensity of AI in cloud instances. In Proceedings of the 2022 ACM conference on fairness, accountability, and transparency (pp. 1877-1894).
- Dornis, T. W., Stober, S. (2024). Copyright and training of generative AI models — technological and legal foundations. SSRN.
- Edwards, B. (2023). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica.
- Estes, A. C. (2024). What, if anything, is AI search good for? Vox.
- European Innovation Council and SMEs Executive Agency. (2024). Artificial intelligence and copyright: use of generative AI tools to develop new content. European Commission News Blog.
- Falconer, S. (2023). Privacy in the age of generative AI. StackOverflow Blog.
- Gumaan, E. A. (2024). Transformers (Community Article). Hugging Face.
- Hacker, P. (2024). The real existential threat of AI. OUPblog. Oxford University Press.
- Hao, K. (2024). Microsoft’s Hypocrisy on AI. The Atlantic.
- Harding, X. (2023). The Internet’s Invisible Carbon Footprint. Mozilla Blog.
- Hatakeyama-Sato, K., Yamane, N., Igarashi, Y., Nabae, Y., & Hayakawa, T. (2023). Prompt engineering of GPT-4 for chemical research: what can/cannot be done? Science and Technology of Advanced Materials: Methods, 3(1).
- Heikkilä, M. (2023). Three ways AI chatbots are a security disaster. MIT Technology Review.
- Henrickson, L., & Meroño-Peñuela, A. (2023). Prompting meaning: a hermeneutic approach to optimising prompt engineering with ChatGPT. AI & Society.
- Hicks, M. T., Humphries, J., Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26.
- How do I cite generative AI in MLA style? (2023). MLA Style Center.
- Jernite, Y., Nguyen, H., Biderman, S., Rogers, A., Masoud, M., Danchev, V., ... & Mitchell, M. (2022, June). Data governance in the age of large-scale data-driven language technology. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 2206-2222).
- Kardys, D. (2024). Demystifying AI: Going Beyond the Hype. Diagram Views.
- Klosek, K. & Blumenthal, M. (2024). Training Generative AI Models on Copyrighted Works Is Fair Use. Association of Research Libraries Blog.
- Koerner, K. (2023). Generative AI: Privacy and tech perspectives. International Association of Privacy Professionals.
- Kreuz, R. J. (2024). Plagiarism is not always easy to define or detect. The Conversation.
- Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4).
- Lo, L. S. (2023). The Art and Science of Prompt Engineering: A New Literacy in the Information Age. Internet Reference Services Quarterly, 27(4), 203–210.
- Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16).
- Luccioni, S., Akiki, C., Mitchell, M., & Jernite, Y. (2024). Stable bias: Evaluating societal representations in diffusion models. Advances in Neural Information Processing Systems, 36.
- Luccioni, A. S., Hernandez-Garcia, A. (2023). Counting carbon: A survey of factors influencing the emissions of machine learning. arXiv:2302.08476.
- Luccioni, S., Jernite, Y., & Strubell, E. (2024). Power hungry processing: Watts driving the cost of AI deployment? In The 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 85-99).
- Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI Hallucinations: A Misnomer Worth Clarifying. arXiv:2401.06796.
- Marcus, G., Southen, R. (2024). Generative AI Has a Visual Plagiarism Problem. IEEE Spectrum.
- Marsh, O. (2024). Chatbots are still spreading falsehoods. AlgorithmWatch.
- McAdoo, T. (2023). How to cite ChatGPT. APA Style.
- Meehan, S. R. (2023). When AI Is Writing, Who Is the Author? Inside Higher Ed.
- Messeri, L., Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627, 49–58.
- Milmo, D., Hern, A. (2024). What will the EU’s proposed act to regulate AI mean for consumers? The Guardian.
- Nehring, J., Gabryszak, A., Jürgens, P., Burchardt, A., Schaffer, S., Spielkamp, M., Stark, B. (2024). Large Language Models Are Echo Chambers. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pp. 10117–10123, Torino, Italia. ELRA and ICCL.
- O'Brien, I. (2024). Data center emissions probably 662% higher than big tech claims. Can it keep up the ruse? The Guardian.
- Rahman-Jones, I. (2024). AI drives 48% increase in Google emissions. BBC.
- Rebelo, M. (2023). How to write effective AI art prompts. Zapier.
- Recommended citation method for ChatGPT. Q&A: Citation. Documentation of Sources. The Chicago Manual of Style.
- Recommended citation method for DALL-E. Q&A: Citation. Documentation of Sources. The Chicago Manual of Style.
- Riedl, M. (2023). A Very Gentle Introduction to Large Language Models without the Hype. Medium.
- Rogers, A., Luccioni, A. S. (2024). Position: Key Claims in LLM Research Have a Long Tail of Footnotes. In Forty-first International Conference on Machine Learning.
- Rose, D. (2023). Generative AI vs. Traditional AI [Video]. LinkedIn Learning.
- Saenko, K. (2023). Is generative AI bad for the environment? A computer scientist explains the carbon footprint of ChatGPT and its cousins. The Conversation.
- Salvaggio, E. (2024). Challenging The Myths of Generative AI. Tech Policy Press.
- Schwamm, H. (2023). Navigating the AI Landscape in Research. Blog for the University of Galway Library.
- Silveira, L. (2024). The Fall of Z-Library: The “Burning of the Library of Alexandria” or Protection for Authors Against AI Companies, 27 SMU Sci. & Tech. L. Rev. 119.
- Singh, M. (2023). As the AI industry booms, what toll will it take on the environment? The Guardian.
- Slattery, P., Saeri, A. K., Grundy, E. A., Graham, J., Noetel, M., Uuk, R., ... & Thompson, N. (2024). The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence. arXiv:2408.12622.
- Thais, S. (2024). Misrepresented Technological Solutions in Imagined Futures: The Origins and Dangers of AI Hype in the Research Community. arXiv:2408.15244.
- Vakali, A., Tantalaki, N. (2024). Rolling in the deep of cognitive and AI biases. arXiv:2407.21202.
- Varoquaux, G., Luccioni, A. S., Whittaker, M. (2024). Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI. arXiv:2409.14160.
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In I. Guyon, U. von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems (Vol. 30). Curran Associates, Inc.
- Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P. S., ... & Gabriel, I. (2021). Ethical and Social Risks of Harm from Language Models. arXiv:2112.04359.
- When AI Gets It Wrong: Addressing AI Hallucinations and Bias. MIT Sloan Teaching & Learning Technologies.
- Wierda, G. (2024). When ChatGPT summarises, it actually does nothing of the kind. R&A IT Strategy & Architecture.
- Yin, Z., Sun, Q., Guo, Q., Wu, J., Qiu, X., Huang, X. (2023). Do Large Language Models Know What They Don’t Know? In Findings of the Association for Computational Linguistics: ACL 2023, pages 8653–8665, Toronto, Canada. Association for Computational Linguistics.
- Zitron, E. (2024). The Subprime AI Crisis. Where's Your Ed At?