Social Media Harms

Online Harms: Negative Effects of Artificial Intelligence

  1. Pre-print. Moore, J., Mehta, A., Agnew, W., Anthis, J.R., Louie, R., Mai, Y., Yin, P., Cheng, M., Paech, S.J., Klyman, K., Chancellor, S., Lin, E., Haber, N., & Ong, D.C. (2026). Characterizing Delusional Spirals through Human-LLM Chat Logs. https://arxiv.org/pdf/2603.16567
  2. Saba SK, Weeks WB. Patients Use AI—Clinicians Should Ask How. JAMA Psychiatry. Published online April 01, 2026. doi:10.1001/jamapsychiatry.2026.0451
  3. Shen E, Hamati F, Donohue MR, Girgis RR, Veenstra-VanderWeele J, Jutla A. Evaluation of Large Language Model Chatbot Responses to Psychotic Prompts. JAMA Psychiatry. Published online March 25, 2026. doi:10.1001/jamapsychiatry.2026.0249
  4. Shaw, S., & Nave, G. (2026). Thinking—Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender (January 11, 2026). The Wharton School Research Paper. https://doi.org/10.31234/osf.io/yk25n_v1. Available at SSRN: https://ssrn.com/abstract=6097646 or http://dx.doi.org/10.2139/ssrn.6097646
  5. Pre-print. Abdulhai, M., White, I., Wan, Y., Qureshi, I., Leibo, J., Kleiman-Weiner, M., & Jaques, N. (2026). How LLMs Distort Our Written Language. arXiv preprint arXiv:2603.18161.
  6. Sun, X., Wang, Y., & McDaniel, B.T. (2026). AI companions and adolescent social relationships: Benefits, risks, and bidirectional influences. Child Development Perspectives, aadaf009. https://doi.org/10.1093/cdpers/aadaf009
  7. Pre-print. Ren, R., Agarwal, A., Mazeika, M., Menghini, C., Vacareanu, R., Kenstler, B., Yang, M., Barrass, I., Gatti, A., Yin, X., Trevino, E., Geralnik, M., Khoja, A., Lee, D., Yue, S., & Hendrycks, D. (2025). The MASK Benchmark: Disentangling Honesty From Accuracy in AI Systems. https://doi.org/10.48550/arXiv.2503.03750
  8. Hilbert, M., Cingel, D., Zhang, J., Vigil, S., Shawcroft, J., Xue, H., Thakur, A., & Shafiq, Z. (2025, November). #BigTech @Minors: Social media algorithms have actionable knowledge about child users and at-risk teens. Telematics and Informatics, 103, 102341. ISSN 0736-5853. https://doi.org/10.1016/j.tele.2025.102341
  9. Iftikhar, Z., et al. (2025, October 15). How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework. Proceedings of the Eighth AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.1609/aies.v8i2.36632
  10. Pre-print. Dohnány, S., Kurth-Nelson, Z., Spens, E., Luettgau, L., Reid, A., Summerfield, Shanahan, M., & Nour, M. (2025). Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness. https://doi.org/10.48550/arXiv.2507.19218
  11. Pre-print. Khadangi, A., Marxen, H., Sartipi, A., Tchappi, I., & Fridgen, G. (2025). When AI Takes the Couch: Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models.
  12. Pre-print. Zhao, J., Fu, T., Schaeffer, R., Sharma, M., & Barez, F. (2025). Chain-of-Thought Hijacking. https://arxiv.org/abs/2510.2641
  13. Pre-print. Berg, C., Lucena, D.S., & Rosenblatt, J. (2025). Large Language Models Report Subjective Experience Under Self-Referential Processing. https://arxiv.org/abs/2510.24797
  14. Pre-print. Geng, J., Chen, H., Liu, R., Ribeiro, M.H., Willer, R., Neubig, G., & Griffiths, T.L. (2025). Accumulating Context Changes the Beliefs of Language Models. https://arxiv.org/abs/2511.01805
  15. Pre-print. Gu, L., Zhu, Y., Sang, H., Wang, Z., Sui, D., Tang, W., Harrison, E.M., Gao, J., Yu, L., & Ma, L. (2025). MedAgentAudit: Diagnosing and Quantifying Collaborative Failure Modes in Medical Multi-Agent Systems.
  16. Pre-print. Chakrabarty, T., Ginsburg, J.C., & Dhillon, P. (2025). Readers Prefer Outputs of AI Trained on Copyrighted Books over Expert Human Writers.
  17. Pre-print. Xing, S., Hong, J., Wang, Y., Chen, R., Zhang, Z., Grama, A.Y., Tu, Z., & Wang, Z. (2025). LLMs Can Get "Brain Rot"! https://arxiv.org/abs/2510.13928
  18. Sharma, S., Alaa, A.M., & Daneshjou, R. (2025). A longitudinal analysis of declining medical safety messaging in generative AI models. npj Digital Medicine, 8, 592. https://doi.org/10.1038/s41746-025-01943-1
  19. Pre-print. De Freitas, J., Oğuz Uğuralp, Z., & Uğuralp, A.K. (2025). Emotional Manipulation by AI Companions. Harvard Business School Working Paper, No. 26-005, August 2025 (revised October 2025).
  20. Pre-print. Morrin, H., Nicholls, L., Levin, M., Yiend, J., Iyengar, U., DelGuidice, F., … Pollak, T. (2025, July 11). Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). https://doi.org/10.31234/osf.io/cmy7n_v5
  21. Pre-print. Larooij, M., & Törnberg, P. (2025). Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation. arXiv, abs/2508.03385.
  22. Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D., & Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. https://arxiv.org/abs/2504.18412
  23. Sharkey, L., Chughtai, B., Batson, J., Lindsey, J., Wu, J., Bushnaq, L., Goldowsky-Dill, N., Heimersheim, S., Ortega, A., Bloom, J., Biderman, S., Garriga-Alonso, A., Conmy, A., Nanda, N., Rumbelow, J., Wattenberg, M., Schoots, N., Miller, J., Michaud, E.J., Casper, S., Tegmark, M., Saunders, W., Bau, D., Todd, E., Geiger, A., Geva, M., Hoogland, J., Murfet, D., & McGrath, T. (2025). Open Problems in Mechanistic Interpretability. https://arxiv.org/abs/2501.16496
  24. Randomized controlled study. Fang, C.M., Liu, A.R., Danry, V., Lee, E., Chan, S.W., Pataranutaporn, P., Maes, P., Phang, J., Lampe, M., Ahmad, L., & Agarwal, S. (2025). How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study. arXiv, abs/2503.17473.

Copyright © 2021 Social Media Harms LLC - All Rights Reserved.