New Artificial Intelligence (AI) Guidelines

To protect HM patient information and intellectual property, System Policy IM01, Acceptable Use of Computing Resources (Policy IM01), has been revised to introduce new requirements for using generative AI tools (e.g., ChatGPT, Bard). These tools may be used only for authorized HM business purposes, and you must never enter any confidential information into them, including patients’ Protected Health Information (PHI), Personally Identifiable Information (PII) or any HM proprietary information or trade secrets.

What’s generative AI?
Artificial Intelligence (AI) refers to computer systems designed to perform tasks that have typically required human intelligence, such as recognizing images, understanding natural language and making predictions based on data.
While traditional AI performs tasks based on existing data and rules, generative AI, like ChatGPT, can create new content, such as images, text and music. It analyzes large amounts of data, identifies patterns and trends, and then uses those patterns to generate new content similar to the original data it analyzed. It’s also interactive: you can ask it a question, and it will confidently give you an answer.

Why is it a privacy risk?
While tools such as ChatGPT and Bard are great at helping you do things like plan a trip, write a poem or solve a math problem, you should never enter any personal or confidential information into them, such as your name and address, patient Protected Health Information (PHI) or HM intellectual property. Once entered, this information is no longer private or secure. The AI service records and stores transcripts of your conversations, and that data may then be used to train future AI models, creating a privacy risk.

How can I keep data safe?
While these tools can be a great personal resource, be sure to use them with caution. Remember:

  • Never enter any PHI or HM intellectual property into public generative AI tools like ChatGPT or Bard.
  • For personal use, make sure your prompts don’t include any personal or confidential information. You can also turn off chat history and training, which means the data you enter won’t be used to train and improve the models.

© 2024. Houston Methodist, Houston, TX. All rights reserved.