Since the explosion of publicly accessible OpenAI services, how to monitor their use within an organization has become a frequent question.
Below are topics relevant to the most common OpenAI services and features available today. Consider these topics and suggestions a starting point for scoping security governance and for developing security policies for your organization.
Publicly Accessible OpenAI Services
- Description: Web sites like OpenAI’s ChatGPT provide a wealth of knowledge and an opportunity to accelerate a user’s learning on a virtually unlimited range of topics.
- Security Policy Consideration: Pasting corporate information into any public-facing site should be prohibited.
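A policy like this can be backed by a DLP-style screen that flags sensitive content before it leaves the organization. The sketch below is illustrative only: the patterns (classification markings, an assumed internal email domain, an SSN-like number format) are placeholders for whatever markings your organization actually uses.

```python
import re

# Hypothetical patterns for illustration; a real DLP policy would use
# your organization's own classification markings and identifiers.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(confidential|internal use only|proprietary)\b", re.IGNORECASE),
    re.compile(r"\b[A-Za-z0-9._%+-]+@corp\.example\.com\b"),  # assumed internal domain
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                     # SSN-like number
]

def contains_sensitive_data(text: str) -> bool:
    """Return True if the text matches any sensitive-data pattern."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

print(contains_sensitive_data("Q3 roadmap - INTERNAL USE ONLY"))   # True
print(contains_sensitive_data("What is the capital of France?"))   # False
```

In practice this kind of check runs inside a CASB or browser extension rather than a standalone script, but the matching logic is the same.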
Corporate-Licensed OpenAI Services
- Description: OpenAI resources such as Azure OpenAI can be enabled at low cost within the cloud. These AI models can be customized to solve complex challenges within an organization or provide public facing features which enhance a corporation’s service offerings.
- Security Policy Consideration: Creation of resources in OpenAI-based tools such as Azure OpenAI Studio and Power Apps should be controlled and monitored by the security team.
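One way to monitor resource creation is to filter an exported Azure activity log for write operations against the Cognitive Services provider (which hosts Azure OpenAI). The field names below mirror the Azure activity-log schema but are assumptions here; adjust them to match your actual export format.

```python
import json

def flag_openai_writes(log_json: str) -> list[dict]:
    """Return activity-log events that create or modify Cognitive Services
    resources (the provider that hosts Azure OpenAI)."""
    events = json.loads(log_json)
    return [
        e for e in events
        if e.get("operationName", "").startswith("Microsoft.CognitiveServices/")
        and e["operationName"].endswith("/write")
    ]

# Minimal sample export with assumed field names.
sample = json.dumps([
    {"operationName": "Microsoft.CognitiveServices/accounts/write",
     "caller": "dev@example.com"},
    {"operationName": "Microsoft.Compute/virtualMachines/read",
     "caller": "ops@example.com"},
])
print([e["caller"] for e in flag_openai_writes(sample)])  # ['dev@example.com']
```

The same filter translates naturally into a scheduled SIEM query that alerts the security team on each match.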
End User OpenAI Related Productivity Tools
- Description: Microsoft’s Copilot is an example of an end-user OpenAI tool that will change the way people work and have a dramatic effect on their productivity.
- Security Policy Consideration: Authorized use of AI tools, such as Copilot, should be monitored.
Be aware of ‘Self-Aware’ OpenAI Tools
- Description: If you’ve used Auto-GPT, you might be concerned about OpenAI tools being given full root/admin control to do whatever it takes to answer a question, including creating scripts, adding or deleting files, and even rebooting your PC.
- Security Policy Consideration: Any open-source OpenAI tools running on end-user PCs or on servers should be strictly monitored and approved for use.
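A simple enforcement building block is to compare the process names reported by your EDR against an approved-tools list. The tool names below are illustrative, not a vetted inventory; in a real deployment the lists would come from your software asset inventory.

```python
# Assumed inventories for illustration only.
APPROVED_AI_TOOLS = {"copilot"}
KNOWN_AI_TOOLS = {"copilot", "autogpt", "gpt-engineer"}

def unapproved_ai_processes(process_names: list[str]) -> list[str]:
    """Return known AI tools seen running on a host that are not approved."""
    running_ai = {p.lower() for p in process_names} & KNOWN_AI_TOOLS
    return sorted(running_ai - APPROVED_AI_TOOLS)

print(unapproved_ai_processes(["chrome", "AutoGPT", "copilot"]))  # ['autogpt']
```

Process names alone are easy to evade, so treat this as one signal among several (network destinations, file writes, script execution) rather than a complete control.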
Security Monitoring and Best Practices
- All AI-related activity should be monitored via EDR, CASB, SIEM, and similar tooling.
- Discuss with your vendors best practices for monitoring their OpenAI tools.
- Test/simulate the use of each OpenAI tool and validate your ability to monitor its activities, including individual user access and change controls.
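One concrete validation test is to generate known traffic to an AI service and confirm it surfaces in your proxy or SIEM logs. The sketch below parses a proxy-log export for users reaching known generative-AI endpoints; the CSV columns and the domain list are assumptions for illustration.

```python
import csv
import io

# Assumed list of generative-AI endpoints to watch for.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "chatgpt.com"}

def users_accessing_ai(proxy_csv: str) -> set[str]:
    """Return the set of users in a proxy-log export who reached an AI domain."""
    reader = csv.DictReader(io.StringIO(proxy_csv))
    return {row["user"] for row in reader if row["host"] in AI_DOMAINS}

log = "user,host\nalice,chat.openai.com\nbob,example.com\n"
print(sorted(users_accessing_ai(log)))  # ['alice']
```

If your simulated session does not appear in the output of a query like this, you have found a monitoring gap before an incident does.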