From IT and HR to sales, customer service and marketing, generative AI will revolutionise every part of business, with opportunities to boost productivity and improve customer experience. This game-changing tech has the potential to usher in a new economic age, but it also presents new cybersecurity challenges, particularly around data and privacy.
Recent Salesforce research revealed that while 71% of Australian employees use or plan to use generative AI at work, nearly 60% of those don’t know how to do so in a trusted way.
Here are some tips and tools to help you stay ahead of the game, while keeping your business secure.
The biggest area of concern
There are several ways that the widespread adoption of generative AI exposes SMEs to new cybersecurity risks. For example, malicious actors can use it to create realistic and persuasive phishing emails that slip through traditional email filters.
But Rowena Westphalen, Senior Vice President of Innovation at Salesforce APAC, says one of the biggest areas of concern actually comes from inside the business itself. That is, employees entering sensitive company data or personally identifiable information about their customers into generative AI models, which may not guarantee privacy.
“Considering how much small to medium businesses build their reputation with their customers based on trust, sharing sensitive customer information outside of your business is a really high-risk move,” Westphalen points out.
Tips for securing your business
Over the next six to 12 months, Westphalen expects rapid innovation in the generative AI space, which will change the customer experience across all dimensions of business.
High adoption rates will leave some organisations scrambling to keep up, she adds, and every user will need to think about balancing the opportunities and the risks.
Here are some best-practice tips for safeguarding your business:
- Implement strong security measures to protect sensitive data, including platform encryption, access controls, and data anonymisation techniques.
- Develop systems and processes to verify the authenticity and quality of generated content.
- Educate employees on recognising and reporting potential phishing attacks, emphasising the importance of vigilance when handling suspicious messages, requests and links.
- Train your workforce on how to use AI ethically, effectively and responsibly. This doesn’t have to be expensive—Salesforce’s Trailhead provides free (and fun!) online learning resources teaching the latest AI skills.
- Thoroughly test AI models for vulnerabilities and don’t skip or snooze security updates from AI developers or service providers.
- Collaborate with AI and cybersecurity experts who can provide tailored advice, assistance and purpose-built products. For example, Salesforce’s generative AI CRM technology, Einstein GPT, is trained on trusted, real-time data and ensures inputted information is not stored or used in an insecure way.
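To make the data-anonymisation tip above concrete, here is a minimal sketch of redacting obvious personal details before text is sent to an external generative AI service. The patterns and function names are illustrative assumptions only; a real deployment would use a dedicated PII-detection tool rather than a couple of regular expressions.

```python
import re

# Illustrative patterns only -- not a production-ready PII scrubber.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace matched personal details with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer Jane (jane@example.com, +61 412 345 678) reported an outage."
print(redact_pii(prompt))
# The email address and phone number are replaced with [EMAIL] and [PHONE]
# before the prompt ever leaves the business.
```

The point is the ordering: scrubbing happens before the call to the AI service, so sensitive details never reach a model that may not guarantee privacy.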
“Every organisation needs to have a cybersecurity strategy,” Westphalen says. “And it needs to focus on how they’re going to respond to bad actors—because it’s a question of ‘when’ it happens, not ‘if’ it happens.”
Tools of the trade
One simple way to improve cybersecurity within your business is to switch on basic protections such as multi-factor authentication and to adopt a zero-trust framework.
Having a “zero-trust framework” means you assume users can’t have access to anything, “and then you slowly switch on their access based on the specific requirements of their job,” Westphalen explains. “Rather than giving everyone access to everything straight away, which is a big, big red flag.”
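The deny-by-default idea Westphalen describes can be sketched in a few lines. The role names and permissions below are hypothetical examples, not part of any particular product:

```python
# Deny-by-default access check: every permission starts switched off,
# and each role is granted only what its job requires.
ROLE_PERMISSIONS = {
    "support_agent": {"read_tickets"},
    "billing_admin": {"read_tickets", "read_invoices"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Unknown roles and unlisted permissions are denied by default."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("support_agent", "read_invoices"))  # False: never granted
print(is_allowed("billing_admin", "read_invoices"))  # True: explicitly granted
```

Note that the safe answer requires no special case: anything not explicitly granted, including a role that was never registered at all, simply falls through to “no”.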
Other solutions, such as Salesforce Shield, offer three layers of protection: data encryption, event monitoring, and a field audit trail.
However, it’s important to note that while AI can significantly enhance cybersecurity monitoring, it’s not a standalone solution, Westphalen says. Human-in-the-loop oversight, continuous monitoring, and proactive security measures are essential to a robust cybersecurity strategy.
AI must be used in conjunction with other security practices and regularly updated to adapt to new threats. As Westphalen puts it: “You don’t just want to automate these things—you actively want to think about it.”