
AI responsibly: How to build AI guardrails

Justin Tauber, General Manager, Innovation and AI Culture at Salesforce, ANZ. Source: Supplied.

Authored by Justin Tauber, General Manager, Innovation and AI Culture at Salesforce, ANZ.

AI promises to transform the way we do business and free up the most precious resource we have: time. This is especially true for small businesses, where customer-facing staff need to make sense of a complex set of products, policies, and data with limited time and support.

AI-assisted customer engagement can create more timely, personalised, and intelligent interactions, but no business can operate without trust, so we all must learn to use the power of AI in a safe and ethical way.

According to the AI Trust Quotient, however, 89% of Australian office workers don’t currently trust AI to operate without human oversight, and 62% fear that humans will lose control of AI.

Small businesses need to build competence and confidence in how and when to use AI in a trustworthy way. The companies that combine the best of human and machine intelligence will be the ones that succeed in their AI transformation.

Cultivating trust over time and building confidence in this still-emergent AI technology requires a focus on the employee experience of AI: integrating staff early into decision-making, output refinement, and feedback processes. Generative AI outcomes are better when humans are more than just “in the loop”. Humans need to take the lead in their partnership with AI, and AI works better with Humans at the Helm.

One strategy is to give employees reminders of AI’s strengths and weaknesses within the flow of work. Surfacing confidence values (the degree to which the model believes its output is correct) can help employees treat model responses with the appropriate level of care. Lower-scored content can still have value, but it warrants a deeper level of human scrutiny. Configuring prompt templates for staff to use ensures more consistent inputs and therefore more predictable outputs. Providing explainability, or citing the sources behind AI-generated content, can also address issues of trust and accuracy.
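To make this concrete, here is a minimal sketch, in Python, of how confidence-based triage and a prompt template might fit together. The DraftReply shape, the triage tiers, and the 0.8 threshold are all illustrative assumptions for this article, not a real Salesforce API.

```python
from dataclasses import dataclass

# Hypothetical response shape: the draft text, the model's self-reported
# confidence, and the sources it cites (for explainability).
@dataclass
class DraftReply:
    text: str
    confidence: float  # 0.0 to 1.0
    sources: list[str]

# A fixed template keeps staff inputs consistent, and therefore outputs
# more predictable.
PROMPT_TEMPLATE = (
    "You are a customer service assistant. Using only the policy excerpts "
    "below, draft a reply to the customer.\n\n"
    "Policies:\n{policies}\n\nCustomer message:\n{message}"
)

REVIEW_THRESHOLD = 0.8  # illustrative cut-off; tune to your risk appetite

def triage(draft: DraftReply) -> str:
    """Decide how much human scrutiny a draft needs before it is sent."""
    if draft.confidence >= REVIEW_THRESHOLD and draft.sources:
        return "quick-check"  # staff skim the draft and its cited sources
    return "deep-review"      # low confidence or no citations: full review

# Example: build a consistent prompt, then triage a stubbed model reply.
prompt = PROMPT_TEMPLATE.format(
    policies="Orders over $50 ship free within 3 business days.",
    message="Will my order arrive before Friday?",
)
draft = DraftReply(text="Your order ships Friday.", confidence=0.62, sources=[])
print(triage(draft))  # -> "deep-review"
```

In practice, the threshold and review tiers would be tuned to each business’s risk appetite and surfaced in the staff interface itself, rather than in code.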

Another strategy is to focus on use cases that enhance trust with customers. The sweet spot is where productivity and trust-building benefits are aligned. One example is using generative AI to pre-emptively reassure a concerned customer that a product will arrive on time. Another is the use of AI in fraud detection and prevention: AI systems can flag suspicious transactions for a human analyst to review, and the analyst’s investigation of anomalies and risks feeds back into the system, improving the accuracy and effectiveness of fraud detection going forward.
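To make that feedback loop concrete, here is a small hypothetical sketch of the pattern in Python: a model scores transactions, anything above a threshold is flagged for an analyst, and the analyst’s verdicts are retained as labelled examples for retraining. The Transaction shape, the 0.7 threshold, and the queue are illustrative assumptions, not a description of any specific fraud product.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    tx_id: str
    amount: float
    risk_score: float  # assumed to come from an upstream scoring model

@dataclass
class ReviewQueue:
    flagged: list[Transaction] = field(default_factory=list)
    labelled: list[tuple[Transaction, bool]] = field(default_factory=list)

    def flag(self, tx: Transaction, threshold: float = 0.7) -> None:
        # The system only flags suspicious activity; a human decides.
        if tx.risk_score >= threshold:
            self.flagged.append(tx)

    def analyst_decision(self, tx: Transaction, is_fraud: bool) -> None:
        # The human verdict becomes training data for the next model version.
        self.labelled.append((tx, is_fraud))

queue = ReviewQueue()
queue.flag(Transaction("tx-1001", 4999.00, risk_score=0.91))
for tx in list(queue.flagged):
    queue.analyst_decision(tx, is_fraud=False)  # analyst clears the alert
print(len(queue.labelled), "reviewed examples available for retraining")
```

Keeping the analyst’s decisions, not just the model’s flags, is what lets accuracy improve over time.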

Salesforce’s role is to ensure the AI solutions we develop keep humans at the helm. This requires respecting ethical guardrails in the development of AI products. But at Salesforce we’re going further by creating capabilities and solutions that lower the cost of responsible deployment and use by our customers: AI safety products.

Just as power sockets allow us to tap into the power of electricity safely, AI safety products help businesses use the power of AI without exposing themselves to significant risks. Salesforce AI products are already built with trust and reliability in mind, embodying our Trusted AI principles and making it easier for our customers to deploy those products in an ethically informed way.

It’s not always realistic or fair for a business, especially an SMB with limited resources, to expect time-poor employees to refine every AI-generated output. So it’s important that we provide businesses with powerful, system-wide controls and intuitive interfaces so that people can make timely and responsible judgements about how and when to test and refine responses or escalate problems.

We’ve been investing in ethical AI for nearly a decade, focusing on principles, policies, and protections for us and our customers. We’ve introduced new guidelines for the responsible development of generative AI that expand on our core Trusted AI principles, updated our Acceptable Use Policy safeguards, and developed the Einstein Trust Layer to protect customer data from external LLMs.

While we’re still in the early days of AI — and these guidelines are ever-evolving — we’re committed to learning and iterating in close collaboration with customers and regulators to make trusted AI a reality for all.

Read now: How can businesses build greater trust in AI? With a human at the helm