3 Takeaways from Talking Ethical AI with Navrina Singh

Bridget Johnston October 21, 2020

Creating a trustworthy and responsible artificial intelligence solution is a priority for Pattern89. That’s why we invited Navrina Singh, an expert on responsible use of AI solutions, to speak with our team.

Singh is the Founder of Credo AI, an AI Fund company that focuses on analyzing, auditing and managing risks introduced by machine learning. Singh knows that artificial intelligence is bringing huge changes and innovations to the way we work. With that, though, come new and unprecedented risks. Her goal is to forge the way for companies to adopt or create secure, compliant, fair and trustworthy AI solutions.

Drawing from her experience in solving new problems and finding technological solutions, here are three key takeaways from Singh’s discussion with the Pattern89 team.

Our instincts aren’t enough.

Humans have biases. All of us do. And despite our best efforts to combat them, it takes sustained, active work to keep that bias out of our AI. In fact, most AI systems have “bias by design” to meet business outcomes.

“We’re taking what we think are our best systems, and injecting them into our AI,” Singh said. She went on to explain how these development practices cause problems down the road: by building our own biases into artificial intelligence, we produce algorithms that are less accurate than they could be.

Innovators need to focus on eliminating personal biases from their work. After all, “products are a reflection of the people who have built them,” Singh said.

There are several ways to do this. Extensively auditing your data and your current algorithms, building a team with diverse backgrounds and experiences, and involving all team members in your DevOps process are great places to start.
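To make “auditing your data” concrete, here is a minimal sketch of one such check in Python: measuring whether a model’s positive-prediction rate differs across demographic groups (demographic parity). The column names, toy data and review threshold are illustrative assumptions, not details from Singh’s talk.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "gender",        # hypothetical column name
                           prediction_col: str = "approved"  # hypothetical column name
                           ) -> float:
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0.0 means all groups are treated identically."""
    rates = df.groupby(group_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Toy audit data standing in for real model outputs.
audit_df = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "m", "f"],
    "approved": [1,   0,   1,   1,   1,   0],
})

gap = demographic_parity_gap(audit_df)
print(f"Demographic parity gap: {gap:.2f}")  # flag for human review above ~0.1
```

A single metric like this is not a full audit, but running a handful of such checks on every model version turns “audit your data” from a principle into a repeatable step.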

Everyone needs to be involved in the discussion. 

Oftentimes, we think of data scientists and engineering teams as solely responsible for developing trustworthy and responsible AI. However, Singh states that it is imperative to bring the stakeholders who are incentivized to manage risk and governance, and who have the expertise to do so, into the AI/ML design and development pipeline. Now more than ever, auditors, risk managers, and policy, compliance and legal teams need to be brought into AI development early to lay the foundations for good governance.

Accountability and auditability are core to good AI governance. Hence, they need to be top of mind throughout the organization, from product to sales to customer service.

Introducing accountability throughout your team and product development process ensures all perspectives are being considered.

Start by including compliance and auditing in weekly sprint operations. Inject diversity and ethics into DevOps to make those processes more intentional. You should also be transparent with your prospects and customers about how your models are built, and what thought processes go into them.
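One lightweight way to practice that transparency is to publish a “model card” alongside each model: a short, structured record of what it is for, what it was trained on, and what its known limits are. The sketch below shows the idea in Python; every field name and value is an illustrative assumption, not a description of Pattern89’s actual models.

```python
import json

# A minimal model card: a reviewable summary of how a model was built.
# All names and values here are hypothetical placeholders.
model_card = {
    "model_name": "ad_performance_predictor",
    "intended_use": "Predict how ad creatives will perform for a campaign",
    "training_data": "Historical ad metrics; no personally identifying fields",
    "known_limitations": [
        "Trained primarily on English-language creatives",
        "Not validated on video ad formats",
    ],
    "last_bias_audit": "2020-10-01",
    "audit_owner": "compliance-team",
}

# Publishing this with the model gives customers and auditors a shared,
# versionable record of the thinking that went into it.
print(json.dumps(model_card, indent=2))
```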

Regulations are coming. You need to prepare.

Given artificial intelligence’s rapid adoption across industries, as well as growing public concern about potential AI misuse, regulation is around the corner. In fact, the European Commission is currently developing a regulatory framework that, like the GDPR, could have a wide impact on any company looking to do business in the EU. In anticipation, businesses that rely on AI need to start preparing today.

Though it may seem early to do so, now is the time to start auditing your AI solutions, and to audit all of them thoroughly. By taking care of governance now, eliminating biases and ensuring compliance, you will save your company from future problems such as fines or regulatory penalties.

By getting ahead of the curve, companies can deploy trustworthy and responsible AI solutions at scale that work for both their customers and their business models.