For decades, artificial intelligence (AI) has demonstrated its predictive power and the profound impact it can have on society. Like any emerging technology, AI moved through a hype phase, when it was seen as a futuristic magic pill, to becoming a reality, as organizations now leverage AI to deliver aspirational use cases and develop new business models.
By attaining human parity in vision, speech, and text tasks, AI has the potential to significantly impact business outcomes. At the Microsoft Technology Center, I talk with customers every day about infusing AI into their solution portfolios, and it is clear that we are past the incubation stage: the time is ripe to put our learnings into practice.
For organizations that are still trying to figure out their AI journey, here are four things I have learned from customers who have successfully implemented AI projects:
Experiment to identify the right use case

One of the most common questions I hear from customers during our initial meetings is how they can experiment with AI to understand the ‘art of the possible’ and the eventual return on investment. I believe experimenting is exactly the right thing to do, since the most difficult task at hand is not the technical implementation but the identification of the right use case. Being realistic about what you want to solve or accomplish is essential, and this is where business and technology leaders must come together. It is also important to balance the accuracy and precision you can achieve on day one against the eventual goal. The partnership should lead to a use case that gives the organization a head start on its AI journey and yet is significant from a business-impact standpoint.
Invest in the evolution of an AI model
AI is not magic: the outcome of implementing an AI model correlates directly with the underlying data used to train it. Building an AI model involves continual iteration, and the outcome only improves as new or additional data arrives over time. Do not expect human parity on day one. Businesses need to invest in the evolution of a model, which may take many iterations before it reaches an acceptable level of accuracy and precision.
A real-world example comes from our operations at the Microsoft Technology Center (MTC), where we host customers. A Technical Architect (TA) leads these engagements. By hearing and dealing with so many customer challenges, TAs get better every day at operating as trusted advisors. A TA’s knowledge graph is updated daily with new learnings, so they keep improving in their ability to help our customers.
Similarly, an AI model has an evolution journey too. Invest in it.
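The idea that a model improves as its training data grows can be illustrated with a toy sketch. This is pure Python with invented synthetic data, not any particular framework or a real production model: a simple one-dimensional threshold classifier is retrained as each new batch of labeled examples arrives, and its accuracy is tracked against a fixed holdout set.

```python
import random

def train_threshold(samples):
    """Fit a 1-D classifier: threshold at the midpoint between class means."""
    neg = [x for x, y in samples if y == 0]
    pos = [x for x, y in samples if y == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

def accuracy(threshold, samples):
    """Fraction of samples where 'x above threshold' matches the label."""
    correct = sum((x > threshold) == bool(y) for x, y in samples)
    return correct / len(samples)

def sample(n):
    """Draw synthetic labeled points from two overlapping Gaussians."""
    out = []
    for _ in range(n):
        y = random.random() < 0.5
        x = random.gauss(1.0 if y else -1.0, 1.0)
        out.append((x, int(y)))
    return out

random.seed(0)
holdout = sample(2000)          # fixed evaluation set
data = []
for batch in range(5):          # each "iteration" adds a new batch of data
    data += sample(50)
    t = train_threshold(data)
    print(f"batch {batch + 1}: n={len(data)}, holdout accuracy={accuracy(t, holdout):.3f}")
```

With each batch, the estimated threshold stabilizes and holdout accuracy approaches the best this toy model can do; the same dynamic, at far larger scale, is why a model's evolution deserves sustained investment rather than a one-time effort.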
Set guardrails for responsible AI
From a data science lifecycle point of view, when you choose a use case you may discover that you do not have a suitable dataset, or that you have taken on an area where the technology is still evolving. Either way, organizations will fail fast or end up building an AI model with a certain level of accuracy and precision.
But even before an organization builds an AI plan and sets out to infuse AI across its suite of applications, the backbone of the overall design thinking should be principles of ethics and responsibility. Every organization needs to put guardrails in place for how it will develop and deploy AI models and, more importantly, for the impact those models will have on individuals.
At Microsoft, we have identified six principles for responsible AI that guide the development and use of AI with people at the center: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. You can read more about them here.
Organizations may develop their own principles according to the nature of their business, but guiding principles such as these will help ensure that their AI models are trustworthy.
This is where one needs to understand that putting a model into production is not, by itself, success; success also requires that the outcomes adhere to the design principles above. Over time, this ensures that users can trust the predictions these models generate, which should be the eventual goal for any business.
For example, imagine a model in the healthcare domain that predicts the outcome of a patient’s health check. For the care team to establish trust in the system, ‘false positives’ must be kept to a minimum, and there needs to be a feedback loop through which the care team can improve the system. Establishing ‘trust’ is therefore critical when assessing the AI impact of a use case.
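To make the false-positive concern concrete, here is a minimal sketch in Python, using invented illustrative data rather than any real clinical model: it tallies a confusion matrix from the model's predictions against the ground-truth labels the care team would supply through a feedback loop, and derives the false positive rate.

```python
def confusion_counts(predictions, labels):
    """Tally true/false positives and negatives from paired outcomes."""
    tp = fp = tn = fn = 0
    for p, y in zip(predictions, labels):
        if p and y:
            tp += 1          # model flagged, patient truly at risk
        elif p and not y:
            fp += 1          # model flagged a healthy patient (erodes trust)
        elif not p and not y:
            tn += 1          # correctly left alone
        else:
            fn += 1          # missed a patient at risk
    return tp, fp, tn, fn

# Hypothetical predictions vs. care-team-confirmed outcomes (1 = at risk)
preds = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 0, 1, 1, 0]

tp, fp, tn, fn = confusion_counts(preds, truth)
fpr = fp / (fp + tn)   # share of healthy patients incorrectly flagged
print(f"false positive rate: {fpr:.2f}")
```

In practice, the corrected labels coming back from the care team would feed the next training iteration, tying this kind of monitoring to the model-evolution investment discussed earlier.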
To conclude, implementing AI in an organization requires business and technology leaders to invest in defining an operating manifesto that embodies responsible AI. Success on this path will go a long way toward instilling trust in the system. It is also imperative to be realistic and practical about expected outcomes, and to think long term.