
According to a 2021 survey by NewVantage Partners, 77.8% of companies say AI capabilities are in widespread or limited production, up from 65.8% last year. This growth is helping drive down the cost of AI (as found by the Stanford Institute for Human-Centered Artificial Intelligence) while increasing the likelihood that organizations of all sizes will benefit.
However, doubling down on AI can double the problems. In particular, two issues give rise to two critical types of AI risk:
1. The risk of skilled labor shortages stalling value realization.
Trying to get value from AI with small, overburdened data science teams is like trying to drink vital nourishment through too narrow a straw. AI can’t help your decision-making and automation processes scale when there’s an ever-growing queue for model training and management.
Without enabling others outside of your data science team to get more models into production faster, you risk failing the test of business leadership—“How much value are we realizing from these AI projects?”
2. The risk of a “black box” AI fueling legal troubles, fines and reputational damage.
Not knowing what’s in your AI systems and processes can be costly. An auditable, transparent record of the data and algorithms used in your AI systems and processes is essential to keep up with current and proposed AI compliance legislation. Transparency also supports ESG initiatives and can help protect your company’s reputation.
If you think you won’t have problems with bias, think again. According to the Stanford Institute for Human-Centered Artificial Intelligence’s 2022 AI Index Report, as AI systems become more capable, so does the potential severity of their bias. And AI capabilities are advancing by leaps and bounds.
One Way to Avoid Double AI Trouble: A Robust ModelOps Platform
Unlocking the power of AI at scale while de-risking AI-infused processes can be achieved through governed, scalable ModelOps (the operationalization of AI models), which enables management of key elements of the AI and decision-making model lifecycle. AI models are machine-learning algorithms, trained on real or synthetic data, that emulate logical decision-making based on that data. Models are typically developed by data scientists, in collaboration with analytics and data management teams, to solve specific business or operational problems.
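To make the definition above concrete, here is a deliberately minimal sketch of an “AI model” in the article’s sense: a decision rule learned from data that then emulates a human decision. All data, names, and thresholds here are made-up illustrations, not a real system.

```python
# Illustrative sketch (made-up data): a decision rule learned from
# labeled examples, then used to emulate an automated decision.
def fit_threshold(examples):
    """Learn a cutoff separating (value, approved?) training examples."""
    approved = [v for v, ok in examples if ok]
    denied = [v for v, ok in examples if not ok]
    return (min(approved) + max(denied)) / 2   # midpoint decision boundary

# Synthetic training data: (credit score, loan approved?)
history = [(700, True), (720, True), (650, False), (600, False)]
cutoff = fit_threshold(history)                # learned boundary: 675.0

def decide(score):
    """The 'model' in production: applies the learned rule to new cases."""
    return score >= cutoff

assert decide(710) is True
assert decide(610) is False
```

Real models replace the midpoint rule with statistical learning over many features, but the lifecycle concerns are the same: the rule is only as good as the data it was fit on, which is why the governance discussed below matters.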
The National University Health System (NUHS) in Singapore was able to derive real AI value through ModelOps. NUHS needed a 360-degree view of the patient journey to accommodate the country’s growing and aging patient population. To that end, NUHS developed a new platform, ENDEAVOR AI, that uses ModelOps for model management. With the new platform, NUHS clinicians now have a complete view of patient records with real-time diagnostic information, and the system can make diagnostic predictions. NUHS has seen enough value from AI that it plans to operationalize many more AI tools on ENDEAVOR AI.
ModelOps combines technology, people, and processes to manage model development environments, testing, versioning, model storage, and model rollback. Too often, models are managed through a collection of poorly integrated tools. A unified, interoperable approach to ModelOps simplifies the collaboration required to scale ModelOps.
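The versioning, storage, and rollback bookkeeping mentioned above can be sketched in a few lines. This is an illustrative toy, not any vendor’s API; the class and method names (`ModelRegistry`, `promote`, `rollback`) are assumptions invented for the example.

```python
# Illustrative sketch (not a real ModelOps product): the version and
# rollback bookkeeping that a unified platform centralizes per model.
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    versions: dict = field(default_factory=dict)  # name -> list of artifacts
    live: dict = field(default_factory=dict)      # name -> live version index

    def register(self, name, artifact):
        """Store a new version of a model's artifact; return its version number."""
        self.versions.setdefault(name, []).append(artifact)
        return len(self.versions[name]) - 1

    def promote(self, name, version):
        """Mark a stored version as the one serving production traffic."""
        self.live[name] = version

    def rollback(self, name):
        """Revert production to the previous version after a bad deploy."""
        if self.live.get(name, 0) > 0:
            self.live[name] -= 1

reg = ModelRegistry()
v0 = reg.register("churn", "churn-model-v0.bin")
v1 = reg.register("churn", "churn-model-v1.bin")
reg.promote("churn", v1)   # v1 goes live
reg.rollback("churn")      # v1 misbehaves; production reverts to v0
```

The point of a unified platform is that this record lives in one auditable place rather than being scattered across the poorly integrated tools the paragraph above describes.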
Two major challenges that ModelOps can help with are:
- Model complexity and opacity. Machine learning algorithms can be complex depending on how many parameters they use and how they interact. With complexity comes opacity—the inability of a human to interpret how a model makes its decisions. Without interpretability, it is difficult to determine whether a system is biased, and if so, what approaches can reduce or eliminate the bias. The governance and transparency that a ModelOps platform provides reduces regulatory risk and risk of bias.
- Modeling at scale. Scale isn’t just the number of models; it also refers to how extensively AI is integrated into an organization’s offerings and processes. More integration means more models are needed, which ultimately means more potential benefit from AI. But when there aren’t enough data scientists to support it, and when a model drifts, is opaque, or is hard to deploy, AI initiatives can fail. By making ModelOps scalable and democratized, organizations can move from incremental benefits to disruptive ones.
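The model drift mentioned in the second bullet is routinely caught with distribution checks. Below is a minimal sketch using the population stability index (PSI), a common drift metric; the data and the 0.1/0.25 thresholds are conventional rules of thumb, not values from the article.

```python
# Illustrative sketch: detecting drift by comparing the distribution a
# model was trained on against live traffic, via the population
# stability index (PSI). Data and thresholds are illustrative.
import math

def psi(expected, actual, bins=10):
    """PSI between a training sample and a live sample of one feature."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(sample, a, b, last_bin):
        n = sum(1 for x in sample if a <= x < b or (last_bin and x == b))
        return max(n / len(sample), 1e-6)   # floor to avoid log(0)

    total = 0.0
    for i in range(bins):
        e = frac(expected, edges[i], edges[i + 1], i == bins - 1)
        a = frac(actual, edges[i], edges[i + 1], i == bins - 1)
        total += (a - e) * math.log(a / e)
    return total

train = [x / 100 for x in range(100)]              # training distribution
live_same = [x / 100 for x in range(100)]          # no drift
live_shift = [0.5 + x / 200 for x in range(100)]   # shifted distribution

assert psi(train, live_same) < 0.1    # common "stable" threshold
assert psi(train, live_shift) > 0.25  # common "significant drift" threshold
```

A ModelOps platform runs checks like this continuously across every deployed model, which is exactly the kind of monitoring that does not scale when it depends on a data scientist’s attention.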
Robust, scalable ModelOps delivers the technology and processes needed to build well-controlled, easier-to-use machine learning models faster. ModelOps allows data scientists to focus on modeling and democratizes AI by enabling data engineers and data analysts to use more AI across the enterprise. As Dr. Ngiam Kee Yuan, Group Chief Technology Officer at NUHS, stated, “Our cutting-edge ENDEAVOR AI platform is driving smarter, better and more effective healthcare in Singapore. We expect that ModelOps will accelerate the delivery of safe and effective AI-powered processes in a more scalable, containerized way.”
You don’t have to let talent shortages stall the realization of AI value, or accept “black box” legal and reputational risks. With a scalable, robust ModelOps platform, you can de-risk AI while reaping its benefits, gaining the adaptability to meet changing requirements and the governance agility to keep up with an ever-changing regulatory environment.
Lori Witzel is Director of Research at TIBCO.
Welcome to the VentureBeat community!
DataDecisionMakers is the place where experts, including technical staff, working with data can share data-related insights and innovations.
If you want to read about innovative ideas and up-to-date information, best practices and the future of data and data technology, visit us at DataDecisionMakers.
You might even consider contributing an article of your own!
Read more from DataDecisionMakers