Sponsor content
Democratized AI offers great opportunity, but must be approached thoughtfully
Democratized AI can help federal agencies unlock more value from AI but requires expertise and guardrails to ensure beneficial outcomes.
Presented by Accenture Federal Services
Government agencies have a powerful opportunity to increase efficiency and improve performance by leveraging AI and machine learning insights to empower their decision-makers. However, they also face a fundamental challenge: there is not enough talent to go around. As the National Security Commission on AI warned, “the human talent deficit is the government’s most conspicuous AI deficit and the single greatest inhibitor to buying, building, and fielding AI-enabled technologies for national security purposes.” Other parts of the federal government face similar challenges.
Closing this gap is difficult, given the premium that scarce talent commands in today’s marketplace. Traditionally, only the most advanced data scientists, with years of training and often a Ph.D., have been able to code the algorithms that build AI models and generate predictions. Yet demand is growing faster than supply: Georgetown University’s Center for Security and Emerging Technology forecasts that AI-related occupations will grow twice as fast as the overall workforce from 2019 to 2029.
That’s why democratized artificial intelligence and machine learning could prove to be an incredibly potent tool for the federal government. Democratized AI lowers the skill threshold needed to program and refine these powerful but complex solutions. This trend is being fueled by the convergence of low-code development platforms, more powerful AI foundation models, and increasingly mature AIOps/MLOps frameworks. This creates an environment where so-called citizen data scientists can more readily apply the power of AI to solve complex business challenges.
With democratized AI and ML, insights arrive faster and at lower cost as agencies widen access to the components of an AI system, so more people can supply data and obtain the outputs they need. This gives mission and business owners the ability to harness the value of advanced analytics while reducing the burden on government agencies to hire or train highly technical talent. That will be essential to unleashing AI’s greater potential: survey data found that 89% of federal executives believe technology democratization is becoming critical to their ability to ignite innovation across their organization.
It's no wonder that the democratization of AI and ML tools has become one of the most compelling trends in AI today. It stands to create tremendous efficiencies for the federal government while refocusing top talent on higher-level tasks. But while the potential benefits are great, we advise taking a measured approach to adoption.
Without trained data scientists guiding and checking outcomes, algorithms can produce troubling results. For example, in the wake of the COVID-19 pandemic, many research teams rushed to develop AI tools to help diagnose and treat patients or better support healthcare workers. Yet none of these tools made a meaningful impact in the fight against COVID, the MIT Technology Review reports, and some even produced inaccurate or confusing results. One algorithm was trained on a dataset in which all of the non-COVID cases were chest scans of children; it learned to identify kids, not COVID.
“In yet other cases, some AIs were found to be picking up on the text font that certain hospitals used to label the scans,” the article also shares. “As a result, fonts from hospitals with more serious caseloads became predictors of [COVID] risk.”
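This kind of shortcut can often be caught before deployment with a simple sanity check: test whether a non-clinical attribute, on its own, predicts the training label. The sketch below is a minimal illustration of that idea; the metadata file, the patient_age and covid_label column names, and the 0.05 margin are illustrative assumptions, not details from the article.

```python
# Sketch: check whether a confounding attribute (e.g., patient age)
# predicts the training label far better than chance. The CSV and its
# columns ("patient_age", "covid_label") are hypothetical examples.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("scan_metadata.csv")   # hypothetical metadata file
X = df[["patient_age"]]                  # non-clinical attribute only
y = df["covid_label"]                    # 1 = COVID case, 0 = non-COVID

# If age alone classifies cases well above the majority-class baseline,
# the dataset is confounded (here: children supplying all non-COVID scans).
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
baseline = max(y.mean(), 1 - y.mean())
print(f"age-only accuracy: {scores.mean():.2f} vs. baseline {baseline:.2f}")
if scores.mean() > baseline + 0.05:
    print("Warning: labels are predictable from patient age; check for confounding.")
```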
Federal agencies must approach democratized AI thoughtfully and understand when and where to include the necessary expertise and guardrails to ensure beneficial outcomes.
Based on our work to implement AI solutions within the federal government, here are three strategies federal agencies should be using as they steer toward democratized AI.
1. Minimize risk by starting with more basic tasks. Cash-strapped and facing a talent shortage, agencies are quick to see the benefits of putting data science capabilities in every employee’s hands. But instead of going all-in—an approach that could leave organizations exposed to costly mistakes—start with smaller tasks more suitable to experimentation. Well-written algorithms can help agencies speed up highly tedious tasks like searching and counting across thousands of documents, freeing up human employees for higher-level work. As your teams become more accustomed to the technology, AI will become more deeply ingrained in your daily operations.
2. Democratize in tandem with human experts. The impact of citizen data scientists will likely only continue to expand, but for now, federal agencies may find the most success in melding democratized AI and ML with the work of traditional data scientists. Pairing democratized AI with human experts allows agencies to identify potential red flags early, before they bias results or produce unintended consequences.
In some instances, organizations have kept a data scientist in the loop to continuously train the model and provide feedback, find where bias could be muddying the waters, or simply validate and verify that the output makes sense before it is delivered to the end user (a minimal sketch of such a review gate appears after these strategies). Given the stakes, all federal agencies should consider implementing these quality assurance procedures.
3. Be thoughtful about your guardrails. When it comes to delivering democratized AI and ML, the devil is truly in the details. It falls to the traditional data scientists building the algorithms to anticipate unintended consequences and apply guardrails that protect citizen users from biased results they may not have the expertise to notice. Appropriate guardrails include an emphasis on data governance: you need to structure your data for responsible access so that algorithms use it appropriately. Meanwhile, agencies can close the knowledge gap by deploying baseline employee training on statistics and algorithmic bias. No one becomes a data scientist overnight, but foundational training can help protect against some common errors.
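The sketch below shows one minimal way to implement the human-in-the-loop review gate described in strategy 2: predictions below a confidence threshold are routed to a data scientist instead of being delivered directly to the end user. The Prediction class, the 0.85 threshold, and the routing functions are illustrative assumptions, not a specific agency workflow.

```python
# Sketch: a simple human-in-the-loop gate for a democratized AI workflow.
# Low-confidence outputs are routed to a data scientist for review
# instead of going straight to the end user. Names and threshold are illustrative.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

REVIEW_THRESHOLD = 0.85  # tune with the data science team

def deliver(pred: Prediction) -> str:
    return f"Delivered to end user: {pred.label}"

def queue_for_review(pred: Prediction) -> str:
    return f"Routed to data scientist: {pred.label} ({pred.confidence:.0%} confidence)"

def gate(pred: Prediction) -> str:
    # Expert validation happens before low-confidence results are released.
    if pred.confidence < REVIEW_THRESHOLD:
        return queue_for_review(pred)
    return deliver(pred)

if __name__ == "__main__":
    for p in [Prediction("eligible", 0.97), Prediction("ineligible", 0.62)]:
        print(gate(p))
```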
Because the end users for most government solutions are ultimately U.S. citizens, federal agencies have an even greater responsibility to deploy democratized artificial intelligence responsibly. With a thoughtful approach that pairs citizen data scientists with trained experts, it can be done.
Agencies looking to move or expand into this field should first think critically about the problem they’re looking to solve and the data they have to solve it—rather than diving in by purchasing tools without an end goal or outcome in mind. With a smart approach, democratized AI and ML can be an extremely effective way for federal agencies to overcome resource deficiencies and move into a more efficient and more productive era.
This is part of a series by Accenture Federal Services on the top four trends building the future of federal AI. Read our overview to see all the trends and stay tuned for more deep dives on each in the coming weeks.
This content is made possible by our sponsor, Accenture. The editorial staff of GovExec was not involved in its preparation.