In February 2019, the President signed Executive Order 13859 on the American AI Initiative, which established the United States' national strategy on artificial intelligence. Even prior to this, however, government agencies had been heavily invested in AI and machine learning, applying them to a wide range of problems. While those earlier AI implementations happened at the discretion of individual agencies, this executive order placed a stronger emphasis on implementing AI more widely across the federal government.
In the context of this wider push, many federal agencies are accelerating their adoption of AI but struggling with how best to put those efforts into practice. AI and machine learning can bring about transformational change, but many federal government decision-makers lack the knowledge, skill sets, and best practices needed to move forward. To meet this need, the General Services Administration's (GSA) Federal Acquisition Service (FAS) Technology Transformation Services (TTS), together with the GSA AI Portfolio and Community of Practice, created GSA's Artificial Intelligence (AI) Center of Excellence (CoE) to support the adoption of AI through direct partnerships, enterprise-level transformation, and discovery work.
The AI Center of Excellence aims to improve and enhance AI implementations in the government and help agencies on their journey with AI adoption. The relatively small team at the GSA AI CoE is helping to bring about some very impressive changes within the federal government. In this article, Neil Chaudhry, Director of AI Implementations at the AI Center of Excellence, and Krista Kinnard, Director of the Artificial Intelligence Center of Excellence at Technology Transformation Services (TTS), both within the General Services Administration (GSA), share more about what the CoE is all about, some of the most interesting use cases they have seen so far with the government's adoption of AI, why trustworthy and transparent AI is important to gain citizen trust in AI systems, and what they believe the future holds for AI.
What is the GSA AI Center of Excellence?
Krista Kinnard: GSA’s Artificial Intelligence (AI) Center of Excellence (CoE) accelerates adoption of AI to discover insights at machine speed within the federal government. Housed within GSA’s Federal Acquisition Service (FAS) Technology Transformation Services (TTS), and coupled with the GSA AI Portfolio and Community of Practice, this collaboration can engage with agencies at every stage, from information-sharing to execution. As part of FAS TTS’ dedication to advancing innovation and IT transformation across the government, the AI CoE supports the adoption of AI through direct partnerships, enterprise-level transformation, and discovery work. Working in a top-down approach, the CoE engages at the executive level of the organization to drive success across the enterprise while also supporting discovery and implementation of AI in a consultative approach. This also means building strong partnerships with industry. The private sector is quickly producing new and innovative AI-enabled technologies that can help solve government challenges. By partnering with industry, we are able to bring the latest innovations to government, helping build a more technologically enabled, future-proof method to meet government missions.
How did GSA’s AI Center Of Excellence get started?
Krista Kinnard: GSA’s CoE program was conceived during conversations with the White House and innovative industry players, looking at government service delivery and the contrasts between the ease and convenience of interacting with private companies and the sometimes challenging interactions with the government. The focus is on a higher level of change and a redesign of government services, drawing on access to the best ideas from industry and the most up-to-date technology advances in government.
It is a signature initiative of the Administration, designed by the White House’s Office of American Innovation and implemented at GSA in 2017. The CoE approach was established to scale and accelerate IT transformation at federal agencies. The approach leverages a mix of government talent and private sector innovation in partnership while centralizing best practices and expertise into the CoEs.
The program’s goal is to facilitate repeatable and sustainable transformation, scaling and accelerating transformation by continually building on lessons learned from current and prior agencies. Since inception, FAS TTS has formed six CoEs: Artificial Intelligence, Cloud Adoption, Contact Center, Customer Experience, Data & Analytics, and Infrastructure Optimization. These six capability areas are typically the key focus areas needed by an organization when driving IT Modernization and undergoing a digital transformation. The AI CoE was specifically designed in support of the Executive Order on Maintaining American Leadership in Artificial Intelligence.
How is the AI Center of Excellence being applied to improve or otherwise enhance what’s happening with AI in the government?
Krista Kinnard: Because AI has become such a technology of interest, the CoE is focused on partnering with federal agencies to identify common challenges — both in mission delivery and in mission support — that can be enhanced using this technology. We are not interested in developing AI solutions for the sake of technology. We are interested in helping government agencies understand their mission delivery and support challenges so that we can work together to create a clear path to a meaningful AI solution that provides true value to the organization and the people it serves.
Why is it important for the Federal Government to adopt AI?
Neil Chaudhry: Data on USAspending.gov in 2019 showed that the federal government spends over $1.1 trillion on citizen services per year. The American public, conditioned by the private sector, expects better engagement with government agencies. Using AI at scale can help modernize the delivery of services while improving the effectiveness and efficiency of government services.
AI can help in many ways, such as proactively identifying trends in service or projected surges in service requirements. AI is excellent at pattern recognition and can help federal programs identify anomalous activity or suspected fraud much faster than humans can. AI can speed service delivery by automatically resolving routine claims, thus freeing up federal employees to focus on more complex problems that require a human touch.
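To make the anomaly-detection point concrete, here is a minimal sketch of the kind of baseline a fraud-screening effort might start from before moving to learned models. The data and the z-score threshold are illustrative assumptions, not any CoE tool:

```python
import statistics

# Made-up claim amounts; one is far outside the normal range.
claims = [120, 135, 110, 125, 980, 130, 118, 122]

mean = statistics.fmean(claims)
stdev = statistics.stdev(claims)

# Flag any claim more than 2 standard deviations from the mean.
flagged = [c for c in claims if abs(c - mean) / stdev > 2]
print(flagged)  # the 980 claim stands out
```

A rule this simple obviously is not production fraud detection, but it shows the shape of the task: machines scan every record for statistical outliers so human reviewers only see the unusual cases.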
Where do you see federal agencies today in their AI adoption?
Neil Chaudhry: It really varies. Overall, we see federal agencies as cautiously optimistic in their adoption of AI. Every large federal agency is executing some combination of proofs of concept, pilot projects, and technology demonstration projects related to AI technologies, while agencies with mature data science practices are further along in their AI exploration, for example implementing Robotic Process Automation, chatbots, fraud detection tools, and automated translation services. The common thread here is that all agency leaders understand that using AI provides a competitive advantage in the marketplace for delivery of citizen services in a cost-effective and impactful manner, and they are actively supporting AI efforts in their agencies.
How is the Federal government adopting AI compared to private industry?
Neil Chaudhry: Within the AI CoE, we have been able to develop a very broad and very deep perspective on government wide efforts related to AI adoption because we work with many federal agencies at different stages of AI adoption. Right now, most federal agencies are looking to institutionalize AI as an enabling work stream in a sustainable way — in this sense, they are very similar to the private sector in terms of AI adoption.
However, the crucial distinction between AI adoption in the private sector and public sector is that the federal government is heavily invested in learning resources like Communities of Practice that focus on sharing use cases, lessons learned, and best practices.
How do you see process automation fitting into the overall AI landscape?
Neil Chaudhry: Process automation is a critical component of applied AI. It is one of the best examples of augmented intelligence in this space right now. Process automation is critical because it is the key to upskilling knowledge workers in the federal workforce. It can take the drudgery out of routine work and free up time for these practitioners to do what they do best — come up with innovative solutions to solve ordinary problems in extraordinary ways. It can also reduce the amount of rework on service requests and claims applications due to human error by virtue of built-in error checking that gets smarter as more requests are routed through the AI application.
What are some of the most interesting use cases you’ve seen so far with the government’s adoption of AI?
Krista Kinnard: There are a number of impactful use cases. Broadly, we see a lot that focus on four outcomes: increased speed and efficiency, cost avoidance and cost saving, improved response time, and increased quality and compliance. We see these in a number of applications that enable agencies to provide direct service to the American public in the form of intelligent chatbots and innovative call centers for customer support. We also are starting to see AI making progress in the fraud detection and prevention space to ensure the best use and allocation of government funds. One of the biggest areas where we have started to see advancement is data management. Agencies are using intelligent systems to automate both collection and aggregation of government data, as well as to provide deeper understanding and more targeted analysis.
We have seen that the potential for use of natural language processing (NLP) is huge in government. NLP enables us to read and understand free-form text, like that of a form used to apply for benefits, or document government decisions. So much of government data exists in government forms with open text fields and government documents, like memos and policy documents. NLP can really help to understand the relationships between these data and provide deeper insight for government decision making.
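As a toy illustration of what reading free-form text can look like, the sketch below pulls structured fields out of a memo-style string. The memo text, field names, and patterns are invented for the example; real government NLP work uses far richer models than regular expressions:

```python
import re

# Hypothetical memo text, standing in for the open text fields found
# in government forms and policy documents.
memo = (
    "Applicant requests disability benefits effective 2021-03-15. "
    "Decision: approved. Reviewer noted a prior claim in this case."
)

# Extract dates and the decision as structured, analyzable fields.
dates = re.findall(r"\d{4}-\d{2}-\d{2}", memo)
decision = re.search(r"Decision:\s*(\w+)", memo).group(1)

print(dates)     # ['2021-03-15']
print(decision)  # approved
```

Once free text is turned into structured fields like these, agencies can aggregate and analyze records at scale instead of reading documents one at a time.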
What do you see as critical needs for education and training around AI?
Neil Chaudhry: At its core, AI is operations research and applied statistical analysis on steroids. The AI workforce of the future needs to have a fundamental understanding of statistical concepts, probability concepts, decision science concepts, optimization techniques, queuing theories, and various problem-solving methodologies used in the business community. In addition, the AI workforce of the future needs periodic training around ethics, critical thinking, collaboration, and working in diverse teams, to name a few, to effectively understand things like global data sets that are generated by people with different norms and values.
The critical needs for education and training for AI revolve around soft skills, such as flexibility, empathy, and the ability to give and receive feedback, along with critical thinking, reasoning, and decision-making. Any seasoned AI practitioner has experienced instances in AI research where we end up correlating shark bites with ice cream sales. So the ability to seek out subject matter experts, convince an organization to share proprietary datasets, and communicate actionable insights are all critical needs for education and training around AI.
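The shark-bites-and-ice-cream point is the classic spurious correlation: a hidden confounder drives two unrelated quantities, so they correlate without either causing the other. This small simulation (with invented numbers, purely for illustration) shows how temperature as a confounder makes the two series move together:

```python
import random
import statistics

random.seed(0)

# Hot weather (the confounder) drives both ice cream sales and beach
# visits (hence shark bites), so the two correlate with no causal link.
temps = [random.uniform(10, 35) for _ in range(200)]
ice_cream_sales = [5 * t + random.gauss(0, 10) for t in temps]
shark_bites = [0.2 * t + random.gauss(0, 1) for t in temps]

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, shark_bites)
print(f"correlation between ice cream sales and shark bites: {r:.2f}")
```

The correlation comes out strongly positive even though neither variable causes the other, which is exactly why practitioners need subject matter experts and critical thinking, not just model outputs.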
What is the CoE doing to help educate and inform government employees when it comes to AI?
Krista Kinnard: Education and training is a priority for many government agencies. Employees are passionate about serving the mission of their agency and delivering quality service to the American people. At the CoE we aim to empower them through the use of AI as a tool. As such, a critical component of the CoE model is to learn by doing; it’s experiential learning with real world application with a little coaching as they gain their AI footing. As our technical experts partner with agencies, we engage with their workforce in every step of the process so that when the CoE completes an engagement, the agency has a team of people who know the solution we delivered, can take ownership of its success, and repeat the process for future innovation.
Beyond partnering, the CoE reaches out more broadly to share experiences through the governmentwide AI Community of Practice. The AI Community of Practice supports AI education by creating a space to share lessons learned, best practices, and advice. We regularly host events and webinars and are forming working groups to focus on specific topics within AI. The challenge is that people learn in different ways. Classes and certifications can certainly provide a foundation, but learning to apply those skills in a specific context can be difficult. If organizations can create a culture of experimentation where people can learn by doing in a safe and controlled environment, government will be able to build skills around AI adoption. For that, we have established a page on OPM’s Open Opportunities platform. Here, programs and offices across government can post micro-project opportunities or learning assignments under the Data Savvy Workforce tag. Again, this isn’t just for data practitioners. Employees on program teams, acquisition teams, and HR teams can learn how AI could enhance their processes.
How can the Federal Government ensure their AI systems are built with citizen trust in mind?
Krista Kinnard: This point is critically important to the AI CoE. We have deep respect for the people and communities that government agencies serve — and the data housed in government systems that helps agencies serve those people and communities. Part of the CoE engagement model is to embed a clear and transparent human-centered approach into all of our AI engagements.
Another critical element of developing trustworthy AI is ensuring all stakeholders have a clear understanding of the problem we are trying to solve and the desired outcome. It sounds simple, but in order to effectively monitor and evaluate the impact of any AI system we build, we have to first truly understand the problem and how we are trying to solve it.
We also emphasize creating infrastructure to support regular and iterative evaluation of data, models and outcomes to assess impact of the AI system — both intended and unintended.
Creating trustworthy AI is not just about the data and technology itself. We engage early with the acquisition team to ensure that they are making a smart buy. We engage with the Chief Information Security Officer and the security team early and often since approval and security checks can be a hurdle for implementation. We engage Privacy Officers to ensure AI solutions are in compliance with organizational privacy policies. By bringing in these key stakeholders early in the AI development process, we help embed responsible AI into these solutions from the onset.
What advice do you have for government agencies and public sector organizations looking to adopt AI?
Krista Kinnard: I would offer two pieces of advice. First, start small. Choose a challenge that can be easily scoped and understood with known data. This will help prove the value of the technology. Once the organization becomes more comfortable with all that is involved in building AI on a smaller scale, it can move toward bigger and more complex projects. The second piece of advice I would offer is to know what AI can do, and what it cannot. This is a powerful technology that is already producing meaningful and valuable results, but it is not magic. Understanding the limitations of AI will help in selecting realistic and attainable AI projects.
Neil Chaudhry: AI is meant to augment the humans in the workforce, not replace them with synthetic media and autonomous robots or chatbots.
I always discuss what a successful AI implementation means to the partner and their frontline staff during our initial meetings, because every successful AI implementation that I have seen or been part of follows a hierarchy of people over process, process over technology, and technology as the tool used by people to improve organizational processes. As part of my discussions, I always advise the partner agency to think about how the frontline staff will use AI. My experience has shown that if the frontline staff cannot leverage AI in a meaningful way, then the AI implementation is not sustainable or actionable.
If a partner is looking to replace people, then their AI adoption strategy will not be sustainable. In addition, if a partner is looking to circumvent or bypass an established law, regulation, policy, or procedure then their AI adoption will also be unsuccessful because it will amplify the biases inherent in the new processes.
The advice I always give my partners who are looking to implement AI is to define a set of sustainable use cases for AI and measure the impact of those use cases against the existing tech debt within their organization. It may be that the agency is ready to implement AI now but waiting a year may allow the agency to implement AI in a cost-effective manner.