Led by Nagaraj Garimalla, Protech Solutions, Inc., provides public agencies with platform solutions that help them coordinate child support, homelessness, and corrections initiatives. At the 2025 Western Intergovernmental Child Support Engagement Council (WICSEC) conference, Protech led a session, “Humanity at Heart,” on how the judicious use of AI can amplify services focused on human challenges and needs.
One key lesson from the session was “Empathize: Listen First,” which highlighted the importance of human-centered design that embraces people’s lived experiences, thoughts, and emotions. Rather than diving straight into data and surface patterns, the user experience begins with people and the act of listening. Every case and situation is different, and the core responsibility of those who design child support systems is to ensure they support public agencies’ humanistic mission.
Unfortunately, platform designers have a tendency to optimize from the perspective of system efficiency, without fully considering human outcomes. It’s important to reverse the standard technology vetting process. One must first define success as achieving trust, fairness, and dignity, and then ask whether it’s appropriate to apply AI in providing a solution.
AI does have the unique ability to winnow vast troves of data into something discrete and actionable. When properly designed and deployed, AI presents tremendous benefits in areas such as automatically adjusting benefit eligibility screenings, identifying service delivery gaps, and optimizing caseload assignments.
However, even here, the human safeguard element is critical. With AI, there’s a tendency to oversimplify complex issues, narrowing ideas too quickly. It’s better to apply AI as a “brainstorming partner,” amplifying ideas about various scenarios and then allowing human decision makers to determine which of these align with core mission and public values. It’s also important to consider AI output as not finished, but rather the grist for further prototyping. Humans need to test and ensure accessibility, usability, conciseness, and clarity prior to scaling. AI has no intrinsic knowledge of anything. Its output is only as valid as the data it’s trained on.
The Protech team notes that one must ask the hard questions about whether the platform, as designed, builds trust and fosters values of transparency, inclusion, and fairness. Those with disabilities, language barriers, and a lack of digital literacy often face challenges in receiving support. AI helps break down barriers through multilingual chatbots and voice assistants. These help people who are vulnerable break through systemic obstacles and clearly convey the type of assistance they need in their native languages.
Another AI benefit is eliminating the lengthy processing times inherent in many social benefit applications. When processing times are drawn out and the approval process onerous, people often give up before receiving the help they need. With AI pre-screening algorithms in place, applications are checked against eligibility requirements immediately. What once took weeks now takes mere minutes.
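The pre-screening described above can be sketched as a simple rules check run the moment an application arrives. The sketch below is a minimal, hypothetical illustration, not Protech's actual system: the `Application` fields, the `INCOME_LIMIT_PER_PERSON` threshold, and the `pre_screen` function are all invented for this example, and real eligibility rules vary by program and jurisdiction.

```python
from dataclasses import dataclass

@dataclass
class Application:
    """A simplified benefit application (fields are illustrative only)."""
    monthly_income: float
    household_size: int
    state_resident: bool

# Hypothetical threshold; real limits vary by program and jurisdiction.
INCOME_LIMIT_PER_PERSON = 1500.0

def pre_screen(app: Application) -> tuple[bool, list[str]]:
    """Check an application against basic rules; return (eligible, issues)."""
    issues: list[str] = []
    if not app.state_resident:
        issues.append("applicant is not a state resident")
    if app.household_size < 1:
        issues.append("household size must be at least 1")
    elif app.monthly_income > INCOME_LIMIT_PER_PERSON * app.household_size:
        issues.append("monthly income exceeds the household limit")
    return (not issues, issues)

# An applicant learns within seconds whether the basic criteria are met,
# rather than waiting weeks for a manual review to surface the same issues.
eligible, issues = pre_screen(
    Application(monthly_income=2400.0, household_size=2, state_resident=True)
)
```

Even with such automation, flagged applications should route to a human caseworker rather than being auto-denied, consistent with the human-safeguard principle above.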
The bottom line is that AI should be approached cautiously and in conjunction with human decision making. That said, it should not be ignored. Nagaraj Garimalla and the Protech team believe it should be judiciously implemented in ways that make services more efficient, personalized, and equitable.