The GenAI Revolution, Ep 2: Howard Changemakers Leading the Way

HU2U AI Mini-Series, Episode 2

In This Episode

In this second episode of our AI series, Dr. Kweli Zukeri dives deep into the transformative work happening within the Howard community to make the AI revolution inclusive, accessible, and impactful for Black communities and beyond.

Join us as we highlight groundbreaking projects going on right here, like the creation of an audio database of African American Vernacular English (AAVE) to revolutionize voice assistive technologies, the diversification of healthcare data to address disparities, and the training of the next generation of Black data scientists. Plus, we’ll discuss an innovative speech therapy app designed with inclusivity in mind.

This episode unpacks Howard’s commitment to tackling AI bias and building inclusive ecosystems—efforts that reflect the proud HBCU legacy of using cutting-edge knowledge to uplift Black communities and drive progress for society at large.

From HU2U is a production of Howard University and is produced by University FM.

Host: Dr. Kweli Zukeri, Alumnus; AVP of Web Innovation & Strategy, Howard University

Featured Interviews:

  • Sabrina Bramwell, Graduate student, English department; Elevate Black Voices project team member
  • Dr. David Green, Associate Professor of English
  • Dr. Legand Burge, Professor of Computer Science; Director of Howard West
  • Dr. Amy Yeboah Quarkume, Associate Professor of Africana Studies; Director of Graduate Studies, Center for Applied Data Science and Analytics 
  • Dr. Gloria Washington, Associate Professor of Computer Science; Principal Investigator of the Elevate Black Voices Project
  • Gabriella Waters, Director of Operations, Center for Equitable AI & Machine Learning, Morgan State University 

Listen on all major podcast platforms

Episode Transcript

Publishing Date: September 24, 2024

[00:00:00] David: What are the dangers that students of color face with regards to these technologies?

[00:00:04] Gloria: We want to create a safe space in celebration of Black English, celebrating all the different dialects, the Southern, the different kinds of ways that Black people speak, so that you don't have to code-switch or pretend to be someone else.

[00:00:22] Amy: So, I have students in the program that felt that they never saw data scientists who look like them. So, we're creating the sense of, you can do work that is about you, that can center you, and that can empower you at the same time.

Can we create something within tech that doesn't destroy us? I think we can. It just takes time to be innovative.

[00:00:39] Kweli: In episode one, we discussed what generative AI is and why it contains bias. We also examined some industry approaches to rectifying it. So, what else are people doing to address the bias issue amidst pervasive AI adoption at just about all large organizations?

At Howard, there are both institutional and ground-level efforts, guided by some of our leaders and changemakers, to find ways to make AI work for all of us. This is aligned with the historical HBCU tradition of creating and leveraging new knowledge, not just for knowledge's sake, but to apply it to improve our own communities, as well as the broader society. Let's dig into it.

Welcome to HU2U, the podcast where we bring today's important topics and stories from Howard University to you. I'm Dr. Kweli Zukeri, Howard alum and assistant vice president of web innovation and strategy. And I'm your host for the second episode of our three-part mini-series that dives into how gen AI is affecting the globe, with a focus on our university community.

In this episode, we will extend beyond gen AI and also look at other examples of narrow AI that our changemakers are working on.

Gen AI tools are being integrated into operations and processes for all large enterprises, from corporations to government agencies. And both K-12 and higher education are no exception.

Enterprises can create their own AI tools if they have enough internal IT resources, or adopt cloud-based ones that are now part of enterprise software solutions, such as resource planning platforms like Workday and productivity platforms like Office 365, which have integrated AI tools into their existing products and services. Using such tools can bolster an institution's capabilities and efficiencies. But as with all new things, it must be done with caution.

One way that institutions of higher education, including HBCUs, are dealing with the rapid rise of AI is by creating enterprise-wide councils and working groups that can advise. Howard's provost, Anthony Wutoh, and President Ben Vinson III recently established one at Howard, called the President's Artificial Intelligence Advisory Council, whose goals include preparing the next generation of AI professionals, advancing the state of the art in AI, and fostering a more inclusive and ethically grounded AI ecosystem. I'm excited and honored to be an inaugural member myself.

One very practical element of our work is to consider how students and faculty make use of gen AI-based tools in an ethical manner that supports continued critical thought development.

Now, before we go on, I want to be sure we understand the importance of HBCUs in the national landscape.

(singing)

During the past decade, some media outlets have questioned the continued need for HBCUs. Huh! The nerve.

Background:

HU!

You know!

[00:04:17] Kweli: For those who don't quite know, HBCUs are historically Black colleges and universities, one example of which, of course, is Howard.

According to a 2016 United Negro College Fund report about HBCUs nationwide, although they only represent about 3% of American universities, Black HBCU students represent 10% of the total Black undergraduate students enrolled and 17% of Black college graduates nationwide. This means that not only do we enroll an outsized proportion of Black students, HBCUs also do a much better job of graduating them. So, I don't want to imagine a world without HBCUs, especially after the Supreme Court struck down affirmative action last year. I think the more appropriate and, perhaps, useful consideration is what PWIs, or predominantly white institutions of higher education, and society at large can learn from the resilience, grit, and excellence that exists at HBCUs, despite all of the obstacles they've historically faced.

In addition to producing Black graduates, which is, of course, an important ingredient for economic power and social mobility, part of the value of HBCUs is the spaces themselves. They are safeguards of the Black intellectual tradition, as the late scholar Manning Marable described it: the practice of engaging in research and scholarship to develop new knowledge, not just to advance a field of study, but to simultaneously apply that insight to uplift people of African descent.

I witnessed this tradition firsthand during my own time as a graduate student in Howard's department of psychology. For many of my fellow students and me, the motivation to produce research was to eventually use it to inform our approach to empowering our community. In fact, some of my fellow students had even left PWIs for Howard because their research, which focused on outcomes for Black people, was not sufficiently supported at their previous institutions.

Simultaneously, I believe that we who lead HBCUs should always be honestly assessing the trajectories of our institutions to ensure that we are truly working to uproot the systems shaped to hold us back, as opposed to simply leveraging our institutions to graduate students who just join the workforce as it is and thus perpetuate the racist status quo. We are much more than that. We are changemakers.

With this tradition in mind, when it comes to AI, I believe HBCUs are uniquely positioned to improve the technology for all. There are so many historical instances in which Black people have dismantled racism and bias in various domains, which subsequently benefited many other groups in society. The talent and knowledge are here, as well as the inherent motivation. The piece that is often lacking, however, is the resources to do more.

Back to what's happening at Howard. In addition to forming councils, it's important to create research and academic centers on campus that support some of our lofty goals. In 2022, with support from Mastercard, Howard established the social justice-oriented Center for Applied Data Science and Analytics, whose goals include increasing interdisciplinary use of data science and producing more formally trained Black data scientists. This is an important part of Howard's AI story, because dynamics around data are closely tied to the quality of AI models.

Dr. Amy Quarkume, who, among other things, is an associate professor of Africana studies and the graduate director of the center's master's program, speaks about how its work of training Black data scientists ultimately supports equity in AI technologies.

[00:07:58] Amy: A lot of the model creation is done by scientists. So, the first goal is to train more data scientists, right? Train more people in data analytics. And that's what we're doing as a degree program. Yes, you can get a data science training in six weeks, in six hours, but how would we take a position where we want to get a degree? A degree and saying that you've been trained by an institution to also add and create new knowledge. Right? So, that's one — representation.

Two is in the projects we do. So, the questions that we ask are different from traditional questions. The data sets we look at are not traditional data sets. The work that students are cranking, thinking, framing are not traditional frameworks or questions. We want some students to be able to say, “I left this program and I created an algorithm that possibly could deal with closing the poverty gap, right? I left this program and created a response to this policy dealing with data bias.”

[00:08:50] Kweli: One other pioneering HBCU center that I want to at least mention is Morgan State University's Center for Equitable AI and Machine Learning. The center's director of operations, Gabriella Waters, speaks about its mission and significance.

[00:08:24] Gabriella: We want to make sure that AI systems are developed in ways that are trustworthy, explainable, governable. We want them to be ethical and equitable and all of these kinds of things that we don't have universal guidelines around, like, we should, but we don't.

When you make AI, you just go. It's the wild, wild west. There is no set guideline on, “This is how you build an AI system.” We don't have an agreed-upon. Everyone's just innovating and they're doing their own thing in their own way. And there are no rules. And so, while we're not saying, “Here are the rules,” we are saying, “Here are some guidelines that we should all rally around.”

And so, we're working with different entities to do that kind of work, so that everyone's on the same page. They're also coming to us for policy advisement, for advisement on regulations. And groups come to us to serve on committees and help them to understand and grapple with what this technology might mean for their individual communities.

It's very important work. I spend a lot of time presenting and speaking on AI across all disciplines, doing testimony with the legislature at the federal and the local levels. I take this very seriously because people are looking to us as an example of what they want to do if they bring a center like this to their campus.

[00:10:29] Kweli: The center's work definitely serves as a resource at Morgan and beyond. It held its first summit in April of this year, which brought together experts, policymakers, researchers, entrepreneurs, and industry leaders. And it was a great learning experience.

While Morgan State's center is special in its equity-grounded approach, it's just one of many such centers that universities across the country and beyond have established or plan to establish. At Howard, we're also considering establishing an equitable AI center to serve both the university and other HBCUs.

We've talked about some top-down university approaches so far. Simultaneously, and often preceding wide-scale efforts, individual faculty and departments are preparing themselves, as well as their colleagues and students, often in culturally responsive ways, to understand and utilize AI and to lead efforts to address the bias of various tools. I talked to a few of the faculty changemakers who are leading the way at Howard.

Dr. Gloria Washington is an associate professor of computer science. She started teaching and researching at Howard nine years ago after having worked in industry and witnessing a lack of Black computer scientists.

[00:11:45] Gloria: I decided to come to Howard and utilize my skills and the ability to be able to impact the next Black girl or boy or anyone to know that they can be a part of tech, and give them opportunities to also create socially relevant computing techniques and AI and technology of their own.

The students really have great ideas. And I don't think that they're celebrated at the level that they should be.

[00:12:16] Kweli: Dr. Washington spoke about creating a culturally relevant and engaging learning experience in which students are encouraged to produce new knowledge and explore how their skills could be applied to improve society.

[00:11:53] Gloria: We have many different projects that focus on social good, so the undergraduates will be engaged in undergraduate research experiences where they can learn: how do you go into your community and pull out something that is important to you and the people that live there, and then focus on a problem that you know you can change?

It helps them think of, like, “I can be a technology producer rather than just a technology consumer.” So, that's how I like to think of my research and what I do at Howard and really inspiring people to know that they can be their own boss, if they want to, but they can also get a job.

[00:13:11] Kweli: Dr. Washington's research utilizes an equity-based approach called human-centered artificial intelligence, which centers the importance of the real-world impact of AI systems on humans, deeming it more important than profit or technological progress simply for progress's sake. The way she describes it, it sounds like just the approach we need to address some of the bias issues we discussed in episode one of this mini-series.

[00:13:35] Gloria: The Human-Centered AI Lab is basically exploring how we can take human-in-the-loop learning and have research scientists who are teaching human-robot interaction, explainable artificial intelligence, these interdisciplinary fields that take in psychology, sociology, ethnographic studies, communication, and use them for AI purposes.

You're going to need the AI people, but you're also just going to need a regular person to take a step back and say, “Hmm, I wonder what could go wrong here,” and to actually identify something that no one ever thought about.

[00:14:15] Kweli: All too often, biases are illuminated by real-life user experiences, especially those of Black users, only after products are already released. This shows that user diversity is not adequately considered before product launch, even for products with multi-million or billion-dollar development resources.

[00:14:35] Sabrina: Hey, Siri, how far am I from the crib?

[00:14:41] Siri: I can't help you with that.

[00:14:43] Sabrina: Hmm. Okay. Siri, how far am I from home?

[00:14:49] Siri: It looks like Upper Marlboro, Maryland is 15 feet away by car.

[00:14:54] Sabrina: Siri, get me to the crib.

[00:14:59] Siri: Getting directions to soul food crib.

[00:15:01] Sabrina: No, no, no, no, no, no. Siri, get me home.

[00:15:08] Siri: Getting directions to home.

[00:15:10] Kweli: That was Sabrina Bramwell, who's a graduate student in Howard's English department and has worked with Dr. Washington on one of my favorite projects, called Elevate Black Voices. She was demonstrating the result of a widespread lack of inclusion of Black people in the training of AI-based technologies, in this case, speech command and speech-to-text tools. Each time Siri didn't understand her, she had to code-switch, or change her speech from what she may normally have said to what we might call mainstream white American English, in order for Siri to understand.

The evidence for this type of issue is growing. For example, in one scientific study from 2020, researchers looked at voice assistant tools from five major American tech companies and found that African Americans experience difficulties with voice assistants understanding their speech almost twice as often as people speaking mainstream white American English.
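To make that gap concrete: studies like this typically compare word error rates (WER) across speaker groups, where a WER of 0.33 means roughly a third of a speaker's words were mis-transcribed. Here is a minimal Python sketch of how WER is computed. It illustrates the metric only, not the study's actual code, and the example strings are hypothetical.

```python
# Word error rate (WER): the edit distance (substitutions, insertions,
# deletions) between a reference transcript and an ASR hypothesis,
# divided by the number of reference words.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: the same request, transcribed two ways.
print(word_error_rate("get me directions to the crib",
                      "get me directions to soul food crib"))  # ~0.33
```

A study like the one above would compute this over thousands of utterances and compare the average for Black speakers against the average for white speakers.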

Dr. Washington talked about another scenario in which voice assistant technology bias may show up.

[00:16:12] Gloria: When I'm driving my BMW or even, like, Ford has a voice assistant, like, it just doesn't understand Black people. And it is truly because those systems are also built off of these same systems that did not have Black people in the mix to even build or think about how that would impact their driving performance. That's dangerous. It puts you in the position of, you know, getting in a wreck or something like that.

[00:16:41] Kweli: In an effort to improve AI-based automated speech recognition tools, such as Siri or Alexa, Dr. Washington is the principal researcher for a human-centered AI project supported by Google, called Elevate Black Voices. The mission is to create a speech audio database of African American Vernacular English, which can be used by speech recognition tools to better understand this language and, thus, improve the experience for African Americans who use them.

[00:17:10] Gloria: We knew that we would have to start out by taking some of our samples from the audio and doing a study of how well these audio samples are able to perform in, like, the Alexa, the Google Assistant, and then relating that experience and showing them, well, this is how bad your systems are. This is how they can get better. We're doing a baseline study, determining how well the audio segments that we picked up are being used by voice assistants in technology.

We have 600 hours of AAVE data that has been collected all over the United States from many different regions. We've been traveling around the nation, going to the Black places. We went to Atlanta, Baltimore, D.C. This coming semester, we're also going to go to Houston. We're also going to try and go to Alabama, and get some of those different dialects in New Orleans. As we know, Louisiana has many different dialects beyond New Orleans proper. But then, when you go into Northern Louisiana, it totally changes.

[00:18:20] Kweli: Dr. Washington noted that they'll soon be delivering their completed data set to Google.

[00:18:25] Gloria: We do believe that they'll use it to improve their Google Assistant products to actually be able to learn and listen to any kind of Black people rather than having to code-switch. For us, we're going to utilize it to do some other kinds of research, like community-building, how we made sure to keep the community in mind when we were collecting the data. And then we're also going to do, like, guidelines around fair usage of the data, so that any individual can understand how we thought about protecting the community, protecting any work products that come from it that eventually will impact the Black community.

Changing the narrative around AAVE

We also want to create a safe space in celebration of Black English, celebrating all the different dialects, the Southern, the different kinds of ways that Black people speak. It's something that we're really proud of.

[00:19:21] Kweli: Just like Dr. Washington, I believe it's monumentally important that Black people cherish AAVE as a diverse cultural treasure that is uniquely ours. Speech is self-expression, so being able to speak in one's own dialect or vernacular matters deeply. Reflective of the Black American experience itself, AAVE's structure has roots in the languages of West Africa, and it has survived despite American society's pressures to subdue it. Hence, the so-called code-switching phenomenon most Black professionals feel they must abide by to this day. Racist pressure to brush AAVE aside or keep it in the margins can impact Black people's own feelings about and relationship to it, perhaps subconsciously swaying us to consider it lesser, which is a form of internalized racism. That's why I'm so excited by the Elevate Black Voices project. As with many efforts to improve society for Black people, Dr. Washington noted that this project has the potential to improve voice assistive technology for other groups who may not normally speak standard American English.

[00:20:23] Gloria: Some of the lessons learned that we have on even collecting the different dialects, we’ve learned, can be passed on to African dialects, and then also individuals who are not native English speakers, because traditionally, these voice assistant technologies are failing them as well.

[00:20:42] Kweli: Back to the project itself, of course, Dr. Washington could not complete such an ambitious endeavor all on her own. She has an amazing team. And as she notes, it's key that the team is interdisciplinary.

[00:20:55] Gloria: Having these conversations with non-technical people and involving them in the process of understanding what the gaps are, because, like, I'm a computer scientist and tech person. If I'm not in sociology, if I'm not in psychology, in some of the fields that understand how we can help Black and Brown communities, then how can I even think of AI?

[00:21:17] Kweli: One of her team members is Dr. David Green, an associate professor of English at Howard, who has thought a lot about the implications of the use of language in AI applications.

[00:21:31] David: My part is helping to translate, to understand, and to identify certain features of Black English and African American speech, and even Caribbean American and diasporic speech, in particular ways, developing an understanding of that and what's important to keep in for the technology to recognize, in certain instances, the regional differences, right, from D.C. to Maryland. Even Baltimore speech patterns are a little bit different. But even the references for certain foods, the references for certain restaurants, the references for certain ideas, people's customs and practices, the language is very much bound up within that.

[00:22:11] Kweli: Dr. Green explains how certain language patterns persist among people of African descent across the globe, which is what some scholars refer to as cultural continuity.

[00:22:21] David: Certain accents and gestures and what we call the use of the chirp… I call it a teaser, right? How people use that, this is connected to different Africanisms, as we know, different languages that exist across the diaspora, even within the continent itself, that work their way in various ways into Black speech amongst people across the diaspora.

So, recognizing that and recognizing why somebody might do that before or after a sentence, or in a moment in which they're trying to think of what they want to say, becomes extremely important because, if the technology is designed as a form of recognition and reply, it recognizes one's humanity.

[00:23:08] Kweli: Dr. Green gave some detail about the project's methodology, or how its researchers collected speech from African Americans across many regions of the country, which forms the resulting data set.

[00:23:19] David: Part of the project is, different participants agreeing to talk for a minute or so about your favorite home cooking. Some of these questions are about customs that happen with one's family. But then, they're also, how you go about completing certain processes, how you go about asking for certain things, how you go about thinking about certain elements of one's life, and the importance in them.

And in that, you get all of these different responses from people. And you can just see the range within Black speech. And it is not that linguists weren't aware of this, but I don't think we were aware of how vital that was to one's expression of themselves in the ways that it affects, not just consumerism, but it affects how people get work done.

Your ability to just have your natural speech accepted and responded to just allows a certain freedom in how you're… what you're able to think about, what you're able to articulate, right? You're not searching for certain words, you're not searching for certain ideas, you're not trying to put on a performance of a certain kind of voice that you would think would get a better response.

MLA task force and HU First-Year Writing Program AI Working Group

[00:24:35] Kweli: In addition to instructing and researching, Dr. Green is also the director of the Howard English department's first-year writing program, which teaches the first-year writing courses that all undergraduate Howard students are required to complete. He has been very active in helping shape thought and guidance on AI usage and integration within higher ed English teaching at large.

[00:24:55] David: A lot of this was brought to my attention at different conferences, maybe starting in 2022, the MLA, Modern Language Association, which is really an association for teachers of English and foreign languages, they were thinking about this issue and formed a task force to think collectively around how this might affect the teaching of writing. And they did that in partnership with another organization, Conference on College Composition and Communication.

Those two groups primarily serve and work with teachers who teach writing, either at the high school or college level, providing certain guidelines on how you might think about, not just citation practices, but what does it mean to do research in the humanities? What does it mean to do research with regards to academic or college level writing?

My interest or my background in language difference, really studying, not just Black English, but the way that language difference affects different pedagogies or how we teach writing, was one of the reasons that I was brought onto the task force.

[00:25:59] Kweli: Dr. Green explained that the task force's first job was to understand the new technology.

[00:26:04] David: What we've come to understand is that it produces standard, kind of, written text, in ways that seem familiar, right? So, it's really just duplicating voices of other writers. It's pulling from other people's texts and doing it in such a way that it feels creative in some ways, or it feels as if there's a certain level of spontaneity embedded in it. And what we came to understand is, you know, that's not necessarily the case. These are algorithms. These are machines that are learning, but they're learning certain algorithms. They're learning how to, kind of, scrape data from different places and how to spin that together in ways that are not conscious, right? They don't have that kind of consciousness that we attribute to them. But there's a performance or appearance of consciousness attached to it.

[00:26:04] Kweli: With a new understanding of gen AI and its capabilities, they went on to consider the implications for English curriculum and pedagogical practices.

[00:27:00] David: As we're doing this, we're learning about the different concerns that are arising: instructors becoming concerned about student plagiarism, working to identify examples of text or papers generated through either ChatGPT or other generative AI platforms. One of the main questions we have is what information future teachers of English language and writing will need to understand in order to engage this rather than dismiss it.

[00:27:32] Kweli: As he worked with the MLA task force, he knew that the Howard English program would also have to tackle the issue, and with particular focus on its own student body.

[00:27:41] David: One of the things I wanted us to get ahead of with regards to Howard is, you know, what are the dangers that students of color face with regards to these technologies? So, what are the invisible, kind of, practices that either increase disadvantage or undermine culturally relevant aspects?

Really, I knew I couldn't do this work alone. I get a number of emails around concerns like what to do if you suspect use of AI to write a paper for the classroom.

[00:28:19] Question 1: What should we do if students acknowledge that they got references from AI?

[00:28:23] Question 2: What if they used AI to correlate and gather information or sources for their research?

[00:28:27] Question 3: What happens when AI becomes part of the research process?

[00:28:30] David: I felt we needed a conversation, just because of the kind of specificity of the local population, and then the types of teaching that students come to expect at Howard. And so, these things play out very differently than, say, a generalized, kind of, pedagogy. We could just think about Howard, and then, in turn, begin to think about HBCUs as a group with a similar set of issues, concerns, and students.

[00:28:58] Kweli: The working group that Dr. Green and others in the English department established is continuing to study the issue, as well as provide resources for other university faculty.

Another faculty changemaker I spoke with is Dr. Legand Burge, who, among other things, is a Howard professor and former chair of the computer science department. He also serves on Howard's presidential AI council and has been Howard faculty for 25 years. And his connection to HBCUs and their tradition of uplifting Black education runs deep, to say the least.

[00:29:33] Dr. Burge: I'm a fourth-generation HBCU graduate. So, three generations of my family have gone through Langston University, which is located in Oklahoma. It's the only HBCU there. And my great grandmother was a Spelman graduate. My mother told me where I was going. You know, she said, “You can go anywhere you want to go after you go to Langston.” And I didn't understand what she meant by that at the time. But I found myself comfortable in my own skin. And I saw role models there that I could emulate. And that put me on the right track to where I could go anywhere after Langston and succeed.

[00:30:08] Kweli: Although Dr. Burge attended an HBCU, none of his computer science professors were Black, which fed his desire to ensure that the experience for future HBCU computer science students would be different.

[00:30:20] Dr. Burge: So, one of the big reasons why I'm at Howard is, when I went through Langston, unfortunately, in computer science and in the math programs, I did not see anybody that looked like me. None of the faculty members looked like the students that they were actually serving. And so, for me, I was actually lucky because I had a mother that was an educator. I had a father that was an electrical engineer, and he had his PhD. He actually exposed me, and I have a twin sister, he exposed both of us to computers. In fact, we were the only students at the time, I'm talking, you know, mid to late-’80s, we were the only students at the time to have a personal computer in our dorm.

So, again, I was able to see that. But unfortunately, my colleagues, my peers at Langston, a lot of them, were challenged because they did not see people that look like them. Their confidence wasn't there. Also, they didn't know what the expectation was to succeed in the field. This is what prompted me to say, “Look, I want to dedicate myself to service and serving HBCUs and to making sure that all of the students that come through and alums that go out there, they're competitive, they're prepared, they're confident.”

[00:31:33] Kweli: Dr. Burge has played a major role in the development and trajectory of many Black computer science students at Howard in ways that go beyond the classroom. He prepares them both for working in the tech industry, where their demographic is extremely underrepresented, and for leveraging their technological prowess as a pathway to entrepreneurship.

We'll discuss this further in episode three of this mini-series. For now, I want to introduce you to his work as the principal investigator of the data science training aspects of the NIH’s multimillion-dollar AIM-AHEAD consortium. As you may know, the NIH is the federal government's National Institutes of Health, which oversees billions of dollars of health-related research around the country annually.

In the consortium, Dr. Burge works alongside other national interdisciplinary experts toward a mission of building and utilizing AI and machine learning models with diverse healthcare records to address health disparities and inequities, as well as supporting a diversity of researchers and healthcare providers who can utilize AI tools in their practice. Dr. Burge broke down some types of AI tools already being used by practitioners at large.

[00:32:44] Dr. Burge: What AIM-AHEAD is doing, one, we're trying to create more people or underrepresented researchers in the field that have the skill set. We're trying to train them up. We need to be able to train practitioners, like, how do I get a clinical researcher to actually use data science in their research? Or how do I get a medical practitioner to use AI tools in the research?

I'll give you an example. When you go to your physician, at the end of your visit, your physician sits down and writes up a summary of everything that he’s identified and found out that he wants to pass to you. He puts it in your chart. So, one of the big issues is, the way the healthcare system is made, they're optimizing their time.

They're really trying to be efficient. They're trying to get you in 15 minutes. Unfortunately, that's the way it is because it's about the turnover so that they can make enough money to pay the light bills, pay the insurance, and so forth. They're clearly writing their summaries very quickly. And then they tell you what you need to do, and then they'll see you the next time, right? That opens up some liability, because if they miss something that they were supposed to alert you to prescribe you, it could turn into a liability issue.

And so, AI technology could actually be used such that the conversation that's held as the physician is doing the examination, everything that they're saying could be captured. And then it could be summarized and placed into your chart. That's a perfect case of a large language model interpreting human language and then summarizing that language and putting it into a structure that fits the electronic health record and the chart for that patient.
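Dr. Burge is describing a now-common pattern: capture the conversation, then have a large language model produce a structured note. As a minimal sketch of what that summarization step might look like, assuming an OpenAI-style chat API (the model name, note fields, and sample transcript are illustrative, not AIM-AHEAD's actual tooling):

```python
# A hedged sketch of LLM-based visit summarization into EHR-style fields.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

def summarize_visit(transcript: str) -> dict:
    """Turn a raw doctor-patient conversation into a structured chart note."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize this clinical visit transcript as JSON "
                        "with keys: chief_complaint, findings, plan."},
            {"role": "user", "content": transcript},
        ],
        response_format={"type": "json_object"},  # ask for parseable JSON
    )
    return json.loads(response.choices[0].message.content)

# Hypothetical visit audio, already transcribed to text.
note = summarize_visit(
    "Patient reports chest tightness after exercise for two weeks. "
    "Blood pressure 140 over 90. Recommend a stress test and follow-up."
)
print(note)  # e.g., {"chief_complaint": ..., "findings": ..., "plan": ...}
```

In a real deployment, the transcript itself would come from a speech-to-text model, and the structured output would be mapped onto the electronic health record's schema rather than printed.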

[00:34:26] Kweli: AI tools are being used across the healthcare industry, but that doesn't mean they aren't creating harm, despite the increased efficiency they can also introduce. So, in addition to upskilling more Black and Brown healthcare professionals, AIM-AHEAD is also working to improve the tools themselves. We learned about data and algorithmic bias in episode one of this mini-series, and Dr. Burge elaborates further as it relates to healthcare.

[00:34:50] Dr. Burge: Bias in AI and health assessment, basically, it can happen in many ways. So, there could be data bias. Then you have algorithmic bias where, even with diverse data, the algorithms are not designed to account for different demographics or intersectionality. So, they still may produce biases in the outcomes. And then you have clinical practice biases. So, AI may perpetuate existing biases in healthcare practices, let's say, if it's trained on historical data that includes biased human decisions.

When you go to the doctor and they plug in all your chart information, so at my age, I'm starting to get this now, they say, “Oh, you're at a certain percent risk of having some heart-related stuff or whatever. And you need to change your diet or you need to…” they're trying to push you on some meds or something. Well, those models that are being used, the data actually comes from a study that was conducted probably 50 years ago, and the population was all white males.

When you start thinking about Black people, you know, our bodies are just different. And so, we need more diverse datasets. And then again, as I said, intersectionality, let's say, if I grew up in a place where there's a food desert, I may have different indicators versus if I come up in the family or in the neighborhood that had nice vegetables, and I ate very good food. I may have a different microbiome and chemistry going on.

[00:36:18] Kweli: So, AIM-AHEAD is trying to improve these AI models by making new diverse data sets available that can be used to train AI models without a high level of racial bias.

[00:36:26] Dr. Burge: We're trying to unlock data, so that we can create more fair models out there that have a diverse representation. We're trying to unlock the data from Howard University's hospital. It goes back to the 18…, you know, whenever, you know, 1800s. We're trying to unlock that data and make it AI-ready and accessible so that researchers can use it when they're building models. We're not only focused at Howard, but we're focused with other minority organizations or hospital organizations, like Meharry Medical School, Morehouse, as well.

[00:36:58] Kweli: The more diverse data sets that are unlocked, the more potential to improve AI models. So, this work is so important and may continue for a long time. Another changemaker at Howard who's working hard to improve data set and data science research diversity is Dr. Amy Quarkume who is also on Howard's presidential AI council.

Our various roles at Howard are related to racial and cultural equity within digital technology, and there are a number of points of convergence. For example, during the fall 2023 semester, she taught an honors course entitled AI Tech Bias and Reparations. From that title alone, I would love to take that course. It's so relevant right now, given that the African American reparations movement, which is well over a century old, has undergone an unprecedented period of progress during the past five to 10 years. I asked Dr. Quarkume to tell me about the course.

[00:37:51] Amy: The premise of the course was, one, talking to non-STEM students. So, these are students who are not computer scientists. They're not engineers. They were philosophers, people in political science, economics, African American studies. And talking to them about tech, but framing tech and the concept of tech with this concept of reparations, like, how do we allow this opportunity, this boom that is happening across the globe, how do we use that to take back what, unfortunately, I would say is due to us? Like, how do we use this technology to create bridges or pathways to help us close the gap in education, in economics, in housing?

And we should take this time to think about the biases that exist in the tech. We won't act like it's all good, but can we find some good for ourselves in ways in which we can improve where we are right now? So, we talked about issues in healthcare, issues in maternal health, issues in banking and financing, and how we can look at tech as a solution and a pathway to be able to get back and close the gap, which can help us in the legal field, criminal justice.

So, it was a very exciting class, very exciting.

[00:38:50] Kweli: Dr. Quarkume is teaching the honors course again this semester, fall 2024. Similar to Dr. Washington, she has a research lab that centers finding solutions for the Black community via technology and data science. It's called the CORE futures lab.

[00:39:05] Amy: At the CORE are communities, open-source data, innovation, and a sense of research, something we've been working on for two years, working with high school, undergrad, and graduate students. And I just feel like the ability to ask questions isn't something that only PhDs do. Research is not just for academics. Research can be for students in high school, people in middle school. We can all ask research questions.

And I think we've, kind of, had that model, generally speaking, in America, but I think for our communities, taking ourselves seriously as scientists at a very young age is something that we're pushing. And we open Howard's doors to that, which is not new. So, if you have a question that is community-centered, connected to data, which in many cases all questions are, bring it to the lab and we will try our best to innovatively think about that, find solutions. So, innovation is also part of that.

Right now, we're looking at, how do we deal with air quality issues in communities that cannot move? The company that is putting chemicals in the air won't stop. What can we do? Does that mean that we have to figure out how much is in the air to figure out how much to decrease it? Does that mean to build something? Does that mean we have to find ways to create filters? And the questions that the community has are the questions that are guiding us. So, the community question is, when is it a good time for my child to go out and play?

[00:40:19] Kweli: Dr. Quarkume is also the inaugural director of the data science master's degree program, which was introduced two years ago by the Howard Center for Applied Data Science and Analytics that we talked about earlier. The program also offers a certificate in data science, which makes the material more accessible to a broader range of potential students.

[00:40:39] Amy: We owe credit for the center to the provost. It was the provost's vision to create a center where faculty across the university can converge around data and be trained around data to be able to support students’ projects around data. So, he had this vision, and I've been running with it. And with that passion, we've been running and building.

It's been exciting. I've never been an administrator before. And I underestimated that job totally. But it's been exciting creating something that students can really sit in and learn and apply and soon possibly graduate and be making impact in the world.

So, data science is pretty, it's not as organized, I would say, as other disciplines because it's so new, but having something that is so open is very dynamic. So, our curriculum is unique, where we have an elective space where students can take courses across the university, but also we have storytelling, which is taught by a humanist. We have a machine learning class which is taught by someone in pharmacy. The program is very diverse. And students are seeing that, and we're hoping that we continue that conversation, that you don't have to come from computer science or engineering just to be in the data world.

[00:41:42] Kweli: Dr. Quarkume went on to contextualize why Black people's relationship to data and research is marred and how changing that relationship is so important for our collective progress, while still maintaining a critical lens on the many dimensions of data.

[00:41:56] Amy: The way we think about research in our communities, it's just not the cleanest story. We have a strained trust relationship with research. The same thing is with data. So, how we think about what's been collected about us, we should think twice. Think about how things have been sorted or organized, how we classify race, how we classify even what's missing and why it's missing. Those things need to be reexamined. There have been many discoveries about how calculations or formulas or models have been biased. And that came about when someone went again and looked at the model and began to think about, how was this constructed?

I mean, we talk about AI and models and automation. A lot of these models are used on us in different ways, whether it's Uber, whether it's Amazon, whether it's banking. But we know that they have not been based on us. So, it's unfair. So, in many cases, we find ourselves not getting benefits. We're dealing with the punishment, we're dealing with the consequences.

[00:42:51] Kweli: I could go on for days telling you about Dr. Quarkume’s work, but I'll just give you one more example, which is one of her latest projects called Worlds of Hello.

Dr. Quarkume was awarded a National Science Foundation grant to research and build an AI-powered tool that seeks to make speech development therapy, primarily for children ages two to five, more inclusive, culturally relevant, and affordable for Black children.

[00:43:13] Amy: Historically, children who have speech delays, generally, the option is speech therapy, which is, like, in-person. And that experience is mainly done during the daytime. And parents generally don't have the time or the space to be involved in that process. So, Worlds of Hello creates this space where a parent and a family member can participate in supporting that child through technology, recording their voices, recording their faces, and sharing it through an app and that child has the ability to have a familiar face with their speech development.

So, when they're with the therapist, the therapist can pull that up and show them, like, mommy saying “mommy,” daddy saying “daddy,” mommy saying “chair,” and also showing a picture of maybe a chair in their house. So, it creates this more inclusive experience.

[00:43:56] Kweli: Dr. Quarkume also explained why such a tool is needed.

[00:43:59] Amy: Traditionally, when we look at the field of developmental science, you generally see maybe a white woman or these bland uncultural examples. And in relation to the healthcare system, right, there are inequities in there, as far as when it comes to children of color and having therapists or therapy that is culturally supportive. 

If we don't have a therapist that understands that culture, you know, language and culture are the same thing, we lose that. And then when you come down to children with speech delays or children with special needs, feeling comfortable, familiarity, is a great support system.

And this solution, kind of, speaks to that, like, how can we use tech to create bridges, encourage support, create inclusivity, be affordable, be accessible for families, for children with speech delays, and then children in general? If you do want to support your child with their speech development, how can you do that in a way that is dynamic?

[00:44:54] Kweli: Dr. Quarkume went on to explain how the tool leverages AI technology.

[00:44:57] Amy: If I record one word, or a couple of words, for my child, the tech can create 10,000 words and study my voice, right? That's the power of AI. We also have a speech-to-text function where the child can also see what they're saying, for supportive measures, connecting text to sight and learning their sight words. And then we hope to also allow the children to be able to create their own stories. So, you can also speak to the app and use your own words to tell your own stories and create your own images.
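Worlds of Hello's internals aren't public, but as a rough sketch of the speech-to-text piece Dr. Quarkume mentions, here is what that step could look like with the open-source openai-whisper library; the audio file name is hypothetical, and the library choice is our assumption, not the project's actual stack.

```python
# A minimal speech-to-text sketch: transcribe a short recording so a child
# can see the words they (or a family member) just said.
# Requires: pip install openai-whisper (plus ffmpeg on the system path).
import whisper

model = whisper.load_model("base")           # small general-purpose model
result = model.transcribe("child_word.wav")  # hypothetical recording
print(result["text"])                        # text shown back to the child
```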

[00:45:27] Kweli: In today's episode, we examined the work of just a few of Howard's pioneering changemakers, making strides in a variety of ways that will ultimately make AI less biased and better suited to serve people of African descent. These efforts include creating an audio database of African American Vernacular English to support voice assistive technologies, diversifying healthcare data sets to improve the training of AI models and to serve a more diverse set of healthcare practitioners and researchers, training more Black data scientists across disciplines, centering Black communities in tech and data research labs, and building an app to improve speech therapy for Black children.

We began our discussion with an overview of how such work fits within the historical tradition of HBCUs and the Black intellectual tradition of activist scholarship. I'm feeling inspired.

In the next episode of this mini-series, we’ll dissect how Howard supports computer science students in their quest for success in both big tech and entrepreneurship, and how some alums, as entrepreneurs and in big tech, are already playing a role in the gen AI revolution.

Thanks for tuning in to HU2U for the second installment of our three-part mini-series, examining the impact of the emergent gen AI revolution on people of African descent through a Howard lens.

Make sure to subscribe to this podcast wherever you're listening, so you can catch the rest of the series. Also, feel free to send me your feedback after listening to each episode. My contact info is in the show notes, and I'd love to hear from you.

I'm Dr. Kweli Zukeri. See you next time. Peace!

Categories

HU2U AI Mini-Series and HU2U Podcast: Season 2