Advisory Panel Meeting – Vanderbilt University Medical Center

We are not recording the closed
session, just the open session from 9:00 a.m. to 10:10. Still can’t– can
Eric see the room yet? No. But I can– that’s all right. I can get going anyway
if you’re ready. I can’t see folks. I was trying to start to tick
off the list of a [AUDIO OUT] on seeing folks. But since I can’t
see folks, you’re going to have to actually
do a full roll call. Welcome, everybody,
to Vanderbilt, except for me and a few others. In this case, after
being sick for months, and having a surgery
about four weeks ago, and doing well in recovery,
I had the OK to travel. But Alaska Airlines did
everything in their power after 12 hours of airport
for me to get there, and I finally gave
up and came home. So I’m with you
virtually today, but not as good as having been there
for some blues last night and some barbecue and
those kinds of things. Karina, do you want to just
go ahead and do a formal roll, since we can’t see
most of the folks? And then I will dive
right into materials. Sure. So we have Eric, Tram,
Robert Califf just walked in. Lon Cardon is unavailable. Just double-checking if
you’re actually on the Webex. We have Tina Cheng. Jonathan Epstein is unavailable. Alejandra Gepp– we’re
expecting her on Webex. We don’t see her yet. We have Miriam
Guzman in the room. Ana Carolina Machado
is expected on Webex. She’s on. She’s on? Yes. OK. Yes, I’m here. Terry Magnuson is here. Marie Lynn Miranda is here. Bray Patrick-Lake is on Webex. [AUDIO OUT] Webex. Erica– Good morning. [INAUDIBLE] is here. Prashant Shah is here. Greg Simon is here. We expect Sharon Terry on Webex. She’s not on yet. We are also expecting
David Williams on Webex. We don’t see him yet. And Teresa Zayas-Caban
is unavailable. You skipped me. I’m sorry. Marylyn. I checked you off. Thank you. Marylyn Ritchie is here. I apologize. I saw you, and I marked you. [LAUGHTER] You’re accounted for. Thank you. I apologize. No problem. All right. Well, hearing no other, so
I will just get us started. So we’re doing an open
session, but unlike some of our open sessions
that are live, we didn’t expect them
to set it up this way. So we’re recording it and
then we’ll post it later on. This is about an hour and
10 minute session from 9:00 till about 10:10, when
we’ll take a break. I’ll give a quick
overview and then– think of it as Groundhog Day. [INAUDIBLE] often true. We’re going to do shorter
versions of discussions around the Engagement Core, which Dara and Consuelo Wilkins will be leading, then the Research Hub Overview with Josh Denny. And then we’ll have more
time in closed session to go into those in a whole
lot more depth, including some demos and things that
would be difficult to do online. So let me kick it off. Wait– actually, you’ve
got the slides up. So that’s fine. If you go to the
next slide, there should be some happy
pictures of Tennessee. I am disappointed not to get
there for multiple reasons. My wife’s from Knoxville. There’s the Knoxville World’s
Fair tower, for those of you that recognize it. And many of the rest
of these are Nashville. You see the Grand
Ole Opry there. You see some pictures
in the bottom left. Nashville was one
of our launch sites with our partners
at FiftyForward and at Vanderbilt there. [INAUDIBLE] them be one
of our launch sites. I hope that you’re going to
be able to get some good food that we don’t have in Portland. We’re way too healthy here. We need to [INAUDIBLE]
to Portland. So if anybody wants to send it
out to me, that would be great. This is our first in-the-field
Advisory Panel meeting. So this experiment of, instead
of hearing just from me, hearing directly from
awardees in the sites that they’re located at. So this is our first. Second will be in
Seattle in the fall. If you go to the next slide,
you know, this quick overview. Well, I do want to thank our
hosts, Vanderbilt and the Jean and Alexander Heard
Libraries, for hosting. I got to see the room briefly. Oh, wait. It’s back up. It’s even better now. And it does not look like
we’re in a federal building or in a hotel in Bethesda,
and that’s a good thing. So we’re out of the basement
of one of the Bethesda hotels. So that’s an accomplishment
and just a move forward. So I’m just going to
give you a quick overview really quickly about enrollment. And then I’m
[INAUDIBLE] over to Dara to lead the discussion
on Engagement and then the Research
Hub with Josh Denny. Q&A there. And we’ve got time for
much, much more in-depth Q&A in the closed
session as we revisit these in even more
depth than we’re going to do at the beginning. So again, on the next slide,
the basic story: Enrollment continues to go well. I mean, you all met with
Stephanie in late May. The enrollment continues
to be strong now, at over 265,000 people
who have started the process, over 220,000 who have
actually consented, and 166,000-plus who have
actually completed the entire core protocol. Often about 3,000,
sometimes more, people per week becoming– completing the entire
protocol and signing up. So that’s going well. There’s 369 clinics open
right now, another hundred or so planned to go. Just in, what, our
first [INAUDIBLE] site to open that up. And we’re preparing
the new infrastructure to support a lot more direct
volunteer sites, which will help us reach more states. It’s taking us a long time. We’ll dive into this in more
depth in the closed session to get some thoughts
and advice from people. It’s taking us a lot longer
to get the nontraditional way of recruiting people. The direct volunteer
mechanism is up and running. So the majority of
these people are still brought in through the
health provider organization mechanism. We’re seeing signs of life
and some new infrastructure in place to support the
direct volunteer model. So good progress. I mean, our job one– we talked
to you about the priority areas in the last meeting– job one is recruitment
and retention. And on that front,
we’re all hands on deck to develop our first
retention metrics, but also to just continue
to drive that awareness engine, that engagement
engine of building relationships and
then trying to convert some of those relationships
to participants who sign up and join. Next slide. We should do the clapper. That would be good. [LAUGHTER] [CLAPPING] Still going to the next slide? We are. There’s a big lag. There’s a lag here. It’s my rural Internet. Sounds like a William
Wordsworth poem. No, I don’t think so. We’re doing well on gender
and on race and ethnicity. We’re doing well
in Underrepresented in Biomedical Research. And remember, our goal over
the long term of the program was 70% Underrepresented
in Biomedical Research. We’re over that now
significantly, about 81%, and 51%, 52% of that based
on race and ethnicity. The two numbers in the
last page and this one sometimes differ
because participants can mark more than one
race/ethnicity category. So we’re actually–
we’re currently 108% percent because people are
actively marking more than 1%. So I mean, you could
imagine we would be having a problem in a
discussion with the Advisory Panel at this point in our
lives saying, [INAUDIBLE] people signing up. That’s not the problem. We aren’t achieving our race and ethnicity goals. That’s not the problem. I think our
challenge is really– and the next great challenge is to maintain that recruitment but drive retention. Are we building enough
value in relationships with our participants to
actually carry that forward? And we’ll discuss some of
that throughout the day. On the next slide– we can actually time
the Internet delay here. The geography– it is
people from all 50 states. You can see the
[AUDIO OUT] to the left. And this is without us doing
any particular campaigns to pull in more people of age ranges. We’ll talk a little
bit today about where we are with regional campaigns. And then we’ll
have opportunities to sort of address those. We only have just released
four additional language kits, so simplified Chinese,
Vietnamese, Korean, and Arabic. If there was one area in
terms of race/ethnicity, it was some of the Asian
language categories. And as we’re working
towards, OK, we’ve got now some of the
collateral translated into these languages,
we have a better chance of doing proactive recruitment
campaigns in those areas. And also we’re continuing
to build more capacity around interactive
mobile exhibits that can go to remote
parts of the country and sort of go to
where people are. Next slide. [HUMMING] OK, there it is. I’ll always entertain you
with music if at all possible. So just a reminder: When
Stephanie talked to you in the last meeting about
those priority areas, a few of the top ones are
in spirit of recruitment and retention, improving
the participant experience, launching genomics, and
launching the Researcher Workbench. And that’s a lot. That’s pretty much
the primary things that the consortium is
focused on right now. We’ll talk a little bit
about some new investments we’ve done on the
direct volunteer front to really get that up and going. And we’re going to hear a
lot today about engagement, particularly the Engagement
Core around getting participant input. These pictures are from
some recent meetings that Dara and I did with
our CPGI partners, the Community and Provider Gateway Initiative. These are really the bread and butter of achieving those UBR numbers we’re seeing. These are often national
or regional partners who have a local presence
in parts of the country. This was our first
face-to-face meeting just a couple months ago. You know, it was amazing
to watch this group network and be like, wait, we
know so-and-so here, and they just start
expanding their network, without any
encouragement from us. I think it’s a
natural thing to do for those who spend a lot
of their time community organizing, is– the networks that
they’ve started to build with each
other are even stronger than the power of the
individual [INAUDIBLE].. So amazing progress there. Dara and Dr. Consuelo
Wilkins will actually show you that that as we go. We’ll talk a little bit about
launching genomics and then launching Researcher
Workbench which, again, you’re going to hear
a lot about today. The two big values of being
here at Vanderbilt today are hearing directly from Dr.
Wilkins about the Engagement Core, and then hearing directly
from Dr. Denny and a range of other Vanderbilt colleagues
about what’s going on with the Researcher Workbench. So that’s just kind
of a preview of what’s to come throughout the day,
a little bit of an enrollment overview on top of that. And let me turn it over to
Dara to jump in– a little bit more detail on what’s
going on with engagement and then followed up by Josh. We’ll have a little
bit of time for Q&A, and then that will
finish open session. And we can dive more
detail in closed session. I will turn it over to
Dara and/or Consuelo. Thank you so much, Eric. Tram told me that I had
to come to the podium. So that’s why I’m here. Sometimes I follow instructions. We’re really just excited
to be back with this group and sharing some of the
highlights of the great work that we’re doing in
our Engagement team, really to build a robust
infrastructure that we know is necessary to actualize
meaningful and impactful engagement and, as
Eric said, ultimately, to retain diverse communities
and individuals from all walks of life in our program. Many of you know that I
sat where you were a couple years ago prior to joining
the All of Us leadership team. And I really had the honor to serve on the Advisory Panel, just helping to shape the
program from the ground up. But now, as the chief
engagement officer, together with my awesome team,
I have had the great opportunity to build on an
exceptional foundation that many of you in this room
and on the call have laid. And you’ll probably think
I’m a little biased. And admittedly, I am,
but very objectively I can say that while we have a
tremendous amount of work yet to be done, our engagement
infrastructure is really taking great shape. And over the next
few minutes, I’m just going to provide you with
a high-level overview of some of the key components
of our ever-growing and developing engagement
infrastructure landscape. And then after my
brief overview, I’m going to turn it over
to Dr. Consuelo Wilkins, who directs our Engagement
Core, to highlight some of the unprecedented work
that we are doing to live out one of our program’s
core and key values, which is participants
as partners. So next slide. You can click [INAUDIBLE]. OK. Great. So this– what you see here is
our engagement infrastructure. There are so many critical
components of this structure. And we’re committed to adding
new and exciting elements as we learn and grow. I mean, just to start, I’m just going to share a few of
the key elements that we already have in place. Moving clockwise
from the top, we have our community
and provider partners. Eric talked a little
bit about that. But in addition to the Community
and Provider Gateway Partners, which is a group of more than
35 national, regional, and local organizations, we have five
national community partners. And we really are
so fortunate to have more than 40 funded
partners who are really allowing us to borrow the trust and respect that they have in their communities. And they do events. They do activities. And I agree with
Eric wholeheartedly that I do believe that they
are creating that pipeline that is allowing us to have the
great diversity that we have in our program. Next you see our
participant partners. And that’s where Dr. Wilkins
is going to talk about. So I’m going to skip
right over that. But I think the one
thing that I will say there is, in
my humble opinion, participants are the heart
and soul of our program. And that’s my story. I’m sticking to it. The National Network of
Libraries of Medicine– this is an organization of over
7,100 member organizations. That includes academic
libraries, health sciences libraries, public libraries, and
also pharmaceutical and other biomedical libraries. And they have a Community
Engagement Network that’s working with our program
to enhance education, awareness of our program, and very
importantly, to help us bridge the digital divide, given
our program’s highly digital nature. And just being in
communities, they can meet people where they are. And that’s very, very important. They’re also positively
impacting our program through a speaker series,
an All of Us Speaker Series. Dr. Collins and I did an
inaugural session in March. And they also provide very
rich content and expertise to help us build out our
training and education materials. Moving along the line and
in purple at the bottom, you see the local
community advisory boards and participant advisory boards. This is really a key element
of our on-the-ground strategy. Our Engagement Team provides
best practice capacity-building tools and supports to ensure
that all awardee partners implement participating
Community Advisory Boards, who meet regularly, who
are knowledgeable about our program, and whose
members are representative of the participants in their
communities, and also– very important– whose members
have a passion, commitment, determination, and the
courage to speak up, to identify program
challenges and opportunities, but most importantly, to
help us find solutions, to optimize the outreach,
the education, the engagement and retention efforts on both
the local and national levels. Next you see the
engagement leads. This was one of my
first official acts after joining All of
Us Research Program was to create this
group of individuals who are responsible
for representing the awardee engagement
across the consortium. They’re key thought leaders. They’re designated by
their respective principal investigators at
the awardee level. And we meet regularly with
them to share best practices, lessons learned, and also
to co-create with them, work collaboratively
with them to design, develop, and implement
impactful All of Us capacity-building
engagement and retention strategies. Eric talked a little bit about
the mobile engagement assets. These are actually
experiential-learning vehicles, which, many of you
know, we now have two. One launched in 2017, and
a second, as you know, launched late last year with
the capacity for provision of blood and urine specimens. So you can complete the
whole protocol, if you wish. And these really are impactful. They go to health fairs. They really are a nice
way to expose communities who wouldn’t ordinarily
have access to our program to our program. So that’s great. Frontline staff, as you
can see at the top– really, the vision for
this key engagement and capacity-building initiative
came from Eric after he personally met with the
consortium frontline staff and engaged them in a discussion
about what it [AUDIO OUT] and much like the vision
for engagement leads, the convening of our
frontline staff– typically, they are research
associates– came out of a desire to build the capacity, the knowledge, and the effectiveness of the employees who are truly on
the front lines. These are the
individuals who are often providing the very
first impressions of the All of Us
Research Program to potential participants. And so our goal is to make
sure that these key team leaders have the tools
and information they need to do their jobs very well. And it’s going very, very
well with those groups. So as I close, as I mentioned
before, I just want to– I have a couple of slides that
highlight some accomplishments of our community partners. I’m not sure why it’s so small. But the national community
partners that we talked about, we have Delta Research
and Education Foundation, [AUDIO OUT] PRIDEnet
for All of Us. We have the National
Alliance for Hispanic Health, FiftyForward, and we added
just recently the Asian Health Coalition. And collectively these five
national community partners have hosted more than 300
digital and non-digital events in 28 states from 75 active
sites across the country. They contributed over 23
op-eds, articles, blogs; hosted nearly 100
events in partnership with our mobile engagement
assets, bringing All of Us directly to their
respective communities. And while their milestones are
really limited to awareness and engagement, as
Eric said, we really are asking them to help us
direct people to our website. And hopefully, they will help
people get more interested and enroll in our program. Similarly for the CPGI
partners, since October 2017– this is that network of over
30 national and community and provider organizations– they actually have subawardees under their group, over 140 subawardees, and more than 1,500 digital events and op-eds. And as you can
see, they’re really blanketing the nation with
activities about our program. I think Eric said
something about, you can fly over
the states, and you can see– anywhere you
can see, there’s something happening about All of Us. And we’re really excited
to be contributing to that. So, you know, we really
are excited about what we’ve done to date. But even as we celebrate
how far we’ve come, we’re also looking ahead and
mapping out the next steps to help us continue to
achieve impactful, meaningful, and value-added bi-directional
engagement and retention in our program. And because I’m
sticking to my story that our participant
partners are the heart and soul of the
program, it now gives me great pleasure to turn it
over to Dr. Wilkins, who, as you know– may know– is the
VP for Health Equity and the co-PI of the Vanderbilt
Recruitment Innovation Center. She’s the director of
our Engagement Core. And she’s working with our
team to meaningfully engage participant partners
in the program. She’s modest. She has a lot of skills. But she’s really a
nationally-known expert who, as you know,
is widely recognized for developing and
testing innovative methods to impactfully
engage participants. And she was just
recently appointed to the distinguished Secretary’s Advisory Committee on Human Research Protections. So we are really fortunate
to have Consuelo on our team to help us shape
our game-changing and groundbreaking work to live
out our value of participants as partners. Consuelo? [APPLAUSE] Dara, thanks so much
for the overview of the really comprehensive
approach and infrastructure you’ve developed for engagement,
and for the kind introduction. Welcome to Nashville, for those
of you who are in the room. And those of you
who are virtual, we’ll see you next time. As Dara mentioned, we
have the responsibility for really engaging participants
as partners in the program. For those of you who are really
familiar with the initial working group report or have been on the Advisory Panel for a long time, you’re familiar
with this phrase, participants as partners, one of the
themes in the initial report. And it’s really been a pleasure
for us to bring that to life. So just for starting
ground here, I want to make sure that
we’re on the same page about engagement
versus recruitment. This is something that– for a program that’s going to
have a million or more people, there’s a lot of
focus on recruitment. And we’re specifically, as
the Engagement Core, focusing on engaging participants
who’ve already enrolled in aspects of the research. So in general, we think about
engagement as bidirectional. It’s involving stakeholders
in some aspect of the program. And its ultimate goal is not enrolling people in the study. And that can be confusing. Because if you do
engagement well, you actually will
increase recruitment. And so a lot of Dara’s
efforts and focus with the national
and regional partners has been in the space of
awareness and acceptance. And that’s part of the
recruitment continuum. So sometimes we
forget that people who’ve not been exposed
to research before, who don’t really know a lot
about clinical trials, not only have to
be aware, but they have to accept that this
is something that they want to participate in. And we’re talking
about so many groups that have been marginalized
or underrepresented. The process of getting them
to accept that invitation to be a part of a study involves
that they know more about it, they trust; that they believe
that, despite the historical abuses, this is a good program. So a lot of effort goes
into engaging groups around awareness and acceptance. But that doesn’t always
yield recruitment. And if you’re only doing
that core recruitment, though, we’re missing
the bigger picture, which is that if we’re really engaging
participants, participant communities, our research, we
believe, is going to be better. It’s going to be more relevant. It’s going to,
hopefully, lead to– translate into–
discoveries that will be put into practice quicker. So our mission for
our Engagement Core is, again, focusing on engaging
participants who are already involved in the study in
all aspects of the Research Program. We’ve assembled what I think
is an amazing team of people, most of them here in Nashville. So in the room we have,
really, the people who do the brunt of the work. Alecia, Selena, and Juan
are leading the operation. They’re liaising with
the participants. They know them well. They have a great
contact with them. We have faculty
leaders who are really thought leaders in engagement. Karriem Watson is at the
University of Illinois and Liz Cohn is in New York. Laura Beskow is here, and
she’s done some phenomenal work around really understanding
consent and rigorously involving participants in
different aspects of research. And part of her
team, Kate Brelsford and Catherine
Hammack, are really helping us to do our evaluation,
as is Melinda Aldrich, who is newer to our team. We have three aims. They really focus on
creating the infrastructure to integrate participants in
all aspects of the program. And the second
aim is making sure that we have some explicit
ways of identifying a diverse group of people who
can participate in these roles and then removing
all of those barriers to them actually participating. And then finally,
not just for our sake but for research in
general, we really need to assess engagement
and be able to demonstrate its impact on the program. How is it changing? What are we doing
differently because we now have these participant
voices at the table? As we began this core, we
were just actually awarded– I say “just,” although it
seems like a long time, we were just brought in to the
program in January of 2018. So in about 18 months
we’ve accomplished what I think is a lot. Sometimes I think the team– not necessarily me, but
other members of the team make it look easy. But we really do put
a lot of emphasis on making sure that we
understand what the needs are of the individuals that we’re
bringing on board, making sure that we communicate
clearly to them. We have lots of protocols,
processes in place to be assured that we
are compensating them for their time, that we
have structures in place. When they have to
travel, that we pay for their travel in
advance, that we are considering the needs of the participants
who have challenges with ambulation or
need different things for their diet, that we
are proactively making sure that we know
everything they need so that when they
arrive to a meeting, they’re able to participate
and they are fully present and their voices can be heard. So we definitely believe
that participants can provide meaningful
input, and often, we’re just in their way. So we do our best to
try and clear the way. We currently have
36 individuals who are participating in
different roles in All of Us. So they are serving on the
Steering Committee, Executive Committee. They are participant
ambassadors. Here on the Advisory
Panel with you. And we have a
Director’s Think Tank. I’ll say that when we initially
proposed this Engagement Core– those of you who are familiar with engagement know, oftentimes,
you have to say you’re going to do a lot because
you expect people are going to tell you it’s too much, you
have to get rid of some things. I was really so pleased, when
I met with Eric and Dara, that they actually
added more work for us. So maybe others on
the team weren’t happy to have more work added,
but I was really delighted that, because the Steering
Committee is so big, Eric and Dara thought we
need to have participants who are actually
on the Executive Committee and that smaller
group providing input. And Eric also really
wanted to have a group of people who were in
D.C. who the leadership can bounce ideas off, do
some in-person work with. So those are actually things
that we didn’t initially propose. So the left, in blue,
those are things that we’re already doing. We have planned
and hopefully we’ll have our first participant polling– a return-of-results, return-of-value survey– that we’ll launch to participants sometime soon. And then we also
have plans for making available engagement
studios, which are one-time sessions
where you can get specific feedback on a
particular project or idea. So these are the individuals
who are currently serving in these roles. Miriam is making a
face about her picture. Sorry. These are individuals
who, before the launch of the program,
were participants. And we asked for them–
asked for volunteers in the newsletter
of February 2018. We were hoping that we’d
get a few people interested. We got more than
a hundred people who responded to this
call for individuals. It was available in both
English and Spanish. And we went through
a process of blindly reviewing their
personal statements, why they wanted to be involved,
and then we selected 15 of them that we then interviewed. And then we made
our final selections based on some diversity we
hoped to have in the program. So again, these are
the six individuals. Michael, Katherine,
Michelle, and Richard serve on the Steering Committee. Michelle and Richard also serve
on the Executive Committee. And it’s been really great
to have their voices there. I think any of you who’ve
been in those groups know that they are really
well-accepted members of those committees. They speak up. They’re involved. And we are pleased to
see that some culture has shifted in some ways. And that’s perhaps going to
be one of the biggest changes that we see but may not be able
to measure as well as we like. So we expect that they will
receive meeting materials in advance. Some of them are taking
their lunch break, and if they don’t have
meeting materials in advance– we’re expecting them to attend
these weekly meetings– there’s no chance that they’ll
actually be prepared enough to participate in these
really fast-paced meetings. So we pushed for that. And I’m happy to say that
despite some initial “mmm-hmm, we can’t do that,”
the program really stepped up and made a commitment
to getting those meeting materials out at least
24 hours in advance. And we’re tracking that. So we follow the timestamp
when those materials were released so that we
can actually report on that. We also have 22 individuals. They represent each of the
health provider organizations and FQHCs. These are our
participant ambassadors. There are also three who
are representing the VA and three who are representing
the direct volunteers group. These individuals were
nominated by the engagement leads Dara mentioned. And they are serving
in a different role. So they have monthly
meetings where they can deliberate on things
that we put in front of them. They can also propose
ideas, concerns, projects. And so we’ve got a
great group there. Karriem Watson leads that group, which is co-chaired by two of those
participant ambassadors. Those individuals, as well
as the participant partners I mentioned, are serving on all
of these governance entities. So those individuals
are actually involved in these workgroups,
subgroups, committees, boards. And so we’ve gone through all
of the processes of actually making sure they understand
what that looks like; that they are committed, that
they know the time commitment; that we provide all of the
acronyms, the list of things, the expectations. And this is also
another culture change in that we wanted to make sure
that the participants could actually identify the
committees or boards they wanted to most participate in so
they could self-nominate and we could help
them decide that. And that decision of which
of these they participate in is not up to the program. It’s up to us as the Engagement
Core with Dara’s approval where they best fit. And we’re looking
forward to seeing how they are able to help shape
change and be involved here. The Director’s Think
Tank I mentioned– this is a group of
people who actually live in D.C. They have
varied experiences. With a small group
like this, it’s hard to get all of the
diversity that you want. But there’s so many
layers of different perspectives here from people
and who’ve now participated in research, who are really– have health conditions that
make them strong advocates. But we also had a process in
place, so you’ll see the theme. There’s always a process. How do we identify
people in a way that we think is equitable
with calling for nominations, reviewing them, and then
selecting them and onboarding them in a way that we
think is going to be beneficial to the program. So some of the things
this group has done is they’ve provided
some input early on, on concerns about privacy,
how to address those. At an in-person session, Eric’s
idea of some boxes and boards, and building visions,
and through iterations, but it’s always interesting to
see the different perspectives there. And we’re looking
forward to moving them into additional work. Hopefully, they’ll get
to see some upcoming work with the Mood App
and some other things that are being proposed. These are just great
pictures of the groups. We had a retreat with them. Again, getting all
of these people to D.C. with their
varied needs from all across the country [AUDIO OUT]. This is the session
back in October with the Think Tank in person. And again the boxes– you can see Eric, and Dara
and John and Daozhong, working with them. And this is our first time where
the participant ambassadors actually provided some feedback
to the Steering Committee in person, on what
they thought would be the most important things
to first return, in general but also specifically related
to genetics and genomics. And then I’ll end with just
a snapshot of our approach to evaluation. It really is multipronged. We are focusing not just
on subjective, qualitative feedback, quantitative feedback,
but also objective feedback. So we have PIs who are mentors
to the Steering Committee members. We are evaluating it from
both their experiences as a mentor/mentee– and by the way, there
was a process for that. The PIs actually had
to fill out a form. They had to tell us why
they wanted to be a mentor. And we chose based on that. We are– hopefully
later today we’ll get to talk some about
the readiness survey. So we have surveyed
the consortium numbers to really understand if
they’re ready for this kind of engagement, what
are their perceptions, and making sure that we’ll
be prepared for that. And then finally here the focus
on how can we be objective. So blindly reviewing
of meeting minutes to make sure that participants
are able to have input, the time stamp for when the
meeting material was sent. And then we’ll be doing some
pre- and post-documents. So we ask for
input, how did they change, based on that
input from participants. So I’m not sure
if we’re supposed to get to questions
yet, so I will end here. Thank you. [APPLAUSE] So let me go ahead
and turn it over to Josh who will talk a
little bit about– and again, you’re getting a taste
of two discussions. We can have a
longer conversation. And then when Josh
finishes, we’ll spend a good 15 minutes on
this and then take a break. And then we’ll dive into deeper
topics in closed session. So hold your questions
for a few minutes. We’ll hear from
Josh and then have a little bit of time with
Q&A to get us started. OK. Please bear with
us for a minute. Because [INAUDIBLE]. Were you on the
broadcast as well? I am. OK. If I can share my screen. You know, I had a meeting
with Dara and Consuelo and a lot of the team
there at Vanderbilt yesterday and was
saying I feel like we’re doing a great job on this. But man, we have so
much further to go. And our big challenge
here is really, how can we continue
to get– and even more, both in
volume and in range, a wide range of
diversity of input but also stay nimble
and able to act quickly. And balancing those
things is hard. Because we get a lot
of input, not just from participant partners–
from engagement partners, advocacy groups. And managing all of that
input while also trying to innovate and
move things quickly is going to be a challenge
for us in an ongoing way. But we have a lot more
just that we can actually do on this participant
partner front. All right. Josh, are you up? Did I patter long enough? [LAUGHTER] You did. Thank you. Yeah. All right. You’re welcome. Josh is taking my
laptop to the podium. It’s a feeling of power. I think that’s a
violation of federal law. That’s OK. I’m just saying. [LAUGHTER] All right. This will make it easier to do
a live demo that we want to do. Thanks. Well, first let me welcome you
all to Nashville and Vanderbilt again. It’s a real pleasure to have
you here and have this– I guess this first on-site
Advisory Panel and I get a chance to talk through
maybe a bit more about– Data and Research Center
is maybe too narrow a focus as we think about
all the engagement activities and pilot testing and
demonstration projects and things like that
we’re working on. But we really see it as our
mission to guide and facilitate a lot of what’s
happening, and you’ve heard a great part of that
from Consuelo and Dara. And I’m going to talk a little
bit about the Data Browser and where we’re headed
to with the what we call our Research
Hub as an introduction and just a brief
demo of what’s live. So: a brief background
of what the DRC is. Then we’re going to go and
talk about Research Hub, talk about its goals, and
the major levels and access between public and then, once
the user applies and has access to the system, what
they’ll have access to, and then I’ll go through a little Data Browser tour and talk about our timelines
for when new features will be coming, to orient you to that. We are a multisite group. So we’re representing
Vanderbilt, but Verily and Broad
are key parts of this, and Anthony Philippakis and David Glazer are the other co-PIs with me on this project. Columbia is a huge part
of this as well, really around a lot of the EHR
processing and intake and helping organize the
32 sites that are currently sending us electronic
health record data and working really
on a continuous basis with those folks to pull that
in, as well as our team here. And you’ll hear
from Robert Carroll later, who’s a big part of
that intake and curation team. Our responsibilities are
to operationalize the data, to get it in, to make it useful, and to facilitate access; to execute data security agreements– we felt like we got a side education in law [AUDIO OUT] executed agreements with every data provider across the consortium– and to develop a secure ecosystem for researchers that share
data repositories app that we call HealthPro, which
is the way by which people facilitate the interaction
with participants and inputting their data and then tracking
status of people coming into the program. The EHR upload and
quality control support. The Researcher
Workbench is what we’re going to focus mostly on today. And that’s that large
researcher-facing, external-facing tools that
we’ve developed and are working on hard now, and
a number of dashboards. If we’ve oriented you
to the Research Hub, and the Research Hub is live–
it’s an evolving entity– at ResearchAllofUs.org, we
have right now public sites that are live. And those include the ability
to look at data snapshots, to get high-level summaries. A lot of the slides
that Eric showed were taken from the data
snapshots on the Research Hub. And so users,
participants, media can come in and see how many
people we have recruited, what’s our diversity
look like, what’s our geographic
distribution, et cetera. And then we’ll have a lot more
information about the resource on the public site. And that will continue to grow
as we watch the Researcher Workbench, especially, which
is that restricted-access part of the site where a user will
come in, develop research projects, explore that
individual, row-level data on participants, as opposed
to the aggregate data that we have on the public side. On the public
side, the big parts of that which were
launched in May of 2019 are, in addition to the
metadata [INAUDIBLE] program and the data snapshots, the Data
Browser and a survey explorer– you can download and you can
search through all the surveys. You can download them
in English and Spanish and see what’s there
and how they were built. That’s all live. It was launched at our one-year
anniversary of national launch on May 6 of 2019. And then the
Researcher Workbench is what we’re building. And we have this target
date of [AUDIO OUT] which extends from about
December to February, when we hope to be
able to launch that with the initial tools to
explore the data in detail and actually develop
those research projects, do the exploration, do science. And the difference is going
through the application process, which we
are calling a data passport, by which a researcher
would be verified and allowed to use the resources
for exploration. We divided it into
three data tiers. The Public tier is
what it sounds like. It’s public. It’s aggregate data. It’s anonymous– summary
statistics, basically. And then we have two levels of
the individual row-level data that researchers will
access, the Registered tier and the Controlled tier. The Controlled tier is
the more restricted. That’s where
everything will have obvious identifiers removed. But the Controlled
tier is where we’ll touch the– especially the
rare variants, and initially, all the genomic data where
we’re intending things like clinical notes,
de-identified, where those will live. And then the Registered tier
won’t have free text data. It will have all
the PPI questions saved for the free text
responses they provide. It will have quite a bit of
electronic health record data, but we’re providing more
formal privacy protections. You know, I used the word
“de-identified” here. We’re not– this is a research
project where participants have given the results,
given their data access for research purposes. And in that process,
it’s moved out of HIPAA into a research context. So “de-identified”
is sort of in quotes. This is, we’re removing
all obvious identifiers; we’re guided for human
subject protections by things like HIPAA, but
it’s not actually in HIPAA. But it’s a word that
we’re familiar with, so I use it for ease here. The bars to get that controlled access include, in addition to identifying who a researcher is and the training that everyone will go through, an attestation to acceptable use, which includes a lot of education about what is acceptable use. Certainly, things like re-identification are prohibited. And we advise against research
that would be stigmatizing, would not be in public
health interests, would not be in the goals
of the program as well. And we have processes by which people can submit ideas that they’re maybe unsure about, about what they’re going to look like. In the Controlled
tier, there’ll also be an institutional
sign-off piece in addition to the registered-tier
components. And initially, everything will
launch with the NIH eRA Commons login, which is a kind of login
that all of us who write grants are very familiar with. It’s interventional. But it is something that’s registered to an institution, a background institution, at the federal level.
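To make those access bars concrete, here is a minimal sketch of the tier logic as just described– the names and checks are illustrative assumptions, not the actual Researcher Workbench implementation:

```python
# Minimal sketch of the three data tiers and their access bars (assumed
# names/checks for illustration only, not the actual All of Us code).
from enum import Enum

class Tier(Enum):
    PUBLIC = 1      # aggregate, anonymous summary statistics
    REGISTERED = 2  # row-level data, no free text, added privacy protections
    CONTROLLED = 3  # rare variants, genomic data, de-identified notes

def may_access(tier: Tier, *, verified: bool = False, trained: bool = False,
               attested: bool = False, institutional_signoff: bool = False) -> bool:
    """Apply the access bars in increasing order of restriction."""
    if tier is Tier.PUBLIC:
        return True  # open to everyone, including via the Data Browser
    registered_ok = verified and trained and attested  # the "data passport"
    if tier is Tier.REGISTERED:
        return registered_ok
    # Controlled tier adds the institutional sign-off piece.
    return registered_ok and institutional_signoff
```

The Data Browser is that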
interactive view of the Public tier participant data. It goes across electronic health
records, survey responses, physical measures. I’ll show this very shortly. The goal is really to allow
people to see what’s there and get a sense of the
breadth of the data, the distribution
of data to start providing that kind
of information that makes them interested and want
to do more so that they’ll want to apply for access
and get deeper access. Is it even worth my
time to apply this? Will you have patients
with prostate cancer, breast cancer, heart disease? Do you have patients that
have high creatinine measured, and what percentage? How many have
[INAUDIBLE], et cetera. All these kinds of things
get that initial response. And also it helps them do
things like plan their grants. As they’re submitting for
grants, what kind of data would be there? I mentioned we launched in May. Currently, we have data,
about 116,000 people in there. We’ll continue to
update that over time, both for data cleanliness,
and also, as we get new data. So the data– it’s a living
thing that periodically we’ll get better new data. Once you’re an
approved user and you go through that data passport
process, and the training and agree to the
data-use agreement, and potential sign-off
by your institution, you’ll get access to
the Workbench, which is that restricted access,
cloud-based environment where people can conduct analyses
using either of those two data tiers. We are launching with– Registered tier
first is our plan, which is that more anonymous,
more privacy-protected data tier. And the application
suite includes– you’ll see a lot of this later
in the closed session, where we’ll show you what
this looks like in terms of both graphical tools, to let
you pick and choose cohorts, point-and-click interfaces
and searchable interfaces, as well as the computational
interfaces, where you can use common statistical programming
languages, initially R and Python, to
do data analysis. And both of these
will be supported by the research support
infrastructure, both help forums and the ability to see
online tutorials, webinars, and things like that that
will help researchers come on. I mentioned that we intend
to launch the Registered tier in winter. We think that at that point
we may be around 200,000 participants total. The tier, the data that launches
may be a little behind that based on what data are curated
and ready to be released at that point. I mentioned the research support infrastructure. We’re going to have
help desks and forums, training materials,
and really putting this through a number of different
venues and online tools and webinars. And a key part of this will be,
we know that once people come into some of the
programming environments, though they may be
familiar with R or Python, or maybe they’re familiar with
other statistical packages– and we intend to expand
that support over time– using a new environment,
it’s always a bit of a lift. So we’re going to have
well-documented notebooks that walk you through,
here’s an analysis. Change this variable
here to pull in your cohort instead of the
cohort we did in this example. And that will really help facilitate people getting on quickly.
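As a rough picture of what such a template notebook cell might look like– the file and column names here are hypothetical, not the Workbench’s actual dataset:

```python
import pandas as pd

# Hypothetical input; in the Workbench this would come from the Curated
# Data Repository rather than a local CSV.
participants = pd.read_csv("example_participants.csv")
# expected columns: person_id, condition, age_group

# Change this variable to pull in your cohort instead of the example cohort.
COHORT_CONDITION = "hypertensive disorder"

cohort = participants[participants["condition"] == COHORT_CONDITION]
print(f"{len(cohort)} participants with {COHORT_CONDITION!r}")
print(cohort.groupby("age_group")["person_id"].count())
```

And then we’re also making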
our data dictionary public. And actually a lot of this is exposed in the Data Browser, which facilitates the ability
to search quickly and find something. In terms of where we are,
we first launched something in Quarter 2 of 2018. And that was
really, we are here, the “Hello, world” equivalent
for the Research Hub– and talked a little bit about
what our components were. We had ability for people
to sign up for updates. And we started doing
some things like links to the protocol and the surveys. And then the May release I
mentioned and talked mostly about. And then version 3
here that we intend to launch in the
winter, which is, really, the key feature would
be the addition of the Workbench and tools to do the
individual exploration. So now hopefully all of
this will work smoothly. So this is when you come
into the Research Hub– I’m just jumping specifically
to the Data Browser. And there are, I mentioned, data
snapshots and things like that. As you walk through the page,
we have some FAQs and videos, a guide here. We display different EHR domains
that we have information for. You can see that
these are out of a– we have a smaller number
of EHRs that are curated. Then we have survey questions. We have on 104,000
people who’ve filled out survey questions for The
Basics and Overall Health, brief descriptions, and
then program measures. You can go to here– Josh. Josh, does this mean that the
Data Browser currently has 100– roughly 100,000
quality-assured participant data uploaded, that
there’s another 150,000– Yeah. So we have, in what we
call our Curated Data Repository that feeds this. We have pulled together
data in over 100,000 people. And each domain has that
different denominator. But exactly, that’s
exactly true. Now it’s not all
perfectly curated. And one of the things– I like to say that
this is a journey. And our bent is to release
data early and often. But we are curating
it along the way. So these are passing all
the initial quality checks. If someone has, for instance,
a diagnosis of hypertension, it doesn’t necessarily mean for
sure they have hypertension. But it does mean a doctor
has entered a diagnosis for hypertension at some point. And you do monthly or
quarterly releases? We think we’ll probably,
in that, something like every four– three
or four months of release. At this point, we’re
kind of letting this flow and see what looks like it
becomes the right cadence. When we first launch, we
may have an initial release last a little bit longer
because we don’t want to change too frequently on new users. There’s a lot of protocol
we’ve thought about. Once you start working in
a given data environment, though, you can continue to
work in that data environment even if we release
a new version. And the same CDR that’s
under the Data Browser will essentially be
a Workbench as well. So if we look– for instance,
I just mentioned hypertension, it’ll query through the data. And you can see that there
are 59 corresponding diagnoses here. And you can see the distribution
of what they look like and the different percentage
of the population. Now this is maybe how a doctor
would enter the term, right? So we understand someone
might come in and also look for high blood pressure. [AUDIO OUT] But we support the synonymy there, and that maps directly to hypertensive disorder, with the synonym high blood pressure, and 35% of the population has that entry.
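A toy illustration of that synonym-aware search– the vocabulary here is a made-up stand-in for the real concept mapping:

```python
# Map lay search terms onto standard concept names (hypothetical vocabulary).
SYNONYMS = {
    "high blood pressure": "hypertensive disorder",
    "htn": "hypertensive disorder",
    "heart attack": "myocardial infarction",
}

def canonical(term: str) -> str:
    """Resolve a user's search term to its standard concept name."""
    t = term.strip().lower()
    return SYNONYMS.get(t, t)  # fall back to the term itself

assert canonical("High Blood Pressure") == "hypertensive disorder"
```

It works, obviously,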
across different things. And we mentioned breast cancer,
which was mentioned before. You can see 2.6% of the
population has that. And you can go through and
with each of these conditions, you can explore. And as you would expect,
there’s more females diagnosed than males with breast cancer. And in this case, to preserve privacy, if there’s more than 0 but less than 20, everything gets binned into 20s. So it just says less than 20. And I don’t know if there’s one person or there’s 19 or 20 of them. And each column is going to be grouped at bins of 20. You can see the age distribution across. And a lot of these terms are hierarchical and include a number of component terms.
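A minimal sketch of that small-count suppression rule– the exact rounding convention is an assumption, not the Data Browser’s actual code:

```python
def bin_count(n: int, bin_size: int = 20) -> str:
    """Return a privacy-preserving display value for a participant count."""
    if n <= 0:
        return "0"
    if n < bin_size:
        return f"<{bin_size}"  # 1 through 19 all display the same way
    # Assumption: larger counts are rounded down to the nearest bin of 20.
    return str((n // bin_size) * bin_size)

assert bin_count(7) == "<20"
assert bin_count(59) == "40"
```

So this hierarchy view would probably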
be for more power users and physicians, maybe
your researchers wanting to explore different subdiseases
or components of this and see what the
breakdowns are– maybe not as relevant
for breast cancer as it would be for something
like heart failure, for instance, where you want to
look at systolic and diastolic and components like that. Let’s say you wanted to
look for a particular lab. So I was actually asked
this recently by someone submitting a grant, how many
of your patients have PSA? So actually, I deliberately
typed it in the wrong box, under the conditions
of PSA as a lab test, prostate-specific
antigen, but yet it will tell you that in
this case, we have raised prostate-specific antigen. And if we go to the
Data Browser main page, we can actually look at
the lab measures for that. And as you [AUDIO OUT]
not surprised, we have more people
with the lab measure. And we can look at
the distribution. So 7% of the
population has that. And we can go through,
and it’s not surprising that more men have
it than with women. And you can see that– you
mentioned about the curation aspect, and being completely
honest, not all of these labs– we get things, for instance,
in different measurements from different users as
these different 32 groups are sending us EHR data
on over– more than 10 different EHR vendors,
as we work through that and that is something that
we’re working on harmonizing across the resource. And I showed you one
that has maybe a few more vendors than others. Josh, I was just
looking at creatinine. And it’s– you would think
it’s like a renal failure population. We do. But it’s not showing up there. I just wondered if you shouldn’t
have a flag by the ones that really haven’t been curated. So what we are thinking
about doing in the– Josh? Josh? Can you repeat– can you repeat
the questions for both myself and the people on the bridge,
but also for– we’re recording this for open session. Great question. So the question was when
looking at creatinine, which I’ll do now,
there are a population, for instance, that is going
to have elevated creatinines and have renal disease. And the question
was, should we have some sort of flag about the
level of curation of these and which ones have
been curated or not? And one of the things we’re
talking about doing, especially as people come in and
access the individual data, is having what we have
defined as our curated data– our clean, curated dataset and
then the curated raw or all, so that people, if
they are looking to do a lab that we haven’t
yet really harmonized, that they can go in and
curate that themselves and still get
access to the data, while also knowing that
things like creatinine we have harmonized. And the creatinine
data here looks very clean in terms
of our distributions. Those– they’re all in
one unit, for instance. Scroll back up. There’s an interesting–
you have a normal bar and an abnormal bar. And those are totally wacko. Oh, in the no unit part? Yeah. Yeah. So again, these are a small
number of submissions. But most of them are high. Most of them are high, right. And I don’t know– Doesn’t match with your actual
table, so it’s pretty good. Right. Right. So this is a group that is–
they’re submitting the values based on these measurements. It’s a categorical value. And maybe we need
to work on making that clear that these are
different labs than the other. One of the things– you can see that
both of the counts here are actually the milligram
per deciliter, not the high, low. And they’re actually
different lab submissions– and making it more clear,
having that the default be the lab that is
the more common lab. Great. So I think we’re actually
officially into question time, not only by virtue
of having questions answered [LAUGHTER] but I’m
actually done with this talk. And I think you want to open
the questions across the variety of presentations we’ve
heard this morning from Eric [INAUDIBLE]. Yep. [INAUDIBLE] Just as a reminder, if
you’re asking a question, please indicate your
name and speak up so that folks on the Webex
can clearly hear you. Thank you. Sure. Sure. This is Greg Simon. So a question really
following on Rob’s question from a minute ago,
which has to do with people who you refer to
sort of curation or cleaning. It’s an important philosophical question about whether the browser views things as they came in. Are we intending to accurately
represent what we would see, or expecting someone
else is going to need to place a layer on
top of that to interpret it? Or do we say, no, we really
want to, ourselves, interpose a layer which is some–
whatever you want to call it– value set, computable phenotype,
some way of processing that into something which
we have somehow vetted? To me that’s sort of
an interesting question about how much you want
to say it is what is or we’ve actually cleaned
it up for you a bit. Yeah. It’s certainly cleaned up a
lot from when it comes in. And the question, to
repeat, just in case, is how much do we want to
put a filter on the data and say, we’ve cleaned it
up, versus the Data Browser– and this will go for
the Workbench too– is that this is what we got. This is a topic that I’d love
to hear more discussion on, especially this afternoon. We’ll have a lot of time to
have more discussion on those and get your opinions as an Advisory Panel. My view and our initial
thoughts are to basically have both of those
views, that everything has a certain degree of curation
where they actually have– we have a codified unit
behind it, for instance. We have a value. They are being mapped
to a standardized term. The Data Browser is limited to those things that commonly match those initial– there’s actually other
data that doesn’t even meet those initial
quality checks and that we haven’t been able
to work with the site to map into standardized vocabularies. And the Data Browser
right now is a little bit in between both of those
goals, because we have not produced that completely curated
set of a hundred labs or so. And that’s what we’re
working with the sites to do now is to have
that squared off– these labs are going to
be very high-quality. And we’re going to, to
every degree possible, have those units be
mapped, get rid of those are extreme values,
all that kind of stuff. You’re going to also have
access to the full raw data as well if you want
to look at it as it came in before our
set of clean rules. I guess our philosophy
in the Data Browser was to sit at a little
bit higher level and kind of to show data as it came in. And then in the Workbench– have the, here’s
the cleanest set, this is what these
hundred labs are. The diagnoses, for instance,
have been all standardized. And here’s everything, if
you want to access something that we have. So for instance, you’d clean
up a white count to say, some came in as thousands–
some came in at 6,700, some came in as 6.7. Right. We’ll harmonize that. Right. So that, you’d say,
you’re not going to just have it represent
those which could be orders of magnitude off. Right, exactly. So the idea is in
that clean set, we’ll clean out those
6.7 versus 6,700 and have the units all
be standardized to a set. You’ll have the other
data set as well. We want to move, and
we are working actively on actually moving, as much
as possible, the 6.7, 6,700 to actually computationally
harmonize those in the Data Browser as well. Where we can do it. Any other sort of
Any other sort of topics or questions that you want to deep dive into, even if we don't have time to answer them all now? What are other topics for
Josh or for Dara and Consuelo that we could get to as
we go throughout the day? Well, we saw a lot
about enrollment but I didn’t see
anything about follow-up. And my experience so
far in similar studies is, follow-up is a beast. Yes, it is. So Rob’s question
was on follow-up. We saw a lot about enrollment
but not a lot about follow-up. Eric, do you want to take that? Or do you want me
to talk about it? I mean, it’s a great discussion
because we’re going to have a– we want some input about broader
retention strategies anyway. So I think we should
capture these. I don’t know if you guys have
a whiteboard in the room. We should capture these
and make sure that we deep dive into all of those. How confident are you that
you have valid email addresses for everyone? How confident are we that we
have valid email addresses? Ah. The question is,
how confident are we that we have valid
email addresses. [LAUGHTER] I don’t know that data well. Does anyone else? Eric, do you? Yeah, I mean, what I’ll tell you
is initially when we started– I mean, it’s going to be an
issue, particularly in cases where either a direct volunteer
partner or a health provider organization partner is
helping somebody sign up for email for the first time. It’s not necessarily an
email that they actually use. It’s something that
certainly happens. When we launched,
we only had email. And we’ve added SMS to
address a request particularly from specific communities
that were like, we don't really use email, but we actually have text messaging. The whole recontactability
is clearly a huge issue that we’re actually working
on, and we’ve only recently had the analytics in place. And quite frankly, a CRM
tool, a customer relationship management tool, that allows us
to start to [AUDIO OUT] the openings of emails, did the email ping back, and so forth. I don't know the numbers
off the top of my head. We should really have
a follow-up discussion on this and folks run
the analysis on it and talk about it.
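As a concrete sketch of what that follow-up analysis could look like– with a hypothetical event schema and made-up sample records, since no real numbers were cited in the session– the basic open-rate and bounce-rate computation is straightforward:

```python
# A minimal sketch, under assumed data, of the recontactability analytics
# being described: from a CRM-style event log, compute open and bounce
# rates and flag addresses that bounced and never opened anything. The
# event schema and the sample records are hypothetical.

from collections import defaultdict

events = [
    {"email": "a@example.org", "event": "sent"},
    {"email": "a@example.org", "event": "open"},
    {"email": "b@example.org", "event": "sent"},
    {"email": "b@example.org", "event": "bounce"},
]

# Tally events per type and address.
counts = defaultdict(lambda: defaultdict(int))
for e in events:
    counts[e["event"]][e["email"]] += 1

sent = sum(counts["sent"].values())
open_rate = sum(counts["open"].values()) / sent if sent else 0.0
bounce_rate = sum(counts["bounce"].values()) / sent if sent else 0.0
likely_unreachable = sorted(a for a in counts["bounce"] if a not in counts["open"])

print(f"open rate: {open_rate:.0%}, bounce rate: {bounce_rate:.0%}")
print("likely unreachable:", likely_unreachable)  # ['b@example.org']
```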
Our biggest concern is particularly those vulnerable
populations at FQHCs or in an environment where
that address was created when they came into the clinic
or when they came to an event– and whether they're recontactable through that
particular methodology. [AUDIO OUT] about
three months, we'll have both a much more nuanced preference engine for how people actually get communicated with and a wider range of contact options than what we
had out of the gate. And we’ll have the
analytics in place. We can actually start to see it. But it is something that
we’re concerned about. In general– well, I don’t
want to make up numbers, because I can’t remember. But I– we’ve got the charts
on did people actually open the email about
the follow-up surveys, has it not been opened, how many
times has it been looked at. We’re now [AUDIO OUT]
basic analytics of that. But I don’t want to try to make
up numbers off the top of my head. I can't remember it. I feel the same way about
having numbers in my head. And I know if I say one,
I’ll be precisely wrong, as opposed to– I keep thinking we know it. And then I’m like, I’m not sure. I’m like, oh, is that the
right chart in my head? But yeah, I don’t
want to make it up. But this [INAUDIBLE] I’m sorry. So just doing a time check,
we have two more minutes for this open
session, and then we can– once we get into
the closed session, we can certainly
do a deeper dive into some of these
other pressing questions that we have. So we can take one more
question before we close this, turn off the recording, go
into break for about 10 minutes and then go into
the closed session. So I– It’s Greg. General question, which
probably would take way more than two minutes, for
Dara and Consuelo, I guess. When you do this
engagement work, if you really do it right,
people will disagree. And you can’t paper
that over, but you need to move forward
and make decisions. So I’m just curious
about how you worked with that
in terms of living in disagreement about things. That’s probably a
day-long discussion or a life-long discussion. [LAUGHTER] Or a lifelong discussion. Yeah, that’s exactly
what we talked about when we met with Eric
and Consuelo yesterday. We need to figure out the
best forum to get the input, and to do it in somewhat
of a democratic way, by asking actual
participants what they want, as opposed to researchers
who think they know what participants want, but
understanding that we’re not a one-size-fits-all program. You’re going to have
differing opinions even within demographic, racial and ethnic, or community groups. So just as in any
difficult situation, you have to move forward
with the answer that meets the needs of the most
people, while not causing harm. And how do you do that? If I had the answer
to that, I wouldn’t be sitting in this room. [LAUGHTER] But I know
we should discuss it, because I think there are
some really unique ideas that have been done in
various organizations. For example, in my
prior organizations, everyone had a vote. That’s one way to do it. In many of the fraternities and
sororities, there is a count. In our Steering
Committee, we vote. That’s one way to approach it. But I think we have
to just figure out, how best can we leverage the
core values of our program– which in some cases are competing– in a way that best meets the
needs of the overall gestalt of what we’re trying to
accomplish as a program. And we’re going to have some
people who won’t be happy. But let’s hope that we
have fewer people that are unhappy than we
have that are happy. That’s my best rambling answer. But I think we’re going to have
to figure that out together. If I could add one
thing to that– and Dara brought this up
yesterday, and I certainly agree– we have to make sure we're not treating
the participants differently than we treat the
researchers, because I think the researchers
are disagreeing more than the participants are. And so how are we actually
resolving those disagreements? And when there are
real disagreements, what's the next step? Are we getting additional
information and data to make sure that we
have some informed decisions around those things? So let them disagree, just
like we disagree, I think. I mean, I’ll close
out this open session by giving you an example that's live, that we're
struggling with right now. When we released the public
Data Browser back in May, we made a somewhat
last-minute decision to remove the ability
for people to search by race,
ethnicity, and particularly by tribal affiliation, at the
direct request of the Tribal Advisory Council for NIH. And out of an abundance
of respect– and we’re in the
middle of our process, and I’ll give you some
updates in a minute about multiple
consultations we’re doing, as well as
listening sessions, with tribes around the country. But the short of it was, we
erred on the side of caution. The Vanderbilt team worked many,
many, many midnight and weekend hours to change the code
base and change the dataset before we released it. And now Dara and Consuelo are walking through a series of exercises– Consuelo with our participants, Dara with our community partners– to get input about if and when and in what ways we should put the ability to sort on race and ethnicity into the public Data Browser. You can imagine
a lot of people who don’t– [INAUDIBLE] And remember, it’s a
public Data Browser. We don’t have any
idea who’s using it. And headlines can come out
where somebody makes a claim that we all know
is spurious and not something that you can generate
from aggregate statistics of that sort. And it's been kind
of like whiplash. Dara will have one
session and they’ll come back like, oh, they’re
really mad that we haven't already put in the ability to
sort by race and ethnicity, because they really
want us to, as well as some other categories. And then we’ll have
another session. We come in. It’s like, they absolutely
don’t want us to do this, right? And so I think hearing
that debate and the reasons underneath people’s concerns– in some cases, we
can add capabilities, or features, or rules
that address concerns, and we haven’t made the decision
about how and when we’re going to do that, because
we're still getting that input.
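For concreteness, one example of the kind of rule that could address concerns about spurious claims built on small subgroups– offered purely as an illustration, not as anything the program has decided– is small-cell suppression, which masks any count in a public aggregate view that falls below a minimum cell size:

```python
# Purely illustrative sketch of a small-cell suppression rule for a public
# aggregate-count browser: any subgroup count below a minimum cell size is
# masked so that rare combinations cannot be singled out. The threshold
# and the sample counts are hypothetical, not program policy.

MIN_CELL_SIZE = 20

def suppress_small_cells(counts, floor=MIN_CELL_SIZE):
    """Replace any count below `floor` with a masked label."""
    return {group: (n if n >= floor else f"< {floor}")
            for group, n in counts.items()}

raw = {"condition X / group A": 1543, "condition X / group B": 7}
print(suppress_small_cells(raw))
# {'condition X / group A': 1543, 'condition X / group B': '< 20'}
```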
But by definition, no one is going to be perfectly happy, because there are really different opinions. I think if we focus on the root issues, [INAUDIBLE] ways that we can address the root issues, and then communicate– thank you for your input, but here are the reasons that we actually made the decision that we did– I think that's
we can do right now. But if you guys have ideas
about how we can navigate that– And take any issue with our program, any single issue, and we get
that variety of response, like, how do you navigate that? How do you move that forward
is the challenge that we face. But don’t ask for the
input if you’re not going to actually use it
in that type of forum. And we’re really trying to
wrangle with it and use it. With that, I will officially
close the open session. And we have a 10-minute break. So I can’t see the
clock there that you all are operating
from, but Tram, let everybody know when
they need to be back by, and then we will start with
closed sessions and some deeper dives. Sure. So I want to make sure everyone
gets their 10-minute break. So we will resume at 10:25.
