Laurie R. Dien, The Pinkerton Foundation on Metrics and Funders

As part of our series on each of the five strategic levers included in the Pathways to an Inclusive Economy: The Future of Workers Task Force Blueprint, Gregory J Morris sat down with Laurie R. Dien, Vice President and Executive Director for Programs at The Pinkerton Foundation.



Gregory J Morris, NYCETC:  What do you think about metrics as a funder? What is it that you’re expecting to hear and learn from a provider?

Laurie R. Dien, The Pinkerton Foundation: Normally we ask our grantees to tell us what outcomes they will report on, but with workforce programs I expect specific program outcomes. Why is that? Because their bread and butter is the numbers: recruitment and enrollment, graduation from training, certifications earned, job placement and retention. And these numbers aren't collected for the sake of collecting data; they help pinpoint the bright spots and the pain points. What's working and what isn't? It's easy for funders to compare a body of workforce development programs by the numbers, but each program serves its own demographics, so they can't be lumped together and compared. One workforce program may work with a more vulnerable group than another. It's not effective to compare apples to oranges. Public/Private Ventures did the famous Apples to Apples report on benchmarking, which was very influential for the field. We need to know the context around each metric and what makes each metric a success or a challenge for different demographics.

I think if we, as a field, pulled together and were open about metrics, we could really drill down and learn what works and what doesn't. Of course, over time this all needs to be reassessed, because our knowledge of what works evolves. For example, the field is still trying to figure out the employer engagement piece. How do employers help shape training curricula so that we know we're not just training and praying? And how does that relate to placement? Another piece now is how employers make themselves sensitive to the different populations being hired.

Morris: What is the history of benchmarking? There was an effort to use benchmarking tools in the past. Why wasn't it adopted?

Dien: Outside of New York City, it was adopted. I think benchmarking was a bit ahead of its time when it started in New York City. Technology wasn't as advanced as it is now, so it fell away. People get tired. We talk about our workforce system being siloed; I think it is fatigued. But bringing benchmarking back makes sense to me. It's very pragmatic. We have great programs in New York City, but we're just not drilling down on what we've got. We're looking for the next new thing, and so is each new administration.

Workforce programs are also fatigued because of post-Covid hiring freezes and layoffs. Because organizations are so fatigued, we've lost ground. I don't think the agencies remember the value of data as a way of feeding knowledge back into their programs to make them better. Organizations shouldn't do this alone. One of the best components of benchmarking was the learning networks it created. Everybody shared their data and delved into it. Providers came together to analyze the data: What's working, what's not? How are you doing this? What can be done better? No one could say, "No, that's not possible," because someone was making "that" possible. It was a field of providers, the City's partners, working together.

There's no question that all funders are looking for outcomes from workforce development organizations. How we each come to it differs, and we're no better than the City. We're supposed to be the ones who listen to our grantees and try to help them. We should be helping our grantees figure out how to do their work better. We trust that the programs want to do quality work. Collectively, providers, funders and the City could decide on what outcomes should be reported, so that workforce development organizations could file one report and focus on their work.

Morris: What's it like to be in your position? I think we as providers, or historically I as a provider, didn't necessarily know what you were up against. I knew there was probably going to be some sort of template I had to respond to. I knew there was going to be some sort of timeline. I knew that you had a slate of organizations you were already funding or seeking to fund. I didn't know then what else you were going through.

Dien: I think the most important thing to remember is that, in most cases, this money is not "our money." We report to boards. It's each board's fiduciary responsibility to decide which grants get approved and which don't. Most of us also don't have unlimited funds, so difficult decisions need to be made.

Morris: There is an ongoing conversation about braided public and private funding and the establishment of a Talent Investment Fund. Is that a strategy you think can work as is?

Dien: Our board would not be interested in putting all our workforce money into a pool. They pay me and my colleagues to work with grantees; we take listening to and learning from our grantees very seriously, and we work alongside them to help them. As for promoting a unified system, I go back to learning communities of providers as an important tool for raising the quality of work in a field. First, we need to understand as a field what works. But even before that, we need the infrastructure to capture the data. Organizations need the technology, and they especially need to train staff on how to use platforms like Salesforce. A nice by-product is that you are investing in and lifting up your own staff, and hopefully you will be able to pay them more because they are building new skills. We need to fund infrastructure and create an army of data administrators for nonprofits. That would be a good Talent Investment Fund.