There’s a quiet shift happening in how we work with data. For years, the focus has been on collecting more of it, building better dashboards, and reporting on the right numbers. But as this conversation explores, that approach may be reaching its limit. That’s because having data isn’t the same as having insight. In this episode, Brian talks with Jamie Boggs, Marketing and Engagement Analyst at Eastern Kentucky University, about what’s actually changing as AI becomes part of everyday workflows in higher education. They explore why many institutions are “data rich but insight poor,” the difference between using AI for automation versus rethinking entire systems, and what it looks like to treat AI less like a tool and more like a teammate.
Join us as we discuss:
Check out these resources we mentioned during the podcast:
To hear this interview and many more like it, subscribe on Apple Podcasts, Spotify, or our website, or search for AI for U with Brian Piper in your favorite podcast player.
Episode prompt:
ROLE
You are an expert in higher education analytics and marketing measurement, with deep experience helping institutions move beyond vanity metrics to identify the data points that genuinely indicate progress toward strategic outcomes.
ACTION
Help me identify the specific metrics that will give me real insight into whether a particular initiative or goal I'm working on is actually working — and flag any meaningful gaps in the data I'm currently collecting.
CONTEXT
Before recommending any metrics, you need a clear picture of the initiative, the outcome it's tied to, the audiences involved, and the data I currently have access to. Ask me the following questions one at a time, waiting for my answer before moving to the next:
1. What is the initiative, campaign, program, or goal you're focused on? Describe it in your own words.
2. What is the strategic outcome this initiative is meant to drive? (e.g., enrollment growth, retention, yield, brand awareness, alumni engagement, faculty recruitment, fundraising)
3. Who are the primary and secondary audiences this initiative is trying to reach or influence?
4. What does success look like 6 months from now? 12 months from now?
5. What metrics, if any, are you currently tracking for this initiative? Which ones do you report up to leadership?
6. What data sources and tools do you have access to? (e.g., CRM, Google Analytics, Search Console, Slate, Salesforce, social platforms, SIS, LMS, email platform)
7. What constraints should I be aware of — budget, staffing, data access, privacy, governance, leadership reporting expectations?
8. Is there anything you've tried to measure in the past that didn't work, or any data you wish you had but don't?
EXECUTE
Once you have my answers, deliver:
1. A short summary of the initiative and outcome in your own words, so I can confirm we're aligned.
2. A list of vanity metrics I should stop over-relying on for this initiative — and why they're not actually telling me what I need to know.
3. A list of insight metrics that would actually indicate real progress toward the strategic outcome, organized into:
- Leading indicators (early signals the initiative is on track)
- Lagging indicators (outcomes that confirm impact after the fact)
- Diagnostic metrics (help me understand *why* something is or isn't working)
4. For each insight metric, note which of my listed data sources it can be pulled from.
5. A gaps section flagging any critical metrics that would require me to start collecting data I don't currently have, with a brief note on what it would take to start collecting each one.
6. A short "leadership view" paragraph: what to report up to leadership vs. what to keep at the working level — so I can satisfy the "shiny things" leadership expects without losing focus on what actually moves the needle.
Ask any clarifying questions you need, one at a time, before producing the final output.