Frequently Asked Questions
Who is this product designed for?
MountainTop Metrics is designed for:
- Learning & Development leaders
- Program owners
- Executives who need clear answers, not raw data
No statistical or technical expertise required.
What data is required for a learning program evaluation?
To conduct a structured learning program evaluation, we require basic training data such as:
- Eligible learners
- Enrollments
- Completions
- Completion dates
Optional but valuable inputs include:
- Participation targets
- Completion targets
- Expected time-to-complete
- Pre- and post-training performance measures
- Certification or attainment results
Performance outcome data is optional. The course health assessment remains fully functional without it.
Do we need impact or performance data to measure training effectiveness?
No.
MountainTop Metrics separates course execution health from performance outcomes. If outcome data exists, it is incorporated into the course performance analysis. If it does not, the system clearly indicates that outcome evidence is unavailable.
There is no forced ROI calculation or artificial impact modeling.
Why not just look at completion rate?
Because completion rate alone is misleading.
A course can have:
- High completion but low participation (no reach)
- High participation but poor completion (content issue)
- Decent completion but excessive time to finish (friction issue)
MountainTop Metrics prevents incorrect conclusions by identifying the weakest constraint, which is typically the true root cause.
Is this a dashboard or an L&D analytics reporting tool?
No.
This is not a generic L&D dashboard. MountainTop Metrics delivers a structured course evaluation framework designed to support executive decision-making.
The focus is on:
- Clear performance signals
- Clear course health status
- Clear recommendations
It is built for decisions — not exploratory reporting.
How is this different from a dashboard?
Dashboards require interpretation.
MountainTop Metrics provides answers.
Instead of presenting dozens of charts and KPIs, the platform:
- Diagnoses the problem
- Explains the cause
- Recommends the action
This reduces analysis time and enables consistent, defensible decisions across teams.
What problem does this actually solve?
It solves decision paralysis.
Organizations often collect learning data but struggle to answer:
- Which courses are worth scaling?
- Which need intervention?
- Which should be redesigned or retired?
MountainTop Metrics turns learning data into decisions, not reports.
Isn’t this just dressing up activity metrics?
Short answer: No — the metrics are inputs. The value is in the logic.
MountainTop Metrics intentionally uses familiar activity data (enrollment, completion, time to complete) — but it does not report them independently. The platform applies a structured diagnostic framework that translates those signals into clear decisions.
Traditional dashboards answer:
“What happened?”
MountainTop Metrics answers:
“Why did it happen — and what should we do next?”
What’s the analytical strategy behind the product?
The strategy is based on constraint-based diagnostics, not raw reporting.
Every learning program can fail for different reasons:
- People never enroll
- Learners drop off partway through
- The experience is too difficult or inefficient
MountainTop Metrics separates these into three independent signals:
- Reach (participation)
- Completion (content value)
- Friction (experience difficulty)
Each signal is evaluated independently and then combined to determine overall course health and the appropriate action.
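The evaluate-independently-then-combine step above could be sketched roughly as follows. This is an illustrative sketch only: the signal names mirror the framework described here, but the thresholds, rating tiers, and combination rule are hypothetical placeholders, not MountainTop Metrics' actual logic.

```python
def rate_signal(value: float, good: float, weak: float,
                higher_is_better: bool = True) -> str:
    """Rate one signal as 'strong', 'moderate', or 'weak' (hypothetical tiers)."""
    if not higher_is_better:
        value, good, weak = -value, -good, -weak
    if value >= good:
        return "strong"
    if value >= weak:
        return "moderate"
    return "weak"

def course_health(participation_rate: float, completion_rate: float,
                  days_over_expected: float) -> dict:
    """Rate each signal independently, then find the weakest constraint."""
    signals = {
        "reach": rate_signal(participation_rate, good=0.60, weak=0.30),
        "completion": rate_signal(completion_rate, good=0.80, weak=0.50),
        # For friction, lower is better: days beyond the expected duration.
        "friction": rate_signal(days_over_expected, good=0.0, weak=7.0,
                                higher_is_better=False),
    }
    order = {"weak": 0, "moderate": 1, "strong": 2}
    weakest = min(signals, key=lambda s: order[signals[s]])
    healthy = all(r == "strong" for r in signals.values())
    return {"signals": signals, "weakest_constraint": weakest, "healthy": healthy}
```

For example, a course with 90% participation but 20% completion would rate "strong" on reach and "weak" on completion, so the weakest constraint (and likely root cause) is content value, not awareness.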
How long does a course performance analysis take?
Once data is received, a single-course evaluation can typically be completed within a few business days.
The onboarding process is standardized to minimize technical setup and reduce implementation friction.
Can this work with our LMS or learning platform?
Yes.
The system is LMS-agnostic. As long as your learning management system can export training data (CSV or Excel), it can be evaluated.
No system integrations or API development are required.
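In practice, "LMS-agnostic" intake of a CSV export might look like the sketch below. The column names are hypothetical examples; any export containing equivalent fields works.

```python
import csv

# Hypothetical field names standing in for the required training data
# (eligible learners, enrollments, completions, completion dates).
REQUIRED_FIELDS = {"learner_id", "enrolled", "completed", "completion_date"}

def load_training_export(path: str) -> list[dict]:
    """Read a CSV export and confirm the required fields are present."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_FIELDS - set(reader.fieldnames or [])
        if missing:
            raise ValueError(f"Export is missing required fields: {sorted(missing)}")
        return list(reader)
```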
What if our training data isn’t perfect?
Most organizations do not have perfectly structured learning data.
The framework is designed to:
- Work within common data limitations
- Identify data sufficiency clearly
- Avoid overstating conclusions
Confidence indicators are included in every report to reflect data quality and reliability.
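One way such a confidence indicator could be derived is from which optional inputs are present. The tiers, cutoffs, and field list below are illustrative assumptions, not the product's actual rules.

```python
# Hypothetical optional inputs (mirroring the "optional but valuable"
# list earlier in this FAQ).
OPTIONAL_INPUTS = [
    "participation_target",
    "completion_target",
    "expected_time_to_complete",
    "performance_measures",
]

def confidence_level(available_fields: set[str]) -> str:
    """Map the share of optional inputs present to a confidence tier."""
    present = sum(1 for f in OPTIONAL_INPUTS if f in available_fields)
    share = present / len(OPTIONAL_INPUTS)
    if share >= 0.75:
        return "high"
    if share >= 0.50:
        return "medium"
    return "low"
```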
How are recommendations determined?
Recommendations are based on a structured review of:
- Participation versus expected reach
- Completion quality and learner follow-through
- Time-to-complete friction
- Performance outcomes, when available
The primary driver influencing the course is identified, and directional guidance is provided (scale, monitor, redesign, or defer).
How are signals used to generate recommendations?
Recommendations are deterministic and explainable.
Each recommendation directly maps to a diagnosed constraint:
| Identified Issue | Recommendation |
|---|---|
| Low Reach | Increase Awareness |
| Low Completion | Improve Content |
| High Friction | Reduce Friction |
| Strong Signals | Maintain or Scale |
There is no subjective interpretation required.
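Because the mapping above is one-to-one, it can be expressed as a simple lookup; the sketch below restates the table directly (the key names are illustrative).

```python
# Each diagnosed issue keys exactly one recommendation, so the
# output is deterministic: same diagnosis, same recommendation.
RECOMMENDATIONS = {
    "low_reach": "Increase Awareness",
    "low_completion": "Improve Content",
    "high_friction": "Reduce Friction",
    "strong_signals": "Maintain or Scale",
}

def recommend(identified_issue: str) -> str:
    """Return the recommendation for a diagnosed constraint."""
    return RECOMMENDATIONS[identified_issue]
```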
Is this predictive analytics or AI?
No — and that’s intentional.
MountainTop Metrics is decision analytics, not black-box prediction.
Instead of producing opaque scores or statistical outputs, it uses:
- Transparent rules
- Explainable signals
- Repeatable logic
This ensures stakeholders can trust, understand, and act on the results.
Does this calculate ROI on training programs?
No.
MountainTop Metrics evaluates training effectiveness and course performance signals. It does not produce financial ROI models.
If financial impact analysis is required, that would be a separate engagement.
Can multiple courses be evaluated?
Yes — one course at a time.
The framework is intentionally structured to prevent portfolio distortion and ensure each course receives focused analysis.
Is this a one-time report or ongoing evaluation service?
It can be either.
Some organizations request a single course evaluation. Others incorporate recurring learning performance reviews into their governance process.
