Improving Analytic Dashboards with AI

Terry Elliott, Market Trends

Analytic dashboards are attractive features of many business process management systems. Slick charts have helped Salesforce.com close many sales. Our sense, however, is that the ease with which dashboards can be created has led to analytics overload: too many dashboards with too many data points that have little to do with what the user is trying to accomplish.

Some of this problem is undoubtedly due to users not wanting to miss anything that might be important, so they try to track everything. Another factor is that the original dashboards shipped with the system were developed by “systems people” rather than by people in the field who understand their critical success factors. A third factor is that many of these systems report current status rather than the historical trends and forecasts that provide a basis for making decisions.

Jobscience provides systems for managing recruiting and contingent workforce deployment, and we have built our share of analytic dashboards. As the company’s Chief Data Scientist, I am working on applying Artificial Intelligence (AI) to make our dashboards more meaningful to our customers. In this role, I have interviewed many customers and reviewed the dashboards they have built. My conclusion is that many of these user dashboards have little to do with achieving the overall goals of their firms. There is too much reporting on factors that do not matter.

I first noticed this disconnect in the priority system we use internally to select system enhancement projects. Customers vote on what is most important, and analytics usually lands near the bottom of the stack. I don’t think this means our analytics are so good that they need no improvement; far more likely, our customers simply do not want to deal with more charts. Since my job is to improve those charts, this is cause for alarm.

As part of an internal research program, we are working with a group of customers to develop a new dashboard reporting system, and we have now compiled enough historical data to begin forecasting expected outcomes based on changing productivity patterns. One part of this project matched the internal analytics our customers developed against actual outcomes. For example, one customer used 28 factors to rank the productivity of their permanent placement recruiters. We compared these rankings with actual placement results in the following quarter and could find no correlation between the 28 factors and the results.
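To make this concrete, here is a minimal sketch of the kind of back-test we ran, assuming a pandas DataFrame of recruiter data. The DataFrame and its column names are illustrative placeholders, not our customer’s actual schema.

```python
import pandas as pd
from scipy.stats import spearmanr

def factor_backtest(recruiters: pd.DataFrame,
                    factor_cols: list[str],
                    outcome_col: str = "placements_next_qtr") -> pd.DataFrame:
    """Rank-correlate each candidate factor with next-quarter placements."""
    rows = []
    for col in factor_cols:
        rho, p_value = spearmanr(recruiters[col], recruiters[outcome_col])
        rows.append({"factor": col, "spearman_rho": rho, "p_value": p_value})
    # Sort by absolute correlation so any factor with real signal floats up.
    return pd.DataFrame(rows).sort_values(
        "spearman_rho", key=lambda s: s.abs(), ascending=False)
```

If no factor clears a sensible significance threshold once you correct for the 28 comparisons being made, the ranking is measuring activity rather than outcomes.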

What is lacking, we think, is a clear understanding of the goals at each level of the organization and how analytic dashboards can be developed to support these goals by tracking performance and forecasting outcomes.  AI can play a role in “learning” to forecast earlier and more accurately, and in recommending corrective action as required, but there has to be a goal to compare progress against.

For example, the Salesforce analytics firm InsightSquared conducted a study of recruitment and staffing firms a few years ago that reported that the fastest-growing quartile of permanent placement agencies had significantly more job orders per recruiter than the other firms. Jobscience’s own studies suggest that time-to-fill is a primary factor in agency profitability. By plotting job orders per recruiter against time-to-fill per recruiter, an agency could determine the right balance between growth and profitability, as well as track recruiter performance against these goals. In this way, the analytics are tied to what you are trying to accomplish.
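A sketch of that plot follows. The data columns and the goal thresholds are hypothetical stand-ins that an agency would replace with its own targets.

```python
import matplotlib.pyplot as plt
import pandas as pd

def plot_growth_vs_profitability(recruiters: pd.DataFrame,
                                 order_goal: float = 12.0,
                                 fill_goal_days: float = 30.0) -> None:
    """Scatter each recruiter by job-order load (growth proxy) against
    time-to-fill (profitability proxy), with goal lines as quadrants."""
    fig, ax = plt.subplots()
    ax.scatter(recruiters["job_orders"], recruiters["time_to_fill_days"])
    # Recruiters right of the vertical line carry enough orders for growth;
    # those below the horizontal line fill fast enough for profitability.
    ax.axvline(order_goal, linestyle="--", color="gray")
    ax.axhline(fill_goal_days, linestyle="--", color="gray")
    ax.set_xlabel("Open job orders per recruiter")
    ax.set_ylabel("Median time-to-fill (days)")
    ax.set_title("Recruiter performance vs. agency goals")
    plt.show()
```

The quadrant that is right of the order-goal line and below the fill-goal line is where growth and profitability coexist; the chart makes it obvious who is there and who is drifting away.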

I also need to address the issue of data-point overload. Many users do not want to miss anything that might be important, so they develop dashboards that try to track everything. The result is so many analytic data points that it all becomes noise. Many industry organizations and commentators publish suggested “Key Performance Indicators” (KPIs), “Productivity per Head” (PPH) metrics, and “Objectives and Key Results” (OKRs), and they typically suggest no more than three to five goals and key-result measures. These sound like reasonable guidelines, and the analytic dashboards designed to track progress against these goals should observe the same limits to concentrate user attention on achieving them.

“Principal Component Analysis” (PCA) is an important technique in AI and statistics based on the thesis that, while there are numerous factors that could serve as predictive variables, only a few “principal components” carry significant predictive power. The rest is noise. Keep this in mind when you consider the dashboards you use today, and think about dashboards that could be truly meaningful in measuring progress toward the few goals and key results you are trying to accomplish this quarter.
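For readers who want to see what this looks like in code, here is a minimal PCA sketch, assuming a NumPy matrix X of recruiter metrics (rows are recruiters, columns are the many tracked factors). The 90% variance threshold is an illustrative choice, not a recommendation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def count_principal_components(X: np.ndarray,
                               variance_target: float = 0.90) -> int:
    """Return how many components explain `variance_target` of the variance."""
    # PCA is scale-sensitive, so standardize each metric first.
    X_std = StandardScaler().fit_transform(X)
    pca = PCA().fit(X_std)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    # First index where cumulative explained variance reaches the target.
    k = int(np.searchsorted(cumulative, variance_target)) + 1
    print(f"{k} of {X.shape[1]} components explain "
          f"{cumulative[k - 1]:.0%} of the variance")
    return k
```

If only a handful of components clear the threshold, that is the quantitative version of the three-to-five-measure guideline above.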