replace many of the unstructured spreadsheet processes they perform each day, or at least replace many of the processing steps embedded within such processes (always remember the residual tail). In this section, we wish to highlight self-service data analytics as an indispensable evolving discipline that is rising to prevalence across medium-to-large organizations. This specific subset of data analytics will be one of the key focal areas of this book.
The flexible, customizable, low-code/no-code capabilities offered by self-service data analytics tools allow process owners to quickly structure a workflow to extract source data for processing, whether it is selected and consumed directly from systems, arrives as free text, as a clean data array in a flat file or spreadsheet, or as an image for OCR/ICR data extraction. Once data is extracted and ingested, it can be transformed (joined or enriched with additional datasets, filtered, subjected to mathematical operations, or reordered and reformatted) before the outputs are loaded to another system for further processing, or perhaps to a visualization application or dashboard for display.
These functions are commonly referred to as extract, transform, and load (ETL) capabilities, and ETL represents some of the most common use cases for self-service data analytics tools. Consider what operators in your respective organizations are doing in spreadsheets all day: very often they start with one or several system extracts, enrich them by joining in a number of other flat files or spreadsheet files, perform any number of operations on the dataset, and then transform the data into a specific output format, such as a report or the format required for load into another tool. As a last step, the enriched and transformed dataset can either be input directly to another system or tool or transmitted via any number of delivery methods.
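To make the ETL pattern concrete, here is a minimal sketch of such a workflow expressed in code rather than in a drag-and-drop designer. It is written in Python with the pandas library, and the file names, column names, and output format are hypothetical placeholders standing in for whatever system extracts and delivery formats your own process uses.

import pandas as pd

# Extract: pull in a core system extract and a supporting spreadsheet
# (file and column names are illustrative placeholders).
trades = pd.read_csv("system_extract_trades.csv")
reference = pd.read_excel("product_reference.xlsx")

# Transform: enrich via a key-field join, filter, calculate, and reshape.
enriched = trades.merge(reference, on="product_id", how="left")
enriched = enriched[enriched["status"] == "SETTLED"]
enriched["notional_usd"] = enriched["quantity"] * enriched["price"]
report = enriched[["trade_date", "product_name", "notional_usd"]]

# Load: write the output in the format the downstream consumer expects.
report.to_csv("daily_settlement_report.csv", index=False)

Once saved, a workflow like this can be rerun each day against fresh extracts, which is precisely the repeatability advantage that self-service tools offer over manual spreadsheet steps.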
Many readers who carry out their processing work in spreadsheets day in, day out could likely save time and reduce errors if their processes were migrated to a regimented workflow in self-service data analytics tools. If you think about the accounting, finance, and operations departments in your respective firms, how many office workers are spending a good portion of their days doing exactly these things? Are there hundreds? Are there thousands? What if one to two hours per day could be saved for each of them by adopting self-service data analytics tools? Would that free up thousands of hours? Could your organization save millions of dollars? Now we have your attention!
Of course, many of these steps could be eliminated by adding additional features and functionality to core systems. If there were interoperability between systems upstream, such that datasets were adequately rich within core systems, users might not be required to enrich them downstream outside of systems. No longer would they need to open six spreadsheets and use key fields and VLOOKUPs to pull back all the data required for a given processing operation. If system reporting suites were adequately rich and flexible, operators could forego the “transformation” steps they perform outside of systems to reorder fields or to reformat system outputs.
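As an illustration of that point, the multi-spreadsheet VLOOKUP pattern reduces to a handful of repeatable join steps once it is scripted or captured in a self-service workflow. The sketch below is a hypothetical example in Python with pandas; the workbook names and the shared key field are assumptions, not a prescription.

import pandas as pd
from functools import reduce

# A stand-in for the "open six spreadsheets and VLOOKUP" pattern:
# one base extract is enriched from several lookup workbooks,
# all keyed on a common field (here, "account_id").
base = pd.read_excel("base_extract.xlsx")
lookups = ["client_names.xlsx", "region_codes.xlsx", "fee_schedule.xlsx"]

# Each merge plays the role of a VLOOKUP, but joins the entire lookup
# table at once and behaves identically on every rerun.
enriched = reduce(
    lambda df, path: df.merge(pd.read_excel(path), on="account_id", how="left"),
    lookups,
    base,
)

enriched.to_excel("enriched_extract.xlsx", index=False)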
The authors submit that tactical self-service tooling should in no way replace core tech backlog delivery. Managers should continue to push the change apparatus in their respective organizations for the delivery of processing functionality in strategic applications. We have already discussed that lengthy wait times often accompany a full core tech backlog; however, there is no need to suffer while you wait. In many cases, process owners themselves can quickly structure their own processes in self-service analytics tools such as Alteryx, dramatically reducing the time spent performing processing in spreadsheet-based end-user computing tools (EUCs), and can even reduce the number of EUCs altogether. In Chapter 4, we will prescribe a control point to ensure that all tactical self-service data analytics builds can be cross-referenced back to an enhancement request in a strategic core technology system backlog. This ensures that tactical builds remain a stopgap measure with a limited shelf-life, in place only until the strategic solution can be delivered behind them.
In the meantime, tactical builds can pave the way for strategic change by forcing end-users to systematically think through and articulate requirements. Tactical builds can also serve as working proofs of concept for the requested system enhancements and can be used to demonstrate the required functionality as part of the requirements package handed off to technology. Further, once built, the tactical tool can be run in parallel to assist with user acceptance testing (UAT) as the strategic enhancements are developed in core systems behind it. Because the tactical tooling reduces the manual testing lift, this can significantly speed testing cycles and allow for broader, more comprehensive test coverage.
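As one illustration of how a tactical build can support UAT in this way, the sketch below reconciles the output of the tactical tool against the output of the new core-system enhancement over the same input data. It is a hedged example in Python with pandas; the file names, the business key ("record_id"), and the compared value column ("amount") are hypothetical. The point is simply that a mechanical comparison of the two outputs surfaces discrepancies far faster than eyeballing reports side by side.

import pandas as pd

# Outputs produced from the same source data during a parallel run
# (file and column names are illustrative placeholders).
tactical = pd.read_csv("tactical_tool_output.csv")
strategic = pd.read_csv("core_system_output.csv")

# Align the two result sets on their business key and compare side by side.
comparison = tactical.merge(
    strategic,
    on="record_id",
    how="outer",
    suffixes=("_tactical", "_strategic"),
    indicator=True,
)

# Records present on only one side, or with mismatched values, become UAT findings.
missing = comparison[comparison["_merge"] != "both"]
mismatched = comparison[
    (comparison["_merge"] == "both")
    & (comparison["amount_tactical"] != comparison["amount_strategic"])
]

print(f"{len(missing)} one-sided records, {len(mismatched)} value mismatches")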
There is one final point to make about self-service data analytics: it is not meant to be a bandage for a broken, overly convoluted, or inefficient process. Process owners should map out their processes from start to finish, preferably with swim lanes to readily identify inter-functional touchpoints with other parties and stakeholders (see the section Process Map (Swim Lanes) in Chapter 7 for an example of this artifact). They should take the opportunity to highlight and eliminate any low value-added steps, where possible. They should ask the Whys to understand the root causes of, and rationales for, any accommodation steps in their workflows. Only when process owners have distilled and rationalized their processes down to elegant simplicity should they embark on a tactical automation project with data analytics tools. The idea is not to take the pain out of a broken process with tactical automation, so that it is smoothed over, forgotten, and left to age with all of its pimples and warts, out of sight and out of mind. After all, pain points have a way of festering when left unaddressed.
Dashboarding and Visualization
As a freshman at Indiana University in the mid-1990s, one of your authors took an Introduction to Business class taught by Professor Tom Heslin. He didn't have to tell you he was from Brooklyn, as his accent stood quite apart in southern Indiana, but he loved to work his Brooklyn origins into his lectures two or three times each Thursday evening anyway, and his students loved him for it. Oddly, by contrast, he never mentioned that he was a Navy veteran and a World War II war hero. We did know that he had come to the Indiana University Kelley School of Business to teach and to give back, after a lengthy career at Bell Labs. He was no-nonsense and full of energy, and had a way of giving students punchy one-liners that would stick. Several come to mind, but one was “You gotta have a plan!” which he employed liberally to drill into his students that they must exercise forethought, be purposeful in their actions, and leave little to chance. But importantly for this section, when introducing the concept of business controls to bright-eyed freshmen, he said, “You can't manage it if you can't measure it.”
It is this last aphorism that has really stuck, and we mention it because in all of our businesses there are key metrics that are actively managed (and frequently reported) to allow individuals, managers, or executives to closely monitor process performance. These measures are referred to as key performance indicators (KPIs) and are used to measure and report on the health and performance of an organization, a division, a function, or even a single process within them. They tend to be some of the most widely reported numbers for internal audiences, and a portion of them may find their way to external stakeholders and regulators. A whole book could be written on how to make a thoughtful selection of KPIs for a given process, in order to convey health across a number of dimensions. However, for purposes of this book, we will assume that these have been arrived at separately and are effectively conveying business performance to allow for active and rigorous management. What we do want to cover in this section is the ways that KPIs can be compiled and displayed efficiently through the use of dashboards and visualization tools.
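As a small illustration of how KPIs might be compiled and rendered programmatically rather than assembled by hand each reporting cycle, the sketch below plots two hypothetical measures from a daily extract using Python with pandas and matplotlib. The file name, column names, and chart choices are assumptions made for the example; dedicated dashboarding tools accomplish the same end without any code.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical daily KPI extract: date, items processed, and error count.
kpis = pd.read_csv("daily_process_kpis.csv", parse_dates=["date"])
kpis["error_rate"] = kpis["errors"] / kpis["items_processed"]

fig, (ax_volume, ax_errors) = plt.subplots(2, 1, sharex=True, figsize=(8, 5))

# Volume as a bar chart (value comparison); error rate as a line chart (trend).
ax_volume.bar(kpis["date"], kpis["items_processed"], color="steelblue")
ax_volume.set_ylabel("Items processed")

ax_errors.plot(kpis["date"], kpis["error_rate"], color="firebrick")
ax_errors.set_ylabel("Error rate")
ax_errors.set_xlabel("Date")

fig.suptitle("Daily process KPIs")
fig.tight_layout()
fig.savefig("kpi_dashboard.png")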
Most, if not all, of our readers will be familiar with common data visualization formats: bar charts (to show value comparisons), line charts (to show time-series movements), scatterplots (to show large numbers of observations), and sparklines (for trending). Some may be familiar with hierarchical visualizations like tree diagrams, sunbursts, and ring charts. A more select few will be familiar with multidimensional data visualizations that can communicate more than one variable for each observation. Examples of multidimensional data visualizations include pie charts and stacked bar charts that show observation values relative to the