
Upstream vs. Downstream Systems: Challenges and Mind-sets in System Modernization


When it’s time for a system modernization or replacement project for carriers, there are any number of challenges that must be recognized and addressed to get the highest ROI and lowest risk from the effort. Build (and who should do the work) vs. buy (and which vendor)? Who are the constituents of the system, and how can they best have their say in what the system will do? Who’s paying, and how much do they have to spend? How do you shoehorn a major system upgrade into the working lives of operations staff who are already stretched too thin?

So the PMO springs into action.  A project manager is assigned. Defined and documented company procedures are brought to bear on the problem.  Formal system requirements are gathered, the budget is identified, and a project is set into motion. There’s nothing new here; we’ve all seen it before.

What I would propose, however, is that we need a new mindset about the systems themselves. Too often those formal project methodologies take a one-size-fits-all approach to problems that are very different. Upstream and downstream systems address fundamentally different business processes and problems, and those differences must be taken into account.

Upstream Systems

Upstream systems are the systems of record for the pillars of the company’s operations.  In insurance, policy management and producer management are the poster children for the concept of systems of record and sources of truth.  They are self-contained; they own the data and processes in each of their domains.

By definition, a policy doesn’t exist if it isn’t in a policy management system.  Equally, no agents or agencies exist if a producer management system doesn’t say they do.  Whatever operational processes have been defined for these systems determine how data is entered and validated, how it will be stored, and how other systems might or might not use it.

And on that subject, upstream systems tend to be built to reflect their own needs and processes. If the data in the upstream system is needed elsewhere, well, knock yourself out sourcing it, interpreting it, and using it. The system will be self-sufficient, but it is not often architected to play well with others. ‘User experience’ wasn’t a thing when a lot of these systems were designed, and enterprise architects hadn’t even been born yet when the systems were implemented.

The thing to remember about upstream systems is that they cannot be wrong!  Whatever they tell us is regarded as the truth, at least until we discover otherwise.

Downstream Systems

Downstream systems are systems that are dependent on data from upstream systems.  They acquire data from the upstream system, provide additional processing, perhaps perform rule-based calculations on it, and, in general, interpret upstream system data to provide additional business value.  They are the systems of record for the data and analytics they create, but they are not the owners of the data they consume. And of course, a downstream system can be an upstream system for other downstream systems, too.

An example of a downstream system in the insurance or financial services world would be an incentive compensation management (ICM) system. An ICM system consumes data from the policy management and producer management systems in order to calculate commissions and bonuses for the producers on the policies they service. Without transaction and policy data from the policy management system and producer data from the producer management system, an ICM system is unable to perform its tasks. An ICM system will be the source of truth for the commissions and bonuses it calculates, but it is dependent on data that it does not own or control to generate its results.
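To make that dependency concrete, here is a deliberately toy sketch (the field names and the flat-rate commission rule are my own illustrative assumptions, not any real ICM product’s design). The point is simply that even the most basic calculation needs a row from each upstream feed:

```python
from dataclasses import dataclass


@dataclass
class PolicyTransaction:      # fed from the policy management system
    policy_id: str
    premium: float
    agent_id: str             # the agent-of-record, per the upstream feed


@dataclass
class Producer:               # fed from the producer management system
    agent_id: str
    commission_rate: float    # per the producer's contract


def calculate_commission(txn: PolicyTransaction,
                         producers: dict[str, Producer]) -> float:
    """A toy commission rule: premium times the producer's contract rate.
    Neither input originates in the ICM system itself."""
    return txn.premium * producers[txn.agent_id].commission_rate


producers = {"A-17": Producer("A-17", 0.05)}
txn = PolicyTransaction("P-1001", 1200.00, "A-17")
print(calculate_commission(txn, producers))   # 60.0
```

Real ICM rules involve splits, hierarchies, and bonus schedules, but the dependency structure is the same: remove either upstream feed and the function above has nothing to work with.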

Downstream System Interactions with Upstream Systems

Downstream systems depend on upstream systems for the raw materials they use to do their jobs.  Upstream systems are, by definition, ‘true’, even if they are wrong. When upstream systems have an error brought to their attention, they often overwrite the data to bring it in line with the new reality.  When legacy systems were designed and implemented decades ago, versioning and date-effectivity of data were far too expensive from a storage and processing point of view. But in a business that’s still using paper applications, fat-fingering can and does occur when the forms are transcribed into the policy admin systems.  So that attribute was mistyped? We’ll just overwrite it with the correct information instead.

But what if the downstream system has already consumed a prior version of that row of data, performed its magic, and generated results from it?  In the ICM system, that result might have been commission checks for an agent hierarchy, generated by contract rules that use the attributes of that row of data to decide who gets the commission and how much.

An obvious example of the impact of dynamic, upstream data is the identification of the agent-of-record for a policy.  We thought Agent Larry was servicing this policy, but he retired and sold his book of business to Agent Mary last month and we only discovered it today.  So calculations based on Larry being agent-of-record are now wrong. The commissions we generated for Larry in the last month must be reversed and recalculated for Mary instead.
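To make the Larry-to-Mary scenario concrete, here is a minimal sketch of the reverse-and-recalculate step. Everything in it (the ledger entry shape, the monthly periods, the `calc_commission` callback) is a hypothetical simplification, not a description of how any particular ICM product handles retroactivity:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class CommissionEntry:
    policy_id: str
    agent_id: str
    period: str           # e.g. "2024-05"
    amount: float
    reversed: bool = False


def period_start(period: str) -> date:
    year, month = period.split("-")
    return date(int(year), int(month), 1)


def apply_agent_of_record_change(ledger, policy_id, old_agent, new_agent,
                                 effective, calc_commission):
    """Reverse commissions paid to the old agent for periods on or after
    the effective date, then recalculate them for the new agent.
    `calc_commission(policy_id, agent_id, period)` stands in for the real
    contract-rules engine."""
    adjustments = []
    for entry in ledger:
        if (entry.policy_id == policy_id
                and entry.agent_id == old_agent
                and not entry.reversed
                and period_start(entry.period) >= effective):
            # Offsetting entry: negate the payment that should not have gone out.
            adjustments.append(CommissionEntry(
                policy_id, old_agent, entry.period, -entry.amount))
            entry.reversed = True
            # Recalculate the same period under the new agent's contract rules.
            adjustments.append(CommissionEntry(
                policy_id, new_agent, entry.period,
                calc_commission(policy_id, new_agent, entry.period)))
    ledger.extend(adjustments)
    return adjustments
```

Note that the original entries are never deleted: the ledger carries offsetting entries instead, so there is an audit trail showing what was paid, when, and why it changed.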

So we find ourselves with (at least) two problems that arise out of changes to upstream data: how do we learn about them downstream, and what provision must be made to account for the recalculated results that might be generated based on those changes?  Determining the answer to these questions is a fundamental difference between defining requirements for upstream vs. downstream systems.

Upstream systems are always right (at this moment, at least), so what you are looking for if you modernize is good workflows and reporting from a new system, and a vendor that treats you right if you decide to buy an off-the-shelf application.  Downstream systems have a different problem because they have to manage potentially dynamic data across its life cycle.  Any competent system designer will build a system that is right the first time it sees a row of data; knowing what to do with that row the second/fifth/eighth time we see it is thornier, and it is an issue that must be addressed in defining and implementing the system.

In defining requirements for a downstream system, operational processes are as much a first-class problem as the data processing requirements are.  Changes to upstream data must be communicated downstream in a reliable and robust way. That’s an operational problem as much as a system requirement.  And dealing with the impact of changed calculations must be built into the processing and reporting requirements of the downstream systems along with the base formulas and rules themselves.
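One pattern that speaks to both requirements (offered as a sketch, not a prescription) is to stop letting the latest upstream feed overwrite what came before. If the downstream system keeps every version of an upstream row along with when it arrived, a recalculation can always be traced back to the exact data that drove the original result:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class VersionedRecord:
    """Keeps every version of an upstream row instead of overwriting,
    so a downstream result can be traced to the data it was built on."""
    key: str
    versions: list = field(default_factory=list)  # [(received_at, payload)]

    def ingest(self, payload: dict) -> int:
        """Record a new version from the upstream feed; the returned index
        can be stored alongside any result calculated from it."""
        self.versions.append((datetime.now(timezone.utc), payload))
        return len(self.versions) - 1

    def latest(self) -> dict:
        return self.versions[-1][1]

    def as_of(self, when: datetime) -> dict:
        """What did the upstream system say as of `when`? This is the
        question an audit of an old commission run needs answered."""
        prior = [payload for ts, payload in self.versions if ts <= when]
        if not prior:
            raise LookupError(f"no version of {self.key} existed at {when}")
        return prior[-1]
```

The storage cost that made this impractical decades ago is no longer the obstacle; the harder part is the operational discipline of getting every upstream change into the feed in the first place.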

Project methodologies often don’t reflect these differences.  Taking a static view of upstream data can introduce risk to downstream system implementation projects.  The math is easy, but the data driving it can be dynamic, and that reality must be recognized and addressed, both in the system requirements and in the mindset you bring to defining them.

About David Kelly

Founder of BobTrak, Inc., providing best-of-breed tracking, management, and analytics of insurance distribution channels. Author of "The Book on Incentive Compensation Management: The Systematic Administration of Variable Compensation in the Enterprise", and of "Out of the Box, or Out of the Question: What Won't Your Incentive Compensation Management System Do?"; A thought-leader in the Enterprise Incentive Management / Sales Performance Management space; Skilled in creating accurate, maintainable, high-performing system architectures to solve complex business challenges. Please reach out to me at davidkATbobtrakDOTcom if I can help!

