[List] 5 more best practices for custodial reconciliation

Thanks to all the responses I received from my previous article, Best Practice: 5 Considerations for Custodial Reconciliation, I was inspired to share 5 MORE Considerations for a sound Custodial Reconciliation process.  Please share your thoughts and let me know what you think of this list.

6. Clearly classify current reconciling items into research buckets.    

Aside from the standard TOEC formulas, an additional set of calculations that may be included within the Bank Reconciliation process is the automatic classification of reconciling items by research bucket. This classification applies to current outages and is intended to help analysts in their research when identifying reconciling items. The following list represents a highly generalized classification of reconciling items by research category, using data that should already be present to fuel the process. Far more refined research categories (and perhaps even automatic reconciling item identification!) may be accomplished depending on the overall quality, consistency and detail available within the source data inputs. The research categories below also represent a hierarchy, meaning that items are classified as they meet the criteria of each bucket in the order listed (a minimal classification sketch follows the list):

  • Paid in Full – Check for ending actual remittance balance to be zero along with a payoff date provided in the source data.
  • Liquidation – Applies to S/S remittance deals only. This classification is given when the actual remittance balance is zero and both the beginning and ending scheduled remittance values are not zero.
  • Reinstatement – Check for the beginning scheduled remittance balance to be zero and the ending scheduled remittance balance to not be zero. Also check for the beginning actual balance to be zero and the ending actual balance to not be zero as either of these conditions can be true for a reinstatement.
  • Modification – Simple; check if there is a modification date provided in the data.
  • Stop Advance – Similar to Modification; check for a stop advance date provided in the data.
  • Miscellaneous – This category catches any outages not linked to a research category above.
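
To make the hierarchy concrete, here is a minimal sketch in Python of how these buckets might be evaluated in order. The field names (sched_begin, actual_end, payoff_date and so on) are hypothetical placeholders for whatever your source data actually provides, not a prescribed layout.

    def classify_item(item):
        # Hierarchy: the first matching bucket wins (Paid in Full first, Miscellaneous last).
        # All field names below are illustrative assumptions, not a specific system's schema.
        if item["actual_end"] == 0 and item.get("payoff_date"):
            return "Paid in Full"
        if (item.get("remit_type") == "S/S" and item["actual_end"] == 0
                and item["sched_begin"] != 0 and item["sched_end"] != 0):
            return "Liquidation"
        if ((item["sched_begin"] == 0 and item["sched_end"] != 0)
                or (item["actual_begin"] == 0 and item["actual_end"] != 0)):
            return "Reinstatement"
        if item.get("mod_date"):
            return "Modification"
        if item.get("stop_advance_date"):
            return "Stop Advance"
        return "Miscellaneous"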

7. Apply a standard description to reconciling items.    

This one is key. Any outages or reconciling items resulting from the Bank Reconciliation process should be identified and categorized using a standard description that is meaningful to the business (i.e. reason codes). We suggest defining a comprehensive list of coded values representing all the different types of reconciling items in a typical reconciliation cycle. For example, consider grouping all reconciling items related to liquidations under a standard notation – LIQ1 to represent liquidation net loss, LIQ2 to record a service fee outage, and so forth. Another important detail to add to this master list of standard descriptions is the expected resolution type – in other words, whether the item is expected to be resolved via wire, remittance adjustment or perhaps a system-level adjustment, as would be the case for non-cash outages. Maintaining discipline with this consideration helps in fulfilling #8 below.
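
As a hedged illustration, the master list of reason codes and expected resolution types could live in something as simple as a lookup table. LIQ1 and LIQ2 come from the example above; the third code and all the resolution assignments are hypothetical.

    # Sketch of a reason-code master list; only LIQ1/LIQ2 appear in the text above,
    # and the resolution types shown are assumptions for illustration.
    REASON_CODES = {
        "LIQ1": {"description": "Liquidation net loss", "resolution": "wire"},
        "LIQ2": {"description": "Liquidation service fee outage", "resolution": "remittance adjustment"},
        "NC01": {"description": "Non-cash outage (hypothetical)", "resolution": "system-level adjustment"},
    }

    def expected_resolution(code):
        return REASON_CODES.get(code, {}).get("resolution", "unknown")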

8. Meticulously clear ageing reconciling items; start with the oldest first.    

It is both tempting and (theoretically) time effective to simply bump the list of reconciling items against wires /remittance adjustments by amount and delete these off the spreadsheet as resolved items. Unfortunately, any virtue found in this approach quickly disappears when a discrepancy is identified a couple of months later (i.e. a wrong item was cleared) and an analyst is tasked with unraveling the components in an effort to correct the issue. We recommend implementing a mechanism for tracking the resolution of reconciling items that also ensures the correct wire /remittance adjustment is paired with the intended outage. Adopting a practice of applying standard descriptions along with an expected resolution type as suggested in #7 addresses the first part of this recommendation. A solution to the second part of the recommendation, pairing wires /remittance adjustments to outages, is offered under #9 below.

9. Optimize Wire /Remit Adjustments for future clearing.   

This suggestion may require some coordination to accomplish and some discipline to maintain, but the added value of this effort will be well worth the work. The simplest and most effective way to properly pair wire /remittance adjustments to corresponding reconciling items is to link these together using a common reference number. Implement this consideration by assigning a unique reference number to outages identified during the current period. If a standard description and corresponding resolution type are assigned to each reconciling item as suggested in #7, a listing of required wires and remittance adjustments should be readily available at the conclusion of each Cutoff. Passing along this unique reference number to the wire /remittance adjustment request as the transaction identifier creates an immediate link between both items that can be leveraged for clearing. The real trick to making this work is convincing the downstream processors (i.e. Investor Reporting and Treasury, or whichever team is responsible for wires) to include this value as part of their process from request through transaction settlement. As an extra credit bonus, include a unique identifier for these transactions at the account-level as well (i.e. remember, items in TOEC are at loan /pool-level but these settlements typically disburse as a rolled-up transaction by bank account). This additional step will save a lot of time pairing bank statement items to corresponding book wires, thus enabling book-to-bank reconciliation for Cashbook.
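
For illustration only, the clearing step might then be reduced to a match on the shared reference number rather than on amount alone. The field names below are placeholders for whatever your outage tracker and wire /remittance adjustment feed actually carry.

    # Sketch: clear outages by pairing on the reference number passed through
    # from request to settlement; amounts are compared only as a double-check.
    def pair_settlements(outages, settlements):
        by_ref = {s["reference_number"]: s for s in settlements}
        cleared = []
        for outage in outages:
            match = by_ref.get(outage["reference_number"])
            if match and round(match["amount"], 2) == round(outage["amount"], 2):
                cleared.append((outage, match))
        return cleared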

10. Track and measure the process.    

All the considerations leading up to this one focus on ensuring a sound Bank Reconciliation end result, which is fantastic. However, visibility and metrics gathering over the process as it is happening in real-time distinguishes a proactive team from a reactive team. What’s the difference? A reactive team sees smoke and eventually reaches the fire with whatever tools happen to be on-hand to try to extinguish the flames; a proactive team sees the spark that started the fire. This level of visibility is afforded by adopting well-defined work assignments and developing a dashboard to track the resulting metrics. We recommend doing what most companies already do: create a spreadsheet to assign analyst resources to specific Bank Reconciliation reports, but we push it one step further by suggesting the inclusion of triggers to track the progress within a Cutoff as it is happening. Create a spreadsheet or tool that listens for status changes in Bank Recon reports (i.e. Pending to Approved) as well as a means to collect metrics (i.e. number of reconciling items by ageing or number of items resolved vs. outstanding) in an effort to get a meaningful pulse of the process as a whole. The development of the dashboard is certainly an evolutionary process; the trick is to subscribe to this mentality (or management overview philosophy, if that terminology is more fitting). Either way, evaluating the health of a process needs to occur as the process is happening and not after the process is completed – test this statement by applying it to a living body. Find creative metrics (and corresponding triggers) to track the process as it is unfolding to prevent a spark from becoming a forest fire.
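
A minimal sketch of the metrics piece, assuming each report record exposes a status and its reconciling items carry an ageing bucket and a resolved flag; none of these field names come from a specific system.

    from collections import Counter

    # Sketch: roll up in-flight Cutoff metrics from report records.
    def cutoff_metrics(reports):
        items = [i for r in reports for i in r["items"]]
        return {
            "reports_by_status": Counter(r["status"] for r in reports),  # e.g. Pending vs. Approved
            "open_items_by_ageing": Counter(i["ageing_bucket"] for i in items if not i["resolved"]),
            "resolved_vs_outstanding": Counter(
                "resolved" if i["resolved"] else "outstanding" for i in items),
        }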

 What considerations can you share about how you manage your Bank Reconciliation business process?

[List] 5 best practices in custodial reconciliation

An accurate and effective Custodial Reconciliation process is the cornerstone of a healthy Investor Accounting function.  Completing a monthly Bank Reconciliation for each Custodial account (i.e. P&I and T&I) accomplishes two important goals: (1) it confirms the Custodial bank account is in balance at the aggregate level; and (2) it ensures the individual loans /pools within the Custodial account are also in balance by performing Test of Expected Cash (TOEC) calculations.  Every company servicing loans in-house adopts some sort of Bank Reconciliation process – after all, it is a compliance requirement under Regulation AB.  Here are 5 considerations for building an optimum Bank Reconciliation business process based on our experiences working with companies like yours:

1. Clearly define Cutoff start and end dates.    

I know it sounds intuitive, but we’ve seen this mistake consistently – make sure that there is no overlap between Cutoff start and end dates as you define processing calendars. More importantly, verify that all activity considered by the process is restricted to this range; that is, all wires, remittances, remit adjustments, servicing data and investor reporting inputs must fall within the criteria. Not following this simple guideline will lead to a lot of transactional “noise” and incorrect TOEC calculations.
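
A sanity check along these lines is trivial to automate; this sketch assumes each transaction carries an effective date, and the field name and dates are purely illustrative.

    from datetime import date

    # Sketch: flag any activity dated outside the Cutoff window as transactional "noise".
    def outside_cutoff(transactions, start, end):
        return [t for t in transactions if not (start <= t["effective_date"] <= end)]

    # Illustrative usage: noise = outside_cutoff(wires, date(2021, 6, 1), date(2021, 6, 30))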

2. Roll from a previous period.    

Again, it may sound intuitive, but it is surprising the number of companies we’ve seen that essentially “start anew” with their Bank Reconciliation process. Lesson learned – live with your results (and calculations). The true power of a Bank Reconciliation summary is in rolling it forward; in other words, tie the Beginning Balance of the current period to the Ending Balance of the previous period. The value of this practice is accentuated for PLS. For these reconciliation reports, the following balances should roll forward from a previous period: (a) cashbook balance; (b) beginning scheduled balance for the expected remittances; (c) beginning actual balance per the actual UPB; and (d) actual remittance rolled forward from last period’s expected balance.
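
A hedged sketch of that roll-forward tie-out; the balance names mirror items (a) through (d) above and stand in for whatever your summary report actually calls them.

    # Sketch: each current-period beginning balance should equal the corresponding
    # prior-period ending (or expected) balance; return the names of any breaks.
    def roll_forward_breaks(prior, current, tolerance=0.01):
        checks = {
            "cashbook": ("ending_cashbook", "beginning_cashbook"),
            "scheduled_remittance": ("ending_scheduled", "beginning_scheduled"),
            "actual_upb": ("ending_actual", "beginning_actual"),
            "actual_remittance": ("expected_remittance", "actual_remittance"),
        }
        return [name for name, (prior_key, current_key) in checks.items()
                if abs(prior[prior_key] - current[current_key]) > tolerance]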

3. Ensure the bank account is in balance before digging into loan /pool level balances.       

Often overlooked and oversimplified, the Cashbook process serves a fundamental purpose in achieving an accurate and effective Bank Reconciliation process. It is important to actively balance the custodial bank account as a precursor to performing reconciliations at the loan /pool level rather than assuming that any discrepancies will simply float to the surface.

4. Perform simple data integrity checks.    

Because we are working with two related but DISTINCT data sets when performing Bank Reconciliations (i.e. bank statement and cashbook vs. servicing and investor reporting inputs), there are some simple data integrity checks that can help verify that source data is complete and accurate before relying on these values for balancing. It is a good idea to perform the following sanity checks: 

  • Test that wire and remit adjustment amounts collected at the loan /pool level roll up to the amount reported at the bank account level. In theory, these values are extracted from the same transaction set; however, invalid translations /mappings between bank accounts and associated loans /pools can lead to different results. The potential for this discrepancy is magnified as the volume of bank accounts and loans /pools increases.
  • Perform a Custodial Reconciliation Difference calculation to verify that all necessary inputs are collected in the Test of Expected Cash calculation. The sum of (a) P&I advance; (b) remittance adjustments; and (c) current period outages /reconciling items should equal zero, thus certifying that Servicing and Investor Reporting inputs are captured completely and accurately.
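
Both checks boil down to a couple of lines of arithmetic. The sketch below is illustrative only; the inputs and sign conventions are assumptions about your source data.

    # Sketch: loan /pool-level wires and remit adjustments should roll up to the account-level figure.
    def rollup_matches(loan_level, account_total, tolerance=0.01):
        return abs(sum(t["amount"] for t in loan_level) - account_total) <= tolerance

    # Sketch: Custodial Reconciliation Difference should net to zero when
    # Servicing and Investor Reporting inputs are complete and accurate.
    def recon_difference(pi_advance, remit_adjustments, current_outages):
        return pi_advance + remit_adjustments + current_outages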

5. Have a clean TOEC formula.    

A whole new article could be written about best practices regarding TOEC formulas given the intricacies between PLS vs. GSE and considerations within Sch/Sch, Sch/Act and Act/Act remittances. For argument’s sake, let’s assume a consensus that a TOEC formula, at its highest level, should include (a) prepaids; (b) delinquencies; and (c) remittance information (i.e. scheduled interest and principal, additional principal collections, etc.). For extra credit, you could include curtailments and other items, but these would point to a specific type of deal. More to the point, a TOEC formula SHOULD NOT include remittance adjustments as part of the calculation. Including items such as HAMP incentives, claims and refunds, interest adjustments and other similar items unnecessarily abstracts the meaning of the calculated figure and misrepresents any true discrepancy in cash. In addition, including any of these items in the calculation prevents a clean roll-forward (see #2 above), particularly when one of these “expected” items does not materialize for whatever reason.
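
Purely as a hedged illustration of that consensus, a stripped-down TOEC might look like the sketch below. Real formulas differ by remittance type and deal structure, and the signs and component names here are assumptions, not a prescription.

    # Highly simplified TOEC sketch; actual formulas vary across Sch/Sch, Sch/Act,
    # Act/Act and PLS vs. GSE. Remittance adjustments (HAMP incentives, claims,
    # refunds, interest adjustments) are deliberately excluded from the calculation.
    def expected_cash(prepaids, delinquencies, scheduled_principal,
                      scheduled_interest, additional_principal):
        remittance = scheduled_principal + scheduled_interest + additional_principal
        return remittance + prepaids - delinquencies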

The Big Data waste: Missed opportunities in mortgage servicing

The term “Big Data” has been buzzing about the technology industry for several years now and has crept into the vocabulary of business managers and corporate execs to mean the new must-have for the modern enterprise. Even though there may be great promise in the use of Big Data to solve complex business problems, I find that the concept itself is largely misunderstood. Making matters worse, Big Data has also become somewhat synonymous with “Big Company” and “Big Budget” – a luxury reserved for the Fortune 500 with large slush funds to spend on consultants to help figure out whatever alchemy Big Data is intended to accomplish. There certainly is truth to that, but the landscape is rapidly changing. With an increase in the accessibility and simplicity of tools designed to turn Big Data into something meaningful, just about any company that produces data can use it to its advantage.

As a software and service provider in the mortgage industry, I can confidently say that the volumes of data produced by companies in this space are astronomical, to put it mildly. I also dare to say that, for most of these companies, the wealth of information that could be extracted from this data is completely wasted. After speaking to a couple of executives and frontline managers about their perceptions of Big Data and its analysis, I see a pattern contributing to the active disinterest toward exploring data sets:

  • General misinformation about Big Data and all that comes with it, and
  • A lack of imagination about how data can be used to make a direct and meaningful impact on operations.

Perhaps it is far too ambitious to say that this article will solve both (or either) of these issues. But… maybe a brief introduction to Predictive Analytics and how these can be applied can help prompt a shift in this mentality. It is all about knowing how to ask the right questions.

What is Big Data Anyway?

Unceremoniously, Big Data is a large data set. Enormous, actually. More specifically, it is a data set that is so large it requires special technologies to house, manage and analyze the information, as conventional tools prove inadequate or impractical. The term, however, has been expanded – mostly thanks to the marketing efforts of firms offering services in this space – to also include the methods and practices available to interpret the data. Each marketing piece and slogan developed around Big Data has contributed to obfuscating its definition while bringing some level of very marketable mysticism. Sales pitch aside, Big Data is the collective term for the troves of information produced across an enterprise.

What can Big Data tell us?

Well, it really depends who wants to know and for what purpose. That’s the key – identifying a specific purpose. Without clear ideas linked to measurable results, looking into Big Data is like getting a bunch of answers for which there are no questions; i.e. lots of information about nothing we care about very much. It is hard to come up with these questions. It is even harder when we don’t know how best to frame these questions to get the meaningful answers we might be expecting. I’m not a data scientist, and frankly, some of the theories and a lot of the math behind Big Data are beyond my grasp. I think in business processes and software-driven solutions. Learning about Predictive Analytics and its application has completely changed my attitude about Big Data, opening up a new playground of productivity. Here’s a little about how it works and how best to start thinking about applying Predictive Analytics so you can start phrasing your own questions.

Predictive Analytics = Forecasting the Future

Forecasting the future is not the same as seeing the future. Predictive analytics uses the sorcery of mathematical modeling and machine learning to predict the outcome of specific scenarios given some data inputs related to the process. The mathematical modeling component helps measure the likelihood of different scenarios happening given past and fresh data inputs. Machine learning is the really exciting piece; it takes into account historical outcomes to better predict the likelihood of different scenarios, AND can even predict new scenarios given patterns and nuances that only machines can identify in the data. So, the wrong way to ask Big Data a question is “Will [this scenario] happen?” – that’s seeing the future. An appropriate question for Big Data would be worded as “How likely is [insert scenario] to happen?” or “How much more likely is [this scenario] to happen instead of [this scenario]?”.
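
To ground the “how likely” framing, here is a minimal sketch using scikit-learn’s logistic regression to score the likelihood of a scenario (say, a loan going delinquent) from historical outcomes. The features and the tiny data set are entirely made up for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per loan: [LTV, months on book, prior delinquencies].
    # The data is far too small to be meaningful; it only illustrates the mechanics.
    history_X = np.array([[0.85, 12, 0], [0.95, 3, 2], [0.70, 48, 0], [0.90, 6, 1]])
    history_y = np.array([0, 1, 0, 1])  # 1 = the scenario occurred (e.g. delinquency)

    model = LogisticRegression().fit(history_X, history_y)
    likelihood = model.predict_proba(np.array([[0.88, 9, 1]]))[0, 1]
    print(f"Likelihood of the scenario: {likelihood:.0%}")  # a probability, never a certainty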

Opportunities Missed

For an industry that so heavily relies on data, it is somewhat crazy to me that this same love for data has not extended to using predictive analytics. The possibilities are virtually endless, limited only by the process owners’ imagination. Below are some quick ideas of questions I’d be asking Big Data to help tighten business processes and operations, albeit from the high-level perspective of a solution provider to the industry rather than with the pointed precision expected from a business process owner:

 In Originations:

  • How likely is a candidate to close on a loan? How long will the process take? Where in the process chain can we expect hold-ups?
  • What loan products will work best for a particular group of prospective borrowers? How many post-close problems can we expect? What percent will represent buybacks?
  • How will volume be affected given a new incentive or program? Over time, does the metric hold true?

In Servicing:

  • How many loans are likely to have modifications, delinquencies or become paid in full? What factors are directly contributing to the portfolio performance?
  • What population of loans will have reconciling items? What is the expected source and resolution of these items? How many of these items hit 90 days or go to Reserve?
  • How many errors can we anticipate in a given business process? Where will these errors likely come from? If we implement a change, what might be the effect? Once implemented and over time, will these assumptions hold true?

These are a handful of questions in just two areas within the vast world of mortgage operations. Now that you have a framework for how to ask questions of Big Data, what would you like to know? How would you manage if you could predict the future? Where would you invest capital? Would you buy a lottery ticket? Wait, Big Data and predictive analytics cannot see the future.

Showing Cashbook some respect

Often overlooked and mostly oversimplified, the Cashbook process presents an important opportunity for reducing rework and increasing efficiency in Custodial reconciliation. On more than one occasion, I’ve heard people in Investor Accounting call it a mere formality; a means for validating the depository balance. Some have gone as far as not considering Cashbook its own process at all, but simply a data input to the real star of the show: the Test of Expected Cash (TOEC).

Their rationale? Any outages in the account would just fall out while calculating the loan-level TOEC, so performing a full Cashbook reconciliation seems somewhat redundant.  I tend to agree, in principle. However, my experience has proven the opposite in certain situations. Any time savings gained in abbreviating the Cashbook process are more than lost when researching certain outages in TOEC.

At a basic level, the goal of Cashbook is to ensure the Custodial bank account is in balance. At a deeper level, the Cashbook process presents an optimal tool for certifying the bank statement (i.e. via performing a transactional book-to-bank reconciliation). This is good because collections recorded in the Servicing system, for example, would match deposits on the bank statement, with any discrepancies falling out as reconciling outages. Yes, TOEC should catch these same discrepancies.

How about this scenario: a wire is coded incorrectly and ends up settling in the wrong P&I account? The TOEC process should also catch this, but the outage would not be linked to loan-level activity as it is an account-level item. In a sophisticated TOEC process, the outage may be caught early without missing a beat. If the process is not designed to specifically handle these scenarios, things start getting ugly. It may take analysts a lot of extra digging to identify why loan-level activity does not match up with the account balance.

Also, consider how this outage would be recorded in TOEC. Is there an appropriate root-cause category code for it? Maybe; probably not. Lastly, consider timing (chronologically, not Reg-AB time). By the time the outage is identified in TOEC, this money may have sat idle in the incorrect Custodial account for 30 (maybe even 60) days. Then, depending on the process, it may take another 30 days to initiate the transfer and move the money. Another good example of time wasted in TOEC: researching and correcting a true bank error.

From my perspective, all this could be avoided with a disciplined and well-structured Cashbook process; a proactive approach that handles account-level items so they are resolved before they reach TOEC. It is time to show Cashbook some well-deserved, past-due respect. In honor of this neglected business process, I am proposing 5 considerations for building a sound practice within your operations:

1. Clearly define start and end-date parameters.       

Avoid the common mistake of overlapping Cutoff start and end dates by double-checking data filtering parameters. This can get tricky, as not all Cashbook reconciliations fall on month-end (think FHLMC) and processing cycles do sometimes extend into a non-business day. In other words, verify that all activity for the bank statement is restricted to this range and that no book transactions enter this process from beyond the defined range (consideration #3 below will explain why some book transactions from the previous period should be considered in the process). Not following this simple guideline will lead to a lot of transactional “noise” and a disorganized Cashbook reconciliation.

2. Roll from a Previous Period

This may sound intuitive, but it is surprising how many times we’ve encountered companies performing their Cashbook reconciliation without considering results from the previous period. The key lesson here: it is important to live with your results (and calculations). The true power of a Cashbook Reconciliation summary is in rolling it forward; in other words, start by tying the Beginning Balance of the current period to the Ending Balance of the previous period. Also, make sure to carry forward any reconciliation discrepancies identified in the previous period to attempt resolution or continue ageing (see item #3 for more detail).

3. Track and Age Discrepancies.        

The only sound method for identifying reconciliation discrepancies within the Cashbook process is to perform a transactional book-to-bank matching of bank statement items. This means bumping up collections recorded in the Servicing system, for example, and matching them with deposits on the bank statement. The benefits of this process are two-fold: (a) matching book-to-bank transactions certifies the bank statements (i.e. the backbone of the entire Custodial recon process); and (b) the process will reveal any true discrepancies /reconciling items in the Custodial account. Please remember to roll-forward any book items not matched against bank statement items (i.e. deposits in transit) for the following cycle.

Adding some additional sophistication to the process, book-to-bank reconciliation could be performed on a daily basis. Bank statement data is available daily via BAI files and there are several reports in Black Knight and other Servicing platforms that provide daily activity, such as the T690 showing daily collections (i.e. daily version of the ZZ80). Performing this reconciliation on a daily basis catches issues quickly and allows those involved to correct the issue well before this becomes an outage in TOEC.
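
A transactional book-to-bank match can be sketched as below, keying on date and amount. BAI parsing and report layouts (T690, ZZ80) are outside the sketch, and the field names are placeholders; real matching would likely also use reference numbers.

    # Sketch: daily book-to-bank matching on (date, amount). Unmatched book items
    # (e.g. deposits in transit) roll forward; unmatched bank items need research.
    def match_book_to_bank(book, bank):
        unmatched_bank = list(bank)
        matched, unmatched_book = [], []
        for b in book:
            hit = next((s for s in unmatched_bank
                        if s["date"] == b["date"]
                        and round(s["amount"], 2) == round(b["amount"], 2)), None)
            if hit:
                matched.append((b, hit))
                unmatched_bank.remove(hit)
            else:
                unmatched_book.append(b)
        return matched, unmatched_book, unmatched_bank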

As far as best practice goes, track and age any reconciliation discrepancies at the Cashbook level (even if you might be tracking certain outages “twice” if these are also identified in TOEC). Why? The majority of outages in Cashbook will fall under one of two main categories: (1) errors in movements of cash; or (2) true bank errors, such as incorrect settlement amounts. For these types of issues, communicating the discrepancy with corporate treasury, for example, will be more effective at the bank-account level. This, in turn, should reduce the turnaround time for resolution and possibly correct the item before initiating TOEC (particularly if performing this reconciliation daily).

4. Validate ALL Balances.        

The clear figure to validate here is the Depository Bank Balance. The Depository Bank Balance should be composed of the ending bank balance on the bank statement PLUS any deposits (or withdrawals) in transit that are yet to settle in the account. If your process already takes into account point #3 above, this value should be simple to certify.

Another important balance to validate is the depository balance according to the Servicing system. It may sound slightly counter-intuitive, but there could be a discrepancy between the calculated Depository Bank Balance and the balance presented in the Servicing system – think adjustments not entered correctly or manual transaction activity not recorded accurately (or at all) in the Servicing platform. We’ve found that it is best practice to perform a simple daily check to make sure both of these values are in sync.
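
Both validations amount to very little code. This sketch assumes the bank statement ending balance, the in-transit items and the Servicing system balance are already available from your existing feeds.

    # Sketch: Depository Bank Balance = statement ending balance + items in transit
    # (deposits positive, withdrawals negative), then tie it to the Servicing system.
    def depository_balance(ending_bank_balance, in_transit):
        return ending_bank_balance + sum(in_transit)

    def balances_in_sync(ending_bank_balance, in_transit, servicing_balance, tolerance=0.01):
        return abs(depository_balance(ending_bank_balance, in_transit) - servicing_balance) <= tolerance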

5. Track and Measure the Process.      

All the considerations leading up to this one center on ensuring a sound Cashbook reconciliation, which is fantastic; however, visibility and metrics gathering over the process as it is happening in real-time distinguishes a proactive team vs. a reactive team. What’s the difference? A reactive team sees smoke and eventually reaches the fire with whatever tools happen to be on-hand to try to extinguish the flames. A proactive team sees the spark that started the fire. This level of visibility is afforded by adopting well-defined work assignments and developing a dashboard to track the resulting metrics. 

We recommend doing what most companies already do: create a spreadsheet to assign analyst resources to specific Cashbook reconciliations, but we push it one step further by suggesting the inclusion of triggers to track progress as it is happening. Create a spreadsheet or tool that listens for status changes in Cashbook reports (i.e. Pending, Submitted, Approved) as well as a means to collect metrics (i.e. number of items matched vs. outstanding) in an effort to get a meaningful pulse of the process as a whole. The development of the dashboard is certainly an evolutionary process; the trick is to subscribe to this mentality (or management overview philosophy, if that terminology is more fitting). Either way, evaluating the health of a process needs to occur as the process is happening and not after the process is completed – test this statement by applying it to a living body. Find creative metrics (and corresponding triggers) to track the process as it is unfolding to prevent a spark from becoming a forest fire.

Below is an example of the real-time processing metrics offered within SunriseRecon. To get the full picture, it is important not only to see the status of current work completed (left chart), but also to understand when the bulk of the work was performed (right trend analysis).

[Screenshot: custodial reconciliation dashboard in SunriseRecon]

What considerations can you share about how you manage your Cashbook business process?

[HousingWire] Embracing the future of mortgage servicing

The following article appeared in the February 2021 issue of HousingWire.

This year has brought plenty of disruption to mortgage servicing, from regulatory and economic uncertainties, to a long-term shift toward remote work environments. Meanwhile, the past decade has seen an explosion of digital solutions in mortgage origination, and servicing will inevitably follow suit. 

In this context, it’s natural to consider digital transformation; as all our processes are upended, this is perhaps an ideal time to rethink the business, and the technologies that support that business.  

But this is a decision to make with care. About 70% of digital transformations fail. The cause of these failures can often be traced back to not keeping the business goals at the forefront of the transformation process, or overlooking how technology impacts and interacts with the entire operational ecosystem. 

It’s important to remember that digital transformation isn’t just about implementing new technology. It’s about strategically using technology to help you achieve your business goals. If your organization is pursuing digital transformation, these tips will keep you on track for success. Continue reading on HousingWire >>

Data challenges in mortgage servicing: Bank statements

Bank statement information presents a data challenge for many Investor Accounting and Reporting teams. Created especially for automation, BAI files offer a great solution. Switching to BAI files provides a means to streamline multiple business processes, an important preparatory step in digital transformation.

Why all the data fuss? Strategies for managing servicing system data

The servicing system is the main system of record for many Investor Accounting and Reporting processes. Regardless of which system you use, you’ve likely run into data challenges, such as lack of standardization. With some strategic thinking, you can overcome these challenges.

How to prepare a clearing account for audit in only 15 minutes

Find out how a leading non-bank mortgage servicer streamlined the clearing reconciliation process with Integra INVESTOR.

Using Integra INVESTOR to automate clearing account reconciliation, a top-20 non-bank mortgage servicer has substantially improved efficiency, consolidated operations, and introduced critical operational controls.

Results

  • 50%: Reduction in FTEs required to complete the process
  • 89%: Payment clearing transactions automatically matched by the system
  • 3: Number of checks manually cleared per day, instead of 400+
  • 30: Hours required to train a new FTE on the new process
  • 15: Minutes it takes to prepare an account for audit, instead of weeks

Challenge

The company’s clearing reconciliation process required 5 FTEs to manage 7 accounts using spreadsheet solutions that offered limited quality control. And because there was no formalized process for clearing reconciliation, training new staff took several weeks. Each clearing account also came with its own specific challenges. For example, payment clearing required coordination of data from multiple sources and presented an unmanageable daily transaction volume. Disbursement clearing involved multiple touchpoints and heavy manual intervention, resulting in a higher risk of errors.

Solution

Implementation of Integra INVESTOR resulted in multiple key benefits:

  • The application-based process brings standardization and visibility. Introducing a single application was key to centralizing the function in one department and standardizing the process. Furthermore, the system’s workflow capabilities ensure easy oversight of the entire process.
  • Automation streamlines several aspects of the process. Thanks to automated data gathering and matching, analysts no longer spend time on tedious, error-prone tasks like collecting bank statements or manually entering data.
  • Built-in controls ensure processing integrity. The clearing reconciliation process no longer poses an audit concern, since built-in controls prevent unauthorized changes to data; timestamp analysts’ work; and keep analysts from submitting unbalanced reconciliations.
  • Audit preparation requires considerably less time. With Integra INVESTOR, preparing for an audit now requires about fifteen minutes. Analysts simply print or export the appropriate reports directly from the application.