
What is quality data?

Measuring and monitoring data quality is essential. While there is no universally accepted definition of data quality, there are shared best practices. In particular, several dimensions are commonly associated with high-quality data.

  • Accuracy – the data correctly reflect the real-world object or event they describe.
  • Completeness – all data that should be present are present.
  • Relevance – the data meet the requirements of the intended use.
  • Timeliness – the data reflect reality at the required point in time.
  • Coherence – values and records are represented consistently within and across data sets.

 

4 tips to optimize the quality of the data entered

 

Define data entry rules

Data integrity begins at the point of entry into your software. Establishing data entry standards provides a clear framework and uniform data – a prerequisite for effective use.

Data entry rules can include (see the sketch after this list):

  • making certain fields mandatory
  • formatting certain fields
  • automating certain fields (such as creating reference codes)
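
To make this concrete, here is a minimal Python sketch of such rules. The field names, the email pattern, and the reference-code format are illustrative assumptions, not KeeSense's actual configuration:

```python
import re
from datetime import date

def make_reference_code(client_name: str, seq: int) -> str:
    """Automated field: build a reference code such as 'DUP-2025-0042'.

    The code format is hypothetical, for illustration only."""
    prefix = re.sub(r"[^A-Z]", "", client_name.upper())[:3] or "CLT"
    return f"{prefix}-{date.today().year}-{seq:04d}"

def validate_entry(record: dict) -> list[str]:
    """Apply mandatory-field and formatting rules; return the list of errors."""
    errors = []
    # Mandatory fields: reject the entry if any of them is empty
    for field in ("name", "email", "country"):
        if not record.get(field):
            errors.append(f"missing mandatory field: {field}")
    # Field formatting: a simple well-formedness check on the email
    email = record.get("email", "")
    if email and not re.fullmatch(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}", email):
        errors.append("email is not well formed")
    return errors

record = {"name": "Dupont", "email": "a.dupont@example.com", "country": "MC"}
print(validate_entry(record))                   # [] -> entry accepted
print(make_reference_code(record["name"], 42))  # e.g. 'DUP-2025-0042'
```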

 

Staff training

Taking the time to train users is an investment that pays off very quickly, in terms of:

  • fewer data entry errors
  • improved data quality
  • increased productivity

At KeeSystem, user training is an integral part of KeeXperience, our proven method to facilitate rapid adoption of KeeSense and accelerate the ROI phase.

 

A user guide adapted to your needs

Providing a user guide lets everyone consult the software's usage rules at any time and helps avoid many errors.

At KeeSystem, each new user receives complete documentation to help them get up to speed with the software quickly.

 

Get support from an expert

When data quality is at the heart of your company’s performance, working with an expert is a must. It allows you both to approach certain questions with a neutral eye and to benefit from the best practices observed among other players in your industry.

At KeeSystem, data quality is addressed from the very first exchanges with our clients in order to identify how to guarantee the integrity of your data at every stage.

 

How does KeeSense control the quality of the data entered?

 

Manual data entry is often a source of errors.

In addition to applying the best practices mentioned above to optimize the quality of the data entered, KeeSense natively integrates several data verification measures:

  • automatic detection of duplicates
  • data traceability (who entered which data, when, and what was added or modified)
  • optional activation of a validation workflow for data entered by a third party

Together, these safeguards prevent the vast majority of manual input errors and preserve data integrity; the sketch below illustrates the duplicate-detection idea.
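
As an illustration, here is a minimal Python sketch that matches client records on a normalized name plus birth date. The matching key is an assumption made for the example; KeeSense's actual detection logic is not described here:

```python
import unicodedata

def normalize(text: str) -> str:
    """Lowercase, strip accents, and collapse whitespace for comparison."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return " ".join(text.lower().split())

def find_duplicates(records: list[dict]) -> list[tuple[dict, dict]]:
    """Return pairs of records that share the same (name, birth_date) key."""
    seen: dict[tuple[str, str], dict] = {}
    duplicates = []
    for rec in records:
        key = (normalize(rec["name"]), rec.get("birth_date", ""))
        if key in seen:
            duplicates.append((seen[key], rec))  # flag for review, don't drop
        else:
            seen[key] = rec
    return duplicates

print(find_duplicates([
    {"name": "Éric Dupont", "birth_date": "1970-01-01"},
    {"name": "eric dupont", "birth_date": "1970-01-01"},  # same person, different spelling
]))
```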

 

How does KeeSense verify the quality of imported data?

One of the strengths of KeeSense is its seamless integration into the independent asset manager ecosystem. Instead of requiring you to deal with each custodian bank separately, bank data feeds are imported into KeeSense through secure, automated connections with the banks. To date, KeeSense can interface with over 100 custodian banks worldwide.

Financial information feeds such as Bloomberg or SWIFT can also be imported directly into KeeSense. Users thus have a single tool where all their data – customer, banking, financial – are centralized.

Multiplying imported data sources mechanically increases the risk of errors. That is why several safeguards are in place to control the quality of the imported data.

 

Daily verification of imported data

A dashboard on the KeeSense home page lets you see whether the positions and movements from each external source have been imported. A green indicator is shown when the operation has completed successfully; a red indicator appears in case of failure.
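
A minimal sketch of such a daily status check is shown below. The two imported data types (positions and movements) follow the description above; the source names and everything else are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ImportResult:
    source: str         # e.g. one custodian bank feed
    positions_ok: bool  # were the positions imported successfully?
    movements_ok: bool  # were the movements imported successfully?

def print_dashboard(results: list[ImportResult]) -> None:
    """Print a green/red status line per external source, as on the home page."""
    for r in results:
        status = "GREEN" if (r.positions_ok and r.movements_ok) else "RED"
        print(f"[{status}] {r.source}: positions={r.positions_ok}, movements={r.movements_ok}")

print_dashboard([
    ImportResult("Custodian Bank A", True, True),   # -> GREEN
    ImportResult("Custodian Bank B", True, False),  # -> RED, movements failed
])
```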

 

Near-real-time visualization of bank reconciliation

KeeSense includes a view, updated daily, that shows the reconciliation between bank data and the data in KeeSense down to the finest level (by portfolio and by line). When an error appears, you can quickly identify its source and intervene to correct it.
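
To illustrate the principle, here is a minimal Python sketch of a reconciliation down to portfolio and line level, assuming each side is reduced to a (portfolio, line) -> quantity mapping. The tolerance and data shapes are assumptions made for the example:

```python
Position = tuple[str, str]  # (portfolio, instrument line)

def reconcile(bank: dict[Position, float],
              keesense: dict[Position, float],
              tolerance: float = 1e-6) -> list[str]:
    """Compare the two sides line by line and describe each discrepancy."""
    issues = []
    for key in sorted(set(bank) | set(keesense)):
        b, k = bank.get(key), keesense.get(key)
        if b is None:
            issues.append(f"{key}: present in KeeSense only (qty {k})")
        elif k is None:
            issues.append(f"{key}: present at the bank only (qty {b})")
        elif abs(b - k) > tolerance:
            issues.append(f"{key}: quantity mismatch, bank={b} vs KeeSense={k}")
    return issues

bank = {("Portfolio 1", "AAPL"): 100.0, ("Portfolio 1", "Bond X"): 50.0}
kee = {("Portfolio 1", "AAPL"): 100.0, ("Portfolio 1", "Bond X"): 45.0}
print(reconcile(bank, kee))  # flags the Bond X quantity mismatch
```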

 

Anomaly reporting

Reaction time to a data anomaly is crucial. The faster you react, the more you limit the risk of cumulative errors (a valuation error, an erroneous buy or sell decision, etc.).

That is why, as soon as an anomaly is reported during a data import, the KeeSystem team investigates to identify its source.

 

Want to discover KeeSense in more detail?

Book a slot in the calendar to schedule a demo with our team.

Related article

GDPR in Monaco: what is the impact?