Personal Reflections on the IDT Summit from Jason Warlond, Regional Head, APAC
Fund Business held the 6th Investment Data and Technology (IDT) Summit on 12-14 October 2021, of which Curium Data Systems was, once again, proud to be one of the sponsors. The event provided some great insights into the thinking of investment data management specialists. I wanted to summarise what I saw as the recurring themes and key takeaways from the conference, and then take the opportunity to describe how Curium Data Systems is addressing these trends and issues.
Topics Discussed
Outsourcing of Data Management
- Outsource what you can, especially for homogeneous data
- Hybrid models will be commonplace: much of the data management is outsourced, but there will still be a need to maintain your own IP in some of the data
- Interoperability will be key to this hybrid model
- Low code/no code platforms are preferable, as they simplify development and improve flexibility and agility
- Cloud capability is an enabler and, when combined with offerings like Snowflake, allows for faster adoption of new data sources, whether managed as part of a Data as a Service offering or managed internally
- Larger firms should have the in-house capability and ambition to see their data and data management as a key differentiator
N.B. Whilst not discussed, “on demand” data vs. persisted in-house data presents a conundrum. Whilst on-demand access has its attractions, it also raises challenges around data provenance and data retention, since you don’t control the “provided as a service” data, unless of course you replicate it in your own stores.
My own view is that we generally outsource things we think others can do better. However, there is significant benefit to having your own data management capability, and it is one that organisations should increasingly strive for, whether internally or with partner organisations. There is competitive advantage to be gained if you do it better than your competitors.
Data Management best practice principles still apply
- Data Quality is key, regardless of ease of use, ease of visualisation, or timeliness. Poor quality data is of little use
- Master Data Management and single source of truth are still basic tenets that should be strived for
- Data Governance, literacy, ownership/stewardship, glossaries/dictionaries, lineage, and provenance are some of the areas where there is still considerable need for uplift
- Only when you measure something can you manage it. This applies to data quality and data coverage alike (a minimal sketch follows this list)
- Data Governance should be seen as an enabler, not a series of policies and blockers
- Automation and STP should be leveraged as much as possible
- Try to reduce the amount of time your resources spend curating data
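On the measurement point, here is a minimal illustrative sketch of what measuring data quality can look like in practice. It assumes pandas, and the field names ("isin", "price") and holdings extract are hypothetical; this is my own illustration, not CuriumEDM code:

```python
# Minimal data quality measurement sketch (illustrative only, not
# CuriumEDM code). Field names ("isin", "price") are hypothetical.
import pandas as pd

def quality_metrics(df: pd.DataFrame) -> dict:
    """Return simple completeness/validity metrics for a holdings extract."""
    return {
        "rows": len(df),
        # Completeness: share of rows with a populated security identifier
        "isin_completeness": float(df["isin"].notna().mean()),
        # Validity: prices must be positive to be usable downstream
        "price_validity": float((df["price"] > 0).mean()),
        # Exceptions: rows that would be routed to an exception workflow
        "exceptions": int((df["isin"].isna() | (df["price"] <= 0)).sum()),
    }

holdings = pd.DataFrame({
    "isin": ["AU000000BHP4", None, "US0378331005"],
    "price": [45.10, 22.00, -1.0],  # -1.0 is a deliberate bad value
})
print(quality_metrics(holdings))
# rows=3, completeness≈0.67, validity≈0.67, exceptions=2
```

Tracked over time, even metrics this simple show whether quality is improving and where exceptions cluster.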
Whole of portfolio view
- Support for all asset types
- Includes full look-through and analytics over a fully resolved look-through view (a sketch follows this list). This requires:
- Asset Structure
- Granularity of constituent data constructed from top down
- Multiple views of aggregated data back to required levels and asset classifications
- Different asset classes, especially private market/unlisted, require a different set of data points to traditional listed assets. This is changing, so flexibility is important
- Data timing can be problematic, and many unlisted assets do not have daily updates
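To show what “fully resolved look-through” means mechanically, here is a small illustrative sketch using hypothetical fund structures (again my own illustration, not CuriumEDM code): a fund’s holdings may themselves be funds, so constituent weights are resolved recursively and scaled top down:

```python
# Illustrative look-through resolution (hypothetical data, not CuriumEDM
# code). A fund's holdings may themselves be funds; recurse until only
# end assets remain, scaling each constituent by its parent's weight.
FUNDS = {
    "FundA": [("FundB", 0.40), ("Equity:BHP", 0.60)],
    "FundB": [("Bond:ACGB", 0.70), ("Equity:CSL", 0.30)],
}

def look_through(holding, weight=1.0):
    """Resolve a holding into end-asset exposures with effective weights."""
    if holding not in FUNDS:              # an end asset: nothing to resolve
        return {holding: weight}
    exposures = {}
    for child, child_weight in FUNDS[holding]:
        for asset, w in look_through(child, weight * child_weight).items():
            exposures[asset] = exposures.get(asset, 0.0) + w
    return exposures

print(look_through("FundA"))
# {'Bond:ACGB': 0.28, 'Equity:CSL': 0.12, 'Equity:BHP': 0.6}
```

Aggregating the resolved exposures back up by asset class (the prefix before the colon here) gives the multiple views of aggregated data mentioned above.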
ESG data is emerging but still a problem
- There are lots of data sources available, but quality is questionable and standardisation lags that of other data sets
- Flexibility is key to being able to digest and analyse ESG data sets
Data sourcing is challenging
The following questions are difficult but worthwhile to answer:
- Where is market data sourced from?
- Where is it used?
- How much does it cost?
- Is it being used efficiently and providing good value?
Unless you already have systems in place to resolve these questions, you should be working towards answering them.
Moving forward, work with data vendors that offer good breadth, can grow with your needs, and provide flexibility or ease of use in integrating their data.
In your internal requirements conversations, question not only which data is required but also the requirements around its frequency and timing. Not all data is equal. And some of it is expensive.
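As an illustration of the kind of inventory that helps answer these questions, the sketch below (hypothetical vendors and figures, my own illustration) records each feed’s source, consumers, and cost, and flags feeds with no registered consumers:

```python
# Illustrative market data inventory (hypothetical vendors and costs).
from dataclasses import dataclass, field

@dataclass
class DataFeed:
    vendor: str
    dataset: str
    annual_cost: float                                   # licence cost
    consumers: list[str] = field(default_factory=list)   # downstream users

feeds = [
    DataFeed("VendorX", "EOD equity prices", 120_000, ["Perf", "Risk", "Ops"]),
    DataFeed("VendorY", "ESG scores", 80_000, ["Reporting"]),
    DataFeed("VendorZ", "FX rates", 30_000, []),         # no known consumers
]

for f in feeds:
    if not f.consumers:
        print(f"Review: {f.vendor}/{f.dataset} costs {f.annual_cost:,.0f} "
              f"but has no registered consumers")
    else:
        print(f"{f.vendor}/{f.dataset}: {f.annual_cost / len(f.consumers):,.0f} "
              f"per consuming team")
```

Even a simple register like this makes under-used or duplicated feeds visible, which is where the cost conversation usually starts.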
Regulatory compliance
- This is particularly an issue for global organisations where much of the investment data is homogeneous, but the regional regulatory regime is not. Regulations present both a barrier to entry and a challenge. Flexibility is key
Data consumption and visualisation
- This isn’t as simple as buying some tools, hiring developers, and letting them loose. Understanding data context and ensuring that the visualisation tells its story accurately and quickly is key
- Again, data quality is vitally important. Garbage in, garbage out, even if it looks pretty
- Interoperability across data sets, locations and technology is important in your solution set
- Providing catalogues and data dictionaries is important to achieve self-service
- The consensus is to aim for a single source of truth
- Getting agreement on things like look and feel, fonts, and data formats can assist with the visualisation journey
Best of breed vs. end to end solutions
- The cycle on best of breed vs. end to end continues
- Data as a Service is attractive. But data is increasingly being seen as an asset. Again, this points to the need for in-house capability to provide your own competitive edge in your use of the data
- Specialist vendors are still attractive to work with if they provide good interoperability
- The “solution” needs to ensure that Business Users are capable of delivering new workflows and are not reliant on IT or a third party to deliver. “No code” development capability is preferable
- Organisations need to have and understand their own Target Operating Model. How much capability and desire do they have to do things in-house vs. outsourcing?
How does CuriumEDM satisfy these needs?
I wanted to take the opportunity to describe how Curium Data Systems and the CuriumEDM solution can help organisations align to these trends and requirements.
- Strong Enterprise Data Management (EDM) across a wide array of capabilities. These are incorporated with our Data Management Best Practice principles.
- Data Quality Management (DQM) – including full exception ticketing, workflow, and quality metrics, both current and over time, helping ensure you have trusted data as early as possible
- Integration/ETL – integration using a number of different technologies
- Matching capability across multiple potential identifiers and attributes. Whilst typically used in Security Matching, this capability is not limited to this data set
- Master Data Management – with data alignment and precedence settings defined via a business UI and made transparent to the business via metadata reports
- Data governance, including metadata repository, data dictionaries, and ownership. All construction logic is metadata and can be exposed and enquired upon via any BI toolset, including CuriumEDM itself of course
- Business Intelligence and visualisation, built directly from models to help ensure context. This can be applied across any data, including business data or Curium’s own “data operations” data and construction metadata. This capability includes drill through from one data set to other associated data sets
- Data provenance, audit, and lineage capability. This combines a lineage diagram with data sourcing metrics, quality metrics, and ownership, all built dynamically from current construction data and business data
- Automation and straight through processing via CuriumEDM’s workflow orchestration layer
- CuriumEDM contains not only a “logical” model-based approach to modelling, but also its implementation. Physical data sources are aligned with the logical model. The logical model serves not only as a template and documentation repository but can also be reused as the starting point for implementing physical sources against the model
- Interoperability via a wide array of technologies for data ingestion and distribution, or on demand usage
- Cloud and outsourcing ready via:
- Support for technologies such as Azure and Snowflake
- A “development” capability that provides low code/no code development of data sets and data flows, but can be separated from the “business” capabilities of data operations and analytics
- Infrastructure cloud hosting partnership
- Flexibility by combining the following into a single platform
- DQM across any range of data whilst reading data “in situ”, whether it be on premise or cloud hosted, API based or even flat file. Data does not need to be integrated and stored within Curium to be available to read, enquire upon, visualise, quality check, or integrate
- Matching and Mastering capability on any datasets, as required
- Ability to quickly extend the data model and data set reach to capture new datasets such as:
- Unlisted assets and their analysis
- ESG (please see our recent press release regarding CuriumEDM’s implementation with Martin Currie). The ability to quickly develop integration and mastering of new data is one of many strengths
- Regulatory requirements
- New custodian, market data, or other sources, including internal data, can all be implemented on a robust data management platform quickly
- Strong Investment Data domain knowledge and capability to build specific Investment functionality such as:
- Investment exposure look-through without a limit to its granularity. If the data can be sourced, then it can be integrated into a look-through view
- Ability to create an unlimited number of exposure classification and analytic models based on any attributes which are sourced, combining the ability to visualise and drill though
- Proxy/Contingent pricing capability to generate and validate prices based on benchmark movements and look-through (a worked example follows this list)
- Actual vs Target modelling
- Other specialist functions as required
- Flexible implementation models
- Hosted vs. On Premise
- Outsourced vs. Insourced (or a hybrid) creation of data management capability and new features
- Curium can provide the full Enterprise Data Management platform, or can adapt with agility to fill just your most urgent needs from across its gamut of capabilities
- CuriumEDM can serve as not only a source and destination of market data, but can go much further in this area to:
- Compare, side by side, the different results that would be obtained from different sourcing/mastering scenarios
- Help review and optimise data costs. This is done using Curium’s Market Data Adaptor module, which can track data acquisition costs for some data sources and ultimately give insight into how those costs can be minimised/optimised.
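To make the proxy pricing item above concrete, here is a worked example of the general technique (my own sketch of one common approach, not a description of CuriumEDM’s internal logic): roll the last observed price forward by the movement of a correlated benchmark, optionally scaled by a beta:

```python
# Illustrative proxy/contingent pricing (a sketch of the general
# technique, not CuriumEDM's internal logic): roll a stale price
# forward by a correlated benchmark's return, scaled by a beta.
def proxy_price(last_price, bench_then, bench_now, beta=1.0):
    """Estimate today's price from a stale price and a benchmark move."""
    benchmark_return = bench_now / bench_then - 1.0
    return last_price * (1.0 + beta * benchmark_return)

# Unlisted asset last valued at 10.00 when the benchmark stood at 1500;
# the benchmark is now 1545 (+3%) and the asset's assumed beta is 0.8.
print(f"{proxy_price(10.00, 1500.0, 1545.0, beta=0.8):.4f}")
# 10.2400  (10.00 * (1 + 0.8 * 0.03))
```

The same mechanism also supports validation: a vendor price that diverges materially from its proxy estimate can be flagged as an exception for review.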
Hopefully this provides some useful information in relation to the discussion points from the IDT Summit. It was certainly reassuring to hear from the industry, compare against Curium’s own offering and roadmap, and confirm that we are meeting the needs of the industry.
CuriumEDM is a truly remarkable product, combining speed, flexibility, and power to develop capability with a business application for day-to-day data management. If any of these points resonate with you and you would like assistance, please do not hesitate to contact us.