CTOS vs. CTMS: Which One Should Sponsors Choose for Effective Trial Oversight?

By Suvarnala Mathangi | Date: April 30, 2019 | Blog | 0 Comment(s)

As the coordinator or manager of a clinical trial, you have one simple goal: you want your clinical trial to succeed. After all, it is your responsibility.

However, to guarantee success, you need to be able to manage the project efficiently.

It is of prime importance that, at any time during the project, you can get a good overview of the progress of the study; that you can plan tasks, activities, and responsibilities; that you can track and monitor every action; and that you can easily organize all study-relevant documents and information.

But what system can make all of this possible?

Are simple lists generated in Excel sufficient for recording all relevant information and documents? Are those lists clear and well-structured enough to control the development of my clinical study? Should I implement a Clinical Trial Management System (CTMS)?

If considerations like these leave you with a sense of déjà vu, then you are in the right place to get some answers.

After all, the devil is in the detail!

It is important to know what information we need to record for reliable planning and monitoring of tasks and activities when managing clinical trials. We can only make the right decision if we are aware of the requirements.

Unfortunately, this decision is often made in favor of an MS Office package (or comparable programs) without any serious consideration. I can understand why: people are used to working with these applications day to day, and such programs make it easy to produce an overview list in a short time.

The real problem, however, is that such listings quickly get out of hand. The result is a plethora of documents, each in several versions. Decentralizing information across multiple listings frequently leads to great complexity and confusion.

“Which version of which document shows me the truth?”

To make things worse, lists with different versions or revision statuses could be in circulation. You would soon be wondering which version of which document shows you what you need to know. In the worst case, members of the study team waste precious time working from divergent and/or outdated information.

“Why does it look completely different on my screen?”

This “completely different look” syndrome stems from simple technical issues, such as different users running incompatible program versions. If you are running a complex trial with an international study team, simple lists will probably lead to disappointment and confusion in the long run.

So what is the alternative? Can a Clinical Trial Management System (CTMS) help to improve the working routine for you and your study team? The answer is a qualified yes.

I qualify my statement because implementing such a system calls for some additional effort. Also, in the beginning, not everyone will be thrilled to leave their MS Office “comfort zone”. However, in my experience, it is worth the investment, as a CTMS offers the following strengths and benefits:

  1. A centralized approach
    One big advantage of a CTMS is that study-relevant clinical data and information from different sources are organized centrally in one system. The same version of the information is available to all members of the study team, so every member is on the same page at all times during your clinical study.
  2. Continuity for audit readiness
    Collecting diverse information on your clinical trial’s progress in one place makes study management much more comfortable for the whole team. Regular and timely checks and follow-up of study activities and documents, with regard to completeness and critical milestones, can be performed with ease. Thus, your clinical trial is audit-ready at all times.
  3. Integrity
    A CTMS provides you with a comprehensive overview of your clinical study at every step. Accordingly, you can continuously monitor the progress of your clinical trial and selectively plan and control your activities.
  4. Reactivity
    With a CTMS, you have your clinical trial in view at all times; you can identify potential problems (e.g. logistical issues) very early and initiate corrective actions immediately. Entry errors can also be minimized, since a CTMS allows for automated quality checks within and between data sets (see the sketch after this list).
  5. Flexibility
    Whether your data follows CDISC or any other described standard, the system is very flexible and can be tailored to the specific needs of your clinical trial.
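To make the fourth point concrete, here is a minimal sketch, in Python with pandas, of the kind of automated check meant by “within and between data sets”. It is an illustration only, not any particular CTMS’s implementation; the column names and rules are hypothetical.

```python
import pandas as pd

# Hypothetical extracts from two data sets in the same trial.
demographics = pd.DataFrame({
    "subject_id": ["001", "002", "003"],
    "birth_year": [1962, 1988, 2029],
})
visits = pd.DataFrame({
    "subject_id": ["001", "002", "004"],
    "visit_date": pd.to_datetime(["2019-01-10", "2019-01-12", "2019-01-15"]),
})

# Check within one data set: implausible entry values.
implausible = demographics[~demographics["birth_year"].between(1900, 2019)]
for row in implausible.itertuples():
    print(f"Subject {row.subject_id}: implausible birth year {row.birth_year}")

# Check between data sets: visits recorded for subjects who are not enrolled.
unknown = visits[~visits["subject_id"].isin(demographics["subject_id"])]
for row in unknown.itertuples():
    print(f"Visit on {row.visit_date.date()} refers to unknown subject {row.subject_id}")
```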

If a CTMS is so capable, why is Clinical Trial Oversight Software (CTOS) necessary?

Well, to tell you the truth, the ‘M’ for Management in CTMS no longer fits large-scale clinical trials. Trials spanning geographies often work with several CROs to conduct the trial, and each CRO holds and manages the trial data in a management system of its own. With several CROs comes the problem of more than one CTMS, and these disparate systems do not talk to each other. In short, the important thing for the sponsor to do is oversee, not manage. That brings us to the CTOS.

Why Clinical Oversight?

From protocol development through IND/NDA submission to regulatory authorities, the basic idea of clinical trial oversight is continuous risk identification across all risk-bearing activities of the trial. This identification is based on existing, ongoing, and emerging information about the investigational product(s).

Applying risk-based quality management approaches to clinical trials enables better, more informed decision-making while making the most of the available resources. All involved parties share the responsibility of contributing to an effective risk-based quality management system.

How different is a CTOS?

A large share of sponsors and CROs still rely on manual processes and spreadsheets, even when they have some sort of CTMS in place. Forced to extract data from different systems, they ultimately pull everything together in spreadsheets and other manual tools in order to get a full-picture view.

What sets advanced trial oversight solutions apart is their ability to bring all of the trial data (from spreadsheets and from the CROs’ various CTMS instances) together in one place without a big investment. Whether for risk-based monitoring, data-driven enrollment decisions, or other data-driven mechanisms, the individuals involved in the study have a single place to go for that information. It’s a more efficient way to manage things.
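As a toy illustration of that single-place idea (not any particular product’s API), combining enrollment extracts exported from two CROs’ systems into one sponsor-side view might look like this:

```python
import pandas as pd

# Hypothetical enrollment extracts exported from two CROs' CTMS instances.
cro_a = pd.DataFrame({"site": ["101", "102"], "enrolled": [14, 9],
                      "source": "CRO A"})
cro_b = pd.DataFrame({"site": ["201"], "enrolled": [21], "source": "CRO B"})

# One combined, trial-wide view for the sponsor.
overview = pd.concat([cro_a, cro_b], ignore_index=True)
print(overview)
print("Total enrolled:", overview["enrolled"].sum())
```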

Implementing a CTOS offers a number of significant advantages, listed below.

Faster trials: A CTOS equips study teams with easy-to-use, role-based dashboards and streamlined navigation that improve productivity and speed up trial execution.

Better decision making: It also enables proactive, closed-loop issue management and improves strategic trial planning with a complete, real-time view of trial status.

Streamlined clinical operations: Finally, a CTOS provides one seamless system of record for shared CTMS, TMF, and study start-up content, improving efficiency and streamlining operations.

What does the future hold?

From both a process and a technology standpoint, the industry is going to see sponsors and CROs work more collaboratively, because both parties will need access to integrated data in at least near real time. Emerging technologies promise to be key enablers of both the formulation and the execution of strategy. To be prepared for what’s coming, organizations need to look at their entire process first, find the hidden manual steps, and eliminate them. They can overcome their inefficiencies by implementing solutions that enable transparency, agility, and anticipatory oversight, such as MaxisIT’s Clinical Trial Oversight Software (CTOS).

MaxisIT

At MaxisIT, we clearly understand the strategic priorities within clinical R&D, and our experience implementing solutions that improve the clinical development portfolio reflects that understanding. Our integrated, platform-based approach delivers timely access to study-specific as well as standardized and aggregated clinical data, and it allows efficient data quality management, clinical reviews, and statistical computing.

Moreover, it provides capabilities for planned-versus-actual trending and optimization, as well as for fraud detection and risk-based monitoring. MaxisIT’s Integrated Technology Platform is a purpose-built solution that helps the pharmaceutical and life sciences industry by empowering business stakeholders with integrated computing and self-service analytics in a strategically externalized enterprise environment, with a major focus on core clinical operations data and clinical information assets. This allows improved control over an externalized, CRO- and partner-driven clinical ecosystem, and it enables in-time decision support, continuous monitoring of regulatory compliance, and greater operational efficiency at a measurable rate.

Demystifying data management with automation through metadata

By Suvarnala Mathangi | Date: April 30, 2019 | Blog | 0 Comment(s)

Things are set in motion in clinical trial management when programmers, ready to perform the analysis, get their code-happy hands on the clinical study data. But once the CROs drop the data off, how does it actually make the journey to the statistical programmer’s desktop? This is a critical process, yet too often compressed deadlines produce systems built from a collection of one-off programs that grow more complicated over time, leaving the data workflow lifecycle fragmented. This article discusses the challenges of supporting data from multiple vendors, including the issues that arise without a consistent framework. It goes on to discuss the benefits of a common platform: breaking clinical data management activities down into “actions”, driving those actions with object-oriented metadata, and rapidly building capability through automated processes.


The problem with outsourced data collection

Most clinical organizations receive data from outside vendors and prepare this data for statistical processing. Whether as a consequence of dealing with multiple outside vendors or due to the specialized needs that differentiate the underlying science of one study from another, the format of the data delivered by the CROs will vary (one vendor delivering SAS data sets, another delivering Excel spreadsheets), as will the preparatory activities that ready the data for analysis (such as extracting a password-protected zipped archive, or applying coding and edit checks).
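As a minimal sketch of what handling these varying formats can look like, the following Python snippet (using pandas; the file and vendor names are hypothetical) dispatches on the delivery format so that downstream steps always see the same structure:

```python
from pathlib import Path
import pandas as pd

# Hypothetical vendor deliveries; in practice these arrive via a managed drop zone.
DELIVERIES = {
    "vendor_a": Path("incoming/vendor_a/labs.sas7bdat"),   # SAS data set
    "vendor_b": Path("incoming/vendor_b/labs.xlsx"),       # Excel spreadsheet
}

def read_delivery(path: Path) -> pd.DataFrame:
    """Dispatch on file extension so downstream code sees one format."""
    readers = {
        ".sas7bdat": pd.read_sas,
        ".xlsx": pd.read_excel,
        ".csv": pd.read_csv,
    }
    try:
        reader = readers[path.suffix.lower()]
    except KeyError:
        raise ValueError(f"No reader registered for {path.suffix}")
    return reader(path)

frames = {vendor: read_delivery(path) for vendor, path in DELIVERIES.items()}
```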

Another element complicating this ecosystem is the organizational structure of the business. Inevitably, strategic responsibility for defining the systems within the company is shared by functional groups, such as informatics and IT.

To receive data from CROs, there will be a system exposed to the outside world through the corporate firewall (likely managed by IT), and data consumers (like informatics) will have to use this shared system rather than build their own.

Many organizations lack a standard operating system and operate in a mixed environment using UNIX, Linux, Windows, and various file servers. A formidable enterprise consideration is scalability, manifested as the ability of the system to accommodate changes for future business without disrupting ongoing or legacy activities.

Speaking in the language of an applications architect, the system must:

  • Support varying input data formats from CROs
  • Externalize configuration details such as user credentials and file server paths (see the sketch after this list)
  • Support varying data preparation activities
  • Be flexible to changes in the infrastructure and portable across operating systems
  • Be easily extensible as the business places new demands on the framework
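As a hedged illustration of the second requirement, configuration details can live in a plain file outside the code. The keys, paths, and the settings.ini file below are hypothetical:

```python
import configparser

# settings.ini is assumed to exist outside the code base, so paths and
# credentials can change per environment without touching any program:
#
# [sftp]
# host = files.example-cro.com
# user = datadrop
# key_file = /secure/keys/datadrop_rsa
#
# [paths]
# incoming = /data/incoming
# staging = /data/staging

config = configparser.ConfigParser()
config.read("settings.ini")

incoming_dir = config["paths"]["incoming"]
sftp_host = config["sftp"]["host"]
```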


Chaotic Approaches to Defining Workflows

Many organizations function day to day within the chaotic environment of getting data from multiple vendors into the analysis pipeline. When the group had to manage data for only a study or two, it may have been possible to write unique programs for each workflow. As the organization begins to support more studies, and concurrently more data, the one-off program approach becomes less and less tenable.

As people leave the company or move on to other roles, knowledge of how to maintain the systems is lost. Fewer and fewer people are capable of fixing problems within each data flow, leaving numerous single points of failure in the clinical data management lifecycle, where communicating program failures and quickly implementing solutions becomes quite taxing.

Over time, the tidy collection of custom programs dealing with one or two studies evolves into an unmanageable assortment of code and scripts. Challenged to support multiple legacy workflows alongside new business, the staff finds itself running in circles, overcoming breakdowns in the system just to keep the business in motion. Some organizations feel this pain acutely, having long outgrown the ad-hoc approach to clinical data management; others are at an earlier stage of growth and are realizing the need for something more robust while the pain is still manageable.


Automation through Metadata

Metadata is a nebulous term, one overused in the industry. In the context of this article, metadata means the elements of data management that differentiate processes, and it therefore enables automating workflows, described as sequences of reusable actions driven by externally captured metadata. The important thing to focus on is the flexibility of the technology to let you define and group metadata in a valuable way. A metadata ontology or syntax needn’t be too verbose; it simply needs to provide a vehicle to define data elements and their values, which the system can then interpret in a helpful way.
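A minimal sketch of this idea, assuming hypothetical action names and a workflow defined purely as metadata, might look like this:

```python
# Reusable actions: each knows how to do one thing, given parameters.
def unzip(params):
    print(f"extracting {params['archive']} with stored password")

def apply_edit_checks(params):
    print(f"running edit checks from {params['check_spec']}")

def load_to_staging(params):
    print(f"loading into staging area {params['target']}")

ACTIONS = {"unzip": unzip,
           "apply_edit_checks": apply_edit_checks,
           "load_to_staging": load_to_staging}

# The workflow itself is metadata (it could equally live in a file or a
# database); nothing study-specific is hard-coded into the engine.
vendor_a_flow = [
    {"action": "unzip", "archive": "incoming/vendor_a/labs.zip"},
    {"action": "apply_edit_checks", "check_spec": "specs/labs_checks.csv"},
    {"action": "load_to_staging", "target": "/data/staging/study01"},
]

def run(flow):
    for step in flow:
        ACTIONS[step["action"]](step)

run(vendor_a_flow)
```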

The art lies in determining the proper tradeoff between automation and flexibility. Certain elements of the system will inevitably not be standard; for example, some vendors may provide data that requires transposition. The system can support such custom tasks while still preserving consistency by capturing metadata about the locations and names of the custom programs being run. This provides insight into the “unique” pieces of the system over time and potentially offers an opportunity to gain additional efficiencies by integrating those features as they stabilize. At a minimum, the framework maintains a “single source of truth” about both the unique and the standard components of each flow.

As noted above, “metadata” takes vastly different meanings in different contexts. If the business has needs that could benefit from a metadata-driven solution, it is entirely possible to build those needs into a redesign without forcing a technology or product choice. Normal-range checks and unit conversions, for example, are metadata-driven processes: by associating a specific type of data transformation with a column of a table or a row of data, the system can capture the calculation lifecycle of given data points.
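As a minimal sketch, assuming hypothetical column metadata for a glucose lab result, such a metadata-driven conversion and check could look like this:

```python
import pandas as pd

# Column-level metadata: conversion factor to the standard unit and the
# normal range in that unit. In practice this would live in a central store.
COLUMN_META = {
    "glucose": {"unit": "mg/dL", "to_standard": 0.0555,  # mg/dL -> mmol/L
                "standard_unit": "mmol_L", "normal_range": (3.9, 5.6)},
}

labs = pd.DataFrame({"subject_id": ["001", "002"], "glucose": [95.0, 180.0]})

for column, meta in COLUMN_META.items():
    standard = labs[column] * meta["to_standard"]
    labs[f"{column}_{meta['standard_unit']}"] = standard
    low, high = meta["normal_range"]
    labs[f"{column}_flag"] = (~standard.between(low, high)).map(
        {True: "out of range", False: ""})

print(labs)
```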

Likewise, study-level metadata, such as the name of the trial, can easily be associated with studies. Processing-level metadata (such as the locations of directories) is already used by the scripts, but it could be used more aggressively to support looser coupling between the infrastructure and the systems. Data answers a clearly defined question (what is this value?); metadata captures details the data alone cannot, such as where the value came from, which error checks or value ranges were tested against it, and where else it appears in the data. This type of information, centrally managed, could provide a number of gains in generating documents and automating processes. Further, a comprehensive view of the data and metadata could be valuable for business decisions and clinical analysis. Metadata can also drive jobs and processes, making system automation and configuration less painful.
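A hedged sketch of what one such centrally managed metadata record might look like for a single value (all field names are invented for illustration):

```python
# Provenance and check history for one data point, kept alongside the data.
value_metadata = {
    "dataset": "LB",                       # where the value lives
    "variable": "LBSTRESN",
    "subject_id": "001",
    "derived_from": "vendor_a/labs.xlsx",  # where the value came from
    "transformations": ["unit conversion: mg/dL -> mmol/L"],
    "checks_run": [
        {"check": "normal range 3.9-5.6 mmol/L", "result": "pass"},
        {"check": "non-missing", "result": "pass"},
    ],
    "also_appears_in": ["ADLB"],           # where else the value is present
}

# Centrally managed records like this can drive document generation and
# automated processing without re-deriving lineage by hand.
for check in value_metadata["checks_run"]:
    print(f'{check["check"]}: {check["result"]}')
```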


To conclude

Data management is a task common to all clinical organizations. Supporting multiple vendors while simultaneously building scalable systems requires some attention to automation. The key benefit of abstracting your processes into definable tasks configurable via metadata is that programming logic and business logic are separated. By compartmentalizing these two facets of the system, each can evolve without placing undue strain on the other. The success of any new technology depends heavily on understanding the needs of the user community and building a system that addresses the pain points of existing approaches. By automating data management through metadata, the laborious task of getting data to statistical programmers becomes greatly simplified and, as such, accessible to a larger community than any ad-hoc system allows. This creates a collaborative environment in which data management, IT, applications development, and statistical programming can work together easily to get things done and, most importantly, it provides a common framework, and therefore a common language, for dealing with this task.

MaxisIT

At MaxisIT, we clearly understand the strategic priorities within clinical R&D, and our experience implementing solutions that improve the clinical development portfolio reflects that understanding. Our integrated, platform-based approach delivers timely access to study-specific as well as standardized and aggregated clinical trial operations and patient data, and it allows efficient trial oversight via remote monitoring, statistically assessed controls, data quality management, clinical reviews, and statistical computing.

Moreover, it provides capabilities for planned-versus-actual trending and optimization, as well as for fraud detection and risk-based monitoring. MaxisIT’s Integrated Technology Platform is a purpose-built solution that helps the pharmaceutical and life sciences industry by empowering business stakeholders with integrated computing and self-service dashboards in a strategically externalized enterprise environment, with a major focus on core clinical operations data and clinical information assets. This allows improved control over an externalized, CRO- and partner-driven clinical ecosystem, and it enables in-time decision support, continuous monitoring of regulatory compliance, and greater operational efficiency at a measurable rate.

Approaches to establishing data standards between sponsors and CROs

By Suvarnala Mathangi | Date: April 30, 2019 | Blog | 0 Comment(s)

CDISC, or the Clinical Data Interchange Standards Consortium, is a standards development organization that supports data exchange in medical research for the benefit of the medical community. CDISC standards are free to use, international, and universal. However, small and medium-sized pharma companies have low adoption rates when it comes to implementing CDISC standards. Is this due to a lack of awareness or understanding? How does a company that is new to standards see the wood for the trees in the new regulatory environment? Let us explore a number of best practices for establishing data standards between sponsors and CROs.

Though we cannot imagine daily life without standards, the pharmaceutical world has been a very slow adopter. Standards are still largely seen as something that the FDA or PMDA wants. Moreover, standards sound complex and stretch our budgets, so it is better to avoid them, right? This article aims to highlight some benefits of adopting standards early (or rather, the risks of not having standards), and to show some approaches to implementing standards that are not overly complex and will not burn the clinical development budget of even a small biotech company with a very limited portfolio.

APPROACHES TO IMPLEMENTING A LIBRARY

While there are undoubtedly many more models and variants than can be mentioned in this article, we will consider models that can work for small to midsize biotech, pharmaceutical, and device companies, as well as academia. We assume that the company either has in-house data standards expertise or has access to such expertise through a third party (a CRO, a technology partner, or a consultant). The approaches presented below gradually increase in complexity and associated cost, but also offer increasing benefits when implemented right. The article assumes that CRF builds and data management work are outsourced to a CRO, but each model can work with an in-house team as well.

APPROACH 1: THE CRO IS THE EXPERT, LET THEM DECIDE

Most small biotech companies with a successful “first in human” clinical trial need to start thinking about their compound development strategy, and probably about the company’s growth strategy itself. At a time when costs are high and return on investment is still a distant goal, such a company may find it wise to hire a biostatistician. Chances are, it will not prioritize hiring a statistical programmer or a data manager, let alone a data standards steward. Nevertheless, many have heard that if they want to submit their data to the FDA or PMDA, they will have to adhere to CDISC standards. So, when the RFP for the next clinical trial is written, CDISC deliverables are required. All CROs confirm that their deliverables are CDISC compliant, so why does everyone say these standards are complicated?

For two parties (sponsor and CRO) to “speak the same language”, they need to agree on what is captured (the CRF, or in CDISC terminology, CDASH) and how it is represented (SDTM). While this “light” approach, in which the biotech company relies on the CRO to define the expectations, presents its challenges, there are situations where the model can work. If a biotech company is committed to a single CRO, and that CRO is really a biometrics extension of the biotech company, then this is a cheap and effective solution. Most likely the CRO has invested in its own set of standards, and the biotech company profits from that investment.

APPROACH 2: THE SPONSOR OWNS A BASIC LIBRARY

At a minimum, a “simple” library should contain a CRF layout (or another form of clinical data visualization) and the associated SDTM annotations. This ensures consistency between clinical data collection and clinical data reporting. In this case, the sponsor communicates with the CRO through a study set-up package. This package contains the protocol, a (sub)set of CRFs, and preferably data specifications.

The data specifications are particularly useful when there are multiple ways of collecting clinical data and the protocol does not provide the level of detail a CRO may need (refer to the example of the medical history of substance abuse). A clinical data specification package may be as simple as a list of the unique names and versions of the CRFs to be used. But how confident can you be that the CRO delivers what you expect? The advantage of standardization is that there are tools on the market that facilitate review of your SDTM data structure. While they will not check the content of the data, these tools are instrumental in checking your data for compliance, and they will give you some level of confidence that the data conforms with FDA or PMDA expectations. When would this be a good model? It is a sustainable option for a small to midsize company that focuses on quality and consistency and may plan to bring a portfolio into full development (phase II/III), but has a limited budget for infrastructure. In this model, the library is a standalone tool; in its most elementary form, a versioned Word document or Excel spreadsheet.
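As a minimal sketch, assuming a hypothetical basic library kept as a spreadsheet of CRF fields and their SDTM annotations, a simple structural check of a CRO delivery could look like this:

```python
import pandas as pd

# Hypothetical basic library: one row per collected field, with its
# SDTM domain and variable annotation. Maintained as a versioned spreadsheet.
library = pd.DataFrame({
    "crf_page": ["Vital Signs", "Vital Signs", "Demographics"],
    "field": ["Systolic BP", "Diastolic BP", "Date of Birth"],
    "sdtm_domain": ["VS", "VS", "DM"],
    "sdtm_variable": ["VSORRES", "VSORRES", "BRTHDTC"],
})

# Variables present in the CRO's delivered VS data set (hypothetical).
delivered_vs_columns = {"STUDYID", "USUBJID", "VSTESTCD", "VSORRES"}

expected = set(library.loc[library["sdtm_domain"] == "VS", "sdtm_variable"])
missing = expected - delivered_vs_columns
if missing:
    print(f"VS delivery is missing expected variables: {sorted(missing)}")
else:
    print("All expected VS variables are present.")
```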

APPROACH 3: THE SPONSOR OWNS AN INTEGRATED LIBRARY

Once standards are implemented, the options for automation, and thus for building efficiency and quality into your process, are almost endless. An integrated library contains not only a data visualization and SDTM mapping but also a metadata library. The additional information captured there includes code lists, computational methods, value-level metadata, and so on. While your library is still small, or managed by a limited number of users, it is still possible to maintain it in Excel. Careful design of your metadata library will allow a very simple extraction routine for study builds and for generating a define.xml according to the latest standards. Once the integrated library is in place, you can start implementing automation, including, but not limited to, automated EDC builds and automated validation of your data structure.
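To illustrate, here is a minimal, hypothetical sketch of such an extraction routine: it reads dataset-level metadata from the library and emits a bare-bones, define.xml-style skeleton. The real Define-XML standard requires far more structure than is shown here.

```python
import xml.etree.ElementTree as ET
import pandas as pd

# Hypothetical slice of an integrated library: dataset-level metadata.
# In practice this would be read from the library spreadsheet or database.
datasets = pd.DataFrame({
    "domain": ["DM", "VS"],
    "label": ["Demographics", "Vital Signs"],
    "structure": ["One record per subject",
                  "One record per vital sign per visit"],
})

root = ET.Element("MetaDataVersion", Name="Study01 define skeleton")
for row in datasets.itertuples():
    ET.SubElement(root, "ItemGroupDef",
                  OID=f"IG.{row.domain}",
                  Name=row.domain,
                  Label=row.label,
                  Structure=row.structure)

ET.ElementTree(root).write("define_skeleton.xml",
                           xml_declaration=True, encoding="utf-8")
```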

CONCLUSION

It is clear that clinical data standards are here to stay, and the sooner you build standardization into your process, the greater the benefits. Sponsors will still have to decide on the level of standardization, which will greatly depend on the company’s operational model, staff, and budget. There is no one-size-fits-all solution, but there is a solution for everyone. The key to success lies in looking at the long-term goals and processes and implementing standardization in phases. Even a minimal investment in standardization can make a huge difference in how CROs and sponsors communicate and manage expectations.

MaxisIT

At MaxisIT, we clearly understand the strategic priorities within clinical R&D, and our experience implementing solutions that improve the clinical development portfolio reflects that understanding. Our integrated, platform-based approach delivers timely access to study-specific as well as standardized and aggregated clinical trial operations and patient data, and it allows efficient trial oversight via remote monitoring, statistically assessed controls, data quality management, clinical reviews, and statistical computing.

Moreover, it provides capabilities for planned-versus-actual trending and optimization, as well as for fraud detection and risk-based monitoring. MaxisIT’s Integrated Technology Platform is a purpose-built solution that helps the pharmaceutical and life sciences industry by empowering business stakeholders with integrated computing and self-service dashboards in a strategically externalized enterprise environment, with a major focus on core clinical operations data and clinical information assets. This allows improved control over an externalized, CRO- and partner-driven clinical ecosystem, and it enables in-time decision support, continuous monitoring of regulatory compliance, and greater operational efficiency at a measurable rate.
