IN-TIME ACTION WITH ON-TIME DATA

By Suvarnala Mathangi | Date: June 30, 2018 | Blog | 0 Comment(s)


Clinical trials that operate at a global scale have clinical sites and patients across multiple geographies. The very nature of clinical trials brings opportunities as well as challenges, and data strewn across the globe adds further complexity. Amid these convoluted multiplicities, the elements of BYOD, wearable technology, and mHealth applications may seem overwhelming. However, they partially resolve the issue of multiplicity.

Let me explain how.

On-Time Data

A dispersed patient population prevents researchers and trial practitioners from collecting the real-time, on-time data necessary to draw timely insights for the drug development process. Eliminating follow-up visits to research centers, seamless data collection, and reduced data cost per patient are some of the advantages that have made these personalized technologies accepted in the clinical trial process by practitioners, patients, and also regulatory authorities such as the FDA.

In August 2017, the National Institutes of Health clinical trials database returned over 170 results for the search term ‘fitbit’, over 300 for ‘wearable’, and over 440 for ‘mobile app.’ This reflects the acceptance of BYOD, wearable technology, and mHealth applications in the data collection process during clinical trials by the entire ecosystem.

While this is comforting news on one side, researchers, investors, and practitioners must deal with a different challenge on the other. These technologies have led to device diversity, which is followed by data diversity during clinical trials. With such diversity, information systems and insight engines experience a continuous flow of data; by 2021, 504.65 million wearable devices are predicted to be sold. Extremely useful data is made available, but again, not in a consumable form.

Many legal standards, statutory guidelines, and recommendations for clinical data formats are already in place to resolve the issue of data diversity and complexity. However, it is also a matter of transparency and of the speed with which meta-schemas are deployed and data repositories maintained. For future inquiry and research, maintaining a metadata repository is imperative. Beyond that, delays and gaps in updating researchers and scientists on new developments and data points during clinical trials would slow down the process drastically, with further adverse cost implications for both investors and patients.

Data diversity also brings the complexity of managing a variety of data formats. This creates a heavy volume of data that becomes unmanageable in the absence of a robust statistical computing environment for clinical reporting.

In-Time Action

Instead of going back and forth to verify the quality and timeliness of data coming out of a clinical trial, on-time data provides the information necessary to eliminate expensive iterations. On-time data facilitates in-time action; and in-time action helps fulfill regulatory compliance and shorten the cycle of each clinical trial, while reducing costs.

A single platform that understands and interprets the variety of clinical data and clinical devices can turn such data into insights. Seamless, transparent, on-time data, aggregated and standardized across studies in a cloud-based data repository, backed by AI and built around an SCE, will enable all stakeholders in the clinical trial to act in time.

MaxisIT

At MaxisIT, we clearly understand strategic priorities within clinical R&D, and this resonates with our experience of implementing solutions for improving the clinical development portfolio via an integrated platform-based approach, which delivers timely access to study-specific as well as standardized and aggregated clinical data, and allows efficient data quality management, clinical reviews, and statistical computing.

Moreover, it provides capabilities for planned vs. actual trending and optimization, as well as for fraud detection and risk-based monitoring. MaxisIT’s Integrated Technology Platform is a purpose-built solution that helps the Pharmaceutical & Life Sciences industry by “Empowering Business Stakeholders with Integrated Computing, and Self-service Analytics in the strategically externalized enterprise environment with major focus on the core clinical operations data as well as clinical information assets; which allows improved control over externalized, CROs and partners driven, clinical ecosystem; and enable in-time decision support, continuous monitoring over regulatory compliance, and greater operational efficiency at a measurable rate”.

The Superhero of Clinical Analytics & Reporting – Statistical Computational Environment

By Suvarnala Mathangi | Date: June 30, 2018 | Blog | 0 Comment(s)


The herculean task undertaken by the Human Genome Project, which cost over 3 billion dollars and spanned fifteen years, can now be redone by sequencing a human genome in 26 hours. Though I was not involved in this project directly, it is an inspiration to a healthcare IT entrepreneur like me.

So, what is the point I’m trying to make?

I’m referring to the gap between 15 years and 26 hours. In the future, this gap may diminish to a few seconds, bringing down costs too. It is disappearing with the proliferation of technologies that can perform super-computations for drug discovery in nanoseconds, and of a research-conducive environment that serves more as a library than a mere repository.

While faster computation at lower average cost is one good reason to emphasize the need for a Statistical Computing Environment (SCE), there are others: crowdsourcing, unification of diverse data formats, regulatory compliance, reference for future research, accountability, auditability, transparency, extensibility, and scalability all argue for its adoption.

So, what can you expect from an SCE?

A statistical computing environment offers certain features that help clinical trial processes thrive and that support existing and prospective research. A few that come to mind:

Analytical excellence: An SCE available on the cloud allows you to integrate different analytical applications, either within the system or with third-party solutions. It also provides an environment for the execution and control of different programming languages such as SAS, R, and Python.

Analytics requires good-quality source data, which is best achieved in a controlled environment that supervises and administers all information via secure logins, audit trails, versioning, and role-based privileges and policies.
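To make the controlled-environment idea concrete, here is a minimal, hypothetical Python sketch (the blog names no specific implementation): role-based permissions gate every read and write, and every attempt, allowed or not, lands in an append-only audit trail. The role names and fields are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical role-to-permission map; a real SCE would manage this centrally.
ROLE_PERMISSIONS = {
    "statistician": {"read", "write"},
    "reviewer": {"read"},
}

@dataclass
class ControlledRepository:
    audit_trail: list = field(default_factory=list)
    _data: dict = field(default_factory=dict)

    def _log(self, user, action, allowed):
        # Every attempt is recorded, including denied ones.
        self.audit_trail.append({
            "user": user, "action": action, "allowed": allowed,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def write(self, user, role, key, value):
        allowed = "write" in ROLE_PERMISSIONS.get(role, set())
        self._log(user, f"write:{key}", allowed)
        if not allowed:
            raise PermissionError(f"role '{role}' may not write")
        self._data[key] = value

    def read(self, user, role, key):
        allowed = "read" in ROLE_PERMISSIONS.get(role, set())
        self._log(user, f"read:{key}", allowed)
        if not allowed:
            raise PermissionError(f"role '{role}' may not read")
        return self._data[key]
```

The point of the sketch is that access control and the audit trail live in the same code path, so the trail can never be bypassed by a permitted operation.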
Workflow optimization: Heterogeneity in data is now a well-known attribute of clinical trials. It is unavoidable given the variation in age, geography, and media used in a clinical trial.

An SCE provides a global workspace with a workflow for review and approval across multiple data sources. It allows teams from across the globe to access data, run computations, collaborate, and share information using a single system. This also facilitates traceability of existing data back to the source data.

Data standard support and data preparation: Not just any format works when you are preparing for an FDA approval. Metadata formats and data stacking as per CDISC models – including SDTM, SEND, ADaM and Define 1.0 and 2.0, and other extensible custom models – are supported.
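A tiny Python sketch of what data-standard support can mean in practice: checking that a tabular dataset carries the variables an SDTM-style domain requires before it goes into a submission package. The required-variable lists below are abbreviated illustrations, not the full CDISC standard.

```python
# Abbreviated, illustrative required-variable lists for two SDTM domains.
REQUIRED_VARIABLES = {
    "DM": ["STUDYID", "DOMAIN", "USUBJID", "SUBJID"],  # Demographics
    "AE": ["STUDYID", "DOMAIN", "USUBJID", "AETERM"],  # Adverse Events
}

def missing_variables(domain, records):
    """Return required variables absent from every record of the domain."""
    required = set(REQUIRED_VARIABLES.get(domain, []))
    present = set()
    for record in records:
        present.update(record.keys())
    return required - present
```

An SCE would run checks like this automatically on ingest, so format gaps surface long before submission.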

Regulatory Compliance: This is not just about supporting data and format management for regulatory approvals, but also about giving users quick access to create instant data outputs for an adaptive and agile submission process.

Some exciting benefits that draw my attention

Here are some features I strongly believe an SCE should possess.

Metadata management

Transparency is one of the things an SCE offers a clinical trial. The complexity of data repository hierarchies and their sharing patterns demands a management tool that can expedite reproduction of the most relevant data in the most preferred format. This process includes storing, indexing, cataloging, aggregating, and accessing data during a search.
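The store/index/search cycle above can be sketched in a few lines of Python. This is a hypothetical toy catalog, not a real SCE schema; the field names and tags are invented.

```python
# Minimal metadata catalog: register datasets with a format and tags,
# then search by tag (optionally filtered by format).
class MetadataCatalog:
    def __init__(self):
        self._entries = {}   # dataset id -> metadata
        self._index = {}     # tag -> set of dataset ids

    def register(self, dataset_id, fmt, tags):
        self._entries[dataset_id] = {"format": fmt, "tags": set(tags)}
        for tag in tags:
            self._index.setdefault(tag, set()).add(dataset_id)

    def search(self, tag, fmt=None):
        hits = self._index.get(tag, set())
        if fmt is not None:
            hits = {d for d in hits if self._entries[d]["format"] == fmt}
        return sorted(hits)
```

Even this toy version shows why indexing at registration time matters: search cost stays flat no matter how many studies the repository accumulates.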

Controlled Environment

Apart from administration and security management, the controls within the environment also enable data version control across multiple iterations, limited user access, role-based access, and context-specific permissions. This is made available to all knowledge workers in areas such as pre-clinical, clinical operations, and medical affairs to drive global collaboration between internal team members, consultants, contractors, and development partners.

Extensibility

Lack of extensibility outside an SCE is one reason attributed to high attrition rates in the latter stages of clinical trials. An SCE, to a great extent, supports the success of commercializing useful therapies and molecular formulae. Drug development can be optimized when data is integrated into the formats in which a scientist can access it. With an XIS platform at the core, users can extract data from an XIS server as well as other XML- or web-service-enabled sources using a web-based application.

Scalability

Whether in classical pharmacology, forward pharmacology, reverse pharmacology, or phenotypic drug discovery, the ability to scale data and output swiftly is much sought after.

Applying mass spectrometry to elucidate chemical structures from databases is a high-demand expectation of researchers and investors seeking to make the most of existing, approved chemical formulae. Applying nuclear magnetic resonance (NMR) spectroscopy to existing molecular structures during a discovery process or clinical trial can open many other channels within the drug discovery process. An SCE allows such branching out of discoveries, enabling researchers to add more value to the clinical trial.


A Singular Statistical Computing Framework Answers Clinical Data Diversity & More

By Suvarnala Mathangi | Date: June 30, 2018 | Blog | 0 Comment(s)


With the complexity involved in regulatory and exploratory reporting for clinical trials, adding another element such as an SCE might seem overwhelming to the clinical development team, particularly the biometrics team, management, and investors. I would, however, contend that it can reduce data-to-reporting cycle time and related costs; improve the turnaround time of statistical computing, review, and approval cycles; and significantly improve everyone’s ability to manage and access regulatory-submission-standard data and reporting deliverables with the required controls.

So, let me get started by elaborating on ‘What is a Statistical Computational Environment?’.

An SCE?

When I review the definition of a Statistical Computing Environment (SCE) as “a data repository of source code, input data, and outputs with compliance features such as version-control of SCE elements, reusable programming ability, and links to the clinical data management system,” it is almost self-explanatory why your clinical trials should be nurtured in an SCE.

While the definition reveals some of the most apparent reasons to run a clinical trial within an SCE, there are many other compelling, less evident aspects that make an SCE imperative for your trial. An SCE also consists of a structured programming environment that eases project management by enabling workflows, an operational analysis data repository that optimizes data standardization, and a metadata-driven architecture containing information about data and the status of various processes, while streamlining them for compliance and transparency.
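The workflow-enabling, status-tracking side of that definition can be sketched as a small Python state machine. The step names and ordering rule are illustrative assumptions, not a prescribed SCE workflow.

```python
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"

class AnalysisWorkflow:
    # Hypothetical ordered steps of one analysis pipeline.
    STEPS = ["ingest", "derive", "analyze", "report"]

    def __init__(self):
        self.status = {step: Status.PENDING for step in self.STEPS}

    def approve(self, step):
        # Enforce ordering: every earlier step must already be approved.
        idx = self.STEPS.index(step)
        for earlier in self.STEPS[:idx]:
            if self.status[earlier] is not Status.APPROVED:
                raise RuntimeError(f"'{earlier}' must be approved before '{step}'")
        self.status[step] = Status.APPROVED
```

The metadata here (per-step status) is exactly what lets the environment report progress and block out-of-order approvals, which is the compliance value the paragraph describes.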

Why is an SCE so imperative?

Here are a few observations on how an SCE keeps clinical trials aligned not just with FDA criteria, but also with the organizational vision.

Mastering Reproducibility

Statistical reproducibility is about providing detailed information on the choice of statistical tests, model parameters, threshold values, and so on. It also concerns pre-registration of study design to prevent p-value hacking and other manipulations.

By being able to reproduce the workflow, including the code and data used to validate decisions made using research documents, the clinical trial team can save enormous amounts of time. To create and deliver newer drugs from older formulae and data, an SCE allows access to pre-existing methods and results. This means colleagues can approach new applications with a minimum of effort.
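One common building block for this kind of reproducibility, sketched here as an assumption rather than any specific product's method, is fingerprinting each run: hash the code, inputs, and parameters together so a reviewer can confirm they are re-running exactly the same analysis.

```python
import hashlib
import json

def run_fingerprint(code, inputs, params):
    """Deterministic SHA-256 fingerprint of an analysis run.

    Identical code + inputs + params always yield the same digest,
    so any drift in what is being re-run is immediately visible.
    """
    payload = json.dumps(
        {"code": code, "inputs": inputs, "params": params},
        sort_keys=True,  # canonical ordering keeps the hash stable
    )
    return hashlib.sha256(payload.encode()).hexdigest()
```

Stored alongside the outputs, such a fingerprint gives later reviewers a cheap equality check before they invest time in a full re-execution.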

Site Selection

Trial-and-error site selection has prevented researchers from creating a validated foundation that avoids large unwarranted costs and time lags. The numbers are disappointing: 80% of trials fail even to meet enrollment, while a sponsor spends roughly $20,000 to $30,000 on average on site selection. Apart from this, researchers also have to deal with poorly qualified sites.

All these challenges can be answered by developing a data-driven site selection approach that serves not only the existing clinical trial, but also future trials in a similar area of study.
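A data-driven site score might look something like the Python sketch below. The weighting, the $30,000 cost ceiling (echoing the figure cited above), and the field names are all invented for illustration; a validated model would be calibrated on historical trial data.

```python
def site_score(site, w_enroll=0.7, w_cost=0.3):
    """Hypothetical score: historical enrollment rate vs. startup cost."""
    enroll = site["patients_enrolled"] / max(site["patients_targeted"], 1)
    # Normalize cost against a ~$30k upper bound; cheaper sites score higher.
    cost = 1.0 - min(site["startup_cost_usd"] / 30_000, 1.0)
    return round(w_enroll * enroll + w_cost * cost, 3)

def rank_sites(sites):
    """Order candidate sites best-first by score."""
    return sorted(sites, key=site_score, reverse=True)
```

Even a simple score like this turns site selection from trial-and-error into a repeatable, auditable ranking that later studies in the same area can reuse.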

Faster Approvals and Regulatory Compliance

The FDA has been instrumental in convincing clinical trial owners to submit metadata in a unified format for approval. To fulfill CDISC and other criteria, a clinical trial has to run in a statistical computing environment. Beyond this regulatory framework, many others require the creation and maintenance of metadata. This not only keeps the trial compliant, but also ensures faster go-to-market time and a reference point for future studies at an extremely low cost.

While these are some strategic reasons for clinical trials to operate within an SCE, there are some functional reasons too.

Rising complexity in clinical trials also demands more computational power. SCEs add agility for researchers and stakeholders by allowing them to store, maintain, access, and edit information on a common platform. These environments ease non-clinical and clinical pharmacology analysis, translational medicine, and predictive modeling and simulation.

Teams should, however, adopt an integrated approach when deploying SCEs. An SCE combined with an integrated technology solution that processes data through a singular analytical framework can build a foundation for future clinical research.


Data-savvy drug development needs a Good Statistical Computing Environment (SCE)

By Suvarnala Mathangi | Date: June 30, 2018 | Blog | 0 Comment(s)


A drug development process involves diverse data sources, heterogeneous data formats, a multitude of drug outputs, and always more. Only a highly agile and adaptable environment, supported by cohesive data management and reporting processes, can accommodate and respond to the constant mutations and metamorphoses in the process.

Who can save the day and match such adaptations? Statisticians and their supporting programming teams are the answer. A statistical programming and analysis team working in tandem with the discovery and development team can seamlessly deliver data analysis and adhere to necessary data standards only when supported by a data-savvy environment, which includes infrastructure, tools, processes, and mindset.

This is where I always recommend a good statistical computing environment (GSCE) for creating viable, market-ready drug discoveries. Most importantly, it eliminates the chaos brought by data heterogeneity and adds the traceability, data security, accessibility, auditability, efficiency, and control needed to meet regulatory compliance, for the final user.

Why is a Good Statistical Computational Environment as important as the process?

A good SCE is a consequence of two drivers: the need to expedite drug discovery and the obligation to adhere to the requirements of the FDA and other approval bodies.

Patients and the medical community also accept a drug only when there is data proving its safety and efficacy. In fact, the FDA's stipulations require a drug to pass its criteria with data proof made available in the prescribed format and proforma, with auditability and traceability, via a well-governed process. The FDA's electronic submission guidance defines what the agency expects to receive: multiple types of data files, documentation, and programs, the major components of an analysis environment.

To match these, ICH E9 recommends a slew of good practices from a regulatory perspective. It emphasizes the good statistical science of documented statistical operations, which ensures the validity and integrity of prespecified analyses and lends credibility to the results.

Data-driven processes such as pharmacogenetic testing, multigene panel testing, targeted genetic testing for rare diseases and hereditary cancers, and human genome sequencing have seen the advantage of thriving in a statistical computing environment. They also have a high potential of receiving insurance coverage.

The major challenge curbed by ICH E9's standardized formats is the diversity across different stages of clinical testing, each focusing on different statistical skill sets. The drug development process, which begins with pharmacokinetic and pharmacodynamic studies, requires early proof of concept supported by dose-ranging studies. Once the drug passes this stage, it is tested in the confirmatory phase for further efficacy and safety among a more heterogeneous population.

The life cycle development stage and the post-marketing study phase follow in the development process. Each phase, with its different data demands, needs to be aligned with computations that can deliver relevant outputs.

How to begin envisioning a good SCE?

A good SCE needs to empower a data-driven mindset and provide metadata-driven computing, technological scalability, analytical performance, an integrated approach with a single source of truth, role-based controlled processes, and continuity across the stages of the process chain.

Most clinical data is processed through several programming activities, which need to follow the good statistical practices laid out by ICH E9. Other bodies and frameworks that help develop a sound statistical environment include Clinical Data Interchange Standards Consortium (CDISC) data standards, the CDISC analysis data model (ADaM) guidance, HL7 data standards, the harmonized CDISC-HL7 information model (BRIDG), electronic records regulations, FDA guidance for computerized systems, and electronic Common Technical Document (eCTD) data submissions.

ADaM guidance provides metadata standards for data and analyses. This enables statistical reviewers to understand, replicate, explore, confirm, and reuse the data and analyses. A transparent, reproducible, efficient, and validated approach to designing studies and to acquiring, analyzing, and interpreting clinical data will achieve a faster drug-to-market cycle.
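The traceability that ADaM-style metadata enables can be pictured as a chain: each reported result points back to the analysis dataset and source data that produced it. Here is a hypothetical Python sketch of walking that chain; the record names (a table, ADSL, DM) are illustrative, not real study artifacts.

```python
def trace_result(results_meta, result_id):
    """Walk result -> analysis dataset -> source, returning the lineage.

    results_meta maps each artifact id to a record whose optional
    'derived_from' field names its immediate predecessor.
    """
    chain = [result_id]
    node = results_meta[result_id]
    while node.get("derived_from"):
        chain.append(node["derived_from"])
        node = results_meta[node["derived_from"]]
    return chain
```

A statistical reviewer asking "where did this number come from?" is, in effect, asking for exactly this walk, which is why the metadata must record `derived_from` links at every step.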

Genetic counselors, hospital administrators, pharmacy advisory firms, academic studies, and consulting reports involved in an SCE also enjoy monetary benefits. Drugs that adhere to these standards also attract investor interest. Investor valuations for biotech companies are based not purely on profits but also on long-term potential, given the longer gestation period of drug development.

To reap these benefits, the industry needs a Statistical Computing Environment that supports deriving statistical insights in a controlled, adaptable, auditable, and traceable manner, utilizing an integrated approach based on metadata-driven processes and relying on a single source of truth for data, i.e., an enriched clinical data repository.

By achieving this, the records of decisions stored from different stages of the process keep the drug's clinical cycle transparent and accessible, and help other scientists and clinical research stakeholders in future research. The data is retained intact in its original format, while the cohesiveness of the insight the repository holds remains uninterrupted.

All put together, it indicates that the drug development process should incubate in a sound statistical computational environment for best results.

Data, a rising mindset and methodology for drug development

By Suvarnala Mathangi | Date: June 30, 2018 | Blog | 0 Comment(s)


I have been seeing more and more headlines about genome-wide dissection of genetics, high-throughput technologies, genetic modification, and other biomedical breakthroughs. All point to one common catalyst: a data-intensive drug discovery model used by the successful team. But such a model only thrives in the presence of a sound statistical computing environment.

To me, two elements drive the success of data management during clinical trials. One is the ability to bring diverse data forms into a single comprehensible format; the other is data-driven, in-depth knowledge to evaluate and validate the drug concept. The variety of data forms complicates security, accessibility, traceability, and auditability, which otherwise empower a clinical practitioner with better insights and adherence to regulatory compliance. These factors can be achieved by introducing the drug into a data-driven environment, supported by a team with a similarly data-intensive mindset who can build a statistical computing environment (SCE). Using an SCE, customers can establish control, auditability, and traceability, and gain efficiency without compromising regulatory compliance.

NCBI also identifies publicly available biomedical information as the basis for identifying the right drug target and creating a drug concept with true medical value. This has also enhanced understanding of the pathophysiological mechanisms of diseases with the help of a data-driven development model.

Prediction of metabolism, application of translational bioinformatics to long-read amplicon sequencing of chronic myeloid leukemia and multi-drug-resistant bacteria, and gene mutation studies are results of a data-intensive model. Apart from viewing the data-intensive drug development model as an in-house capability, I also see it as a competitive factor for R&D units and scientists going to market.

Here is why it is competitive. It has paved the way for virtual drug development models that allow scientists to test and analyze a molecule even before the formulation stage. Eliminating challenges in the pre-clinical stages can reduce development costs and the lead time each formula takes to become a druggable output. Nature, the popular science journal, has reported that the unprecedented challenges of the pharma business model can be met by shifting investments to the earlier stages of drug discovery.

To establish this approach, pharmaceutical and healthcare companies involved in drug development need to adopt an integrated data management approach. This involves defining data sources for reliability and consistency, building a robust data mining and warehousing infrastructure, and producing insights useful to the various stakeholders of the project and data users. McKinsey has pointed out that the first step alone takes about a year to complete.

Having been involved with multiple integrated clinical development platform projects, I have seen drug-to-market lead time come down when a traditional drug development model was transformed into a data-intensive, technology-enabled environment that is integrated and automated, allowing teams to focus on data analysis and avoid mundane data processing tasks.

To gain a competitive edge, more and more successful drug development companies are moving toward a data-intensive drug development and discovery environment. Are you on the banks of the data-intensive drug development process, or stifled by the challenges of a traditional approach?


The key to faster clinical development – Identifying the potential indicators of change.

By Suvarnala Mathangi | Date: May 31, 2018 | Blog | 0 Comment(s)

To avoid a constrained growth model, an R&D-focused organization needs to empower its business stakeholders with cross-functional insight and a portfolio-level information management strategy.

A prospective solution should allow the necessary roll-down and roll-up flexibility in identifying potential indicators of change (e.g., risks and/or opportunities) and their impacts, while supporting confident decision-making.

To mobilize such a business-transformative strategy, the technology enablers must ensure that current clinical development processes, existing investments, partnerships, and corporate strategies are not disrupted. The evolution of AN INTEGRATED DATA MANAGEMENT PLATFORM therefore begins with bringing in, or rather aligning, existing and new data sources into a single source of truth (a data repository), wherein they all share common characterization, identity, accessibility, and predictable movement in reference to each other. Above all, a governance layer establishes complete control over the clinical data assets as they move through the clinical development processes.
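The alignment step described above, giving records from different sources a "common characterization", boils down to mapping source-specific field names onto one shared schema before loading the repository. Here is a minimal, hypothetical Python sketch; the source names and field mappings are invented examples.

```python
# Hypothetical per-source field mappings onto a common schema.
FIELD_MAPS = {
    "edc": {"subj": "subject_id", "vis": "visit"},
    "lab": {"patient_ref": "subject_id", "visit_name": "visit"},
}

def harmonize(source, record):
    """Rename a source record's fields to the common schema.

    Fields without a mapping pass through unchanged, so source-specific
    measurements (e.g. lab values) are preserved.
    """
    mapping = FIELD_MAPS[source]
    return {mapping.get(k, k): v for k, v in record.items()}
```

Once every source emits `subject_id` and `visit` under the same names, downstream joins, governance rules, and lineage tracking can treat the repository as one coherent dataset.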

Overall, we understand that there has been a huge gap, or rather a siloed approach, between data capture, data management, and the processes used to derive actionable insight. Data capture systems have improved greatly over time, as have the sources and varieties of data, but data management and the analysis and reporting processes still suffer. This demands integration, aggregation, and harmonization of data with integrated tools, to achieve maximum downstream benefits in analytics, reporting, and mining of critical clinical and operational data horizontally across the silos.

A single-source clinical data repository, with its ability to deliver timely access to aggregated data across historical studies and real-world data, has paved the way to detect diseases at earlier stages when they can be treated more effectively; manage specific patient and population health; and detect health care fraud or clinical risks more quickly and efficiently.

Development of medicines through agreement on evidence standards, including extrapolation of available clinical data for predicting drug behavior and encouraging a multi-arm, multi-company approach to reduce the number of patients in clinical trials, are some outcomes of global trials. This requires drug manufacturers to choose an integrated clinical data management solution that provides integrated capabilities across the life cycle of data from source to submission.


A Perfect Match – When finding the right solution provider is as important as the solution itself

By Suvarnala Mathangi | Date: May 31, 2018 | Blog | 0 Comment(s)

The global clinical data analytics market will be worth $13.8 billion by 2023, according to a ResearchAndMarkets report, which identifies the growing popularity of EMRs as the key driver of this growth. However, the clinical development industry is still grappling with disintegrated and non-standardized clinical trial processes, further handicapped by the heterogeneity of clinical data in the market.

The costs of clinical trial operations are continuously being evaluated for ways of achieving greater efficiency while reducing expenses. Many biopharma companies, now more than ever, focus on containing costs while maintaining quality. This approach applies to eClinical technology used to support a trial. An eClinical solution should include best-in-class integrated technology that is cost-effective, high quality, and efficient.

An example of this type of technology used in clinical research is electronic data capture (EDC) combined with interactive response technologies (IRTs) such as IVR (interactive voice response). This software combination has proven effective in reducing the cost of clinical trials. These integrated solutions increase the efficiency of a clinical trial and provide a single environment with the flexibility and scalability to keep pace with demands.

Most integrated eClinical solutions are the result of single-technology products paired together or integrated into a suite, resulting in a single platform with multiple components such as EDC and IVR. Comprehensive multicomponent solutions that combine EDC, IVR, CTMS (clinical trial management system), and labs, built from the ground up as a single integrated solution, are uncommon and rarely best-in-class.

The emergence of integrated solutions results from the need to access central lab, EDC, and IVR data, including ePRO, within a single system in the fastest, most reliable manner. As integration solutions have evolved, the continuing challenge is to move beyond the transfer or sharing of flat files to true data integration.

Integrated approach makes business sense

A platform that integrates data sources in real time benefits the end user by providing immediate access to various forms of clinical data, including central lab, IVR, and EDC data. Such visibility across different sources can bring immediate attention to data quality and patient safety issues. This, in turn, reduces errors, mitigates risk, reconciles data, and minimizes redundancy.

An integrated, platform-based approach creates a ripple effect, starting with standardized data made available for a single study as well as aggregated across studies in a single repository. In effect, the end user no longer needs to log in to multiple systems. This single source of truth reduces data redundancy, increases data traceability across technologies, and improves the quality and reliability of data shared between applications. The centrally held data can then be used efficiently for reporting and analytics-driven insight generation, leading to faster decision-making. It therefore makes strong business sense for a sponsor or CRO to select a vendor that provides an integrated, platform-based solution.
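As a rough illustration of the "single source of truth" idea, the sketch below (plain Python with hypothetical field names, not any specific vendor API) shows how per-study records with differing raw field names might be mapped onto shared fields and aggregated into one repository:

```python
# Hypothetical sketch: merge per-study datasets into a single standardized
# repository so users query one source instead of logging in to many systems.

def standardize(record, study_id):
    """Map a study-specific record onto shared (assumed) field names."""
    return {
        "study_id": study_id,
        # different studies may label the subject field differently
        "subject_id": record.get("subj") or record.get("subject_id"),
        "visit": record.get("visit", "UNSCHEDULED"),
        "value": record.get("value"),
    }

def build_repository(studies):
    """studies: {study_id: [raw records]} -> one flat, standardized list."""
    repo = []
    for study_id, records in studies.items():
        repo.extend(standardize(r, study_id) for r in records)
    return repo

repo = build_repository({
    "STUDY-001": [{"subj": "S1", "visit": "V1", "value": 7.2}],
    "STUDY-002": [{"subject_id": "S9", "value": 5.5}],
})
```

Once every record carries the same fields, cross-study reporting and traceability become simple queries over one collection rather than reconciliation across systems.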

Traditionally, clinical projects are executed in a siloed manner, with redundancy, delay, cost inflation, and ambiguity emerging as byproducts rather than valuable insights. The siloed approach in the clinical context is a socio-technical problem: it needs the cooperation of both realms, which itself indicates the need to integrate clinical data processes. An end-to-end integrated approach would resolve the complexities that arise from silos.

In this context, McKinsey & Company articulates the need for an integrated approach very aptly: “Effective end-to-end data integration establishes an authoritative source for all pieces of information and accurately links disparate data regardless of the source—be it internal or external, proprietary or publicly available.”

Implementing an end-to-end integration requires certain capabilities: inclusion of trusted sources of data and documents, the ability to establish cross-linkages between elements, robust quality assurance, workflow management, and role-based access to ensure restricted visibility of specific data elements.
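One of the capabilities named above, role-based access with restricted visibility, can be sketched in a few lines. This is a simplified illustration with made-up roles and fields, not a description of any particular platform's access model:

```python
# Hypothetical sketch of role-based access: each role sees only the data
# elements it is cleared for; everything else is masked.

VISIBLE_FIELDS = {
    "monitor":      {"subject_id", "visit", "status"},
    "statistician": {"subject_id", "visit", "status", "value"},
}

def restrict(record, role):
    """Return a copy of the record with non-visible fields masked."""
    allowed = VISIBLE_FIELDS.get(role, set())  # unknown role sees nothing
    return {k: (v if k in allowed else "***") for k, v in record.items()}

rec = {"subject_id": "S1", "visit": "V2", "status": "complete", "value": 9.1}
```

For example, `restrict(rec, "monitor")` masks the measurement value, while `restrict(rec, "statistician")` exposes it; in a real system the role-to-field mapping would itself be governed and audited.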

Flexibility – How can an innovative, right-sized company like MaxisIT be more flexible and agile?

Large companies, unable to take up smaller slices of a large project, increase its cost and compel the sponsor or CRO to depend completely on one vendor. With an integrated platform approach, MaxisIT can reduce costs by delivering technology solutions to specific, standalone problems.

As an integrated clinical development platform, MaxisIT embraces a problem-solving approach rather than a scale-driven one. Our process allows us to solve clinical data and development problems in a flexible, agile, and integrated manner.

Most importantly, we have demonstrated that our solution is compliant with regulatory guidelines and can be configured to the type of study or data management need. The often frustrating problem of reconciling compliance with data formats is fully taken care of by our solution.

Though large software companies have bigger budgets for visibility, they often lack the bandwidth to attend to the small challenges that arise during implementation. MaxisIT, with its customer support, attends to the smaller details too while delivering an integrated solution.

Why MaxisIT for an integrated solution?

At MaxisIT, every clinical data management project matters to us. We see to it that each process of a project revolves around one common mission: to strive for healthier and better lives. To make this happen, we focus on integrating the different processes across the lifespan of drug development, rather than worrying about the size of the project.

Exigency of the Industry – Integrated Clinical Development

By Suvarnala Mathangi | Date: April 30, 2018 | Blog | 0 Comment(s)

There has been a collective realization throughout the industry that the key data management processes must show efficiency, continuous compliance, scalability, sustainability and measurable productivity gains that, over time, will transform the research and development processes used to develop new drugs.

Industry leaders are addressing this problem by delivering solutions that span study design, data integration, clinical process optimization, targeted therapies, benefit-risk assessments, and cost reduction.

Primarily, clinical data management solutions empower the ecosystem with decision-making abilities backed by precise, accurate, and timely insights. Several best practices culminate in an integrated clinical data management solution that works for current processes and the future of clinical trials.

Integrated Data Management Platform: No single software system captures all kinds of clinical data. During clinical trials, practitioners use multiple software systems, devices, and apparatus to capture data. Though EDC adoption is growing fast globally, some local circumstances and low-capital projects still rely on paper-based case report forms alongside ePROs. Some data also comes as X-rays and scans, a different format altogether. The industry therefore needs a platform that integrates these diverse sources and delivers all of the data into a single clinical data repository.
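A common way to achieve this kind of integration is a small adapter per source that normalizes each format into one record shape before loading. The sketch below is purely illustrative (the source formats and field names are assumptions, not a real EDC or imaging schema):

```python
# Hypothetical sketch: per-source adapters normalize heterogeneous inputs
# (an EDC export, a transcribed paper CRF line, imaging metadata) into one
# common record shape for a single clinical data repository.

def from_edc(row):
    """EDC export row (assumed dict layout) -> common record."""
    return {"source": "EDC", "subject": row["SubjectID"],
            "item": row["Item"], "value": row["Value"]}

def from_paper_crf(line):
    """Transcribed paper CRF line 'subject,item,value' -> common record."""
    subject, item, value = line.split(",")
    return {"source": "PCRF", "subject": subject, "item": item, "value": value}

def from_imaging(meta):
    """Imaging metadata (assumed keys) -> common record pointing at the file."""
    return {"source": "IMG", "subject": meta["patient"],
            "item": meta["modality"], "value": meta["file"]}

def load_all(edc_rows, paper_lines, imaging_meta):
    """Run every adapter and return one flat repository list."""
    repo = [from_edc(r) for r in edc_rows]
    repo += [from_paper_crf(l) for l in paper_lines]
    repo += [from_imaging(m) for m in imaging_meta]
    return repo
```

The adapter pattern keeps source-specific quirks out of the repository itself: adding a new device or format means adding one adapter, not changing downstream consumers.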

Real-time Data Processing and Insights: For a long time, information churned out of large data sets has been processed in isolation, causing complexity. Today there is a pressing need for real-time analytics over integrated data from devices, health systems, payers, patients, providers, and other systems and participants in order to deliver services.

Metadata-driven Process: To deal with entropy in clinical data, efficient clinical data management systems use metadata. Metadata facilitates information flow, improves retrieval, aids discovery, and provides context. It supplies some of the comforts most lacking for a data scientist trying to arrive at meaningful insights, including user-friendliness through a simple and intuitive interface.
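A minimal sketch of the metadata-driven idea, assuming a toy registry of variable names (the codes, labels, and units here are illustrative, not from any standard dictionary):

```python
# Hypothetical sketch: a metadata registry maps raw variable codes to labels
# and units, so retrieval and display get context instead of cryptic fields.

METADATA = {
    "SYSBP": {"label": "Systolic Blood Pressure", "unit": "mmHg"},
    "HR":    {"label": "Heart Rate",              "unit": "beats/min"},
}

def describe(variable, value):
    """Attach the registry's label and unit to a raw measurement."""
    meta = METADATA.get(variable, {"label": variable, "unit": ""})
    return f"{meta['label']}: {value} {meta['unit']}".strip()
```

In practice such a registry would be maintained centrally (e.g., aligned with a standard like CDISC), so every study and tool interprets the same code the same way.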

Reporting Tools: Clinical data management solutions are seldom integrated with reporting tools that can configure, execute, and review reports with built-in analytical algorithms for analyzing data quality, clinical significance, and operational performance, as well as for reviewing genetics and proteomics data.

Risk-Based Monitoring & Assessment: A risk-based monitoring solution helps mitigate risk by allowing organizations to focus on critical study parameters. Robust risk monitoring and assessment needs to be empowered with analytical and reporting capabilities, third-party integrations with open APIs, a workflow engine, and a knowledge base.
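To make "focus on critical study parameters" concrete, here is a toy scoring sketch: each site is scored on a few assumed metrics and sites above a threshold are flagged for closer monitoring. The weights, metrics, and threshold are invented for illustration only:

```python
# Hypothetical sketch of risk-based monitoring: score each site on a few
# critical study parameters and flag the sites that exceed a threshold.

THRESHOLD = 0.5  # assumed cutoff

def risk_score(site):
    """Weighted score from hypothetical per-site metrics, each in 0..1."""
    return (0.5 * site["query_rate"]
            + 0.3 * site["late_entry_rate"]
            + 0.2 * site["dropout_rate"])

def flag_sites(sites):
    """Names of sites whose risk score exceeds the threshold."""
    return [s["name"] for s in sites if risk_score(s) > THRESHOLD]

sites = [
    {"name": "Site A", "query_rate": 0.9, "late_entry_rate": 0.8, "dropout_rate": 0.4},
    {"name": "Site B", "query_rate": 0.1, "late_entry_rate": 0.2, "dropout_rate": 0.1},
]
```

A real implementation would draw these metrics from the integrated repository and feed the flags into a workflow engine, which is exactly why the capabilities listed above belong together.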

Regulatory and Compliance: An integrated data management solution should be configurable to support different types of trial data, trial protocols, results, and subject-level data. This configurability comes with an inherent need to maintain regulatory compliance and audit traceability. Such a solution should be robust enough to accommodate study-specific needs and translate data to meet regulatory and compliance standards, both globally and locally.

Given the increasing volume and variety of data and the velocity at which clinical data is produced, a recommended solution needs to be agile and robust enough to respond to such dynamism.

Current Gaps in Clinical Data Management

By Suvarnala Mathangi | Date: April 30, 2018 | Blog | 0 Comment(s)

When it comes to data management, there are many data gaps that require filling. Below are the areas of data management where the industry is facing problems. If any of this sounds familiar, you might need help!

Data Capturing & Aggregation: The prevalence of ‘data heterogeneity’ in clinical trials is what makes data capturing and aggregation a complex process. Capturing data has yet to be perfected because it comes in different forms and formats and is generated from multiple devices used by practitioners following distinct regulatory protocols. The cleanliness of clinical data is also subject to issues like non-adherence and data variability, which arise from a distinct set of on-site challenges. This primary challenge stifles the data lifecycle, affecting analytics and the quality of insights.

Data Cleaning & Discrepancy Management: Clinical trials deal with unstructured data all the time. Though digital documents such as EDCs, EHRs, and ePROs are embraced in clinical trial projects, the use of paper case report forms (PCRFs) cannot be eliminated completely. This fractional use of PCRFs, combined with multi-source aggregation, adds to the complexity of keeping data clean. Clean data, devoid of discrepancies, ensures that accurate, rational datasets feed further analytics.
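Discrepancy management typically starts with automated edit checks that raise queries for suspect values. A minimal sketch, assuming invented field names and plausible-but-arbitrary range limits:

```python
# Hypothetical sketch of discrepancy management: simple edit checks raise
# queries for missing or out-of-range values before data reaches analysis.

RANGES = {"heart_rate": (30, 220), "systolic_bp": (60, 260)}  # assumed limits

def edit_checks(record):
    """Return a list of query messages for one subject record."""
    queries = []
    for field, (lo, hi) in RANGES.items():
        value = record.get(field)
        if value is None:
            queries.append(f"{record['subject_id']}: {field} is missing")
        elif not lo <= value <= hi:
            queries.append(f"{record['subject_id']}: {field}={value} out of range")
    return queries
```

In a real workflow each query would be routed back to the site for resolution and its lifecycle tracked for audit traceability.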

Data Storage & Data Security: Data storage and security issues always raise the ‘on-site or cloud’ debate, since both come down to cost. In either case, care must be taken to establish disaster recovery, cost efficiency, immunity against security breaches, and healthcare-specific compliance with the HIPAA Security Rule.

Data Stewardship & Data Querying: Clinical data has a long shelf life: it is not just used for the current research but archived for future research too. Trials sometimes leave behind unutilized datasets that could solve disconnected healthcare issues. Owning and retrieving such data from large repositories over time is an area that needs attention; having historical and secondary data handy can resolve many issues. These practices also raise red flags around data updates, interoperability, and sharing.

Clinical Data Ecosystem – It’s evolving faster than ever

By Suvarnala Mathangi | Date: April 30, 2018 | Blog | 0 Comment(s)

Blog-Banner-blog

Since the first few citations of registered and structured clinical data in the 1940s, the methods of capturing clinical data, use and application of clinical data, global inclusions, and role of payers, users and regulators have evolved.

At present, the clinical data universe comprises the siloed internal and external data sources within clinical development processes. This is a conceivable evolution, and nearly every organization across the industry is experiencing a similar scenario because of it.

This data universe is primarily the result of strategic business decisions: supporting organizational consolidations, managing cost and revenue pressures, enabling focus on core R&D and management, and ensuring continuous compliance with dynamically evolving regulatory guidance.

Staying focused on the core R&D business and continually improving operational and clinical performance have always been the top priorities among business stakeholders, but collective and timely insight across horizontally spread data and information has not always been possible. In such cases, business decisions have often been delayed, relied on outdated information, or lacked cross-functional impact. The resulting outcome resembles a spider web with knots at every corner.

Another reason for this evolution is that, for a long time, clinical trials and healthcare initiatives have been carried out by separate functional silos within an organization using separate “legacy” applications and cookie-cutter solutions. Such approaches typically focus on a specific process or function, resulting in multiple, disparate, and inefficient solutions that may not work well together. This has deprived organizations of the ability to make timely decisions and to increase efficiency and overall productivity at the corporate portfolio level, while controlling costs and mitigating risks at the study, functional, or portfolio level.

However, the adoption of digital means such as electronic patient-reported outcome (ePRO) and electronic data capture (EDC) systems in place of paper-based case report forms (PCRFs) has changed the landscape of clinical data. It has reduced the time to collect data and relay it to the next stage of the process from five days to fifteen minutes, and it has helped expedite drug development.

Another milestone is the evolution of data structures. Big Data has enabled the management of large volumes of data and its conversion into comprehensible, insightful visualizations. To realize such benefits, clinical data management processes are compelled to use standardized fields, formats, and forms, which requires a metadata-driven process.

The rise of clinical trial globalization has narrowed the gaps between global study expectations and national standard protocols that lead to many operational complexities. Regulatory bodies like the FDA and EMA have come together to synchronize requirements on a protocol-by-protocol basis.

To control financial risks for managing patient populations and to provide continuous access to electronic data, private networks have started building centralized data repositories. This can be further used for a range of analytics, including predictive modeling, quality benchmarking, and risk stratification.

These breakthroughs have eased many challenges in implementing clinical trials, but they have also increased the responsibility of CROs and clinical data managers to upgrade and standardize their systems in response to these progressive developments.