A Singular Statistical Computing Framework Answers Clinical Data Diversity & More

By Suvarnala Mathangi | Date: June 30, 2018 | Blog | 0 Comment(s)


With the complexity already involved in regulatory and exploratory reporting for clinical trials, adding another element such as an SCE might seem overwhelming to the clinical development team, particularly the biometrics team, management, and investors. I would, however, contend that an SCE can reduce the data-to-reporting cycle time and its related costs, improve the turnaround of statistical computing, review, and approval cycles, and significantly improve everyone's ability to manage and access regulatory submission-standard data and reporting deliverables with the required controls.

So, let me get started by elaborating on ‘What is a Statistical Computational Environment?’.

An SCE?

When I review the definition of a Statistical Computational Environment (SCE) as "a data repository of source code, input data, and outputs with compliance features such as version control of SCE elements, reusable programming ability, and links to the clinical data management system," it is almost self-explanatory to me why your clinical trials should be nurtured in an SCE.

While the definition reveals some of the most apparent reasons to run a clinical trial in an SCE, there are many other compelling, less evident aspects that make an SCE imperative for your trial. An SCE also consists of a structured programming environment that eases project management by enabling workflows, an operational analysis data repository that optimizes data standardization, and a metadata-driven architecture containing information about the data and the status of various processes, streamlined for compliance and transparency.

Why is an SCE so imperative?

Here are a few areas where, in my observation, an SCE enables clinical trials to stay in alignment not just with FDA criteria but also with the organizational vision.

Mastering Reproducibility

Statistical reproducibility is about providing detailed information about the choice of statistical tests, model parameters, threshold values, and so on. It also concerns the pre-registration of study design to prevent p-value hacking and other manipulations.

By being able to reproduce the workflow, including the code and data used to validate decisions documented in research records, the clinical trial team can save enormous amounts of time. To create and deliver newer drugs from older formulae and data, an SCE allows access to pre-existing methods and results, which means colleagues can approach new applications with minimal effort.
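As a minimal sketch of what "reproducing the workflow" can mean in practice, the following Python records checksums of the input files alongside the analysis parameters, so a later run can verify it is working from exactly the same code and data. The file names and the `write_manifest` helper are hypothetical illustrations, not part of any specific SCE product:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, used to pin an input exactly."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def write_manifest(inputs: list[Path], params: dict, out: Path) -> dict:
    """Record input checksums and analysis parameters so the exact run
    can be reproduced and verified later."""
    manifest = {
        "inputs": {str(p): sha256_of(p) for p in inputs},
        "params": params,
    }
    out.write_text(json.dumps(manifest, indent=2, sort_keys=True))
    return manifest
```

A validation run would recompute the digests and compare them against the stored manifest before trusting any regenerated outputs.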

Site Selection

Trial-and-error site selection has prevented research from building a validated foundation that could avoid large, unwarranted costs and time lags. The numbers are disappointing: some 80% of trials fail to meet their enrollment targets, while an investor spends almost $20,000 to $30,000 on average on site selection. On top of this, researchers also have to deal with poorly qualified sites.

All these challenges can be answered by developing a data-driven site selection approach that serves not only the existing clinical trial but also future trials in a similar area of study.
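A data-driven site selection approach can be illustrated with a small scoring sketch in Python. The metrics, weights, and the `site_score` helper below are illustrative assumptions for the sake of the idea, not a validated selection model:

```python
from dataclasses import dataclass

@dataclass
class SiteHistory:
    site_id: str
    enrolled: int            # patients enrolled across past trials
    target: int              # enrollment target across those trials
    startup_days: int        # average time from selection to first patient
    protocol_deviations: int

def site_score(s: SiteHistory) -> float:
    """Composite score: reward historical enrollment performance,
    penalize slow start-up and protocol deviations (weights illustrative)."""
    enrollment_rate = s.enrolled / s.target if s.target else 0.0
    return (0.6 * min(enrollment_rate, 1.5)
            - 0.002 * s.startup_days
            - 0.05 * s.protocol_deviations)

def rank_sites(sites: list[SiteHistory]) -> list[str]:
    """Rank candidate sites from best to worst by composite score."""
    return [s.site_id for s in sorted(sites, key=site_score, reverse=True)]
```

In a real SCE the historical metrics would come from the aggregated study repository rather than hand-entered records.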

Faster Approvals and Regulatory Compliance

The FDA has been instrumental in convincing clinical trial owners to use metadata and a unified submission format. To fulfill CDISC and other criteria, a clinical trial is best run in a statistical computational environment. Beyond this regulatory framework, there are many others that require the creation and maintenance of metadata. Doing so not only keeps the trial compliant but also ensures faster go-to-market time and a reference point for future studies at an extremely low cost.

While these are some strategic reasons for clinical trials to operate within an SCE, there are functional reasons too.

Rising complexity in clinical trials also demands more computational power. SCEs add agility for researchers and stakeholders by allowing them to store, maintain, access, and edit information on a common platform. These environments also ease non-clinical and clinical pharmacology analysis, translational medicine, and predictive modeling and simulation.

The teams working in such environments should, however, move toward an integrated approach to deploying SCEs. An SCE combined with an integrated technology solution that processes data through a single analytical framework can build a foundation for future clinical research.

MaxisIT

At MaxisIT, we clearly understand strategic priorities within clinical R&D, and we can resonate that well with our similar experiences of implementing solutions for improving Clinical Development Portfolio via an integrated platform-based approach; which delivers timely access to study specific as well as standardized and aggregated clinical data, allows efficient data quality management, clinical reviews, and statistical computing.

Moreover, it provides capabilities for planned vs. actual trending, optimization, as well as fraud detection and risk-based monitoring. MaxisIT's Integrated Technology Platform is a purpose-built solution that helps the pharmaceutical and life sciences industry by "Empowering Business Stakeholders with Integrated Computing, and Self-service Analytics in the strategically externalized enterprise environment with major focus on the core clinical operations data as well as clinical information assets; which allows improved control over externalized, CROs and partners driven, clinical ecosystem; and enable in-time decision support, continuous monitoring over regulatory compliance, and greater operational efficiency at a measurable rate".

Data-savvy drug development needs a Good Statistical Computing Environment (SCE)

By Suvarnala Mathangi | Date: June 30, 2018 | Blog | 0 Comment(s)


Diverse data sources, heterogeneous data formats, a multitude of drug outputs, and always more: that is the drug development process. Only a highly agile and adaptable environment, supported by cohesive data management and reporting processes, can accommodate and respond to the constant mutation and metamorphosis in the process.

Who can save the day and match such adaptations? Statisticians and their supporting programming team. A statistical programming and analysis team working in tandem with the discovery and development team can seamlessly deliver data analysis, and adhere to the necessary data standards, only when it is supported by a data-savvy environment of infrastructure, tools, processes, and mindset.

This is where I always recommend a good statistical computational environment (GSCE) for creating viable, market-ready drug discoveries. Most importantly, it eliminates the chaos brought by data heterogeneity and adds traceability, data security, accessibility, auditability, efficiency, and control to meet the end user's regulatory compliance needs.

Why is a good statistical computational environment as important as the process?

A good SCE is a consequence of two forces: the need to expedite drug discovery, and the obligation to adhere to the requirements of the FDA and other approval bodies.

Patients and the medical community also accept a drug only when there is data that proves its safety and efficacy. In fact, the FDA has stipulations that require a drug to pass its criteria with certain data proof made available in the prescribed format and proforma, with auditability, traceability, and a well-governed process. The FDA's electronic submission guidance defines what the agency expects to receive: multiple types of data files, documentation, and programs, which are the major components of an analysis environment.

To match these, ICH E9 recommends a slew of good practices from a regulatory perspective. It emphasizes the good statistical science of documented statistical operations, which ensures the validity and integrity of prespecified analyses and lends credibility to the results.

Data-driven processes such as pharmacogenetic testing, multigene panel testing, targeted genetic testing for rare diseases and hereditary cancers, and human genome sequencing have seen the advantage of thriving in a statistical computational environment. They also have a high potential of receiving insurance coverage.

The major challenge curbed by ICH E9's standardized formats is the diversity of the different stages of clinical testing, each of which calls on different statistical skill sets. The drug development process begins with pharmacokinetic and pharmacodynamic studies and requires early proof of concept supported by dose-ranging studies. Once the drug passes this stage, it is tested in the confirmatory phase for further efficacy and safety among a more heterogeneous population.

The life-cycle development stage and the post-marketing study phase follow in the development process. Each phase has different data demands and needs to be aligned with computations that can deliver relevant outputs.

How to begin envisioning a good SCE?

A good SCE needs to provide empowerment of a data-driven mindset, metadata-driven computing, technological scalability, analytical performance, an integrated approach with a single source of truth, role-based controlled processes, and continuity across the different stages of the process chain.

Most clinical data is processed through several programming activities, which need to follow the good statistical practices set out by ICH E9. Beyond this, other frameworks that help develop a sound statistical environment include the Clinical Data Interchange Standards Consortium (CDISC) data standards, the CDISC Analysis Data Model (ADaM) guidance, HL7 data standards, the harmonized CDISC-HL7 information model (BRIDG), electronic records regulations, FDA guidance for computerized systems, and electronic Common Technical Document (eCTD) data submissions.

The ADaM guidance provides metadata standards for data and analyses. This enables statistical reviewers to understand, replicate, explore, confirm, and reuse the data and analyses. A transparent, reproducible, efficient, and validated approach to designing studies and to acquiring, analyzing, and interpreting clinical data will achieve a faster drug-to-market cycle.
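The idea of metadata standards driving review can be sketched in Python. The variable names below (USUBJID, TRT01P, AGE) follow common ADaM conventions, but the metadata structure and the `validate_record` helper are illustrative assumptions, not part of any CDISC tooling:

```python
# Illustrative variable-level metadata for a hypothetical ADSL-like dataset.
ADSL_METADATA = {
    "USUBJID": {"label": "Unique Subject Identifier", "type": str},
    "TRT01P":  {"label": "Planned Treatment for Period 01", "type": str},
    "AGE":     {"label": "Age", "type": int},
}

def validate_record(record: dict, metadata: dict) -> list[str]:
    """Return a list of metadata violations for one record:
    missing variables or values of the wrong type."""
    issues = []
    for var, spec in metadata.items():
        if var not in record:
            issues.append(f"{var}: missing")
        elif not isinstance(record[var], spec["type"]):
            issues.append(f"{var}: expected {spec['type'].__name__}")
    return issues
```

Because the same metadata drives both dataset creation and this kind of check, a reviewer can confirm conformance without reverse-engineering the analysis programs.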

Genetic counselors, hospital administrators, pharmacy advisory firms, academic studies, and consulting reports involved in an SCE also enjoy monetary benefits. Drugs that adhere to these standards also attract investor interest: investor valuations for biotech companies are based not purely on profits but also on long-term potential, given the long gestation period of drug development.

To reap these benefits, the industry needs a statistical computing environment that supports deriving statistical insights in a controlled, adaptable, auditable, and traceable manner, utilizing an integrated approach that is based on metadata-driven processes and relies on a single source of truth for data, i.e., an enriched clinical data repository.

When this is achieved, the records of decisions stored from different stages of the process keep the clinical cycle of the drug transparent and accessible, and help other scientists and clinical research stakeholders in future research. The data's format stays intact, in its original form, without interrupting the cohesiveness of the insight the repository holds.

All put together, this indicates that the drug development process should incubate in a sound statistical computational environment for best results.

Data, a rising mindset and methodology for drug development

By Suvarnala Mathangi | Date: June 30, 2018 | Blog | 0 Comment(s)


I keep discovering headlines reporting genome-wide dissection of genetics, high-throughput technologies, genetic modifications, and other biomedical breakthroughs. All lead back to one common catalyst: a data-intensive drug discovery model used by the teams behind these successes. But such a model thrives only in the presence of a sound statistical computational environment.

To me, two elements drive the success of data management during clinical trials. One is the ability to bring diverse data forms into a single comprehensible format; the other is the data-driven, in-depth knowledge to evaluate and validate the drug concept. The variety of data forms undermines the security, accessibility, traceability, and auditability that otherwise empower a clinical practitioner with better insights and adherence to regulatory compliance. These factors can be achieved by introducing the drug into a data-driven environment, supported by a team with a similarly data-intensive mindset, which can build a statistical computational environment (SCE). Using an SCE, customers can establish control, auditability, and traceability, and gain efficiency without compromising regulatory compliance.

NCBI also identifies publicly available biomedical information as the basis for identifying the right drug target and creating a drug concept with true medical value. A data-driven development model has also helped enhance understanding of the pathophysiological mechanisms of diseases.

Prediction of metabolism, the application of translational bioinformatics to long-read amplicon sequencing in chronic myeloid leukemia and multi-drug-resistant bacteria, and work on gene mutations are all results of a data-intensive model. Apart from viewing the data-intensive drug development model as an in-house capability, I also see it as a competitive factor for R&D units and scientists entering the market.

Here is why it is competitive. It has led the way to virtual drug development models that allow scientists to test and analyze a molecule even before the formulation stage. Eliminating challenges in the pre-clinical stages can reduce development costs and the lead time taken for each formula to become a druggable output. Research published in Nature has argued that the unprecedented challenges of the pharma business model can be met by shifting investments to the earlier stages of drug discovery.

To establish this approach, pharmaceutical and healthcare companies involved in drug development need to adopt an integrated data management approach. This involves defining data sources for reliability and consistency, building a robust data mining and warehousing infrastructure, and producing the different insights useful to the project's various stakeholders and data users. McKinsey has noted that the first step alone takes about a year to complete.

Having been involved in multiple integrated clinical development platform projects, I have seen drug-to-market lead times come down when the traditional drug development model was transformed into a data-intensive, technology-enabled environment: integrated and automated, so teams can focus on data analysis and avoid mundane data processing tasks.

To gain a competitive edge, more and more successful drug development companies are moving toward a data-intensive drug development and discovery environment. Are you on the banks of a data-intensive drug development process, or stifled by the challenges of a traditional approach?


The key to faster clinical development – Identifying the potential indicators of change.

By Suvarnala Mathangi | Date: May 31, 2018 | Blog | 0 Comment(s)

In order to avoid a constrained growth model, an R&D-focused organization needs to empower its business stakeholders with cross-functional insight and a portfolio-level information management strategy.

A prospective solution should allow the necessary roll-down and roll-up flexibility to identify potential indicators of change (e.g., risks and/or opportunities) and their impacts, while taking decisions with confidence.

To mobilize such a business-transformative strategy, the technology enablers must ensure that current clinical development processes, existing investments, partnerships, and corporate strategies are not disrupted. The evolution of an integrated data management platform therefore begins with bringing in, or rather aligning, existing and new data sources into a single source of truth (a data repository), where they all share common characterization, identity, accessibility, and predictable movement in reference to one another. Above all, a governance layer establishes complete control over the clinical data assets as they move through the clinical development processes.

Overall, we understand that there has been a huge gap, rather a siloed approach, between data capture, data management, and the processes used to derive actionable insight. Data capture systems have improved a lot over time, as have the sources and varieties of data, but data management and the analysis and reporting processes still suffer. This demands integration, aggregation, and harmonization of data with integrated tools, to achieve maximum downstream benefits in analytics, reporting, and mining of critical clinical and operational data horizontally across the silos.

A single-source clinical data repository, with its ability to deliver timely access to aggregated data across historical studies and real-world data, has paved the way to detect diseases at earlier stages, when they can be treated more effectively; to manage specific patient and population health; and to detect health care fraud or clinical risks more quickly and efficiently.

Development of medicines through agreement on evidence standards, including extrapolating available clinical data to predict drug behavior and encouraging a multi-arm, multi-company approach to reduce the number of patients needed in clinical trials, are some outcomes of global trials. This requires drug manufacturers to choose an integrated clinical data management solution with integrated capabilities across the life cycle of data, from source to submission.


A Perfect Match – When finding the right solution provider is as important as the solution itself

By Suvarnala Mathangi | Date: May 31, 2018 | Blog | 0 Comment(s)

The global clinical data analytics market will be worth $13.8 billion by 2023, according to a ResearchAndMarkets report, which identifies the growing popularity of EMRs as the key driver of this growth. However, the clinical development industry is still grappling with the problem of disintegrated and non-standardized clinical trial processes, further handicapped by the data heterogeneity inherent in the market.

The costs of clinical trial operations are continuously being evaluated for ways of achieving greater efficiency while reducing expenses. Many biopharma companies, now more than ever, focus on containing costs while maintaining quality. This approach applies to eClinical technology used to support a trial. An eClinical solution should include best-in-class integrated technology that is cost-effective, high quality, and efficient.

An example of this type of technology in clinical research is electronic data capture (EDC) combined with interactive response technologies (IRTs) such as IVR (interactive voice response). This software combination has proven effective in reducing the cost of clinical trials. Such integrated solutions increase the efficiency of a clinical trial and provide a single environment with the flexibility and scalability to keep pace with demands.

Most integrated eClinical solutions are the result of single technology solutions paired together or integrated into a suite, resulting in a single platform with multiple components such as EDC and IVR. Comprehensive multicomponent solutions that combine EDC, IVR, CTMS (clinical trial management system), and labs, built from the ground up as a single integrated solution, are uncommon and rarely best-in-class.

The emergence of integrated solutions results from the need to access central lab, EDC, and IVR data, including ePRO, within a single system in the fastest, most reliable manner. As integration solutions have evolved, the continuing challenge has been to move the level of data integration beyond a transfer or sharing of flat files to true integration.

Integrated approach makes business sense

A platform that integrates data sources in real time benefits the end user by providing immediate access to various forms of clinical data, including central lab, IVR, and EDC data. Such visibility across sources brings immediate attention to data quality and patient safety issues, which in turn reduces errors, mitigates risk, reconciles data, and minimizes redundancy.

An integrated, platform-based approach creates a ripple effect, starting with making standardized data available for a single study as well as aggregated across studies in a single repository. In effect, the end user no longer needs to log in to multiple systems. This single source of truth reduces data redundancy, increases data traceability across technologies, and improves the quality and reliability of data shared between applications. The centrally held data can now be used efficiently for reporting and for insight generation with analytics, leading to faster decision-making. It therefore makes a lot of business sense for a sponsor or CRO to select a vendor that provides an integrated, platform-based solution.
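A toy Python sketch of such a merge into a single per-subject view, with each field tagged by its source system so the data stays traceable. The system names and the `integrate` helper are illustrative assumptions, not a description of any particular platform:

```python
from collections import defaultdict

def integrate(sources: dict[str, list[dict]]) -> dict[str, dict]:
    """Merge records from multiple systems (e.g. EDC, lab, IVR) into a
    single per-subject view, prefixing each field with its source
    system so every value remains traceable to where it came from."""
    repo: dict[str, dict] = defaultdict(dict)
    for source, records in sources.items():
        for rec in records:
            subj = rec["subject_id"]
            for field, value in rec.items():
                if field != "subject_id":
                    repo[subj][f"{source}.{field}"] = value
    return dict(repo)
```

A production repository would add conflict handling, audit timestamps, and standardization to CDISC structures on top of this basic merge.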

Traditionally, clinical projects are executed in a siloed manner, with redundancy, delay, cost inflation, and ambiguity emerging as byproducts more often than valuable insights. The siloed approach in the clinical context is a socio-technical problem: it needs the cooperation of both realms, which itself indicates the need to integrate clinical data processes. An end-to-end integrated approach resolves the complexities that arise from silos.

In this context McKinsey and Co articulate the need of an integrated approach very aptly as, “Effective end-to-end data integration establishes an authoritative source for all pieces of information and accurately links disparate data regardless of the source—be it internal or external, proprietary or publicly available.”

Implementing an end-to-end integration requires certain capabilities: inclusion of trusted sources of data and documents, the ability to establish cross-linkages between elements, robust quality assurance, workflow management, and role-based access to ensure restricted visibility of specific data elements.

Flexibility – How can an innovative, right-sized company like MaxisIT be more flexible and agile?

Unable to take up smaller slices of a large project, large companies inflate its cost, compelling the sponsor or CRO to depend entirely on one vendor. With an integrated platform approach, however, MaxisIT can reduce costs by delivering technology solutions to specific, standalone problems.

With its Integrated Clinical Development Platform, MaxisIT strives to embrace a problem-solving approach rather than a scale-driven one. Our process allows us to solve clinical data and development problems using an integrated approach in a flexible and agile manner.

Most importantly, we have been able to demonstrate that our solution is compliant with regulatory guidelines and can be configured to the type of study or data management needs. The frustrating problem of reconciling compliance and data format is fully taken care of by our solution.

We see that though large software companies have bigger budgets for visibility, they don't have the bandwidth to pay attention to the small challenges that arise during implementation. MaxisIT, with its customer support, attends to the smaller details too while delivering an integrated solution.

Why MaxisIT for an integrated solution?

At MaxisIT, every clinical data management project matters to us. We see to it that each process in a project revolves around one common mission: to strive for a healthier and better life. To make this happen, we try harder and focus on integrating the different processes across the lifespan of drug development, rather than worrying about the size of the project.


Exigency of the Industry – Integrated Clinical Development

By Suvarnala Mathangi | Date: April 30, 2018 | Blog | 0 Comment(s)

There has been a collective realization throughout the industry that the key data management processes must show efficiency, continuous compliance, scalability, sustainability and measurable productivity gains that, over time, will transform the research and development processes used to develop new drugs.

Industry leaders are addressing this problem by delivering solutions for needs ranging from study design to data integration, clinical process optimization, targeted therapies, benefit-risk assessments, and cost reduction.

Primarily, clinical data management solutions empower the ecosystem with decision-making abilities backed by precise, accurate, and timely insights. A set of best practices culminates in an integrated clinical data management solution that works for current processes and for the future of clinical trials.

Integrated Data Management Platform: There is no single software product that captures all kinds of clinical data. During clinical trials, practitioners use multiple software systems, devices, and apparatus to capture data. Though EDC is fast catching on globally, some local circumstances and low-capital projects still rely on paper-based case reports and ePROs. Some data also comes as X-rays and scans, a different format altogether. The industry therefore needs a platform that integrates these diverse data sources and provides all of the data in a single clinical data repository.

Real-time Data Processing and Insights: For a long time, the information churned out of large data sets has been processed in isolation, causing complexity. Today there is a pressing need for real-time analytics over integrated data from devices, health systems, payers, patients, providers, and other systems and participants.

Metadata-driven Process: To deal with entropy in clinical data, efficient CDM systems use metadata. Metadata facilitates information flow, improves retrieval, aids discovery, and provides context. It establishes some of the comforts most lacking for a data scientist trying to arrive at meaningful insights, including user-friendliness through a simple and intuitive interface.

Reporting Tools: Clinical data management solutions are seldom integrated with reporting tools that can configure, execute, and review reports with built-in analytical algorithms, in support of analyzing data quality, clinical significance, and operational performance, as well as reviewing genetics and proteomics data.

Risk-Based Monitoring & Assessment: A solution that helps mitigate risk by allowing organizations to focus on critical study parameters. Robust risk monitoring and assessment needs to be empowered with analytical and reporting capabilities, third-party integrations with open APIs, a workflow engine, and a knowledge base.
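The core idea of focusing monitoring effort on critical study parameters can be sketched as a simple threshold check in Python. The metric names and threshold values below are illustrative assumptions, not prescribed limits:

```python
def risk_flags(site_metrics: dict[str, dict],
               thresholds: dict[str, float]) -> dict[str, list[str]]:
    """Flag sites whose monitoring metrics exceed their thresholds,
    so monitoring effort can concentrate on the riskiest sites."""
    flags: dict[str, list[str]] = {}
    for site, metrics in site_metrics.items():
        hits = [m for m, limit in thresholds.items()
                if metrics.get(m, 0) > limit]
        if hits:
            flags[site] = hits
    return flags
```

In practice the metrics would be refreshed continuously from the integrated repository, and the flagged sites would feed a workflow engine for follow-up.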

Regulatory and Compliance: An integrated data management solution should be configurable to support different types of trial data, trial protocols, results, and subject-level data. This configurability comes with an inherent need to maintain regulatory compliance and audit traceability. Such a solution should be robust enough to configure study-specific needs and to translate data to meet regulatory and compliance standards both globally and locally.

Given the increasing volume and variety of data and the velocity at which clinical data is being produced, any recommended solution needs to be agile and robust enough to respond to such dynamism.

About MaxisIT
At MaxisIT, we clearly understand strategic priorities within clinical R&D, and we can resonate that well with our similar experiences of implementing solutions for improving Clinical Development Portfolio via an integrated platform-based approach; which delivers timely access to study specific as well as standardized and aggregated clinical data, allows efficient data quality management, clinical reviews, and statistical computing.

Moreover, it provides capabilities for planned vs. actual trending and optimization, as well as for fraud detection and risk-based monitoring. MaxisIT’s Integrated Technology Platform is a purpose-built solution that helps the pharmaceutical and life sciences industry by empowering business stakeholders with integrated computing and self-service analytics in a strategically externalized enterprise environment, with a major focus on core clinical operations data and clinical information assets. This allows improved control over the externalized, CRO- and partner-driven clinical ecosystem, and enables in-time decision support, continuous monitoring of regulatory compliance, and greater operational efficiency at a measurable rate.

Current Gaps in Clinical Data Management

By Suvarnala Mathangi | Date: April 30, 2018 | Blog | 0 Comment(s)

When it comes to data management, there are a lot of data gaps that require filling. Below are the areas of data management where the industry is facing problems. If any of this sounds familiar, you might need help!

Data Capturing & Aggregation: The prevalence of ‘data heterogeneity’ in clinical trials is what makes data capturing and aggregation a complex process. Capturing data is yet to be perfected because it comes in different forms and formats, generated from multiple devices used by practitioners who follow distinct regulatory protocols. The cleanliness of clinical data is also subject to issues like non-adherence and data variability, which arise from a different set of on-site challenges. This primary challenge stifles the data lifecycle, affecting analytics and the quality of insights.

Data Cleaning & Discrepancy Management: Clinical trials deal with unstructured data all the time. Though digital documents such as EDCs, EHRs, and ePROs are embraced in clinical trial projects, the use of PCRFs cannot be eliminated completely. This fractional use of PCRFs, combined with multi-source aggregation, adds to the complexity of data cleaning. Clean data, devoid of discrepancies, ensures the use of accurate and reliable datasets for further analytics.
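Two of the most common discrepancy checks after multi-source aggregation are duplicate subject records and physiologically implausible values. A minimal sketch (field names and ranges are hypothetical and deliberately simplified):

```python
# Illustrative discrepancy-management sketch: flag duplicate subject IDs and
# out-of-range vitals in aggregated records. Field names are hypothetical.
from collections import Counter

def find_discrepancies(records, sbp_range=(60, 250)):
    issues = []
    counts = Counter(r["subject_id"] for r in records)
    for sid, n in counts.items():
        if n > 1:
            issues.append(f"{sid}: duplicated across sources ({n} records)")
    for r in records:
        sbp = r.get("systolic_bp")
        if sbp is not None and not sbp_range[0] <= sbp <= sbp_range[1]:
            issues.append(f"{r['subject_id']}: systolic BP {sbp} out of range")
    return issues

records = [
    {"subject_id": "S001", "systolic_bp": 120},
    {"subject_id": "S001", "systolic_bp": 120},  # duplicate from a second source
    {"subject_id": "S002", "systolic_bp": 400},  # likely transcription error
]
print(find_discrepancies(records))
```

In practice such findings would be raised as queries back to the site rather than silently corrected, preserving the audit trail.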

Data Storage & Data Security: Data storage and security issues always give rise to the ‘on-site or cloud’ debate, since the choice is largely a function of cost. In either case, care must be taken to establish disaster recovery, cost efficiency, immunity against security breaches, and healthcare-specific compliance with the HIPAA Security Rule.

Data Stewardship & Data Querying: Clinical data has a long shelf life: it is not just used for the current research, it is archived for future research too. Sometimes trials also leave behind unutilized datasets that could solve disconnected healthcare issues. Owning and retrieving such data from large repositories over time is an area that needs attention. Having historical and secondary data handy can resolve many issues, but it also raises red flags around procedures for data updates, interoperability, and sharing.


Clinical Data Ecosystem – It’s evolving faster than ever

By Suvarnala Mathangi | Date: April 30, 2018 | Blog | 0 Comment(s)

Since the first few citations of registered and structured clinical data in the 1940s, the methods of capturing clinical data, its use and application, global inclusion, and the roles of payers, users, and regulators have all evolved.

At present, the clinical data universe comprises siloed internal and external data sources within clinical development processes. This is a conceivable evolution, and nearly every organization across the industry is experiencing a similar scenario because of it.

This data universe is primarily the result of strategic decisions made in the interest of the business: to support organizational consolidations, to manage cost and revenue pressures, to enable focus on core R&D and management, and to ensure continuous compliance with dynamically evolving regulatory guidance.

Staying focused on the core R&D business and continually improving operational and clinical performance have always been the topmost priorities among business stakeholders; but collective and timely insight across horizontally spread data and information hasn’t always been possible. In such cases, business decisions have often been delayed, relied on outdated information, or lacked cross-functional impact. The resulting landscape looks more like a spider web with knots at every corner.

Another reason for this evolution is that, for a long time, clinical trials and healthcare initiatives have been carried out by separate functional silos within an organization, using separate “legacy” applications and cookie-cutter solutions. Such approaches are typically focused on a specific process or function. This has produced multiple, disparate, and inefficient solutions that may not work well together, depriving organizations of the ability to make timely decisions and to increase efficiency and overall productivity at the corporate portfolio level, while controlling costs and mitigating risks at the study, functional, or portfolio level.

However, the adoption of digital means such as electronic patient reported outcome (ePRO) and electronic data capture (EDC) systems in place of the paper-based case report form (PCRF) has changed the landscape of clinical data. It has reduced the time taken to collect data and relay it to the next stage of the process from five days to fifteen minutes, and it has helped expedite the drug development process.

Another milestone is the evolution of data structures. Big Data has allowed the management of large volumes of data and its conversion into comprehensible and insightful visualizations. To realize such benefits, clinical data management processes are compelled to use standardized fields, formats, and forms, which requires a metadata-driven process.

The rise of clinical trial globalization has narrowed the gaps between global study expectations and national standard protocols that once led to many operational complexities. Regulatory bodies like the FDA and EMA have come together to synchronize requirements on a protocol-by-protocol basis.

To control financial risks for managing patient populations and to provide continuous access to electronic data, private networks have started building centralized data repositories. This can be further used for a range of analytics, including predictive modeling, quality benchmarking, and risk stratification.

These breakthroughs have eased many challenges in implementing clinical trials, but they have also increased the responsibility of CROs and clinical data managers to upgrade and standardize their systems in response to these progressive developments.

Rising above the data silos

By Suvarnala Mathangi | Date: February 28, 2018 | Blog | 0 Comment(s)

How an Integrated Clinical Development Platform Helped a Client Tear Down Clinical Data Silos and Improve Decision-Making

The Challenge – Riddled with a mélange of data and standardization challenges that hamstrung its decision-making capabilities, our client decided to implement an integrated platform-based strategy that would help them to

  • Ingest data from diversified sources
  • Establish a single source of truth
  • Enable metadata-repository-based standards compliance, and
  • Establish a controlled statistical computing environment on top – all seamlessly interoperable and metadata-driven.

The Objective – The mandate was simple. Enable teams to access data seamlessly and provide a unified view of the data to deliver regulatory standard deliverables on time and further perform exploratory analytics.

The Solution – An integrated clinical data management platform that delivered three business-critical solutions –

Metadata Repository (MDR)

  • Achieved metadata re-use and consistent definitions across the clinical data collection-to-analysis lifecycle
  • Minimized duplication of work and improved data integrity and quality
  • Simplified the management (updates, version control, etc.) of metadata, standards, and specs
  • Facilitated improved electronic metadata exchange
  • Enabled automated validation and conformance checking of data
  • Enabled automated data conversion (from one standard to another)
  • Enabled impact analysis and change management across metadata standard assets
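Conversion from one standard to another is itself metadata-driven: a mapping specification, stored alongside the standards, renames and recodes source fields into the target layout. A minimal sketch (the source and target variable names here are hypothetical, not actual CDISC terms):

```python
# Illustrative metadata-driven conversion: a mapping spec (itself metadata)
# renames and recodes source fields into a target standard. Names are
# hypothetical, not drawn from any real standard.
MAPPING_SPEC = {
    "patient_sex": {"target": "SEX", "recode": {"male": "M", "female": "F"}},
    "patient_age": {"target": "AGE"},
}

def convert(record, spec=MAPPING_SPEC):
    """Apply a mapping spec to one source record, returning the target record."""
    out = {}
    for src, rule in spec.items():
        if src not in record:
            continue
        value = record[src]
        value = rule.get("recode", {}).get(value, value)  # recode if a rule exists
        out[rule["target"]] = value
    return out

print(convert({"patient_sex": "female", "patient_age": 34}))
# → {'SEX': 'F', 'AGE': 34}
```

Because the spec is data, impact analysis reduces to diffing specs, and a new source format means authoring a new spec rather than new code.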

Clinical Data Repository (CDR)

  • Efficiently integrated data across multiple studies and pipelines, providing a single source of truth
  • Maximized the value of data assets by making data accessible to stakeholders across R&D in a consumable (neutral), quality-conformed format
  • Improved decision-making by ensuring reports are produced promptly and accurately, enabling prompt responses to regulators and other departments within the organization
  • Stored data in a structured and secure manner to support reproducibility and an improved ability to incorporate data into analyses

Statistical Computing Environment (SCE)

  • Streamlined workflow, version control, traceability, access management, and audit trails, eliminating manual steps in complying with operating procedures
  • Improved team collaboration through better organization and navigation of project artifacts across the regulatory reporting process
  • Facilitated full traceability, improved regulatory compliance, and inspection readiness

The Outcome – While the first component laid the foundation for scalable analytics development and testing, the second and third provided the overarching structure for sharing knowledge and best practices. Most importantly, the platform provided a single unified view of the data, which finally allowed teams to speak the same language; the company found immense value in this.

Read the full case study here

About MaxisIT – MaxisIT® provides ONE Integrated Clinical Development Platform for the biopharmaceutical industry – MDR, CDR, and SCE combined in one scalable, integrated cloud platform. The platform includes self-service data preparation and analytics products that are regulatory compliant, validated, and delivered through alternative models: enterprise SaaS, on-premise deployment, or a hybrid software-enabled service.

Wearables in Clinical Trials – Implications for you

By Suvarnala Mathangi | Date: January 31, 2018 | Blog | 0 Comment(s)

The demand for wearables and sensors in clinical trials is on the rise. Pharma companies are increasingly challenged with both rising costs and the need to find novel ways to differentiate the drugs they are developing. One such way is by accessing the clinical data collected on remote medical devices.

By leveraging remote medical devices, there is an opportunity to collect novel endpoints and supplemental data that may improve the regulatory case, make the case for reimbursement more compelling, open up participation to a wider population and/or reduce site visits for patients who may not live close to an investigative site.

However, large volumes of continuously flowing data will increasingly require scalable cloud support, along with a proactive approach to data standardization. Data standardization assumes vital importance because large chunks of unstructured data can overwhelm traditional functions and processes.

A good data standardization platform will enable better clinical data quality management and clinical data review, and reduce cycle time by creating submission-standard deliverables. An ideal platform should enable –

  • Self-learning mapping and metadata-driven automation with built-in security and regulatory compliance
  • The ability to scale up and scale out on demand for faster time-to-value
  • An integrated self-service approach to automated data standardization
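To give a feel for what "self-learning mapping" automates, here is a deliberately tiny sketch: suggesting a standard target variable for each incoming column via fuzzy name matching. A real platform would learn from previously approved mappings and metadata; Python's `difflib` stands in here, and the variable names are hypothetical.

```python
# Illustrative mapping-suggestion sketch: fuzzy-match incoming column names
# against a (hypothetical) dictionary of standard variable names.
import difflib

STANDARD_VARS = ["SUBJID", "VISITNUM", "SYSBP", "DIABP", "WEIGHT"]

def suggest_mapping(source_columns, cutoff=0.5):
    """Propose a standard target for each source column, or None if no match."""
    mapping = {}
    for col in source_columns:
        match = difflib.get_close_matches(col.upper(), STANDARD_VARS,
                                          n=1, cutoff=cutoff)
        mapping[col] = match[0] if match else None
    return mapping

print(suggest_mapping(["subj_id", "sys_bp", "weight_kg"]))
# → {'subj_id': 'SUBJID', 'sys_bp': 'SYSBP', 'weight_kg': 'WEIGHT'}
```

Suggestions like these would still go through human review before being promoted into the governed mapping specification.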

MaxisIT has shown tremendous success in delivering value, having standardized over 800 clinical studies. Whether through the self-service platform or functional outsourcing, MaxisIT has delivered consistent quality, compliance, and cost reduction. If you want to know more, please get in touch.