
Clinical Trials

Apr 28, 2016

In scientific research, it’s essential to control or compensate for as many of the factors influencing an experiment as possible. This is as true for a trial of a new cancer-fighting medicine as it is for an epidemiological survey of identical twins searching for the source of a peculiarly shaped birthmark.


Scientific research requires a carefully designed protocol that embodies a methodical approach to collecting data. The usefulness of a dataset grows with its size, but so does the likelihood that it will contain errors. Keeping the collected data as consistent as possible and resolving issues with the data as soon as possible are key to producing meaningful results. Hand in hand with the need to manage the data collection process come the associated processes for fixing problems that arise in the data.

Maintaining data integrity is a challenge that does not scale well when done manually. The process of resolving issues in data is often long-running and has many different “exit points”. For example, a study may be missing records of an important sign-off from the study’s institutional review board (IRB), which in turn triggers a resolution process. At any point, a waiver may be issued that obviates the need to complete the resolution process. However, if a waiver is issued after the process was created, that process must be “cleaned up”. Finally, any process that runs against the same data multiple times in a day must not create duplicate instances of a resolution workflow – in other words, the rules must be idempotent.
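
To make the idempotency requirement concrete, here is a minimal Python sketch of the pattern. The record fields, the `open_workflows` registry, and the workflow IDs are hypothetical illustrations invented for this example (not InRule or K2 APIs): re-running the rule never opens a second workflow for the same issue, and a waiver issued later causes the open workflow to be cleaned up.

```python
from dataclasses import dataclass

@dataclass
class StudyRecord:
    study_id: str
    irb_signoff_received: bool
    waiver_issued: bool

# Hypothetical in-memory registry of open resolution workflows, keyed by
# (study_id, issue_type), so re-running the rules never creates duplicates.
open_workflows: dict[tuple[str, str], str] = {}

def apply_irb_signoff_rule(record: StudyRecord) -> None:
    """Idempotent rule: open a resolution workflow for a missing IRB
    sign-off, and clean it up if a waiver arrives later."""
    key = (record.study_id, "missing-irb-signoff")

    if record.irb_signoff_received or record.waiver_issued:
        # Issue resolved or waived: close any workflow that is still open.
        if key in open_workflows:
            print(f"Closing workflow {open_workflows.pop(key)} for {record.study_id}")
        return

    if key not in open_workflows:
        # Only create a workflow if one isn't already running.
        open_workflows[key] = f"wf-{record.study_id}-irb"
        print(f"Opened workflow {open_workflows[key]} for {record.study_id}")

# Running the rule twice against the same record creates exactly one workflow.
record = StudyRecord("STUDY-001", irb_signoff_received=False, waiver_issued=False)
apply_irb_signoff_rule(record)
apply_irb_signoff_rule(record)

# A waiver issued later causes the next run to clean the workflow up.
record.waiver_issued = True
apply_irb_signoff_rule(record)
```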

This is where the power of a BRMS (like, say, InRule!) really shines. With a product like InRule, the process of data management becomes much more transparent, configurable, and powerful. InRule can be used to locate data that doesn’t conform to the process and flag it for resolution. A business process management system (BPMS) like K2 can be used to manage the long-running workflows this usually involves, with the BRMS fulfilling the role of decision maker.

The typical modern scientific study is divided into various stages or phases. In earlier stages, the protocol is agreed upon with any governing bodies (medical research oversight usually comes from an IRB). Middle phases involve the enrollment of subjects and the collection of data, while later stages tend to revolve around the aggregation and certification of data. Regardless of the study’s phase, it’s necessary to validate the data collected, identify any missed steps, and update administrative status. This is done via a periodic (e.g., twice-daily) process which applies rules against each data record. This batch process validates newly received data, moves currently running workflows along, and closes tasks that have been completed or no longer apply.
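
As a rough illustration of that twice-daily pass, the sketch below (in plain Python; the record shape, the `RecordState` values, and the contact-info check are all assumptions made for the example) shows one batch run that validates new records, advances those already in resolution, and closes the ones that no longer apply.

```python
from enum import Enum, auto

class RecordState(Enum):
    NEW = auto()            # freshly received, not yet validated
    IN_RESOLUTION = auto()  # a resolution workflow is underway
    RESOLVED = auto()       # the underlying issue no longer applies

def run_batch(records: list[dict]) -> None:
    """One pass of the twice-daily batch: validate new data, advance
    running workflows, and close tasks that no longer apply."""
    for record in records:
        state = record["state"]
        if state is RecordState.NEW:
            # Validate newly received data; flag anything non-conforming.
            if not record.get("contact_info"):
                record["state"] = RecordState.IN_RESOLUTION
        elif state is RecordState.IN_RESOLUTION:
            # Move the running workflow along, or close it if the data
            # has since been corrected.
            if record.get("contact_info"):
                record["state"] = RecordState.RESOLVED

records = [
    {"study_id": "S-1", "state": RecordState.NEW, "contact_info": None},
    {"study_id": "S-2", "state": RecordState.IN_RESOLUTION, "contact_info": "pi@site.org"},
]
run_batch(records)
print([(r["study_id"], r["state"].name) for r in records])
```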


In working to build these types of rules with a customer involved in clinical trials (a Big Pharma Company), we observed that the business rules involved in a clinical study or trial tend to possess some interesting characteristics:

  • They have various sub-rules and stages of processing
  • There are many similarities between the sub-rules, e.g. all rules must check whether the record is “active”
  • Most, if not all, tie a specific phase or status of the study to various other conditions like “all phase II studies must possess these four data fields, and the allowable values for [field] are [value1], [value2]…[valueN]” (see the sketch after this list)
  • Data is more or less hierarchical – a particular study is usually not confined to a single site, region, or even country
  • Relationships between data entities are extensively involved in the evaluation of rules
  • A human workflow process is invoked to resolve issues with the data; this process often involves exemptions and other special situations
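
To give a flavor of the phase-based conditions mentioned above, here is a small Python sketch of a declarative rule table. The field names, phase labels, and allowed values are placeholders invented for illustration, not the customer’s actual data model or InRule syntax.

```python
# Hypothetical rule table: for each study phase, the fields that must be
# present and the values they are allowed to take.
PHASE_RULES = {
    "Phase II": {
        "required_fields": ["site_id", "enrollment_count", "dosage_arm", "irb_id"],
        "allowed_values": {"dosage_arm": {"placebo", "low-dose", "high-dose"}},
    },
}

def validate_study(study: dict) -> list[str]:
    """Return a list of human-readable violations for one study record."""
    violations = []
    rules = PHASE_RULES.get(study.get("phase"), {})

    for field in rules.get("required_fields", []):
        if field not in study:
            violations.append(f"missing required field '{field}'")

    for field, allowed in rules.get("allowed_values", {}).items():
        if field in study and study[field] not in allowed:
            violations.append(f"'{study[field]}' is not an allowed value for '{field}'")

    return violations

study = {"phase": "Phase II", "site_id": "US-14", "dosage_arm": "mega-dose"}
print(validate_study(study))
```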

When a rule is triggered – for example, when a study is missing contact information – the rules add the issue along with some related data to a collection of “rules violation” data structures. These structures describe the located issue and provide all of the information someone might need to resolve the problem. Once all of the rules have been evaluated, rules execution moves out of the business-specific stage of data assessment and into a processing stage that walks through the located issues and decides what action to take for each one. Typically, the visible result is that users receive an email, but the nature of the design is such that the exact same set of rules can be used in a completely different context (provided the data models are identical, natch) – say, for form validation on data entry.
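
Here is a simplified Python sketch of that two-stage pattern. The `RuleViolation` structure stands in for the “rules violation” data structures described above; the missing-contact rule and the “email” dispatch are illustrative assumptions only.

```python
from dataclasses import dataclass, field

@dataclass
class RuleViolation:
    """Everything a reviewer needs to resolve the located issue."""
    study_id: str
    rule_name: str
    message: str
    context: dict = field(default_factory=dict)

def evaluate_rules(study: dict) -> list[RuleViolation]:
    """Stage 1: business-specific assessment, which only collects violations."""
    violations = []
    if not study.get("contact_email"):
        violations.append(RuleViolation(
            study_id=study["study_id"],
            rule_name="MissingContactInfo",
            message="Study has no contact email on file.",
            context={"site": study.get("site_id")},
        ))
    return violations

def dispatch_violations(violations: list[RuleViolation]) -> None:
    """Stage 2: decide what to do with each located issue. Here we just
    print an 'email', but the same violations could drive form validation."""
    for v in violations:
        print(f"[email] {v.study_id}: {v.message} ({v.context})")

dispatch_violations(evaluate_rules({"study_id": "S-42", "site_id": "US-14"}))
```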

The separation of rules performing the business-relevant evaluations from rules processing the outcomes of those violations is what makes this system powerful; new rules can easily be implemented by composing existing sub-rules, and new channels for expressing the output can be added without modifying existing ones. The InRule BRMS is a perfect fit for this situation because it provides users with all of the capabilities needed to richly express domain-relevant rules in a fashion clear to even the most non-technical of users.
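
As a sketch of why that separation pays off, the example below (again plain Python, with invented channel functions rather than any InRule feature) shows how a new output channel – say, form validation feedback – can be added without touching the rules that produce the violations.

```python
from typing import Callable

# A "channel" is just a callable that consumes violations; new channels can
# be registered without touching the rules that produce the violations.
Channel = Callable[[str, str], None]

def email_channel(study_id: str, message: str) -> None:
    print(f"[email] {study_id}: {message}")

def form_validation_channel(study_id: str, message: str) -> None:
    print(f"[form error] {study_id}: {message}")

def process_violations(violations, channels: list[Channel]) -> None:
    for study_id, message in violations:
        for channel in channels:
            channel(study_id, message)

violations = [("S-42", "Study has no contact email on file.")]
# Adding a new output channel is a one-line change here.
process_violations(violations, [email_channel, form_validation_channel])
```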

Want to learn more about using InRule to enable your scientific research? Leave me a comment!
