RFC - Design Doc - Event Publishing

Event Publishing

Nominated owner: Geoff Nunan


Rhize is a real-time, event-driven data hub. That means that we can consume and produce events and data streams.
This article expands on what it means to produce events: how do we define an event? How do we define the event payload? How do we determine when an event has occurred?

An Event can be defined as a meaningful business activity or state transition that stakeholders or other systems may be interested in.



Publishing events is one of the core functions of an event-driven data hub.

Events differ from transactions in that they focus on capturing the outcomes or consequences of domain activities rather than the low-level mechanics of data manipulation or system operations. The key differences:

  1. Granularity and Focus:
  • Events typically capture high-level domain activities or state changes that are meaningful to stakeholders or other systems. They represent business-level concepts such as orders placed, shipments delivered, or inventory updated.
  • Transactions, on the other hand, are low-level operations that involve the manipulation of data or resources within the system. They focus on ensuring data consistency and integrity at the database or application level.
  2. Intent and Purpose:
  • Events convey the outcome or consequence of a domain activity, often reflecting the state transition or business significance of an action. They serve as signals or notifications of important occurrences within the domain.
  • Transactions are concerned with ensuring the atomicity, consistency, isolation, and durability (ACID properties) of data operations. Their primary purpose is to maintain data integrity and correctness during the execution of operations.
  3. Decoupling and Communication:
  • Events facilitate loose coupling between different parts of the system or between bounded contexts in a distributed architecture. They enable asynchronous communication and allow systems to react to changes without direct dependencies.
  • Transactions are typically tightly coupled to the execution context or transactional boundaries within a single system. They enforce sequential consistency and often involve immediate responses or feedback within the scope of a single operation.
  4. Temporal Aspects:
  • Events have temporal significance, representing past occurrences or future intents within the domain. They can be consumed by interested parties asynchronously and may trigger subsequent actions or workflows.
  • Transactions are concerned with the current state of data and ensuring that operations are executed atomically and consistently at a specific point in time.
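The distinction above can be sketched in code. This is an illustrative contrast only; the class and field names are hypothetical, not the Rhize schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Transaction:
    """A low-level write: one mutation against one entity (hypothetical shape)."""
    entity_type: str      # e.g. "materialSubLot"
    entity_id: str        # internal database ID
    action: str           # "insert" | "update" | "delete"
    changed_fields: dict  # attribute -> new value

@dataclass
class DomainEvent:
    """A high-level business occurrence a subscriber would care about."""
    name: str              # business-level concept, e.g. "MaterialPlacedOnHold"
    occurred_at: datetime  # temporal significance: when it happened
    payload: dict          # only what subscribers need, not the raw write

# A transaction captures the mechanics of the write...
txn = Transaction("materialSubLot", "0x4a2", "update", {"status": "Held"})

# ...while the event captures its business meaning.
event = DomainEvent(
    name="MaterialPlacedOnHold",
    occurred_at=datetime.now(timezone.utc),
    payload={"subLotId": txn.entity_id},
)
```

Note that many transactions (or none) may map to one event: the event is defined by business significance, not by the shape of the write.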

So how should we generate Events in the Rhize Manufacturing Data Hub?

  • We have the ability to monitor transactions (GraphQL Mutations) and even lower-level changes to entities in the database (each internal ID and the insert/update/delete actions on that ID).
  • How would we define rules over those transactions to be able to generate Events?

Guide-level explanation

ISA-95 includes a model for defining and recording Operations Events, which have a definition very similar to that of an Event above.

  • ISA-95 Operations Event Model - Operations event information is generated as a result of the occurrence of a real-world event that warrants notification to interested parties. Operations event information is published as time stamped notifications using the operations event information exchange object. The operations event exchange explicitly includes the process context of the real-world event and all pertinent information actioned by the publisher that is associated with the real-world event. The subsequent processing of operations events by subscribers is not of concern to the operations event publisher.

This gives us a model for defining classes of events, definitions of events, and their schemas (Record Specifications).
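As a rough sketch of the description above, an operations event notification could carry a timestamp, the process context, and the information actioned by the publisher. The attribute names below are assumptions, not the normative ISA-95 exchange schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OperationsEvent:
    """Illustrative time-stamped notification; field names are assumptions."""
    event_class: str     # class of event, per its Record Specification
    definition: str      # which event definition this instance realises
    timestamp: datetime  # when the real-world event occurred
    context: dict        # process context (equipment, order, segment, ...)
    payload: dict        # pertinent information actioned by the publisher

evt = OperationsEvent(
    event_class="QualityHold",
    definition="MaterialSubLotPlacedOnHold",
    timestamp=datetime.now(timezone.utc),
    context={"equipment": "Filler-01", "workOrder": "WO-1001"},
    payload={"subLotId": "0x4a2", "status": "Held"},
)
```

Consistent with the ISA-95 description, the publisher's responsibility ends at producing this object; subscribers decide what to do with it.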

We now need somewhere to define the rules for when an Operations Event should be generated.

What different types of Event do we have in the Rhize Data Hub?

  • Calendar Events: Shift Started, Shift Ended, and other similar periodic work calendar events.
  • Time-Series Events: A rule engine monitors streaming data sources and evaluates user-defined rules. The firing of a rule can be considered an Event, and the rule-definition UI allows the user to define the message payload published when the rule fires.
  • Operations Events created by GraphQL Mutation: Users can create Operations Events directly via the GraphQL API.
  • PROPOSED - Transaction Rule Evaluation: The user can define a rule, similar to the rules that run over data streams from external sources, that runs over the Rhize DataHub transaction log. This would support use cases such as: publish an event whenever a material sublot is placed on quality hold. The rule would monitor transactions on the materialSubLot type and check for the status attribute value “Held”.

Is there a difference in how we should treat observed events, versus inferred events?

  • Observed Events are events that are directly captured or detected by the system from external sources or sensors. These events represent actual occurrences or changes in the environment. Observed events do not change when the business rules change.

  • Observed events may arrive late. An example could be that we are only notified that an Order has started an hour after the order actually started.

  • Examples of observed events could include:

    • PLC Data relating to sensors or actuators such as a Valve opening or closing, or a temperature measurement.
    • User data entry such as recording that an order is shipped.
    • An external application publishing a notification that an order has been released.
  • Inferred Events are events that are derived or inferred from existing data, patterns, or rules within the system. These events are not directly observed, but are generated based on analysis, reasoning or inference.

  • In the context of event generation and data processing, “backfilling” refers to the process of retroactively generating events or updating historical data based on newly added or updated rules, configurations, or data sources. It involves filling gaps or updating existing event streams with new information, ensuring that the event data is complete, accurate, and aligned with the latest rules or requirements.

    1. Rule Changes:
    • Suppose there are changes to the rules governing event generation, such as modifications to the criteria for determining event eligibility (e.g., order value thresholds).
    2. Late Arrival of Transactions:
    • Additionally, there may be instances where new transactions arrive late, either due to delays in data ingestion or updates to historical records.
    3. Backfilling Process:
    • When rule changes occur, the Event Generator detects the updates and initiates a backfilling process.

    • The system reevaluates historical data or late-arriving transactions based on the updated rules to identify events that need to be generated, modified, or invalidated.

    • For example, if a rule change lowers the order value threshold for generating a specific event, historical orders that previously didn’t meet the threshold may now qualify for event generation and need to be backfilled with the corresponding events.

    • Similarly, late-arriving transactions that were initially missed or overlooked can be processed retroactively to ensure that no relevant events are omitted from the event stream.

    4. Event Versioning and Metadata:
    • During backfilling, events are versioned and tagged with metadata indicating the rule set or version used for event generation.

    • This ensures that backfilled events are correctly interpreted and distinguished from events generated using previous rule versions.

    5. Asynchronous Processing:
    • Backfilling may involve significant computational effort, especially when processing large volumes of historical data or complex rule changes.

    • Asynchronous processing allows the system to handle backfilling tasks without impacting real-time event generation or system performance.

    6. Monitoring and Verification:
    • Throughout the backfilling process, the system monitors progress, logs relevant information, and performs verification checks to ensure the accuracy and correctness of the updated event data.

    • Monitoring metrics and logs provide visibility into the backfilling process, including processing times, error rates, and data consistency checks.

  • By incorporating backfilling capabilities into the Event Generator, the system can maintain the integrity and completeness of event data over time, even in the presence of rule changes or late-arriving data. This ensures that the event stream remains reliable and reflective of the latest business rules and requirements.
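The backfilling steps above can be sketched with the order-value example: re-evaluate historical transactions under the new rule and tag each backfilled event with the rule version that generated it. The function and field names are hypothetical:

```python
def backfill(history: list[dict], threshold: float, rule_version: int) -> list[dict]:
    """Re-run a (hypothetical) order-value eligibility rule over historical orders,
    tagging each generated event with the rule version used (versioning metadata)."""
    events = []
    for order in history:
        if order["value"] >= threshold:  # the eligibility criterion under this rule version
            events.append({
                "event": "HighValueOrderPlaced",
                "orderId": order["id"],
                "ruleVersion": rule_version,  # lets consumers distinguish backfilled events
            })
    return events

history = [{"id": "A", "value": 500}, {"id": "B", "value": 1500}]

# Under the original threshold only order B qualifies...
v1 = backfill(history, threshold=1000, rule_version=1)

# ...but lowering the threshold (a rule change) makes order A qualify too,
# so it must be backfilled with a v2-tagged event.
v2 = backfill(history, threshold=400, rule_version=2)
```

In practice this re-evaluation would run asynchronously over the transaction log, with the monitoring and verification steps described above wrapped around it.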

Reference-level explanation


Rationale and alternatives

Prior art

Unresolved questions

Future possibilities

@Jarrah @cooper.fitzgerald @Matt.Vandergrift @andy.german @yehezkiel @DavidSchultz @david.han