Requirement | Discussion
---|---
Legacy integration | Legacy data integration was achieved in the previous parts of the simulation. The data access illustrated in this simulation shows that both applications can access measurements regardless of whether they originated from legacy or smart devices. |
Cross-network communication | Similarly, previous parts of the simulation have realised cross-network communication. Data ingested across all networks is unified by message queues before being exposed to applications for reading. |
Fault tolerance | The inherent qualities of cloud computing are used to provide fault tolerance in the data access part of the simulation. For example, file storage and delivery services in the cloud (e.g. Amazon S3) can provide a distributed, low-latency and fault-tolerant platform for serving time-series data. |
Extensibility | Extensible data access in the pipeline is necessary to service industrial analytics applications. For example, an application may require data to be presented in a particular format or standard. In this scenario the new format can be generated by a processing component (part 2 of the simulation) and pushed to the cloud repository for industrial applications to access. |
Scalability | As with fault tolerance, the scalability of the data access part of the simulation depends on the cloud service on which it resides. The ability of cloud-based file delivery services to scale horizontally across multiple compute nodes and data centres provides a highly scalable infrastructure for serving time-series data. |
Openness and accessibility | The simulation illustrates how data access can be achieved from the data pipeline with context-encoded URLs over HTTP. Furthermore, no proprietary or commercial technologies or drivers are required to consume the time-series data from the cloud. Therefore, there are no obvious technology barriers preventing users, applications and systems from accessing the data. |
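The openness and accessibility requirement can be sketched concretely. The following minimal Python example is an illustration only: the bucket name and the site/device/day path hierarchy are assumptions, not details taken from the simulation. It shows how measurement context might be encoded into a URL so that a plain HTTP GET, with no proprietary drivers, retrieves the matching time-series file.

```python
from urllib.parse import quote

# Hypothetical base URL for the cloud file store (an assumed S3 bucket,
# not one defined by the simulation).
BASE_URL = "https://example-bucket.s3.amazonaws.com"

def measurement_url(site: str, device: str, day: str) -> str:
    """Encode measurement context (site, device, day) into the URL path.

    Each path segment is percent-encoded so identifiers containing
    spaces or special characters remain valid in the URL.
    """
    path = "/".join(quote(part, safe="") for part in (site, device, day))
    return f"{BASE_URL}/{path}/measurements.csv"

url = measurement_url("plant-a", "sensor 42", "2020-01-31")
# An application would then consume the data with any standard HTTP
# client, e.g.:
#   import urllib.request
#   data = urllib.request.urlopen(url).read()
```

Because the context is carried entirely in the URL and the payload is served over plain HTTP, any language or system with an HTTP client can consume the data, which is the basis of the "no technology barriers" claim above.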