Hugo Hiden

I originally did a Chemical Engineering undergraduate degree at Newcastle University before completing a PhD on the use of Artificial Intelligence and Machine Learning for modelling and monitoring chemical process plants. This led me away from Chemical Engineering towards software development, with a particular focus on data-intensive applications. After six years in industry creating data analysis systems for high-throughput chemical laboratories, I moved back to Newcastle University to work in the newly formed e-Science Centre, a group involved in a number of projects across the University making use of Grid systems; it eventually became the Digital Institute.

Throughout this time, I have also maintained an active development role and, along with some University colleagues, developed e-Science Central (www.esciencecentral.co.uk), a cloud-based data processing and analytics platform. It has been used in a number of research projects, typically in the Medical School, and formed the basis for a company a few of us founded, Inkspot.co, which sold the platform into industry, most notably to Unilever for their connected consumer devices programs. This background has led to my interest in IoT applications, streaming data ingest, visualisation, and machine learning.

Recent Posts

EventFlow: Event-driven microservices on OpenShift (Part 1)

This post is the first in a series of three related posts describing EventFlow, a lightweight, cloud-native, distributed microservices framework we have created. EventFlow can be used to develop streaming applications that process CloudEvents, an effort to standardize on a data format for exchanging information about events generated by cloud platforms.
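
For reference, a CloudEvent carries a small set of required context attributes (specversion, id, source, type) alongside the payload. The sketch below hand-rolls a minimal v1.0 envelope in Java purely for illustration; the event type, source, and payload values are made up, and a real application would use a JSON library or the CloudEvents SDK instead.

```java
import java.util.UUID;

// Builds a minimal JSON envelope with the required CloudEvents v1.0
// attributes. The type, source, and data values here are invented
// examples, not values taken from the EventFlow posts.
public class MinimalCloudEvent {

    public static String envelope(String type, String source, String dataJson) {
        return String.format(
                "{\"specversion\":\"1.0\",\"id\":\"%s\",\"source\":\"%s\"," +
                "\"type\":\"%s\",\"datacontenttype\":\"application/json\",\"data\":%s}",
                UUID.randomUUID(), source, type, dataJson);
    }

    public static void main(String[] args) {
        System.out.println(envelope("com.example.meter.reading",
                "/meters/42", "{\"kwh\":0.37}"));
    }
}
```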

The EventFlow platform was created specifically to target Kubernetes and OpenShift, and it models event-processing applications as a connected flow, or stream, of components. Components can be developed with a simple SDK library, or they can be built as Docker images that are configured through environment variables to attach to Kafka topics and process event data directly (a sketch of this latter approach follows).
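
To make the environment-variable configuration concrete, here is a minimal sketch of a standalone component written against the plain Apache Kafka client API rather than the EventFlow SDK. The variable names (KAFKA_BOOTSTRAP, INPUT_TOPIC, OUTPUT_TOPIC) and topic names are assumptions for illustration, not the names EventFlow itself uses.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

// A standalone processing component configured entirely through the
// container environment: it reads events from one Kafka topic,
// transforms them, and writes the results to another.
public class PassThroughComponent {

    public static void main(String[] args) {
        // Environment variable names here are illustrative assumptions.
        String bootstrap = System.getenv().getOrDefault("KAFKA_BOOTSTRAP", "localhost:9092");
        String inputTopic = System.getenv().getOrDefault("INPUT_TOPIC", "events-in");
        String outputTopic = System.getenv().getOrDefault("OUTPUT_TOPIC", "events-out");

        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "pass-through-component");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList(inputTopic));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // Each record value is assumed to carry one serialized event;
                    // a real component would parse and transform it here.
                    producer.send(new ProducerRecord<>(outputTopic, record.key(), record.value()));
                }
            }
        }
    }
}
```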

Continue reading “EventFlow: Event-driven microservices on OpenShift (Part 1)”

Smart-Meter Data Processing Using Apache Kafka on OpenShift

There is a major push in the United Kingdom to replace aging mechanical electricity meters with connected smart meters. New meters allow consumers to more closely monitor their energy usage and associated cost, and they enable the suppliers to automate the billing process because the meters automatically report fine-grained energy use.

This post describes an architecture for processing a stream of meter readings using Strimzi, which provides support for running Apache Kafka in a container environment (Red Hat OpenShift). The data was made available through a UK research project that collected information from energy producers, distributors, and consumers between 2011 and 2014. The TC1a dataset used here contains readings from 8,000 domestic customers taken at half-hour intervals.
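
As a rough illustration of the kind of processing such an architecture supports, the Kafka Streams sketch below sums half-hourly readings into daily totals per meter. The topic name, record layout (meter ID as the key, kWh as a string value), and broker address are all assumptions for the example, not details taken from the post or the TC1a dataset.

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.TimeWindows;

// Aggregates half-hourly consumption records into daily totals per meter.
public class MeterReadingAggregator {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "meter-aggregator");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
                System.getenv().getOrDefault("KAFKA_BOOTSTRAP", "localhost:9092"));

        StreamsBuilder builder = new StreamsBuilder();

        // Assumed layout: key = meter ID, value = kWh used in one
        // half-hour interval, serialized as a plain string.
        builder.stream("meter-readings", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(Double::parseDouble)
               .groupByKey(Grouped.with(Serdes.String(), Serdes.Double()))
               // Sum the 48 half-hourly readings that fall within each day.
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofDays(1)))
               .reduce(Double::sum)
               .toStream()
               .foreach((windowedKey, total) ->
                       System.out.printf("%s used %.2f kWh on %s%n",
                               windowedKey.key(), total, windowedKey.window().startTime()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        streams.start();
    }
}
```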

Continue reading “Smart-Meter Data Processing Using Apache Kafka on OpenShift”
