Graph-Massivizer researches and develops a high-performance, scalable, and sustainable toolkit for information processing and reasoning based on the massive graph representation of extreme data. It delivers five open-source software tools and FAIR graph datasets covering the sustainable lifecycle of processing extreme data as massive graphs. The tools focus on holistic usability (from extreme data ingestion to massive graph creation), automated intelligence (through analytics and reasoning), performance modelling, and environmental sustainability trade-offs, supported by credible data-driven evidence across the computing continuum. Automated operation based on the emerging serverless computing paradigm enables both experienced and novice stakeholders, from large and small organisations alike, to capitalise on extreme data through massive graph programming and processing.
The Graph-Massivizer Toolkit loosely integrates the five tools while preserving the distinct functionality researched in each. In the integrated toolkit, algorithms that perform basic graph operations (BGOs), developed by Graph-Inceptor and Graph-Scrutinizer together with other open-source libraries, are executed efficiently and in a green-aware fashion across diverse hardware environments, following the advanced techniques developed by Graph-Optimizer, Graph-Greenifier, and Graph-Choreographer.
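The following minimal Python sketch illustrates this dispatch pattern: BGOs are registered once and then routed to the execution target with the lowest estimated carbon cost. All names in the sketch (register_bgo, GreenScheduler, the target cost table) are hypothetical stand-ins for the interfaces of Graph-Optimizer, Graph-Greenifier, and Graph-Choreographer, not the project's actual API.

```python
# Hypothetical sketch of green-aware BGO dispatch; the real
# Graph-Massivizer interfaces may differ substantially.
from typing import Callable, Dict

BGO_REGISTRY: Dict[str, Callable] = {}

def register_bgo(name: str):
    """Register a basic graph operation (BGO) under a name."""
    def decorator(fn: Callable):
        BGO_REGISTRY[name] = fn
        return fn
    return decorator

@register_bgo("pagerank")
def pagerank(graph, damping=0.85, iters=20):
    """Toy PageRank over an adjacency-list dict {node: [neighbours]}."""
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in graph}
        for v, nbrs in graph.items():
            share = damping * rank[v] / max(len(nbrs), 1)
            for u in nbrs:
                new[u] += share
        rank = new
    return rank

class GreenScheduler:
    """Pick the execution target with the lowest estimated carbon cost."""
    def __init__(self, targets: Dict[str, float]):
        # targets: name -> estimated gCO2e per unit of work (illustrative)
        self.targets = targets

    def run(self, bgo_name: str, *args, **kwargs):
        target = min(self.targets, key=self.targets.get)
        print(f"running {bgo_name} on {target}")
        return BGO_REGISTRY[bgo_name](*args, **kwargs)

sched = GreenScheduler({"cpu-cluster": 1.2, "gpu-node": 0.8})
graph = {"a": ["b"], "b": ["c"], "c": ["a"]}
print(sched.run("pagerank", graph))
```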
The toolkit is demonstrated in four use cases, each illustrated by a short code sketch after this list:

- Synthetic generation of extreme volumes of financial data for stocks and commodity futures, adaptable to additional securities such as options, bonds, exchange-traded funds, mutual funds, and currencies
- Analysis of company-related events in historical data, identification of recurring patterns in event sequences, and prediction of the most likely subsequent events by matching new observations against those patterns
- Integration of traditional expert knowledge with sensor data for quality monitoring in manufacturing, combining knowledge graphs (KGs) with time-series sensor data models to improve the explainability, accuracy, and flexibility of quality predictions, and provision of expert insights and real-time measurements for superior quality control
- Continuous prediction of compute node failures in a high-performance computing system, based on an anomaly prediction model that exploits the nodes’ physical layout and is integrated into the monitoring system through a continuous graph neural network deployment pipeline
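For the financial use case, the sketch below generates synthetic stock price paths with geometric Brownian motion. This is a generic technique chosen for illustration, not necessarily the generator used in the project, and the drift and volatility parameters are arbitrary.

```python
# Illustrative synthetic price generation via geometric Brownian
# motion; not the project's actual generator, parameters arbitrary.
import numpy as np

def synthetic_prices(n_assets=5, n_days=252, s0=100.0,
                     mu=0.05, sigma=0.2, seed=0):
    """Simulate daily closing prices for n_assets over n_days."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 252                      # one trading day in years
    shocks = rng.standard_normal((n_days, n_assets))
    # GBM log-returns: (mu - sigma^2/2) dt + sigma sqrt(dt) * Z
    log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * shocks
    return s0 * np.exp(np.cumsum(log_ret, axis=0))

prices = synthetic_prices()
print(prices.shape)                     # (252, 5)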
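For the event-analysis use case, a minimal baseline is a first-order Markov model over event types: learn transition counts from historical sequences, then predict the most frequent successor of the current event. The project's matching over graph-structured event data is richer; this sketch, with invented event names, only conveys the idea.

```python
# Baseline next-event prediction: first-order Markov chain over
# event types. Illustrative only; event names are invented.
from collections import Counter, defaultdict

def fit_transitions(sequences):
    """Count how often each event type follows another."""
    trans = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            trans[prev][nxt] += 1
    return trans

def predict_next(trans, event):
    """Return the most frequent successor of `event`, or None."""
    followers = trans.get(event)
    return followers.most_common(1)[0][0] if followers else None

history = [
    ["earnings_report", "guidance_cut", "downgrade"],
    ["earnings_report", "guidance_cut", "sell_off"],
    ["merger_rumour", "earnings_report", "guidance_cut", "downgrade"],
]
model = fit_transitions(history)
print(predict_next(model, "guidance_cut"))  # downgrade
```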
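For the manufacturing use case, the sketch below shows one simple way to fuse the two signal sources: statistical features of a sensor time series are checked against tolerance limits read from a small KG-like store of expert rules, which keeps the resulting prediction explainable. All entity names, predicates, and thresholds are invented for illustration.

```python
# Illustrative fusion of expert knowledge (tolerances stored as
# triples) with time-series sensor features; names and numbers
# are invented.
import statistics

# Expert knowledge as (subject, predicate, object) triples
kg = [
    ("spindle_temp", "max_allowed_mean", 85.0),
    ("spindle_temp", "max_allowed_std", 4.0),
]

def limits_for(sensor, triples):
    return {pred: obj for subj, pred, obj in triples if subj == sensor}

def quality_ok(sensor, readings, triples):
    """Flag a part as defective if readings violate expert limits."""
    lim = limits_for(sensor, triples)
    mean = statistics.fmean(readings)
    std = statistics.stdev(readings)
    return mean <= lim["max_allowed_mean"] and std <= lim["max_allowed_std"]

readings = [82.1, 83.4, 84.0, 83.2, 82.8]
print(quality_ok("spindle_temp", readings, kg))  # True
```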
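For the high-performance computing use case, the core idea can be sketched as one round of message passing over the nodes' physical layout: each node's telemetry is mixed with that of its physically adjacent nodes before thresholding an anomaly score, so that spatially correlated stress (e.g. a hot rack section) raises the risk of neighbouring nodes. A trained graph neural network would learn these aggregation weights; the layout, telemetry, and threshold below are invented.

```python
# Minimal message-passing sketch over a physical node layout;
# a real GNN learns the weights. All values are invented.
import numpy as np

# Adjacency of 4 compute nodes in a rack (chain layout)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# One telemetry feature per node, e.g. normalised temperature
x = np.array([0.2, 0.9, 0.8, 0.3])

# One message-passing round: mix own signal with neighbour mean
deg = A.sum(axis=1)
neighbour_mean = (A @ x) / deg
h = 0.5 * x + 0.5 * neighbour_mean

threshold = 0.6
print("failure risk per node:", np.round(h, 2))
print("flagged nodes:", np.where(h > threshold)[0])  # nodes 1 and 2
```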