The computing models of the Belle II, T2K and HK experiments are at different design and operational stages, but they face a variety of common problems in managing computational and storage resources, monitoring the network and developing software. Each of the three collaborations has to manage a huge amount of data, which must be made available to scientific communities spread worldwide. Moreover, large computing power is needed to reconstruct physics events in large detectors, with millions of readout channels, and to search for rare signals in a background-dominated environment.
For each of these topics, a common set of technologies will be examined jointly.
Computing: DIRAC is a general framework for the management of jobs and resources over distributed, heterogeneous computing environments. It will be one of the main common components used by the three experiments. During our activities we want to share information and ideas about the usage of this framework for production and analysis; in particular, we want to converge on a set of common technologies to take advantage of resources provided via Cloud interfaces.
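As a minimal sketch of the kind of workflow the three experiments could share, the following snippet submits a job through the DIRAC Python API; the executable name and destination site are hypothetical placeholders, and each experiment would substitute its own payload and configuration.

```python
# Standard DIRAC script initialization (loads configuration and proxy).
from DIRAC.Core.Base import Script
Script.parseCommandLine(ignoreErrors=True)

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("common-production-test")       # hypothetical job name
job.setExecutable("run_analysis.sh")        # hypothetical payload script
job.setCPUTime(3600)                        # requested CPU time in seconds
job.setDestination("LCG.KEK.jp")            # hypothetical destination site

dirac = Dirac()
result = dirac.submitJob(job)               # returns an S_OK/S_ERROR dict
print(result)
```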
Storage: Data management is a hot topic for the three experiments. We plan to converge on a set of common interfaces for data access and data replication, including joint tests of data access and data transfer. SRM, HTTP and S3 are three of the candidate protocols for Grid storage, while FTS will be examined as a possible common tool for data transfer and data replication.
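A joint transfer test could be driven through the FTS Python bindings; the sketch below submits a single third-party transfer between an SRM source and an HTTP destination, under the assumption that both endpoints and the FTS server URL (all hypothetical here) are reachable with a valid Grid proxy.

```python
import fts3.rest.client.easy as fts3

endpoint = "https://fts3.example.org:8446"   # hypothetical FTS3 server
context = fts3.Context(endpoint)             # uses the Grid proxy by default

# Hypothetical source and destination replicas on two different protocols.
source = "srm://storage-a.example.org/data/run001/file.root"
destination = "https://storage-b.example.org/data/run001/file.root"

transfer = fts3.new_transfer(source, destination, checksum="ADLER32")
job = fts3.new_job([transfer], verify_checksum=True, retry=3)

job_id = fts3.submit(context, job)
print("Submitted FTS job:", job_id)
print(fts3.get_job_status(context, job_id)["job_state"])
```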
Software: Code development and code distribution over sites are two common topics. We plan to share know-how, best practices and procedures for the usage of distributed version control systems such as Git for software development. We will also share ideas about directory organization and the general usage of CVMFS, a system that replicates code across Grid and Cloud resources.
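Since CVMFS appears to worker nodes as a read-only mounted file system, a shared directory convention can be validated with a few lines of code; in this sketch the repository name and the releases/ layout are illustrative assumptions, not an agreed standard.

```python
import os
import sys

# Hypothetical experiment repository and directory layout on CVMFS.
repo = "/cvmfs/experiment.example.org"
releases_dir = os.path.join(repo, "releases")

if not os.path.isdir(releases_dir):
    sys.exit("CVMFS repository %s not mounted or incomplete" % repo)

# CVMFS mounts are read-only; jobs only need to read the software from here.
for release in sorted(os.listdir(releases_dir)):
    print("available release:", release)
```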
Network: The three experiments will run over a high-latency network connecting sites on three different continents. We want to define a common way to monitor the main parameters of the network infrastructure, measure performance, and increase reliability through the early identification of faults and network degradation. We plan to evaluate the possibility of sharing a mesh of servers based on the perfSONAR Toolkit, sharing those deployed at the common sites and building the corresponding network maps for the different experiments.
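Results collected by a perfSONAR mesh can be retrieved from its measurement archive (esmond) over a REST interface; the sketch below lists the source and destination of recent throughput measurements, assuming a hypothetical archive host and that the node exposes the standard esmond endpoint.

```python
import requests

# Hypothetical perfSONAR node exposing the esmond measurement archive.
archive = "https://ps.example.org/esmond/perfsonar/archive/"
params = {
    "event-type": "throughput",   # select only throughput measurements
    "time-range": 86400,          # results from the last 24 hours
    "format": "json",
}

for record in requests.get(archive, params=params, timeout=30).json():
    print(record["source"], "->", record["destination"])
```

Running such a query periodically against every node of the shared mesh would give each experiment the raw data needed to build its own network map from a single, common monitoring infrastructure.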