The Annual High Throughput Computing workshop took place in Madison, Wisconsin, from July 8 to July 12. The workshop included sessions that gave the US CMS Computing and Software organization an opportunity to hold its annual all-hands meeting. The part relevant to CMS was packed into two days, Wednesday and Thursday.
Wednesday was dedicated to CMS and joint ATLAS/CMS sessions. Unlike previous meetings, this one was organized as discussions rather than talks. The main topic revolved around the question: "What should analysis facilities look like in the future?"
Right now it is hard to give a good answer to that question, because the computing and software community has little experience performing large-scale analyses that would reasonably represent the future challenges. We at MIT, with our subMIT prototype, are interested in evaluating the current models and ideas and will do so in the upcoming months. Many of the tools being proposed involve a non-negligible investment, and it is hard to predict whether they will be well supported into the future.
The discussion of the US CMS Tier-2 program was very subdued, mostly because only people from the Wisconsin and Purdue teams came to Madison; the rest participated via Zoom. Somewhat surprisingly, no topics related to Tier-2 site experiences or challenges were brought up.
Thursday was dedicated to various HTCondor topics such as best practices, weeding out bad users, some tutorials, and future developments. Some of the talks were interesting and informative to listen to, but most of the material was useful to site administrators rather than to users.