Nowadays, physics research relies heavily on advanced computing resources to perform large-scale simulations, data processing, and complex analyses. As computing technologies evolve, physicists continuously adapt their workflows to take full advantage of the improvements. The Throughput Computing 2025 (HTC2025) conference, held from June 2–6 at the University of Wisconsin–Madison, brought together users, developers, and contributors of OSPool and HTCondor to share ideas, showcase innovations, and discuss the future of high-throughput computing.

The conference had more than 200 participants from a broad range of scientific domains. Presentations from different communities showed how throughput computing is used in biology and the life sciences, IceCube, XENONnT, and the LHC experiments. The throughput team also provided updates on recent developments, including Pelican, a powerful new tool designed to deliver data efficiently to high-throughput workflows. Dedicated sessions for CMS and ATLAS offered valuable discussions on topics such as analysis facilities, AI integration, data challenges, and monitoring and performance services.

Two members of the PPC group participated in the conference. Zhangqier Wang presented a holistic cost analysis of running a computing center, aimed at guiding procurement strategies and reducing operational expenses. Maxim Goncharov presented his work on the MIT Tier-2 tape project, which focuses on expanding storage capacity and streamlining data access for CMS workflows. Both talks were well received, with attendees noting their practical value.