ScaleSys 2025: 1st International Workshop on Intelligent and Scalable Systems across the Computing Continuum

Toward Sustainability-Aware LLM Inference on Edge Clusters

Authors: Kolichala Rajashekar (Department of Computer Science, University of Innsbruck, Austria), Nafiseh Sharghivand (Department of Computer Science, University of Innsbruck, Austria), Radu Prodan (Department of Computer Science, University of Innsbruck), Reza Farahani (Institute of Information Technology, University of Klagenfurt)


Abstract

Large language models (LLMs) require substantial computational resources, leading to significant carbon emissions and operational costs. Although training is energy-intensive, the long-term environmental burden arises from inference, amplified by the massive global query volume. Cloud-based inference offers scalability but suffers from latency and bandwidth constraints due to centralized processing and continuous data transfer. Edge clusters can instead mitigate these limitations by enabling localized execution, yet they face trade-offs between performance, energy efficiency, and device constraints. This short paper presents a sustainability-aware LLM inference approach for edge clusters comprising NVIDIA Jetson Orin NX (8GB) and NVIDIA Ada 2000 (16GB) devices. It aims to balance inference latency and carbon footprint through carbon- and latency-aware routing strategies, guided by empirical benchmarking of energy consumption and execution time across diverse prompts and batch (i.e., group of prompts) configurations. We compare baseline greedy strategies to carbon-aware and latency-aware strategies that route prompts to specific hardware based on benchmarking information. Experimental evaluation shows that a batch size of four prompts achieves a trade-off between throughput and energy efficiency, while larger batches risk GPU memory saturation.
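As a rough illustration of the carbon- and latency-aware routing the abstract describes, the sketch below scores each device by a weighted sum of normalized latency and carbon cost derived from benchmark data. The device names, benchmark numbers, carbon-intensity value, and the linear weighting scheme are all illustrative assumptions, not figures from the paper.

```python
# Hypothetical sketch of carbon- and latency-aware prompt routing over an
# edge cluster. All numbers below are assumed for illustration only.

# Per-device benchmark estimates for a batch of four prompts (the batch
# size the paper finds favorable): seconds and joules per batch.
BENCHMARKS = {
    "jetson_orin_nx_8gb": {"latency_s": 2.4, "energy_j": 18.0},
    "nvidia_ada_2000_16gb": {"latency_s": 0.9, "energy_j": 42.0},
}

# Assumed grid carbon intensity in gCO2 per joule.
CARBON_INTENSITY_G_PER_J = 0.0001

def route_batch(alpha: float) -> str:
    """Pick the device minimizing a weighted sum of normalized latency and
    carbon cost; alpha=1.0 is purely latency-aware, alpha=0.0 purely
    carbon-aware."""
    max_lat = max(d["latency_s"] for d in BENCHMARKS.values())
    max_co2 = max(d["energy_j"] * CARBON_INTENSITY_G_PER_J
                  for d in BENCHMARKS.values())

    def score(d: dict) -> float:
        lat = d["latency_s"] / max_lat
        co2 = (d["energy_j"] * CARBON_INTENSITY_G_PER_J) / max_co2
        return alpha * lat + (1.0 - alpha) * co2

    return min(BENCHMARKS, key=lambda name: score(BENCHMARKS[name]))
```

With these assumed benchmarks, a purely latency-aware policy (`alpha=1.0`) prefers the faster discrete GPU, while a purely carbon-aware policy (`alpha=0.0`) prefers the lower-energy Jetson; intermediate weights trade one off against the other.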

Keywords: Sustainability, Large Language Models, LLM inference, Carbon Footprint, Edge Computing

How to Cite:

Rajashekar, K., Sharghivand, N., Prodan, R. & Farahani, R., (2025) “Toward Sustainability-Aware LLM Inference on Edge Clusters”, IoT Workshop Proceedings 1(1), 57-60. doi: https://doi.org/10.34749/3061-1008.2025.9

Rights: Copyright © 2025 The author(s)


Published on
2025-11-17

Peer Reviewed