Long model runtime? We currently only have a very small set of servers, and if you choose a model different from the one used in the previous prompt, it will first need to be loaded into memory. This takes time and energy. We are currently in the process of quantifying the loading cost of each model.
How representative is this result of a ChatGPT query or similar? The machines, machine configuration, and architecture used for ChatGPT are currently not published, so it is not known how much more or less energy a query on the actual ChatGPT infrastructure uses. It is very likely that our example setup uses more energy, as we use untuned off-the-shelf components and a dedicated machine for the inference.
Why are you showing Watt and not Joule? Ideally we would use Joule, since time is obviously a factor. But people are familiar with Watt, so we use that.
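The relationship between the two units is simple: a Watt is a Joule per second, so a reported average power draw can always be converted back to energy. A minimal sketch (the example numbers are made up for illustration):

```python
def energy_joules(avg_power_watts: float, duration_seconds: float) -> float:
    """Energy in Joule for a run at a given average power draw.

    Energy [J] = power [W] * time [s], since 1 W = 1 J/s.
    """
    return avg_power_watts * duration_seconds

# Example: an inference drawing 45 W on average for 12 seconds.
print(energy_joules(45, 12))  # 540.0 J
```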
How is the CO2eq calculated? We use the data from https://www.electricitymaps.com/ to get the current grid intensity and then multiply it by the energy used.
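The calculation above can be sketched as follows. Grid intensity is typically reported in gCO2eq per kWh, so the measured energy has to be converted from Joule to kWh first; the intensity value below is a placeholder, not real electricitymaps.com data:

```python
def co2eq_grams(energy_joules: float, grid_intensity_g_per_kwh: float) -> float:
    """CO2eq in grams: convert Joule to kWh, then multiply by grid intensity."""
    energy_kwh = energy_joules / 3_600_000  # 1 kWh = 3.6e6 J
    return energy_kwh * grid_intensity_g_per_kwh

# Example: 540 J at a (hypothetical) grid intensity of 400 gCO2eq/kWh
print(co2eq_grams(540, 400))  # ~0.06 g CO2eq
```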
What is the R in the SCI? We calculate the whole process of generating one response. We don't include network traffic or the energy used to parse the packets.
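For context: in the Green Software Foundation's Software Carbon Intensity specification, R is the functional unit the emissions are normalized to, via SCI = (E × I + M) per R. A hedged sketch under the assumption that R here is "one generated response" and that embodied emissions M are not tracked:

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, responses: int) -> float:
    """SCI in gCO2eq per functional unit: ((E * I) + M) / R.

    E = operational energy [kWh], I = grid intensity [gCO2eq/kWh],
    M = embodied emissions [g], R = number of functional units
    (here assumed to be generated responses).
    """
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / responses

# Example: 0.00015 kWh at 400 gCO2eq/kWh, no embodied share, one response.
print(sci(0.00015, 400, 0.0, 1))  # ~0.06 g CO2eq per response
```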
How long do links work? To save storage, links stop working after 30 days. If you access a link after that, we will rerun the prompt.