MOCK — Personal Usage Dashboard for Open WebUI
Conversations: 23 (across 5 days, typical)
Total tokens: 48.2k input + output, ↑ 35% vs last week
Estimated energy: 36 Wh (≈ 0.036 kWh), ↑ 28% vs last week
Estimated CO₂: 24 g (EU average grid), ↑ 28% vs last week
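A minimal sketch of how estimates like these could be derived from token counts. The constants and function names here are illustrative assumptions, not values taken from Open WebUI; real per-token energy varies widely by model, hardware, and batch size, and grid intensity varies by region and hour.

```python
# Illustrative constants — NOT measured values.
WH_PER_1K_TOKENS = 0.75       # assumed blended energy cost per 1k tokens
GRID_G_CO2_PER_KWH = 250.0    # assumed grid carbon intensity (g CO2 / kWh)

def estimate_energy_wh(total_tokens: int) -> float:
    """Energy estimate in watt-hours for a total token count."""
    return total_tokens / 1000 * WH_PER_1K_TOKENS

def estimate_co2_g(energy_wh: float) -> float:
    """CO2 estimate in grams for an energy figure in Wh."""
    return energy_wh / 1000 * GRID_G_CO2_PER_KWH

wh = estimate_energy_wh(48_200)   # the week's token total shown above
co2 = estimate_co2_g(wh)
print(wh, co2)
```

With different assumed constants the same two formulas reproduce any dashboard's headline figures; the point is that both numbers are linear in token count.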

Daily usage

Mon: 3.2 g
Tue: 4.8 g
Wed: 2.1 g
Thu: 11.4 g
Fri: 2.7 g
Sat: —
Sun: —
⚠️ Thursday was 3.5x the average of your other days — you had a long code generation session (8 conversations, ~18k tokens).
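The callout above can be reproduced with a small check. This is a sketch assuming the "3.5x" compares each day against the average of the *other* recorded days; the function name is hypothetical.

```python
def flag_heavy_days(daily_g: dict[str, float], threshold: float = 3.0):
    """Return (day, ratio) pairs where a day's emissions exceed
    `threshold` times the average of the remaining recorded days."""
    flagged = []
    for day, grams in daily_g.items():
        others = [g for d, g in daily_g.items() if d != day]
        avg_others = sum(others) / len(others)
        if avg_others > 0 and grams / avg_others >= threshold:
            flagged.append((day, round(grams / avg_others, 1)))
    return flagged

week = {"Mon": 3.2, "Tue": 4.8, "Wed": 2.1, "Thu": 11.4, "Fri": 2.7}
print(flag_heavy_days(week))  # → [('Thu', 3.6)]
```

On the week shown above only Thursday trips the threshold, at roughly 3.6x the other days' average.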

What does 24g CO₂ look like?

≈ 1.6 kettles of boiling water

🫖 Boiling a kettle: 1.6x
🍞 Toasting bread: 0.8 slices
🚿 Hot shower: 16 seconds
📧 Emails sent: 6 emails
🚗 Driving: 200 meters
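The tiles above are all the same division by a per-unit factor. In this sketch the factors are back-derived from the figures shown (e.g. 24 g / 1.6 kettles ≈ 15 g per kettle), so treat them as this mock's assumptions rather than authoritative emission factors.

```python
# Per-unit factors back-derived from the dashboard figures (illustrative).
G_CO2_PER_UNIT = {
    "kettles boiled": 15.0,       # g per kettle
    "toast slices": 30.0,         # g per slice
    "hot shower seconds": 1.5,    # g per second
    "emails sent": 4.0,           # g per email
    "driving meters": 0.12,       # g per meter (~120 g/km)
}

def equivalences(grams: float) -> dict[str, float]:
    """Convert a CO2 estimate in grams into everyday equivalences."""
    return {name: round(grams / per_unit, 1)
            for name, per_unit in G_CO2_PER_UNIT.items()}

print(equivalences(24.0))
# → 1.6 kettles, 0.8 slices, 16 s of shower, 6 emails, 200 m of driving
```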

Usage by model

llama3.1:70b: 62%
llama3.1:8b: 24%
mistral:7b: 14%
💡 62% of your usage was on the 70B model. For simpler questions, the 8B model uses ~7x less energy.
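A token share is not an energy share: a larger model spends more energy per token. This sketch assumes the percentages above are token shares and uses the ~7x ratio the tip cites (plus an assumed figure for mistral:7b) as relative per-token costs; the function name and exact ratios are hypothetical.

```python
def energy_share(token_share: dict[str, float],
                 rel_cost: dict[str, float]) -> dict[str, float]:
    """Weight each model's token share by its relative per-token
    energy cost, then normalise back to percentages."""
    weighted = {m: token_share[m] * rel_cost[m] for m in token_share}
    total = sum(weighted.values())
    return {m: round(100 * w / total, 1) for m, w in weighted.items()}

shares = energy_share(
    {"llama3.1:70b": 62, "llama3.1:8b": 24, "mistral:7b": 14},
    {"llama3.1:70b": 7.0, "llama3.1:8b": 1.0, "mistral:7b": 0.9},
)
print(shares)
```

Under these assumptions the 70B model accounts for over 90% of the energy despite 62% of the tokens, which is why the dashboard highlights it.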

Suggestions based on your usage

📏

You often request long-form output

4 of your conversations this week generated 1000+ word responses. When bullet points will do, they use ~60% fewer tokens. Try asking "Give me the key points" instead of "Explain in detail."

🔄

Repeated similar questions on Thursday

You asked 3 variations of the same coding question. Refining one conversation is more efficient than starting fresh each time.

🎯

Good use of the smaller model

You used llama3.1:8b for quick lookups this week — that's a great fit. It used 7x less energy than the 70B model for similar results.

Choose your equivalence

Pick the comparison that makes the most sense to you. We'll use it throughout the dashboard and in session tickers.
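A session ticker in the chosen equivalence could be a one-line formatter. This is a sketch: the factor table reuses the per-unit figures implied by the tiles above, and the keys and function name are hypothetical.

```python
# (factor g CO2 per unit, display template) — illustrative values.
TICKER_FORMATS = {
    "kettle": (15.0, "🫖 ≈ {:.1f} kettles boiled"),
    "toast": (30.0, "🍞 ≈ {:.1f} slices toasted"),
    "shower": (1.5, "🚿 ≈ {:.0f} s of hot shower"),
}

def format_ticker(grams: float, choice: str) -> str:
    """Render a session's CO2 estimate in the user's chosen equivalence."""
    per_unit, template = TICKER_FORMATS[choice]
    return template.format(grams / per_unit)

print(format_ticker(24.0, "kettle"))  # → 🫖 ≈ 1.6 kettles boiled
```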