Hey everyone,
I’ve been experimenting with ChatGPT in a few different AppVMs — mostly for writing assistance, coding help, and casual research. It’s been incredibly useful, but it’s also gotten me thinking about how much tracking or cross-VM fingerprinting might actually be possible through usage patterns.
Let me explain.
Each time I use ChatGPT, I open it in a separate disposable or AppVM — depending on what I’m doing. My assumption was that since Qubes OS separates everything so strictly, OpenAI (or any web service, really) shouldn’t be able to link sessions across VMs… unless there’s some subtle browser fingerprinting or API-level behavior that links them anyway.
I’m not logged in to an OpenAI account in any of these sessions, and I’m using Tor in some cases. But in other cases, it’s just a regular Firefox-based AppVM with basic hardening. Still, I’ve noticed similar styles of follow-up responses or suggestions — which could just be coincidence, but it got me thinking:
- Could fingerprinting techniques (canvas, font metrics, extension detection, etc.) alone be enough to correlate usage across AppVMs, even without cookies or logins?
- Are there known mechanisms ChatGPT (or any AI web interface) might use that could persist across Qubes OS isolation layers?
- Would using different templates (Debian vs. Fedora) add enough fingerprint entropy between VMs to actually unlink them, given that VMs cloned from the same template present near-identical browser environments?
- Has anyone gone deeper into the network-level behavior of this kind of service?
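To make the template-entropy question concrete, here's a toy model of how a fingerprinting script collapses observable browser attributes into a single identifier. It's plain Python with made-up attribute values (real scripts hash canvas pixel data, font metrics, WebGL strings, and more), so treat it as a sketch of the correlation logic, not of any actual tracker. The point: two AppVMs cloned from the same template present identical attributes and hash to the same value, while a VM from a different template diverges.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash a set of observable browser attributes into a stable identifier.
    The attribute names and values here are illustrative placeholders."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two AppVMs cloned from the same Debian template: identical browser
# build, identical font set, identical canvas output -> identical hash.
debian_vm1 = {"user_agent": "Mozilla/5.0 ... Firefox/115.0",
              "fonts": "DejaVu Sans,Liberation Serif",
              "canvas": "a91f03c2", "screen": "1920x1080"}
debian_vm2 = dict(debian_vm1)

# A Fedora-based AppVM ships different default fonts, so at least one
# attribute differs and the resulting hash is distinct.
fedora_vm = {"user_agent": "Mozilla/5.0 ... Firefox/115.0",
             "fonts": "DejaVu Sans,Cantarell",
             "canvas": "7be240d1", "screen": "1920x1080"}

print(fingerprint(debian_vm1) == fingerprint(debian_vm2))  # True: linkable
print(fingerprint(debian_vm1) == fingerprint(fedora_vm))   # False: distinct
```

The flip side, of course, is that Qubes isolation doesn't help here at all: the server sees whatever the browser reports, and if every AppVM renders identically, every session looks like the same "device." Mixing templates changes the fingerprint but may also make each one rarer, which is why Tor Browser aims for uniformity rather than diversity.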
Not trying to spread paranoia here — just genuinely curious. With how powerful these models are getting, and how much we’re using them across different workflows, it feels like a timely question for the Qubes community.
Looking forward to hearing if anyone else has thought about this — or done more rigorous testing. Thanks in advance!