Ask HN: Do you sanitize secrets before pasting code into ChatGPT?

I use AI assistants (ChatGPT/Claude) heavily for debugging, but realized I'm constantly pasting code that contains API keys, database credentials, and customer emails.

I try to manually redact them but honestly forget half the time.

Questions:

- Is this actually a security risk?
- How do you handle this in your workflow?
- Would you use a tool that auto-sanitizes your clipboard?

Trying to figure out if this is a real problem or just me being paranoid.
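(For context, the kind of tool I have in mind is roughly this: a regex pass over the clipboard text that swaps likely secrets for labeled placeholders before pasting. A minimal sketch in Python; the pattern set here is illustrative only, and a real tool would need much broader coverage plus entropy-based detection.)

```python
import re

# Illustrative patterns only -- a real tool would need far broader coverage
# (more key formats, JWTs, private key blocks, entropy heuristics, etc.).
PATTERNS = {
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "DB_URL": re.compile(r"\b\w+://[^\s:@]+:[^\s:@]+@[^\s]+"),
}

def sanitize(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<REDACTED_{label}>", text)
    return text

# Example: sanitize the clipboard contents before pasting into a chat window.
# (Clipboard access itself would need a library like pyperclip; omitted here.)
```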

2 points | by giovanella 3 hours ago

6 comments

  • rpruiz 2 hours ago
There was a project posted here a few days ago, Vigil (https://github.com/PAndreew/vigil_vite), that provides some help in your case. Or inspiration to build something on top of it.
  • NetworkPerson 3 hours ago
    If you shared the chat at any point, it can be discovered by others. ChatGPT has also had at least one bug in the past where users were able to see the chats of others. So yeah, even if paid or over API, it’s not a good idea to trust it with sensitive information.
  • throw03172019 3 hours ago
Big risk. Especially if you have memory enabled or have not disabled the “ok to train on my data” toggle.
  • minimaxir 3 hours ago
    If you are using the free web interface, yes, it’s a security issue as inputs there are trained upon.

    APIs, less so.

  • namegulf 3 hours ago
    Rule of thumb: Do not enter anything that is proprietary into that prompt text box!

    Yes that includes credentials.

  • mring33621 1 hour ago
    yes

    i also change any 'brand names' that might reveal my employer