Artificial Insecurity: how AI tools compromise confidentiality
https://www.accessnow.org/artificial-insecurity-compromising-confidentality/

From exposing user data to facilitating hacks, from undermining information integrity to creating supply chain vulnerabilities, AI tools are underpinned, and undermined, by dodgy security practices.

Confidentiality: no unauthorized person should be able to access your information. This is especially important given that people are using LLM-based tools for everything from therapy and medical advice to companionship, while businesses, governments, and nonprofits integrate these tools into workflows that handle sensitive data.

Even tools that promise to boost security may in fact undermine it. A number of Urban Cyber Security Inc.'s virtual private network (VPN) browser extensions, which promised "AI protection" for sensitive data, were actually harvesting every prompt entered into LLMs, the responses received, timestamps, metadata, and information about the AI tools used by eight million people.

Comment: Does our new Data Protection Law come anywhere close to protecting our privacy, or civil liberties?

The open source community has made positive strides in prioritizing confidentiality. OpenSecret's MapleAI supports a multidevice, end-to-end encrypted AI chatbot, while Moxie Marlinspike, co-author of Signal's E2EE protocol, has launched "Confer," an open source AI assistant that protects all user prompts, responses, and related data. But for now at least, such rights-respecting solutions remain the exception rather than the norm.
