Australia’s science minister, Ed Husic, has become the first member of a Western government to raise privacy concerns about DeepSeek, the Chinese chatbot causing turmoil on the markets and in the tech industry.
Chinese tech firms, from Huawei to TikTok, have repeatedly been the subject of allegations that they are linked to the Chinese state, and of fears that this could lead to people’s data being harvested for intelligence purposes.
Donald Trump has said DeepSeek is a “wake-up call” for the US, but did not seem to suggest it was a threat to national security – instead saying it could even be a good thing if it brought costs down.
But Husic told ABC News on Tuesday there remained a lot of unanswered questions, including over “data and privacy management.”
“I would be very careful about that, these type of issues need to be weighed up carefully,” he added.
DeepSeek has not responded to the BBC’s request for comment – but users in the UK and US have so far shown no such caution.
DeepSeek has rocketed to the top of the app stores in both countries, with market analysts Sensor Tower saying it has seen 3 million downloads since launch.
As many as 80% of those downloads have come in the past week – meaning the app has been downloaded at three times the rate of rivals such as Perplexity.
Meanwhile, US officials have raised questions about national security, according to White House press secretary Karoline Leavitt.
“I spoke with [the National Security Council] this morning, they are looking into what [the national security implications] may be,” she said.
And the US Navy has reportedly banned its members from using DeepSeek’s apps altogether, citing “potential security and ethical concerns”, according to CNBC.
The Navy did not immediately respond to a request for comment from BBC News.
What data does DeepSeek collect?
According to DeepSeek’s own privacy policy, it collects large amounts of personal information from users, which is then stored “in secure servers” in China.
This may include:
- Your email address, phone number and date of birth, entered when creating an account
- Any user input including text and audio, as well as chat histories
- So-called “technical information” – ranging from your phone’s model and operating system to your IP address and “keystroke patterns”.
It says it uses this information to improve DeepSeek by enhancing its “safety, security and stability”.
It will then share this information with others, such as service providers, advertising partners and its corporate group, retaining it “for as long as necessary”.
“There are genuine concerns around the technological potential of DeepSeek, specifically around the terms of its privacy policy,” said ExpressVPN’s digital privacy advocate Lauren Hendry Parsons.
She specifically highlighted the part of the policy which says data can be used “to help match you and your actions outside of the service” – which she said “should immediately ring an alarm bell for anyone concerned with their privacy”.
But while the app harvests a lot of data, experts point out its policy is very similar to those users may have already agreed to for rival services like ChatGPT and Gemini, or even social media platforms.
So is it safe?
“For any openly available AI model with a web or app interface – including but not limited to DeepSeek – the prompts, or questions that are asked of the AI, then become available to the makers of that model, as are the answers,” said Emily Taylor, chief executive of Oxford Information Labs.
“So, anyone working on confidential or national security areas needs to be aware of those risks,” she told the BBC.
Dr Richard Whittle from the University of Salford said he had “various concerns about data and privacy” with the app, but added there were “plenty of concerns” with the models used in the US too.
“Consumers should always be wary, especially in the hype and fear of missing out on a new, highly popular, app,” he said.
The UK data regulator, the Information Commissioner’s Office, has urged the public to be aware of their rights around their information being used to train AI models.
Asked by BBC News if it shared the Australian government’s concerns, it said in a statement: “Generative AI developers and deployers need to make sure people have meaningful, concise and easily accessible information about the use of their personal data and have clear and effective processes for enabling people to exercise their information rights.
“We will continue to engage with stakeholders on promoting effective transparency measures, without shying away from taking action when our regulatory expectations are ignored.”