

Warning against the use of DeepSeek's products

July 11, 2025

DeepSeek represents a technological revolution - a cheap, powerful and open model that has raised concerns about security and geopolitical implications. On the one hand, it is a remarkable technical innovation; on the other, its use carries risks related to how data are stored, transmitted and filtered. This is exactly why NÚKIB issued a warning against it.


What does this mean for you as a user?

What happened?

On July 10, 2025, the National Cyber and Information Security Agency (NÚKIB) issued a warning against the use of DeepSeek products. These tools pose a high security risk - they collect data, transmit it to China and Russia, bypass encryption, and allow third-party access.

What is DeepSeek?

DeepSeek is a Chinese developer of large language models (LLMs) and AI chatbots. It offers web and mobile applications that can serve as alternatives to well-known tools (e.g. ChatGPT). However, their use is associated with unlawful data transfers, unencrypted communication, collection of personal data and links to Chinese state structures.

Why NOT to use DeepSeek?

  • Your data can be stored and misused outside the EU.

  • Personal or sensitive data (e.g. birth numbers, login details, research results) may be leaked.

  • The product is subject to Chinese law, including a possible obligation to cooperate with intelligence agencies.

  • Data may also be shared with companies such as ByteDance (TikTok) and other sanctioned entities.

How to use AI safely at MENDELU

What not to enter into any AI tool (a simple screening sketch follows this list):

  • Personal information, names, identifiers

  • Student or employee information

  • Research or thesis results

  • Login details or sensitive documents
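
Before pasting text into any AI chatbot, it is worth screening it for obviously sensitive items first. Below is a minimal illustrative sketch in Python; it is not an official MENDELU tool, and the patterns it checks (a Czech birth number, an e-mail address, a few credential keywords) are assumptions chosen only to show the idea. Passing such a check does not make a prompt safe, so manual review is still required.

    import re

    # Illustrative patterns only (an assumption, not an official MENDELU list):
    # Czech birth number (rodné číslo), e-mail addresses and obvious credential keywords.
    SENSITIVE_PATTERNS = {
        "birth number": re.compile(r"\b\d{6}/?\d{3,4}\b"),
        "e-mail address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "possible credential": re.compile(r"(?i)\b(password|heslo|api[_ ]?key|token)\b"),
    }

    def find_sensitive(text: str) -> list[str]:
        """Return the names of the sensitive patterns found in the text."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

    if __name__ == "__main__":
        prompt = "Student Jan Novák, birth number 990101/1234, e-mail jan@example.com"
        hits = find_sensitive(prompt)
        if hits:
            print("Do not send this prompt to an AI tool; it contains:", ", ".join(hits))
        else:
            print("No obvious sensitive data found; review the text manually anyway.")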

How to recognize biased AI output?

Artificial intelligence doesn't always tell the truth - its answers can be:

  • Out of date - the model may not have access to current information.

  • Made up - so-called hallucinations, where the AI invents facts or quotations.

  • Distorted - e.g. for political reasons (censorship, propaganda) or by the provider's own settings.

Ask yourself:

  • Does the AI provide a source or evidence?

  • Does the answer seem suspiciously convincing without context?

  • Does it contradict what you know from credible sources?

Never rely on AI as the only source of information. Always verify key claims from multiple independent sources.

Recommended tools:

A list of tools recommended for use at MENDELU will be published shortly on https://tech.mendelu.cz in the guidance notes.

Further information on using AI at MENDELU, presented clearly and in line with MENDELU regulations, is provided by ÚVIS on its website.

 

Learn more about DeepSeek: