The "DeepSeek Terms of Use," last updated January 20, 2025 is nine pages long, and includes other terms by reference. Where do the prompts we enter into DeepSeek reside? Have you been able to check for connections back to China, and other data-harvesting capabilities?
Once installed, does it call Chinese servers, share information, or "train the model" with my data or input? Wondering about data security and privacy here.
Deepseek-r1: "Conclusion : For ease of use, flexibility, and compatibility with existing workflows, Python's decimal or sympy libraries are recommended. For specialized number theory tasks, Maxima or Pari/GP may be more suitable. Consider the specific needs, including licensing preferences, to choose the best tool."
Me: Thanks. Good Night.
"Deepseek-r1: Thought for a few seconds
The Collatz Conjecture's behavior for extremely large numbers is an area of ongoing research and has not been fully proven. Assigning a Big-O classification to its convergence rate would depend on observed patterns in step counts relative to number size, though without concrete data or proofs, this remains speculative."
"To manage the computational demands of dealing with very large integers, Python libraries such as decimal and sympy provide arbitrary precision capabilities, which are essential for accurate calculations. These tools can help handle the complexity of operations on numbers like 58 digits in length without losing accuracy or performance."
"In summary, while the exact Big-O classification for the Collatz Conjecture's convergence rate remains unclear due to its unproven nature and the challenges posed by handling extremely large numbers, using accessible libraries and conducting computational experiments can provide insights into potential trends. Further research and experimentation are recommended to explore these aspects in depth."
Would you comment on the various other options such as 8b, 14b, 32b? The # of params to download size is linear. Is there a diminishing return on increasing the number of parameters from 7.62 billion to 14.8 billion? Does compute time slow down markedly as params increase?
model billionParams GBdownld
1.5b 1.78 1.1
7b 7.62 4.7
8b 8.03 4.9
14b 14.8 9
32b 32.8 20
70b 70.7 43
671b 671 404
Naturally, I put this question to Deepseek-r1. You can too. I also tried to address the AI as "DSR1," and discussed with "it" having a formal name, besides "<\think>" but which seems to confuse it a little. It doesn't seem to recognize that it could have a proper name by which others on the outside would know "it."
I can’t help much here because I’m not an expert in this area. I just know that many of us won’t be able to run the full version of this model because of specs requirements
The "DeepSeek Terms of Use," last updated January 20, 2025 is nine pages long, and includes other terms by reference. Where do the prompts we enter into DeepSeek reside? Have you been able to check for connections back to China, and other data-harvesting capabilities?
If you install an LLM locally as shown in this tutorial, the data stays on your machine (unlike chat.deepseek.com, which is hosted in China).
Thanks for that rapid response. It WAS "chat.deepseek.com" that I referred to.
Great!
Once installed, does it call Chinese servers, share information, or "train the model" with my data or input? Wondering about data security and privacy here.
Ollama runs locally, and data does not leave your machine. When we open the browser interface with Docker, it is served on localhost. You can even work with it offline.
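One way to convince yourself of that is to talk to the model over the loopback interface only. Here is a minimal sketch, assuming Ollama's default port 11434 and a deepseek-r1 tag you have already pulled (the tag name below is just an example):

```python
# Query the local Ollama server over localhost only; no remote host involved.
# Assumes Ollama's default port (11434) and a locally pulled deepseek-r1 tag.
import json
import urllib.request

payload = json.dumps({
    "model": "deepseek-r1:7b",          # any tag you have pulled locally
    "prompt": "Say hello in one sentence.",
    "stream": False,                    # return a single JSON object
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",   # loopback address only
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

If you want extra reassurance, run it with Wi-Fi off or the cable unplugged, or watch open connections with a tool such as netstat while it generates.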
Ahhh AI. Just for saying 'good night.'
Deepseek-r1: "Conclusion : For ease of use, flexibility, and compatibility with existing workflows, Python's decimal or sympy libraries are recommended. For specialized number theory tasks, Maxima or Pari/GP may be more suitable. Consider the specific needs, including licensing preferences, to choose the best tool."
Me: Thanks. Good Night.
"Deepseek-r1: Thought for a few seconds
The Collatz Conjecture's behavior for extremely large numbers is an area of ongoing research and has not been fully proven. Assigning a Big-O classification to its convergence rate would depend on observed patterns in step counts relative to number size, though without concrete data or proofs, this remains speculative."
"To manage the computational demands of dealing with very large integers, Python libraries such as decimal and sympy provide arbitrary precision capabilities, which are essential for accurate calculations. These tools can help handle the complexity of operations on numbers like 58 digits in length without losing accuracy or performance."
"In summary, while the exact Big-O classification for the Collatz Conjecture's convergence rate remains unclear due to its unproven nature and the challenges posed by handling extremely large numbers, using accessible libraries and conducting computational experiments can provide insights into potential trends. Further research and experimentation are recommended to explore these aspects in depth."
Would you comment on the various other options, such as 8b, 14b, and 32b? The ratio of parameter count to download size looks linear. Is there a diminishing return in going from 7.62 billion to 14.8 billion parameters? Does compute time slow down markedly as the parameter count increases?
model   params (billions)   download (GB)
1.5b 1.78 1.1
7b 7.62 4.7
8b 8.03 4.9
14b 14.8 9
32b 32.8 20
70b 70.7 43
671b 671 404
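A quick back-of-the-envelope check on the table above shows why the relationship looks linear. This is a hedged sketch, not anything from the tutorial, and it assumes 1 GB means 10^9 bytes:

```python
# Download size per billion parameters, using the figures from the table above.
sizes = {
    "1.5b": (1.78, 1.1),
    "7b":   (7.62, 4.7),
    "8b":   (8.03, 4.9),
    "14b":  (14.8, 9.0),
    "32b":  (32.8, 20.0),
    "70b":  (70.7, 43.0),
    "671b": (671.0, 404.0),
}

for tag, (params_billion, gigabytes) in sizes.items():
    ratio = gigabytes / params_billion          # GB per billion parameters
    print(f"{tag:>5}: {ratio:.2f} GB per billion params "
          f"(~{8 * ratio:.1f} bits per weight)")
```

Every tag comes out near 0.6 GB per billion parameters, roughly 4-5 bits per weight, which is what you would expect if all the tags use a similar quantization, hence the linear download sizes. Per-token compute also grows roughly in proportion to parameter count for the smaller (distilled) tags, so 14b should be in the neighborhood of twice as slow as 7b on the same hardware, and in practice memory bandwidth is usually the limiting factor for local inference; whether the quality gain justifies that is the diminishing-returns question.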
Naturally, I put this question to Deepseek-r1. You can too. I also tried addressing the AI as "DSR1," and discussed with "it" the idea of having a formal name beyond the "<think>" tag, which seems to confuse it a little. It doesn't seem to recognize that it could have a proper name by which others on the outside would know "it."
I can’t help much here because I’m not an expert in this area. I just know that many of us won’t be able to run the full version of this model because of its hardware requirements.