New opportunities in the IoT market through the use of AI with Deepseek and NVIDIA
Deepseek versus ChatGPT

AI is revolutionising process optimisation in IoT companies. Antennity, a leading IoT company, relies on innovative AI solutions to increase efficiency. Recent developments in Large Language Models (LLMs) are opening up new possibilities for local AI implementations in the office.

1.1 Market shake-up by Deepseek

The Chinese company Deepseek is causing a stir in the stock markets. With its open-source LLM solution, Deepseek offers a free alternative to ChatGPT and underscores China's quest for technological independence. This development could permanently change the global AI market and open up new opportunities for fast-growing companies like Antennity.

1.2 Local AI solutions on the rise

NVIDIA's Project DIGITS, a compact AI supercomputer for only $3,000, enables developers to work with large AI models locally. This technology perfectly fits Antennity's strategy of deploying local LLM/AI solutions for process optimisation. The trend towards local AI solutions offers increased data security and independence from cloud services.

1.3 Antennity's AI-supported process optimisation

Antennity uses AI technologies to automate and improve business processes. From customer care to order processing, AI-supported systems increase the company's efficiency and responsiveness. By using LLMs such as Deepseek, Antennity can optimise complex tasks such as data analysis, forecasting and decision-making.

1.4 AIoT: The future of smart connectivity

The combination of AI and IoT (AIoT) opens up new dimensions for smart, connected solutions. Antennity uses this synergy to develop innovative products and services that push the boundaries of what is possible in the IoT sector. From smart sensor networks to self-optimising industrial plants, AIoT is revolutionising entire industries.

1.5 Practical Applications of AI in the IoT

  • Predictive maintenance: AI algorithms analyse sensor data to predict failures and optimise maintenance.
  • Energy management: Intelligent systems control energy consumption in real-time and maximise efficiency.
  • Supply chain optimisation: AI-based forecasting improves warehousing and logistics.
  • Personalised customer experiences: AIoT enables tailored services based on user behaviour and preferences.
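As an illustration of the predictive-maintenance bullet above, here is a minimal sketch (hypothetical sensor values and thresholds, not Antennity production code) that flags anomalous vibration readings using a rolling mean and standard deviation:

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, k=3.0):
    """Flag indices whose value deviates more than k standard
    deviations from the rolling statistics of the previous
    `window` samples."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            anomalies.append(i)
    return anomalies

# Hypothetical vibration-sensor trace: stable around 1.0, one spike at index 8
trace = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 1.03, 0.97, 5.0, 1.01]
print(detect_anomalies(trace))  # the spike at index 8 is flagged
```

In a real deployment the simple threshold would typically be replaced by a trained model, but the pattern is the same: the device streams sensor data, and a local algorithm decides when maintenance is due before a failure occurs.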

1.6 Challenges and Solutions

The integration of AI into IoT systems also brings challenges, such as data protection, security and scalability. Antennity addresses these issues by:

  • implementing robust security protocols
  • developing scalable architectures for growing data volumes
  • focusing on privacy by design and local data processing

1.7 Your opportunity for process optimisation

Use Antennity's expertise to take your IoT projects to the next level. We offer tailored consulting on AI-supported process optimisation for your company. Our experts analyse your existing processes and develop individual solutions that exploit the full potential of AI and IoT.

The rapid development in the field of AI hardware opens up possibilities for text optimisation using Large Language Models (LLMs), even with limited resources such as a 3D gaming laptop with 8 GB VRAM.


AI with Antennity

1.8 Performance of 8 GB VRAM with DeepSeek

A 3D gaming laptop with 8 GB of VRAM on the GPU is sufficient to optimise texts with an LLM:

  • Advances in model compression make it possible to run large models on limited memory. AirLLM, for example, can run 70B-parameter models on only 4 GB of GPU memory.
  • For most text optimisation tasks, smaller models with 7B parameters are often sufficient and run smoothly on 8 GB VRAM.
  • Techniques such as quantisation and efficient memory management further improve performance.
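The VRAM arithmetic behind these bullets can be checked in a few lines of Python. This is a rough back-of-the-envelope estimate covering the model weights only, ignoring the KV cache and runtime overhead:

```python
def weight_footprint_gb(params_billion, bits_per_weight):
    """Approximate memory needed for the model weights alone,
    in decimal gigabytes."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for label, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"7B @ {label}: {weight_footprint_gb(7, bits):.1f} GB")

# 7B @ FP16: 14.0 GB  -- does not fit in 8 GB VRAM
# 7B @ Q8:    7.0 GB  -- a tight fit
# 7B @ Q4:    3.5 GB  -- comfortable, leaves room for context
```

This is why quantised 7B models at Q4 to Q6 run smoothly in 8 GB of VRAM, while the 70B case mentioned above relies on AirLLM loading layers one at a time rather than keeping all weights resident.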

1.9 NVIDIA's Project DIGITS: A quantum leap

NVIDIA's new ‘Project DIGITS’ product is revolutionising local AI processing:

  • For $3,000, it offers a personal AI supercomputer with 1 petaflop of computing power.
  • It can run models with up to 200 billion parameters.
  • With 128 GB of unified memory and up to 4 TB of NVMe storage, it offers enormous capacity.
  • It provides a local platform for open-source models such as DeepSeek.

1.10 Benefits for companies and developers

The availability of such powerful and affordable hardware has far-reaching implications:

  • Multiple workstations can benefit from a single DIGITS system, reducing the cost per user.
  • Complex prompts and demanding AI tasks can be processed quickly locally without relying on cloud services.
  • The ability to run large models locally improves data protection and control over sensitive information.
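As a concrete sketch of cloud-free local inference, runtimes such as Ollama expose a REST API on the workstation itself (by default at http://localhost:11434). The model tag and prompt below are placeholders, not a prescribed setup:

```python
import json
from urllib import request

def build_generate_request(model, prompt, host="http://localhost:11434"):
    """Build an HTTP request for Ollama's /api/generate endpoint.
    Nothing leaves the machine: the endpoint is served locally."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return request.Request(
        url=f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("deepseek-r1:7b", "Summarise this sensor log ...")
print(req.full_url)  # http://localhost:11434/api/generate

# To actually send it (requires a running local Ollama server):
#   with request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

Because the endpoint is bound to localhost, prompts and responses never traverse a cloud provider, which is the data-protection benefit described above.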

1.11 Outlook

The fusion of AI and IoT will continue to gain momentum in the coming years. Antennity is positioning itself at the forefront of this dynamic market and is continuously investing in research and development. Keep your finger on the pulse with us and actively shape the future of the intelligent IoT. Contact us today at harald.naumann (at) antennity.com and discover how AI can revolutionise your business processes. We can start with a 3D gaming laptop and then scale up to NVIDIA's DIGITS.

Let us explore the possibilities of AI-powered IoT solutions for your business together and develop tailored strategies that will sustainably increase your business success.

Imprint

Harald Naumann · Ludwig-Kaufholz-Weg 4 · 31535 Neustadt · Germany · Fon +49-5032-8019985 · Fax +49-5032-8019986 · Cell +49-152-33877687

Email harald.naumann (at) antennity.com · SKYPE h.naumann · www.antennity.com



Harald Naumann

As the winner of the 5G NTN Antenna Award, I am happy to inform you about my 0 USD antenna concept and more – contact me!

David, thanks for the like. For local LLMs with 8 GB of GPU RAM that are well suited for writing Python code and drafting email texts, the following options are available:

  • Llama 3.1 8B: This model offers a good balance between performance and resource consumption. It can be configured with a context length of at least 8192 tokens, which is sufficient for code generation and email texts.
  • 7B GGUF models: These fit well in 8 GB of VRAM and offer fast response times at Q6 quantisation or lower. They are well suited for code generation and word processing.
  • Phi-2: With only 2.7 billion parameters, this Microsoft model is significantly smaller than 7B models but is considered very capable. It should run smoothly on 8 GB of VRAM and is well suited for coding tasks.
  • TinyLlama-1.1B-Chat: This very small model is ideal for initial experiments and should run smoothly on 8 GB of VRAM. It can be used for simple coding tasks and email texts.
  • DeepSeek: We will see.

Runtimes such as Ollama or llama-cpp-python allow easy setup and use of local LLMs. For Python code generation, choose models that have been trained specifically for programming tasks.

Danny E. Y.

SW, HW Development, Data Analytics, AI Applications, Program/Project Management, Engineering Management, New Product Introduction, Systems Engineering, Design Thinking

Open source means there is so much you can do, endless opportunities for developers
