神経グループ, Inc. announced that it has deployed "NEURAL.LLM," its proprietary commercial 32-billion-parameter large language model first announced in June 2023, in a secure on-premises environment.
Business use of generative AI and LLMs has accelerated in recent years, but sending confidential or personal information to cloud-based generative AI and LLM services such as ChatGPT remains a high hurdle in terms of information-leakage risk and compliance, forcing many companies to limit the scope of their LLM use. "NEURAL.LLM" makes it possible to run an LLM entirely within a company's closed network, and the company expects it to be adopted as a "secure LLM" that can be operated safely.
The company has also focused on edge AI research and development, honing its ability to build AI that achieves high accuracy even with limited computing resources. Applying this approach, it will build an environment, based on a small-scale LLM, that delivers dialogue and complex analysis at a reasonable compute cost. This is expected to significantly reduce server investment on the user side and to contribute to safe, low-cost commercial use of LLMs.
In addition to providing "NEURAL.LLM" to client companies, the company will also work to apply it to the secure AI agent it is developing in-house.
Source: PR Times
