Companion resources for 《大模型项目实战:多领域智能应用开发》 (Large Model Projects in Practice: Multi-Domain Intelligent Application Development)
A curated collection of open-source Large Language Model (LLM) projects that are production-ready and can be used for solving real-world problems. This repository focuses on high-performance, scalable LLM solutions across various industries and applications.
5-Day Hands-on AI Agents Course using Google ADK & Vertex AI | From first agent to production deployment
🧠 A comprehensive toolkit for benchmarking, optimizing, and deploying local Large Language Models. Includes performance testing tools, optimized configurations for CPU/GPU/hybrid setups, and detailed guides to maximize LLM performance on your hardware.
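The toolkit's own benchmarking tools are not reproduced here, but the core measurement it automates is simple: time a generation call and divide output tokens by elapsed seconds. A minimal, backend-agnostic sketch follows; the `generate` callable and the whitespace token count are placeholders for whatever local runtime and tokenizer you actually benchmark.

```python
import time
from typing import Callable

def benchmark_generation(generate: Callable[[str], str], prompt: str, runs: int = 5) -> dict:
    """Time a text-generation callable and report rough throughput.

    `generate` is any function that takes a prompt and returns text
    (e.g. a llama.cpp, vLLM, or transformers wrapper). Token counting
    here is a whitespace approximation; use the backend's tokenizer
    for precise numbers.
    """
    latencies, token_counts = [], []
    for _ in range(runs):
        start = time.perf_counter()
        output = generate(prompt)
        latencies.append(time.perf_counter() - start)
        token_counts.append(len(output.split()))
    total_time = sum(latencies)
    return {
        "avg_latency_s": total_time / runs,
        "approx_tokens_per_s": sum(token_counts) / total_time,
    }

# Example with a dummy backend; replace the lambda with a real model call.
print(benchmark_generation(lambda p: "word " * 100, "Explain KV caching."))
```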
A Flask-based API for efficiently deploying Large Language Model (LLM) applications.
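As a rough illustration of the pattern that entry describes (not the repository's actual code), a Flask service exposing a single generation endpoint might look like the sketch below; the `generate_text` stub stands in for a real model call.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def generate_text(prompt: str) -> str:
    # Placeholder: swap in a real model call (e.g. a transformers pipeline).
    return f"Echo: {prompt}"

@app.route("/generate", methods=["POST"])
def generate():
    data = request.get_json(force=True)
    prompt = data.get("prompt", "")
    if not prompt:
        return jsonify({"error": "prompt is required"}), 400
    return jsonify({"completion": generate_text(prompt)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```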
Production-ready checklists and frameworks for deploying LLMs, GenAI models, and AI infrastructure. Covers vLLM, Kubernetes, GPU optimization, observability, compliance, and Day-0 to Day-2 operations.
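One component those checklists cover, vLLM, has a simple offline-inference Python API. A minimal sketch using its documented `LLM` and `SamplingParams` entry points is shown below; the model name is an arbitrary small example and a suitable GPU is assumed.

```python
from vllm import LLM, SamplingParams

# Any Hugging Face model ID works here; this small checkpoint is just an example.
llm = LLM(model="facebook/opt-125m")
sampling = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(
    ["What should a Day-0 deployment checklist include?"], sampling
)
for out in outputs:
    print(out.outputs[0].text)
```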
LLM / multi-model deployment on AWS EKS with IRSA, persistent volumes, Istio, and KServe, plus a Next.js app, a FastAPI serving layer, Kubernetes manifests, and Helm charts. The School of AI EMLO-V4 course assignment: https://theschoolof.ai/#programs
An LLM app for summarizing Terms and Conditions agreements available on the internet.
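The repository's own pipeline is not shown here; as a sketch, summarizing an agreement can be done with the Hugging Face `transformers` summarization pipeline. The model choice below is an assumption, and a real Terms and Conditions document would need chunking to fit the model's context window.

```python
from transformers import pipeline

# Model choice is illustrative; any summarization checkpoint can be substituted.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Toy excerpt standing in for a full Terms and Conditions document.
terms_text = (
    "By using this service you agree that the provider may collect usage data, "
    "share it with third parties for analytics, and terminate accounts at any time."
)
result = summarizer(terms_text, max_length=60, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```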
Python code generation from LaTeX expressions, using a synthetic dataset and the CodeT5-base model.
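As a sketch of the inference side only (the synthetic dataset and fine-tuning step are not reproduced, and the untuned base checkpoint will not produce meaningful translations), loading CodeT5-base with `transformers` looks like this:

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

# CodeT5-base ships with a RoBERTa-style tokenizer on the Hugging Face Hub.
tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# A checkpoint fine-tuned on LaTeX -> Python pairs would map this to code.
latex_expr = r"\frac{a + b}{2}"
inputs = tokenizer(latex_expr, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```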
ModelSpec is an open, declarative specification for describing how AI models, especially LLMs, are deployed, served, and operated in production. It captures execution, serving, and orchestration intent to enable validation, reasoning, and automation across modern AI infrastructure.
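The actual ModelSpec schema is not included in this listing. Purely as a hypothetical illustration of the kind of declarative intent such a specification captures (every field name below is invented for this sketch, expressed as a Python dictionary):

```python
# Hypothetical sketch only: these keys are NOT the real ModelSpec schema,
# just an illustration of declarative deployment intent for an LLM.
example_spec = {
    "model": {"name": "example-chat-model", "revision": "v1"},
    "serving": {
        "runtime": "vllm",           # which inference engine serves the model
        "replicas": 2,               # horizontal scaling intent
        "resources": {"gpu": 1, "gpu_memory_gb": 24},
    },
    "orchestration": {
        "autoscaling": {"min_replicas": 1, "max_replicas": 4},
        "rollout": "canary",         # how new revisions reach production
    },
}

def validate(spec: dict) -> None:
    """Toy validation: the sort of automated check a declarative spec enables."""
    assert spec["serving"]["replicas"] >= spec["orchestration"]["autoscaling"]["min_replicas"]

validate(example_spec)
print("spec is internally consistent")
```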