This post exists because I came across two excellent RAG resource-collection projects on GitHub, linked below; I recommend opening and reading them directly:
- Project 1: https://github.com/lizhe2004/Awesome-LLM-RAG-Application
- Project 2: https://github.com/jxzhangjhu/Awesome-LLM-RAG
Surveys
- Paper: Retrieval-Augmented Generation for Large Language Models: A Survey[1]
- Chinese translation[2]
- GitHub repo[3]
- Advanced RAG Techniques: an Illustrated Overview[4]
- Chinese translation[5]
- Guide and summary for building advanced RAG applications[6]
- Patterns for Building LLM-based Systems & Products[7]
- Chinese translation[8]
- RAG compendium[9]
- Chinese translation[10]
Introductions
- Microsoft: Retrieval Augmented Generation (RAG) in Azure AI Search[11]
- Chinese translation[12]
- Azure OpenAI design patterns: RAG[13]
- IBM: What is retrieval-augmented generation[14]
- Chinese translation[15]
- Amazon: Retrieval Augmented Generation (RAG)[16]
- Nvidia: What Is Retrieval-Augmented Generation?[17]
- Chinese translation[18]
- Meta: Retrieval Augmented Generation: Streamlining the creation of intelligent natural language processing models[19]
- Chinese translation[20]
- Cohere: Introducing Chat with Retrieval-Augmented Generation (RAG)[21]
- Pinecone: Retrieval Augmented Generation[22]
- Milvus: Build AI Apps with Retrieval Augmented Generation (RAG)[23]
- Knowledge Retrieval Takes Center Stage[24]
- Chinese translation[25]
- Disadvantages of RAG[26]
- Chinese translation[27]
Comparisons
- Retrieval-Augmented Generation (RAG) or Fine-tuning — Which Is the Best Tool to Boost Your LLM Application?[28]
- Chinese translation[29]
- Prompt engineering vs. RAG vs. fine-tuning[30]
- RAG vs Finetuning — Which Is the Best Tool to Boost Your LLM Application?[31]
- Chinese translation[32]
- A Survey on In-context Learning[33]
Reference applications
- Kimi Chat[34]
  - Answers questions over web links you send and files you upload.
- GPTs[35]
  - Supports uploading documents for RAG-style use.
- Baichuan knowledge base[36]
  - 1. Create a knowledge base and obtain its knowledge-base ID.
  - 2. Upload a file and obtain its file ID.
  - 3. Associate the file with the knowledge base via the two IDs; a knowledge base can be linked to multiple documents.
  - 4. When calling the chat API, pass the list of knowledge-base IDs in the `knowledge_base` field; the model answers using the retrieved knowledge.
- COZE[37]
  - A bot-building platform for next-generation AI chatbots. With or without programming experience, it lets you quickly create chatbots of all kinds and deploy them across social platforms and messaging apps.
- Devv-ai[38]
  - A new-generation AI search engine for programmers, built on the RAG application pattern with a fine-tuned LLM underneath.
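The four Baichuan knowledge-base steps above end with attaching knowledge-base IDs to a chat call. A minimal sketch of building such a request body follows; only the `knowledge_base` ID-list idea comes from the text, and every other field name here is an illustrative placeholder, not the actual Baichuan API schema.

```python
import json

def build_chat_request(question, kb_ids):
    """Build a chat request that attaches knowledge bases (step 4 above).

    NOTE: only the `knowledge_base` ID-list idea comes from the workflow
    described in the text; the surrounding field names are illustrative
    placeholders, not the real Baichuan API schema.
    """
    return {
        "messages": [{"role": "user", "content": question}],
        "knowledge_base": {"ids": kb_ids},  # list of knowledge-base IDs
    }

payload = build_chat_request("What is our refund policy?", ["kb-123"])
print(json.dumps(payload, ensure_ascii=False))
```

The point of the design is that retrieval happens server-side: the caller only names which knowledge bases to search.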
Open-source tools
RAG frameworks
- LangChain[39]
- langchain4j[40]
- LlamaIndex[41]
- GPT-RAG[42]
  - GPT-RAG provides a robust architecture tailored for enterprise-grade deployment of the RAG pattern. It delivers grounded responses and is built on zero-trust security and responsible AI, ensuring availability, scalability, and auditability. Well suited to organizations moving from exploration and PoC toward full production and MVP.
- QAnything[43]
  - A local knowledge-base question-answering system that supports files and databases in any format and can be installed and used offline. Drop in local files of any format to get accurate, fast, reliable answers. Currently supported formats: PDF, Word (doc/docx), PPT, Markdown, Eml, TXT, images (jpg, png, etc.), and web links.
- Quivr[44]
  - Your second brain: a personal assistant powered by generative AI.
- Quivr app[45]
- Dify[46]
  - Combines the ideas of Backend-as-a-Service and LLMOps, covering the core tech stack for building generative-AI-native applications, including a built-in RAG engine. With Dify you can self-deploy capabilities similar to the Assistants API and GPTs on top of any model.
- Verba[47]
  - An open-source RAG application from the vector-database company Weaviate, offering an end-to-end, streamlined, user-friendly interface for out-of-the-box retrieval-augmented generation. In a few simple steps you can explore datasets and extract insights, either locally or through LLM providers such as OpenAI, Cohere, and HuggingFace.
- danswer[48]
  - Lets you ask natural-language questions over internal documents and get reliable answers backed by quotes and references from the source material, so you can always trust the results. It connects to many common tools such as Slack, GitHub, and Confluence.
Preprocessing
- Unstructured[49]
  - Open-source components for ingesting and preprocessing images and text documents such as PDF, HTML, and Word. unstructured aims to streamline and optimize LLM data-processing workflows; its modular functions and connectors form a cohesive system that simplifies ingestion and preprocessing, adapts to different platforms, and efficiently turns unstructured data into structured output.
Routing
- semantic-router[50]
Evaluation frameworks
- ragas[51]
  - Ragas is a framework for evaluating RAG applications, covering metrics such as faithfulness, answer relevance, context precision, context relevancy, and context recall.
- tonic_validate[52]
  - A platform for RAG development and experiment tracking, with metrics for evaluating the response quality of RAG applications.
- deepeval[53]
  - A simple, easy-to-use open-source evaluation framework for LLM applications. It is similar to Pytest but specialized for unit-testing LLM applications. DeepEval evaluates performance on metrics such as hallucination, answer relevancy, and RAGAS, using LLMs and various other NLP models that run locally on your machine.
- trulens[54]
  - TruLens provides a set of tools for developing and monitoring neural networks, including large language models: TruLens-Eval for evaluating LLMs and LLM-based applications, and TruLens-Explain for deep-learning explainability. The two live in separate packages and can be used independently.
- langchain-evaluation[55]
- Llamaindex-evaluation[56]
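To make metrics like context precision and context recall concrete, here is a deliberately crude, non-LLM sketch: the real Ragas metrics use an LLM judge, whereas these toy versions use plain substring matching, so they only illustrate what the ratios measure.

```python
def toy_context_recall(ground_truth_facts, contexts):
    """Crude stand-in for context recall: the fraction of ground-truth
    facts found verbatim in any retrieved context. The real Ragas metric
    uses an LLM judge, not substring matching."""
    hits = sum(
        any(fact.lower() in c.lower() for c in contexts)
        for fact in ground_truth_facts
    )
    return hits / len(ground_truth_facts)

def toy_context_precision(contexts, relevant_marker):
    """Crude stand-in for context precision: the fraction of retrieved
    chunks that mention the relevant topic."""
    flags = [relevant_marker.lower() in c.lower() for c in contexts]
    return sum(flags) / len(contexts)

contexts = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Bananas are yellow.",
]
recall = toy_context_recall(["capital of France"], contexts)  # 1.0
precision = toy_context_precision(contexts, "paris")          # 2/3
```

Both metrics are ratios over the retrieved contexts; the frameworks above differ mainly in how the per-chunk relevance judgment is made.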
Embedding
- BCEmbedding[57]
  - A bilingual and cross-lingual semantic-representation model library from NetEase Youdao, containing two kinds of base models: embedding models and reranker models. The embedding model generates semantic vectors and plays a key role in semantic search and question answering, while the reranker model specializes in refining semantic-search results and fine-grained relevance ranking.
- BGE-Embedding[58]
  - A general-purpose embedding model open-sourced by the Beijing Academy of Artificial Intelligence (BAAI), pretrained with RetroMAE and then trained with contrastive learning on large-scale paired data.
- bge-reranker-large[59]
  - Also open-sourced by BAAI. A cross-encoder scores the query and answer jointly at query time, which is more accurate than a vector model (a bi-encoder) but slower; it is therefore typically used to rerank the top-k documents returned by an embedding model.
- gte-base-zh[60]
  - GTE Chinese general-purpose text-embedding model, provided by Alibaba's Tongyi Lab.
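The bi-encoder-then-cross-encoder pipeline described under bge-reranker-large can be sketched end to end. This is a toy: `embed` is a bag-of-words stand-in for a real embedding model and `cross_score` is token overlap rather than a real reranker, but the two-stage structure (cheap retrieval over everything, expensive scoring over only the top-k) is the actual pattern.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words 'bi-encoder' embedding (a stand-in for a real
    embedding model such as BGE)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cross_score(query, doc):
    """Toy 'cross-encoder': scores query and document jointly (here just
    token-set overlap; a real reranker runs a model over the pair)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d)

docs = [
    "the cat sat on the mat",
    "dogs chase cats in the yard",
    "stock prices fell sharply today",
]
query = "cat on the mat"

# Stage 1: cheap bi-encoder retrieval selects top-k candidates.
qv = embed(query)
topk = sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)[:2]
# Stage 2: slower pairwise reranking runs only on those k candidates.
reranked = sorted(topk, key=lambda d: cross_score(query, d), reverse=True)
```

The design trade-off is exactly the one the text states: the joint scorer is more accurate but too slow to run over the whole corpus, so it only sees the candidates the fast model surfaced.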
Prompting
- YiVal[61]
  - An automatic prompt-engineering assistant for GenAI applications. YiVal is a state-of-the-art tool designed to streamline the tuning of prompts and of any configuration in the loop. With YiVal, manual tweaking becomes a thing of the past: its data-driven, evaluation-centric approach delivers optimal prompts, precise RAG configurations, and fine-tuned model parameters, letting your application achieve better results with lower latency and minimized inference cost.
SQL augmentation
- vanna[62]
  - Vanna is an MIT-licensed open-source Python RAG framework for SQL generation and related functionality.
  - Vanna works in two simple steps: train a RAG "model" on your data, then ask questions that return SQL queries. The training data is mostly DDL schemas, business documentation, and example SQL; "training" here mainly means embedding that data for vector retrieval.
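The two-step "train, then ask" flow above can be sketched without Vanna itself. This is not Vanna's API, just an illustration of the mechanism the text describes: "training" indexes schema and documentation snippets for retrieval, and "asking" retrieves the nearest snippets to build the SQL-generation prompt (the LLM call is omitted, and the bag-of-words similarity stands in for real embeddings).

```python
from collections import Counter

def embed(text):
    """Toy bag-of-words vector (a real setup would use an embedding model)."""
    return Counter(text.lower().replace(",", " ").split())

def overlap(a, b):
    """Toy similarity: count of shared tokens."""
    return sum(min(a[t], b[t]) for t in a)

# "Training" in the Vanna sense: index DDL, business docs, and example SQL
# so they can be retrieved later.
store = [
    "CREATE TABLE orders (id INT, customer_id INT, amount DECIMAL, created_at DATE)",
    "CREATE TABLE customers (id INT, name TEXT, region TEXT)",
    "Business note: revenue means SUM(amount) over orders",
]
index = [(doc, embed(doc)) for doc in store]

def ask(question, k=2):
    """Retrieve the k most relevant snippets and build the SQL-generation
    prompt; the actual LLM call is omitted."""
    qv = embed(question)
    context = [d for d, v in sorted(index, key=lambda p: overlap(qv, p[1]),
                                    reverse=True)[:k]]
    return "Given:\n" + "\n".join(context) + f"\nWrite SQL for: {question}"

prompt = ask("total revenue per customer from orders")
```

Note how the question about revenue pulls in both the `orders` DDL and the business note defining revenue, while the irrelevant `customers` table is left out of the prompt.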
LLM deployment and serving
- vllm
- OpenLLM[63]
Observability
- LlamaIndex observability[64]
- langfuse[65]
- phoenix[66]
- openllmetry[67]
- lunary[68]
Others
- RAGxplorer[69]
  - RAGxplorer is an interactive Streamlit tool that supports building RAG applications by visualizing document chunks and queries in embedding space.
Papers
- Retrieval Augmented Generation: Streamlining the creation of intelligent natural language processing models[70]
- Lost in the Middle: How Language Models Use Long Contexts[71]
- Seven Failure Points When Engineering a Retrieval Augmented Generation System[72]
- Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agents[73]
- RankGPT Reranker Demonstration (Van Gogh Wiki)[74]
- Bridging the Preference Gap between Retrievers and LLMs[75]
- Tuning Language Models by Proxy[76]
- Zero-Shot Listwise Document Reranking with a Large Language Model[77]
  - This paper presents two reranking methods: pointwise reranking and listwise reranking.
  - Pointwise: given a list of documents, the query plus each individual document is fed to the LLM, which is asked to produce a relevance score.
  - Listwise: given a list of documents, the query plus the entire document list is fed to the LLM at once, which is asked to reorder the documents by relevance.
  - For documents retrieved by RAG, listwise reranking is recommended; it outperforms pointwise reranking.
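The listwise scheme described above amounts to a prompt that enumerates all candidates plus a parser for the returned ordering. The sketch below mirrors that RankGPT-style setup; the exact prompt wording is illustrative, and the "LLM response" here is mocked.

```python
import re

def build_listwise_prompt(query, docs):
    """Listwise reranking: the LLM sees the query plus ALL candidate
    passages at once and returns an ordering (pointwise would instead
    score one (query, doc) pair per call)."""
    lines = [f"[{i + 1}] {d}" for i, d in enumerate(docs)]
    return (
        f"Query: {query}\n"
        + "\n".join(lines)
        + "\nRank the passages by relevance, most relevant first, "
          "as identifiers like [2] > [1] > [3]."
    )

def parse_ranking(response, docs):
    """Map an LLM answer such as '[2] > [1] > [3]' back to documents."""
    order = [int(m) - 1 for m in re.findall(r"\[(\d+)\]", response)]
    return [docs[i] for i in order if 0 <= i < len(docs)]

docs = ["intro to cooking", "van Gogh's later paintings", "van Gogh biography"]
ranked = parse_ranking("[3] > [2] > [1]", docs)  # a mocked LLM response
```

One reason listwise tends to win is visible in the prompt itself: the model can compare candidates against each other instead of scoring each one in isolation.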
RAG construction strategies
Preprocessing
- From Good to Great: How Pre-processing Documents Supercharges AI’s Output[78]
- Chinese translation[79]
- 5 Levels Of Text Splitting[80]
- Semantic Chunker[81]
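As a concrete baseline for the splitting resources above, here is the simplest splitting strategy: fixed-size character windows with overlap (the higher "levels" in the notebook split on document structure or semantics instead). The chunk sizes are arbitrary illustration values.

```python
def split_fixed(text, chunk_size=40, overlap=10):
    """Fixed-size character splitting with overlap, the simplest text
    splitting strategy; overlap keeps sentences that straddle a chunk
    boundary partially visible in both neighboring chunks."""
    chunks, i = [], 0
    step = chunk_size - overlap
    while i < len(text):
        chunks.append(text[i:i + chunk_size])
        if i + chunk_size >= len(text):
            break
        i += step
    return chunks

text = "".join(chr(97 + i % 26) for i in range(100))
chunks = split_fixed(text)
```

The overlap is the knob worth noting: with it, dropping the first `overlap` characters of every chunk after the first reconstructs the original text exactly.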
Retrieval
- Foundations of Vector Retrieval[82]
  - This 200-plus-page monograph summarizes the main algorithmic milestones in the vector-retrieval literature, intended as a self-contained reference for new and established researchers alike.
- Query Transformations[83]
- Chinese translation[84]
- Query Construction[85]
- Chinese translation[86]
- Improving Retrieval Performance in RAG Pipelines with Hybrid Search[87]
- Chinese translation[88]
- Multi-Vector Retriever for RAG on tables, text, and images[89]
- Chinese translation[90]
- Relevance and ranking in vector search[91]
- Chinese translation[92]
- Boosting RAG: Picking the Best Embedding & Reranker models[93]
- Chinese translation[94]
- Azure Cognitive Search: Outperforming vector search with hybrid retrieval and ranking capabilities[95]
- Chinese translation[96]
- Optimizing Retrieval Augmentation with Dynamic Top-K Tuning for Efficient Question Answering[97]
- Chinese translation[98]
- Building Production-Ready LLM Apps with LlamaIndex: Document Metadata for Higher Accuracy Retrieval[99]
- Chinese translation[100]
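Several of the hybrid-search links above combine keyword and vector results; a common fusion step for merging the two ranked lists is reciprocal rank fusion (RRF), which is for instance what Azure AI Search's hybrid ranking uses. A minimal sketch:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked result lists (e.g. keyword/BM25 results and
    vector-search results): each document scores sum(1 / (k + rank)),
    so agreement between lists is rewarded; k=60 is a common default."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_results = ["d1", "d3", "d2"]  # e.g. from BM25
vector_results = ["d2", "d1", "d4"]   # e.g. from an embedding index
fused = reciprocal_rank_fusion([keyword_results, vector_results])
```

RRF only needs ranks, not raw scores, which is why it is popular for fusing retrievers whose scores live on incompatible scales.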
Post-retrieval processing
Reranking
- RankGPT Reranker Demonstration[101]
Contextual (Prompt) Compression
- How to Cut RAG Costs by 80% Using Prompt Compression[102]
  - The first compression method covered is AutoCompressors, which summarizes long text into short vector representations called summary vectors; these compressed summary vectors then act as soft prompts for the model.
- LangChain Contextual Compression[103]
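Besides soft-prompt methods like AutoCompressors, contextual compression can also be extractive: drop the parts of each retrieved document that are irrelevant to the query before it reaches the LLM. The toy filter below illustrates the idea with word overlap; LangChain's contextual compression makes this relevance judgment with an LLM or embeddings, not with word overlap.

```python
import re

def compress_context(query, document):
    """Keep only sentences that share a content word with the query: a toy
    extractive filter. LangChain's contextual compression uses an LLM or
    embeddings for this judgment rather than word overlap."""
    stop = {"the", "a", "an", "of", "is", "are", "what", "how", "in", "does"}
    q_terms = set(re.findall(r"\w+", query.lower())) - stop
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    kept = [s for s in sentences
            if q_terms & set(re.findall(r"\w+", s.lower()))]
    return " ".join(kept)

doc = ("RAG pipelines retrieve documents. The weather was sunny. "
       "Compression removes irrelevant text before the LLM call.")
out = compress_context("How does compression help RAG?", doc)
```

The off-topic sentence is dropped, shrinking the prompt (and its token cost) while keeping the material the answer needs.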
Others
- Bridging the rift in Retrieval Augmented Generation[104]
  - Rather than directly fine-tuning base modules such as the retriever and the language model (which works poorly), it introduces a third actor: an intermediate bridge module that sits between the existing components. The techniques involved include ranking, compression, contextual framing, conditional reasoning scaffolds, and interactive querying (see the follow-up papers).
Evaluation
- Evaluating RAG Applications with RAGAs[105]
- Chinese translation[106]
- Best Practices for LLM Evaluation of RAG Applications[107]
- Chinese translation[108]
- Exploring End-to-End Evaluation of RAG Pipelines[109]
- Chinese translation[110]
- Evaluating Multi-Modal Retrieval-Augmented Generation[111]
- Chinese translation[112]
- RAG Evaluation[113]
- Chinese translation[114]
- Evaluation - LlamaIndex[115]
  - RAG faithfulness across models at different data scales
  - Faithfulness with RAG versus without RAG (internal knowledge only), across models
  - Faithfulness when combining internal and external knowledge, across models
  - RAG answer relevance across models
- Chinese translation[116]
- Pinecone's RAG evaluation study[117]
- Zilliz: Optimizing RAG Applications: A Guide to Methodologies, Metrics, and Evaluation Tools for Enhanced Reliability[118]
Practice
- Practice[119]
Hallucination
- Let’s Talk About LLM Hallucinations[120]
- Chinese translation[121]
Courses
- Short course: Building and Evaluating Advanced RAG Applications[122]
- Retrieval Augmented Generation for Production with LangChain & LlamaIndex[123]
Videos
- A Survey of Techniques for Maximizing LLM Performance[124]
- How do domain-specific chatbots work? An overview of retrieval augmented generation (RAG)[125]
- Text version[126]
- Nvidia: Augmenting LLMs Using Retrieval Augmented Generation[127]
- How to Choose a Vector Database[128]
Others
- Lessons from building an enterprise-grade AI assistant (Chinese translation)[129]
- How to build an AI assistant for the enterprise[130]
- Large Language Model (LLM) Disruption of Chatbots[131]
- Chinese translation[132]
- Gen AI: why does simple Retrieval Augmented Generation (RAG) not work for insurance?[133]
- Chinese translation[134]
- How OpenAI optimizes LLM performance (in Chinese)[135]
- End-to-End LLMOps Platform[136]
References
[1] Retrieval-Augmented Generation for Large Language Models: A Survey: https://arxiv.org/abs/2312.10997
[2] Chinese translation: https://baoyu.io/translations/ai-paper/2312.10997-retrieval-augmented-generation-for-large-language-models-a-survey
[3] GitHub repo: https://github.com/Tongji-KGLLM/RAG-Survey/tree/main
[4] Advanced RAG Techniques: an Illustrated Overview: https://pub.towardsai.net/advanced-rag-techniques-an-illustrated-overview-04d193d8fec6
[5] Chinese translation: https://baoyu.io/translations/rag/advanced-rag-techniques-an-illustrated-overview
[6] Guide and summary for building advanced RAG applications: https://blog.llamaindex.ai/a-cheat-sheet-and-some-recipes-for-building-advanced-rag-803a9d94c41b
[7] Patterns for Building LLM-based Systems & Products: https://eugeneyan.com/writing/llm-patterns/
[8] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/Z6vvdyAdXou7XmxuXt2cigZUnTb?from=from_copylink
[9] RAG compendium: https://aman.ai/primers/ai/RAG/
[10] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/GfwOd3rASo6lI4xoFsycUiz8nhg
[11] Microsoft: Retrieval Augmented Generation (RAG) in Azure AI Search: https://learn.microsoft.com/en-us/azure/search/retrieval-augmented-generation-overview
[12] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/JJ7ldrO4Zokjq7xZIJcc5IZjnFh?from=from_copylink
[13] Azure OpenAI design patterns: RAG: https://github.com/microsoft/azure-openai-design-patterns/tree/main/patterns/03-retrieval-augmented-generation
[14] IBM: What is retrieval-augmented generation: https://research.ibm.com/blog/retrieval-augmented-generation-RAG
[15] Chinese translation: https://tczjw7bsp1.feishu.cn/wiki/OMUVwsxlSiqjj4k4YkicUQbcnDg?from=from_copylink
[16] Amazon: Retrieval Augmented Generation (RAG): https://docs.aws.amazon.com/sagemaker/latest/dg/jumpstart-foundation-models-customize-rag.html
[17] Nvidia: What Is Retrieval-Augmented Generation?: https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/?ncid=so-twit-174237&=&linkId=100000226744098
[18] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/V6ysdAewzoflhmxJDwTcahZCnYI?from=from_copylink
[19] Meta: Retrieval Augmented Generation: Streamlining the creation of intelligent natural language processing models: https://ai.meta.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/
[20] Chinese translation: https://tczjw7bsp1.feishu.cn/wiki/TsL8wAsbtiLfDmk1wFJcQsiGnQb?from=from_copylink
[21] Cohere: Introducing Chat with Retrieval-Augmented Generation (RAG): https://txt.cohere.com/chat-with-rag/
[22] Pinecone: Retrieval Augmented Generation: https://www.pinecone.io/learn/series/rag/
[23] Milvus: Build AI Apps with Retrieval Augmented Generation (RAG): https://zilliz.com/learn/Retrieval-Augmented-Generation?utm_source=twitter&utm_medium=social&utm_term=zilliz
[24] Knowledge Retrieval Takes Center Stage: https://towardsdatascience.com/knowledge-retrieval-takes-center-stage-183be733c6e8
[25] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/VELQdaizVoknrrxND3jcLkZZn8d?from=from_copylink
[26] Disadvantages of RAG: https://medium.com/@kelvin.lu.au/disadvantages-of-rag-5024692f2c53
[27] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/UZCCdKmLEo7VHQxWPdNcGzICnEd?from=from_copylink
[28] Retrieval-Augmented Generation (RAG) or Fine-tuning — Which Is the Best Tool to Boost Your LLM Application?: https://www.linkedin.com/pulse/retrieval-augmented-generation-rag-fine-tuning-which-best-victoria-s-
[29] Chinese translation: https://tczjw7bsp1.feishu.cn/wiki/TEtHwkclWirBwqkWeddcY8HXnZf?chunked=false
[30] Prompt engineering vs. RAG vs. fine-tuning: https://github.com/lizhe2004/Awesome-LLM-RAG-Application/blob/main/Prompting-RAGs-Fine-tuning.md
[31] RAG vs Finetuning — Which Is the Best Tool to Boost Your LLM Application?: https://webcache.googleusercontent.com/search?q=cache:https://towardsdatascience.com/rag-vs-finetuning-which-is-the-best-tool-to-boost-your-llm-application-94654b1eaba7
[32] Chinese translation: https://tczjw7bsp1.feishu.cn/wiki/Cs9ywwzJSiFrg9kX2r1ch4Nxnth
[33] A Survey on In-context Learning: https://arxiv.org/abs/2301.00234
[34] Kimi Chat: https://kimi.moonshot.cn/
[35] GPTs: https://chat.openai.com/gpts/mine
[36] Baichuan knowledge base: https://platform.baichuan-ai.com/knowledge
[37] COZE: https://www.coze.com/
[38] Devv-ai: https://devv.ai/zh
[39] LangChain: https://github.com/langchain-ai/langchain/
[40] langchain4j: https://github.com/langchain4j/langchain4j
[41] LlamaIndex: https://github.com/run-llama/llama_index/
[42] GPT-RAG: https://github.com/Azure/GPT-RAG
[43] QAnything: https://github.com/netease-youdao/QAnything/tree/master
[44] Quivr: https://github.com/StanGirard/quivr
[45] Quivr app: https://www.quivr.app/chat
[46] Dify: https://github.com/langgenius/dify
[47] Verba: https://github.com/weaviate/Verba
[48] danswer: https://github.com/danswer-ai/danswer
[49] Unstructured: https://github.com/Unstructured-IO/unstructured
[50] semantic-router: https://github.com/aurelio-labs/semantic-router
[51] ragas: https://github.com/explodinggradients/ragas?tab=readme-ov-file
[52] tonic_validate: https://github.com/TonicAI/tonic_validate
[53] deepeval: https://github.com/confident-ai/deepeval
[54] trulens: https://github.com/truera/trulens
[55] langchain-evaluation: https://python.langchain.com/docs/guides/evaluation/
[56] Llamaindex-evaluation: https://docs.llamaindex.ai/en/stable/optimizing/evaluation/evaluation.html
[57] BCEmbedding: https://github.com/netease-youdao/BCEmbedding/tree/master
[58] BGE-Embedding: https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding
[59] bge-reranker-large: https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker
[60] gte-base-zh: https://modelscope.cn/models/iic/nlp_gte_sentence-embedding_chinese-base/summary
[61] YiVal: https://github.com/YiVal/YiVal
[62] vanna: https://github.com/vanna-ai/vanna
[63] OpenLLM:
[64] LlamaIndex observability: https://docs.llamaindex.ai/en/stable/module_guides/observability/observability.html
[65] langfuse: https://github.com/langfuse/langfuse
[66] phoenix: https://github.com/Arize-ai/phoenix
[67] openllmetry: https://github.com/traceloop/openllmetry
[68] lunary: https://lunary.ai/
[69] RAGxplorer: https://github.com/gabrielchua/RAGxplorer
[70] Retrieval Augmented Generation: Streamlining the creation of intelligent natural language processing models: https://ai.meta.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/
[71] Lost in the Middle: How Language Models Use Long Contexts: https://arxiv.org/abs/2307.03172
[72] Seven Failure Points When Engineering a Retrieval Augmented Generation System: https://arxiv.org/abs/2401.05856
[73] Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agents: https://arxiv.org/abs/2304.09542
[74] RankGPT Reranker Demonstration (Van Gogh Wiki): https://github.com/run-llama/llama_index/blob/main/docs/examples/node_postprocessor/rankGPT.ipynb
[75] Bridging the Preference Gap between Retrievers and LLMs: https://arxiv.org/abs/2401.06954
[76] Tuning Language Models by Proxy: https://arxiv.org/abs/2401.08565
[77] Zero-Shot Listwise Document Reranking with a Large Language Model: https://arxiv.org/pdf/2305.02156.pdf
[78] From Good to Great: How Pre-processing Documents Supercharges AI’s Output: https://webcache.googleusercontent.com/search?q=cache:https://medium.com/mlearning-ai/from-good-to-great-how-pre-processing-documents-supercharges-ais-output-cf9ecf1bd18c
[79] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/HpFOdBVlIo2nE5xHN8GcPqaSnxg?from=from_copylink
[80] 5 Levels Of Text Splitting: https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/5_Levels_Of_Text_Splitting.ipynb
[81] Semantic Chunker: https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/node_parser/semantic_chunking/semantic_chunking.ipynb
[82] Foundations of Vector Retrieval: https://arxiv.org/abs/2401.09350
[83] Query Transformations: https://blog.langchain.dev/query-transformations/
[84] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/UaOJdXdIzoUTBTxIuxscRAJLnfh?from=from_copylink
[85] Query Construction: https://blog.langchain.dev/query-construction/
[86] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/Wo0Sdn23voh0Wqx245zcu1Kpnuf?from=from_copylink
[87] Improving Retrieval Performance in RAG Pipelines with Hybrid Search: https://towardsdatascience.com/improving-retrieval-performance-in-rag-pipelines-with-hybrid-search-c75203c2f2f5
[88] Chinese translation: https://baoyu.io/translations/rag/improving-retrieval-performance-in-rag-pipelines-with-hybrid-search
[89] Multi-Vector Retriever for RAG on tables, text, and images: https://blog.langchain.dev/semi-structured-multi-modal-rag/
[90] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/Q8T8dZC0qoV2KRxPh8ScqoHanHg?from=from_copylink
[91] Relevance and ranking in vector search: https://learn.microsoft.com/en-us/azure/search/vector-search-ranking#hybrid-search
[92] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/VJIWd90fUohXLlxY243cQhKCnXf?from=from_copylink
[93] Boosting RAG: Picking the Best Embedding & Reranker models: https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83
[94] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/CtLCdwon9oDIF4x49mOchmjxnud?from=from_copylink
[95] Azure Cognitive Search: Outperforming vector search with hybrid retrieval and ranking capabilities: https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/azure-cognitive-search-outperforming-vector-search-with-hybrid/ba-p/3929167
[96] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/CDtGdwQJXo0mYVxaLpecXWuRnLc?from=from_copylink
[97] Optimizing Retrieval Augmentation with Dynamic Top-K Tuning for Efficient Question Answering: https://medium.com/@sauravjoshi23/optimizing-retrieval-augmentation-with-dynamic-top-k-tuning-for-efficient-question-answering-11961503d4ae
[98] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/HCzAdk2BmoBg3lxA7ZOcn3KlnJb?from=from_copylink
[99] Building Production-Ready LLM Apps with LlamaIndex: Document Metadata for Higher Accuracy Retrieval: https://webcache.googleusercontent.com/search?q=cache:https://betterprogramming.pub/building-production-ready-llm-apps-with-llamaindex-document-metadata-for-higher-accuracy-retrieval-a8ceca641fb5
[100] Chinese translation: https://tczjw7bsp1.feishu.cn/wiki/St29wfD5QiMcThk8ElncSe90nZe?from=from_copylink
[101] RankGPT Reranker Demonstration: https://github.com/run-llama/llama_index/blob/main/docs/examples/node_postprocessor/rankGPT.ipynb
[102] How to Cut RAG Costs by 80% Using Prompt Compression: https://webcache.googleusercontent.com/search?q=cache:https://towardsdatascience.com/how-to-cut-rag-costs-by-80-using-prompt-compression-877a07c6bedb
[103] LangChain Contextual Compression: https://python.langchain.com/docs/modules/data_connection/retrievers/contextual_compression/?ref=blog.langchain.dev
[104] Bridging the rift in Retrieval Augmented Generation: https://webcache.googleusercontent.com/search?q=cache:https://medium.com/@alcarazanthony1/bridging-the-rift-in-retrieval-augmented-generation-3e12f379f66c
[105] Evaluating RAG Applications with RAGAs: https://towardsdatascience.com/evaluating-rag-applications-with-ragas-81d67b0ee31a
[106] Chinese translation: https://baoyu.io/translations/rag/evaluating-rag-applications-with-ragas
[107] Best Practices for LLM Evaluation of RAG Applications: https://www.databricks.com/blog/LLM-auto-eval-best-practices-RAG
[108] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/TQJcdzfcfomL4QxqgkfchvbOnog?from=from_copylink
[109] Exploring End-to-End Evaluation of RAG Pipelines: https://webcache.googleusercontent.com/search?q=cache:https://betterprogramming.pub/exploring-end-to-end-evaluation-of-rag-pipelines-e4c03221429
[110] Chinese translation: https://tczjw7bsp1.feishu.cn/wiki/XL8WwjYU9i1sltkawl1cYOounOg?from=from_copylink
[111] Evaluating Multi-Modal Retrieval-Augmented Generation: https://blog.llamaindex.ai/evaluating-multi-modal-retrieval-augmented-generation-db3ca824d428
[112] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/DrDQdj29DoDhahx9439cjb30nrd?from=from_copylink
[113] RAG Evaluation: https://cobusgreyling.medium.com/rag-evaluation-9813a931b3d4
[114] Chinese translation: https://tczjw7bsp1.feishu.cn/wiki/WzPnwFMgbisICCk9BFrc9XYanme?from=from_copylink
[115] Evaluation - LlamaIndex: https://docs.llamaindex.ai/en/stable/module_guides/evaluating/root.html
[116] Chinese translation: https://tczjw7bsp1.feishu.cn/wiki/KiSow8rXviiHDWki4kycULRWnqg?from=from_copylink
[117] Pinecone's RAG evaluation study: https://www.pinecone.io/blog/rag-study/
[118] Zilliz: Optimizing RAG Applications: A Guide to Methodologies, Metrics, and Evaluation Tools for Enhanced Reliability: https://zilliz.com/blog/how-to-evaluate-retrieval-augmented-generation-rag-applications?utm_source=twitter&utm_medium=social&utm_term=zilliz
[119] Practice: ./practice.md
[120] Let’s Talk About LLM Hallucinations: https://webcache.googleusercontent.com/search?q=cache:https://levelup.gitconnected.com/lets-talk-about-llm-hallucinations-9c8dab3e7ac3
[121] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/G7KJdjENqoMYyhxw05rc8vrgn1c?from=from_copylink
[122] Building and Evaluating Advanced RAG Applications: https://www.deeplearning.ai/short-courses/building-evaluating-advanced-rag/
[123] Retrieval Augmented Generation for Production with LangChain & LlamaIndex: https://learn.activeloop.ai/courses/rag?utm_source=Twitter&utm_medium=social&utm_campaign=student-social-share
[124] A Survey of Techniques for Maximizing LLM Performance: https://www.youtube.com/watch?v=ahnGLM-RC1Y&ab_channel=OpenAI
[125] How do domain-specific chatbots work? An overview of retrieval augmented generation (RAG): https://www.youtube.com/watch?v=1ifymr7SiH8&ab_channel=CoryZue
[126] Text version: https://scriv.ai/guides/retrieval-augmented-generation-overview/
[127] Nvidia: Augmenting LLMs Using Retrieval Augmented Generation: https://courses.nvidia.com/courses/course-v1:NVIDIA+S-FX-16+v1/course/
[128] How to Choose a Vector Database: https://www.youtube.com/watch?v=Yo-AzVpWrRg&ab_channel=Pinecone
[129] Lessons from building an enterprise-grade AI assistant (Chinese translation): https://tczjw7bsp1.feishu.cn/docx/Hq4Hd7JXEoHdGZxomkecEDs3n6b?from=from_copylink
[130] How to build an AI assistant for the enterprise: https://www.glean.com/blog/lessons-and-learnings-from-building-an-enterprise-ready-ai-assistant
[131] Large Language Model (LLM) Disruption of Chatbots: https://cobusgreyling.medium.com/large-language-model-llm-disruption-of-chatbots-8115fffadc22
[132] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/GbxKdkpwrodWRnxW4ffcBU0Gnur?from=from_copylink
[133] Gen AI: why does simple Retrieval Augmented Generation (RAG) not work for insurance?: https://www.zelros.com/2023/10/27/gen-ai-why-does-simple-retrieval-augmented-generation-rag-not-work-for-insurance/
[134] Chinese translation: https://tczjw7bsp1.feishu.cn/docx/KfbidIiZBoPfb3xrT0WcL70LnPd?from=from_copylink
[135] How OpenAI optimizes LLM performance (in Chinese): https://www.breezedeus.com/article/make-llm-greater
[136] End-to-End LLMOps Platform: https://medium.com/@bijit211987/end-to-end-llmops-platform-514044dc791d