{"id":16805,"date":"2024-05-15T03:59:31","date_gmt":"2024-05-15T03:59:31","guid":{"rendered":"https:\/\/blog.datumo.com\/en\/?p=16805"},"modified":"2024-10-22T09:08:50","modified_gmt":"2024-10-22T09:08:50","slug":"all-in-one-language-model-for-search-and-generation","status":"publish","type":"post","link":"https:\/\/blog.datumo.com\/en\/tech\/16805","title":{"rendered":"All-in-One Language Model for Search and Generation"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"16805\" class=\"elementor elementor-16805\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-6d59fb7 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"6d59fb7\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-ca5389a\" data-id=\"ca5389a\" data-element_type=\"column\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-44765fa4 elementor-widget elementor-widget-text-editor\" data-id=\"44765fa4\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<style>\/*! 
elementor - v3.23.0 - 05-08-2024 *\/\n.elementor-widget-text-editor.elementor-drop-cap-view-stacked .elementor-drop-cap{background-color:#69727d;color:#fff}.elementor-widget-text-editor.elementor-drop-cap-view-framed .elementor-drop-cap{color:#69727d;border:3px solid;background-color:transparent}.elementor-widget-text-editor:not(.elementor-drop-cap-view-default) .elementor-drop-cap{margin-top:8px}.elementor-widget-text-editor:not(.elementor-drop-cap-view-default) .elementor-drop-cap-letter{width:1em;height:1em}.elementor-widget-text-editor .elementor-drop-cap{float:left;text-align:center;line-height:1;font-size:50px}.elementor-widget-text-editor .elementor-drop-cap-letter{display:inline-block}<\/style>\t\t\t\t<meta http-equiv=\"refresh\" content=\"0; url=https:\/\/datumo.com\/en\/all-in-one-language-model-for-search-and-generation\/\">\n\n<span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">One of the most effective methods for reducing hallucinations in language models is generating answers based on search results, and a prime example of this is RAG. In RAG, or Retrieval-Augmented Generation, the language model takes search results as input when generating an answer to a question. It&#8217;s akin to searching for information on a portal site when you&#8217;re curious about something.<\/span>\n\n<span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">For RAG to work effectively, it&#8217;s crucial to find the right documents related to the query. While leveraging a well-established search engine is an option, it&#8217;s not always feasible. For example, company-owned data or highly specialized information might not be available through search engines. 
In such cases, a separate database must be created, and only the information related to the query should be filtered.<\/span>\n\n<span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\"><!-- notionvc: d65fe94b-e2d8-4561-9859-50eb787a4d3e --><\/span>\n\n<span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">So, how do we filter similar information<\/span><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt; color: inherit; letter-spacing: -0.01em;\">? First, we need to calculate the similarity between the search query and the documents in the database. This requires vectorizing each document, a process known as <\/span><strong style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt; color: inherit; letter-spacing: -0.01em;\">Embedding<\/strong><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt; color: inherit; letter-spacing: -0.01em;\">.<\/span>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-5b51a11a elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"5b51a11a\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-56f779b7\" data-id=\"56f779b7\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-38237e36 elementor-widget elementor-widget-pix-heading\" data-id=\"38237e36\" data-element_type=\"widget\" data-widget_type=\"pix-heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<div  class=\"pix-heading-el text-center \"><div><div class=\"slide-in-container\"><h3 
class=\"font-weight-bold heading-text el-title_custom_color mb-12\" style=\"\" data-anim-type=\"\" data-anim-delay=\"\">Embedding<\/h3><\/div><\/div><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-68cf2fb7 elementor-widget elementor-widget-image\" data-id=\"68cf2fb7\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<style>\/*! elementor - v3.23.0 - 05-08-2024 *\/\n.elementor-widget-image{text-align:center}.elementor-widget-image a{display:inline-block}.elementor-widget-image a img[src$=\".svg\"]{width:48px}.elementor-widget-image img{vertical-align:middle;display:inline-block}<\/style>\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"640\" height=\"224\" src=\"https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-1024x358.jpg\" class=\"attachment-large size-large wp-image-16818\" alt=\"\" srcset=\"https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-1024x358.jpg 1024w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-300x105.jpg 300w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-768x268.jpg 768w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-1536x537.jpg 1536w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding.jpg 1920w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-68d73fe4 elementor-widget elementor-widget-pix-text\" data-id=\"68d73fe4\" data-element_type=\"widget\" data-widget_type=\"pix-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<div class=\"pix-el-text w-100 text-center \" ><p class=\"text-xs  text-gray-6 text-center font-weight-bold font-italic\" >Source: A Gentle Introduction to Retrieval Augmented Generation 
(RAG)<\/p><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6fbcef84 elementor-widget elementor-widget-text-editor\" data-id=\"6fbcef84\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">The embedding process also uses a language model. However, the language model used for embedding differs from the one used for generating answers (e.g., GPT). The embedding model is specialized in understanding context, while the generative language model excels at predicting the next word. Although both are language models, they serve different purposes, which means each question must be run through each model separately. Naturally, this requires two computations, leading to some inefficiency.<\/span>\n\n<span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">As a result, efforts are being made to integrate these two language models. This would lead to an all-in-one language model capable of both embedding for search and generating responses. 
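The conventional two-model pipeline described above can be sketched in a few lines of Python. This is a toy illustration only: `embed` and `generate` are hypothetical placeholders standing in for a real embedding model and a real generative model (e.g., GPT), and the hash-based vectors are not meaningful embeddings.

```python
import math

def embed(text: str) -> list[float]:
    # Placeholder for the embedding model: hashes characters into a
    # small normalized vector (illustration only, not a real embedding).
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch) % 31
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # First computation: the query goes through the embedding model.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    # Second computation: the query plus retrieved context would go through
    # the separate generative model. Here it is just a formatted string.
    return f"Answer to {query!r} based on: {context[0]}"

docs = ["GRIT unifies embedding and generation.", "Mistral 7B is a base model."]
print(generate("What does GRIT do?", retrieve("What does GRIT do?", docs)))
```

Note that the query is processed twice, once by the embedding model and once by the generative model. That duplicated computation is exactly the inefficiency an all-in-one model aims to remove.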
In this newsletter, we&#8217;ll introduce GRIT, the first language model to integrate generation and embedding.<\/span><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\"><!-- notionvc: fbdc191f-6ed5-4291-a5a0-808d1917224e --><\/span>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-6173e50e elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"6173e50e\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-640007bc\" data-id=\"640007bc\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-2f17c5b6 elementor-widget elementor-widget-pix-heading\" data-id=\"2f17c5b6\" data-element_type=\"widget\" data-widget_type=\"pix-heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<div  class=\"pix-heading-el text-center \"><div><div class=\"slide-in-container\"><h3 class=\"font-weight-bold heading-text el-title_custom_color mb-12\" style=\"\" data-anim-type=\"\" data-anim-delay=\"\">GRIT: Generative Representational Instruction Tuning<\/h3><\/div><\/div><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-37c61b49 elementor-widget elementor-widget-image\" data-id=\"37c61b49\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"640\" height=\"380\" src=\"https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/GRIT-graph-1024x608.jpg\" class=\"attachment-large size-large 
wp-image-16820\" alt=\"Graph of GRIT: Generative Representational Instruction Tuning\" srcset=\"https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/GRIT-graph-1024x608.jpg 1024w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/GRIT-graph-300x178.jpg 300w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/GRIT-graph-768x456.jpg 768w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/GRIT-graph-1536x912.jpg 1536w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/GRIT-graph.jpg 1920w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2f2c6ab9 elementor-widget elementor-widget-pix-text\" data-id=\"2f2c6ab9\" data-element_type=\"widget\" data-widget_type=\"pix-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<div class=\"pix-el-text w-100 text-center \" ><p class=\"text-xs  text-gray-6 text-center font-weight-bold font-italic\" >Source: Generative Representational Instruction Tuning (Muennighoff et al., 2024)<\/p><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-70eec9e elementor-widget elementor-widget-text-editor\" data-id=\"70eec9e\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">How did GRIT acquire both capabilities? 
GRIT applied Instruction Tuning for both embedding representation and generation within a single model.<\/span><\/p><p><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\"><!-- notionvc: bea38902-e29c-4bad-9a5b-a0ad60aea04a --><\/span><\/p><p><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">For embedding, the goal is to obtain good vector values, while for generation, the aim is to predict the appropriate next token. Although the training process is the same, the final output differs, which requires slight adjustments in the model&#8217;s final stage.<\/span><\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-193caf8 elementor-widget elementor-widget-image\" data-id=\"193caf8\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"640\" height=\"301\" src=\"https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-task-1024x482.png\" class=\"attachment-large size-large wp-image-16824\" alt=\"embedding task, showing weight values\" srcset=\"https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-task-1024x482.png 1024w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-task-300x141.png 300w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-task-768x362.png 768w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-task-1536x723.png 1536w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/embedding-task.png 2000w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ddd349c elementor-widget elementor-widget-pix-text\" data-id=\"ddd349c\" data-element_type=\"widget\" data-widget_type=\"pix-text.default\">\n\t\t\t\t<div 
class=\"elementor-widget-container\">\n\t\t\t<div class=\"pix-el-text w-100 text-center \" ><p class=\"text-xs  text-gray-6 text-center font-weight-bold font-italic\" >Source: Generative Representational Instruction Tuning (Muennighoff et al., 2024)<\/p><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9f23632 elementor-widget elementor-widget-text-editor\" data-id=\"9f23632\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">As illustrated above, in the embedding task, the weight values of the last hidden layer are averaged (Mean Pooling), while in the generation task, the last hidden layer is used to predict the next token. A special token is added to the instruction to determine which task to train on. The following diagram visualizes this process.<\/span><!-- notionvc: 79403338-5718-46cb-82e3-0f1d79a5b0c7 --><\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-dd5dcbb elementor-widget elementor-widget-image\" data-id=\"dd5dcbb\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"275\" src=\"https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-model-1024x440.jpg\" class=\"attachment-large size-large wp-image-16825\" alt=\"input and output of the GRIT model\" srcset=\"https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-model-1024x440.jpg 1024w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-model-300x129.jpg 300w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-model-768x330.jpg 768w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-model-1536x660.jpg 1536w, 
https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-model.jpg 1920w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-16d848f elementor-widget elementor-widget-pix-text\" data-id=\"16d848f\" data-element_type=\"widget\" data-widget_type=\"pix-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<div class=\"pix-el-text w-100 text-center \" ><p class=\"text-xs  text-gray-6 text-center font-weight-bold font-italic\" >Source: Generative Representational Instruction Tuning (Muennighoff et al., 2024)<\/p><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8fae8eb elementor-widget elementor-widget-text-editor\" data-id=\"8fae8eb\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">While the same form of instruction is input into the GRIT model, the output results differ. To achieve good embedding results, the training data for instruction tuning explicitly includes the domain, intent, and <strong>text unit<\/strong>. In the example above, the instruction specifies retrieving an abstract (unit) of a scientific paper (domain) with the intent to search (retrieve).<\/span><\/p><p><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">A base model is required for instruction tuning. The GRIT model is based on the Mistral 7B model, with additional instruction tuning applied. 
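The two output modes can be illustrated with toy numbers. Everything below is invented for the sketch (the hidden-state matrix and vocabulary projection are not GRIT's actual values): the embedding path mean-pools all positions into one vector, while the generation path projects only the last position's hidden state to token logits.

```python
# Final-layer hidden states for a 3-token input, dimension 3 (made-up numbers).
hidden = [
    [0.2, 0.4, 0.1],
    [0.6, 0.0, 0.3],
    [0.1, 0.8, 0.5],
]
d = len(hidden[0])

# Embedding task: mean-pool the hidden states across all positions.
embedding = [sum(row[j] for row in hidden) / len(hidden) for j in range(d)]

# Generation task: project only the LAST position's hidden state to
# vocabulary logits (W_vocab is a stand-in output matrix, vocab size 4).
W_vocab = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.5],
]
last = hidden[-1]
logits = [sum(w[j] * last[j] for j in range(d)) for w in W_vocab]
next_token = max(range(len(logits)), key=logits.__getitem__)

print(embedding)   # one pooled vector usable for retrieval
print(next_token)  # argmax token id for generation
```

In the real model the choice between the two paths is signaled by the special tokens in the instruction; here the two code paths simply run side by side on the same shared hidden states.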
While it required more training to achieve both embedding and generation goals, the model ultimately achieved commendable performance in both areas (\ud83d\udd17 <a href=\"https:\/\/github.com\/GritLM\">GritLM GitHub link<\/a>).<\/span><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\"><!-- notionvc: 92de8f28-89b1-4aff-9da5-fc17623f2fab --><\/span><!-- notionvc: 79403338-5718-46cb-82e3-0f1d79a5b0c7 --><\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-65194cf8 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"65194cf8\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-7c68ea\" data-id=\"7c68ea\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-6dcd4766 elementor-widget elementor-widget-pix-heading\" data-id=\"6dcd4766\" data-element_type=\"widget\" data-widget_type=\"pix-heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<div  class=\"pix-heading-el text-center \"><div><div class=\"slide-in-container\"><h3 class=\"font-weight-bold heading-text el-title_custom_color mb-12\" style=\"\" data-anim-type=\"\" data-anim-delay=\"\">Can it be applied to RAG?<\/h3><\/div><\/div><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-23f5e2b8 elementor-widget elementor-widget-text-editor\" data-id=\"23f5e2b8\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p><span style=\"font-family: helvetica, arial, 
sans-serif; font-size: 12pt;\">In the traditional RAG method, the query was first input into the &#8216;embedding model,&#8217; and then the query, along with the retrieved results, was input again into the &#8216;generation model.&#8217; Integrating these two functions into a single language model reduces this inefficiency: information that has already been computed is cached, dramatically reducing retrieval time.<\/span><\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-489165ae elementor-widget elementor-widget-image\" data-id=\"489165ae\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"381\" src=\"https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-query-doc-caching-method-1024x609.jpg\" class=\"attachment-large size-large wp-image-16827\" alt=\"GRIT\u2019s Query-Doc Caching method\" srcset=\"https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-query-doc-caching-method-1024x609.jpg 1024w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-query-doc-caching-method-300x178.jpg 300w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-query-doc-caching-method-768x456.jpg 768w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-query-doc-caching-method-1536x913.jpg 1536w, https:\/\/blog.datumo.com\/en\/wp-content\/uploads\/2024\/08\/grit-query-doc-caching-method.jpg 1920w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6a1fc84a elementor-widget elementor-widget-pix-text\" data-id=\"6a1fc84a\" data-element_type=\"widget\" data-widget_type=\"pix-text.default\">\n\t\t\t\t<div 
class=\"elementor-widget-container\">\n\t\t\t<div class=\"pix-el-text w-100 text-center \" ><p class=\"text-xs  text-gray-6 text-center font-weight-bold font-italic\" >Source: Generative Representational Instruction Tuning (Muennighoff et al., 2024)<\/p><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-43475d8a elementor-widget elementor-widget-text-editor\" data-id=\"43475d8a\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">Let\u2019s look at GRIT\u2019s Query-Doc Caching method. The GRIT model (GritLM) first performs vector operations on the input question. This vector can be used for both document retrieval and as a condition for generation. Hence, it\u2019s stored in the first cache (1st Cache) without needing to be computed twice. The same applies to the retrieved document; the document vector, also obtained through GritLM, can be used for generation. This information is stored in the second cache (2nd Cache). Utilizing both sets of information as input for GritLM, the desired answer is generated, which is the principle behind Query-Doc Caching.<\/span><\/p><p><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\"><!-- notionvc: 72de2010-b978-4304-9255-dcd4836d35ef --><\/span><\/p><p><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt;\">However, its performance isn\u2019t yet sufficient for practical RAG applications. Results from RAG are not significantly different from those obtained without RAG. 
The researchers attribute this to the GRIT model not being fine-tuned for this specific method.<\/span><\/p><p><span style=\"font-family: helvetica, arial, sans-serif; font-size: 12pt; color: inherit; letter-spacing: -0.01em;\">While GRIT<\/span><span style=\"color: inherit; font-family: helvetica, arial, sans-serif; font-size: 12pt; letter-spacing: -0.01em;\">\u00a0is based on Mistral 7B, the nature of its training structure allows for the implementation of this system on any model. This suggests a lot of potential for research on other models. There\u2019s hope that applying this method to top-performing models like GPT-4 or Gemini Pro could allow for a more cost-effective implementation of RAG. Of course, further research is needed.<\/span><\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-197939c3 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"197939c3\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-69ab0854\" data-id=\"69ab0854\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-7149037d elementor-invisible elementor-widget elementor-widget-pix-heading\" data-id=\"7149037d\" data-element_type=\"widget\" data-widget_type=\"pix-heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<div  class=\"pix-heading-el text-center \"><div><div class=\"slide-in-container\"><h3 class=\"font-weight-bold animate-in heading-text el-title_custom_color mb-12\" style=\"\" data-anim-type=\"slide-in-up\" data-anim-delay=\"0\">Your AI Data 
Standard<\/h3><\/div><\/div><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-2c57dfce elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"2c57dfce\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-184abca7\" data-id=\"184abca7\" data-element_type=\"column\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-581945f8 elementor-widget elementor-widget-pix-heading\" data-id=\"581945f8\" data-element_type=\"widget\" data-widget_type=\"pix-heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<div  class=\"pix-heading-el text-center \"><div><div class=\"slide-in-container\"><h5 class=\"text-white font-weight-bold heading-text el-title_custom_color mb-12\" style=\"\" data-anim-type=\"\" data-anim-delay=\"\">LLM Evaluation Platform<\/h5><\/div><\/div><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-f1b6432 elementor-widget elementor-widget-pix-button\" data-id=\"f1b6432\" data-element_type=\"widget\" data-widget_type=\"pix-button.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<span  class=\"btn m-0     text-primary btn-white d-inline-block      btn-normal\"     ><span class=\"font-weight-bold \" >Learn more<\/span> <i class=\"font-weight-bold pixicon-arrow-right2   ml-1\"><\/i><\/span>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element 
elementor-element-5ce5b125\" data-id=\"5ce5b125\" data-element_type=\"column\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-d9b332b elementor-widget elementor-widget-pix-heading\" data-id=\"d9b332b\" data-element_type=\"widget\" data-widget_type=\"pix-heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<div  class=\"pix-heading-el text-center \"><div><div class=\"slide-in-container\"><h5 class=\"text-primary font-weight-bold heading-text el-title_custom_color mb-12\" style=\"\" data-anim-type=\"\" data-anim-delay=\"\">About Datumo<\/h5><\/div><\/div><\/div>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5bbd5c47 elementor-widget elementor-widget-pix-button\" data-id=\"5bbd5c47\" data-element_type=\"widget\" data-widget_type=\"pix-button.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<span  class=\"btn m-0     btn-primary d-inline-block      btn-normal\"     ><span class=\"font-weight-bold \" >Learn more<\/span> <i class=\"font-weight-bold pixicon-arrow-right2   ml-1\"><\/i><\/span>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"One of the most effective methods for reducing hallucinations in language models is generating answers based on search results, and a prime example of this is RAG. 