
Hello. We are Kikuzuki, Narita, Kikuchi, and Miyahara from the Artificial Intelligence Laboratory.
To promote the use of generative AI in enterprises, Fujitsu has developed a generative AI framework for enterprises that can flexibly respond to diverse and changing corporate needs, handle the vast amounts of data held by companies, and easily comply with laws and regulations. The framework was launched in July 2024 as part of the AI service lineup of Fujitsu Kozuchi (R&D).
Some of the challenges that enterprise customers face when leveraging specialized generative AI models include:
- Difficulty handling large amounts of data required by the enterprise
- Generative AI cannot meet cost, response-speed, and various other requirements
- Requirement to comply with corporate rules and regulations
To address these challenges, the generative AI framework for enterprises consists of the following technologies:
- Fujitsu Knowledge Graph enhanced RAG ( *1 )
- Amalgamation Technology
- Generative AI Audit Technology

In this series, we introduce "Fujitsu Knowledge Graph enhanced RAG" in weekly installments. We hope it helps you solve your problems. At the end of the article, we also explain how you can try out the technology.
Fujitsu Knowledge Graph Enhanced RAG Technology Overcomes the Weakness of Generative AI that Cannot Accurately Reference Large-Scale Data
Existing RAG techniques, which make generative AI refer to related documents such as internal documents, have the drawback of not accurately referencing large-scale data. To solve this problem, we developed Fujitsu Knowledge Graph enhanced RAG (hereinafter, KG enhanced RAG), which extends existing RAG technology by automatically creating a knowledge graph that structures huge amounts of data owned by companies, such as corporate regulations, laws, manuals, and videos. This expands the amount of data an LLM can reference from the hundreds of thousands to millions of tokens to more than 10 million tokens. In this way, knowledge based on relationships in the knowledge graph can be accurately fed to the generative AI, enabling logical reasoning and the presentation of output rationale.
This technology comprises five component technologies, chosen according to the target data and application scenario. Together with our initiative to publish and share the knowledge built with these technologies, they are covered in a six-part series.

(1) Root Cause Analysis (Published)
This technology creates a report on the occurrence of a failure based on system logs and failure case data, and suggests countermeasures based on similar failure cases.
(2) Question & Answer (Published)
This technology makes it possible to conduct advanced Q&A based on a comprehensive view of a large amount of document data such as product manuals.
(3) Software Engineering (Published)
This technology not only understands source code, but also generates high-level functional design documents, summaries, and enables modernization.
(4) Vision Analytics (Published)
This technology can detect specific events and dangerous actions from video data, and even propose countermeasures.
(5) Log Analysis (Published)
This technology can answer various questions related to system logs in natural language, including fault cause analysis, anomaly detection, and summarization.
(6) Knowledge Publication (This Article)
This initiative aims to publish and share the knowledge built with the Fujitsu KG enhanced RAG mentioned above, in forms that can be used in real-world business operations and research.
In this article, we will introduce (6) Knowledge Publication in detail.
Overview of Knowledge Publication
This initiative aims to publish and share knowledge built with Fujitsu KG enhanced RAG in ways that are practically useful for real business operations and research. We will first organize the challenges in data utilization using generative AI, and then present our policy on how knowledge publication can help solve those challenges.
What are the challenges in data utilization with generative AI?
With advances in search systems and generative AI, it has become easier to handle various internal and external data. Tasks such as listing manuals and documents or performing fuzzy searches are now relatively simple. On the other hand, when searching and analyzing internal/external data or websites with generative AI, you may have encountered situations like the following:
- You want to consult manuals (e.g., manuals for internal systems, cloud systems, or equipment) to learn the procedures, but the answer you get contains proper nouns or concrete examples that are slightly off from what you need.
- You want to analyze multiple sections of complex documents (e.g., contracts, standard documents, government documents) and determine a policy for individual cases, but only scattered passages that match certain keywords are referenced, making it hard to grasp the overall picture.
- You want to perform statistical analysis (such as counting cases, creating rankings) or AI-based analysis (such as analyzing human behavior, extracting relationships between papers) using open data (e.g., video datasets, audio datasets, collections of papers), but due to insufficient metadata, the analysis is inadequate and accuracy does not improve.
In this way, simple searches and high-level investigations are possible in the initial stages, but once you move on to more in-depth study and analysis, the data cannot be fully leveraged. These issues stem from technical challenges in data utilization with generative AI, and active efforts are underway to address them through best-practice surveys and the development of guidelines (e.g., Example 1, Example 2).
While these challenges are partly due to the technical limitations of generative AI, the biggest factor is that most of the target data is "unstructured data". Therefore, it is essential to convert such unstructured data into more practically usable "structured data" by adding metadata and by encoding relationships between entities and events. In this article, we refer to data that has been structured in this way and transformed into a practically useful form as "knowledge".
What existing initiatives are there?
The same awareness of these issues has existed since before the rise of generative AI, and for each specific purpose, efforts have been made to structure data using proprietary formats or RDF formats ( *2 ) (examples: Google Data Commons, Discourse Graphs, KG publication). However, these have mainly targeted analytical data handled by a limited number of researchers, and cannot be regarded as broad initiatives to structure the general unstructured data that is used on a daily basis.
The Fujitsu KG enhanced RAG introduced at the beginning of this article is a technology that can automatically create KGs, which are structured data, and is a promising technology for solving challenges in data utilization. Until now, however, the technology has been applied mainly to highly confidential data (such as Fujitsu's internal data and customer data), and public data has not been included in its scope.
Thus, with existing initiatives alone, it has not been sufficient to convert highly beneficial public data (manuals, documents, datasets, etc.) into forms that can be used as knowledge.
Why publish knowledge? - A new proposal to solve challenges in data utilization
To fundamentally solve the challenges of data utilization, it is important to structure highly beneficial public data and convert it into knowledge that anyone can easily handle. With this in mind, we have started an initiative to convert public data into KGs using Fujitsu KG enhanced RAG and to broadly publish that knowledge.
The figure below shows the overall concept of knowledge publication. By enabling various people to build their own chat applications using the published knowledge, we aim to foster a community that promotes data utilization. In addition, Fujitsu KG enhanced RAG provides advanced functions for already-created knowledge (such as knowledge customization/extension and statistical processing). Those who need these capabilities can also access them via the Fujitsu Kozuchi platform. We also envision that researchers will use this as benchmark data to support research and development in AI technologies.

We hope that many people will make use of the knowledge structured by Fujitsu KG enhanced RAG and work together with us to solve challenges in data utilization. To make it easier to use, we are publishing it in Neo4j format, which is easy to access from LangChain, an OSS library. At the time of writing this article, we are publishing part of the knowledge created using three technologies that make up Fujitsu KG enhanced RAG: Root Cause Analysis, Question & Answer, and Vision Analytics. In the following chapters, we will explain the concrete value and usage methods that each type of knowledge provides.
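Since the knowledge is published in Neo4j GraphML format, even the standard library is enough for a first look at its contents before setting up a database. The sketch below parses a tiny inline GraphML fragment; the node labels and the single edge are illustrative stand-ins, not the published schema.

```python
# Minimal sketch: inspecting a KG exported in GraphML with the standard
# library. Element names follow the GraphML spec; the node labels and
# the edge are illustrative, not the actual published schema.
import xml.etree.ElementTree as ET

GRAPHML = """<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <key id="label" for="node" attr.name="label" attr.type="string"/>
  <graph edgedefault="directed">
    <node id="n0"><data key="label">Disk full</data></node>
    <node id="n1"><data key="label">Service crash</data></node>
    <edge source="n0" target="n1"/>
  </graph>
</graphml>"""

NS = {"g": "http://graphml.graphdrawing.org/xmlns"}

def load_graphml(text):
    """Return (nodes, edges): {node_id: label}, [(source, target), ...]."""
    root = ET.fromstring(text)
    nodes, edges = {}, []
    for node in root.iter("{http://graphml.graphdrawing.org/xmlns}node"):
        data = node.find("g:data", NS)
        nodes[node.get("id")] = data.text if data is not None else ""
    for edge in root.iter("{http://graphml.graphdrawing.org/xmlns}edge"):
        edges.append((edge.get("source"), edge.get("target")))
    return nodes, edges

nodes, edges = load_graphml(GRAPHML)
print(nodes)   # {'n0': 'Disk full', 'n1': 'Service crash'}
print(edges)   # [('n0', 'n1')]
```

For real use, the published demo code imports the GraphML files into Neo4j, where they can be queried from LangChain; the snippet above is only a quick way to peek at the raw files.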
Value of Knowledge and Application Examples
(1) Turning manuals and incident reports into "usable knowledge"
Overview of the initiative
Manuals and incident reports are among the documents that are difficult to interpret. Even when people try to use them in actual operations when a failure occurs, experts must spend a great deal of effort reading them and conducting root-cause analysis and investigation. Generative AI can support reading, but if it does not capture the causal relationships behind failures, the analysis becomes fragmented, making it difficult to avoid insufficient answers or hallucinations. Fujitsu KG enhanced RAG for Root Cause Analysis can automatically extract complex causal relationships between events from descriptions in incident reports and similar documents, and convert them into a KG.
By leveraging this knowledge, it becomes possible to present concrete procedures and causal chains with high accuracy at failure-response sites. In addition, by explicitly presenting causality, the system can provide superior answers from the standpoint of explainability.
Concrete example
Here we introduce a sample knowledge dataset created based on the Official Ubuntu Documentation.
As shown in the figure below, sentences in the documents related to failures are automatically analyzed, and the causal chains leading up to the failure and the recovery procedures are structured as a KG.

Below is an example of root cause analysis performed by giving this KG to an LLM as a prompt. Compared with a simple RAG approach that just ingests the documents, this enables the system to present more concrete procedures (see the example below) and to explain the causal chain leading to the failure.

Furthermore, by loading this KG into the KG enhanced RAG for Root Cause Analysis app on Fujitsu Kozuchi, it becomes possible to perform more advanced analyses, such as visualizing the troubleshooting steps for isolating the failure (see the example below).
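As a rough illustration of the kind of reasoning a causal KG enables, the sketch below walks "cause" edges upstream from an observed symptom to candidate root causes. The event names and edges are illustrative, not taken from the published Ubuntu knowledge.

```python
# Minimal sketch of root cause analysis over a causal KG: follow
# "causes" edges backwards from an observed symptom; events with no
# further upstream cause are candidate roots. Events are illustrative.
causes = {  # stored as effect -> [direct causes]
    "boot failure": ["corrupted GRUB config", "failed kernel update"],
    "failed kernel update": ["interrupted package upgrade"],
}

def root_causes(symptom, graph):
    """Return events reachable upstream of `symptom` with no cause of their own."""
    roots, stack, seen = [], [symptom], set()
    while stack:
        event = stack.pop()
        if event in seen:
            continue
        seen.add(event)
        upstream = graph.get(event, [])
        if not upstream and event != symptom:
            roots.append(event)
        stack.extend(upstream)
    return sorted(roots)

print(root_causes("boot failure", causes))
# ['corrupted GRUB config', 'interrupted package upgrade']
```

In the actual technology, the causal chains are extracted automatically from incident reports and the resulting subgraph is passed to the LLM as context; the traversal above only shows why an explicit graph makes the causal chain easy to recover and explain.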

(2) Turning documents that include figures and tables into "usable knowledge"
When performing Q&A using Retrieval-Augmented Generation (RAG) with generative AI, it often fails to answer correctly because it cannot select the chunks (the minimum units of a document fed into the generative AI) needed for the answer from a large volume of documents. One countermeasure is to structure the text so that the necessary passages can be extracted more accurately. To further improve retrieval accuracy based on structured document information (KG), we extended the KG with two additional types of information: "insights" and "enumeration relationships". Below, we explain this KG in more detail.
Focused challenges
KG enhanced RAG for Q&A 2.0 provides a Q&A system that delivers highly accurate answers to end users, targeting many documents that include figures and tables. In RAG applied to documents with images, such as design documents and technical documents, it often fails to retrieve documents that are useful for answering questions due to the following issues:
1. Lack of "insights"
In standard RAG, retrieval is performed based on vector similarity between the "question text" and the "document content". However, documents that include images often do not explicitly state "insights" such as business objectives or intent, making it difficult to extract them as documents suitable for the question.
2. Fragmentation of chunks
Documents often contain enumerated information that spans multiple pages, such as procedures or product lists.
The mainstream chunking methods are fixed-length chunking and variable-length chunking that includes semantic analysis, but enumerated information that spans pages cannot be extracted correctly. As a result, what should be a single block of enumerated text is split into multiple chunks and cannot be used appropriately in answers.
Approach
To address these challenges, we added the following two elements to KG generation:
1. Insight sentences
For pages that include images, we dynamically generate insight sentences that are not mere summaries but instead verbalize in what kind of business context or intent the information has value, and link them to the corresponding page nodes to enrich the KG.
- Related paper: "Improving Retrieval Accuracy of Multimodal RAG Systems by Generating Retrieval Insights from Documents Containing Images" (accepted at JSAI 2025)
2. Enumeration relationships
We analyze the relationships between enumerated sentences that are easily fragmented by chunking, and connect chunks via enumeration relationships so that enumerated information can be handled as a single unit.
- Related paper: Chunk-Link: Context-aware chunk completion (accepted at RAGE-KG 2025)
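The enumeration-relationship idea can be sketched in a few lines: once chunk pairs are linked, a single retrieved chunk pulls in the rest of its enumeration. The chunk IDs, texts, and link representation below are illustrative, not the actual KG schema.

```python
# Sketch of reassembling an enumeration fragmented by page-based
# chunking. Enumeration links (here simple chunk-id pairs) mark chunks
# that belong to one logical list; all names are illustrative.
chunks = {
    "p3": "Product list (1/2):\n- Model A\n- Model B",
    "p4": "Product list (2/2):\n- Model C",
    "p5": "Pricing notes.",
}
enum_links = [("p3", "p4")]  # chunk pairs joined by an enumeration node

def expand_with_enumeration(hit_id, chunks, links):
    """Given one retrieved chunk, pull in all chunks linked by enumeration."""
    ids = {hit_id}
    changed = True
    while changed:  # follow links transitively, in both directions
        changed = False
        for a, b in links:
            if (a in ids) != (b in ids):
                ids |= {a, b}
                changed = True
    return "\n".join(chunks[i] for i in sorted(ids))

# Even if retrieval returns only p4, the full enumeration is recovered:
print(expand_with_enumeration("p4", chunks, enum_links))
```

The point of the design is that the repair happens at answer time, not at chunking time: chunk boundaries can stay simple, and the graph carries the information that two chunks form one unit.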
Effect
The AI becomes able to accurately answer questions about the background that cannot be read from the figures and tables alone, such as "why the numbers turned out that way" (growth drivers, profit breakdown, etc.). The figures below show examples of KGs that include insight nodes and KGs with enumeration relationships supplemented. Each page node representing a page of the document is enriched with numerous insight nodes, thereby complementing the figures and tables with many insights.
Concrete examples
1. Insight sentences
Here is a concrete example of an insight page. On the following page (P4) of the public Q&A document, there is no body text; instead, figures and tables summarizing items such as revenue are shown. When such figures and tables are viewed in isolation, the information that can be extracted is limited to simple numerical data.

However, by attaching insight nodes that suggest the content, we can supplement the figure/table with information that cannot be read from it alone.
Representative examples of insight nodes:
1. Overall corporate strategy and market trends: A narrative summarizing FY2023 Service Solutions revenue of 2,137.5 billion yen / adjusted operating profit of 237.2 billion yen, the expansion of Fujitsu Uvance, and the company's flexible response to market changes.
2. Breakdown of changes in adjusted operating profit: Organizing contributions with figures - such as 60.2 billion yen from increased revenue, 35.3 billion yen from margin improvement, and a negative 21.4 billion yen from increased investment - so it can be used as a background explanation of "why profits increased".
3. Performance of Service Solutions: Listing revenue, adjusted operating profit, and operating margin (6.1% -> 11.1%) for FY2021-2023 so that year-on-year trends and margin improvements can be retrieved immediately.
By combining these insight nodes, the context needed for anticipated Q&A - such as "growth drivers", "profit breakdown", and "comparison of profit margins" - is automatically indicated and leads to the final answer.
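The effect of insight nodes on retrieval can be sketched with a toy scorer: a figure-only page matches a "why" question poorly on its own text, but the attached insight sentence supplies the missing context. Plain word overlap stands in here for the embedding similarity a real system would use; the page and insight texts are illustrative.

```python
# Sketch of insight-augmented retrieval. A page that contains only a
# table scores zero against a "why" question; concatenating its insight
# sentences recovers the match. Word overlap is a stand-in for the
# vector similarity used in practice; texts are illustrative.
pages = {"P4": "FY2023 revenue 2,137.5 billion yen (table)"}
insights = {"P4": ["Explains why adjusted operating profit increased: "
                   "revenue growth and margin improvement"]}

def score(question, page_id, use_insights):
    """Count question words that also appear in the page (plus insights)."""
    q = set(question.lower().split())
    text = pages[page_id]
    if use_insights:
        text += " " + " ".join(insights.get(page_id, []))
    return len(q & set(text.lower().split()))

q = "why did operating profit increase"
print(score(q, "P4", use_insights=False))  # 0 - table text alone misses
print(score(q, "P4", use_insights=True))   # 3 - insight supplies context
```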
2. Enumeration relationships
Next, we present an example that illustrates enumeration relationships. The figure below shows a KG from the same dataset, this time focusing on enumeration relationships. In this example, for instance, the P3 node and the P4 node are connected by an enumeration node, which suggests that an enumeration spans across these two pages.

Let's look at the actual document. You can see that P3 and P4 each describe the overall situation from different perspectives.


(3) Turning long-term video into "usable knowledge"
Like documents, video data is also a valuable source of information that records on-site conditions and work procedures in detail. However, because most of it is unstructured data, it has been difficult to locate the desired information, and thus it has not been fully utilized. We have developed a technology that structures this video data and converts it into "usable knowledge" that anyone can leverage.
Focused challenges
In video analysis, a key challenge is how to efficiently extract the necessary information from long-term videos or large volumes of video. Existing video understanding models can only process short clips of a few seconds at a time. For example, a 5-minute video at 30 fps has about 9,000 frames, and due to constraints on computing resources, memory, and the context window of LMMs ( *3 ), it is difficult to handle all frames simultaneously.
Therefore, the conventional approach has been to either subsample frames or split the video into short clips, analyze them individually, and then integrate the results afterward. However, this approach risks missing important scenes, and because the same question must be processed repeatedly for many clips, computation costs and response times increase.
In addition, because RAG technology is mainly designed for text, it is difficult to directly handle relationships among people, objects, and actions contained in video and audio. As a result, important events and relationships between entities in the video cannot be fully exploited, which may reduce the accuracy of question answering.
- Understanding long-term video: Because LMMs such as GPT-5 and Gemini have context-window limitations, video frames must be subsampled or the video split into short clips for processing. There is therefore a need for technology that can understand long-term videos without missing important scenes.
- Support for multimodal input: Existing RAG technologies are primarily designed for text data, and mechanisms for handling video, audio, and text in an integrated way are not yet sufficiently established.
Approach
To solve the challenges of understanding long-term videos with RAG technology, we have developed a method that extracts elements such as people, objects, and actions from video, and generates and extends an updatable multimodal KG that explicitly represents their temporal and spatial relationships. We call the graph constructed in this way a Multimodal Dynamic Knowledge Graph (hereafter, Multimodal DKG).
Organizing video into "a form that can be used later": In this method, we first analyze every frame in the input video only once, using relatively lightweight recognition models such as object detection and action recognition. As a result, we extract information such as:
- People and objects that appear in the video
- Actions and events that occur
- The time and location at which they occur
We then store this information as a graph structure composed of "nodes" representing elements and "edges" representing relationships between elements. Because this graph can be incrementally updated as the video progresses, it can directly handle long-term videos. There is no need to subsample frames to accommodate the input-length limitations of LMMs.
A mechanism that avoids reprocessing the video over and over: In conventional methods, each time a question is asked, the video must be split into short clips, and each clip must be processed by an LMM. In contrast, our method proceeds as follows:
- Process the video only once to create a KG
- At question time, first search the KG
- Feed only the minimum necessary video segments related to the question into the LMM
Therefore, there is no need to process the entire video each time, which greatly reduces computation cost and response time.
Using the graph to narrow down the necessary scenes: During question answering, we first search the KG for people, objects, actions, and time intervals related to the question. For example:
- "When is this person speaking?"
- "Where are the scenes in which a specific object is being used?"
For such questions, we can directly identify the relevant time ranges by traversing nodes and edges. At this stage, we do not use heavy LMM processing, so we can reliably extract important scenes while reducing unnecessary video input.
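The "search the graph first" step can be sketched as filtering over events extracted once from the video. Each event below is a (subject, action, object, start, end) tuple; the names and timestamps are illustrative, not an actual Multimodal DKG schema.

```python
# Sketch of narrowing down scenes via the KG before any LMM call:
# events extracted once from the video are stored with time ranges,
# and a question becomes a filter over them. Events are illustrative.
events = [
    ("person_A", "picks_up", "tool_B", 12.0, 14.5),
    ("person_A", "tightens", "bolt_1", 15.0, 42.0),
    ("person_C", "picks_up", "tool_B", 60.0, 61.0),
]

def find_intervals(subject=None, action=None, obj=None):
    """Return (start, end) ranges of events matching all given conditions."""
    return [(s, e) for subj, act, o, s, e in events
            if (subject is None or subj == subject)
            and (action is None or act == action)
            and (obj is None or o == obj)]

# "When does person_A use tool_B?" -> only these seconds go to the LMM.
print(find_intervals(subject="person_A", obj="tool_B"))  # [(12.0, 14.5)]
```

Only the returned time ranges are then cut out of the video and passed to the LMM, which is what keeps computation cost and response time low.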
Handling multimodal input (video, audio, text) in an integrated way: In a Multimodal DKG, not only video information but also speech recognition results (spoken content) and text information can be integrated into the same graph. Because information is connected using people and objects as common anchors, it becomes possible, for example, to start from the video and then refer to related audio or text information. By combining a KG with an LLM in this way, we can expect improved answer accuracy and suppression of incorrect LMM outputs (hallucinations).

In summary, a "Multimodal DKG" is constructed mainly through the following steps:
- Element recognition: Using lightweight video recognition models (such as object detection and action recognition), we detect and classify key elements - people, objects, actions, etc. - from each frame of the video. We then analyze temporal changes across frames and extract events (actions) in verbalized form, such as "picks up a tool" or "tightens a bolt".
- Relationship extraction: We analyze the temporal and spatial relationships among the detected elements (e.g., person A, tool B, action C) and define semantic relationships such as "person A is using tool B".
- KG construction: We add the extracted elements and their relationships to the KG as nodes (elements such as people and objects) and edges (relationships).

Effect
By converting video into a Multimodal DKG and leveraging it with KG enhanced RAG technology, we can achieve extremely high-precision video understanding that was impossible with conventional search methods. Concretely, it becomes possible to answer complex queries that go beyond simple keyword search and take into account relationships among people, objects, and actions - for example, "Find the scene where person A is using tool B to perform procedure C". As a result, video evolves from a mere recording medium into "usable knowledge" that creates business value by enabling more advanced analysis and insight discovery, such as analyzing work procedures, transferring expert skills, and detecting dangerous behavior. Furthermore, by structuring video as a KG, the content of long-term videos can be retained as knowledge, making it possible to search and reference information from long videos efficiently.

Concrete example
Let's look at a concrete example of how a KG is generated from video. The figure above shows a scene in which a woman wearing a gray coat is pushing a cart down a supermarket aisle and entering the sales floor. From each frame of the input video, captions are generated (e.g., "A woman wearing a gray coat pushed a cart and entered the sales floor."). From these captions, nodes representing elements (such as Woman, Gray coat, Cart, Store aisle) and edges representing relationships (such as wears, pushes, enters) are extracted to form small graphs, which are then integrated into a single KG. In this fragment of the KG, relationships such as (Woman) -[wears]-> (Gray coat), (Woman) -[pushes]-> (Cart), and (Woman) -[enters]-> (Store aisle) are represented.
By constructing a KG in this way, it becomes possible to perform composite searches that combine multiple conditions, such as "the scene where a woman wearing a gray coat is pushing a cart and entering the sales floor".
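The caption-to-composite-search flow just described can be sketched over (subject, relation, object) triples. In practice the triples are extracted from captions by a model; here they are given directly, and the matching is a simple set intersection.

```python
# Sketch of composite search over caption-derived triples: a query is a
# set of (relation, object) conditions, and a subject matches only if
# every condition holds. Triples are illustrative, given directly here
# instead of being extracted from captions by a model.
triples = [  # one small graph per caption, merged into a single KG
    ("Woman", "wears", "Gray coat"),
    ("Woman", "pushes", "Cart"),
    ("Woman", "enters", "Store aisle"),
    ("Man", "pushes", "Cart"),
]

def match_all(conditions, kg):
    """Return subjects for which every (relation, object) condition holds."""
    subjects = {s for s, _, _ in kg}
    for rel, obj in conditions:
        subjects &= {s for s, r, o in kg if r == rel and o == obj}
    return subjects

# "Wearing a gray coat AND pushing a cart" excludes the man, who only
# satisfies the second condition.
query = [("wears", "Gray coat"), ("pushes", "Cart")]
print(match_all(query, triples))  # {'Woman'}
```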
Let's Explore the Knowledge: How to Use It
The knowledge and demo code you can actually try are stored in the following locations. In this chapter, we explain how you can experience and use this knowledge.
| Item | URL | Description |
|---|---|---|
| Knowledge | Hugging Face | The KG data itself (Neo4j GraphML format), stored under names ending with *_Knowledge_Dataset |
| Demo code | GitHub | Demo code for importing KG data into a Neo4j environment and running QA using LangChain |
(1) Using knowledge from manuals and incident reports
At the time of writing, for Root Cause Analysis we have stored knowledge generated from:
- The official Ubuntu documentation
- Documents assuming Windows setup troubles (Japanese, English)
- Documents assuming equipment troubles in manufacturing (Japanese, English)
You can first perform simple failure analysis using the demo code.
If you are interested in more advanced use of this knowledge, please try the KG enhanced RAG for Root Cause Analysis app on Fujitsu Kozuchi. You can load the knowledge and perform failure analysis, as well as refer to troubleshooting procedures. You can also extend the knowledge. For example, by loading the Ubuntu documentation knowledge and adding incident-report documents for applications running on Ubuntu, you can build knowledge that covers both Ubuntu and those applications.
(2) Using knowledge from documents that include figures and tables
The KG introduced here can be easily created just by registering the public Q&A document into KG enhanced RAG for Q&A 2.0. Since the created KG is published on Hugging Face, please import it into Neo4j and try it out yourself.
1. Explanation of the Knowledge
First, we will explain the main representative nodes.

- Minimum set of nodes required for RAG
At the center of the KG is MMKGFileChunkNode, which indicates that the KG itself is a document, and linked to it are MMKGPageChunkNode nodes, each representing a page of the document.
These two node types are the minimum set required to simply ingest the document into RAG.
- Insight nodes
Next, focus on the numerous MMKGIndexNode nodes clustered around each MMKGPageChunkNode. These are insight nodes generated from the content of that page.
Insight nodes complement a single page with a large amount of information from various perspectives. By checking the text property of any MMKGIndexNode, you can see what kind of insight has been generated.
- Enumeration-relationship nodes
Finally, look at the MMKGChunkLinkNode nodes that connect MMKGPageChunkNode nodes. These nodes indicate that enumerated information spans across the pages represented by those MMKGPageChunkNode nodes.
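Once the KG is imported, inspecting these nodes amounts to filtering by label and reading the `text` property (in Neo4j, a query along the lines of `MATCH (n:MMKGIndexNode) RETURN n.text`). The sketch below does the same over an in-memory stand-in; the node labels come from the article, but the dictionary representation and texts are simplified illustrations.

```python
# Sketch of inspecting the KG's node types. The labels
# (MMKGFileChunkNode, MMKGPageChunkNode, MMKGIndexNode) follow the
# article; the in-memory dicts and texts are simplified stand-ins for
# nodes you would query in Neo4j.
nodes = [
    {"label": "MMKGFileChunkNode", "text": "public Q&A document"},
    {"label": "MMKGPageChunkNode", "text": "P4: revenue tables"},
    {"label": "MMKGIndexNode", "text": "Explains the drivers of profit growth"},
]

def texts_by_label(label, nodes):
    """Collect the text property of every node with the given label."""
    return [n["text"] for n in nodes if n["label"] == label]

print(texts_by_label("MMKGIndexNode", nodes))
# ['Explains the drivers of profit growth']
```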
2. Effect of the Knowledge
Here we present the actual execution results when this KG is integrated into a Q&A application. The results of running Q&A using the knowledge graph shown earlier are as follows:
| Item | Content |
|---|---|
| Query | What is the reason for the decline in Fujitsu's operating profit in FY2023? |
| Answer | The main reason for the decline in Fujitsu's operating profit in FY2023 is the inclusion of one-off losses associated with business restructuring and structural reforms. Specifically, operating profit was 160.2 billion yen, a decrease of 175.3 billion yen from the previous fiscal year, and this includes one-off losses of 123.4 billion yen. These losses arose as part of strategic investments and reorganizations aimed at the company's long-term growth. In addition, intensifying competition and changes in the market environment may also have had an impact. In particular, while Fujitsu is strengthening its Service Solutions business and developing new markets, these initiatives are factors that put pressure on short-term profits. |
In this Q&A example, P3/P4 and the insight chunks work together within the KG to supplement background and connections that cannot be obtained from simple phrases or numbers alone, leading to a more convincing answer.
- Complementation via enumeration relationships
P3 and P4 are connected as an enumeration relationship, so even if RAG retrieval picks up only one of the pages, it can still follow the bidirectional context.
- Complementation via insight information
For P3, we attach anticipated Q&A such as "What are the reasons for the decline in operating profit in FY2023?" as insight chunks, enabling the system to retrieve the appropriate page from the user's question.
The Q&A application used in this example is published on Fujitsu Kozuchi as Knowledge Graph enhanced RAG for Q&A 2.0. By registering the public Q&A document, you can easily reproduce this example, so please refer to it if you are interested.
(3) Utilizing knowledge from long-term video
To help more people experience the effectiveness of Multimodal DKG, a KG designed for video, we are applying it to publicly available video datasets and publishing the results as knowledge. As the first step, we applied it to FieldWorkArena, a benchmark published by Fujitsu for evaluating AI agents specialized in on-site work such as factories and retail stores, and released the video data as knowledge under the title "FieldWork as a Knowledge" in January 2026.
1. FieldWork as a Knowledge
In "FieldWork as a Knowledge", the video data in the FieldWorkArena dataset is structured into a Multimodal DKG. We publish two types of KGs with different intended uses:
| KG type | Description | Number of videos | Input video fps | Use case |
|---|---|---|---|---|
| Single-video KG | A KG that structures events and elements within a single video at high granularity | 1 | 1 | Pinpoint search for specific scenes within a video |
| Multi-video KG | A KG that represents relationships and common scenes across multiple videos | Per related-scene unit | 0.2 | Cross-video search for videos that contain specific scenes |
The single-video KG structures the information contained in a single video at high granularity and represents it as a graph. Because scenes are described in fine segments, you can efficiently search for specific moments within the video over short time ranges. The multi-video KG, on the other hand, structures information at a coarser granularity across multiple videos and represents inter-video relationships as a graph. Although its granularity is lower than that of the single-video KG, it enables cross-cutting exploration of relationships between videos and efficient discovery of videos that contain specific scenes.
"FieldWork as a Knowledge" is published on Hugging Face. You can access the Hugging Face page by applying via the application form provided on the FieldWorkArena site. In addition, sample code for using the KGs is available on GitHub. Besides sample code for performing Q&A directly over the KG, we also provide code for searching specific scenes, so please try it out.
2. Fujitsu Kozuchi Knowledge Graph enhanced RAG for Vision Analytics
From the end of March 2026, we will release the technology for structuring video data as a Multimodal DKG on Fujitsu Kozuchi as an updated version of "Fujitsu Knowledge Graph enhanced RAG for Vision Analytics". Whereas "FieldWork as a Knowledge" in the previous section lets you use the structured results of existing datasets, KG enhanced RAG for Vision Analytics takes a more practical approach: you can structure any video data you own in the same way. If you are considering using this technology, please contact us via the "Contact" section on this site.
This technology is characterized not only by the analysis and structuring of a single video, but also by its ability to handle multiple videos of arbitrary length in a cross-cutting manner. As with "FieldWork as a Knowledge", this makes it possible to generate both single-video KGs, which are highly granular structures at the individual video level, and multi-video KGs, which also represent relationships between videos. If "FieldWork as a Knowledge" has sparked your interest in the possibilities, we encourage you to take the next step and try building a Multimodal DKG from your own data.
Conclusion
In this article, we introduced our initiatives to solve challenges in data utilization using generative AI. Specifically, we are working to convert highly beneficial public data (manuals, documents, datasets, etc.) into practically useful structured knowledge, and to publish and share it. This is an initiative to broadly share with society the knowledge automatically created by Fujitsu KG enhanced RAG, developed at Fujitsu's Artificial Intelligence Laboratory.
At the time of writing, we have just begun publishing part of the knowledge created using the three technologies that make up Fujitsu Knowledge Graph enhanced RAG: Root Cause Analysis, Question & Answer, and Vision Analytics. Going forward, we plan to expand both the public data we target and the scope of the knowledge, and work together with you to solve challenges in data utilization.
We also welcome your comments, such as requests regarding public data that you would like to see converted into structured knowledge. We hope that this initiative will help support your data utilization and AI development efforts.
*1: RAG technology: Retrieval-Augmented Generation, a technology that extends the capabilities of generative AI by combining it with external data sources.
*2: RDF: Resource Description Framework, a framework designed to integrate and interlink information across different data sources.
*3: Context window: the maximum number of tokens a model can process.