Our new Graph Transformer model is designed to extract the most important information from patent text, making your search process faster and more efficient. In this blog post, we'll give you a brief overview of the Graph Transformer architecture. We won't dive into technical details, but we'll explain why this new model will improve your search results.
The Graph Transformer is a unique AI model based on the powerful Transformer architecture, which has been successfully used in some of the most advanced AI models, including ChatGPT. The key feature of the Transformer architecture is the attention mechanism, which allows the model to selectively focus on the most relevant parts of the input when making a prediction. In the case of natural language processing, this means that the model can focus on the most important words or phrases in the text.
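For the curious, the core idea behind attention can be sketched in a few lines of code. This is a generic, minimal illustration of scaled dot-product attention (the building block popularized by the Transformer paper), not the actual implementation used in our model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query compares itself to every key, then takes a
    weighted average of the values; higher similarity = more focus."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: focus weights
    return weights @ V                               # attention-weighted output

# Self-attention over 3 tokens with 4-dimensional embeddings
x = np.random.default_rng(0).normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one contextualized vector per token
```

Each output vector is a blend of all input vectors, weighted by relevance, which is exactly the "selective focus" described above.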
However, the Graph Transformer goes a step further by operating over a knowledge graph instead of full patent text. A knowledge graph is a data structure that connects related concepts. Combining the attention mechanism with the knowledge graph plays a crucial role in helping the model understand relationships between concepts while enabling it to focus on the most relevant information. This is especially useful for patent search, where documents can be complex and technical.
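To make the idea of a knowledge graph concrete, here is a toy example in plain Python. The concept names are made up for illustration and are much simpler than the graphs our system actually builds from patent text:

```python
# A toy knowledge graph for a hypothetical claim about a coated cutting tool:
# nodes are technical concepts, edges connect related concepts.
knowledge_graph = {
    "cutting tool": ["substrate", "coating"],
    "coating": ["titanium nitride", "thickness"],
    "substrate": ["cemented carbide"],
}

def related_concepts(graph, concept):
    """Return the concepts directly connected to the given concept."""
    return graph.get(concept, [])

print(related_concepts(knowledge_graph, "coating"))
# ['titanium nitride', 'thickness']
```

Operating on a structure like this, rather than on raw running text, lets the model reason explicitly about which concepts are connected and how.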
Our Graph Transformer has only 6.5 million parameters, making it significantly smaller than other well-known Transformer-based models like GPT-3.5, which has 175 billion parameters and serves as the basis for ChatGPT. Despite having far fewer parameters, the Graph Transformer achieves exceptional performance in patent search. Its specialized design, which includes mechanisms to selectively focus on relevant information and understand relationships between concepts in a document, enables it to excel in this application. Furthermore, the Graph Transformer's smaller size not only allows for more efficient processing but also lowers computational costs, making it a practical choice for patent search tasks.
In comparing the results of our Graph Transformer to those of the Tree-LSTM model previously used for patent search, we found that the Graph Transformer outperformed the Tree-LSTM in terms of recall across all categories. Recall is a metric that measures the percentage of relevant documents retrieved in a search. During our tests, the Graph Transformer achieved a top 5 total recall that was 15.66% higher than that of the Tree-LSTM, and a top 50 total recall that was 10.08% higher.
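As a quick illustration of how recall at a cutoff is computed (the document IDs below are invented for the example):

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of the relevant documents that appear in the top-k results."""
    top_k = set(retrieved[:k])
    return len(top_k & set(relevant)) / len(relevant)

# Hypothetical search: 4 relevant patents, ranked list of results
relevant = ["P1", "P2", "P3", "P4"]
retrieved = ["P2", "P9", "P1", "P7", "P3", "P4"]
print(recall_at_k(retrieved, relevant, 5))  # 0.75: 3 of the 4 relevant docs are in the top 5
```

A higher recall at a given cutoff means fewer relevant patents slip past the first page of results, which is why the top 5 figure matters so much in practice.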
In practical terms, this means that the Graph Transformer retrieves more relevant results, especially at the top of the list. This improvement is particularly noteworthy when input texts and graphs are short, such as single claims and self-drafted graphs. We attribute this improvement to the attention mechanism used in the Graph Transformer, which enables it to selectively focus on the most important information and retrieve relevant documents more efficiently.
With the Graph Transformer, you can now enjoy a more efficient and accurate search experience. By leveraging a knowledge graph and attention mechanism, the Graph Transformer can better understand the context of the patent text, resulting in more relevant results being retrieved. In all categories, the Graph Transformer surpasses the Tree-LSTM model in terms of recall, meaning that you can retrieve more relevant documents in a shorter amount of time. With these advancements, the Graph Transformer is paving the way for the next generation of patent search.
Copyright © 2022 IPRally Technologies Oy.