A big part of our vision for better search is result explainability. However, there has been an issue: loading a result document with dynamic highlighting took up to 10 seconds. Today we fixed the main bottleneck with new GPU servers.
We pre-calculate the deep-learning-based intelligence behind the highlighting, 25 documents at a time. A GPU is much faster than a standard server when you give it enough data. If there are any issues with the new GPU servers, we fall back to the slower highlighting. This fallback also lets us cut over 60% from the GPU costs by using servers without a complete uptime guarantee.
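The batching-with-fallback pattern described above can be sketched roughly as follows. This is an illustrative outline, not the production code: the function names, the error type used to signal GPU unavailability, and the placeholder highlighter bodies are all assumptions.

```python
# Sketch of batched highlighting with a CPU fallback (hypothetical names).
# Documents are processed 25 at a time; if the GPU worker is unavailable,
# e.g. because a discounted, preemptible server was reclaimed, the batch
# is rerouted to the slower but always-available path.

BATCH_SIZE = 25

def highlight_on_gpu(batch):
    # Placeholder for the deep-learning highlighter on a GPU server.
    # Here it simulates a preempted spot instance by always failing.
    raise ConnectionError("GPU server unavailable")

def highlight_on_cpu(batch):
    # Slower fallback highlighter that always succeeds.
    return [f"highlighted:{doc}" for doc in batch]

def highlight_all(documents):
    results = []
    for i in range(0, len(documents), BATCH_SIZE):
        batch = documents[i:i + BATCH_SIZE]
        try:
            results.extend(highlight_on_gpu(batch))
        except ConnectionError:
            # Fall back per batch, so one GPU outage never blocks results.
            results.extend(highlight_on_cpu(batch))
    return results
```

Handling the fallback per batch, rather than per job, is what makes cheaper non-guaranteed servers tolerable: losing a GPU instance mid-run only delays the batches routed to the slow path.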
This GPU-based solution brought load times down from the old 4-10 s to about 1 s. Most of the remaining time is now spent outside the neural nets, so the next bottleneck is elsewhere. The speed may be reasonable for a long document, but we intend to make it even faster. The work continues.
Is your organization willing to be on the IPR front line? Get in touch for a demo or a sneak peek at the future of patent AI as we see it.