More data and information circulates today than at any other time in history, and tenders are growing larger and contain more information than ever before. Tendering is already an essential part of many organisations’ business activities. According to TED, the European tender market is large and growing, driven not only by the public sector’s need for transparency about how taxpayers’ money is spent, but also by many large private-sector companies following the same trend in search of better prices and a more structured way to negotiate with potential suppliers. This blog post is not about how to monitor announcements or published tender information; it is about how to find the right information fast once the tender documents have been received.
At Brainial, we find that our customers receive an average of 100 to 150 documents per tender, with some tenders exceeding 2,000 documents. Even after reading or scanning the documents, it is still hard to find the information you are looking for. Ctrl+F is used constantly, but you first need to know which document to search. Worse, multiple team members often run Ctrl+F searches through the same tender documents, hunting for the same concerns or information. Colleagues contact each other via digital channels such as Teams, Slack and email, but also by phone, or simply walk over with questions about where to find certain information: information within the tender itself, historical tender information, or answers written in the past. Do you recognize this?
How do you deal with this flood of information? And how do you make sure you don’t have to keep bothering your colleagues with unnecessary, distracting questions? The search capabilities of a shared or local drive, or of your Document Management System, will not get you any further. Below, we explain how the Brainial solution tackles this problem.
With the Brainial solution, you are able to instantly find any information you want by using so-called “Smart Search” (vector search) or exact search. Search results can be filtered on categories, documents, document types, tags, tasks, etc.
Ideally, you want to find the information you are looking for instantly. Filtering on things like category, document, document type and tags makes it much easier to surface the right results. Brainial makes it easy to find relevant information automatically, with advanced search capabilities that let you find related information even when you use completely different keywords, or even a different language. How did we do this? We explain in the next section of this blog post. And yes, we are getting slightly scientific about it :-).
Natural Language Processing (NLP) analyses language data, in the form of documents, using various computational techniques¹. The machine learns the structure and meaning of human language and returns the output to the user². The goal of NLP is to produce correct text data that adds structure to unstructured data using linguistic knowledge¹. NLP’s added value lies in helping computers process text: building models from it and manipulating it algorithmically². When that structure establishes grammatical relationships between the components of the text, it is called syntactic; when it reflects the meaning of the text, it is called semantic¹. Syntactic information is essential for making the right connections between words in a piece of text; at the most basic level, it determines the grammatical function of a word¹.
An NLP system consists of a pipeline of components that each process the text in turn. Every component adds structure so that downstream processing becomes easier, for example by removing stopwords or replacing language-specific letters with their base forms (e.g. é to e). The earlier components handle different tasks than the later ones, which focus more on analysing concepts and relationships. The methods used range from rule-based methods, such as regular expressions, to statistical and machine learning models¹. At Brainial, we use a combination of rule-based methods and machine learning models.
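To make these early pipeline stages concrete, here is a minimal sketch in Python of the kind of preprocessing described above: accent normalization, rule-based tokenization with a regular expression, and stopword removal. The stopword list and function names are illustrative, not Brainial’s actual implementation.

```python
import re
import unicodedata

# A tiny illustrative stopword list; real pipelines use per-language lists
STOPWORDS = {"the", "a", "an", "of", "and", "or", "to", "in"}

def strip_accents(text):
    # Replace language-specific letters with their base forms (e.g. é -> e)
    return "".join(
        c for c in unicodedata.normalize("NFKD", text)
        if not unicodedata.combining(c)
    )

def tokenize(text):
    # Rule-based tokenization using a regular expression
    return re.findall(r"[a-z0-9]+", text.lower())

def preprocess(text):
    # Early pipeline stages: normalize, tokenize, remove stopwords
    return [t for t in tokenize(strip_accents(text)) if t not in STOPWORDS]

print(preprocess("The résumé of the supplier"))  # ['resume', 'supplier']
```

Each stage discards variation that does not carry meaning, so that later components (classification, similarity search) see cleaner input.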
In our case, an important component of the NLP pipeline is vectorization. What is vectorization? Word embeddings, or word vectorization, are an NLP technique that maps words or phrases from a vocabulary to corresponding vectors of real numbers, which are then used for word prediction and for measuring word similarity and semantics. The process of converting words into numbers is called vectorization³. The advantage for the user is that you don’t have to know the exact search term to find the relevant information you want. At Brainial, we use text similarity search for simplicity, and because it lets us extend vector search with exact string matching, so we can also match the user’s exact search input.
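The idea behind similarity search on word vectors can be sketched in a few lines. The vectors below are toy values chosen for illustration; in practice, embeddings are learned from large text corpora. Cosine similarity then ranks candidates by how close their vectors point in the same direction.

```python
import math

# Toy word vectors for illustration; real embeddings are learned from corpora
EMBEDDINGS = {
    "contract": [0.9, 0.1, 0.3],
    "agreement": [0.85, 0.15, 0.35],
    "banana": [0.05, 0.9, 0.1],
}

def cosine_similarity(a, b):
    # Angle-based similarity: 1.0 means the vectors point the same way
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query, vocab=EMBEDDINGS):
    # Return the vocabulary word whose vector is closest to the query's
    qv = vocab[query]
    candidates = [w for w in vocab if w != query]
    return max(candidates, key=lambda w: cosine_similarity(qv, vocab[w]))

print(most_similar("contract"))  # 'agreement'
```

Because “contract” and “agreement” have similar vectors, a search for one can surface the other, which is exactly why the user does not need to guess the precise term used in the tender documents.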
So how do we help? With the Brainial solution, users are able to instantly find any information they want by using so-called “Smart Search” (vector search) or exact search. Search results can be filtered on categories, documents, document types, tags, tasks, etc., because we categorize, label and classify the tender data during the initial analysis as part of the NLP pipeline. Stop wasting time and stop bothering your colleagues with questions about where to find information. Start exploring the Brainial Smart Search functionality by requesting a demo.
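Combining exact search with metadata filters can be sketched as follows. The document index, field names and snippets here are hypothetical, purely to show how categorized tender data makes filtered search possible; Brainial’s actual data model will differ.

```python
# Hypothetical index: each entry pairs a text snippet with metadata labels
# assigned during the initial analysis (document type, tags, etc.)
DOCUMENTS = [
    {"text": "payment terms 60 days", "doc_type": "contract", "tags": ["finance"]},
    {"text": "delivery schedule", "doc_type": "annex", "tags": ["logistics"]},
    {"text": "penalty clause for late payment", "doc_type": "contract", "tags": ["finance"]},
]

def search(query, doc_type=None):
    # First narrow by metadata filter, then do an exact (substring) match
    hits = [d for d in DOCUMENTS if doc_type is None or d["doc_type"] == doc_type]
    return [d["text"] for d in hits if query.lower() in d["text"].lower()]

print(search("payment", doc_type="contract"))
# ['payment terms 60 days', 'penalty clause for late payment']
```

In a full system, the substring match would be replaced or complemented by the vector similarity search described earlier, with the same metadata filters applied.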
Author: Fedor Klinkenberg is the co-founder and chief executive officer at Brainial. He leads the business strategy and the general and commercial teams. Fedor has more than 11 years of experience working at one of the fastest-growing tech companies in the Netherlands: Mendix. Fedor holds a Master of Science degree in Management of Innovation from Rotterdam School of Management, Erasmus University.