Attending the Haystack conference (day 2)


In my previous blog post I described the first day of the Haystack conference. In this post, as you might have guessed, I am going to tell you about the talks I attended on day 2. After a nice dinner, some beers, and a nice game of Bocce it was time to go to bed.

Day one of the conference / The game of Bocce

Addressing variance in AB tests: Interleaved evaluation of rankers

The second day started off with a very interesting presentation by Erik Bernhardson from Wikimedia. The goal of an AB test is to find the better of two ranking algorithms or queries. One way is to deploy two versions, or use a switch, and send some percentage of users to one version (A) and the rest to the second version (B). This approach comes with challenges, such as deployment issues, and previous results have shown that you need a lot of data to determine the better version.

Erik discussed another method called interleaving. If you’d like more detailed information about interleaving, please read the following paper: Large Scale Validation and Analysis of Interleaved Search Evaluation. Erik described two forms of interleaving:

  • Balanced interleaving -> from the two result sets, take the first item of list A as the first item, then take the first item of list B if it is not the same; move on to the next items until you have a complete list. Clicks are recorded against the original list the clicked item was taken from. This method can attribute a disproportionate number of clicks to one of the lists because of the way items are chosen.
  • Team draft interleaving -> the difference with balanced interleaving is that for each round a coin flip decides which list may pick first. A nice blog post that Erik pointed us to is from Netflix: Interleaving in online experiments at Netflix.
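To make the second variant concrete, here is a minimal sketch of team draft interleaving (my own illustration, not Erik's code): each round a coin flip decides which team picks first, each team takes its highest-ranked result not already in the interleaved list, and clicks are later credited to the team that contributed the clicked item.

```python
import random

def team_draft_interleave(list_a, list_b, length=10, seed=None):
    """Interleave two ranked lists using team draft interleaving."""
    rng = random.Random(seed)
    list_a, list_b = list(list_a), list(list_b)  # do not mutate the callers' lists
    interleaved, teams = [], []                  # result items and the team per position
    seen = set()
    while len(interleaved) < length and (list_a or list_b):
        # Coin flip decides which team may pick first this round.
        order = [("A", list_a), ("B", list_b)]
        if rng.random() < 0.5:
            order.reverse()
        for team, ranking in order:
            # Skip items the other team already contributed.
            while ranking and ranking[0] in seen:
                ranking.pop(0)
            if ranking and len(interleaved) < length:
                item = ranking.pop(0)
                seen.add(item)
                interleaved.append(item)
                teams.append(team)
    return interleaved, teams
```

Clicks on the interleaved list are then counted per team, and the ranker whose team collects the most clicks wins the comparison.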

At Wikimedia they use team draft interleaving. They use the _msearch API of Elasticsearch to send both queries at the same time. To collect the clicks the right way, the backend records for each hit which list it originated from.
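To illustrate that last point (my own sketch, not Wikimedia's actual code), both rankers' queries can go out in one round trip: the _msearch endpoint takes newline-delimited JSON with alternating header and query lines, and returns one result set per query, in order.

```python
import json

# Two candidate queries for the same user input (field names are made up).
query_a = {"query": {"match": {"title": "haystack conference"}}}
query_b = {"query": {"match_phrase": {"title": "haystack conference"}}}

def build_msearch_body(index, *queries):
    """Build an NDJSON _msearch body: a header line, then a query line, per search."""
    lines = []
    for q in queries:
        lines.append(json.dumps({"index": index}))
        lines.append(json.dumps(q))
    return "\n".join(lines) + "\n"  # the trailing newline is required

body = build_msearch_body("enwiki", query_a, query_b)
# POST this body to /_msearch with Content-Type: application/x-ndjson.
```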

This is most likely one of the first talks I am going to use for my own experiments. In short, a really nice talk; if you only have time for one talk, this is the one I would watch.

Solving for satisfaction: Introduction to click models

This talk by Elizabeth Haubert turned out to be a big challenge. Not due to her level of knowledge, not due to her presentation skills, but due to technical difficulties. It turned out that the projector no longer worked nicely with laptops over the long cable, so she had to present without a screen in front of her. Still, she managed to deliver a good presentation about using clicks to learn about your users. As I had already read a lot of articles about this topic, I just listened to her and forgot to take notes. Therefore, not a lot of notes. You can always wait for her presentation or read her excellent blog post.

What is learning to rank – blog by Elizabeth Haubert

Learning to rank search results – video by Jettro Coenradie and Byron Voorbach

Architectural considerations on search relevancy in the context of e-commerce

Interesting presentation by Johannes Peter from the MediaMarkt / Saturn group about their journey from a commercial search solution to a new solution based on Elasticsearch. Their complete solution runs on Docker / Kubernetes and makes heavy use of Apache NiFi to connect all sources with Elasticsearch.

What I like about their approach is the way they handle the user query. They take four steps to parse the user query into an optimized Elasticsearch query. In the first step, they remove stop words and apply stemming and lemmatization. The second step is about finding redirects if the user is searching for a category. After the redirect check, they look for campaign rules in the third step; a campaign rule can be in place for Black Friday or Easter. In the final query parsing step, they add contextual information based on chosen categories.
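The four steps could be sketched as a simple pipeline. This is my own hypothetical illustration of the idea, not MediaMarkt's implementation; all names and rules are made up.

```python
def normalize(query):
    """Step 1: remove stop words; stemming/lemmatization would also happen here."""
    stop_words = {"the", "a", "for"}
    return [t for t in query.lower().split() if t not in stop_words]

def find_redirect(tokens, categories):
    """Step 2: redirect if the user is really searching for a category."""
    phrase = " ".join(tokens)
    return categories.get(phrase)  # e.g. a category landing page URL, or None

def apply_campaign_rules(tokens, campaigns):
    """Step 3: pick up active campaign rules, e.g. for Black Friday or Easter."""
    return [c for c in campaigns if c["trigger"] in tokens]

def add_context(es_query, chosen_categories):
    """Step 4: add contextual filters based on categories already chosen."""
    if chosen_categories:
        es_query.setdefault("filter", []).append(
            {"terms": {"category": chosen_categories}}
        )
    return es_query
```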

When the query is executed, they re-rank based on clicks or popularity, and after that on stock information or other information that is added later on. In this step, they can also change the order of the facets based on the chosen or available categories.

Improving Search Relevance With Numeric Features in Elasticsearch

This was the first presentation I attended about new features of Elasticsearch 7. It was given by Mayya Sharipova from Elastic. She told us all about three new features: some already available in 7.0, others coming in 7.1.

Rank feature – This is both a mapping type and a query type. The goal is to replace some of the ways the function score query was used, but in an optimized way. It was created to implement popularity fields. The query supports a number of functions that you can use to calculate a boost from the numeric field. More information in this blog: Easier relevance tuning in Elasticsearch 7.
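For example, a popularity field could be mapped and queried along these lines (a sketch based on the Elasticsearch 7 documentation; the index layout and field names are made up):

```python
# Mapping: a rank_feature field holding a popularity score.
mapping = {
    "mappings": {
        "properties": {
            "pagerank": {"type": "rank_feature"}
        }
    }
}

# Query: combine the text match with a rank_feature boost.
query = {
    "query": {
        "bool": {
            "must": {"match": {"title": "haystack"}},
            "should": {
                "rank_feature": {
                    "field": "pagerank",
                    # saturation is one of the supported boost functions,
                    # next to log and sigmoid.
                    "saturation": {"pivot": 10},
                }
            },
        }
    }
}
```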

Distance feature query – A better way to query with distances in mind: a more optimized query for working with distances on date fields and geo points. More information in the Elasticsearch documentation.
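A sketch of a distance_feature query on a date field (field names are made up; the score decays with distance from the origin and is halved at the pivot):

```python
# Boost documents whose publish_date is closer to "now".
query = {
    "query": {
        "bool": {
            "must": {"match": {"title": "conference"}},
            "should": {
                "distance_feature": {
                    "field": "publish_date",
                    "origin": "now",
                    "pivot": "7d",  # at this distance the boost drops to half
                }
            },
        }
    }
}
```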

Vector fields – Can deal with word embeddings, as dense_vectors and sparse_vectors, from within your document store or inverted index. Especially in the field of learning to rank, the use of vectors is very interesting. I cannot wait to give it a spin. More information can be found here. If you're looking for samples, check this GitHub repo: https://github.com/jtibshirani/text-embeddings
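A sketch of how a dense_vector field can be scored against a query embedding, following the pattern in the linked demo repo (exact availability of the script_score vector functions depends on the 7.x version; the field name and dimensions are made up):

```python
# Mapping: store a document embedding as a dense_vector.
mapping = {
    "mappings": {
        "properties": {
            "text_embedding": {"type": "dense_vector", "dims": 4}
        }
    }
}

# Query: score every document by cosine similarity to the query vector.
query = {
    "query": {
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                # cosineSimilarity can be negative, so add 1.0 to keep
                # scores non-negative.
                "source": "cosineSimilarity(params.query_vector, doc['text_embedding']) + 1.0",
                "params": {"query_vector": [0.1, 0.2, 0.3, 0.4]},
            },
        }
    }
}
```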

Natural Language Search with Knowledge Graphs

For the past few months, I have been boosting my NLP knowledge, so I was very interested in this talk by Trey Grainger from Lucidworks. Trey is a very energetic presenter; it was hard to keep up with notes. He started out explaining the difference between an ontology and a knowledge graph. If I understood correctly, if you compare an ontology to a class, then the knowledge graph would be the object instantiation of that class. I also liked the way he looks at unstructured text, which actually has a lot of structure: it can contain different kinds of words, words that strengthen the meaning of other words, and references to other articles or entities.

If you can construct a semantic knowledge graph from your text, you can understand your text a lot better. He mentioned some ways of doing this with Solr components. Need more time to dive into this subject.

Search with Vectors

This was my final presentation of the conference, presented by Simon Hughes from dice.com. He started out with very familiar topics: using clicks to influence search results. He continued with extracting concepts from strings by comparing string vectors. This feels like embeddings, right? He then moved on to distributional approaches to word meaning, with a reference to the following article: A review of the recent history of natural language processing.
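Comparing string vectors comes down to something like cosine similarity between embedding vectors. A generic sketch (not Simon's code, and the word vectors are made up):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy embeddings: related concepts point in similar directions.
java = [0.9, 0.1, 0.2]
jvm = [0.85, 0.15, 0.25]
cooking = [0.1, 0.9, 0.8]

assert cosine_similarity(java, jvm) > cosine_similarity(java, cooking)
```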

Suddenly I realized I had lost track somewhere; tough theory for the last talk, at least for me. Still a very nice presentation. I will watch this one again when the video becomes available.

What a great conference

That is it; what a great conference it has been. I really liked meeting a lot of people I only knew via the Relevance Slack channel. And of course, the quality of all the talks was really great. I hope to be there again next year.
