Meet the Objective V3 Ensembles: Enhanced Intent Understanding
Every week is launch week here at Objective HQ. And this week, it’s ensemble launch week. We’re rolling out our V3 family of ensembles, which are now the default configuration for all new Indexes. The V3 ensembles introduce a broader foundational understanding of ‘exactness’ in user search queries. Early reports from pilot customers have all been great, and we’re excited to see you roll them out in your apps. Check out the Docs, and read on to learn all about the new ensembles!
If you’re new to Objective, good to meet you! You can grab an account and build a prototype for free on a wicked-powerful search platform. Go jump in!
Expanded foundational understanding for queries that need ‘exactness’.
Our mission here at Objective is to make search human. Part of this vision is providing a search abstraction layer that thinks the way you and your users do. Humans don’t have trouble finding both related and exact information. If you were asked to skim a chapter of a book and find any mentions related to a “king of the jungle” or any times a character notes that the temperature is exactly 32°C, you’d breeze through it.
A lot of search systems, however, do one or the other well but not both. Typically, neural and lexical search systems are entirely different stacks of technology, with different retrieval & ranking techniques.
This is where AI-Native search is powerful. Every search Index you create on Objective is an ensemble — rather than just running a single off-the-shelf model, Objective Indexes are combinations of models that are tuned and blended to optimize for understanding your datasets, your users, and the queries that they create. Just like you or I might.
The V3 ensembles introduce an enhanced understanding of search queries that straddle the neural and lexical worlds. A lot of common use cases need the ability to match a keyword: searching by ID, looking up part numbers or SKUs. But those use cases frequently need to pair that exactness with semantic understanding, so they can navigate brand terms that contain “common words”. A search query like “Better Sour” is both a combination of common words and a brand of sour gummies. The V3 ensembles let you leverage the ‘exactness’ understanding without sacrificing any of the additional power that comes with the neural understanding blended in.
You’ll see forms of blended “keyword search” on other platforms. Most frequently, a system will deliver two sets of results & scores to you, and leave it to your developers to blend & rank those results manually. That’s a pain, and it puts all of the hard work on your team.
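To make that concrete, here’s a rough sketch of the kind of blending work those platforms leave to your team. It uses reciprocal rank fusion, just one common fusion technique, with made-up document IDs; it isn’t Objective’s blending logic, only an illustration of the manual step you’d otherwise own.

```python
# Illustrative only: blending two separately-ranked result lists by hand,
# using reciprocal rank fusion (RRF). Document IDs are made up.

def reciprocal_rank_fusion(lexical_ids, neural_ids, k=60):
    """Combine two ranked lists of document IDs into one blended ranking."""
    scores = {}
    for ranking in (lexical_ids, neural_ids):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Two result sets a split lexical/neural stack might hand back for "Better Sour"
lexical_results = ["sku-1042", "sku-2210", "sku-0007"]
neural_results = ["sku-0007", "sku-1042", "sku-3315"]

print(reciprocal_rank_fusion(lexical_results, neural_results))
# -> ['sku-1042', 'sku-0007', 'sku-2210', 'sku-3315']
```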
Objective Hybrid Search does that work for you, returning intelligently-blended lexical & neural search results.
Upgrading to V3-powered Indexes
Upgrading your text & multimodal Indexes to the new V3 ensembles is as simple as creating a clone of your existing Index. V3 is now the default for all Indexes created with the text and multimodal index types. Check out the full behavioral details in the Docs.
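If you’re working against the HTTP API directly, creating an Index that picks up the V3 default looks roughly like the sketch below. The endpoint path, payload fields, and auth header are assumptions for illustration, not the exact Objective API; the Docs have the authoritative request format.

```python
# A minimal sketch of creating a new Index that picks up the V3 default.
# The endpoint, auth header, and payload fields below are illustrative
# assumptions, not the exact Objective API -- check the Docs for the real shape.
import requests

API_KEY = "sk-your-api-key"  # placeholder credential

response = requests.post(
    "https://api.objective.inc/v1/indexes",           # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},    # assumed auth scheme
    json={
        "index_type": "text",  # 'text' and 'multimodal' now default to V3
        "searchable_fields": ["title", "description"],  # hypothetical fields
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # inspect the response; see the Docs for its shape
```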
But as always - a big part of finding the right solution for your data and your users is experimenting! The V2 ensembles aren’t going away - to compare behavior between the two families, you can create neural-only V2 Indexes with the text-neural or multimodal-neural index types.
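For a quick spot-check, you can run the same query against a V3 Index and a neural-only V2 clone and eyeball the top hits. As above, the search endpoint, parameter names, and Index IDs in this sketch are assumptions for illustration; the Docs describe the actual query API.

```python
# A hedged sketch of spot-checking the same query against a V3 Index and a
# neural-only V2 clone. Index IDs, the search endpoint, and parameter names
# are assumptions for illustration -- see the Docs for the real query API.
import requests

API_KEY = "sk-your-api-key"      # placeholder credential
QUERY = "Better Sour"            # a brand name made of common words

indexes = {
    "v3 (text)": "idx-v3-example",                # hypothetical V3 Index ID
    "v2 (text-neural)": "idx-v2-neural-example",  # hypothetical V2 clone ID
}

for label, index_id in indexes.items():
    resp = requests.get(
        f"https://api.objective.inc/v1/indexes/{index_id}/search",  # assumed
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"query": QUERY, "limit": 5},
        timeout=30,
    )
    resp.raise_for_status()
    hits = resp.json().get("results", [])
    print(label, [hit.get("id") for hit in hits])
```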
To get a thorough evaluation of behavior between the two, check out Auto-Evaluations in Console! It lets you evaluate the relevancy of any index, and compare the results between any two. It’s a great way to get a more comprehensive sense of an index change than the “Looks Good to Me” we’re all guilty of sometimes.
We can’t wait to see what you build!