23 August 2023
Under the Hood of Culture Lens: Embeddings
In today’s fast-paced digital landscape, understanding and staying ahead of emerging trends and consumer behaviors is crucial for businesses. At Quilt.AI, we’re harnessing the power of artificial intelligence to do just that.
Discover the AI technologies driving our apps, designed to unlock consumer insights.
With nearly ten million curated consumer experiences, Culture Lens categorizes these invaluable insights by context, subculture, and emotion. More than a text-based search engine, it is an intelligent tool that empowers you to explore consumers' experiences and run quick hypothesis tests with just a query.
This “Under the Hood” series covers the three cornerstone technologies that power the engine behind Culture Lens. We begin with the first: embeddings.
Imagine the challenge of sifting through countless consumer experiences in search of relevant insights.
Too often, insight lies beyond simple attributes such as objects, colors, or hashtags. Instead, it is found in the meaning and complex social norms that photographs encapsulate.
Using zero-shot multimodal image models, we embed nearly ten million carefully curated consumer photographs into vectors that capture their semantic essence at storage time. These models are trained on an expansive dataset of more than half a billion image–text pairs. They capture the essence of an image, from simple features such as colors, shapes, textures, and patterns to more intricate aspects such as photographic composition, angle, and aesthetic concepts. This is what allows us to capture the meaning behind consumer imagery.
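As a rough sketch of what this storage step looks like in code: each photograph is run through an encoder and stored as a unit-length vector. Here `embed_image` is a hypothetical stand-in so the example is runnable; a real pipeline would call a pretrained CLIP-style multimodal model instead.

```python
import zlib

import numpy as np

def embed_image(pixels: np.ndarray, dim: int = 512) -> np.ndarray:
    """Hypothetical stand-in for a CLIP-style image encoder.

    A production pipeline would run the photograph through a pretrained
    multimodal model; here we derive a deterministic pseudo-embedding
    from the pixel bytes so the sketch runs without a model download.
    """
    seed = zlib.crc32(pixels.tobytes())
    rng = np.random.default_rng(seed)
    vec = rng.standard_normal(dim)
    # Unit-normalize so vectors can later be compared by dot product.
    return vec / np.linalg.norm(vec)

# "Store" a tiny collection of images as rows of an embedding matrix.
images = [np.full((8, 8, 3), fill, dtype=np.uint8) for fill in (0, 128, 255)]
index = np.stack([embed_image(img) for img in images])
print(index.shape)  # (3, 512): one unit vector per image
```

At scale, the same idea applies: the embedding matrix simply has one row per photograph.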
When a user searches for a particular experience, we vectorize the text query using the same model and compare it against the ten million consumer images. Since each vector represents a unique combination of features, vectors that are close together are more similar than those farther apart.
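In vector terms, this comparison is typically a cosine similarity: with unit-length embeddings it reduces to a dot product, and higher scores mean semantically closer images. The vectors below are illustrative stand-ins, not outputs of our production model.

```python
import numpy as np

# Illustrative unit vectors standing in for stored image embeddings.
image_vectors = np.array([
    [1.0, 0.0, 0.0],   # e.g. a beach photograph
    [0.8, 0.6, 0.0],   # e.g. a lakeside picnic
    [0.0, 0.0, 1.0],   # e.g. an office interior
])

# The text query is vectorized with the same model, so it lands
# in the same space as the images.
query = np.array([0.9, 0.436, 0.0])
query /= np.linalg.norm(query)

# For unit vectors, cosine similarity is a plain dot product.
scores = image_vectors @ query
ranked = np.argsort(scores)[::-1]
print(ranked)  # indices of images, most similar first
```

Because text and images share one embedding space, no labels or hashtags are needed: proximity alone ranks the results.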
Leveraging a proprietary methodology of dimension reduction and semantic-similarity calculation, we identify the 100 images that most closely reflect the experience. This allows users to craft a consumer-space moodboard in seconds.
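Our exact methodology is proprietary, but the general shape of this step can be sketched with a generic PCA-style projection followed by a top-k cut; the data here is random toy data, and we keep the top 3 rather than the top 100.

```python
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 64))  # toy stand-in for image vectors
query = rng.standard_normal(64)               # toy stand-in for a query vector

# Generic PCA via SVD: project everything onto the leading components.
k_dims = 16
mean = embeddings.mean(axis=0)
_, _, components = np.linalg.svd(embeddings - mean, full_matrices=False)

def reduce_dim(X: np.ndarray) -> np.ndarray:
    """Project rows of X into the k_dims-dimensional principal subspace."""
    return (X - mean) @ components[:k_dims].T

low_images = reduce_dim(embeddings)       # (1000, 16)
low_query = reduce_dim(query[None, :])[0]  # (16,)

# Cosine similarity in the reduced space, then keep the top-k matches.
sims = (low_images @ low_query) / (
    np.linalg.norm(low_images, axis=1) * np.linalg.norm(low_query)
)
top_k = np.argsort(sims)[::-1][:3]        # production would keep the top 100
print(top_k)
```

Reducing dimensionality before the similarity pass shrinks both storage and per-query compute, which is what makes ranking millions of vectors feel instantaneous.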
In Part 2, we will look at the analytics we use to surface meaningful insights within each experience.