Digamma.ai Q&A Series: Karen Ouk of mode.ai


Digamma.ai Q&A Series: Interview with Karen Ouk, SVP Business Development at mode.ai, which builds AI-powered B2B2C visual chatbots for retailers.

1. mode.ai’s mission is to allow users to rediscover shopping in a more visual, conversational and personalized way. Why is this relevant now on the consumer front?

There are three components of our mission: visual search, conversational commerce and personalization.

Visual search allows us to offer features that purely text-based search cannot accomplish. For example, say a shopper is looking for a specific type of dress with a V-cut neckline and embellishment on the waistline. This sort of item might be something the shopper imagined or saw somebody wearing, but it would be very difficult to find using pure text-based search. Our technology allows users to upload an image of a dress they've seen in order to find visually similar items.

Another feature that mode.ai offers using visual search technology is the ability to provide style inspiration to users. With computer vision, we can find people who’ve worn outfits that have a similar look to a particular apparel or accessory item, and then provide users with inspiration of how other people are wearing that similar item.
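mode.ai has not published the details of its pipeline, but the general idea behind visual similarity search can be sketched simply: each product photo is converted into an embedding vector (typically by a convolutional network), and the uploaded image is matched against the catalog by vector similarity. The toy vectors and item names below are invented for illustration only.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query, catalog, k=3):
    """Rank catalog items by visual similarity to the query embedding."""
    scored = [(name, cosine_similarity(query, vec)) for name, vec in catalog.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Toy 3-d vectors standing in for CNN features of product photos.
catalog = {
    "v-neck-dress": np.array([0.9, 0.1, 0.2]),
    "crew-neck-tee": np.array([0.1, 0.9, 0.1]),
    "wrap-dress": np.array([0.8, 0.2, 0.3]),
}
query = np.array([0.85, 0.15, 0.25])  # embedding of the shopper's uploaded photo
print(most_similar(query, catalog, k=2))  # the two visually closest items
```

In practice the embeddings are high-dimensional and the nearest-neighbor search is approximate, but the ranking principle is the same.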

Next, we believe that conversational commerce — the intersection of messaging applications and shopping — is the way that people will shop in the future. Conversational commerce gives customers access to stores 24/7, and interacting with the mode.ai chatbot feels just like talking to a real sales associate. Millennials in particular, who are very active on messaging platforms but remain the least engaged consumers, will likely embrace this high-tech shopping experience on messaging platforms once it catches on in North America. We're already seeing this trend in China with WeChat, in Japan with LINE, and in South Korea with Kakao Talk.

Lastly, personalization is what consumers are demanding more and more today. Shoppers want services that cater to their preferences and their needs. Data has also shown that millennials and younger generations are willing to share information about themselves if they know that the results will be much more personalized.

mode.ai creates a highly personalized shopping experience that allows customers to save their size and style preferences conversationally, without having to fill out a cumbersome survey upfront. For example, our mode.ai bot will ask product personalization questions when it detects that a customer is shopping in a new category where they have not specified their preferences yet. The bot is able to learn and save what brands and styles a customer prefers, refining searches using visual search technology. Shoppers are also able to save items, which is not only convenient for the user, but also teaches the mode.ai bot what types of styles and items a customer likes.
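The flow described above — ask for a preference only the first time a shopper enters a category, then reuse the stored answer — can be sketched as a small preference store. mode.ai's real data model is not public; the class and field names here are hypothetical.

```python
class PreferenceStore:
    """Per-category shopper preferences, filled in conversationally over time."""

    def __init__(self):
        self._prefs = {}  # category -> {"size": ..., "brands": [...]}

    def needs_onboarding(self, category):
        """True if the shopper has never specified preferences in this category."""
        return category not in self._prefs

    def save(self, category, **prefs):
        """Record answers gathered from the chat conversation."""
        self._prefs.setdefault(category, {}).update(prefs)

    def get(self, category):
        return self._prefs.get(category, {})

store = PreferenceStore()
if store.needs_onboarding("dresses"):
    # In the real chat flow this would be a question posed to the shopper.
    store.save("dresses", size="M", brands=["rue21"])
print(store.needs_onboarding("dresses"))  # next visit to "dresses" skips the survey
```

The point of the design is that no upfront survey is required: preferences accumulate one question at a time, exactly when they become relevant.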

2. How do you anticipate that the experience you are providing to consumers is going to change the landscape for brands and retailers?

We believe that consumers are going to continue to demand a more personalized, conversation-driven experience through e-commerce. Brick and mortar is static; customers are moving online, and our bots allow retailers to engage with shoppers in a much more personalized way, based entirely on their unique needs and preferences.

The brands and retailers who adopt this innovative, conversational shopping experience over messaging platforms — creating a highly personalized shopping experience that meets their customers on the social media channels where they are already very active — are going to be the ones who succeed in today's extremely competitive industry.

We are using mode.ai technology to build chatbots for a number of the largest global brands including rue21, which we launched in April, and many other companies that we will be announcing over the next few months. mode.ai creates a commerce channel for companies over messaging that is very turnkey and low maintenance, and is fully managed by us. We also offer analytics that cannot be acquired through any other type of platform, giving retailers rich conversational insights with signals about user intent and preferences.

3. How do you see the fields of computer vision and conversational UX/UI impacting the retail industry over the next decade?

Conversational UX/UI will have a major impact on the retail industry. As people spend more time on messaging platforms, they are going to want more and more of the services that they use to be integrated within those platforms. Without computer vision technology, users are limited to pure text-based search, which comes with many limitations.

A lot of the retail and technology industry’s biggest players, including Amazon and Pinterest, have taken note of this and are starting to use computer vision. We expect there to be lots of additional interest in the future.

4. How did you come up with the idea behind mode.ai?

Our founder, Eitan Sharon, taught in this field when he was in academia at Brown University. His last startup, Videosurf, utilized computer vision and was subsequently purchased by Microsoft. Following Videosurf, Eitan wanted to apply computer vision technology in an entirely new way, and the idea received a lot of interest from Silicon Valley's top-tier VCs and angel investors, who decided to invest — and that is how mode.ai was born. Eitan likes to joke that mode.ai was a bot before bots became bots!

5. How will mode.ai grow over the next few years? What new features will you be adding and what will users be able to do with mode.ai that they won’t be able to with other applications?

Our mission is to provide the most personalized, visual, and conversational shopping experience driven by artificial intelligence, which is why we are always developing new features. The first feature we are working on is correlating the sizing of top brands, which will give customers a much clearer sense of how any given item they're interested in will fit.
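One common way to correlate sizing across brands — purely an illustration, as mode.ai's actual method is not described here — is to map each brand's size labels onto a shared measurement scale, then translate a shopper's known fit in one brand into the closest label in another. The brand names and measurements below are made up.

```python
# Hypothetical size charts: label -> bust measurement in inches.
SIZE_CHARTS = {
    "brand_a": {"S": 34, "M": 36, "L": 38},
    "brand_b": {"S": 33, "M": 35, "L": 38},
}

def equivalent_size(known_brand, known_size, target_brand):
    """Translate a shopper's fit in one brand to the closest label in another."""
    target_fit = SIZE_CHARTS[known_brand][known_size]
    chart = SIZE_CHARTS[target_brand]
    # Pick the target brand's label whose measurement is nearest the known fit.
    return min(chart, key=lambda label: abs(chart[label] - target_fit))

print(equivalent_size("brand_a", "M", "brand_b"))  # → M (35" is closest to 36")
```

With a store of per-category preferences, the bot could run this translation automatically whenever a shopper browses a brand they have never bought from.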

The second feature we are currently developing is a virtual try-on, which we demoed publicly at the NRF Big Show Innovation Lab in January this year and at VivaTech in Paris in July. Using computer vision technology and user sizing information gathered through conversational commerce, users will be able to upload a selfie to see how an item will look on them virtually, without ever having to try it on.

We have many interesting ideas about ways to enhance the mode.ai bot in the future, including eventually offering a virtual closet for customers based on the items they actually own. The mode.ai bot could send notifications to users with curated suggestions of what they should wear from their closet, or what they should buy from a retailer in order to complete an outfit. This could even be based on events that they have listed in their mobile calendar.


With over a decade of AI experience, Digamma.ai's team members are your trusted machine learning consultants, partners, and engineers.
