By 2018, 50,000 gigabytes of data will be created per second. A significant amount of that data will be stored in corporate server farms. A report from IDG found that “[m]anaging unstructured data is growing as a challenge – rising from 31 percent in 2015 to 45 percent in 2016.” IDC’s Digital Universe report found that “the amount of data stored in the world’s IT systems” doubles every two years. On one hand, many companies are eager to start confronting the challenge of big data. But according to an IDG Enterprise report, 90% of the companies surveyed reported running into major problems when implementing or developing their big data initiatives. No wonder an October 2016 report from Gartner found that most companies that attempted a big data project remained stuck in the pilot stage. The challenge of what to do with big data is daunting for many companies. However, machine learning has an important role to play in “solving” big data. Read More
Digamma.ai CEO Q&A Series: Jonas Cleveland, CEO of COSY
What transformative effects do you intend COSY to have on the retail sector?
We describe COSY as an aisle intelligence company that’s using machine vision and AI to improve retail execution and inventory productivity for our customers in retail stores and warehouses. The major trend occurring in retail today is the evolution of the retail store floor into a distribution center. This is something that COSY has been talking about for some time. For consumers, this means being able to order online, show up to the store and grab what you ordered plus other things, such as fresh produce.
For retailers, this means that they need more stores so that they can reach more people efficiently. Today we see stores like Target moving into more urban environments. We also see many stores closing down as the overhead of these stores is too high.
There is also the issue of an inefficient use of space. So, really, what COSY enables is the ability to optimize the way this real estate is used. Essentially, being able to optimize how you place departments, products and organize them on the store floor to drive revenue higher for retailers. Read More
Our latest event, AI’s Potential: From Policy to Posterity, featured an experienced panel of AI and machine learning specialists. Our panelists included:
• Douglas Bemis, CTO and Co-Founder at Uber AI Labs
• Gus Katsiapis, Senior Staff Software Engineer at Google
• Prateek Joshi, Founder at Pluto AI
• Christian Reilly, Co-Founder of MedNition
The event featured wide-ranging discussions on the current state of artificial intelligence infrastructure and where we need it to be in the future in order to truly realize its potential.
Our team of machine learning consultants at Digamma believes that we are on the brink of a new AI era in which emerging AI technologies will quickly evolve, mature and require the creation of a new, sophisticated technological infrastructure. The AI we have now is emergent and not fully production ready—it is primarily “custom craft”, not the “AI factory line” that we need to truly scale AI technologies. In light of this, we asked our panelists to provide their perspective on the role of hardware and processing power in enabling the growth and evolution of AI technologies. Read More
Digamma.ai CEO Q&A Series: Interview with Kieran Snyder, co-founder of Textio
1. You previously worked at Microsoft and Amazon and have a PhD in Linguistics and Cognitive Science. Today, you and your team run Textio, an augmented writing platform that has been used by companies such as Atlassian, NVIDIA, Square, Starbucks, Twitter and Vodafone. How did you and your co-founder and CTO Jensen Harris come up with the idea for Textio back in 2014?
We started Textio with a very simple vision: imagine that every time you wrote something, you knew exactly who was going to respond. Now imagine you had the data to tell you why someone would respond. If you know who will respond, and why, you can change your approach to get a different result.
Jensen and I founded Textio together, though we come from very different backgrounds. The union of those backgrounds makes Textio what it is today.
My background is in the measurement of language. I was a Linguistics and Math major in college, and my Ph.D. is in natural language processing. I spent around a decade leading natural language and search at Microsoft and Amazon. Jensen’s background is in user experience. He designed the first ever UI for e-mail in the form of Outlook. He led the design and implementation of the core Office UI in Word, PowerPoint, and Excel. Where my experience has always been in measuring the impact of language in software, his has been in building enterprise software that’s usable by a billion people. Read More
Digamma.ai CEO Q&A Series: Interview with Wout Brusselaers, CEO of Deep 6 AI
1. Deep 6 AI recently won the Enterprise and Smart Data category at SXSW’s accelerator competition and participated in the inaugural SXSW Connect to End Cancer event. What role is Deep 6 taking in the fight to end cancer?
We’re not doing any cancer research ourselves. However, Joe Biden, our former Vice President, actually pitched the purpose of our company himself when he was on stage at SXSW. He mentioned that in order for the mutual effort to cure cancer to have a fighting chance, we need to increase the number of patients with cancer who participate in clinical trials. Today, that stands at 4%—only 4% of cancer patients actually enroll in clinical trials. To find a cure for cancer, it is widely estimated that this number needs to grow to at least 25%, and ideally 50%. Most patients don’t know about clinical trials.
Finding the appropriate clinical trial they are eligible for requires an in-depth, professional understanding of their specific condition, sometimes down to the molecular level. This requires a lot of research, expertise and time. We cannot expect patients to carry this burden to figure out which trial to enroll in, from possibly hundreds available to them, and reach out to all the pharmaceutical companies, the CROs, or the hospitals involved. But trial sponsors or sites also don’t have the tools to look into patient data and quickly assess whether a patient will be a suitable candidate for a given trial. So, that is where artificial intelligence can really help and perform that matching, reducing the time it takes from months to as little as 10 seconds.
After our SXSW presentation, a woman who had Stage 3 breast cancer came up to me and explained how she was looking for a trial to help fight her cancer. Her husband was an oncologist, she was highly educated, and she knew her way around medical data, but she was still overwhelmed trying to find a trial. She said that when she keyed in her basic characteristics she found 140 trials online. She then had to narrow it down by reading through them and reaching out to the different PIs and sponsors. One sent her to a website where she had to take a survey, then a phone interview, and then weeks later they told her that she was not eligible. And she doesn’t even know why. It’s a nightmare. We want to help change that. Read More
Digamma.ai Q&A Series: Interview with Karen Ouk, SVP Business Development at mode.ai, which builds AI-powered B2B2C visual chatbots for retailers.
1. mode.ai’s mission is to allow users to rediscover shopping in a more visual, conversational and personalized way. Why is this relevant now on the consumer front?
Visual search allows us to offer features that purely text-based search cannot accomplish. For example, say a shopper is looking for a specific type of dress with a V-cut neck and embellishment on the waistline — this sort of item can be something you imagine or saw somebody wearing, but would be very difficult to find using pure text-based search. Our technology allows users to upload an image of a dress they’ve seen in order to find visually similar items.
Another feature that mode.ai offers using visual search technology is the ability to provide style inspiration to users. With computer vision, we can find people who’ve worn outfits that have a similar look to a particular apparel or accessory item, and then provide users with inspiration of how other people are wearing that similar item.
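The visual search described above is commonly implemented by embedding each catalog image as a feature vector (e.g. from a convolutional network) and ranking items by similarity to the query image's vector. The sketch below is a hypothetical illustration with toy hand-written vectors, not mode.ai's actual system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy catalog: in practice these would be embeddings from a vision model.
catalog = {
    "v_neck_dress": [1.8, 0.2, 0.6],
    "plain_tshirt": [0.1, 0.9, 0.2],
    "embellished_dress": [0.8, 0.2, 0.4],
}

# Embedding of the shopper's uploaded photo (toy values).
query = [0.9, 0.1, 0.3]

# Rank catalog items by visual similarity to the query image.
ranked = sorted(catalog, key=lambda k: cosine_similarity(query, catalog[k]),
                reverse=True)
print(ranked[0])  # the most visually similar item
```

In production, the linear scan over the catalog would be replaced by an approximate nearest-neighbor index so that matching stays fast at catalog scale.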
Next, we believe that conversational commerce — the intersection of messaging applications and shopping — is the way that people will shop in the future. Conversational commerce gives customers access to stores 24/7, and the integration of the mode.ai chatbot is just like talking to a real sales associate. Millennials in particular, who are very active on messaging platforms but remain the least engaged consumers, will likely embrace this high-tech shopping experience on messaging platforms once it catches on in North America. We’re already seeing this trend in China with WeChat, Japan with LINE, and in South Korea with Kakao Talk. Read More
Digamma.ai CEO Q&A Series: Interview with Jarrod Wolf, co-founder of AddStructure
1. How are you looking to fundamentally change the customer shopping experience through Addstructure?
With our technology we’re hoping to make the shopping experience much more seamless. Imagine looking at your phone and speaking to it. As you’re speaking, the product mix being displayed to you updates in real time. You can say something like, “I’m looking for a TV maybe like between 40 and 50 inches, around $600 and that has 3 HDMI ports.” And, as you’re speaking, the products being shown to you are actually updating with that conversational memory.
Next, you can imagine that technology like this could also enable a much better grocery cart building experience. It’s difficult to build a grocery list because you’re building a cart of 50 or 60 products. With our technology you can have your phone in your hand and say, “I’m looking for milk. Actually I want that to be organic milk. I want some of the yogurt. I want some sourdough bread. I want all the ingredients to make chicken enchiladas.” As you’re speaking, the entire list is just building itself. So, it’s a much faster, natural experience than the experience you have today. Read More
1. Verdigris provides real-time energy intelligence for facilities managers, enabling them to react faster with device-level monitoring and real-time alerts. What are the implications of your company’s technology on the building industry and to a larger extent, the environment?
We target mission-critical facilities like distribution centers, factories, and even hotels because they are sources of major power consumption and often struggle to mitigate their energy usage. Providing them with a system that allows the building to communicate with its facilities managers transforms the way we conceptualize buildings. In our paradigm, you have buildings taking care of people instead of people taking care of buildings. The implication of our technology for the future of the building industry, and for the environment as a whole, is sustainability through a better understanding of how and when a building consumes electricity. We have created a system that drives down utility costs by reducing energy consumption and avoids the operational costs of equipment failures. This technology will give the building industry a powerful tool to take a step in the right direction, one that safeguards our environment. Read More
Facebook Messenger chatbots have major potential, even if the field is a relatively nascent one. With Facebook launching Discover, a hub inside Messenger for discovering new and interesting chatbots to message with, there’s no excuse not to try out a new chatbot this summer — especially a food-related one. From analyzing your receipts to providing awesome restaurant recommendations, the following list represents a veritable freshman class of powerful, value-adding food and restaurant AI chatbots.
Lunchcat, created by the machine learning consulting firm Digamma.ai, is an experimental chatbot that helps you and your friends split lunch costs. Simply type in how many people are splitting and what the total bill was, and Lunchcat will instantly tell you everyone’s share and tip amount.
Lunchcat’s coolest feature lies in its ability to analyze receipts. Simply upload a photo of your receipt and Lunchcat will automatically split your bill for you, no extra information needed.
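The even-split arithmetic behind a bot like Lunchcat can be sketched in a few lines. The tip rate and rounding behavior below are assumptions for illustration, not Lunchcat's actual logic:

```python
def split_bill(total, num_people, tip_rate=0.18):
    """Split a bill evenly among num_people and compute per-person tip.

    Assumed behavior (hypothetical, not Lunchcat's real rules):
    tip is a flat percentage of the pre-tip total, and amounts are
    rounded to the nearest cent.
    """
    tip_total = round(total * tip_rate, 2)
    per_person = round((total + tip_total) / num_people, 2)
    tip_per_person = round(tip_total / num_people, 2)
    return per_person, tip_per_person

# Four people splitting an $80 bill with an 18% tip.
share, tip = split_bill(80.00, 4)
print(share, tip)  # 23.6 3.6
```

A production bot would want to handle remainders that don't divide evenly into cents (e.g. assign the leftover penny to one diner) rather than rounding each share independently.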