Digamma.ai Q&A Series: Vsevolod Dyomkin of (m8n)ware

1. Grammarly is a commercial product. While developing it, did you encounter any interesting challenges or obtain any interesting research results in the NLP area? Did any interesting academic results arise from the development process, or was the work purely an application of existing NLP algorithms?

Grammarly operates in a field that is both down-to-earth and backed by a history of relevant academic research. In addition to its core error correction engine, it relies on a comprehensive set of NLP tools such as language modelling, lemmatization, and parsing, to name a few. Our approach was to combine the best existing technologies with our internal “secret sauce” and fine-tune them to better suit our goals. This resulted in a number of interesting improvements, some of which were written up on our technical blog.
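
To give a flavor of one of these building blocks, here is a minimal, hypothetical sketch in Common Lisp (the language discussed later in this interview) of language modelling in the error-correction setting: an add-one-smoothed bigram model scores candidate rewrites, and the most probable candidate wins. All names here are invented for illustration, and this bears no relation to Grammarly's actual engine.

```lisp
;;; A toy sketch of reranking candidate corrections with a bigram
;;; language model. Purely illustrative; unrelated to Grammarly's engine.

(defparameter *unigram-counts* (make-hash-table :test #'equal)
  "Word -> count, gathered from a training corpus.")

(defparameter *bigram-counts* (make-hash-table :test #'equal)
  "(word1 . word2) -> count.")

(defun train (tokens)
  "Accumulate unigram and bigram counts from a list of word strings."
  (loop for (w1 w2) on tokens
        do (incf (gethash w1 *unigram-counts* 0))
        when w2
          do (incf (gethash (cons w1 w2) *bigram-counts* 0))))

(defun bigram-logprob (w1 w2)
  "Add-one smoothed log P(w2 | w1)."
  (log (/ (1+ (gethash (cons w1 w2) *bigram-counts* 0))
          (+ (gethash w1 *unigram-counts* 0)
             (hash-table-count *unigram-counts*)))))

(defun sentence-logprob (tokens)
  "Sum of bigram log-probabilities over a token list."
  (loop for (w1 w2) on tokens
        while w2
        sum (bigram-logprob w1 w2)))

(defun best-candidate (candidates)
  "Return the candidate token list the model deems most probable."
  (first (sort (copy-list candidates) #'> :key #'sentence-logprob)))

;; After training on correct text, the model prefers "an apple":
(train '("i" "ate" "an" "apple" "i" "ate" "an" "orange"))
(best-candidate '(("i" "ate" "a" "apple")
                  ("i" "ate" "an" "apple")))
;; => ("i" "ate" "an" "apple")
```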

Some were submitted to conferences, although none were accepted, probably due to our immaturity in academic publishing. Meanwhile, the most interesting ones were kept secret, as they were too tightly tied to our core algorithms.

Also, a novel dataset was recently released thanks to the efforts of the Grammarly NLP team, though I was not part of that development.

To sum up, we have definitely faced a lot of research challenges across the whole NLP technology stack, especially in error correction. We tried to address those problems with a product-oriented mindset by performing research that would be immediately relevant to improving the quality of Grammarly’s product. This was often quite successful, although most of these solutions will remain in-house, at least for some time.

2. You are working on a new project involving the use of NLP methods to detect ‘fake news’. What is your vision for the project and what are you looking to achieve?

The vision expressed by the project’s founder, Diane Francis, is to build tools that will help curate the world’s information. That’s obviously a long-term vision. Currently, we’re working on a way to robustly attribute individual claims. This involves, for example, finding relevant previously published materials and determining whether they support or contradict the statement at hand. We’d like to eventually put powerful fact-checking tools in the hands of both professionals and non-professionals to help them better spot plain lies or alterations of facts, thus contributing to disarming propaganda and other fraudulent activity in the media.
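
To make the retrieval step concrete, here is a rough, hypothetical sketch (in the same Common Lisp style as above, and not the project's actual pipeline): rank previously published passages by bag-of-words cosine similarity to a claim, leaving the support-or-contradict judgment to a downstream model. All function names are invented for illustration.

```lisp
;;; Hypothetical sketch of evidence retrieval for claim attribution:
;;; rank passages by bag-of-words cosine similarity to a claim.

(defun tokenize (text)
  "Split TEXT on single spaces into a list of downcased words."
  (loop for start = 0 then (1+ pos)
        for pos = (position #\Space text :start start)
        collect (string-downcase (subseq text start pos))
        while pos))

(defun bag-of-words (text)
  "Build a word -> count hash table for TEXT."
  (let ((bag (make-hash-table :test #'equal)))
    (dolist (word (tokenize text) bag)
      (incf (gethash word bag 0)))))

(defun cosine (bag1 bag2)
  "Cosine similarity between two word-count tables."
  (let ((dot 0) (norm1 0) (norm2 0))
    (maphash (lambda (w c)
               (incf norm1 (* c c))
               (incf dot (* c (gethash w bag2 0))))
             bag1)
    (maphash (lambda (w c) (declare (ignore w)) (incf norm2 (* c c)))
             bag2)
    (if (or (zerop norm1) (zerop norm2))
        0
        (/ dot (sqrt (* norm1 norm2))))))

(defun rank-evidence (claim passages)
  "Return PASSAGES sorted by similarity to CLAIM, most relevant first.
Recomputing bags inside the sort key is fine at toy scale."
  (let ((claim-bag (bag-of-words claim)))
    (sort (copy-list passages) #'>
          :key (lambda (p) (cosine claim-bag (bag-of-words p))))))

;; Usage:
(rank-evidence "the moon landing happened in 1969"
               '("apollo 11 landed on the moon in 1969"
                 "the stock market fell sharply today"))
;; => the Apollo passage ranks first
```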

3. You are a Lisp programmer and have released several open-source Lisp projects, including a Lisp NLP toolkit which aims to provide an extensible and comprehensive set of tools to solve NLP problems in Common Lisp. Why do you think Lisp is particularly suited to supporting language modelling experiments?

Well, first of all, I wouldn’t limit Lisp’s use in NLP just to language modelling (which is only one direction of computational linguistics research) — I use it across the whole stack. My preference for this language, despite its limited popularity, is due to several of its traits:

  • A superb interactive environment that is essential for productive research work, coupled with a performant runtime (actually the best among dynamic programming languages in terms of robustness and speed). I successfully used Lisp in production for CPU-intensive tasks at Grammarly and in other projects.
  • What’s very helpful in specialized domains such as NLP is that Lisp provides excellent facilities to programmatically express the knowledge of domain experts such as linguists. In this way, it empowers them to work directly with the NLP engine, improve it, and experiment with it, bypassing software developers (see the sketch after this list).
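
To illustrate that second point, here is a hypothetical sketch of the kind of macro-based DSL that Lisp makes cheap to build. It is not CL-NLP's actual API; the defrule form and the rule syntax are invented for illustration. A linguist states a rewrite rule declaratively, and a small engine applies it:

```lisp
;;; A hypothetical sketch (not CL-NLP's actual API) of a macro-based
;;; DSL that lets a linguist state rewrite rules declaratively.

(defparameter *rules* '()
  "Registered rules, each of the form (name pattern replacement).")

(defmacro defrule (name pattern arrow replacement)
  "Register a token rewrite rule. ARROW is the literal symbol =>,
kept purely so the rule reads naturally."
  (declare (ignore arrow))
  `(push (list ',name ',pattern ',replacement) *rules*))

;; The linguist writes rules, not engine code:
(defrule a-before-vowel ("a" :vowel-word) => ("an" :vowel-word))

(defun vowel-word-p (word)
  (find (char word 0) "aeiou"))

(defun match-token (pat word)
  "A pattern token is a literal string or a word-class keyword."
  (if (eq pat :vowel-word)
      (vowel-word-p word)
      (string= pat word)))

(defun apply-rules (tokens)
  "Rewrite TOKENS left to right, applying the first matching rule at
each position. Assumes pattern and replacement have equal length."
  (let ((out '()))
    (loop while tokens do
      (let ((rule (find-if (lambda (candidate)
                             (let ((pat (second candidate)))
                               (and (<= (length pat) (length tokens))
                                    (every #'match-token pat tokens))))
                           *rules*)))
        (if rule
            (destructuring-bind (name pat repl) rule
              (declare (ignore name))
              ;; Keywords in the replacement copy the matched word.
              (loop for r in repl
                    for w in tokens
                    do (push (if (keywordp r) w r) out))
              (setf tokens (nthcdr (length pat) tokens)))
            (push (pop tokens) out))))
    (nreverse out)))

(apply-rules '("i" "ate" "a" "apple")) ;; => ("i" "ate" "an" "apple")
```

The point is that the rule itself reads almost like linguistic notation, while everything around it is ordinary, inspectable Lisp that a linguist can evaluate and tweak interactively.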

4. Neil Lawrence has been quoted as saying that “NLP is kind of like a rabbit in the headlights of the Deep Learning machine, waiting to be flattened”. Do you think that the field of NLP has not been ‘revolutionized’ enough yet and the best is yet to come? If so, how do you see the field evolving over the next few years?

Fernando Pereira, a famous researcher who is one of the leads of Google’s NLP efforts, recently wrote a nice summary of the history of the NLP field and the role deep learning currently plays in it, which was kind of a reaction to a fervent rant by Yoav Goldberg, another influential NLP researcher. On the topic of the arrogance of deep learning people claiming to “have solved language”, I cannot add a lot to these discussions 🙂

From the point of view of substantially improving existing and proven NLP technologies, deep learning approaches haven’t yet achieved anything near what happened in sound processing or computer vision. However, the biggest game changer deep learning has brought is a vast expansion of the tasks that can now be solved with a sufficient level of quality and generality to be applicable in real-world settings. The downside is the seeming accessibility of the field to outsiders, which would actually be a good thing if it were really the case. But it’s not, and this is what Yoav tries to argue in his blog.

Speaking about the future, and from my experience in the industry, I can say that competitive and appealing NLP tools and products require a combination of many, if not all, NLP approaches, ranging from the often frowned-upon rule-based approach to bleeding-edge deep learning. A good example supporting this point, which I often cite, is the Gmail Smart Reply paper.

5. What organizations today do you believe are applying NLP methods in exciting ways?

Well, apart from the well-known academic research institutions, Google is always on the bleeding edge when it comes to any machine learning-related field, including NLP. Also, the Salesforce machine learning group is constantly producing new and practical results. As for the others, there are a number of vertical products that do advanced things with NLP, with Grammarly being an obvious example.

6. What industry sectors do you believe would gain the most benefit from applying NLP methods to the problems they currently face?

Well, I’d say that NLP is still heavily under-utilized in spaces where text is created: both in communication channels and in authoring tools. And that’s one of the reasons Grammarly’s mission statement reads “to improve people’s communication”. If you think of all email clients, chat apps, and text editors that still use almost no NLP technologies — apart from maybe a simple spell check — you can imagine how much can be done in this area.
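
For a sense of that baseline, a “simple spell check” amounts to little more than the following toy sketch (invented here for illustration): flag tokens missing from a dictionary and suggest nearby dictionary words by edit distance. Context, grammar, and style, everything beyond this, is where the untapped opportunity lies.

```lisp
;;; A toy edit-distance spell checker: roughly the level of language
;;; technology most authoring tools stop at today.

(defun levenshtein (a b)
  "Edit distance between strings A and B via dynamic programming."
  (let* ((n (length a))
         (m (length b))
         (d (make-array (list (1+ n) (1+ m)))))
    (dotimes (i (1+ n)) (setf (aref d i 0) i))
    (dotimes (j (1+ m)) (setf (aref d 0 j) j))
    (loop for i from 1 to n do
      (loop for j from 1 to m do
        (setf (aref d i j)
              (min (1+ (aref d (1- i) j))      ; deletion
                   (1+ (aref d i (1- j)))      ; insertion
                   (+ (aref d (1- i) (1- j))   ; substitution
                      (if (char= (char a (1- i)) (char b (1- j)))
                          0 1))))))
    (aref d n m)))

(defparameter *dictionary*
  '("communication" "improve" "people" "grammar" "editor"))

(defun suggest (word &key (max-distance 2))
  "Dictionary words within MAX-DISTANCE edits of WORD, closest first."
  (sort (loop for w in *dictionary*
              when (<= (levenshtein word w) max-distance)
                collect w)
        #'< :key (lambda (w) (levenshtein word w))))

(suggest "comunication") ;; => ("communication")
```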

With over a decade of AI experience, Digamma.ai’s team are your trusted machine learning consultants, partners, and engineers.
