Companies like Scale.ai (recently valued at $7B+) were able to build massive businesses around data cleaning & labeling for AI: this is a human-intensive task, and companies flocked to services that let them get this value quickly.
Some of the other startups in this space include Clarifai, CloudFactory, LabelBox and Sama (now with offices in Montreal!), and among the majors we
have Amazon's SageMaker Ground Truth, as well as Google's and Azure's labeling tooling.
While some of these providers let you bring your own team of labelers, the real advantage (and business model) is in their workforce: they help teams scale the work by parallelizing the labeling tasks across a pool of humans. A lot of the peripheral value is in the data governance and collaboration features (which makes sense, given projects are often outsourced).
To my knowledge, most of these platforms started with image labeling and gradually added workflows to support other types of content (videos, text); in any case, most of them now provide labeling capabilities across text, video and images (with varying degrees of AI-assistance to accelerate the process).
It's undeniable that a lot of innovation in areas like self-driving cars & drones would not have happened so quickly without platforms like Scale.
However, I don't see any evidence that these labeling solutions have achieved meaningful adoption in conversational AI, or that they've been leveraged to accelerate its development.
The key limitation of these platforms (and a very critical one) is that they all expect a pre-defined set of labels to be provided ahead of time. This makes sense for image-labeling use-cases where the set of things you want to label is either very specific or very finite ("cars", "buildings", etc.), and for simple text-classification problems (when you want to bucket text into "negative" vs. "positive" sentiment, for example). These also happen to be use-cases where domain expertise isn't necessarily needed, and where the labeling can therefore be easily outsourced.
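To make that contrast concrete, here's a minimal sketch (using scikit-learn, purely illustrative and not tied to any of the platforms above) of the fixed-label workflow they assume: the label set is decided up front, and the model's only job is to bucket new text into one of those labels.

```python
# Minimal sketch of the "pre-defined labels" workflow: the label set is fixed
# ahead of time, and the classifier just assigns new text to one of them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples; in practice these would come from outsourced labelers.
texts = [
    "I love this product, works perfectly",
    "Terrible experience, it broke after a day",
    "Amazing support, very happy",
    "Waste of money, very disappointed",
]
labels = ["positive", "negative", "positive", "negative"]  # label set known up front

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["the service was fantastic"]))  # -> ['positive']
```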
On the other hand, the next generation of conversational AI use-cases will need to capture hundreds (or even thousands) of different domain-specific customer intents: it’s impossible to know what those are ahead of time.
The hardest part of conversational AI isn’t the labeling, but discovering and organizing the intents that can be trained from the data in the first place.
Figuring out the right "labels" (or intents) that will constitute the core of your conversational AI's NLU is as much an art as a science: it requires a combination of domain expertise, linguistics, and insight into the way these intents will be applied within the AI's business rules.
In my experience, this intent discovery can't be done "top-down" if you're looking to achieve NLU with real depth and accuracy: rather, it requires an iterative and continuous process that will gradually uncover the information model "bottom-up" from the raw data, and improve over time.
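Here's a rough sketch of what one iteration of that bottom-up process can look like: embed raw utterances, cluster them, and let a domain expert inspect and name the clusters as candidate intents. The embedding model and cluster count are illustrative assumptions, not a prescription; the real process is iterative and human-in-the-loop, but the shape (cluster, inspect, name, refine) is the same.

```python
# Sketch of bottom-up intent discovery: let candidate intents emerge from raw
# utterances instead of defining labels ahead of time.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

utterances = [
    "where is my order",
    "my package hasn't arrived yet",
    "how do I reset my password",
    "I can't log into my account",
    "cancel my subscription please",
    "I want to stop my plan",
]

# Embed the raw text, then cluster; cluster count here is an arbitrary guess.
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(utterances)
clusters = KMeans(n_clusters=3, random_state=0).fit_predict(embeddings)

# Group utterances by cluster so a human can review and name each candidate intent.
for cluster_id in sorted(set(clusters)):
    print(f"candidate intent #{cluster_id}:")
    for text, c in zip(utterances, clusters):
        if c == cluster_id:
            print("  ", text)
```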
Most teams use Excel to sort, organize and label their raw data today.
That tells me the types of labeling tools listed above are not providing the necessary AI-assisted data engineering and modelling capabilities required to train the next level of natural language understanding...
... otherwise all big brands investing in conversational AI experiences would have simply outsourced the NLU to Scale.ai, and chatbots wouldn't be saying "I didn't understand that" quite as much :)
HumanFirst is like Excel, for Natural Language Data. A complete productivity suite to transform natural language into business insights and AI training data.