In this talk, Rsqrd welcomes Emad Elwany, CTO and Co-Founder of Lexion! He discusses how ML tooling has evolved over the lifespan of Lexion, and shares his findings on important considerations, common problems and their solutions, and how decisions about ML tooling change across the stages of a startup.
What is ML Tooling?
ML tooling is what it sounds like: tools used to aid and create machine learning solutions. This is a combination of different software, platforms, and ‘tools’ that can help you work with machine learning. Some examples are programming languages, packages, and cloud services.
Lexion is software that applies NLP to legal agreements to extract key commercial terms (e.g. renewal and expiration dates) as well as legal terms (e.g. termination rights, liability, and assignment) in a few minutes — work that may take weeks or months to do by hand.
While Emad felt very passionate about applying NLP in a business setting, there were quite a few challenges. One problem he encountered was that traditional NLP techniques were built for short utterances and web-style documents, whereas legal documents can run 200-300 pages and contain multiple agreements. There is also the problem of extracting not just explicit but also implicit information, and of appropriately interpreting domain-specific language.
To visualize the complexity and impact of the decisions regarding ML tooling, here is an overview of the pipeline from receiving a document and processing it:
To navigate these problems and the journey from idea to product, smart decisions had to be made regarding ML tooling at each and every node.
Emad and his team’s approach to building an ML-focused product went through two phases: a pre-MVP phase focused on the goals of an early-stage ML product, and a post-MVP phase focused on a different set of goals once the MVP was validated.
When creating the MVP, it is best to use tools that are easy to understand, set up, and deploy. The goal is to create a technically feasible, commercially viable product quickly. Choosing tools that fit these criteria will let you build the MVP as swiftly as possible and focus on getting it to work while the startup has limited resources.
After creating and validating the MVP, the goal changes: the focus shifts from putting together a new product to expanding on it. At this stage the aim is to scale model deployment and focus on the user experience, so it is best to use tools that are easy to integrate, scale, and configure.
Shifting Focus of Tooling Throughout the Model Cycle
The typical model cycle is the development process of a ML model broken down into different stages.
In each stage, the general focus changes from working with what you have and piecing together the bare minimum product, to working on the finer details and making the product robust.
| Pre-MVP | Post-MVP |
| --- | --- |
| Finding, cleaning, and annotating the data | Managing and protecting the data |
| Optimize for speed of results | Optimize for speed of experimentation |
| Optimize for shipping models | Optimize for operationalizing models |
| Does it work well enough? | Is it better? Why is it better? How is it better? |
| Optimize for speed of deployment | Optimize for scale of deployment |
| Bare minimum to ensure things are working | Invest in monitoring all aspects of the model |
This goes back to the evolving approach. The tooling you choose should help you realize the goals that align with where your product is in the general development process. Having these goals and shaping your decisions around them will help you seamlessly transition from pre-MVP development to post-MVP.
An area Lexion currently spends a lot of time on and that Emad is passionate about is model versioning. The concept is similar to code versioning for traditional software, but it’s more than that. Code versioning focuses on tracking changes in code, library dependencies, configs, and topologies. Model versioning shares those same features, but includes additional considerations such as training data, training parameters, model state, and hardware.
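To make the distinction concrete, here is a minimal sketch in plain Python of a model version record tracking the additional artifacts beyond what code versioning covers. The class and field names are hypothetical illustrations, not Lexion's actual schema:

```python
from dataclasses import dataclass, asdict
import hashlib
import json


@dataclass(frozen=True)
class ModelVersion:
    """Everything needed to identify one trained model (hypothetical schema)."""
    code_version: str          # git commit of the model/featurizer code
    dependency_versions: dict  # pinned library versions
    training_data_hash: str    # digest of the exact data and labels used
    hyperparameters: dict      # learning rate, epochs, early-stopping criteria, ...
    hardware: str              # e.g. "gpu:V100" -- can affect numerics

    def version_id(self) -> str:
        """Derive a stable identifier from all tracked fields."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()[:12]
```

Hashing the serialized record gives a stable identifier, so changing any tracked field — training data, parameters, or hardware — produces a new version id.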
Why Good Model Versioning is Important
Without proper model versioning, problems arise that affect users, product managers, and data scientists. A user can hit a regression they’ve never seen before. Product managers can see drops in performance and want to roll back. Data scientists can see varied results and want to know what changed. Good model versioning lets you find and fix the changes that impact the success of your product.
Types of Versioning
There are three types of versioning for ML models, each offering a stronger guarantee than the last.
1. Allows for short-term rollback/roll-forward.
2. Once you have a trained model, you can reconstruct the model. Allows for long-term rollback/roll-forward.
3. You can re-train a model that yields the exact same model you trained before. Allows for reproducibility and the ability to deal with training data corruption.
There are multiple artifacts that need to be versioned for each approach, such as model state, hardware, and model hyperparameters. Reproducing training goes a step further and also versions the training config (e.g. early-stopping criteria) and the training data (i.e. the data and its labels).
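As one illustration of versioning training data, a content fingerprint makes silent changes or corruption detectable. This is a hedged sketch, not the talk's implementation; the `fingerprint_dataset` helper and its `(text, label)` input format are assumptions:

```python
import hashlib


def fingerprint_dataset(examples):
    """Hash every (text, label) pair into one digest.

    `examples` is a hypothetical iterable of (text, label) string tuples.
    Sorting first makes the fingerprint independent of iteration order,
    and separator bytes keep adjacent fields from blending together.
    """
    h = hashlib.sha256()
    for text, label in sorted(examples):
        h.update(text.encode())
        h.update(b"\x00")
        h.update(label.encode())
        h.update(b"\x01")
    return h.hexdigest()
```

Storing this digest alongside a model version means that if the underlying data is corrupted or quietly edited, a re-computed fingerprint no longer matches.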
Versioning Best Practices
Now that the importance of versioning is established, how do you put it into practice? What are some best practices? There are many solutions that seem appealing but do not work in practice, such as only supporting the latest version, or not committing a new version until you’re positive it’s good. Either way, you won’t be able to iterate quickly, and this hinders the growth of the product.
The team found Metaflow to be a great tool that works well with their existing infrastructure. It’s important to find tools that fit what you already have, to ease development.
Importance of Versioning from the Beginning
The goal of an early-stage startup is to get a product shipped, so a heavy focus on infrastructure and versioning isn’t fully justified yet. Still, there are some elements worth noting that have helped Lexion in the development process.
Here are the investments that have paid off:
- Versioning all model state during packaging
- Versioning all data artifacts in the data store and making them immutable
- Versioning all code explicitly by keeping stable interfaces and supporting version upgrades to model/featurizer code
- Pinning major versions of dependencies
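One lightweight way to get immutable, versioned data artifacts (the second bullet above) is a content-addressed store, where each artifact's key is the hash of its bytes. A minimal sketch under that assumption — not Lexion's actual setup:

```python
import hashlib
from pathlib import Path


def store_artifact(store_dir: Path, data: bytes) -> str:
    """Write an artifact keyed by its content hash.

    Identical content always maps to the same key, and existing entries
    are never overwritten, which makes the store effectively immutable.
    """
    key = hashlib.sha256(data).hexdigest()
    path = store_dir / key
    if not path.exists():  # never rewrite an existing artifact
        path.write_bytes(data)
    return key


def load_artifact(store_dir: Path, key: str) -> bytes:
    """Read an artifact back and verify its hash to detect corruption."""
    data = (store_dir / key).read_bytes()
    if hashlib.sha256(data).hexdigest() != key:
        raise ValueError("artifact corrupted on disk")
    return data
```

Because the key is derived from the content, model code can pin the exact data it was trained on simply by recording the key.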
What’s most important on top of all of this is:
“Remember: we are building a whole user facing application on top of this, prioritizing when to invest here is critical”
The pipeline to create a product is very intricate and is bigger than just creating a ML model. Look at the bigger picture and decide which investments are more important than others.
While it’s very important to create this support system for an ML model, at the end of the day you’re building a user-facing app. There are multiple costs that will be incurred outside of any ML model, such as building APIs, security, and integrations. It is critical to prioritize your investment of time and resources based on what is important to the app. Remember to incorporate the cost of ML infrastructure in your business model, but as Emad says, "focus on creating a product that customers love!"
Cool Stuff to Check Out
Learn more about Lexion:
- Their paper "BERT Goes to Law School" was presented at NeurIPS 2019
- Check out their blog for cool articles on Law + ML
Interesting questions from the video:
- Being a SaaS product in the legal space, have you ever had any conflict with someone saying their documents are too sensitive and need to be trained in their own space and feel like they can’t work with Lexion? 32m 09s
- There are so many open source projects coming out everyday. How do you build a defensible moat to keep ahead of your competitors? 40m 36s
Some cool links:
- Federated Analytics: Collaborative Data Science without Data Collection
- How AI models were affected by our behavior changing during COVID
- Visual representation of limits of AI
All information and ideas presented in this post are that of the speaker and the talk.