Excellent! You have mastered the art of creating basic neural search applications using the Jina Ecosystem. Now, it's time to venture beyond the comfort of your local machine and think about creating scalable production-grade applications.
This section will walk you through the prerequisites for building scalable, multi-user applications that can handle significant traffic. Here comes the final phase!
Lean and efficient solutions are the result of clean and efficient code. You often have to look back at your code, and if it doesn’t follow standard conventions, you might not recall why you wrote a particular line the way you did.
Writing clean and efficient code not only builds good habits but also ensures others in the community can understand your code when you make it public on GitHub or push your Executors to Jina Hub. Hence, we have outlined the best practices for writing clean code with Jina in this document.
So far, you have learned how to manipulate different data types using DocArray and have built search applications involving text and images. The interactive tutorials in this module will teach you how to build neural search applications using more complex data types:
Sometimes you like a particular song and want to find similar ones, but you can't listen to every song in an online music store to check whether it resembles the one you liked. In this Jina-based search example, you can input an audio file and get similar audio files as output. Follow this interactive tutorial to learn how we built it from scratch.
When you type "What is a Document in Jina" into Google Search, you get results in the form of text along with recommendations of YouTube videos related to your query. Thanks to the latest advances in NLP, search systems are now smart enough to search through video given a text query. This tutorial will teach you how to build an intelligent Q&A system for video content.
Similar to other data types, the 3D mesh search pipeline includes loading, encoding, indexing, and querying the data. In this tutorial, we will walk you through the process of building a 3D-mesh search system capable of retrieving similar meshes given a 3D mesh as an input.
In this tutorial, you'll learn to build a miniature version of SQL for filtering and manipulating tables by leveraging Jina’s neural search capabilities.
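The SQL-like filtering idea above can be sketched in plain Python. This is a minimal, self-contained illustration of filtering table rows with a query dictionary, in the spirit of a document-oriented query language; it is not DocArray's actual API, and all names (`match_row`, `filter_rows`, the `$eq`/`$gte`/`$lte` operators, the sample data) are illustrative.

```python
# Minimal sketch of SQL-like filtering over rows of a table using a
# {field: {operator: value}} query dict. All names are illustrative.

def match_row(row, condition):
    """Check one row against every (field, operator, value) clause."""
    for field, ops in condition.items():
        value = row.get(field)
        for op, target in ops.items():
            if op == "$eq" and not value == target:
                return False
            if op == "$gte" and not value >= target:
                return False
            if op == "$lte" and not value <= target:
                return False
    return True

def filter_rows(rows, condition):
    """Return the rows that satisfy all clauses (an implicit AND)."""
    return [r for r in rows if match_row(r, condition)]

songs = [
    {"title": "A", "bpm": 120, "genre": "pop"},
    {"title": "B", "bpm": 90,  "genre": "jazz"},
    {"title": "C", "bpm": 128, "genre": "pop"},
]

# Roughly: SELECT * FROM songs WHERE genre = 'pop' AND bpm >= 125
hits = filter_rows(songs, {"genre": {"$eq": "pop"}, "bpm": {"$gte": 125}})
```

A real implementation would add more operators and push the filtering down into the storage backend, but the query-dict shape stays the same.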
By now you are familiar with building search applications using Jina AI, and it's time to scale those applications to make them production-ready. This is where document stores come in: a document store acts as persistent storage for Documents. Jina integrates seamlessly with most well-known databases, including Weaviate, Elasticsearch, Qdrant, and SQLite.
Compared to in-memory storage, an external store offers longer persistence and faster retrieval at scale. Nevertheless, a DocumentArray backed by an external store looks and feels almost the same as a regular in-memory DocumentArray, which lets users switch easily between backends under the same DocArray idiom.
Check out the following blog posts and documentation for more information on document stores for Jina AI applications.
Jina leverages the power of deep learning models for its search systems. Running these models requires substantial resources and is often slow. To speed up development and make processing more efficient for your application, there are some modifications you can make on top of Jina's Executors.
Monitoring allows you to identify and diagnose problems early via performance data and enables you to optimize and improve your application before errors occur. In Jina AI search applications, many different components can be monitored: a Jina Flow exposes several core metrics that let you look deeper into what is happening inside it.
Metrics allow you to monitor the overall state of your Flow, detect performance bottlenecks, and alert your team when some component of your Flow is down. To set up monitoring in Jina, we use Prometheus and Grafana.
Check out the monitoring blog post and documentation for more information.
By now, you know everything about creating small-scale, single-user Jina AI applications, but the Jina ecosystem is not limited to that. Let's look at how to make your applications scalable and production-ready so that they run the same regardless of your platform.
The simplest way to either prototype or serve your application in production is to run your Flow with Docker Compose. To learn more about how to run Jina with Docker Compose, check out the following documentation.
Jina natively supports deploying your Flow and Executors on Kubernetes. To learn more about how to run Jina with Kubernetes, check out the following documentation.
Deploy on Cloud with JCloud
JCloud simplifies deploying and managing Jina Flows in the cloud with minimal changes to your application code. It lets you focus on the things that matter and takes the hassle out of deployment and hosting.
Check out the blog post for step-by-step instructions for deploying a Jina application on JCloud:
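At a high level, a JCloud deployment is driven by the `jc` CLI. The transcript below is an illustrative session, not a complete guide: it assumes you have a Jina AI account and a `flow.yml` describing your Flow, and the flow ID placeholder is filled in from the output of `jc deploy`.

```shell
# Illustrative JCloud session; requires a Jina AI account.
pip install jcloud
jc login             # authenticate via the browser
jc deploy flow.yml   # deploy the Flow defined in flow.yml
jc status <flow-id>  # check the deployment (id printed by `jc deploy`)
```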
By now, you have a working application and would like to improve its quality by making the results more relevant to your use case. You have already figured out the best deep learning model for your application, and now it's time to make it better!
Finetuner is a product in Jina's ecosystem that tunes the weights of any deep neural network to generate better embeddings for search tasks. Check out the documentation to understand how Finetuner works.
Here is a step-by-step guide to fine-tuning your deep learning models.
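The shape of a Finetuner run looks roughly like the sketch below. It is based on Finetuner's high-level cloud API, so it needs a Jina AI account to run; the backbone name, the training-data file, and the hyperparameter values are all illustrative, not recommendations.

```python
# Sketch of a Finetuner run (cloud-based). Requires a Jina AI account;
# the model name, data file, hyperparameters, and artifact path are
# illustrative.
import finetuner

finetuner.login()
run = finetuner.fit(
    model='efficientnet_b0',         # backbone to tune
    train_data='my-train-data.csv',  # labeled training pairs
    epochs=5,
    learning_rate=1e-4,
)
run.save_artifact('tuned-model')     # download the tuned weights
```

The resulting artifact can then be loaded into your encoding Executor so the search application serves the tuned embeddings.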
We've also put together some examples to get you started with Finetuner:
You have shown an incredible amount of dedication and enthusiasm towards learning Jina’s neural search framework.
Take this final quiz to earn the status of being an advanced user of Jina and apply everything you have learned so far to build enterprise-grade search solutions.
We’d appreciate any feedback about your experience with the developer portal. Please check it out and share your thoughts with us.