diff --git a/src/data/roadmaps/ai-engineer/content/models-on-hugging-face@dLEg4IA3F5jgc44Bst9if.md b/src/data/roadmaps/ai-engineer/content/models-on-hugging-face@dLEg4IA3F5jgc44Bst9if.md
index 941c86818..b7c73e7a0 100644
--- a/src/data/roadmaps/ai-engineer/content/models-on-hugging-face@dLEg4IA3F5jgc44Bst9if.md
+++ b/src/data/roadmaps/ai-engineer/content/models-on-hugging-face@dLEg4IA3F5jgc44Bst9if.md
@@ -1 +1,23 @@
-# Models on Hugging Face
\ No newline at end of file
+# Models on Hugging Face
+
+Embedding models convert raw data such as text, code, or images into high-dimensional vectors that capture semantic meaning. These vector representations allow AI systems to compare, cluster, and retrieve information based on similarity rather than exact matches. Hugging Face hosts a wide range of pretrained embedding models, such as `all-MiniLM-L6-v2`, `gte-base`, `Qwen3-Embedding-8B`, and `bge-base`, which are commonly used for tasks like semantic search, recommendation systems, duplicate detection, and retrieval-augmented generation (RAG). These models can be loaded through libraries such as `transformers` or `sentence-transformers`, making it easy to generate high-quality embeddings for both general-purpose and task-specific applications; a short usage example follows the resource list below.
+
+Learn more from the following resources:
+- [@video@Hugging Face - Text embeddings & semantic search](https://www.youtube.com/watch?v=OATCgQtNX2o)
+- [@official@Hugging Face Embedding Models](https://huggingface.co/models?pipeline_tag=feature-extraction)
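+
+A minimal sketch, assuming the `sentence-transformers` package is installed, of loading one of the models mentioned above, embedding two sentences, and comparing them with cosine similarity:
+
+```python
+from sentence_transformers import SentenceTransformer, util
+
+# Load a small general-purpose embedding model from the Hugging Face Hub
+model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
+
+# Encode two sentences into 384-dimensional dense vectors
+sentences = ["How do I reset my password?", "Steps to recover a forgotten password"]
+embeddings = model.encode(sentences)
+
+# Cosine similarity close to 1.0 indicates the sentences are semantically similar
+print(util.cos_sim(embeddings[0], embeddings[1]))
+```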