insightvault.services package¶
Submodules¶
insightvault.services.database module¶
- class insightvault.services.database.AbstractDatabaseService¶
Bases: ABC
Abstract database service
- abstract async add_documents(documents: list[Document]) None¶
Add a list of documents to the database
- abstract async delete_all_documents() None¶
Delete all documents from the database
- class insightvault.services.database.ChromaDatabaseService(config: DatabaseConfig)¶
Bases: AbstractDatabaseService
Chroma database service
This service is used to interact with the Chroma database.
Embedding functions are not provided here, so the caller must provide them.
- async add_documents(documents: list[Document], collection_name: str = 'default') None¶
Add a list of documents to the database. The documents must have embeddings.
- async delete_all_documents(collection_name: str = 'default') None¶
Delete all documents in the database
insightvault.services.embedding module¶
- class insightvault.services.embedding.EmbeddingService(config: EmbeddingConfig)¶
Bases: object
Service for generating embeddings from text using sentence-transformers.
To use it, you must first call await get_client() to ensure the model is loaded.
- Attributes:
config: The configuration for the embedding service
client: The embedding model client
loading_task: The task that loads the embedding model
logger: The logger for the embedding service
- async embed(texts: list[str]) list[list[float]]¶
Generate embeddings for a list of texts
- Args:
texts: List of text strings to embed
- Returns:
List of embedding vectors (as lists of floats)
- async init() None¶
Initialize the embedding service
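The load-then-embed lifecycle can be illustrated with a faked client. The class below is a sketch, not the real service: the length-based one-dimensional vectors stand in for real sentence-transformers output.

```python
import asyncio


class EmbeddingServiceSketch:
    """Mimics the documented lifecycle: the model client is loaded
    asynchronously by init(), and embed() fails loudly before that."""

    def __init__(self) -> None:
        self.client = None

    async def init(self) -> None:
        # A real service would load a sentence-transformers model here.
        await asyncio.sleep(0)
        # Placeholder "model": one-dimensional vectors from text length.
        self.client = lambda texts: [[float(len(t))] for t in texts]

    async def embed(self, texts: list[str]) -> list[list[float]]:
        if self.client is None:
            raise RuntimeError("model not loaded; await init() first")
        return self.client(texts)
```

The point of the sketch is the ordering constraint: callers must await the initialization step before requesting embeddings, exactly as the docstring above requires.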
insightvault.services.llm module¶
- class insightvault.services.llm.AbstractLLMService(model_name: str)¶
Bases: ABC
- abstract async chat(prompt: str) str | None¶
Generate a response from the model while maintaining chat history.
- abstract async clear_chat_history() None¶
Clear the chat history.
- abstract async init() None¶
Prepare the LLM service for use, such as loading model weights.
- abstract async query(prompt: str) str | None¶
Generate a one-off response from the model without chat history.
- class insightvault.services.llm.BaseLLMService(model_name: str)¶
Bases: AbstractLLMService
- class insightvault.services.llm.OllamaLLMService(model_name: str = 'llama3')¶
Bases: BaseLLMService
Ollama LLM service
- async chat(prompt: str) str | None¶
Generate a response from the model while maintaining chat history.
- async clear_chat_history() None¶
Clear the chat history.
- async init() None¶
Initialize the LLM service
- async query(prompt: str) str | None¶
Generate a one-off response from the model without chat history.
insightvault.services.prompt module¶
- class insightvault.services.prompt.PromptService¶
Bases: object
Prompt service
- get_prompt(prompt_type: str, context: dict[str, str] | None = None) str¶
Retrieves a predefined prompt for a specific use case and injects context if needed. The context can include parameters such as ‘text’.
- Args:
prompt_type (str): The type of prompt to retrieve.
context (dict | None): The context to inject into the prompt.
- Returns:
str: The prompt with the injected context.