Retrieve relevant information through RAG
Information Retrieval
Quick start
You can create an index on LlamaCloud using the following code. By default, new indexes use managed embeddings (OpenAI text-embedding-3-small, 1536 dimensions, 1 credit/page):
from llama_index.core import SimpleDirectoryReader
from llama_cloud_services import LlamaCloudIndex

# load documents from a local directory
documents = SimpleDirectoryReader("data").load_data()

# create a new index (uses managed embeddings by default)
index = LlamaCloudIndex.from_documents(
    documents,
    "my_first_index",
    project_name="default",
    api_key="llx-...",
    verbose=True,
)
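Once the index is created, you can retrieve and query against it through the standard LlamaIndex interfaces. A minimal sketch, assuming the `index` object from the quickstart above and a valid API key (the query strings here are placeholders):

```python
# retrieve raw source nodes for a question (RAG retrieval step)
retriever = index.as_retriever()
nodes = retriever.retrieve("your question here")
for node in nodes:
    print(node.score, node.text[:100])

# or get a synthesized answer over the retrieved context
query_engine = index.as_query_engine()
response = query_engine.query("your question here")
print(response)
```

`as_retriever()` returns only the matching chunks, which is useful when you want to pass context to your own LLM call; `as_query_engine()` additionally synthesizes an answer from the retrieved chunks.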