Tutorial: Timescale Vector (PostgreSQL) Python Client Library

This notebook shows how to use PostgreSQL as a vector database via the Timescale Vector Python client library. You’ll learn how to use the client for (1) semantic search, (2) time-based vector search, and (3) creating indexes to speed up queries.

Follow along by downloading the Jupyter notebook version of this tutorial here.

Sample dataset: We’ll analyze a git log dataset (git commit messages) and use vector embeddings of the commit messages to find commits relevant to a given query question. Each git commit entry has a timestamp associated with it, as well as a natural language message and other metadata (e.g., author, commit hash, etc.).

What is Timescale Vector?

Timescale Vector is PostgreSQL++ for AI applications.

Timescale Vector enables you to efficiently store and query millions of vector embeddings in PostgreSQL.

  • Enhances pgvector with faster and more accurate similarity search on 100M+ vectors via a DiskANN-inspired indexing algorithm.
  • Enables fast time-based vector search via automatic time-based partitioning and indexing.
  • Provides a familiar SQL interface for querying vector embeddings and relational data.

Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:

  • Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database.
  • Benefits from a rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high availability, and row-level security.
  • Enables a worry-free experience with enterprise-grade security and compliance.

How to access Timescale Vector

Timescale Vector is available on Timescale, the cloud PostgreSQL platform. (There is no self-hosted version at this time.)

Good news! You get a 90-day free trial for Timescale Vector.

0. Setup

Download the Jupyter notebook version of this tutorial.

# install needed packages
!pip install timescale-vector
!pip install openai
!pip install tiktoken
!pip install python-dotenv
# import needed packages
import os
from dotenv import load_dotenv, find_dotenv
import timescale_vector
from timescale_vector import client
import openai
import pandas as pd
from pathlib import Path
import numpy as np
import json
import tiktoken
import ast
import math
import uuid
from datetime import datetime
from datetime import timedelta
from typing import List, Tuple

We’ll use OpenAI’s embedding models, so let’s load our OpenAI API key from a .env file.

If you do not have an OpenAI API key, sign up for an OpenAI developer account and create an API key. See OpenAI’s developer platform for more information.

# Run export OPENAI_API_KEY=sk-YOUR_OPENAI_API_KEY...
# Get openAI api key by reading local .env file

_ = load_dotenv(find_dotenv())
openai.api_key  = os.environ['OPENAI_API_KEY']

Next, let’s load our Timescale service URL, which we’ll use to connect to our cloud PostgreSQL database hosted on Timescale.

Launch a PostgreSQL database on Timescale and download the .env file after your service is created.

# Get the service url by reading local .env file
# The .env file should contain a line starting with `TIMESCALE_SERVICE_URL=postgresql://`
_ = load_dotenv(find_dotenv())
TIMESCALE_SERVICE_URL = os.environ["TIMESCALE_SERVICE_URL"]

# OR set it explicitly
# TIMESCALE_SERVICE_URL = "postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require"

1. Create a table to store the vectors and metadata

First, we’ll define a table name, which will be the name of our table in the PostgreSQL database.

We set the time_partition_interval argument in the client creation function to enable automatic time-based partitioning of the table. This will partition the table into time-based chunks (in this case each containing data for 7 days) and create indexes on the time-based chunks to speed up time-based queries.

Each partition will consist of data for the specified length of time. We’ll use 7 days for simplicity, but you can pick whatever value makes sense for your use case. For example, if you query recent vectors frequently you might want a smaller time delta like 1 day, while if you query vectors over a decade-long time period you might want a larger time delta like 6 months or 1 year.

# Table information
TABLE_NAME = "commit_history"
EMBEDDING_DIMENSIONS = 1536
TIME_PARTITION_INTERVAL = timedelta(days=7)

# Create client object
vec = client.Async(TIMESCALE_SERVICE_URL, 
                   TABLE_NAME,  
                   EMBEDDING_DIMENSIONS, 
                   time_partition_interval=TIME_PARTITION_INTERVAL)

# create table
await vec.create_tables()

The create_tables() function will create a table with the following schema:

id | metadata | contents | embedding

  • id is the UUID which uniquely identifies each vector.
  • metadata is a JSONB column which stores the metadata associated with each vector.
  • contents is the text column which stores the content we want vectorized (in this case the commit message).
  • embedding is the vector column which stores the vector embedding representation of the content.

2. Load in dataset, create vector embeddings, and prepare data for ingestion

Load sample dataset

First, you’ll need to download the sample dataset and place it in the same directory as this notebook.

You can use the following command:

# Download the file using curl and save it as commit_history.csv
# Note: Execute this command in your terminal, in the same directory as the notebook
# curl -O https://s3.amazonaws.com/assets.timescale.com/ai/commit_history.csv

Then, we’ll load the git log dataset CSV file into a pandas DataFrame.

Note: Since this is a demo, we will only work with the first 1000 records. In practice, you can load as many records as you want.

# Set the path to the dataset file relative to this notebook
file_path = Path("commit_history.csv")

# Read the CSV file into a DataFrame
df = pd.read_csv(file_path)

# Light data cleaning on CSV
df.dropna(inplace=True)
df = df.astype(str)
df = df[:1000]
# Take a look at the data in the csv (optional)
df.head()
commit author date change summary change details
0 44e41c12ab25e36c202f58e068ced262eadc8d16 Lakshmi Narayanan Sreethar<lakshmi@timescale.com> Tue Sep 5 21:03:21 2023 +0530 Fix segfault in set_integer_now_func When an invalid function oid is passed to set_...
1 e66a40038e3c84fb1a68da67ad71caf75c64a027 Bharathy<satish.8483@gmail.com> Sat Sep 2 09:24:31 2023 +0530 Fix server crash on UPDATE of compressed chunk UPDATE query with system attributes in WHERE c...
2 c6a930897e9f9e9878db031cc7fb6ea79d721a74 Jan Nidzwetzki<jan@timescale.com> Tue Aug 29 21:13:51 2023 +0200 Use Debian Bookworm for 32-bit tests So far, we have used Debian Buster (10) for ou...
3 8e941b80ae1b0e0b6affe5431454cdc637628d99 Lakshmi Narayanan Sreethar<lakshmi@timescale.com> Mon Aug 28 23:19:22 2023 +0530 Fix incorrect row count in EXPLAIN ANALYZE INS... INSERT ... ON CONFLICT statements record few m...
4 caada43454e25d3098744fa6b675ac7d07390550 Lakshmi Narayanan Sreethar<lakshmi@timescale.com> Tue May 30 20:32:29 2023 +0530 PG16: Fix concurrent update issues with MERGE. PG16 commit postgres/postgres@9321c79c fixes a...

Create vector embeddings

Next we’ll create vector embeddings of the commit messages using the OpenAI API. We’ll use the text-embedding-ada-002 model to create the embeddings (Learn more here).

# Create embeddings for each commit message

embedding_list = []
content_list = []

openai_client = openai.Client()

# Helper function: get embeddings for a text
def get_embeddings(text):
    response = openai_client.embeddings.create(
        model="text-embedding-ada-002",
        input = text.replace("\n"," ")
    )
    embedding = response.data[0].embedding
    return embedding

for index, row in df.iterrows():
    # construct the text we want to embed
    text = row['author'] + " " + row['date'] + " " + row['commit'] + " " + row['change summary'] + " " + row['change details']
    content_list.append(text)
    embedding = get_embeddings(text)
    embedding_list.append(embedding)
# Append embeddings and content to dataframe
df['content'] = content_list
df['embedding'] = embedding_list
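
The loop above makes one embedding API call per row. As an optional optimization (not part of the original notebook), you can batch several texts into a single request, since the embeddings endpoint accepts a list of inputs and returns one embedding per input, in order. A minimal sketch using a hypothetical get_embeddings_batch() helper:

# Optional: batch multiple texts per embeddings API call to reduce round trips
def get_embeddings_batch(texts, batch_size=100):
    embeddings = []
    for i in range(0, len(texts), batch_size):
        batch = [t.replace("\n", " ") for t in texts[i:i + batch_size]]
        response = openai_client.embeddings.create(
            model="text-embedding-ada-002",
            input=batch,
        )
        # response.data preserves the order of the inputs
        embeddings.extend(d.embedding for d in response.data)
    return embeddings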

Prepare data for ingestion

Next, we’ll create a uuid for each git log entry.

We’ll define a helper function create_uuid() to create a uuid for each commit message and its associated vector embedding based on the commit’s timestamp.

In the helper function, we’ll use the Timescale Vector client library’s uuid_from_time() function to take a date and create a uuid whose datetime portion reflects that date.

# Create uuids for each message
uuid_list = []
# helper function to take in a date string in the past and return a uuid v1
def create_uuid(date_string: str):
    if date_string is None:
        return None
    time_format = '%a %b %d %H:%M:%S %Y %z'
    datetime_obj = datetime.strptime(date_string, time_format)
    uuid = client.uuid_from_time(datetime_obj)
    return str(uuid)

for index, row in df.iterrows():
    uuid = create_uuid(row['date'])
    uuid_list.append(uuid)
# Add uuids to dataframe
df['uuid'] = uuid_list

Finally, let’s create a json of metadata for each entry in our dataset.

We’ll again use some helper functions to ensure we have data in the right format for easy filtering.

# Helper functions
# Helper function to extract the author's name from a string of the form "Name Lastname <email>"
def split_name(input_string: str) -> str:
    if input_string is None:
        return None
    # Everything before the "<" is the author's name
    start = input_string.find("<")
    name = input_string[:start].strip()
    return name

def create_date(input_string: str) -> str:
    if input_string is None:
        return None
    # Define a dictionary to map month abbreviations to their numerical equivalents
    month_dict = {
        "Jan": "01",
        "Feb": "02",
        "Mar": "03",
        "Apr": "04",
        "May": "05",
        "Jun": "06",
        "Jul": "07",
        "Aug": "08",
        "Sep": "09",
        "Oct": "10",
        "Nov": "11",
        "Dec": "12",
    }

    # Split the input string (e.g. "Tue Sep 5 21:03:21 2023 +0530") into its components
    components = input_string.split()
    # Extract relevant information
    day = components[2].zfill(2)
    month = month_dict[components[1]]
    year = components[4]
    time = components[3]
    timezone_offset = components[5]  # UTC offset string, e.g. "+0530"
    # Create a formatted string for the timestamptz in PostgreSQL format
    timestamp_tz_str = f"{year}-{month}-{day} {time}{timezone_offset}"
    return timestamp_tz_str
metadata_list = []

for index, row in df.iterrows():
    metadata = {
        "author": split_name(row['author']),
        "date": create_date(row['date']),
        "commit": row['commit'],
    }
    metadata_list.append(metadata)

# Add metadata to dataframe
df['metadata'] = metadata_list
print(metadata_list[0])
{'author': 'Lakshmi Narayanan Sreethar', 'date': '2023-09-05 21:03:21+0530', 'commit': '44e41c12ab25e36c202f58e068ced262eadc8d16'}

We’ve finished loading in our dataset and preparing it for storage in Timescale Vector.

Note that we’ve explained each step of the data preparation process in detail for the purposes of this tutorial. In practice, you can create embeddings, uuids and metadata in a single step.
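
For instance, a single pass over the dataframe, reusing the helpers defined above, might look like this sketch:

# Sketch: build content, embedding, uuid and metadata in one pass over the dataframe
contents, embeddings, uuids, metadatas = [], [], [], []
for _, row in df.iterrows():
    text = " ".join([row['author'], row['date'], row['commit'],
                     row['change summary'], row['change details']])
    contents.append(text)
    embeddings.append(get_embeddings(text))
    uuids.append(create_uuid(row['date']))
    metadatas.append({
        "author": split_name(row['author']),
        "date": create_date(row['date']),
        "commit": row['commit'],
    })
df['content'], df['embedding'] = contents, embeddings
df['uuid'], df['metadata'] = uuids, metadatas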

3. Add data to Timescale Vector

Next, we’ll add our commit history data to the commit_history table in Timescale Vector.

We’ll prepare a list of tuples to add to the table. Each tuple will contain the following data:

(id, metadata, contents, embedding)

to match the schema of our table.

# Remind ourselves of the data in the dataframe
# df.head()
# convert dataframe to array of tuples
records = []
for index, row in df.iterrows():
    record = (row['uuid'], row['metadata'], row['content'], row['embedding'])
    records.append(record)

Lastly, we batch upsert the data into the table using the upsert() method, passing in the list of tuples we prepared above.

# batch upsert vectors into table
await vec.upsert(records)

4. Similarity search with Timescale Vector

Similarity search with metadata filtering

Timescale Vector also supports filtering results by metadata. Let’s see an example of this, where we’ll specify the number of results to return using the limit argument and filter the results by the author metadata field.
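
The search below assumes a query_embedding has already been created for a natural language query. If you are running this section on its own, you can create one with the get_embeddings() helper from Part 2 (the query string here is purely illustrative):

# Embed a natural language query to search against (illustrative query string)
query_string = "What's new with continuous aggregates?"
query_embedding = get_embeddings(query_string)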

records_filtered = await vec.search(query_embedding, limit=3, filter={"author": "Rafia Sabih"})
for result in records_filtered:
    print("-" * 80)
    print(result[client.SEARCH_RESULT_CONTENTS_IDX])
--------------------------------------------------------------------------------
Rafia Sabih<rafia.sabih@gmail.com> Wed Feb 8 11:54:28 2023 +0100  98218c1d079231a9aa469b37ddd0ed39e77c2adb Enable joins for heirarchical continuous aggregates The joins could be between a continuous aggregate and hypertable, continuous aggregate and a regular Postgres table, and continuous aggregate and a regular Postgres view. 
--------------------------------------------------------------------------------
Rafia Sabih<rafia.sabih@gmail.com> Wed Feb 8 11:54:28 2023 +0100  98218c1d079231a9aa469b37ddd0ed39e77c2adb Enable joins for heirarchical continuous aggregates The joins could be between a continuous aggregate and hypertable, continuous aggregate and a regular Postgres table, and continuous aggregate and a regular Postgres view. 
--------------------------------------------------------------------------------
Rafia Sabih<rafia.sabih@gmail.com> Thu Apr 27 15:01:38 2023 +0200  d9849325d0d0f81a13db1e41aa56f8b567945e72 Improve test suite Add more regression tests for Continuous aggregates with joins. 

We can see from the output above that we only get results back from the author we filtered for!
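
In addition to exact-match dictionary filters, the client library also provides a Predicates helper for more expressive metadata conditions, such as combining clauses with & and |. The exact constructor arguments may differ between client versions, so treat the following as an illustrative sketch rather than a definitive reference:

# Sketch: advanced metadata filtering with predicates (verify the API against your client version)
my_predicates = client.Predicates("author", "==", "Rafia Sabih") | client.Predicates("author", "==", "Sven Klemm")
records_predicates = await vec.search(query_embedding, limit=3, predicates=my_predicates)
for result in records_predicates:
    print(result[client.SEARCH_RESULT_CONTENTS_IDX])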

5. Using ANN search indexes to speed up queries

You can speed up similarity queries by creating an index on the embedding column. We recommend doing this if you have a large number (10k+) of vectors in your table.

Timescale Vector supports the following indexes:

  • timescale_vector_index: a DiskANN-inspired graph index for fast similarity search (default).
  • pgvector’s HNSW index: a hierarchical navigable small world graph index for fast similarity search.
  • pgvector’s IVFFLAT index: an inverted file index for fast similarity search.

To learn more about Timescale Vector’s new DiskANN-inspired index, see the Timescale Vector explainer blog for details and performance benchmarks.

Important note: In PostgreSQL, each table can only have one index on a particular column. So if you’d like to test the performance of different index types, you can do so either by (1) creating multiple tables with different indexes, (2) creating multiple vector columns in the same table and creating different indexes on each column, or (3) by dropping and recreating the index on the same column and comparing results.

Let’s look at how to create each type of index, starting with the StreamingDiskANN index.

# Create a timescale vector (DiskANN) search index on the embedding column
await vec.create_embedding_index(client.DiskAnnIndex())

Timescale Vector also supports HNSW and ivfflat indexes:

await vec.drop_embedding_index()

# Create HNSW search index on the embedding column
await vec.create_embedding_index(client.HNSWIndex())
await vec.drop_embedding_index()

# Create IVFFLAT search index on the embedding column
await vec.create_embedding_index(client.IvfflatIndex())

In general, we recommend using the timescale_vector (DiskANN) or HNSW index for most use cases, as they perform best on high-dimensional and large datasets.

await vec.drop_embedding_index()
await vec.create_embedding_index(client.DiskAnnIndex())

6. Similarity search with time filtering

We’ll use Timescale Vector to find similar commits to a given query within a time range.

A key use case for Timescale Vector is efficient time-based vector search. Timescale Vector enables this by automatically partitioning vectors (and associated metadata) by time. This allows you to efficiently query vectors by both similarity to a query vector and time.

Time-based vector search functionality is helpful for applications like:

  • Storing and retrieving LLM response history (e.g., chatbots).
  • Finding the most recent embeddings that are similar to a query vector (e.g., recent news).
  • Constraining similarity search to a relevant time range (e.g., asking time-based questions about a knowledge base).

Let’s look at how to run similarity searches with time range filters using the client.

  • The first step to using time filtering with Timescale Vector is to create a table with the time_partition_interval argument set to the desired time interval. This will automatically partition the table into time-based chunks to speed up queries. We completed this step in Part 1 above.

  • Next, we ensure the id of our row is a uuid with a datetime portion that reflects the date and time we want associated with the embedding. We completed this step in Part 2 above, where we used the uuid_from_time() function provided by the Timescale Vector library.

  • Finally, we can run similarity searches with time range filters using the client. We’ll illustrate this below.

# define search query
query_string = "What's new with TimescaleDB functions?"
query_embedding = get_embeddings(query_string)
# Time filter variables for query
start_date = datetime(2023, 8, 1, 22, 10, 35) # Start date = 1 August 2023, 22:10:35
end_date = datetime(2023, 8, 30, 22, 10, 35) # End date = 30 August 2023, 22:10:35
td = timedelta(days=7) # Time delta = 7 days
# Method 1: Filter within a provided start date and end date.
records_time_filtered = await vec.search(query_embedding,
                        limit=3, 
                        uuid_time_filter=client.UUIDTimeRange(start_date, end_date))
for result in records_time_filtered:
    print("-" * 80)
    print(result[client.SEARCH_RESULT_CONTENTS_IDX])
--------------------------------------------------------------------------------
Sven Klemm<sven@timescale.com> Tue Aug 29 18:13:24 2023 +0200  e4facda540286b0affba47ccc63959fefe2a7b26 Add compatibility layer for _timescaledb_internal functions With timescaledb 2.12 all the functions present in _timescaledb_internal were moved into the _timescaledb_functions schema to improve schema security. This patch adds a compatibility layer so external callers of these internal functions will not break and allow for more flexibility when migrating. 
--------------------------------------------------------------------------------
Sven Klemm<sven@timescale.com> Tue Aug 29 18:13:24 2023 +0200  e4facda540286b0affba47ccc63959fefe2a7b26 Add compatibility layer for _timescaledb_internal functions With timescaledb 2.12 all the functions present in _timescaledb_internal were moved into the _timescaledb_functions schema to improve schema security. This patch adds a compatibility layer so external callers of these internal functions will not break and allow for more flexibility when migrating. 
--------------------------------------------------------------------------------
Dmitry Simonenko<dmitry@timescale.com> Thu Aug 3 14:30:23 2023 +0300  7aeed663b9c0f337b530fd6cad47704a51a9b2ec Feature flags for TimescaleDB features This PR adds several GUCs which allow to enable/disable major timescaledb features:  - enable_hypertable_create - enable_hypertable_compression - enable_cagg_create - enable_policy_create 

Only vectors within the specified time range are returned. These queries are very efficient as they only need to search the relevant partitions.
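
The td time delta defined earlier can also be used to express a range relative to a single date. Assuming UUIDTimeRange accepts a time_delta keyword argument (an assumption to verify against your client version), the other filtering methods look roughly like this:

# Method 2 (sketch): filter from start_date to start_date + td
records_from_start = await vec.search(query_embedding,
                        limit=3,
                        uuid_time_filter=client.UUIDTimeRange(start_date, time_delta=td))
# Method 3 (sketch): filter from end_date - td to end_date
records_until_end = await vec.search(query_embedding,
                        limit=3,
                        uuid_time_filter=client.UUIDTimeRange(end_date=end_date, time_delta=td))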

Resources and next steps

To continue your learning journey, check out the following resources:

And finally, if you haven’t already, remember to claim your 90 days free of Timescale Vector by signing up here.