Galileo x Zilliz: The Power of Vector Embeddings

Vikram Chatterji, CEO
Less than a minute read · October 20, 2023

Unstructured data is everywhere. IDC has estimated that we'll have over 175 zettabytes (a zettabyte has 21 zeroes) of data by 2025, and roughly 80% of it will be unstructured. Unstructured data is data that doesn't fit any predetermined format: it doesn't follow a preset data model or schema. It commonly takes the form of text, images, audio, and video, but can take many other forms as well.

What is a vector embedding?

At the most basic level, vector embeddings are numerical representations of data. A vector embedding typically consists of hundreds or thousands of floats. This high dimensionality allows vector embeddings to represent complex data such as images, audio, and text.

You can extract vector embeddings from trained machine learning models. Most neural networks used in production have many layers, each with hundreds of neurons. When a data point is run through the network's feed-forward function, each layer produces an output. These networks typically perform classification, with the prediction produced by the final layer. The vector embedding that represents the data is the output of the last hidden layer, that is, the second-to-last layer of the network.
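To make this concrete, here is a minimal sketch of the idea using a toy feed-forward network in pure Python. The weights are made up for illustration (a real network learns them during training); the point is that the embedding is the activation of the last hidden layer, while the final layer maps that embedding to class scores.

```python
def relu(xs):
    # standard ReLU activation
    return [max(0.0, x) for x in xs]

def dense(inputs, weights, biases):
    # fully connected layer: output_j = sum_i inputs_i * weights[i][j] + biases[j]
    return [sum(i * w for i, w in zip(inputs, col)) + b
            for col, b in zip(zip(*weights), biases)]

# toy weights: 3 inputs -> 4 hidden neurons -> 2 output classes
W1 = [[0.2, -0.1, 0.4, 0.0],
      [0.5,  0.3, -0.2, 0.1],
      [-0.3, 0.8, 0.1, 0.6]]
b1 = [0.0, 0.1, -0.1, 0.2]
W2 = [[0.7, -0.4], [0.1, 0.9], [-0.5, 0.3], [0.2, 0.2]]
b2 = [0.0, 0.0]

def embed(x):
    # the last hidden layer's output serves as the vector embedding
    return relu(dense(x, W1, b1))

def classify(x):
    # the final layer turns the embedding into class scores
    return dense(embed(x), W2, b2)

embedding = embed([1.0, 0.5, -0.2])
print(len(embedding))  # 4: the embedding dimension equals the hidden layer width
```

In a real framework you would load a pretrained model and read the penultimate layer's activations rather than hand-rolling the math, but the structure is the same.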

How can you work with vector embeddings?

Vector embeddings are the de facto way to work with unstructured data. If you have data that you want to compare, we recommend comparing it via vector embeddings. As described above, an embedding is generated by taking the output of a neural network's second-to-last layer as the vector representation of the input.
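Comparing data via embeddings usually means measuring the distance or angle between vectors; cosine similarity is a common choice. The sketch below uses made-up three-dimensional embeddings (real ones have hundreds or thousands of dimensions) to show that semantically similar items score higher than unrelated ones.

```python
import math

def cosine_similarity(a, b):
    # cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# hypothetical embeddings: two related concepts and one unrelated concept
cat    = [0.90, 0.10, 0.30]
kitten = [0.85, 0.15, 0.35]
car    = [0.10, 0.90, -0.40]

print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

A vector database applies the same idea at scale, indexing millions of embeddings so the nearest neighbors of a query vector can be found efficiently.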

When generating embedding vectors, there are several factors to consider. Your primary considerations are the size of the embeddings, the data used to train the model, and the quality of that data. Embedding size is a trade-off: larger vectors can capture more nuance, but they cost more to store and search.
