docs/getting-started/architecture/feature-transformation.md · 98 additions, 1 deletion

…when to use which transformation engine/communication pattern is extremely critical to
the success of your implementation.

In general, we recommend choosing transformation engines and network calls by aligning them with what is most
appropriate for the data producer, feature/model usage, and overall product.


## API
### feature_transformation
`feature_transformation` or `udf` are the core APIs for defining feature transformations in Feast. They allow you to specify custom logic that can be applied to the data during materialization or retrieval. Examples include:

```python
def remove_extra_spaces(df: DataFrame) -> DataFrame:
    df['name'] = df['name'].str.replace(r'\s+', ' ', regex=True)
    return df

spark_transformation = SparkTransformation(
    mode=TransformationMode.SPARK,
    udf=remove_extra_spaces,
    udf_string="remove extra spaces",
)
feature_view = FeatureView(
    feature_transformation=spark_transformation,
    ...
)
```
OR
```python
# `remove_extra_spaces_sql` is assumed to be a function that returns the
# equivalent SQL statement for TransformationMode.SPARK_SQL.
spark_transformation = Transformation(
    mode=TransformationMode.SPARK_SQL,
    udf=remove_extra_spaces_sql,
    udf_string="remove extra spaces sql",
)
feature_view = FeatureView(
    feature_transformation=spark_transformation,
    ...
)
```
OR
```python
import pandas as pd

@transformation(mode=TransformationMode.SPARK)
def remove_extra_spaces_udf(df: pd.DataFrame) -> pd.DataFrame:
    return df.assign(name=df['name'].str.replace(r'\s+', ' ', regex=True))

feature_view = FeatureView(
    feature_transformation=remove_extra_spaces_udf,
    ...
)
```

### Aggregation
`Aggregation` is a built-in API for defining batch or streaming aggregations over data. It allows you to specify how to aggregate data over a time window, such as calculating the average or sum of a feature over a specified period. Examples include:
```python
from datetime import timedelta
from feast import Aggregation

feature_view = FeatureView(
    aggregations=[
        Aggregation(
            column="amount",
            function="sum",
        ),
        Aggregation(
            column="amount",
            function="avg",
            time_window=timedelta(hours=1),
        ),
    ],
    ...
)
```

### Filter
`ttl`: The amount of time that features remain available for materialization or retrieval. Entity rows whose timestamps are newer than the current time minus the `ttl` are kept when filtering features. This is useful for ensuring that only recent data is used in feature calculations. Examples include:

```python
from datetime import timedelta

feature_view = FeatureView(
    ttl=timedelta(days=1),  # Features will be available for 1 day
    ...
)
```
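
Conceptually, the TTL acts as a timestamp predicate. A minimal pandas sketch of the equivalent filter (illustrative only, assuming an `event_timestamp` column; not the actual Feast implementation):

```python
from datetime import datetime, timedelta, timezone

import pandas as pd


def apply_ttl_filter(df: pd.DataFrame, ttl: timedelta) -> pd.DataFrame:
    # Keep only rows whose event timestamp falls inside the TTL window.
    cutoff = datetime.now(timezone.utc) - ttl
    return df[df["event_timestamp"] >= cutoff]
```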

### Join
Feast can join multiple feature views together to create a composite feature view. This allows you to combine features from different sources or views into a single view. Examples include:
```python
feature_view = FeatureView(
    name="composite_feature_view",
    entities=["entity_id"],
    source=[
        FeatureView(
            name="feature_view_1",
            features=["feature_1", "feature_2"],
            ...
        ),
        FeatureView(
            name="feature_view_2",
            features=["feature_3", "feature_4"],
            ...
        ),
    ],
    ...
)
```
The underlying implementation of the join is an inner join by default, and the join key is the entity ID.
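
A minimal pandas sketch of these join semantics (illustrative only; the compute engine performs this join natively):

```python
import pandas as pd


def join_feature_views(fv1_df: pd.DataFrame, fv2_df: pd.DataFrame) -> pd.DataFrame:
    # Inner join on the shared entity key; rows missing from either view are dropped.
    return fv1_df.merge(fv2_df, on="entity_id", how="inner")
```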

docs/getting-started/concepts/batch-feature-view.md · 148 additions, 0 deletions

# 🧬 BatchFeatureView in Feast

`BatchFeatureView` is a flexible abstraction in Feast that allows users to define features derived from batch data sources or even other `FeatureView`s, enabling composable and reusable feature pipelines. It is an extension of the `FeatureView` class, with support for user-defined transformations, aggregations, and recursive chaining of feature logic.

---

## ✅ Key Capabilities

- **Composable DAG of FeatureViews**: Supports defining a `BatchFeatureView` on top of one or more other `FeatureView`s.
- **Transformations**: Apply [transformation](../../getting-started/architecture/feature-transformation.md) logic (`feature_transformation` or `udf`) to the raw data source; transformations can also be used to combine multiple data sources.
- **Aggregations**: Define time-windowed aggregations (e.g. `sum`, `avg`) over event-timestamped data.
- **Feature resolution & execution**: Automatically resolves and executes DAGs of dependent views during materialization or retrieval. More details in the [Compute engine documentation](../../reference/compute-engine/README.md).
- **Materialization Sink Customization**: Specify a custom `sink_source` to define where derived feature data should be persisted.

---

## 📐 Class Signature

```python
class BatchFeatureView(FeatureView):
    def __init__(
        self,
        *,
        name: str,
        source: Union[DataSource, FeatureView, List[FeatureView]],
        sink_source: Optional[DataSource] = None,
        schema: Optional[List[Field]] = None,
        entities: Optional[List[Entity]] = None,
        aggregations: Optional[List[Aggregation]] = None,
        udf: Optional[Callable[[DataFrame], DataFrame]] = None,
        udf_string: Optional[str] = None,
        ttl: Optional[timedelta] = timedelta(days=0),
        online: bool = True,
        offline: bool = False,
        description: str = "",
        tags: Optional[Dict[str, str]] = None,
        owner: str = "",
    )
```

---

## 🧠 Usage

### 1. Simple Feature View from Data Source

```python
from feast import BatchFeatureView, Field
from feast.types import Float32, Int32
from feast import FileSource
from feast.aggregation import Aggregation
from datetime import timedelta

source = FileSource(
    path="s3://bucket/path/data.parquet",
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)

driver_fv = BatchFeatureView(
    name="driver_hourly_stats",
    entities=["driver_id"],
    schema=[
        Field(name="driver_id", dtype=Int32),
        Field(name="conv_rate", dtype=Float32),
    ],
    aggregations=[
        Aggregation(column="conv_rate", function="avg", time_window=timedelta(days=1)),
    ],
    source=source,
)
```

---

### 2. Derived Feature View from Another View
You can build feature views on top of other feature views by deriving one view from another. Let's take a look at an example.
```python
from feast import BatchFeatureView, Field
from pyspark.sql import DataFrame
from feast.types import Float32, Int32
from feast import FileSource
from feast.data_format import ParquetFormat

def transform(df: DataFrame) -> DataFrame:
    return df.withColumn("conv_rate", df["conv_rate"] * 2)

daily_driver_stats = BatchFeatureView(
    name="daily_driver_stats",
    entities=["driver_id"],
    schema=[
        Field(name="driver_id", dtype=Int32),
        Field(name="conv_rate", dtype=Float32),
    ],
    udf=transform,
    source=driver_fv,
    sink_source=FileSource(  # Required to specify where to sink the derived view
        name="daily_driver_stats_sink",
        path="s3://bucket/daily_stats/",
        file_format=ParquetFormat(),
        timestamp_field="event_timestamp",
        created_timestamp_column="created",
    ),
)
```

---

## 🔄 Execution Flow

Feast automatically resolves the DAG of `BatchFeatureView` dependencies during:

- `materialize()`: recursively resolves and executes the feature view graph.
- `get_historical_features()`: builds the execution plan for retrieving point-in-time correct features.
- `apply()`: registers the feature view DAG structure to the registry.

Each transformation and aggregation is turned into a DAG node (e.g., `SparkTransformationNode`, `SparkAggregationNode`) executed by the compute engine (e.g., `SparkComputeEngine`).

---

## ⚙️ How Materialization Works

- If the `BatchFeatureView` is backed by a base source (`FileSource`, `BigQuerySource`, `SparkSource`, etc.), the `batch_source` is used directly.
- If the source is another feature view (i.e., chained views), a `sink_source` must be provided to define the materialization target data source.
- During DAG planning, `SparkWriteNode` uses the `sink_source` as the batch sink (see the sketch below).
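
The sink selection can be summarized with the following pseudocode (an illustrative sketch, not the actual Feast source):

```python
def resolve_batch_sink(view: BatchFeatureView) -> DataSource:
    # Chained views (source is another FeatureView, or a list of them)
    # must declare an explicit sink for the derived feature data.
    if isinstance(view.source, (FeatureView, list)):
        if view.sink_source is None:
            raise ValueError("sink_source is required when chaining feature views")
        return view.sink_source
    # Views backed by a base DataSource materialize from batch_source directly.
    return view.batch_source
```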

---

## 🧪 Example Tests

See:

- `test_spark_dag_materialize_recursive_view()`: Validates chaining of two feature views and the correctness of their output.
- `test_spark_compute_engine_materialize()`: Validates transformation and write of features into offline and online stores.

---

## 🛑 Gotchas

- `sink_source` is **required** when chaining views (i.e., `source` is another FeatureView or list of them).
- Schema fields must be consistent with `sink_source` and with `batch_source.field_mapping` when field mappings exist.
- Aggregation logic must reference columns present in the raw source or transformed inputs.

---

## 🔮 Future Directions

- Support additional offline stores (e.g., Snowflake, Redshift) with auto-generated sink sources.
- Enable fully declarative transform logic (SQL + UDF mix).
- Introduce optimization passes for DAG pruning and fusion.

docs/reference/compute-engine/README.md · 71 additions, 16 deletions

This system builds and executes DAGs (Directed Acyclic Graphs) of typed operations.

## 🧠 Core Concepts

| Component          | Description                                                            | API                                                                                                                |
|--------------------|------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|
| `ComputeEngine`    | Interface for executing materialization and retrieval tasks            | [link](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/compute_engines/base.py)              |
| `FeatureBuilder`   | Constructs a DAG from a FeatureView definition for a specific backend  | [link](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/compute_engines/feature_builder.py)   |
| `FeatureResolver`  | Resolves the feature DAG into topological order for execution          | [link](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/compute_engines/feature_resolver.py)  |
| `DAG`              | Represents a logical DAG operation (read, aggregate, join, etc.)       | [link](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/compute_engines/dag/README.md)        |
| `ExecutionPlan`    | Executes nodes in dependency order and stores intermediate outputs     | [link](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/compute_engines/dag/README.md)        |
| `ExecutionContext` | Holds config, registry, stores, entity data, and node outputs          | [link](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/compute_engines/dag/README.md)        |

---

## Feature Resolver and Builder
The `FeatureBuilder` initializes a `FeatureResolver` that extracts a DAG from the `FeatureView` definitions, resolving dependencies and ensuring the correct execution order.
A `FeatureView` represents a logical data source, while a `DataSource` represents the physical data source (e.g., BigQuery, Spark, etc.).
When defining a `FeatureView`, the source can be a physical `DataSource`, a derived `FeatureView`, or a list of `FeatureView`s.
The `FeatureResolver` walks the `FeatureView` sources, topologically sorts the DAG nodes based on their dependencies, and returns a head node that represents the final output of the DAG.
The `FeatureBuilder` then builds the DAG from the resolved head node, creating a `DAGNode` for each operation (read, join, filter, aggregate, etc.).
An example of the output built by the `FeatureBuilder`:
```markdown
- Output(Agg(daily_driver_stats))
  - Agg(daily_driver_stats)
    - Filter(daily_driver_stats)
      - Transform(daily_driver_stats)
        - Agg(hourly_driver_stats)
          - Filter(hourly_driver_stats)
            - Transform(hourly_driver_stats)
              - Source(hourly_driver_stats)
```

## Diagram
![feature_dag.png](feature_dag.png)


## ✨ Available Engines

### 🔥 SparkComputeEngine
```
SourceReadNode
   |
   v
TransformationNode (If feature_transformation is defined) | JoinNode (default behavior for multiple sources)
   |
   v
FilterNode (Always included; applies TTL or user-defined filters)
   |
   v
AggregationNode (If aggregations are defined in FeatureView)
   |
   v
DeduplicationNode (If no aggregation is defined for get_historical_features)
   |
   v
ValidationNode (If enable_validation = True)
   |
   v
Output
```
To create your own compute engine:

1. Implement the `ComputeEngine` interface

```python
from typing import Sequence, Union

from feast.batch_feature_view import BatchFeatureView
from feast.entity import Entity
from feast.feature_view import FeatureView
from feast.infra.common.materialization_job import (
    MaterializationJob,
    MaterializationTask,
)
from feast.infra.common.retrieval_task import HistoricalRetrievalTask
from feast.infra.compute_engines.base import ComputeEngine
from feast.infra.offline_stores.offline_store import RetrievalJob
from feast.infra.registry.base_registry import BaseRegistry
from feast.on_demand_feature_view import OnDemandFeatureView
from feast.stream_feature_view import StreamFeatureView


class MyComputeEngine(ComputeEngine):
    def update(
        self,
        project: str,
        views_to_delete: Sequence[
            Union[BatchFeatureView, StreamFeatureView, FeatureView]
        ],
        views_to_keep: Sequence[
            Union[BatchFeatureView, StreamFeatureView, FeatureView, OnDemandFeatureView]
        ],
        entities_to_delete: Sequence[Entity],
        entities_to_keep: Sequence[Entity],
    ):
        ...

    def _materialize_one(
        self,
        registry: BaseRegistry,
        task: MaterializationTask,
        **kwargs,
    ) -> MaterializationJob:
        ...

    def get_historical_features(self, task: HistoricalRetrievalTask) -> RetrievalJob:
        ...
```

2. Create a FeatureBuilder
```python
from feast.infra.compute_engines.feature_builder import FeatureBuilder


class CustomFeatureBuilder(FeatureBuilder):
    def build_source_node(self): ...
    def build_aggregation_node(self, input_node): ...
    def build_join_node(self, input_node): ...    # assumption: join/filter hooks mirror the DAG node types
    def build_filter_node(self, input_node): ...
    def build_dedup_node(self, input_node): ...
    def build_transformation_node(self, input_node): ...
    def build_output_nodes(self, input_node): ...
    def build_validation_node(self, input_node): ...
```

3. Define DAGNode subclasses
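
A minimal sketch of a custom node; the `DAGNode`, `ExecutionContext`, and `DAGValue` names follow the DAG README linked above, but the constructor shape, traversal, and helper calls here are assumptions for illustration:

```python
class TTLFilterNode(DAGNode):
    def __init__(self, name, input_node, ttl):
        super().__init__(name, inputs=[input_node])  # assumed constructor shape
        self.ttl = ttl

    def execute(self, context: ExecutionContext) -> DAGValue:
        upstream = self.inputs[0].execute(context)            # assumed traversal
        filtered = apply_ttl_filter(upstream.data, self.ttl)  # hypothetical helper
        return DAGValue(data=filtered, format=upstream.format)
```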
## 🚧 Roadmap
- [x] Modular, backend-agnostic DAG execution framework
- [x] Spark engine with native support for materialization + PIT joins
- [x] PyArrow + Pandas engine for local compute
- [x] Native multi-feature-view DAG optimization
- [ ] DAG validation, metrics, and debug output
- [ ] Scalable distributed backend via Ray or Polars

docs/reference/compute-engine/feature_dag.png · binary file added