
Fabric Lakehouse File Writer

Fabric Lakehouse File Writer writes to files in a lakehouse in Microsoft Fabric.

Summary

APIs used/data supported

Uses the OneLake API to write data into the OneLake "Files" folder.

Supported formatters

Avro, DSV, JSON, Parquet, XML. See Supported writer-formatter combinations.

Supported sources

All sources supported by Striim that generate the following event types: JSONNodeEvent, ParquetEvent, user-defined, WAEvent, XMLNodeEvent.

See Readers overview.

Security and authentication

Supports authentication using the OAuth resource owner password credentials (ROPC) flow with a Fabric username and password.
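To make the ROPC flow concrete, the sketch below builds the token-endpoint URL and form body that an ROPC grant sends to Microsoft Entra ID. Striim performs this exchange internally; the helper name and the `scope` value shown are illustrative assumptions, not part of the adapter's API.

```python
from urllib.parse import urlencode

def build_ropc_token_request(tenant_id, client_id, username, password):
    """Build the URL and form body for an OAuth 2.0 ROPC token request.

    This mirrors the standard Entra ID v2.0 token endpoint shape; the
    scope below is an assumed example, not Striim's actual value.
    """
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "password",      # the ROPC grant type
        "client_id": client_id,
        "username": username,          # Fabric username
        "password": password,          # Fabric password
        "scope": "https://storage.azure.com/.default",  # assumed scope
    })
    return url, body

url, body = build_ropc_token_request("my-tenant", "my-app", "user@example.com", "secret")
```

Because ROPC sends the user's password directly to the token endpoint, it is generally reserved for trusted first-party integrations such as this adapter.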

Folder support

Supports uploading files to a user-specified directory under the "Files" endpoint and creating directories dynamically.

Upload policy

Supports uploading files based on an upload policy; a file is uploaded when one of the following conditions is met:

  • File size

  • Time interval

  • Event count

  • Event count and time

See Setting output names and rollover / upload policies.
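The upload-policy conditions listed above can be sketched as a small evaluator: each incoming event updates counters, and an upload is triggered once any configured threshold (size, event count, or elapsed time) is reached. The class and parameter names here are illustrative, not Striim's actual configuration properties.

```python
import time

class UploadPolicy:
    """Illustrative upload-policy check: trigger an upload when the
    buffered file size, event count, or elapsed time crosses a limit.
    An unset limit (None) means that condition is not used."""

    def __init__(self, max_bytes=None, max_events=None, max_seconds=None):
        self.max_bytes = max_bytes
        self.max_events = max_events
        self.max_seconds = max_seconds
        self.reset()

    def reset(self):
        """Start a new buffer file after an upload."""
        self.bytes_written = 0
        self.event_count = 0
        self.started = time.monotonic()

    def record(self, event_size_bytes):
        """Account for one event written to the current buffer file."""
        self.bytes_written += event_size_bytes
        self.event_count += 1

    def should_upload(self):
        """Return True once any configured condition is satisfied."""
        if self.max_bytes is not None and self.bytes_written >= self.max_bytes:
            return True
        if self.max_events is not None and self.event_count >= self.max_events:
            return True
        if self.max_seconds is not None and time.monotonic() - self.started >= self.max_seconds:
            return True
        return False

# "Event count and time" is modeled by setting both limits, so whichever
# threshold is reached first triggers the upload.
policy = UploadPolicy(max_events=10000, max_seconds=300)
```

After `should_upload()` returns True, the writer would upload the file and call `reset()` to begin the next one.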

Resilience / recovery

Programmability

  • Flow Designer

  • TQL

  • Wizards in the web UI to build pipelines from sources such as databases or apps

Metrics and auditing

Key metrics available through Striim monitoring. See GCS Reader monitoring metrics.

Typical use case and integration

Data engineers can load data first into a lakehouse and selectively stream from the lakehouse to a data warehouse for downstream consumers.

Fabric Lakehouse File Writer overview

Microsoft Fabric is an all-in-one analytics platform created for businesses and data professionals. The platform handles everything from data science and real-time analytics to data storage and data migration.

All of the data utilized within Fabric is stored in OneLake. OneLake is a single, unified, logical data lake that’s responsible for supporting all of the Fabric workloads. OneLake is built on top of Azure Data Lake Storage (ADLS) Gen2 and can support any type of file, structured or unstructured. All Fabric data items like data warehouses and lakehouses store their data automatically in OneLake in Delta Parquet Table format (see Fabric Data Warehouse Writer). On the other hand, raw files (unstructured) can be stored under the "Files" folder in the OneLake storage.

Fabric Lakehouse File Writer is a target adapter that writes data into the OneLake "Files" folder. It can upload files to a user-specified directory under the "Files" endpoint, creating directories dynamically as needed, and it uploads files according to an upload policy, that is, when a configured condition such as file size, time interval, or event count is met. It authenticates using the OAuth resource owner password credentials (ROPC) flow with a Fabric username and password.
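Because OneLake exposes an ADLS Gen2-compatible endpoint, a path under a lakehouse's "Files" folder follows the pattern `https://onelake.dfs.fabric.microsoft.com/<workspace>/<item>.<itemtype>/<path>`. The helper below composes such a URL for a given workspace, lakehouse, and target directory; the function itself is an illustration, not part of Striim.

```python
def onelake_files_url(workspace, lakehouse, directory=""):
    """Compose an ADLS Gen2-style OneLake URL for a path under the
    lakehouse "Files" folder. The endpoint pattern follows OneLake's
    documented URI format; this helper is illustrative only."""
    base = f"https://onelake.dfs.fabric.microsoft.com/{workspace}/{lakehouse}.Lakehouse/Files"
    directory = directory.strip("/")
    return f"{base}/{directory}" if directory else base

url = onelake_files_url("SalesWorkspace", "RawZone", "landing/orders")
```

A writer configured with a target directory of `landing/orders` would upload its files under that path, creating the intermediate directories if they do not yet exist.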