Scalable, cloud-ready data export with Parquet
Factry Historian supports automatic export of time-series and event data to Apache Parquet files. Parquet is an efficient, columnar storage format optimized for analytics workloads. Data can be written at regular intervals (e.g. hourly or daily), making it easy to feed cloud-based data lakes, notebooks, or machine learning pipelines with fresh, structured data. Parquet exports include timestamps, values, and relevant metadata such as units, asset structure, and quality flags. The files are stored in a user-defined directory or cloud-connected storage, enabling seamless handoff to tools like AWS S3, Azure Data Lake, or Google BigQuery for further processing, querying, or model training.
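As a minimal sketch of what consuming such an export could look like, the snippet below loads one exported file with pandas. The file path and column names are illustrative assumptions, not the actual export schema.

```python
import pandas as pd  # Parquet support requires pyarrow or fastparquet

# Hypothetical path; the real export includes timestamps, values, and
# metadata such as units, asset structure, and quality flags.
df = pd.read_parquet("exports/measurements_2024-01-01.parquet")
print(df.head())
print(df.dtypes)
```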
Why we love Parquet
Analytics-ready data format
Exported data is stored in columnar Parquet format, making it ideal for batch processing, filtering, and large-scale queries in Spark, Pandas, or cloud engines.
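The columnar layout means a reader can load only the columns it needs and push filters down into the file itself. A hedged sketch with pandas, assuming illustrative column names like timestamp, value, and asset:

```python
import pandas as pd

# Column pruning and predicate pushdown: only the requested columns are
# read, and row groups that cannot match the filter are skipped entirely.
# Column names and the asset value are assumptions for illustration.
df = pd.read_parquet(
    "exports/measurements.parquet",
    columns=["timestamp", "value"],
    filters=[("asset", "==", "reactor_1")],
)
print(df.describe())
```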
Automated export scheduling
Configure exports to run on a fixed schedule. Parquet files are written regularly with consistent schema, ensuring downstream tools always have fresh, reliable data.
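A minimal sketch of what such a scheduled job might do, assuming a hypothetical fetch_measurements() data source; pinning dtypes explicitly is one way to keep the schema identical from run to run:

```python
import pandas as pd
from datetime import datetime, timezone

def export_hourly(fetch_measurements, out_dir="exports"):
    """Write the latest batch of measurements to an hourly Parquet file."""
    df = fetch_measurements()  # hypothetical source returning a DataFrame
    # Explicit dtypes keep the schema consistent across every export run.
    df = df.astype({"value": "float64", "quality": "int32"})
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H")
    df.to_parquet(f"{out_dir}/measurements_{stamp}.parquet", index=False)
```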
Cloud-native integration
Write directly to local or cloud-mounted storage. From there, connect to your cloud analytics stack without needing custom pipelines or manual preprocessing.
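As an example of that handoff, with the s3fs package installed pandas can read an exported file straight out of object storage; the bucket and key below are placeholders:

```python
import pandas as pd  # plus s3fs for s3:// paths

# Placeholder bucket and key; point this at wherever the exports land.
df = pd.read_parquet("s3://my-data-lake/historian/measurements_2024-01-01.parquet")
```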
Why Parquet files are a game changer for industrial data management


Put your industrial data to work
Ready to get started with Factry? Awesome! Let’s schedule a call so you can discover how our platform empowers your operations.