Correct Answer : 100 MB to 250 MB, compressed
Explanation :
To optimize the number of parallel operations for a load, Snowflake recommends producing data files roughly 100-250 MB in size, compressed.
If your input files are smaller than this, aggregate multiple files into larger ones before loading. If they are larger, split them into smaller files that fall within this range.
Snowflake also recommends against loading very large files (e.g., 100 GB or more) as-is; splitting them first lets the load take advantage of parallelism across compute resources, as sketched below.
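To illustrate the splitting guidance, here is a minimal Python sketch that breaks a large text file into parts of roughly 100 MB on line boundaries and gzip-compresses each part. The file path, chunk size, and function name are assumptions for this example, not values or tooling prescribed by Snowflake; in practice you would tune the uncompressed chunk size so the compressed output lands in the recommended 100-250 MB range.

```python
import gzip

# Hypothetical input path and target chunk size (assumptions for this sketch).
SOURCE_PATH = "large_input.csv"
TARGET_CHUNK_BYTES = 100 * 1024 * 1024  # ~100 MB of *uncompressed* data per part

def split_file(source_path: str, chunk_bytes: int) -> list[str]:
    """Split a text file into gzip-compressed parts on line boundaries."""
    part_paths = []
    part_num = 0
    written = 0
    out = None
    with open(source_path, "rb") as src:
        for line in src:
            # Start a new part when none is open or the current one is full.
            if out is None or written >= chunk_bytes:
                if out is not None:
                    out.close()
                part_num += 1
                path = f"{source_path}.part{part_num:04d}.gz"
                out = gzip.open(path, "wb")
                part_paths.append(path)
                written = 0
            out.write(line)
            written += len(line)
    if out is not None:
        out.close()
    return part_paths

if __name__ == "__main__":
    parts = split_file(SOURCE_PATH, TARGET_CHUNK_BYTES)
    print(f"Wrote {len(parts)} compressed parts")
```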
Practical Info – The recommendation applies to both modes of data loading: bulk loading with the COPY INTO command (e.g., executed via SnowSQL) and continuous/micro-batch loading with Snowpipe.
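For completeness, a hedged sketch of the bulk-load path using the snowflake-connector-python package follows. The connection parameters, the stage name (@my_stage), the table name (my_table), and the local file paths are all placeholders, not real objects; COPY INTO parallelizes the load across the staged files.

```python
import snowflake.connector

# All connection parameters below are hypothetical placeholders.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="my_db",
    schema="public",
)

cur = conn.cursor()
# Upload the pre-split, already-compressed parts to an internal stage.
cur.execute("PUT file:///tmp/large_input.csv.part*.gz @my_stage AUTO_COMPRESS=FALSE")
# Bulk load: Snowflake distributes the staged files across parallel operations.
cur.execute("""
    COPY INTO my_table
    FROM @my_stage
    FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
""")
cur.close()
conn.close()
```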