native_iceberg_compat uses hard-coded config values #1816

@parthchandra

Description

Describe the bug

In native_iceberg_compat, the initialization hard-codes the following configuration flags:

        conf.set("spark.sql.parquet.binaryAsString", "false");
        conf.set("spark.sql.parquet.int96AsTimestamp", "true");
        conf.set("spark.sql.caseSensitive", "false");
        conf.set("spark.sql.parquet.inferTimestampNTZ.enabled", "true");
        conf.set("spark.sql.legacy.parquet.nanosAsLong", "false");

These explicitly set each config to its default value, silently overriding anything the end user has configured.
We should apply the values specified by the end user instead and fall back to the defaults only when no value is provided.
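A minimal sketch of the proposed fix: copy the user's value for each key when one is set, and fall back to the Spark default otherwise. The helper name `applyWithDefaults` and the use of plain `Map`s (instead of the actual Hadoop `Configuration` / Spark session conf objects) are illustrative assumptions, not the real Comet API; the keys and defaults are the ones from the hard-coded block above.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfDefaults {
    // Hypothetical helper: for each known key, prefer the user-supplied value
    // and use the Spark default only when the user did not set the key.
    static void applyWithDefaults(Map<String, String> userConf, Map<String, String> conf) {
        String[][] keysAndDefaults = {
            {"spark.sql.parquet.binaryAsString", "false"},
            {"spark.sql.parquet.int96AsTimestamp", "true"},
            {"spark.sql.caseSensitive", "false"},
            {"spark.sql.parquet.inferTimestampNTZ.enabled", "true"},
            {"spark.sql.legacy.parquet.nanosAsLong", "false"},
        };
        for (String[] kv : keysAndDefaults) {
            // getOrDefault keeps the user's setting when present
            conf.put(kv[0], userConf.getOrDefault(kv[0], kv[1]));
        }
    }

    public static void main(String[] args) {
        Map<String, String> user = new HashMap<>();
        user.put("spark.sql.caseSensitive", "true"); // user override
        Map<String, String> conf = new HashMap<>();
        applyWithDefaults(user, conf);
        System.out.println(conf.get("spark.sql.caseSensitive"));          // user's value wins
        System.out.println(conf.get("spark.sql.parquet.binaryAsString")); // default applies
    }
}
```

With this approach, a user setting `spark.sql.caseSensitive=true` is respected, while untouched flags still receive their documented defaults.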

Steps to reproduce

No response

Expected behavior

No response

Additional context

No response

Metadata

Labels

area:scan — Parquet scan / data reading
bug — Something isn't working
native_iceberg_compat — Specific to native_iceberg_compat scan type
priority:medium — Functional bugs, performance regressions, broken features
