@JannicCutura JannicCutura commented Feb 1, 2026

Rationale for this change

Add support for accessing Iceberg tables via S3 access points, enabling cross-account access scenarios where organizations enforce access point usage instead of direct bucket access. Closes #2991.

I created this PR following the contribution guide, but it is my first PR on an open-source repo, so apologies in advance if I missed any steps.

Changes:

  • Add S3_ACCESS_POINT_PREFIX config constant (s3.access-point.<bucket>)
  • Implement _resolve_s3_access_point() in PyArrowFileIO
  • Implement _resolve_s3_access_point() in FsspecFileIO
  • Add 12 unit tests (6 per FileIO implementation)

Configuration:

s3.access-point.<bucket-name> = <access-point-alias>

For example:

from pyiceberg.catalog import load_catalog
from pyiceberg.io import S3_ACCESS_POINT_PREFIX

AWS_REGION = "eu-central-1"  # placeholder; use your catalog's region

catalog = load_catalog(
    "glue",
    **{
        "type": "glue",
        "client.region": AWS_REGION,
        # Multiple buckets, each with its own access point
        "s3.access-point.bucket-a": "alias-a-123456-s3alias",
        f"{S3_ACCESS_POINT_PREFIX}bucket-b": "alias-b-789012-s3alias",
    },
)

What

The access point alias (format: <name>-<account-id>-s3alias) is used transparently in place of the bucket name when accessing S3 objects.
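
For example, with the configuration above, a read of s3://bucket-a/warehouse/db/table/data.parquet is transparently issued against s3://alias-a-123456-s3alias/warehouse/db/table/data.parquet.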

Why

Organizations increasingly enforce S3 access point usage for cross-account data access instead of direct bucket access. This is common in enterprise environments with strict security policies.

How

Introduces an s3.access-point.<bucket> configuration property that maps bucket names to access point aliases. Both PyArrowFileIO and FsspecFileIO resolve these at runtime, rewriting paths transparently.
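
A minimal sketch of the resolution idea, assuming a standalone helper (the PR implements this as _resolve_s3_access_point on each FileIO; the signature and property handling here are simplified assumptions, not the actual code):

from urllib.parse import urlparse

S3_ACCESS_POINT_PREFIX = "s3.access-point."


def resolve_s3_access_point(location: str, properties: dict) -> str:
    """Rewrite an s3:// location to use a configured access point alias."""
    parsed = urlparse(location)
    if parsed.scheme not in ("s3", "s3a", "s3n"):
        # Non-S3 schemes (e.g. local files) are returned unchanged.
        return location
    alias = properties.get(f"{S3_ACCESS_POINT_PREFIX}{parsed.netloc}")
    if alias is None:
        # No access point configured for this bucket; keep the original.
        return location
    # Swap the bucket for its access point alias, keeping the key path.
    return parsed._replace(netloc=alias).geturl()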

Are these changes tested?

Yes.

  • 12 unit tests added (6 for PyArrowFileIO, 6 for FsspecFileIO); see the sketch after this list
  • Manually tested with a real cross-account S3 access point across two AWS accounts
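
A sketch of the shape these tests take, reusing the hypothetical resolver above (the PR's actual test names and fixtures may differ):

def test_access_point_rewrites_configured_bucket() -> None:
    props = {"s3.access-point.bucket-a": "alias-a-123456-s3alias"}
    resolved = resolve_s3_access_point("s3://bucket-a/db/t/data.parquet", props)
    assert resolved == "s3://alias-a-123456-s3alias/db/t/data.parquet"


def test_unconfigured_bucket_is_left_untouched() -> None:
    props = {"s3.access-point.bucket-a": "alias-a-123456-s3alias"}
    location = "s3://other-bucket/db/t/data.parquet"
    assert resolve_s3_access_point(location, props) == location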

Are there any user-facing changes?

No breaking changes. Existing configurations continue to work unchanged.
There is one new configuration option: s3.access-point.<bucket-name>.

@JannicCutura force-pushed the feat/s3-access-point-support branch from d49ea63 to 17cd113 (February 1, 2026, 08:28)
@JannicCutura force-pushed the feat/s3-access-point-support branch from 17cd113 to dcff1ac (February 1, 2026, 08:30)
…tion

The _resolve_s3_access_point method was incorrectly constructing paths for non-S3 schemes (such as local files) by concatenating netloc and path_suffix. This broke local paths with a leading double slash (e.g., //tmp/...), because urlparse interprets the leading // as the start of a network location and puts the first path segment into netloc.

Now the method takes the original path from parse_location as a parameter and returns it unchanged for non-S3 schemes, so local file operations work correctly.
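
The urlparse behavior behind the bug can be demonstrated in plain Python (no pyiceberg needed):

from urllib.parse import urlparse

# A leading "//" marks a network location, so the first segment of a
# local path like //tmp/... ends up in netloc rather than path.
p = urlparse("//tmp/iceberg/data.parquet")
print(p.scheme)  # ''
print(p.netloc)  # 'tmp'
print(p.path)    # '/iceberg/data.parquet'

# Rebuilding the path as netloc + path would yield "tmp/iceberg/..." and
# lose the leading slash; returning the original location unchanged for
# non-S3 schemes sidesteps this.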
