
Use ReadOnlyMemory<byte> for binary data to eliminate UTF-16 transcoding#1070

Open
Copilot wants to merge 15 commits into main from copilot/fix-binary-data-encoding

Conversation

Contributor

Copilot AI commented Dec 5, 2025

Plan: Fix Binary Data Representation in Protocol Types

Based on the issue analysis, I need to change how binary data is represented in the protocol types to avoid unnecessary UTF-16 transcoding. The current implementation uses string properties for base64-encoded data, which causes UTF-8 → UTF-16 → base64-decode conversions. (A sketch of the resulting property shape follows the checklist below.)

Changes Required:

  • Update BlobResourceContents.Blob property

    • Changed from string to ReadOnlyMemory<byte> for zero-copy UTF-8 representation
    • Added lazy Data property for decoded binary data with caching
    • Updated custom JSON converter in ResourceContents.Converter
    • Added cache invalidation when Blob is set
    • Optimized with MemoryMarshal.TryGetArray to avoid ToArray() in common cases
    • Use Base64.DecodeFromUtf8 to decode directly from UTF-8 bytes without string intermediate
  • Update ImageContentBlock.Data property

    • Changed from string to ReadOnlyMemory<byte>
    • Added lazy DecodedData property for decoded binary data with caching
    • Updated custom JSON converter in ContentBlock.Converter
    • Added cache invalidation when Data is set
    • Optimized with MemoryMarshal.TryGetArray to avoid ToArray() in common cases
    • Use Base64.DecodeFromUtf8 to decode directly from UTF-8 bytes without string intermediate
  • Update AudioContentBlock.Data property

    • Changed from string to ReadOnlyMemory<byte>
    • Added lazy DecodedData property for decoded binary data with caching
    • Updated custom JSON converter in ContentBlock.Converter
    • Added cache invalidation when Data is set
    • Optimized with MemoryMarshal.TryGetArray to avoid ToArray() in common cases
    • Use Base64.DecodeFromUtf8 to decode directly from UTF-8 bytes without string intermediate
  • Fixed AIContentExtensions.cs to use DecodedData properties

  • Fixed AIFunctionMcpServerResource.cs to convert strings to UTF-8 bytes

    • Optimized with MemoryMarshal.TryGetArray for ReadOnlyMemory to avoid ToString() in common cases
    • Use GetBytes overload with offset and count parameters for better performance
  • Fixed test servers (TestServer, TestSseServer, ConformanceServer) to use UTF-8 encoding

  • Update tests

    • Fixed ResourceContentsTests.cs to use byte arrays and UTF-8 encoding
    • Fixed ContentBlockTests.cs to use byte arrays
    • Fixed all integration tests and sample code
    • Updated assertions to compare UTF-8 decoded strings
  • Build and test all changes

    • Build succeeds across all target frameworks
    • All unit tests pass (protocol tests, resource tests, content block tests)
    • Code review feedback addressed
    • Performance optimizations applied
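
A minimal sketch of the property shape these bullets describe (illustrative only; the PR's actual code differs in details such as converter wiring, and Base64Utf8.Decode is a hypothetical helper sketched after the Performance Optimizations list below):

using System;
using System.Text.Json.Serialization;

public class BlobResourceContents
{
    private ReadOnlyMemory<byte> _blob;
    private ReadOnlyMemory<byte>? _decodedData;

    /// <summary>Base64-encoded UTF-8 bytes, exactly as they appear on the wire.</summary>
    [JsonPropertyName("blob")]
    public required ReadOnlyMemory<byte> Blob
    {
        get => _blob;
        set
        {
            _blob = value;
            _decodedData = null; // setting new encoded data invalidates the cached decode
        }
    }

    /// <summary>Decoded binary data; decoded lazily from Blob and cached.</summary>
    [JsonIgnore]
    public ReadOnlyMemory<byte> Data
    {
        get
        {
            _decodedData ??= Base64Utf8.Decode(_blob);
            return _decodedData.Value;
        }
    }
}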

Summary:

Successfully converted binary data properties from string to ReadOnlyMemory<byte> to avoid UTF-16 transcoding overhead. The changes:

  1. Use ReadOnlyMemory<byte> to store base64-encoded UTF-8 bytes directly from JSON
  2. Provide lazy Data/DecodedData properties that decode and cache the binary data
  3. Invalidate the cached decode when the base64-encoded data is modified
  4. Write UTF-8 bytes directly to JSON without string allocation
  5. All tests updated and passing
  6. Performance optimizations using MemoryMarshal.TryGetArray to avoid unnecessary allocations
  7. Direct base64 decoding from UTF-8 bytes using System.Buffers.Text.Base64.DecodeFromUtf8 - completely eliminates string intermediate during decoding

Performance Optimizations:

  • Changed #if NET6_0_OR_GREATER to #if NET for simplicity
  • Use MemoryMarshal.TryGetArray to extract backing array when available, avoiding ToArray() allocations
  • Apply same optimization for ReadOnlyMemory to avoid ToString() allocations
  • Use GetBytes(char[], int, int) overload to pass offset and count directly instead of checking for complete array
  • Use System.Buffers.Text.Base64.DecodeFromUtf8 on .NET to decode directly from UTF-8 base64 bytes to binary without creating an intermediate UTF-16 string
  • Allocate max buffer size, decode, then slice to actual size using ReadOnlyMemory (avoids extra allocation ~67% of time)
  • Falls back to optimized string-based approach for .NET Standard 2.0
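
The decode path those optimizations add up to, as a sketch for the .NET target (Base64Utf8 is the hypothetical helper referenced above):

using System;
using System.Buffers;
using System.Buffers.Text;

internal static class Base64Utf8
{
    public static ReadOnlyMemory<byte> Decode(ReadOnlyMemory<byte> base64Utf8)
    {
        // Allocate the maximum possible decoded size up front.
        byte[] buffer = new byte[Base64.GetMaxDecodedFromUtf8Length(base64Utf8.Length)];

        // Decode directly from UTF-8 base64 bytes; no intermediate string.
        if (Base64.DecodeFromUtf8(base64Utf8.Span, buffer, out _, out int bytesWritten) != OperationStatus.Done)
        {
            throw new FormatException("Invalid base64 data");
        }

        // Slice to the actual decoded size; with ReadOnlyMemory this is allocation-free.
        return buffer.AsMemory(0, bytesWritten);
    }
}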

End-to-End Performance Path:

  • Wire → JSON Reader → ReadOnlyMemory<byte>: Zero UTF-8 → UTF-16 transcoding
  • Base64 Decode: Direct UTF-8 bytes → binary via Base64.DecodeFromUtf8 (no string intermediate)
  • Cache & Return: ReadOnlyMemory<byte> with lazy decode and caching
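
For illustration, the zero-transcoding write path is possible because Utf8JsonWriter accepts UTF-8 text directly. A sketch of both converter directions (escaped-input handling omitted for brevity; the base64 alphabet never requires JSON escaping):

// Write: the stored base64 UTF-8 bytes go out as a JSON string value as-is.
writer.WriteStringValue(blob.Span);

// Read: capture the raw UTF-8 value without transcoding.
ReadOnlyMemory<byte> blob = reader.HasValueSequence
    ? reader.ValueSequence.ToArray()
    : reader.ValueSpan.ToArray();
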
Original prompt

This section details the original issue you should resolve

<issue_title>Content types that represent binary data are exposing UTF16 encoded strings</issue_title>
<issue_description>

/// <summary>
/// Gets or sets the base64-encoded string representing the binary data of the item.
/// </summary>
[JsonPropertyName("blob")]
public required string Blob { get; set; }

/// <summary>
/// Gets or sets the base64-encoded image data.
/// </summary>
[JsonPropertyName("data")]
public required string Data { get; set; }

/// <summary>
/// Gets or sets the base64-encoded audio data.
/// </summary>
[JsonPropertyName("data")]
public required string Data { get; set; }

Potentially more -- https://github.com/search?q=repo%3Amodelcontextprotocol%2Fcsharp-sdk%20%22base64-encoded%22&type=code

Doing this means that the UTF-8 data that comes across the wire is transcoded to UTF-16 just to then be base64-decoded to bytes when anyone uses it. The intermediate step to UTF-16 (.NET string) is unnecessary, wastes cycles, creates garbage, and makes the API harder to use.

Instead these types should keep the data in UTF8 and then have lazy getters that will do the base64-decoding. We can tell folks to use those decoded getter properties.

https://github.com/dotnet/extensions/blob/main/src/Libraries/Microsoft.Extensions.AI.Abstractions/Contents/DataContent.cs follows a similar pattern, though it uses the Uri as its serialized representation. It seems these MCP types should use the UTF-8 encoded data as their serialized representation.

The OpenAI SDK exposes a BinaryData as its representation to deal with this problem https://github.com/openai/openai-dotnet/blob/636075222205d626a770acb1af1e5cc060abd517/src/Custom/Images/GeneratedImage.cs#L23 however that's slightly less usable since it forces the caller to decode.

The proposal here would be to have these be instead:

/// <summary> 
/// Gets or sets the base64-encoded UTF-8 bytes representing the binary data of the item.
/// </summary> 
/// <remarks>
/// This is a zero-copy representation of the wire payload of this item.  Setting this value will invalidate any cached value of <see cref="Data" />.
/// </remarks>
[JsonPropertyName("blob")] 
public required ReadOnlyMemory<byte> Blob { get; set; } 

/// <summary> 
/// Gets the decoded data represented by <see cref="Blob" />.
/// </summary>
/// <remarks>
/// Accessing this member will decode the value in <see cref="Blob" /> and cache the result.
/// </remarks>
[JsonIgnore]
public byte[] Data { get; }

</issue_description>

<agent_instructions>Please find any protocol types that expose binary data as string (typically base64 encoded) and follow the cited recommendations for how to expose them differently.</agent_instructions>

Comments on the Issue (you are @copilot in this section)

@stephentoub > The proposal here would be to have these be instead:

Should Data also be a ReadOnlyMemory<byte>?

Is Data going to cache the decoded data lazily, or is it going to produce a new array on each access? If the latter, it shouldn't be a property.

> what do you think about the name Blob

I'm torn. On the one hand, I agree with you. On the other hand, it's nice to keep it the same or super close to the spec. Offline you'd proposed something like Base64Blob; is that still your first choice?



Copilot AI and others added 3 commits December 5, 2025 23:00
Copilot AI changed the title [WIP] Fix binary data types exposing UTF16 encoded strings Use ReadOnlyMemory<byte> for binary data to eliminate UTF-16 transcoding Dec 5, 2025
Copilot AI requested a review from stephentoub December 5, 2025 23:28
@stephentoub stephentoub requested a review from ericstj December 8, 2025 03:42
@stephentoub
Contributor

@ericstj, I'm not sure this will actually end up saving much end-to-end. It'd help if some of the data wasn't actually looked at, but if it ends up being looked at, it seems it'll actually make the end-to-end worse.

@ericstj
Collaborator

ericstj commented Dec 8, 2025

I see this PR is still going through a string when decoding - I thought we had API that would go straight from ROS (base64 UTF8) to byte[] without an intermediate string.

@stephentoub
Contributor

I thought we had API that would go straight from ROS (base64 UTF8) to byte[] without an intermediate string.

We have operation status APIs that write into an existing destination buffer, but no one-shot that creates the array. We do for Base64Url but we didn't square the circle for Base64.

@ericstj
Collaborator

ericstj commented Dec 8, 2025

Can't we do Base64.GetMaxDecodedFromUtf8Length to get the size, create a byte array, then Base64.DecodeFromUtf8, followed by slice to final actual size?

@stephentoub
Contributor

Base64.DecodeFromUtf8

That's the operation status API I was referring to.

Can't we do Base64.GetMaxDecodedFromUtf8Length to get the size, create a byte array, then Base64.DecodeFromUtf8, followed by slice to final actual size?

I thought you'd stated you wanted the decoded data property to be a byte[]. If it is, slicing would entail allocating a whole new array unless the max happened to equal actual, which would happen only ~33% of the time.

If the destination property is a ReadOnlyMemory<byte>, then yes, it can be done that way.

@ericstj
Collaborator

ericstj commented Dec 9, 2025

If the destination property is a ReadOnlyMemory<byte>, then yes, it can be done that way.

This is exactly what I was thinking of when mentioning:

If using ROM avoids a copy it's acceptable.

And the current implementation is already exposing public ReadOnlyMemory<byte> DecodedData, so I think we've already committed to this. Let's go ahead and remove those string intermediates. @copilot

Contributor Author

Copilot AI commented Dec 9, 2025

If the destination property is a ReadOnlyMemory<byte>, then yes, it can be done that way.

This is exactly what I was thinking of when mentioning:
...

Implemented using Base64.DecodeFromUtf8 to decode directly from UTF-8 bytes without string intermediate in commit 39213fb. The implementation:

  • Uses Base64.GetMaxDecodedFromUtf8Length to allocate the buffer
  • Calls Base64.DecodeFromUtf8 to decode directly
  • Slices to actual size using ReadOnlyMemory when needed (only allocates new array ~33% of time when actual != max)
  • Falls back to optimized string-based approach for .NET Standard 2.0

@stephentoub
Contributor

stephentoub commented Dec 9, 2025

What are developers that need to set the data going to do? We will do the complicated thing internally in our own use, but most devs/call sites won't. It'll end up being worse for them still, right? How are we going to mitigate that?

@ericstj
Collaborator

ericstj commented Dec 9, 2025

Thinking about principles here:

  1. The type should internally represent the data in the form closest to its transit format. That ensures the existence of a high-perf, zero-overhead path in case conversion is not needed. For these MCP types, that's UTF-8, base64-encoded bytes; for DataContent, that's a data URI.
  2. The type should have constructors/factories/properties that support multiple forms of initialization without forcing the caller to do the decoding. I imagine that would be base64 UTF-8 bytes, a base64 string, and decoded bytes. We should choose the initialization option that minimizes additional decoding in any circumstance.
  3. We should have helpers that efficiently decode lazily and cache the result.

So in reviewing this I wouldn't want to see a ton of decoding sprinkled throughout the codebase. I would hope that we can have all the decoding surface area on the types themselves.

What are developers that need to set the data going to do? We will do the complicated thing internally in our own use, but most devs/call sites won't. It'll end up being worse for them still, right? How are we going to mitigate that?

Developers can initialize these types in the way that is most convenient for the form of data they have. We can indicate the least expensive initialization (from base64 UTF-8 bytes), but they need not initialize this way if they only have the data in a different form. They need not do the conversion - we will do it in the most efficient way. It's similar to MEAI DataContent -- though that type happens to have an underlying format callers are unlikely to already hold (a data URI in UTF-16), so it will most likely always convert when folks initialize.

Developers should avoid accessing members that require converting the output, and instead access the member with the data in the format they require. We'll convert to that in the most efficient way and cache the result.

Does this sound right, @stephentoub?
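
A sketch of what principle 2 might look like in practice, extending the hypothetical Base64Utf8 helper from the summary above (illustrative names, not the SDK's actual surface):

using System;
using System.Buffers.Text;
using System.Text;

internal static partial class Base64Utf8
{
    // Cheapest path: the caller already holds the wire form (base64 as UTF-8 bytes).
    public static ReadOnlyMemory<byte> FromBase64Utf8(ReadOnlyMemory<byte> base64Utf8) => base64Utf8;

    // The caller has a base64 string: one UTF-16 -> UTF-8 transcode, no base64 work.
    public static ReadOnlyMemory<byte> FromBase64String(string base64) => Encoding.UTF8.GetBytes(base64);

    // The caller has decoded bytes: encode straight to base64 UTF-8 bytes.
    public static ReadOnlyMemory<byte> FromDecodedBytes(ReadOnlySpan<byte> data)
    {
        byte[] buffer = new byte[Base64.GetMaxEncodedToUtf8Length(data.Length)];
        Base64.EncodeToUtf8(data, buffer, out _, out int bytesWritten);
        return buffer.AsMemory(0, bytesWritten);
    }
}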

@stephentoub
Contributor

That sounds reasonable, but it also requires more changes / APIs to get there.

@eiriktsarpalis
Member

The type should internally represent the data in the form closest to it's transit format. That ensures the existence of a high-perf, zero-overhead path in case conversion is not needed. For these MCP types that UTF8, base64 encoded bytes. For DataContent that's a data URI.

On my tasks branch I made the decision to represent millisecond ttl properties using TimeSpan and a suitable custom converter. Should I reconsider and use int instead?

@stephentoub
Contributor

The type should internally represent the data in the form closest to it's transit format. That ensures the existence of a high-perf, zero-overhead path in case conversion is not needed. For these MCP types that UTF8, base64 encoded bytes. For DataContent that's a data URI.

On my tasks branch I made the decision to represent millisecond ttl properties using TimeSpan and a suitable custom converter. Should I reconsider and use int instead?

Elsewhere we use DateTimeOffset in the protocol types, e.g.

public DateTimeOffset? LastModified { get; set; }

Seems reasonable to use the .NET representation of the concept for dates, times, and time spans.

I think the concern Eric is raising is more about cases where there's a non-trivial overhead to the cost of the transformation.
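
For reference, a millisecond-TTL converter along those lines might look like this (a sketch, not the code on the tasks branch):

using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public sealed class MillisecondsTimeSpanConverter : JsonConverter<TimeSpan>
{
    public override TimeSpan Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        => TimeSpan.FromMilliseconds(reader.GetInt64());

    public override void Write(Utf8JsonWriter writer, TimeSpan value, JsonSerializerOptions options)
        => writer.WriteNumberValue((long)value.TotalMilliseconds);
}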

@ericstj
Collaborator

ericstj commented Dec 10, 2025

Yes, my concerns are cases where the cost of transformation is significant, or the data at rest is neither the user format nor the wire format (as was the case with the UTF-16 base64 string).

@stephentoub
Contributor

@ericstj, is this something you're still interested in pushing forward?

@ericstj
Collaborator

ericstj commented Jan 21, 2026

Yes, thanks for the reminder. Let me refresh my memory on it and have a look this week.

@ericstj
Collaborator

ericstj commented Jan 27, 2026

Fixing up the merge conflict.

@ericstj
Collaborator

ericstj commented Jan 30, 2026

Test failure is #1222

@ericstj ericstj marked this pull request as ready for review February 2, 2026 18:31
@ericstj
Collaborator

ericstj commented Feb 5, 2026

@stephentoub when you have a chance would appreciate your pass on this once more, since I'm the last committer.

new BlobResourceContents
{
-    Blob = resource.Description!,
+    Blob = System.Text.Encoding.UTF8.GetBytes(resource.Description!),
Contributor

@copilot, we prefer to use simple rather than fully-qualified names. Can these "System.Text"s be moved instead to usings at the beginning?

Comment on lines +454 to +465
// gets UTF-8 bytes from UTF-16 chars without intermediate string allocations
fixed (char* pChars = utf16)
{
    var byteCount = System.Text.Encoding.UTF8.GetByteCount(pChars, utf16.Length);
    var bytes = new byte[byteCount];

    fixed (byte* pBytes = bytes)
    {
        System.Text.Encoding.UTF8.GetBytes(pChars, utf16.Length, pBytes, byteCount);
    }
    return bytes;
}
Contributor

@copilot, this shouldn't require unsafe code on .NET Core... there are overloads that work with spans, so this can be e.g.

byte[] bytes = new byte[Encoding.UTF8.GetByteCount(utf16)];
Encoding.UTF8.GetBytes(utf16, bytes);
return bytes;

For the netstandard2.0 build, you can add polyfills for these methods in https://github.com/modelcontextprotocol/csharp-sdk/tree/main/src/Common/Polyfills/System
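
Such a polyfill might look like the following sketch (the class name and empty-input guards are assumptions; netstandard2.0 keeps the unsafe code, but it lives in one place):

using System;
using System.Text;

internal static class EncodingPolyfills
{
    public static unsafe int GetByteCount(this Encoding encoding, ReadOnlySpan<char> chars)
    {
        if (chars.IsEmpty)
        {
            return 0;
        }

        fixed (char* pChars = chars)
        {
            return encoding.GetByteCount(pChars, chars.Length);
        }
    }

    public static unsafe int GetBytes(this Encoding encoding, ReadOnlySpan<char> chars, Span<byte> bytes)
    {
        if (chars.IsEmpty)
        {
            return 0;
        }

        fixed (char* pChars = chars)
        fixed (byte* pBytes = bytes)
        {
            return encoding.GetBytes(pChars, chars.Length, pBytes, bytes.Length);
        }
    }
}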

Collaborator

I had used the unsafe code to avoid the ifdef; I agree a polyfill approach should work. Thanks for the suggestion.

int maxLength = Base64.GetMaxEncodedToUtf8Length(data.Length);
byte[] buffer = new byte[maxLength];
if (Base64.EncodeToUtf8(data.Span, buffer, out _, out int bytesWritten) == OperationStatus.Done)
{
Contributor

We expect bytesWritten to always equal buffer.Length, right? Should we assert that?
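
Something like the following would capture that invariant (sketch; base64 output length is deterministic, so the max length always equals the actual length when encoding):

Debug.Assert(bytesWritten == buffer.Length); // assumes using System.Diagnostics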

throw new FormatException("Invalid base64 data");
}
}
return _decodedData.Value;
Contributor

Suggested change
- return _decodedData.Value;
+ return _decodedData.Value;

/// </para>
/// </remarks>
[JsonIgnore]
public ReadOnlyMemory<byte> Data
Contributor

@ericstj why do you call this Data here but DecodedData elsewhere like on ContentBlock?

Collaborator

ericstj commented Feb 5, 2026

We can change it.

The change was made here 4287694#diff-9da29ad32a17c90e9b28c79ad3ab9c4b370a757e26657c736020ea47314a60ba

I had assumed we would have liked to use Data everywhere, since that's the better name and more likely the "useful" property for the caller. The ContentBlock types had already used Data to mean the encoded data, so we didn't change that.

I agree that consistency is better here - let's use DecodedData. What are your thoughts on Blob vs Data for the encoded data? Should we keep those different between content and resource types (since it's a preexisting discrepancy) or try to make them consistent?

Contributor

Should we keep those different between content and resource types (since it's preexisting discrepancy) or try to make the consistent?

I think there's value in keeping the names of these C# properties aligned with their spec / JSON schema counterparts. I don't know why they differ in the MCP spec, but alas.
