12 changes: 11 additions & 1 deletion content/copilot/reference/copilot-feature-matrix.md
@@ -26,10 +26,20 @@ topics:

The following table shows supported {% data variables.product.prodname_copilot_short %} features in the latest version of each IDE.

{%- comment %}
This loop generates the "Features by IDE" comparison table:
- Outer loop: Iterates through each feature from VS Code's feature list (using VS Code as the canonical source)
- Inner loop: For each feature, iterates through all IDEs to check support in their latest version
- Gets the latest version using ideEntry[1].versions | first
- Looks up the support level for that feature in that version
- Outputs ✓ (supported), P (preview), or ✗ (not supported)
Example row: | Agent mode | ✓ | ✓ | P | ✗ | ... |
{%- endcomment %}

| Feature{%- for entry in tables.copilot.copilot-matrix.ides %} | {{ entry[0] }}{%- endfor %} |
|:----{%- for entry in tables.copilot.copilot-matrix.ides %}|:----:{%- endfor %}|
{%- for featureEntry in tables.copilot.copilot-matrix.ides["VS Code"].features %}
| {{ featureEntry[0] }}{%- for ideEntry in tables.copilot.copilot-matrix.ides %}{%- assign latestVersion = ideEntry[1].versions | last %}{%- assign supportLevel = ideEntry[1].features[featureEntry[0]][latestVersion] %} | {%- case supportLevel -%}{%- when "supported" %}✓{%- when "preview" %}P{%- else %}✗{%- endcase -%}{%- endfor %} |
| {{ featureEntry[0] }}{%- for ideEntry in tables.copilot.copilot-matrix.ides %}{%- assign latestVersion = ideEntry[1].versions | first %}{%- assign supportLevel = ideEntry[1].features[featureEntry[0]][latestVersion] %} | {%- case supportLevel -%}{%- when "supported" %}✓{%- when "preview" %}P{%- else %}✗{%- endcase -%}{%- endfor %} |
{%- endfor %}

{% endides %}
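To make the one-line Liquid above easier to follow, here is the same nested-loop logic sketched in TypeScript. This is illustrative only; the `Ide` shape is an assumption inferred from the `tables.copilot.copilot-matrix.ides` lookups above, not an API that exists in the repository.

```ts
// Sketch of the comparison-table loop, assuming this data shape:
//   ides: { [ideName]: { versions: string[]; features: { [feature]: { [version]: string } } } }
type Ide = { versions: string[]; features: Record<string, Record<string, string>> }

function buildMatrix(ides: Record<string, Ide>): string {
  const names = Object.keys(ides)
  let out = `| Feature | ${names.join(' | ')} |\n`
  out += `|:----${'|:----:'.repeat(names.length)}|\n`
  // Outer loop: VS Code's feature list is the canonical set of rows.
  for (const feature of Object.keys(ides['VS Code'].features)) {
    // Inner loop: check each IDE's support level in its latest version.
    const cells = names.map((name) => {
      const latestVersion = ides[name].versions[0] // `versions | first` = newest entry
      const support = ides[name].features[feature]?.[latestVersion]
      return support === 'supported' ? '✓' : support === 'preview' ? 'P' : '✗'
    })
    out += `| ${feature} | ${cells.join(' | ')} |\n`
  }
  return out
}
```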
2 changes: 2 additions & 0 deletions data/release-notes/enterprise-server/3-16/13.yml
@@ -27,6 +27,8 @@ sections:

      Now, if an administrator sets the instance's `skip_rebase_commit_generation_from_rebase_merge_settings` configuration variable to `true`, the "Allow rebase merging" option in a repository's pull request settings becomes the source of truth for whether rebase commits are generated when mergeability is checked.
  known_issues:
    - |
      When applying an enterprise security configuration to all repositories (for example, enabling secret scanning or code scanning across all repositories), the system immediately enqueues enablement jobs for every organization in the enterprise simultaneously. For enterprises with a large number of repositories, this can result in significant system load and potential performance degradation. If you manage a large enterprise with many organizations and repositories, we recommend applying security configurations at the organization level rather than at the enterprise level in the UI. This allows you to enable security features incrementally and monitor system performance as you roll out changes.
    - |
      During an upgrade of GitHub Enterprise Server, custom firewall rules are removed. If you use custom firewall rules, you must reapply them after upgrading.
- |
2 changes: 2 additions & 0 deletions data/release-notes/enterprise-server/3-17/10.yml
@@ -33,6 +33,8 @@ sections:

      Now, if an administrator sets the instance's `skip_rebase_commit_generation_from_rebase_merge_settings` configuration variable to `true`, the "Allow rebase merging" option in a repository's pull request settings becomes the source of truth for whether rebase commits are generated when mergeability is checked.
  known_issues:
    - |
      When applying an enterprise security configuration to all repositories (for example, enabling secret scanning or code scanning across all repositories), the system immediately enqueues enablement jobs for every organization in the enterprise simultaneously. For enterprises with a large number of repositories, this can result in significant system load and potential performance degradation. If you manage a large enterprise with many organizations and repositories, we recommend applying security configurations at the organization level rather than at the enterprise level in the UI. This allows you to enable security features incrementally and monitor system performance as you roll out changes.
    - |
      During an upgrade of GitHub Enterprise Server, custom firewall rules are removed. If you use custom firewall rules, you must reapply them after upgrading.
- |
2 changes: 2 additions & 0 deletions data/release-notes/enterprise-server/3-18/4.yml
@@ -43,6 +43,8 @@ sections:

      Now, if an administrator sets the instance's `skip_rebase_commit_generation_from_rebase_merge_settings` configuration variable to `true`, the "Allow rebase merging" option in a repository's pull request settings becomes the source of truth for whether rebase commits are generated when mergeability is checked.
  known_issues:
    - |
      When applying an enterprise security configuration to all repositories (for example, enabling secret scanning or code scanning across all repositories), the system immediately enqueues enablement jobs for every organization in the enterprise simultaneously. For enterprises with a large number of repositories, this can result in significant system load and potential performance degradation. If you manage a large enterprise with many organizations and repositories, we recommend applying security configurations at the organization level rather than at the enterprise level in the UI. This allows you to enable security features incrementally and monitor system performance as you roll out changes.
    - |
      During an upgrade of GitHub Enterprise Server, custom firewall rules are removed. If you use custom firewall rules, you must reapply them after upgrading.
- |
2 changes: 2 additions & 0 deletions data/release-notes/enterprise-server/3-19/1.yml
@@ -48,6 +48,8 @@ sections:
    - |
      You can configure multiple data disks to host MySQL and repository data. This capability is currently in public preview and is applicable only for standalone and high availability topologies. It does not apply to cluster topologies. For more information, see [AUTOTITLE](/admin/monitoring-and-managing-your-instance/multiple-data-disks/configuring-multiple-data-disks). [Updated: 2026-01-19]
  known_issues:
    - |
      When applying an enterprise security configuration to all repositories (for example, enabling secret scanning or code scanning across all repositories), the system immediately enqueues enablement jobs for every organization in the enterprise simultaneously. For enterprises with a large number of repositories, this can result in significant system load and potential performance degradation. If you manage a large enterprise with many organizations and repositories, we recommend applying security configurations at the organization level rather than at the enterprise level in the UI. This allows you to enable security features incrementally and monitor system performance as you roll out changes.
    - |
      Upgrading or hotpatching to 3.19.1 may fail on very old nodes that have been continuously upgraded from releases older than 2021 (for example, 2.17). If this issue occurs, you will see log entries prefixed with `invalid secret` in `ghe-config.log`. If you are running nodes this old, we recommend not upgrading to 3.19.1.
      If you must hotpatch to 3.19.1, first run `ghe-config 'secrets.session-manage' | tr -d '\n' | wc -c`. If the output is less than 64, run `ghe-config --unset 'secrets.session-manage'` and `ghe-config-apply` before you start the hotpatch. You can also run these same commands after the hotpatch to recover from the failure. [Updated: 2026-01-12]
7 changes: 7 additions & 0 deletions package-lock.json

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions package.json
@@ -157,6 +157,7 @@
  "dependencies": {
    "@elastic/elasticsearch": "8.19.1",
    "@github/failbot": "0.8.3",
    "@github/hydro-analytics-client": "^2.3.3",
    "@gr2m/gray-matter": "4.0.3-with-pr-137",
    "@horizon-rs/language-guesser": "0.1.1",
    "@octokit/graphql": "9.0.1",
5 changes: 5 additions & 0 deletions src/events/components/events.ts
@@ -6,6 +6,7 @@ import { isLoggedIn } from '@/frame/components/hooks/useHasAccount'
import { getExperimentVariationForContext } from './experiments/experiment'
import { EventType, EventPropsByType } from '../types'
import { isHeadless } from './is-headless'
import { sendHydroAnalyticsEvent, getOctoClientId } from './hydro-analytics'

const COOKIE_NAME = '_docs-events'

@@ -114,6 +115,7 @@ export function sendEvent<T extends EventType>({
    content_type: getMetaContent('page-content-type'),
    status: Number(getMetaContent('status') || 0),
    is_logged_in: isLoggedIn(),
    octo_client_id: getOctoClientId(),

    // Device information
    // os, os_version, browser, browser_version:
@@ -152,6 +154,9 @@ export function sendEvent<T extends EventType>({

  queueEvent(body)

  // Send events to hydro-analytics-client for cross-subdomain tracking
  sendHydroAnalyticsEvent(body)

  if (type === EventType.exit) {
    flushQueue()
  }
98 changes: 98 additions & 0 deletions src/events/components/hydro-analytics.ts
@@ -0,0 +1,98 @@
/**
 * Integration with @github/hydro-analytics-client for cross-subdomain tracking.
 *
 * This sends events to collector.githubapp.com alongside our existing analytics.
 * The client auto-collects: page, title, client_id, referrer, user_agent,
 * screen_resolution, browser_resolution, browser_languages, pixel_ratio, timestamp, tz_seconds
 *
 * We send all other docs-specific context fields, including:
 * - path_language, path_version, path_product, path_article
 * - page_document_type, page_type, content_type
 * - color_mode_preference, is_logged_in, experiment_variation, is_headless
 * - event_id, page_event_id, octo_client_id
 * - Plus any event-specific properties (exit metrics, link_url, etc.)
 *
 * All functions are wrapped in try/catch to ensure that issues with the
 * hydro-analytics-client or collector don't affect our primary analytics.
 */

import {
  AnalyticsClient,
  getOrCreateClientId as hydroGetOrCreateClientId,
} from '@github/hydro-analytics-client'
import { EventType } from '../types'

/**
 * Safe wrapper around hydro-analytics-client's getOrCreateClientId.
 * Returns undefined if the client fails for any reason.
 */
export function getOctoClientId(): string | undefined {
  try {
    return hydroGetOrCreateClientId()
  } catch (error) {
    console.log('hydro-analytics-client getOctoClientId error:', error)
    return undefined
  }
}

const hydroClient = new AnalyticsClient({
  collectorUrl: 'https://collector.githubapp.com/docs/collect',
  clientId: getOctoClientId(),
})

// Fields that hydro-analytics-client already collects automatically
const AUTO_COLLECTED_FIELDS = new Set([
  'referrer',
  'user_agent',
  'viewport_width',
  'viewport_height',
  'screen_width',
  'screen_height',
  'pixel_ratio',
  'timezone',
  'user_language',
  'href',
  'title',
])

/**
 * Flatten a nested event body into a single-level context object,
 * excluding fields that hydro-analytics-client already auto-collects.
 */
export function prepareData(body: Record<string, unknown>): {
  type: string
  context: Record<string, string>
} {
  const { context: nestedContext, type, ...rest } = body
  const flattened = {
    ...((nestedContext as Record<string, unknown>) || {}),
    ...rest,
  }
  const context = Object.fromEntries(
    Object.entries(flattened)
      .filter(([, value]) => value != null)
      .filter(([key]) => !AUTO_COLLECTED_FIELDS.has(key))
      .map(([key, value]) => [key, String(value)]),
  )
  return { type: typeof type === 'string' ? type : 'unknown', context }
}

/**
 * Send an event to hydro-analytics-client.
 * For page events, sends as a page view. For all other events, sends as a custom event.
 *
 * This is wrapped in try/catch to ensure that if the hydro collector is down
 * or errors, it doesn't affect our primary analytics pipeline.
 */
export function sendHydroAnalyticsEvent(body: Record<string, unknown>): void {
  try {
    const { type, context } = prepareData(body)
    if (type === EventType.page) {
      hydroClient.sendPageView(context)
    } else {
      hydroClient.sendEvent(type, context)
    }
  } catch (error) {
    console.log('hydro-analytics-client error:', error)
  }
}
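As a rough usage sketch, here is a hypothetical call shaped like the body `sendEvent` queues; the field values are invented for illustration and are not code from this PR:

```ts
// Hypothetical event body, mirroring what sendEvent passes in:
sendHydroAnalyticsEvent({
  type: 'exit',
  context: {
    event_id: 'e-123',
    path_product: 'actions',
    referrer: 'https://github.com', // dropped: already auto-collected by the client
  },
  exit_scroll_length: 0.4, // stringified to '0.4'
})
// prepareData flattens this to { event_id: 'e-123', path_product: 'actions',
// exit_scroll_length: '0.4' }; since type !== 'page', the client sends it to
// collector.githubapp.com as a custom 'exit' event via hydroClient.sendEvent.
```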
5 changes: 5 additions & 0 deletions src/events/lib/schema.ts
@@ -124,6 +124,11 @@ const context = {
    type: 'boolean',
    description: 'The cookie value of staffonly',
  },
  octo_client_id: {
    type: 'string',
    description:
      'The _octo cookie client ID for cross-subdomain tracking with github.com analytics.',
  },

  // Device information
  os: {
102 changes: 102 additions & 0 deletions src/events/tests/hydro-analytics.ts
@@ -0,0 +1,102 @@
import { describe, expect, test } from 'vitest'
import { prepareData } from '../components/hydro-analytics'

describe('prepareData', () => {
  test('flattens nested context into top level', () => {
    const body = {
      type: 'page',
      context: {
        event_id: '123',
        path_language: 'en',
      },
    }
    const result = prepareData(body)
    expect(result.type).toBe('page')
    expect(result.context.event_id).toBe('123')
    expect(result.context.path_language).toBe('en')
  })

  test('includes top-level props alongside context', () => {
    const body = {
      type: 'exit',
      context: { event_id: '123' },
      exit_scroll_length: 0.75,
    }
    const result = prepareData(body)
    expect(result.type).toBe('exit')
    expect(result.context.event_id).toBe('123')
    expect(result.context.exit_scroll_length).toBe('0.75')
  })

  test('filters out auto-collected fields', () => {
    const body = {
      type: 'page',
      context: {
        event_id: '123',
        referrer: 'https://google.com',
        user_agent: 'Mozilla/5.0',
        viewport_width: 1024,
        title: 'Test Page',
        path_language: 'en',
      },
    }
    const result = prepareData(body)
    expect(result.context.event_id).toBe('123')
    expect(result.context.path_language).toBe('en')
    expect(result.context.referrer).toBeUndefined()
    expect(result.context.user_agent).toBeUndefined()
    expect(result.context.viewport_width).toBeUndefined()
    expect(result.context.title).toBeUndefined()
  })

  test('filters out null and undefined values', () => {
    const body = {
      type: 'page',
      context: {
        event_id: '123',
        path_language: null,
        path_version: undefined,
        path_product: 'actions',
      },
    }
    const result = prepareData(body)
    expect(result.context.event_id).toBe('123')
    expect(result.context.path_product).toBe('actions')
    expect(result.context.path_language).toBeUndefined()
    expect(result.context.path_version).toBeUndefined()
  })

  test('converts all values to strings', () => {
    const body = {
      type: 'exit',
      context: {
        status: 200,
        is_logged_in: true,
        is_headless: false,
      },
    }
    const result = prepareData(body)
    expect(result.context.status).toBe('200')
    expect(result.context.is_logged_in).toBe('true')
    expect(result.context.is_headless).toBe('false')
  })

  test('defaults type to unknown if not a string', () => {
    const body = {
      type: 123,
      context: { event_id: '123' },
    }
    const result = prepareData(body)
    expect(result.type).toBe('unknown')
  })

  test('handles missing context gracefully', () => {
    const body = {
      type: 'page',
      exit_scroll_length: 0.5,
    }
    const result = prepareData(body)
    expect(result.type).toBe('page')
    expect(result.context.exit_scroll_length).toBe('0.5')
  })
})
1 change: 1 addition & 0 deletions src/events/types.ts
@@ -41,6 +41,7 @@ export type EventProps = {
  is_logged_in: boolean
  dotcom_user: string
  is_staff: boolean
  octo_client_id?: string
  os: string
  os_version: string
  browser: string
4 changes: 3 additions & 1 deletion src/frame/middleware/helmet.ts
@@ -28,7 +28,9 @@ const DEFAULT_OPTIONS = {
      prefetchSrc: ["'self'"],
      // When doing local dev, especially in Safari, you need to add `ws:`
      // which NextJS uses for the hot module reloading.
      connectSrc: ["'self'", isDev && 'ws:'].filter(Boolean) as string[],
      connectSrc: ["'self'", 'https://collector.githubapp.com', isDev && 'ws:'].filter(
        Boolean,
      ) as string[],
      fontSrc: ["'self'", 'data:'],
      imgSrc: [...GITHUB_DOMAINS, 'data:', 'placehold.it'],
      objectSrc: ["'self'"],
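Assuming helmet renders each directive's sources into the `Content-Security-Policy` header verbatim (its usual behavior), the practical effect of the `connectSrc` change is sketched below; without the new origin, the browser's CSP would block the analytics fetch to the collector.

```ts
// Sketch, not PR code: how the directive resolves in each environment.
const isDev = process.env.NODE_ENV === 'development'
const connectSrc = ["'self'", 'https://collector.githubapp.com', isDev && 'ws:'].filter(
  Boolean,
) as string[]
// Rendered header, production:  connect-src 'self' https://collector.githubapp.com
// Rendered header, development: connect-src 'self' https://collector.githubapp.com ws:
//   ('ws:' keeps Next.js hot-module-reload websockets working in local dev)
```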
14 changes: 14 additions & 0 deletions src/graphql/data/fpt/changelog.json
@@ -1,4 +1,18 @@
[
  {
    "schemaChanges": [
      {
        "title": "The GraphQL schema includes these changes:",
        "changes": [
          "<p>Field <code>Issue.projectItems</code> changed type from <code>ProjectV2ItemConnection!</code> to <code>ProjectV2ItemConnection</code></p>",
          "<p>Field <code>PullRequest.projectItems</code> changed type from <code>ProjectV2ItemConnection!</code> to <code>ProjectV2ItemConnection</code></p>"
        ]
      }
    ],
    "previewChanges": [],
    "upcomingChanges": [],
    "date": "2026-01-28"
  },
  {
    "schemaChanges": [
      {
4 changes: 2 additions & 2 deletions src/graphql/data/fpt/schema.docs.graphql
@@ -19282,7 +19282,7 @@ type Issue implements Assignable & Closable & Comment & Deletable & Labelable &
    Returns the last _n_ elements from the list.
    """
    last: Int
  ): ProjectV2ItemConnection!
  ): ProjectV2ItemConnection

  """
  Find a project by number.
@@ -41103,7 +41103,7 @@ type PullRequest implements Assignable & Closable & Comment & Labelable & Lockab
    Returns the last _n_ elements from the list.
    """
    last: Int
  ): ProjectV2ItemConnection!
  ): ProjectV2ItemConnection

  """
  Find a project by number.
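Because `projectItems` changed from `ProjectV2ItemConnection!` to `ProjectV2ItemConnection`, the connection can now come back `null`, so clients must guard the lookup. A minimal client-side sketch (the types and field shapes here are assumptions for illustration, not part of this PR):

```ts
// projectItems is now nullable; guard before dereferencing nodes.
type ProjectV2ItemConnection = { nodes: Array<{ id: string }> }
type Issue = { projectItems: ProjectV2ItemConnection | null } // previously non-null

function projectItemIds(issue: Issue): string[] {
  // Optional chaining handles the newly possible null connection.
  return issue.projectItems?.nodes.map((n) => n.id) ?? []
}
```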