Bug 267291 (Status: NEW)
Umbrella: Using Canvas image sources between different canvases and canvas types is slow
https://bugs.webkit.org/show_bug.cgi?id=267291
Kimmo Kinnunen
Reported 2024-01-09 11:46:33 PST
Umbrella: Using Canvas image sources between different canvases and canvas types is slow.
Covers: Video, Offscreen canvas, WebGL, 2DContext, Bitmaprenderer.
Radar WebKit Bug Importer
Comment 1 2024-01-12 12:16:34 PST
Yegor
Comment 2 2024-05-01 09:39:06 PDT
Here are two repro cases:
Plain JS repro: https://jsfiddle.net/yjbanov/0vurdt42/8/
Flutter Web repro: https://image-bitmap-stress-test.web.app/
Fixing this will help us make Flutter web apps run much more smoothly in Safari and consume less memory.
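A minimal sketch of the cross-canvas pattern these repros appear to stress (an assumed shape; the actual fiddles may differ): snapshot one canvas into an ImageBitmap each frame and consume it on another canvas. `createImageBitmap`, `drawImage`, and `ImageBitmap.close` are standard web APIs; the `pumpFrame` wrapper name is illustrative.

```javascript
// One iteration of the cross-canvas stress pattern: snapshot the source
// canvas into an ImageBitmap, draw it onto a second canvas, then release
// the bitmap's backing store promptly.
async function pumpFrame(sourceCanvas, destCtx) {
  const bitmap = await createImageBitmap(sourceCanvas); // snapshot the source
  destCtx.drawImage(bitmap, 0, 0);                      // consume on another canvas
  bitmap.close();                                       // free the backing early
}
```

Running this in a tight loop is where per-frame copies or format conversions between canvas types show up as the slowdown this umbrella tracks.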
Simon Fraser (smfr)
Comment 3 2024-12-13 10:26:45 PST
https://jsfiddle.net/yjbanov/0vurdt42/8/ seems fast now, but https://image-bitmap-stress-test.web.app/ is still slower in Safari than in Chrome.
Yegor
Comment 4 2024-12-13 12:38:29 PST
This is exciting! We'll kick the tires in the newest Safari. This variant is still drastically slower compared to Chromium-based browsers: https://jsfiddle.net/6e4dz57v/4/
Yegor
Comment 5 2024-12-13 12:38:59 PST
The difference may be in transferToImageBitmap vs createImageBitmap
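The two paths the comment contrasts can be sketched as follows. Both calls are standard web APIs; the wrapper names are illustrative, and which path is faster in a given engine is exactly what this bug is about.

```javascript
// Path A: transferToImageBitmap() detaches the OffscreenCanvas backing
// store and hands it over as an ImageBitmap; the canvas must be redrawn
// before its next use, but no copy is required in principle.
function frameViaTransfer(offscreen) {
  return offscreen.transferToImageBitmap();
}

// Path B: createImageBitmap() takes a (possibly copied) snapshot and
// returns a promise; the canvas contents remain valid afterwards.
function frameViaSnapshot(offscreen) {
  return createImageBitmap(offscreen);
}
```

If the transfer path is fast but the snapshot path is not (or vice versa), that would localize the slowdown to one of these two bitmap-production routes.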
Yury Yarashevich
Comment 6 2025-06-04 14:10:48 PDT
I'd like to contribute two additional demos that cover scenarios not currently represented in this ticket:
1. The efficiency of the `HTMLCanvasElement.captureStream` implementation: https://github.com/mstyura/webkit-issues/blob/main/video-transform-pipeline/README.md#mirror-4k-canvas-to-video-element-via-htmlcanvaselementcapturestream
2. The efficiency of `VideoTrackGenerator` when producing frames via `new VideoFrame(offscreenCanvas)`: https://github.com/mstyura/webkit-issues/blob/main/video-transform-pipeline/README.md#produce-a-mediastreamtrack-via-videotrackgenerator-from-an-offscreencanvas-rendered-in-a-worker
Simon Taylor
Comment 7 2025-08-26 02:22:36 PDT
One thing I've noticed in the WebKit source: I believe any use of IOSurface sets kIOSurfaceCacheMode to write-combined:
https://github.com/WebKit/WebKit/blob/46f0a4c36be821652d1582cbfabe5e51cfee5176/Source/WebCore/platform/graphics/cocoa/IOSurface.mm#L200

This is the default in AVAssetWriter too, and in a separate project I discovered that reading those buffers on the CPU had a large performance penalty (10x or more). There's a comment in the iOS SDK's IOSurfaceTypes.h header that says this:

/*
** Note: Write-combined memory is optimized for resources that the CPU writes into, but never reads.
** On some implementations, writes may bypass caches, which avoids cache pollution. Read actions may perform very poorly.
** Applications should investigate changing the cache mode only if writing to normally cached buffers is known to cause
** performance issues due to cache pollution, as write-combined memory can have surprising performance pitfalls.
*/

Perhaps it's worth someone at Apple reviewing the reasoning for setting write-combined here, and whether the benefit is worth the penalty for CPU access.
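From the web-content side, the readback cost the comment describes would surface in APIs like `getImageData`, which pulls canvas pixels back into CPU memory. A hedged timing sketch (the 10x figure above is the commenter's own measurement, not reproduced here; absolute numbers depend entirely on the engine and the surface's cache mode):

```javascript
// Average the cost of one CPU readback from a canvas. getImageData forces
// the backing-store contents into CPU-accessible memory -- the read path
// the IOSurfaceTypes.h comment warns can "perform very poorly" when the
// surface is mapped write-combined.
function averageReadbackMs(canvas, iterations = 100) {
  const ctx = canvas.getContext("2d");
  const t0 = performance.now();
  for (let i = 0; i < iterations; i++) {
    ctx.getImageData(0, 0, canvas.width, canvas.height);
  }
  return (performance.now() - t0) / iterations; // ms per readback
}
```

Comparing this number across engines (or across surface sizes) would make the suspected cache-mode penalty measurable from script, without access to the IOSurface internals.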