Add zoom functionality #20
Conversation
One question about how this interacts with the `onIdle` listener; otherwise this is looking good to me.

How does this interact with the `onIdle` listener? It seems like `onIdle` might be invoked multiple times while zooming through the map, depending on how quickly tiles are loaded (e.g., on initial load, after zooming to the 1st level, after zooming to the 2nd level, etc.). Or are we basically fine relying on the fact that the listener is instantiated after the zooming is kicked off?
Good catch. I had forgotten about this. Looking at this now, I'm wondering if moving the check for the idle state into the for loop that performs the zoom operations could provide more granular performance measurements. Specifically, it would allow us to measure the time it takes for the map to become idle after each individual zoom operation, rather than after all zoom operations have been performed.
I'm thinking of something along these lines:

```python
if zoom_level:
    for _ in range(zoom_level):
        if operation == 'zoom_in':
            page.keyboard.press('=')
        elif operation == 'zoom_out':
            page.keyboard.press('-')
        # Start timer
        page.evaluate(
            """
            window._timerStart = performance.now();
            """
        )
        # Wait for the map to be idle and then stop timer
        page.evaluate(
            """
            () => {
                window._error = null;
                if (!window._map) {
                    window._error = 'window._map does not exist';
                    console.error(window._error);
                    window._timerEnd = performance.now();
                }
                return new Promise((resolve, reject) => {
                    const THRESHOLD = 5000;
                    // timeout after THRESHOLD ms if idle event is not seen
                    setTimeout(() => {
                        window._error = `No idle events seen after ${THRESHOLD}ms`;
                        reject(window._error);
                    }, THRESHOLD);
                    window._map.onIdle(() => {
                        console.log('window._map.onIdle callback called');
                        window._timerEnd = performance.now();
                        resolve();
                    });
                }).catch((error) => {
                    window._error = 'Error in page.evaluate: ' + error;
                    console.error(window._error);
                    window._timerEnd = performance.now();
                });
            }
            """
        )
        if error := page.evaluate('window._error'):
            raise Exception(error)
        timer_end = page.evaluate('window._timerEnd')
        timer_start = page.evaluate('window._timerStart')
        # Record system metrics for each zoom operation
        data = {
            'request_data': filtered_request_data,
            'timer_start': timer_start,
            'timer_end': timer_end,
            'total_duration_in_ms': timer_end - timer_start,
            'playwright_python_version': playwright_python_version,
            'provider': provider_name,
            'browser_name': playwright.chromium.name,
            'browser_version': browser.version,
            'operation': operation,
            'zoom_level': zoom_level,
        }
        all_data.append(data)
```

With this change, we would have a separate set of metrics for each zoom operation, which might affect how we analyze and interpret the performance data. Do we want this level of granularity?
If we go this route, should we also separately listen for the initial (pre-zoom) …?

Would it be possible to make this choice (metrics per operation vs. metrics based on sum of operations) easy to configure? I could imagine a helper function like …
Regarding both of these comments, I think it would be best to handle the data filtering outside of the main data collection script. This would make it easy to change which metrics are considered important without needing to re-run the benchmarks. For the original requests data, I separated the request extraction from …

Specifically for the choice of per operation vs. sum of operations, this would be simple to configure as a post-processing step if the data produced directly from running the benchmarks were something like …, where the number of numeric keys equals the number of zoom levels.
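To make that post-processing idea concrete, here is a minimal sketch (the function name `summarize_durations` and the record shape are hypothetical, not the benchmark's actual schema): per-zoom-step timing records keyed by numeric zoom step can either be kept per operation or collapsed into a single total, controlled by a flag.

```python
def summarize_durations(per_step: dict, per_operation: bool = True) -> dict:
    """Collapse per-zoom-step timing records into the requested granularity.

    ``per_step`` maps a zoom step (0, 1, 2, ...) to a record containing
    ``timer_start`` and ``timer_end`` in milliseconds.
    """
    durations = {
        step: rec['timer_end'] - rec['timer_start']
        for step, rec in sorted(per_step.items())
    }
    if per_operation:
        # One duration per zoom operation
        return durations
    # Sum across all zoom operations
    return {'total_duration_in_ms': sum(durations.values())}


# Example with two zoom steps
records = {
    0: {'timer_start': 100.0, 'timer_end': 350.0},
    1: {'timer_start': 360.0, 'timer_end': 500.0},
}
summarize_durations(records)                       # {0: 250.0, 1: 140.0}
summarize_durations(records, per_operation=False)  # {'total_duration_in_ms': 390.0}
```

Because this runs over already-collected records, switching between the two granularities never requires re-running the benchmarks.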
@maxrjones, I've enabled storing Chrome traces in S3. The RMSE values that I am getting are not close to zero when I run the benchmarks on my local Linux machine. Is this expected? If so, does this mean the baselines are specifically tied to the machine they are generated from? You can view these exact plots by running `python scripts/analysis.py --timestamp 2023-07-31T22-32-38 --run 1 --s3-bucket s3://carbonplan-scratch`.
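For context on why machine differences would inflate the comparison, here is a generic sketch of RMSE between a baseline duration series and a new run (the actual computation lives in `scripts/analysis.py`; this standalone `rmse` helper is an illustration, not that script's code). RMSE is near zero only when the two runs track each other closely, so a uniformly slower machine shifts it by roughly the per-sample offset.

```python
import math


def rmse(baseline: list[float], run: list[float]) -> float:
    """Root-mean-square error between two equal-length duration series (ms)."""
    if len(baseline) != len(run):
        raise ValueError('series must have equal length')
    return math.sqrt(
        sum((b - r) ** 2 for b, r in zip(baseline, run)) / len(baseline)
    )


# Identical runs give 0.0; a machine uniformly ~50 ms slower gives 50.0
rmse([200.0, 300.0], [200.0, 300.0])  # 0.0
rmse([200.0, 300.0], [250.0, 350.0])  # 50.0
```

Under this reading, baselines generated on one machine would not be directly comparable to runs from different hardware.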
main.py (Outdated)
```python
chrome_args = [
    '--enable-features=Vulkan,UseSkiaRenderer',
    '--use-vulkan=swiftshader',
    '--enable-unsafe-webgpu',
    '--disable-vulkan-fallback-to-gl-for-testing',
    '--ignore-gpu-blocklist',
    '--use-angle=vulkan',
]
browser = await playwright.chromium.launch(args=chrome_args)
```
I was able to ensure GPU rasterization. I confirmed this by launching Chromium with `headless=False` and visiting the `chrome://gpu` page. I get the following:

```
Graphics Feature Status
Canvas: Hardware accelerated
Canvas out-of-process rasterization: Disabled
Direct Rendering Display Compositor: Disabled
Compositing: Hardware accelerated
Multiple Raster Threads: Enabled
OpenGL: Enabled
Rasterization: Hardware accelerated
Raw Draw: Disabled
Skia Graphite: Disabled
Video Decode: Hardware accelerated
Video Encode: Software only. Hardware acceleration disabled
Vulkan: Enabled
WebGL: Hardware accelerated
WebGL2: Hardware accelerated
WebGPU: Software only, hardware acceleration unavailable
```
@andersy005, can you please clarify the source of this report? Is this local or on an EC2 instance?
This is from my local Linux machine.
This seems possible. There are two options for progressing with this approach: …

I will explore option 2.
I'm going to merge this shortly. I'm open to addressing any additional feedback in separate pull requests.
This PR introduces a feature to the benchmarks that allows for configurable zoom functionality. While executing the benchmarks, users can now specify the desired operation (`zoom_in` or `zoom_out`) along with the corresponding `zoom level`.

When dealing with the request data, should we filter out requests that do not correspond to the pyramid level? For example, should we exclude requests that are irrelevant to the specified zoom level?
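One hedged sketch of that filtering, assuming slippy-map-style `/{z}/{x}/{y}` tile URLs (the actual request format and the helper name `filter_requests_by_zoom` are assumptions for illustration):

```python
import re

# Matches a trailing /{z}/{x}/{y} path, optionally followed by an extension
TILE_RE = re.compile(r'/(\d+)/(\d+)/(\d+)\D*$')


def filter_requests_by_zoom(request_urls: list[str], zoom_level: int) -> list[str]:
    """Keep only tile requests whose z component matches ``zoom_level``."""
    kept = []
    for url in request_urls:
        m = TILE_RE.search(url)
        if m and int(m.group(1)) == zoom_level:
            kept.append(url)
    return kept


urls = [
    'https://tiles.example.com/3/2/1.png',
    'https://tiles.example.com/4/5/6.png',
]
filter_requests_by_zoom(urls, 4)  # ['https://tiles.example.com/4/5/6.png']
```

Running this as a post-processing step over the recorded request data would keep the collection script unchanged while letting the relevance criterion evolve separately.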