  1. Understanding CUDA Memory Usage — PyTorch 2.9 …

    Aug 23, 2023 · To debug CUDA memory use, PyTorch provides a way to generate memory snapshots that record the state of allocated CUDA memory at any point in time, and optionally …
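The snapshot workflow this result describes can be sketched as follows. This is a minimal sketch, assuming a PyTorch build with CUDA support; `_record_memory_history` and `_dump_snapshot` are the private helpers in `torch.cuda.memory` that the docs page covers, and the output filename is arbitrary:

```python
import torch

# Snapshots only make sense when a CUDA device is present, so guard the demo.
if torch.cuda.is_available():
    # Start recording allocation events, keeping a stack trace for each block.
    torch.cuda.memory._record_memory_history(max_entries=100_000)

    # Do some work that allocates GPU memory.
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x

    # Dump the recorded state to a pickle for later visualization.
    torch.cuda.memory._dump_snapshot("snapshot.pickle")

    # Stop recording.
    torch.cuda.memory._record_memory_history(enabled=None)
```

On a CPU-only machine the sketch is a no-op; with a GPU it leaves a `snapshot.pickle` file that the visualization tooling described in the other results can load.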

  2. `torch.cuda.memory._record_memory_history()`: Warning: …

    Aug 30, 2023 · There is a bug in the _record_memory_history_legacy pathway. I fixed it, but I am going to update the blog post so people don't hit it. The new arguments to …

  3. Visualize and understand GPU memory in PyTorch - Hugging Face

    Dec 24, 2024 · Running this code generates a profile.pkl file that contains a history of GPU memory usage during execution. You can visualize this history at: …

  4. Visualizing PyTorch memory usage over time - Zach’s Blog

    Dec 9, 2022 · Memory traces supplement snapshot information with trace events related to memory allocation. They show the series of allocation events that led up to an OOM error and …

  5. Enable Recording of Memory Allocation Stack Traces

    Enables recording of stack traces associated with memory allocations, allowing users to identify the source of memory allocation in CUDA snapshots.

  6. Understanding GPU Memory 1: Visualizing All Allocations over …

    Dec 14, 2023 · In this series, we show how to use memory tooling, including the Memory Snapshot, the Memory Profiler, and the Reference Cycle Detector to debug out of memory …

  7. Debugging PyTorch memory use with snapshots - Zach's Blog

Aug 5, 2025 · The memory view gives a good overview of how the memory is being used. For debugging allocator issues in particular, though, it is useful to first categorize memory into …

  8. A Deep Dive into PyTorch’s GPU Memory Management

    Sep 3, 2024 · Overcoming the “CUDA out of memory” error requires a deep understanding of PyTorch’s memory management strategies and the ability to leverage profiling tools effectively.

  9. Debugging PyTorch memory use with snapshots - Zach’s Blog

Aug 16, 2022 · In a previous post, I gave a detailed guide about how the PyTorch CUDA caching allocator hands out memory. To understand how it is working with your own training run or …

  10. Debugging PyTorch memory use with snapshots

    May 11, 2023 · With _record_memory_history each block will also record a History object that remembers the last allocation placed in that block, including its stack trace as a list of Frames. …
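The per-block History records mentioned here can be walked once a snapshot is loaded. The exact snapshot schema has varied across PyTorch releases, so the sketch below runs against a hand-built dictionary that mimics the shape the post describes (segments containing blocks, each with a history whose entries carry a stack trace as frames); the field names are assumptions for illustration, not a pinned API:

```python
def top_frames(snapshot):
    """Collect (block size, filename, line) for the recorded allocation in each block."""
    hits = []
    segments = snapshot["segments"] if isinstance(snapshot, dict) else snapshot
    for seg in segments:
        for block in seg.get("blocks", []):
            for event in block.get("history", []):
                frames = event.get("frames", [])
                if frames:  # take the innermost frame of the stack trace
                    top = frames[0]
                    hits.append((block.get("size"), top.get("filename"), top.get("line")))
    return hits

# Synthetic snapshot standing in for pickle.load(open("snapshot.pickle", "rb")).
fake = {
    "segments": [
        {"blocks": [
            {"size": 4096,
             "history": [
                 {"frames": [{"filename": "train.py", "line": 42, "name": "forward"}]}
             ]}
        ]}
    ]
}

print(top_frames(fake))  # → [(4096, 'train.py', 42)]
```

With a real dump, the same walk lets you trace each live block back to the Python line that allocated it, which is exactly the OOM-debugging use case these results describe.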