Why does launching a Numba CUDA kernel work with up to 640 threads, but fail with 641 when there's plenty of GPU memory free?


Problem description

I have a Numba CUDA kernel that launches fine with up to 640 threads and 64 blocks on an RTX 3090.

If I try to use 641 threads, it fails with:

                  Traceback (most recent call last):
                    File "/home/stark/Work/mmr6/mmr/algos/company_analysis/_analysis_gpu_backup.py", line 905, in <module>
                      load()
                    File "/home/stark/Work/mmr6/mmr/algos/company_analysis/_analysis_gpu_backup.py", line 803, in load_markets
                      run_simulations[algo_configs.BLOCK_COUNT, algo_configs.THREAD_COUNT, stream](
                    File "/home/stark/anaconda3/envs/mmr-env/lib/python3.9/site-packages/numba/cuda/compiler.py", line 821, in __call__
                      return self.dispatcher.call(args, self.griddim, self.blockdim,
                    File "/home/stark/anaconda3/envs/mmr-env/lib/python3.9/site-packages/numba/cuda/compiler.py", line 966, in call
                      kernel.launch(args, griddim, blockdim, stream, sharedmem)
                    File "/home/stark/anaconda3/envs/mmr-env/lib/python3.9/site-packages/numba/cuda/compiler.py", line 693, in launch
                      driver.launch_kernel(cufunc.handle,
                    File "/home/stark/anaconda3/envs/mmr-env/lib/python3.9/site-packages/numba/cuda/cudadrv/driver.py", line 2094, in launch_kernel
                      driver.cuLaunchKernel(cufunc_handle,
                    File "/home/stark/anaconda3/envs/mmr-env/lib/python3.9/site-packages/numba/cuda/cudadrv/driver.py", line 300, in safe_cuda_api_call
                      self._check_error(fname, retcode)
                    File "/home/stark/anaconda3/envs/mmr-env/lib/python3.9/site-packages/numba/cuda/cudadrv/driver.py", line 335, in _check_error
                      raise CudaAPIError(retcode, msg)
                  numba.cuda.cudadrv.driver.CudaAPIError: [701] Call to cuLaunchKernel results in CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES
                  

But when I look at nvidia-smi, I see that the run with 640 threads only needs 2.9 GB of memory. This GPU has 22 GB unused.

What else could be the problem in this situation? I read somewhere that grid size, block size, register usage and shared memory usage are all factors to consider. How can I find out how many registers and how much shared memory I am using?

Recommended answer

This (CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES) is usually a registers-per-thread issue. It is covered in many questions under the SO cuda tag, for example this one. There are many others, such as here. In short, the total number of registers used by a thread block cannot exceed the GPU's limit (see below). The total registers used per block is approximately equal to the registers per thread multiplied by the threads per block (possibly rounded up for allocation granularity).

The main way to address this in Numba CUDA is to include a maximum register usage parameter in the cuda.jit decorator:

@cuda.jit(max_registers=40)
                  

You can of course set it to other values. A simple heuristic is to divide the total number of registers per SM (or per thread block, if that is lower) — discoverable via the CUDA deviceQuery sample code or Table 15 of the programming guide — by the total number of threads per block you wish to launch. So if your GPU's SM has 64K registers and you want to launch 1024 threads per block, you could choose a maximum of 64 registers per thread. That number should work for the RTX 3090.
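As a back-of-the-envelope check (assuming, purely hypothetically, that this kernel compiled to about 100 registers per thread — the real number comes from the tools mentioned above), note that the hardware reserves registers for whole warps at a time, so a 641-thread block costs 21 warps' worth of registers and overflows a 64K-register SM even though 640 threads fit:

```python
# Hypothetical occupancy arithmetic; REGS_PER_THREAD is an assumption,
# not a measured value for the asker's kernel.
WARP_SIZE = 32
REGS_PER_SM = 64 * 1024          # RTX 3090 (compute capability 8.6)
REGS_PER_THREAD = 100            # assumed compile result for illustration

def block_register_cost(threads_per_block: int, regs_per_thread: int) -> int:
    """Registers a block reserves: thread count rounded up to whole warps."""
    warps = -(-threads_per_block // WARP_SIZE)   # ceiling division
    return warps * WARP_SIZE * regs_per_thread

print(block_register_cost(640, REGS_PER_THREAD))  # 20 warps -> 64000, fits
print(block_register_cost(641, REGS_PER_THREAD))  # 21 warps -> 67200, fails
```

Under these assumed numbers, 640 threads reserve 64,000 registers (under the 65,536 limit) while 641 threads round up to 21 warps and reserve 67,200, which the driver rejects at launch time with CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES rather than at allocation time, which is why nvidia-smi shows plenty of free memory.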


