  * Once trapped into CUDA syscalls, the appropriate CUDA call is executed, which may interface with the GPU for memory management, kernel handling, PTX code handling, or the copy engine, as sketched below
  
Currently supported CUDA calls are listed [[http://goo.gl/ysf55|here]].
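
To illustrate the forwarding path, here is a minimal, hypothetical sketch of how one CUDA Runtime call could be packaged and trapped into the simulator. The names ''gem5gpu_syscall'' and ''CUDA_MALLOC_ID'', the argument packing, and the stub body are illustrative assumptions, not the actual gem5-gpu interface.

<code cpp>
// Hypothetical sketch of the call-forwarding idea: the benchmark links against
// a replacement CUDA runtime library whose entry points pack their arguments
// and trap into the simulator, which performs the requested GPU operation.
#include <cstddef>
#include <cstdint>
#include <cstring>

using cudaError_t = int;            // simplified stand-in for the real enum
constexpr int CUDA_MALLOC_ID = 1;   // hypothetical call identifier

// Stub for the trap primitive; in simulation this call would be intercepted
// by the GPU model rather than executed as ordinary code.
static uint64_t gem5gpu_syscall(int /*id*/, void * /*args*/, size_t /*len*/) {
    return 0;
}

extern "C" cudaError_t cudaMalloc(void **devPtr, size_t size) {
    // Pack the arguments into a flat buffer the simulator-side handler can parse.
    uint8_t args[sizeof(void **) + sizeof(size_t)];
    std::memcpy(args, &devPtr, sizeof(void **));
    std::memcpy(args + sizeof(void **), &size, sizeof(size_t));

    // Trap into the simulator; the handler allocates GPU memory and writes the
    // resulting device pointer back through devPtr.
    return static_cast<cudaError_t>(gem5gpu_syscall(CUDA_MALLOC_ID, args, sizeof(args)));
}
</code>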

==== Current gem5-gpu limitations ====
gem5-gpu models memory accesses to GPU global, const, and local memory through the gem5/Ruby memory hierarchy. Shared (scratch) memory accesses are modeled in GPGPU-Sim code. Atomic memory operations are supported to the global and shared memory spaces.
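
To make that split concrete, the following illustrative CUDA kernel (not taken from gem5-gpu) touches each of the modeled spaces: the loads and stores to ''in'', ''out'', and ''total'' go to global memory (gem5/Ruby), the ''__shared__'' buffer is scratch memory (GPGPU-Sim), and the ''atomicAdd'' exercises the supported atomic path.

<code cpp>
// Illustrative CUDA kernel (not from gem5-gpu): global accesses are modeled in
// the gem5/Ruby hierarchy, shared (scratch) accesses in GPGPU-Sim code, and
// atomicAdd uses the supported atomic path to global memory.
// Launch with 256 threads per block to match the scratch buffer size.
__global__ void block_sums(const int *in, int *out, int *total, int n) {
    __shared__ int scratch[256];                  // shared (scratch) memory
    int idx = blockIdx.x * blockDim.x + threadIdx.x;

    // Global memory read, routed through the gem5/Ruby memory hierarchy.
    scratch[threadIdx.x] = (idx < n) ? in[idx] : 0;
    __syncthreads();

    // Tree reduction within the block, entirely in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            scratch[threadIdx.x] += scratch[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0) {
        out[blockIdx.x] = scratch[0];             // global memory write
        atomicAdd(total, scratch[0]);             // atomic op on global memory
    }
}
</code>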

However, there are currently a few noteworthy limitations to the GPU simulation capability in gem5-gpu (the snippet after this list shows the CUDA features involved):
  * gem5-gpu does not model the CUDA texture memory space. Supporting it would require aligning GPGPU-Sim's address space identifiers for texture memory with memory allocations and accesses in gem5-gpu.
  * Though GPGPU-Sim provides some support for asynchronous copy engine activity and multiple GPU kernel streams, this functionality has not yet been pulled into gem5-gpu.
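
For reference, the host-side snippet below is plain CUDA, not gem5-gpu code; it simply shows the kinds of constructs these limitations refer to: a texture object over linear memory and asynchronous copies on multiple streams. A program relying on these features would not currently be fully modeled.

<code cpp>
// Plain CUDA illustration (not gem5-gpu code) of the features listed above:
// texture-space reads and asynchronous, multi-stream copy activity.
#include <cuda_runtime.h>

void unsupported_feature_sketch(const float *h_a, const float *h_b,
                                float *d_a, float *d_b, size_t n) {
    // Asynchronous copies on multiple streams: partially supported in
    // GPGPU-Sim, but not yet pulled into gem5-gpu.
    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);
    cudaMemcpyAsync(d_a, h_a, n * sizeof(float), cudaMemcpyHostToDevice, s0);
    cudaMemcpyAsync(d_b, h_b, n * sizeof(float), cudaMemcpyHostToDevice, s1);
    cudaStreamSynchronize(s0);
    cudaStreamSynchronize(s1);

    // Texture memory: gem5-gpu does not model the CUDA texture memory space,
    // so kernels reading through this texture object would not be modeled.
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeLinear;
    resDesc.res.linear.devPtr = d_a;
    resDesc.res.linear.desc = cudaCreateChannelDesc<float>();
    resDesc.res.linear.sizeInBytes = n * sizeof(float);
    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;
    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);
    // ... launch kernels that read via tex1Dfetch(tex, i) ...
    cudaDestroyTextureObject(tex);

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
}
</code>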