How does nvcc compile __host__ code?

jinglei

I'm trying to convert my C++-only project to CUDA so it can run on the GPU.

I'm new to CUDA programming and I'm not sure how to handle this situation:

If I have a very complicated class definition and I want to pass an instance of that class to the device and execute some of its member functions there, do I have to rewrite my whole .cpp file? Do I only need to mark the functions that run on the device as __host__ __device__, or do I have to rewrite all of them?

I think nvcc treats functions with no execution-space qualifier as __host__. How does it compile host code? Does it compile it exactly as g++ does?

talonmies

If I have a very complicated class definition and I want to pass an instance of that class to the device and execute some of its member functions there, do I have to rewrite my whole .cpp file? Do I only need to mark the functions that run on the device as __host__ __device__, or do I have to rewrite all of them?

That depends entirely on your code. CUDA supports a limited subset of C++ language features (documented in the CUDA C++ Programming Guide), and almost none of the C++ standard library is usable in device code. So there is no general answer, but it is likely that you will have to rewrite at least some of your class member functions if you wish to call them on the GPU.
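As an illustration of the usual pattern, here is a minimal sketch (the Particle class and its members are hypothetical, not from the question): member functions needed on the device get __host__ __device__, while anything that depends on the standard library stays host-only.

```cuda
// sketch.cu -- hypothetical example of adapting a class for device use
#include <cstdio>
#include <string>

class Particle {
    float x_, v_;
public:
    // Callable from both host and device code.
    __host__ __device__ Particle(float x, float v) : x_(x), v_(v) {}
    __host__ __device__ void step(float dt) { x_ += v_ * dt; }
    __host__ __device__ float x() const { return x_; }

    // Host-only: std::string cannot be used in device code,
    // so this member keeps no qualifier (implicitly __host__).
    std::string describe() const {
        return "Particle at " + std::to_string(x_);
    }
};

__global__ void advance(Particle *p, float dt) {
    p->step(dt);  // calls the device-compiled version of the member
}

int main() {
    Particle h(0.0f, 2.0f), *d;
    // The class is trivially copyable, so a byte-wise copy is valid.
    cudaMalloc(&d, sizeof(Particle));
    cudaMemcpy(d, &h, sizeof(Particle), cudaMemcpyHostToDevice);
    advance<<<1, 1>>>(d, 0.5f);
    cudaMemcpy(&h, d, sizeof(Particle), cudaMemcpyDeviceToHost);
    std::printf("%s\n", h.describe().c_str());  // host-only call
    cudaFree(d);
}
```

Note that this pattern only works cleanly for trivially copyable classes; members that own host memory (std::vector, std::string, raw heap pointers) need a separate device-side representation.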

I think nvcc treats functions with no execution-space qualifier as __host__. How does it compile host code? Does it compile it exactly as g++ does?

The first thing to understand is that nvcc isn't a compiler; it is a compiler driver. Plain C++ code in a file without a .cu extension is, by default, passed straight through to the host compiler, unmodified, with a set of predefined compiler options.

Host code inside a file with a .cu extension is first parsed by the CUDA C++ front end, which looks for CUDA syntax, and is then passed to the host compiler. This process can fail on extremely complex template definitions and on bleeding-edge language features. nvcc also includes the CUDA headers automatically, and these can potentially conflict with the contents of your own code. But eventually your host code reaches the host C++ compiler, albeit by a different route.
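The implicit-__host__ behaviour mentioned in the question can be sketched like this (hypothetical functions, not from the question): an unqualified function is compiled for the host only, while a __host__ __device__ function is compiled twice, once per execution space.

```cuda
// qualifiers.cu -- hypothetical sketch of execution-space qualifiers

// No qualifier: nvcc treats this as __host__ only. It is handed to
// the host compiler (e.g. g++) and no device version exists.
int hostOnly(int x) { return x + 1; }

// __host__ __device__: the front end produces two compilations,
// one routed to the host compiler, one to the device compiler.
__host__ __device__ int both(int x) { return x * 2; }

__global__ void kernel(int *out) {
    *out = both(21);       // OK: a device version of both() exists
    // *out = hostOnly(1); // compile error: cannot call a __host__
                           // function from a __global__ function
}
```

So for host code the answer is "almost": the same host compiler does the final compilation, but only after the CUDA front end has processed the file.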

