I have some CUDA code that I want to build into a dynamic library callable from Python, using distutils. But distutils doesn't seem to recognize ".cu" source files even though the "nvcc" compiler is installed. I'm not sure how to get this done. Can Python's distutils compile CUDA code?
Answer
Distutils can't compile CUDA by default, because it doesn't support using multiple compilers in the same build. By default, it picks a compiler based on your platform, not on the type of source file you have.
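You can see this for yourself by asking distutils which compiler it would use: it hands back a single platform-determined compiler object whose src_extensions list doesn't include '.cu'. A minimal sketch using only the standard distutils API:

from distutils.ccompiler import new_compiler
from distutils.sysconfig import customize_compiler

cc = new_compiler()        # e.g. UnixCCompiler on Linux, MSVCCompiler on Windows
customize_compiler(cc)     # applies the platform defaults (gcc/cc and their flags)
print(type(cc).__name__)
print(cc.src_extensions)   # '.cu' is not in this list, so .cu sources are rejected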
I have an example project on GitHub that contains some monkey patches into distutils to hack in this support. The example project is a C++ class that manages some GPU memory and a CUDA kernel, wrapped in SWIG, all compiled with just python setup.py install. The focus is on array operations, so we also use numpy. All the kernel does in this example project is increment each element of an array by one.

The code is here: https://github.com/rmcgibbo/npcuda-example. Below is the setup.py script. The key to the whole thing is customize_compiler_for_nvcc().
import os
from os.path import join as pjoin
from setuptools import setup
from distutils.extension import Extension
from distutils.command.build_ext import build_ext
import subprocess
import numpy


def find_in_path(name, path):
    "Find a file in a search path"
    # adapted from http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/
    for dir in path.split(os.pathsep):
        binpath = pjoin(dir, name)
        if os.path.exists(binpath):
            return os.path.abspath(binpath)
    return None


def locate_cuda():
    """Locate the CUDA environment on the system

    Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64'
    and values giving the absolute path to each directory.

    Starts by looking for the CUDAHOME env variable. If not found, everything
    is based on finding 'nvcc' in the PATH.
    """
    # first check if the CUDAHOME env variable is in use
    if 'CUDAHOME' in os.environ:
        home = os.environ['CUDAHOME']
        nvcc = pjoin(home, 'bin', 'nvcc')
    else:
        # otherwise, search the PATH for nvcc
        nvcc = find_in_path('nvcc', os.environ['PATH'])
        if nvcc is None:
            raise EnvironmentError('The nvcc binary could not be '
                'located in your $PATH. Either add it to your path, or set $CUDAHOME')
        home = os.path.dirname(os.path.dirname(nvcc))

    cudaconfig = {'home': home, 'nvcc': nvcc,
                  'include': pjoin(home, 'include'),
                  'lib64': pjoin(home, 'lib64')}
    for k, v in cudaconfig.items():
        if not os.path.exists(v):
            raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v))

    return cudaconfig


CUDA = locate_cuda()

# Obtain the numpy include directory. This logic works across numpy versions.
try:
    numpy_include = numpy.get_include()
except AttributeError:
    numpy_include = numpy.get_numpy_include()

ext = Extension('_gpuadder',
                sources=['src/swig_wrap.cpp', 'src/manager.cu'],
                library_dirs=[CUDA['lib64']],
                libraries=['cudart'],
                runtime_library_dirs=[CUDA['lib64']],
                # this syntax is specific to this build system:
                # we're only going to use certain compiler args with nvcc and not with gcc.
                # the implementation of this trick is in customize_compiler_for_nvcc() below
                extra_compile_args={'gcc': [],
                                    'nvcc': ['-arch=sm_20', '--ptxas-options=-v', '-c',
                                             '--compiler-options', "'-fPIC'"]},
                include_dirs=[numpy_include, CUDA['include'], 'src'])

# check for swig and generate the wrapper
if find_in_path('swig', os.environ['PATH']):
    subprocess.check_call('swig -python -c++ -o src/swig_wrap.cpp src/swig.i', shell=True)
else:
    raise EnvironmentError('the swig executable was not found in your PATH')


def customize_compiler_for_nvcc(self):
    """Inject deep into distutils to customize how the dispatch
    to gcc/nvcc works.

    If you subclass UnixCCompiler, it's not trivial to get your subclass
    injected in, and still have the right customizations (i.e.
    distutils.sysconfig.customize_compiler) run on it. So instead of going
    the OO route, I have this. Note, it's kind of like a weird functional
    subclassing going on.
    """
    # tell the compiler it can process .cu files
    self.src_extensions.append('.cu')

    # save references to the default compiler_so and _compile methods
    default_compiler_so = self.compiler_so
    super = self._compile

    # now redefine the _compile method. This gets executed for each
    # object but distutils doesn't have the ability to change compilers
    # based on source extension: we add it.
    def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts):
        if os.path.splitext(src)[1] == '.cu':
            # use nvcc for .cu files
            self.set_executable('compiler_so', CUDA['nvcc'])
            # use only a subset of the extra_postargs, which are 1-1 translated
            # from the extra_compile_args in the Extension class
            postargs = extra_postargs['nvcc']
        else:
            postargs = extra_postargs['gcc']

        super(obj, src, ext, cc_args, postargs, pp_opts)
        # reset the default compiler_so, which we might have changed for cuda
        self.compiler_so = default_compiler_so

    # inject our redefined _compile method into the class
    self._compile = _compile


# run customize_compiler_for_nvcc once build_ext has set up its compiler
class custom_build_ext(build_ext):
    def build_extensions(self):
        customize_compiler_for_nvcc(self.compiler)
        build_ext.build_extensions(self)


setup(name='gpuadder',
      # random metadata. there's more you can supply
      author='Robert McGibbon',
      version='0.1',

      # this is necessary so that the swigged python file gets picked up
      py_modules=['gpuadder'],
      package_dir={'': 'src'},

      ext_modules=[ext],

      # inject our custom trigger
      cmdclass={'build_ext': custom_build_ext},

      # since the package has c code, the egg cannot be zipped
      zip_safe=False)
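For completeness, a rough usage sketch of the resulting module after python setup.py install (the GPUAdder class and its method names here are assumptions based on the example project, not a guaranteed API):

import numpy as np
import gpuadder

arr = np.arange(16, dtype=np.int32)
adder = gpuadder.GPUAdder(arr)   # assumed wrapper class: copies the array to GPU memory
adder.increment()                # launches the CUDA kernel: arr[i] += 1
adder.retreive_inplace()         # assumed method: copies the result back into arr
print(arr)                       # every element has been incremented by one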
+1 This is an old question, but do you have any idea how to do this on Windows? The problem is that **msvccompiler** doesn't use the **_compile** method. – rAyyy 2017-03-14 10:52:23
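A note on the Windows question above: MSVCCompiler builds its command lines inside compile() itself rather than calling a per-file _compile() hook, so the equivalent hack would have to wrap compile(). A rough, untested sketch of that idea (it assumes a plain list of extra compile args rather than the gcc/nvcc dict used above, and it flattens object paths for brevity):

import os
import subprocess

def customize_msvc_for_nvcc(compiler, nvcc_path):
    # Untested sketch: route .cu sources through nvcc, let MSVC handle the rest.
    compiler.src_extensions.append('.cu')
    default_compile = compiler.compile

    def compile(sources, output_dir=None, **kwargs):
        cu_sources = [s for s in sources if os.path.splitext(s)[1] == '.cu']
        other_sources = [s for s in sources if s not in cu_sources]

        # let MSVC compile all the non-CUDA sources as usual
        objects = default_compile(other_sources, output_dir=output_dir, **kwargs)

        # compile each .cu file to an .obj with nvcc
        for src in cu_sources:
            obj = os.path.join(output_dir or '.',
                               os.path.splitext(os.path.basename(src))[0] + '.obj')
            subprocess.check_call([nvcc_path, '-c', src, '-o', obj])
            objects.append(obj)
        return objects

    compiler.compile = compile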
Can you post some code so that we can see what you have already tried? Also, if the CUDA kernels are the critical part, you could try PyCUDA to make them available to Python. – 2012-04-05 19:08:31
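To make the PyCUDA suggestion concrete, a minimal sketch (assumes PyCUDA is installed; the kernel just adds one to each element, like the example project above):

import numpy as np
import pycuda.autoinit          # initializes a CUDA context
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void add_one(int *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += 1;
}
""")
add_one = mod.get_function("add_one")

a = np.arange(16, dtype=np.int32)
add_one(drv.InOut(a), np.int32(a.size), block=(16, 1, 1), grid=(1, 1))
print(a)   # every element incremented by one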
What do you mean by "doesn't recognize"? That it doesn't include the .cu files in the egg? Then add package_data={'': ['*.cu']} to your setup(...). – 2012-04-05 19:34:48
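A sketch of what that package_data suggestion would look like, assuming the .cu files live inside a gpuadder package directory (this only ships the sources with the distribution, it does not compile them):

from setuptools import setup

setup(
    name='gpuadder',
    packages=['gpuadder'],
    package_data={'gpuadder': ['*.cu']},   # include the .cu sources in the built egg/sdist
)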