
The pyi File Generation Mechanism in PyTorch

  • Preface
  • pyi files
  • Generating pyi.in from py
  • Generating pyi from pyi.in
    • torch/CMakeLists.txt
    • tools/pyi/gen_pyi.py
    • gen_pyi
      • native_functions
        • rand.names & rand.names_out
        • rand.generator_with_names & rand.generator_with_names_out
        • rand
        • rand.generator
        • rand.out
        • rand.generator_out
        • add.Tensor && add.out
        • add_.Tensor && add.out
        • add.out
      • function_signatures
        • rand.names & rand.names_out
        • rand.generator_with_names & rand.generator_with_names_out
        • rand
        • rand.generator
        • rand.out
        • rand.generator_out
        • add.Tensor && add.out
        • add_.Tensor && add.out
        • add.out
      • sig_groups
        • rand.generator_with_names & rand.generator_with_names_out
        • rand.generator & rand.generator_out
        • rand.names & rand.names_out
        • rand & rand.out
        • add.Tensor & add.out
        • add & add.Tensor & add.out
      • unsorted_function_hints
        • rand
        • add
      • function_hints
        • rand
        • add
      • hinted_function_names
      • all_symbols
      • all_directive
      • env
    • gen_nn_functional
    • datapipe.pyi
    • Generation results
  • Using pyi for type checking

Preface

In PyTorch, if you jump to the definition of a Python function, nine times out of ten you will land in torch/_C/_VariableFunctions.pyi. But if you search the PyTorch GitHub repo for this file, you will only find the similarly named torch/_C/_VariableFunctions.pyi.in; the file torch/_C/_VariableFunctions.pyi itself is nowhere to be found.

If you open torch/_C/_VariableFunctions.pyi and look at it:

# @generated from torch/_C/_VariableFunctions.pyi.in

the very first line says it all: the file is generated dynamically at build time from torch/_C/_VariableFunctions.pyi.in.

This post explores how PyTorch generates its pyi files. The generation process can be roughly divided into two steps:

  1. Generating pyi.in from py

  2. Generating pyi from pyi.in

But before that, let's first look at what a pyi file does in Python.

pyi files

First, where does the name of the pyi file type come from? According to What does “i” represent in Python .pyi extension?:

The i in .pyi stands for 'interface'. The .pyi extension was first mentioned in this GitHub issue thread where JukkaL says: "I'd probably prefer an extension with just a single dot. It also needs to be something that is not in use (it should not be used by cython, etc.). .pys seems to be used in Windows (or was). Maybe .pyi, where i stands for an interface definition?"

So the i in pyi stands for interface.

A pyi file implements a "stub" (definition from Martin Fowler): "Stubs: provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test."

What it embodies is the notion of a stub; see the Wikipedia article 樁 (計算機) (Stub (computing)):

A stub (Method Stub) is a piece of code used to stand in for some other functionality. A stub may simulate the behavior of existing code (such as a procedure on a remote machine) or be a temporary substitute for yet-to-be-developed code. Stubbing is therefore very useful in program porting, distributed computing, and general software development and testing.

As pyi文件是干嘛的?(一文读懂Python的存根文件和类型检查) explains, a pyi file merely provides type hints in the IDE; it is not mandatory.

The same is true in PyTorch: torch/_C/_VariableFunctions.pyi is used only for type hinting. The association between Python functions and C++ functions is actually established by torch/csrc/autograd/generated/python_torch_functions_i.cpp, which is itself generated automatically at build time; see PyTorch中的python_torch_functions_i.cpp檔案生成機制.
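To make the stub idea concrete, here is a minimal hypothetical example (the module name greeting and the function greet are invented for illustration): an ordinary Python module next to the stub that a type checker or IDE would consult for it.

# greeting.py: the actual implementation
def greet(name, excited=False):
    suffix = "!" if excited else "."
    return "Hello, " + name + suffix

# greeting.pyi: the stub; only typed signatures, with every body elided as "..."
def greet(name: str, excited: bool = ...) -> str: ...

The stub carries no runtime behavior at all; deleting greeting.pyi changes nothing except the quality of type checking and autocompletion.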

Generating pyi.in from py

The PyTorch source tree contains the following .pyi.in files:

torch/_C/__init__.pyi.in
torch/_C/_nn.pyi.in
torch/_C/return_types.pyi.in
torch/_C/_VariableFunctions.pyi.in
torch/nn/functional.pyi.in
torch/utils/data/datapipes/datapipe.pyi.in

According to the comment in torch/nn/functional.pyi.in:

# These stubs were generated by running stubgen (`stubgen --parse-only functional.py`), followed by manual cleaning.

functional.pyi.in was generated from functional.py with mypy's stubgen tool and then cleaned up by hand.

Let's try running stubgen on torch/nn/functional.py ourselves. First copy functional.py to some convenient location, then run:

stubgen functional.py

If import-related errors like the following appear, just comment out the corresponding lines by hand first:

Critical error during semantic analysis: functional.py:23: error: No parent module -- cannot perform relative import
functional.py:24: error: No parent module -- cannot perform relative import

For now, focus only on this fragment:

def fractional_max_pool2d_with_indices(
    input: Tensor, kernel_size: BroadcastingList2[int],
    output_size: Optional[BroadcastingList2[int]] = None,
    output_ratio: Optional[BroadcastingList2[float]] = None,
    return_indices: bool = False,
    _random_samples: Optional[Tensor] = None
) -> Tuple[Tensor, Tensor]:
    # ...

fractional_max_pool2d = boolean_dispatch(
    arg_name="return_indices",
    arg_index=4,
    default=False,
    if_true=fractional_max_pool2d_with_indices,
    if_false=_fractional_max_pool2d,
    module_name=__name__,
    func_name="fractional_max_pool2d",
)

The corresponding content in the generated functional.pyi:

# ...
def fractional_max_pool2d_with_indices(input: Tensor, kernel_size: BroadcastingList2[int], output_size: Optional[BroadcastingList2[int]] = ..., output_ratio: Optional[BroadcastingList2[float]] = ..., return_indices: bool = ..., _random_samples: Optional[Tensor] = ...) -> Tuple[Tensor, Tensor]: ...
fractional_max_pool2d: Incomplete
# ...

The signature of fractional_max_pool2d_with_indices is nearly identical to the original, while fractional_max_pool2d is annotated as Incomplete because stubgen cannot infer its type.

In principle a .pyi.in file is generated from a .py file, but the .pyi.in files under torch/_C have no corresponding .py files; presumably each was produced by merging several .py files into a single .pyi.in file.

Generating pyi from pyi.in

Normally .pyi files are produced directly by stubgen, but in PyTorch the pyi.in files were first produced with stubgen and edited by hand, and the .pyi files are then generated from the .pyi.in files by a Python script.

torch/CMakeLists.txt

A custom target named torch_python_stubs is added, which depends on the pyi files below. (For add_custom_target and the add_custom_command we will see shortly, see cmake的add_custom_command及add_custom_target.)

add_custom_target(torch_python_stubs DEPENDS
    "${TORCH_SRC_DIR}/_C/__init__.pyi"
    "${TORCH_SRC_DIR}/_C/_VariableFunctions.pyi"
    "${TORCH_SRC_DIR}/nn/functional.pyi"
    "${TORCH_SRC_DIR}/utils/data/datapipes/datapipe.pyi"
)

From the OUTPUT argument of the add_custom_command below, we can see that this custom command is precisely what generates the first three pyi files that torch_python_stubs depends on. As for how the remaining datapipe.pyi is generated, see the datapipe.pyi section.

file(GLOB_RECURSE torchgen_python "${PROJECT_SOURCE_DIR}/torchgen/*.py")
file(GLOB_RECURSE autograd_python "${TOOLS_PATH}/autograd/*.py")
file(GLOB_RECURSE pyi_python "${TOOLS_PATH}/pyi/*.py")
add_custom_command(
    OUTPUT
    "${TORCH_SRC_DIR}/_C/__init__.pyi"
    "${TORCH_SRC_DIR}/_C/_VariableFunctions.pyi"
    "${TORCH_SRC_DIR}/nn/functional.pyi"
    COMMAND
    "${PYTHON_EXECUTABLE}" -mtools.pyi.gen_pyi
      --native-functions-path "aten/src/ATen/native/native_functions.yaml"
      --tags-path "aten/src/ATen/native/tags.yaml"
      --deprecated-functions-path "tools/autograd/deprecated.yaml"
    DEPENDS
    "${TORCH_SRC_DIR}/_C/__init__.pyi.in"
    "${TORCH_SRC_DIR}/_C/_VariableFunctions.pyi.in"
    "${TORCH_SRC_DIR}/nn/functional.pyi.in"
    "${TORCH_ROOT}/aten/src/ATen/native/native_functions.yaml"
    "${TORCH_ROOT}/aten/src/ATen/native/tags.yaml"
    "${TORCH_ROOT}/tools/autograd/deprecated.yaml"
    ${pyi_python}
    ${autograd_python}
    ${torchgen_python}
    WORKING_DIRECTORY
    "${TORCH_ROOT}"
)

The entry point of this block is the COMMAND of add_custom_command, which invokes tools/pyi/gen_pyi.py via "${PYTHON_EXECUTABLE}" -mtools.pyi.gen_pyi. Its inputs are the _C/__init__.pyi.in, _C/_VariableFunctions.pyi.in and nn/functional.pyi.in listed in the DEPENDS block, and when the program finishes it generates the three pyi files listed in the OUTPUT block.

torch/_C/_nn.pyi and torch/_C/return_types.pyi are also generated by tools/pyi/gen_pyi.py, so why are they not listed in add_custom_target, nor in the DEPENDS and OUTPUT of add_custom_command?

A shared library named torch_python is added; after the build runs, it produces build/lib/libtorch_python.so.

add_library(torch_python SHARED ${TORCH_PYTHON_SRCS})

Next, torch_python is declared to depend on the custom target torch_python_stubs.

add_dependencies(torch_python torch_python_stubs)

On non-macOS systems, a library named nnapi_backend is also built, and its dependencies include torch_python.

# Skip building this library under MacOS, since it is currently failing to build on Mac
# Github issue #61930
if(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
    # Add Android Nnapi delegate library
    add_library(nnapi_backend SHARED
            ${TORCH_SRC_DIR}/csrc/jit/backends/nnapi/nnapi_backend_lib.cpp
            ${TORCH_SRC_DIR}/csrc/jit/backends/nnapi/nnapi_backend_preprocess.cpp)
    # Pybind11 requires explicit linking of the torch_python library
    target_link_libraries(nnapi_backend PRIVATE torch torch_python pybind::pybind11)
endif()

To summarize: there is a chain of dependencies nnapi_backend -> torch_python -> torch_python_stubs -> torch/_C/__init__.pyi, torch/_C/_VariableFunctions.pyi, torch/nn/functional.pyi, so it is when building the nnapi_backend library that tools/pyi/gen_pyi.py gets invoked to generate the .pyi files.

tools/pyi/gen_pyi.py

CMakeLists.txt invokes tools/pyi/gen_pyi.py via "${PYTHON_EXECUTABLE}" -mtools.pyi.gen_pyi; its job is to generate the .pyi files from the .pyi.in files.

def main() -> None:
    parser = argparse.ArgumentParser(description="Generate type stubs for PyTorch")
    parser.add_argument(
        "--native-functions-path",
        metavar="NATIVE",
        default="aten/src/ATen/native/native_functions.yaml",
        help="path to native_functions.yaml",
    )
    parser.add_argument(
        "--tags-path",
        metavar="TAGS",
        default="aten/src/ATen/native/tags.yaml",
        help="path to tags.yaml",
    )
    parser.add_argument(
        "--deprecated-functions-path",
        metavar="DEPRECATED",
        default="tools/autograd/deprecated.yaml",
        help="path to deprecated.yaml",
    )
    parser.add_argument(
        "--out", metavar="OUT", default=".", help="path to output directory"
    )
    args = parser.parse_args()
    fm = FileManager(install_dir=args.out, template_dir=".", dry_run=False)
    gen_pyi(
        args.native_functions_path,
        args.tags_path,
        args.deprecated_functions_path,
        fm,
    )


if __name__ == "__main__":
    main()

The comment in gen_pyi.py says:

- We start off with a hand-written __init__.pyi.in file.  This
  file contains type definitions for everything we cannot automatically
  generate, including pure Python definitions directly in __init__.py
  (the latter case should be pretty rare).
- We go through automatically bound functions based on the
  type information recorded in native_functions.yaml and
  generate type hints for them (generate_type_hints)

native_functions.yaml records type information for the automatically bound functions (presumably the bindings between Python and C++ functions). Based on this type information, gen_pyi.py generates type hints via the generate_type_hints function (which will show up later in the unsorted_function_hints section).
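For a sense of the end product: the hints that generate_type_hints emits into _VariableFunctions.pyi are @overload stubs, one per signature. Paraphrased from a generated file (the exact parameter types vary across PyTorch versions, so treat this as illustrative only), the rand.generator_with_names overload comes out roughly as:

@overload
def rand(size: Sequence[Union[_int, SymInt]], *,
         generator: Optional[Generator],
         names: Optional[Sequence[Union[str, ellipsis, None]]],
         dtype: Optional[_dtype] = None,
         layout: Optional[_layout] = None,
         device: Optional[Union[_device, str, None]] = None,
         pin_memory: Optional[_bool] = False,
         requires_grad: Optional[_bool] = False) -> Tensor: ...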

gen_pyi

tools/pyi/gen_pyi.py

This function generates _C/__init__.pyi, _C/_VariableFunctions.pyi, torch/_VF.pyi and torch/return_types.pyi from _C/__init__.pyi.in, _C/_VariableFunctions.pyi.in and torch/_C/return_types.pyi.in.

def gen_pyi(
    native_yaml_path: str,
    tags_yaml_path: str,
    deprecated_yaml_path: str,
    fm: FileManager,
) -> None:
    """gen_pyi()

    This function generates a pyi file for torch.
    """
    # ...

The first three parameters default to:

  • native_yaml_path: aten/src/ATen/native/native_functions.yaml
  • tags_yaml_path: aten/src/ATen/native/tags.yaml
  • deprecated_yaml_path: tools/autograd/deprecated.yaml

The two arguments passed to the fm constructor are:

  • install_dir: args.out, i.e. '.'
  • template_dir: '.'

native_functions

Parse native_functions.yaml and tags.yaml to obtain the native_functions variable:

    native_functions = parse_native_yaml(native_yaml_path, tags_yaml_path).native_functions
    native_functions = list(filter(should_generate_py_binding, native_functions))

native_functions is a list of NativeFunction objects representing the functions in the aten namespace; its zeroth element is:

NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='_cast_Byte', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='non_blocking', type=BaseType(name=<BaseTy.bool: 9>), default='False', annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=9), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags=set())
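This parsing step is easy to reproduce interactively. The sketch below assumes it is run from the PyTorch repo root with torchgen importable; the module layout is as of the version examined here and may shift between releases:

from torchgen.gen import parse_native_yaml

parsed = parse_native_yaml(
    "aten/src/ATen/native/native_functions.yaml",
    "aten/src/ATen/native/tags.yaml",
)
native_functions = parsed.native_functions
print(native_functions[0].func.name)  # _cast_Byte
# collect every overload of rand by its base operator name
rand_fns = [f for f in native_functions if f.func.name.name.base == "rand"]
print([str(f.func.name) for f in rand_fns])
# ['rand.names', 'rand.generator_with_names', 'rand', 'rand.generator', 'rand.out', 'rand.generator_out']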

The elements representing the rand function are listed below. aten::rand has six overload names: names, generator_with_names, the empty string, generator, out, and generator_out. They can be cross-referenced with native_functions.yaml:

rand.names & rand.names_out
- func: rand.names(SymInt[] size, *, Dimname[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  device_check: NoCheck
  device_guard: False
  dispatch:
    CompositeExplicitAutograd: rand
  autogen: rand.names_out
  tags: nondeterministic_seeded

The autogen field in the yaml contains rand.names_out; comparing with the element in native_functions, we can see that the NativeFunction's autogen member likewise contains an OperatorName whose overload_name is names_out.

NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4254), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False,has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
rand.generator_with_names & rand.generator_with_names_out
- func: rand.generator_with_names(SymInt[] size, *, Generator? generator, Dimname[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  device_check: NoCheck
  device_guard: False
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand
  autogen: rand.generator_with_names_out

The autogen field in the yaml contains rand.generator_with_names_out; comparing with the element below, the NativeFunction's autogen member likewise contains an OperatorName whose overload_name is generator_with_names_out.

NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None), Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None)), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4262), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
rand
- func: rand(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4270), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
rand.generator
- func: rand.generator(SymInt[] size, *, Generator? generator, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4275), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
rand.out
- func: rand.out(SymInt[] size, *, Tensor(a!) out) -> Tensor(a!)
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand_out
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4280), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
rand.generator_out
- func: rand.generator_out(SymInt[] size, *, Generator? generator, Tensor(a!) out) -> Tensor(a!)
  tags: nondeterministic_seeded
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4285), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})

Because rand.names and rand.generator_with_names autogenerate corresponding out variants, the six rand-related entries in native_functions.yaml ultimately yield eight functions in the C++ aten namespace.
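Continuing the inspection sketch from the native_functions section, the two extra functions can be read off the autogen fields:

# rand_fns comes from the earlier parse_native_yaml sketch
autogen = [str(op) for f in rand_fns for op in f.autogen]
print(autogen)                        # ['rand.names_out', 'rand.generator_with_names_out']
print(len(rand_fns) + len(autogen))   # 6 declared + 2 autogenerated = 8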

add.Tensor && add.out

The add overload that takes self and other and returns the result.

- func: add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
  device_check: NoCheck   # TensorIterator
  structured_delegate: add.out
  variants: function, method
  dispatch:
    SparseCPU, SparseCUDA: add_sparse
    SparseCsrCPU, SparseCsrCUDA: add_sparse_csr
    MkldnnCPU: mkldnn_add
    ZeroTensor: add_zerotensor
    NestedTensorCPU, NestedTensorCUDA: NestedTensor_add_Tensor
  tags: [canonical, pointwise]
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'})
add_.Tensor && add.out

The inplace version, which modifies the self argument directly.

- func: add_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> Tensor(a!)
  device_check: NoCheck   # TensorIterator
  variants: method
  structured_delegate: add.out
  dispatch:
    SparseCPU, SparseCUDA: add_sparse_
    SparseCsrCPU, SparseCsrCUDA: add_sparse_csr_
    MkldnnCPU: mkldnn_add_
    NestedTensorCPU, NestedTensorCUDA: NestedTensor_add__Tensor
  tags: pointwise

According to the PyTorch native README.md:

Tensor(a!) - members of a may be written to thus mutating the underlying data.

The notation Tensor(a!) self means that the self argument is both an input and an output parameter.

NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=True, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=()))), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=509), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'})
add.out

The add variant that takes an out output parameter.

- func: add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> Tensor(a!)
  device_check: NoCheck   # TensorIterator
  structured: True
  structured_inherits: TensorIteratorBase
  ufunc_inner_loop:
    Generic: add (AllAndComplex, BFloat16, Half, ComplexHalf)
    ScalarOnly: add (Bool)
  dispatch:
    SparseCPU: add_out_sparse_cpu
    SparseCUDA: add_out_sparse_cuda
    SparseCsrCPU: add_out_sparse_csr_cpu
    SparseCsrCUDA: add_out_sparse_csr_cuda
    MkldnnCPU: mkldnn_add_out
    MPS: add_out_mps
  tags: pointwise
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'})
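On the Python side, these three entries surface as the familiar out-of-place, in-place, and out= calling patterns:

import torch

a = torch.ones(2)
b = torch.full((2,), 2.0)

c = torch.add(a, b, alpha=3)  # add.Tensor: returns a new tensor, a + 3*b
a.add_(b)                     # add_.Tensor: mutates a in place
out = torch.empty(2)
torch.add(a, b, out=out)      # add.out: writes the result into out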

function_signatures

    function_signatures = load_signatures(native_functions, deprecated_yaml_path, method=False, pyi=True)

function_signatures is a list of PythonSignatureNativeFunctionPair; its zeroth element is:

PythonSignatureNativeFunctionPair(signature=PythonSignature(name='_cast_Byte', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='non_blocking', type=BaseType(name=<BaseTy.bool: 9>), default='False', default_init=None)), input_kwargs=(), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='_cast_Byte', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='non_blocking', type=BaseType(name=<BaseTy.bool: 9>), default='False', annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=9), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags=set()))
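As a sketch (the import path is the one gen_pyi.py itself uses at the time of writing), this step can be reproduced on top of the earlier native_functions snippet:

from tools.autograd.gen_python_functions import load_signatures

# native_functions from the earlier parse_native_yaml sketch
function_signatures = load_signatures(
    native_functions, "tools/autograd/deprecated.yaml", method=False, pyi=True
)
rand_pairs = [p for p in function_signatures if p.signature.name == "rand"]
print(len(rand_pairs))  # 6, matching the six native overloads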

The elements representing the rand function are listed below; there are six of them, in one-to-one correspondence with the native_functions entries above:

rand.names & rand.names_out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4254), 
autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='names_out')], 
ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))

Note that names generates the names_out function.

rand.generator_with_names & rand.generator_with_names_out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None), PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None)), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='generator_with_names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None), Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None)), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4262), 
autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names_out')], 
ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))

Note that generator_with_names generates the generator_with_names_out function.

rand
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4270), 
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
rand.generator
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='generator'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4275), 
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
rand.out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4280), 
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
rand.generator_out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='generator_out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4285), 
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))

The elements representing the add function are listed below; there are three, again in one-to-one correspondence with the native_functions entries above.

add.Tensor && add.out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'}))
add_.Tensor && add.out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='add_', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=True, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=()))), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=509), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))
add.out
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))

sig_groups

sig_groups is a list of PythonSignatureGroup; each PythonSignatureGroup pairs a PythonSignature with a NativeFunction.

Compared with PythonSignatureNativeFunctionPair, a PythonSignatureGroup has one extra member: outplace.
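Stripped down to the fields this walkthrough uses, the two container types look roughly like the sketch below (based on the dumps above; the real, fuller definitions live in torchgen/api/python.py):

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class PythonSignatureNativeFunctionPair:
    signature: "PythonSignature"   # the Python-level signature
    function: "NativeFunction"     # the native function it was parsed from

@dataclass(frozen=True)
class PythonSignatureGroup:
    signature: "PythonSignature"
    base: "NativeFunction"                 # the functional (non-out) variant
    outplace: Optional["NativeFunction"]   # the matching out= variant, if any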

    sig_groups = get_py_torch_functions(function_signatures)

The zeroth element of sig_groups looks like this:

PythonSignatureGroup(signature=PythonSignature(name='__and__', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='and', inplace=False, dunder_method=True, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.method: 2>, <Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=7635), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags=set()), outplace=None)

The four elements representing the rand function are listed below. The original eight functions have been organized into pairs according to whether an out variant exists, giving four pairs in total.

rand.generator_with_names & rand.generator_with_names_out
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None), PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None)), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None), Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None)), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4262), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, 
has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=None)
rand.generator & rand.generator_out
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='generator'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4275), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='generator_out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4285), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
rand.names & rand.names_out
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4254), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=None)
rand & rand.out
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4270), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), 
overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4280), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))

There are four PythonSignatureGroup elements above. In the first, the base member's func has overload_name generator_with_names, while its autogen entry's overload_name is generator_with_names_out. For the second element the two names are generator and generator_out; for the third, names and names_out; for the fourth, the empty string and out.

With that, all eight rand-related functions are accounted for.

add.Tensor & add.out
PythonSignatureGroup(signature=PythonSignature(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'}), outplace=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, 
device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))
add & add.Tensor & add.out
PythonSignatureGroup(signature=PythonSignatureDeprecated(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False, deprecated_schema=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default=None, annotation=None), Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), deprecated_args_exprs=('out', 'self', 'other', 'alpha')), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'}), outplace=NativeFunction(namespace='aten', 
func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))

unsorted_function_hints

    for group in sorted(sig_groups, key=lambda g: g.signature.name):
        name = group.signature.name
        unsorted_function_hints[name] += generate_type_hints(group)

        named_tuple = returns_named_tuple_pyi(group.signature)
        if named_tuple is not None and not group.signature.deprecated:
            # deprecated namedtuples are currently not included for torch functions
            tuple_name, tuple_def = named_tuple
            if tuple_name in namedtuples:
                assert namedtuples[tuple_name] == tuple_def
            else:
                namedtuples[tuple_name] = tuple_def

unsorted_function_hints is a defaultdict whose keys are function names and whose values are lists of strings.
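A minimal sketch of the accumulation pattern (the hint strings here are shortened placeholders, not the real generated ones):

import collections
from typing import Dict, List

unsorted_function_hints: Dict[str, List[str]] = collections.defaultdict(list)

# Each signature group contributes one or more hint strings under its name;
# a missing key transparently starts out as an empty list.
unsorted_function_hints["rand"] += ["def rand(size) -> Tensor: ..."]
unsorted_function_hints["rand"] += ["def rand(*size) -> Tensor: ..."]

print(len(unsorted_function_hints["rand"]))  # 2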

rand

The entry representing the rand function is:

'rand': ['def rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(size: Sequence[Union[_int, SymInt]], *, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(size: Sequence[Union[_int, SymInt]], *, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...']

These are the eight overloads of rand, which fall into four groups: with both generator and names parameters, with only generator, with only names, and with neither. Each group further splits into one overload whose size parameter is a Sequence and one where it is int. These already correspond one-to-one with the entries in torch/_C/_VariableFunctions.pyi, and they mirror the two calling conventions shown below.
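The Sequence/varargs split matches the two ways rand can actually be called (assuming torch is importable):

import torch

a = torch.rand(2, 3)    # matches the `*size: _int` overload
b = torch.rand((2, 3))  # matches the `size: Sequence[Union[_int, SymInt]]` overload
assert a.shape == b.shape == torch.Size([2, 3])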

add

Under the key named add, the value list has three elements: the first comes from the add.Tensor & add.out group, and the other two (one without and one with out) come from the deprecated add & add.Tensor & add.out group:

'def add(input: Union[Tensor, Number], other: Union[Tensor, Number], *, alpha: Optional[Number]=1, out: Optional[Tensor]=None) -> Tensor: ...'
'def add(self: Tensor, alpha: Number, other: Tensor) -> Tensor: ...'
'def add(self: Tensor, alpha: Number, other: Tensor, *, out: Tensor) -> Tensor: ...'

function_hints

    function_hints = []
    for name, hints in sorted(unsorted_function_hints.items()):
        if len(hints) > 1:
            hints = ["@overload\n" + h for h in hints]
        function_hints += hints

function_hints is a list of strings:

['@overload\ndef __and_...ensor: ...', '@overload\ndef __and_...ensor: ...', '@overload\ndef __lshi...ensor: ...', '@overload\ndef __lshi...ensor: ...', '@overload\ndef __or__...ensor: ...', '@overload\ndef __or__...ensor: ...', '@overload\ndef __rshi...ensor: ...', '@overload\ndef __rshi...ensor: ...', '@overload\ndef __xor_...ensor: ...', '@overload\ndef __xor_...ensor: ...', 'def _adaptive_avg_po...ensor: ...', 'def _adaptive_avg_po...ensor: ...', 'def _add_batch_dim(i...ensor: ...', '@overload\ndef _add_r...ensor: ...', ...]

Its zeroth element is:

'@overload\ndef __and__(input: Tensor, other: Tensor) -> Tensor: ...'
rand

The eight elements representing the rand function are shown below. They are nearly identical to those in unsorted_function_hints; the only difference is the '@overload\n' prepended to each.

'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
add

The three elements representing the add function are:

'@overload\ndef add(input: Union[Tensor, Number], other: Union[Tensor, Number], *, alpha: Optional[Number]=1, out: Optional[Tensor]=None) -> Tensor: ...'
'@overload\ndef add(self: Tensor, alpha: Number, other: Tensor) -> Tensor: ...'
'@overload\ndef add(self: Tensor, alpha: Number, other: Tensor, *, out: Tensor) -> Tensor: ...'
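For reference, a call matching the first (non-deprecated) hint looks like this (assuming torch is importable):

import torch

x = torch.ones(3)
y = torch.ones(3)
out = torch.empty(3)

torch.add(x, y, alpha=2, out=out)  # computes input + alpha * other into out
print(out)  # tensor([3., 3., 3.])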

hinted_function_names

    # Generate __all__ directive
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # Include only the functions that contain hints, to prevent undefined
    # symbols to be included in the `__all__` directive.
    hinted_function_names = [
        name for name, hint in unsorted_function_hints.items() if hint
    ]

hinted_function_names is a list of strings; as the name suggests, it is simply the list of function names that have hints:

['sparse_csr_tensor', '_sparse_csr_tensor_unsafe', 'sparse_csc_tensor', '_sparse_csc_tensor_unsafe', 'sparse_bsr_tensor', '_sparse_bsr_tensor_unsafe', 'sparse_bsc_tensor', '_sparse_bsc_tensor_unsafe', 'set_flush_denormal', 'get_default_dtype', 'asarray', 'from_numpy', 'frombuffer', 'numel', ...]

It also contains:

'rand', 'rand_like', 'randint_like', 'randn', 'randn_like', 'randperm'

'add'

all_symbols

    all_symbols = sorted(list(namedtuples.keys()) + hinted_function_names)

The resulting all_symbols:

['__and__', '__lshift__', '__or__', '__rshift__', '__xor__', '_adaptive_avg_pool2d', '_adaptive_avg_pool3d', '_add_batch_dim', '_add_relu', '_add_relu_', '_addmm_activation', '_aminmax', '_amp_foreach_non_fin...d_unscale_', '_amp_update_scale_', ...]

It also contains:

'rand', 'rand_like', 'randint_like', 'randn', 'randn_like', 'randperm'

'add'

all_directive

Next, all_symbols is converted to a string with pformat and split on \n into a list of strings; the result is all_directive:

    all_directive = pformat(all_symbols, width=100, compact=True).split("\n")
    all_directive[0] = "__all__ = {}".format(all_directive[0])

The zeroth element is:

"__all__ = ['__and__', '__lshift__', '__or__', '__rshift__', '__xor__', '_adaptive_avg_pool2d',"

The element containing add is:

" 'adaptive_max_pool1d', 'add', 'addbmm', 'addcdiv', 'addcmul', 'addmm', 'addmv', 'addmv_', 'addr',"

The element containing rand is:

" 'rad2deg_', 'rand', 'rand_like', 'randint', 'randint_like', 'randn', 'randn_like', 'randperm',"

The last element is:

" 'vsplit', 'vstack', 'where', 'xlogy', 'xlogy_', 'zero_', 'zeros', 'zeros_like']"
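To see how pformat(..., compact=True).split("\n") behaves, here is a standalone sketch with a made-up symbol list and a small width (the real call uses width=100):

from pprint import pformat

all_symbols = sorted(["rand", "add", "zeros", "where", "xlogy"])
all_directive = pformat(all_symbols, width=30, compact=True).split("\n")
all_directive[0] = "__all__ = {}".format(all_directive[0])
print("\n".join(all_directive))
# Prints roughly (exact wrapping depends on the width):
# __all__ = ['add', 'rand', 'where',
#  'xlogy', 'zeros']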

env

At this point we have function_hints and all_directive; together with several other variables they make up env:

    env = {
        "namedtuple_defs": namedtuple_defs,
        "function_hints": function_hints,
        "tensor_method_hints": tensor_method_hints,
        "legacy_class_hints": legacy_class_hints,
        "legacy_storage_base_hints": legacy_storage_base_hints,
        "dtype_class_hints": dtype_class_hints,
        "dispatch_key_hints": dispatch_key_hints,
        "all_directive": all_directive,
    }

The evaluated env looks like this:

{"namedtuple_defs":["_fake_quantize_per_t... Tensor)])","_fused_moving_avg_ob... Tensor)])","_linalg_det = NamedT... Tensor)])","_linalg_eigh = Named... Tensor)])","_linalg_slogdet = Na... Tensor)])","_linalg_solve_ex = N... Tensor)])","_linalg_svd = NamedT... Tensor)])","_lu_with_info = Name... Tensor)])","_unpack_dual = Named... Tensor)])","..."],"function_hints":["@overload\ndef __and_...ensor: ...","@overload\ndef __and_...ensor: ...","@overload\ndef __lshi...ensor: ...","@overload\ndef __lshi...ensor: ...","@overload\ndef __or__...ensor: ...","@overload\ndef __or__...ensor: ...","@overload\ndef __rshi...ensor: ...","@overload\ndef __rshi...ensor: ...","@overload\ndef __xor_...ensor: ...","..."],"tensor_method_hints":["def __abs__(self) ->...ensor: ...","def __add__(self, ot...ensor: ...","@overload\ndef __and_...ensor: ...","@overload\ndef __and_...ensor: ...","@overload\ndef __and_...ensor: ...","def __bool__(self) -....bool: ...","def __complex__(self...mplex: ...","def __div__(self, ot...ensor: ...","def __eq__(self, oth...[override]","..."],"legacy_class_hints":["class DoubleTensor(T...nsor): ...","class FloatTensor(Tensor): ...","class LongTensor(Tensor): ...","class IntTensor(Tensor): ...","class ShortTensor(Tensor): ...","class HalfTensor(Tensor): ...","class CharTensor(Tensor): ...","class ByteTensor(Tensor): ...","class BoolTensor(Tensor): ..."],"legacy_storage_base_hints":["class StorageBase(object): ..."],"dtype_class_hints":["float32: dtype = ...","float: dtype = ...","float64: dtype = ...","double: dtype = ...","float16: dtype = ...","bfloat16: dtype = ...","half: dtype = ...","uint8: dtype = ...","int8: dtype = ...","..."],"dispatch_key_hints":["Undefined: DispatchKey = ...","FPGA: DispatchKey = ...","ORT: DispatchKey = ...","Vulkan: DispatchKey = ...","Metal: DispatchKey = ...","MKLDNN: DispatchKey = ...","OpenGL: DispatchKey = ...","OpenCL: DispatchKey = ...","IDEEP: DispatchKey = ...","..."],"all_directive":["__all__ = ['__and__...,"," ...,"," '_aminmax', ...,"," ...,"," '_cast_Float', ...,"," ...,"," ...,"," ...,"," '_convolution_mode...,","..."]
}

env is then passed to write_with_template, a member function of FileManager:

    # ...
    fm.write_with_template(
        "torch/_C/__init__.pyi",
        "torch/_C/__init__.pyi.in",
        lambda: {
            "generated_comment": "@" + "generated from torch/_C/__init__.pyi.in",
            **env,
        },
    )
    fm.write_with_template(
        "torch/_C/_VariableFunctions.pyi",
        "torch/_C/_VariableFunctions.pyi.in",
        lambda: {
            "generated_comment": "@"
            + "generated from torch/_C/_VariableFunctions.pyi.in",
            **env,
        },
    )
    fm.write_with_template(
        "torch/_VF.pyi",
        "torch/_C/_VariableFunctions.pyi.in",
        lambda: {
            "generated_comment": "@"
            + "generated from torch/_C/_VariableFunctions.pyi.in",
            **env,
        },
    )
    fm.write_with_template(
        "torch/return_types.pyi",
        "torch/_C/return_types.pyi.in",
        lambda: {
            "generated_comment": "@" + "generated from torch/_C/return_types.pyi",
            **env,
        },
    )
    gen_nn_functional(fm)

This code calls FileManager's write_with_template as well as gen_nn_functional; we will look at gen_nn_functional shortly.

As explained in the Merging Dictionaries section of Unpacking Operators in Python, the expression {"a": 1, **my_dict} first unpacks my_dict and then combines its entries with "a": 1 to form a new dictionary.

The lambda: {} form denotes a lambda function that takes no arguments and returns a dictionary, as the sketch below demonstrates.
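A quick demonstration of both idioms (my_dict is just a placeholder name):

my_dict = {"b": 2, "c": 3}

merged = {"a": 1, **my_dict}  # unpack my_dict into a fresh dict
print(merged)                 # {'a': 1, 'b': 2, 'c': 3}

make_env = lambda: {"a": 1, **my_dict}  # no-argument lambda returning a dict
print(make_env())             # {'a': 1, 'b': 2, 'c': 3}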

Also note the extra , after the last argument of each write_with_template call. According to Should I add a trailing comma after the last argument in a function call? [closed], when a call's arguments span multiple lines, the recommended style is to end the last one with a trailing ,.

Recall from earlier that six pyi files are generated from six pyi.in files in total. Only four pyi files are generated here; the remaining two (functional.pyi and _nn.pyi) are generated in gen_nn_functional, again by calling FileManager.write_with_template.

The FileManager.write_with_template function takes a template and generates the pyi file according to the substitutions specified by the replacement function. It is covered in a separate post; see PyTorch檔案生成機制中的FileManager.write_with_template.

gen_nn_functional

The gen_nn_functional function also lives in tools/pyi/gen_pyi.py. Its job is to generate torch/nn/functional.pyi and torch/_C/_nn.pyi from torch/nn/functional.pyi.in and torch/_C/_nn.pyi.in.

def gen_nn_functional(fm: FileManager) -> None:
    # Functions imported into `torch.nn.functional` from `torch`, perhaps being filtered
    # through an `_add_docstr` call
    imports = [
        "conv1d",
        "conv2d",
        "conv3d",
        "conv_transpose1d",
        "conv_transpose2d",
        "conv_transpose3d",
        "conv_tbc",
        "avg_pool1d",
        "relu_",
        "selu_",
        "celu_",
        "rrelu_",
        "pixel_shuffle",
        "pixel_unshuffle",
        "channel_shuffle",
        "native_channel_shuffle",
        "pdist",
        "cosine_similarity",
    ]
    # Functions generated by `torch._jit_internal.boolean_dispatch`
    dispatches = [
        "fractional_max_pool2d",
        "fractional_max_pool3d",
        "max_pool1d",
        "max_pool2d",
        "max_pool3d",
        "adaptive_max_pool1d",
        "adaptive_max_pool2d",
        "adaptive_max_pool3d",
    ]
    # Functions directly imported from `torch._C`
    from_c = [
        "avg_pool2d",
        "avg_pool3d",
        "hardtanh_",
        "elu_",
        "leaky_relu_",
        "logsigmoid",
        "softplus",
        "softshrink",
        "one_hot",
    ]
    import_code = ["from .. import {0} as {0}".format(_) for _ in imports]
    # TODO make these types more precise
    dispatch_code = ["{}: Callable".format(_) for _ in (dispatches + from_c)]
    fm.write_with_template(
        "torch/nn/functional.pyi",
        "torch/nn/functional.pyi.in",
        lambda: {
            "imported_hints": import_code,
            "dispatched_hints": dispatch_code,
        },
    )

    # functional.pyi already contains the definitions for those functions
    # so, we don't export then to it
    from_c.extend(["hardtanh", "leaky_relu", "hardsigmoid"])
    dispatch_code = ["{}: Callable".format(_) for _ in (dispatches + from_c)]
    fm.write_with_template(
        "torch/_C/_nn.pyi",
        "torch/_C/_nn.pyi.in",
        lambda: {
            "imported_hints": import_code,
            "dispatched_hints": dispatch_code,
        },
    )

As you can see, this function too ultimately calls FileManager's write_with_template to generate the .pyi files.

write_with_template is covered in a separate post; see PyTorch檔案生成機制中的FileManager.write_with_template.

datapipe.pyi

Looking back at torch/CMakeLists.txt:

file(GLOB_RECURSE datapipe_files "${TORCH_SRC_DIR}/utils/data/datapipes/*.py")
add_custom_command(
  OUTPUT
    "${TORCH_SRC_DIR}/utils/data/datapipes/datapipe.pyi"
  COMMAND
    "${PYTHON_EXECUTABLE}" ${TORCH_SRC_DIR}/utils/data/datapipes/gen_pyi.py
  DEPENDS
    "${TORCH_SRC_DIR}/utils/data/datapipes/datapipe.pyi.in"
    ${datapipe_files}
  WORKING_DIRECTORY
    "${TORCH_ROOT}"
)

datapipe.pyi is generated in a similar way: utils/data/datapipes/gen_pyi.py produces it from datapipe.pyi.in.

From the comment in torch/utils/data/datapipes/datapipe.pyi.in:

# This base template ("datapipe.pyi.in") is generated from mypy stubgen with minimal editing for code injection
# The output file will be "datapipe.pyi". This is executed as part of torch/CMakeLists.txt
# Note that, for mypy, .pyi file takes precedent over .py file, such that we must define the interface for other
# classes/objects here, even though we are not injecting extra code into them at the moment.

Generated results

torch/_C/_VariableFunctions.pyi.in為例:

  • generated_comment

    # ${generated_comment}
    

    is replaced with:

    # @generated from torch/_C/_VariableFunctions.pyi.in
    
  • function_hints

    ${function_hints}
    

    is replaced with:

    @overload
    def __and__(input: Tensor, other: Tensor) -> Tensor: ...
    # ...
    def zeros_like(input: Tensor, *, memory_format: Optional[memory_format] = None, dtype: Optional[_dtype] = None, layout: Optional[_layout] = None, device: Optional[Union[_device, str, None]] = None, pin_memory: Optional[_bool] = False, requires_grad: Optional[_bool] = False) -> Tensor: ...
    
  • all_directive

    ${all_directive}
    

    is replaced with:

    __all__ = ['__and__', '__lshift__', '__or__', '__rshift__', '__xor__', '_adaptive_avg_pool2d',
    # ...'view_copy', 'vsplit', 'vstack', 'where', 'xlogy', 'xlogy_', 'zero_', 'zeros', 'zeros_like']
    

All other parts are identical to torch/_C/_VariableFunctions.pyi.in.
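To make the substitution concrete, here is a toy emulation of the placeholder replacement (this is not FileManager's actual implementation; substitute is a made-up helper sketching the idea):

import re

def substitute(template: str, env: dict) -> str:
    # Replace each ${key} placeholder with env[key];
    # list-valued entries are joined line by line.
    def repl(match: re.Match) -> str:
        value = env[match.group(1)]
        return "\n".join(value) if isinstance(value, list) else str(value)
    return re.sub(r"\$\{(\w+)\}", repl, template)

template = "# ${generated_comment}\n\n${function_hints}\n"
env = {
    "generated_comment": "@generated from torch/_C/_VariableFunctions.pyi.in",
    "function_hints": ["@overload\ndef rand(*size) -> Tensor: ..."],
}
print(substitute(template, env))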

Using pyi for type checking

torch/__init__.py contains the following passage:

# Appease the type checker: it can't deal with direct setting of globals().
# Note that we will see "too many" functions when reexporting this way; there
# is not a good way to fix this problem.  Perhaps, try to redesign VariableFunctions
# so that this import is good enough
if TYPE_CHECKING:
    # Some type signatures pulled in from _VariableFunctions here clash with
    # signatures already imported. For now these clashes are ignored; see
    # PR #43339 for details.
    from torch._C._VariableFunctions import *  # type: ignore[misc] # noqa: F403

That is, when type checking is enabled, everything in torch._C._VariableFunctions gets imported.

Here torch._C._VariableFunctions refers precisely to the torch/_C/_VariableFunctions.pyi we just examined.

According to pyi文件是干嘛的?(一文读懂Python的存根文件和类型检查), when a py file and a pyi file share the same name and sit in the same folder, type checking kicks in without importing the pyi file. Presumably the manual import is needed here because the py file and the pyi file have different names?
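As a hypothetical illustration of that same-name rule (greet.py and greet.pyi are made-up file names), a checker such as mypy would pick up the stub below automatically, with no explicit import of the pyi:

# greet.py -- implementation without annotations
def greet(name):
    return "hello " + name

# greet.pyi -- stub with the same basename, placed in the same folder;
# for type checking, the stub's signature takes precedence over greet.py
def greet(name: str) -> str: ...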
