windows/README.md (36 additions, 30 deletions)
@@ -10,40 +10,42 @@
- [Installation](#installation)
- [Extra Steps for C++ Runtime Usage](#extra-steps-for-c-runtime-usage)
- [Next Steps](#next-steps)
+- [Limitations](#limitations)

## Overview

-TensorRT-LLM is supported on bare-metal Windows for single-GPU inference. We provide a release wheel for Windows which can be downloaded from https://developer.nvidia.com/. Alternatively, you may build TensorRT-LLM for Windows from source. Building from source is an advanced option and is not necessary for building or running LLM engines. It is, however, required if you plan to use the C++ runtime directly or run C++ benchmarks.
+TensorRT-LLM is supported on bare-metal Windows for single-GPU inference. The release supports GeForce 40-series GPUs.
+
+The release wheel for Windows can be installed with `pip`. Alternatively, you may build TensorRT-LLM for Windows from source. Building from source is an advanced option and is not necessary for building or running LLM engines. It is, however, required if you plan to use the C++ runtime directly or run C++ benchmarks.

## Quick Start

If you encounter difficulties with any prerequisites, check the [Detailed Setup](#detailed-setup) instructions below.
- [TensorRT 9.1.0.4 for TensorRT-LLM](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/9.1.0/tars/tensorrt-9.1.0.4.windows10.x86_64.cuda-12.2.llm.beta.zip)

-Install [Python3 >= 3.9](https://www.python.org/downloads/windows/). When installing, add to the system `Path` and click "Disable path length limit." The installation may only add the `python` command, but not the `python3` command. Navigate to the installation path, `C:\Users\<username>\AppData\Local\Programs\Python\Python39` (note `AppData` is a hidden folder), and copy `python.exe` to `python3.exe`.
+Install [Python 3.10](https://www.python.org/downloads/windows/). Select "Add python.exe to PATH" at the start of the installation. The installation may only add the `python` command, but not the `python3` command. Navigate to the installation path, `%USERPROFILE%\AppData\Local\Programs\Python\Python310` (note `AppData` is a hidden folder), and copy `python.exe` to `python3.exe`.
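If you prefer to script the copy, here is a minimal PowerShell sketch, assuming the default per-user Python 3.10 install location mentioned above:

```
# Create a python3.exe alias next to python.exe
# (assumes the default per-user Python 3.10 install path).
$py = "$env:USERPROFILE\AppData\Local\Programs\Python\Python310"
Copy-Item "$py\python.exe" "$py\python3.exe"
```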
### CUDA

-Install the [CUDA 12.2 Toolkit](https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64). You may use the Express Installation option. Installation may require a restart.
+Install the [CUDA 12.2 Toolkit](https://developer.nvidia.com/cuda-12-2-2-download-archive?target_os=Windows&target_arch=x86_64). You may use the Express Installation option. Installation may require a restart.
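After installation (and any restart), a quick smoke test from PowerShell confirms the toolkit is visible:

```
# nvcc should report CUDA 12.2; nvidia-smi confirms the driver sees your GPU.
nvcc --version
nvidia-smi
```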
### Microsoft MPI

Download and install [Microsoft MPI](https://www.microsoft.com/en-us/download/details.aspx?id=57467). You will be prompted to choose between an `exe`, which installs the MPI executable, and an `msi`, which installs the MPI SDK. Download and install both.
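As a quick check that the runtime landed on your `Path`, MS-MPI's `mpiexec` prints its usage text when run with no arguments:

```
# Prints the MS-MPI usage banner if the exe package installed correctly.
mpiexec
```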
### TensorRT-LLM Repo

-It may be useful to create a single folder for holding TensorRT-LLM and its dependencies, such as `C:\Users\<username>\inference\`. We will assume this directory structure in further steps.
+It may be useful to create a single folder for holding TensorRT-LLM and its dependencies, such as `%USERPROFILE%\inference\`. We will assume this directory structure in further steps.

Install [Git for Windows](https://git-scm.com/download/win).
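With Git installed, the repository can be cloned into that folder. A sketch, assuming the upstream NVIDIA repository and the `%USERPROFILE%\inference` layout above:

```
# Create the shared folder and clone TensorRT-LLM into it.
mkdir "$env:USERPROFILE\inference"
cd "$env:USERPROFILE\inference"
git clone https://github.com/NVIDIA/TensorRT-LLM.git
```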
-Download and unzip [TensorRT 9.1.0.4](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-zip). Move the folder to a location you can reference later, such as `C:\Users\<username>\inference\TensorRT`.
+Download and unzip [TensorRT 9.1.0.4 for TensorRT-LLM](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/9.1.0/tars/tensorrt-9.1.0.4.windows10.x86_64.cuda-12.2.llm.beta.zip). Move the folder to a location you can reference later, such as `%USERPROFILE%\inference\TensorRT`.

-Download and unzip [cuDNN](https://developer.nvidia.com/cudnn). Move the folder to a location you can reference later, such as `C:\Users\<username>\inference\cuDNN`.
+Download and unzip [cuDNN](https://developer.nvidia.com/cudnn). Move the folder to a location you can reference later, such as `%USERPROFILE%\inference\cuDNN`.
-You'll need to add libraries and binaries for TensorRT and cuDNN to your system's `Path` environment variable. To do so, click the Windows button and search for "environment variables." Select "Edit the system environment variables." A "System Properties" window will open. Select the "Environment Variables" button at the bottom right, then in the new window under "System variables" click "Path" then the "Edit" button. Add "New" lines for the `bin` and `lib` dirs of both TensorRT and cuDNN. Your `Path` should include lines like this:
+You'll need to add libraries and binaries for TensorRT and cuDNN to your system's `Path` environment variable. To do so, click the Windows button and search for "environment variables." Select "Edit the system environment variables." A "System Properties" window will open. Select the "Environment Variables" button at the bottom right, then in the new window under "System variables" click "Path" then the "Edit" button. Add "New" lines for the `lib` dir of TensorRT and for the `bin` and `lib` dirs of cuDNN. Your `Path` should include lines like this:

```
-C:\Users\<username>\inference\TensorRT\bin
-C:\Users\<username>\inference\TensorRT\lib
-C:\Users\<username>\inference\cuDNN\bin
-C:\Users\<username>\inference\cuDNN\lib
+%USERPROFILE%\inference\TensorRT\lib
+%USERPROFILE%\inference\cuDNN\bin
+%USERPROFILE%\inference\cuDNN\lib
```

Click "OK" on all the open dialog windows. Be sure to close and re-open any existing PowerShell or Git Bash windows so they pick up the new `Path`.
Now, to install the TensorRT core libraries, run PowerShell and use `pip` to install the Python wheel:

You may run the following command to verify that your TensorRT installation is working properly:
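The exact commands are not visible in this diff view. As a hedged example only, a common way to verify a TensorRT Python install is to import the module and print its version:

```
# Expect 9.1.0.4 if the TensorRT wheel installed correctly.
python -c "import tensorrt as trt; print(trt.__version__)"
```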
@@ -91,21 +92,22 @@ Install [CMake](https://cmake.org/download/) and select the option to add it to

Download and install [Visual Studio 2022](https://visualstudio.microsoft.com/). When prompted to select more Workloads, check "Desktop development with C++."

-TensorRT-LLM on Windows currently depends on NVTX assets that do not come packaged with the CUDA 12.2 installer. To install these assets, download the [CUDA 11.8 Toolkit](https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Windows&target_arch=x86_64). During installation, select "Advanced installation." Nsight NVTX is located in the CUDA drop-down. Deselect *all* packages, and select Nsight NVTX.
+TensorRT-LLM on Windows currently depends on NVTX assets that do not come packaged with the CUDA 12.2 installer. To install these assets, download the [CUDA 11.8 Toolkit](https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Windows&target_arch=x86_64). During installation, select "Advanced installation." Nsight NVTX is located in the CUDA drop-down. Deselect all packages, and then select Nsight NVTX.
## Building from Source

*Advanced. Skip this section if you plan to use the pre-built TensorRT-LLM release wheel.*

In PowerShell, from the `TensorRT-LLM` root folder, run:

```
-python .\scripts\build_wheel.py -a <architecture> --trt_root <path_to_trt_root> --build_type Release -D "ENABLE_MULTI_DEVICE=0"
+python .\scripts\build_wheel.py -a "89-real" --trt_root <path_to_trt_root> --build_type Release -D "ENABLE_MULTI_DEVICE=0"
```

-`<architecture>` should correspond to the architecture or list of architectures you wish to support, e.g., `"86-real;89-real"` to support GeForce 30-series and 40-series cards.

The `-D "ENABLE_MULTI_DEVICE=0"` is required on Windows. Multi-device inference is supported on Linux, but not on Windows.
-The above command will generate `build\tensorrt_llm-<version>-py3-none-any.whl`. Other generated files include:
+The `-a` flag specifies the device architecture. `"89-real"` supports GeForce 40-series cards.
+
+The above command will generate `build\tensorrt_llm-0.5.0-py3-none-any.whl`. Other generated files include:

- `build\` - Contains the wheel and other built artifacts
- `cpp\build\` - Contains cpp-related build files
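Once the build completes, the generated wheel can be installed into the current Python environment; a minimal sketch using the file name shown above:

```
# Install the freshly built wheel.
pip install .\build\tensorrt_llm-0.5.0-py3-none-any.whl
```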
@@ -114,10 +116,14 @@ The above command will generate `build\tensorrt_llm-<version>-py3-none-any.whl`.

## Installation

-In PowerShell, from the root of this repo, run:
+To download and install the wheel, in PowerShell, run:

You may run the following command to verify that your TensorRT-LLM installation is working properly:
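The install and verification commands are not shown in this view; a plausible check, mirroring the TensorRT verification earlier, is:

```
# Importing the package proves its DLL dependencies resolve; expect 0.5.0.
python -c "import tensorrt_llm; print(tensorrt_llm.__version__)"
```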
@@ -147,17 +153,17 @@ Building from source will produce the following library files:

- `th_common.exp`
- `th_common.lib`

-The locations of the DLLs, in addition to some `torch` DLLs, must be added to the Windows `Path` in order to use the TensorRT-LLM C++ runtime. As in [Setup](#setup), append the locations of these libraries to your `Path`. When complete, your `Path` should include lines similar to these:
+The locations of the DLLs, in addition to some `torch` DLLs, must be added to the Windows `Path` in order to use the TensorRT-LLM C++ runtime. As in [Detailed Setup](#detailed-setup), append the locations of these libraries to your `Path`. When complete, your `Path` should include lines similar to these:
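The example `Path` lines are not shown here. As a hedged aid (not part of the original README), the directory holding the bundled `torch` DLLs can be located from Python:

```
# Print the torch\lib directory, which contains the torch DLLs.
python -c "import os, torch; print(os.path.join(os.path.dirname(torch.__file__), 'lib'))"
```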
windows/examples/llama/README.md (1 addition, 0 deletions)
@@ -9,6 +9,7 @@ The TensorRT-LLM LLaMA example code is located in [`examples/llama`](../../../ex

Rather, here we showcase how to run a quick benchmark using the provided `benchmark.py` script. This script builds, runs, and benchmarks an INT4-GPTQ quantized LLaMA model using TensorRT.
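As an illustration only (the script's actual flags are not shown in this diff), an argparse-based script like `benchmark.py` will typically list its options via:

```
# Assumes benchmark.py exposes a standard argparse --help.
python .\benchmark.py --help
```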