
Commit d0b56df

nv-guomingz and Guoming Zhang authored

fix doc typo. (#114)

Co-authored-by: Guoming Zhang <37257613+nv-guomingz@users.noreply.github.com>
1 parent 1c4a0ee commit d0b56df

2 files changed: +3 -3 lines changed


docs/source/python-api/tensorrt_llm.quantization.rst

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-Qunatization
+Quantization
 ===========================
 
 .. automodule:: tensorrt_llm

tests/quantization/test_mode.py

Lines changed: 2 additions & 2 deletions
@@ -170,12 +170,12 @@ def test_int8_kv_cache(self):
         self.assertTrue(qm.is_int8_weight_only())
 
     def test_failure_quant(self):
-        # Expect failure if weights are not qunatized, but activations are.
+        # Expect failure if weights are not quantized, but activations are.
         self.assertRaises(
             ValueError,
             lambda: QuantMode.from_description(False, True, False, False))
 
-        # Expect failure if per token and per channel quantization, but weights and activations are not qunatized.
+        # Expect failure if per token and per channel quantization, but weights and activations are not quantized.
         self.assertRaises(
             ValueError,
             lambda: QuantMode.from_description(False, False, True, True))
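
For context, the comments corrected above describe the validation rules that the test exercises against `QuantMode.from_description`. Below is a minimal sketch of those two rejection cases; the import path and the positional argument order (quantize_weights, quantize_activations, per_token, per_channel) are inferred from this test file and may differ in other versions of tensorrt_llm.

```python
# Minimal sketch of the rules referenced by the corrected comments.
# Assumption: QuantMode is importable from tensorrt_llm.quantization and
# from_description takes (quantize_weights, quantize_activations, per_token,
# per_channel) as its first four positional arguments, as the test implies.
from tensorrt_llm.quantization import QuantMode

# Quantizing activations without quantizing weights is rejected.
try:
    QuantMode.from_description(False, True, False, False)
except ValueError:
    print("rejected: activations quantized but weights are not")

# Per-token / per-channel scaling without quantized weights and activations
# is also rejected.
try:
    QuantMode.from_description(False, False, True, True)
except ValueError:
    print("rejected: per-token/per-channel without quantized weights/activations")
```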
