There are two ways to go about this: grouping the sorted elements (which doesn't use .value_counts()) and binning.
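For reference, I'll assume a df along these lines, reconstructed from the outputs shown below (the real data comes from your question):

import pandas as pd

# Example data reconstructed from the outputs below; your actual df may differ.
df = pd.DataFrame({'val': [5.01, 5.01, 5.08, 5.54, 5.55, 5.56, 6.1, 6.3, 6.7]})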
Grouping sorted elements
Sort the values, compare each consecutive pair (using .diff()), then assign group numbers (using .cumsum()).
Then you can .groupby() the group numbers and aggregate, getting the unique elements of each group along with its size.
tolerance = 0.01

vals_sorted = df['val'].sort_values()
group_numbers = (
    vals_sorted
    .diff()                  # gap between consecutive sorted values
    .gt(tolerance)           # True where a gap starts a new group
    .cumsum()                # running group number
    .rename('group_number')
)
vals_sorted.groupby(group_numbers).agg(['unique', 'size'])
                          unique  size
group_number
0                         [5.01]     2
1                         [5.08]     1
2             [5.54, 5.55, 5.56]     3
3                          [6.1]     1
4                          [6.3]     1
5                          [6.7]     1
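As a side note: .sort_values() keeps the original index, so if you also want a group label on each row of df, the group numbers can be assigned straight back (an extra step, not needed for the aggregation above):

# group_numbers carries df's original index, so assignment aligns row-by-row
df['group_number'] = group_numbers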
Binning
Create equally-sized bins and pass them to .value_counts(); the bins= argument is a shortcut for binning with pd.cut() and counting.
Since the binned result counts every interval, empty bins show up with a count of zero, so filter them out.
I also sorted the result by the index to make it easier to compare against the first solution.
import numpy as np

tolerance = 0.01
start = df['val'].min()
stop = df['val'].max()
step = 3 * tolerance                        # bin width
bins = np.arange(start, stop + step, step)  # edges covering the full range

# Count values per bin, keep only non-empty bins, and sort by the bin edges.
df['val'].value_counts(bins=bins)[lambda s: s > 0].sort_index()
val
(5.0089999999999995, 5.04]    2
(5.07, 5.1]                   1
(5.52, 5.55]                  2
(5.55, 5.58]                  1
(6.09, 6.12]                  1
(6.27, 6.3]                   1
(6.69, 6.72]                  1
Name: count, dtype: int64
The result isn't quite what you want, but it's close. You may want to adjust the start value, e.g. start = df['val'].min() - 2*tolerance, so that the bin edges aren't anchored exactly at the minimum value.
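A minimal sketch of that adjustment, reusing stop and step from above (the exact interval edges will still depend on floating-point rounding):

# Shift the start left by two tolerances so bin edges don't sit exactly on the minimum.
start = df['val'].min() - 2 * tolerance
bins = np.arange(start, stop + step, step)
df['val'].value_counts(bins=bins)[lambda s: s > 0].sort_index()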