I capture the position at the first touch and compare it with the last position captured when the CapSense slider is released.
The user places a finger on the slider and slides across it, and you determine the distance moved and the direction from the displacement. For example, moving from 0 to 100 is the positive direction, and moving from 100 to 0 is the negative direction. In this case you need to register two events, touch-down and lift-off, and then calculate the displacement between them.
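The touch-down/lift-off approach above can be sketched as a small helper. This is an illustrative sketch, not CapSense library code: the function name, the positions (assumed to be the slider's reported centroid values at the two events), and the `deadband` threshold are all assumptions for the example.

```python
def classify_swipe(touchdown_pos, liftoff_pos, deadband=2):
    """Classify a swipe from the positions captured at touch-down and lift-off.

    Returns a (label, displacement) pair:
    - "positive" if the finger moved toward the high end (e.g. 0 -> 100),
    - "negative" if it moved toward the low end (e.g. 100 -> 0),
    - "tap" if the net movement is within the deadband (no real slide).
    """
    displacement = liftoff_pos - touchdown_pos
    if abs(displacement) < deadband:
        return "tap", displacement
    return ("positive" if displacement > 0 else "negative"), displacement
```

The deadband filters out jitter so a stationary touch is not misread as a tiny swipe; its value would need tuning against the slider's actual resolution.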
Could you please elaborate on the need for two-finger detection in your case? Will the user place two fingers on the slider at a time, or just slide across it?
The operation you mentioned is already what I do. However, when a user puts their fingers on a rotary knob, they usually put three to four fingers on it. The CapSense library does not tolerate the detection of more than one finger on a slider.
Imagine the rotary knob for your car audio volume were a fixed touch surface: to increase or decrease the volume you would not use just one finger, but at least three.
That is why I need detection of two or more fingers.
Thanks for the explanation.
Our centroid algorithm works only with one finger present on the slider. If you want to support more than one finger, you would have to develop your own algorithm, and that would be quite difficult to implement. It depends on many factors, such as the spacing between the slider segments and the spacing between the fingers as they move around the slider; any change in that finger spacing during the motion will cause the algorithm to fail.
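To see why a single-finger centroid breaks down with multiple fingers, here is a minimal sketch of an index-weighted centroid over per-segment signals. This is a generic illustration, not the actual CapSense centroid implementation; the signal values are made up for the example.

```python
def centroid(signals):
    """Weighted-average centroid of sensor signals, in segment-index units.

    With one finger, the peak dominates and the result tracks the finger.
    With two separated fingers, the weights merge into a single bogus
    position between them, where no finger actually is.
    """
    total = sum(signals)
    if total == 0:
        return None  # no touch detected
    return sum(i * s for i, s in enumerate(signals)) / total

# One finger over segment 2: centroid correctly reports ~2.0
one_finger = [0, 10, 100, 10, 0]

# Two fingers over segments 1 and 3: centroid also reports 2.0,
# a phantom touch exactly between the fingers.
two_fingers = [0, 100, 0, 100, 0]
```

This is why the engineer's caveats matter: the phantom position depends on segment pitch and finger spacing, and it shifts unpredictably if the finger spacing changes mid-gesture.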
We recommend using a rotary encoder that moves a metal target over coils with our inductive sensing solution. We already have a code example showcasing rotary encoders; you can refer to the project at the following link.