I'm trying to drive multiple servos using a single PWM block and a DEMUX. Normally I have enough PWM blocks to drive 4 servos, but some of our applications require nearly all of the TCPWM/UDB resources. So I'd like to know whether it is possible to drive multiple pins with PWM using a single PWM block and a DEMUX.
The idea is as follows:
The PWM frequency is 50 Hz, and the on-time of the duty cycle is around 0.5 ms to 5 ms. My thought was: start the PWM on pin 1 and wait for a falling-edge interrupt. After that, switch the DEMUX to pin 2 and restart the PWM with the counter reset and a new compare value; do the same for pins 3 and 4. After the final falling edge on pin 4, I write the total elapsed on-time to the PWM counter and let it generate an interrupt on TC. Once TC is reached, the process loops. In theory, this should let me use one single PWM block and vary its compare value to drive multiple pins. However, the project does not work as expected: there is a lot of jitter on the servos. Can anyone help me with this issue?
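To make the intended sequence concrete, here is a minimal sketch of the falling-edge ISR logic I have in mind. The hardware calls in the comments are placeholders, not the actual PSoC Creator API, and the clock/pulse counts are just illustrative:

```c
#include <stdint.h>

#define NUM_SERVOS    4
#define PERIOD_COUNTS 20000u   /* e.g. 1 MHz timer clock, 20 ms frame */

static uint16_t on_time[NUM_SERVOS] = {1500u, 1200u, 1800u, 900u};
static uint8_t  slot = 0u;     /* which servo pulse is currently running */

/* Called on each PWM falling edge: advance the demux and return the next
 * compare value. After the last servo, return the idle time that pads the
 * frame out to 20 ms, so the TC interrupt restarts the whole cycle. */
uint16_t falling_edge_isr(void)
{
    slot++;
    if (slot < NUM_SERVOS) {
        /* Demux_Select(slot); PWM_WriteCounter(0); PWM_WriteCompare(on_time[slot]); */
        return on_time[slot];
    }
    /* All four pulses done: load the remaining off-time into the counter. */
    uint16_t used = 0u;
    for (uint8_t i = 0u; i < NUM_SERVOS; i++) {
        used += on_time[i];
    }
    slot = 0u;
    return (uint16_t)(PERIOD_COUNTS - used);
}
```

The point is that the four on-times plus the final idle slice always sum to exactly one 20 ms frame, so the frame rate stays at 50 Hz regardless of the individual pulse widths.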
Thanks, in advance.
Attached is the example project that I've been working on.
Servo PWM is comparatively slow. You could use a general interrupt-driven timebase, built from a timer and a piece of software, for the required number of output pins.
@odissey1 and @bobgoar: Thanks for the input! I actually came across that block when we were researching how many PWM channels were available, etc., back before we started using Cypress chips. I never cared to look at the block because it was not officially available in the Creator suite and, technically, just because of that, not applicable for most of our products (we don't want stuff to fail because of possibly badly coded community blocks). However, on your advice I downloaded the block, and the Verilog file along with the .c/.h files was there for us to look at and tinker with. So yeah, maybe we can use that block.
@Bob Marlowe: Are you suggesting that I set up a timer with an interrupt and use it to drive a software PWM? If so, then yes, it is certainly possible for a small project driving a couple of toy servos. However, we never really use software PWM because of the CPU overhead. Some of our projects are simple, yet others are very computationally intensive, with all the CapSense/ADC modules in use, etc. We also need to use high-priority interrupts. This means there would be at least 0.5-1 ms of jitter on the output of that software PWM. And we also need high-frequency PWM: software PWM works kind of okay-ish at low frequencies, but it's a no-no at high ones.
It is possible that the demux is open-circuiting the output. However, the pins are driven in strong drive mode, so I suppose the pin output can't be floating, but it can be 1 or 0 depending on the demux output. If I was driving the demux output high and then changed the selector to pin 2, would the demux output on pin 1 stay high? If so, that's a problem.
Other than that, has anyone got any idea why my code/algorithm does not work? Making this work matters to me, because if it does, what usually happens next is that I gain enough confidence to apply unorthodox methods like this to solve some cumbersome coding problems. If it doesn't work, what I usually do is assume all creativity would fail and go the old-fashioned way.
Hope someone can shed light on this issue.
"...any idea on why my code/algorithm does not work?"
This is your main loop:
p1_movement = 420.0 + 87.0 * sin((double)PWM_compare_val_timer/(double)3000.0 * M_PI);
p2_movement = 750.0 + 187.0 * sin((double)PWM_compare_val_timer/(double)3000.0 * M_PI);
p3_movement = 550.0 + 30.0 * sin((double)PWM_compare_val_timer/(double)3000.0 * M_PI);
p4_movement = 500.0 + 300.0 * sin((double)PWM_compare_val_timer/(double)6000.0 * M_PI);
1. The main loop is 100% busy, yielding practically no time to any of your ISR routines.
2. Most of the calculated values p1, p2, ... are never used (you need only one value at a time, and only occasionally).
3. Each p1-p4 value takes > 10k CPU ticks to compute (sin() is sloooow).
4. Disabling global interrupts for 99% of the time gives practically no yield to other ISRs.
5. The Timer_1 ISR fires at a ~200 kHz rate (isn't that overkill for a 50 Hz PWM update rate?). Even if it takes only 50 CPU ticks each time, the total CPU time consumed by the Timer_1 ISR is roughly 200k x 50 = 10M ticks (out of 24M available), i.e. ~40% of the CPU just to increment PWM_compare_val_timer!
I like your idea of merging 4 servo pulses into a single PWM; it is a miracle that the code resulted in only a "small jitter". Addressing those issues should solve the problem. Nice idea.
Thanks for the input!
Actually, the algorithm works fine if I let the demux stick to one servo: it works for every servo, and each shows the sine movement. The sine timer is supposed to generate an interrupt every 1 ms; the count gets divided by 3000 to slow the servo movement down, which also gives me finer resolution over the rotation. This part works fine. I know I am receiving the interrupt from the PWM, because that's where I change the compare value, and since I get the sine movement, my ISR works fine. The problem starts when I let the demux change pins on the falling edge of the PWM output. I don't know why that part doesn't work.
The code also looks a bit weird because I took stuff out to simplify it, and it still worked, so hey, I just uploaded it as-is. I'm still gonna try to work through your suggestions, though.
I like doing unorthodox stuff just to try it out. If it works, I'll post the solution. I'll also keep you guys informed about the status/problems.
Hope we can figure this out.
Attached are two projects with some variations: one with a demux and another without it (to save resources). To make it work without glitches, the hardware timing is handled using either a counter or a lookup table. Note that the 4x demux takes up ~20% of the PLD space, so in the second project the demux is dropped and the built-in Pin_enable function is used instead; this poses some constraints on pin selection, though. As shown, each PWM output varies from 0 to 5 ms, with a total PRF of 50 Hz and a PWM resolution of 1024 per channel. To save CPU resources, the sin() function was replaced with a 12-bit sine lookup table.
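The core of both projects is the same idea: the PWM period is 5 ms, so four back-to-back periods make up the 20 ms frame, and on each terminal-count interrupt the next channel is selected and its compare value loaded. Each pulse always fits inside its own 5 ms slot, so there is no cumulative timing error. A stripped-down sketch of the rotation logic (names are hypothetical, not the actual generated API; compare values are arbitrary examples):

```c
#include <stdint.h>

#define NUM_CH      4
#define SLOT_COUNTS 1024u   /* PWM resolution per 5 ms sub-frame */

static uint16_t compare[NUM_CH] = {300u, 512u, 150u, 700u};
static uint8_t  ch = 0u;

/* Terminal-count ISR: rotate to the next output channel and load its
 * pulse width. The PWM hardware then generates the pulse unattended. */
uint16_t tc_isr(void)
{
    ch = (uint8_t)((ch + 1u) % NUM_CH);
    /* PinEnable_Write(1u << ch); PWM_WriteCompare(compare[ch]); */
    return compare[ch];
}
```

Because the ISR runs only once per 5 ms (200 Hz total), the CPU load is negligible compared to the ~200 kHz timer in the original project.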
YouTube video of the output can be found here: https://youtu.be/qQQ731Kddq8
Wow! I think you've figured out a way to combine PWM and demux. I've tried your project with the servos and they work quite well! It is a very nice idea to divide the 20 ms period into 5 ms slots (since the on-time is always below 5 ms) and use the OV to drive the interrupt. It's a nice workaround for our problem with the interrupt-driven on-time PWM sequence (I still couldn't figure that one out, and I bet I won't be able to without an oscilloscope. I can't use the one at work, so I'll have to program one of my older development boards sometime; it's difficult to profile the code without a scope).
I couldn't manage to get your second project to work, but I'm sure there's an explanation, since you've proved with an oscilloscope that it works. I would never have thought of using a LUT and the OE pin like that. Great idea!
Well, from time to time I use an interrupt-driven demux in our FPGA projects. It is good to know that the PSoC 4 can be an alternative to expensive ARM core + FPGA chips when we need few resources. Sure, some of this stuff can be done in software, but where's the fun in that! I'll have to look further into PSoC, I suppose.
By the way, thanks for the very detailed study and everything. From now on, we shall call your project "One PWM to Rule Them All".
Maximus, on the second project pay attention to the output pins: they are not the same as in the first one. The Pin_enable approach has its drawback: only 4 outputs are now available out of the 8-pin port, so I had to re-assign the pins, and fewer outputs are left for servo control. I think it would be challenging to make 32 PWM outputs on a single PSoC 4M.
On another subject, what chips do you use for your ARM+FPGA projects? (I am looking in that direction but can't decide where to start, so I need advice.)
Sorry for the late reply. Last time I tried, somehow this forum thought I was a robot spammer.
Anyway, I have only used Xilinx FPGA chips (e.g. Zynq 7000) for FPGA+ARM boards, so I cannot give you an unbiased opinion. However, I do know that the FPGA world is not as developed as the MCU world, and it is best to buy the more expensive FPGAs rather than the cheaper ones. It's a difficult task to make FPGAs work right, even in the simplest cases.
If it's your first time starting with FPGAs, you can use any FPGA board with an open soft-core MCU from, for example, https://opencores.org/ to see if this works out for you. If you have the money and the experience, go for the Zynq 7000 and more powerful parts.
Thanks again for your analysis on this subject.