Unusual current draw during connection using BCM20732S

legic_1490776

I am seeing a very unusual current draw pattern in my application.

I have set my connection interval to about 0.5 seconds to achieve low power, and the system generally sleeps between connection events. My application sends 4 notifications per second on average.
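
For context, this is roughly how such an interval is requested in the WICED Smart SDK (a sketch only; the slave latency and supervision timeout values are illustrative assumptions, and the exact signature should be checked against bleprofile.h):

```c
#include "bleprofile.h"  /* WICED Smart SDK profile utilities */

/* Sketch: ask the central for a ~0.5 s connection interval.
 * Interval units are 1.25 ms and supervision timeout units are 10 ms;
 * the latency and timeout values below are illustrative assumptions. */
void application_request_long_interval(void)
{
    bleprofile_SendConnParamUpdateReq(400,  /* min interval: 400 * 1.25 ms = 500 ms */
                                      400,  /* max interval: 500 ms                 */
                                      0,    /* slave latency: none                  */
                                      700); /* supervision timeout: 7 s             */
}
```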

When I measure the current drawn by the application, I can clearly see the connection events, and between them the current is usually very low. Sometimes, however, the device burns about 2 mA between two connection events.

Here are some plots showing the pattern:

[Figure: unusual current draw.png]

The x-axis is in seconds and the y-axis in microamps. The current sampling rate is 5 kHz. The baseline is close to 0; the peaks during transmit are around 25 mA (this is using a TX power level of 2 dBm). During the anomalous stretches there is a new baseline of about 2 mA minimum.

[Figure: unusual current draw-zoom.png]

Here is a zoomed-in view. You can see other activity going on at 10 Hz, and then the anomaly begins.

[Figure: unusual current draw-zoom2.png]

You can see some periodicity in the observed current.

This looks somewhat similar to problems we had in the past with the PMU clock warmup time. We currently have that value set to 5000.

This is using the TN1337 lot code.

1 Solution

> I'm using SDK 1.1.... Will your suggestion work for SDK 1.1?

Yes, it will, and as with SDK 2.0, you don't want to go lower than about 1200.

> Some fraction of the time when I would try to enter HIDoff, it would appear to go to sleep, but instead of going down to 3 microamps, would rise to a permanent high level of 2500 microamps! ... but it clearly goes haywire trying to get to the idle state and instead spends the time in a very BAD state. This is eating up 10x the power it should!

The 2.5 mA you see is not really a bad state. Since sleep is disabled during this interval (we will get to why below), the device is moving in and out of the CPU active and pause states. Ideally it would sit in pause, but while connected the CPU also needs to take the Bluetooth slot interrupts every 625 µs, and those are the smaller spikes you see on top of the ~2.5 mA.
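
To put rough numbers on that (a back-of-the-envelope sketch, not SDK code): with slot interrupts every 625 µs and the ~0.5 s connection interval described above, the CPU is woken on the order of 800 times between connection events, which is why the floor sits near 2.5 mA rather than at the sleep current.

```c
/* Back-of-the-envelope: CPU wakeups from Bluetooth slot interrupts
 * while sleep is disabled. Illustrative arithmetic only. */
#include <stdio.h>

int main(void)
{
    const double slot_us          = 625.0;     /* BT slot interrupt period   */
    const double conn_interval_us = 500000.0;  /* ~0.5 s connection interval */

    /* ~800 wakeups between consecutive connection events */
    printf("slot interrupts per interval: ~%.0f\n", conn_interval_us / slot_us);
    return 0;
}
```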

> do you think this might prevent it from getting into these failed sleep situations, or would this reduce the added cost of waking the processor up early?

This will reduce the chances of it getting into the failed sleep situations. If you zoom into the 20+ mA peak right before the failed sleep, I bet you'll see a single peak rather than the two peaks of a 'good' case (one for RX and another for TX). That single peak corresponds to just the RX; the TX is missing because the device did not see the sync/preamble bits. This can happen for a number of reasons: interference is generally the primary contributor, but suboptimal antenna gain, range, and relative clock drift on both sides also play a role (with longer connection intervals, the LPO clocks drift further apart in the time between events). The FW is designed a bit conservatively: it always assumes that a missed sync is due to larger-than-expected LPO clock drift on the device's side, and so disables sleep until the next connection event (while awake or in pause, the xtal is used, which never drifts more than ~20 ppm).
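
To see why longer intervals make drift matter more, here is a quick sketch of the worst-case timing uncertainty the receive window has to absorb; the ppm figures are illustrative assumptions, not SDK defaults:

```c
/* Sketch: worst-case receive-window uncertainty from LPO drift.
 * The drift figures are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double conn_interval_s  = 0.5;    /* ~0.5 s connection interval        */
    const double drift_ppm_local  = 250.0;  /* assumed worst-case LPO drift here */
    const double drift_ppm_remote = 250.0;  /* assumed worst-case drift on peer  */

    /* Worst case: the two clocks drift in opposite directions for a whole
     * interval. Numerically, seconds * ppm gives microseconds. */
    double uncertainty_us =
        conn_interval_s * (drift_ppm_local + drift_ppm_remote);

    printf("receive window must open ~%.0f us early\n", uncertainty_us);
    return 0;
}
```

At a 0.5 s interval and 250 ppm per side, that is ~250 µs of uncertainty; halving the interval halves it, which is why short intervals tolerate sloppier clocks.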

With the drift rate parameter of bleapputils_changeLPOSource(), what you are setting is the worst-case drift of the LPO. The FW uses this to calculate the wake time and the instant at which it should open its receive window; setting a larger drift rate means the receive window will be a bit larger, so the probability of catching the preamble bits is much higher.
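
As a sketch of the call (the enum value and three-argument form here are assumptions based on SDK sample code; verify the exact signature against bleapputils.h in your SDK):

```c
#include "bleapputils.h"  /* WICED Smart SDK utilities */

/* Sketch only: widen the FW's worst-case LPO drift assumption.
 * LPO_CLK_INTERNAL and the argument order are assumptions; check
 * bleapputils.h. Per the reply above, don't go below about 1200. */
void application_widen_drift_assumption(void)
{
    bleapputils_changeLPOSource(LPO_CLK_INTERNAL, /* keep the current LPO source */
                                FALSE,            /* do not switch clock sources */
                                1500);            /* worst-case drift rate (ppm) */
}
```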

